We’re primarily using Dynatrace for user-experience monitoring, for our Autodesk e-store as well as our Autodesk subscription management.
The performance is really good. It’s really helping us to catch problems and find out where the root cause is.
Our mean time to resolve issues is coming down with Dynatrace.
Previously, it took time to find out the exact reason a user was failing and what the user’s complaints were. Now we can proactively see which component is failing and fix it. The time to solution has improved.
Regarding AI and managing performance problems in the cloud, it’s very important. We have a lot of monitoring tools. We believe that with the new Dynatrace AI, the number of alerts will be reduced. It boils everything down to one alert that pinpoints the exact root cause of the issue, instead of generating a thousand alerts and spamming teams with email alerts for every service. Just one alert that says: this is the problem, this is the root cause of all these things.
The real user-experience monitoring is very helpful. We can see what real users are seeing, which JavaScript errors occur, and which pages are very slow for them. It also helps correlate front-end users to the back-end application components, and to the corresponding method that is failing. We are able to go to the correct spot and fix the issue.
The real-user session replay that they demoed here at the Performance 2018 conference would be really good to have. The other is self-healing; it’s currently not there. We have to forward events to an outsourced remediation solution, which then works on each event. If they also provided a self-healing option, that would be really good.
So far we haven’t had any issues with the Dynatrace infrastructure itself. We have been using Dynatrace AppMon for the last two and a half years. We’re migrating to Dynatrace now, we have started a PoC, with the new AI etc., to experiment with it.
We haven’t touched scalability for the new Dynatrace yet.
We have called the technical support and most of the calls are returned quickly. Some of them will be a real technical problem and then they have to reach their engineering side; the support is not able to help. Those were the cases where support got delayed.
We had both experiences: reasonable turnaround for most of the cases, and some unreasonably long delays.
Previously, we were using a third-party e-store. When Autodesk decided to have a custom e-store, built and managed by our Autodesk development team, we wanted to have real user-experience monitoring on all the applications, covering performance and everything else.
The initial deployment of the old solution, AppMon, took a lot of time. With this one, we have to check it out; we are doing a PoC. From what I hear here at the Performance 2018 conference, deployment is going to be smooth and quick, but I have yet to do the install to see that.
We did a PoC with New Relic. New Relic did not capture any events because the front-end framework we used at that time was Angular, which New Relic did not support. We tried AppDynamics, and it did not support it either. Then, finally, we went with Dynatrace.
We have used siloed monitoring solutions in the past, and there were a lot of events; it was not good. We’re trying to consolidate everything into one. We’re working on tool consolidation. That’s one of the primary items on our IT roadmap.
If we have a solution that provides not just data but real answers about where the problems are, how to fix them, the immediate benefit for our team would be time. We would have time to work on other development efforts, innovation things. It would save a lot of resources as well. That would be the main benefit of it. We’re spending a lot of time to get the answers. With one solution like that, it would be giving us the answers.
What we appreciate most in a vendor is responsiveness and attention to customer problems; being customer-focused.
I would rate Dynatrace a seven out of 10, because of the issues I mentioned before with technical support.
I would definitely recommend Dynatrace, the new one, with all the new features that are being launched. I think these are not available in other monitoring tools. This is the best one. I would definitely recommend Dynatrace.
I am from a monitoring team, and we had a PCF environment that we were building. We needed a tool to monitor a PCF environment. We were looking at a couple of tools, and we found Dynatrace is a robust tool which provides pretty good monitoring for a PCF environment. That is when we chose Dynatrace.
It will definitely be a benefit. We are using other tools to monitor other environments, and we have to switch between different tools when we have to do RUM, synthetic monitoring, or component monitoring. It is really painful.
This is where I find Dynatrace pretty good, because it has everything in one tool, and you can monitor from infrastructure to synthetic monitoring, RUM, then performance data, etc.
That is the way to go, when you do not have to waste time setting up tools before seeing the metrics. That is all that matters. We can save time and still get the work done.
We are just setting up the environment, so I have not done so much with Dynatrace. The thing that I like with Dynatrace is it goes to every part of the containers, and it brings up all the data and all the problems. The way it shows a problem on the dashboard is pretty good.
We finished our PoC. We have deployed our dev and test environments, and we are still setting up our production environment. From the reviews we have received, the PCF team is quite happy with it.
I need more experience, because I have not used the tool. As far as I know about the tool, it covers so much. Once I am more familiar with the tool, then I will have more understanding of what I am missing here. Right now, I need more experience.
We are still setting up the environment. While I was deploying it, it was pretty quick, so I did not have any complaints. Compared to other tools I have worked with, and how much time they take for deployment, I am pretty happy with it.
We had some proxy-related issues in the beginning, but it looks like it was a mix up between our environment and their environment. Maybe it was a lack of knowledge. However, so far, it has been good.
Technical support was pretty good. They jumped on the call, and we spent some time with them in India, and issues were resolved pretty quickly.
We did not use any tools before this product.
It was pretty straightforward and quick. I was amazed by it. I deployed it in six servers in 40 minutes, and that is pretty good.
We needed some monitoring for PCF. Therefore, we did a PoC with a couple of other tools along with Dynatrace. That is when we found that Dynatrace was the tool that we should be using for PCF.
Most important criteria when selecting a vendor:
These three things are the most important criteria that help us decide which vendors to pursue.
The primary reason for choosing any APM solution is to find the gaps in the application life cycle when someone makes a transaction through the interface. Otherwise, we will not know what is causing a transaction to come back late, delayed, or with latency. We just want to know the pain points. That is why we chose this APM solution. So far, it is doing a good job, except for some flaws, but that is fine.
When Dynatrace shows us an entire lifecycle, I can go back to my document and see that A points to B, B points to C, and so on. I can then verify that I have plugged it into every box and see how each is behaving, with the product dynamically showing the AI's behavior in a single place, like a single webpage. This helps us see the troubleshooting points, or what is hiccuping. That is where we go: we reboot, fix it, or do whatever it takes to take care of it.
PurePath is one feature that I really like. When I have a problem that is detected by the alerting profiles, I just go in and see what that component is talking to and what its dependencies are. For example, if a middleware application is behaving weirdly and is served by different database back-ends or mainframes, we just keep looking at those PurePaths to see what it is talking to, rather than going back to my library or documents to see what exactly the architecture design was. PurePath helps me a lot.
Artificial intelligence needs differ from business to business. Take the travel industry, airlines for example: not every month looks the same across the next 12 months. Our busy seasons are around summer, and Thanksgiving through Christmas and New Year (two bottleneck seasons). If you replicate the traffic, the issues, or the latency of responses from those periods, it will not match the rest of the months. When we plug in the AI, it needs to keep this in mind: for each and every business, the AI implementation should be different.
What happened was, in an AI sneak peek on our company's portfolios, it took an average of the last three months, which would not work for us. If it takes an average of the first three months, say January, February, and March, our systems would be quiet, because we are not handling our bottleneck capacity of traffic. Then, when April or May comes, our busy business season starts. The AI takes the alerting profiles from the first three months and tries to apply them to the next three months, or the next 24 hours, and then it just screams a lot.
The AI should be trained on the last full year, like smart scheduling. That would help us.
The Dynatrace solution is pretty stable.
Last time, when upgrades were made, the alerting profiles were wiped out and we had a gap. When Dynatrace applied the latest upgrade, patch, or firmware, the existing alerting profiles, which say, "Call me when you see this," or "Call me when you see 10% of these," were wiped out. Someone had to redo them. Other than that, it was fixed in the next release.
The scalability of the solution is good. We initially started with the first three major critical applications, which is where I was introduced to this tool. Right now, they are moving on to the top 21 applications, which is going to be good scalability. They did well.
I have not contacted customer support.
Since we are still in the warranty period, we still have Dynatrace Guardians on-site. I can go in and say, "Hey, this is what is happening," and he will get me a solution, or he will say, "Hey, you are doing this wrong. You have to do this." Or he will say a feature is not yet released and we are going to have it in the next quarterly release. The Dynatrace Guardian is the first point of contact; if I have to ask any questions, he is the guy.
We never had a centralized application for performance monitoring tool before Dynatrace.
Everyone had something different: someone had Sumo Logic, someone had Splunk, someone had Tealeaf, someone had Riverbed. No one had a consistent idea of what another team was doing for monitoring solutions. When enterprise monitoring took place, that was where a centralized solution needed to come in.
For example, if I was sending a transaction to a different team, I called it a transaction, but someone else named it with a Tealeaf ID. There was a disconnect in naming conventions.
When the APM solution came into place, I now know what to call it and they know what to expect from me, so we are on the same page. This helps us in shortening down the time for triaging an issue.
The capacity planning was complex. Everything else was easy.
Our infrastructure setup has been in place for quite a long time, and we already had JVMs monitoring our processes. We needed to evaluate what options and benefits were coming down on our plate, and which repetitive tasks were already being handled by other JVMs. We had to evaluate those across different boxes, different portfolios, etc. Evaluating options was a little tough because we were already using some things, and we had to do something new, basically from scratch, again. This took some time. After we had experience with the first three major applications, we knew what to do with the next 21.
If I had a colleague interested in purchasing Dynatrace, I would ask these questions:
If I had just one solution that could provide real answers, not just data, the immediate benefit would be fixing the issues sooner. It takes a lot of time for us to dig back to where the actual issues are in the code base, especially if they are network- or infrastructure-related. With answers for most of it, we could fix our issues faster, on a priority basis.
Most important criteria for selecting a vendor: Show me what I am not seeing. I am an engineer; I do not want to have eyes on glass all the time. I want a solution that does it for me. I know how to set my thresholds and throttles. For example, if there is an issue, an exception, or a false exception coming in, I know my application:
That is something I like a lot regarding the synthetics of application performance monitoring. When I am not watching, and I am called when there is an issue, based on rules I set myself, that is a good idea. That is the great thing, and a driving factor for having an APM solution.
Primary use case is to triage business applications and slow performance for our users. It is performing very well for now.
We have been a long-standing customer for almost seven years, using different tool sets from DC RUM to Gomez, and now AppMon.
It helps us get to the resolution quicker, and potentially the root cause, and at least understand what is happening for future identification.
The ability to really drill into performance issues and help our application teams understand what is causing the business's problems.
AI is really important because we have so many different tools, and so much data being collected, that sifting through it all with just general users is difficult. Therefore, using some type of AI technology to help identify and pluck out the important parts is critical.
We would like to see more external tool integration, which is critical for us. We are a best of breed, or at least try to be, customer for tools.
We would also like to see all the good data in a single view across multiple tools, so that access to integration is critical.
This would definitely get us moving in the right direction.
The stability is really good for what we have experienced so far. We have not experienced any downtime with our tool sets.
I have not experienced any issues scaling with the Dynatrace tools, however I have experienced scaling issues with other competitors' tools.
Over the years, they have been very good. Because of growth and popularity, it has been a little more challenging getting information, but they are knowledgeable once we do get them engaged.
We have used siloed monitoring tools in the past. We experienced a myriad of issues, from getting them configured to getting useful information out of them, and sometimes licensing issues as well. Overall, the tools' usefulness in helping us fix problems became the issue.
We were using other competitors' tools. Now, we have been migrating from some of those other siloed tools, but we do still have a mixture of tool sets.
We are a longtime customer of Dynatrace. We started out with a single product, then brought in a second, and a third. Over time, seeing how the value progressed, we have substituted Dynatrace tools for different tool sets.
The initial setup was reasonably straightforward. It was pretty easy to get deployed, and again, getting the value in a reasonable time.
We had someone from the vendor technical support providing assistance as we deployed.
Right now, we are migrating many of our things to Dynatrace. We have already made the selection. Other competitors fell short. Integration flexibility and dashboard reporting capabilities were some of the key issues that we looked at.
Do your homework. Test. Do your proof of concept(s). Be thorough in what you need and defining your reasons for looking at the tools. Even though everybody likes what they see, it may not be a good fit for what you are trying to accomplish.
The tool sets are great. They provide good information. You can always improve them with better data.
If I had just one solution which could provide real answers, not just data, the immediate benefit would be a single pane of glass perspective for the application in our environment. We have been striving for this consistently in the last seven to 10 years. It is absolutely critical. It is where we are working to get to: A single view which is telling us the problem, what to fix, and moving us on to the next problem.
Most important criteria when selecting a vendor: First and foremost is honesty. We have been in technology, and we, among many other teams within our organization, are a much more of a senior team. We have people that have been in the industry for 20 years or so. Just tell me what the issues are. We have the technical wherewithal to know how to work through them. Therefore, being straight up, honest, and having integrity as you are talking about the tools, demonstrating the tools, not only the highlights, but also the pitfalls and things that we need to work through. Have more transparency.
We primarily use it for performance monitoring and to track down solutions to ongoing issues.
Its performance is great. We previously had AppMon, and AppMon is a little bit difficult. But we're PoC-ing Dynatrace, and we love it so far.
We spend less time tracking down what's truly a problem and looking for problems, and more time actually solving the problems.
It's more than just dashboarding. It actually tells you when you have problems, so you don't have to go set up anything. It automatically figures it out. The artificial intelligence is by far the most useful thing I've seen.
Without the AI, I don't think we would be able to grow at all. As we continue to grow, our environment gets more complicated and there are "segmented people" who know little pieces of it. The AI allows one item, the software, to be able to understand everything and provide all the data.
To me, dashboarding is still a little bit sketchy. I'm definitely of the mindset that the problem cards are more than enough. But when you're making that transition from AppMon, which is very dashboard-oriented, over to Dynatrace, which has essentially no dashboards, there needs to be something in between so that the business buys in a little bit. To me, setting up the dashboards is not that easy to do.
There should be something that would help transition, especially customers like us who already are heavily into AppMon. I would transition my dashboards over so that we don't have to recreate them, because recreating them is very difficult in Dynatrace. I get that they're two different systems, but any legitimate company that is doing that...
If you're starting in one or the other, you're totally fine. But if you're porting over... My boss has to do a whole business case on why she wants to do it, and it's really hard to say, "Oh, the dashboards that you had on the team that you were using, you're not going to get over here." Or, "You have to re-create them all over again." People are going to ask questions about cost, who is going to do that. Although the tool automatically does that, business hasn't seen it yet. So it's a really hard sale. I would love to see some kind of integration so that we can say, "Okay, we transferred at least 80% of your dashboards over."
I haven't had it long enough to truthfully rate stability. We've had AppMon long enough, about two years, and that's been rock solid, minus any upgrades. Every time we do an upgrade there's some instability. When it's not being upgraded, it's perfect.
Scalability is great. My biggest concern when we first put it in was the resources that it would take up, network traffic that it might create. But it seems perfectly scalable to any environment. Even on some of our heaviest use servers, it doesn't seem to affect anything. So to me, it can be put on any environment and keep growing.
We have a guardian. We don't actually call technical support. We have somebody on-prem. He's knowledgeable and almost always available. Whenever he's in the office, he's available. Even outside the office, he's pretty available.
I was not involved in the initial setup. I took this over from my boss, and she was involved in the initial setup. I was kind of thrown into it, so I was a little worried about that. But it's been pretty easy.
I'm involved in the switch from AppMon to Dynatrace. To me, that's the biggest upgrade we've got. For AppMon, I did training courses. We did one-on-ones with our guardian from Dynatrace. Even with those, it was still a very complicated tool to learn. With Dynatrace, we picked it up in minutes. It was very intuitive. That's what I can't believe. I almost wish we didn't waste the time doing the training for AppMon. I would have just gone straight to Dynatrace.
As for a tool that would not only give data, but real answers, it would make things even quicker. I actually think that's what Dynatrace does for us right now. It tells us the answer to what the root cause is. It doesn't actually fix it, which I'm hoping it will eventually do, but it actually gives us the right answers right now. That is better than what we had before, which was somebody would go in there and try to find the problem. They may not have gotten to the root cause, so they would put a temporary patch on it, and then it would come back again. Now, we seem to be getting to the root cause every time.
Our most important criteria when selecting a vendor are knowledgeability and the future vision, which, to me, is the most important part of Dynatrace. They're not thinking about, "Here's a tool for today, and we're just going to keep improving it slightly." They already have a master plan for where they want to go, and the tool reflects it. It shows that they're just thinking way ahead.
I'd give it a nine out of 10. I think it's virtually perfect. There are some bugs in it. Sometimes things get hung for just a second, and you have to refresh. Also, some things aren't necessarily intuitive, but to me, they're just going to get better over time.
My advice is: start directly with Dynatrace SaaS. Don't start with AppMon. Don't do the other, older solutions. Just go straight in. Even if you have on-premise infrastructure, SaaS is much better to start with.
We have a critical enterprise-level project where we have seen a lot of performance issues. We tried to figure out what tool might help us solve some of those performance issues. Then we heard about Dynatrace, so we engaged Dynatrace. It's basically about solving performance issues.
In terms of performance, we're still a work in progress. I think we have made good progress identifying the areas where the problems are, and now it's a matter of just working with the different teams trying to figure out what the roadmap is going to be.
We're learning the tool. At the same time, it's also about educating folks within our organization in terms of what Dynatrace can do. And also how do we apply it? How do we make use of Dynatrace and what do we do with the information we get? How do we take that and go to the next step of implementing the changes?
At least we're able to pinpoint where the problems are instead of just saying, "Here's the results and here are the failures." At least we're able to tell them which calls, which methods were the problem, which interface. That was a huge step for us, to be able to do that.
We are not the DevOps or the application team. We're coming from the testing side and, generally, it's challenging when you are working with an application built by a different team. When you run a test and say, "Hey! Here's your problem," unless you show the proof, you show the information, they're not going to be able to take it and make some changes to it. So the big first step for us was to identify where the problem was within the application.
Now, it's mostly about, "What do we do about it?" You have these problems. What are we doing about making changes and getting them into the roadmap?
PurePath. We just started using it, it's been less than a year. PurePath is really helpful.
And I'm learning now about the dashboards, and the session replay is another that was really fascinating to see. I guess AppMon probably doesn't have those things yet.
The features I'm most excited about are the AI piece, the session playback, and the fact that the deployment is even easier with the new version. Not only deployment, but the setup piece of it, I'm hearing, it's easy. I haven't tried it out but that's really encouraging.
To be honest with you, I think they have a great roadmap. And the fact that they are using the feedback from the customers to build into the roadmap, is a great feature. I have nothing in particular that I want to see.
From what I've seen, from what I've experienced, absolutely no problem with stability. Especially the ease with which it gets deployed, and also the support we typically get has been amazing. We have no regrets using Dynatrace. It's been a good experience so far.
So far we've implemented this for a couple of different applications. It scaled pretty well for us. We're about 2,000 or 3,000 users, but we'll have a better answer as we start rolling it out to more applications. So far, no issues with scaling.
From what I've seen, they're very knowledgeable and easy to work with.
We've used a variety of tools, though not so much in the APM space. It was mostly SiteScope and Wily, those kinds of things.
With Dynatrace we were able to pinpoint where the problems were with PurePath, which was something we did not have. Obviously, we didn't work with an APM solution so I'm only comparing this with a non-APM solution like SiteScope. There, it's mostly about, "Hey, here's your CPU, here's your memory," rather than pinpointing where the actual problems are, which is something that PurePath gives us.
I think it was really, really straightforward. It's the second time around. Some of the things, we did them ourselves.
In terms of AI, when it comes to IT's ability to scale in the cloud and manage performance problems, we don't have a cloud implementation yet. But, in general, what I've seen with AI, I think they're learning. The self-healing thing was really impressive in terms of, if you have a problem, what do you do about it? You get notified automatically. Then how do you fix it? Those are some of the things with AI that I thought were pretty cool.
If there was one solution that could not only provide data but real answers, the immediate benefit of that for our team would be huge. Not just telling us, "Here's the data" - there's so much data out there - but what do you make of it? What's the critical data? I guess that's where Dynatrace is headed with AI and the self-healing. That would be huge. If they can say, "Hey, here's your problem. Here's what you need to do to fix the problem." That would be significant.
I think the solution meets our needs where it is, so from that perspective, it's a nine out of 10.
The most important criteria when working with a vendor or selecting a vendor are customer service and what type of product offerings they have. Do they see the vision of the future in terms of cloud and those kinds of things? Those are some of the things we consider very important.
To a colleague who is looking into this type of solution, I would say we have had a really good experience. If they're in a similar situation, try it out. Do a proof of concept and see if it's good for you. It may or may not be a good fit. Everybody's different. Try it out.
Primarily, we use it for monitoring our end user performance experience, as well as diagnosing root cause analysis for one of our core applications.
It is performing quite well. We are able to see globally our end user response time tracing down to the user ID. If there is an issue, we are able to diagnose it very quickly. This is the key to diagnosing quickly the root cause, then fixing it.
In the past, we would go into war rooms and contact each vendor fighting over what the issue was, as each vendor would blame the other vendor: infrastructure would blame middleware, middleware would blame server, and so on. Therefore, the issues were not visible.
The app shows you where the problem is, so you can go to the correct person. While there was initial resistance, they have now accepted the tool because they can actually see the data is correct (tangible proof).
There are two components. One is the DC RUM, which provides me with the visibility for end-user experience down to user ID. This is one of the key features that we use it for. AppMon gives me the key feature, PurePath, which gives me access to basically the root cause of my issue, JWC. Thus, PurePath for AppMon and DC RUM provides end user experience monitoring.
Dynatrace has a bit of an AI component in OneAgent. Prior to that, using AppMon, the user needed to be quite skillful to understand where to troubleshoot and what the root cause was. With the addition of AI, it tells you upfront that it has analyzed all the logs, and it actually gives you a first-level analysis, instead of you spending a lot of time trying to understand the logs. AI is very useful, especially in the modern age, to speed up your diagnosis of trouble and issues.
The mobile app provided by Dynatrace could be improved, especially the DC RUM mobile app, because it does not have some basic functions, like push notifications or even customized reports. It is very basic; I can't use it. I have to use the full app version. A good mobile app would give you all the notifications and let you drill down, which would help.
Stability is not an issue.
I have only used it for two applications, so I have not really scaled up. I do not use the cloud version, so I do not have that experience. So far, in my environment, it works fine and has low overhead against my applications. That is the key thing.
Technical support normally goes through our partner. The partner contacts Dynatrace if they have an issue, and Dynatrace then comes in with our partner to consult. When Dynatrace has come in, I have felt they were knowledgeable.
We were all siloed. The challenge associated with siloed monitoring tools is it only gives you one perspective. It is a web server log, so you need a lot of human intervention to piece things together to find a root cause.
The key driving force towards Dynatrace was we were doing application transformation, so we were running the application and we had performance issues. We immediately needed help troubleshooting. For this case, we actually moved Dynatrace straight into production and it was able to detect the core issues, then we were able to resolve them very fast. Thus, this was the immediate selling point for Dynatrace and we procured it.
We deployed it quite fast. Managing to install AppMon was quite straightforward, but for the DC RUM, building the reports is a bit complex. It needs a lot of training and a partner to actually help out.
A Dynatrace partner will always be willing to give you a trial. Go through the trial to see if there is a benefit for your company. Just try it out, implement it into production, and you will see the benefit.
Not really.
It was delivered when we wanted it and has performed exceptionally well. That said, we have had to resolve a lot of things and have had a number of issues with the tool along the way.
If I had just one solution that could provide real answers, not just data, the immediate benefit would be time saved through a streamlined process, instead of analyzing so many different tools.
Most important criteria when selecting a vendor: You need a good business partner to work with and help you implement the solution, thus it is the implementation and the support. These are the key things that I would look at in a vendor.
We had Dynatrace Synthetic Monitoring in place, and we had Gomez. The whole point of that was to really check for system availability, to make sure we knew if the site was going down, etc. Since then, we've put in the full Dynatrace solution to prevent customer impact, some kind of site outage. That's the whole point of having it, so we can identify problems sooner, fix them, and stop the site going down.
We have had a few instances where we found small problems. They may or may not have been full site outages, but they certainly would have had some kind of customer impact. We only put the tool in a year ago, but we've already caught quite a number of things. The product has helped us to identify an issue and fix it before there was any customer impact. So we're seeing the benefit already, which is great.
To give an example of the savings in terms of cost and time: we use it for live monitoring, but we also use it in our performance testing. The issue I just talked about was a performance-testing issue, and without Dynatrace we would have put that change live.
Finding that problem in "live", that would have been three or four days of investigation, whereas we found the issue, fixed the issue, reran the tests, all same day. That was days and days and days of cost-savings, in terms of resources, and allowing them to actually do other things that they're there to do.
Being able to identify the blind spots. Before, we had lots of monitoring, but it was all very manual. It was literally taking server logs and dumping them somewhere and someone had to manually go through things. You only monitor what you know about. As soon as we put Dynatrace in, it sprung to life, and we identified problems instantly. The team's reaction was, "Wow, look at that." So finding different parts of the system.
Sometimes you focus on the area where you see the issue, but not necessarily where the root cause is coming from, so you have to go through the full stack and help to identify the problem areas. We've found problems and fixed them in half an hour when it would've taken days before.
I think the one that's coming soon, the customer playback and the session replay. Notwithstanding the challenge we might have around GDPR, and the collection of data - which worries me - what we have quite a lot is, a very specific customer situation or customer problem. Of course, we can see problems in Dynatrace, but we might have a customer call in trying to donate, or trying to create a fundraising page, and we can never recreate the issue.
You don't want to have to go to the customer, "What browser were you using, and what were you doing, what day was it, was it cold outside?" To be able to see exactly what has happened, for us to be able to understand that, gives us extra power really to understand the issue and to fix it. Nine times out of 10, it's probably a really simple thing, that we just need a bit of JavaScript or something to fix.
Also the thing that's really powerful is being able to recognize what the customer's trying to do and contact that customer. And for us again, customer is key. For our Help desk to actually be able to help that customer and say, "We see you were trying to donate," or "We can see this happened to you, we're really sorry, we fixed that issue, please come back, or let us help you on that journey." That's really powerful. In terms of NPS, that's really important to us.
I think that would help with those situations, stop the problem in the first place. But also, if there is a problem, being able to deal with it directly with the customer is fantastic.
It's been stable, I haven't had any problems with it at all.
It absolutely suits us. In terms of the wider bank, within Virgin Money, we can absolutely look to spread it across other applications, which we will be doing. But I think we've probably got the critical ones covered. We can obviously see the benefit, we just need to fight the right battles at the right time to get those things put in.
The team used tech support during the original implementation to make sure that it was going well. And it went very smoothly.
I don't think we've had any problems with it from a Virgin Money Giving perspective. Having said that, we had experts using it who were already within Virgin Money. So we were able to use that internal expertise to help us to implement it into our solutions, which was helpful. So we haven't needed to call tech support.
This is our first APM tool. I look after a system called Virgin Money Giving, and we haven't been around that long - seven or eight years. It's a really successful business, and as that business has grown and grown, you see the value in these kinds of tools. We managed successfully, we didn't really have many system outages and the like, but we saw the benefit as it was rolled out across the rest of the bank. It's the first tool we've used.
I was only pointing at people to do the initial setup. I don't come from the technical side, I just run the teams that do the stuff, the proper work. So I was involved in terms of helping to make sure it happens, but not at the level of touching it.
We did look at other tooling, but Dynatrace suits us as a solution.
It was the simplicity. Obviously we had heard lots about AppMon, but we went straight into the full Dynatrace solution. The simplicity of the implementation. We literally switched it on and we could see benefit almost instantly.
Also, it's the full stack: one solution that allows you to track and monitor across the whole of our infrastructure. We haven't got a huge, complicated infrastructure, so it's probably quite simple for us, versus people who've got huge amounts of different cloud hosting and all that kind of stuff.
Actually having had conversations with Dynatrace, as part of the proof of concept, it feels like they're constantly looking to innovate. Coming here, to the Performance 2018 conference, there are things about which I'm saying, "I can't wait for that to come." And that's really nice for us as a customer, to be waiting for the next thing to come to help our business.
We haven't really gotten anywhere near the area of AI and its ability to manage performance problems in the cloud. Having been through sessions here at the Performance 2018 conference, that's definitely something we need to be focusing on. We're not using the cloud in any way as an organization, other than for things like Dynatrace. AI is definitely on our roadmap, but we're not there yet. It's something that's coming up a lot, and you can actually see the benefit.
Regarding a solution that could provide real answers, and not just the data, the immediate benefit for our team would be time and cost. We're running a website that needs to be there 24/7, and because we're Virgin Money Giving, we deal with quite personal things. People are raising money for good causes, things that are personal to them. So if our website isn't available for any point in time, it can be really quite heartbreaking for people, people can't donate to their cause, or give money to the charity they want to. The whole customer experience is really important, so anything that allows us to prevent problems sooner, and prevent system problems, is right for the customer. And that's important to our brand.
In terms of selecting a vendor, for us, because Virgin Money as an organization has important values, we need to find a vendor that has the same kind of values. I think there needs to be a synergy around what we're wanting to do.
Also the key thing is support. Sometimes you can have third-party relationships, or vendors that sell you a product and then you don't see them again, and you don't really get the best out of that product. So it needs to be an ongoing relationship, and a genuine partnership. It can't just be a "drop the product over the fence then run off with your money," it needs to be an ongoing relationship.
Also important for us is to help, perhaps, influence the future of the product as well, a genuine partnership.
At the moment I'd say Dynatrace is a 10 out of 10 because I can see the benefit. It's early on in the lifecycle of the product for us, but I can absolutely see the benefit already. I think the thing we do need to do is understand more about the potential. I think we've just scratched the surface. As soon as you switch it on, there is so much information that comes to you, that you're all excited about, all that data. But it's just making sure that you're looking in the right places and doing the right things. At the moment, it's a 10 for me, I absolutely love the product, a year in.
My advice is try it. I think we put it onto an application and, within hours, we had really good powerful data, and we could see problems in the data that needed to be fixed. Trial it on an application and see what happens.