What is our primary use case?
The primary reason for choosing any APM solution is to find the gaps in the application life cycle when someone makes a transaction. Without it, we would not know what is causing a transaction to come back late, delayed, or with latency. We just want to know the pain points. That is why we chose this APM solution. So far, it is doing a good job, aside from some flaws, and that is fine.
How has it helped my organization?
Dynatrace shows us the entire life cycle. I can go back to my documentation and see that A points to B, B points to C, and so on, then confirm that every box is plugged in and watch how the product's AI dynamically shows the behavior in a single place, like a single web page. This helps us see the troubleshooting points, or what is hiccuping, and that is where we go. We reboot, fix it, or do whatever it takes to get it taken care of.
What is most valuable?
PurePath is one feature I really like. When a problem is detected by the alerting profiles, I just go in and see what that component is talking to and what its dependencies are. For example, if a middleware application is behaving strangely and is backed by different databases, back-ends, or mainframes, we just keep following the PurePaths to see what it is talking to, rather than going back to my library or documents to find out what the architecture design was. PurePath helps me a lot.
What needs improvement?
- Log analytics is the first thing that needs to be improved in Dynatrace.
- We use Splunk and Sumo Logic as enterprise logging tools, and we have been on them for a long time, since before Dynatrace was in place. Splunk exposes APIs, and we can grab data from them. Dynatrace also has APIs, but they are unfriendly. If they were as friendly as Splunk's or Sumo Logic's, we could integrate that same data on a single web page and start showing it internally. Friendlier APIs would be a great help.
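A minimal sketch of the kind of integration described above: pulling data from both the Splunk and Dynatrace REST APIs so it can be combined on one internal page. The tenant and host URLs, tokens, credentials, metric selector, and search string are placeholders, and the endpoint paths are assumptions based on the public docs; check the versions in use before relying on them.

```python
"""Hedged sketch: fetch Dynatrace metrics and Splunk search results,
so both could be rendered on a single internal page."""
import requests

DYNATRACE_BASE = "https://example.live.dynatrace.com"   # hypothetical tenant
SPLUNK_BASE = "https://splunk.example.com:8089"         # hypothetical host


def fetch_dynatrace_metrics(api_token: str, metric_selector: str) -> dict:
    # Dynatrace Metrics API v2 query (path assumed from public docs).
    resp = requests.get(
        f"{DYNATRACE_BASE}/api/v2/metrics/query",
        headers={"Authorization": f"Api-Token {api_token}"},
        params={"metricSelector": metric_selector, "from": "now-2h"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def fetch_splunk_results(username: str, password: str, search: str) -> str:
    # Splunk one-shot export endpoint (path assumed from public docs).
    resp = requests.post(
        f"{SPLUNK_BASE}/services/search/jobs/export",
        auth=(username, password),
        data={"search": f"search {search}", "output_mode": "json"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.text


if __name__ == "__main__":
    metrics = fetch_dynatrace_metrics("MY_TOKEN", "builtin:host.cpu.usage")
    logs = fetch_splunk_results("svc_user", "secret", "index=app_errors earliest=-2h")
    # From here the two result sets could be merged into one internal view.
    print(len(metrics.get("result", [])), "metric series;",
          len(logs.splitlines()), "log rows")
```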
Artificial intelligence needs differ from business to business. In the travel industry, airlines for example, no month looks the same across the next 12 months. Our busy seasons are around summer, and around Thanksgiving, Christmas, and New Year (two bottleneck seasons). The traffic, the issues, and the response latency triggered in those periods are not the same as in the rest of the months. When we plug in the AI, it needs to keep this in mind for each and every business; the AI implementation should be different for each.
When the AI was given a sneak peek at our company's portfolios, it took an average of the last three months, which does not work for us. If it averages the first three months of the year, say January, February, and March, our systems are quiet because we are not handling our bottleneck traffic. Then April or May comes around, and our busy business season starts. The AI takes the alerting profiles from the first three months, tries to apply them to the next three months, or to the coming 24 hours, and then it just screams a lot.
The AI should be tweaked to look at the last full year, like smart scheduling. That would help us.
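A minimal sketch of the seasonal-baseline idea described above (not a Dynatrace feature): deriving an alert threshold from the same calendar window of the previous year instead of a trailing three-month average. The data loading is left abstract; `hourly_counts` is a hypothetical pandas Series of hourly transaction counts, and the tolerance factor is an illustrative assumption.

```python
"""Hedged sketch: seasonal threshold from last year's same window."""
import pandas as pd


def seasonal_threshold(hourly_counts: pd.Series, now: pd.Timestamp,
                       window_days: int = 14, tolerance: float = 1.5) -> float:
    """Return an alert threshold based on the same window one year earlier."""
    end = now - pd.DateOffset(years=1)
    start = end - pd.Timedelta(days=window_days)
    last_year = hourly_counts.loc[start:end]
    # Use last year's seasonal peak, padded by a tolerance factor, instead of
    # a trailing average that misses summer/holiday spikes.
    return float(last_year.max()) * tolerance


# Example: synthetic two-year hourly series with a December traffic bump.
idx = pd.date_range("2016-01-01", "2017-12-31 23:00", freq="h")
counts = pd.Series(100.0, index=idx)
counts[counts.index.month == 12] = 400.0        # holiday-season peak
now = pd.Timestamp("2017-12-15 12:00")
print(seasonal_threshold(counts, now))          # ~600, not the ~150 a trailing average gives
```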
What do I think about the stability of the solution?
The Dynatrace solution is pretty stable.
The last time upgrades were made, the alerting profiles were wiped out, so we had a gap. When Dynatrace applied its latest upgrade, newest patch, or firmware, the existing alerting profiles, which say, "Call me when you see this," or "Call me when you see 10% of these," were wiped out, and someone had to redo them. Other than that issue, which was fixed in the next release, it has been stable.
What do I think about the scalability of the solution?
The scalability of the solution is good. We initially started with our three major critical applications, which is where I was introduced to the tool. Right now, they are moving on to the top 21 applications, which will show good scalability. They did well.
How are customer service and technical support?
I have not contacted customer service or technical support.
Since we are still in the warranty period, we still have Dynatrace Guardians on-site. I can go in and say, "Hey, this is what is happening," and he will get me a solution, or he will say, "Hey, you are doing this wrong. You have to do this." Or he will tell me a feature is not yet released and will come in the next quarterly release. The Dynatrace Guardian is my first point of contact if I have any questions; he is the guy.
Which solution did I use previously and why did I switch?
We never had a centralized application performance monitoring tool before Dynatrace.
One team has Sumo Logic, another has Splunk, another has Tealeaf, and another has Riverbed. No one had a consistent idea of what other teams were doing for monitoring. When enterprise monitoring came into play, that is where a centralized solution was needed.
For example, I might send a transaction to a different team and call it a transaction, while someone else referred to it by a Tealeaf ID. There was a disconnect in naming conventions.
Now that the APM solution is in place, I know what to call it and they know what to expect from me, so we are on the same page. This helps shorten the time needed to triage an issue.
How was the initial setup?
The capacity planning was complex. Everything else was easy.
Our infrastructure setup has been in place for quite a long time, and we already had JVMs monitoring our processes. We needed to evaluate which options and benefits were coming onto our plate, and which tasks were repetitive and already being handled by other JVMs. We had to evaluate this across different boxes, different portfolios, etc. Evaluating the options was a little tough because we were already using some things and had to build something new basically from scratch. This took some time. After we had experience with the first three major applications, we knew what to do for the next 21.
What's my experience with pricing, setup cost, and licensing?
If I had a colleague interested in purchasing Dynatrace, I would ask these questions:
- What are you using these days?
- What are you missing?
- If Dynatrace gives you what you are missing as well as what you already have, why not go for it?
What other advice do I have?
If I had just one solution that could provide real answers, not just data, the immediate benefit would be fixing issues first. It takes us a lot of time to dig back into where the actual issues are in the code base, especially if they are network or infrastructure related. With answers to most of that, we could fix our issues faster, on a priority basis.
Most important criteria when selecting a vendor: Show me what I am not seeing. I am an engineer; I do not want eyes on glass all the time. I want a solution that does it for me. I know how to set my thresholds and throttles. For example, if an issue, an exception, or a false exception comes in, I know my application:
- If it comes 100 times a day, don't worry about it.
- If it comes five times a minute, don't worry about it. That is just business clients calling improperly.
- If it happens 500 times in a five minute timeframe, then send me an alert.
That is something I like a lot about the synthetics in application performance monitoring. When I am not watching and I get called only when there is an issue, based on rules I set myself, that is a good thing. That is the great thing and the driving factor for having an APM solution. A rough sketch of that kind of rule is shown below.
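A minimal sketch of the rate-based alert rule described in the list above. The window size and threshold mirror the example numbers; the `send_alert` helper is hypothetical and stands in for whatever notification hook is actually used.

```python
"""Hedged sketch: alert only when 500+ exceptions land in a 5-minute window."""
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 500              # alert only past 500 exceptions in the window

events: deque[datetime] = deque()   # timestamps of recent exceptions


def send_alert(count: int) -> None:
    # Hypothetical notification hook (page, email, ticket, etc.).
    print(f"ALERT: {count} exceptions in the last 5 minutes")


def record_exception(now: datetime) -> None:
    events.append(now)
    # Drop anything older than the window; a slow trickle (100 per day or
    # 5 per minute) never accumulates enough entries to trip the rule.
    while events and now - events[0] > WINDOW:
        events.popleft()
    if len(events) >= THRESHOLD:
        send_alert(len(events))
```

The design choice matches the rules above: low background rates fall out of the sliding window before they reach the threshold, so only a genuine burst triggers a call.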
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.