Senior Director IT at BARBRI Inc.
Real User
Gives us very deep visibility into both user actions and systems interactions, including a view inside containers
Pros and Cons
  • "The Session Replay not only allows us to watch the user in 4K video, but to see the individual steps happening behind the scenes, from a developer perspective. It gives us every single step that a user takes in a session, along with the ability to watch it as a video playback. We can see each call to every server as the user goes through the site. If something is broken or not running optimally, it's going to come up in the Session Replay."
  • "I would love to see Dynatrace get more involved in the security realm. I get badgered by so many endpoint protection companies. It seems like a natural fit to me, that Dynatrace should be playing in that space."

What is our primary use case?

When we started with Dynatrace, we were an on-prem organization. In the early days, we used it as an APM, the way most people did.

Our usage of Dynatrace has grown over the years, not as much in terms of capacity as in usability. It is now used by three departments within our organization. It originally started with just my group, which is IT, and then we rolled it out to development because they saw the advantages of being able to identify bottlenecks in existing code. We've rolled it out to operations, where they use Session Replay to troubleshoot customer-specific issues. The sales department also uses it to gauge productivity: how many visits we get to a particular page, how many times people watch a particular video, how many take a certain practice exam, etc.

Those use cases are all in addition to its core use, which is to help us keep our infrastructure running. 

We're currently using the Dynatrace SaaS, the Dynatrace ONE product. We're not using anything in the old, modular product. It fits very well for us. We are a cloud organization. We're all Azure now. We migrated from on-prem to cloud about three years ago.

How has it helped my organization?

The automated discovery and analysis definitely help us to proactively troubleshoot production and pinpoint underlying root cause, both from a code perspective as well as an infrastructure perspective. When we get an alert, or we're seeing a degradation in performance, Dynatrace will lead us down the path: Where do we need to look first? It will tell us that it has analyzed so many trillions of dependencies and that it thinks that the problem is "here," and it will point to a query or a line of code or perhaps to a system or to a container that is not functioning properly. Depending on what the problem is, it saves us an enormous amount of time in troubleshooting and identifying problems.

I estimate it has cut our mean time to identification at least in half, if not more. Before, we were relegated to combing through logs. We would take Splunk, look for the error, find out where it was occurring, how many times it was occurring — do all that type of investigation that you normally need to do. We don't have to do that anymore because it's all automated. 

As far as decreasing our mean time to repair goes, it's closer to 60 to 70 percent. The reason is that we no longer need to spend as much time on drastic troubleshooting. We take its recommendation, and the time we spend is on checking that Dynatrace was right. We'll test a quick fix in dev, take it to QA, and then push it to production. In some instances it reduces our MTTR by 60 to 70 percent, although it really depends on the problem.

I operate an entire stack with four people, and the only way I'm able to do that is by automating as much as I can and having tools I can rely on to reduce time-consuming tasks. Dynatrace has allowed me to function and keep my people productive without working them 24/7. Dynatrace works 24/7 for me.

Another thing that Dynatrace gives us is very deep visibility, not only into user actions but also into systems interactions. How are the systems relating to each other? Are the right systems talking to the right systems? When we first deployed Dynatrace five years ago, it showed us, through its Smartscape tool, that we had servers talking to servers they shouldn't have been talking to. That was quite an eye-opener. I've noticed that a lot of companies are trying to copy what Dynatrace came out with in Smartscape, but to me, it is the best visualization of your app stack and network that you'll ever put together, and you don't have to do anything. The system puts it all together. You deploy your one agent, it maps out the system, and you can see everything from application to network to infrastructure connectivity. It depends what you want to see, but it's all Smartscape'd out. You can tell what traffic is going in which direction and where it's going.
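The topology behind Smartscape is also exposed programmatically. As a rough illustration — a minimal sketch in Python, assuming a SaaS tenant URL and an API token with topology-read scope; the v1 endpoint and field names shown here may differ across Dynatrace versions — the host map could be pulled like this:

    # Hypothetical sketch: pull the host topology that Smartscape visualizes.
    # TENANT and TOKEN are placeholders, not real credentials.
    import requests

    TENANT = "https://abc12345.live.dynatrace.com"  # your environment URL
    TOKEN = "dt0c01.EXAMPLE"                        # API token (placeholder)

    resp = requests.get(
        f"{TENANT}/api/v1/entity/infrastructure/hosts",
        headers={"Authorization": f"Api-Token {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

    for host in resp.json():
        name = host.get("displayName", host.get("entityId"))
        # Outgoing relationships are the raw data behind the Smartscape map:
        # which processes, services, and other hosts this host talks to.
        rels = host.get("fromRelationships", {})
        print(name, "->", {k: len(v) for k, v in rels.items()})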

In addition, when I first started using Dynatrace, I had a routine. I would come into the office early and go through all of the night's activities. I would check for any problems we had: Was anything broken, were there any active alerts? With Dynatrace Davis, I started getting those reports automatically, through Amazon Alexa, and I do that on my drive to work. Instead of having to go in early and spend time in the office, I'm able to stay at home a little later, have breakfast with the family. Then, when I'm in the car, I invoke Alexa to give me my Dynatrace morning report, which will include my Apdex rating, any open problems, and a summary of closed problems. It's probably one of the least advertised aspects of Dynatrace, and one which I think is among the most highly efficient tools that they offer.

The amount of time we have to devote to maintaining Dynatrace is next to nothing. The time that we spend in Dynatrace is actually using it. We're using it to look at what's happening, what's going on, is something broken, or do we have an alert? We go in to find out what's wrong. Maintaining it is really almost nonexistent.

Another advantage is that it is much more of a proactive tool than it is one for putting out fires. Of course, it helps us tremendously if we have to put out a fire, but our goal is to never have a fire. We want to make sure that any deployments that we put out are fully tested in all aspects of use, so that when things are fully deployed, there isn't any need for a rollback. In the last three years, we've had to roll back a production deployment once. I don't attribute that all to Dynatrace, but I attribute a large part of it to it.

It has increased our uptime because we find out about problems before they're problems. The one goal that my team has, above anything else, is to know about problems before the customer does. If the customer is telling us there's a problem, we have failed. We are so redundant and so HA-built, that there is absolutely no reason for us not to be able to circumvent an issue that is under our control, and to prevent any type of a work stoppage or outage. We can't help it if the internet goes down or if Microsoft has a core problem, but we can certainly help by making sure that it's not our application stack or our infrastructure. I would estimate our uptime is better by at least 20 percent.

In the end, it has decreased our time to market with new innovations and capabilities, because anything that reduces time-to-produce decreases time to market. Once the code has actually been developed, it's in testing and deployment and that's where my window of efficiency is. I can't control how long it takes to build something, but I can control how long it takes to fully test it and deploy it. And there, it has saved us time.

Before we had Dynatrace, and a lot of the processes that Dynatrace has helped us put into place, everything was manual. And the more manual work you have, the more margin for human error you have.

What is most valuable?

The most valuable features really depend on what I'm doing. The most unique feature that Dynatrace offers, in my opinion, is Davis. It's an AI engine and it's heavily integrated into the core product.

The Session Replay not only allows us to watch the user in 4K video, but to see the individual steps happening behind the scenes, from a developer perspective. It gives us every single step that a user takes in a session, along with the ability to watch it as a video playback. We can see each call to every server as the user goes through the site. If something is broken or not running optimally, it's going to come up in the Session Replay. 

We also use the solution for dynamic microservices within a Kubernetes environment. We are in the process of converting from Docker Swarm to Kubernetes, but that is in its infancy for us and will grow as our Kubernetes deployments grow. Dynatrace's functionality in this is really good. 

We use JIRA as well as Jenkins. We have a big DevOps push right now and Dynatrace is an integral part of that push. We're using Azure DevOps, and tying in Dynatrace, Jenkins, and JIRA and trying to automate that whole process. So Dynatrace plays a role in that as well.
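To illustrate how that kind of tie-in typically works — a minimal sketch in Python, assuming a tenant URL, an API token with event-ingest scope, and a hypothetical "app: storefront" tag, none of which come from the reviewer — a pipeline step can record each deployment in Dynatrace so Davis can correlate problems with releases:

    # Hypothetical post-deploy step, e.g., invoked from a Jenkins or
    # Azure DevOps pipeline after a successful release.
    import os
    import requests

    TENANT = os.environ["DT_TENANT"]    # e.g. https://abc12345.live.dynatrace.com
    TOKEN = os.environ["DT_API_TOKEN"]  # token with event-ingest scope

    event = {
        "eventType": "CUSTOM_DEPLOYMENT",
        "deploymentName": os.environ.get("JOB_NAME", "storefront-deploy"),
        "deploymentVersion": os.environ.get("BUILD_NUMBER", "0"),
        "source": "Jenkins",
        # Attach the event to services carrying an assumed app:storefront tag.
        "attachRules": {
            "tagRule": [{
                "meTypes": ["SERVICE"],
                "tags": [{"context": "CONTEXTLESS",
                          "key": "app", "value": "storefront"}],
            }]
        },
    }

    resp = requests.post(
        f"{TENANT}/api/v1/events",
        headers={"Authorization": f"Api-Token {TOKEN}"},
        json=event,
        timeout=30,
    )
    resp.raise_for_status()
    print("Deployment event sent:", resp.status_code)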

In terms of the self-healing, we use the recommendations that it provides. I'd say the Davis engine runs at about 90 percent accuracy in its recommendations. We have yet to allow automated remediation, which is our ultimate goal. It's going to be a bit before we get comfortable with anything doing that type of automated work in production. But I feel that we're as close as we've ever been and we're getting closer.
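The pattern described here — take Davis's recommendation but keep a human in the loop rather than auto-remediating — might look like the following sketch, which reads problems from the Problems API v2 and surfaces the suspected root cause for review. The field names reflect my understanding of the v2 API and should be treated as assumptions:

    # Hypothetical sketch: surface Davis-detected problems for human review
    # instead of triggering remediation automatically.
    import os
    import requests

    TENANT = os.environ["DT_TENANT"]
    TOKEN = os.environ["DT_API_TOKEN"]  # token with problems-read scope

    resp = requests.get(
        f"{TENANT}/api/v2/problems",
        headers={"Authorization": f"Api-Token {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

    for problem in resp.json().get("problems", []):
        if problem.get("status") != "OPEN":
            continue
        root = problem.get("rootCauseEntity") or {}
        print(problem.get("displayId"), problem.get("title"),
              "| suspected root cause:", root.get("name", "unknown"))
        # An operator validates the recommendation and applies the fix by
        # hand; wiring this loop to a remediation runbook would be the
        # automated-remediation step the team hasn't enabled yet.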

User management is extremely easy — and I hate to use the word "easy," but it really is. And it's a lot easier today than it was when we first started with Dynatrace. We create a lot of customized dashboards for both the executive and management teams; these dashboards are central to their areas of oversight. It used to take quite a bit of time to create dashboards. Now there is even an automated tool that takes care of that: you just tell it what you want it to present and everything falls together. It has templated dashboards that you can customize.

The single agent does all of it. Once you deploy the one agent to your environment, it's going to propagate itself throughout the environment, unless you specifically tell it not to. It is the easiest thing that we've ever owned, because we don't have to do anything to it. It self-maintains. Every once in a while we'll have to reinstall the agent on something or a new version will come out and we'll want to deploy it, but for the most part, it's set-it-and-forget-it.

What needs improvement?

I would love to see Dynatrace get more involved in the security realm. I get badgered by so many endpoint protection companies. It seems like a natural fit to me, that Dynatrace should be playing in that space.

I'd also like to see some deeper metrics in network troubleshooting. That's another area that it's not really into.


For how long have I used the solution?

We're in our fifth year of using Dynatrace. We were the very first paying customer for the new platform, Dynatrace ONE. We used it right at launch.

What do I think about the stability of the solution?

The stability has been phenomenal. I'm not going to say that Dynatrace has never had an outage, but I've never had an outage where Dynatrace wasn't available for me. It's always been there. It's always there when I need it. It's always on. Our uptime is five-nines, and we do attribute a large portion of our ability to maintain that figure to Dynatrace.

What do I think about the scalability of the solution?

In terms of scalability, we don't have anything that it can't do. As we add to our infrastructure, it scales. Yes, every time we add a node, we're going to spend more. But it's up to me to decide if I want to monitor everything or a set of everything. My philosophy is to monitor all of production. Anything that is deployed to production is being monitored by Dynatrace. 

From a dev and test perspective we don't monitor like that. We keep a secondary Dynatrace instance that we use in the event that we need to troubleshoot something in development, but for the most part, our Dynatrace usage is relegated to production. And that's for cost reasons.

We have four environments in our builds. We have production, where we cover everything. We have a development environment, which is a subset of production, with different copies. We have QA, which is where everything goes from development for final testing. And then we have staging, which is the final step before it's pushed to the production clusters.

As we add to production, we add to Dynatrace. That is always going to be the plan. We will not deploy anything to production that doesn't have Dynatrace on it.

I don't get involved in the minutiae, but from what the guys tell me, with Linux servers you don't even blink. They have to watch Windows servers a little more because Windows is more intensive. Windows itself doesn't tend to perform very well when you first build it. You've got to massage it and get it to where you want it to be. Dynatrace helps us with that, but Windows is more finicky.

We have about 50 users of Dynatrace between infrastructure, development, operations, and sales.

How are customer service and support?

Their technical support is the best ever. I know I sound like a broken record, but we get chat support on the Dynatrace site, not from some guy in India, but from a high-level tech in the US who has the answers to our questions. That person is not a first-level tech who's going to ask you if your machine is booted up. The techs can answer our questions and, if they can't, they open a ticket and get back to us later. It's the best support model I've ever had the pleasure of working with.

Which solution did I use previously and why did I switch?

We were using New Relic at the time. We were having a lot of frustrations with that in terms of its dashboarding capabilities, and the amount of time that my people had to spend keeping it updated and running correctly. We started looking at other products and we ended up settling on Dynatrace. Aside from its major capabilities, what Dynatrace ended up doing for us was to assist us in our migration to the cloud, because it gave us the sizing recommendations and the baselines that we needed to formulate what we were going to start with in Azure.

New Relic was the primary APM at the time and we were just very frustrated with it. We started looking at other products and really didn't see much of a difference in the competition, differences that would warrant going through the change, until we came upon what was then called Ruxit and is now called Dynatrace.

The biggest difference was that the other solutions required overhead. My biggest complaint was the amount of time we had to spend with these tools; they're supposed to save you time, not take up more of it. Dynatrace was the first one to actually fulfill that promise.

We ran hybrid for a year, collecting data on both ends, using Dynatrace both on-prem and in the cloud, and now it's all cloud.

How was the initial setup?

The setup is really not much different whether you're an on-prem, cloud, or even hybrid organization. It's still the one agent. I have no experience with their AppMon product, so I can't tell you how much easier the new product is versus the old. But I can tell you that this product we have been using is the easiest thing we've ever had. The only comment I got from my systems team was, "Why didn't we get this sooner?"

I am not the norm when it comes to policy and procedure. I tend to buck the trends a little bit. If I have a new product that I feel is going to be advantageous to the company and my team as a whole, then once we've done our due diligence, we will just deploy it. I know that larger companies with different criteria and regulations have to follow different channels and paths, through security and infrastructure and storage, etc. But ultimately, as long as you have "air-cover," and by that I mean an executive sponsor who believes in what you're doing, then you really should be able to get it done with minimal effort.

We were fully up and running in a week. It took me longer to remove New Relic than it did to deploy Dynatrace. We only needed one person to deploy Dynatrace. One of my systems people took care of it. I took care of the administrative stuff, creating the initial dashboards and getting the payments set up and so forth, but my systems people took care of the actual deployment of the one agent.

What about the implementation team?

I didn't hire any contractors or deployment services. I signed up for Dynatrace's free trial and we went to town.

What was our ROI?

From a monitoring-tool perspective, Dynatrace has saved us money through consolidation of tools. We used to use a number of tools — PRTG, Pingdom, an additional Azure service that we no longer pay for — and we used Splunk for log mining, which we've also dropped. Just in the tools we eliminated, it has saved us $30,000, and there are more soft dollars I could add to that.

I'm not sure how you come up with an ROI because it's pretty much all soft dollars. It's a line item in my budget that doesn't have to grow unless we grow. We have not experienced a base-price increase from Dynatrace.

What's my experience with pricing, setup cost, and licensing?

Dynatrace is not the cheapest product out there and it's not the most expensive product out there. In our business, you get what you pay for. 

Dynatrace has a place for everybody. How you use it and what your budgetary limitations are will dictate what you do with it. But it's within everybody's reach. If you're a small organization and you have a large infrastructure, you may not be able to monitor the whole thing. You may have to pick and choose what you want to monitor, and you have the ability to do so. Your available funds are going to dictate that.

The only additional costs that I incur are for additional log storage space, which is like $100 a year.

What other advice do I have?

My advice would be to compare and compare again. Everybody's offering free trials, and I know that they're a pain to do, but compare the products, apples for apples. Everybody's going to compare costs, but be sure to compare the functionality. Are you getting what you pay for? Are you getting the bang for your buck out of what the product is returning to you? If all you need to know is "my server's down," you can probably get by with the cheapest thing out there. But if you want to know why the server is down, or that the server is about to go down and you need to do something, then you want a product like Dynatrace.

I go to their Perform conference every year, and it's amazing to me to see the loyalty and dedication from the customer side. It's like a family reunion every year when we go to Perform. I hope we have it next year.

From a core-product perspective, Dynatrace is doing everything that we ever asked for. Everything that we've ever wanted to monitor, it has always been there first.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
reviewer1352679
IT Technical Architect at an insurance company with 5,001-10,000 employees
Real User
Provides traceability from tracing transactions of end users all the way through the back-end systems
Pros and Cons
  • "It has improved our critical incident response, exposing critical issues impacting the environment and our ability to respond to those events prior to client impact as well as resolving those events more quickly. We have use cases where we have studied a 70 percent improvement for response times in an occurring event as well as future reoccurrences being improved."
  • "We can see issues that occur, sometimes before the clients do. Before we have client (or end user) calls for issues, we are able to start troubleshooting and even resolve those issues. We can quickly identify the root cause and impact of the issues as they occur, and this is very helpful for providing the best client experience."
  • "There continues to be some opportunity to expose the infrastructure from a broader reporting standpoint. Overall, the opportunity is in the reporting capability and the ability to more flexibly expose or pivot the data for deeper analysis. Oftentimes, the solution is good at looking narrowly at information, but when you want to broaden that perspective, that's where the challenges come in. At this point, it requires the export of data to external systems to do this."

What is our primary use case?

Our primary use cases are operational awareness, the health of the systems, and impact on users. Other use cases include proactive performance management, system checkouts (as we investigate the ability to manage configuration and integration with the CMDB), and some usage from a product perspective in terms of application usage. I also use it to manage and improve the user experience by understanding user behaviors.

We are in both Azure and AWS, and we have both on-premise and cloud Kubernetes environments that we're running in. Even though we have been using less efficient deployment methodologies, we haven't encountered any limitations in scaling to cloud-native environments.

We have only used version 1.192 of the Dynatrace product. We have not used any previous versions.

How has it helped my organization?

It has improved our critical incident response, exposing critical issues impacting the environment and our ability to respond to those events prior to client impact as well as resolving those events more quickly. We have use cases where we have studied a 70 percent improvement for response times in an occurring event as well as future reoccurrences being improved.

The solution's use of a single agent for automated deployment and discovery helps our operations significantly. Oftentimes when you are looking at endpoint management, centralized monitoring teams need access to data across systems. They need to manage agents deployed throughout the organization. Remote polling of data can be helpful, but it's not deep enough, especially for APM capabilities. Having one agent significantly simplifies that functionality in such a way that it enables a very small team to manage a very large environment with very limited overhead. It provides the ability for external teams to manage it because they don't need any deeper knowledge of the application than installing the agent. They have the ability to integrate the agent into deployments and to do the work with very limited overhead.

The automated discovery and analysis helps us to proactively troubleshoot production and pinpoint the underlying root cause. We have had scenarios where we can see end-user impact. One use case involved an individual system in a nine-node cluster for a content management system that was having an issue. Through Dynatrace, we were able to quickly identify the one host that was having a problem, take it out of the active cluster, recycle that application instance, bring it back, and reintroduce it to the cluster in a very efficient manner. Historically, these processes take multiple hours to diagnose, identify the instance, and then do the work. With Dynatrace, we were able to do the work in less than 20 minutes from when the issue first occurred to its resolution. Thus, there have been scenarios where we can quickly identify issues in infrastructure and back-end services.

Out-of-the-box, it's the best product that I've seen. Its ability to associate application impact, as well as root cause from an infrastructure standpoint, is by far ahead of anything that I have seen due to its ability to associate infrastructure anomalies to applications. We are still on our journey of identifying the right business KPIs to see how we can associate this data.

Dynatrace is doing an excellent job of giving us 360-degree visibility of the user experience across channels in most technologies. We are working with Dynatrace to expose the full transparency to the mainframe, as we have transactions that call from the cloud onto the mainframe and back out to other services. This is a critical visibility that isn't there yet. Otherwise, with a lot of the cloud and historical systems, we do see a lot of transparency of transaction trace across the environment.

What is most valuable?

  1. Automated discovery
  2. Automated deployments
  3. The AI

These are probably the key features, because they get into traceability: tracing transactions of the end user all the way through the back-end systems. We are still working through the mainframe integration, but the scenarios where we can integrate through the mainframe are very useful.

We can see issues that occur, sometimes before the clients do. Before we have client (or end user) calls for issues, we are able to start troubleshooting and even resolve those issues. We can quickly identify the root cause and impact of the issues as they occur, and this is very helpful for providing the best client experience.

We have found the self-management of the management cluster and Dynatrace processes to be highly reliable. There have been minimal issues with managing the infrastructure.

We've targeted deployment of the real-user monitoring to the most critical applications in the company to understand if there's something that's happening in the environment and the user impact. This is to be able to understand the blast radius of issues, helping us understand if an issue is impacting one app or multiple applications. We can then quickly diagnose where the common event is (the root cause), resolve it, and then leverage the product to validate healthy user traffic after completion by seeing transactions be processed again. 

From a synthetic standpoint, we use the synthetics in two ways: 

  1. We do lower-level infrastructure pings (HTTP pings), primarily to validate individual technology services on the back-end, i.e., the API endpoints (a sketch of this kind of check follows this list).
  2. We use the front-end synthetics to validate user experience 24/7. When you have low usage periods, you are still able to validate the availability and performance of services to the organization. Oftentimes, changes may be implemented to reduce risk during lower usage times and the synthetics can be valuable to validate during that time.
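To make the first kind of check concrete, here is a minimal, generic sketch in Python of what an HTTP ping validates — is the endpoint up, and is it answering within budget? The URL and the 2-second threshold are illustrative assumptions, not the reviewer's actual configuration:

    # Hypothetical availability/latency probe, the essence of a low-level
    # HTTP-ping synthetic check against a back-end API endpoint.
    import time
    import requests

    ENDPOINTS = ["https://api.example.com/health"]  # placeholder endpoints

    for url in ENDPOINTS:
        start = time.monotonic()
        try:
            resp = requests.get(url, timeout=10)
            elapsed = time.monotonic() - start
            ok = resp.status_code == 200 and elapsed < 2.0
            print(f"{url}: status={resp.status_code} "
                  f"time={elapsed:.2f}s ok={ok}")
        except requests.RequestException as exc:
            print(f"{url}: DOWN ({exc})")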

It has been very easy to deploy and obtain basic information. 

It's very good from a problem troubleshooting perspective.

What needs improvement?

The out-of-the-box features are extremely valuable. However, there will be gaps and challenges as you go into a much broader set of infrastructure technologies and need to consume that information; this will be a challenge for the company. The thing they need to focus on is the ease of integrating external data sources, which can then also contribute to the AI. There is a ton of value out-of-the-box, but moving to the next steps will be an interesting journey. I know this is something they are focused on now. When bringing in other telemetry — whether network devices, databases, or other third-party products that integrate into a larger ecosystem — there will be a lot of successes, but there will also be some challenges on this journey.

There is some complexity in the alarm processing logic within the product between the alert policies and problem notifications.

I'd like to see the user session query data expanded to be more inclusive, covering application and other telemetry within the system. Currently, analyzing the data outside of dashboards requires exporting it to other reporting systems. If you want to do higher-level reporting, that may make sense; however, there is a desire to be able to do some of that analysis within the product.
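For context, the export path being described usually goes through the user sessions (USQL) API. A minimal sketch in Python — the tenant, token, query, and response fields are assumptions based on the v1 USQL endpoint, not the reviewer's setup:

    # Hypothetical sketch: pull user-session data out via the USQL API
    # for analysis in an external reporting tool.
    import os
    import requests

    TENANT = os.environ["DT_TENANT"]
    TOKEN = os.environ["DT_API_TOKEN"]  # token with user-session read scope

    query = "SELECT city, count(*) FROM usersession GROUP BY city"

    resp = requests.get(
        f"{TENANT}/api/v1/userSessionQueryLanguage/table",
        headers={"Authorization": f"Api-Token {TOKEN}"},
        params={"query": query},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    for row in data.get("values", []):
        print(dict(zip(data.get("columnNames", []), row)))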

There continues to be some opportunity to expose the infrastructure from a broader reporting standpoint. Overall, the opportunity is in the reporting capability and the ability to more flexibly expose or pivot the data for deeper analysis. Oftentimes, the solution is good at looking narrowly at information, but when you want to broaden that perspective, that's where the challenges come in. At this point, it requires the export of data to external systems to do this.

Adoption lagged primarily due to:

  1. The prioritization of monitoring as a functionality when teams do their work, as our teams are more focused on business functionality than nonfunctional requirements.
  2. Getting familiar with the navigation of the product. With our implementation, we have a single node where people get access to all the data within the enterprise; they're able to see everything. It takes time to work through the process and get the correct set of tags and everything else in place so they can filter and limit the data to what they need to see and can consume. It also takes time for them to understand the data, what's there, and how to consume it, as we learn how to limit the data sets to what they really want to see.

For how long have I used the solution?

About two years.

What do I think about the scalability of the solution?

At this point, we have about 1,700 host units, and we're monitoring 2,000 to 3,000 systems. We have 300 to 500 users a month using the system, with approximately 700 users overall.

How are customer service and technical support?

Their Tier 0 support is better than that of most companies I have ever worked with. Normally, I'll get useful information even at that initial level.

The in-app chat is extremely helpful. It helps not only with the ability for me to troubleshoot, but the ability for the rest of the organization to ask how-to questions. We have hundreds of those chats across the organization per month which are leveraged by end users.

Everything else is as expected when working through engineering and our product specialists, who have been helpful.

How was the initial setup?

The initial setup and implementation are almost too easy. With real-user monitoring and all the application monitoring, you are introducing change into the environment. It is so easy to set up, configure, and implement that you can get way ahead of your organization technically from where they are from a usability standpoint. We have run into virtually no technical limitations in implementing the product. It has purely been from the ability to get users to adapt, understand, and leverage the value of the product.

We implemented and installed the Dynatrace platform (and everything) within a couple of days, and we instrumented certain environments overnight. Onboarding the teams, and the training that required, took months. Even though we were able to technically take the product from non-production into production within a month of deploying everything, it took us another eight to nine months to onboard individual teams into adopting and leveraging the product. From there, the rollout is limited more by organizational change, communication, and facilitating training with teams, and by their technical capabilities. Key teams adopted the product and used it very quickly; we were seeing value within four weeks of deployment from our centralized critical incident teams, but product adoption by application and development teams has lagged.

If you are implementing Dynatrace, the first thing is to not underestimate your users and their experience: provide them personal service to onboard, consume the information, and leverage the product on the front-end. Technically, the product makes it so easy to implement and deploy that it is difficult for the rest of the organization to stay in front of it as they adopt the product. The data starts presenting itself before they are ready and able to consume it. You need to factor that into your implementation.

What was our ROI?

The solution has decreased both our MTTI and MTTR.

In 2018, we were having, on average, one issue per day; it is one of the reasons we purchased the product that year. Last year, we drove those numbers down significantly, reducing outage time by 70 to 80 percent as an organization. While Dynatrace is part of driving that avoidance as well as the reduced outage time, it's impossible for us to draw a direct correlation to its impact because there are so many other factors at play in an organization, such as changes to management processes and everything else that could also influence it. However, we know that it was part of that increased uptime, to the point where we've decided to invest significantly more in the product.

What's my experience with pricing, setup cost, and licensing?

It's understandable to do a smaller-scale initial evaluation. However, as you identify the product's value, don't hesitate to expand your scope and scale to maximize the initial investment and your opportunity to make a bulk investment in the product.

Which other solutions did I evaluate?

We have other competitive products. The automated instrumentation will be extremely valuable as we look to consolidate our solution set. The insight we can gain quickly is interesting and good information that we can use. There will be a challenge internally with our teams, since application teams were never exposed to infrastructure information and infrastructure teams have never been exposed to application or end-user information. Organizationally, we have to adjust as people start to see this insight and figure out how to leverage it for good, which will be helpful. It will be a game changer in terms of how we can identify and respond to events in the organization from the point of view of data and analysis, as opposed to tribal knowledge and fear.

Dynatrace was initially brought in to eliminate one competitive APM product. We are now on to eliminating the second, and we'll be consolidating all APM on the Dynatrace platform. We are also in the process of consolidating other infrastructure monitoring products on the platform. We expect a small incremental investment from a purely licensing standpoint to consolidate the products, but we expect to realize a significant amount of benefit from the capabilities it provides: root cause analysis, impact analysis, transaction trace observability in the environment, reduced administrative costs from disparate products, and the ability to integrate data. However, a lot of this was not measured previously because we had a lot of disparate tools across disparate teams managing things. Therefore, we can't measure the savings, but we expect they will be significant.

We have CA APM Introscope, New Relic, and AppDynamics. We are users of all three of these products, though we are probably using AppDynamics the least. We have almost completely migrated away from Broadcom and are starting the replacement of New Relic.

Holistically, Dynatrace's traceability starts from the user endpoint, meaning the ability to trace a transaction from a user session all the way through other technologies. We've had more comprehensive traces than with other products, which do not offer an easy interface for seeing the trace of a user session in a comprehensive way. Dynatrace can go from mobile, to microservices, to mainframe, and trace across all those platforms. It also has the ability to associate, or automatically correlate, user transactions to applications and then to the underlying infrastructure components. Another Dynatrace benefit is the whole function of the AI, as well as bringing in other external data sources; e.g., we are looking at DataPower and F5 data integrations, and at incorporating those into the trace. Finally, there is support for legacy technologies. Traceability, the AI, and support for legacy mainframe technologies are the big positive differentiators, and they lead to a conclusive root cause analysis.

CA APM Introscope and New Relic have simpler interfaces for consuming data. With Dynatrace, you need to develop plugins to get easier API interfaces for pushing data into other products; this is a little easier with the other products. The New Relic Insights product offers stronger reporting than what Dynatrace provides.

There are also other products in other suites that we are looking at eliminating, such as Broadcom UIM, Microsoft SCOM, and Zabbix. We have a lot of open source solutions, and we're looking to roll out infrastructure monitoring, then consolidate and centralize the data. The primary capability gets into mobile-to-mainframe traceability, in order to simplify and expedite the impact and root cause analysis processes for the teams. The solution also supports our modern technologies running in AWS and Kubernetes cluster microservices, with traceability all the way through the mainframe.

What other advice do I have?

We have integrated our notification systems through PagerDuty, Slack, and our auto-ticketing app, in order to generate incident records. The integrations with PagerDuty and Slack are effective. We're in the process of migrating some tools to ServiceNow, so we are synchronizing events while also evaluating the CMDB integration with ServiceNow. There are some recent capabilities for automating discovery and relationship building that make this look more attractive and that we're looking forward to, but we have not yet implemented them. The integration to ServiceNow will be good.
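As a sketch of that notification plumbing — a tiny webhook receiver that accepts a Dynatrace problem notification and forwards a one-line summary to Slack. The route, environment variable, and payload keys are assumptions; the keys must match whatever placeholders ({ProblemTitle}, {State}, etc.) are configured in the notification template:

    # Hypothetical relay: Dynatrace problem notification -> Slack message.
    import os
    import requests
    from flask import Flask, request

    app = Flask(__name__)
    SLACK_WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]  # incoming-webhook URL

    @app.route("/dynatrace-problem", methods=["POST"])
    def dynatrace_problem():
        payload = request.get_json(force=True)
        # Keys mirror the placeholders used in the custom notification
        # template; adjust them to your own configuration.
        text = (f"[{payload.get('State', '?')}] "
                f"{payload.get('ProblemTitle', 'Dynatrace problem')}")
        requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10)
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)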

The desire is to have Dynatrace help DevOps focus on continuous delivery and shift quality issues to pre-production. We are not there yet. The vision is there and it makes sense with the information that we see, but we have not had the opportunity. Even though we've been using the product for two years, we're only now starting an effort to roll it out across the enterprise and replace competitive products for application infrastructure monitoring. We'll then have the opportunity for that full CI/CD integration or NoOps approach.

We will be rolling out to some highly dense environments in the near future and haven't run into any performance issues yet. The only issue we ran into previously was with the automated instrumentation of the product: we accidentally disabled the competitive products that teams were using while we were evaluating Dynatrace. You can get ahead of yourself in the rollout.

We don't have the solution’s self-healing functionality integrated into the automation product. Dynatrace doesn't have the self-healing capability of restarting services. Therefore, from a monitored application perspective, we haven't enjoyed that capability yet.

We are in the process of testing some parts of the session replay. We see value there and are working through understanding the auditory or compliance impacts to leverage this feature.

Based on my experience and history with these products, I would rate it at least a nine (out of 10). It's been far superior to other products in its capabilities and comprehensiveness, especially across both cloud and legacy technologies (such as mainframes and server-based monolithic applications).

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Director, Digital Projects and Practices at Rack Room Shoes
Real User
Allows our team to focus more on innovation, rather than on monitoring and bug-squashing
Pros and Cons
  • "The alerting systems are definitely the most valuable feature. The AI engine, "Davis," has proved to be a game-changer for us, as it helps to alert us when there are anomalies found in our applications or in their performance... letting the Davis engine find those anomalies and push them to the top, especially as they relate to business impact, is very valuable to us."
  • "The one area that we get value out of now, where we would love to see additional features, is the Session Replay. The ability to see how one individual uses a particular feature is great. But what we'd really like to be able to see is how a large group of people uses a particular feature. I believe Dynatrace has some things on its roadmap to add to Session Replay that would allow us those kinds of insights as well."

What is our primary use case?

We are using it to monitor our e-commerce applications and the full stack that our e-commerce applications run on. That includes both our Rack Room Shoes domain and our Off Broadway Shoes domain. We use it to monitor the overall health of the entire stack, from the hardware all the way to the user interface. And more specifically, we use it to monitor the real user's experience on the front-end.

How has it helped my organization?

What Dynatrace has really allowed our team to do is focus more on innovation, rather than on monitoring and bug-squashing. Now that we have a tool like Dynatrace, we can continue to do forward-thinking projects while Dynatrace is doing the monitoring and rooting out the root causes. We're spending a lot less time trying to find out what the problem is, versus letting Dynatrace pinpoint where the problem is. We can then validate and remediate much quicker. That's the impact it's had on our business.

The automated discovery and analysis helps us to proactively troubleshoot production and pinpoint underlying root cause. We recently had some issues with database connections. Our database team was scratching their heads, not really knowing where to look. What we were able to do with Dynatrace, because we had some of the Oracle Insights tools built into the database, was to provide, down to the SQL statement, what queries were taking up the most resources on that machine. We provided that to the database team and that gave them a head-start in being able to refactor the data so it was quicker to query. That really helped us speed up the user experience for that specific issue.

Dynatrace helps DevOps to focus on continuous delivery and to shift quality issues to pre-production. We are just now starting to use it in that way. When we first launched Dynatrace, we only had monitoring in our production environment. At that point we were using it as an up-front, first-alert tool for any issues that were happening. Now we're instrumenting our lower environments with Dynatrace so we can monitor our load-testing there and find out where our breaking points are. That allows us to push out products that are much more stable and much less buggy. What this is going to allow us to do is push out more solid, less buggy releases and customer features at a faster rate, and continue to innovate on the next idea. We're just starting that journey; we just got fully instrumented in our lower environments in the last couple of weeks.

In terms of 360-degree visibility into the user experience across channels, we're only monitoring our digital channels right now, specifically our e-commerce channels. But we do have ways, even within the channel, to dissect by the source they came from. Did a given customer come from a digital ad? Did they come from an email? Did they come to us direct? It does allow us to segment our customers and see how each segment of customer performs as well. This is important for us because we want to make sure that we're not driving specific segments of customers into a bad-performing experience or to a slow response time. It also allows us to adequately determine where to spend our marketing dollars.

Another benefit is that it has definitely decreased our mean time to identification, with the solution and the Davis AI engine bringing the most probable root cause to the top. And within that, it gives us the ability to drill down into the specific issue or query or line of code that is the issue. So it has saved us a lot of time — I would estimate it has saved us 10 hours a week — in remediating issues and trying to find the root cause.

It has also improved uptime, indirectly. Because it gives us alerts early, we're able to mitigate issues before they're actually bigger issues.

What is most valuable?

The alerting systems are definitely the most valuable feature. The AI engine, "Davis," has proved to be a game-changer for us, as it helps to alert us when there are anomalies found in our applications or in their performance. We find that very helpful. There's still a human element to the self-healing capabilities. I wish I could say, "Oh, it's magic. You just plug it in and it fixes all your problems." I wouldn't say that, but what I would say is that the Davis engine gives us that immediate insight and allows us to cater to our solution so that the next time that problem arises it can mitigate it without a lot of human involvement.

Dynatrace's ability to assess the severity of anomalies, based on the actual impact to users and business KPIs, is really good out-of-the-box. But it does an even better job when we as humans give it more instruction and provide more of the custom metrics that are key to our business. Letting the Davis engine find those anomalies and push them to the top, especially as they relate to business impact, is very valuable to us.

We find the solution's ability to provide the root cause of our major issues, down to the line of code that might be problematic, to be valuable.

And we get a lot of value out of the Session Replay feature that allows us to capture up to 100 percent of our customers' real user experiences. That's helped us a lot in being able to find obscure bugs or make fixes to our applications. 

We also use real-user monitoring and Synthetic Monitoring functionalities. We use real-user monitoring for load times, speed index, and overall application index. And we use Synthetic Monitors to make sure that even certain outside, third-party services are available to us at all times. In certain cases, we have been reliant on a third-party service, and our Dynatrace tool has let us know that that service isn't available. We were able to remove that service from our website and reach out to the service provider to find out why it wasn't available.

We also find it to be very easy to use, even for some of our business users. Most of the folks who use the Dynatrace tool do tend to be in the technical field, but use is spread across both the business side, what we call our omni-channel group, as well as our IT group. They all use it for different purposes. I'm beginning to use it on the business side to show the impact that performance has on revenue risk. I can then go back and show that when we have bad performance it affects revenue. And I can put a dollar amount on that. So the user interface is very easy to use, even for the business user.

What needs improvement?

Dynatrace continues to innovate, and that's especially true in the last couple of years. We have continued to provide our feedback, but the one area that we get value out of now, where we would love to see additional features, is the Session Replay. The ability to see how one individual uses a particular feature is great. But what we'd really like to be able to see is how a large group of people uses a particular feature. I believe Dynatrace has some things on its roadmap to add to Session Replay that would allow us those kinds of insights as well.

For how long have I used the solution?

We started using Dynatrace in September of 2017. At that time it was an older product called AppMon. But we quickly upgraded to the current Dynatrace platform the following year. We've been using the SaaS platform ever since.

What do I think about the stability of the solution?

It's been very stable. We've had very little downtime. In the last four years there may have been one outage. Overall, it's been extremely stable. Many times, Dynatrace is our first alert that we have issues with other platforms.

What do I think about the scalability of the solution?

It's extremely scalable. We're one of the small players; we're running with about 70 agents right now. We've been at Dynatrace's conferences and have heard of customers who can deploy 5,000 agents over a weekend with no issues at all. For our small, speck-of-sand space, it's extremely scalable.

We are hosted on Google Cloud; that's where all of our VMs are currently set up. Our database is there, our tax server is there, and all of our application and web servers are there, with Dynatrace monitoring all of it for us. We haven't encountered any limitations at all in scaling to our cloud-native environment. We can spin up new auxiliary servers in a matter of minutes and have Dynatrace agents running on them within 15 minutes. We're starting to play a little bit with migrating a version of our application into a Kubernetes deployment and using Dynatrace to monitor the Kubernetes containers as well.

We have plans to increase our usage of Dynatrace. We just recently updated our hosts. We needed to increase the number of host units so that we could put Dynatrace on more servers, and we've already just about used up all of those. So next year, we'll likely have to increase those host units again. And we're going to start using more pieces of Dynatrace that we haven't used before, like management zones and custom metrics.

How are customer service and technical support?

Technical support has been great. The first line of defense is their chat through the UI, which is really simple. They're super-responsive and usually get back to us within minutes. We have a solutions engineer that we can reach out to as well, and they have been very helpful, even with things like setting up training sessions and screen-sharing sessions to help enable our internal teams to be more productive using the tool.

Which solution did I use previously and why did I switch?

We were using a tool called New Relic, really just as a synthetic monitor to make sure the application was up and running, but we weren't getting a lot of insights. When we decided we wanted a tool that could give us more insight and the ability to monitor more of our customers' behaviors, there just wasn't another tool that we felt could do things as well as Dynatrace, through a "single pane of glass." We chose Dynatrace over New Relic at the time because New Relic just didn't have any comparable solutions.

We haven't found another tool that can help us visualize and understand our infrastructure, and do triage, like Dynatrace. We haven't found one that can give us that full visibility into the entire stack from VM all the way to the UI. That was really the reason we picked Dynatrace. There just wasn't another tool that we felt could do it like Dynatrace.

The fact that the solution uses a single agent for automated deployment and discovery was the second reason that we chose Dynatrace. The ease of deployment, the fact that we could use the one agent and deploy it on the host and suddenly light up all of these metrics, and suddenly light up all of these dashboards with insights that we didn't have before, made it extremely attractive. It required a lot less on our part to try to do instrumentation. Now, as we add more Dynatrace agents to more of our back-end servers, we think we'll gain even more value out of it.

How was the initial setup?

We started with AppMon, which was more of an on-premise version that we installed ourselves, although it still used a single agent. Then we moved to the SaaS solution. It was very easy for us to migrate from AppMon to the SaaS solution, and it's been extremely easy to instrument new hosts with the agent.

We were up and running within 30 days when we were first engaged with AppMon. When we migrated to the SaaS solution, it maybe took another 30 days and might have even been less. I wasn't involved with that migration, but I worked closely with the guy who was. I don't remember it taking much longer than 30 days to migrate.

We had an implementation strategy. We knew specifically which application we wanted to monitor, and all of the hardware and services and APIs that that application was dependent on. We went in with a strategy to make sure that all of those things were monitored. And now we've progressed that strategy to start monitoring more of our internal back-end systems as well — the systems that support our stores, not just our e-commerce channel — to see if we can't get more value and maybe even realize more cost savings on our brick and mortar side using Dynatrace.

What was our ROI?

We have definitely seen return on our investment. It has come in the form of being able to produce more stable, less buggy applications and features, and in allowing our team to focus more on innovating new ideas that drive revenue and business, versus maintaining and troubleshooting the existing application.

It hasn't yet saved us money through consolidation of tools, but as we continue to find more value in Dynatrace, it does make us look at other tools and see if we are able to use Dynatrace to consolidate them. We have replaced other application monitoring tools with Dynatrace, but we've not yet consolidated tools.

What's my experience with pricing, setup cost, and licensing?

Whatever your budget is, you can manage Dynatrace and get value out of it, but you need to manage it to your needs. That's the one thing we found. We did not budget the right amount to begin with, and it has cost us more in the long run than if we had been able to negotiate it upfront. But we didn't really know what we didn't know until we'd been using Dynatrace for a while.

Your ability to capture Session Replay data is based on the number of what they call DEM units: digital experience monitoring units. That's where we were short to begin with. Beyond the platform subscription, there is the additional expense of determining the number of host units you want to run and the number of DEM units you need to capture all of the user experiences you want. In our case, we wanted the ability to capture 100 percent. In another business, someone might only be worried about capturing a sampling of the traffic.

Which other solutions did I evaluate?

We evaluated New Relic, AppDynamics, AppMon, which was the Dynatrace solution at the time, and we also looked at Rigor.

Dynatrace could do pretty much everything. It wasn't just the real-user monitoring piece of it. It was also the full stack health aspect. The Davis AI engine was probably the biggest differentiator among all of the tools. The Davis AI engine and its ability to surface the root cause was a game-changer.

What other advice do I have?

My advice would be to jump all-in. There doesn't seem to be another tool that can do it like Dynatrace, and from what we've seen the last two times we've gone to their Dynatrace Perform conferences, they are dedicated to innovating and adding features to the platform.

We are not yet using Dynatrace for dynamic microservices within a Kubernetes environment. We are beginning to play in that arena. We're looking at tools that will help us migrate from our current VM architecture to a Kubernetes deployment architecture, to enable us to get more into a no-DevOps type of environment. But today, we're still on a virtual machine deployment architecture.

Similarly, we have not integrated the solution with our CI/CD and/or ITSM tools. That is on our roadmap. As we migrate and transition into a no-DevOps and continuous integration/continuous deployment operation, we'll begin to use Dynatrace as part of our deployment processes.

The solution hasn't yet decreased our time to market for new innovations or capabilities, but we believe we will realize that benefit going forward, since we'll be leveraging Dynatrace in our lower environments to find the breaking points of new features before we release them.

We have half a dozen regular users, ranging from our e-commerce architect to DevOps engineers to front-end software developers. My own role is more that of a senior-level executive or sponsor. We also have some IT folks, database administrators, and CI people, but most of our users are in the IT/technical realm.

We don't have a team dedicated to maintaining the solution, but we do have a team responsible for it: the team that just helped instrument our lower environment with Dynatrace. Deployment responsibilities and instructions are shared across three groups: IT; our omnichannel group, which is really our business side; and a third party we use for staff augmentation, which uses Dynatrace to help us monitor during our off-hours.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
DermotCasey - PeerSpot reviewer
Principal Technology Consultant at Vodafone
Real User
Feature-rich, stable, and straightforward to set up
Pros and Cons
  • "The visibility that it provides is most valuable."
  • "There should be more visibility for network performance monitoring. There should be more metrics for things like 5G and IoT. That would be the main thing because they've moved more to mobile performance rather than fixed networks."

What is most valuable?

The visibility that it provides is most valuable.

What needs improvement?

There should be more visibility for network performance monitoring, with more metrics for things like 5G and IoT. That would be the main thing, because the focus has moved more to mobile performance rather than fixed networks.
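One hedged stopgap for gaps like this is Dynatrace's generic metrics-ingest endpoint, which accepts arbitrary custom metrics in a simple line protocol. The sketch below pushes a hypothetical 5G latency reading; the metric key, the 'cell' dimension, the environment URL, and the token are all invented for illustration.

    # Hedged sketch: push a custom network metric into Dynatrace.
    # The metric key and 'cell' dimension are hypothetical examples.
    import os
    import requests

    DT_ENV = "https://YOUR-ENV.live.dynatrace.com"
    DT_TOKEN = os.environ["DT_API_TOKEN"]  # token with metrics-ingest permission

    def ingest_5g_latency(cell_id: str, latency_ms: float) -> None:
        # Line protocol: metric.key,dimension=value payload
        line = f"custom.network.5g.latency,cell={cell_id} {latency_ms}"
        resp = requests.post(
            f"{DT_ENV}/api/v2/metrics/ingest",
            data=line,
            headers={
                "Authorization": f"Api-Token {DT_TOKEN}",
                "Content-Type": "text/plain; charset=utf-8",
            },
            timeout=10,
        )
        resp.raise_for_status()

This is not a substitute for purpose-built 5G/IoT monitoring, but it can get custom network KPIs onto dashboards and under anomaly detection today.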

For how long have I used the solution?

I have been using this solution for over 10 years.

What do I think about the stability of the solution?

Its stability is very good. There are no complaints.

What do I think about the scalability of the solution?

It is pretty scalable.

How are customer service and support?

I haven't had to deal with them too much. It has been good enough, and I haven't had too many moments where I had to reach out. I would rate their support a four out of five.

How was the initial setup?

It is straightforward; the agents are pretty easy to set up. I would rate it a four out of five in terms of ease.

What's my experience with pricing, setup cost, and licensing?

Compared to New Relic and other providers, it is more expensive, which is its biggest disadvantage. Its biggest advantage is its capability: it is more feature-rich.

What other advice do I have?

My advice would be to try it before you buy it. I would rate it a strong eight out of 10.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
reviewer1170870 - PeerSpot reviewer
Enterprise Monitoring | Information Services at a healthcare company with 5,001-10,000 employees
Real User
Does thorough scanning of services and applications, but SNMP monitoring is not very good
Pros and Cons
  • "It is a very good APM tool. There is a lot of thorough scanning of services and applications. It has got great monitoring features."
  • "Its infra monitoring is not that good. They are mainly into the APM environment, such as network monitoring and other things. Strong end-to-end infrastructure monitoring is missing. SNMP monitoring is currently not very good in this solution."

What is our primary use case?

We are using it for user monitoring and service monitoring.

What is most valuable?

It is a very good APM tool. There is a lot of thorough scanning of services and applications. It has got great monitoring features.

PurePath helps us to identify minor glitches in applications and services. It collects everything from user sessions.
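The session data mentioned here is also queryable. As a minimal sketch, assuming an API token with the appropriate permission, the User Sessions Query Language (USQL) endpoint can pull aggregates such as the slowest user actions; the environment URL and token below are placeholders.

    # Hedged sketch: query collected user-session data via Dynatrace USQL.
    import os
    import requests

    DT_ENV = "https://YOUR-ENV.live.dynatrace.com"
    DT_TOKEN = os.environ["DT_API_TOKEN"]

    def slowest_user_actions(limit: int = 10) -> list:
        query = (
            "SELECT useraction.name, AVG(useraction.duration) AS avgDuration "
            "FROM usersession GROUP BY useraction.name "
            f"ORDER BY avgDuration DESC LIMIT {limit}"
        )
        resp = requests.get(
            f"{DT_ENV}/api/v1/userSessionQueryLanguage/table",
            params={"query": query},
            headers={"Authorization": f"Api-Token {DT_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("values", [])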

What needs improvement?

Its infrastructure monitoring, such as network monitoring, is not that good; Dynatrace is mainly focused on the APM environment. Strong end-to-end infrastructure monitoring is missing, and SNMP monitoring is currently not very good in this solution.

It is a bit expensive. It could be cheaper.

For how long have I used the solution?

I've been using this solution for the last one and a half years.

What do I think about the stability of the solution?

Its stability is good. It does not break easily.

What do I think about the scalability of the solution?

We have not scaled it yet. It is good enough to handle the bulk load. We never faced any performance issues with the tool. We have more than 150 users, and we never saw any issues with it.

How are customer service and support?

Their support is very good.

How was the initial setup?

It was straightforward. The full deployment probably took a week.

What about the implementation team?

We have vendor support, and we collaborated with our vendor for its implementation.

What's my experience with pricing, setup cost, and licensing?

Its license is a bit expensive. We renew it yearly.

What other advice do I have?

I would rate it a seven out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
reviewer1045242 - PeerSpot reviewer
Project Lead Engineer at a construction company with 5,001-10,000 employees
Real User
Useful application monitoring, helpful technical support, effective alerts
Pros and Cons
  • "The most useful features are cloud monitoring, application monitoring, and alert notifications."
  • "The solution could improve by allowing more dashboards customization. This would allow us to monitor the metric better."

What is our primary use case?

I am using Dynatrace for cloud monitoring, application monitoring, and alert notifications.

What is most valuable?

The most useful features are cloud monitoring, application monitoring, and alert notifications.

What needs improvement?

The solution could improve by allowing more dashboard customization. This would allow us to monitor the metrics better.
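Until richer customization arrives in the UI, one partial workaround is that dashboards can be managed as JSON through the configuration API, which allows tiles and layouts the editor may not expose. A minimal sketch, assuming a token with configuration-write permission; the tile names and layout values are illustrative.

    # Hedged sketch: create a dashboard programmatically as JSON.
    import os
    import requests

    DT_ENV = "https://YOUR-ENV.live.dynatrace.com"
    DT_TOKEN = os.environ["DT_API_TOKEN"]

    def create_dashboard(name: str) -> str:
        payload = {
            "dashboardMetadata": {"name": name, "shared": False},
            "tiles": [{
                "name": "Notes",  # hypothetical markdown tile
                "tileType": "MARKDOWN",
                "configured": True,
                "bounds": {"top": 0, "left": 0, "width": 304, "height": 152},
                "markdown": "## Alert runbook links go here",
            }],
        }
        resp = requests.post(
            f"{DT_ENV}/api/config/v1/dashboards",
            json=payload,
            headers={"Authorization": f"Api-Token {DT_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["id"]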

For how long have I used the solution?

I have used Dynatrace within the past 12 months.

What do I think about the stability of the solution?

The solution is stable.

What do I think about the scalability of the solution?

Dynatrace is scalable, but there is a cost involved. If your use case requires scalability, it is easy to do.

How are customer service and support?

Technical support is always available to assist. They are very good.

Which solution did I use previously and why did I switch?

We use Dynatrace in parallel with New Relic.

Our company is large and we use a lot of applications, so we have different tools for different kinds of use cases. Based on cost, I will not always use Dynatrace; if Dynatrace is not suitable, cost-effective, or efficient for a given use case, I will use something else.

How was the initial setup?

The implementation requires a few hours to establish connectivity. New Relic is simpler to set up than Dynatrace.

What about the implementation team?

For the deployment and maintenance, Dynatrace requires one or two people.

What's my experience with pricing, setup cost, and licensing?

Dynatrace is very good and provides a lot of information; it plays a positive role in keeping your application up to date in the market. If you only want to monitor certain applications, cloud monitoring would be cheaper, but the price benefit depends on the use case.

What other advice do I have?

I rate Dynatrace an eight out of ten.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
reviewer988488 - PeerSpot reviewer
Managing Director at a computer software company with 501-1,000 employees
Real User
Reduced our offline time and gives great ROI
Pros and Cons
  • "Dynatrace has reduced our total headcount in operations and the mean time to detect and resolve problems. As a result, those challenging offline times are much shorter, if not non-existent, because of this solution."
  • "An area for improvement would be security. In the next release, I'd like to see more network-centric capabilities - Dynatrace is good at the network level, but I have to leverage other network solutions and integrate with them, but a holistic approach including the network as a one-stop-shop would be great."

What is our primary use case?

My primary use cases of this solution are to understand how users are interacting with and experiencing applications and to quickly identify and fix problems.

How has it helped my organization?

Dynatrace has reduced our total headcount in operations and the mean time to detect and resolve problems. As a result, those challenging offline times are much shorter, if not non-existent, because of this solution.

What is most valuable?

The most valuable features are Session Replay, which allows for full playback of a user's experience; the AI engine, Davis, which does problem identification; and automatic mapping, which gives a visual representation of how applications interact, host-to-host or process-to-process.
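The topology behind that automatic mapping is also exposed through the API. As a hedged sketch, the v2 entities endpoint can list the monitored hosts the topology model knows about; the environment URL and token are placeholders.

    # Hedged sketch: list monitored hosts from the topology model.
    import os
    import requests

    DT_ENV = "https://YOUR-ENV.live.dynatrace.com"
    DT_TOKEN = os.environ["DT_API_TOKEN"]

    def list_hosts() -> list:
        resp = requests.get(
            f"{DT_ENV}/api/v2/entities",
            params={"entitySelector": 'type("HOST")'},
            headers={"Authorization": f"Api-Token {DT_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        return [e["displayName"] for e in resp.json().get("entities", [])]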

What needs improvement?

An area for improvement would be security. In the next release, I'd like to see more network-centric capabilities. Dynatrace is good at the network level, but I have to leverage other network solutions and integrate with them; a holistic, one-stop-shop approach that includes the network would be great.

What do I think about the stability of the solution?

Dynatrace's stability is solid - it performs updates very often, so it's always the latest and greatest in a good way.

What do I think about the scalability of the solution?

Dynatrace has phenomenal scalability capabilities.

How are customer service and support?

The technical support is phenomenal - they have a call program called Dynatrace ONE, which is like a customer success program on steroids. 

How was the initial setup?

The initial setup was extremely straightforward and fast. Deployment was also super fast: typically just a few hours at most, with the right tuning.

What was our ROI?

When used appropriately and applied to the applications that are meaningful for businesses, the ROI is extremely high.

What's my experience with pricing, setup cost, and licensing?

There's a perception that Dynatrace's value could be questioned, but that comes down to a lack of due diligence on the front end. When done right, this product always delivers a good ROI and a reasonable total cost of ownership.

What other advice do I have?

Dynatrace keeps good infrastructure detail and is really good at the application level. I would give this solution a score of ten out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Consultant at a tech services company with 1,001-5,000 employees
MSP
Highlights are AI and OneAgent features
Pros and Cons
  • "A feature that's one of the highlights of Dynatrace is the AI. The second most valuable feature is OneAgent. Between infrastructures, applications, operating systems, you can deploy with just a single agent and can practically install and forget about it."
  • "Dynatrace could be improved by having a fully functional applications and infrastructure monitoring feature. Their existing stack, which is SNMP-based, does not have full infrastructure monitoring, whereas if we compare it with other solutions like New Relic or Datadog, they have moved into infrastructure monitoring. The second improvement I would suggest is in regards to the cost. So far, Dynatrace is the most expensive APM that we sell, even compared to New Relic. I think they can improve a little bit in terms of the license pricing."

What is our primary use case?

The primary use case of Dynatrace is root cause analysis; it is used for finding issues when an application is having trouble.

I'm an integrator, so we have deployed it for customers both on-premise as well as on the cloud. 

What is most valuable?

A feature that's one of the highlights of Dynatrace is the AI. The second most valuable feature is OneAgent. Across infrastructure, applications, and operating systems, you can deploy with just a single agent and practically install it and forget about it.

What needs improvement?

Dynatrace could be improved with a fully functional application and infrastructure monitoring feature. Its existing infrastructure monitoring, which is SNMP-based, does not provide full infrastructure monitoring, whereas other solutions like New Relic and Datadog have moved into infrastructure monitoring.

The second improvement I would suggest is in regard to cost. So far, Dynatrace is the most expensive APM that we sell, even compared to New Relic. I think they can improve a little bit in terms of license pricing.

For how long have I used the solution?

We have been running Dynatrace for three years now. 

What do I think about the stability of the solution?

This solution is stable. 

What do I think about the scalability of the solution?

This solution is scalable. 

How are customer service and support?

The tech support is great. Besides calls, they also have an online presence where we can chat with them directly. If I'm not mistaken, that may be for premier customers, since their support is tiered.

Under the other support mechanism, for regular customers, you submit questions through a portal. I think the response time there is between four and six hours. They could improve that response time, given that premier customers get a fifteen-minute to one-hour response time.

How was the initial setup?

Dynatrace is quite easy to install. For a SaaS deployment, the customer can practically use it within the same day. If it's on-premise, it takes around three days, depending on the complexity.

We have five customers using Dynatrace, and the team size for deployment depends on the complexity. For one customer with a project that was big in scope, we had as many as six to seven people on the deployment. It ranges: for a customer with a hundred servers, the deployment could be done from start to finish within a month by two people.

What about the implementation team?

We implement this solution for customers. 

What's my experience with pricing, setup cost, and licensing?

Dynatrace is the most expensive APM that we sell, compared to competitors' products. The license pricing could be improved. My customers pay for licensing yearly. 

What other advice do I have?

I rate Dynatrace a nine out of ten. I would definitely recommend it, especially if the customer has a big budget. An enterprise company should purchase Dynatrace even over other APM solutions like New Relic.

Disclosure: My company has a business relationship with this vendor other than being a customer: Integrator
PeerSpot user