
Read reviews of New Relic APM alternatives and competitors

User at a financial services firm with 10,001+ employees
Real User
Top 20
Helps us resolve incidents much faster, on both the front-end and the server-side
Pros and Cons
  • "Dynatrace is a single platform. It has all these different tools but they are actually all baked into the OneAgent technology. Within that OneAgent... you have the different tool sets. You have thread analysis, memory dumps, Java analysis, the database statements, and so on. It's all included in this OneAgent. So the management is actually quite easy."
  • "The solution's ability to assess the severity of anomalies based on the actual impact to users and business KPIs is great. It's exactly what we need. The severity impact is based on the users, the availability, and the impact it has on your business."
  • "The solution's ability to assess the severity of anomalies based on the actual impact to users and business KPIs is great. In my opinion, it could be extended even more. I would like it to be more configurable for the end-user. It would be nice to have more business rules applicable to the severity. It's already very good as it is now. It is based on the impact on your front-end users. But it would be nice if we could configure it a bit more."
  • "Another area for improvement is that I would like the alerting to be set up a little bit more easily. Currently, it takes a lot of work to add alerting, especially if you have a large environment, and I consider our environment to be quite large. The alerting takes a lot of administration."

What is our primary use case?

We use it to track user-experience data. It's all banking applications. For example, when you open your mobile app and tap to view your account, that click is measured in Dynatrace. It's stored, and we check the timing at each moment.

We also track the timing differences between our different releases. When we have a new version release, we check within our test environment to see what the impact of each change is before it goes to production. And we follow that up in production as well.

In addition, we track the availability of all our different systems.

And root cause analysis is also one of the main business cases.

So we have three main use cases:

  1. To follow up what's going on in production
  2. Proactively reacting to possible problems which could happen
  3. Getting insights into all our systems and seeing the correlation between these different systems and improving, in that way, our services to our end users.

We use the on-prem solution, but it's the same as the SaaS solution they offer. They have Dynatrace SaaS and Dynatrace Managed, and ours is the Managed. Currently we're on version 181, but that changes every month.

How has it helped my organization?

The dynamic microservices for Kubernetes are really value-added, because a lot of monitoring functionality is already built into Kubernetes and Docker. There are also free tools like Prometheus which can display that. That's very good for technical people; for the owner of the pod itself, that's enough. But those things don't provide any business value. If you want business value from it, you need to extract it to a higher level, and that's where you need the correlations. You need to correlate what happens between all these different services. What is the flow between the services? How are they interconnected? That's where Dynatrace gives added value. And you can combine this data coming from Kubernetes and include it in Dynatrace, meaning you have a single pane of glass where you can see everything. You can see the technical things, but you have the bigger business value on top of it as well.

Before Dynatrace, we were testing just by trying out the application ourselves and getting a feeling for the performance. That's how it very often would go. You would start up an application and it was judged by the feeling of the person who was using it at that moment in time. That, of course, is not representative of what the actual end-user feeling would be. We were totally blind. We actually need this to be able to be closer to the customer. To really care about the customer, you need to know what he is doing. 

Also, incidents are resolved much faster by using Dynatrace. And that's for front-end, because we actually know what is going on. But it's also for server-side incidents where we can see the correlation. Using this solution our MTTR has been lowered by 25 percent. It's pinpointing the actual errors or the actual database calls, so it goes faster. But, of course, you still have to do it. It still needs to be implemented. It doesn't do the implementation work for you.

Root cause detection, how the infrastructure components interact with each other, helps. We know what is going wrong and where to pinpoint it. Before, we needed to fill a room with all the experts. The back-end expert would say, "I'm not seeing anything on the back-end." And the network expert would say, "I'm not seeing anything on the network." When you see the interaction between the different aspects, it's immediately clear you have to search in your Java development, or you have to search in your database, because all the other ones don't have any impact on the performance. You see it in Dynatrace because all the numbers are there. It really helps with that. It also helps to pinpoint which teams should work on the solution. In addition to the fact that it's speeding up the process of finding your root cause, it's also lowering the number of people who need to pay attention to the problem. It's just a single team that we need to work on it. All the rest can go home.

It has decreased our mean time to identification by 90 percent, meaning it only takes us one-tenth of the time it used to, because it immediately pinpoints where the problem is.

Dynatrace also helps DevOps to focus on continuous delivery and to shift quality issues to pre-production because we are already seeing things in pre-production. We have Dynatrace in our test environment, so we have a lot of extra information there, and DevOps teams can actually work on that information.

Finally, in terms of uptime, it's signaling whenever something is down and you can react to the fact that it is down a lot faster. That improves the uptime. But the tool itself, of course, doesn't do anything for your uptime. It just signals the fact that it's down faster so you can react to it.

What is most valuable?

The most valuable aspect is the fact that Dynatrace is a correlation tool across all those different layers. It's the correlation from the front-end through to the database. You can see your individual traces.

One of the aspects that follows from that is the root cause analysis. Because we have these correlations, we can say, "Hey it's going slow on the server side because a database is having connection issues," for example. So the root cause is important, but it's actually based on the correlation between the different layers in your system.

Dynatrace is a single platform. It has all these different tools, but they are actually all baked into the OneAgent technology. Within that OneAgent — which is growing quite large, but that's something else — you have the different tool sets. You have thread analysis, memory dumps, Java analysis, the database statements, and so on. It's all included in this OneAgent, so the management is actually quite easy. You have this one tool, and there are server-side and agent-side mechanisms that semi-automatically update it. We don't have to do that much management on it. Even for the quite large environment that we have, the management itself is quite limited. It doesn't take a lot of time. It's quite easy.

The solution's ability to assess the severity of anomalies based on the actual impact to users and business KPIs is great. It's exactly what we need. The severity impact is based on the users, the availability, and the impact it has on your business.

We also use the real-user monitoring and we are using the synthetic monitoring in a limited way, for the moment. We are not using session replay. I would like that, but it's still being considered by councils within the company as to whether we are able to use it.

We are using synthetic monitoring to measure the availability of one of our services. It's a very important service and, if it is down, we want the business to be notified immediately. So we have set up a synthetic monitor which measures the availability of that single service each minute. Whenever there is a problem, an incident is immediately created and forwarded to the correct person. This synthetic monitoring is just an availability check over HTTP: it's actually a browser that calls up a page, and we do some checks on that page to be sure it is available. Next to the availability, which the synthetic monitoring gives us, we also measure the performance of this single page, because it's very important for us that this page is fast enough. If the performance of this single page degrades, an incident is also created for the same person, and he can respond to it immediately.
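The check described above can be sketched in a few lines. This is only a minimal illustration of the idea (fetch a page, verify it rendered, time it against a threshold); the real check runs inside Dynatrace's synthetic-monitoring engine, and the URL, expected text, and threshold here are placeholders, not the reviewer's actual values.

```python
import time
import urllib.request

# Placeholder values for illustration only.
URL = "https://example.com/"       # the monitored page
EXPECTED_TEXT = "Example Domain"   # content check proving the page rendered
MAX_SECONDS = 5.0                  # performance threshold

def check_page(url=URL, expected=EXPECTED_TEXT, max_seconds=MAX_SECONDS):
    """Run one availability/performance check.

    Returns (available, fast_enough, elapsed_seconds).
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=max_seconds) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            available = resp.status == 200 and expected in body
    except OSError:  # covers URLError, timeouts, connection refused
        available = False
    elapsed = time.monotonic() - start
    # In the setup described above, a failed check would immediately
    # open an incident and notify the responsible person.
    return available, elapsed <= max_seconds, elapsed
```

A scheduler (or the synthetic engine itself) would run this once per minute and raise an incident whenever either the availability check or the performance check fails.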

Real-user monitoring is a big part of what we are doing because we are focusing on the actual user experience. I just came from a meeting, 15 minutes ago, where we discussed this issue: a slowdown reported by the users. We didn't see anything on the server side but users are still complaining. We need to see what the users are actually doing. You can do that in debug tools, like Chrome Debugger, to see what your network traffic is and what your page is doing. But you cannot do that in production with your end-users. You cannot request that your end-users open their debug tools and tell you what's going on. That's what Dynatrace offers: insight like the debug tools for your end-user. That's also exactly what we need.

Most of the problems we can respond to immediately are server problems, but most of the problems that occur are front-end problems, currently. More and more, performance issues are located on the machine of the end-user, so you need to have insight into that. A company of our size is obliged to have insight into how its actual users are doing. Otherwise, we're just blind to our user experience.

Dynatrace also provides a really nice representation of your infrastructure. You have all your servers, you have all your services, and you know how they communicate with each other.

What needs improvement?

While it gives you a good view of all the services that are instrumented by Dynatrace — which is good, of course, and that's what it can do — in our case, our infrastructure is a lot bigger than the part that is instrumented by Dynatrace. So we only see a small part of the infrastructure. There are a number of components which are not instrumentable, like the F5 firewalls, switches, etc. So it gives a good overview of your server infrastructure; that's great, and we need that. But it's lacking a bit on network segmentation and switches, so it's not a representation of your entire infrastructure. Not every component is there.

The solution's ability to assess the severity of anomalies based on the actual impact to users and business KPIs is great. In my opinion, it could be extended even more. I would like it to be more configurable for the end-user. It would be nice to have more business rules applicable to the severity. It's already very good as it is now. It is based on the impact on your front-end users. But it would be nice if we could configure it a bit more.

Another area for improvement is that I would like the alerting to be set up a little bit more easily. Currently, it takes a lot of work to add alerting, especially if you have a large environment, and I consider our environment to be quite large. The alerting takes a lot of administration. It could be a lot easier. It would not be that complicated to build in, but it would take some time.

I would also like the visual representation of the graphs to be improved. We have control of the actual measures which are in the graphs, but we are not able to control how the axes are represented or the thresholds are represented. I do know that they are working on that.

For how long have I used the solution?

I have been using the Dynatrace AppMon tool for six years and we changed to the new Dynatrace tool almost three years ago.

What do I think about the stability of the solution?

We haven't had any issues with the stability of Dynatrace, and it's been running for a long time. We use the Managed environment, so it's an on-prem service, but it's quite stable. We are doing the updates pretty regularly. They come in every month but we are doing them every two or three months. First we do them in the test phase and then in the production phase. But we have not experienced any downtime ever.

What do I think about the scalability of the solution?

For us, Dynatrace is scalable and we haven't seen any issues with that. We did need to install a larger server, but that's because we have a managed environment. You don't have that problem if you go with the SaaS environment. We don't see any negative impact on the scale of our products, and we are already quite large. It's quite scalable.

In terms of the cloud-native environments we have scaled Dynatrace to, we are using Dynatrace on an OpenShift platform, which is a Docker Kubernetes implementation from Red Hat. We have Azure for our CRM system, which Dynatrace monitors, but we are not measuring the individual pods in there as it is not a PaaS; it's a SaaS solution of course.

As for the users of the solution, we make a distinction between the users who are deploying stuff and those who are managing the Dynatrace stuff. The latter would be my team, the APM team, and we are four people. The four people are installing the Dynatrace agents, making sure the servers are alright, and making sure the management of the Dynatrace system itself is okay.

The users of the tool are the users of the different business cases. That includes development and business. There are about 500 individual users making use of the different dashboards and abilities within Dynatrace. But we see that number of users, 500, as a bit small. We want to extend that to over 1,000 in the near future. But that will take some advertising inside the company.

How are customer service and technical support?

I use Dynatrace technical support on a daily basis. They have a live chat within the tool and that comes for free with the tool itself. All 500 of our users are able to use this chat functionality. I'm using it very frequently, especially when I need to find out where features or functionalities are located within the tool. They can immediately help you with first-line support for the easy questions and that saves you a lot of time. You just chat and say, "Hey, I want to see where this setting can be activated," and they say, "Just click this button and you will be there." 

For the more complex questions, you start with tickets and they will solve them. That takes a little bit longer, depending on how complex your question is. 

But that first-line support is really a very easy way to interact with these people, and you get more out of the tool, faster.

Which solution did I use previously and why did I switch?

We purchased the Dynatrace product because we had some issues with our direct channels, our customer-facing applications. There were complaints from the customer side and we couldn't find the solution.

There were also a number of our most important applications that needed more monitoring. We had a lot of monitoring capabilities on the server side and on the database side, but the correlation between all these monitoring tools was not that easy. When they came up with a problem they would say, "Hey, it's not the mainframe, it's not the database, it's not the network." But what was it? That was still hard to find out. And we were missing some monitoring on the front-end. The user experience monitoring was lacking. We investigated a number of products and Dynatrace came out as the best.

How was the initial setup?

We kind of grew into Dynatrace. Our initial scope was quite small, so it was not that complex. Currently, our scope is a lot broader, but it is not complex for us because we have been working with the tool for such a long time. Overall, it's quite straightforward. If you're starting with this product from scratch and you have to find out everything, it can take some time to learn the product. But it's quite straightforward.

We started with the AppMon tool, which was the predecessor to the current tool. Implementing that went quite fast because it was a very small scope. When we changed to the Dynatrace Managed it took us half a year. And that's not including the contract negotiations. That was for the actual implementation: Finding out all business cases and all the use cases that we had, transforming them into the new tool, and launching it live for a big part of our company. That took half a year.

What about the implementation team?

We hired some external experts from a company in Belgium, which is called Realdolmen. They really helped us in the implementation. They had experience in implementing Dynatrace for other companies already, so that really helped. And I would advise that approach. If you're doing it all by yourself, you are focusing on what your problems are, while if you are adding an external person to it, who is also an expert in the product itself, he will give you insights into how the product can benefit you in ways you couldn't have imagined.

What was our ROI?

The issue of whether Dynatrace has saved us money through consolidation of tools is something we are working on. There are a number of things that we are now replacing with capabilities already present in Dynatrace. If you currently have a lot of different tools, it will save you money. But Dynatrace is not the cheapest tool. Saving money should not be your first concern if you buy Dynatrace.

It depends on your business case, but as soon as you are at a reasonable size and you have different channels to connect within your company — mobile and web and so on — you need to have a view into your infrastructure and that's where Dynatrace provides real benefits. It's not for a simple company. It's not for the bakery store around the corner. But as soon as you hit a reasonable size, it gives enough added value and it's hard to imagine not having it or something comparable.

"Reasonable size" depends a bit on your industry. But it is connected with the number of customers you have. We have about 25,000 concurrent customers, at a given moment in time. As soon as you have more than 1,000 concurrent customers, you need this tool to have enough analysis power. It gives you power for tracking the individual user and it gives you the power to aggregate all the data, to see an overview of how your users are doing. This combination really gives you a lot of benefits.

What's my experience with pricing, setup cost, and licensing?

It is quite costly. Dynatrace was the most expensive, compared to the other products we looked at. But it was also a lot better. If you want value for your money, Dynatrace is the way to go. 

Which other solutions did I evaluate?

In my opinion, the product is extremely good compared to the alternatives. We compared it to AppDynamics and New Relic and saw that Dynatrace is actually the best product there is. If you are looking for the best, Dynatrace will be your product.

What other advice do I have?

The biggest lesson that I have learned from Dynatrace is that application performance monitoring is very complex, but the easiest part of it is the technical aspect. The more complex thing is all the internal company politics around it. We see a lot of data, and if you target some people and say, "Hey, your database is going slowly," they will respond very defensively. If they have their own monitoring tools, they can say, "Oh no, my database is going very fast. See, my screen is green." But we have the insights. It's all data, and gathering the data is the technical aspect. That's easy. But convincing people, and getting people to agree on what the data plainly shows, is far more complex than the technical aspects.

The way to overcome that is talking. Communication is key.

I'm a little bit skeptical about the self-healing. I have heard a lot about it. I have gone through some Dynatrace instances where they have this self-healing prophecy. I think it's difficult to do self-healing. We are not using it in our company. There is a limited range of problems that you can address with it. It's only if you definitely know that this solution will work for this problem. But problems are always different, every time. And if you have specific knowledge that something will work if a particular problem arises, most of the time you can just avoid having the problem. So I'm a little bit skeptical. We are also not using it because we have a lot of governance on our production environment. We cannot immediately change something in production.

We are using dynamic microservices within a Kubernetes environment, but the self-healing is a little bit baked into these microservices. It's a Docker Kubernetes thing, where you have control over how many containers or pods you want to spin up. So you don't need an extra self-healing tool on top of that.

In terms of integrating Dynatrace with our CI/CD and ITSM tools, we are working on both of those directions, but we are not there yet. We have an integration with our ITSM tool in the sense that we are registering incidents from Dynatrace in our ServiceNow. But we are not monitoring it as a component management system.

We are not doing as much as I would want to with Quality Gates. That can be improved in our company. Dynatrace could help with that, but I would focus on something like Keptn, which integrates with Dynatrace, to provide that additional functionality. Keptn would be more suitable for that than the Dynatrace tool itself, but they are closely linked together. For us, that aspect is a work-in-progress.

I would rate Dynatrace a nine out of 10, because it has really added value to my daily business and what I have to do in performance analysis. It can be improved, and I hope it will be improved and updates will be coming. But it's still a very good tool and it's better than other tools that I have seen.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
RangaNathan
Technical Consultant at a manufacturing company with 5,001-10,000 employees
Consultant
The best full stack observability compared to any other tool
Pros and Cons
  • "For full stack observability, Elastic is the best tool compared with any other tool."
  • "Elastic APM's visualization is not that great compared to other tools. Its number of metrics is very low."

What is our primary use case?

Elastic APM is a kind of log aggregation tool and we're using it for that purpose. 

What is most valuable?

Elastic APM is very new, so we haven't explored much of it, but it's quite interesting. It comes as a free offering included in the same license, so we are looking to explore more. It is still not as mature as other tools like Kibana, AppDynamics, or New Relic products related to application performance monitoring. Elastic APM is still evolving, but it's quite interesting to be able to get all the similar options and features in Elastic APM.

What needs improvement?

In terms of what could be improved, Elastic APM's visualization is not that great compared to other tools. Its number of metrics is very low. Their JVM metrics are quite limited: mainly CPU and memory, plus thread usage on top of that. They're not giving much on application performance metrics. In that respect, they have to improve a little bit. Even compared with other tools such as New Relic, which also doesn't give many insights, it would be good to get internal calls or to see backend calls. We are not getting this kind of metric.

On the other hand, if you go to the trace view, it gives you a good view of backend calls. That backend-call view captures everything, and we need some kind of control over it, which it does not have. For example, if I don't want some of the statements selected, there should be controls for that. Moreover, you need to do all these things manually. Nowadays, imagine any product that opted to do configuration manually; that would be really disastrous. We don't want to do that manually. This needs to be done either by API or by some kind of automated procedure. If you want to install the APM agent, because it is manual, we would need to tune it so that APIs are available on the APM side. That's one drawback.

Additionally, the synthetic monitoring and real user monitoring services are not available here. Whereas in New Relic the user does get such services.

The third drawback I see is on the access-control side. For now, only one role is defined for this APM. So if I want to restrict a user to a domain (for example, if your organization has domain A, domain B, and domain C, but you want to give access only to a specific domain or a specific application), I am not sure how to do that here.

Both synthetic monitoring and process monitoring should be improved. For JVM and Java process monitoring, and any process monitoring, they have to have more metrics and a breakdown of TCP/IP; the tools don't provide many metrics there. You get everything, but you fail to visualize it. New Relic only focuses on transactions, and Elastic APM also focuses on similar stuff, but I am still looking for other options like thread usage, backend calls, front-end calls, or how many front-end and backend calls there are. This kind of metric is definitely required.

We don't have much control. For example, some backend calls trigger thousands of prepared statements, update statements, or select statements, and we don't have any control over it. If I only want select statements, not update statements, this kind of control should be there and properly supported. The property file is very big and it is still manual, so if you want to control agent properties you need UI control or API control. Nowadays, the world is looking to the API side so they'll be able to develop more smartly. They are looking for these kinds of options to enrich their dashboard creation and management.
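The kind of control being asked for, keeping select statements while dropping update statements from what gets reported, could look something like the generic filter below. This is a hypothetical sketch of the concept only; it is not Elastic APM's actual agent API, and the span structure is invented for illustration.

```python
# Hypothetical filter illustrating the desired control: keep only DB spans
# whose statement type is on an allow-list before they are reported.
# NOT Elastic APM's real agent API; the span dicts are invented.
ALLOWED_PREFIXES = ("select",)  # statement types we want to keep

def filter_db_spans(spans):
    """Return only the spans whose SQL statement starts with an allowed prefix."""
    kept = []
    for span in spans:
        stmt = span.get("statement", "").lstrip().lower()
        if stmt.startswith(ALLOWED_PREFIXES):
            kept.append(span)
    return kept

spans = [
    {"statement": "SELECT * FROM accounts WHERE id = ?"},
    {"statement": "UPDATE accounts SET balance = ? WHERE id = ?"},
    {"statement": "select name from users"},
]
# Only the two SELECT statements survive the filter.
filtered = filter_db_spans(spans)
```

Exposing an allow-list like this through an API, rather than a large manual property file, is essentially the configurability the review is asking for.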

For how long have I used the solution?

I'm new to Elastic APM, but I do have very good APM knowledge, since I have been using APM tools for almost 10 years and Elastic APM for just two years. I see that Elastic APM is still evolving.

How are customer service and technical support?

Elastic APM's technical support is pretty good and we have a platinum license for log aggregation. They respond very quickly and they follow a very good strategy. They have one dedicated resource especially for us. I'm not sure if that is common for other customers, but they assigned a very dedicated resource. So for any technical issue a dedicated resource will respond. Then, if that resource is busy or not available someone will attend that call or respond with support. In that way, Elastic support fully understands your environment.

Otherwise, if you go with the global support model, they have to understand your environment first and keep asking the same questions again and again: how many clusters do you have, what nodes do you have, these kinds of questions. Then you need to supply that diagnostic information. This is a challenge. If they have a dedicated support resource, they usually don't ask these questions, because they understand your environment very well, having worked with you on previous cases. In that sense they provide very good support and answer questions immediately.

They provide immediate support. Usually they get back to you the same day or the next day. I think it's pretty good compared to any other support. It was even very good compared to New Relic.

What other advice do I have?

There are two advantages to Elastic APM. It is open source, and if somebody wants to try it out in their organization, it's free to use. Also, it has full stack observability. For full stack observability, Elastic is the best tool compared with any other tool like New Relic or AppDynamics or Dynatrace. I'm not sure about Dynatrace, since I never worked with it, but I have worked with AppDynamics and New Relic. However, on the log aggregation side, there is still a lot to be implemented here.

I'd like greater flexibility. That means we would get all the system logs, all the cloud logs, all kinds of logs aggregated in a single location. On top of that, if they could have better metrics for handling that data together, it would give a greater advantage for observability. The observability platform is pretty good because you already have the log data and information like that. If you just add APM data and visualize it, you get much-needed information: how are you going to visualize it, and how are you going to identify the issues?

For this purpose, Elastic is best. If you are really looking for an observability platform, Elastic provides both of these options, APM plus log aggregation. But they still have to improve, or provide APIs for, synthetic monitoring, internet monitoring, etc. If I think only about synthetic monitoring, you can't compare Elastic with New Relic today; New Relic is much better.

These are the improvements they have to look at. They support similar synthetic-monitoring functionality, so it's not a hundred percent APM friendly, but if you look at their observability platform, their full stack observability together with their log aggregation, Elastic APM gives a greater advantage.

On a scale of one to ten, I would rate Elastic APM an eight out of 10.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
BrianHeisler
Principal Enterprise Systems Engineer at a healthcare company with 10,001+ employees
Real User
Top 20
An out-of-the-box solution that allows you to quickly build dashboards
Pros and Cons
  • "I like that you can build out a dashboard pretty quickly. There are some things that come out of the box that you don't really need to do, which is great because they're default settings."
  • "I think better access to their engineers when we have a problem could be better."

What is our primary use case?

We deploy agents on-premise to collect data on on-premise VM instances. We don't use Datadog in our cloud network, though we do have some cloud apps that we have it on, and we also have containers. Datadog itself is hosted at their headquarters; the main software runs on their own cloud.

We're building out the process now and learning to use it better. We plan to use Datadog for root cause analysis relating to any kinds of issues we have with software: applications going down, latency issues, connection issues, etc. Eventually, we're going to use Datadog for application performance monitoring and management, to be proactive around thresholds, alerts, bottlenecks, etc.

Our developers and QA teams use this solution. They use it to analyze network traffic, load, CPU usage, and then tracing, NPM, and API calls for their applications. There are roughly 100 users right now. Maybe there are 200 total, but on a given day, maybe 13 people use this solution.

How has it helped my organization?

It hasn't improved the way our organization functions yet, because there's a lot of red tape to cut through with cultural challenges and changes. I don't think it's changed the way we do things yet, but I think it will — absolutely it will. It's just going to take some time.

What is most valuable?

I like that you can build out a dashboard pretty quickly. There are some things that come out of the box that you don't really need to do, which is great because they're default settings. Once you install the agent on the machine, they pick up a lot of metrics for you that are going to be 70 or more percent of what you need. Out of the box, it's pretty good.

For how long have I used the solution?

I have been using Datadog every day since September 2020. I also used it at a previous company that I worked for.

What do I think about the stability of the solution?

Stability-wise, it's great.

What do I think about the scalability of the solution?

It seems like it'll scale well. We're automating it with Ansible scripts and ServiceNow so that when we build a new virtual machine, it will automatically install Datadog on that box.
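The VM-build automation the reviewer describes could be sketched as an Ansible play. This is a minimal sketch under assumptions: the community `datadog.datadog` Galaxy role, the `new_vms` inventory group, and the vaulted API key variable are illustrative choices, not details from the review.

```yaml
# Minimal sketch: auto-install the Datadog Agent on freshly built VMs.
# Assumes the community role has been fetched first with:
#   ansible-galaxy install datadog.datadog
- hosts: new_vms          # hypothetical inventory group for new builds
  become: true
  roles:
    - role: datadog.datadog
  vars:
    datadog_api_key: "{{ vault_datadog_api_key }}"  # keep the key in Ansible Vault
    datadog_config:
      tags:
        - "provisioned_by:servicenow"               # illustrative tag
```

In a setup like this one, a ServiceNow provisioning workflow would presumably trigger the play (for example via AWX) once the new virtual machine is built.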

How are customer service and technical support?

The tool itself is pretty good and the customer service is good, but they're a growing company. Better access to their engineers when we have a problem would help. For example, if I ask a question like, "Hey, how do I install it on this type of component?" we'll try to get an engineer on the phone to step us through everything, but that's a challenge because they're so busy.

Technically, everything's fine. We don't need any support; everything I need to do, I can do right out of the box. But as far as access to their engineers' knowledge of how to configure it on the systems we have, that's maybe a six, because they're just not as available as I would have hoped.

Which solution did I use previously and why did I switch?

We were using AppDynamics. Technically, we still have it in-house because it's tightly wound into certain systems, but we'll probably pull that off slowly over time. The reason we added Datadog and eventually we'll fully switch over is due to cost. It's more cost-friendly to do it with Datadog.

Which other solutions did I evaluate?

Yes, we looked at Dynatrace, AppDynamics, and New Relic. Personally, I wouldn't have chosen Datadog for the POC if it were up to me. Datadog was a leader, but New Relic was looking really good. In the end, the people above me decided to go with Datadog — it's a big company, so they wanted to move fast, which makes sense.

What other advice do I have?

If you're interested in using Datadog, just do your homework, as we did. We're happy so far, I think; time will tell, as we are still rolling things out. It's a very good company. It's going to be a year before we can really tell anything. If you do your homework, you'll find that if you're really concerned with cost, it's good.

There are some strengths that AppDynamics and Dynatrace have that Datadog I don't think will have down the road, but they're not things we necessarily need — they're outliers. It would be nice to have them, but we can manage without them.

Know what you want. There is no need to pay for solutions like Dynatrace or AppDynamics that are more expensive or things that are just nice to have if you don't absolutely need to have them. That's something people need to understand. You just have to make sure you understand what it is that you need out of the tool — they are all a little different, those three. I would say to anybody that's going with Datadog: you just have to be patient at the beginning. It's a very busy company right now. They're very hot in the market.

Overall, on a scale from one to ten, I would give Datadog a rating of eight. It does what we need it to do, and it seems to be pretty user-friendly in terms of setting things up.

Feature-wise, I'd give them a rating of ten out of ten. Better access to assistance from their engineers on how to configure dashboards and pull the metrics we need would bring the overall rating up a little bit. It would have to be perfect for a ten, but I would say maybe they could bring it to a nine.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Head Of Information Technology at a mining and metals company with 11-50 employees
Real User
Top 20
Great for finding bottlenecks, and offers good stability but is quite expensive
Pros and Cons
  • "The solution helps us save a lot of time on certain tasks."
  • "The cloud licensing needs to be improved. It's quite pricey."

What is our primary use case?

We primarily use the solution for effective application monitoring. 

How has it helped my organization?

It helps us to find out where the bottlenecks are. Once you know, you can go and try to fix the issue. One of the challenges when you use an ERP system is performance and the user experience. Whenever we had issues, it was an opportunity for us to find out where the problem was and try to figure it out. It's been helpful in terms of improving system response. When there's a problem, we can always go and investigate. AppDynamics gives us synopsis information, so we're able to at least find out where exactly the problem is. That's been very, very helpful.

Even though we do not have an end-user experience or database agent, we are still able to get the information, at least on the application side. Otherwise, trying to find this information through a manual process could take some time. It's a time-saving solution for us, for sure.

What is most valuable?

The dashboards of the solution are excellent. They can be customized very easily.

The stability is good.

The solution helps us save a lot of time on certain tasks.

What needs improvement?

I have not been able to really spend time on the product itself. Developers would be better placed to discuss any shortcomings. My usage is quite limited, so it would be unfair for me to comment on missing features. I don't spend enough time with the solution exploring its capabilities.

Nothing comes to mind in terms of lack of features. I haven't witnessed any aspect that I felt was lacking.

The cost is an area of concern for me. The cloud licensing needs to be improved; it's quite pricey. There are cheaper options out there, including open-source options.

For how long have I used the solution?

I've been using the solution for about four years or so. It's been a while. 

What do I think about the stability of the solution?

The stability of the solution is good. I haven't witnessed any issues that would make me worry about its capabilities. It doesn't crash or freeze and there are no bugs or glitches. The performance has been reliable.

What do I think about the scalability of the solution?

We have two users on the solution currently.

I can't speak to how scalable the solution would be as I've never tried to scale the solution myself. I have no knowledge of how easy or hard it would be to scale.

Which solution did I use previously and why did I switch?

I haven't worked on other tools personally.

How was the initial setup?

I can't speak to the implementation process. I did not help set anything up. Therefore, I don't have any experience.

What about the implementation team?

The initial setup was done by our application service provider, an ERP application service provider. They configured it, and therefore we never ran into any kind of setup issues in that respect. 

They were fine. We had a good experience with them overall.

What's my experience with pricing, setup cost, and licensing?

There are other options that are open source that wouldn't cost the company any money.

There are many other open-source tools available. When it comes to price comparison, maybe they fall into different categories. It seems to be an expensive product overall, and with cheaper options on the market, such as Datadog, companies may prefer to pay less or nothing at all.

At some point, we decided to look for an alternative. Unfortunately, our hands were full and continue to be. We have so many other projects that we don't have time for anything as time-consuming as switching to something else. If I had three months of free time, I would probably pick an open-source alternative and implement it, due to the fact that the AppDynamics cost is very, very high.

Which other solutions did I evaluate?

From time to time I do look at some other things, New Relic and some of the other things out there. However, I haven't properly evaluated anything per se. 

What other advice do I have?

We are customers and end-users.

We're always using the latest version of the solution. It's SaaS-based and therefore it is consistently updated immediately as new versions are ready for release. We don't need to manually handle the process. We use AppDynamics' own cloud. We don't use a third-party cloud.

The one area of concern for me is the cost. There are other options - including open-source options.

Overall, I'd rate the solution at a seven out of ten. I'd rate it higher if the solution's price was better.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Product Manager at a healthcare company with 10,001+ employees
Real User
Top 5 Leaderboard
Good technical support and scales well for cloud deployments but needs better UI
Pros and Cons
  • "The technical support and documentation are quite good."
  • "The UI could be better. When I look at the dashboard, for example, the information looks cluttered and unorganized."

What is our primary use case?

We primarily use the solution for monitoring, including the application performance management monitoring in order to monitor how our infrastructure is working, how the network is working, and the general monitoring of databases.

What is most valuable?

The dashboard is pretty good. I can filter it out based on the application layer, for example. 

If I'm interested in how our data is working I can just go and take a look, or if I want to look at applications or filtering, I can look at that as well. I get pretty good visibility from multiple angles. 

The technical support and documentation are quite good.

The cloud deployment model allows for pretty good scaling. 

What needs improvement?

The UI could be better. When I look at the dashboard, for example, the information looks cluttered and unorganized. It needs to work on having better visual representations on hand for the users. 

Sometimes you don't get real-time data on whether an application has gone down. I need to look for data points in the dashboard, and usually it takes some time for them to get loaded into the system. Therefore, there's a delay in seeing the information that's the most important to us. It would be ideal if we had guaranteed real-time visibility on everything.

We had a few hiccups during deployment. It didn't go as smoothly as we hoped. It's a bit complex.

The solution could be more stable.

It would be useful if there were container monitoring and monitoring for Kubernetes. Analysts are expecting this.

The solution needs to ensure it is relevant for current complex IT environments.

For how long have I used the solution?

I've been using the solution for at least the last 12 or so months. It's been a while. 

What do I think about the stability of the solution?

There is still a lot of improvement needed in terms of stability. We found that it was not able to monitor or offer some forms of self-service capabilities. It's not fully reliable.

What do I think about the scalability of the solution?

The more applications that get introduced into the organization, the faster the deployment needs to be for us. Overall, as a complete monitoring solution, if a new deployment is coming up and takes too much time to get running, it may affect the overall monitoring picture for us. On-premises, for example, it's a bit limiting. The cloud, however, is much more effective for scaling big and fast.

On my project, we have about 300 users using the solution.

I'm not sure if we have plans to increase the solution at this time. If a new application is being onboarded and it needs monitoring, it's a possibility that we might require it to scale. 

How are customer service and technical support?

The documentation and the technical support are both quite good. We're quite happy with the level of service we receive.

How was the initial setup?

We did an agent deployment that needed to collect some metrics. In retrospect, there were a couple of hiccups. I figured this out when I was trying to deploy different agents on different applications. I realized it totally depends on which application you are using; some applications take more time to deploy. There's a bit of complexity involved.

The full deployment took about two weeks or so to complete. 

What's my experience with pricing, setup cost, and licensing?

We currently pay for a yearly subscription.

Pricing is a very complex thing. New Relic and Dynatrace are just much more economical, or at least offer better value. This solution seems to charge differently for different features.

Which other solutions did I evaluate?

We did evaluate other solutions before choosing this product. We evaluated Dynatrace and New Relic.

What other advice do I have?

We are just a customer and an end-user.

I would recommend this solution, however, I would also advise that a company check their requirements and if their existing and possible future applications will be compatible. Otherwise, not everything may be monitored correctly.

I would rate the solution at a six out of ten.

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.