Buyer's Guide
Cloud Monitoring Software
November 2022

Read reviews of Datadog alternatives and competitors

Product Director at an insurance company with 10,001+ employees
Real User
Top 20
Gives us a single, integrated tool to simplify support and reduce downtime
Pros and Cons
  • "Those 400 days of hot data mean that people can look for trends and at what happened in the past. And they can not only do so from a security point of view, but even for operational use cases. In the past, our operational norm was to keep live data for only 30 days. Our users were constantly asking us for at least 90 days, and we really couldn't even do that. That's one reason that having 400 days of live data is pretty huge. As our users start to use it and adopt this system, we expect people to be able to do those long-term analytics."
  • "One major area for improvement for Devo... is to provide more capabilities around pre-built monitoring. They're working on integrations with different types of systems, but that integration needs to go beyond just onboarding to the platform. It needs to include applications, out-of-the-box, that immediately help people to start monitoring their systems. Such applications would include dashboards and alerts, and then people could customize them for their own needs so that they aren't starting from a blank slate."

What is our primary use case?

We look at this solution for both security monitoring and operational monitoring use cases. It helps us to understand any kind of security incident, typical SIEM use cases, and IT operations, including DevOps and DevSecOps use cases.

How has it helped my organization?

We had multiple teams that were managing multiple products. We had a team that was managing ELK and another team that was managing ArcSight. My team was the "data bus" that was aggregating the onboarding of people, and then sending logs through different channels. We had another team that managed the Kafka part of things. There was a little bit of a loss of ownership because there were so many different teams and players. When an issue happened, we had to figure out where the issue was happening. Was it in ELK? Was it in ArcSight? Was it in Kafka? Was it in syslog? Was it on the source? As a company, we have between 25,000 and 40,000 sources, depending on how you count them, and troubleshooting was a pretty difficult exercise. Having one integrated tool helped us by removing the multiple teams, multiple pieces of equipment, and multiple software solutions from the equation. Devo has helped a lot in simplifying the support model for our users and the sources that are onboarding.

We have certainly had fewer incidents, fewer complaints from our users, and less downtime.

Devo has definitely also saved us time. We have reduced the number of teams involved. Even though we were using open-source and vendor products, the number of teams that are involved in building and maintaining the product has been reduced, and that has saved us time for sure. Leveraging Devo's features is much better than building everything.

What is most valuable?

It provides multi-tenant, cloud-native architecture. Both of those were important aspects for us. A cloud-native solution was non-negotiable. The multi-tenant aspect was not a requirement for us, as long as it allowed us to do things the way we want to do them. We are a global company, though, and we need to be able to segregate data by segments, by use cases, and by geographical areas, for data residency and the like.

Usability-wise, Devo is much better than what we had before and is well-positioned compared to the other tools that we looked at. Obviously, it's a new UI for our group and there are some things that, upon implementing it, we found were a little bit less usable than we had thought, but they are working to improve on those things with us.

As for the 400 days of hot data, we have not yet had the system for long enough to take advantage of that. We've only had it in production for a few months. But it's certainly a useful feature to have and we plan to use machine learning, long-term trends, and analytics; all the good features that add to the SIEM functionality. If it weren't for the 400 days of data, we would have had to store that data, and in some cases for even longer than 400 days. As a financial institution, we are usually bound by regulatory requirements. Sometimes it's a year's worth of data. Sometimes it's three years or seven years, depending on the kind of data. So having 400 days of retention of data, out-of-the-box, is huge because there is a cost to retention.

Those 400 days of hot data mean that people can look for trends and at what happened in the past. And they can not only do so from a security point of view, but even for operational use cases. In the past, our operational norm was to keep live data for only 30 days. Our users were constantly asking us for at least 90 days, and we really couldn't even do that. That's one reason that having 400 days of live data is pretty huge. As our users start to use it and adopt this system, we expect people to be able to do those long-term analytics.
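The retention math behind this point is easy to sketch. The daily ingest figure below is a made-up assumption for illustration, not a number from the review:

```python
# Why a 400-day hot-data window matters for retention: steady-state hot
# storage grows linearly with the retention window. The 500 GB/day ingest
# rate is a hypothetical assumption, not from the review.

def hot_storage_tb(daily_ingest_gb: float, retention_days: int) -> float:
    """Total volume kept 'hot' at steady state, in TB."""
    return daily_ingest_gb * retention_days / 1024

print(hot_storage_tb(500, 30))    # old 30-day operational norm -> 14.6484375
print(hot_storage_tb(500, 400))   # a 400-day window -> 195.3125
```

At the same ingest rate, moving from 30 to 400 days multiplies the retained volume by more than 13x, which is why getting that window out-of-the-box matters.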

What needs improvement?

One major area for improvement for Devo, and people know about it, is to provide more capabilities around pre-built monitoring. They're working on integrations with different types of systems, but that integration needs to go beyond just onboarding to the platform. It needs to include applications, out-of-the-box, that immediately help people to start monitoring their systems. Such applications would include dashboards and alerts, and then people could customize them for their own needs so that they aren't starting from a blank slate. That is definitely on their roadmap. They are working with us, for example, on NetFlow logs and NSG logs, and AKF monitoring.

Those kinds of things are where the meat is because we're not just using this product for regulatory requirements. We really want to use it for operational monitoring. In comparison to some of the competitors, that is an area where Devo is a little bit weak.

For how long have I used the solution?

We chose Devo at the end of 2020 and we finished the implementation in June of this year. Technically, we were using it during the implementation, so it has been about a year.

I don't work with the tool on a daily basis. I'm from the product management and strategy side. I led the selection of the product and I was also the product manager for the previous product that we had.

What do I think about the stability of the solution?

Devo has been fairly stable. We have not had any major issues. There has been some downtime or slowness, but nothing that has persisted or caused any incidents. One place where we have a little bit of work to do is in measuring how much data is being sent into the product. There are competing dashboards that keep track of just how much data is being ingested, and we need to resolve which one we are going to use.

What do I think about the scalability of the solution?

We don't see any issues with scalability. It scales by itself. That is one of the reasons we also wanted to move to another product. We needed scalability and something that was auto-scalable.

How are customer service and support?

Their tech support has been excellent. They've worked with us on most of the issues in a timely fashion and they've been great partners for us. We are one of their biggest customers and they are trying really hard to meet our needs, to work with us, and to help us be successful for our segments and users.

They exceeded our expectations by being extremely hands-on during the implementation. They came in with an "all hands on deck" kind of approach. They worked through pretty much every problem we had and, going forward, we expect similar service from them.

Which solution did I use previously and why did I switch?

We were looking to replace our previous solution. We were using ArcSight as our SIEM and ELK for our operational monitoring. We needed something more modern and that could fulfill the roadmap we have. We were also very interested in all the machine learning and AI-type use cases, as forward-facing capabilities to implement. In our assessment of possible products, we were impressed by the features of AI/ML and because the data is available for almost a year. With Devo, we integrated both operational and SIEM functions into one tool.

It took us a long time to build and deploy some of the features we needed in the previous framework that we had. Also, having different tools was leading to data duplication in two different platforms, because sometimes the security data is operational data and vice versa. The new features that we needed were not available in the SIEM and they didn't have a proper plan to get us there. The roadmap that ArcSight had was not consistent with where we wanted to go.

How was the initial setup?

It was a complex setup, not because the system itself is complex but because we already had a system in place. We had already onboarded between 15,000 and 20,000 servers, systems, and applications. Our requirement was to not touch any of our onboarding. Our syslog was the way that they were going to ingest and that made it a little bit easier. And that was also one of our requirements because we always want to stay vendor-agnostic. That way, if we ever need to change to another system, we're not going to have to touch every server and change agents. "No vendor tie-in" is an architectural principle that we work with.
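The "no vendor tie-in" principle described above can be sketched in a few lines. Hostnames here are hypothetical; the point is that agents always send to the relay, so switching SIEM vendors means changing only the relay's downstream target, never touching the tens of thousands of sources:

```python
# Minimal sketch of a vendor-agnostic syslog relay: every agent points at
# the relay, and only the relay knows the current vendor endpoint.
# All hostnames below are invented for illustration.

RELAY = "syslog-relay.internal"                   # what every agent points at
DOWNSTREAM = {"siem": "collector.devo.example"}   # the only line that changes
                                                  # when swapping vendors

def forward_target(agent_destination: str) -> str:
    """Resolve where the relay actually ships a message."""
    if agent_destination == RELAY:
        return DOWNSTREAM["siem"]
    return agent_destination

print(forward_target("syslog-relay.internal"))  # collector.devo.example
```

Replacing the vendor then amounts to editing the `DOWNSTREAM` mapping in one place, which is what made the six-month migration feasible without re-onboarding sources.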

We were able to move everything within six months, which is absolutely amazing. That might be a record. Not only was Devo impressed at how efficiently we did it, but so were people in our company.

We had a very strong team on our end doing this. We went about it very clinically, determining what would be in scope and what would not be in scope for the first implementation. After that, we would continue to tie up any loose ends. We were able to meet all of our deadlines and pivot into Devo. At this point, Devo is the only tool we're using.

We have a syslog team that is the log aggregator and an onboarding team that was involved in onboarding the solution. The syslog team does things like the opening of ports and metrics of things like uptime. We also have four engineers on the security side who are helping to unleash use cases and monitor security. There's also a whole SOC team that does incident management and finding of breaches. And we have three people who are responsible for the operational reliability of Devo. Because it's a SaaS product, we're not the ones running the system. We're just making sure that, if something goes wrong, we have people who are trained and people who can troubleshoot.

We had an implementation project manager who helped track all of the implementation milestones. Our strategy was to set out an architecture to keep all the upstream components intact, with some very minor disruptions. We knew, with respect to some sources, that legacy had been onboarded in certain ways that were not efficient or useful. We put some of those pieces into the scope during the implementation so that we would segregate sources in ways that would allow better monitoring and better assessment, rather than mixing up sources. But our overall vision for the implementation was to keep all of that upstream architecture in place, and to have the least amount of disruption and need for touching agents on existing systems that had already been onboarded. Whatever was onboarded was just pointed at Devo from syslog. We did not use their relays. Instead, we used our syslog as the relays.

What's my experience with pricing, setup cost, and licensing?

Devo was very cost-competitive. We understood that the cost did not include pre-built monitoring content out-of-the-box, but we knew they were pointed in that direction.

Devo's pricing model, only charging for ingestion, is how most products are licensed. That wasn't different from other products that we were looking at. But Devo did come with that 400 days of hot data, and that was not the case with other products. While that aspect was not a requirement for us, it was a nice-to-have.

Which other solutions did I evaluate?

We started off with about 10 possibilities and brought it down to three. Devo was one of the three, of course, but I prefer not to mention the names of the others.

But among those we started off with were Elastic, ArcSight, Datadog, Sumo, Splunk, Microsoft systems and solutions, and even some of the Google products. One of our requirements was to have an integrated SIEM and operational monitoring system.

We assessed the solutions at many different levels. We looked at adherence to our upstream architecture for minimal disruption during the onboarding of our existing logs. We wanted minimal changes in our agents. We also assessed various use cases for security monitoring and operational monitoring. During the PoC we assessed their customer support teams. We also looked at things like long-term storage and machine learning. In some of these areas other products were a little bit better, but overall, we felt that in most of these areas Devo was very good. Their customer interface was very nice and our experience with them at the proof-of-value [PoV] level was very strong. 

We also felt that the price point was good. Given that Devo was a newer product in the market, we felt that they would work with us on implementing it and helping us meet our roadmap. All three products we evaluated at the PoV stage were good. This space is fairly mature. They weren't different in major ways, but price was definitely one of the things that we looked at.

In terms of the threat-hunting and incident response, Devo was definitely on par. I am not a security analyst and I relied on our SIEM engineers to analyze that aspect.

What other advice do I have?

Get your requirements squared away and know what you're really looking for and what your mandatory requirements are versus what is optional. Do a proof of value. That was very important for us. Also, don't only look at what your needs are today. Long-term analytics, for example, was not necessarily something we were doing, but we knew that we would want to do that in the coming years. Keep all of those forward-looking use cases in mind as well when you select your product.

Devo provides high-speed search capabilities and real-time analytics, although those are areas where a little performance improvement is needed. For the most part it does well, and they're still optimizing it. In addition, we've just implemented our systems, so there could be some optimizations that need to be done on our end, in the way our data is flowing and in the way we are onboarding sources. I don't think we know where the choke points are, but it could be a little bit faster than we're seeing right now.

In terms of network visibility, we are still onboarding network logs and building network monitoring content. We do hope that, with Devo, we will be able to retire some of our network monitoring tools and consolidate them. The jury is still out on whether that has really happened or not. But we are working actively towards that goal.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Head of Data Architect at LendingTree
Real User
Top 20
Instantaneous response when monitoring logs and KPIs
Pros and Cons
  • "CloudWatch immediately hooks up and connects to the KPIs and all the metrics."
  • "It would be beneficial for CloudWatch to provide an API interface and some kind of custom configuration."

What is our primary use case?

We use the solution to monitor our AWS resources. We used Azure extensively but a couple of years back we moved to use both Azure and AWS. Currently, we have three main use cases. 

Our predominant use case is monitoring our S3 which includes terabytes of data. We monitor all the buckets and containers plus who has access to them, the thresholds, and user data. We constantly watch all the KPIs and CloudWatch metrics. 

Our second use case is watching logs and processes for other products such as AWS tools, AWS Glue, and Redshift which includes a few terabytes of data.

Our third use case is minor and involves Athena.

Our fourth use case is new. We just started using SageMaker for a small POC and want to complete all of our data modeling and logs.

In the future, we will be using the solution with Airflow, which will become one of our biggest use cases. 

CloudWatch works very well with any of the AWS resources so we always monitor through it.

How has it helped my organization?

Our business flow has improved because we monitor email thresholds and immediately get an alert from CloudWatch if use goes beyond thresholds. Without this alert, we would have to use external monitoring. 
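The threshold alerting described above follows the general CloudWatch pattern of an alarm firing when enough recent datapoints breach a limit. A minimal sketch of that evaluation logic, with invented metric values and thresholds:

```python
# Hedged sketch of CloudWatch-style "M out of N datapoints" alarm
# evaluation. The usage values and threshold are illustrative only.

def alarm_state(datapoints, threshold, m, n):
    """Return 'ALARM' if at least m of the last n datapoints exceed threshold."""
    window = datapoints[-n:]
    breaches = sum(1 for v in window if v > threshold)
    return "ALARM" if breaches >= m else "OK"

usage = [40, 55, 72, 81, 90]          # e.g. percent of an email quota
print(alarm_state(usage, 70, 3, 5))   # ALARM: three of the last five breach 70
```

In the real service this evaluation happens server-side and the alarm action (such as an email notification) is triggered automatically, which is what removes the need for external monitoring.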

What is most valuable?

It is valuable that CloudWatch collects all the metrics. I primarily like the RUM. There is an instantaneous response when monitoring logs and KPIs. CloudWatch immediately hooks up and connects to the KPIs and all the metrics. 

What needs improvement?

Even though the product works well with most AWS services, it is a nightmare to use with Snowflake. Snowflake is a SaaS product hosted on AWS, but using it with CloudWatch still doesn't give us the support we need, so we rely on separate monitoring.

We have many databases such as MongoDB and SQL Server, RDS, and PostgreSQL. For these, CloudWatch is good but a little basic and additional monitoring tools are required. It's challenging to use one monitoring tool for S3 and another monitoring tool for Snowflake. 

It would be beneficial for CloudWatch to provide an API interface and some kind of custom configuration because everybody uses APIs now. Suppose Snowflake says we'd get all the same things with MongoDB such as APIs, hookups, or even monitoring. That would allow us to build our own custom solution because that is the biggest limitation of CloudWatch. If you go a bit beyond AWS products even if they're hosted on AWS, CloudWatch doesn't work very well. 
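One partial workaround for products CloudWatch doesn't cover natively is to publish your own numbers as custom metrics. The sketch below only builds a PutMetricData-style payload; actually sending it would require boto3 and AWS credentials, and the namespace, metric name, and dimension values are all hypothetical:

```python
# Sketch of a custom-metric payload in the shape CloudWatch's PutMetricData
# API expects. Names below (Custom/Snowflake, QueryQueueDepth, reporting_wh)
# are invented for illustration; publishing would need boto3 and credentials.

def custom_metric(namespace, name, value, unit="Count", dimensions=None):
    return {
        "Namespace": namespace,
        "MetricData": [{
            "MetricName": name,
            "Value": value,
            "Unit": unit,
            "Dimensions": dimensions or [],
        }],
    }

payload = custom_metric(
    "Custom/Snowflake", "QueryQueueDepth", 12,
    dimensions=[{"Name": "Warehouse", "Value": "reporting_wh"}],
)
print(payload["Namespace"])  # Custom/Snowflake
```

This still requires you to extract the numbers from the external product yourself, which is exactly the gap the reviewer is describing.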

I'd also like an improved UI because it hasn't significantly improved in a few years, and we want to see data at a more granular level. I get my KPIs for bucket usage for yesterday, but I'd like to see them by a particular date and week. We have three buckets used by hundreds of people, and I want to see usage for an individual to determine where I need to customize and provide more room. I want aggregation across multiple parameters, not just one.

For how long have I used the solution?

I have been using the solution for two years. 

What do I think about the stability of the solution?

The solution is very stable with absolutely no issues. We used to see a delay when we were setting up three buckets but now we receive instantaneous notifications. 

What do I think about the scalability of the solution?

The solution is definitely scalable. Most of our development environment uses it and we are running three teams of 150-200 people. Usage levels are different between developers and the support team so the total users at one time is 100-150. 

The solution is managed by our internal AWS maintenance team. Seven people manage our cloud environment and seven manage our platform side for not just CloudWatch, but everything on AWS.

We still need to find a solution for Snowflake and Tableau environments unless CloudWatch provides better support in the future. 

How are customer service and support?

The support staff are seasoned professionals and are good. Amazon provides the benchmark for support and nothing else compares.

Which solution did I use previously and why did I switch?

On-premises, we have used other solutions like Sumo Logic, Azure Logic Apps and others. Not everyone uses AWS so we have a lot of tools we use.

Previously we used some external monitoring logic, but it didn't work well with AWS tools. I would have to figure out how to configure Aurora to do something or find a way to handle S3 buckets. Those solutions worked well on-premises, but not with AWS and the cloud.

How was the initial setup?

The setup for this solution is pretty simple and anyone can do it if they are on AWS. Setting up all our VPC and private links connecting to our gateways took some time, but CloudWatch setup was a no-brainer and took a couple of days. 

What about the implementation team?

Our implementation was done in conjunction with a third party. We like to bring in a few engineers to work with our engineers and then we partner with a third party like Slalom to help with integration. Our process is a mix of all three with AWS staff helping for a couple of weeks and Slalom for a couple of months. Our team slowly takes over management. 

What was our ROI?

We plan to increase our usage because we don't have another monitoring tool right now. With the Airflow orchestration, our CloudWatch use will significantly increase as we monitor all of our RUM, notifications, jobs, and runs. Our runs and billings will increase 20-30% once we start using Airflow. 

Because CloudWatch doesn't support all externally hosted products, I rate it a nine out of ten for ROI. 

What's my experience with pricing, setup cost, and licensing?

I don't know specifics about pricing because we pay for all our AWS services in a monthly bundle and that includes CloudWatch, Redshift, VPCs, EC2s, S3s, A39s, and others. We spend about $5 million per year on AWS with CloudWatch being about 5% of that cost. 
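The figures above imply a rough annual CloudWatch spend, which can be checked back-of-envelope:

```python
# Back-of-envelope check of the figures in the review: roughly $5M/year
# on AWS, with CloudWatch at about 5% of that.

aws_annual_spend = 5_000_000
cloudwatch_share = 0.05
print(aws_annual_spend * cloudwatch_share)  # 250000.0 per year
```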

Which other solutions did I evaluate?

I did not evaluate other solutions. Once we moved to AWS, we looked for a tool that was native to that cloud. That is the process we are currently undertaking for Snowflake and Tableau because CloudWatch doesn't support them well. We do try to use CloudWatch as much as possible. 

What other advice do I have?

The solution is pretty good because it automatically comes and works well with AWS. Before you use any product from AWS, think about whether it is supported or how it will interface. I suggest using the solution with one product at a time and then transitioning to important interfaces. 

If you find you can't configure the solution with Redshift for example, and are struggling to build your S3 even though both use S3, then you may have to find another monitoring solution. It makes sense to follow Amazon's best practices. They advise not to use certain monitoring components alone but to use them as an integral part of your system. Monitor your ecosystem and think of a high-level picture of it rather than just determining that CloudWatch must be a part of Redshift. This solution is just one part of an entire system. 

I would rate the solution a nine out of ten. 

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Sr. Devops Engineer at BlueStacks
Real User
Top 5 Leaderboard
A stable, scalable solution worth the price
Pros and Cons
  • "The solution is really easy to use, really simple to edit, and scalable."
  • "I would like to see a more colorful dashboard that is better than other dashboard monitoring tools."

What is our primary use case?

We use it for API monitoring and URL monitoring across continents. We have users on all seven continents, so we need to monitor our APIs' response times and whether they are accessible from each continent. Because of that, we are using Stackdriver, and it gives us very good monitoring and alerts.
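The per-continent checking described above boils down to comparing sampled response times against an SLO per region. A small sketch, with invented region names and latency numbers:

```python
# Illustrative sketch of multi-region uptime/latency checking: given sampled
# response times (ms) per region, flag regions whose worst observed latency
# breaks the SLO. Regions and numbers are made up, not from the review.

def slow_regions(samples_ms: dict, slo_ms: float) -> list:
    """Regions whose worst observed response time breaks the SLO."""
    return sorted(r for r, times in samples_ms.items() if max(times) > slo_ms)

samples = {
    "europe-west": [120, 140, 135],
    "asia-south": [300, 650, 410],
    "us-central": [90, 110, 95],
}
print(slow_regions(samples, 500))  # ['asia-south']
```

A managed service such as Stackdriver runs the probes from its own checker locations and raises the alert for you; the sketch only shows the aggregation step.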

What needs improvement?

I would like to see a more colorful dashboard that is better than other dashboard monitoring tools. It is scientifically proven that a colorful interface or dashboard brings more interaction. I would like to see color identifiers on the dashboard such as a green light indicating the alert is recovered, and a red light when an alert is firing to help quickly identify if there is an issue.

To be honest, there have been lots of changes. When I started using it one and a half years ago, there was not much functionality, but nowadays I see that they have added a lot of things. So far it's good and they're improving. I'm really happy to use Stackdriver, but I would like some changes to the notification channel.

Inside that notification channel, you have to choose the group, but while choosing the group I can only add one mail ID. I am forced to add either a DL, in the sense of a group mail ID, or a single address. When I want to add a second mail ID, I have tried a comma and other separators, in different permutations and combinations, but I'm not able to add it. If there is a way to add more than one, it would help if the UI mentioned it, for example that you have to use a comma or a semicolon.

For how long have I used the solution?

I have been using the solution for over one and a half years.

What do I think about the stability of the solution?

I did not face any issues with stability. I did face an issue once, about six months ago, when the Google network itself was down, I believe in the US and Canada. Because of that, we were getting false alerts. If the network is down, it's obvious that we get alerts, and I don't know how Google could improve this part, but could Stackdriver also inform us that the cause is a network outage on their side?

What do I think about the scalability of the solution?

The solution is really easy to use, really simple to edit, and scalable.

You can just edit it. As I mentioned, in the initial setup you have to go step by step, and the edit option works the same way.

How are customer service and support?

I've never contacted support for this solution because it's really easy, but I have used their support for some of their other products a few times. For BigQuery, I submitted a support ticket and they responded, but to be honest, GCP product support is a little bit slower compared to AWS support.

How was the initial setup?

It's not challenging. All the steps are defined; you choose them one by one, and there are a lot of options. A few of the tricky points were in the load balancer. I sometimes face issues during the load balancer setup, for example with a forward slash or backward slash, and I don't catch it until the configuration fails. When I define the paths, I want some kind of immediate feedback on my configuration. As it is, once I configure it, I only find out after two or three minutes whether it was successful. An option to test the configuration beforehand would be helpful.

The deployment will take less than five minutes.

What's my experience with pricing, setup cost, and licensing?

The solution is not expensive; based on everything, it's not expensive, believe me. Compared to other cloud solutions, it's not very expensive. I give this solution an eight out of ten for price because we receive alerts related to product pricing and we have never received an alert for Google Stackdriver.

Which other solutions did I evaluate?

I am still exploring AWS X-Ray and my team is using Microsoft Azure.

What other advice do I have?

I give the solution a nine out of ten.

Our DevOps team and backend team are using the solution, which is about 23 to 25 people, occasionally. Unless we get an alert or have to configure something new, we are not checking it constantly, so you do not need to watch the monitor all the time. It's used occasionally, but you could say we check it on a daily basis.

I recommend Google Stackdriver to everyone. I always use the solution as an example when discussing options with people and am really happy with it.

Which deployment model are you using for this solution?

Private Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Google
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Founder at Art World Web Solutions
Real User
Top 5
Has a straightforward setup, with good alerting and triggering features; technical support was responsive and knowledgeable
Pros and Cons
  • "The event alerting feature or the trigger system is what I like most about AppDynamics Server Monitoring. Whenever an issue occurs, the tool automatically generates an event trigger that tells engineers in the company to take action, so it's an essential feature of AppDynamics Server Monitoring. Another valuable feature of the tool is end-to-end monitoring, which means if you need to debug, you can go transaction by transaction to see where the issue lies and how it's linked. For example, if it's a low-performance issue, you can look into it more through AppDynamics Server Monitoring in terms of which area takes too much time to execute. You can also see the SQL queries and the kind of query going on through the tool."
  • "An area for improvement in AppDynamics Server Monitoring is integration; in particular, it needs a better way to integrate with custom applications such as Siebel CRM. Right now, it's challenging to integrate AppDynamics Server Monitoring with Siebel CRM because it sometimes gives an error and cannot integrate properly."

What is our primary use case?

My company uses AppDynamics Server Monitoring for server monitoring and end-to-end monitoring. The tool has an event alerting mechanism that lets people within the company know when any server is down so that you can raise a ticket, and the support team can work on that ticket.

What is most valuable?

The event alerting feature or the trigger system is what I like most about AppDynamics Server Monitoring. Whenever an issue occurs, the tool automatically generates an event trigger that tells engineers in the company to take action, so it's an essential feature of AppDynamics Server Monitoring.

Another valuable feature of the tool is end-to-end monitoring, which means if you need to debug, you can go transaction by transaction to see where the issue lies and how it's linked. For example, if it's a low-performance issue, you can look into it more through AppDynamics Server Monitoring in terms of which area takes too much time to execute. You can also see the SQL queries and the kind of query going on through the tool.
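The drill-down described above amounts to attributing a transaction's total time to its segments and surfacing the worst one. A toy sketch, with made-up segment names and timings:

```python
# Illustrative sketch of transaction drill-down: given per-segment timings
# (ms) for one transaction, find which area takes the most time to execute.
# Segment names and numbers are invented, not from AppDynamics itself.

def slowest_segment(segments: dict) -> str:
    """Name of the segment with the largest recorded duration."""
    return max(segments, key=segments.get)

trace = {"auth": 12, "sql_query": 480, "render": 35}  # milliseconds
print(slowest_segment(trace))  # sql_query
```

In an APM product this attribution is captured automatically per transaction, which is what lets you jump straight to the slow SQL query the reviewer mentions.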

What needs improvement?

An area for improvement in AppDynamics Server Monitoring is integration; in particular, it needs a better way to integrate with custom applications such as Siebel CRM. Right now, it's challenging to integrate AppDynamics Server Monitoring with Siebel CRM because it sometimes gives an error and cannot integrate properly.

I'd like to see more details about each issue in the next release of AppDynamics Server Monitoring. For example, there's a server issue, and my team wants to identify the response time over SQL, but that detail is lacking. If AppDynamics could add slow query logs to AppDynamics Server Monitoring, that would be good.

Another feature I'd like the tool to have is the segregation of requests by user session, for example via a session ID, so it's easier to identify which session has an issue that needs solving, along with information on any transaction performed in that particular session. This feature would also make integration with AppDynamics Server Monitoring easier.
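In the absence of native support, an application can approximate session segregation by stamping a session ID on every log line, so logs can later be filtered per session. A minimal sketch using Python's standard `logging` and `contextvars` modules (the session ID, logger name, and format are hypothetical):

```python
import contextvars
import io
import logging

# Session-scoped context; set once per incoming request.
session_id = contextvars.ContextVar("session_id", default="-")

class SessionFilter(logging.Filter):
    """Stamp every log record with the current session ID."""
    def filter(self, record):
        record.session_id = session_id.get()
        return True

buf = io.StringIO()  # stand-in for a real log destination
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("session=%(session_id)s %(message)s"))

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.addFilter(SessionFilter())
logger.setLevel(logging.INFO)
logger.propagate = False  # keep output confined to our handler

session_id.set("abc-123")  # set when the user's request arrives
logger.info("payment transaction started")
print(buf.getvalue().strip())
```

With every record carrying `session=<id>`, a log search for one session ID reconstructs exactly the transactions that session performed.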

For how long have I used the solution?

I've been using AppDynamics Server Monitoring for almost eight years now.

What do I think about the stability of the solution?

AppDynamics Server Monitoring is a stable product.

What do I think about the scalability of the solution?

AppDynamics Server Monitoring is a scalable product.

How are customer service and support?

The technical support team for AppDynamics Server Monitoring is responsive and knowledgeable, so I like the support provided. Sometimes, it just takes time for the support team to solve custom application issues because that requires looking into the custom application's details: the error behind the application, where to plug in the integration, how to use the interface, and so on.

My team contacts support every day. My team is happy with the support, but the only room for improvement is the time it takes for the AppDynamics Server Monitoring technical support team to resolve the issue. It could be faster because my team has set deadlines for integrating and making custom applications work.

I'd give AppDynamics Server Monitoring technical support a four out of five. The response time is good, but the resolution time needs improvement.

Which solution did I use previously and why did I switch?

My team did a POC with Datadog. AppDynamics Server Monitoring is one of the best tools because it instantly gives you integration details, and you can integrate it more quickly than other solutions. For example, integrating Datadog took a lot longer because it required more steps and more levels to go through to complete the integration.

How was the initial setup?

Initially, the setup process for AppDynamics Server Monitoring was straightforward. It was pretty easy, but when you go deeper into end-to-end monitoring, for example, it gets a little complicated because you need to integrate with your JDK and Java applications and pass on the logs. Sizing the RAM and doing the initial setup of AppDynamics Server Monitoring is easy, but as you go deeper, it becomes complex.

I'm rating the initial setup for AppDynamics Server Monitoring as five on a scale of one to five because it was pretty easy.

What about the implementation team?

We set up AppDynamics Server Monitoring in-house.

What's my experience with pricing, setup cost, and licensing?

I cannot give information on the pricing for AppDynamics Server Monitoring because I'm not involved. I'm on the integration and technical side.

Which other solutions did I evaluate?

AppDynamics Server Monitoring has a lot of competitors in the market, but I evaluated Datadog.

What other advice do I have?

I'm using a cloud product from AppDynamics for end-to-end monitoring called AppDynamics Server Monitoring.

Maintenance for AppDynamics Server Monitoring happens monthly. A team of five people does the patching for it.

A team of eight people works on AppDynamics Server Monitoring in terms of the initial integration, then another team will take charge, so I'm unable to give the exact number of users of the tool within the company.

My rating for AppDynamics Server Monitoring is nine out of ten because it's a good tool and only has minor areas for improvement.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Associate Consultant at a computer software company with 10,001+ employees
Real User
Top 10
Automatic configuration saves us time, helpful support team, and it helps us to measure and improve our end-user experience
Pros and Cons
  • "This monitoring capability gives us the ability to measure the end-user experience."
  • "Support for cloud-based environments needs to be improved."

What is our primary use case?

We are a solution provider and this is one of the products that we implement for our clients. We use Dynatrace both on-premises and in the cloud. Our use cases involve monitoring application performance. We are also able to see how the underlying infrastructure is performing.

This monitoring capability gives us the ability to measure the end-user experience.

We have other use cases, as well, but this is a summary of what we do with it.

What is most valuable?

There are several features that we find very valuable.

The setup is automated, so you don't have to do much configuration; very little manual intervention is required.

Once it captures the data, it is able to dynamically analyze the traffic and determine the probable root cause. This is a feature that we use very heavily.

What needs improvement?

Support for cloud-based environments needs to be improved. There is a challenge when it comes to monitoring cloud-native applications, which means we have to use other tools that we integrate with Dynatrace. If there were a way to monitor these automatically, it would be a fantastic feature to add.

Some of the results the AI engine gave us were not proper output for the data that went in.

These days, we are seeing AIOps become more predominant. As such, I would like to see more of these features in Dynatrace, expanding it from a pure monitoring solution into a full-fledged AIOps solution.

For how long have I used the solution?

I have been working with Dynatrace for approximately nine years.

What do I think about the stability of the solution?

With respect to stability, this is not a system that gives users low-level access; rather, they interact with the agents. That said, we have had some stability issues with a number of our agent deployments for our customers. One example is the AI engine not giving the proper output for the given input.

What do I think about the scalability of the solution?

This is a scalable product, but you need multiple instances to scale it.

How are customer service and support?

We have worked with their technical support team on a couple of specific areas, and I would rate them a four out of five.

We have not had to contact support for applications that use simple technology, like Java. However, when it is a complex system such as an ERP or a cloud-based application, sometimes the integration requires that we create specific plugins to capture the data. These are the types of things that we have worked with technical support to resolve.

Which solution did I use previously and why did I switch?

We have worked with various competitors' tools. Some of these are AppDynamics, New Relic, Datadog, Splunk, and others. There are a lot of other tools on the market.

Nowadays, we are working with a lot of different customers and our preference is to implement Dynatrace over the other solutions. The three main reasons for this are the features in general, the ease of implementation, and specifically for the AI capabilities.

How was the initial setup?

The initial setup is straightforward, although it depends on whether the application environment is heterogeneous or complex. The initial planning can take some time, but the actual installation and setup is not a big process.

The number of staff required for deployment depends on how many applications we're going to configure. If it's only a few applications then you don't need many people. However, if a customer tells us they have a hundred applications that need to be installed in a month's time then obviously, we need more people to help with the deployment.

What about the implementation team?

As product integrators, we deploy this product with our in-house team. We have a good set of people who are trained and certified in Dynatrace.

What other advice do I have?

Over the time that I have used this product, I have worked with several versions. I am now working on the latest one.

The advice I typically give my clients is that you shouldn't think it will do everything. To implement it properly, you need to clearly understand what your specific use cases are, and then work on those.

Use cases can be related to an environment, a technology, or a platform. For some cloud-native services, for example, you won't be able to use Dynatrace because it can't even be installed; you won't get anything out of it. This is an example of how it is not suitable for every situation. The feasibility depends on what your use cases are.

I would rate this solution an eight out of ten.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: My company has a business relationship with this vendor other than being a customer: Implementer