Buyer's Guide
Cloud Monitoring Software
July 2022

Read reviews of Datadog alternatives and competitors

JerryH - PeerSpot reviewer
Director at a computer software company with 1,001-5,000 employees
Real User
Top 5
Enables us to bring all our data sources into a central hub for quick analysis, helping us focus on priorities in our threat landscape
Pros and Cons
  • "The real-time analytics of security-related data are super. There are a lot of data feeds going into it and it's very quick at pulling up and correlating the data and showing you what's going on in your infrastructure. It's fast. The way that their architecture and technology works, they've really focused on the speed of query results and making sure that we can do what we need to do quickly. Devo is pulling back information in a fast fashion, based on real-time events."
  • "Devo has a lot of cloud connectors, but they need to do a little bit of work there. They've got good integrations with the public cloud, but there are a lot of cloud SaaS systems that they still need to work with on integrations, such as Salesforce and other SaaS providers where we need to get access logs."

What is our primary use case?

Our initial use case is to use Devo as a SIEM. We're using it for security and event logging, and for aggregation and correlation of security incidents, triage, and response. That's our goal out of the gate.

Their solution is cloud-based and we're deploying some relays on-premises to handle anything that can't send logs up to the cloud directly. But it's pretty straightforward. We're in a hybrid ecosystem, meaning we're running in both public and private cloud.

How has it helped my organization?

We're very early in the process so it's hard to say what the improvements are. The main reason that we bought this tool is that we were a conglomeration of several different companies. We were the original Qualcomm company way back in the day. After they made billions in IP and wireless, they spun us off to Vista Equity, and we bought three or four companies in rapid succession in the 2014/2015 timeframe. Since then, we've acquired three or four more. Unfortunately, we haven't done a very good job of integrating those companies, from a security and business-services standpoint.

This tool is going to be our global SIEM and log-aggregation and management solution. We're going to be able to really shore up our visibility across all of our business areas, across international boundaries. We have businesses in Canada and Mexico, so our entire North American operations should benefit from this. We should have a global view into what's going on in our infrastructure for the first time ever.

The solution is enabling us to bring all our data sources into a central hub. That's the goal. If we can have all of our data sources in one hub and are then able to pull them back and analyze that data as fast as possible, and then archive it, that will be helpful. We have a lot of regulatory and compliance requirements as well, because we do business in the EU. Obviously, data privacy is a big concern and this is really going to help us out from that standpoint.

We have a varied array of threat vectors in our environment. We OEM and provide a SaaS service that runs on people's mobile devices, plus we provide an in-cab mobile device for truck fleets and tractor-trailers, both short- and long-haul. That means our threat surface is quite large, not only from the web services and web-native applications that we expose to our customers, but also from our in-cab and mobile application products that we sell. Being able to pull all that information into one central location is going to be huge for us. Securing that type of landscape is challenging because we have a lot of different moving parts. But it will at least give us some insight into where we need to focus our efforts and get the most bang for the buck.

We've found some insights fairly early in the process but I don't think we've gotten to the point where we can determine that our mean time to resolution has improved. We do expect it to help to reduce our MTTR, absolutely, especially for security incidents. It's critical to be able to find a threat and do something about it sooner. Devo's relationship with Palo Alto is very interesting in that regard because there's a possibility that we will be pushing this as a direct integration with our Layer 4 through Layer 7 security infrastructure, to be able to push real-time actions. Once we get the baseline stuff done, we'll start to evolve our maturity and our capabilities on the platform and use a lot more of the advanced features of Devo. We'll get it hooked up across all of our infrastructure in a more significant way so that we can use the platform to not only help us see what's going on, but to do something about it.

What is most valuable?

So far, the most valuable features are the ease of use and the ease of deployment. We're very early in the process. They've got some nice ways to customize the tool and some nice, out-of-the-box dashboards that are helpful and provide insight, particularly related to security operations.

The UI is 

  • clean
  • easy to use
  • intuitive. 

They've put a lot of work into the UI. There are a few areas they could probably improve, but they've done a really good job of making it easy to use. For us to get engagement from our engineering teams, it needs to be an easy tool to use and I think they've gone a long way to doing that.

The real-time analytics of security-related data are super. There are a lot of data feeds going into it and it's very quick at pulling up and correlating the data and showing you what's going on in your infrastructure. It's fast. The way that their architecture and technology works, they've really focused on the speed of query results and making sure that we can do what we need to do quickly. Devo is pulling back information in a fast fashion, based on real-time events.

The fact that the real-time analytics are immediately available for query after ingest is super-critical in what we do. We're a transportation management company and we provide a SaaS. We need to be able to analyze logs and understand what's going on in our ecosystem in a very close to real-time way, if not in real time, because we're considered critical infrastructure. And that's not only from a security standpoint, but even from an engineering standpoint. There are things going on in our vehicles, inside of our trucks, and inside of our platform. We need to understand what's going on, very quickly, and to respond to it very rapidly.

Also, the integration of threat intelligence data provides context to an investigation. We've got a lot of data feeds that come in and Devo has its own. They have a partnership with Palo Alto, which is our primary security provider. All of that threat information and intel is very good. We know it's very good. We have a lot of confidence that that information is going to be timely and it's going to be relevant. We're very confident that the threat and intel pieces are right on the money. And it's definitely providing insights. We've already used it to shore up a couple of things in our ecosystem, just based on the proof of concept.

The solution’s multi-tenant, cloud-native architecture doesn't really affect our operations, but it gives us a lot of options for splitting things up by business area or different functional groups, as needed. It's pretty simple and straightforward to do so. You can implement those types of things after the fact. It doesn't really impact us too much. We're trying to do everything inside of one tenant, and we don't expose anything to our customers.

We haven't used the solution's Activeboards too much yet. We're in the process of building some of those out. We'll be building customized dashboards and Activeboards based on what our current dashboards do in Splunk. Devo's going to help us out with their professional services to make sure that we do that right, and do it quickly.

Based on what I've seen, its Activeboards align nicely with what we need to see. The visual analytics are nice. There's a lot of customization that you can do inside the tool. It really gives you a clean view of what's going on, from both an interface and a topology standpoint. We were able to get network topology on some log events right out of the gate. The visualization and analytics are insightful, to say the least, and they're accurate, which is really good. It's not only the visualization, but also the ability to use the API to pull information out. We do a lot of customization in our backend operations and service-management platforms, and being able to pull those logs back in and do something with them quickly is also very beneficial.
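
To make the API point concrete, here is a minimal sketch of pulling query results out of Devo over HTTP, in the spirit of what the reviewer describes. The endpoint URL, token header, time-range fields, and table name are illustrative assumptions, not values from this deployment; Devo's Query API documentation defines the exact contract.

```python
import requests

# Placeholder values -- the real endpoint, auth scheme, and table names
# come from Devo's Query API documentation for your region and account.
DEVO_QUERY_URL = "https://apiv2-us.devo.com/search/query"  # assumed endpoint
TOKEN = "YOUR_DEVO_TOKEN"                                  # assumed token auth

payload = {
    "query": "from firewall.all.traffic select eventdate, srcIp, dstIp",
    "from": "now()-1h",        # assumed relative range: the last hour
    "to": "now()",
    "mode": {"type": "json"},
}

resp = requests.post(
    DEVO_QUERY_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Hand the returned events to backend service-management tooling.
for event in resp.json().get("object", []):
    print(event)
```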

The customization helps because you can map it into your business requirements. Everybody's business requirements are different when it comes to security and the risks they're willing to take and what they need to do as a result. From a security analyst standpoint, Devo's workflow allows you to customize, in a granular way, what is relevant for your business. Once you get to that point where you've customized it to what you really need to see, that's where there's a lot of value-add for our analysts and our manager of security.

What needs improvement?

Devo has a lot of cloud connectors, but they need to do a little bit of work there. They've got good integrations with the public cloud, but there are a lot of cloud SaaS systems that they still need to work with on integrations, such as Salesforce and other SaaS providers where we need to get access logs.

We'll find more areas for improvement, I'm sure, as we move forward. But we've got a tight relationship with them. I'm sure we can get anything worked out.

For how long have I used the solution?

This is our first foray with Devo. We started looking at the product this year and we're launching an effort to replace our other technology. We've been using Devo for one month.

What do I think about the stability of the solution?

The stability is good. It hasn't been down yet.

What do I think about the scalability of the solution?

The scalability is unlimited, as far as I can tell. It's just a matter of how much money you have in your back pocket that you're willing to spend. The cost is based on log-ingestion rate and how much retention you want. They're running in the public cloud, meaning capacity is effectively unlimited. And scaling is instantaneous.

Right now, we've got about 22 people in the platform. It will end up being anywhere between 200 and 400 when we're done, including software engineers, systems engineers, security engineers, and network operations teams for all of our mobile and telecommunications platforms. We'll have a wide variety of roles that are already defined. And on a limited basis, our customer support teams can go in and see what's going on.

How are customer service and technical support?

Their technical support has been good. We haven't had to use their operations support too much. We have a dedicated team that's working with us. But they've been excellent. We haven't had any issues with them. They've been very quick and responsive and they know their platform.

Which solution did I use previously and why did I switch?

We were using Splunk but we're phasing it out due to cost.

Our old Splunk rep went to Devo and he gave me a shout and asked me if I was looking to make a change, because he knew of some of the problems that we were having. That's how we got hooked up with Devo. It needed to have a Splunk-like feel, because I didn't want to have a long road or a huge cultural transformation and shock for our engineering teams and our security teams that use Splunk today. 

We liked the PoC. Everything it did was super-simple to use and was very cost-effective. That's really why we went down this path.

Once we got through the PoC and once we got people to take a look at it and give us a thumbs-up on what they'd seen, we moved ahead. From a price standpoint, it made a lot of sense and it does everything we needed to do, as far as we can tell.

How was the initial setup?

We were pulling in all of our firewall logs, throughout the entire company, in less than 60 minutes. We deployed some relay instances out there and it took us longer to go through the bureaucracy and the workflow of getting those instances deployed than it did to actually configure the platform to pull the relevant logs.

In the PoC we had a strategy. We had a set of infrastructure that we were focusing on, infrastructure that we really needed to make sure was going to integrate and that its logs could be pulled effectively into Devo. We hit all of those use cases in the PoC.

We did the PoC with three people internally: a network engineer, a systems engineer, and a security engineer.

Our strategy going forward is getting our core infrastructure in there first—our network, compute, and storage stuff. That is critical. Our network layer for security is critical. Our edge security, our identity and access stuff, including our Active Directory and our directory services—those critical, core security and foundational infrastructure areas—are what we're focusing on first.

We've got quite a few servers for a small to mid-sized company. We're trying to automate the deployment process to hit our Linux and Windows platforms as much as possible. It's relatively straightforward. There is no Linux agent, so it's essentially a configuration change on all of our Linux platforms. We're going through that process right now across all our servers. It's a lift because of the sheer volume.
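
As a rough illustration of that configuration change, assuming the hosts use rsyslog, a single forwarding rule pointed at an on-premises relay is typically all it takes. The file path, relay hostname, and port below are placeholders, not values from this deployment.

```
# /etc/rsyslog.d/99-devo-relay.conf -- example only; hostname/port are placeholders
# "@@" forwards over TCP; a single "@" would use UDP.
*.* @@devo-relay.internal.example:514
```

After dropping the file in place, restarting rsyslog (for example, `systemctl restart rsyslog`) starts shipping the host's syslog stream to the relay, which then forwards it up to the cloud platform.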

As for maintenance of the Devo platform, we literally don't require anybody to do that.

We have a huge plan. We're in the process of spinning up all of our training and trying to get our folks trained as a day-zero priority. Then, as we pull infrastructure in, I want those guys to be trained. Training is a key thing we're working on right now. We're building the e-learning regimen. And Devo provides live, multi-day workshops for our teams. We go in and focus the agenda on what they need to see. Our focus will be on moving dashboards from Splunk and the critical things that we do on a day-to-day basis.

What about the implementation team?

We worked straight with Devo on pretty much everything. We have a third-party VAR that may provide some value here, but we're working straight with Devo.

What was our ROI?

We expect to see ROI from security intelligence and network layer security analysis. Probably the biggest thing will be turning off things that are talking out there that don't need to be talking. We found three of those types of things early in the process, things that were turned on that didn't need to be turned on. That's going to help us rationalize and modify our services to make sure that things are shut down and turned off the way they're supposed to be, and effectively hardened.

And the cost savings over Splunk is about 50 percent.

What's my experience with pricing, setup cost, and licensing?

Pricing is pretty straightforward. It's based on daily log-ingestion volume and retention period. They keep it simple. They have breakpoints, depending on what your volume is. But I like that they keep it simple and easy to understand.

There were no costs in addition to their standard licensing fees. I don't know if they're still doing this, but we got in early enough that all of the various modules were part of our entitlement. I think they're in the process of changing that model a little bit so you can pick your modules. They're going to split it up and charge by the module. But everything we needed, day one, was part of the package.

Which other solutions did I evaluate?

We were looking at ELK Stack and Datadog. Datadog has a security option, but it wasn't doing what we needed it to do. It wasn't hitting a couple of the use cases that we have Splunk doing, from a logging and reporting standpoint. We also looked at Logstash, some of the "roll-your-own" stuff. But when you do the comparison for our use case, having a cloud SaaS that's managed by somebody else, where we're just pushing up our logs, something that we can use and customize, made the most sense for us. 

And from a capability standpoint, Devo was the one that most aligned with our Splunk solution.

What other advice do I have?

Take a look at it. They're really going after Splunk hard. Splunk has a very diverse deployment base, but Splunk really missed the mark with its licensing model, especially as it relates to the cloud. There are options out there, effective alternatives to Splunk and some of the other big tools. From a SaaS standpoint, if not best-in-breed, Devo is certainly in the top two or three. It's definitely a strong up-and-comer. Devo is already taking market share away from Splunk and I think that's going to continue over the next 24 to 36 months.

Devo's speed when querying across our data is very good. We haven't fully loaded it yet. We'll see when the rubber really hits the road. But based on the demos and the things that we've seen in Devo, I think it's going to be extremely good. The architecture and the way that they built it are for speed, but it's also built for security. Between our DevOps, our SecOps, and our traditional operations, we'll be able to quickly use the tool, provide valuable insights into what we're doing, and bring our teams up to speed very quickly on how to use it and how to get value out of it quickly.

The fact that it manages 400 days of hot data falls a little bit outside of our use case. It's great to have 400 days of hot data, from security, compliance, and regulatory retention standpoints. It makes it really fast to rehydrate logs and go back and get trends from way back in the day and do some long-term trend analysis. Our use case is a little bit different. We just need to keep 90 days hot and we'll be archiving the rest of that information to object-based long-term storage, based on our retention policies. We may or may not need to rehydrate and reanalyze those, depending on what's going on in our ecosystem. Having the ability to be able to reach back and pull logs out of long-term storage is very beneficial, not only from a cost standpoint, but from the standpoint of being able to do some deeper analysis on trends and reach back into different log events if we have an incident where we need to do so.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Alex Tsoi  - PeerSpot reviewer
Information Technology Consultant at TELUS Corporation
Consultant
Top 10
Enables us to easily nail, locate, and resolve performance issues that would have been very hard to identify
Pros and Cons
  • "Its ability to quickly inventory our resources, figure out interdependencies across them, and assemble a topology of your environment is brilliant. There is a price associated with it. Whenever you target a NetApp environment, it is included in the price but whenever you want to add different vendors, like VMware and Cisco, the price greatly spikes. Inventorization helps us a lot to visualize the environment."
  • "Their pricing model needs improvement."

What is our primary use case?

The main benefit of NetApp Cloud Insights is that it's agentless. It just collects information over the SNMP protocol.
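
For readers unfamiliar with agentless collection: the monitoring side polls devices over SNMP instead of installing software on them. The generic Python sketch below (using the pysnmp library) shows what a single poll looks like; the device address, community string, and OID choice are placeholders for illustration, and this is not a description of Cloud Insights' internal implementation.

```python
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

# Poll a device for sysUpTime (OID 1.3.6.1.2.1.1.3.0) over SNMPv2c.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),     # community string (placeholder)
        UdpTransportTarget(("10.0.0.5", 161)),  # device address (placeholder)
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),
    )
)

if error_indication:
    print(f"Poll failed: {error_indication}")
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```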

We had NetApp ONTAP installed, paired with VMware. The biggest challenge for any company is to find the bottleneck at the edge between technologies, like between VMware and NetApp. Both products have great reporting and monitoring tools, but when it comes to finding issues at the edge between those products, you can hardly identify whether it's a storage issue or a virtualization issue. NetApp Cloud Insights blends the performance data, links together the performance usage from the VMware and NetApp perspectives, and provides you a single pane of glass for reporting and monitoring, from the virtual machine through to the hardware storage.

You can find any major bottlenecks or any issue points you need to work on. In any IT infrastructure, whenever you improve one layer, like buying faster storage, you move the bottleneck from storage to the network or to the servers. The biggest challenge is to find where that bottleneck is so you can resolve the issue. I personally found NetApp Cloud Insights very useful in this sense because we were a heavily virtualized environment.

Because we were a heavily virtualized environment with NFS as our main storage protocol, we were able to easily nail, locate, and resolve some performance issues using Cloud Insights that would have been very hard to identify otherwise.

The biggest challenge though is the licensing model for Cloud Insights. For example, for our environment, the price of purchasing was close to the price of the storage itself, which is why we didn't actually pursue using Cloud Insights further. 

How has it helped my organization?

Cloud Insights has helped us to optimize cloud spend by removing inefficiencies and identifying abandoned resources in VMware. There were a bunch of older virtual machines that had been powered off and just left there, never decommissioned. That was costing us premium storage space.

What is most valuable?

The visibility and usage optimization are the most valuable features. I liked the product very much. Its dashboards and the alerting systems are very easy to implement in any environment.

Its ability to quickly inventory our resources, figure out interdependencies across them, and assemble a topology of your environment is brilliant. There is a price associated with it. Whenever you target a NetApp environment, it is included in the price but whenever you want to add different vendors, like VMware and Cisco, the price greatly spikes. Inventorization helps us a lot to visualize the environment.

It's priceless when you work with eight different vendors in a multi-vendor environment: Cloud Insights can actually identify the links between VMware, the physical servers, and the storage, and that helps you troubleshoot and solve issues right there. It helps you proactively make decisions.

Their advanced analytics for pinpointing problem areas are great. If you're using separate tools, for example, vRealize Operations Manager from VMware and Unified Manager from ONTAP, you can find some anomalies in both of them, but you can never link them together in one logical structure. Cloud Insights, however, really goes through and links them together. For example, if you have contention, like virtual CMTS contention, it doesn't mean that your storage has issues. It can also mean that there is a network problem below, some faulty network adapter or network port, or even a physical server. In this sense, Cloud Insights is very valuable. It enables you to uncover multi-tiered issues.

The advanced analytics also reduce the time it takes to find performance issues, and they can predict issues that would otherwise take hours or days to find.

What needs improvement?

Their pricing model needs improvement. 

For how long have I used the solution?

We were testing Cloud Insights for a period of around five months. I installed it, configured it, and just didn't touch it for the first two months because I was busy. For the next three months, we actually used it to troubleshoot some issues and to gather more information on the performance of the environment. We found it very useful and helpful.

You have to install the collectors in your environment, and then the collectors send information into the NetApp private cloud.

What do I think about the stability of the solution?

It is a very stable environment. I didn't have any issues during my assessment time.

What do I think about the scalability of the solution?

Scalability is also great. You just add nodes and it works.

I only showed it to two or three people. The number of users doesn't impact the performance of Cloud Insights. It's more about how many connected devices there are.

We had a six-node cluster and maybe around a thousand VMs.

Which solution did I use previously and why did I switch?

We didn't use a solution like Cloud Insights before. We were just given a license with instructions and we extended the license three or four times. NetApp reached out to us and pitched the idea of how great Cloud Insights is. We liked that it offered the opportunity to work with a multi-vendor environment.

The free trial of Cloud Insights helped inform our buying decision. We found value in Cloud Insights.

We also use Datadog but it doesn't have the same functionalities as Cloud Insights. 

How was the initial setup?

The initial setup is straightforward. You just spin up the collectors, I think one per vendor, one for NetApp and one for VMware, and then link them to Cloud Insights in the private cloud. That's it.

You can spin up the entire environment within an hour. The collectors start gathering information, and after the next hour you will have your environment on a dashboard in Cloud Insights.

The price that came back to us was so ridiculous that we didn't end up implementing it.

What other advice do I have?

The biggest lesson I have learned is that the basic licensing, which is free, is useless. That is the biggest lesson. The basic free Cloud Insights does everything, but only with NetApp products, and that information alone doesn't add value to our troubleshooting.

I would rate NetApp Cloud Insights a nine out of ten. I really liked the product. I would have bought it if it wasn't for the cost. I had a perfect business case for it but it didn't work out. 

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Associate Consultant at a computer software company with 10,001+ employees
Real User
Automatic configuration saves us time, the support team is helpful, and it helps us to measure and improve our end-user experience
Pros and Cons
  • "This monitoring capability gives us the ability to measure the end-user experience."
  • "Support for cloud-based environments needs to be improved."

What is our primary use case?

We are a solution provider and this is one of the products that we implement for our clients. We use Dynatrace both on-premises and in the cloud. Our use cases involve monitoring application performance. We are also able to see how the underlying infrastructure is performing.

This monitoring capability gives us the ability to measure the end-user experience.

We have other use cases, as well, but this is a summary of what we do with it.

What is most valuable?

There are several features that we find very valuable.

The setup is automated, so you don't have to do any configuration. There is very little manual intervention required.

Once it captures the data, it is able to dynamically analyze the packets and determine a probable root cause. This is a feature that we use very heavily.

What needs improvement?

Support for cloud-based environments needs to be improved. There is a challenge when it comes to monitoring cloud-native applications. This means that we have to use other tools that we integrate with Dynatrace. If there were another approach to monitoring things automatically then it would be a fantastic feature to add.

Some of the results the AI engine gave us were not proper outputs, given the data that went in.

These days, we are seeing that AIOps is becoming more predominant. As such, I would like to see more of those features in Dynatrace, expanding it from a pure monitoring solution into a full-fledged AIOps solution.

For how long have I used the solution?

I have been working with Dynatrace for approximately nine years.

What do I think about the stability of the solution?

With respect to stability, this is not a system that gives users low-level access; rather, they interact with the agents. That said, we have had some stability issues with a number of our agent deployments for our customers. One example is that the AI engine was not giving the proper output, based on what the input was.

What do I think about the scalability of the solution?

This is a scalable product but you ought to have multiple instances to scale it.

How are customer service and support?

We have worked with their technical support team on a couple of specific areas, and I would rate them a four out of five.

We have not had to contact support for applications that use simple technology, like Java. However, when it is a complex system such as an ERP or a cloud-based application, sometimes the integration requires that we create specific plugins to capture the data. These are the types of things that we have worked with technical support to resolve.

Which solution did I use previously and why did I switch?

We have worked with various competitors' tools. Some of these are AppDynamics, New Relic, Datadog, Splunk, and others. There are a lot of other tools on the market.

Nowadays, we are working with a lot of different customers and our preference is to implement Dynatrace over the other solutions. The three main reasons for this are the features in general, the ease of implementation, and, specifically, the AI capabilities.

How was the initial setup?

The initial setup is straightforward, although it depends on whether the application environment is heterogeneous or complex. The initial planning can take some time but the actual installation and setup is not a big process.

The number of staff required for deployment depends on how many applications we're going to configure. If it's only a few applications then you don't need many people. However, if a customer tells us they have a hundred applications that need to be installed in a month's time then obviously, we need more people to help with the deployment.

What about the implementation team?

As product integrators, we deploy this product with our in-house team. We have a good set of people who are trained and certified in Dynatrace.

What other advice do I have?

Over the time that I have used this product, I have worked with several versions. I am now working on the latest one.

The advice that I typically give to my clients is that you shouldn't think that it will do everything. In order to implement it properly, we need to clearly understand what your specific use cases are, and then work on those.

Use cases can be related to an environment, a technology, or a platform. If it's a cloud-native service, for example, then you won't be able to use Dynatrace because it can't even be installed. You won't get anything out of that. This is an example of how it is not suitable for every situation. The feasibility depends on what your use cases are.

I would rate this solution an eight out of ten.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: My company has a business relationship with this vendor other than being a customer: Implementer
Sr Director Of Engineering at a financial services firm with 51-200 employees
Real User
Good capabilities, a helpful interface, and straightforward to set up
Pros and Cons
  • "The initial setup is straightforward."
  • "We want it to work at what it is expected to work at and not really based on the updated configuration which one developer has decided to change."

What is our primary use case?

It's a logging solution. We have a bunch of applications running in our cloud, and those applications and the infrastructure they run on generate logs. We put those logs in Coralogix. Then we analyze those logs for various things, including alerts, data analysis, investigations, et cetera.

What is most valuable?

The overall capability of the platform and the kind of interface they have are excellent. The way I can query the data and pinpoint issues, and the kinds of alerts they have, are so advantageous. The functionality is the most essential part for me.

The initial setup is straightforward.

What needs improvement?

We have asked for a couple of features from the company already. What typically happens is that a lot of people - and developers are among the biggest consumers of this product - go into the product and change specific configurations to optimize their investigation process. That increases our data flow at times, so the cost changes, and a lot of changes happen due to that. We have asked the company to auto-revert those changes after a while so that the system works the way it normally should. We want it to work at what it is expected to work at, and not based on an updated configuration that one developer decided to change.

For how long have I used the solution?

I’ve been using the solution for over a year.

What do I think about the stability of the solution?

The solution is stable and the performance is good. It’s reliable. There are no bugs or glitches, and it doesn’t crash or freeze.

What do I think about the scalability of the solution?

The scalability is pretty good.

We have about 60 to 70 people using this product. The majority of the backend engineers and senior staff use it regularly.

How are customer service and support?

Technical support is above average. They have been pretty fast and pretty supportive on issues.

Which solution did I use previously and why did I switch?

In this company, this is the first solution of its kind we have used. Before this, we were using an in-house solution. However, I've previously used solutions such as Splunk and Datadog, if I remember correctly. Functionality-wise, this product is more mature compared to them. Plus, there are additional capabilities. For example, I can keep my costs in check; certain functionality in terms of cost control is better. As an overall product, it is slightly better than the other products I have used.

How was the initial setup?

The solution is acceptably easy to set up. That said, of course, you need enough technical understanding to set it up.

The deployment took a while for us, almost a month, I would say. However, the majority of the things that were not ready were on our side. The product was ready from almost day one, yet it took us quite a while to collect all the logs, redirect them to Coralogix, and get the logs into a format that Coralogix could ingest. A lot of work on our side was needed initially. Any new company that is onboarding has to go through the same cycle. It's a sizeable investment in terms of time if the company is not ready to onboard. If they have already been a customer or have done similar work, it should be pretty straightforward and only take a few days.
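
To illustrate the kind of formatting work that took the time, the sketch below wraps application log lines in structured JSON and posts them to a log-ingestion HTTP endpoint. The endpoint URL, payload fields, and auth header are assumptions for illustration only; the actual contract comes from Coralogix's ingestion documentation for your account and region.

```python
import json
import time

import requests

INGEST_URL = "https://ingress.example.coralogix.invalid/logs"  # placeholder
SEND_KEY = "YOUR_SEND_DATA_KEY"                                # placeholder

def ship_log(message: str, severity: str = "INFO") -> None:
    """Wrap a raw log line in structured JSON and post it to the endpoint."""
    entry = {
        "applicationName": "payments",   # example naming convention
        "subsystemName": "api-gateway",  # example naming convention
        "timestamp": int(time.time() * 1000),
        "severity": severity,
        "text": json.dumps({"message": message}),
    }
    resp = requests.post(
        INGEST_URL,
        json=[entry],
        headers={"Authorization": f"Bearer {SEND_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()

ship_log("order 12345 settled")
```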

What about the implementation team?

We did the initial setup ourselves. It's a pretty straightforward process.

What's my experience with pricing, setup cost, and licensing?

We are paying roughly $5,000 a month.

It is not a pay-as-you-go model where you suddenly pump in a lot of data and your cost blows up. I have a monthly bill that is in my control, and I can tweak things to reduce my cost.

What other advice do I have?

I’m just an end-user.

Since we are using the cloud, we’re always using the latest solution version.

For any company getting onboarded to Coralogix, or an equivalent solution, for the first time, they need to do their in-house streamlining before they start working on this. It took us almost a month to streamline our systems and our processes to get onboarded. And during that time, we were just waiting to get onboarded. It's better to sort out internal things before you start looking for a solution.

I’d rate the solution eight out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: I am a real user, and this review is based on my own experience and opinions.
David Pratt - PeerSpot reviewer
Senior DevOps Engineer Individual Contributor at EML Payments Ltd
Real User
Top 5
Reasonably priced, straightforward to set up, and performs as expected
Pros and Cons
  • "It does everything we wanted it to do."
  • "How granular I could go down at looking at certain data, especially related to the operations, is limited."

What is our primary use case?

We use New Relic APM to monitor our public cloud-hosted application and infrastructure.

How has it helped my organization?

Not so far, although we haven't really got a very mature system for defining our application processes from end to end, and certainly not for our client-centric impact.

I'm reasonably satisfied. We haven't run the rule over it too much because it hasn't been a massive investment.

It has been quite valuable in demonstrating how we can change our views of our services. It has proved its value so far.

What is most valuable?

We don't have any problems with this solution.

The configuration isn't terrible when compared with other products.

It does everything we wanted it to do. We haven't been too critical in our thinking about where it can improve.

What needs improvement?

There really is nothing that stands out with New Relic. With Insights, I think it will be found lacking in its report-aggregation capabilities. How granular I can go in looking at certain data, especially related to operations, is limited.

The API integrations they have for automating our configuration were fine, but I think for some of these tools, it was over-engineering for us to try to automate any of that. So, we just use the user interfaces.
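
For context on what that automation would have involved: New Relic exposes a GraphQL API (NerdGraph) that can read and change account configuration programmatically. A minimal read-only sketch follows; the query shape is illustrative and the API key is a placeholder.

```python
import requests

NERDGRAPH_URL = "https://api.newrelic.com/graphql"
API_KEY = "NRAK-..."  # placeholder user API key

# A simple read-only query; configuration changes would use mutations instead.
query = """
{
  actor {
    user {
      name
      email
    }
  }
}
"""

resp = requests.post(
    NERDGRAPH_URL,
    json={"query": query},
    headers={"API-Key": API_KEY},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```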

For how long have I used the solution?

I have been using New Relic APM for approximately two months.

We are using the latest version.

What do I think about the stability of the solution?

It's a stable solution. We have not encountered any issues. We're not plugging too much traffic into it. We're not reporting on it heavily. It's not feeding into our service management processes heavily. So we haven't seen anything.

What do I think about the scalability of the solution?

We have not yet explored the scalability, it's too early for us.

There are approximately 15 of us using it altogether. We are infrastructure engineers: third-line infrastructure support and architecture people.

Some of the lead developers have access, but there are 15 of us and we are all pretty similar.

How are customer service and technical support?

We have not contacted technical support.

Which solution did I use previously and why did I switch?

We have two custom in-house processes that do our application data-flow monitoring. We have manually built out a custom performance-monitoring platform in Splunk, using our knowledge of the system over the years.

I have used AppDynamics in the past with another company. There really is nothing that stands out with New Relic. It is similar to AppDynamics and Dynatrace.

How was the initial setup?

The initial setup was straightforward. 

I don't think any of these tools are tools that anyone can pick up and install. 

I wouldn't say it was any more difficult to configure than some of the other solutions. It is definitely not more difficult to configure than AppDynamics. 

What's my experience with pricing, setup cost, and licensing?

The price was one of the reasons we chose this solution. It's reasonably priced. It's cheaper than the likes of AppDynamics and Dynatrace, based on how our subscription is.

Which other solutions did I evaluate?

We looked at Datadog initially and found the initial setup to be far more complex than what we found in New Relic. 

What other advice do I have?

Our proof of concept has been successful.

Getting an order in and reporting is an industry in itself; don't think it can solve the problems it's not trying to solve. It is an application performance monitoring tool. Don't try to make it anything else.

The big problem with Splunk for us is that it can do everything. The thing that's nice about New Relic is it doesn't try and do everything, it does what it does. So far, it does it to satisfaction, but don't try and fill multiple holes in your toolchain with it. It's good at what it does.

We had some pretty informed opinions on what it was going to do. We knew where we wanted it to get us, and so far it has cost the amount we wanted it to cost and done everything that we wanted it to do.

I would rate New Relic APM a ten out of ten.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.