LogicMonitor Overview

LogicMonitor is the #3 ranked solution in top AIOps tools, #5 ranked solution in best Network Monitoring Tools, #5 ranked solution in Infrastructure Monitoring tools, and #5 ranked solution in top Cloud Monitoring Software. PeerSpot users give LogicMonitor an average rating of 9.0 out of 10. LogicMonitor is most commonly compared to SolarWinds NPM: LogicMonitor vs SolarWinds NPM. LogicMonitor is popular among the large enterprise segment, accounting for 59% of users researching this solution on PeerSpot. The top industry researching this solution is computer software, accounting for 24% of all views.
LogicMonitor Buyer's Guide

Download the LogicMonitor Buyer's Guide including reviews and more. Updated: November 2022

What is LogicMonitor?

LogicMonitor, a unified observability platform, brings together comprehensive monitoring capabilities and enables observability across data centers, public/private clouds, and applications. LogicMonitor provides correlation, context, and clarity to understand the business impact and causes of complex IT incidents.

LogicMonitor is a SaaS-based unified observability platform that enables today’s digital enterprises to adopt a cloud-ready operating model for effectively meeting key business demands. The solution provides clarity across hybrid enterprise IT, and brings diverse IT and development teams together to solve complex problems. In addition, it enables IT to innovate faster while improving the operational efficiency of the critical IT services they deliver.

LogicMonitor unifies IT teams around a single platform that allows for the collection, analysis, contextualization, and exploration of observable data in hybrid settings.

LogicMonitor Features

LogicMonitor has many valuable key features. Some of the most useful ones include:

  • A single source of truth for data onboarding, management, and exploration across infrastructure, apps, and IT stacks.
  • A robust ecosystem of community-supported LM modules to accelerate onboarding.
  • An intuitive interface with dashboarding, reporting, and data exploration to regularly and effectively monitor and troubleshoot.
  • AIOps capabilities enable anomaly detection, early warning detection, and rapid root cause analysis for services, applications, and infrastructure.
  • Monitors across networks, systems, storage, and other IT infrastructure.
  • OpenTelemetry-based application microservices with business context and vendor independence.
  • Logging to aid infrastructure monitoring, anomaly identification, and troubleshooting.

LogicMonitor Benefits

There are many benefits to implementing LogicMonitor. Some of the biggest advantages the solution offers include:

  • Instantly resolve issues across DevOps and ITOps teams with a single source of truth. Anticipate and discover problems early, eliminate dead ends in troubleshooting, and deploy more frequently with the confidence that hybrid IT is under control.

  • By evaluating IT data in real time for anomaly detection, you can gain predictive insights. Identify potential flaws by analyzing billions of metrics and data points from hundreds of IT devices and resources.

  • Consolidate monitoring across IT infrastructure and apps to save money and reduce risk. By replacing several point products, you can save money on licensing and maintenance.

  • Consolidate enterprise-scale and innovative capabilities into a single platform.

  • To enhance productivity, the solution offers complete visibility across the technology stack with insights, context, and correlation.

Reviews from Real Users

LogicMonitor stands out among its competitors for a number of reasons. Two major ones are its robust root cause analysis and its event correlation tool. PeerSpot users take note of the advantages of these features in their reviews: 

Valentine C., Technical Service Delivery Manager at Sparx Solutions, writes of the solution, “As a support resource, I don't need to use multiple platforms to connect to a device to further investigate the issue. It is all consolidated. From that perspective, it saves time because a resource now only needs to use one platform.” He adds, “I will rate it as a solid nine out of 10.”

Robert V., Teamlead at i-LEVEL ICT Solutions, notes, “One thing that's very valuable for us is the technical knowledge of the people who work with LogicMonitor. We looked at several products before we decided to use LogicMonitor, and one of the key decision-making points was the knowledge of the things that they put in the product. It provides real intelligence regarding the numbers that you see on the product, which makes it easy for us technical people to troubleshoot.”

LogicMonitor Customers

Kayak, Zendesk, Ted Baker, Trulia, Sophos, iVision, TekLinks, Siemens


Archived LogicMonitor Reviews (more than two years old)

Daniel Gavin - PeerSpot reviewer
Network Architect at Envision IT
Video Review
MSP
It consolidated our monitoring tools, reducing our onboarding times
Pros and Cons
  • "The dashboarding is very useful. Being able to create custom data sources is one of its biggest features which allows quick time to market with new features. If one of our vendors changes their data format or metrics that we should be monitoring, then we can quickly adjust to any changes in the environment in order to get a great user experience for our customers."
  • "LogicMonitor's reporting capabilities definitely could use an improvement. We have made do with the dashboarding and done what we can to make that work for our customers. However, there are definitely customers who would like a PDF or some kind of report along those lines, where we have been utilizing other tools to provide them. The out-of-the-box LogicMonitor reporting is the only thing that we have been less than impressed with."

What is our primary use case?

We are a managed service provider, so we have a wide range of deployments. LogicMonitor, as a software-as-a-service solution, is deployed with collectors on-premises, which also tie directly into cloud providers.

We primarily monitor Citrix environments for customers. That ranges from the delivery side, such as Citrix ADCs, to virtual desktops and the supporting infrastructure around them. That's probably our primary use case.

While we do some NetFlow capture for other managed service clients, the primary use case would be Citrix monitoring.

How has it helped my organization?

LogicMonitor really improved our workflow as a company. Previously, we had been using a combination of about four or five tools. We were able to consolidate those all into LogicMonitor, which significantly improved our response time to new customers and onboarding time for new employees.

We can create granular alerting for devices. Then, since we are a managed service provider, we can have very granular alerting, not only for our own purposes, but where customers would like to be alerted directly on specific issues. It is very easy to build escalation chains that include the customer as well as our own team.
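An escalation chain like the one described can be sketched as follows. This is an illustrative model only; the stage recipients, delays, and function names below are assumptions, not LogicMonitor's actual configuration format:

```python
# Minimal sketch of an escalation chain: an alert notifies each stage in
# order, stopping once a stage acknowledges it. The recipients and delays
# are hypothetical examples.
def escalate(chain, acknowledged_at_stage=None):
    """Walk an escalation chain and return the notifications sent.

    `chain` is a list of (recipients, delay_minutes) stages;
    `acknowledged_at_stage` stops escalation after that stage index.
    """
    notified = []
    for index, (recipients, delay) in enumerate(chain):
        notified.append((delay, recipients))
        if acknowledged_at_stage is not None and index >= acknowledged_at_stage:
            break
    return notified

# Internal NOC first, then the on-call engineer, then the customer contact
# (mirroring the review's point that customers can be included directly).
chain = [
    (["noc@example.com"], 0),
    (["oncall@example.com"], 15),
    (["customer-it@example.com"], 30),
]
```

For example, `escalate(chain, acknowledged_at_stage=0)` stops after the first stage, while an unacknowledged alert walks all three.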

LogicMonitor's AIOps give us a great view of performance over time and potential changes in performance.

We have been able to tune LogicMonitor very granularly and eliminated most of our false positives. Any monitoring platform is going to give you false positives to some degree, but we have definitely reduced our false positives with LogicMonitor by at least half.

What is most valuable?

The dashboarding is very useful. Being able to create custom data sources is one of its biggest features which allows quick time to market with new features. If one of our vendors changes their data format or metrics that we should be monitoring, then we can quickly adjust to any changes in the environment in order to get a great user experience for our customers.

We have created custom dashboards for our customers to give them a single pane of glass view as far as what their environment looks like in relation to their Citrix environment or VMware Hypervisor environment. LogicMonitor is a combination of things that they have pre-built. Especially along the VMware infrastructure, they have some great dashboards canned and ready to go. On the Citrix side, we have developed a lot of our own dashboards for customer use. We have gotten great feedback from those, as they're very easy to throw together and provide a lot of value to our customers.

We use custom data sources extensively. It's one of the greatest features of LogicMonitor, as a product. We can have very granular control over our data sources. Customizable data sources are one of the primary draws to LogicMonitor, and we do use them extensively. Developing new LogicModules is very simple. We primarily use PowerShell, but there are also a myriad of other options depending on what your target operating system is.
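The script-based data sources described here can be sketched in Python (the reviewer's team uses PowerShell). The metric names and the key=value-per-line output convention below are assumptions for illustration; the exact output format a collector parses is defined by LogicMonitor's datasource documentation:

```python
# Illustrative sketch of a script-style data source: collect a few host
# metrics and emit them one per line as key=value pairs for a collector
# to parse. The key names and output convention are assumptions.
import shutil
import time


def collect_metrics():
    """Gather a few example host metrics into a dict of numeric values."""
    usage = shutil.disk_usage("/")
    return {
        "DiskUsedPercent": round(usage.used / usage.total * 100, 2),
        "EpochSeconds": int(time.time()),
    }


def emit(metrics):
    """Print one key=value line per metric."""
    for key, value in metrics.items():
        print(f"{key}={value}")


if __name__ == "__main__":
    emit(collect_metrics())
```

The appeal the review points to is that when a vendor changes its data format, only a small script like this needs to change, not the monitoring platform itself.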

LogicMonitor alerts us very quickly if one of their collectors loses connectivity with the cloud. Occasionally, we will get alerts for customers where we don't have extensive monitoring in place, and they may not be aware that their site is down or that there are other issues with their environments. We have had occasions where the alerts that we get from LogicMonitor that the collectors are down might be our first indication where a customer is having an issue.

At this time, we are using AIOps for dynamic thresholds and anomaly detection. For anomaly detection, we found it quite helpful because it will give us an idea of when there is an anomaly in the environment. For example, if you have a backup job that normally would run, but it isn't running or if there is a bulk data transfer that wouldn't normally occur at a particular time, we can have it alert one way or another. That is a great feature, as far as LogicMonitor's AIOps toolkit.
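The general idea behind this kind of anomaly detection (flagging a sample that deviates sharply from its baseline, such as a backup job that normally transfers data but suddenly doesn't) can be illustrated with a simple z-score check. This is a generic sketch, not LogicMonitor's actual, proprietary algorithm:

```python
# Flag a sample as anomalous when it deviates from a rolling baseline by
# more than a few standard deviations. Generic illustration only.
from statistics import mean, stdev


def is_anomalous(history, latest, threshold=3.0):
    """Return True if `latest` is more than `threshold` standard
    deviations away from the baseline established by `history`."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > threshold


# A nightly backup normally transfers ~50 GB; a night with no transfer
# stands out against that baseline.
nightly_gb = [48, 52, 50, 49, 51, 50, 47]
print(is_anomalous(nightly_gb, 0))   # -> True  (backup didn't run)
print(is_anomalous(nightly_gb, 50))  # -> False (normal night)
```

A dynamic-threshold feature refines the same idea by learning the threshold per metric rather than using a fixed multiplier.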

What needs improvement?

We have found LogicMonitor's reporting capabilities to be somewhat lacking. That is one of the only areas that we really thought was not as strong as it could be. One of the great things is the dashboard functionality, which we were able use to work around the reporting functionality. Instead of having a canned report that gets emailed to our customers, they have a live dashboard that they can log into and view the things we would normally include in a report. They can have a live look, where they can really drill into the data and see what is there.

LogicMonitor's reporting capabilities definitely could use an improvement. We have made do with the dashboarding and done what we can to make that work for our customers. However, there are definitely customers who would like a PDF or some kind of report along those lines, where we have been utilizing other tools to provide them. The out-of-the-box LogicMonitor reporting is the only thing that we have been less than impressed with.


For how long have I used the solution?

We have been using LogicMonitor for about four years.

What do I think about the stability of the solution?

LogicMonitor's stability has been very good for us. We have not experienced any major outages or issues with LogicMonitor as a product in the several years that we've been using it.

We have a team of a couple of people who handle the implementation and deployment of LogicMonitor. We have a larger team who handles the day-to-day support. One of the great features of LogicMonitor being a software-as-a-service product is that we don't have to monitor or manage the tool itself. The collectors update automatically. We handle the operating system running the collector within our normal toolset. Therefore, it gets Windows updates and does all these things on its own or through that toolset. There is very little time that has to be spent managing the tool itself. We are really just managing our systems in the tool.

What do I think about the scalability of the solution?

Scalability is actually one of the reasons that we went to LogicMonitor from our own internal tool sets. The scalability is as big as you want to go. I've seen other customers that have thousands of endpoints in there without any issue. We certainly have not run into any scalability issues in our environments.

We have a variety of users who interact with LogicMonitor on a daily basis. We have our managed services team who work directly with the customers and are in there on a day-to-day basis doing remediation of issues as they arise. We also have our implementation group who take care of onboarding new customers, working with them on any custom data sources or custom monitoring needs that they might have. Then, our customers are able to log and see their own environment along with the dashboards and things that we built for them. It really has been a great tool for our team and customers to be able to see all of that. 

The role-based access control that LogicMonitor provides is very robust. We are able to provide single sign-on for our users as well as multi-factor authentication for our customers. Therefore, the role-based access control and authentication components of the LogicMonitor product are excellent.

Our use of LogicMonitor is constantly increasing as we roll our managed customers into the platform. We definitely plan to increase our managed services, and directly as a result, increase our utilization of LogicMonitor.

How are customer service and support?

We have only had to engage with technical support on a handful of occasions over the last four years. Thankfully, the product runs very well; we've had very few issues with it. On the couple of occasions that we have had to engage technical support, they have been very quick with first-call resolution, and we've been very happy with our experience during that process.

LogicMonitor provides very wide support for just about any device that you can use in an enterprise environment. We've used it for VMware, XenServer, and Hyper-V on the hypervisor side. On the storage side, we have people using NetApp and Dell EMC. On the networking, we are using Cisco. We also have some customers running UniFi gear and Juniper. There are just a massive variety of devices that it can monitor out-of-the-box.

Which solution did I use previously and why did I switch?

LogicMonitor was a great move for us in terms of consolidating our monitoring tools. We previously used a combination of paid and open source tools to monitor our customer environments. Being able to consolidate to LogicMonitor has allowed us to save significant time in server management when managing the tool. We have also seen a lot better onboarding times for our employees coming to the environment. It has been a great gain all-around.

The customer onboarding time was cut down by half to maybe three-quarters. As far as employee onboarding time, they only have to learn one tool instead of multiple tools. We have consolidated our collector and data source development from probably three languages down to just PowerShell. That has been a huge gain. It's much easier to find resources who know or can learn PowerShell, so that's been fantastic.

LogicMonitor replaced Observium, Zabbix, Nagios, and SolarWinds.

How was the initial setup?

The initial setup with LogicMonitor was very straightforward. The team at LogicMonitor worked with us to deploy our first collector, then walked us through how to create groups and assign properties to the groups or devices. Most devices have very good metrics out-of-the-box, as the data sources that LogicMonitor provides are excellent for the vast majority of devices. Where we have had to create our own data sources has been with our managed services around more complex data sets, not a specific device.

In our organization, deploying to our internal systems took probably six hours. It was very easy.

What about the implementation team?

We did the initial implementation with the LogicMonitor team. They had a very straightforward strategy as far as getting it deployed. It was very easy to get our devices added in there. As we have moved forward, we have certainly learned different tips and tricks as far as how we organize devices into categories or groups in order to effectively monitor devices with minimal user interaction.

What was our ROI?

The return on investment with LogicMonitor has been excellent. We have seen a great reduction in the number of hours spent managing the tool, as well as the ability to monitor a wide variety of services and systems without significant investment in time for developing custom modules or having to dissect a tool to figure out exactly what we need to do to add the functionality that we're looking for. On top of that, being able to onboard our own employees much faster, with only one tool to learn instead of four or five, has been a net positive for our onboarding process.

Our customer onboarding process is now automated. We don't have to go in and manually create a large number of devices in multiple platforms. We go through the process and install the collectors at the customer site, then we have templates that we utilize to deploy LogicMonitor out to those collectors. The automation with LogicMonitor has probably saved us 20 or 30 percent in time, as far as deployment to customers goes.

What's my experience with pricing, setup cost, and licensing?

As a managed services provider, the licensing model that LogicMonitor provides us is excellent. We are able to scale up and scale down as needed. The pricing is reasonable for the amount of features and support that they provide.

As a managed service provider, we have the highest level of licensing that they offer, so we don't have any extra fees. I believe there are some add-ons for some of the lower tiers of LogicMonitor service, but that's not something that we use with our agreement.

Which other solutions did I evaluate?

We found that the amount of time that we were spending on managing the tool or doing upgrades was significant. We found that the cost of LogicMonitor was less than the cost to maintain some of these open source products that we had running. The other side of that is there were some new features that we wanted to roll out to decrease our footprint as far as what we're monitoring. The time that we would have taken to develop or enable those modules in our toolsets would have had a higher cost than moving to this software as a service based product.

We evaluated a handful of options. LogicMonitor's capabilities for a managed service provider really shined. A lot of the other products didn't have a great MSP portal, or their role-based access control was not mature enough to handle multiple tenants. Therefore, LogicMonitor won out very quickly once we started evaluating most of the players out there.

We looked at SolarWinds and a couple of other solutions.

What other advice do I have?

If you are looking to implement LogicMonitor for the first time, work through their available documentation. There are a couple of certifications that they offer which are very good and give you a good foothold into the process. Then, talk with people who are currently using LogicMonitor. There is a great support community out there with people who are more than willing to help.

AIOps does provide a very useful data set, and they have been continually improving it. AIOps is there, and we use it a bit. While the dynamic thresholding is interesting, the anomaly detection is more of a nice-to-have than one of the primary features that we use.

We have not utilized the automated discovery and deployment. With managed services, we have to keep track of how we charge customers. Generally, we have a specific list of devices that we're going to monitor, so we don't use the discovery features on LogicMonitor.

As far as monitoring platforms go, I have worked with a wide variety. I would give LogicMonitor a 10 out of 10.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Head of IT Operations at a computer software company with 201-500 employees
Real User
Its visualization capabilities enable us to be more proactive in resolving issues and preventing problems
Pros and Cons
  • "The most valuable feature is the visualization of the data that it is collecting. I have used many products in the past and they tend to roll up the data. So, if you're looking at data over long periods of time, they start averaging the data, which can skew the figures that you're looking at. With LogicMonitor, they have the raw data there for two years, if you are an enterprise customer. If you are looking at that long duration of data, you're seeing exactly what happened during that time."
  • "The topology mapping is all based on the dynamic discovery of devices that could talk to each other. There is no real manual way that you can set up a join between two devices to say, "This is how this network is actually set up." For example, if you have a device, and you're only pinging that device and not getting any real intelligent information from it, then it can't appear on the map with other devices. Or if it can appear, then it won't show you which devices are actually joined to it."

What is our primary use case?

It is to monitor our customers’ infrastructures. We provide the service as part of our managed service offerings. We monitor our customers' networks and infrastructures for things like availability, vital statistics, and the various services that they have running in their environments. We provide a NOC and Service Desk that respond to alerts that come up and use the tool to be proactive in looking after those environments.

How has it helped my organization?

It is clean and clear compared to other products that we have used. This has made it easier to get to the root cause of a problem, because it's easier to see (through the visualization) where the problems lie.

I have worked on several data sources where I've either customized what's there already or created additional ones that don't exist. Also, LogicMonitor has been very flexible in terms of providing resources to assist with building custom data sources. If we have a requirement, we can approach LogicMonitor and they will assist us in getting the data that we are after.

It has improved our control over the environments that we manage. With a lot of products, you can just poll a device and get a metric out of the system. With LogicMonitor, you can do a lot of manipulation through scripting, then calculate the results that you're getting. It makes you more efficient and able to get the data in the particular format that you want.

You can do a lot of tuning of alerting, from the device group down to the data source and individual instances of those data sources. This is very flexible. We have many customers who have their own requirements for what they want us to alert on, so we have to be flexible with our monitoring and alerting. I can now provide more bespoke, customized services for them.

LogicMonitor alerts us if the cloud loses contact with the on-prem collectors and we have found this advantageous. We have email alerting and an integration with our ticketing system. In some instances, we have automated text messages and phone calls for the more critical services. When our collectors do happen to go down, that's a P1 situation because we've lost complete sight of the customer's environment.

We have started using Artificial Intelligence for IT Operations (AIOps) capabilities more for the anomaly detection and for troubleshooting. The root cause analysis is something which we're testing now to see how it will work for us. These features will take a lot of noise away from the alerts when they come in.

One thing which has really helped is the integration that we have between LogicMonitor and our ticketing system: The ability to be able to log and update the ticket. We do have additional functionality to this integration as well, where if we have a number of alerts for a particular device in a period of time, then it will then create a problem ticket in the ticketing system and attach the associated incident tickets. All of these pieces help dramatically in terms of keeping everything central in the ticket. We know when things have gone down or cleared. It's not repeatedly opening and creating tickets for every single failed poll. In terms of the whole ticket management process, it's helped immensely with that.
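The alert-to-ticket rollup described here, where individual alerts become incident tickets but repeated alerts for one device within a window are grouped under a problem ticket, can be sketched as follows. The threshold, window size, and data shapes are illustrative assumptions, not the reviewer's actual integration:

```python
# Group device alerts into incident vs. problem tickets: a device that
# fires several alerts inside a time window gets one problem ticket
# instead of a flood of incidents. Threshold and window are assumed.
from collections import defaultdict

WINDOW_SECONDS = 900   # 15-minute grouping window (assumption)
PROBLEM_THRESHOLD = 3  # alerts per device before opening a problem ticket


def group_alerts(alerts):
    """Classify (device, timestamp_seconds) alerts.

    Returns a dict mapping device -> 'incident' or 'problem'.
    """
    by_device = defaultdict(list)
    for device, ts in alerts:
        by_device[device].append(ts)

    outcome = {}
    for device, stamps in by_device.items():
        stamps.sort()
        # count alerts inside the window starting at the first alert
        in_window = [t for t in stamps if t - stamps[0] <= WINDOW_SECONDS]
        outcome[device] = (
            "problem" if len(in_window) >= PROBLEM_THRESHOLD else "incident"
        )
    return outcome
```

For example, three alerts from one switch inside fifteen minutes yield a single problem ticket, while a lone firewall alert stays an ordinary incident; this mirrors the review's point about not opening a ticket for every failed poll.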

It monitors most of the products that we work with out-of-the-box, because we work with a lot of the big vendors, like Microsoft, Cisco, Palo Alto, Citrix, etc. They are very good at having the data sources readily available for those.

What is most valuable?

The most valuable feature is the visualization of the data that it is collecting. I have used many products in the past and they tend to roll up the data. So, if you're looking at data over long periods of time, they start averaging the data, which can skew the figures that you're looking at. With LogicMonitor, they have the raw data there for two years, if you are an enterprise customer. If you are looking at that long duration of data, you're seeing exactly what happened during that time.

I have probably two types of favorite dashboards:

  1. Dashboards that give a general overview of our whole environment and a complete NOC-level view that can be drilled into if there is an alert.
  2. I like the dashboards that can be very granular into a particular service or piece of equipment. For example, if you were looking at a dashboard just related to Citrix, you can have a huge amount of detail on one page. It takes all the metrics into visual graphs, pie charts, and big-number widgets, which makes it a lot easier than working your way around the devices that you are monitoring to bring together the data that you're interested in.

We are quite a large networking company. One of the features that we like with LogicMonitor that they have out-of-the-box is NetFlow, which is a great tool to help troubleshoot something. This has improved how we can provide a service to our customers.

The anomaly detection is a very good tool because you can compare the statistics that you're looking at against a week or month ago to see if it's something that's truly out of the norm or not. The visualizations that I get are very powerful. These capabilities enable us to be more proactive in resolving issues and preventing problems. If you are managing a customer's network as you should be, you should be looking at these tools and visualizations on a general day-to-day basis to understand what is happening with the customer's network. It's very useful to use these tools to learn about what's going on and know what the norm is for those networks. Then, you can get to a point where you're tuning your alerting to be a bit more in tune with what the actual norm is for that customer.

The solution has consolidated the monitoring tools we need into one. One reason we moved to LogicMonitor was the additional features that are provided, like NetFlow. We used to need a separate solution for that, as well as for configuration management. Having those additional items built into the product has been a really good part of it.

What needs improvement?

The topology mapping is all based on the dynamic discovery of devices that could talk to each other. There is no real manual way that you can set up a join between two devices to say, "This is how this network is actually set up." For example, if you have a device, and you're only pinging that device for availability and not getting any real intelligent information from it, then it can't show you which devices are actually connected to it. Before the topology mapping was released, I was working with product management and did raise this issue at the time. I haven't seen it yet, but it was something that I suggested to them that they should allow customers to be able to build their own topologies, or at least to override what's being discovered, just for visualization more than anything.

I can completely understand that the topology mapping is how the root cause analysis and the alert suppression work, which are all dependent on it as well. So, I wouldn't want to override that in terms of functionality. But in terms of visualization on a map, it would be a big plus to be able to do that. I have been told that this is being worked on in the background.

For how long have I used the solution?

Just over two years.

What do I think about the stability of the solution?

It is a very solid platform. I haven't noticed any real outages from my point of view. I've seen when LogicMonitor emails out to say, "There is currently a problem in these particular regions," but I don't think I've actually seen myself experiencing those issues. They are very good at communicating out what's going on. In terms of actual availability, I've never really seen an outage on the platform at all.

What do I think about the scalability of the solution?

Because it's a SaaS offering in terms of scalability, onboarding customers is more on the LogicMonitor side. They are the ones who need to have the capacity to onboard these customers, and I've never had an issue so far. From my understanding, they are growing month on month in terms of their infrastructure.

There are definitely limitations with the sizing of the collectors that LogicMonitor provides. It's based on the number of instances in general. A lot of the time, the guidance for a large collector says something like, "It needs to be a particular spec for 10,000 instances." On customer sites, I have the same spec device with 50,000 to 60,000 instances, and it's working perfectly fine. So, in terms of actual scalability, there are restrictions, but I think LogicMonitor has been quite conservative in terms of what they've published about what the collectors are actually capable of. In my experience, I've been able to push those boundaries a fair amount.

From our company's point of view, there are probably about 50 to 55 users who access LogicMonitor to use it in one way or another. Then, we provide logons for our customers as well, if they want to see their own environment. Service desk and NOC analysts are the main people who use the platform, then we have our service management team who log on there to get information for monthly reports or outage queries.

We do use quite a lot of the platform. There is room for growth, but it's just one step at a time while we're getting used to the platform and as and when we have a requirement for using additional features.

How are customer service and technical support?

The great thing about LogicMonitor is that you have the inbuilt chat within the platform. You're getting through to people that know the product and not getting through to people who are just logging tickets. Most of the time, you're either getting an answer straight away to your problem or they try their very best before they actually have to escalate it somewhere else. I seriously can't fault their technical support.

Which solution did I use previously and why did I switch?

LogicMonitor replaced our other monitoring solution, ScienceLogic, which was very similar to this platform in terms of multitenancy and customisation. The previous platform charged a premium for the additional features that come with LogicMonitor. Having those pieces native in this product is a huge advantage.

We evaluated about six products before moving to LogicMonitor. The decision to move was based on features, ease of use, and commercial elements.

How was the initial setup?

Most products are very good at onboarding devices onto the platform, and LogicMonitor is no different. Once it has some credentials that it can use, it will automatically discover the metrics that it wants to apply against the devices. They are very good at setting good baseline thresholds, so the data sources give you a good starting point for what you should be alerting on and at what levels. Because of that, it reduces the time it takes to onboard a customer.

For the average onboarding time, several factors can contribute. You must make sure that you have the right credentials to access devices and that the devices themselves are accepting access. The LogicMonitor process has improved how long it takes to onboard a customer, especially the time it takes to provision a collector. A collector takes almost no time at all. Whereas with my previous vendor, towards the end of our relationship, it was taking a long time to get the collectors up and running, and a lot of the time you had to get support involved because it wouldn't happen properly.

What about the implementation team?

We used the professional services of LogicMonitor. They were amazing and extremely efficient. They had experience migrating from our previous platform and were able to automate as much as possible.

What was our ROI?

I think that we have seen ROI. We moved to LogicMonitor because of the types of devices that we are monitoring. It's better for us now with the efficiencies that we're getting from the platform. It's definitely benefiting us. It's more than just having a tool. It's something we can use day in, day out, giving us good insights into what is happening.

It has saved time because you have the information that you need in one place. In turn, the productivity is better because of it.

What's my experience with pricing, setup cost, and licensing?

The licensing side of things with LogicMonitor is quite simple. It is one license per device. LMCloud and LMConfig are slightly different, but still a simple model.

The standard license is very straightforward, versus my previous vendor, where there were six different tiers of licensing based on the number of metrics being collected per device.

From what I understand, they are bringing out a number of new features, where there will be a different licensing model for those features. So, it will be interesting to see how that comes about and affects things. However, today it hasn't been too bad. It has been a very straightforward licensing model.

What other advice do I have?

Take your time with it. A lot of the delays that we had were around customers not giving us access to their networks to get the collectors installed. We had a very strict timeline to follow when we were doing the migration because our contract was ending with our previous vendor. We had to get everything up and running by a particular date, and it went down to the wire. We were very close to not monitoring a couple of customers because they just weren't giving us the access we needed. So, my advice is that if you're onboarding the product and dealing with many customers, make sure you give yourself enough time.

The reporting capabilities are about average. They are good for certain point-in-time reports that you might need. However, most of the reporting we do is service reports that we provide our customers at the start or end of the month. Because we try to pull together data from multiple systems in one report, we use an external product to get the data we want from the LogicMonitor API. With the reporting in LogicMonitor, you would have to run many reports to gather all of those pieces of data. Therefore, we use a third-party product so we can run one report, have it all automated, and take away the administrative headache. There is nothing wrong with the reporting; it's just that, for our requirements, we need the data to come from LogicMonitor and other platforms as well.
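For anyone wiring up that kind of external reporting against the LogicMonitor REST API, each request needs an LMv1 HMAC Authorization header. Below is a minimal sketch of building one in Python; the access ID, key, and resource path are placeholders, and the header format should be double-checked against LogicMonitor's current API documentation:

```python
import base64
import hashlib
import hmac
import time


def lmv1_auth_header(access_id, access_key, verb, resource_path, body="", epoch_ms=None):
    """Build an LMv1 Authorization header for a LogicMonitor REST call.

    Per LogicMonitor's documented scheme, the signature is the base64
    encoding of the *hex* HMAC-SHA256 digest of verb + epoch + body +
    resourcePath, keyed with the API access key.
    """
    if epoch_ms is None:
        epoch_ms = int(time.time() * 1000)  # epoch in milliseconds
    message = "{}{}{}{}".format(verb, epoch_ms, body, resource_path)
    digest = hmac.new(access_key.encode(), message.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return "LMv1 {}:{}:{}".format(access_id, signature, epoch_ms)
```

The resulting header goes on each request, e.g. a GET to `/device/devices` to enumerate resources before merging their metrics with data from the other systems in the report.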

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
William Guertin - PeerSpot reviewer
Senior Systems Engineer at Accruent
Real User
Its fine-tuned alerting lets us troubleshoot issues and resolve them quickly
Pros and Cons
  • "The breadth of its ability to monitor all our environments, putting it in one place, has been helpful. This way, we don't have to manage multiple tools and try to juggle multiple balls to keep our environment monitored. It presents a clear picture to us of what is going on."
  • "We have very fine-tuned alerting that lets us know when there are issues by identifying where exactly that issue is, so we can troubleshoot and resolve them quickly. This is hopefully before the customer even notices. Then, it gives us some insight into potential issues coming down the road through our environmental health dashboards."
  • "Automated remediation of issues has room for improvement. I don't know how best to handle it, but I know that they're kind of working on it. I know there are some resources that can do automated remediation. I would like them to improve this area so it could be completely hands-free, where it detects an issue, such as, if a CPU is running high. There are ways to do it even now, but it's a bit more involved."

What is our primary use case?

We are in four public clouds: AWS, Azure, GCP, and Oracle Cloud, though we only have a small footprint in Oracle. We are monitoring all the virtual server environments as well as all the services in those environments, and alerting on various set points depending on whether it is a virtual server or a service.

We are also monitoring our colos. We have on-prem hardware, networking, and server solutions that we are monitoring with LogicMonitor. We are in both the cloud and on-prem, and that breadth of cloud and on-prem is a good use case for LogicMonitor.

How has it helped my organization?

We have very fine-tuned alerting that lets us know when there are issues by identifying where exactly that issue is, so we can troubleshoot and resolve them quickly. This is hopefully before the customer even notices. Then, it gives us some insight into potential issues coming down the road through our environmental health dashboards.

The breadth of its ability to monitor all our environments, putting it in one place, has been helpful. This way, we don't have to manage multiple tools and try to juggle multiple balls to keep our environment monitored. It presents a clear picture to us of what is going on.

When I first started, it was less granular in terms of the fine tuning and the ability to tune out specific servers running high CPU. Keeping a global general standard has really helped. We now modify the environment where we need to alert and ignore those areas where we're not as concerned. This has helped our company in ways that maybe management doesn't even realize, e.g., we're not waking up our engineers in the middle of the night, so there is more job satisfaction in being able to get a good night's sleep. For example, we had one team that was being alerted every couple of hours, which is ridiculous when you're on call and need to sleep. This was one of my first prime objectives when I started: to improve quality of life so we don't have as much turnover in our engineering support staff.

What is most valuable?

At the top of the list of most valuable features is the ability to modify and add data sources, to use other people's data sources, and the LM Exchange itself. It gives LogicMonitor a lot of flexibility. It gives the end user the ability to monitor just about anything that can connect to a network and send data, which is nice. You can take the data sources for what you are trying to do, then modify and adjust them to your new parameters or use cases. With a lot of other applications, you either don't have the option at all (because you have to use what they have out-of-the-box) or it takes a lot of work to enable monitoring something new. That is the best thing about being an administrator of LogicMonitor.

I have written my own data sources in a number of cases. We have also leveraged existing data sources and modified them to fit our specific cases. We don't typically publish them, but I know with the LM Exchange that it's becoming easier to do that.

I know management very much likes the dashboard presentations that LogicMonitor has. They are very comprehensive. You can pull in other things and add them in as a widget. You can see more than just what is in LogicMonitor, as it gives a single pane of glass for whatever management is interested in or whatever environment they're looking at when they are monitoring software metrics. Then, it is presented all in one location, which is really nice.

We have SLAs for uptime across all our infrastructure: hardware, servers, and storage. I have spun up a number of services based on the specific metrics for all those devices, then determined SLAs based on the uptime of those metrics. We have a nice SLA dashboard that shows the uptime of all of our environments, so when my manager or his manager comes to me and asks, "What was the uptime of our environments, or of this area in storage?" I can quickly look at the dashboard and tell him. I really like that feature.

Another dashboard that we find valuable is environmental health. We have a number of dashboards for all of our products. We have product teams for whom we created dashboards to look at the product, not just what is happening now or what happened in the past, e.g., what is currently having an issue. We also use it for forecasting, where we potentially might see an issue with storage, a server with a CPU that generally runs high, or an increasing trend in network traffic on the pipe. The environmental health dashboards have helped us stay ahead of potential issues coming down the road and ensure we had uptime for our customers' environments.

LogicMonitor has the flexibility to monitor networking gear as well as handle our unique environment: servers, hardware, cloud, and Kubernetes. There are a lot of features that we like about LogicMonitor.

I would rate it a nine out of 10 in terms of alerting. It is doing everything that we wanted it to do. We did a lot of tweaking in the last year and a half. In the last two years, since I have gotten really familiar with the product, I have been able to mesh with the teams to learn what we need to alert on. Previous to my arrival, we were sending a lot of alerts to teams, waking them up in the middle of the night. We have cleaned up a bit of their garbage so we are pretty clean in terms of what we're alerting on. It is doing a good job of letting us know when there is a problem in the environment, which is nice. 

What needs improvement?

I have struggled a bit with the SLA calculations, though, because I have had some issues with reports showing no data. However, I have worked around those issues and we have a solid process for reporting SLAs.

Automated remediation of issues has room for improvement. I don't know how best to handle it, but I know that they're kind of working on it. I know there are some resources that can do automated remediation. I would like them to improve this area so it could be completely hands-free, where it detects an issue, such as a CPU running high, and remediates it. There are ways to do it even now, but it's a bit more involved. Also, for LogicMonitor to make that call, it really depends upon the hardware and environment that the program is running on.

In terms of alerting, there are times when we get alert storms because one device fails on an interface that has a number of things on it. Even if only one of the five things on the interface fails, everything on the interface will alert.

I would like it to be able to create network maps and connectivity structures so you don't have to do it manually. This piece hasn't been a big hitch for us, but I imagine there are other customers who would really like to see the mapping piece grow and become a little more automated.

For how long have I used the solution?

I personally have been using it for almost three years. The company has been using it for six years.

What do I think about the stability of the solution?

The stability is very good. There are times when we get specific alerts based on if there are issues with this piece or that, but those generally haven't affected us. 

What do I think about the scalability of the solution?

It can handle scaling. It is like any other cloud service: there is a cost associated with scaling, so we currently don't monitor all of our environments. We monitor just the customer-facing production environments. It would be nice if we could monitor the rest of our environments, but we would have to pay a lot more due to the scaling cost. So, there's a balance between what we would like and what we are willing to pay for.

We have had issues in the past with data collection. Maybe it is due to pushing the limits of what LogicMonitor can do, or even of the devices it's monitoring. For example, we have a couple of F5s that are heavily used, with a number of data sources on them, and SNMP couldn't actually pull all the information back in time, which was causing blind spots.

We have probably close to 100 users who use LogicMonitor, not all of them on a regular basis:

  • We have infrastructure engineers who maintain the infrastructure of our environment.
  • We have product engineers who maintain the IT server environments for the products. They work closely together with the infrastructure engineers.
  • We have our automation team and DevOps team who use LogicMonitor to do performance modeling on their environment and learn the automation processes that they have. They also use the API fairly heavily. 
  • We have software engineers on the teams who are monitoring specific server processes.

There are heavier and lighter users in all those areas. We have primary admins who administer LogicMonitor, and we're the heaviest users of it.

How are customer service and technical support?

Their technical support is very good. When we have an issue, they are usually knowledgeable enough to handle it. If not, they at least know what the issue is. It seems like they're sitting right next to a DevOps software engineer because it doesn't take them long to escalate to the developers. They are very good at getting back to us. I would give them 10 out of 10 in terms of their response.

Which solution did I use previously and why did I switch?

LogicMonitor has become our standard across all the products. Each product basically came from an acquisition and arrived with its own tools, and we have gotten rid of all those other tools, e.g., we got rid of Datadog recently and phased out Splunk. A lot of that happened before I joined.

How was the initial setup?

I was not involved in the initial setup.

I was at the company when we enabled cloud and Kubernetes monitoring, which was a fair amount of work to pull that information in and reconfigure the cloud devices. We had them monitored as regular resources but needed to migrate them over to being monitored as cloud devices. It was a fair amount of work with no good way to automate it.

What was our ROI?

We haven't had as big a cost for downtime, so that has saved us a lot of money.

I am on a call every Monday where we evaluate all the alerting that has been done in the previous week. We have gone from constant complaints two years ago down to basically nothing.

When we spin up new servers and network devices, we have NetScans going on in LogicMonitor: a weekly scan on each subnet. If it detects a new device, it will look it up in DNS. From there, we have everything named appropriately, in a way where LogicMonitor can, using property sources, figure out who the device belongs to and what the device does. This is in addition to the standard SNMP network monitoring it does on the device to determine what it is. It uses that information, along with the name and property sources, to automatically assign where that device goes in our resource tree, then starts polling that device. That has been a lot of work, but it has been very fruitful in terms of being hands-free and hands-off when bringing new devices into LogicMonitor. This saves us about five man-hours a week.
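As a rough illustration of how a naming convention can drive that kind of auto-assignment, here is a hypothetical sketch. The hostname format and property names are invented for the example; in LogicMonitor itself this logic would live in a PropertySource (typically Groovy or PowerShell) rather than standalone Python:

```python
def properties_from_hostname(hostname):
    """Derive resource-tree properties from a DNS name, assuming a
    hypothetical <customer>-<role>-<site>-<nn> naming convention."""
    short = hostname.split(".", 1)[0]  # drop the domain suffix
    parts = short.split("-")
    if len(parts) < 4:
        raise ValueError("hostname {!r} does not match the convention".format(hostname))
    customer, role, site = parts[0], parts[1], parts[2]
    return {
        "custom.customer": customer,                   # who the device belongs to
        "custom.role": role,                           # what the device does
        "custom.site": site,                           # where it lives
        "custom.group": "{}/{}/{}".format(customer, site, role),  # target group path
    }
```

With properties like these set, dynamic groups can file each newly scanned device into the right branch of the resource tree without manual sorting.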

Which other solutions did I evaluate?

When we were evaluating software packages (and we were already using LogicMonitor at that point), LogicMonitor became one of the few solutions that ended up on our short list because it can handle cloud and on-prem. They are really good at both. Solutions like Datadog don't give you the option to monitor on-prem hardware. They assume that you are just in the cloud, because why would anyone be on-prem when the cloud is available and you can spend a lot of money there?

What other advice do I have?

We have used dynamic thresholds in only a couple of cases. We didn't necessarily see the application of dynamic thresholds when looking at critical alerts, so we haven't used that a whole lot. Also, we haven't really leveraged the AI pieces of LogicMonitor. We are at a point with our tuning where we haven't needed to. If teams started complaining about specific alerts, like specific servers showing increasing or decreasing trends, then we would probably do it, but we have been able to handle those concerns with static thresholds at this point.

I would rate the solution a nine out of 10.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Douglas Hoover - PeerSpot reviewer
IT Systems Engineer at a manufacturing company with 1,001-5,000 employees
Real User
We can more finely tune the details of our monitoring
Pros and Cons
  • "The alerting would be number one in my book. The thresholds for getting alerts for different criteria are pretty well-thought-out. We don't get many false positives or negatives on the alerting side. If we do get an email alert or some similar alert, we know that it is something that has to be looked at."
  • "Some more application performance type monitoring would be nice. For example, an APM type solution, which would not necessarily completely replace it, but be able to tie into to what we're seeing on the application performance side so we can correlate what's going on with the application versus the underlying infrastructure."

What is our primary use case?

The biggest things are infrastructure monitoring and alerting. This is mostly for our virtual machines, but it is also for other networking equipment and a few other pieces as well.

We are on the newest update. It is a mix between on-premises Collectors and their software as a service (SaaS) platform. Our Collectors are also on the newest version right now. While they don't have to be on the newest version, they need to be pretty close to it to work properly.

How has it helped my organization?

We have used the solution’s ability to customize data sources to a small degree. We are able to more finely tune all the details of what we are monitoring. This comes down to the false negatives or positives, and being able to alert on the actual details that we want to be alerted on.

What is most valuable?

The alerting would be number one in my book. The thresholds for getting alerts for different criteria are pretty well-thought-out. We don't get many false positives or negatives on the alerting side. If we do get an email alert or some similar alert, we know that it is something that has to be looked at.

I built a remote workforce dashboard, which is my favorite dashboard. When the company pretty much all started working from home, I put together a lot of different graphs of types of infrastructure pieces necessary for users to be able to work from home and put those all onto one dashboard. Therefore, at a glance, we could view the health to make sure anybody working remotely would be in good shape and be able to work successfully.

The reporting capabilities are pretty effective, if you know what you are looking for. We don't use the reporting features a whole lot. However, when I have gone in to create reports, as long as you know what you want to be included in the report, it's definitely pretty quick and easy to get the reporting started.

What needs improvement?

Some more application performance type monitoring would be nice. For example, an APM type solution, which would not necessarily completely replace it, but be able to tie into what we're seeing on the application performance side so we can correlate what's going on with the application versus the underlying infrastructure.

For how long have I used the solution?

Our company has been using it for four to five years. I personally have been using it within the company for about three years.

What do I think about the stability of the solution?

It's been highly stable. We have had two brief outages, which lasted less than an hour, in the three years that I have worked with them.

We have two people (at any time) dedicated to deployment and maintenance. They are our systems engineers.

What do I think about the scalability of the solution?

It is easily scalable.

We have about 20 users working with the solution who are mostly systems engineers. We also have some DevOps engineers and a few software architects who use it.

We have 1000 resources that we are monitoring with a couple hundred websites. As our company grows, we do plan to increase usage, but nothing major. It will probably be about a 10 percent increase over the next year or two.

How are customer service and technical support?

I'd rate the technical support pretty highly. The few times that we have had to put in a ticket for support, they have been very helpful. Every time that I can remember, the issue has always been something on the actual resource being monitored. While not technically LogicMonitor's fault, they were still able to help us quickly identify and resolve it.

Twice in the last three years, we have had brief outages between LogicMonitor and our solution. We received phone calls almost immediately from LogicMonitor indicating this. It was a very quick reaction. We know the issue isn't on our side, which is good, in these particular cases.

Which solution did I use previously and why did I switch?

LogicMonitor was able to replace at least two different solutions that we had in the past for monitoring, one of them being Logicworks. LogicMonitor was able to monitor a wide variety of websites, devices, and virtual machines. We were able to consolidate some of our monitoring, so we have one single source now instead of multiple.

How was the initial setup?

The solution’s automated and agentless discovery, deployment, and configuration has been helpful. I've used some of the automated discovery, especially when we've changed data centers and put a bunch of new hosts into our data center. I used their discovery tool. It was able to find and pull in most of the resources that we actually wanted. Even though I wasn't there for the initial deployment, for the times that I have used it, it seems very helpful. There are still some manual processes and checking that we do, but it has helped out a lot.

Out-of-the-box, it was able to monitor vSphere virtual machines, which was the biggest for us. We also have network load balancers, switches, and firewalls that it was able to pull in. We had to do very little to get it monitoring and reporting correctly.

What was our ROI?

We have definitely seen ROI with LogicMonitor. We used to provide 24/7 IT support for our users. We have since been able to change to operating just within normal business hours for IT support, and LogicMonitor was a large part of being able to accomplish that.

LogicMonitor has reduced our number of false positives compared to how many we were getting with other monitoring platforms. We have seen a 50 percent reduction in false positives, possibly more.

What other advice do I have?

It really just comes down to making sure that we're getting alerts on something that actually does need attention.

We're starting to look into the solution’s Artificial Intelligence for IT Operations (AIOps) capabilities for things like anomaly detection, root cause analysis, or dynamic thresholds to see if it might be useful for some of our services.

Take a look at your environment and at what level of detail you will need for monitoring. One of the advantages of LogicMonitor is that you can monitor just your vSphere environment without monitoring the individual VMs within it, and you still get a lot of detail about those VMs as instances. If you put a VM in as a resource instead of an instance, you get a lot more granularity on the operating system side in what you can look at. However, monitoring your vSphere environment alone gives you a surprising amount of detail.

The biggest lesson I've learned is you need to understand what role your different devices play in your infrastructure in order to successfully monitor them. Get a detailed list of the devices that you do have in your environment that you want monitored and why you want them monitored. The why you want them monitored will tell you what different things you might want to be alerted on because LogicMonitor will collect a lot of information about your devices. Narrowing down what you actually want to be alerted on is the important part.

I would rate the solution as a nine out of 10.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Paul Dinapoli - PeerSpot reviewer
Sr. Systems Engineer, Infrastructure at NWEA
Real User
Improved our organization with its capacity planning
Pros and Cons
  • "It has improved our organization with its capacity planning. We have a performance environment that we use to benchmark our applications. We use it to say, "Okay, at a certain level of concurrency, we know where our application will fall over." Therefore, we are using LogicMonitor dashboards to tell us that we're good. Our platform can handle X number of clients concurrently hitting us at a time."
  • "The ease of use with data source tuning could be improved. That can get hairy quickly. When I reach out for help, it's usually around a data source or event source configuration. That can get challenging."

What is our primary use case?

We are using the solution for on-prem, all our applications, and network monitoring. It fits everything. We use it for monitoring and reporting on our ESX, Pure Storage, Cisco, F5, Palo Alto environments. We also use it for alerting, graphing, and capacity planning. We use it for everything.

We are using the latest version. We have LogicMonitor Collectors onsite in our data center, but the dashboard and everything else is all the cloud model. We use both AWS and Azure as our cloud providers.

How has it helped my organization?

It has improved our organization with its capacity planning. We have a performance environment that we use to benchmark our applications. We use it to say, "Okay, at a certain level of concurrency, we know where our application will fall over." Therefore, we are using LogicMonitor dashboards to tell us that we're good. Our platform can handle X number of clients concurrently hitting us at a time. That's how we use it to size our business, e.g., size our ESX environment and Internet pipes. 

Our capacity planning team consumes the data on the dashboards. The bread and butter of using the data in the dashboards is to inform, "Hey, what upgrades do we need to make in six months?" So, that data gets consumed regularly by other teams.

In the three and a half years that I've been using it, we haven't had false positives. I'm the primary network engineer, so I can say with confidence, "We have the environment tuned to the point where we don't get false positives."

What is most valuable?

Its historical reporting: I can go into my production F5s and look at the CPU, memory transactions, application transactions, and bandwidth utilization. Then, I can use all of the graphing metrics. I can have a dashboard for my production environment and all of my critical elements where I can graph utilization over time and use it for capacity planning. It's a single pane of glass for everything about your environment health.

We build our own dashboards, creating dashboards for our various environments. It is all written in HTML5, so it's super easy to drag and drop, move things around, expand, and change dates. It's awesome. We can get as detailed as we want or roll up to a manager/director level. I like its ease of use.

I don't do much with reporting because the dashboards are good enough that they tell the story. I haven't actually clicked on the reports tab in quite a while, so we're probably underutilizing that. If you just go into a dashboard and say, "Show me my F5 health for the last six months," the dashboard is good enough for that.

I have custom data sources for various things. With data sources, you can go down the rabbit hole real quick because they're very powerful. You can go to the LM Exchange, grab data sources, pull them down into your installation, and then tweak them. The idea of a data source is that it matches. For example, I might have a collection of Cisco devices along with collections of F5 and Palo Alto devices. There are generic match criteria that say, "Is a Cisco. Is an F5. Is a Palo Alto." However, there are also all these other match conditions. You can build regex filters or match on 10 Gigabit Ethernet but not 1 Gigabit Ethernet. You can get super deep in the weeds, and it can get complicated pretty quick, but their support is fantastic.

The solution provides us with granular alert-tuning for devices. E.g., I can use it for application website checks, where I can set up an automated check from a bunch of different test locations. If I want to check my application, I can ping it from five locations. I can tune the data source so that if the millisecond response time is ever greater than 500 milliseconds, it lets me know. I can also tune it so it won't alert me on one fail, but will alert me on three fails. For any data source that you're collecting, you can set thresholds for notice, warning, and critical, and what to do if it fails one, two, or three times. You can just go crazy tuning it.
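The "alert on the third consecutive fail, not the first" behaviour described above boils down to a small state machine. This is an illustrative sketch of the logic, not LogicMonitor's implementation; the 500 ms threshold and three-fail trigger are the values from the example:

```python
class ConsecutiveFailAlert:
    """Fire an alert only after `trigger` consecutive samples breach the threshold."""

    def __init__(self, threshold_ms=500.0, trigger=3):
        self.threshold_ms = threshold_ms  # breach level, in milliseconds
        self.trigger = trigger            # consecutive breaches required to alert
        self.fails = 0

    def sample(self, response_ms):
        """Feed one response-time sample; return True when an alert should fire."""
        if response_ms > self.threshold_ms:
            self.fails += 1
        else:
            self.fails = 0  # any healthy sample resets the streak
        return self.fails >= self.trigger
```

So two slow checks followed by a fast one stay quiet, while three slow checks in a row raise the alert, which is exactly the kind of tuning that keeps transient blips from paging anyone.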

We found the solution monitors most devices out-of-the-box, such as F5, Cisco, Palo Alto, ESX, Pure Storage, Windows database connectors, ActiveBatch, and Rubrik.

What needs improvement?

The ease of use with data source tuning could be improved. That can get hairy quickly. When I reach out for help, it's usually around a data source or event source configuration. That can get challenging.

For how long have I used the solution?

I joined NWEA about three years ago and was new to LogicMonitor at that time. Three and a half years is how long I've been using it.

What do I think about the stability of the solution?

The stability is perfect. It is 100 percent.

Right now, five or six people across the organization collectively administer it. It doesn't take day-to-day massaging.

What do I think about the scalability of the solution?

We have close to 50 users utilizing the solution. It's mostly a production/operations audience. My Ops team has a couple hundred people, but I doubt that many of them would be consuming the dashboards on a regular basis.

The product is extensively being used. It's completely a part of our production environment. We couldn't maintain our environment without it. It's production-impacting.

I've never been presented with a scenario where it didn't scale.

How are customer service and technical support?

Their support is fantastic. The support is always super friendly and helpful.

From the dashboard, you click support and chat with an engineer, saying, "I'm trying to clone this data source that already exists, and I want to tweak it so it only applies to interfaces with this tag." You can clone a data source, tweak it to match what you want, negate the things you don't want, and then you have a new data source. You can take all of their stuff out-of-the-box, and it generally works; then you tweak it as needed. So, data sources are pretty easy to use.

Which solution did I use previously and why did I switch?

I think my team was using Nagios before. That's just a burning trash heap of an old application.

In my organization, as a whole, we have many chefs in the kitchen. We, the infrastructure team, picked LogicMonitor, then moved all our stuff to it. However, the database team still relies on Nagios because they're like dinosaurs. DevOps uses Sensu, Prometheus, collectd, a SIEM, and a laundry list of others. The only reason LogicMonitor hasn't consolidated everything is that our teams have the freedom to choose their own tools, and we do. Unfortunately, we tend to overspend on duplicate functionality. I don't think it's because LogicMonitor can't do it; rather, because the infrastructure team picked it, the DevOps team was like, "Well, that's your guys' tool. You guys use it. We're going to go pick our own thing." We were like, "Okay, go ahead."

How was the initial setup?

I know that we have added extra Collectors, and it's super simple. We get to a point where we have too many instances on a Collector and it starts working too hard, because it's just a VM. So we spin up another Linux VM, download the Collector code, install it, and have another Collector running in 30 minutes. It's pretty straightforward. We add Collectors fairly regularly, and it's pretty easy.
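Deciding when a Collector is overloaded is essentially a capacity calculation. A rough sketch of that sizing decision follows; the per-Collector instance limit here is an invented placeholder, since real capacity depends on Collector sizing and what kinds of instances it polls.

```python
import math

def collectors_needed(total_instances, max_per_collector=10_000):
    """Minimum number of Collector VMs for a given instance count.

    `max_per_collector` is an illustrative ceiling, not a LogicMonitor figure.
    """
    return max(1, math.ceil(total_instances / max_per_collector))

print(collectors_needed(8_000))   # 1
print(collectors_needed(23_500))  # 3
```

In practice the reviewer's approach is reactive (spin up a new VM when an existing Collector "starts working too hard"), but the same ceiling-division logic applies when planning ahead.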

I know getting it installed is not that big of a deal, but getting things migrated off of old stuff can be time consuming. However, I wasn't around for it.

If we were implementing LogicMonitor now, we would need to identify when to pull the plug on Nagios, then identify what we wanted to monitor so we were not running duplicates.

What about the implementation team?

One person is needed for a new LogicMonitor deployment.

What was our ROI?

We use LogicMonitor for our alerting and integrate it with PagerDuty for on-call paging. That is key to operational uptime. We live and die by the number of SEV-1, SEV-2, and SEV-3 incidents, outages, and uptime. It is absolutely critical that LogicMonitor alerts PagerDuty, which alerts the on-call. We are reducing the impact of incidents by using the tool to alert on incidents that we can respond to.

What's my experience with pricing, setup cost, and licensing?

I don't know what we spend on LogicMonitor, but I know that Cisco Prime is a multiple six-figure solution. Therefore, I know we are saving at least several hundred thousand dollars in that we're not buying Cisco Prime.

We pay for the enterprise tech support.

Which other solutions did I evaluate?

The organization I came from had a huge SolarWinds deployment. We also used Nagios, Cacti, and OpenNMS, which is an open source NMS platform. Unfortunately, I've had to do some work with Cisco Prime as well, which used to be called Cisco Works. I installed Cisco Prime for a handful of clients in a past life.

  • Pros of LogicMonitor: Ease of installation and use. 
  • Cons of LogicMonitor: Tuning data sources can be a bit labor-intensive. However, once you get it set up, it's pretty straightforward. 

Having worked with OpenNMS, Cisco Prime, and SolarWinds, I can say the cost and complexity of those solutions is ridiculous. I would never advocate going back to that black hole.

What other advice do I have?

We're fairly self-sufficient. We already use Puppet for automation, and we're starting to move some workloads to Ansible. However, we wouldn't ask LogicMonitor to help us with automation.

Biggest lesson learnt: know what you want to monitor and what threshold you want to alert on. For example, if you don't do anything and just start monitoring out-of-the-box, it works. However, if you don't set thresholds, it's not telling you when to take action. So if you just add things to LM and start monitoring them, you're not done. Until you've set a threshold at which something is actionable, you haven't really finished the job. That's my experience at NWEA. You can click on anything that we've been monitoring, and if you don't have any thresholds set, you're just making pretty graphs.

I would rate the solution as a 10 (out of 10). I am a fan of the product. It's great.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Subbarao Punnamaraju - PeerSpot reviewer
IT Operations Manager at a university with 10,001+ employees
Real User
Clear escalation chains mean the right people are alerted, decreasing resource usage and helping with planning
Pros and Cons
  • "Another feature from the technical aspect, the back-end, is the ability to allow individual users or customers to have their own APIs. They're able to make changes using the plugins covered by LogicMonitor. That is a very powerful feature that is more attractive to our techno-savvy customers."
  • "The dashboards can be improved. They are good, but there is a pain point. To show things to management, to explain pain points to other customers, to show them exactly where we can do better, the dashboarding could be better. Dashboards need to show the key things. Nobody is going to go into the ample details of Excel sheets or HTML."

What is our primary use case?

We use it to make sure that proper tuning is done for the existing monitoring.

In addition, our university has a number of schools and each is a customer of the main IT organization that manages and provides support for all the colleges, like the law school, the business school, the medical school, the arts school, etc. The goal, and one of the main use cases that we were planning and thinking about, was to be able to onboard all the devices, all the applications, all the databases, as required by individual schools.

We also wanted them to be able to create their own dashboard, tweak it, manage it, delete from it, and add to it. 

It's deployed as a SaaS model. LogicMonitor is out in the cloud.

How has it helped my organization?

When we were using Nagios, we had alerts, but there was only red, yellow, and green. Here, the good thing is that you have escalation levels one, two, and three, which are clearly defined, along with what action needs to be taken at each level. The clear escalation chain and tuning help, because we don't want to wake up the director for 80 percent of the cases. That would be ridiculous. But when necessary, the right people should be alerted, especially for the production environment. If something has been "red" or there has been no interaction for half an hour, it's important to know that and to take the necessary actions.

That's a key thing, being a production-operations team member, because I don't want my team to be flooded with all the noise of alerts for something which can be tackled by a specific team. Having escalation chains, so that the alert goes to the right team to look into that and take action, means the prod-ops team doesn't need to even look into it. We don't even need to ticket it. We only keep aware of it through the daily alert dashboards. That has made a big difference in our overall resource planning, because previously we had 400 to 450 daily alerts. By using this feature we cut that down to 150 to 200 which are "candidate alerts" that production-operations needs to take action on. They may require creating a ticket, or calling the right people, or doing some activity that needs intervention or escalation to the next level. We have been able to cut down on our resources. We don't need to have four members actively looking into the dashboard. We can validate things with one or two employees.

LogicMonitor has also helped to consolidate the number of monitoring tools we need. We had some third-party monitoring, four or five things, and they're all consolidated with LogicMonitor. The only exception is IBM Tivoli Workload Scheduler. But what we did was we integrated that via Slack. I'm not really sure why we weren't able to consolidate TWS. The plan is to get rid of TWS, but we could not do so immediately, until there is an alternate route. But apart from that, everything has been consolidated using LogicMonitor.

We were especially able to consolidate third-party cloud monitoring for AWS. There were discussions about how we could also integrate or combine Azure monitoring resources through LogicMonitor. The team has mentioned that it has plug-ins that it can use to combine that. We also had separate backup scheduling software, a tool that had separate monitoring, and that has also been combined with LogicMonitor.

And LogicMonitor has absolutely reduced the number of false positives compared to how many we were getting with other monitoring platforms. At a minimum, they have been reduced by 50 percent. Further tuning, as we went through the learning curve, helped bring them down. Within the first two or three months, we were able to bring the false positives down by 50 percent. That's a big achievement, and it is the main reason we initiated this project of moving to LogicMonitor. There have been further talks internally about how we can eliminate more of them and bring the number down by 70 percent compared to what we were getting. That's our goal. So far, it has reduced the time we used to spend on them by 50 percent, both offshore and onsite, as we have an offshore team in India that works 24/7. We used to have multiple people in each shift, and we have reduced that to a single person per shift. That's a big step in the right direction.

What is most valuable?

Tuning is one of the main components. We like to make sure that only the right alerts are escalated, and that alerts are being sent to the right members, as opposed to every alert being broadcast to everybody. The main thing is the escalation chains. We feel that is a very good thing, rather than sending all the information to everybody at each level. Having the ability to make those sorts of changes doesn't require you to do too much, out-of-the-box. You just need to create the basic entities, like who are the different people, who are the contacts, or email groups, and cover the data source and events which should be alerted.
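Conceptually, an escalation chain is staged routing: each stage names who gets notified, and an unacknowledged alert advances to the next stage after a delay. The sketch below uses invented stage names, roles, and delays; in LogicMonitor this is configured in the product, not written in code.

```python
# Hypothetical escalation chain: stage order, recipients, and delays are
# illustrative only.
ESCALATION_CHAIN = [
    {"stage": 1, "notify": ["oncall-engineer"], "after_minutes": 0},
    {"stage": 2, "notify": ["team-lead"], "after_minutes": 15},
    {"stage": 3, "notify": ["director"], "after_minutes": 30},
]

def recipients_at(minutes_unacknowledged):
    """Everyone who should have been notified by this point in the chain."""
    notified = []
    for stage in ESCALATION_CHAIN:
        if minutes_unacknowledged >= stage["after_minutes"]:
            notified.extend(stage["notify"])
    return notified

print(recipients_at(5))   # ['oncall-engineer']
print(recipients_at(40))  # ['oncall-engineer', 'team-lead', 'director']
```

This is why the director is only paged for the small fraction of alerts that go unhandled long enough to reach the final stage.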

Another feature from the technical aspect, the back-end, is the ability to allow individual users or customers to have their own APIs. They're able to make changes using the plugins covered by LogicMonitor. That is a very powerful feature that is more attractive to our techno-savvy customers.

In terms of basic functionality, from a normal user's perspective, the escalation chains and the tuning part that are embedded in LogicMonitor are the two most important things.

Among my favorite dashboards are the alert dashboards. Being a prod-ops team, we took the out-of-the-box alerts dashboard given by LogicMonitor and we have kept on tweaking it by adding more columns and more data points. The alert dashboard is something which is very key for us as a team. In general, it gives us more in-depth information about uptime, the SLAs, etc. LogicMonitor has done a good job of providing very user-friendly dashboards, out-of-the-box. There are so many things that we are still learning about it, how we can use it better, but the alerts dashboard is my favorite.

Reporting is something I have explored, to send me a daily email with how many alerts there were, in particular how many critical alerts. It's a good starting point. Reports can be sent in both HTML and Excel and are accessible on the dashboard after you log in. These two things are very good. This was the first feature I looked at once we went live, because I want to know things on a day-to-day and weekly basis. I activated the email feature because I want daily, weekly, and monthly reports of my alert dashboard data.

We use LogicMonitor's ability to customize data sources and it's a must, because ours is a very heterogeneous, complex environment. Changing data sources is important for at least some of the deployments. For other organizations, it may not really be required to change the default data sources provided by LogicMonitor. But here, it was important to change them. That's where the capabilities of the embedded APIs really helped us. I'm not part of the team that makes those changes, but I worked actively with the teams that did, and I always got very positive feedback from them on how they would get the right answers from LogicMonitor. They had to make a lot of changes to the data sources, for each customer, and it worked out well.

What needs improvement?

There are a few things that could have been done better with the reporting. It could have a more graphical interface.

The dashboards can be improved. They are good, but there is a pain point. To show things to management, to explain pain points to other customers, to show them exactly where we can do better, the dashboarding could be better. Dashboards need to show the key things. Nobody is going to go into the ample details of Excel sheets or HTML.

Automation can also be improved. 

Finally, while this is a very good tool for monitoring and responding, if they could integrate both monitoring and alerting, doing what PagerDuty or another third-party alerting solution does, that would be an ideal scenario.

For how long have I used the solution?

I have been using LogicMonitor for close to a year. If I remember correctly, LogicMonitor was implemented in my organization as a replacement for Nagios. I was actively involved in that project right from the beginning of verification through going live. In the initial stages we may not have been actively using it, but we started learning about the tool and how to implement it about a year ago.

What do I think about the stability of the solution?

Overall, the stability has been good. We didn't have any issues during the phase after we set up and went live. 

The performance was also pretty good. We didn't have to wait for a response for any of the attributes on the dashboard or reporting.

LogicMonitor has the ability to alert you if the cloud loses contact with the on-prem collectors. We had a challenge within one or two months of deployment. The problem was the way we were using the collectors. We were actually using our Nagios server as one of the collectors. We were trying to eliminate that server altogether, because it was giving duplicate alerts.

Initially, we had a challenge of not getting any alerts when the connection to the collector was lost. Later on, we found that routing table or firewall changes were needed. I would attribute that more to the learning curve and to learning what the best practices are.

Since correcting that problem, we haven't had an issue of any collector being down. There's no question about any of the alerting.

What do I think about the scalability of the solution?

When we provided information about the number of servers, end-users, and networks that were part of Nagios back then, the impression we got was that LogicMonitor could expand to double that if things were to grow. There is scalability in that environment to support a big data buffer, so there should not be any problem with scalability.

In terms of DR, discussions are still going on as to what would happen if there were a disaster. 

As a whole, the organization has to use a monitoring tool. It could be Nagios, it could be LogicMonitor. There was a phase in which most of the schools were using both in parallel. But one after another, they are all happy to be using LogicMonitor. Usage-wise now, it's only LogicMonitor. Nagios has been cut down, so nobody is looking for any monitoring system apart from LogicMonitor.

There are some schools that still need to tweak it and tune it, because they have not given it much attention or have not really been required to actively monitor their solutions. We know where the priorities are, which school is the top priority and which schools were using Nagios more actively. But all the major customers that were using Nagios, once we unplugged it, have been happy with the LogicMonitor implementation. There are a few schools which are not actively using any monitoring system. They may get to the stage of actively using it, but, university-wide, everybody is using LogicMonitor. There is no other monitoring tool out there.

How are customer service and technical support?

We have evolved and kept making changes per the requirements of the customers, and one good thing about LogicMonitor is that it has a very good support system. We have had chat sessions with them to ask questions, which helps each school, and the IT organization as a whole, evolve a better monitoring and alerting tool.

The way LogicMonitor support responded during our initial setup was amazing. That's something I really enjoyed a lot. They never said something like, "This question should not be asked," or "This question is not a candidate for the chat session." For every question, we would get a reasonably quick answer that we could implement right away. They would also log in remotely and help if something was beyond an individual's capability. That helped us migrate and complete the process more quickly. LogicMonitor has a highly talented support team that can answer questions and help the customer right away. It's been wonderful.

I don't see that happening with all vendors. With other organizations, when you submit questions in the chat session, they'll take the request and they'll say, "Okay, we'll get back to you." LogicMonitor — and it's a differentiating factor — is there to provide solutions right away, rather than putting it into their ticketing system and escalating to level-2 and to level-3.

I really don't know if that level of service is only for specific customers, based on the contractual terms and conditions, or if it is the way they do it for everybody. If this is the way they do it for every customer, they should definitely be very proud of the way they are doing it. Their team is there to help support the customer instantly, versus taking their own sweet time.

I would encourage LogicMonitor to continue that same level of expertise, of people being there 24/7 to support customers. That would be a big differentiating factor compared to competitors.

Which solution did I use previously and why did I switch?

The main reason for migrating to LogicMonitor from Nagios was to eliminate the noise of alerts. It may have been because alerts were not properly tuned, but the visibility with Nagios was not complete. It became a bottleneck. 

Only one or two people had active access to tune things. If anything had to be done, there was just one guy who had to do it. We wanted to move towards a self-managed model. LogicMonitor is a solution which can be in that category, once it's deployed and there is a transfer of knowledge to each school.

We want each department to self-manage: manage their own dashboards and create their own reports based on their requirements. If they have a new device coming up, they can spin up a new AWS instance and onboard that, etc. It's the initial phase which is going to be challenging. But once we have the handover call with the individual customer, it's going to be easy, and that was not possible in Nagios.

We also wanted to have a proper escalation chain, which was not present in Nagios. That's something we have made use of in LogicMonitor.

Finally, we switched to use fewer resources and to speed up turnaround.

How was the initial setup?

The initial setup is complex. It's very finicky. I'm a hands-on technical guy. I don't call myself an SME, but I know everything from networking, servers, databases, and firewalls to clustering, support, and operations. The initial phase is definitely a little bumpy for somebody who's not completely technically savvy. I understand that it's because there are so many features involved, and there are so many ways of onboarding and using the custom APIs, etc. To me, LogicMonitor looks like a very technically savvy company. There's good and bad in that. It depends on how you look at it.

The automated and agentless discovery, deployment, and configuration are good. We used that a lot initially. They did a good job with that. One thing that could be done is to make the naming conventions, adding different names like the IPs and the DNS lookups, a little better. They could eliminate some of the duplicate entries during onboarding. I saw a lot of duplicate entries, which count against licensing. Apart from that, the way they let you provide a template or a flat file to the system for onboarding is good.

As for monitoring things out-of-the-box, it seemed that our database team spent more time configuring stuff, whether MySQL, Oracle, etc. Now, LogicMonitor has come up with a very easy way of configuring and monitoring database components out-of-the-box. But that's something I felt was a little bit of a pain point. I don't know whether our team made it more complicated or LogicMonitor didn't handle it out-of-the-box.

Apart from that, LogicMonitor has done a good job of out-of-the-box monitoring of the basic resources within the servers — memory, CPU, disk configuration, etc. — as well as for HTTP, the web components.

While I wasn't actively involved in the planning for the implementation, I picked up things from the team which was actively involved in planning and implementation. The process was primarily to engage with LogicMonitor. Our team — the product owner and team members — worked together and was in touch with LogicMonitor to gather all the existing features that were available and how we would make use of all that. That was the initial phase during which we got to know the product completely.

We mapped all of the devices which were in Nagios to make sure we onboarded everything that was in Nagios to LogicMonitor.

We had several internal discussions where we told the schools how we were actively engaging with LogicMonitor to make sure that we would go in phases. The initial phase was knowledge-transfer, the second one was to onboard a school, or at least one application, to make sure that it was tested completely and then remove that from Nagios. We took time to make sure that they were getting proper monitoring and proper alerts, out-of-the-box.

While doing that, we found that there were a few things which were not properly configured in LogicMonitor, compared to Nagios. The goal was to improve on Nagios, minimize the false alerts, and have better features for reporting, dashboarding, escalation chains etc.

We had six to seven people actively involved in the process. Two to three were purely technical, and made use of LogicMonitor support very extensively, especially for some of the customized activities like using custom APIs. From the LogicMonitor side, there were two to three members from the front-office who were actively involved, and on the technical side they designated a couple of people whom we could directly contact on a day-to-day basis. We had a daily, separate session with each of our teams, like networking, business, operations, and DevOps, so that each team could ask questions about its pain points and get better information so that we could do things ourselves and, for things that were beyond us, to learn how they could help. We had a month of one-on-one sessions with them, every day, for two or three hours.

When we initially started the engagement with the LogicMonitor team, they came onsite to run a one-week session with all the key stakeholders: the customers, the technical team, and back-end operations team. That was a very useful session that helped kickstart things. At that point, not everybody knew completely how LogicMonitor works and how we could plan to migrate from Nagios to LogicMonitor. What were the things that we could retain? What were the things that we could just ignore? Overall, the exposure to LogicMonitor during that one-week phase, in terms of customer-engagement, was really a great experience for me. We also had the ability to quickly use the chat session online and ask questions.

The implementation team's role and its way of engaging with the customer was amazing. That's something which I really appreciated. That helped me. Once the engagement was over and the contract started, the online support was available. If we had a problem, we could type in our question or our problem right away. The support team would respond and fulfill our requirements. They would fix the problem.

Our deployment took two to three months. That includes the visits by the LogicMonitor team to do some knowledge transfer and give hands-on experience to some of the key stakeholders. But during that time, not all places within the university were onboarded. Some schools were not really interested; I don't think they were properly updated. That was more of an internal issue, because we were doing our own "selling" to tell them what the differences are between LogicMonitor and other tools. We had to tell them that Nagios was going to be pulled and that they would be completely in the dark if they did not move to LogicMonitor. So during those three months, there were still quite a few schools which had not migrated to LogicMonitor or had not onboarded all of their resources. But the majority of them were done in three months.

In terms of maintenance, we have three to four people involved. One guy was actively involved in the Nagios implementation and its maintenance; he was part of decommissioning that and has completely taken ownership of LogicMonitor's technical aspects. One person is the product owner who interacts with all the stakeholders, the different schools, to make sure their requirements are met using LogicMonitor. One is a manager. And there is a person from the business point of view, who provides his pain points and what they're seeing on a day-to-day basis. So those four people are actively dedicated, not to maintenance exactly, but to the day-to-day LogicMonitor work.

There are the users as well. Each school has its own applications and services that they offer internally. I don't have exact numbers but there are about 20 of them.

What was our ROI?

It allows us to accomplish more with less by minimizing the false alerts.

And by giving the "keys" to the individual owners, it makes things faster.

Also, as I mentioned, we don't need to have as many people in each monitoring shift, in the 24/7 environment. Previously, we had alerts that went to everybody and everybody was up and looking into why we had a given problem. Now that we are splitting the problems into different buckets, we are not tapping into all our resources' time. That's an area where we're saving. As a rough ballpark, we are saving about 50 percent of the resources from an operations perspective.

What's my experience with pricing, setup cost, and licensing?

We have a separate team involved in licensing. I wasn't involved in that.

Which other solutions did I evaluate?

I believe they evaluated two or three other tools, but I was not part of that process.

What other advice do I have?

For the initial phase, rather than having only one or two functional guys participating, it's always good to have one or two technical folks in the discussions. That helps a lot. You don't want surprises where an organization decides to go live with this tool and then realizes the technical side is not on board with the ideas of the functional team. That's something I can say based on my journey and experience.

Another thing that is important is to keep on having internal conversations; that you value and give importance to everybody. It's good to educate them. Use the help of the LogicMonitor support team for internal question/answer sessions and do anything that will help them feel more comfortable. It's not about two or three members being really happy with this. LogicMonitor is something which can only be successful in automation if all the key teams and team players are on the same page.

The biggest lesson has been how we could make everybody be part of the mission. Previously, monitoring used to be in the hands of one or two, and each of them had a lot of overhead to deal with. But by doing this, we have reduced the complaints from individuals and each stakeholder. They know how they're configured. They know what the escalation chain is, so they're confident. If there is something not working, it's because of the way they have it configured.

By doing this we have minimized the internal noise. We have given everyone the opportunity to know the pain involved in monitoring and what it takes to have a better monitoring system in place, and how each person can contribute and think outside the box. They know how to put into place the right parameters and the right numbers. Previously, 70 or 80 percent of things were escalated internally. There was no involvement of the particular customer. If there was a problem for a team, it was somebody's problem, not their problem. Now, it has all become their problem. This is a very high-level benefit of using tools like LogicMonitor, which involves everybody more.

I would give LogicMonitor an eight out of 10. There are a few things that LogicMonitor is also learning from their experience with the customer. Most of the customers are giving feedback to LogicMonitor for improvements and to make changes. I'm sure that very soon it will be a 10, but at this point in time, from my experience and journey, it's an eight.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Systems Engineer at a tech vendor with 201-500 employees
Real User
Saves us cost-wise in the amount of time we're not spending with false errors
Pros and Cons
  • "The solution’s overall reporting capabilities are pretty powerful compared to ones I have used previously. It seems like there are a lot of customizations you can put in, but some of the out-of-the-box reports are useful too, like user logon duration and website latency. Those types of things have been helpful and don't require a lot of changes, if any, to get useful content out of them. They have also been pretty easy to implement and use."
  • "It needs better access for customizing and adding monitoring from the repository. That would be helpful. It seems like you have to search through the forums to figure out which specific pieces you need for specific monitoring if it's a nonstandard piece of equipment or process. You have to hunt and find certain elements to get them in place. If they could make it a bit easier, rather than having to find the right six-digit code to put in so it implements, that would be helpful."

What is our primary use case?

We use it in a few different ways:

  • For general monitoring of operating systems. 
  • Leveraging some customized offerings, specifically for creating application monitoring. 
  • Some external site-to-site monitoring in various places, ensuring that our websites and external pieces are available over an Internet connection. 

How has it helped my organization?

It has given us a clearer view into our environment because it's able to look in and pull things off of the event viewer or log files. We have been able to build dashboards and drill down on things, which has helped improve our time to respond. Also, when specific conditions are met in a given log, we have been able to get in and take a look a lot faster, rather than trying to connect, parse through the log, and figure it out. It's able to flag that and work us toward a solution faster than normal.

We have a few custom data sources that we have defined, especially for our application. It is able to leverage a specific data source and build monitoring rather than just having it be a part of the general monitoring. It is segmented and customized for what we actually need, which has been pretty helpful.

Custom data sources have given us a bit more information from a point in time and historically viewpoint. In the console, it is easy to compare week-over-week or month-over-month traffic and numbers. As changes are made in the environment, we can look and have better historical knowledge, and say, "We started seeing this spike three months ago and this is the change we made," or, "We started seeing this CPU usage reduced after the last patch or software update." It lets us be able to compare and get a better insight into the environment over a longer period, rather than just at a point in time, when investigating an issue.

The solution has allowed us to have specific alerting for specific messages. If we know that message X on a notification means a given state has happened, we can set that to be either an email notification or a tracking notification. In the case of a log indicating a specific issue, we can have it send an email and let us know. Thus, we have a better, faster response. We also have an integration with PagerDuty, which allows us to make things very specific as to the level of intervention and the specific timing of that intervention. It has been nice to be able to customize that down to even a message type and timing metric.
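
This kind of routing can be driven through PagerDuty's Events API v2. Below is a minimal sketch in Python; the routing key, summary, and source values are placeholders, and the LogicMonitor-side wiring that would supply them is not shown:

```python
import json
import urllib.request

PD_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def build_pagerduty_event(routing_key, summary, source, severity="critical"):
    """Build an Events API v2 'trigger' payload for a matched log message."""
    return {
        "routing_key": routing_key,   # per-service integration key
        "event_action": "trigger",
        "payload": {
            "summary": summary,       # e.g. the log line that matched
            "source": source,         # host or collector that saw it
            "severity": severity,     # critical / error / warning / info
        },
    }

def send_event(event):
    """POST the event to PagerDuty (requires a real integration key)."""
    req = urllib.request.Request(
        PD_EVENTS_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # network call; not executed here

# Hypothetical values for illustration only:
event = build_pagerduty_event("YOUR_ROUTING_KEY",
                              "ERROR_X seen in app log",
                              "app-server-01")
```

Severity and routing key are where the "level of intervention" lives: different services or message types get different keys, so PagerDuty pages the right people at the right urgency.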

The solution’s ability to alert us if the cloud loses contact with the on-prem collectors has been helpful, e.g., if we are having an issue with our Internet connection, or in some of our less monitored environments, such as our lower environments in different data centers where we don't have as heavy monitoring. It's helpful to have that external check there; in our production environments, which are heavily monitored, we are typically intervening before it times out and reports that it has lost the connection. This way, we know either via a page or email if there is any latency or timing issue with a collector connecting to the cloud. It's helpful that it's not just relying on the Internet connection at our site, but is able to see into our environment and monitor when there are connectivity or timeout issues.

We use it for anomaly detection because our software is designed to function in a specific way. Anomaly detection is helpful for issues that may not be breaking the software but indicate it is running in a nonstandard way; we can be alerted and notified so we can jump on the issue. Whether the issue is fixed in the moment or handed off to development to find a solution, it's helpful to have that view into how it's running over the long term.

It is a pretty robust solution. There are a lot of customizations you can put in for what you want it to be checking, viewing, and alerting on. As we get alerts and realize that something is not an issue we need to be alerted on, or happens to be normal behavior, a lot of that information can be put back into the system to say, "Alright, this may look like an anomaly, but it isn't." We can customize it so it gets smarter as it goes on, and we're really only notified for actual issues rather than suspected issues.

It's been helpful to be able to pass along information to development that is very specific as to what the issues are. E.g., we can see an anomaly during periods of time while a process is running, then pass that along so development can figure out: is it a database issue, an application issue, or possibly a DNS-level issue? They can also determine whether there are further things that need to be dug into or whether it can just be fixed by a code change.

The solution’s automated and agentless discovery, deployment, and configuration seem to work pretty well for standard pieces, like Windows servers and standard hardware. It has been able to find and add those pieces in. Normally, if I'm running into an issue finding something, it's because it's missing a plugin or piece that just needs to be added manually. However, 99 percent of the time, it finds things automatically without a problem.

What is most valuable?

The flexibility to build a custom monitor is its most valuable feature. A generic CPU or memory metric doesn't always give you the full picture, but we can dig into it and say, "These services are using this much, and if these services are using more than 50 percent of the CPU, then alert us." We can put those types of customizations in rather than use the generic out-of-the-box checks with maybe a few flags. It's been very nice to be able to customize it to what we need. We can also put in timings: if we know there are services restarting at 11 o'clock at night (or whenever), we can configure it so that as long as the restart is doing exactly what we want, it won't alert us. However, if there are any issues or errors, it alerts us right away. That's been really helpful to leverage.
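
The per-service threshold and restart-window logic described above can be sketched as a pure function. The service names, the 50 percent threshold, and the 11 PM quiet hour mirror the examples in the text; how the CPU figures are actually collected is out of scope here:

```python
def services_to_alert(cpu_by_service, threshold=50.0, hour=None, quiet_hours=(23,)):
    """Return services whose CPU percent exceeds the threshold,
    suppressing alerts during known maintenance hours (e.g. the
    11 PM service-restart window mentioned above)."""
    if hour in quiet_hours:
        return []  # expected restarts: don't page anyone
    return sorted(s for s, pct in cpu_by_service.items() if pct > threshold)

# Hypothetical per-service CPU readings:
usage = {"app-svc": 72.5, "db-svc": 31.0, "cache-svc": 55.1}
services_to_alert(usage, hour=14)   # -> ['app-svc', 'cache-svc']
services_to_alert(usage, hour=23)   # -> [] (restart window, suppressed)
```

The point of the sketch is the shape of the rule, not the mechanics: the check fires only on the named services, and the quiet window keeps routine restarts from paging anyone.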

We use a few dashboards. A couple are customized for specific groups and what they maintain. As I am doing projects, I'm able to make a quick dashboard for some of the things that I'm working on so I can keep track without having to flip between multiple pages. It seems pretty flexible for making simple use cases as well.

I have a custom dashboard which monitors each site and does virtual environment monitoring, such as CPU, memory, timing, etc. It was easy to get in place and adjust for what I wanted to see. It has been one of the go-to dashboards that I have ended up utilizing.

We get something of a single pane of glass and are able to view specific functions, whether for individual sites or the entire environment. We are able to quickly get in and see what's going on and where issues are coming from, rather than having to hunt down where those issues are. It's helped us more with our workflow than with automating functions.

The solution’s overall reporting capabilities are pretty powerful compared to ones I have used previously. It seems like there are a lot of customizations you can put in, but some of the out-of-the-box reports are useful too, like user logon duration and website latency. Those types of things have been helpful and don't require a lot of changes, if any, to get useful content out of them. They have also been pretty easy to implement and use.

What needs improvement?

It needs better access for customizing and adding monitoring from the repository. That would be helpful. It seems like you have to search through the forums to figure out which specific pieces you need for specific monitoring if it's a nonstandard piece of equipment or process. You have to hunt and find certain elements to get them in place. If they could make it a bit easier, rather than having to find the right six-digit code to put in so it implements, that would be helpful.

For how long have I used the solution?

Personally, I've been using the solution for about a year. We've had it in place for about a year and a half, but I came to the organization about a year ago.

What do I think about the stability of the solution?

I don't think we've really had a time where the application or monitoring nodes have failed. The connection to LogicMonitor has been very stable. We haven't had any connection issues to the SaaS offering. It's been pretty resilient and stable from our end.

What do I think about the scalability of the solution?

The scalability seems fine. Every time we've had to expand and add elements, we've not run into any delays or issues with it. It seems to expand with us as we've needed to use more features. We haven't had any issues with delays or timing. It's been able to handle what we've thrown at it.

There are at most 10 users at our company, who do everything from application monitoring to platform engineering to some developers who have access into the solution for some monitoring pieces. Varying segments have been able to get in and they all seem to have had pretty good luck with accessing and using it.

We are using LogicMonitor pretty extensively. We're using it from low level environments, development, quality assurance, all the way up to user testing and production. We have leveraged it in as many segments and parts of the business as we can. It has been really helpful to have it be able to handle different workloads, but also be customized. This way, we're not getting triggered at 2:00 AM because a switch is on in the office reporting an issue, instead we can adjust those timings to report for specific times of the day rather than any time during the day.

We have about 1,000 devices in total, including VMs and physical devices.

How are customer service and technical support?

The technical support has been pretty good. I haven't had to leverage it myself, but some of the people I work with have taken it on when we have had questions or issues. They seem to be fairly responsive and the timing is usually good: we usually hear back in minutes instead of hours. We haven't had any major issues with them.

Which solution did I use previously and why did I switch?

We've eliminated three different monitoring tools by leveraging LogicMonitor. We had two different in-house, custom built tools that were used for a long time that we were able to roll off, and we also used Nagios. I have also used Zabbix and Orion.

LogicMonitor has reduced our number of false positives compared to how many we were getting with other monitoring platforms. We focused the solution down to look only at the specific things that need monitoring. E.g., rather than being notified every time a service is down, if it's not a critical service we can just get a flag, go back, and check it, instead of getting spammed with hundreds of emails about specific things being down. We can customize it for what we actually want and need to know, and leave out non-issues.

How was the initial setup?

It had already been implemented before I joined the company. We've added a few functions since then, but the core and initial launch had already been implemented and was heavily used by the time I joined.

What was our ROI?

We have definitely seen ROI.

We have seen probably an 80 to 90 percent decrease in false-flag alerts.

We have moved our people to be more proactive, rather than having to parse through alerts and figure out whether something is an issue or a non-issue, which cuts down on the personnel time spent managing day-to-day processes. That's been helpful. At least from conversations I've had with management, they seem to have found it a good investment and a good solution for getting our normal work done, but also for making sure that we're ready to go if something does go wrong.

What's my experience with pricing, setup cost, and licensing?

It definitely pays for itself in the amount of time we're not spending on false errors or things that we hadn't quite covered with monitoring. It has been good cost-wise.

What other advice do I have?

I would definitely recommend LogicMonitor. It's something to look at, whether by signing up for a trial or through a use-case process. It's been a great product. It has customizations when you want them, and out-of-the-box solutions when you don't. It works and is reliable. Compared to other monitoring platforms I've used in the past, it seems to be the most powerful and robust that I've dealt with.

The solution monitors most devices out-of-the-box, such as Windows, Windows Server, Linux, F5 load balancers, Cisco firewalls, and Cisco switches. Those have been pretty easy to monitor. Our issues have been with one-off or nonstandard platforms that we've implemented. Otherwise, everything has been pretty easy to implement.

I would rate it as a solid nine (out of 10).

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Principal IT Consultant at a tech services company with 51-200 employees
Consultant
Granular alert-tuning allows us to monitor and get automated reports on vCPU per CPU percent, to help track VMs
Pros and Cons
  • "I really appreciate the reporting function because it allows me to create dashboards that will be emailed to me during the morning so that I have a complete overview of my client's health, within a specific time frame."
  • "The process of upgrading some of the collectors has been a little bit confusing. I need to understand that better."

What is our primary use case?

Right now we have two or three clients that have medium to large data centers, and we use LogicMonitor to give us an overview of the status of the infrastructure: if there are any holes or any issues either with memory, CPU, or storage devices, such as how much storage is consumed. 

One of them is an insurance company with a presence here in Puerto Rico and in the U.S. It employs about 5,000 people and has two data centers in Puerto Rico and two more in Florida. Their main data center is in Atlanta with disaster recovery in North Carolina. That one is what I would consider a large environment. We also have a medium-size company in the communications area here in Puerto Rico, which has about 2,000 employees and covers all of Puerto Rico. We monitor their infrastructure in terms of servers, storage, and backups, among other things.

We are also monitoring things such as vCenter, its data infrastructure, and NetScaler networking cards. We have a complete overview of the health of the client at a specific moment. 

We're using the SaaS solution. Everything resides on the LogicMonitor cloud. We just have connectors to extract the data from different servers that we have.

How has it helped my organization?

It keeps us informed whenever we have an issue. Once it's been configured and LogicMonitor is gathering the information through the connectors, it keeps me and my supervisor informed of any issue on the customer's platform. Sometimes they don't notify us if they're going to reboot a server, so we get notified whenever the server is rebooted. Or if a server is having memory, processor, or storage problems, it keeps us one step ahead of the situation. If they call us and say that they have a problem, we can say, "We noticed that you rebooted the server," so it gives us an advantage.

The solution provides granular alert-tuning for devices. For example, in virtual environments you have to take into consideration that the virtual machines have available vCPUs. There is a specific metric called "vCPU per CPU percent" and we monitor that data point because it will let us know whenever we have too many virtual machines for the available CPUs on a hypervisor. That has helped us a lot. We do automatic reports on that every morning, just to check how the virtual environment is behaving in terms of the availability of vCPUs.
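
The "vCPU per CPU percent" datapoint reduces to a simple overcommit ratio: assigned vCPUs over physical cores, expressed as a percentage. A sketch (the 300 percent warning threshold and the host sizing are illustrative, not LogicMonitor defaults):

```python
def vcpu_per_cpu_percent(assigned_vcpus, physical_cores, warn_at=300.0):
    """Express vCPU overcommit as a percentage of physical cores and
    flag hosts above the warning threshold (threshold is illustrative)."""
    pct = assigned_vcpus / physical_cores * 100.0
    return pct, pct > warn_at

# Hypothetical host: 96 vCPUs assigned across VMs, 24 physical cores.
pct, over = vcpu_per_cpu_percent(assigned_vcpus=96, physical_cores=24)
# 96 vCPUs on 24 cores -> 400.0 percent, flagged
```

Reporting this every morning, as the reviewer describes, turns a slow-creeping capacity problem into a single number that is easy to watch.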

We also use its AIOps for root cause analysis and it is very good. We do have to adjust the thresholds at times for specific points that we are looking for, but once that is done, it works like a charm. We have no issue with that at all. We get the alerts we want at the time that we need them. This definitely helps us to be more proactive in resolving issues and preventing problems because we don't have to waste time entering, for example, vCenter to look for metrics. Checking one of the clients could easily take me more than an hour, just to check that everything is fine. With LogicMonitor, we receive the alerts whenever there is an issue and that allows us to work more easily and be more proactive instead of being reactive.

It has also helped us to automate. Depending on the kind of alerts, the person who works in a specific area is notified, so I don't get all the alerts myself. We have storage, virtual infrastructure, and Citrix. So whenever there is an issue with Citrix, the person from Citrix is notified. If it is with storage or infrastructure, the right person is notified.

In the morning, without LogicMonitor, it could take about an hour to an hour and a half to go through every system. Right now, I just check my dashboards and I know if there's something that needs to be addressed. Most of the time we get notified either by email or by SMS if there is something that we need to take care of, in terms of infrastructure and storage. Looking at the dashboard takes about 15 minutes and we know that everything is working fine.

What is most valuable?

There are at least two most valuable features for us. I really appreciate the reporting function because it allows me to create dashboards that will be emailed to me during the morning so that I have a complete overview of my client's health, within a specific time frame.

One of the dashboards I use a lot is the storage dashboard. We migrated recently from one storage to another, and it allows me to keep everything in focus: How much space am I using, how much is being compressed, how much is being deduplicated? It also provides predictive functionality: How long will it take to fill this disk? That helps us to make decisions on whether we need to buy more space or we need to move or rearrange something within our storage infrastructure. I like that dashboard very much.

The other valuable feature is the alerts. We receive alerts by email and SMS with escalation schemes, so if we notice that an issue is not addressed in a specific amount of time, it will escalate to the next person in the chain. We can rest assured that specific problems are resolved within a specific time frame. Because we receive the alerts by email and SMS, whether I am at my computer or not, I will still receive the messages through SMS on my phone. That is a really cool feature.

In terms of the overall reporting of LogicMonitor, at first it was a little bit confusing. But once you get the hang of it, it's pretty easy to add the widgets and arrange the information that you need or to filter it. It's pretty easy to use.

What needs improvement?

The process of upgrading some of the collectors has been a little bit confusing. I need to understand that better.

For how long have I used the solution?

I've been using LogicMonitor for about three years.

What do I think about the stability of the solution?

It's pretty stable. I haven't had issues with the collectors being disconnected. Whenever I see that there is no data flowing from the environment to LogicMonitor, it is mostly because somebody has changed a password on the host or on my host. As soon as I fix it, everything just keeps on working, straight up.

I believe we've only had an issue where a collector disconnected from the cloud. It happened to my supervisor and he just removed the collector and installed a new one and everything has been working out fine since. The solution's ability to alert whenever we have a disconnection of the collector to the cloud is an advantage.

What do I think about the scalability of the solution?

It has pretty good scalability. We have added several servers and I haven't seen any problems or issues at all.

The topic of increasing our use of LogicMonitor is being discussed, but it's mostly my manager discussing it with the group of managers and the owner of the company. I am not aware of any plans, but it has been mentioned that there is a possibility of expanding.

How are customer service and technical support?

I haven't used LogicMonitor's technical support. Every time that I need to validate or make some changes in a configuration, the support page is pretty helpful. I have found the answers to all my questions there, so I haven't needed to contact support.

Which solution did I use previously and why did I switch?

I don't believe the company had a previous solution.

How was the initial setup?

I wasn't involved in the initial deployment. However, in terms of configuration, I have done many rearrangements of specific hardware and discovery of new equipment. That was pretty easy. It didn't take that much for the configuration, mostly for storage or infrastructure, like hypervisors. It was pretty straightforward. The Help page is pretty straightforward too. You will find what you're looking for.

LogicMonitor monitors most devices out-of-the-box. I was pretty amazed with all the documentation on how to configure specific hardware, like Citrix NetScaler ADC and PureStorage FlashArray. Those were pretty easy to configure. Other things it was able to monitor out-of-the-box include Veeam Backup, NetBackup, VMware, Windows Server — all the versions that we're using are supported — SQL Server, Linux servers, Red Hat, Oracle. Those are a few that come to mind.

What was our ROI?

I believe we have seen return on our investment because we are receiving alerts and dashboards for specific time frames, so whenever there is a problem with some part of the infrastructure, we're able to provide the customer with valid information on what's happening and what was happening. It allows us to document things whenever root cause analysis is required for an issue.

What's my experience with pricing, setup cost, and licensing?

I don't know much about the pricing. My manager handles that. But I believe that, at least from his comments, the pricing is pretty reasonable for the licensing that we have.

What other advice do I have?

The big lesson I have learned from using LogicMonitor is to pay attention to the alerts we receive. Things get escalated to me whenever guys from the other teams do not acknowledge their alerts. I need to pay attention to those because they will tell us whenever a computer or server is being rebooted or if the drives are getting full.

There are six of us in my company on the services and support side. My manager is the person who actually configures it. In addition to me, there is the principal IT consultant for services and support. I do mostly storage and power infrastructure, in terms of servers. We have two more guys who work with Citrix. And there is another guy who does mostly networking. He works mostly with NetScaler ADC.

I give LogicMonitor 10 out of 10. From the time that I started using it, I haven't had any issues with the software at all. I get notified whenever they're doing upgrades and, whenever I need to do an upgrade to my collectors, I get the information with plenty of time to make arrangements if there is something else that needs to be done. I don't believe any upgrade procedures done on the platform have impacted us in any way. It's been a really stable and trustworthy platform.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Jason Fant - PeerSpot reviewer
Solutions Engineer at Black Box Network Services
Real User
Reduced our false positives significantly, improving our reliability, SLAs, and uptimes
Pros and Cons
  • "The dashboards are the big seller for us. When our customers can see those graphs and are able to interact with the data, that is valuable. They can easily adjust time ranges and the graphs display the data fast. We've used other tools in the past, where you'd say, "Hey, I want the last three months of data on a graph," and it would just sit there and crunch for five minutes before you'd actually see the data. With LogicMonitor, the fast reliability of those dashboards is huge."
  • "One thing I would like to see is parent/child relationships and the ability to build a "suppression parent/child." For example, if I know that a top gateway is offline and I can't talk to it anymore, and anything that's connected below it or to it is also going to be offline, there is no need to alarm on those. In that situation it should create one ticket or one alarm for the parent. I know they're working towards that with their mapping technology, but it's not quite to that level where you can build out alarm logic or correlation logic like that."

What is our primary use case?

We use it for alarming on Cisco Voice systems, the Unified Communications stack. We monitor all the gateways, trunks, SIP trunks, and servers, and make sure the application is functioning, calls are being completed, and there are no performance issues on the network or the voice system.

How has it helped my organization?

We used a different monitoring tool for the Cisco Voice monitoring and our customers were very unhappy with us. It was missing things, meaning it wasn't checking all the OIDs and other elements it should have. It wasn't robust enough to allow us to customize it and build it out. Customers were getting very unhappy with that, and they didn't like the dashboards, the graphs, and the reporting that came out of the other tool. When we moved over to LogicMonitor and were able to show everything we could actually deliver, a lot of the customers who were leaving came back to us or bought more services from us. We now have a proper tool that can deliver the services we actually need, and that we've quoted and have contracts for.

The solution's ability to alert us if the cloud loses contact with on-prem collectors means we get alarms when a customer's collector isn't calling home anymore. That allows our engineers to know that there's some sort of serious outage. Either there's a power outage, the server crashed, or the internet's down. That's something that triggers our engineers to look at the customer and figure out why the monitoring solution is down. Is it the monitoring solution itself, is it the customer, or is it an act of God?

In addition, we had a lot of false positives before because we used a lot of VPN tunnels with other solutions. Moving to a SaaS solution and using LogicMonitor and the cloud has helped us a ton because it's improved reliability, SLAs, and uptimes. We've seen a 70 to 80 percent decrease in false-positive alarms.

Another benefit is that we went from three monitoring systems down to one. The first solution was Prognosis, which was developed by Integrated Research. The other tool was N-central, which is now provided by SolarWinds. We consolidated those two tools down into just LogicMonitor.

We've also been able to automate things such as cleaning up disk space or restarting a service. If the monitoring system catches a service not running, instead of initially sending off an alarm and creating a ticket, it's going to do some self-healing, to try to restart that service or run a script that cleans up some disk space. If that still doesn't fix the issue, it then passes the alarm on to create a ticket for a human to look at. 

That saves us time because, obviously, it doesn't disrupt an engineer and force him to try to log in to that customer and try to start the service or look at logs. It just says, "Hey, we restarted it. Everything's up and running," and there is no real impact to the company or business. It didn't take time for an engineer to look at it, respond to a ticket, and close the ticket. If a single service isn't running, that's about 15 minutes, at least, of an engineer's time. If an engineer doesn't have to do that three times a day, he's saving about an hour.
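
The restart-then-escalate flow described above can be sketched as a small decision function. The service name is hypothetical, and the remediation and ticketing callables are stubs standing in for whatever scripts and ticketing integration a real setup would use:

```python
def self_heal(is_running, restart, open_ticket, service="example-svc"):
    """Try automated remediation first; only open a ticket for a human
    if the service is still down after the restart attempt."""
    if is_running(service):
        return "ok"
    restart(service)                 # remediation step (e.g. restart script)
    if is_running(service):
        return "healed"              # no ticket, no engineer time spent
    open_ticket(f"{service} still down after automated restart")
    return "escalated"

# Stubbed environment: the service starts down and the restart works.
state = {"example-svc": False}
tickets = []

def is_running(svc): return state[svc]
def restart(svc): state[svc] = True   # pretend the restart succeeds
def open_ticket(msg): tickets.append(msg)

result = self_heal(is_running, restart, open_ticket)
# restart succeeded, so result == "healed" and no ticket was opened
```

The design choice worth noting is that escalation is the fallback, not the default: a human is only paged after the cheap automated fix has demonstrably failed.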

What is most valuable?

The dashboards are the big seller for us. When our customers can see those graphs and are able to interact with the data, that is valuable. They can easily adjust time ranges and the graphs display the data fast. We've used other tools in the past, where you'd say, "Hey, I want the last three months of data on a graph," and it would just sit there and crunch for five minutes before you'd actually see the data. With LogicMonitor, the fast reliability of those dashboards is huge. Allowing our customers and nontechnical people to see what is happening in their environments in an easy, friendly way is huge for us. That's the big feature we use and push on our customers. 

I have two favorites when it comes to dashboards. I put together a few dashboards for the voice systems that allow the customer to see how the performance is going: green light/red light. They see green and everything looks good. Being able to click into that and interact with the dashboards to then drill down and get more info is awesome. The other thing that I really like is their Google Maps widget that goes inside of a dashboard. That is great for customers that have multiple locations across the country. They can see, "Oh, hey, I've got a regional outage in St. Louis, or the West Coast has a power outage, or everything is green. I see all my sites across my countries are green. Everything is good in my environment." 

Another valuable feature is their logic modules. They are little scriptlets or settings so you can say, "Hey, I want to monitor this OID or these services," etc. That's huge in terms of customizability and having a robust system. Out-of-the-box, monitoring solutions don't always have everything you need. You might say, "Hey, I know that there's a new OID for this new firmware," and you need to be able to write something to call it and pull it into the monitoring system. The logic modules within LogicMonitor are so robust that I can easily go into the tool, add something, push it out to all my customers and, boom, I'm off and running with all this monitoring. And it took me five minutes to put together.

In terms of the solution's reporting capabilities, I look at it in two ways. One of the ways is the dashboards. Being able to take all those dashboards and say, "Hey, I want a recurring report every quarter for QBRs," is awesome. On the technical side, for all the back-end stuff, being able to use reports to export information so that I can use it to inventory or check properties of stuff in the environment — do assessments — I really like those as well.

In addition, the solution's ability to customize data sources was big and something I did a lot of to build out the Cisco Voice monitoring, so that we could deliver what we've been contracted to do.

Another big thing we use a lot is LogicMonitor's granular alert tuning for devices. A customer might say, "Hey, we know this SIP trunk is going to have this utilization, so tweak the threshold for that one interface or that one SIP trunk at this level, but leave everyone else at the default." Or, "Hey, we're going to be doing maintenance on a power supply, so we'll need to set downtime or suppress alarming for that power supply, but let everything else that we're monitoring for that system go through." Using that granular ability is great for that. It's also great for adjusting alarming. They'll say, "Hey, we want this specific interface to be a priority-one alarm," but its default is priority-two. Being able to tune that within the alert rules and get that granular and say, "This specific interface is going to be different, it's going to go somewhere else," or "it's got a different priority," is important.
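The resolution logic behind that kind of tuning can be sketched in a few lines. This is a generic illustration with an invented data model, not LogicMonitor's alert rules: settings fall back from the most specific override (one instance on one device) to a global default, and instances in a maintenance window are suppressed entirely:

```python
# Illustrative sketch of granular alert tuning with per-instance overrides.
# Device names, instance names, and the settings schema are all hypothetical.
GLOBAL_DEFAULT = {"threshold": 80, "priority": 2}

overrides = {
    # (device, instance) -> partial override of the global default
    ("pbx-01", "SIP-Trunk-1"): {"threshold": 95},       # known-busy trunk
    ("pbx-01", "PowerSupply-2"): {"suppressed": True},  # maintenance window
    ("core-sw", "Gi0/1"): {"priority": 1},              # escalate this one
}

def effective_alert(device, instance):
    """Merge the most specific override over the global default."""
    setting = {**GLOBAL_DEFAULT, "suppressed": False}
    setting.update(overrides.get((device, instance), {}))
    return setting
```

Everything not explicitly overridden keeps the default, which is exactly the "tune this one trunk, leave everyone else alone" behavior described above.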

What needs improvement?

One thing I would like to see is parent/child relationships and the ability to build a "suppression parent/child." For example, if I know that a top gateway is offline and I can't talk to it anymore, and anything that's connected below it or to it is also going to be offline, there is no need to alarm on those. In that situation it should create one ticket or one alarm for the parent. I know they're working towards that with their mapping technology, but it's not quite to that level where you can build out alarm or correlation logic like that.
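The suppression being asked for here is a standard dependency-based correlation: given a map of which device sits behind which, alarm only on the top-most unreachable device. A minimal sketch, with an invented branch-office topology:

```python
# Sketch of parent/child alarm suppression. The topology and device
# names are made up for illustration; parent_of maps child -> upstream.
def root_alarms(down, parent_of):
    """Return devices that are down but whose parent (if any) is up."""
    return {d for d in down if parent_of.get(d) not in down}

parent_of = {
    "branch-sw-1": "branch-gw",    # switch sits behind the gateway
    "branch-ap-1": "branch-sw-1",  # AP hangs off the switch
}

down = {"branch-gw", "branch-sw-1", "branch-ap-1"}
# Only the gateway should raise a ticket; the switch and AP are collateral.
```

If the switch alone goes dark while the gateway stays up, the same function correctly pages on the switch instead.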

I would also like them to expand more on their resources view, which is their tree structure of all the devices and what's being monitored. I'd like to see some logical type of grouping of services. If I know I've got this web application which is using this SQL database and this service from this web server, it would be helpful if I could create a special view for those kinds of services and instances.

For how long have I used the solution?

I used LogicMonitor about six years ago at a different company. It was brought in there and I used it for a few years. Then I transferred to a different employer at which time I brought LogicMonitor in. It was in about 2014 when I first got exposed to it. With this new company, we've been using it for about four or five years now.

What do I think about the stability of the solution?

I am happy with the solution's stability. I haven't had any issues with reliability, with the service going offline or not being available.

What do I think about the scalability of the solution?

LogicMonitor will be able to scale to many more devices, if we need it to.

We're monitoring about 1,200 devices currently. That's a bit of a misleading number because there's so much more stuff we monitor, like virtual machines that don't really count as licenses, or even phones. We're also monitoring Meraki devices and cloud stuff. We're monitoring almost 30,000 phones with the tool, but they're not really devices in terms of licenses.

How are customer service and technical support?

Their support is fantastic. They're always there to answer your questions and they're very knowledgeable.

How was the initial setup?

The initial setup was very straightforward. Installing the collector at a customer site is super-easy. You do a basic default install, "next, next, next, finish," and it's calling home. 

Adding devices and getting customers set up, whether they've got one device or 1,000 devices, is easy. I can import a CSV and it starts going out, scanning, setting up everything, and auto-discovering all the different services. There is a lot of automation that makes it easy for us. Before, with other systems, if I knew there was a Windows server and it had SQL, I would have to add these special SQL packages and then add this other package. And then I might forget: "Oh, hey, there's a special service I was supposed to monitor." Having all those data sources and automation within LogicMonitor makes it easier for us to set up and deploy.

The value of the solution's automated and agentless discovery, deployment, and configuration is that ease of use. No matter how many devices there are, being able to easily import and add them in is great. For example, when it scans SNMP and finds a particular name in a particular OID, it knows it's a Dell Storage unit and automatically applies all of the special Dell Storage unit monitoring services. It will scan how many hard drives there are. If it finds there are 12 hard drives instead of 24, then it only monitors 12. Or instead of having two power supplies in this unit, if I'm only seeing one power supply, I should only monitor the one. That automation is awesome.
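A rough sketch of what that discovery flow amounts to (the fingerprints, device strings, and table-walk callable are illustrative assumptions, not LogicMonitor internals): match a sysDescr-style string against known markers to pick a device type, then enumerate the instances that actually exist rather than assuming a fixed count:

```python
# Hedged sketch of SNMP auto-discovery: classify by fingerprint,
# then monitor only the instances found (12 drives, not a fixed 24).
FINGERPRINTS = {
    "Dell EMC ME4": "dell_storage",   # hypothetical marker strings
    "Cisco Nexus": "cisco_nexus",
}

def classify(sys_descr):
    """Map a device description string to a monitoring template name."""
    for marker, device_type in FINGERPRINTS.items():
        if marker in sys_descr:
            return device_type
    return "generic_snmp"

def discover_instances(walk_table):
    """walk_table stands in for an SNMP table walk; monitor what exists."""
    return [row["name"] for row in walk_table]

device_type = classify("Dell EMC ME4024 Storage Array")
drives = discover_instances([{"name": f"disk-{i}"} for i in range(12)])
```

Because the instance list comes from the walk itself, a chassis with one power supply instead of two is monitored for exactly one, with no manual cleanup.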

LogicMonitor also monitors most devices out-of-the-box. For us, it's a lot of the Nexus switches and VSS, which is the Cisco Virtual Switching System. There was so much stuff and we didn't know what we could monitor with our other solution. We saw only the basic stuff. When we installed LogicMonitor for this one customer, and added the Nexus switch, all of a sudden we saw module stuff, a lot more interfaces, and different hardware things. All of that was out-of-the-box and we were blown away by that. We didn't realize we were missing 70 percent of what we could monitor on this one device until we switched to LogicMonitor.

That was actually the big savior for us for this very large, high-profile customer. We were using N-central for them and it required 15 collectors to monitor these 4,000 devices. We were able to use LogicMonitor and get that down to two collectors to monitor all that. The customer had been calling us out on it saying, "Hey, how come you don't see this? How come you don't see that?" We had to throw our hands up in the air. Once we introduced LogicMonitor and showed them what we did within five minutes, and all of the stuff we could see, they said, "Perfect. We'll stick with you guys. You seem to have the right solution."

What was our ROI?

We have definitely seen return on our investment with LogicMonitor, especially once we showed how we could replace that Prognosis tool with it. The cost savings were through the roof. As an example, for one customer of ours, one year with the Prognosis license would have cost $180,000. With LogicMonitor, it only costs us about $8,000 to $9,000. That's a huge savings, and it's great for the customer because it means we can lower our cost to them; they may think we're losing money, but we're still getting so much out of it. That was a huge benefit.

What's my experience with pricing, setup cost, and licensing?

It's affordable. The price we get per license is a lot cheaper than what we were getting with some of the other tools. There are other monitoring tools out there that are cheaper, but what you get with LogicMonitor, out-of-the-box, makes it worth the cost. It works well.

Which other solutions did I evaluate?

There were a few other tools we looked at. Their pricing, how complex their setup was, and even their dashboards and reports were all considered. LogicMonitor seemed to fit all those categories for us and give us huge improvements. It was a no-brainer.

We looked at WhatsUp Gold. We looked at the main SolarWinds package and there was a tool called ScienceLogic that we looked at. And there was also Nimsoft.

What other advice do I have?

Do it. Your customers are going to like it, once you show them the dashboards, the pretty colors, and the ability to easily interact with it. That's going to win over your customers. I guarantee it. I've seen it happen. You can say, "I've got this tool that does everything," but if the customer can't tangibly see what the tool is doing, they'll say, "Well, what am I paying you for?" And they don't want to see generic spreadsheets. They want something that's easy to use and interact with.

I like how they've been improving on it over the years. It seems like they're going in the right direction. LogicMonitor fits what our company needs, and we plan to keep on using it for at least five more years, until something else gets better or they're out of business.

We don't use its AIOps capabilities for things like anomaly detection or root cause analysis yet, but that is something we are looking into. I know they're releasing those features in phases. They've got the first phase of AIOps and then they're pushing the next one with the dynamic thresholds, and that is definitely something we're going to be using, especially when you're looking at Cisco Voice systems and how they perform throughout the day. Dynamic thresholds are going to be huge for us, so that's going to be exciting.
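As a toy illustration of what a dynamic threshold does (this is a generic statistical technique, not LogicMonitor's actual algorithm): instead of one static limit, a sample is flagged only when it leaves a band derived from recent history, so a metric like call volume that swings through the day isn't alarming constantly:

```python
# Toy dynamic threshold: flag a sample outside mean +/- k standard
# deviations of recent history. History values are invented for the demo.
from statistics import mean, stdev

def is_anomalous(history, sample, k=3.0):
    """True if sample falls outside the mean +/- k*stdev band."""
    m, s = mean(history), stdev(history)
    return abs(sample - m) > k * s

# Calls per hour on a voice system: steady around 100, then a spike.
history = [98, 102, 101, 99, 100, 103, 97, 100]
```

Here `is_anomalous(history, 200)` trips the alarm while `is_anomalous(history, 101)` does not, which is the behavior you want for metrics whose "normal" changes throughout the day.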

We have about 100 people who work directly with LogicMonitor in our company. They're all the way from managers down to the low-level NOC people who are answering the telephone, to the Tier-3 engineers, and even the sales and marketing people. Everyone interacts with LogicMonitor in some way, either supporting a customer, running reports, or looking at the capabilities and what we are monitoring.

Overall, I've been very happy with the solution so far.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Buyer's Guide
Download our free LogicMonitor Report and get advice and tips from experienced pros sharing their opinions.
Updated: November 2022