Buyer's Guide
Log Management
November 2022

Read reviews of Elastic Security alternatives and competitors

Product Director at an insurance company with 10,001+ employees
Real User
Top 20
Gives us a single, integrated tool to simplify support and reduce downtime
Pros and Cons
  • "Those 400 days of hot data mean that people can look for trends and at what happened in the past. And they can not only do so from a security point of view, but even for operational use cases. In the past, our operational norm was to keep live data for only 30 days. Our users were constantly asking us for at least 90 days, and we really couldn't even do that. That's one reason that having 400 days of live data is pretty huge. As our users start to use it and adopt this system, we expect people to be able to do those long-term analytics."
  • "One major area for improvement for Devo... is to provide more capabilities around pre-built monitoring. They're working on integrations with different types of systems, but that integration needs to go beyond just onboarding to the platform. It needs to include applications, out-of-the-box, that immediately help people to start monitoring their systems. Such applications would include dashboards and alerts, and then people could customize them for their own needs so that they aren't starting from a blank slate."

What is our primary use case?

We look at this solution for both security monitoring and operational monitoring use cases. It helps us to understand any kind of security incident, typical SIEM use cases, and IT operations, including DevOps and DevSecOps use cases.

How has it helped my organization?

We had multiple teams that were managing multiple products. We had a team that was managing ELK and another team that was managing ArcSight. My team was the "data bus" that was aggregating the onboarding of people, and then sending logs through different channels. We had another team that managed the Kafka part of things. There was a little bit of a loss of ownership because there were so many different teams and players. When an issue happened, we had to figure out where the issue was happening. Was it in ELK? Was it in ArcSight? Was it in Kafka? Was it in syslog? Was it on the source? As a company, we have between 25,000 and 40,000 sources, depending on how you count them, and troubleshooting was a pretty difficult exercise. Having one integrated tool helped us by removing the multiple teams, multiple pieces of equipment, and multiple software solutions from the equation. Devo has helped a lot in simplifying the support model for our users and the sources that are onboarding.

We have certainly had fewer incidents, fewer complaints from our users, and less downtime.

Devo has definitely also saved us time. We have reduced the number of teams involved. Even though we were using open-source and vendor products, the number of teams that are involved in building and maintaining the product has been reduced, and that has saved us time for sure. Leveraging Devo's features is much better than building everything.

What is most valuable?

It provides multi-tenant, cloud-native architecture. Both of those were important aspects for us. A cloud-native solution was not something that was negotiable. We wanted a cloud-native solution. The multi-tenant aspect was not a requirement for us, as long as it allowed us to do things the way we want to do them. We are a global company though, and we need to be able to segregate data by segments, by use cases, and by geographical areas, for data residency and the like.

Usability-wise, Devo is much better than what we had before and is well-positioned compared to the other tools that we looked at. Obviously, it's a new UI for our group and there are some things that, upon implementing it, we found were a little bit less usable than we had thought, but they are working to improve on those things with us.

As for the 400 days of hot data, we have not yet had the system for long enough to take advantage of that. We've only had it in production for a few months. But it's certainly a useful feature to have and we plan to use machine learning, long-term trends, and analytics; all the good features that add to the SIEM functionality. If it weren't for the 400 days of data, we would have had to store that data, and in some cases for even longer than 400 days. As a financial institution, we are usually bound by regulatory requirements. Sometimes it's a year's worth of data. Sometimes it's three years or seven years, depending on the kind of data. So having 400 days of retention of data, out-of-the-box, is huge because there is a cost to retention.

Those 400 days of hot data mean that people can look for trends and at what happened in the past. And they can not only do so from a security point of view, but even for operational use cases. In the past, our operational norm was to keep live data for only 30 days. Our users were constantly asking us for at least 90 days, and we really couldn't even do that. That's one reason that having 400 days of live data is pretty huge. As our users start to use it and adopt this system, we expect people to be able to do those long-term analytics.

What needs improvement?

One major area for improvement for Devo, and people know about it, is to provide more capabilities around pre-built monitoring. They're working on integrations with different types of systems, but that integration needs to go beyond just onboarding to the platform. It needs to include applications, out-of-the-box, that immediately help people to start monitoring their systems. Such applications would include dashboards and alerts, and then people could customize them for their own needs so that they aren't starting from a blank slate. That is definitely on their roadmap. They are working with us, for example, on NetFlow logs and NSG logs, and AKF monitoring.

Those kinds of things are where the meat is because we're not just using this product for regulatory requirements. We really want to use it for operational monitoring. In comparison to some of the competitors, that is an area where Devo is a little bit weak.

For how long have I used the solution?

We chose Devo at the end of 2020 and we finished the implementation in June of this year. Technically, we were using it during the implementation, so it has been about a year.

I don't work with the tool on a daily basis. I'm from the product management and strategy side. I led the selection of the product and I was also the product manager for the previous product that we had.

What do I think about the stability of the solution?

Devo has been fairly stable. We have not had any major issues. There has been some downtime or slowness, but nothing that has persisted or caused any incidents. One place where we have a little bit of work to do is in measuring how much data is being sent into the product. There are competing dashboards that keep track of just how much data is being ingested, and we need to resolve which one we are going to use.

What do I think about the scalability of the solution?

We don't see any issues with scalability. It scales by itself. That is one of the reasons we also wanted to move to another product. We needed scalability and something that was auto-scalable.

How are customer service and support?

Their tech support has been excellent. They've worked with us on most of the issues in a timely fashion and they've been great partners for us. We are one of their biggest customers and they are trying really hard to meet our needs, to work with us, and to help us be successful for our segments and users.

They exceeded our expectations by being extremely hands-on during the implementation. They came in with an "all hands on deck" kind of approach. They worked through pretty much every problem we had and, going forward, we expect similar service from them.

Which solution did I use previously and why did I switch?

We were looking to replace our previous solution. We were using ArcSight as our SIEM and ELK for our operational monitoring. We needed something more modern and that could fulfill the roadmap we have. We were also very interested in all the machine learning and AI-type use cases, as forward-facing capabilities to implement. In our assessment of possible products, we were impressed by the features of AI/ML and because the data is available for almost a year. With Devo, we integrated both operational and SIEM functions into one tool.

It took us a long time to build and deploy some of the features we needed in the previous framework that we had. Also, having different tools was leading to data duplication in two different platforms, because sometimes the security data is operational data and vice versa. The new features that we needed were not available in the SIEM and they didn't have a proper plan to get us there. The roadmap that ArcSight had was not consistent with where we wanted to go.

How was the initial setup?

It was a complex setup, not because the system itself is complex but because we already had a system in place. We had already onboarded between 15,000 and 20,000 servers, systems, and applications. Our requirement was to not touch any of our onboarding. Our syslog was the way that they were going to ingest and that made it a little bit easier. And that was also one of our requirements because we always want to stay vendor-agnostic. That way, if we ever need to change to another system, we're not going to have to touch every server and change agents. "No vendor tie-in" is an architectural principle that we work with.

We were able to move everything within six months, which is absolutely amazing. That might be a record. Not only was Devo impressed at how efficiently we did it, but so were people in our company.

We had a very strong team on our end doing this. We went about it very clinically, determining what would be in scope and what would not be in scope for the first implementation. After that, we would continue to tie up any loose ends. We were able to meet all of our deadlines and pivot into Devo. At this point, Devo is the only tool we're using.

We have a syslog team that is the log aggregator and an onboarding team that was involved in onboarding the solution. The syslog team does things like the opening of ports and metrics of things like uptime. We also have four engineers on the security side who are helping to unleash use cases and monitor security. There's also a whole SOC team that does incident management and finding of breaches. And we have three people who are responsible for the operational reliability of Devo. Because it's a SaaS product, we're not the ones running the system. We're just making sure that, if something goes wrong, we have people who are trained and people who can troubleshoot.

We had an implementation project manager who helped track all of the implementation milestones. Our strategy was to set out an architecture to keep all the upstream components intact, with some very minor disruptions. We knew, with respect to some sources, that legacy had been onboarded in certain ways that were not efficient or useful. We put some of those pieces into the scope during the implementation so that we would segregate sources in ways that would allow better monitoring and better assessment, rather than mixing up sources. But our overall vision for the implementation was to keep all of that upstream architecture in place, and to have the least amount of disruption and need for touching agents on existing systems that had already been onboarded. Whatever was onboarded was just pointed at Devo from syslog. We did not use their relays. Instead, we used our syslog as the relays.
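The "our syslog as the relay" approach described above can be sketched as a minimal forwarding rule. This is an illustrative rsyslog config fragment under assumed hostnames and ports, not the reviewer's actual configuration; the real SIEM endpoint and any TLS settings would come from the vendor:

```
# /etc/rsyslog.d/50-forward.conf (hypothetical example)
# Forward everything over TCP (@@) to an internal relay that points at the
# SIEM endpoint. Staying vendor-agnostic means a SIEM change touches only
# this relay, never the agents on the 15,000-20,000 onboarded sources.
*.* @@relay.internal.example.com:514
```

Because only the relay knows the downstream destination, re-pointing it at a different SIEM satisfies the "no vendor tie-in" architectural principle the reviewer mentions.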

What's my experience with pricing, setup cost, and licensing?

Devo was very cost-competitive. We understood that the cost did not include monitoring content right out-of-the-box, but we knew they were pointed in that direction.

Devo's pricing model, only charging for ingestion, is how most products are licensed. That wasn't different from other products that we were looking at. But Devo did come with that 400 days of hot data, and that was not the case with other products. While that aspect was not a requirement for us, it was a nice-to-have.

Which other solutions did I evaluate?

We started off with about 10 possibilities and brought it down to three. Devo was one of the three, of course, but I prefer not to mention the names of the others.

But among those we started off with were Elastic, ArcSight, Datadog, Sumo, Splunk, Microsoft systems and solutions, and even some of the Google products. One of our requirements was to have an integrated SIEM and operational monitoring system.

We assessed the solutions at many different levels. We looked at adherence to our upstream architecture for minimal disruption during the onboarding of our existing logs. We wanted minimal changes in our agents. We also assessed various use cases for security monitoring and operational monitoring. During the PoC we assessed their customer support teams. We also looked at things like long-term storage and machine learning. In some of these areas other products were a little bit better, but overall, we felt that in most of these areas Devo was very good. Their customer interface was very nice and our experience with them at the proof-of-value [PoV] level was very strong. 

We also felt that the price point was good. Given that Devo was a newer product in the market, we felt that they would work with us on implementing it and helping us meet our roadmap. All three products that we evaluated at the PoV stage were good. This space is fairly mature. They weren't different in major ways, but price was definitely one of the things that we looked at.

In terms of the threat-hunting and incident response, Devo was definitely on par. I am not a security analyst and I relied on our SIEM engineers to analyze that aspect.

What other advice do I have?

Get your requirements squared and know what you're really looking for and what your mandatory requirements are versus what is optional. Do a proof of value. That was very important for us. Also, don't only look at what your needs are today. Long-term analytics, for example, was not necessarily something we were doing, but we knew that we would want to do that in the coming years. Keep all of those forward-looking use cases in mind as well when you select your product.

Devo provides high-speed search capabilities and real-time analytics, although those are areas where a little performance improvement is needed. For the most part it does well, and they're still optimizing it. In addition, we've just implemented our systems, so there could be some optimizations that need to be done on our end, in the way our data is flowing and in the way we are onboarding sources. I don't think we know where the choke points are, but it could be a little bit faster than we're seeing right now.

In terms of network visibility, we are still onboarding network logs and building network monitoring content. We do hope that, with Devo, we will be able to retire some of our network monitoring tools and consolidate them. The jury is still out on whether that has really happened or not. But we are working actively towards that goal.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Ibrahim Albalawi - PeerSpot reviewer
SOC Leader at a tech consulting company with 51-200 employees
Real User
Less false positives, good detection and integration capabilities, and good pricing
Pros and Cons
  • "The detection of threats and reduction of false positive alarms as compared to other solutions are valuable features. It has improved threat detection response and reduced a lot of noise from false positives as compared to our previous SIEM solutions."
  • "The incident response area should be improved."

What is our primary use case?

We are using it for monitoring firewalls, Windows operating systems, some Linux operating systems, active directories, and some of the solutions in the cloud such as Office.

In terms of deployment, everything is in the cloud. Our licenses are on the cloud. We don't deploy anything on premises except the RIN.

How has it helped my organization?

We are a managed service provider, and we offer this service to third-party clients. Most of our clients are very happy with the solution. We can detect a lot of threats, which are not false positives, and we can describe the threats very well. A lot of information can be obtained from this SIEM, and we can provide very good incident reports to our clients.

We were using another solution previously. The other solution couldn't compete with the features and functionality that we were looking for as a managed service provider. Some clients ask for specific features, and we couldn't meet those needs with other products. They were more about calculations, such as events per second (EPS). With Securonix, it is easier to sell the product and make quotes for our clients. It has helped us a lot at the administration, commercial, and operation levels.

It provides actionable intelligence on threats related to our use cases. It can detect violations and reduce false positives. This actionable intelligence is one of the most important parts because we have suffered with some of the other solutions in terms of receiving a lot of events and alarms where most of them were false positives, which made it a bit difficult for us to investigate and generate incident reports. Securonix is handy for engineers and the security operations center.

Its analytics-driven approach is pretty good at finding sophisticated threats and reducing false positives. When it comes to monitoring network devices, such as firewalls, it can detect behaviors that would be difficult for other solutions to detect or for normal engineers to detect manually. It has a lot of violation policies, and it is very handy and helpful at this level.

It adds contextual information to security events, which is one of the most important points. We can fill a lot of information into our reports for our clients.

Everything is saved for us and indexed. We can review any event we need within three or six months. We can review even when the data is in the cold phase. We never faced any case where we lost any event data. When clients asked about some events in the past, we could find them very easily and without any issues by using the queries.

It improves analysts' efficiency to do more with less time. Spotter is one of the best tools for me for searching and visualizing various things such as policies. With the Spotter language, you can search for whatever you need. You can search for any endpoint, any IP, any hostname, or any violation name. Even though it is not very fast, it is fine for us. Splunk or Elasticsearch is faster than Securonix because this is their job. Even though Spotter is not as fast, it has been helpful for us.

What is most valuable?

The detection of threats and reduction of false positive alarms as compared to other solutions are valuable features. It has improved threat detection response and reduced a lot of noise from false positives as compared to our previous SIEM solutions. This was one of the reasons we decided to try or move to Securonix. Other products generated thousands of events, and a lot of them were false positives, which made it difficult for us to handle all the events. For example, we were monitoring a firewall internally, and that firewall generated about five million events per month. The previous product detected almost 1,000 to 1,500 events as positive events, whereas Securonix generates less than 200 events, and most of them are not false positives.
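The figures above can be put in perspective with a little arithmetic. This is only a sketch using the reviewer's approximate numbers (five million events per month, 1,000-1,500 alerts from the previous SIEM, under 200 from Securonix):

```python
# Rough alert-volume comparison based on the figures quoted in the review.
monthly_events = 5_000_000               # events from one firewall per month
previous_alerts = (1_000 + 1_500) / 2    # midpoint of the old SIEM's range
securonix_alerts = 200                   # upper bound quoted for Securonix

reduction = 1 - securonix_alerts / previous_alerts
print(f"Alert volume cut by {reduction:.0%}")                   # 84%
print(f"Old alert rate: {previous_alerts / monthly_events:.4%}")
print(f"New alert rate: {securonix_alerts / monthly_events:.4%}")
```

Even before accounting for the claim that most of the remaining 200 alerts are true positives, that is roughly an 84% cut in the volume analysts must triage.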

It can integrate with a lot of solutions. Being able to ingest all our log sources when investigating threats is one of the good points of Securonix. After we started to use Securonix, we could integrate a lot of solutions, which we couldn’t do previously. It works with many devices, platforms, and cloud solutions. It is pretty good in terms of integration.

What needs improvement?

The incident response area should be improved.

It is more difficult than other products, but overall, it is good. The platform has a lot of options and functionality. So, you need to check almost everything. For new engineers or people who don’t have much experience with this kind of platform, it is a bit difficult, but for experienced engineers, it is not that difficult.

When you have been doing a lot of work for about one or two hours, and you have a lot of tabs open, it slows down or gets stuck. There is a delay of 10 to 15 seconds in opening tabs or dashboards. I don't know why this happens, but for me, it is not a big issue. I just wait, and that's all.

For how long have I used the solution?

I have been using this solution for one year.

What do I think about the stability of the solution?

It is stable, but it slows down or gets stuck when you have a lot of tabs open.

What do I think about the scalability of the solution?

Overall, it is scalable, but when you are investigating a lot and you have a lot of tabs open and are involved in big work, it sometimes becomes slow or gets stuck.

In terms of its users, our SOC team has three engineers, and I am the fourth one. We have three clients for now for Securonix. We use it internally to monitor our company. Overall, there are five or six users using the interface, investigating, and reporting to the clients.

How are customer service and support?

Most of the time, their support is very good, but sometimes, we had to escalate the issues. Sometimes, we opened a ticket, and we immediately received an answer for fixing the issue, but at other times, we got a response after one, two, three, or even seven days. I guess it is based on the impact or severity, but when we have an urgent issue or problem, Securonix solves it very fast. I would rate them an 8 out of 10.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We were previously using Splunk, and we wanted to continue, but when we did the evaluation, we found Splunk to be more difficult to implement than others. It is fine to operate it, but its implementation is more difficult. It also had fewer features than Securonix. Securonix is dedicated to security information event management, but this is not the main functionality of Splunk. Even though Splunk is very strong in security, and we have been using it, when it comes to, for example, machine learning, Securonix has pre-configured policies. So, we don't have to spend that much time, whereas when it comes to Splunk, we have to configure everything. We have to install the applications and configure the dashboards. Considering the functionalities, features, and pricing, we felt that Securonix would be the best option.

It is better than previous solutions in terms of threat investigations and onboarding. That's because most of the other solutions are based on rules. Sometimes, there is no intelligence when it comes to detection, whereas Securonix has policies that are collections of rules. Securonix doesn't just extract the log and tell us that it is a low-impact or informative event. It also tries to correlate most of the events according to the policies and take us to the main point. This is how Securonix has helped us to reduce a lot of false positives. Other solutions only worked with rules, and they only sent us events. We had to review most of those events, which is not the case with Securonix. It has a lot of policies for all types of detections. There are almost 1,000 policies, and Securonix can correlate various types of behaviors and pieces of evidence to detect advanced threats. It is good at this level.

How was the initial setup?

We have the cloud license of Securonix. Everything is on the cloud. We only implement the RIN on-premises, which is straightforward. You just download the executable, give it permission, and execute it. You provide the information it asks for. There are a few packages that you need to install beforehand, but overall, it is very handy and straightforward.

What about the implementation team?

I implemented it on my own. 

What's my experience with pricing, setup cost, and licensing?

Its price is fine. We found it to be cheaper than LogRhythm, Exabeam, Splunk, as well as Elastic Security. A few months ago, when we were comparing Securonix with Elastic Security, we found Securonix to be cheaper than Elasticsearch. We were pretty surprised that Elastic Security is more expensive than Securonix because Elasticsearch is just starting, and it cannot compete with Securonix at this time. So, the pricing of Securonix is pretty good for now.

Which other solutions did I evaluate?

We tried to evaluate some of the other products, but we decided to go with Securonix for the business part. It was easier for us to meet the needs of our clients related to calculations.

We evaluated LogRhythm. The first problem that we faced with LogRhythm was that it would have been pretty difficult for engineers to handle in terms of the user interface. As compared to Securonix, it was also very expensive. Securonix had most of the features or functionalities that we were looking for. We also evaluated Exabeam, and we had the same problem with the price and features.

What other advice do I have?

It has somewhat reduced the amount of time we require for investigation. It probably hasn't helped in detecting advanced threats faster or lowering response times, because there is a gap between the RIN receiving the information and sending it to the cloud. This gap makes it a little bit slower compared to other solutions. Other than that, it is good.

I would rate it a 9 out of 10.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: MSP
Architect at SEI Investments
Real User
Great support with a helpful APM and profiler
Pros and Cons
  • "The most valuable aspects of the product include the APM and profiler."
  • "I find the training great. That said, it is set for the LCD (lowest common denominator). Of course, this is very helpful to sell the product, yet, to really utilize the product, you need to get more detailed."

What is our primary use case?

We primarily use Datadog for:

  • Native memory
  • Logging
  • APM
  • Context switching
  • RUM
  • Synthetic
  • Databases
  • Java
  • JVM settings
  • File i/o
  • Socket i/o
  • Linux
  • Kubernetes
  • Kafka
  • Pods
  • Sizing

We are testing Datadog as a way to reduce our operational time to fix things (mean time to repair). This is step one. We hope to use Datadog as a way to be proactive instead of reactive (mean time to failure).

So far, Datadog has shown very good options to work on all of our operational and development issues. We are also trying to use Datadog to shift left, and fix things before they break (MTTF increase).

How has it helped my organization?

We are currently in a POC and do not own Datadog at the moment. 

So far, there have been a few issues due to security. There are two main security issues. 

The first is moving data off-prem. This has been resolved to a point (filtering logs, etc). However, there is still an issue with moving a JFR as a JFR potentially contains data that is not allowed off-prem.

The second security issue is more internal: the main installation requires root access or using an ACL. Our company does not use ACLs on our Linux platform. This is problematic since the install sets a no-login shell on the Datadog user.

What is most valuable?

The most valuable aspects of the product include the APM and profiler.

These two have given us insights into things that are very difficult to track down given the standard OS (Linux) tools. 

Native memory usage is super difficult to trace to exactly where it comes from. I attended a course (continuous profiling), and it showed me potentially very important capabilities.

If you add these details to a standard dashboard, or a sub-dashboard for techy people, or even just a notebook, it would be easy to identify issues before they occur.

Combining these details with the basic tools (infra, logging, APM, and good rules), Datadog can easily show the details that a true engineer would need. It isn't just for monitoring, however, I see the value in it for engineers.

What needs improvement?

I have done every training offered (and in a short period of time: two days for 20 courses).

I find the training great. That said, it is set for the LCD (lowest common denominator). Of course, this is very helpful to sell the product, yet, to really utilize the product, you need to get more detailed.

If I did the training as written, cutting and pasting a bunch of stuff and seeing the cut/paste work, I didn't really learn anything. In later sessions, I stopped cutting and pasting (I quit using the editor and switched to vi) and learned much more.

For how long have I used the solution?

I've used the solution for one month.

What do I think about the stability of the solution?

I'd give stability a thumbs up.

What do I think about the scalability of the solution?

We are not sure yet in terms of scalability. The off-prem solution seems to scale well (although we had issues with the training environment slowing down).

How are customer service and support?

Technical support is great.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I previously used Dynatrace and Elastic. We didn't switch. We are in a POC.

How was the initial setup?

The initial setup is simple yet complex. Too many teams are needed.

What about the implementation team?

We did the initial setup in-house.

What was our ROI?

In terms of ROI, the labor saving is probably the biggest. The NPR is probably second - although management would probably reverse these.

What's my experience with pricing, setup cost, and licensing?

Pricing and licensing are fairly complicated. A GB for .1 sounds great; however, once you put all 16 or so prices together, it adds up fast. A cost-model sheet on the main site would be very helpful.
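A rough cost-model sheet of the kind asked for above can be sketched in a few lines. All SKU names, unit prices, and usage quantities below are hypothetical placeholders for illustration, not Datadog's actual price list:

```python
# Hypothetical per-unit monthly prices -- illustrative only, not real rates.
unit_prices = {
    "infra_host": 15.00,        # per monitored host
    "apm_host": 31.00,          # per APM host
    "log_ingest_gb": 0.10,      # per GB of logs ingested
    "log_index_million": 1.70,  # per million events indexed
    "rum_session_thousand": 1.50,
}

# Estimated monthly usage for a small deployment (also hypothetical).
usage = {
    "infra_host": 50,
    "apm_host": 20,
    "log_ingest_gb": 2_000,
    "log_index_million": 300,
    "rum_session_thousand": 100,
}

line_items = {sku: unit_prices[sku] * qty for sku, qty in usage.items()}
total = sum(line_items.values())

for sku, cost in sorted(line_items.items(), key=lambda kv: -kv[1]):
    print(f"{sku:22s} ${cost:>10,.2f}")
print(f"{'TOTAL':22s} ${total:>10,.2f}")
```

The point the sketch makes is the reviewer's: a seemingly tiny per-GB line item is only one of many SKUs, and the combined total grows fast.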

Which other solutions did I evaluate?

We are currently in a POC.

What other advice do I have?

We work with all product versions.

Which deployment model are you using for this solution?

On-premises

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: I am a real user, and this review is based on my own experience and opinions.
reviewer1331706 - PeerSpot reviewer
I&T Design & Execution Reliability Engineering Leader at a financial services firm with 10,001+ employees
Real User
Top 5 Leaderboard
Poor performance and the display options are limited, but it can parse a variety of log files
Pros and Cons
  • "Splunk works based on parsing log files."
  • "I find the graphical options really limited and you don't have enough control over how to display the data that you want to see."

What is our primary use case?

We use Splunk to monitor our private cloud, data center, and other applications.

How has it helped my organization?

I don't like Splunk very much and find that it does not have many useful features.

What is most valuable?

Splunk works based on parsing log files.

What needs improvement?

I don't like the pipeline-organized programming interface.

I find the graphical options really limited and you don't have enough control over how to display the data that you want to see.

I find that the performance really varies. Sometimes the platform doesn't respond in time, and it takes a really long time to produce any results. For example, if you want to display a graph, it can become unresponsive. Perhaps you have a website where you want to show the data through a template or a dashboard configuration, and sometimes it just doesn't show any data because the system is unresponsive. There may be too much data for it to look through. Sometimes it responds that there is too much data to parse, and then it just doesn't give you anything. The basic problem is that every time you do a refresh, it tries to re-run all of the queries against the full dataset.

Fixing Splunk would require a redesign. The basic way it presents graphs relies on pipeline-based parsing of log files, and that is more of a problem than it is helpful. Sometimes you have to perform a lot of tricks to get the data into a format that you can parse.
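To illustrate the pipeline style the reviewer is describing, a typical Splunk search chains commands with pipes, extracting fields from raw log lines before aggregating them (the index, sourcetype, and field names here are hypothetical, not taken from the reviewer's environment):

```
index=web_logs sourcetype=access_combined status>=500
| rex field=_raw "(?<endpoint>/[^\s\?]+)"
| stats count by endpoint
| sort -count
```

Each stage feeds its output into the next, which is why a dashboard refresh re-executes the whole chain over the matching dataset.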

You cannot really use global variables and you can't easily define a constant to use later. These things make it not as easy to use.

For how long have I used the solution?

I have been using Splunk for approximately one year.

What do I think about the stability of the solution?

I use Splunk at least a couple of times a week.

What do I think about the scalability of the solution?

I'm not sure about scalability but to my thinking, it's not very scalable. I know that it's probably expensive because it relies a lot on importing log files from all of the systems. One of the issues with respect to scalability is that there's never enough storage. Also, the more storage you have, the more systems you need to manage all the log files.

Splunk is open for all of the users in the company. We might have 1,000 IT personnel that could access it, although I'm not sure how many people actually use it. I estimate that there are perhaps 200 active users.

How are customer service and support?

I have not been in contact with technical support from Splunk.

Which solution did I use previously and why did I switch?

In this company, we did not previously use a different monitoring solution.

How was the initial setup?

I was not involved in the initial setup.

We have a DevOps team that is implementing Splunk and they are responsible for it. For example, they take care of the licensing of the product.

What about the implementation team?

We have a team at the company that completed the setup and deployment.

Which other solutions did I evaluate?

The other product that I've seen is Elastic, and I think that it would be a better choice than Splunk. This is something that I'm basing on performance, as well as the other features.

What other advice do I have?

My understanding is that as a company, we are migrating to Azure. When this happens, Splunk will be decommissioned.

Overall, I don't think that this is a very good product and I don't recommend it.

I would rate this solution a five out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Principal Architect at Calsoft
Real User
Top 5
The file integrity monitoring features are solid, but log analysis could be improved.
Pros and Cons
  • "The configuration assessment and file integrity monitoring features are decent."
  • "Log data analysis could be improved. My IT team has been looking for an alternative because they want better log data for malware detection. We are also doing more container implementation, so we need better container security, log data analysis, auditing and compliance, malware detection, etc."

What is our primary use case?

Our primary use case for Wazuh is monitoring endpoints. The second is incident management. Logging is essential for us because Indian IT compliance rules require us to store logs for 180 days. We also need to monitor and maintain the logs.

Wazuh is monitoring around 1,200 inputs, but there are only about four or five members of the IT team directly using the solution. 
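As context for the 180-day requirement mentioned above: when Wazuh alert indices are stored in an Elasticsearch-backed indexer, one common way to enforce such a retention window is an index lifecycle management policy along these lines (the policy name and rollover age are illustrative, and this is a generic Elasticsearch ILM sketch rather than a Wazuh-specific configuration):

```
PUT _ilm/policy/wazuh-180d-retention
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_age": "1d" } } },
      "delete": { "min_age": "180d", "actions": { "delete": {} } }
    }
  }
}
```

Daily rollover keeps individual indices small, while the delete phase removes data once it passes the 180-day compliance window.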

What is most valuable?

The configuration assessment and file integrity monitoring features are decent.

What needs improvement?

Log data analysis could be improved. My IT team has been looking for an alternative because they want better log data for malware detection. We are also doing more container implementation, so we need better container security, log data analysis, auditing and compliance, malware detection, etc.

Overall, the implementation part of Wazuh is tricky. It could be simplified and automated more to shorten the deployment timeline, so we can onboard the application immediately. The entire implementation process should be user-friendly.

For how long have I used the solution?

We implemented Wazuh in 2019.

What do I think about the stability of the solution?

I rate Wazuh six out of 10 for stability. While we haven't seen any incidents lately, it used to crash a few years back. The dashboard would be inaccessible due to some service failure or something. 

What do I think about the scalability of the solution?

I rate Wazuh eight out of 10 for scalability.

How are customer service and support?

We use community forums like Stack Overflow to find answers. Most debugging and troubleshooting processes are readily available online. 

How was the initial setup?

Setting up Wazuh is complex. The deployment involved two IT engineers and took about two months.

What about the implementation team?

We deployed Wazuh ourselves, in-house.

What's my experience with pricing, setup cost, and licensing?

Wazuh is a free solution. 

Which other solutions did I evaluate?

We tried to replace Wazuh with a CrowdStrike real-time security solution. We also tried some solutions from one of our vendors. We want to move to either Elastic or CrowdStrike.

What other advice do I have?

I rate Wazuh six out of 10. It's a solid open-source solution. Stability-wise, Wazuh seems to have fixed all of the past issues, and the latest version is possibly the most stable. However, they need to add more features to keep up with the competition. Compared to products like Elastic, Wazuh still lacks a lot of in-depth information. It's still not possible to do a deep dive, and the configuration could be easier.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.