Read reviews of LogRhythm NextGen SIEM alternatives and competitors

Elizabeth Manemann - PeerSpot reviewer
Cyber Security Engineer at H&R Block, Inc.
Real User
Top 10
Accepts data in raw format but does not offer its own agent
Pros and Cons
  • "The most valuable feature is definitely the ability that Devo has to ingest data. From the previous SIEM that I came from and helped my company administer, it really was the type of system where data was parsed on ingest. This meant that if you didn't build the parser efficiently or correctly, sometimes that would bring the system to its knees. You'd have a backlog of processing the logs as it was ingesting them."
  • "From our experience, the Devo agent needs some work. They built it on top of OS Query's open-source framework. It seems like it wasn't tuned properly to handle a large volume of Windows event logs. In our experience, there would definitely be some room for improvement. A lot of SIEMs on the market have their own agent infrastructure. I think Devo's working towards that, but I think that it needs some improvement as far as keeping up with high-volume environments."

What is our primary use case?

We have a couple of servers on-premises to gather the logs from our devices. We have a lot of devices, including vendor-agnostic collectors that will, for example, collect syslog from our Linux hosts. The logs are then sent to the Devo Relay, which encrypts the data and sends it to the Devo Cloud.
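
To make that collection path concrete, here is a minimal Python sketch of forwarding one syslog line to a relay over TLS. The relay hostname, port, and log content are hypothetical, and a real deployment would use rsyslog, syslog-ng, or the vendor's collector rather than hand-rolled sockets:

```python
import socket
import ssl
from datetime import datetime, timezone

# Hypothetical relay endpoint; 6514 is the conventional syslog-over-TLS port.
RELAY_HOST, RELAY_PORT = "devo-relay.example.internal", 6514

def send_syslog(message: str, hostname: str = "linux-host-01") -> None:
    # RFC 3164-style frame: <priority>timestamp host message
    ts = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    frame = f"<13>{ts} {hostname} {message}\n".encode()

    ctx = ssl.create_default_context()
    with socket.create_connection((RELAY_HOST, RELAY_PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=RELAY_HOST) as tls:
            tls.sendall(frame)  # the relay re-encrypts and ships to the cloud

send_syslog("sshd[1042]: Accepted publickey for deploy from 10.0.0.5")
```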

What we send to Devo includes all of our Unix-based logs. These are the host logs, as well as logs from a lot of the network devices such as Cisco switches. Currently, we are working with Devo to set up a new agent infrastructure, and the agents will collect Windows event logs.

We were using a beta product that Devo provided for us, which was based on an open-source platform called Osquery. That did not quite work for the volume of logs that we have. It didn't seem to be able to keep up with the large number of servers, or the large Windows event log volume, that we have in our environment. We're currently working with them to transition to NXLog and use its agents, which work really well to forward the logs to Devo.

We also send cloud logs to Devo, and they have their own collector that handles a lot of that. It basically pulls the logs out of our cloud environment. We are sending Office365 management logs, as well as a lot of Azure PaaS service logs. We're sending those through an event hub to Devo. We are currently working on onboarding some AWS logs as well.
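
The event-hub hop can be pictured with a short sketch using the azure-eventhub Python SDK, standing in for what the cloud collector does internally; the connection string and hub name are placeholders:

```python
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    "<event-hub-namespace-connection-string>",
    consumer_group="$Default",
    eventhub_name="insights-logs",  # hypothetical hub fed by diagnostic settings
)

def on_event(partition_context, event):
    # Each event body is a JSON batch of Azure diagnostic records.
    print(partition_context.partition_id, event.body_as_str())
    partition_context.update_checkpoint(event)

with client:
    client.receive(on_event=on_event, starting_position="-1")  # from the start
```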

We have several corporate locations, with the main location in the US. That is where the majority of our resources are, but we also have Devo relays stood up in Canada, Australia, and India. These locations operate in a way that is similar to what is described above, although on a smaller scale. They're sending all of their Unix devices' syslogs to the relay, and I believe only Australia at the moment is using agents to pull Windows logs. Canada is using a different SIEM at the moment, although that contract is about to expire, so then we'll onboard their Windows event logs as well. India does not have any Windows servers that need an agent for collecting logs, so they just send the Linux and Unix logs over the relay to Devo.

Our main use case and customer base are our security operations center analysts. A lot of our process was built up and carried over from our previous SIEM, LogRhythm. We have an alerting structure built out that initiates a standard analyst workflow.

It starts when you get an alert. You drill down in the logs and investigate to see if it's a false positive or not.

We are in the process of onboarding our internal networking team into Devo, and we are gathering a lot of network logs. This means that they can monitor the health of our networking infrastructure, and at some point, maybe set up health alerts for whatever they are looking for.

We have another team that is using Devo, which is our internal fraud team. They're very similar to SOC analysts, in that they look for suspicious events. They are especially interested in tax filing and e-filing. We gather logs for that, and they go through a really deep investigative workflow.

How has it helped my organization?

One of the immediate improvements that comes to mind is the amount of hot, searchable data. In the SIEM we had before, we were only able to search back 90 days of hot, searchable data, whereas here we have 400 days' worth. That has definitely improved our threat hunting capabilities.

We're also able to ingest quite a bit more data than we were before. We're able to ingest a lot of our NetFlow data, which, if we had sent it to our previous SIEM, would have brought it to its knees. The amount of data that the analysts are able to see and investigate has been a really big benefit; I'd say that's the biggest benefit it has provided.

I myself do not leverage the 400 days of hot data that Devo keeps to look at historical patterns or analyze trends. A lot of times I will look at it to see the log volumes and the traffic, and to make sure there are no bottlenecks in how log sources are sending to Devo. For certain cases, the analysts will definitely go back and retroactively view where a user was logging in, for example. At the moment, we haven't really had a use case that pushes the limit of that 400 days, so to speak, and goes really far back. We definitely use the past couple of months of data for a lot of the analyst cases.

This is an important feature for our company, especially after the recent SolarWinds attack, which was a big deal. We did not have Devo available then, and because that activity happened so far in the past, it was a struggle to pull the data needed to look for those IOCs. That was definitely a really big selling point for this platform with our company.

Devo definitely provides us with more clarity when it comes to network, endpoint, or cloud visibility. We're able to onboard a lot of our NetFlow logs and drill down on what the network traffic looks like in our environment. For cloud visibility, we're still working on trying to conceptualize that data and really get a grasp of it, to make sure that we understand what those logs mean and what resources they're looking at. Also, there's a company push to make sure that everything in the cloud is actually logging to Devo. As far as cloud visibility goes, we as a company need to analyze and conceptualize it a little bit more. For network visibility, I would say that Devo has definitely helped.

The fact that Devo stores the data raw, and doesn't perform any transformation on it, really gives us confidence that what we are looking at is accurate. The ability to send a bunch of data to Devo, without worrying about whether the infrastructure can handle it, allows us to have a bigger and better view of our environment. We're collecting a lot more types of log sources than we were before, so when we make decisions, we can see all sides of the issue and back the decision up with data; not just random data, but data we can query and display in a way that supports the decision we're making.

Devo helps to release the full potential of all our data. The Activeboards, the interactive dashboards that Devo provides, really help us filter our data and have a workflow. There are a lot of different widgets available for us to visualize the data in different ways. The Activeboards can be a little slow at times, a little difficult to load, and a little heavy on the browser, so sometimes the speed of that visualization is not quite as fast as I would like, but it's balanced by the vast number of options that we have.

Like all security companies and security departments, we really wanted that single pane of glass, and the Devo Activeboards really allow us to have it. Being able to visualize the data is really important to us as a company. I haven't found that the loading speeds have become a significant roadblock for any of our workflows; faster loading would be an enhancement, a nice-to-have.

We all want everything faster, so it's definitely not a roadblock, but the ability to represent the data in that visualized format is very important to us. It's been really helpful, especially because we have a couple of IT managers, non-technical people, that I am onboarding into the platform because they just want an overall high-level view, like how many users were added to a specific group, or how many users have logged in over the last X days. The ability to provide them not only with that high-level view, but to let them drill down and interact with it, has been super helpful for us as a company.

Devo has definitely saved us time. The SIEM that we were on before was completely on-prem, so there were a lot of admin activities that I would have to do as an engineer that took away from my time contextualizing the data, parsing out the data, or fulfilling analysts' requests and making enhancements. The fact that Devo is a SaaS platform has saved me a ton of time by taking away all those SIEM admin activities.

I wouldn't say that it really increased the speed of investigations, but it definitely didn't slow them down either. The analysts can do a lot more analysis on their own, so that cuts down the time it takes to reach out to other people. Previously, if you went back more than 90 days, you had to go through a time-consuming process of restoring archives. The analysts don't have to do that anymore, so that also cuts out several days' worth of waiting for the archive restoration process to complete. Now you just pull the data back and it's searchable; it's right there. Overall, I would say Devo has definitely saved us a lot of time. For the engineering space, I would say it saves on average about one business day's worth of time every two weeks, because with on-prem infrastructure there were instances where it would go down and I'd have to stay up half the night, or the whole night, to get it back up. I haven't had to do that with the Devo platform because I'm not managing that infrastructure.

What is most valuable?

We are also using some of the other components, such as the Devo Relay, which helps us ship logs to Devo.

The most valuable feature is definitely Devo's ability to ingest data. The previous SIEM that I came from and helped my company administer was the type of system where data was parsed on ingest. This meant that if you didn't build the parser efficiently or correctly, it could sometimes bring the system to its knees, leaving a backlog of logs waiting to be processed as they were ingested.

One thing that I love about Devo is that you can send it data in a raw format. It's not going to try to parse the data until you query it. This makes it really flexible for us because, if the analysts come to us and explain that they need a specific log source, we can just work on the transport, on how to get it to Devo. We don't have to worry about parsing it until later. We can see the data in the platform, and then we can use queries to contextualize it, parsing out whatever metadata we need.

I really like the flexibility that the queries offer to parse out the data. Parsing out JSON logs, for example, is very easy: you don't have to mess with regex, it's literally just a point-and-click interface. That has been incredible. In a nutshell, one of my favorite parts is that they really have captured the essence of "send us all your data." You don't have to worry about how to parse it. You can get the data onboard and then perform transformations on it later, and the transformations that you can perform are super flexible.
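
A toy illustration of that parse-at-query-time model (not Devo's actual engine): ingestion stays a cheap append of raw strings, and field extraction is deferred until a query runs.

```python
import json

raw_store: list[str] = []          # ingest path: append-only, no parsing

def ingest(raw_event: str) -> None:
    raw_store.append(raw_event)    # cheap, never blocks on a bad parser

def query(field: str, value: str) -> list[dict]:
    results = []
    for raw in raw_store:          # the parsing cost is paid here, per query
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            continue               # malformed events are skipped, not fatal
        if event.get(field) == value:
            results.append(event)
    return results

ingest('{"user": "alice", "action": "login"}')
ingest("not json at all")          # would have stalled a parse-on-ingest SIEM
print(query("action", "login"))    # [{'user': 'alice', 'action': 'login'}]
```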

Devo definitely provides high-speed search capabilities and real-time analytics. The search can be a little slow at times, but for the amount of data that we're pulling back, relatively speaking, I would say that the speed is very nice. The ability to pull back large amounts of data, and the amount of data that they keep hot and searchable for us, is incredible. I would definitely say that they provide real-time analytics and searching.

I have heard from other customers that the multi-tenancy capabilities are pretty good, but I don't have much experience with that at H&R Block.

What needs improvement?

When it comes to ease of use for analysts, that's an area where they need to work a little bit. Devo offers its own version of a case management platform, called Devo SecOps. They did offer it to us; it's part of our contract with them. The analysts have found that the workflow isn't very intuitive. There are a couple of bugs within the platform, so we are actually sticking with our old case management platform right now and trying to work with Devo to help iron out the roadblocks the analysts are facing. Mostly it seems like they have trouble figuring out where the actual case is. A lot of the search features that are in the main Devo UI don't translate over into the SecOps module; they seem separate and disjointed. The core of the platform, where we have all of the data, isn't integrated as well as we would like with their case management system. There's a lot of pivoting back and forth, and the analysts can't really stay in the SecOps platform, which adds some bumps to their workflow.

The SecOps module also needs improvement. It should be more closely integrated with the original platform, and the data search abilities in the SecOps platform should be made more like those on the administrator's side of the platform.

From our experience, the Devo agent needs some work. They built it on top of osquery's open-source framework, and it seems like it wasn't tuned properly to handle a large volume of Windows event logs. In our experience, there is definitely some room for improvement. A lot of SIEMs on the market have their own agent infrastructure. I think Devo is working towards that, but it needs some improvement as far as keeping up with high-volume environments.

For how long have I used the solution?

We implemented Devo as a PoC last year but only started using it officially a few months ago.

What do I think about the stability of the solution?

It's a Devo-managed SaaS cloud platform. It does seem like lately they've been having trouble keeping up with the large volume of events, maybe due to other customers besides H&R Block that share their cloud infrastructure; we have noticed some slowness and some downtime. I would say it's definitely not more than a maximum of three hours every two weeks. It's usually not a lot of downtime or slowness, at least not to the point where we cannot work within the platform, but it does seem to have been picking up a little bit lately, which is why I average it out at around three hours every two weeks. As far as overall stability goes, the uptime has been really great. If it's "down," it's really more that searches run slowly, but you can still get into the platform. It's not that everything is down and you can't look at alerts; that rarely ever happens. Overall, it's pretty stable and it allows our analysts to stay on the platform.

What do I think about the scalability of the solution?

In terms of scalability, we are able to ingest as much or as little data as we want, which is really awesome. I've been pretty amazed at how much we're able to throw at it. We can expand as much as we want to suit our needs, obviously within the confines of the subscription agreement. There is a data cap, but within that limit we can really go crazy. The scalability is awesome.

There are about 50 or so users on the platform right now. We have our SOC analysts at different levels that just perform investigative activities. The majority of our clients on the platform are our security operations center analysts. They have different privileges based on their roles. We give them the ability to create test alerts if they're trying something out.

We have various other team members throughout our corporation using it, just two or three here and there. We have three individuals from our networking team and a couple of individuals from IT support who often utilize the platform to investigate user lockouts and things like that. Of course, we have the engineers on the platform, who number five or six individuals. The main user base is our SOC analysts.

We have a team that does maintenance for our servers and such; we don't really have them on the platform at the moment. They have their own kind of tools, and they utilize their graphs to monitor the health of our infrastructure, but that may be something we pursue at some point in the future. The more teams that we get into our SIEM, the better, because it really justifies the usage of the tool. Right now, from a maintenance perspective, the only IT staff using it for that sort of thing would be our networking team, and we have about three individuals that we just recently onboarded, so they're just getting used to the platform.

Devo is mostly being used for security logs. There's a push to start using it not only for security monitoring but for infrastructure health monitoring as well, so we're starting with the networking team. We really are still in phase one of fleshing Devo out, adding more enhancements and alerts. My primary role is to support the security operations center, the security aspect of things. I would say that we are definitely going to try to push to utilize Devo more throughout our organization, for health monitoring and for the networking team to use. Perhaps at some point in the future we would expand our usage. It's not set in stone yet, but I could definitely see that happening.

How are customer service and support?

We have a couple of tickets open with support, but mostly for platform health monitoring questions. We do have some regarding alert logic, but nothing so important that we actually had to call Devo tech support.

Their professional services team works really hard on building out Activeboards for us and helping us make sure that we are monitoring the health of our platform. Overall, I'd say they're definitely really collaborative. They want to hear what's going well for us and what's not. 

Sometimes they're not quite as responsive as I would like, but I think that is due to the recent transition process; we are giving them some time to get adjusted to our account and get things set. From a support standpoint, they have definitely been responsive to the cases that we have open, which has been really helpful. I've had instances in the past with vendors where it takes several weeks to hear anything back on a case.

Which solution did I use previously and why did I switch?

Prior to Devo, we used the LogRhythm SIEM.

We switched mainly because of the ability to ingest more data. In certain instances, we had to say no to onboarding certain log sources because the cost-benefit didn't weigh out against the value they offered. LogRhythm got to the point where, if you added too much data, if you had too much volume being ingested, it would start breaking; it would start complaining and things would just go bad. The amount of downtime we had with LogRhythm was really the main metric driving us to transition to Devo. What really appealed to us about Devo versus other SIEMs was their "give us all your data" model. That was something we were really struggling with and really wanted from a SIEM: we wanted to correlate between as many data sources as possible. They offered us that capability, and LogRhythm really did not.

How was the initial setup?

The initial setup was fairly straightforward. I'm not sure if Devo considers their agent infrastructure part of this offering, because it was beta, but that part was rather complex. We didn't get a lot of details on the specs needed when we set up the agent manager infrastructure in our environment, so that was pretty complicated. But as far as onboarding the data and offloading all of the alerts that we had out of our old SIEM, it was pretty straightforward.

One thing that would have made it easier would be for Devo to have more out-of-the-box alerts. The previous SIEM that we had came with a lot of alerts from their research and their labs, and we really built on top of those. We had to build a lot of our alerts from scratch to transition them from our old SIEM to Devo. If Devo had its own alert library, or a more fully fleshed-out alert library, that aspect would have been a little less time-consuming. Otherwise, the data onboarding and the data ingestion were very straightforward as SIEM transitions go. The relay they provided gave us a single point to send everything to.

We completely shifted from our old SIEM to Devo in about three to four months, which by industry standards is very quick. It was a combination of a lot of hard work and teamwork on our side, but again, the ease of getting data into Devo took off a lot of that time. We had a single point to send it to, which helped with the transition.

In terms of our implementation strategy, our initial goal was to get everything out of our old SIEM. First we made sure that all of the log sources that were there would be redirected to Devo, and we set up the appropriate components to forward them. Once all the data was in, we started transitioning the alerts, building out the alert logic and making sure it matched what we had in our old SIEM. After that came onboarding all the users and making sure that the RBAC controls were in place.

What about the implementation team?

We did use a consultant: we worked with Novacoast, an MSSP we had a relationship with in the past. They mainly helped us with building the alert logic.

What's my experience with pricing, setup cost, and licensing?

You definitely get what you pay for. Devo has offered a lot of extra features to justify the price. Devo worries about managing the infrastructure: how it's going to handle the volume, how it's going to store it, and all those things. That allows us to not need as many engineers or as many engineering hours; we can devote that time to other things. That's the biggest benefit for the cost.

I have seen in the Devo documentation that, for certain aggregation tasks you have running in the background, you could be charged extra. I've been meaning to get clarification on that.

Which other solutions did I evaluate?

We were considering Azure Sentinel, mainly because we're an Azure shop. That was really the only other solution we looked at. We sent it out to a couple of vendors, but none of them fulfilled our needs as much as those two, so it was really just between Azure Sentinel and Devo. Azure Sentinel didn't seem to offer as many features as Devo did, which is why we chose Devo for our POC.

From an analyst perspective, something that was unique about Devo versus other SIEMs was the immense contextualization capability it has, because the platform is really based on querying the data and performing all the contextualization in the query.

Another thing that we were thinking about was the learning curve with querying and using the UI. Devo's response to that learning curve was to provide a lot of training for us, which really helped the analysts.

Compared with our previous solution, Devo definitely allows us to ingest more data than we could with LogRhythm. I don't think the others had 400 days of hot, searchable data; they did not have that much time available as hot and searchable. We definitely have the ability to ingest more data and store it for longer periods of time. The big next-gen capability of Devo that we were looking for is that, unlike other SIEMs that I've seen or administered, you don't parse on ingest; you parse after you get the raw data into the platform. That really removes the roadblock.

What other advice do I have?

I have been with the company for approximately three years and in the engineering space for about two.

If "the more data, the better" is the goal for your organization, then Devo is really the way to go. But if you're looking for a super robust analyst interface and a next-gen analyst workflow, I don't think Devo is at that point yet. They're more at the point where you can ingest a lot of data and perform visualizations on it really well.

One of the things that I really like about Devo is the ability to parse the data after you ingest it, and not just that ability by itself: there are so many different ways to do it.

I would definitely explore parsing the data out yourself because, for me, the first couple of times, it was a little difficult to get used to the query language and everything. But now, when someone asks for something to be parsed out in a certain way, it's super easy. Explore the ability to use the queries to parse out data; it gives you independence and the ability to represent data however you want.

Devo definitely has all the next-gen concepts that I haven't really seen in any other SIEM, but I do think that they definitely have some more room for improvement. A lot of SIEMs offer their own agent and Devo does not at the moment. I would rate Devo a seven out of ten.

Most of what we saw in our POC with them was a "wow" moment; this platform can address anything. All of the features met my expectations from the POC. The onboarding and integration have definitely improved our workflow, but the "wow" moment was when we had our proof of concept with them and saw what the platform could do, and then it really lived up to that.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Security Engineer at a recreational facilities/services company with 10,001+ employees
Real User
Top 10
Very versatile for many use cases
Pros and Cons
  • "The feature that I have found most valuable with Splunk is the ability to sift through a bunch of data very quickly."
  • "Their technical support sucks."

What is our primary use case?

We are using Splunk in the standard information security use case. We're also using it for various application use cases around identity management, Windows Active Directory, and those types of use cases.

How has it helped my organization?

Splunk has provided a venue for us to determine student engagement during COVID, which we didn't really have any other way of doing except by looking at data captured off of our student systems and our authentication servers to see who's logging in, who's logging out, and for how long they've been logged in.
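
A sketch of how such an engagement report might be pulled with the Splunk Python SDK; the host, credentials, index, sourcetype, and the SPL itself are hypothetical starting points, not the actual search:

```python
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.edu", port=8089,
    username="admin", password="<password>",
)

# Pair login/logout events per user and total the session time.
spl = """
search index=auth sourcetype=radius earliest=-30d
| transaction user startswith="login" endswith="logout"
| stats sum(duration) as seconds_logged_in, count as sessions by user
| sort - seconds_logged_in
"""

stream = service.jobs.oneshot(spl, output_mode="json")
for result in results.JSONResultsReader(stream):
    if isinstance(result, dict):   # skip diagnostic messages
        print(result)
```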

What is most valuable?

The feature that I have found most valuable with Splunk is the ability to sift through a bunch of data very quickly.

We have about a 500 gig license with Splunk, so it's not like petabytes of data, but even 500 gigs is kind of hard to sift through sometimes.

What needs improvement?

Splunk has been improving consistently over the last couple of revs. I still think there are some administrative features they could improve on and make less kludgy, but from a user perspective, it has gotten very clean and very sexy looking over the last few builds, so the users seem to like it.

By less kludgy, I mean that in the version I'm running, I still have to go into the command line and modify files and then go into the GUI and validate that they got modified. So it's not all in the GUI, but it has been moving slowly to the GUI over the last several versions. It would be nice if they could move all of the administrative features into the GUI so that, when you're in Splunk's distributed environment management platform, you don't have to go into the command line to add new applications or packages that you want to push out to your forwarders. Their forwarder management is still kind of split that way.
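
As an example of that command-line side, pushing a new input out to forwarders through a deployment server still means editing a .conf file by hand and then reloading. A minimal sketch, with hypothetical paths, app name, and index:

```python
from pathlib import Path
import subprocess

# Hypothetical deployment app; forwarders subscribed to it pick up the change.
app_conf = Path("/opt/splunk/etc/deployment-apps/my_app/local/inputs.conf")
stanza = "\n[monitor:///var/log/myapp/*.log]\nindex = main\nsourcetype = myapp\n"

app_conf.parent.mkdir(parents=True, exist_ok=True)
with app_conf.open("a") as f:
    f.write(stanza)                # append the new monitor stanza

# Standard Splunk CLI command to make the deployment server re-read its apps.
subprocess.run(["/opt/splunk/bin/splunk", "reload", "deploy-server"], check=True)
```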

I don't really have any feature requests in Splunk's space. They seem to be doing a good job of keeping it contemporary from that perspective. 

Splunk's mission is to move everyone to the cloud and charge us a bunch more money. Their goal is to cloud-source everything, and quite honestly, the price of cloud-sourcing the product, even at a smaller 500 gigs a day (which isn't a lot of data by Splunk standards), is ludicrous. The cost for me to buy equipment every three years, own the licensing, and run it locally on-premises is significantly less over a three- or five-year term: I'm going to spend X amount of money on hardware every few years and pay licensing costs on the software over that same period, and the amount I'd amortize over five years on-premises is what I would be paying every single year in the cloud.
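
A worked version of that amortization argument, with purely hypothetical figures rather than actual Splunk or cloud pricing:

```python
# Illustrative numbers only: compare a 5-year on-prem spend with a cloud
# subscription for a comparable 500 GB/day deployment.
years = 5
hardware = 150_000             # refreshed once within the 5-year window
on_prem_license = 60_000       # per-year term license
cloud_subscription = 300_000   # per-year cloud cost, all-inclusive

on_prem_total = hardware + on_prem_license * years
cloud_total = cloud_subscription * years

print(f"on-prem over {years} years: ${on_prem_total:,}")    # $450,000
print(f"cloud over {years} years:   ${cloud_total:,}")      # $1,500,000
print(f"cloud premium: {cloud_total / on_prem_total:.1f}x")  # 3.3x
```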

That is the problem with the product. They are so focused on forcing everyone into the cloud that they don't seem to understand that there are people who don't have those really deep pockets. It's one thing for a Fortune 50 company to spend a million dollars a year in the cloud. It's another thing when you're a nonprofit educational institution spending that kind of money in the cloud. Even though we do get some discounts from most of the cloud space providers, it is still not on par with the big public businesses.

For how long have I used the solution?

I have been using Splunk for probably 10 years.

What do I think about the stability of the solution?

At least in our environment, it is super stable. When you think about how much time you spend working with other applications (just a Windows server requires more feeding than Splunk does), you see that Splunk is a very low-maintenance product in terms of care and feeding.

We have probably 150 users in the environment, and their roles vary from application management folks to application engineering folks to the executive suite, so there are lots of different use cases. The executive suite tends to prefer more curated content, and the application owners have a mix of curated content and dynamic search functions they can perform. The engineering tier basically gets some curated content and some free rein to do whatever they want, for the most part. I'm the guy that supports this instance, so there's one person.

I support not only Splunk; I am also the campus security engineer, and I'm the dude that runs, or is responsible for, all of our campus monitoring infrastructure. That tells you how little maintenance is required.

We are adding new use cases on a fairly regular basis and adding more volume to our indexing license. I don't see Splunk going away. There's nothing else that I think provides this much data analytics capability for the amount of equipment you need to run it, or for the number of people you need to keep it functioning well. In higher ed, everybody always says we should do open source. And I respond that what I do in Splunk with 20 systems would take three racks of equipment to do on an open-source platform. I have basically 70 to 75% of a rack now, and I'd need three times that or more to run this as an open-source product. And it wouldn't be as cute, and it wouldn't be as beautiful or as flexible.

What do I think about the scalability of the solution?

I know other folks in the higher-ed space that are running petabyte-size instances with Splunk, so I would have to say it scales very well, just from talking to the folks in my market silo.

How are customer service and support?

Their technical support sucks.

My engagement with their technical support was for a product they basically took over from an open-source project, and they just seemed unable to figure out why it wasn't doing what it was supposed to do. I've had to engage with Splunk support for a couple of use cases, and in every one of those cases, support was very painful. It took a very long time, and it seemed like they were more interested in burning down their queue volume than actually satisfying me as a customer.

I work in higher ed, and it costs us a lot of money to run Splunk, but the support from this company that you spend a lot of money with is pretty poor. I get most of my support through the Splunk sales folks because they seem to know more and they're more incentivized to keep me as a customer. When I call in to open a ticket with Splunk support, they really don't know, and this is going to sound terrible, they don't really care whether I have a 50 meg license or a 50 petabyte license. If it's not on their workflow, their pre-programmed triage, they can't do it.

Which solution did I use previously and why did I switch?

Splunk came into being at Case Western when we were looking for a better log product than Check Point was providing at that point in time. My entire investment in Splunk, in hardware, software, and integration cost, was cheaper than what Check Point was going to provide, or what the Check Point solution path was, for just looking at firewall data. We knew we needed to be able to do more analytics than we were getting out of our firewall products, and Splunk was brought in to do that. It can do this and a whole lot more.

How was the initial setup?

Splunk is a complex critter to put in, and it's a more complex critter to keep running. We have 10 search heads and four indexers, plus universal forwarders and a heavy forwarder cluster. We have clustered indexers and clustered search heads. This is definitely not a drag-and-drop product.

We engaged a third-party Splunk integrator to help us with our Splunk deployment, and they did our initial deployment. We used a different integrator to do some of our upgrades, which we probably won't use again. Our implementation strategy, when we put this in 10 years ago, was really just to cover the classic security use case. Then, after that came in and everybody was happy with what it was doing, we added other use cases, universal forwarding, and so on and so forth.

What about the implementation team?

We used an integrator.

The integrator we used to do our initial deployment was excellent. The integrator we used to do our last round of upgrades was less than excellent.

When I hire an integrator to do an upgrade in an environment, I expect them to come back and say, "All of your application-layer apps are upgradeable, but your OSes need to be upgraded. Do you want me to do that, or should you do that?" I now have different versions of OSes under Splunk running in my Linux world, and it would've been nice to upgrade the system OS and then upgrade Splunk, even if it was more disruptive. I guess I have to read the statement of work more closely in the future.

What was our ROI?

The TCO and ROI are really great if you're in the private, non-public sector, in a more standard business. We're somebody who doesn't fit into that neat silo, so do we even calculate that stuff? Our return on investment comes from being able to solve problems that we never knew we could solve. My answer to it is the flexibility to figure out student engagement when COVID hit; this was the only platform we could do it on.

What's my experience with pricing, setup cost, and licensing?

I can comment on price in this way: in education in Ohio, we're part of the Ohio supercomputer consortium, and they act as a collective bargaining agent, so we get our licensing as a piece of the State of Ohio's Splunk license. My pricing is very much not list, or even reduced list, because of the volume that the state buys.

We generally spend about $20,000 a year in third party integrator costs to get us past some of the rough edges that we get with Splunk support.

Which other solutions did I evaluate?

We briefly looked at the open-source products, and we obviously looked at a Check Point product. When we looked at Splunk, it seemed like it had a smaller cost to procure and a much smaller cost to maintain than all of those other solutions, so that is why we went with Splunk. This is very counterintuitive, since everybody says they love Splunk but it costs too much.

What other advice do I have?

My advice to anyone considering Splunk is to understand exactly how much data you want to look at and bring in on a daily basis. Then create a rational strategy to bring the data in, in reasonably sized chunks that fulfill one use case at a time.

On a scale of one to ten, I would rate Splunk a really good nine.

I'd rate it a really good nine because it's really versatile. You can do a lot of things with it. It allows you to do a lot of analytics in the platform without needing a bunch of other third-party software to help you figure it out.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Sean Moore - PeerSpot reviewer
Lead Azure Sentinel Architect at a financial services firm with 10,001+ employees
Real User
Top 20
Quick to deploy, good performance, and automatically scales with our requirements
Pros and Cons
  • "The most valuable feature is the performance because unlike legacy SIEMs that were on-premises, it does not require as much maintenance."
  • "If Azure Sentinel had the ability to ingest Azure services from different tenants into another tenant that was hosting Azure Sentinel, and not lose any metadata, that would be a huge benefit to a lot of companies."

What is our primary use case?

Azure Sentinel is a next-generation SIEM, which is purely cloud-based. There is no on-premises deployment. We primarily use it to leverage the machine learning and AI capabilities that are embedded in the solution.

How has it helped my organization?

This solution has helped to improve our security posture in several ways. It includes machine learning and AI capabilities, but it also has the functionality to ingest threat intelligence into the platform. Doing so can further enrich the events and the data stored in the backend Sentinel database. Not only does that improve your detection capability, but when it comes to threat hunting, you can leverage that threat intelligence, and it gives you a much wider scope to hunt against.
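
A sketch of that kind of threat-intel-driven hunt, run here through the azure-monitor-query Python SDK: the KQL joins ingested indicators against sign-in events. The table and column names follow Sentinel's documented schema, but treat it as an illustrative starting point rather than a production detection:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Match TI network indicators against Azure AD sign-in source addresses.
query = """
ThreatIntelligenceIndicator
| where isnotempty(NetworkIP)
| join kind=inner (SigninLogs) on $left.NetworkIP == $right.IPAddress
| project TimeGenerated, NetworkIP, UserPrincipalName, Description
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=7),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```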

The fact that this is a next-generation SIEM is important because everybody's going through a digital transformation at the moment, and there is actually only one true next-generation SIEM. That is Azure Sentinel. There are no competing products at the moment.

The main benefit is that as companies migrate their systems and services into the cloud, especially if they're migrating into Azure, they've got a native SIEM available to them immediately. With the market being predominantly Microsoft, where perhaps 90% of the market uses Microsoft products, there are a lot of Microsoft houses out there, and migration to Azure is common.

Legacy SIEMs used to take time in planning and in looking at the hardware specifications that were required. Getting an on-premises SIEM in place could take a month, whereas with Azure Sentinel, you can have it available within two minutes.

This product improves our end-user experience because of the enhanced ability to detect problems. We have Microsoft Defender installed on all of the Windows devices, for instance, and the telemetry from Defender is sent to the Azure Defender portal. All of that analysis in Defender, including the alerts and incidents, can be forwarded into Sentinel. This improves the security monitoring team's ability to detect when a user has malicious software or files on their laptop, for instance.

What is most valuable?

It gives you that single pane of glass view for all of your security incidents, whether they're coming from Azure, AWS, or even GCP. You can actually expand the toolset from Azure Sentinel out to other Azure services as well.

The most valuable feature is the performance because unlike legacy SIEMs that were on-premises, it does not require as much maintenance. With an on-premises SIEM, you needed to maintain the hardware and you needed to upgrade the hardware, whereas, with Azure Sentinel, it's auto-scaling. This means that there is no need to worry about any performance impact. You can send very large volumes of data to Azure Sentinel and still have the performance that you need.

What needs improvement?

When you ingest data into Azure Sentinel, not all of the events map to a native table. The way it works is that events are written to a native Sentinel table, but some events haven't got a native table available to them. In that case, anything Sentinel doesn't recognize is put into a custom table, which is something that you need to create. What would be good is an extension of the Azure Sentinel schema to cover a lot more technologies, so that you don't have to have custom tables.
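
For example, custom log tables in a Log Analytics workspace carry a _CL suffix and are queried with KQL just like native tables; the table name, column, and workspace ID below are hypothetical:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# MyAppliance_CL is a made-up custom table; a native table such as
# SecurityEvent would be queried the same way when one exists.
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query="MyAppliance_CL | summarize events = count() by Computer | top 10 by events",
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```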

If Azure Sentinel had the ability to ingest Azure services from different tenants into another tenant that was hosting Azure Sentinel, and not lose any metadata, that would be a huge benefit to a lot of companies.

For how long have I used the solution?

I have been using Azure Sentinel for between 18 months and two years.

What do I think about the stability of the solution?

I work in the UK South region and it very rarely has not been available. I'd say its availability is probably 99.9%.

What do I think about the scalability of the solution?

This is an extremely scalable product and you don't have to worry about that because as a SaaS, it auto-scales.

We have between 20 and 30 people who use it. I lead the delivery team, who are the engineers, and we've got some KQL programmers for developing the use cases. Then we hand that over to the security monitoring team, who actually use the tool and monitor it. They deal with the alerts and incidents, as well as doing threat hunting and related tasks.

We use this solution extensively and our usage will only increase.

How are customer service and support?

I would rate the Microsoft technical support a nine out of ten.

Support is very good but there is always room for improvement.

Which solution did I use previously and why did I switch?

I have personally used ArcSight, Splunk, and LogRhythm.

Comparing Azure Sentinel with these other solutions, the first thing to consider is scalability. That is something that you don't have to worry about anymore. It's excellent.

ArcSight was very good, although it had its problems the way all SIEMs do.

Azure Sentinel is very good but as it matures, I think it will probably be one of the best SIEMs that we've had available to us. There are too many pros and cons to adequately compare all of these products.

How was the initial setup?

The standard Azure Sentinel setup is very easy. It is just a case of creating a Log Analytics workspace and then enabling Azure Sentinel to sit over the top. It's very easy; the challenge is actually getting the events into Azure Sentinel. That's the tricky part.
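
That first step can even be scripted. A minimal sketch using the azure-mgmt-loganalytics Python SDK, with hypothetical subscription, resource group, and workspace names; enabling Sentinel over the resulting workspace is then done in the portal or through the SecurityInsights API:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

credential = DefaultAzureCredential()
client = LogAnalyticsManagementClient(credential, "<subscription-id>")

# Create the Log Analytics workspace that Sentinel will sit on top of.
poller = client.workspaces.begin_create_or_update(
    "rg-sentinel",
    "law-sentinel-uksouth",
    {
        "location": "uksouth",
        "sku": {"name": "PerGB2018"},  # pay-as-you-go ingestion tier
        "retention_in_days": 90,
    },
)
workspace = poller.result()
print(workspace.id)
```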

If you are talking about the actual platform itself, the initial setup is really simple. Onboarding is where the challenge is. Then, once you've onboarded, the other challenge is that you need to develop your use cases using KQL as the query language. You need to have expertise in KQL, which is a very new language.

The actual platform will take approximately 10 minutes to deploy. The onboarding, however, is something that we're still doing now. It's use case development and it's an ongoing process that never ends. You are always onboarding.

It's a little bit like setting up a configuration management platform and then only using it to push out one configuration.

What was our ROI?

We are getting to the point where we see a return on our investment. We're not 100% yet but getting there.

What's my experience with pricing, setup cost, and licensing?

Azure Sentinel is very costly, or at least it appears to be very costly. The costs vary based on your ingestion and your retention charges. Although it's very costly to ingest and store data, what you've got to remember is that you don't have on-premises maintenance, you don't have hardware replacement, you don't have the software licensing that goes with that, you don't have the configuration management, and you don't have the licensing management. All of these costs that you incur with an on-premises deployment are taken away.

This is not to mention running data centers and the associated costs, including powering and cooling them. All of those expenses are removed. When you consider those costs and compare them to Azure Sentinel, you can see that it's comparable, or if not, Azure Sentinel offers better value for money.

All things considered, it really depends on how much you ingest into the solution and how much you retain.

Which other solutions did I evaluate?

There are no competitors. Azure Sentinel is the only next-generation SIEM.

What other advice do I have?

This is a product that I highly recommend, for all of the positives that I've mentioned. The transition from an on-premises to a cloud-based SIEM is something that I've actually done, and it's not overly complicated. It doesn't have to be a complex migration, which is something a lot of companies may be wary of.

Overall, this is a good product but there are parts of Sentinel that need improvement. There are some things that need to be more adaptable and more versatile.

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Andris Soroka - PeerSpot reviewer
Co-owner and CEO at Data Security Solutions
Real User
Top 20
Best price-performance ratio, good scalability, and easy to set up
Pros and Cons
  • "We have worked with other solutions, such as LogRhythm and Splunk. Compared to others, IBM QRadar has the best price-performance ratio so that you are able to reserve minimum costs. It starts settling in fast and gets the first results very quickly. It is also very scalable."
  • "There are a lot of things they are working on and a lot of technologies that are not yet there. They should probably work out a better reserve with their ecosystem of business partners and create wider and more in-depth qualities, third-party tools, and add-ons. These things really give immediate business value. For instance, there are many limitations in using SAP, EBS, or Micro-Dynamics. A lot of things that are happening in those platforms could also be monitored and allowed from the cybersecurity risks perspective. IBM might be leaving this gap or empty space for business partners. Some larger organizations might already be doing this. It would be very nice if IBM can make some artificial intelligence part free of charge for all current QRadar users. This would be a big advantage as compared to other competitors. There are companies that are going in different directions. Of course, you can't do everything inside QRadar. In general, it might be very good for all players to provide more use cases, especially regarding data protection and leakage prevention. There are some who are already doing some kind of file integrity or gathering some more information from all possible technologies for building anything related to the user and data analysis, content analysis, and management regarding the data protection."

What is our primary use case?

I am a system integrator. We have installed it on-premises, in the cloud, in distributed environments, and in all other kinds of environments for our clients.

What is most valuable?

We have worked with other solutions, such as LogRhythm and Splunk. Compared to others, IBM QRadar has the best price-performance ratio, so you are able to keep costs to a minimum. It starts up fast and gets the first results very quickly. It is also very scalable.

What needs improvement?

There are a lot of things they are working on and a lot of technologies that are not yet there. They should probably work more closely with their ecosystem of business partners and create wider and more in-depth integrations, third-party tools, and add-ons. These things really give immediate business value. For instance, there are many limitations in using SAP, EBS, or Microsoft Dynamics. A lot of things that are happening in those platforms could also be monitored and alerted on from a cybersecurity-risk perspective. IBM might be leaving this gap or empty space for business partners. Some larger organizations might already be doing this.

It would be very nice if IBM could make some artificial intelligence features free of charge for all current QRadar users. This would be a big advantage compared to other competitors.

There are companies that are going in different directions. Of course, you can't do everything inside QRadar. In general, it might be very good for all players to provide more use cases, especially regarding data protection and leakage prevention. Some are already doing some kind of file integrity monitoring, or gathering more information from all possible technologies, to build anything related to user and data analysis, content analysis, and management regarding data protection.

For how long have I used the solution?

I have been using this solution since 2011.

What do I think about the stability of the solution?

If the engineers are missing some technical knowledge from the IBM documentation, then it might get interesting, but you can always roll back. Usually, when you are implementing, as a system integrator, you first do it in a test environment and then check that it works. If bigger organizations and customers want to do it by themselves, they should really stick to this approach and use the wealth of material, community pages, and channels.

What do I think about the scalability of the solution?

There is absolutely no problem with scalability. It works very well, especially when you are running distributed clients. It doesn't matter how many sites you have across the globe; you can practically span different continents. It doesn't matter how many collectors are running. You can easily distribute the current license to multiple users, and all the collectors can upload without any restrictions.

Which solution did I use previously and why did I switch?

We have worked with other solutions. Splunk is a long-term trap because it is very expensive, and it gets more and more expensive. It has different tiers, and it is integrated with different products; when you combine that with the licensing, it obviously adds up. You end up paying a lot more than for QRadar.

LogRhythm has some problems with stability. We were the first partner to do some integrations with LogRhythm, but we had some problems. ArcSight was smaller at the time, but not anymore; it is now a competitor. Fortinet is very good for those who are already using some of its products.

How was the initial setup?

It usually happens within two or three hours, but it also depends on the preparation. If the homework is done well, then the initial setup is totally flawless and it is ready very soon. We then try it and wait for maybe a couple of days more. After that, we start fine-tuning, and then we do the advanced installation.

For us, such projects don't start without experience with the technology and the concepts. When you are buying it, you need to know all your information systems, create a list of tasks and priorities, and understand the use cases well.

What about the implementation team?

A lot of such implementations can initially be done by one person, two people, or maybe a team of five dedicated administrators who will later be using the technology or solution. You need to understand that there are different roles among the people working with cybersecurity and threat management, such as an analyst, a technical maintenance performer, an administrator, a user behavior analyst, etc.

What other advice do I have?

It is not like a next-generation firewall or next-generation intrusion prevention, the kind of complex tool that you can install and configure and then just see that it runs smoothly. It is a completely different story with QRadar or any similar technology. These solutions have to be managed continuously.

The biggest mistake that people usually make with implementations is that they don't plan the total cost of the technology for a period of five years, especially because they don't know what kinds of new threats are coming. To its credit, IBM is quick in putting out new content packs and the like. When new threats come in, you need to adjust accordingly. The more complex your use cases are, the more complex the responses will be. You might have different systems, or you might be working in different time zones.

When buying, people think that the initial purchase will be 70% to 80% of the total they are going to spend, and that every year after, they will spend 20% to 25% of that on technical support, maintenance, development of the system, etc. But when you are talking about a huge, complex, central cybersecurity threat management system, it is more like implementing a document management system or some complex CIP system: the cost of the license and the hardware can initially make up around 20% to 30%, or less, of the total budget needed for quality management of such a solution over a longer period of time.

Some people think that if they buy this for 100,000 pounds or euros, then in each following year they can just buy annual subscriptions for 20,000 or 25,000. You may have some internal costs for the license, etc., but if you are buying for, let's say, 100,000, you might have to budget 200,000 more, because you need certain people who are doing everything with the solution. You need to train them and send them to the IBM international technology academies and events such as Visor to learn about its management and maintenance. You probably also need some certification, so you need to go on an implementation course. A lot of internal work should be done to adjust the solution with other departments, and those other departments usually don't like such a central, overseeing, controlling solution. Later on, they learn that they can get a lot of different, useful reports out of it without doing additional work.
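
A worked version of that budgeting point, with purely illustrative figures rather than IBM pricing:

```python
# Illustrative numbers only: the initial purchase versus the 5-year total.
initial_purchase = 100_000        # licenses + hardware, year 0
annual_subscription = 22_500      # support and maintenance, years 1-4
people_per_year = 40_000          # dedicated admins, training, certification

years = 5
naive_budget = initial_purchase + annual_subscription * (years - 1)
realistic_budget = naive_budget + people_per_year * years

print(f"naive 5-year budget:     {naive_budget:,}")      # 190,000
print(f"realistic 5-year budget: {realistic_budget:,}")  # 390,000
# The initial purchase ends up closer to 25% of the total, not 70-80%.
```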

I would rate IBM QRadar an eight out of ten. Every technology has some weaknesses and strengths. It has a lot of points to improve, but based on everything that we have seen in the market and from other customers, this is, so far, at least in Europe, the best solution.

Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
PaulWoods - PeerSpot reviewer
ICT Project Manager at a government with 5,001-10,000 employees
Real User
Top 10
Stable, with good reporting and technical support
Pros and Cons
  • "The most valuable features are the ones that we use the most, which are the search and report facilities."
  • "I know that they have user behavior analytics, but it's an extra cost for this feature. It would be nice if it was in with the standard products."

What is most valuable?

The most valuable features are the ones that we use the most, which are the search and report facilities.

What needs improvement?

There is room for improvement on both our side and on the side of LogPoint.

We could improve what we decide to put into LogPoint for it to work on, and LogPoint is improving with its addition of the MITRE ATT&CK framework.

I know that they have user behavior analytics, but it's an extra cost for this feature. It would be nice if it were included with the standard product.

If there were one price that you paid that included all of the features, instead of having to pay a bit more for advanced features, it would make things simpler when you purchase.

For how long have I used the solution?

I have been using LogPoint for approximately six years.

We're currently migrating from version 6.6 to 6.9.

What do I think about the stability of the solution?

It's a stable solution.

What do I think about the scalability of the solution?

It's a scalable solution. We can add more LogPoint boxes, repositories, and sources.

We have 20 or 30 people who are using the information from it, in our organization.

How are customer service and technical support?

Technical support is very good.

Which solution did I use previously and why did I switch?

We used to use LogRhythm.

We made a significant investment in LogRhythm, and it didn't cope with the size of our estate, so we decided to go elsewhere.

How was the initial setup?

The initial setup was quite straightforward.

It took us a couple of weeks to set up all of the log sources and to configure them.

Maintaining this solution takes one person working on it half their time.

What about the implementation team?

The implementation was very good from our point of view, but we had one of the top people come out and install it with us.

I think we were the first local authority council in the country to touch LogPoint.

They came out and made sure that it was installed properly and that it worked properly with us, which I'm not sure everybody would get.

What's my experience with pricing, setup cost, and licensing?

It's getting more expensive, which is one of the reasons we're looking around to see if there's anything that's better value. It's still good, but I think it's becoming more expensive.

Which other solutions did I evaluate?

We are looking to see what else may be available. There might be something better that we are not aware of yet.

What other advice do I have?

I would say that it's a good product. It's very stable, and the support is very good. We use it a lot. 

As I say, I'm looking to see whether or not it's still the product that we should be using, or whether there's something else out there now.

I would rate LogPoint an eight out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.