
Read reviews of Splunk alternatives and competitors

JerryH - PeerSpot reviewer
Director at a computer software company with 1,001-5,000 employees
Real User
Top 10
Enables us to bring all our data sources into a central hub for quick analysis, helping us focus on priorities in our threat landscape
Pros and Cons
  • "The real-time analytics of security-related data are super. There are a lot of data feeds going into it and it's very quick at pulling up and correlating the data and showing you what's going on in your infrastructure. It's fast. The way that their architecture and technology works, they've really focused on the speed of query results and making sure that we can do what we need to do quickly. Devo is pulling back information in a fast fashion, based on real-time events."
  • "Devo has a lot of cloud connectors, but they need to do a little bit of work there. They've got good integrations with the public cloud, but there are a lot of cloud SaaS systems that they still need to work with on integrations, such as Salesforce and other SaaS providers where we need to get access logs."

What is our primary use case?

Our initial use case is to use Devo as a SIEM. We're using it for security and event logging, aggregation and correlation for security incidents, triage and response. That's our goal out of the gate.

Their solution is cloud-based, and we're deploying some relays on-premises to handle anything that can't send logs up to the cloud directly. But it's pretty straightforward. We're in a hybrid ecosystem, meaning we're running in both public and private cloud.

How has it helped my organization?

We're very early in the process so it's hard to say what the improvements are. The main reason that we bought this tool is that we were a conglomeration of several different companies. We were the original Qualcomm company way back in the day. After they made billions in IP and wireless, they spun us off to Vista Equity, and we rapidly and in succession bought three or four companies in the 2014/2015 timeframe. Since then, we've acquired three or four more. Unfortunately, we haven't done a very good job of integrating those companies, from a security and business services standpoint.

This tool is going to be our global SIEM and log-aggregation and management solution. We're going to be able to really shore up our visibility across all of our business areas, across international boundaries. We have businesses in Canada and Mexico, so our entire North American operations should benefit from this. We should have a global view into what's going on in our infrastructure for the first time ever.

The solution is enabling us to bring all our data sources into a central hub. That's the goal. If we can have all of our data sources in one hub and are then able to pull them back and analyze that data as fast as possible, and then archive it, that will be helpful. We have a lot of regulatory and compliance requirements as well, because we do business in the EU. Obviously, data privacy is a big concern and this is really going to help us out from that standpoint.

We have a varied array of threat vectors in our environment. We OEM and provide a SaaS service that runs on people's mobiles, plus we provide an in-cab mobile in truck fleets and tractor trailers that are both short- and long-haul. That means our threat surface is quite large, not only from the web services and web-native applications that we expose to our customers, but also from our in-cab and mobile application products that we sell. Being able to pull all that information into one central location is going to be huge for us. Securing that type of landscape is challenging because we have a lot of different moving parts. But it will at least give us some insight into where we need to focus our efforts and get the most bang for the buck.

We've found some insights fairly early in the process but I don't think we've gotten to the point where we can determine that our mean time to resolution has improved. We do expect it to help to reduce our MTTR, absolutely, especially for security incidents. It's critical to be able to find a threat and do something about it sooner. Devo's relationship with Palo Alto is very interesting in that regard because there's a possibility that we will be pushing this as a direct integration with our Layer 4 through Layer 7 security infrastructure, to be able to push real-time actions. Once we get the baseline stuff done, we'll start to evolve our maturity and our capabilities on the platform and use a lot more of the advanced features of Devo. We'll get it hooked up across all of our infrastructure in a more significant way so that we can use the platform to not only help us see what's going on, but to do something about it.

What is most valuable?

So far, the most valuable features are the ease of use and the ease of deployment. We're very early in the process. They've got some nice ways to customize the tool and some nice, out-of-the-box dashboards that are helpful and provide insight, particularly related to security operations.

The UI is:
  • clean
  • easy to use
  • intuitive.

They've put a lot of work into the UI. There are a few areas they could probably improve, but they've done a really good job of making it easy to use. For us to get engagement from our engineering teams, it needs to be an easy tool to use and I think they've gone a long way to doing that.

The real-time analytics of security-related data are super. There are a lot of data feeds going into it and it's very quick at pulling up and correlating the data and showing you what's going on in your infrastructure. It's fast. The way that their architecture and technology works, they've really focused on the speed of query results and making sure that we can do what we need to do quickly. Devo is pulling back information in a fast fashion, based on real-time events.

The fact that the real-time analytics are immediately available for query after ingest is super-critical in what we do. We're a transportation management company and we provide a SaaS. We need to be able to analyze logs and understand what's going on in our ecosystem in a very close to real-time way, if not in real time, because we're considered critical infrastructure. And that's not only from a security standpoint, but even from an engineering standpoint. There are things going on in our vehicles, inside of our trucks, and inside of our platform. We need to understand what's going on, very quickly, and to respond to it very rapidly.

Also, the integration of threat intelligence data provides context to an investigation. We've got a lot of data feeds that come in and Devo has its own. They have a partnership with Palo Alto, which is our primary security provider. All of that threat information and intel is very good. We know it's very good. We have a lot of confidence that that information is going to be timely and it's going to be relevant. We're very confident that the threat and intel pieces are right on the money. And it's definitely providing insights. We've already used it to shore up a couple of things in our ecosystem, just based on the proof of concept.

The solution’s multi-tenant, cloud-native architecture doesn't really affect our operations, but it gives us a lot of options for splitting things up by business area or different functional groups, as needed. It's pretty simple and straightforward to do so. You can implement those types of things after the fact. It doesn't really impact us too much. We're trying to do everything inside of one tenant, and we don't expose anything to our customers.

We haven't used the solution's Activeboards too much yet. We're in the process of building some of those out. We'll be building customized dashboards and Activeboards based on what we do today in Splunk. Devo's professional services team is going to help us out to make sure that we do that right, and do it quickly.

Based on what I've seen, its Activeboards align nicely with what we need to see. The visual analytics are nice. There's a lot of customization that you can do inside the tool. It really gives you a clean view of what's going on from both interfaces and topology standpoints. We were able to get network topology on some log events, right out of the gate. The visualization and analytics are insightful, to say the least, and they're accurate, which is really good. It's not only the visualization, but it's also the ability to use the API to pull information out. We do a lot of customization in our backend operations and service management platforms, and being able to pull those logs back in and do something with them quickly is also very beneficial.
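
As a rough illustration of that API-driven pull, here is a minimal Python sketch of querying log data over a vendor REST API. The endpoint URL, authentication scheme, payload fields, and table name are assumptions for illustration only; the actual interface should be taken from Devo's Query API documentation.

```python
import requests

# Hypothetical values -- the endpoint path, payload fields, and table name
# below are illustrative assumptions, not taken from Devo's documentation.
DEVO_QUERY_URL = "https://apiv2-us.devo.com/search/query"  # assumed cloud endpoint
API_TOKEN = "YOUR_DEVO_TOKEN"                               # assumed bearer-token auth

payload = {
    # Query against an assumed firewall table, last hour of events
    "query": "from firewall.all.traffic select eventdate, srcIp, dstIp, action",
    "from": "now()-1h",
    "to": "now()",
    "mode": {"type": "json"},
}

resp = requests.post(
    DEVO_QUERY_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Each returned object represents one matching event; these are the records
# you would feed into backend operations and service-management tooling.
for event in resp.json().get("object", []):
    print(event)
```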

The customization helps because you can map it into your business requirements. Everybody's business requirements are different when it comes to security and the risks they're willing to take and what they need to do as a result. From a security analyst standpoint, Devo's workflow allows you to customize, in a granular way, what is relevant for your business. Once you get to that point where you've customized it to what you really need to see, that's where there's a lot of value-add for our analysts and our manager of security.

What needs improvement?

Devo has a lot of cloud connectors, but they need to do a little bit of work there. They've got good integrations with the public cloud, but there are a lot of cloud SaaS systems that they still need to work with on integrations, such as Salesforce and other SaaS providers where we need to get access logs.

We'll find more areas for improvement, I'm sure, as we move forward. But we've got a tight relationship with them. I'm sure we can get anything worked out.

For how long have I used the solution?

This is our first foray with Devo. We started looking at the product this year and we're launching an effort to replace our other technology. We've been using Devo for one month.

What do I think about the stability of the solution?

The stability is good. It hasn't been down yet.

What do I think about the scalability of the solution?

The scalability is unlimited, as far as I can tell. It's just a matter of how much money you have in your back pocket that you're willing to spend. The cost is based on the log ingestion rate and the retention period. They're running in the public cloud, meaning capacity is effectively unlimited. And scaling is instantaneous.

Right now, we've got about 22 people in the platform. It will end up being anywhere between 200 and 400 when we're done, including software engineers, systems engineers, security engineers, and network operations teams for all of our mobile and telecommunications platforms. We'll have a wide variety of roles that are already defined. And on a limited basis, our customer support teams can go in and see what's going on.

How are customer service and technical support?

Their technical support has been good. We haven't had to use their operations support too much. We have a dedicated team that's working with us. But they've been excellent. We haven't had any issues with them. They've been very quick and responsive and they know their platform.

Which solution did I use previously and why did I switch?

We were using Splunk but we're phasing it out due to cost.

Our old Splunk rep went to Devo and he gave me a shout and asked me if I was looking to make a change, because he knew of some of the problems that we were having. That's how we got hooked up with Devo. It needed to have a Splunk-like feel, because I didn't want to have a long road or a huge cultural transformation and shock for our engineering teams and our security teams that use Splunk today. 

We liked the PoC. Everything it did was super-simple to use and was very cost-effective. That's really why we went down this path.

Once we got through the PoC and once we got people to take a look at it and give us a thumbs-up on what they'd seen, we moved ahead. From a price standpoint, it made a lot of sense and it does everything we needed to do, as far as we can tell.

How was the initial setup?

We were pulling in all of our firewall logs, throughout the entire company, in less than 60 minutes. We deployed some relay instances out there and it took us longer to go through the bureaucracy and the workflow of getting those instances deployed than it did to actually configure the platform to pull the relevant logs.

In the PoC we had a strategy. We had a set of infrastructure that we were focusing on, infrastructure that we really needed to make sure was going to integrate and that its logs could be pulled effectively into Devo. We hit all of those use cases in the PoC.

We did the PoC with three people internally: a network engineer, a systems engineer, and a security engineer.

Our strategy going forward is getting our core infrastructure in there first—our network, compute, and storage stuff. That is critical. Our network layer for security is critical. Our edge security, our identity and access stuff, including our Active Directory and our directory services—those critical, core security and foundational infrastructure areas—are what we're focusing on first.

We've got quite a few servers for a small to mid-sized company. We're trying to automate the deployment process to hit our Linux and Windows platforms as much as possible. It's relatively straightforward. There is no Linux agent so it's essentially a configuration change in all of our Linux platforms. We're going through that process right now across all our servers. It's a lift because of the sheer volume.
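
Because the Linux onboarding is essentially a logging configuration change, a quick sanity check is to emit a test syslog message toward the relay. Below is a minimal sketch using only the Python standard library; the relay hostname and port are assumptions, and the real forwarding would be configured in rsyslog or syslog-ng rather than in application code.

```python
import logging
import logging.handlers

# Assumed relay address and port -- replace with your on-premises relay.
RELAY_HOST = "devo-relay.example.internal"
RELAY_PORT = 514  # standard syslog over UDP; relays often also accept TCP/TLS

handler = logging.handlers.SysLogHandler(address=(RELAY_HOST, RELAY_PORT))
logger = logging.getLogger("relay-connectivity-check")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# If this message shows up in the platform shortly after, the host-to-relay
# path (routing, firewall rules, relay listener) is working.
logger.info("test event: relay connectivity check from %s", "linux-host-01")
```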

As for maintenance of the Devo platform, we literally don't require anybody to do that.

We have a huge plan. We're in the process of spinning up all of our training and trying to get our folks trained as a day-zero priority. Then, as we pull infrastructure in, I want those guys to be trained. Training is a key thing we're working on right now. We're building the e-learning regimen. And Devo provides live, multi-day workshops for our teams. We go in and focus the agenda on what they need to see. Our focus will be on moving dashboards from Splunk and the critical things that we do on a day-to-day basis.

What about the implementation team?

We worked straight with Devo on pretty much everything. We have a third-party VAR that may provide some value here, but we're working straight with Devo.

What was our ROI?

We expect to see ROI from security intelligence and network layer security analysis. Probably the biggest thing will be turning off things that are talking out there that don't need to be talking. We found three of those types of things early in the process, things that were turned on that didn't need to be turned on. That's going to help us rationalize and modify our services to make sure that things are shut down and turned off the way they're supposed to be, and effectively hardened.

And the cost savings over Splunk is about 50 percent.

What's my experience with pricing, setup cost, and licensing?

Pricing is pretty straightforward. It's based on daily log ingestion and retention rate. They keep it simple. They have breakpoints, depending on what your volume is. But I like that they keep it simple and easy to understand.

There were no costs in addition to their standard licensing fees. I don't know if they're still doing this, but we got in early enough that all of the various modules were part of our entitlement. I think they're in the process of changing that model a little bit so you can pick your modules. They're going to split it up and charge by the module. But everything we needed was part of the package on day one.

Which other solutions did I evaluate?

We were looking at ELK Stack and Datadog. Datadog has a security option, but it wasn't doing what we needed it to do. It wasn't hitting a couple of the use cases that we have Splunk doing, from a logging and reporting standpoint. We also looked at Logstash, some of the "roll-your-own" stuff. But when you do the comparison for our use case, having a cloud SaaS that's managed by somebody else, where we're just pushing up our logs, something that we can use and customize, made the most sense for us. 

And from a capability standpoint, Devo was the one that most aligned with our Splunk solution.

What other advice do I have?

Take a look at it. They're really going after Splunk hard. Splunk has a very diverse deployment base, but Splunk really missed the mark with its licensing model, especially as it relates to the cloud. There are options out there, effective alternatives to Splunk and some of the other big tools. But from a SaaS standpoint, if not best-in-breed, Devo is certainly in the top two or three. It's definitely a strong up-and-comer. Devo is already taking market share away from Splunk, and I think that's going to continue over the next 24 to 36 months.

Devo's speed when querying across our data is very good. We haven't fully loaded it yet. We'll see when the rubber really hits the road. But based on the demos and the things that we've seen in Devo, I think it's going to be extremely good. The architecture and the way that they built it are for speed, but it's also built for security. Between our DevOps, our SecOps, and our traditional operations, we'll be able to quickly use the tool, provide valuable insights into what we're doing, and bring our teams up to speed very quickly on how to use it and how to get value out of it quickly.

The fact that it manages 400 days of hot data falls a little bit outside of our use case. It's great to have 400 days of hot data, from security, compliance, and regulatory retention standpoints. It makes it really fast to rehydrate logs and go back and get trends from way back in the day and do some long-term trend analysis. Our use case is a little bit different. We just need to keep 90 days hot and we'll be archiving the rest of that information to object-based long-term storage, based on our retention policies. We may or may not need to rehydrate and reanalyze those, depending on what's going on in our ecosystem. Having the ability to be able to reach back and pull logs out of long-term storage is very beneficial, not only from a cost standpoint, but from the standpoint of being able to do some deeper analysis on trends and reach back into different log events if we have an incident where we need to do so.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Cyber Security Analyst at a retailer with 10,001+ employees
Real User
Top 20
Playbooks integrations, incident management features, and threat hunting services saved time and streamlined investigations
Pros and Cons
  • "Risk scoring was nice. We could exactly see which user had the highest risk score, and then we could pick it up and work on it."
  • "When they did upgrades or applied patches, sometimes, there was downtime, which required the backfill of data. There were times when we had to reach out and get a lot of things validated."

What is our primary use case?

We were using it for data loss prevention and data exfiltration detection. We wanted a platform with a proper ticketing facility, and as and when we reviewed a user, we also needed a proper documentation setup. Securonix provided that. We were able to integrate playbooks and a lot of other modules so that we not only looked at a particular problem area but also at other factors. We didn't only want to look at exfiltration but also at any lateral movement inside the company by a user. We wanted to look at the outliers in a better way, not only in terms of a user's activity but also in relation to peer activity, to show that it is not the whole team; it is just one team member doing something wrong.

We most probably were using version 6.0.

How has it helped my organization?

It was very easy for us to do our manual threat hunting. We had a lot of instances where we found our internal users exfiltrating data. We were able to see that they were exfiltrating data. We could confirm that through the platform by taking a deeper look, which was very nice. It is user-friendly and handy. It allowed us to look at all kinds of activities and logs.

It provides actionable intelligence on threats related to the use cases. After you have done the configuration, it triggers an alert for any incident. This actionable intelligence is very important because it allows us to respond in time without missing the window of being able to take an action. Sometimes, threats are small, and the indicators do not pop up, but with manual analysis, we can get a complete view. So, it is very important to have real-time triggers.

We have been able to find a few true positives. Based on the triggers from the tool, we got to know that people had been exfiltrating data over a period of time. They had been doing it in small amounts, and that's why it went unnoticed. After the tool notified us, we discovered that one or two users had exfiltrated a substantial amount of data over a period of time. Without the solution, just by looking at the logs, we wouldn't have known that. The tool understood the behavior and triggered a notification, and we got to know that. The users were not just sending our data to themselves but also to another vendor. They were contractors, and they were exfiltrating the data to another vendor. They were about to leave the company, and we were able to catch them before they left.
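
The detection pattern described here, small transfers that only stand out once they are aggregated over time and compared against peers, can be sketched in a few lines. This is a conceptual illustration of cumulative peer-based outlier scoring, not Securonix's actual algorithm; the numbers and threshold are made up.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical daily outbound-transfer records: (user, megabytes_sent).
# All users here belong to the same peer group (e.g. contractors); in
# practice you would bucket by peer group before scoring.
events = [
    ("alice", 40), ("alice", 55), ("alice", 60),
    ("bob", 5), ("bob", 8),
    ("carol", 6), ("carol", 4),
    ("dave", 7), ("dave", 5),
    ("eve", 3), ("eve", 4),
]

# Aggregate per user over the whole window -- individually small transfers add up.
totals = defaultdict(float)
for user, mb in events:
    totals[user] += mb

# Score each user's total against the peer group's distribution of totals.
values = list(totals.values())
mu, sigma = mean(values), stdev(values)

for user, total in totals.items():
    z = (total - mu) / sigma if sigma else 0.0
    if z > 1.5:  # arbitrary illustrative threshold
        print(f"outlier: {user} sent {total:.0f} MB vs peer mean {mu:.0f} MB (z={z:.1f})")
```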

It reduces the amount of time required for investigations. If I had to check logs from different log sources or tools from different vendors and create tickets, it would have taken time. With SNYPR, we were able to perform a lot of actions within the same platform, and we were also able to push tickets to our SOAR management tool. Everything was in one place. We didn't have to navigate between different things. It was helpful for incident management. It took time for analysts to check whether an alert was a false positive or not and provide the right evidence. Having incident management within the tool reduced time in creating and closing some of the incidents. Instead of 30 minutes before, it was reduced to 10 to 15 minutes per incident. We didn't have back-and-forth navigation. Everything was in one place. 

It saved us a couple of hours of our day-to-day activity because everything was consolidated. Once I logged in, one or two hours were enough for me to look at everything and identify things to take an action on.

It has definitely helped us with threat management. Because of the sample use cases that we saw from Securonix, we were able to design a few of our own use cases. We would not have thought of those use cases in the past. We were able to add use cases that were helpful for our data internally. We were able to understand logs even better and create our specific use cases. It was good learning.

What is most valuable?

It is user-friendly. Its user interface is better than the other tools.

I like the playbook integration. In the beginning, we had a few hiccups because the tool was developing, but after that, the threat intelligence tool that we integrated got more accurate and better. The whitelisting and blacklisting of IPs, domains, or users were also working. 

Risk scoring was nice. We could exactly see which user had the highest risk score, and then we could pick it up and work on it. 

Securonix accommodates customer requests in the upcoming versions very well. They do their best to bring in the features required by a customer. We were able to have custom widgets for different departments or specific use cases. All tools do not provide such customization. Securonix was good at taking a request, reviewing it, and if it made sense, adding it. We got at least one or two features added. 

What needs improvement?

When they did upgrades or applied patches, there was sometimes downtime, which required a backfill of data. There were times when we had to reach out and get a lot of things validated.

For how long have I used the solution?

I have been using this solution for about 2.5 years. Right now, I'm not using it, but I have used it in the last 18 months.

What do I think about the stability of the solution?

Initially, during patch management, we did see a few downtimes, which required a backfill of data. Before I moved out of the previous company, patch management and upgrades had improved, and the tool had become stable. The queries we were running weren’t breaking the tool. We were able to fetch reports for more roles and data as compared to when we started.

What do I think about the scalability of the solution?

The company that I was working with was midsize. We didn't have a huge amount of data. We were accommodated pretty well. We didn't have any thresholds or limits, but I cannot speak for companies that have a huge amount of data. 

Their archiving and deletion policies also worked well for us. We didn't see any performance issues when the solution was ingesting all log sources. Its scalability was pretty nice. We started with six to seven data sources, and then we moved on to add a few more. It could easily accommodate any increase in the number of users or data. We didn't have to just stop at a particular point.

With on-prem, customers have control over the infrastructure, and they can tweak it, but a cloud solution is more simplified. You don't have the headache and overhead of maintaining your resources. So, it is definitely scalable. They partition you based on how big the company is. So, even if you move to a bigger scale, more resources get added to make it work better. It is seamless. We didn't have many issues. We had a few slowness issues at times, but they were resolved. We didn't have to deal with them for a long period of time.

How are customer service and support?

Their support was pretty good. We didn't have any issues there. They were pretty fast. Anytime we had downtime or any issue, we were certainly helped. We got emails telling us how long it would take, and they would stick by it. There were a few times when there was a one-day or two-day delay in response, but eventually, it all worked out. We didn't have major issues. I would rate them a nine out of ten.

They also provide a review with their content team. For the initial few months, they did a lot of threat hunting and showed us why they thought a user was doing something in the company and why it was worth taking a look at. It was helpful to have analysts from their side and to see how the users were doing it and what the patterns were.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I have only worked with this solution.

How was the initial setup?

We had another engineering team that took care of its deployment. My involvement in its setup was only to provide the types of data that we needed to pull into Securonix. Some log sources took a while because of the data format we wanted and the work to accommodate it with the APIs on the Securonix end. We only had issues with a few data sources. It wasn't a very difficult process, but it did take some time. It took about two months.

Overall, its onboarding was pretty smooth because we were on SaaS. In terms of the strategy, we had to provide the data sources that we needed. They were divided into three levels. We first integrated one or two data sources, and when we saw it triggering, we integrated a few more. We also worked on fine-tuning it for false positives with their content team. They trained us on various use cases and algorithms behind those use cases. If there was any incorrect trigger, they explained the reason for it. It did take quite some time to configure it for our own custom use cases. This phase took more time than the initial integration of data sources. It took at least two to three months to onboard all the sources.

Because it was a SaaS solution, they did the maintenance. It didn't require any effort from our end. It minimizes infrastructure management. In case of downtime or outage, they used to notify us and fix the issue. It did not require our intervention, except monitoring and checking that things were running fine.

They provided flexibility in terms of features and patches. If we wanted to stay on a particular patch or have a few features in the next version, they were able to accommodate that. They were able to add our features even when other customers did not need them. 

What about the implementation team?

There were two people on the engineering team from our side, but I am not sure how many people were there from the Securonix side. For integration, two people were there, and then there were four analysts at the beginning to support the tool and give feedback.

What was our ROI?

We most probably did see an ROI. I was working only at the analyst level. I do not have the numbers, but it did improve the efficiency to do more in less time. In the beginning, we were hesitant to use a new tool, but it soon became our go-to tool for checking and verifying any issues. We started engaging with the tool quite a lot, and it probably saved four to five hours a day. Documentation and ticketing were the biggest challenges, and it helped in having everything in one place. We could just click on a ticket and see everything.

What's my experience with pricing, setup cost, and licensing?

I had heard that it was much cheaper than Splunk and some of the other tools, and they gave us a nice package with support. They accommodated the number of users and support very well.

Which other solutions did I evaluate?

My team had definitely looked at other tools, but I was not involved in the PoC. 

What other advice do I have?

I would advise having a look at it. The user experience or the user interface is definitely better than other tools, but you need to see how it interacts with your data sources and how easy it is to integrate it with those data sources.

It took us at least four to five months to realize the benefits of the solution from the time of its deployment. It depends on the log sources you are concentrating on and want to fine-tune. Most SIEM tools, including Securonix, have a lot of use cases that can be tied to Windows, VPN, etc. Modifying and tuning just one log source is not enough. You should tie different log sources so that you get an idea about any lateral movements. Everything that flows into a SIEM solution has to be tuned. If I'm sending a raw log in any format, it needs to be properly sanitized and tuned for my security requirements, which takes time. We had to go back and forth and get a lot of things fixed. It takes a while for the tool to understand and start triggering based on a specific activity.

False positives will always exist. They won't completely go away. When we first deployed it, it used to trigger alerts for 500 to 600 users, which had come down to 20 to 30. It needed continuous fine-tuning, but as an analyst, I was no longer overwhelmed by hundreds of alerts. It took a while to get to that stage and involved a lot of blacklisting and whitelisting. Even though the false positive rate had come down to a pretty good number, we still had to intervene and verify whether it was a false positive or not, but it was easier to do.

It hasn't helped to prevent data loss events, but it has helped to reduce further loss of data. We got to know about an event only when it had already started to happen. When the tool identified that something was happening, it would alert us. If an analyst was active enough to understand that and put a stop to it, it could have prevented any further loss, but I am not sure how much a data loss event would have cost our organization, especially in intellectual property. However, we figured out that about 40 to 50 GB of data was sent over a period of time. It was sent in small bits, and it included confidential reports, meeting keynotes, etc. We would not have known that if the tool had not notified us.

I would rate it a 10 out of 10 based on the experience I had. We didn't have any major issues related to slowness or querying the tool. Querying was pretty simplified, and there were also documents to know the processes. Their support was good, and they were also good in terms of the expansion of the tool. When we wanted a new data source, they were there to review it and modify it with us. They provided good assistance.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
AidanMcLaughlin - PeerSpot reviewer
SIEM Engineer at a tech services company with 501-1,000 employees
Real User
Top 20
Enables us to monitor many different environments for cybersecurity incidents, and we use it as our main alerting tool to let us know when such activity happens
Pros and Cons
  • "The automation rules and playbooks are the most useful that I've seen. A number of other places segregate the automation and playbook as separate tools, whereas Microsoft is a SIEM and SOAR tool in one."
  • "Documentation is the main thing that could be improved. In terms of product usage, the documentation is pretty good, but I'd like a lot more documentation on Kusto Query Language."

What is our primary use case?

We use Microsoft Sentinel to monitor many different environments for cybersecurity incidents, and we use it as our main alerting tool to let us know when such activity happens. It also interfaces with all of our other Defender products, such as Defender for Office 365, Defender for Endpoint, et cetera.

Almost all of our solutions are based in Azure. We use Defender for Endpoint, Defender for Office 365, Defender for Cloud, Sentinel, and Azure Active Directory Identity Protection.

I use the latest version of Sentinel.

Sentinel is mostly used within our security operations center and our security team. We have about 50 endpoint users.

How has it helped my organization?

The backbone of our organization is built on Microsoft Sentinel, its abilities, and the abilities of our Defender stack. Ideally, we'd have more data, but a lot of data and functionality are in one place. The Lighthouse feature is outside Sentinel, but it allows us to have multiple environments integrated into one and to access lots of different Sentinel environments through that. It's very easy to manage a security workload with Sentinel. 

I would like to see better integration with CI/CD. It should be easier to use GitHub, Jenkins, or whatever our code management stack looks like. Whether or not you use Azure DevOps, being able to manage the code you have is fairly important.

Since using Sentinel, we've experienced a faster response time and easier development features. There aren't as many hurdles to moving a configuration.

I'm not sure how long it took to realize the benefits because it was deployed before my time here. It took me about three months to get familiar with what Sentinel has to offer and how we could leverage it, so it will be about three months before you start getting proper value from it.

There are still elements of Sentinel that I haven't used to their fullest potential, like the Jupyter Notebooks and the built-in hunting queries.

The solution is good at automating routine tasks and alleviating the burden for analysts.

Automation has moderately affected our security operations, although there is scope for it to significantly affect SecOps. There is definitely the capability for Sentinel to do pretty much all of your first-line response, which would be a significant improvement. It's a moderate effect because we only use automation in a few areas.

There are a few different dashboards for each of the Microsoft tools. We have a dashboard for Defender, one for Sentinel, and one for Active Directory Identity Protection. It consolidated alerts in some aspects, but a lot of information is still scattered.

It's fairly good for being reactive and responding to threats and looking for indicators of compromise. Overall, it helped us prepare for potential threats before they hit.

Sentinel saves us time. The automation feature especially saves us time because we can automate a lot of menial tasks. If other businesses could do that, it would eliminate a lot of their first-line response.

Sentinel saves us about 20 hours per week, which is the equivalent of a part-time staff member.

It saved us money. It's a very cost-efficient SIEM to use and still provides a good level of coverage despite that. 

Sentinel saved us about 50% of the cost of Splunk. It decreased our time to detect and respond by about 10-15%.

What is most valuable?

The automation rules and playbooks are the most useful that I've seen. A number of other places segregate the automation and playbook as separate tools, whereas Microsoft is a SIEM and SOAR tool in one.

It provides us with very high visibility. It allows us to see a lot holistically across our environment in Azure. It integrates very well with other products like Defender.

It helps us prioritize threats across our enterprise. There are many things we can do to deal with prioritizing threats, such as having automation rules that automatically raise the priority of certain incidents. We're also able to make changes to the rule sets themselves and say, "I believe this to be a higher priority than is listed in the tool."

Prioritization is probably the most important thing to us because as an organization, we have a number of threats coming in at any moment, and each of them has its own valid investigation path. We need to know which ones are business critical and which ones need to be investigated and either ruled out or remediated as soon as possible. Prioritizing what to work on first is the biggest thing for us.
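
As a conceptual sketch of the "automatically raise the priority of certain incidents" behavior described above, the snippet below encodes that logic as a small function. It is not the Sentinel automation rule API; in practice these rules are defined in the portal or through infrastructure-as-code, and the conditions shown (tactics, asset tags) are assumptions.

```python
SEVERITY_ORDER = ["Informational", "Low", "Medium", "High"]

def apply_automation_rule(incident: dict) -> dict:
    """Conceptual mirror of an automation rule: if an incident touches a
    business-critical asset or a high-impact tactic, bump its severity."""
    critical_asset = "business-critical" in incident.get("asset_tags", [])
    high_impact = incident.get("tactic") in {"Exfiltration", "Impact"}

    if critical_asset or high_impact:
        current = SEVERITY_ORDER.index(incident["severity"])
        incident["severity"] = SEVERITY_ORDER[min(current + 1, len(SEVERITY_ORDER) - 1)]
        incident.setdefault("labels", []).append("auto-prioritized")
    return incident

# Example: a Medium exfiltration incident gets bumped to High before triage.
print(apply_automation_rule({"severity": "Medium", "tactic": "Exfiltration", "asset_tags": []}))
```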

If you have the right licenses and access to all the products, it's fairly easy to integrate these products into Sentinel. Sometimes they don't pull as much information as they could, and I've noticed that there is a cross-functional issue where these tools will flag and alert on their own, in their own consoles.

We can have it configured to create an alert in Microsoft Sentinel, but sometimes it doesn't create a bridge between them. When we finish our investigation and close the ticket on Sentinel, it sometimes doesn't go back to the tool and update that. That's the only issue that I have found with the integration. Everything else is straightforward and works well.

The solutions work natively together to deliver coordinated detection responses across our environment. It's probably one of the better-engineered suites. In other places, I've experienced a completely different setup: a proprietary endpoint detection and response system coupled with a separate, proprietary SIEM tool or some other kind of tool. They are individual tools, and it can sometimes feel like they're engineered differently, but at the same time, they integrate better than anything else on the market as a suite of tools.

These solutions provide pretty comprehensive threat protection. A lot of them are technology agnostic, so you can have endpoints on Linux and Mac OS. It's pretty comprehensive. There's always a little oversight in any security program where you have to balance the cost of monitoring everything with the risk of having some stuff unmonitored, but that's probably an issue outside of this tool.

It enables us to ingest data from our entire ecosystem. It's difficult to ingest non-native data, though; it's not as easy as in Splunk, because Splunk is probably the leading SIEM tool in that respect. If you have a native tool from the Microsoft security stack, you can bring it into Sentinel and alert on it.

This ingestion of data is vital for our security operations. It's the driver behind everything we do. We can do threat hunting, but if we don't have logs or data to run queries, then we're pretty much blind. I've worked in places where compliance and regulatory adherence are paramount, and having logs, log retention, and evidence of these capabilities is extremely important. One of the more vital things that our organization needs to operate well is good data.
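
For non-native sources, one common path is to push custom JSON records into the Log Analytics workspace that backs Sentinel. The sketch below follows the classic HTTP Data Collector API pattern (workspace ID plus shared key, HMAC-SHA256 signed request); treat the details as an assumption-laden sketch and verify against current Microsoft documentation, since newer deployments favor the Logs Ingestion API with data collection rules.

```python
import base64
import datetime
import hashlib
import hmac
import json

import requests

WORKSPACE_ID = "YOUR_WORKSPACE_ID"          # Log Analytics workspace ID
SHARED_KEY = "YOUR_WORKSPACE_PRIMARY_KEY"   # workspace shared key (base64)
LOG_TYPE = "CustomAppAudit"                 # surfaces as the CustomAppAudit_CL table

def post_custom_logs(records: list) -> int:
    body = json.dumps(records)
    rfc1123_date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")

    # Build the SharedKey signature: HMAC-SHA256 over a canonical string.
    string_to_sign = (
        f"POST\n{len(body)}\napplication/json\n"
        f"x-ms-date:{rfc1123_date}\n/api/logs"
    )
    signature = base64.b64encode(
        hmac.new(base64.b64decode(SHARED_KEY), string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()
    ).decode()

    resp = requests.post(
        f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"SharedKey {WORKSPACE_ID}:{signature}",
            "Log-Type": LOG_TYPE,
            "x-ms-date": rfc1123_date,
        },
        timeout=30,
    )
    return resp.status_code  # 200 means the records were accepted

print(post_custom_logs([{"user": "svc-batch-01", "action": "login", "result": "failure"}]))
```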

A lot of the alerts come in from other tools, so sometimes we have to actually use that tool to get the proper information. For example, if we get an alert through Defender for Office 365, to actually see an offending email or attachment or something like that, we have to go into the Defender console and dig that out, which is inconvenient. As an aggregator, it's not bad compared to the other solutions on the market. In an ideal scenario, having more information pulled through in the alerts would be an improvement.

A lot of Sentinel's data is pretty comprehensive. The overarching theme with Sentinel is that it's trying to be a lot of things in one. For a UEBA tool, people will usually have separate tools in their SIEM to do this, or they'll have to build their own complete framework from scratch. Already having it in Sentinel is pretty good, but I think it's just a maturity thing. Over the next few years, as these features get more fleshed out, they will get better and more usable. At the moment, it's a bit difficult to justify dropping a Microsoft-trained UEBA algorithm in an environment where it doesn't have too much information. It's good for information purposes and alerting, but we can't do a lot of automation or remediation on it straight away.

What needs improvement?

Although the integrations are good, it can sometimes be information overload. A number of the technologies run proprietary Microsoft algorithms, like machine learning and detection algorithms, as well as having out-of-the-box SIEM content developed by Microsoft. As an engineer who focuses on threat detection, it can sometimes be hard to see where all of the detections are coming from.

Documentation is the main thing that could be improved. In terms of product usage, the documentation is pretty good, but I'd like a lot more documentation on Kusto Query Language. They could replicate what Splunk has in terms of their query language documentation. Every operator and sub-operator has its own page. It really explains a lot about how to use the operators, what they're good for, and what they're not good for in terms of optimizing CPU usage.
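
To make the KQL point concrete, here is a small sketch of running a query from Python against the workspace, with the kind of operator chain (filter early with `where`, then `summarize`) that good documentation would explain in terms of CPU cost. The table and column names are illustrative, and the `azure-monitor-query` usage shown should be checked against that SDK's current documentation.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "YOUR_WORKSPACE_ID"  # Log Analytics workspace backing Sentinel

# Filter as early as possible with `where`, then aggregate with `summarize`;
# pushing filters ahead of aggregations is the main way to keep CPU usage
# (and therefore query cost) down. Table/column names are illustrative.
QUERY = """
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"
| summarize FailedSignins = count() by UserPrincipalName, bin(TimeGenerated, 1h)
| top 20 by FailedSignins desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(row)
```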

Compared to Splunk, I would like to see some more advanced visualizations in Sentinel. There are only some basic ones.

For how long have I used the solution?

I've been using Microsoft Sentinel for about one year, but more heavily over the past five months.

What do I think about the stability of the solution?

It's pretty stable. We don't have any performance or capacity issues with it.

What do I think about the scalability of the solution?

It's scalable when using solutions like Lighthouse.

How are customer service and support?

I haven't needed to use technical support yet, but the documentation in the community is very good.

Which solution did I use previously and why did I switch?

I previously used Splunk. The move to Sentinel was definitely cost-based. A lot of people are moving away from Splunk to a more cost-effective SIEM like Sentinel. We also chose Sentinel because of the ease of maintenance. Splunk's enterprise security has some good queries out of the box, but if I were a small organization, I would use Sentinel because it has more out-of-the-box features.

How was the initial setup?

The log collection facilities must be maintained. Maintaining the solution requires a team of fewer than five people. It mainly involves ensuring that the rules are up to date, the connectors and log collection mechanisms are working correctly, and that they're up to date. It also involves ensuring that the right rules are deployed and the automation rules are in place.

What was our ROI?

Our ROI is 50% over and above what we spend on it in terms of what we can get back from Microsoft Sentinel, everything we use it for, and the time we save.

What's my experience with pricing, setup cost, and licensing?

Some of the licensing models can be a little bit difficult to understand and confusing at times, but overall it's a reasonable licensing model compared to some other SIEMs that charge you a lot per unit of data.

There are additional fees for things like data usage and CPU cycles. When you're developing queries or working on queries, make sure that they're optimized so you don't use as much CPU when they run.

Which other solutions did I evaluate?

We spoke with Google about Chronicle Backstory. It looks pretty powerful, but it wasn't mature enough for what we were looking for at that time.

The only other real standalone solution I've had a good experience with is Splunk with Splunk Phantom. In terms of cost, it's astronomically different. Microsoft Sentinel can sometimes be expensive depending on how many logs you're ingesting, but it will never be in the same realm as Splunk. Sentinel is easy to use, whereas part of why Splunk is so expensive is that it's very easy to use.

Microsoft Sentinel is a better SOAR solution than Phantom. Phantom has good integrations, but it isn't really built for custom scripting. If you're going to be paying more, you would expect that to be better. Sentinel is better in that aspect. Sentinel's cost-effectiveness blows a lot of other solutions out of the water, especially if you're already in Azure and you can leverage some relationships to bring that cost down.

What other advice do I have?

I would rate this solution eight out of ten. It's heading in the right direction, but it's already pretty good and mature.

If a security colleague said it's better to go with a best-of-breed strategy rather than a single-vendor security suite, I would understand that completely. Some people see tying yourself to a single vendor as a vulnerability because your risk isn't spread out, but I think you can manage a single-vendor security solution if you have a good relationship with the vendor and you really leverage your connections within that business.

It's good to diversify your products and make sure that you have a suite of products available from different companies and that you use the best that's available. In terms of this technology stack, it's pretty good for what it does.

My advice is to really focus on what's possible and what you could do with the SIEM. There are a lot of features that don't get used and maximized for their purpose from day one. It takes a couple of months to properly deploy the solution to full maturity.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Chief Infrastructure & Security Office at a financial services firm with 51-200 employees
Real User
Top 5 Leaderboard
Collects logs from different systems, works extremely fast, and has a predictable cost model
Pros and Cons
  • "It is a very comprehensive solution for gathering data. It has got a lot of capabilities for collecting logs from different systems. Logs are notoriously difficult to collect because they come in all formats. LogPoint has a very sophisticated mechanism for you to be able to connect to or listen to a system, get the data, and parse it. Logs come in text formats that are not easily parseable because all logs are not the same, but with LogPoint, you can define a policy for collecting the data. You can create a parser very quickly to get the logs into a structured mechanism so that you can analyze them."
  • "The thing that makes it a little bit challenging is when you run into a situation where you have logs that are not easily parsable. If a log has a very specific structure, it is very easy to parse and create a parser for it, but if a log has a free form, meaning that it is of any length or it can change at any time, handling such a log is very challenging, not just in LogPoint but also in everything else. Everybody struggles with that scenario, and LogPoint is also in the same boat. One-third of logs are of free form or not of a specific length, and you can run into situations where it is almost impossible to parse the log, even if they try to help you. It is just the nature of the beast."

What is our primary use case?

We use it as a repository of most of the logs that are created within our office systems. It is mostly used for forensic purposes. If there is an investigation, we go look for the logs. We find those logs in LogPoint, and then we use them for further analysis.

How has it helped my organization?

We have close to 33 different sources of logs, and we were able to onboard most of them in less than three months. Its adoption is very quick, and once you have the logs in there, the ability to search for things is very good.

What is most valuable?

It is a very comprehensive solution for gathering data. It has got a lot of capabilities for collecting logs from different systems. Logs are notoriously difficult to collect because they come in all formats. LogPoint has a very sophisticated mechanism for you to be able to connect to or listen to a system, get the data, and parse it. Logs come in text formats that are not easily parseable because all logs are not the same, but with LogPoint, you can define a policy for collecting the data. You can create a parser very quickly to get the logs into a structured mechanism so that you can analyze them.

What needs improvement?

The thing that makes it a little bit challenging is when you run into a situation where you have logs that are not easily parsable. If a log has a very specific structure, it is very easy to parse and create a parser for it, but if a log has a free form, meaning that it is of any length or it can change at any time, handling such a log is very challenging, not just in LogPoint but also in everything else. Everybody struggles with that scenario, and LogPoint is also in the same boat. One-third of logs are of free form or not of a specific length, and you can run into situations where it is almost impossible to parse the log, even if they try to help you. It is just the nature of the beast.
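
The structured-versus-free-form distinction is easy to see in a small sketch: a fixed-format line yields to a single pattern, while free-form text defeats it. This is a generic Python illustration of the parsing concept, not LogPoint's parser definition format, and the log lines are made up.

```python
import re

# A structured line: fixed fields in a fixed order -- one pattern handles it.
STRUCTURED = "2022-09-14T10:21:07Z host=fw01 src=10.1.2.3 dst=8.8.8.8 action=deny"
PATTERN = re.compile(
    r"(?P<ts>\S+) host=(?P<host>\S+) src=(?P<src>\S+) dst=(?P<dst>\S+) action=(?P<action>\S+)"
)

match = PATTERN.match(STRUCTURED)
print(match.groupdict())  # {'ts': ..., 'host': 'fw01', 'src': '10.1.2.3', ...}

# A free-form line: variable wording and length -- no single pattern reliably
# extracts the fields, which is the scenario every SIEM struggles with.
FREE_FORM = "User jdoe tried to log in three times from an unusual place and then gave up"
print(PATTERN.match(FREE_FORM))  # None -- the structured pattern simply does not apply
```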

Its reporting could be significantly improved. They have very good reports, but the ability to create ad-hoc reports can be improved significantly.

For how long have I used the solution?

I have been using this solution for three years.

What do I think about the stability of the solution?

It has been stable, and I haven't had any issues with it.

What do I think about the scalability of the solution?

There are no issues there. However much free space I give it, it'll work well.

It is being used by only two people: me and another security engineer. We go and look at the logs. We are collecting most of the information from the firm through this. If we were to grow, we'd make it grow with us, but right now, we don't have any plans to expand its usage.

How are customer service and support?

Their support is good. If you call them for help, they'll give you help. They have a very good set of engineers to help you with onboarding or the setup process. You can consult them when you have a challenge or a question. They are very good with the setup and follow-up. What happens afterward is a whole different story, because if an issue has to be escalated within their organization, things can get difficult. So, their initial support is very good, but their advanced support is a little more challenging.

Which solution did I use previously and why did I switch?

I used a product called Logtrust, which is now called Devo. I switched because I had to get a consultant every time I had to do something in the system. It required a level of expertise. The system wasn't built for a mere human to use. It was very advanced, but it required consultancy in order to get it working. There are a lot of things that they claim to be simple, but at the end of the day, you have to have them do the work, and I don't like that. I want to be able to do the work myself. With LogPoint, I'm able to do most of the work myself.

How was the initial setup?

It is very simple. There is a virtual machine that you download, and this virtual machine has everything in it. There is nothing for you to really do. You just download and install it, and once you have the machine up and running, you're good to go.

The implementation took three months. I had a complete listing of my log sources, so I just went down the list. I started with the most important logs, such as DNS, DHCP, Active Directory, and then I went down from there. We have 33 sources being collected currently.

What about the implementation team?

I did it on my own. I also take care of its maintenance.

What was our ROI?

It is not easy to calculate ROI on such a solution. The ROI is in terms of having the ability to find what you need in your logs quickly and being confident that you're not going to lose your logs and you can really search for things. It is the assurance that you can get that information when you need it. If you don't have it, you're in trouble. If you are compromised, then you have a problem. It is hard to measure the cost of these things.

As compared to other systems, I'm getting a good value for the money. I'm not paying a variable cost. I have a pretty predictable cost model, and if I need to grow, it is all up to me for the resources that I put, not to them. That's a really good model, and I like it.

What's my experience with pricing, setup cost, and licensing?

It has a fixed price, which is what I like about LogPoint. I bought the system and paid for it, and I pay maintenance. It is not a consumption model. Most SIEMs or most of the log management systems are consumption-based, which means that you pay for how many logs you have in the system. That's a real problem because logs can grow very quickly in different circumstances, and when you have a variable price model, you never know what you're going to pay. Splunk is notoriously expensive for that reason. If you use Splunk or QRadar, it becomes expensive because there are not just the logs; you also have to parse the logs and create indexes. Those indexes can be very expensive in terms of space. Therefore, if they charge you by this space, you can end up paying a significant amount of money. It can be more than what you expect to pay. I like the fact that LogPoint has a fixed cost. I know what I'm going to pay on a yearly basis. I pay that, and I pay the maintenance, and I just make it work.

Which other solutions did I evaluate?

I had Logtrust, and I looked at AlienVault, Splunk, and IBM QRadar. Splunk was too expensive, and QRadar was too complex. AlienVault was very good and very close to LogPoint. I almost went to AlienVault, but its cost turned out to be significantly higher than LogPoint, so I ended up going for LogPoint because it was a better cost proposition for me.

What other advice do I have?

It depends on what you're looking for. If you really want a full-blown SIEM with all the functionality and all the correlation analysis, you might be able to find products that have more sophisticated correlations, etc. If you just want to keep your logs and be able to find information quickly within your systems, LogPoint is more than capable. It is a good cost proposition, and it works extremely well and very fast.

I would rate it an eight out of 10. It is a good cost proposition. It is a good value. It has all the functionality for what I wanted, which is my log management. I'm not using a lot of the feature sets that are very advanced. I don't need them, so I can't judge it based on those, but for my needs, it is an eight for sure.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Simon Thornton - PeerSpot reviewer
Cyber Security Services Operations Manager at an aerospace/defense firm with 501-1,000 employees
Real User
Top 10
Provides a single window into your network, SIEM, network flows, and risk management of your assets
Pros and Cons
  • "The most valuable thing about QRadar is that you have a single window into your network, SIEM, network flows, and risk management of your assets. If you use Splunk, for instance, then you still need a full packet capture solution, whereas the full packet capture solution is integrated within QRadar. Its application ecosystem makes it very powerful in terms of doing analysis."
  • "I'd like them to improve the offense. When QRadar detects something, it creates what it calls offenses. So, it has a rudimentary ticketing system inside of it. This is the same interface that was there when I started using it 12 years ago. It just has not been improved. They do allow integration with IBM Resilient, but IBM Resilient is grotesquely expensive. The most effective integration that IBM offers today is with IBM Resilient, which is an instant response platform. It is a very good platform, but it is very expensive. They really should do something with the offense handling because it is very difficult to scale, and it has limitations. The maximum number of offenses that it can carry is 16K. After 16K, you have to flush your offenses out. So, it is all or nothing. You lose all your offenses up until that point in time, and you don't have any history within the offense list of older events. If you're dealing with multiple customers, this becomes problematic. That's why you need to use another product to do the actual ticketing. If you wanted the ticket existence, you would normally interface with ServiceNow, SolarWinds, or some other product like that."

What is our primary use case?

We're a customer, a partner, and a reseller. We use QRadar in our own internal SOC. We are also a reseller of QRadar for some projects, so we sell QRadar to customers, and we're a partner because we have different models: we roll the product out to a customer as part of our service, where we own it but the customer pays, and we also do full deployments that the customer owns. So, we are actually fulfilling all three roles.

What is most valuable?

The most valuable thing about QRadar is that you have a single window into your network, SIEM, network flows, and risk management of your assets. If you use Splunk, for instance, then you still need a full packet capture solution, whereas the full packet capture solution is integrated within QRadar. Its application ecosystem makes it very powerful in terms of doing analysis.

What needs improvement?

In terms of the GUI, they need to improve the consistency. It has been written by different teams at different times, so as you go around the interface, you'll find a lot of inconsistencies in the way it works.

I'd like them to improve offense handling. When QRadar detects something, it creates what it calls offenses, so it has a rudimentary ticketing system inside of it. This is the same interface that was there when I started using it 12 years ago; it just has not been improved. The most effective integration that IBM offers today is with IBM Resilient, which is an incident response platform. It is a very good platform, but it is grotesquely expensive. They really should do something with the offense handling because it is very difficult to scale and it has limitations. The maximum number of offenses that it can carry is 16K. After 16K, you have to flush your offenses out, so it is all or nothing: you lose all your offenses up to that point in time, and you don't have any history of older events in the offense list. If you're dealing with multiple customers, this becomes problematic. That's why you need to use another product to do the actual ticketing; you would normally interface with ServiceNow, SolarWinds, or some other product like that.
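
As a rough illustration of the kind of glue this ends up requiring, the sketch below pulls open offenses from QRadar's REST API and raises a ServiceNow incident for each one. It is a minimal sketch only: the console hostname, token, instance name, credentials, and field choices are placeholders, and the exact endpoints and fields should be checked against your own QRadar and ServiceNow releases.

```python
import requests

# Placeholders: substitute your own console, token, instance, and credentials.
QRADAR_CONSOLE = "https://qradar.example.com"
QRADAR_TOKEN = "<authorized-service-token>"
SNOW_INSTANCE = "https://yourinstance.service-now.com"
SNOW_AUTH = ("integration_user", "<password>")

def fetch_open_offenses():
    """Pull open offenses from the QRadar SIEM API."""
    resp = requests.get(
        f"{QRADAR_CONSOLE}/api/siem/offenses",
        headers={"SEC": QRADAR_TOKEN, "Accept": "application/json"},
        params={"filter": "status=OPEN", "fields": "id,description,magnitude,start_time"},
        verify=True,  # adjust certificate verification for your console
    )
    resp.raise_for_status()
    return resp.json()

def raise_incident(offense):
    """Create a ServiceNow incident for an offense via the Table API."""
    payload = {
        "short_description": f"QRadar offense {offense['id']}: {offense['description']}",
        "description": f"Magnitude {offense['magnitude']}, started {offense['start_time']}",
    }
    resp = requests.post(
        f"{SNOW_INSTANCE}/api/now/table/incident",
        auth=SNOW_AUTH,
        json=payload,
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]

if __name__ == "__main__":
    for offense in fetch_open_offenses():
        print(offense["id"], "->", raise_incident(offense))
```

In practice you would also record which offenses have already been ticketed, for example by noting the incident number back on the offense, so repeated runs don't raise duplicates.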

Their support should also be improved. Their support is very slow, and it is very difficult to find knowledgeable people within IBM.

Its price and licensing should be improved. It is overly expensive and overly complex in terms of licensing. 

For how long have I used the solution?

I have been using this solution for 12 years.

How are customer service and technical support?

Their support is very slow, and it is very difficult to find knowledgeable people within IBM. I'm an expert in the use of QRadar, and I know the technical internals of QRadar very well, but it is sometimes very painful to deal with IBM's support and actually get them to do something. Their support is very difficult for some customers to work with.

Which solution did I use previously and why did I switch?

I work with Prelude, which is made by a French company. It is a basic beginner's SIEM. If you have never had a SIEM before and you want to experiment, this is where you would start, but you would probably leave it very quickly. I've also worked with ArcSight and Splunk.

My recommendation would depend upon your technical appetite and capability. QRadar is essentially a Linux-based Red Hat appliance. Unfortunately, you still need some Linux knowledge to work with it effectively. Not everything is done through the GUI.

Comparing it with Splunk in terms of licensing, IBM's model is simpler than Splunk's. Splunk has two models. One is volume-based, so you pay for the volume of data sent in daily. The other is based on the number of events per second, which they introduced relatively recently. Splunk can be more expensive than QRadar when you start adding what they call indexes. Basically, you create specific indexes to hold, for instance, logs related to Cisco. This is implicit within QRadar and it is designed that way, but within Splunk, if you want that performance and you have large volumes of logs, you need to create indexes. This is where the cost of Splunk can escalate.
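
As a rough way to compare a volume-based quote against an events-per-second quote, a back-of-the-envelope conversion is usually enough. The event rate and average event size below are assumptions you would replace with measurements from your own environment:

```python
# Back-of-the-envelope sizing with assumed numbers; measure your own averages.
EVENTS_PER_SECOND = 2_500   # e.g. taken from an existing collector's statistics
AVG_EVENT_BYTES = 700       # typical syslog/Windows events run a few hundred bytes to ~1.5 KB

bytes_per_day = EVENTS_PER_SECOND * AVG_EVENT_BYTES * 86_400
gb_per_day = bytes_per_day / 1_000_000_000
print(f"{EVENTS_PER_SECOND} EPS at ~{AVG_EVENT_BYTES} B/event is roughly {gb_per_day:.0f} GB/day of ingest")

# Storage planning is a separate question from licensing: the parsed and indexed
# representation can grow or shrink relative to raw volume depending on the product,
# so apply your own measured multiplier rather than relying on the raw figure alone.
```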

How was the initial setup?

Installing QRadar is very simple. You insert a DVD, boot the system, and it runs the installation after asking you a few questions. It runs pretty much automatically, and then you're up and going. From an installation point of view, it is very easy.

The only thing that you have to get right before you do the installation is your architecture, because it has event collectors, event processors, flow collectors, flow processors, and a number of other components. You need to understand where they should be placed. If you want more storage, then you need to attach data nodes to the processors. All of this is something that you need to have in mind when you design and deploy.

What's my experience with pricing, setup cost, and licensing?

It is overly expensive and overly complex in terms of licensing. They have many different appliances, which makes it extremely difficult to choose the technology and the QRadar components that you should be deploying.

They have improved some of it in the last few years. They have made it slightly easier with the fact that you can now buy virtual versions of all the appliances, which is good, but it is still very fragmented. For instance, on some of the smaller appliances, there is no upgrade path. So, if you exceed the capacity of the appliance, you have to buy a bigger appliance, which is not helpful because it is quite a major cost. If you want to add more disks to the system, they'll say that you can't. If the older appliances ship with 2-terabyte disks, and you point out that 10-terabyte disks are commercially available, they will say it is not possible, even though there is no technical reason why it cannot be done. So, they're not very flexible from that point of view. For IBM, it is good because you basically have to buy new appliances, but from a customer's point of view, it is a very expensive investment.

What other advice do I have?

Make sure that you have buy-in from different teams in the company, because you will need help from the network teams and potentially from IT.

You need to have a strategy for how you onboard logs into the SIEM. Do you take a risk-based approach, or do you onboard everything? You should take the time to understand the architecture and the implications of design choices. For instance, QRadar components communicate with each other using SSH tunnels. The normal practice in security is that if I put a device in a DMZ, then communication between a device on the normal network, which is a higher security zone, and the DMZ, which is a lower security zone, is initiated from the high-security zone. You would not expect the device in the DMZ to initiate communication back into the normal network. In the case of QRadar, if you put your processors in the DMZ, they have to communicate with the console, which means that you have to allow the processor to initiate that communication. This has consequences. If you have remote sites, or you plan to use cloud-based processors, collectors, etc., with an internal console, the same communication channels have to exist. So, it requires some careful planning. That's the main thing.

I would rate QRadar an eight out of 10 as compared to other products.

Disclosure: I am a real user, and this review is based on my own experience and opinions.