Buyer's Guide
Security Information and Event Management (SIEM)
May 2023
Get our free report covering Microsoft, Splunk, Elastic, and other competitors of IBM Security QRadar. Updated: May 2023.
708,544 professionals have used our research since 2012.

Read reviews of IBM Security QRadar alternatives and competitors

Cloud Security Advisor at a tech services company with 10,001+ employees
Real User
Top 20
Gives us granular visibility into traffic from multiple firewalls and proxies, and MIP Labels help secure our data
Pros and Cons
  • "Sentinel enables us to ingest data from our entire ecosystem. In addition to integrating our Cisco ASA Firewall logs, we get our Palo Alto proxy logs and some on-premises data coming from our hardware devices... That is very important and is one way Sentinel is playing a wider role in our environment."
  • "The following would be a challenge for any product in the market, but we have some in-house apps in our environment... our apps were built with different parameters and the APIs for them are not present in Sentinel. We are working with Microsoft to build those custom APIs that we require. That is currently in progress."

What is our primary use case?

We have sometimes found phishing emails when Exchange email comes from outside the domain. With Microsoft Defender alone, without Sentinel, we were not able to track them, and a couple of times data was compromised. With Sentinel, what we have done is integrate Microsoft Defender for Endpoint, Microsoft 365 Defender, and our Exchange Online for all the email communications in and out.

How has it helped my organization?

With the investigation and threat-hunting services in Sentinel, we have been able to track and map our complete traffic: Where it started from, where it was intercepted, and where the files were downloaded and exchanged. We have been able to see how a phishing email was entering our domain. Accordingly, we understood that we needed to develop or modify some rules in Exchange and now, we do not have any phishing emails.

Sentinel enables us to investigate threats and respond holistically from one place to all of the attack techniques catalogued in MITRE ATT&CK, as well as manual, DDoS, and brute-force attacks. They are quickly identified by Sentinel. That is of high importance because we don't use any other product with Microsoft. Our SOC team continuously analyzes and monitors Sentinel and the activities and events that are happening. That team needs to be equipped with all of the real-time data we are getting from our ecosystem.

We have also integrated our SIEM with multiple firewalls and proxies. The traffic in and out, coming from the firewalls and proxies, is intercepted by Sentinel. We are now getting granular visibility into our traffic. We can see the hits we are getting from various regions, such as the hits that recently came from Russia. We have multiple such attacks on our firewall front end and we have been able to develop more granular rules on our firewalls.

And for DLP we have the help of protection from Microsoft Information Protection labels that we have defined for our data. Whenever this labeled data is shared, the data is limited to the recipients who were specified in the email. Similarly, our OneDrive data has been secured with the MIP Labels. All of this tracking is happening on Sentinel, which is giving us a broader view of where our data is traveling within and outside our organization as well.
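The recipient restriction described above can be modeled as a small access check. A toy Python sketch follows — the label name, policy shape, and addresses are all invented, and real MIP policy is far richer than an allow-list:

```python
# Toy model of label-based sharing restriction: a label on a document
# carries an allowed-recipient list, and sharing outside it is blocked.
# Labels and addresses are invented; real MIP policy is much richer.

labels = {
    "Confidential-Finance": {"allowed": {"cfo@example.com", "fin@example.com"}},
}

def can_share(label: str, recipient: str) -> bool:
    """True only if the label exists and the recipient is on its allow-list."""
    policy = labels.get(label)
    return policy is not None and recipient in policy["allowed"]

print(can_share("Confidential-Finance", "cfo@example.com"))       # True
print(can_share("Confidential-Finance", "outsider@example.org"))  # False
```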

People tend to go with Microsoft because it provides you with 360-degree protection, protecting your files, network, infrastructure, and cloud environment. Each of its products is linked and interacts with the others. Microsoft Defender for Cloud will interact with Microsoft Defender for Cloud Apps, for example. And both of them can interact with Sentinel. Sentinel is the central SIEM in Microsoft and has the ability to take all the instructions from all of these Microsoft products, and it gives you a central dashboard view in Azure. That helps manage infrastructure and identify threats. It's a single pane of glass. That's why Microsoft is gaining ground compared to other products.

Eliminating our multiple dashboards was a little tough in the beginning, but the Microsoft support team's expertise helped us create our own dashboard. Previously, when we started integrating all the products, it was very hard for us to give a broader overview to management. It was only something the technical people could do, because they know what all those events mean. But when it came to a dashboard and presenting the data to the stakeholders, it was very tough. With the help of Microsoft's expert engineers, we were able to create dashboards in Sentinel, as well as with the help of Azure dashboards and Microsoft Power BI, and we were able to present the data.

We got Sentinel to send the data to Microsoft Power BI and that helped us create some very useful and easy dashboards so that our stakeholders and senior-level management, who are non-technical guys, could understand much better how we are utilizing this product. They can see how much we are making use of it to investigate, hunt, and track the incidents and events, and the unnecessary accessing of applications in the environment. As a result, we started to put granular controls in place and restrict unnecessary websites.

What is most valuable?

The watchlist is one of the features that we have found to be very helpful. We had some manual data in Excel files that we used to upload to Sentinel. Sentinel draws more insightful information out of that Excel data, including user identities, IP addresses, hostnames, and more. We relate that data with the existing data in Sentinel and we understand more.
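Conceptually, a watchlist lookup is a join between uploaded reference data and live events. A minimal Python sketch of that enrichment step — the IPs, owners, and field names are hypothetical, and in Sentinel itself this would be a KQL join against the watchlist:

```python
# Sketch of what a watchlist lookup does conceptually: enrich raw events
# with fields from a manually maintained list. All data here is invented.

watchlist = {
    "10.0.0.5": {"owner": "alice", "host": "FIN-WS-01"},
    "10.0.0.9": {"owner": "bob", "host": "HR-WS-02"},
}

events = [
    {"src_ip": "10.0.0.5", "action": "login_failed"},
    {"src_ip": "203.0.113.7", "action": "login_failed"},
]

def enrich(events, watchlist):
    """Attach watchlist context to each event; unknown IPs get None fields."""
    out = []
    for ev in events:
        ctx = watchlist.get(ev["src_ip"])
        out.append({**ev,
                    "owner": ctx["owner"] if ctx else None,
                    "host": ctx["host"] if ctx else None})
    return out

enriched = enrich(events, watchlist)
print(enriched[0]["owner"])  # known IP resolves to its owner
```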

Another important feature is the user behavior analytics, UEBA. We can see how our users are behaving and if there is malicious behavior such as an atypical travel alert or a user is somewhere where he is not regularly found. Or, for example, if a user does not generally log in at night but we suddenly find him active at night, the user behavior analytics feature is very useful. It contains information from Azure Identity as well as Office 365.
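The off-hours case described above can be reduced to a toy check. This Python sketch assumes a fixed 08:00–20:00 active window per user; real UEBA baselines are learned statistically, so this is only an illustrative stand-in:

```python
from datetime import datetime

# Toy version of one UEBA signal: flag a login that falls outside the
# hours a user is normally active. The fixed window is an assumption;
# real UEBA derives each user's baseline from historical behavior.

def is_atypical_login(login_time: datetime, usual_hours=range(8, 20)) -> bool:
    """True if the login hour is outside the user's usual active window."""
    return login_time.hour not in usual_hours

print(is_atypical_login(datetime(2023, 5, 1, 3, 15)))   # 03:15 -> True
print(is_atypical_login(datetime(2023, 5, 1, 10, 0)))   # 10:00 -> False
```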

With the E5 license, we have Microsoft Defender for Cloud Apps, Microsoft Information Protection, Defender for Cloud, and Defender for Office 365. All of these products are integrated with Sentinel because it has those connectors. With both Microsoft and non-Microsoft products it can be integrated easily. We also have ASA on-premises firewalls and we have created a connector and have been sending those syslogs to Sentinel to analyze the traffic. That is the reason we are able to reverse-investigate and hunt threats going on in our network, end to end.
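Before forwarded syslog traffic can be analyzed, the connector has to parse each line into fields. A minimal Python sketch of that step for a Cisco ASA-style message — the sample line and extracted field names are illustrative, and a production parser covers many more message IDs:

```python
import re

# Minimal sketch of the parsing a syslog connector performs on Cisco
# ASA-style messages (%ASA-<severity>-<message_id>: <text>) before the
# data is useful for hunting. Sample line is invented for illustration.

ASA_RE = re.compile(r"%ASA-(?P<severity>\d)-(?P<msg_id>\d{6}):\s+(?P<text>.*)")

def parse_asa(line: str):
    """Return severity/msg_id/text fields, or None for an unmatched line."""
    m = ASA_RE.search(line)
    if not m:
        return None
    return m.groupdict()

sample = "May 01 2023 10:15:02: %ASA-6-302013: Built outbound TCP connection 42"
print(parse_asa(sample)["msg_id"])  # "302013"
```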

Sentinel enables us to ingest data from our entire ecosystem. In addition to integrating our Cisco ASA Firewall logs, we get our Palo Alto proxy logs and some on-premises data coming from our hardware devices. We also get our Azure Firewall logs, and the logs from the Microsoft 365 suite of products, like MIP, Defender for Cloud, Defender for Cloud Apps, et cetera.

When I think about the kinds of attack techniques that you are not able to understand at eye level, the AI/ML logic being used by Sentinel helps an administrator understand them in layman's language. It tells you that something has been identified as a malicious event or activity being performed by a user. All of those details are mentioned in an understandable manner. That is very important and is one way Sentinel is playing a wider role in our environment.

We use Microsoft Defender for Cloud and from that we get our regulatory compliance recommendations, CSPM recommendations, cost recommendations, and cost-optimizing strategies and techniques, such as purchasing reserved instances. It helps us reduce the number of unused VMs, and turn off VMs that are not in production, as well as DevOps VMs, during the early hours. We also use it for applying multi-factor authentication for users and reducing the number of owner or administrator roles that are assigned to subscriptions.

And the bi-directional sync capabilities of Defender for Cloud with other Microsoft products are near real-time, taking a couple of seconds. Within a minute, the information is updated, always, for all of the products that are integrated. Some products have a latency of around 4 to 12 hours to update.

What needs improvement?

The following would be a challenge for any product in the market, but we have some in-house apps in our environment. We were thinking of getting the activities of those apps into Sentinel so that it could apply user behavior analytics to them. But our apps were built with different parameters and the APIs for them are not present in Sentinel. We are working with Microsoft to build those custom APIs that we require. That is currently in progress. 

We are happy with the product, but when it comes to integrating more things, it is a never-ending task. Wherever we have a new application, we wish that Sentinel could also monitor and investigate it. But that's not possible for everything.

For how long have I used the solution?

I have used Microsoft Sentinel for around two years now.

What do I think about the scalability of the solution?

It is scalable, with the help of the log retention facility in Sentinel in the Log Analytics workspace. We can limit the data that is being retained in it and that limits the cost.
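The cost lever here is simple arithmetic: billed retention is roughly daily ingest times retained days times a per-GB rate. A back-of-the-envelope Python sketch with a made-up rate (not an Azure price):

```python
# Why limiting retention limits cost: a rough model where billed cost
# scales with daily ingest, retained days, and a per-GB rate. The rate
# below is a placeholder for illustration, not a real Azure price.

def retention_cost(gb_per_day: float, days: int, price_per_gb: float) -> float:
    """Approximate retention cost for a Log Analytics-style workspace."""
    return gb_per_day * days * price_per_gb

print(retention_cost(10, 90, 0.25))   # 10 GB/day, 90-day retention -> 225.0
print(retention_cost(10, 30, 0.25))   # cutting retention to 30 days -> 75.0
```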

We have it deployed across multiple sites.

How are customer service and support?

In the beginning, it was not so good, but when we switched from standard support to premium support, the support improved.

Which solution did I use previously and why did I switch?

I have been using QRadar and Splunk, but they both only gave me a centralized SIEM solution, a SOAR, and a VAPT solution. But I wanted to reduce the efforts required when jumping into different portals at different points in time. The way things stood, I had to hire different engineers to maintain those different portals and products. With the help of Sentinel, I could integrate all of my applications with Sentinel, as the APIs were ready and the support for them from Microsoft was good. That's why we thought of moving to Sentinel.

What was our ROI?

It was pretty hard to convince the stakeholders to invest so much in protecting the ecosystem through investigating and hunting, which is mainly what Sentinel is for. The integration part comes later. But convincing the stakeholders about the cost we would be incurring was a big challenge.

Slowly but surely, we started integrating many of our products into Sentinel and it started showing us things on the dashboard. And with the help of the Logic Apps, we were able to do multiple other things, like automatically creating tickets out of the incidents that are detected by Sentinel, and assigning them to the SOC team. It reduced the SOC team's workload because they used to manually investigate activities and events. Sentinel killed those manual tasks and started giving "ready-made" incidents to work on and mitigate. It has helped my SOC team because that team was facing a lot of issues with workload.
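The automation described above boils down to turning incidents into tickets and rotating them across the SOC team. A toy Python sketch — analyst names and incident IDs are invented, and the real workflow runs in a Logic App against a service desk API:

```python
from itertools import cycle

# Sketch of the Logic App automation the review describes: each incident
# detected by the SIEM becomes a ticket assigned to SOC analysts in
# rotation. Names and incident IDs are invented for illustration.

def assign_tickets(incidents, analysts):
    """Create one ticket per incident, assigning analysts round-robin."""
    rotation = cycle(analysts)
    return [{"incident": inc, "assignee": next(rotation)} for inc in incidents]

tickets = assign_tickets(["INC-1", "INC-2", "INC-3"], ["analyst_a", "analyst_b"])
print(tickets[2]["assignee"])  # third incident wraps back to analyst_a
```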

Then we also got visibility into different products, like Microsoft Defender, and Defender for Cloud Apps, whereas we used to have to jump into different portals to see and analyze the logs. Now, we don't have to go to any other product. All the integration is happening with Sentinel, and with the help of the AI/ML in Sentinel, investigating and threat-hunting have become easier.

It took around six months for us to realize these benefits because we were slowly integrating things, one by one, into it. We were a little late in identifying the awesome capabilities it has.

Most of our products are integrated but a few of our products are facing challenges getting connected. We are dealing with it with Microsoft and they are creating a few connectors for us.

We had to pay extra compared to what we would pay for other products in the market. But you have to lose something to gain something. Sentinel reduced the efforts we are putting into monitoring different products on different portals, and reduced the different kinds of expertise we needed for that process. Now, there are two to three people handling Sentinel.

What's my experience with pricing, setup cost, and licensing?

The pricing was a big concern and it was very hard to explain to our stakeholders why they should bear the licensing cost and the Log Analytics cost. And the maintenance and use costs were on the higher side compared to other products. But the features and capabilities were going to ease things for my operations and SOC teams. Finally, the stakeholders had clarity.

Which other solutions did I evaluate?

Microsoft is costlier. Some organizations may not be able to afford the cost of Sentinel orchestration and the Log Analytics workspace. The transaction hosting cost is also a little bit on the high side, compared to AWS and GCP. But because it gives a 360-degree combination of security products that are linked with each other, Microsoft is getting more market share compared to Splunk, vScaler, or CrowdStrike.

But if I want to protect my files, to see where my files have been sent, or if the file I'm receiving is free of malware, or even if one of my users has tried to open it, Windows Defender would track it first. The ATP (Advanced Threat Protection) scans my emails and the attachments first. It determines if the attachment is safe and, if it is not safe, it will block it. I don't have to create any granular or manual settings. That connectivity across different products has a brighter future. That's the reason, even though we have a small budget, that we are shifting to Microsoft.

There are competitive applications in the market, like vScaler, Splunk, QRadar, and CrowdStrike. These are also good in terms of their features and capabilities. But these products only work as a SIEM or VAPT solution. They won't scan everything that we need to protect.

But if you are only considering SOAR, I prefer CrowdStrike because of cost and the features it provides. The AI/ML is also more developed compared to Sentinel.

But why Sentinel? Because it not only covers Microsoft products, but it also has API connectors to connect with any non-Microsoft products. It has inbound APIs for connectivity to QRadar, vScaler, or Splunk, so we can bring their data into Sentinel to be analyzed. Splunk is doing its job anyway, but Sentinel can filter the information and use it to investigate things. 

Those have great visibility and great potential compared to Sentinel. But for products that are out of their ecosystem, those competitive solutions might face issues in connecting or integrating with them.

What other advice do I have?

We have created a logic app that creates tickets in our service desk. Whenever a ticket is raised, it is automatically assigned to one of the members of our SOC team. They investigate, or reverse-investigate, and track the incident.

Every solution requires continuous maintenance. We cannot rely on AI/ML for everything. Whenever there is a custom requirement or we want to do something differently, we do sit with the team to create the required analytic rules, et cetera. It doesn't involve more than three to four people.

In terms of the comprehensiveness of Sentinel when it comes to security, it plays a wide role in analysis, including geographical analysis, of our multiple sites. It is our centralized eye where we can have a complete analysis and view of our ecosystem.

Go with a single vendor security suite if you have the choice between that and a best-of-breed strategy. It is better to have a single vendor for security in such a complex environment of multiple vendors, a vendor who would understand all the requirements and give you a central contact. And the SLA for response should be on the low side in that situation, as Microsoft, with its premium support, gives an SLA of an immediate callback, within two to three minutes of creating a ticket.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: I am a real user, and this review is based on my own experience and opinions.
SVP of Managed Security at CRITICALSTART
MSP
Be cautious of metadata inclusion for log types in pricing. Having the ability to do real-time analytics drives down attacker dwell time.
Pros and Cons
  • "The ability to have high performance, high-speed search capability is incredibly important for us. When it comes to doing security analysis, you don't want to be sitting around waiting to get data back while an attacker is sitting on a network, actively attacking it. You need to be able to answer questions quickly. If I see an indicator of attack, I need to be able to rapidly pivot and find data, then analyze it and find more data to answer more questions. You need to be able to do that quickly. If I'm sitting around just waiting to get my first response, then it ends up moving too slow to keep up with the attacker. Devo's speed and performance allows us to query in real-time and keep up with what is actually happening on the network, then respond effectively to events."
  • "There is room for improvement in the ability to parse different log types. I would go as far as to say the product is deficient in its ability to parse multiple, different log types, including logs from major vendors that are supported by competitors. Additionally, the time that it takes to turn around a supported parser for customers and common log source types, which are generally accepted standards in the industry, is not acceptable. This has impacted customer onboarding and customer relationships for us on multiple fronts."

What is our primary use case?

We use Devo as a SIEM solution for our customers to detect and respond to things happening in their environment. We are a service provider who uses Devo to provide services to our customers.

We are integrating from a source solution externally. We don't exclusively work inside of Devo. We kind of work in our source solution, pivoting in and back out.

How has it helped my organization?

With over 400 days of hot data, we can query and look for patterns historically. We can pivot into past data and look for trends and analytics, without needing to have a change in overall performance nor restore data from cold or frozen data archives to get answers about things that may be long-term trends. Having 400 days of live data means that we can do analytics, both short-term and long-term, with high speed.

The integration of threat intelligence data absolutely provides context to an investigation. Threat intelligence integration provides great contextual data, which has been very important for us in our investigation process as well. The way that the data is integrated and accessible to us is very useful for security analysts. The ability to have the integration of large amounts of threat intelligence data and provide that context dynamically with real time correlation means that, as analysts, we are seeing events as they're happening in customer environments. We are getting the context of whether that is related to something that we're also watching from a threat intelligence perspective, which can help shape an investigation.

What is most valuable?

The ability to have high performance, high-speed search capability is incredibly important for us. When it comes to doing security analysis, you don't want to be sitting around waiting to get data back while an attacker is sitting on a network, actively attacking it. You need to be able to answer questions quickly. If I see an indicator of attack, I need to be able to rapidly pivot and find data, then analyze it and find more data to answer more questions. You need to be able to do that quickly. If I'm sitting around just waiting to get my first response, then it ends up moving too slow to keep up with the attacker. Devo's speed and performance allows us to query in real-time and keep up with what is actually happening on the network, then respond effectively to events.

The solution’s real-time analytics of security-related data perform incredibly well. I think all SIEM solutions have struggled to be truly real-time, because events happen out in systems and on a network before they are ever ingested. However, when I look at its overall performance and correlation capabilities, and its ability to then analyze that data rapidly, it has given us performance which is exceptional.

It is incredibly important in security that the real-time analytics are immediately available for query after ingest. One of the most important things that we have to worry about is attacker dwell time, e.g., how long is an attacker allowed to sit on a system after it is compromised and discover more data, then compromise more systems on a network or expand what they currently have. For us, having the ability to do real-time analytics essentially drives down attacker dwell time because we're able to move quickly and respond more effectively. Therefore, we are able to stop the attacker sooner during the attack lifecycle and before it becomes a problem.
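Dwell time, as used here, is simply the gap between initial compromise and detection. A small Python sketch with invented timestamps:

```python
from datetime import datetime

# Dwell time as the reviewer uses the term: how long an attacker sits on
# a compromised system before being detected. Timestamps are invented.

def dwell_hours(compromised_at: datetime, detected_at: datetime) -> float:
    """Hours between initial compromise and detection."""
    return (detected_at - compromised_at).total_seconds() / 3600

# Compromise at 02:00, detection at 08:30 the same day -> 6.5 hours.
print(dwell_hours(datetime(2023, 5, 1, 2, 0), datetime(2023, 5, 1, 8, 30)))
```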

The solution speed is excellent for us, especially in regards to attacker dwell time and the speed that we're able to both discover and analyze data as well as respond to it. The fact that the solution is high performance from a query perspective is very important for us.

Another valuable feature would be detection capability. The ability to write high quality detection rules to do correlation in an advanced manner that really works effectively for us. Sometimes, the correlation in certain engines can be hampered by performance, but it also can be affected by an inability to do certain types of queries or correlate certain types of data together. The flexibility and power of Devo has given us the ability to do better detection, so we have better detection capabilities overall.

The UI is very good. They have an implementation of CyberChef, which is very good for security analysts. It allows us to manipulate, transform, and enrich data for analytics in a very fast, effective manner. The query UI is something that most people who have worked with SIEM platforms will be very used to utilizing. It is very similar to things that they've seen before. Therefore, it's not going to take them a long time to learn their way around the platform.

The pieces of the Activeboards that are built into SecOps have been very good and helpful for us.

They have high performance and high-speed search as well as the ability to pivot quickly. These are the things that they do well.

What needs improvement?

There is room for improvement in the ability to parse different log types. I would go as far as to say the product is deficient in its ability to parse multiple, different log types, including logs from major vendors that are supported by competitors. Additionally, the time that it takes to turn around a supported parser for customers and common log source types, which are generally accepted standards in the industry, is not acceptable. This has impacted customer onboarding and customer relationships for us on multiple fronts.

I would like to see Devo rely more on the rules engine, with more things from the flow, correlation, and rules engine making their way into the standardized product. This would allow a lot of those pieces to be part of SecOps, so we could do advanced JOIN rules and capabilities inside of SecOps without flow. That would be great functionality to add.

Devo's pricing mechanism, whereby parsed data is charged after metadata is added to the event itself, has led to unexpected price increases for customers based on new parsers being built. Pricing has not been competitive (log source type by log source type) with other vendors in the SIEM space.

Their internal multi-tenant architecture has not mapped directly to ours the way that it was supposed to nor has it worked as advertised. That has created challenges for us. This is something they are still actively working on, but it is not actually released and working, and it was supposed to be released and working. We got early access to it in the very beginning of our relationship. Then, as we went to market with larger customers, they were not able to enable it for those customers because it was still early access. Unfortunately, it is still not generally available for them. As a result, we don't get to use it to help get improvements on multi-tenant architecture for us.

For how long have I used the solution?

I have been using the solution for about a year.

What do I think about the stability of the solution?

Stability has been a bit of a problem. Although we have not experienced any catastrophic outages within the platform, there have been numerous impacts to customers. This has caused a degradation of service over time by impacting customer value and the customer's perception of value, both from the platform and from our service as a service provider.

We have full-time security engineers who do maintenance work and upkeep for all our SIEM solutions. However, that may be a little different because we are a service provider. We're looking at multiple, large deployments, so that may not be the same thing that other people experience.

What do I think about the scalability of the solution?

We haven't run into any major scalability problems with the solution. It has continued to scale and perform well for query. The one scalability problem that we have encountered has to do with multi-tenancy at scale for solutions integrating SecOps. Devo is still working to bring to market these features to allow multi-tenancy for us in this area. As a result, we have had to implement our own security, correlation rules, and content. That has been a struggle at scale for us, in comparison to using quality built-in, vendor content for SecOps, which has not yet been delivered for us.

There are somewhere between 45 to 55 security analysts and security engineers who use it daily.

How are customer service and technical support?

Technical support for operational customers has been satisfactory. However, support during onboarding and implementation, including the need for professional services engagements to develop parsers for new log types and troubleshoot problems during onboarding, has been severely lacking. Often, tenant setup tasks and support requests during onboarding have gone weeks and even months without resolution, and sometimes without reply, which has impacted customer relationships.

Which solution did I use previously and why did I switch?

While we continue to use Splunk as a vendor for the SIEM services that we provide, we have also added Devo as an additional vendor to provide services to customers. We have found similar experiences at both vendors from a support perspective. Although professional services skill level and availability might be better at Devo, the overall experience for onboarding and implementing a customer is still very challenging with both.

How was the initial setup?

The deployment was fairly straightforward. For how we did the setup, we were building an integration with our product, which is a little more complicated, but that's not what most people are going to be doing. 

We were building a full integration with our platform. So, we are writing code to integrate with the APIs.

Not including our coding work that we had to do on the integration side, our deployment took about six weeks.

What about the implementation team?

It was just us and Devo's team building the integration. Expertise was provided from Devo to help work through some things, which was absolutely excellent.

What was our ROI?

In incidents where we are using Devo for analysis, our mean time to remediation for SIEM is lower. We're able to query faster, find the data that we need, and access it, then respond quicker. There is some ROI on query speed.

What's my experience with pricing, setup cost, and licensing?

Based on adaptations that they have made, where they are essentially charging for metadata around events that we collect now, that extra charge cancels out any price savings compared to Splunk or Azure Sentinel. 

Before, the cost was just the data itself, but they have adjusted it now so that they even charge if we parse the data and add in names for fields that come in. For example, we get a username: if you go to log into Windows, and it says, "That username tried to log in," it labels the username with your name. They will charge us for the space that label takes up. On top of that, this has caused us to lose all of the price savings that were being found before. In fact, in some cases it is more expensive than the competitors as a result. Charging for metadata on parsed fields has led to significant, unexpected pricing for customers.

Be cautious of metadata inclusion for log types in pricing, as there are some "gotchas" with that. This would not be charged by other vendors, like Splunk, where you are getting Windows logs. Windows logs have a bunch of blank space in them; Splunk essentially just compresses that. After they compress and label it, that is the parse you see, but they don't charge you for the white space or the metadata, whereas Devo is charging you for that. Our advice: "Pay attention to ingest charges for new data types, as you will be charged for metadata as a part of the overall license usage." 

There are charges for metadata, as Devo counts data after parsing and enrichment against license usage, whereas other vendors charge the license before parsing and enrichment: you are billed on the raw, compressed data first, then it is parsed and enriched, and you don't get charged for that part. That difference is hitting some of our customers in a negative way, especially when there is an unparsed log type that they don't support. One that is not supported right now is Cisco ASA, which should be supported as it is a major vendor. Say a customer is currently bringing 50 gigabytes of Cisco ASA logs into Splunk, without considering that parsing adds 25% metadata. When they shift it over to Devo, they will actually see 62.5 gigs, because they are charged for the metadata that they weren't charged for in Splunk. Even though the price per gig is lower with Devo, charging for more gigs in the end leaves you net neutral, or sometimes saving if there is not a lot of metadata, and sometimes losing money on events that have a ton of metadata, because volume can increase by as much as 50%. 
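The pricing difference comes down to which volume figure the license meter reads. A Python sketch reproducing the review's 50 GB / 25%-metadata example — the numbers are the reviewer's illustration, not actual vendor rates:

```python
# Billing-model comparison from the review: one vendor meters raw ingest,
# the other meters ingest after parsing has added metadata. The 50 GB and
# 25% figures come from the review; no real vendor prices are used.

def billed_gb(raw_gb: float, metadata_ratio: float, charges_metadata: bool) -> float:
    """GB counted against the license under each billing model."""
    return raw_gb * (1 + metadata_ratio) if charges_metadata else raw_gb

raw = 50.0   # GB/day of Cisco ASA logs in the review's example
meta = 0.25  # parsing adds roughly 25% metadata

print(billed_gb(raw, meta, charges_metadata=False))  # metered pre-parse: 50.0
print(billed_gb(raw, meta, charges_metadata=True))   # metered post-parse: 62.5
```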

I have addressed this issue with Devo all the way to the CEO. They are not unaware. I talked to everyone, all the way up the chain of command. Then, our CEO has been having a direct call with their CEO. They have had a biweekly call for the last six weeks trying to get things moving forward in the right direction. Devo's new CEO is trying very hard to move things in the right direction, but customers need to be aware, "It's not there yet." They need to know what they are getting into.

Which other solutions did I evaluate?

We evaluated Graylog as well as QRadar as potential options. Neither of those options met our needs or use cases.

What other advice do I have?

No SIEM deployment is ever going to be easy. You want to attack it in order of priorities for what use cases matter to your business, not just log sources.

The Activeboards are easy to understand and flexible, though we are not using them as much as other people might be. I would suggest investing in developing and working with Activeboards. If you lack the internal SIEM expertise to develop correlation rules and content for Devo on your own, wait for a general availability release of SecOps to all customers before using this as a SIEM product.

I would rate this solution as a five out of 10.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Chief Infrastructure & Security Office at a financial services firm with 51-200 employees
Real User
Top 20
Collects logs from different systems, works extremely fast, and has a predictable cost model
Pros and Cons
  • "It is a very comprehensive solution for gathering data. It has got a lot of capabilities for collecting logs from different systems. Logs are notoriously difficult to collect because they come in all formats. LogPoint has a very sophisticated mechanism for you to be able to connect to or listen to a system, get the data, and parse it. Logs come in text formats that are not easily parseable because all logs are not the same, but with LogPoint, you can define a policy for collecting the data. You can create a parser very quickly to get the logs into a structured mechanism so that you can analyze them."
  • "The thing that makes it a little bit challenging is when you run into a situation where you have logs that are not easily parsable. If a log has a very specific structure, it is very easy to parse and create a parser for it, but if a log has a free form, meaning that it is of any length or it can change at any time, handling such a log is very challenging, not just in LogPoint but also in everything else. Everybody struggles with that scenario, and LogPoint is also in the same boat. One-third of logs are of free form or not of a specific length, and you can run into situations where it is almost impossible to parse the log, even if they try to help you. It is just the nature of the beast."

What is our primary use case?

We use it as a repository of most of the logs that are created within our office systems. It is mostly used for forensic purposes. If there is an investigation, we go look for the logs. We find those logs in LogPoint, and then we use them for further analysis.

How has it helped my organization?

We have close to 33 different sources of logs, and we were able to onboard most of them in less than three months. Its adoption is very quick, and once you have the logs in there, the ability to search for things is very good.

What is most valuable?

It is a very comprehensive solution for gathering data. It has got a lot of capabilities for collecting logs from different systems. Logs are notoriously difficult to collect because they come in all formats. LogPoint has a very sophisticated mechanism for you to be able to connect to or listen to a system, get the data, and parse it. Logs come in text formats that are not easily parseable because all logs are not the same, but with LogPoint, you can define a policy for collecting the data. You can create a parser very quickly to get the logs into a structured mechanism so that you can analyze them.

What needs improvement?

The thing that makes it a little bit challenging is when you run into a situation where you have logs that are not easily parsable. If a log has a very specific structure, it is very easy to parse and create a parser for it, but if a log has a free form, meaning that it is of any length or it can change at any time, handling such a log is very challenging, not just in LogPoint but also in everything else. Everybody struggles with that scenario, and LogPoint is also in the same boat. One-third of logs are of free form or not of a specific length, and you can run into situations where it is almost impossible to parse the log, even if they try to help you. It is just the nature of the beast.
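
The structured-versus-free-form distinction described above can be illustrated generically. This is plain Python regex with made-up log lines, not LogPoint's actual parser or policy syntax:

```python
import re

# A structured log with fixed fields parses with a single pattern.
STRUCTURED = re.compile(
    r"(?P<ts>\S+ \S+) (?P<host>\S+) (?P<app>\w+): (?P<msg>.*)"
)

line = "2023-05-01 12:00:01 fw01 sshd: Accepted password for admin"
m = STRUCTURED.match(line)
print(m.group("host"), m.group("app"))  # fw01 sshd

# A free-form log has no stable shape, so the same pattern fails,
# which is the scenario every SIEM (LogPoint included) struggles with.
free_form = "User admin did something unexpected somewhere"
print(STRUCTURED.match(free_form))  # None
```

A parser built this way works only while the log keeps its shape; once the format can change arbitrarily in length or order, there is no single pattern to write, which is the "nature of the beast" the reviewer describes.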

Its reporting could be significantly improved. They have very good reports, but the ability to create ad-hoc reports can be improved significantly.

For how long have I used the solution?

I have been using this solution for three years.

What do I think about the stability of the solution?

It has been stable, and I haven't had any issues with it.

What do I think about the scalability of the solution?

There are no issues there. However much free space I give it, it'll work well.

It is being used by only two people: me and another security engineer. We go and look at the logs. We are collecting most of the information from the firm through it. If we grow, we'll make it grow with us, but right now, we don't have any plans to expand its usage.

How are customer service and support?

Their support is good. If you call them for help, they'll give you help. They have a very good set of engineers to help you with onboarding or the setup process. You can consult them when you have a challenge or a question. They are very good with the setup and follow-up. What happens afterward is a whole different story, because if an issue has to be escalated internally, things can get difficult. So, their initial support is very good, but their advanced support is a little more challenging.

Which solution did I use previously and why did I switch?

I used a product called Logtrust, which is now called Devo. I switched because I had to get a consultant every time I had to do something in the system. It required a level of expertise. The system wasn't built for a mere human to use. It was very advanced, but it required consultancy in order to get it working. There are a lot of things that they claim to be simple, but at the end of the day, you have to have them do the work, and I don't like that. I want to be able to do the work myself. With LogPoint, I'm able to do most of the work myself.

How was the initial setup?

It is very simple. There is a virtual machine that you download, and this virtual machine has everything in it. There is nothing for you to really do. You just download and install it, and once you have the machine up and running, you're good to go.

The implementation took three months. I had a complete listing of my log sources, so I just went down the list. I started with the most important logs, such as DNS, DHCP, Active Directory, and then I went down from there. We have 33 sources being collected currently.

What about the implementation team?

I did it on my own. I also take care of its maintenance.

What was our ROI?

It is not easy to calculate ROI on such a solution. The ROI is in having the ability to find what you need in your logs quickly, and being confident that you're not going to lose your logs and can really search for things. It is the assurance that you can get that information when you need it. If you don't have it, you're in trouble; if you are compromised, you have a problem. It is hard to measure the cost of these things.

As compared to other systems, I'm getting a good value for the money. I'm not paying a variable cost. I have a pretty predictable cost model, and if I need to grow, it is all up to me for the resources that I put, not to them. That's a really good model, and I like it.

What's my experience with pricing, setup cost, and licensing?

It has a fixed price, which is what I like about LogPoint. I bought the system and paid for it, and I pay maintenance. It is not a consumption model. Most SIEMs or most of the log management systems are consumption-based, which means that you pay for how many logs you have in the system. That's a real problem because logs can grow very quickly in different circumstances, and when you have a variable price model, you never know what you're going to pay. Splunk is notoriously expensive for that reason. If you use Splunk or QRadar, it becomes expensive because there are not just the logs; you also have to parse the logs and create indexes. Those indexes can be very expensive in terms of space. Therefore, if they charge you by this space, you can end up paying a significant amount of money. It can be more than what you expect to pay. I like the fact that LogPoint has a fixed cost. I know what I'm going to pay on a yearly basis. I pay that, and I pay the maintenance, and I just make it work.
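
The budgeting difference between the two models can be sketched quickly. All of these numbers are hypothetical; the point is only that one cost tracks volume and the other does not:

```python
def consumption_cost(gb_per_day: float, price_per_gb: float,
                     days: int = 365) -> float:
    """Consumption pricing (Splunk/QRadar style): annual cost
    scales with ingested volume, including index overhead."""
    return gb_per_day * price_per_gb * days

def fixed_cost(annual_license: float, maintenance: float) -> float:
    """Fixed pricing (LogPoint style per this review): a
    predictable annual figure regardless of log volume."""
    return annual_license + maintenance

# Hypothetical figures: when log volume doubles, one budget
# doubles with it and the other does not move.
print(consumption_cost(20, 1.50))  # 10950.0
print(consumption_cost(40, 1.50))  # 21900.0
print(fixed_cost(15000, 3000))     # 18000, unchanged either way
```

This is why the reviewer calls variable pricing "a real problem": logs can grow quickly and unpredictably, so the consumption line item is the one you cannot forecast.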

Which other solutions did I evaluate?

I had Logtrust, and I looked at AlienVault, Splunk, and IBM QRadar. Splunk was too expensive, and QRadar was too complex. AlienVault was very good and very close to LogPoint. I almost went to AlienVault, but its cost turned out to be significantly higher than LogPoint, so I ended up going for LogPoint because it was a better cost proposition for me.

What other advice do I have?

It depends on what you're looking for. If you really want a full-blown SIEM with all the functionality and all the correlation analysis, you might be able to find products that have more sophisticated correlations, etc. If you just want to keep your logs and be able to find information quickly within your systems, LogPoint is more than capable. It is a good cost proposition, and it works extremely well and very fast.

I would rate it an eight out of 10. It is a good cost proposition. It is a good value. It has all the functionality for what I wanted, which is my log management. I'm not using a lot of the feature sets that are very advanced. I don't need them, so I can't judge it based on those, but for my needs, it is an eight for sure.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Global Security Manager at Chart Industries Inc
Video Review
Real User
Top 10
The solution reduced our investigation time from days to hours and assists in managing our workflows
Pros and Cons
  • "LogRhythm does a very good job of helping SOCs manage their workflows."
  • "One of the challenges of the SIEM for the LogRhythm 7 platform is the amount of time it takes to bring new log sources into the MDI."

What is our primary use case?

LogRhythm works within the core of our SOC. It's where our analysts work every day and where we do all of our investigatory work for security incidents.

It created our security posture. It is the central component of all of our security tools and it is the heartbeat of our SOC and our daily operations. It sets the tone for everything that we do.

How has it helped my organization?

This solution improves our organization daily. It saves us countless hours doing correlation work and reduces our investigatory process from days to hours. It routinely brings issues to the forefront using the AI engine and the use cases that we've built that need investigating. We constantly find new sources of logs to bring into the system to continue to make it better. 

LogRhythm does a very good job of helping SOCs manage their workflows. Our SOC is very young and we're not leveraging that feature yet. I've seen other companies' SOCs and watched them use the workflow features and it's incredibly well done. We're not mature enough yet to use it. 

For cybersecurity exposures, the one downside from LogRhythm's perspective is that it can only tell me about use cases that I've already defined. It cannot identify unknown cases at this time. However, we have just recently purchased the NDR solution and that does have this capability.

This solution is our principal mechanism for doing all investigatory work. When we get alerts from LogRhythm, we go back to the logs and trace those events to their source. This is how we shut down attacks.

What is most valuable?

One of the features that we use the most and find the most valuable includes the Web Console. My analysts really like the interface and the ability to build queries using point-and-click without having to write Query languages. My favorite feature is the actual Admin Console and the ability to monitor all aspects of the SIEM's health and the ability to build new use cases for my analysts to work with.

We also use the Machine Data Intelligence feature for classifying and contextualizing logs. It does struggle with unknown log sources and we've had some challenges over the years getting new log sources incorporated into the MDI Fabric.

The ability to authenticate successes and failures using MDI is incredibly easy. For the log sources that we bring into the SIEM, that work is pretty much done for us by the MDI. We don't have to do any additional work.

What needs improvement?

One of the challenges of the SIEM for the LogRhythm 7 platform is the amount of time it takes to bring new log sources into the MDI. We've waited a couple of years on some sources before they were incorporated. Writing our own custom MDIs is very challenging because it requires expert-level regex in order to write those rules and to make them efficient. Bringing in sources that aren't natively understood is where we've struggled the most.
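
To illustrate what authoring a custom parsing rule involves, here is a generic sketch in plain Python regex; this is not LogRhythm's actual MDI rule syntax, and the log format is invented. The "expert-level" part is writing patterns that are anchored and precise enough to extract fields efficiently and to fail cleanly on lines from unknown sources:

```python
import re

# An MDI-style rule maps raw text onto named fields; anchoring
# with ^...$ lets non-matching lines be rejected quickly.
RULE = re.compile(
    r"^(?P<ts>\d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]+) (?P<msg>.*)$"
)

parsed = RULE.match("14:03:22 WARN disk usage above threshold")
print(parsed.groupdict())
# {'ts': '14:03:22', 'sev': 'WARN', 'msg': 'disk usage above threshold'}

# A line from an unrecognized appliance fails cleanly rather
# than being mis-parsed into the wrong fields.
print(RULE.match("free-form text from an unknown appliance"))  # None
```

Multiply this by every field, every variant of a vendor's log format, and the need to keep backtracking under control, and the reviewer's point stands: writing efficient custom rules for natively unsupported sources is genuinely hard.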

For how long have I used the solution?

We have been using LogRhythm SIEM Solution for six years.

What do I think about the stability of the solution?

The stability of the solution, if it's deployed properly with the right resources, is rock solid. We have not experienced any performance issues. When we first bought the SIEM, we undersized it, and the performance was compromised. 

What do I think about the scalability of the solution?

This is a scalable solution. I've load-tested the SIEM at its current resource allocations up to four or five times as much as my daily ingest and the system handled it just fine.

How are customer service and support?

Their technical support is second to none and is one of the reasons why we continue to invest in LogRhythm and consider them a strategic partner. Their support team is really good at their jobs, and they always come through when we need them. I would rate their support a ten out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

LogRhythm is the first SIEM I have used and the only SIEM I have a lot of experience with. I've demoed other SIEMs and we've gone to market twice to look at whether LogRhythm was still the right decision. Both times we concluded that it was.

How was the initial setup?

The setup of the SIEM is complex in its own right. LogRhythm typically recommends professional services assistance to deploy the SIEM properly. My company did not purchase those professional services so I had to figure it out for myself. Their support structure was so good and they helped me so much that we were able to get it working without professional help. 

LogRhythm is an out-of-box solution and this was why we bought it. I had no experience with SIEM when we bought it six years ago. I needed something that I could plug into the network, get up and running and get value out of immediately.

What was our ROI?

We get a vast amount of ROI from this solution. We get way more out of it than we put into it. One of the metrics that I track pretty closely in our SOC is the mean time to detect. Prior to the SIEM, the mean time to detect was measured in weeks and it's now measured in minutes.

What's my experience with pricing, setup cost, and licensing?

LogRhythm's pricing and licensing are extremely competitive and it's one of the top three reasons we continue to invest in the platform. 

Which other solutions did I evaluate?

We looked at Securonix, Azure Sentinel, and IBM's QRoC (QRadar on Cloud). What really won us over with LogRhythm was the ease of use of the interface and the simplicity of the underlying architecture. It really lends itself to being a low-cost solution to own over time.

What other advice do I have?

The nice thing about LogRhythm is that they continue to innovate and come up with new capabilities like their NDR solution that we recently invested in. They continue to stay relevant. 

I would rate LogRhythm a nine out of ten. The on-prem version of the solution is fantastic and is the core of my SOC. It's our daily tool for all of our investigations. 

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Kishore Tiwari
Deputy General Manager - Information Security (Lead ISA) at an energy/utilities company with 1,001-5,000 employees
Real User
Top 10
Development from open sources is very valuable but a huge infrastructure is required
Pros and Cons
  • "The beauty of the solution is that you can develop infrastructure for a data lake using open sources that are separate from the licenses."
  • "The solution's command line should be simpler so that routine commands can be used."

What is our primary use case?

Our company is using the solution to build a next-generation security operations center that automates all administration and orchestration. It will include our entire MITRE framework, with use cases being mapped at the moment.

We were already developing UEBA and SOAR when we started using the solution. UEBA will track user movements to determine whether they are suspicious or should be mapped to threat activity.

The solution is a hybrid model. The hardware infrastructure and log collector are on-premises. We provide IP addresses that open a specific communication channel with the solution's cloud console, where our EPS data is contained. We administer the SIEM via the cloud portal and manage operations and log management on-premises.

What is most valuable?

The beauty of the solution is that you can develop infrastructure for a data lake using open sources that are separate from the licenses. You can use Ubuntu, CentOS, or any flavor of Linux to build your infrastructure. The solution installs a Docker with their licenses and script running on top of it. You can increase volume or build up servers and backend infrastructure at any time. Other products require you buy their proprietary-based log management system, forward the devices log to the SIEM, and pay for its storage. 

What needs improvement?

The solution's command line should be simpler so that routine commands can be used. The search configuration is a bit different from other OEM or SIEM solutions like ArcSight or QRadar, which are easy to search because they operate similarly to each other. The logic is there, and the solution supplies a pretty good explanation; DNIF spelled backwards is FIND, and you have to find commands whenever you want to search for something. It is like a highway that gets you to your destination by an alternate route people don't yet know about, one that Gartner or Forrester haven't studied yet. We were a bit nervous when getting familiar with the solution and wondered whether we could realize ROI, because the commands and ways of pulling data were new to us. We raised a case with the support team, and their professionals provided the needed support. The command line is user-friendly once you understand it. If you need to be productive immediately, you might want assistance from someone who is well-versed in using its key patterns to find things.

The ability to export larger files for threat hunting or analysis is needed. The correlation happens, but exporting a large number of files to extract them is not possible. For example, if I want to present raw data to management, I should be able to customize a date range in my query and download the files.

For how long have I used the solution?

I have been using the solution for two years.

What do I think about the stability of the solution?

From a product point of view, the solution is stable so I rate stability an eight out of ten. 

What do I think about the scalability of the solution?

The solution is very scalable so I rate scalability a ten out of ten. 

How are customer service and support?

The support center does a lot and provides support, but most of their team is new, so they have to seek assistance from senior staff. This sometimes happens even for basic queries, but it has improved over time.

I rate support a seven out of ten. 

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

We previously used ArcSight but were looking for a mature solution that could perform a variation of data discovery and threat intel discovery. 

How was the initial setup?

The solution requires a huge infrastructure so that can be tough. It is complicated to manage a large number of servers. Basically, you have to arrange 15 servers for some very limited EPS.

Configuration, deployment, and administration of each and every component on top of those servers is very easy. 

What about the implementation team?

We utilized DNIF professional services to deploy along with our team. The solution was new to us, so we opted for their services rather than going with a third party. It took three to four months for end-to-end deployment. 

We deployed in 2020 and, within a period of five months, had 30,000 users and 2,000 servers in our infrastructure. 

What's my experience with pricing, setup cost, and licensing?

The solution requires a huge infrastructure and that is costly. 

SIEM solutions always cost more so you have to determine if your budget can handle the cost to get to ROI. 

In the future, I would like the solution to reduce its infrastructure requirements. 

Which other solutions did I evaluate?

The solution was selected after a POC with a couple of vendors. Deciding factors were cost and the fit to our use cases. The techno-commercial aspect was the final deciding factor. 

What other advice do I have?

Before buying the solution, ask for an overview and a use-case session. Learn the infrastructure requirements and the EPS cost. The solution is a hybrid model, so budget for both on-premises needs and the cloud service. Ensure that you can sustain the cost of running a SIEM solution, because changing solutions is hard work.

If you need a parser to integrate existing technologies or a stack, be sure to tell your vendors before buying the solution. Bind them to the same timelines and agreements. We had a couple of lags during the POC stage that took DNIF a long time to resolve after implementation. Timelines published on the internet for TAC response are very generic so make sure they are customized as part of any agreement. 

In rating the solution, I have considered several factors. There are lots of improvements needed. The infrastructure specs are huge and require on-premises management. The solution should have a completely cloud-based option, or require only a lightweight infrastructure if it is managed as a service. There should be a two-way exchange where issues proactively flow to a dashboard where anyone can take action.

Overall, I rate the solution a seven out of ten. 

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: I am a real user, and this review is based on my own experience and opinions.