Buyer's Guide
Security Information and Event Management (SIEM)
June 2022

Read reviews of IBM QRadar alternatives and competitors

Jordan Mauriello - PeerSpot reviewer
SVP of Managed Security at CRITICALSTART
MSP
Top 10
Be cautious of metadata inclusion for log types in pricing. Having the ability to do real-time analytics drives down attacker dwell time.
Pros and Cons
  • "The ability to have high performance, high-speed search capability is incredibly important for us. When it comes to doing security analysis, you don't want to be doing is sitting around waiting to get data back while an attacker is sitting on a network, actively attacking it. You need to be able to answer questions quickly. If I see an indicator of attack, I need to be able to rapidly pivot and find data, then analyze it and find more data to answer more questions. You need to be able to do that quickly. If I'm sitting around just waiting to get my first response, then it ends up moving too slow to keep up with the attacker. Devo's speed and performance allows us to query in real-time and keep up with what is actually happening on the network, then respond effectively to events."
  • "There is room for improvement in the ability to parse different log types. I would go as far as to say the product is deficient in its ability to parse multiple, different log types, including logs from major vendors that are supported by competitors. Additionally, the time that it takes to turn around a supported parser for customers and common log source types, which are generally accepted standards in the industry, is not acceptable. This has impacted customer onboarding and customer relationships for us on multiple fronts."

What is our primary use case?

We use Devo as a SIEM solution for our customers to detect and respond to things happening in their environment. We are a service provider who uses Devo to provide services to our customers.

We integrate with Devo from an external source solution. We don't work exclusively inside of Devo; we mostly work in our own solution, pivoting into Devo and back out.

How has it helped my organization?

With over 400 days of hot data, we can query and look for patterns historically. We can pivot into past data and look for trends and analytics, without needing to have a change in overall performance nor restore data from cold or frozen data archives to get answers about things that may be long-term trends. Having 400 days of live data means that we can do analytics, both short-term and long-term, with high speed.

The integration of threat intelligence data absolutely provides context to an investigation. Threat intelligence integration provides great contextual data, which has been very important for us in our investigation process as well. The way that the data is integrated and accessible to us is very useful for security analysts. The ability to have the integration of large amounts of threat intelligence data and provide that context dynamically with real time correlation means that, as analysts, we are seeing events as they're happening in customer environments. We are getting the context of whether that is related to something that we're also watching from a threat intelligence perspective, which can help shape an investigation.
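
Devo's actual enrichment pipeline is not shown in this review; purely as an illustration of the idea described above, here is a minimal Python sketch. The indicator feed, field names, and severity handling are all invented for illustration.

```python
# Minimal sketch (not Devo's query or correlation syntax): enrich incoming
# events with threat-intelligence context at ingest time, so an analyst sees
# the match as the event arrives rather than after a separate lookup.
# The indicator set and field names below are invented for illustration.

THREAT_INTEL = {
    "203.0.113.45": {"source": "feed-A", "tag": "known C2 server"},
    "198.51.100.7": {"source": "feed-B", "tag": "credential-stuffing origin"},
}

def enrich_event(event: dict) -> dict:
    """Attach threat-intel context to an event when its source IP matches an IOC."""
    match = THREAT_INTEL.get(event.get("src_ip", ""))
    if match:
        event["threat_intel"] = match   # context the analyst sees immediately
        event["severity"] = "high"      # escalate for real-time correlation
    return event

print(enrich_event({"src_ip": "203.0.113.45", "user": "jdoe", "action": "login"}))
```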

What is most valuable?

The ability to have high performance, high-speed search capability is incredibly important for us. When it comes to doing security analysis, you don't want to be sitting around waiting to get data back while an attacker is sitting on a network, actively attacking it. You need to be able to answer questions quickly. If I see an indicator of attack, I need to be able to rapidly pivot and find data, then analyze it and find more data to answer more questions. You need to be able to do that quickly. If I'm sitting around just waiting to get my first response, then it ends up moving too slow to keep up with the attacker. Devo's speed and performance allows us to query in real-time and keep up with what is actually happening on the network, then respond effectively to events.

The solution’s real-time analytics of security-related data performs incredibly well. All SIEM solutions have struggled to be truly real-time, because events happen out on systems and the network before they ever reach the SIEM. However, when I look at its overall performance and correlation capabilities, and its ability to then analyze that data rapidly, the performance it has given us is exceptional.

It is incredibly important in security that the real-time analytics are immediately available for query after ingest. One of the most important things that we have to worry about is attacker dwell time, e.g., how long is an attacker allowed to sit on a system after it is compromised and discover more data, then compromise more systems on a network or expand what they currently have. For us, having the ability to do real-time analytics essentially drives down attacker dwell time because we're able to move quickly and respond more effectively. Therefore, we are able to stop the attacker sooner during the attack lifecycle and before it becomes a problem.

The solution speed is excellent for us, especially in regards to attacker dwell time and the speed that we're able to both discover and analyze data as well as respond to it. The fact that the solution is high performance from a query perspective is very important for us.

Another valuable feature is the detection capability: the ability to write high-quality detection rules that do correlation in an advanced manner and really work effectively for us. Sometimes, the correlation in certain engines can be hampered by performance, but it can also be affected by an inability to do certain types of queries or correlate certain types of data together. The flexibility and power of Devo has given us the ability to do better detection, so we have better detection capabilities overall.

The UI is very good. They have an implementation of CyberChef, which is very good for security analysts. It allows us to manipulate, transform, and enrich data for analytics in a very fast, effective manner. The query UI is something that most people who have worked with SIEM platforms will be very used to utilizing. It is very similar to things that they've seen before. Therefore, it's not going to take them a long time to learn their way around the platform.

The pieces of the Activeboards that are built into SecOps have been very good and helpful for us.

They have high performance and high-speed search as well as the ability to pivot quickly. These are the things that they do well.

What needs improvement?

There is room for improvement in the ability to parse different log types. I would go as far as to say the product is deficient in its ability to parse multiple, different log types, including logs from major vendors that are supported by competitors. Additionally, the time that it takes to turn around a supported parser for customers and common log source types, which are generally accepted standards in the industry, is not acceptable. This has impacted customer onboarding and customer relationships for us on multiple fronts.

I would like to see Devo rely more on the rules engine and see more of the flow, correlation, and rules engine capabilities make their way into the standardized product. This would allow a lot of those pieces to be part of SecOps, so we could do advanced JOIN rules and capabilities inside of SecOps without flow. That would be great functionality to add.
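
As a rough illustration of what a join-style correlation rule does, here is a minimal Python sketch; this is not Devo's flow or rule syntax, and the event shapes and threshold are invented.

```python
from collections import defaultdict

# Illustration only (not Devo rule syntax): a "join" correlation matches two
# event streams on a shared key, e.g. repeated failed logins followed by a
# successful login from the same source IP.
failed = [
    {"src_ip": "198.51.100.7", "user": "admin"},
    {"src_ip": "198.51.100.7", "user": "admin"},
    {"src_ip": "198.51.100.7", "user": "admin"},
]
succeeded = [{"src_ip": "198.51.100.7", "user": "admin"}]

failures_by_ip = defaultdict(int)
for event in failed:
    failures_by_ip[event["src_ip"]] += 1

for event in succeeded:
    if failures_by_ip[event["src_ip"]] >= 3:
        print(f"ALERT: possible brute-force success from {event['src_ip']}")
```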

Devo's pricing mechanism, whereby parsed data is charged after metadata is added to the event itself, has led to unexpected price increases for customers based on new parsers being built. Pricing has not been competitive (log source type by log source type) with other vendors in the SIEM space.

Their internal multi-tenant architecture has not mapped directly to ours the way that it was supposed to nor has it worked as advertised. That has created challenges for us. This is something they are still actively working on, but it is not actually released and working, and it was supposed to be released and working. We got early access to it in the very beginning of our relationship. Then, as we went to market with larger customers, they were not able to enable it for those customers because it was still early access. Unfortunately, it is still not generally available for them. As a result, we don't get to use it to help get improvements on multi-tenant architecture for us.

For how long have I used the solution?

I have been using the solution for about a year.

What do I think about the stability of the solution?

Stability has been a bit of a problem. Although we have not experienced any catastrophic outages within the platform, there have been numerous impacts to customers. This has caused a degradation of service over time, which has hurt customer value and the customer's perception of value, both for the platform and for our service as a service provider.

We have full-time security engineers who do maintenance work and upkeep for all our SIEM solutions. However, that may be a little different because we are a service provider. We're looking at multiple, large deployments, so that may not be the same thing that other people experience.

What do I think about the scalability of the solution?

We haven't run into any major scalability problems with the solution. It has continued to scale and perform well for query. The one scalability problem that we have encountered has to do with multi-tenancy at scale for solutions integrating SecOps. Devo is still working to bring to market these features to allow multi-tenancy for us in this area. As a result, we have had to implement our own security, correlation rules, and content. That has been a struggle at scale for us, in comparison to using quality built-in, vendor content for SecOps, which has not yet been delivered for us.

There are somewhere between 45 to 55 security analysts and security engineers who use it daily.

How are customer service and technical support?

Technical support for operational customers has been satisfactory. However, support during onboarding and implementation, including the need for professional services engagements to develop parsers for new log types and troubleshoot problems during onboarding, has been severely lacking. Often, tenant setup and support requests during onboarding have gone weeks and even months without resolution, and sometimes without reply, which has impacted customer relationships.

Which solution did I use previously and why did I switch?

While we continue to use Splunk as a vendor for the SIEM services that we provide, we have also added Devo as an additional vendor to provide services to customers. We have found similar experiences at both vendors from a support perspective. Although professional services skill level and availability might be better at Devo, the overall experience for onboarding and implementing a customer is still very challenging with both.

How was the initial setup?

The deployment was fairly straightforward. For how we did the setup, we were building an integration with our product, which is a little more complicated, but that's not what most people are going to be doing. 

We were building a full integration with our platform. So, we are writing code to integrate with the APIs.

Not including our coding work that we had to do on the integration side, our deployment took about six weeks.

What about the implementation team?

It was just us and Devo's team building the integration. Expertise was provided from Devo to help work through some things, which was absolutely excellent.

What was our ROI?

In incidents where we are using Devo for analysis, our mean time to remediation for SIEM is lower. We're able to query faster, find the data that we need, and access it, then respond quicker. There is some ROI on query speed.

What's my experience with pricing, setup cost, and licensing?

Based on adaptations that they have made, where they are now essentially charging for metadata around the events that we collect, that extra charge wipes out any price savings compared to Splunk or Azure Sentinel.

Before, the cost was just the data itself, but they have adjusted it so that they even charge when we parse the data and add field names to incoming events. For example, say we get a username: someone tries to log into Windows, the log says that username tried to log in, and the parser labels that value with a field name. They will charge us for the space that label takes up. On top of that, this has caused us to lose all of the price savings we were seeing before. In fact, in some cases it is more expensive than the competitors as a result. Charging for metadata on parsed fields has led to significant, unexpected pricing for customers.

Be cautious of metadata inclusion for log types in pricing, as there are some "gotchas" there. Other vendors, like Splunk, would not charge for this. Take Windows logs: they have a lot of blank space in them, and Splunk essentially compresses that. After they compress and label it, that is the parsed data you see, but they don't charge you for the white space or the metadata, whereas Devo does. The point we want to make is: pay attention to ingest charges for new data types, because you will be charged for metadata as part of the overall license usage.

There are charges for metadata, because Devo counts data after parsing and enrichment and charges it against license usage, whereas other vendors charge the license before parsing and enrichment; you are measured on the raw, compressed data first, then they parse and enrich it, and you don't get charged for that part. That difference is hitting some of our customers in a negative way, especially when there is an unparsed log type that Devo doesn't support. One that is not supported right now is Cisco ASA, which should be supported since it is a major vendor. Say a customer is currently bringing 50 gigabytes of Cisco ASA logs into Splunk, without considering that parsing adds roughly 25% metadata. When they shift it over to Devo, that becomes a 25% increase: they are going to see 62.5 gigs, because they are now charged for metadata they weren't being charged for in Splunk. Even though the price per gig is lower with Devo, because you are billed for more gigs in the end you end up net neutral, or sometimes saving a little if there is not a lot of metadata, and sometimes actually losing money on events that have a ton of metadata, because the volume can increase by as much as 50%.
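
To make the arithmetic in that example concrete, here is a small sketch. The 50 GB and 25% figures come from the example above; the per-gig prices are placeholders, not actual vendor rates.

```python
# Back-of-the-envelope comparison of the two charging models described above.
# The 50 GB / 25% figures are from the example; the per-GB prices are placeholders.

raw_gb_per_day = 50.0        # Cisco ASA volume as measured (raw) in Splunk
metadata_overhead = 0.25     # parsing/enrichment adds roughly 25% metadata

billed_gb_raw_model = raw_gb_per_day                                # billed before parsing
billed_gb_parsed_model = raw_gb_per_day * (1 + metadata_overhead)   # billed after parsing
print(billed_gb_parsed_model)                                       # 62.5 GB/day instead of 50

# Even a lower per-gig price can be offset by the larger billed volume:
price_raw_model = 2.00       # hypothetical $/GB when billed on raw data
price_parsed_model = 1.70    # hypothetical lower $/GB when billed on parsed data
print(billed_gb_raw_model * price_raw_model)          # 100.0 per day
print(billed_gb_parsed_model * price_parsed_model)    # 106.25 per day -> net loss
```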

I have addressed this issue with Devo all the way to the CEO. They are not unaware. I talked to everyone, all the way up the chain of command. Then, our CEO has been having a direct call with their CEO. They have had a biweekly call for the last six weeks trying to get things moving forward in the right direction. Devo's new CEO is trying very hard to move things in the right direction, but customers need to be aware, "It's not there yet." They need to know what they are getting into.

Which other solutions did I evaluate?

We evaluated Graylog as well as QRadar as potential options. Neither of those options met our needs or use cases.

What other advice do I have?

No SIEM deployment is ever going to be easy. You want to attack it in order of priorities for what use cases matter to your business, not just log sources.

The Activeboards are easy to understand and flexible, although we are not using them quite as much as other people may be. I would suggest investing in developing and working with Activeboards. If you lack the internal SIEM expertise to develop correlation rules and content for Devo on your own, wait for a general availability release of SecOps to all customers before using this as a SIEM product.

I would rate this solution as a five out of 10.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Mark Lauteren - PeerSpot reviewer
Chief Information Officer at ECRMC
Real User
Top 5
Gives us a good quality view of what's going on in our environment
Pros and Cons
  • "There are a host of things that are most valuable. Obviously monitoring our environment and reporting out different events is important. They perform a suite of services. They monitor all of our servers, all of our key infrastructure, like our DNS, our switches, all that stuff. They aggregate and correlate that quarterly. They'll tell us if we're getting a lot of login failures and something is going on or if something's weird."
  • "Communication is always something that can be improved, but I feel that any time we've had a communication issue, it's quickly addressed when we bring those up at the monthly meetings. Usually, it's an individual that wasn't clear in the communication, it's not the process per se. You always have to be able to segregate if the process didn't work or an individual either didn't say the right thing or my people didn't understand what they were being told."

What is our primary use case?

EventTracker analyzes all of the different types of security events, it both aggregates and correlates. They send us a daily report of things like servers that aren't responding that normally respond and any kind of events that they see from the day before. If there is a serious perceived security event, they will call. I have two folks at InfoSec, so they will call directly and say, "Hey, we're seeing something here." Then between the two of them, they'll try and identify whether it is a true event or not, and then monthly, we sit down with them on a call where we talk about what's going on and if there are opportunities for improvement.

If there was an event that we felt they shouldn't have escalated to us then we'll let them know and we'll talk about how it could have been avoided or vice versa or if there was an event that we didn't get escalated but it should have been. We don't get a lot of those, mostly it's about, "Hey, we're adding this new device, we want to make sure it's on the list, so it's getting monitored", and things like that.

How has it helped my organization?

EventTracker enables us to keep on top of our work. We're a hospital, so we're 24/7. We don't have enough staff to do that, so they're able to monitor things off-hours, and then even during hours I get two people from InfoSec. They can't be sitting there staring at a screen all the time, they have to go out and do other things and attend meetings, etc. and so they're able to rely on the tool to correlate and then notify them either via pager or phone call if something comes up that is deemed to be important enough to be notified. That's huge for us because we don't have the budget from a staffing standpoint to have people on-site 24/7.

Back in the day, I used to work for Intel and we had a whole room full of people who just sat there and stared at the screen for events. It was in their data center group. We don't have that kind of staff. The only people half staring at a screen all day long are the call center, and they're the ones who take tickets and talk to end-users but they don't have the time to sit there and monitor the event logs and all of the other things. That's the value the tool gives us. I can have people doing real work and then things that need to be escalated are escalated. It saves us roughly two full-time employees. It cuts my team in half. 

EventTracker also helps us with compliance mandates. The tool helps us document that we're following best practice, that we're identifying issues and tracking them, and that we have logs of what issues were identified. That allows us to be able to show a lot of the documentation that we are really doing best practice. I just don't physically have enough team members to do that. This allows me to be able to provide that 24/7.

It's not just a tool, it's a service. The secret sauce is not the tool; I could buy a tool from a dozen vendors. A tool can aggregate and correlate all of these events and send something to a screen. But if I still have to have somebody sitting there staring at a screen all day long, that's valuable but not as valuable as a provider with a team, essentially a SOC, that is aware of what's going on in the world and says, "I'm seeing this in seven places, including El Centro; let's get hold of El Centro so they can start taking action on it."

There's nobody that's dedicated to internal incident management. I have two information security folks and they do everything from internal incident management to designing new implementations, to reviews of existing annual information, and security audits. They do all of that, but they don't sit there all day long, staring at a screen, looking at incidents, and trying to figure out what to do. That's the value that we get out of it. That's the extra value.

What is most valuable?

Monitoring our environment and reporting out different events is important. They perform a suite of services. They monitor all of our servers, all of our key infrastructure, like our DNS, our switches, all that stuff. They aggregate and correlate that quarterly. They'll tell us if we're getting a lot of login failures and something is going on or if something's weird.

I like the dashboard. Our security folks look at it all the time. They have it running, they have a big screen monitor in one of their offices and it's up all the time.

I don't use the UI very much but from what I've been told by the security team, it's very easy to use. Compared to other products, the team found it pretty easy to use. We've got the dashboards published on a large screen TV so they can look at it all the time, and then they typically have it on their desk. It is also available on smartphones.

We import log data into EventTracker. It feeds the overall picture of giving us a good quality view of what's going on in our environment.

What needs improvement?

Communication is always something that can be improved, but I feel that any time we've had a communication issue, it's quickly addressed when we bring those up at the monthly meetings. Usually, it's an individual that wasn't clear in the communication, it's not the process per se. You always have to be able to segregate if the process didn't work or an individual either didn't say the right thing or my people didn't understand what they were being told. So far, I have not understood or heard of any issues that were more process or tool-related, it's individual-related. 

The industry is changing. The landscape changes all the time, and they seem to do a pretty good job of keeping up with that. That's a challenge in information security: the target doesn't just move a few inches one way or the other, it moves from room to room. You're chasing a target that is constantly moving. They boil it down to "here's what we think is going on," rather than our people having to do that. If all my people did was keep track of what's going on in the industry, that's all they'd do, because I only have two people.

For how long have I used the solution?

I have been using EventTracker for the past year, since I joined my company, but it has been in place at the company for several years.

What do I think about the stability of the solution?

It is as stable as a rock. I have not heard of a single outage on it.

What do I think about the scalability of the solution?

We haven't scaled it out beyond what we had. They've done a pretty good job of implementing it. Since I've been here, we've primarily added a virtual server here and there, but we have not done a lot of scaling out. There hasn't been any discussion about what limitations there would be.

It monitors all of our infrastructure, all of our servers. It's being very extensively used. As we grow those, we're getting ready to open a new building early next year, all of the equipment that goes into that building will be added to it.

We fully implemented it so I don't know that there's a lot other than organic growth that would need to be done.

How are customer service and technical support?

My InfoSec team talks to support occasionally. There have been a few cases where they saw something they didn't quite understand, so they would call and ask for information, but it's been few and far between. I have not heard of any issues with support. I heard that their experience with them has been good. 

Which solution did I use previously and why did I switch?

At a previous company, we used a different tool. It was a much more encompassing tool that does a bunch of different event monitoring, correlation, and aggregation. It was a management suite that did things like backups as well. I know when we implemented it at Intel, it was atrocious. The problem was the process. We had tens of thousands of servers and we implemented the tool and we turned everything on. Events scrolled by the screen so fast, you couldn't even see them. We had to say, "Well, wait a minute. Let's dial this back a little bit." They also didn't do a good job of aggregating or correlating. 

The main difference between that tool and EventTracker is the ease of use. That tool was all CLI based; everything was command-line based. The syntax you had to use with that CLI was very challenging and very specific. You might think you were doing the right thing, but if something didn't work, it wouldn't warn you that you hadn't done it right.

How was the initial setup?

I have not been told that there were any issues when it was implemented. We have not done any major upgrades since I've been here. We've done incremental patch-type things but I don't know of any issues.

I did hear it was relatively labor-intensive, but that's because of all of the processes around the communication, like what gets communicated and what doesn't. That's to be expected anytime you're doing a lot of workflow work, that takes time.

There's daily maintenance in that they're responding to events or they're working on the tool. There is very little done as far as trying to make changes to the tool itself. Our information security team does respond to events. It's a chunk of their time. We don't have to spend a lot of time at all tweaking the tool. I wouldn't say we spend even an hour a day.

I have two people in InfoSec and a couple of people on my network team who review it. My help desk people will review it, but they don't really use it per se; they'll see events and that's it. Most of that really goes to the information security team.

What was our ROI?

Our ROI is $160,000 a year before overhead; adding in overhead of 30 to 40% for benefits and everything else, it's easily over $200,000 a year.
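
A quick back-of-the-envelope check of those numbers, assuming (per the earlier comment about saving roughly two full-time employees) the $160,000 is the pre-overhead staffing cost avoided:

```python
# Rough check of the ROI figure above; the FTE interpretation is an assumption.
base_savings = 160_000            # roughly two FTEs' salaries avoided, before overhead
for overhead in (0.30, 0.40):     # benefits and other overhead
    print(round(base_savings * (1 + overhead)))   # 208000 and 224000 -> "easily over $200,000"
```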

What's my experience with pricing, setup cost, and licensing?

They've been very fair. I think that we've had to push back a little bit here and there on pricing. 

What other advice do I have?

The biggest lesson I have learned is that the outsourcing of this service has a dramatic impact on the organization. We can't just keep throwing bodies at it internally, we have to leverage somebody else's knowledge.

Some people don't trust outsourcing. I'm not a big outsourcing guy, but I really don't treat them as an outsourcer; I treat them more as a partner. You're going to have to do this one way or another, or you're going to get nailed at some point. That's just the way it is. If you're not following these things, you're going to get nailed. If you trust them and realize they're doing things that you should be doing or are already doing, you're going to save a lot of money. It won't just save money; it will be cost-effective.

I would rate EventTracker a ten out of ten. 

Having dealt with a lot of vendors and their sales teams, they are probably one of the more low-key. They're not out there constantly trying to sell me stuff. I don't know if it's because we have everything so there's nothing left to sell, but they've been very easy to deal with. Their leadership and their sales organization have been very easy to deal with.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Chief Infrastructure & Security Office at a financial services firm with 51-200 employees
Real User
Top 5Leaderboard
Collects logs from different systems, works extremely fast, and has a predictable cost model
Pros and Cons
  • "It is a very comprehensive solution for gathering data. It has got a lot of capabilities for collecting logs from different systems. Logs are notoriously difficult to collect because they come in all formats. LogPoint has a very sophisticated mechanism for you to be able to connect to or listen to a system, get the data, and parse it. Logs come in text formats that are not easily parseable because all logs are not the same, but with LogPoint, you can define a policy for collecting the data. You can create a parser very quickly to get the logs into a structured mechanism so that you can analyze them."
  • "The thing that makes it a little bit challenging is when you run into a situation where you have logs that are not easily parsable. If a log has a very specific structure, it is very easy to parse and create a parser for it, but if a log has a free form, meaning that it is of any length or it can change at any time, handling such a log is very challenging, not just in LogPoint but also in everything else. Everybody struggles with that scenario, and LogPoint is also in the same boat. One-third of logs are of free form or not of a specific length, and you can run into situations where it is almost impossible to parse the log, even if they try to help you. It is just the nature of the beast."

What is our primary use case?

We use it as a repository of most of the logs that are created within our office systems. It is mostly used for forensic purposes. If there is an investigation, we go look for the logs. We find those logs in LogPoint, and then we use them for further analysis.

How has it helped my organization?

We have close to 33 different sources of logs, and we were able to onboard most of them in less than three months. Its adoption is very quick, and once you have the logs in there, the ability to search for things is very good.

What is most valuable?

It is a very comprehensive solution for gathering data. It has got a lot of capabilities for collecting logs from different systems. Logs are notoriously difficult to collect because they come in all formats. LogPoint has a very sophisticated mechanism for you to be able to connect to or listen to a system, get the data, and parse it. Logs come in text formats that are not easily parseable because all logs are not the same, but with LogPoint, you can define a policy for collecting the data. You can create a parser very quickly to get the logs into a structured mechanism so that you can analyze them.
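
LogPoint's own parser definitions are not reproduced here; purely to illustrate the idea of turning a text log line into structured fields, here is a minimal Python sketch using a regular expression.

```python
import re

# Minimal illustration (not LogPoint's parser syntax): a log line with a fixed
# structure maps cleanly onto named fields, which is what makes it easy to parse.
SSH_FAILURE = re.compile(
    r"(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) sshd\[\d+\]: "
    r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+) port (?P<port>\d+)"
)

line = "Mar  3 10:15:42 web01 sshd[4321]: Failed password for admin from 203.0.113.9 port 52144"
match = SSH_FAILURE.match(line)
if match:
    print(match.groupdict())   # structured fields ready for search and correlation
```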

What needs improvement?

The thing that makes it a little bit challenging is when you run into a situation where you have logs that are not easily parsable. If a log has a very specific structure, it is very easy to parse and create a parser for it, but if a log has a free form, meaning that it is of any length or it can change at any time, handling such a log is very challenging, not just in LogPoint but also in everything else. Everybody struggles with that scenario, and LogPoint is also in the same boat. One-third of logs are of free form or not of a specific length, and you can run into situations where it is almost impossible to parse the log, even if they try to help you. It is just the nature of the beast.

Its reporting could be significantly improved. They have very good reports, but the ability to create ad-hoc reports can be improved significantly.

For how long have I used the solution?

I have been using this solution for three years.

What do I think about the stability of the solution?

It has been stable, and I haven't had any issues with it.

What do I think about the scalability of the solution?

There are no issues there. However much free space I give it, it'll work well.

It is being used by only two people: me and another security engineer. We go and look at the logs. We are collecting most of the information from the firm through this. If we were to grow, we'll make it grow with us, but right now, we don't have any plans to expand its usage.

How are customer service and support?

Their support is good. If you call them for help, they'll give you help. They have a very good set of engineers to help you with onboarding or the setup process. You can consult them when you have a challenge or a question. They are very good with the setup and follow-up. What happens afterward is a whole different story because if you have to escalate internally, you can get in trouble. So, their initial support is very good, but their advanced support is a little more challenging.

Which solution did I use previously and why did I switch?

I used a product called Logtrust, which is now called Devo. I switched because I had to get a consultant every time I had to do something in the system. It required a level of expertise. The system wasn't built for a mere human to use. It was very advanced, but it required consultancy in order to get it working. There are a lot of things that they claim to be simple, but at the end of the day, you have to have them do the work, and I don't like that. I want to be able to do the work myself. With LogPoint, I'm able to do most of the work myself.

How was the initial setup?

It is very simple. There is a virtual machine that you download, and this virtual machine has everything in it. There is nothing for you to really do. You just download and install it, and once you have the machine up and running, you're good to go.

The implementation took three months. I had a complete listing of my log sources, so I just went down the list. I started with the most important logs, such as DNS, DHCP, Active Directory, and then I went down from there. We have 33 sources being collected currently.

What about the implementation team?

I did it on my own. I also take care of its maintenance.

What was our ROI?

It is not easy to calculate ROI on such a solution. The ROI is in having the ability to find what you need in your logs quickly and being confident that you're not going to lose your logs and that you can really search for things. It is the assurance that you can get that information when you need it. If you don't have it, you're in trouble. If you are compromised, then you have a problem. It is hard to measure the cost of these things.

As compared to other systems, I'm getting a good value for the money. I'm not paying a variable cost. I have a pretty predictable cost model, and if I need to grow, it is all up to me for the resources that I put, not to them. That's a really good model, and I like it.

What's my experience with pricing, setup cost, and licensing?

It has a fixed price, which is what I like about LogPoint. I bought the system and paid for it, and I pay maintenance. It is not a consumption model. Most SIEMs or most of the log management systems are consumption-based, which means that you pay for how many logs you have in the system. That's a real problem because logs can grow very quickly in different circumstances, and when you have a variable price model, you never know what you're going to pay. Splunk is notoriously expensive for that reason. If you use Splunk or QRadar, it becomes expensive because there are not just the logs; you also have to parse the logs and create indexes. Those indexes can be very expensive in terms of space. Therefore, if they charge you by this space, you can end up paying a significant amount of money. It can be more than what you expect to pay. I like the fact that LogPoint has a fixed cost. I know what I'm going to pay on a yearly basis. I pay that, and I pay the maintenance, and I just make it work.
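
A simple way to see why a consumption-based model is hard to predict; every number here is a placeholder, not an actual vendor price.

```python
# Placeholder numbers only: a fixed license versus a per-GB consumption model
# when log volume grows unexpectedly during the year.
fixed_annual_cost = 40_000        # hypothetical fixed license plus maintenance
price_per_gb = 3.0                # hypothetical consumption rate

for gb_per_day in (20, 40, 80):   # e.g. volume doubles twice over the year
    annual_consumption_cost = gb_per_day * 365 * price_per_gb
    print(gb_per_day, round(annual_consumption_cost))
# 20 GB/day -> 21900  (cheaper than the fixed price)
# 40 GB/day -> 43800  (already past the fixed price)
# 80 GB/day -> 87600  (more than double the fixed price)
```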

Which other solutions did I evaluate?

I had Logtrust, and I looked at AlienVault, Splunk, and IBM QRadar. Splunk was too expensive, and QRadar was too complex. AlienVault was very good and very close to LogPoint. I almost went to AlienVault, but its cost turned out to be significantly higher than LogPoint, so I ended up going for LogPoint because it was a better cost proposition for me.

What other advice do I have?

It depends on what you're looking for. If you really want a full-blown SIEM with all the functionality and all the correlation analysis, you might be able to find products that have more sophisticated correlations, etc. If you just want to keep your logs and be able to find information quickly within your systems, LogPoint is more than capable. It is a good cost proposition, and it works extremely well and very fast.

I would rate it an eight out of 10. It is a good cost proposition. It is a good value. It has all the functionality for what I wanted, which is my log management. I'm not using a lot of the feature sets that are very advanced. I don't need them, so I can't judge it based on those, but for my needs, it is an eight for sure.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Information Security Officer, Network Analyst at a university with 1,001-5,000 employees
Real User
Top 20
It puts things together and provides the evidence and has good automation and integration capabilities
Pros and Cons
  • "Automations are very valuable. It provides the ability to automate some of our small use cases. The ability to integrate with other products that use an API is also very useful. LogRhythm has a plugin for it that we can connect and start to move down towards the path of a single pane of glass instead of having multiple or different tools."
  • "Their ticketing system for managing cases can be improved. They can either do that or adopt some of the open-source ticket systems into theirs. The current system works and gets the job done, but it is very bare-bones and basic. There are some things that could be improved there. They should also bring in more threat intelligence into the product and also probably start to look into the integration of more cloud or SAS products for ingesting logs. They're doing the work, but with the explosion of COVID, a lot of businesses have started to move towards more cloud applications or SAS applications. There is a whole diverse suite of SAS products out there, which is a challenge for them and I get it. They seem to be focusing on the big ones, but it'll be nice to be able to, for example, pull in Microsoft logs from Office 365. They are working towards a better way of doing that, and they have a product in the pipeline to pull logs in from other SAS applications. The biggest thing for them is going to be moving away from a Windows Server infrastructure into a straight-up Linux, which is more stable in my eyes. For the backend, they can maybe move into more of an up-to-date Elastic search engine and use less of Microsoft products."

What is our primary use case?

We use it for log ingestion and monitoring activity in our environment.

How has it helped my organization?

It is a simpler system than what we had before. We had IBM QRadar, which used to give us everything, and we had to dig through, figure out, and piece it all together. LogRhythm lights up when an event occurs. As opposed to just giving us everything, it will piece things together for you and let you know that you probably should look at this. It also provides the evidence. 

It is easy to find what you're looking for. It is not like a needle in the haystack like QRadar was. It is not a mystery why something popped or why you're being alerted. It provides you the details or the evidence as to why it alerted or alarmed on something, making qualifying or investigations a little bit quicker and also allowing us to close down on remediation times.

What is most valuable?

Automations are very valuable. It provides the ability to automate some of our small use cases. 

The ability to integrate with other products that use an API is also very useful. LogRhythm has a plugin for it that we can connect and start to move down towards the path of a single pane of glass instead of having multiple or different tools.

What needs improvement?

Their ticketing system for managing cases can be improved. They can either do that or adopt some of the open-source ticket systems into theirs. The current system works and gets the job done, but it is very bare-bones and basic. There are some things that could be improved there. 

They should also bring more threat intelligence into the product and probably start looking into the integration of more cloud or SaaS products for ingesting logs. They're doing the work, but with the explosion of COVID, a lot of businesses have started to move towards more cloud or SaaS applications. There is a whole diverse suite of SaaS products out there, which is a challenge for them, and I get it. They seem to be focusing on the big ones, but it would be nice to be able to, for example, pull in Microsoft logs from Office 365. They are working towards a better way of doing that, and they have a product in the pipeline to pull logs in from other SaaS applications.

The biggest thing for them is going to be moving away from a Windows Server infrastructure into a straight-up Linux, which is more stable in my eyes. For the backend, they can maybe move into more of an up-to-date Elastic search engine and use less of Microsoft products.

For how long have I used the solution?

I have been using this solution for three years.

What do I think about the stability of the solution?

Bugs are there. We've encountered quite a few, but support is pretty quick at picking up and working with us through those and then escalating through their different peers until we get a solution. Now, the bugs are becoming less and less. Initially, they were rolling out features pretty quickly, and maybe some use cases weren't considered. We ran into those bugs because it was a unique use case.

What do I think about the scalability of the solution?

It is easy to scale. We run different appliances, and each appliance handles a different piece of the function, so scalability is not a problem for us. We started off at, say, 10,000 logs per second (MPS), and then we quickly upgraded. Now, we're sitting at a cool 15,000. There is no need to upgrade hardware or anything; you just update the license. That is it.

We have multiple users in there. We have a security team, operations teams, server team, and network team for operations. We also have our research team, HBC team, and support desk staff. We have security teams from other universities in the States. We're sitting at a cool 50 users.

How are customer service and technical support?

Their technical support is good. They are pretty quick at working with us. I would give them an eight out of ten. I don't know what they see on their end when a customer calls in and whether they are able to see previous tickets. It always feels like you're starting fresh every time. They could maybe improve on that end.

Which solution did I use previously and why did I switch?

We had IBM QRadar for what seemed to be almost a decade. So, we just needed something different. There was a loss of knowledge transfer, as you can imagine, over a decade with different people coming in and out of security teams, and the transfer of knowledge was very limited. At the time I got on board, I had to figure out how to use it and how to maintain it and keep it going. We had some difficulties or challenges with IBM in getting a grasp on how we can keep getting support. It was a challenge just figuring out who our account rep was. After I figured that out, it was somewhat smooth sailing, and then we just decided it was time for something different, just a break-off because products change in ten years. You can either stay with it and deal with issues, or you do a break-off and get what's best for the organization.

How was the initial setup?

It was complex simply because we had different products. 

What about the implementation team?

We did have professional services to help us, which made the installation a little bit smoother. Onboarding of logs and having somebody with whom you can bounce ideas and who can go find an answer for you if they didn't have one readily available made the transition from one product to the other pretty straightforward.

What's my experience with pricing, setup cost, and licensing?

We did a five-year agreement. We pay close to a quarter of a million dollars for our solution.

What other advice do I have?

I would definitely advise giving it a look. If you're able to deal with it in your environment and just give it a chance, it'll grow on you. It is not Splunk, but it's getting there. They're gaining visibility with other vendors. The integration with third parties is starting to light up a little bit for them, unlike IBM QRadar that has already created that bond with third parties to bring in their services into the product. LogRhythm is definitely getting there, and it is a quick way to leverage in-house talent. So, if you want to do automation and you have someone who is good at Python scripting or PowerShell, you can easily build something in-house to automate some of those use cases that you may want to do. 
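
The reviewer mentions building automations in-house with Python or PowerShell scripting; the sketch below shows the general shape of such a script, but the URLs, headers, and field names are invented for illustration and are not LogRhythm's actual API.

```python
import requests  # third-party: pip install requests

# Hypothetical sketch of an in-house automation: poll a SIEM's REST API for new
# alarms and open tickets for a noisy, well-understood rule automatically.
# Endpoints, headers, and field names are made up; this is not LogRhythm's API.
SIEM_API = "https://siem.example.internal/api/alarms"
TICKET_API = "https://tickets.example.internal/api/issues"
HEADERS = {"Authorization": "Bearer <token>"}

def poll_and_ticket():
    alarms = requests.get(SIEM_API, headers=HEADERS, timeout=30).json()
    for alarm in alarms:
        if alarm.get("status") == "new" and alarm.get("rule") == "Repeated auth failures":
            requests.post(
                TICKET_API,
                headers=HEADERS,
                json={"title": f"SIEM alarm {alarm['id']}", "body": alarm.get("summary", "")},
                timeout=30,
            )

if __name__ == "__main__":
    poll_and_ticket()
```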

I would rate LogRhythm NextGen SIEM an eight out of ten. 

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Muhammad Junaid Raza - PeerSpot reviewer
Sr. Security Engineer at Ebryx
Real User
Top 20
Because it is a cloud-based deployment, we don't need to worry about hardware infrastructure
Pros and Cons
  • "Azure Application Gateway makes things a lot easier. You can create dashboards, alert rules, hunting and custom queries, and functions with it."
  • "There are certain delays. For example, if an alert has been rated on Microsoft Defender for Endpoint, it might take up to an hour for that alert to reach Sentinel. This should ideally take no more than one or two seconds."

What is our primary use case?

We work as a managed security services provider (MSSP). We have different clients who have their own security team. 

One client I worked with recently had a security team of three people, and they hired us for 24/7 analysis and monitoring. For that engagement, I worked solely on building out this product, and then eight to nine people do the 24/7 monitoring and analysis.

Sentinel is a full-fledged SIEM and SOAR solution. It is made to enhance your security posture and entirely centered around enhancing security. Every feature that is built into Azure Sentinel is for enhancing security posture.

How has it helped my organization?

It has increased our security posture a lot because there are a lot of services natively integrated to Azure Sentinel from Microsoft, e.g., Microsoft Defender for Endpoint and Defender for Office 365. 

From an analyst's point of view, we have created a lot of automation. This has affected the productivity of analysts because we have automated a lot of tasks that we used to do manually. From an end user's perspective, they don't even notice most of the time because most of our end users are mostly non-technical. They don't feel the difference. It is all about the security and operations teams who have felt the difference after moving from LogRhythm to Azure Sentinel.

What is most valuable?

It is cloud-based, so there isn't an accessibility issue. You don't have to worry about dialing into a VPN to access it, as you would with an on-prem solution; access security rests entirely on Microsoft's and Azure's sign-in and login processes.

Because it is a cloud-based deployment, we don't need to worry about hardware infrastructure. That is taken care of by Microsoft.

Azure Application Gateway makes things a lot easier. You can create dashboards, alert rules, hunting and custom queries, and functions with it.

Its integration capabilities are great. We have integrated everything from on-prem to the cloud.

What needs improvement?

There are certain delays. For example, if an alert has been raised in Microsoft Defender for Endpoint, it might take up to an hour for that alert to reach Sentinel. This should ideally take no more than one or two seconds.

There are a couple of delays with the service-to-service integration with Azure Sentinel as well as the tracking point.

For how long have I used the solution?

I have been using it for 14 to 15 months.

What do I think about the stability of the solution?

Azure Sentinel is pretty stable. Sometimes, the agents installed on endpoints go down for a bit. Also, we have faced a lot of issues with its connectors in particular. However, the platform itself is highly stable, and there have been no issues with that.

For operations, one to two people are actively using the solution. For analysis, there are eight to 10 people who are actively using it.

What do I think about the scalability of the solution?

Sentinel is scalable. If you want, you can hook up a load-balanced security connector. So, there are no issues with scalability.

We have coverage for around 60% to 70% of our environment. While this is not an ideal state, it has the capability to go to an ideal state, if needed.

How are customer service and support?

I have worked with Azure Sentinel for four clients. With only one of those clients, the support was great. For the last three clients, there were a lot of delays. For example, the issues that could have been resolved within one or two hours did not get resolved for a month or two. So, it depends on your support plan. It depends on the networking connections that you have with Microsoft. If you are on your own with a lower priority plan, it will take a lot of time to resolve minor issues. Therefore, Microsoft support is not that great. They are highly understaffed. I would rate them as six or seven out of 10.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

We had a full-fledged SIEM, LogRhythm, already working, but we wanted to migrate towards something that was cloud-based and more inclusive of all technologies. So, we shifted to Azure Sentinel and migrated all our log sources onto Azure Sentinel. We also added a lot of log sources besides those that were reporting to LogRhythm.

We have used a lot of SIEMs. We have used Wazuh, QRadar, Rapid7's SIEM, EventLog Analyzer (ELA), and Splunk. We used Wazuh with ELK Stack, then we shifted to Azure Sentinel because of client requirements.

How was the initial setup?

The initial setup was really straightforward because I had already worked with FireEye Security Orchestrator, so the automation parts were not that difficult. There were a couple of things that got me confused, but it was pretty straightforward overall.

Initially, the deployment took seven and a half months.

What about the implementation team?

We used a lot of forums. We used Microsoft support and online help. We used a lot of things to get everything into one picture. There is plenty of help available online for any log sources that you want to move to Azure Sentinel.

What's my experience with pricing, setup cost, and licensing?

I have worked with a lot of SIEMs. Sentinel is costing us three to four times more than other SIEMs that we have used. Azure Sentinel's only limitation is its price point. Sentinel costs a lot once your ingestion goes past a certain point.

Initially, you should create cost alerts in the cost management of Azure. With one of my clients, we deployed the solution. We estimated that the ingestion would be up to this particular mark, but that ingestion somehow got way beyond that. Within a month to a month and a half, they got charged 35,000 CAD, which was a huge turn off for us. So, at the very beginning, do your cost estimation, then apply a cost alert in the cost management of Azure. You will then get notified if anything goes out of bounds or unexpected happens. After that, start building your entire security operation center on Sentinel.
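
Azure's cost alerts themselves are configured in Azure cost management; purely as a stand-in for the estimation step described above, here is a small sketch where the per-GB rate and volumes are placeholders, not Microsoft's actual pricing.

```python
# Placeholder figures only: estimate monthly ingestion cost up front and flag
# when actual daily ingestion would blow past the budgeted amount.
price_per_gb = 5.0                  # hypothetical combined ingestion/analytics rate
budgeted_gb_per_day = 50
monthly_budget = budgeted_gb_per_day * 30 * price_per_gb
print("budget:", monthly_budget)    # 7500.0

actual_gb_per_day = 180             # ingestion "got way beyond" the estimate
projected = actual_gb_per_day * 30 * price_per_gb
if projected > monthly_budget:
    print(f"ALERT: projected monthly cost {projected:.0f} exceeds budget {monthly_budget:.0f}")
```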

Which other solutions did I evaluate?

The SOAR capabilities of Azure Sentinel are great. FireEye Security Orchestrator looks like an infant in front of Azure Sentinel's SOAR capabilities, which is great.

What other advice do I have?

The solution is great. As far as the product itself is concerned, not the pricing, I would rate it as nine out of 10. Including pricing, I would rate the product as five to six out of 10.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: I am a real user, and this review is based on my own experience and opinions.