
Read reviews of Splunk alternatives and competitors

Dannie Combs
Senior Vice President and Chief Information Security Officer at Donnelley Financial Solutions
Video Review
Real User
Top 5
The alert fatigue and false positive rates have just plummeted, which is really exciting.
Pros and Cons
  • "As a result of the automation, we are able to manage SIEM with a small security team. I'm in a unique position where we have been growing the security organization quite rapidly over the last three and a half years. But, as a direct result of the empow transition and legacy collection of tools towards the empow platform, we've been able to keep that head count flat. We've been able to redirect a lot of the security team's time away from the wash, rinse, repeat activities of responding to alarms where we have a high degree of confidence that they will be false positives, adjusting the rules accordingly. This can be a bit frustrating for the analyst when they have to spend hours a day dealing with these types of probable false positives. So, it has helped not only us keep our headcount flat relative to the resources necessary to provide the assurances that our executives expect of us for monitoring, but allows our analyst team to spend the majority of their time doing what they love. They are spending their time meaningfully with a higher degree of confidence and enjoying getting into the incident response type activity."
  • "Relative to keeping up with the sheer pace of cloud-native technologies, it should provide more options for clients to deploy their technologies in unique ways. This is an area that I recommend that they maintain focus."

What is our primary use case?

My organization is in the financial services industry and the majority of services that we offer are financial services centric, but we operate in, or support, almost every industry in the marketplace. We store, process, and transmit highly sensitive information. Sometimes that information is pre-market; other times it is personally identifiable information, personal health information, etc., depending on our clients' requirements. Security is a cornerstone of all that we do. It's in our DNA, as we like to say internally. Being in a position to understand when we are at risk of a cyber attack is paramount.

We have a strong desire to understand who did what, where, when, and why internally. empow's near real-time, high-fidelity security monitoring capabilities are our primary use case. Other use cases revolve around:

  • Gaining as much insight from a threat intelligence perspective, being able to correlate that back to an alarm, and doing so in an automated fashion. 
  • The automated mitigation capability. 
  • The general reporting and analytics within the platform.

How has it helped my organization?

We have a significantly higher confidence in our ability to automate mitigations. We've had technologies across SOAR and cyber threat intelligence integrated into our platforms for over four years now. We would like to tell ourselves that we're reasonably experienced with both of those technology categories. 

One of the most impressive accomplishments that we were able to showcase internally was building metrics around the fidelity of our playbooks when they're executed. We have a high degree of confidence that we have the right playbooks in place. It's also worth mentioning that we're a global organization. We are corporate focused, primarily, not consumer focused. We know where our clients are from a geographic perspective, as an example, but our clients travel. We want to be hyperconservative with those mitigation techniques so as not to adversely affect the client experience with our product lines. Even though we took a very conservative approach initially, I was quite surprised that the false positive rate was almost zero when the mitigation playbooks were invoked. The enablement of automated mitigations that the empow product line has provided us with is incredibly impressive.

One of the most impressive capabilities of the empow product line, to our security analyst team, is just how little maintenance is required to ensure that we are focusing on the right threats. The correlation rules themselves require little to no maintenance from a client perspective, which is tremendous. This is a leap forward compared to other product lines and SIEMs over the last 10 years.

Correlation rules maintenance has been one of the most time-consuming bodies of work required. It is also one of the areas where we had a higher risk of focusing on the wrong things. We spent an enormous amount of time ensuring that we had the right correlation rules in place, that the fidelity of those rules was sound, etc. We can't begin to say how pleased we are that, for the most part, this is no longer something we have to be concerned about.

The power of the AI and natural language processing capability is best measured by the outputs. The fidelity of the alarms that we receive is just night and day compared to SIEM platforms and other platforms we've used in the past. I also feel it is a leading reason why our overall alarm volume is significantly lower and why we deal with far less alert fatigue. We are dealing with far fewer false positives as a direct result of the AI and NLP capabilities.

Our overall false positive rates are significantly lower. It has definitely removed about 60 percent of the total volume of alarms that we have needed to respond to each month over the last year. Also, it's worth mentioning that in years past we spent considerable amounts of time managing correlation rules: ensuring that we had the right prioritization applied to those rules and that the rules took into account changes in our technology deployments, such as a general shift in our portfolio, adding/removing devices, retiring products and services, and adding new innovative solutions for our customers. This was to the extent that we had a 90-minute session twice a month with a partner of ours dedicated just to that work. Today, we don't have any monthly meetings focused on correlation rules, as a direct result of our transition to empow.

Their ability to focus on an event with a high degree of fidelity really drives our level of confidence. Therefore, we are quick to respond with a high degree of urgency when we do receive an alarm because we recognize that there is a very high probability that the alarm is accurate and the fidelity is very high. This enables us to focus on other areas throughout the day. However, once we do receive an alarm from empow, we recognize it's something that needs to be responded to with a high degree of urgency.

The integration between Elastic and empow has been quite impressive for a couple of reasons: 

  1. We're a prime example of an organization that must have a high degree of flexibility in its deployments. We have full cloud-native deployments of products and corporate systems, and we have on-prem deployments of both. Our cloud deployments span many cloud providers. Therefore, I need to be able to orchestrate and scale my footprint up and down depending on geography, cloud providers, the tempo of the business relative to the lifecycles of some of our products, and so on. Having a lot of levers to pull with Elasticsearch has proven very attractive for supporting our set of requirements and our need for flexibility. 
  2. They play a big role in making it incredibly easy to plug into other security tools, network platforms, and application platforms, whether they are internally developed or commercial offerings. The API model that the empow product provides has simplified the integration of almost any technology into their product lines (see the sketch below).
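
To make that second point concrete, here is a minimal sketch of what pushing an event from an internally developed tool into a SIEM's REST ingestion API can look like. The endpoint URL, token, and payload shape are hypothetical placeholders for illustration; empow's actual API surface is not documented in this review.

```python
import requests

# Hypothetical ingestion endpoint and token -- placeholders, not empow's
# documented API.
SIEM_INGEST_URL = "https://siem.example.internal/api/v1/events"
API_TOKEN = "REPLACE_WITH_SERVICE_TOKEN"

def forward_event(source: str, severity: str, payload: dict) -> None:
    """Push a single security event from an internal tool into the SIEM."""
    event = {
        "source": source,      # e.g., an internally developed application
        "severity": severity,  # e.g., "low" | "medium" | "high"
        "details": payload,
    }
    resp = requests.post(
        SIEM_INGEST_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json=event,
        timeout=10,
    )
    resp.raise_for_status()  # surface ingestion failures to the caller

# Usage: forward an audit event from an in-house application.
forward_event("billing-portal", "medium", {"user": "jdoe", "action": "export"})
```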

empow has impacted our network security posture in a truly dramatic way, particularly in that we have higher confidence, when we are responding to an event, that it is actionable and something we should be concerned about. Secondly, it has positively impacted our network security posture by way of the automated mitigations defined within the system. The playbooks that we define, and can take a conservative approach to, help us avoid any negative impact to our clients. The playbooks define the automated mitigations, and we have a tremendous amount of confidence in their accuracy. Those playbooks are triggered daily, which reduces risk and reduces the amount of time spent containing and mitigating events. Overall, from a security perspective, these have been quite dramatic steps forward. 

It also directly supports our compliance programs. We're very easily able to measure when we have events and what actions were taken because the vast majority of them are addressed through automation.

I had worked with empow at a previous organization, but our requirements were very different. We are definitely enterprise focused, but we are also corporate-user focused. Our client community is primarily mid to large enterprise organizations across the globe. How well a product organization and its services team respond to support calls is critically important. I give empow very high marks. The responsiveness has been very high, but more important than the responsiveness is the quality and accuracy of their recommended next steps to resolve whatever issue we may have. 

What is most valuable?

  • The automated mitigation capability. 
  • A next-generation attack replay capability, which walks back from the event, historically, to provide a visualized representation of the attack lifecycle. 
  • The ability to rapidly deploy a comprehensive coverage tool without the need to spend months planning a deployment, with emphasis placed on correlation rules. Being able to put aside the need for a high number of correlation rules is extremely advantageous to us: it saves time and money and drives higher fidelity. It's just a fantastic capability.

When I think about the quality of the dashboard, it's one of the features that is just fantastic to speak about. They designed a dashboard where I can get a quick snapshot with a broad lens over the last seven to 10 days, or dive specifically into areas of concern. Also, from a SOC analyst perspective, there are many levels within a SOC organization, so whether someone is an entry-level analyst or a new hire, they can find the right altitude of interest relative to the depth of detail being presented. The flexibility of the dashboard to quickly drill up or down to an altitude of your choosing is fantastic.

Also, there is the ability to pivot between various data sets, whether it be:

  • Threat intelligence centric data
  • Alarm data
  • A specific asset
  • Elevating it to a solution level
  • Elevating it to an entity level

The degree of flexibility and the speed with which you can change your view are very impressive. Oftentimes, with some of the more legacy SIEMs that have been in the market for a long time, that was one of the major pain points: It took time to refresh views, and the limitations on that flexibility were frustrating.

The platform has made mitigation faster, primarily by way of the playbooks we have defined (automated mitigation). We have a number of playbooks defined where our empow platform signals directly to the firewall to block traffic. For example, we have no customers in North Korea, so anytime we see an interrogation of our products or assets from there, we signal the firewall to drop that traffic systematically. The time saved is not so much in the mean time to respond to an event, but in freeing our analysts to focus on other areas.
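
As an illustration of the kind of playbook described above, here is a minimal Python sketch of a geo-block mitigation. The GeoIP lookup service, firewall management endpoint, and alarm shape are all hypothetical assumptions, not empow's actual interfaces.

```python
import ipaddress
import requests

# Countries where we have no customers; traffic from them is dropped.
BLOCKED_COUNTRIES = {"KP"}  # ISO code for North Korea, per the example above

# Placeholder endpoints -- stand-ins for whatever GeoIP database and
# firewall management API an organization actually runs.
GEOIP_URL = "https://geoip.example.internal/lookup"
FIREWALL_BLOCK_URL = "https://fw-mgmt.example.internal/api/block"

def geoip_country(ip: str) -> str:
    """Resolve a source IP to an ISO country code via a GeoIP service."""
    r = requests.get(f"{GEOIP_URL}/{ip}", timeout=5)
    r.raise_for_status()
    return r.json()["country_code"]

def run_playbook(alarm: dict) -> None:
    """Automated mitigation: if an alarm's source is in a blocked country,
    signal the firewall to drop that traffic."""
    src = alarm["source_ip"]
    ipaddress.ip_address(src)  # validate the address before acting on it
    if geoip_country(src) in BLOCKED_COUNTRIES:
        requests.post(
            FIREWALL_BLOCK_URL,
            json={"ip": src, "reason": "geo-block playbook"},
            timeout=5,
        ).raise_for_status()
```

The conservative approach the reviewer describes maps naturally onto the blocklist: start with a short list of countries where no legitimate client traffic can originate, then widen it only as confidence in the alarm fidelity grows.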

What needs improvement?

empow has a few areas for improvement, as with any other technology, such as continuing to drive innovation in the dashboard. While we've been extremely impressed with the dashboard's ease of use, flexibility, ability to drill down deeply, and focus very intently on an area of interest, there will always be opportunities to be more innovative and open it up to a wider audience than just the operations group, for example. 

With reporting, there is always a desire to have custom reporting for every client of empow. 

Relative to keeping up with the sheer pace of cloud-native technologies, it should provide more options for clients to deploy their technologies in unique ways. This is an area where I recommend they maintain focus.

For how long have I used the solution?

I've been using the empow i-SIEM platform for a total of four years across two companies, but for two years in my current company.

What do I think about the stability of the solution?

Someone knock on some wood here if you would, but we haven't had any stability challenges yet. That is directly attributable to the architecture that we've put in place for empow and other solutions that we deploy. We plan for highly available solutions across each deployment site, or per data center, making geo-redundancies available. So far so good, we have not had any significant operational hiccups with the platform.

We have one dedicated resource who is accountable for ensuring that the empow environment is healthy, so one FTE from a maintenance perspective. We have a team of threat analysts on staff, and we have a third-party managed security services partnership in place as well. There's definitely one FTE whose primary role is to ensure that the empow platform is up and running, healthy, and satisfying the needs of our internal clients, which would be our team of cyber threat analysts.

What do I think about the scalability of the solution?

The scalability of empow is endless. I feel that they have an architecture that is highly scalable. It's been proven for on-prem, cloud, and hybrid environments; we presently have a hybrid environment. I suspect they can scale to almost any size needed. The questions will be the unique needs of the organization where they've been deployed and its appetite for investing to ensure resiliency locally, regionally, or globally. Those things play a role in how quickly it can scale and how complex the architecture must be. 

It is the standard for security operations. Anywhere my organization deploys technology assets, empow will be providing coverage, if not already.

The ability for empow to be managed by a single analyst depends on the organization. I don't need a team of 20 to 25 analysts any longer; it's significantly fewer than that. Whether one analyst is enough really depends on the organization and its threshold for risk, which is unique to every organization, and on the size of the organization from a technology perspective: Are you dealing with hundreds of servers or tens of thousands? That will be indicative of your resource needs. It takes essentially one, maybe two, resources, regardless of your size, to directly support the care, feeding, capacity management, and monitoring of the empow platform. The simplicity of the architecture is remarkably impressive.

How are customer service and technical support?

The partnership between empow and Elastic has a few key benefits:

  1. Support is simplified: there's one throat to choke. We pick up the phone, reach out to the empow team, and have one point of contact whether we're experiencing an application issue, a UI issue, a data issue, etc. One lead engineer takes ownership of determining whether it's an internal empow matter or whether we need to reach out across the boundary to Elastic. 
  2. Licensing negotiation through one organization is simpler. As we have one agreement, we have one pricing scheme to work from, which keeps maintaining the business nice and simple. It also makes our lives quite a bit easier when preparing for major upgrades or expanding the environment for capacity management purposes, since there are fewer parties we have to interface with.
  3. As we look forward to future product lines and other architectural endeavors, having a single point of contact for planning purposes simplifies the process quite a bit as we look to years two and three.

Which solution did I use previously and why did I switch?

It is worth mentioning that we were able to retire two other platforms as part of our migration over to empow. We retired a legacy SIEM deployment that we had in place for nearly four years. While it is a great product, I just felt that empow demonstrated more innovation at a lower cost point, with a simpler architecture that's more extensible and easier to scale. We also retired our SOAR platform. We continue to use our existing cyber threat intelligence (CTI) platform; there is an overlap with empow's capabilities, but we still see value in that CTI platform. So, we retired the SOAR and our legacy SIEM.

We needed to make a change in large part because the cost of scaling was becoming quite concerning. Also, we went through a series of upgrades a little over a year ago that were problematic. Anytime we experience something impactful, we now want to pause and reflect on what we did well, what our opportunities were, and whether we missed any opportunities to avoid that situation being realized. We used the outcomes of those reflections to revisit the market and made the decision to pursue empow as our leading solution for security operations.

empow has been able to significantly reduce the time that we spend just maintaining the platform, particularly compared to other product lines that we've previously invested in. The biggest advantage in this regard is the lack of time spent managing correlation rules. The simplified architecture allows us to lean on empow's support teams to provide almost end-to-end support of the underlying infrastructure that comprises the platform.

How was the initial setup?

There was complexity to the initial deployment in that we were migrating from an existing, fairly sizable deployment spanning more than one product line. There were a couple of different solutions in place that comprised our overall enterprise security monitoring solution: the SIEM, our SOAR platform, a cyber intelligence interface fed from a number of different feeds, etc. 

The initial deployment took a couple of weeks, and most of that was planning; the actual technical activities were executed quite quickly. There was the migration of the primary existing data storage from our old SIEM environment, which took about 14 days across three or four change windows to make sure it was complete and to wrap up some other activities. There was also the body of work to redirect log streams and other data ingestion from our several thousand devices (north of 5,000 devices for production alone), which takes time. The effort to redirect all the various log streams into the empow environment, away from a multitiered architecture to a single destination IP address, just a single collector across the environments, took approximately two months. That was more to ensure that we understood the risks associated with change management collisions; we were hyperfocused on never losing a log throughout those migrations.
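
On the "never losing a log" goal: one common way to verify continuity during such a cutover is to dual-ship logs to both pipelines for a change window and compare per-source event counts. The sketch below is our illustration of that idea, not the team's actual tooling; the device names and counts are made up.

```python
from collections import Counter

def continuity_report(old_counts: Counter, new_counts: Counter) -> list:
    """Compare per-source event counts between the legacy SIEM and the new
    collector for the same window; flag any source whose count dropped."""
    gaps = []
    for source, old_n in old_counts.items():
        new_n = new_counts.get(source, 0)
        if new_n < old_n:
            gaps.append((source, old_n, new_n))
    return gaps

# During a change window, both pipelines run side by side ("dual-shipping"),
# so equal counts per device indicate no logs were dropped in the cutover.
old = Counter({"fw-edge-01": 120_431, "ad-dc-02": 88_102})
new = Counter({"fw-edge-01": 120_431, "ad-dc-02": 87_950})
for source, old_n, new_n in continuity_report(old, new):
    print(f"possible log loss on {source}: {old_n} -> {new_n}")
```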

What about the implementation team?

There was some complexity that several of my teams and the empow team needed to walk through to make sure we mutually understood the goals and technical requirements. There were some business requirements, and related reporting, that we wanted to make sure we were all aligned to. While there were complexities, I would give empow very favorable feedback as it relates to them taking a sense of ownership of the migration and the overall deployment of their technology. They really looked out for us. There were a number of times when they cautioned me to make some minor adjustments to the plan to ensure we weren't disrupting our business, for which I'm very appreciative.

Our overall implementation strategy was to bring empow and representatives from my teams together to build a plan. We established a few key milestones and aligned those milestones to the availability of resources on both sides. This ensured we really understood what we were trying to accomplish, not just from an architecture perspective but also, e.g., taking into account key business timelines that we needed to be mindful of and periods where we just didn't have an appetite for major change management activities to occur. We each brought project management resources, a lead architect, and a threat analyst to the table to ensure that we understood each of those perspectives and that the team was set up for success. I would estimate the total team makeup, excluding myself, was six: two from empow and four from my team.

What was our ROI?

We are saving so much time. We deal with billions of events a month; we are definitely a data-centric organization. We are easily able to save 75 percent of the headcount for security operations that would otherwise be needed given our scale. Now, we are in a bit of a unique situation where the organization spun off from its parent company just shy of four years ago, so we are still in growth mode in many respects. While we are continuing to expand our security organization from an FTE and headcount perspective, it's very easy to quantify that without empow we would be looking at seven to 10 more resources being required, as opposed to the one or two who are focused on the platform today, where that focus includes capacity management, general system administration of the environment, and monitoring and responding to the alarms that are generated.

As a result of the automation, we are able to manage our SIEM with a small security team. I'm in a unique position where we have been growing the security organization quite rapidly over the last three and a half years, but as a direct result of the transition from our legacy collection of tools to the empow platform, we've been able to keep that headcount flat. We've been able to redirect a lot of the security team's time away from the wash-rinse-repeat activity of responding to alarms that we are highly confident will be false positives and adjusting the rules accordingly; it can be a bit frustrating for analysts to spend hours a day dealing with these probable false positives. So, it has not only helped us keep our headcount flat relative to the resources necessary to provide the assurances our executives expect of us for monitoring, but it also allows our analyst team to spend the majority of their time doing what they love. They are spending their time meaningfully, with a higher degree of confidence, and enjoying getting into incident response activity.

Our time spent supporting the environment has been reduced by north of 75 percent, from general system administration and capacity management to overall patching of the ecosystem. Most notable is the time to maintain the application tier of empow, particularly the correlation rules; that has been reduced by north of 90 percent compared to other platforms.

Mitigation time has been reduced by north of 75 percent for the vast majority of alarms that we receive. This varies depending on the event type. However, with the automated playbooks that we have defined and the confidence we have in the fidelity of the alarms, we have enjoyed a significant reduction in our mean time to mitigate and mean time to respond.

More logs ingested means more alarms, and more alarms would normally mean more analysts to respond to them in order to meet our very aggressive SLAs. With a higher degree of fidelity in the alarms, however, we were able to avoid adding additional resources to our teams. Taking into account the cost of security resources in the market, the significantly higher fidelity of the alarms being generated drove down our costs with our MSSP. It drove down my cost for human capital internally. It drove down our need to have multiple resources supporting the underlying infrastructure and the health and maintenance of empow as a platform, from several resources down to one. Therefore, human capital costs were significantly reduced, our operating expenses were significantly reduced, and our capital costs were significantly reduced, all while our capacity tripled and our run rate went down. It was almost a "too good to be true" situation. Fortunately for us, it worked out very nicely.

What's my experience with pricing, setup cost, and licensing?

We were looking at a seven-figure investment being necessary to sustain our growth projections for our log ingestion requirements, just for production. We had a goal of ensuring that we understood who did what, where, when, and why across all assets: production, staging, development, field devices, laptops, iPads, etc. Not only were we able to avoid all those growth costs; from a point in time forward, the cost structure of empow (as compared to both existing tools) made it cheaper for us to migrate than it was to sustain our legacy platforms. When it's cheaper to migrate, that's very attractive. 

I no longer have to put up with hypercomplex licensing agreements. Every time I wanted to add some additional compliance-centric or regulation-specific reporting, e.g., for GDPR, PCI, or Sarbanes-Oxley, many providers would require an additional license, which felt a bit ridiculous to me. With the simplified licensing architecture, there were no hidden "gotchas" down the road with empow, something I have experienced with other providers that I've worked with in the past.

As it relates to the SOAR side of the toolkit, there was no need to purchase an independent SOAR platform. The innovation that empow brings to the market just addressed all those use cases natively. We were able to just completely retire that toolkit out of our environment.

Which other solutions did I evaluate?

As we do with almost every technology selection, we looked at the market. For this particular technology stack, there were five or six different players whom we looked at intently. Then, as with most organizations, we took the broader view and narrowed it down to two or three finalists who went into a formalized proof-of-concept lab environment, which runs for an extended period of time for something as critical as a SIEM or SOAR, the primary reporting capabilities for security purposes. We had an extended evaluation and followed a crawl, walk, run model for our rollout. In hindsight, it was a very structured, formalized process. What was interesting was that the business case was very simple because of the cost savings from cost avoidance alone. As we look forward in our plans and the need to scale up, that more than covered the cost of transitioning in our entirety over to the empow platform.

Some of the SIEM providers that we looked at when we revisited the market include LogRhythm, Splunk, and empow, to name a few. A pro of empow is the simplified licensing model; both of my organizations have had feedback over the years from many clients that licensing is an area of concern, particularly with Splunk. Also, the simplified architecture, particularly leveraging the Elasticsearch technologies, prevents the need for a complex architecture with high-powered CPUs in place across each of your footprints where you have log collectors deployed. That was a major value-add and very attractive to us. 

We evaluated the partnership between empow and Elastic along a couple of different dimensions. First and foremost, what was our experience like as we were negotiating? Was empow in a position to adequately represent the business terms for Elastic, the support terms, and the other commitments that we needed to work through? The answer was yes. It felt very seamless to us. Secondly, the simplicity of the licensing model made the process of acquiring the technologies much simpler and more straightforward than what we experienced in previous relationships.

The decision to partner with Elastic wasn't a Tier 1 criterion for us. However, what was a criterion for us was the outcome and capabilities the partnership has resulted in, e.g., the speed to rapidly scale up, the ease of scaling down, and the ease of migrating from a primarily on-prem data center strategy to a hybrid 50/50 cloud/on-prem strategy, with long-term plans to pursue cloud far more aggressively. As we pull the levers to keep up with the demands of our business, we wanted comfort that doing so would not be a disruptive series of changes to the SIEM, and that we wouldn't have to go back and re-architect, then buy additional licenses for new features and functionality. We wanted to avoid that complex license structure. We want to have confidence in our ability to scale up and down and to migrate across multiple providers as our business needs warrant. empow has done a great job of supporting us in that regard.

What other advice do I have?

If I were to rate empow on a scale of one to 10, I would probably give them a nine and a half. The reason it's so high is that there are no competitors on the market, in my mind, that have transformed the SIEM industry as much as empow. The speed with which they continue to innovate is impressive. Every couple of months, we're excited to learn about the latest and greatest capabilities of the platform. Most of the latest innovations have been centered around their automation capabilities. It has had a tremendous impact on my organization. They tend to focus on what matters, and that has given us high confidence that where we are spending our time is worth it. The alert fatigue and false positive rates have just plummeted, which is really exciting. They have transformed the industry, which no one would have expected not that long ago.

I'd also like to give them a bit of a shout-out: when they have given me commitments around enhancements, such as improvements to their reporting capabilities, minor adjustments to the dashboard, and those types of feature requests, they've met those commitments in terms of both quality and timeline.

empow, without a doubt, is the most important monitoring tool that we have at our disposal. From a monitoring and incident response perspective, empow is the most valuable asset we have in our toolkit.

The biggest lesson I've learned from using empow is just how far the technology has come. The orchestration and automation of our mitigations surprised me, as did the accuracy of, and the level of confidence I now have in, the playbooks, because it's quite high. Another thing that surprised me is the level of confidence we now have in our ability to scale up and down, as scaling down can sometimes be equally tricky.

The advice that I would give to anyone looking at empow is primarily to ensure that your planning is sound. When I think about our experiences with empow, it's refreshing to think back on how easy that journey was with such a difficult technology stack. Not only was it surprisingly simple, it should not have been: not long ago, we were standing up a net-new environment into which we needed to build and deploy, then migrate, a very sizable deployment. Inevitably, we expected there to be some bumps along the road, but there were very few. I attribute this back to the quality of the planning and the reliability of the technology that empow brings to the table. Therefore, my advice is to ensure that your planning is sound. While it's exciting to know that the technology is very stable and the integrations are very straightforward, with API-driven integrations, they can never take into full account the uniqueness of your business. Thus, planning is absolutely paramount.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
JerryH
Director at a computer software company with 1,001-5,000 employees
Real User
Top 5
Enables us to bring all our data sources into a central hub for quick analysis, helping us focus on priorities in our threat landscape
Pros and Cons
  • "The real-time analytics of security-related data are super. There are a lot of data feeds going into it and it's very quick at pulling up and correlating the data and showing you what's going on in your infrastructure. It's fast. The way that their architecture and technology works, they've really focused on the speed of query results and making sure that we can do what we need to do quickly. Devo is pulling back information in a fast fashion, based on real-time events."
  • "Devo has a lot of cloud connectors, but they need to do a little bit of work there. They've got good integrations with the public cloud, but there are a lot of cloud SaaS systems that they still need to work with on integrations, such as Salesforce and other SaaS providers where we need to get access logs."

What is our primary use case?

Our initial use case is to use Devo as a SIEM. We're using it for security and event logging, aggregation and correlation for security incidents, triage and response. That's our goal out of the gate.

Their solution is cloud-based and we're deploying some relays on-premises to handle anything that can't send logs up there directly. But it's pretty straightforward. We're in a hybrid ecosystem, meaning we're running in both public and private cloud.

How has it helped my organization?

We're very early in the process, so it's hard to say what the improvements are. The main reason that we bought this tool is that we are a conglomeration of several different companies. We were the original Qualcomm company way back in the day. After they made billions in IP and wireless, they spun us off to Vista Equity, and we rapidly bought, in succession, three or four companies in the 2014/2015 timeframe. Since then, we've acquired three or four more. Unfortunately, we haven't done a very good job of integrating those companies from a security and business services standpoint.

This tool is going to be our global SIEM and log-aggregation and management solution. We're going to be able to really shore up our visibility across all of our business areas, across international boundaries. We have businesses in Canada and Mexico, so our entire North American operations should benefit from this. We should have a global view into what's going on in our infrastructure for the first time ever.

The solution is enabling us to bring all our data sources into a central hub. That's the goal. If we can have all of our data sources in one hub and are then able to pull them back and analyze that data as fast as possible, and then archive it, that will be helpful. We have a lot of regulatory and compliance requirements as well, because we do business in the EU. Obviously, data privacy is a big concern and this is really going to help us out from that standpoint.

We have a varied array of threat vectors in our environment. We OEM and provide a SaaS service that runs on people's mobile devices, plus we provide an in-cab mobile device for truck fleets and tractor-trailers doing both short- and long-haul work. That means our threat surface is quite large, not only from the web services and web-native applications that we expose to our customers, but also from our in-cab and mobile application products that we sell. Being able to pull all that information into one central location is going to be huge for us. Securing that type of landscape is challenging because we have a lot of different moving parts. But it will at least give us some insight into where we need to focus our efforts and get the most bang for the buck.

We've found some insights fairly early in the process but I don't think we've gotten to the point where we can determine that our mean time to resolution has improved. We do expect it to help to reduce our MTTR, absolutely, especially for security incidents. It's critical to be able to find a threat and do something about it sooner. Devo's relationship with Palo Alto is very interesting in that regard because there's a possibility that we will be pushing this as a direct integration with our Layer 4 through Layer 7 security infrastructure, to be able to push real-time actions. Once we get the baseline stuff done, we'll start to evolve our maturity and our capabilities on the platform and use a lot more of the advanced features of Devo. We'll get it hooked up across all of our infrastructure in a more significant way so that we can use the platform to not only help us see what's going on, but to do something about it.

What is most valuable?

So far, the most valuable features are the ease of use and the ease of deployment. We're very early in the process. They've got some nice ways to customize the tool and some nice, out-of-the-box dashboards that are helpful and provide insight, particularly related to security operations.

The UI is 

  • clean
  • easy to use
  • intuitive. 

They've put a lot of work into the UI. There are a few areas they could probably improve, but they've done a really good job of making it easy to use. For us to get engagement from our engineering teams, it needs to be an easy tool to use, and I think they've gone a long way toward that.

The real-time analytics of security-related data are super. There are a lot of data feeds going into it and it's very quick at pulling up and correlating the data and showing you what's going on in your infrastructure. It's fast. The way that their architecture and technology works, they've really focused on the speed of query results and making sure that we can do what we need to do quickly. Devo is pulling back information in a fast fashion, based on real-time events.

The fact that the real-time analytics are immediately available for query after ingest is super-critical in what we do. We're a transportation management company and we provide a SaaS. We need to be able to analyze logs and understand what's going on in our ecosystem in a very close to real-time way, if not in real time, because we're considered critical infrastructure. And that's not only from a security standpoint, but even from an engineering standpoint. There are things going on in our vehicles, inside of our trucks, and inside of our platform. We need to understand what's going on, very quickly, and to respond to it very rapidly.

Also, the integration of threat intelligence data provides context to an investigation. We've got a lot of data feeds that come in and Devo has its own. They have a partnership with Palo Alto, which is our primary security provider. All of that threat information and intel is very good. We know it's very good. We have a lot of confidence that that information is going to be timely and it's going to be relevant. We're very confident that the threat and intel pieces are right on the money. And it's definitely providing insights. We've already used it to shore up a couple of things in our ecosystem, just based on the proof of concept.

The solution’s multi-tenant, cloud-native architecture doesn't really affect our operations, but it gives us a lot of options for splitting things up by business area or different functional groups, as needed. It's pretty simple and straightforward to do so. You can implement those types of things after the fact. It doesn't really impact us too much. We're trying to do everything inside of one tenant, and we don't expose anything to our customers.

We haven't used the solution's Activeboards too much yet. We're in the process of building some of those out. We'll be building customized dashboards and Activeboards based on what our tools are doing in Splunk today. Devo's professional services team is going to help us make sure that we do that right, and do it quickly.

Based on what I've seen, its Activeboards align nicely with what we need to see. The visual analytics are nice. There's a lot of customization that you can do inside the tool. It really gives you a clean view of what's going on from both interfaces and topology standpoints. We were able to get network topology on some log events, right out of the gate. The visualization and analytics are insightful, to say the least, and they're accurate, which is really good. It's not only the visualization, but it's also the ability to use the API to pull information out. We do a lot of customization in our backend operations and service management platforms, and being able to pull those logs back in and do something with them quickly is also very beneficial.
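
On pulling information out via the API for backend tooling: here is a minimal sketch assuming Devo's REST query API. The endpoint URL, token-based authentication, query syntax, and table name are all assumptions that should be confirmed against Devo's current documentation before relying on this shape.

```python
import requests

# Assumed Devo query API endpoint and token auth -- verify against the
# current Devo docs; both are assumptions, not confirmed values.
DEVO_QUERY_URL = "https://apiv2-us.devo.com/search/query"
DEVO_TOKEN = "REPLACE_WITH_DEVO_TOKEN"

def pull_events(query: str, frm: str, to: str) -> list:
    """Run a query against Devo and return rows for downstream tooling,
    e.g., a service-management or ticketing platform."""
    resp = requests.post(
        DEVO_QUERY_URL,
        headers={"Authorization": f"Bearer {DEVO_TOKEN}"},
        json={"query": query, "from": frm, "to": to,
              "mode": {"type": "json"}},  # response format, per assumption
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("object", [])

# Example: last hour of firewall denies, fed into a ticketing workflow.
# The table and field names here are illustrative placeholders.
rows = pull_events(
    "from firewall.all.traffic where action = 'deny' select *",
    frm="now()-1h",
    to="now()",
)
```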

The customization helps because you can map it into your business requirements. Everybody's business requirements are different when it comes to security and the risks they're willing to take and what they need to do as a result. From a security analyst standpoint, Devo's workflow allows you to customize, in a granular way, what is relevant for your business. Once you get to that point where you've customized it to what you really need to see, that's where there's a lot of value-add for our analysts and our manager of security.

What needs improvement?

Devo has a lot of cloud connectors, but they need to do a little bit of work there. They've got good integrations with the public cloud, but there are a lot of cloud SaaS systems that they still need to work with on integrations, such as Salesforce and other SaaS providers where we need to get access logs.

We'll find more areas for improvement, I'm sure, as we move forward. But we've got a tight relationship with them. I'm sure we can get anything worked out.

For how long have I used the solution?

This is our first foray with Devo. We started looking at the product this year and we're launching an effort to replace our other technology. We've been using Devo for one month.

What do I think about the stability of the solution?

The stability is good. It hasn't been down yet.

What do I think about the scalability of the solution?

The scalability is unlimited, as far as I can tell. It's just a matter of how much money you have in your back pocket that you're willing to spend. The cost is based on log ingestion rate and retention period. They're running in the public cloud, meaning capacity is effectively unlimited. And scaling is instantaneous.

Right now, we've got about 22 people in the platform. It will end up being anywhere between 200 and 400 when we're done, including software engineers, systems engineers, security engineers, and network operations teams for all of our mobile and telecommunications platforms. We'll have a wide variety of roles that are already defined. And on a limited basis, our customer support teams can go in and see what's going on.

How are customer service and technical support?

Their technical support has been good. We haven't had to use their operations support too much. We have a dedicated team that's working with us. But they've been excellent. We haven't had any issues with them. They've been very quick and responsive and they know their platform.

Which solution did I use previously and why did I switch?

We were using Splunk but we're phasing it out due to cost.

Our old Splunk rep went to Devo and he gave me a shout and asked me if I was looking to make a change, because he knew of some of the problems that we were having. That's how we got hooked up with Devo. It needed to have a Splunk-like feel, because I didn't want to have a long road or a huge cultural transformation and shock for our engineering teams and our security teams that use Splunk today. 

We liked the PoC. Everything it did was super-simple to use and was very cost-effective. That's really why we went down this path.

Once we got through the PoC and once we got people to take a look at it and give us a thumbs-up on what they'd seen, we moved ahead. From a price standpoint, it made a lot of sense and it does everything we needed to do, as far as we can tell.

How was the initial setup?

We were pulling in all of our firewall logs, throughout the entire company, in less than 60 minutes. We deployed some relay instances out there and it took us longer to go through the bureaucracy and the workflow of getting those instances deployed than it did to actually configure the platform to pull the relevant logs.

In the PoC we had a strategy. We had a set of infrastructure that we were focusing on, infrastructure that we really needed to make sure was going to integrate and that its logs could be pulled effectively into Devo. We hit all of those use cases in the PoC.

We did the PoC with three people internally: a network engineer, a systems engineer, and a security engineer.

Our strategy going forward is getting our core infrastructure in there first—our network, compute, and storage stuff. That is critical. Our network layer for security is critical. Our edge security, our identity and access stuff, including our Active Directory and our directory services—those critical, core security and foundational infrastructure areas—are what we're focusing on first.

We've got quite a few servers for a small to mid-sized company. We're trying to automate the deployment process to hit our Linux and Windows platforms as much as possible. It's relatively straightforward: there is no Linux agent, so it's essentially a configuration change on all of our Linux platforms. We're going through that process right now across all our servers. It's a lift because of the sheer volume.
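
A configuration change of this kind typically amounts to dropping a syslog forwarding rule that points at the relay. The sketch below shows one hypothetical way to automate it with rsyslog; the relay address, port, and file path are placeholders, not Devo-documented values, and in practice this would run under a configuration-management tool across the fleet.

```python
import subprocess
from pathlib import Path

# The on-premises relay that receives syslog; address is a placeholder.
RELAY = "10.0.0.50:514"

# In rsyslog syntax, "*.*" matches all facilities/severities and "@@"
# forwards over TCP (a single "@" would be UDP).
FORWARD_RULE = f"*.* @@{RELAY}\n"

def enable_forwarding(conf_path: str = "/etc/rsyslog.d/90-devo.conf") -> None:
    """Write a forwarding rule pointing at the relay and restart rsyslog."""
    Path(conf_path).write_text(FORWARD_RULE)
    subprocess.run(["systemctl", "restart", "rsyslog"], check=True)

if __name__ == "__main__":
    enable_forwarding()
```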

As for maintenance of the Devo platform, we literally don't require anybody to do that.

We have a huge plan. We're in the process of spinning up all of our training and trying to get our folks trained as a day-zero priority. Then, as we pull infrastructure in, I want those guys to be trained. Training is a key thing we're working on right now. We're building the e-learning regimen. And Devo provides live, multi-day workshops for our teams. We go in and focus the agenda on what they need to see. Our focus will be on moving dashboards from Splunk and the critical things that we do on a day-to-day basis.

What about the implementation team?

We worked straight with Devo on pretty much everything. We have a third-party VAR that may provide some value here, but we're working straight with Devo.

What was our ROI?

We expect to see ROI from security intelligence and network layer security analysis. Probably the biggest thing will be turning off things that are talking out there that don't need to be talking. We found three of those types of things early in the process, things that were turned on that didn't need to be turned on. That's going to help us rationalize and modify our services to make sure that things are shut down and turned off the way they're supposed to be, and effectively hardened.

And the cost savings over Splunk is about 50 percent.

What's my experience with pricing, setup cost, and licensing?

Pricing is pretty straightforward. It's based on daily log ingestion and retention rate. They keep it simple. They have breakpoints, depending on what your volume is. But I like that they keep it simple and easy to understand.

There were no costs in addition to the standard licensing fees. I don't know if they're still doing this, but we got in early enough that all of the various modules were part of our entitlement. I think they're in the process of changing that model a little bit so you can pick your modules; they're going to split it up and charge by the module. But everything we needed was part of the package, day one.

Which other solutions did I evaluate?

We were looking at ELK Stack and Datadog. Datadog has a security option, but it wasn't doing what we needed it to do. It wasn't hitting a couple of the use cases that we have Splunk doing, from a logging and reporting standpoint. We also looked at Logstash, some of the "roll-your-own" stuff. But when you do the comparison for our use case, having a cloud SaaS that's managed by somebody else, where we're just pushing up our logs, something that we can use and customize, made the most sense for us. 

And from a capability standpoint, Devo was the one that most aligned with our Splunk solution.

What other advice do I have?

Take a look at it. They're really going after Splunk hard. Splunk has a very diverse deployment base, but Splunk really missed the mark with its licensing model, especially as it relates to the cloud. There are options out there, effective alternatives to Splunk and some of the other big tools. From a SaaS standpoint, if not best-in-breed, Devo is certainly in the top two or three. It's definitely a strong up-and-comer. Devo is already taking market share away from Splunk and I think that's going to continue over the next 24 to 36 months.

Devo's speed when querying across our data is very good. We haven't fully loaded it yet. We'll see when the rubber really hits the road. But based on the demos and the things that we've seen in Devo, I think it's going to be extremely good. The architecture and the way that they built it are for speed, but it's also built for security. Between our DevOps, our SecOps, and our traditional operations, we'll be able to quickly use the tool, provide valuable insights into what we're doing, and bring our teams up to speed very quickly on how to use it and how to get value out of it quickly.

The fact that it manages 400 days of hot data falls a little bit outside of our use case. It's great to have 400 days of hot data, from security, compliance, and regulatory retention standpoints. It makes it really fast to rehydrate logs and go back and get trends from way back in the day and do some long-term trend analysis. Our use case is a little bit different. We just need to keep 90 days hot and we'll be archiving the rest of that information to object-based long-term storage, based on our retention policies. We may or may not need to rehydrate and reanalyze those, depending on what's going on in our ecosystem. Having the ability to be able to reach back and pull logs out of long-term storage is very beneficial, not only from a cost standpoint, but from the standpoint of being able to do some deeper analysis on trends and reach back into different log events if we have an incident where we need to do so.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Sean Moore
Lead Azure Sentinel Architect at a financial services firm with 10,001+ employees
Real User
Top 20
Quick to deploy, good performance, and automatically scales with our requirements
Pros and Cons
  • "The most valuable feature is the performance because unlike legacy SIEMs that were on-premises, it does not require as much maintenance."
  • "If Azure Sentinel had the ability to ingest Azure services from different tenants into another tenant that was hosting Azure Sentinel, and not lose any metadata, that would be a huge benefit to a lot of companies."

What is our primary use case?

Azure Sentinel is a next-generation SIEM, which is purely cloud-based. There is no on-premises deployment. We primarily use it to leverage the machine learning and AI capabilities that are embedded in the solution.

How has it helped my organization?

This solution has helped to improve our security posture in several ways. It includes machine learning and AI capabilities, but it's also got the functionality to ingest threat intelligence into the platform. Doing so can further enrich the events and the data that's in the backend, stored in the Sentinel database. Not only does that improve your detection capability, but also when it comes to threat hunting, you can leverage that threat intelligence and it gives you a much wider scope to be able to threat hunt against.

The fact that this is a next-generation SIEM is important because everybody's going through a digital transformation at the moment, and there is actually only one true next-generation SIEM. That is Azure Sentinel. There are no competing products at the moment.

The main benefit is that as companies migrate their systems and services into the cloud, especially if they're migrating into Azure, they've got a native SIEM available to them immediately. With the market being predominantly Microsoft, where perhaps 90% of companies use Microsoft products, there are a lot of Microsoft houses out there and migration to Azure is common.

Legacy SIEMs used to take time in planning and looking at the specifications that were required from the hardware. It could be the case that to get an on-premises SIEM in place could take a month, whereas, with Azure Sentinel, you can have that available within two minutes. 

This product improves our end-user experience because of the enhanced ability to detect problems. What you've got is Microsoft Defender installed on all of the Windows devices, for instance, and the telemetry from Defender is sent to the Azure Defender portal. All of that analysis in Defender, including the alerts and incidents, can be forwarded into Sentinel. This improves the detection methods for the security monitoring team to be able to detect where a user has got malicious software or files or whatever it may be on their laptop, for instance.

What is most valuable?

It gives you that single pane of glass view for all of your security incidents, whether they're coming from Azure, AWS, or even GCP. You can actually expand the toolset from Azure Sentinel out to other Azure services as well.

The most valuable feature is the performance because unlike legacy SIEMs that were on-premises, it does not require as much maintenance. With an on-premises SIEM, you needed to maintain the hardware and you needed to upgrade the hardware, whereas, with Azure Sentinel, it's auto-scaling. This means that there is no need to worry about any performance impact. You can send very large volumes of data to Azure Sentinel and still have the performance that you need.

What needs improvement?

When you ingest data into Azure Sentinel, not all events are recognized. The way it works is that events are written to a native Sentinel table, but some events haven't got a native table available to them. In that case, anything Sentinel doesn't recognize is put into a custom table, which is something that you need to create. What would be good is an extension of the Azure Sentinel schema to cover a lot more technologies, so that you don't have to have custom tables.
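
For context, custom tables in Log Analytics carry a "_CL" suffix and are queryable with KQL just like native tables. Below is a minimal sketch using the azure-monitor-query Python SDK; the table name "MyAppliance_CL" and its columns are made up for illustration (classic custom-log fields also carry type suffixes such as "_s" for strings).

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "REPLACE_WITH_LOG_ANALYTICS_WORKSPACE_ID"

client = LogsQueryClient(DefaultAzureCredential())

# "MyAppliance_CL" stands in for a log source with no native Sentinel
# schema; the column "Computer_s" is likewise a hypothetical custom field.
query = """
MyAppliance_CL
| where TimeGenerated > ago(1d)
| summarize events = count() by Computer_s
| order by events desc
"""

response = client.query_workspace(WORKSPACE_ID, query,
                                  timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```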

If Azure Sentinel had the ability to ingest Azure services from different tenants into another tenant that was hosting Azure Sentinel, and not lose any metadata, that would be a huge benefit to a lot of companies.

For how long have I used the solution?

I have been using Azure Sentinel for between 18 months and two years.

What do I think about the stability of the solution?

I work in the UK South region and it very rarely has not been available. I'd say its availability is probably 99.9%.

What do I think about the scalability of the solution?

This is an extremely scalable product and you don't have to worry about that because as a SaaS, it auto-scales.

We have between 20 and 30 people who use it. I lead the delivery team, who are the engineers, and we've got some KQL programmers for developing the use cases. Then, we hand that over to the security monitoring team, who actually use the tool and monitor it. They deal with the alerts and incidents, as well as doing threat hunting and related tasks.

We use this solution extensively and our usage will only increase.

How are customer service and support?

I would rate the Microsoft technical support a nine out of ten.

Support is very good but there is always room for improvement.

Which solution did I use previously and why did I switch?

I have personally used ArcSight, Splunk, and LogRhythm.

Comparing Azure Sentinel with these other solutions, the first thing to consider is scalability. That is something that you don't have to worry about anymore. It's excellent.

ArcSight was very good, although it had its problems the way all SIEMs do.

Azure Sentinel is very good but as it matures, I think it will probably be one of the best SIEMs that we've had available to us. There are too many pros and cons to adequately compare all of these products.

How was the initial setup?

The standard Azure Sentinel setup is very easy. It is just a case of creating a Log Analytics workspace and then enabling Azure Sentinel to sit over the top. It's very easy; the challenge is actually getting the events into Azure Sentinel. That's the tricky part.
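For the workspace half of that, a rough sketch with the azure-mgmt-loganalytics Python SDK might look like the following (the resource names, region, and SKU are placeholders; enabling Sentinel over the top is then done in the portal or via the SecurityInsights management API):

    # Rough sketch of the first setup step: create the Log Analytics
    # workspace that Sentinel will sit on top of. All names are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.loganalytics import LogAnalyticsManagementClient

    client = LogAnalyticsManagementClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",  # placeholder
    )

    workspace = client.workspaces.begin_create_or_update(
        resource_group_name="rg-security",        # placeholder
        workspace_name="law-sentinel-uksouth",    # placeholder
        parameters={"location": "uksouth", "sku": {"name": "PerGB2018"}},
    ).result()

    print(workspace.id)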

If you are talking about the actual platform itself, the initial setup is really simple. Onboarding is where the challenge is. Then, once you've onboarded, the other challenge is that you need to develop your use cases using KQL as the query language. You need to have expertise in KQL, which is a very new language.
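To give a flavour of what a KQL use case looks like, here is a minimal sketch (my own illustration: SigninLogs is a built-in Sentinel table, but the workspace ID and the threshold of ten failures are placeholders) that flags accounts with repeated failed sign-ins:

    # A minimal detection-style use case: repeated failed sign-ins per
    # account and source IP. In SigninLogs, a ResultType of "0" is success.
    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    USE_CASE = """
    SigninLogs
    | where ResultType != "0"
    | summarize FailedCount = count() by UserPrincipalName, IPAddress
    | where FailedCount > 10
    | order by FailedCount desc
    """

    client = LogsQueryClient(DefaultAzureCredential())
    result = client.query_workspace(
        workspace_id="<your-workspace-id>",  # placeholder
        query=USE_CASE,
        timespan=timedelta(hours=24),
    )
    for table in result.tables:
        for row in table.rows:
            print(row)

In production, a query like this would typically be saved as a scheduled analytics rule rather than run from a script.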

The actual platform will take approximately 10 minutes to deploy. The onboarding, however, is something that we're still doing now. Use case development is an ongoing process that never ends. You are always onboarding.

It's a little bit like setting up a configuration management platform and only ever pushing out one configuration.

What was our ROI?

We are getting to the point where we see a return on our investment. We're not 100% yet but getting there.

What's my experience with pricing, setup cost, and licensing?

Azure Sentinel is very costly, or at least it appears to be very costly. The costs vary based on your ingestion and your retention charges. Although it's very costly to ingest and store data, what you've got to remember is that you don't have on-premises maintenance, you don't have hardware replacement, you don't have the software licensing that goes with that, you don't have the configuration management, and you don't have the licensing management. All of these costs that you incur with an on-premises deployment are taken away.

This is not to mention running data centers and the associated costs, including powering and cooling them. All of those expenses are removed. So, when you consider those costs and compare them to Azure Sentinel, you can see that it's comparable or, if not, that Azure Sentinel offers better value for money.

All things considered, it really depends on how much you ingest into the solution and how much you retain.
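To make that trade-off concrete, here is a purely illustrative back-of-the-envelope calculation; every rate below is an invented placeholder, not Microsoft's actual pricing:

    # Illustrative only: every number here is an invented placeholder.
    DAILY_INGEST_GB = 50
    INGEST_RATE_PER_GB = 2.50               # hypothetical per-GB ingestion charge
    RETAINED_GB = 500                       # hypothetical volume kept past free retention
    RETENTION_RATE_PER_GB_MONTH = 0.10      # hypothetical per-GB monthly retention charge

    monthly_ingest = DAILY_INGEST_GB * 30 * INGEST_RATE_PER_GB
    monthly_retention = RETAINED_GB * RETENTION_RATE_PER_GB_MONTH
    print(f"ingest: {monthly_ingest:,.2f}/month, retention: {monthly_retention:,.2f}/month")

Whatever the real rates, the comparison only makes sense once the on-premises costs listed above, such as hardware, licensing, configuration management, and data centers, are netted off.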

Which other solutions did I evaluate?

There are no competitors. Azure Sentinel is the only next-generation SIEM.

What other advice do I have?

This is a product that I highly recommend, for all of the positives that I've mentioned. The transition from an on-premises to a cloud-based SIEM is something that I've actually done, and it's not overly complicated. It doesn't have to be a complex migration, which is something that a lot of companies may be reluctant about.

Overall, this is a good product but there are parts of Sentinel that need improvement. There are some things that need to be more adaptable and more versatile.

I would rate this solution a nine out of ten.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Microsoft Azure
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Chief Infrastructure & Security Office at a financial services firm with 51-200 employees
Real User
Top 5 Leaderboard
Collects logs from different systems, works extremely fast, and has a predictable cost model
Pros and Cons
  • "It is a very comprehensive solution for gathering data. It has got a lot of capabilities for collecting logs from different systems. Logs are notoriously difficult to collect because they come in all formats. LogPoint has a very sophisticated mechanism for you to be able to connect to or listen to a system, get the data, and parse it. Logs come in text formats that are not easily parseable because all logs are not the same, but with LogPoint, you can define a policy for collecting the data. You can create a parser very quickly to get the logs into a structured mechanism so that you can analyze them."
  • "The thing that makes it a little bit challenging is when you run into a situation where you have logs that are not easily parsable. If a log has a very specific structure, it is very easy to parse and create a parser for it, but if a log has a free form, meaning that it is of any length or it can change at any time, handling such a log is very challenging, not just in LogPoint but also in everything else. Everybody struggles with that scenario, and LogPoint is also in the same boat. One-third of logs are of free form or not of a specific length, and you can run into situations where it is almost impossible to parse the log, even if they try to help you. It is just the nature of the beast."

What is our primary use case?

We use it as a repository of most of the logs that are created within our office systems. It is mostly used for forensic purposes. If there is an investigation, we go look for the logs. We find those logs in LogPoint, and then we use them for further analysis.

How has it helped my organization?

We have close to 33 different sources of logs, and we were able to onboard most of them in less than three months. Its adoption is very quick, and once you have the logs in there, the ability to search for things is very good.

What is most valuable?

It is a very comprehensive solution for gathering data. It has got a lot of capabilities for collecting logs from different systems. Logs are notoriously difficult to collect because they come in all formats. LogPoint has a very sophisticated mechanism for you to be able to connect to or listen to a system, get the data, and parse it. Logs come in text formats that are not easily parseable because all logs are not the same, but with LogPoint, you can define a policy for collecting the data. You can create a parser very quickly to get the logs into a structured mechanism so that you can analyze them.

What needs improvement?

The thing that makes it a little bit challenging is when you run into a situation where you have logs that are not easily parsable. If a log has a very specific structure, it is very easy to parse and create a parser for it, but if a log has a free form, meaning that it is of any length or it can change at any time, handling such a log is very challenging, not just in LogPoint but also in everything else. Everybody struggles with that scenario, and LogPoint is also in the same boat. One-third of logs are of free form or not of a specific length, and you can run into situations where it is almost impossible to parse the log, even if they try to help you. It is just the nature of the beast.
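LogPoint's own parser definitions aren't reproduced in this review, but as a generic sketch of the difference in Python (the log lines and field names are invented for illustration), a fixed-structure line yields to a single pattern while a free-form one does not:

    import re

    # A structured line: fixed key=value fields, so one pattern is enough.
    STRUCTURED = "ts=2022-01-15T10:03:22Z src=10.0.0.5 dst=10.0.0.9 action=deny"
    PATTERN = re.compile(
        r"ts=(?P<ts>\S+) src=(?P<src>\S+) dst=(?P<dst>\S+) action=(?P<action>\S+)"
    )

    match = PATTERN.match(STRUCTURED)
    if match:
        print(match.groupdict())  # {'ts': ..., 'src': ..., 'dst': ..., 'action': 'deny'}

    # A free-form line: variable length and wording, so no single pattern
    # reliably extracts fields. This is the case every SIEM struggles with.
    FREE_FORM = "User jsmith failed to connect, possibly because the VPN dropped"
    print(PATTERN.match(FREE_FORM))  # None: the parser simply does not apply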

Its reporting could be significantly improved. They have very good reports, but the ability to create ad-hoc reports can be improved significantly.

For how long have I used the solution?

I have been using this solution for three years.

What do I think about the stability of the solution?

It has been stable, and I haven't had any issues with it.

What do I think about the scalability of the solution?

There are no issues there. However much free space I give it, it'll work well.

It is being used by only two people: me and another security engineer. We go and look at the logs. We are collecting most of the information from the firm through this. If we grow, we'll make it grow with us, but right now, we don't have any plans to expand its usage.

How are customer service and support?

Their support is good. If you call them for help, they'll give you help. They have a very good set of engineers to help you with onboarding or the setup process. You can consult them when you have a challenge or a question. They are very good with the setup and follow-up. What happens afterward is a whole different story, because if your issue has to be escalated internally, you can run into trouble. So, their initial support is very good, but their advanced support is a little more challenging.

Which solution did I use previously and why did I switch?

I used a product called Logtrust, which is now called Devo. I switched because I had to get a consultant every time I had to do something in the system. It required a level of expertise. The system wasn't built for a mere human to use. It was very advanced, but it required consultancy in order to get it working. There are a lot of things that they claim to be simple, but at the end of the day, you have to have them do the work, and I don't like that. I want to be able to do the work myself. With LogPoint, I'm able to do most of the work myself.

How was the initial setup?

It is very simple. There is a virtual machine that you download, and this virtual machine has everything in it. There is nothing for you to really do. You just download and install it, and once you have the machine up and running, you're good to go.

The implementation took three months. I had a complete listing of my log sources, so I just went down the list. I started with the most important logs, such as DNS, DHCP, Active Directory, and then I went down from there. We have 33 sources being collected currently.

What about the implementation team?

I did it on my own. I also take care of its maintenance.

What was our ROI?

It is not easy to calculate ROI on such a solution. The ROI is in the ability to find what you need in your logs quickly and being confident that you're not going to lose your logs and can really search for things. It is the assurance that you can get that information when you need it. If you don't have it, you're in trouble. If you are compromised, then you have a problem. It is hard to measure the cost of these things.

As compared to other systems, I'm getting good value for the money. I'm not paying a variable cost. I have a pretty predictable cost model, and if I need to grow, it is entirely up to me what resources I put in, not up to them. That's a really good model, and I like it.

What's my experience with pricing, setup cost, and licensing?

It has a fixed price, which is what I like about LogPoint. I bought the system and paid for it, and I pay maintenance. It is not a consumption model. Most SIEMs or most of the log management systems are consumption-based, which means that you pay for how many logs you have in the system. That's a real problem because logs can grow very quickly in different circumstances, and when you have a variable price model, you never know what you're going to pay. Splunk is notoriously expensive for that reason. If you use Splunk or QRadar, it becomes expensive because there are not just the logs; you also have to parse the logs and create indexes. Those indexes can be very expensive in terms of space. Therefore, if they charge you by this space, you can end up paying a significant amount of money. It can be more than what you expect to pay. I like the fact that LogPoint has a fixed cost. I know what I'm going to pay on a yearly basis. I pay that, and I pay the maintenance, and I just make it work.

Which other solutions did I evaluate?

I had Logtrust, and I looked at AlienVault, Splunk, and IBM QRadar. Splunk was too expensive, and QRadar was too complex. AlienVault was very good and very close to LogPoint. I almost went to AlienVault, but its cost turned out to be significantly higher than LogPoint, so I ended up going for LogPoint because it was a better cost proposition for me.

What other advice do I have?

It depends on what you're looking for. If you really want a full-blown SIEM with all the functionality and all the correlation analysis, you might be able to find products that have more sophisticated correlations, etc. If you just want to keep your logs and be able to find information quickly within your systems, LogPoint is more than capable. It is a good cost proposition, and it works extremely well and very fast.

I would rate it an eight out of 10. It is a good cost proposition. It is a good value. It has all the functionality for what I wanted, which is my log management. I'm not using a lot of the feature sets that are very advanced. I don't need them, so I can't judge it based on those, but for my needs, it is an eight for sure.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Simon Thornton
Cyber Security Services Operations Manager at an aerospace/defense firm with 501-1,000 employees
Real User
Top 10
Provides a single window into your network, SIEM, network flows, and risk management of your assets
Pros and Cons
  • "The most valuable thing about QRadar is that you have a single window into your network, SIEM, network flows, and risk management of your assets. If you use Splunk, for instance, then you still need a full packet capture solution, whereas the full packet capture solution is integrated within QRadar. Its application ecosystem makes it very powerful in terms of doing analysis."
  • "I'd like them to improve the offense. When QRadar detects something, it creates what it calls offenses. So, it has a rudimentary ticketing system inside of it. This is the same interface that was there when I started using it 12 years ago. It just has not been improved. They do allow integration with IBM Resilient, but IBM Resilient is grotesquely expensive. The most effective integration that IBM offers today is with IBM Resilient, which is an instant response platform. It is a very good platform, but it is very expensive. They really should do something with the offense handling because it is very difficult to scale, and it has limitations. The maximum number of offenses that it can carry is 16K. After 16K, you have to flush your offenses out. So, it is all or nothing. You lose all your offenses up until that point in time, and you don't have any history within the offense list of older events. If you're dealing with multiple customers, this becomes problematic. That's why you need to use another product to do the actual ticketing. If you wanted the ticket existence, you would normally interface with ServiceNow, SolarWinds, or some other product like that."

What is our primary use case?

We're a customer, a partner, and a reseller. We use QRadar in our own internal SOC. We are also a reseller of QRadar for some projects, so we sell QRadar to customers. And we're a partner because we have different models: we roll the product out to a customer as part of our service, where we own it but the customer pays, and we also do full deployments that the customer owns. So, we are actually fulfilling all three roles.

What is most valuable?

The most valuable thing about QRadar is that you have a single window into your network, SIEM, network flows, and risk management of your assets. If you use Splunk, for instance, then you still need a full packet capture solution, whereas the full packet capture solution is integrated within QRadar. Its application ecosystem makes it very powerful in terms of doing analysis.

What needs improvement?

In terms of the GUI, they need to improve the consistency. It has been written by different teams at different times. So, when you go around the interface, you'll find a lot of inconsistencies in terms of the way it works.

I'd like them to improve the offense. When QRadar detects something, it creates what it calls offenses. So, it has a rudimentary ticketing system inside of it. This is the same interface that was there when I started using it 12 years ago. It just has not been improved. They do allow integration with IBM Resilient, which is an incident response platform; it is a very good platform, but it is grotesquely expensive. They really should do something with the offense handling because it is very difficult to scale, and it has limitations. The maximum number of offenses that it can carry is 16K. After 16K, you have to flush your offenses out. So, it is all or nothing. You lose all your offenses up until that point in time, and you don't have any history within the offense list of older events. If you're dealing with multiple customers, this becomes problematic. That's why you need to use another product to do the actual ticketing. If you want a proper ticketing system, you would normally interface with ServiceNow, SolarWinds, or some other product like that.
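To illustrate why an external tool usually fronts the offense list, here is a rough sketch (my own, not IBM's supported client; the host and token are placeholders) that pulls open offenses over QRadar's REST API so they can be pushed into a ticketing system:

    # Rough sketch: pull open offenses from QRadar's REST API so they can be
    # handed to an external ticketing system. Host and token are placeholders.
    import requests

    QRADAR_HOST = "https://qradar.example.com"  # placeholder console address
    HEADERS = {
        "SEC": "<api-token>",                   # placeholder API token
        "Accept": "application/json",
    }

    response = requests.get(
        f"{QRADAR_HOST}/api/siem/offenses",
        headers=HEADERS,
        params={"filter": "status=OPEN"},
        verify=False,  # lab-only: skip TLS verification for a self-signed cert
    )
    response.raise_for_status()

    for offense in response.json():
        # Each open offense would become a ticket in ServiceNow or similar.
        print(offense["id"], offense.get("description", "").strip())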

Their support should also be improved. Their support is very slow, and it is very difficult to find knowledgeable people within IBM.

Its price and licensing should be improved. It is overly expensive and overly complex in terms of licensing. 

For how long have I used the solution?

I have been using this solution for 12 years.

How are customer service and technical support?

Their support is very slow. It is very difficult to find knowledgeable people within IBM. I'm an expert in the use of QRadar, and I know the technical insights of QRadar very well, but it is sometimes very painful to deal with IBM's support and actually get them to do something. Their support is very difficult to work with for some customers.

Which solution did I use previously and why did I switch?

I work with Prelude, which is by a French company. It is a basic beginner's SIEM. If you have never had a SIEM before and you want to experiment, this is where you would start, but you would probably leave it very quickly. I've also worked with ArcSight and Splunk.

My recommendation would depend upon your technical appetite or your technical capability. QRadar is essentially a Linux-based Red Hat appliance. Unfortunately, you still need some Linux knowledge to work with this effectively. Not everything is through the GUI. 

Comparing it with Splunk, in terms of licensing, IBM's model is simpler than Splunk's. Splunk has two models. One is volumetric, where you pay for the number of bytes transmitted daily. The other is based upon the number of events per second, which they introduced relatively recently. Splunk can be more expensive than QRadar when you start adding what they call indexes. Basically, you create specific indexes to hold, for instance, logs related to Cisco. This is implicit within QRadar, which is designed that way, but within Splunk, if you want that performance and you have large volumes of logs, you need to create indexes. This is where the cost of Splunk can escalate.

How was the initial setup?

Installing QRadar is very simple. You insert a DVD, boot the system, and it runs the installation after asking you a few questions. It runs pretty much automatically, and then you're up and going. From an installation point of view, it is very easy.

The only thing that you have to get right before you do the installation is your architecture, because it has event collectors, event processors, flow collectors, flow processors, and a number of other components. You need to understand where they should be placed. If you want more storage, then you need to place data nodes on the ends of the processors. All this is something that you need to have in mind when you design and deploy.

What's my experience with pricing, setup cost, and licensing?

It is overly expensive and overly complex in terms of licensing. They have many different appliances, which makes it extremely difficult to choose the technology or the QRadar components that you should be deploying.

They have improved some of it in the last few years. They have made it slightly easier, in that you can now buy virtual versions of all the appliances, which is good, but it is still very fragmented. For instance, on some of the smaller appliances, there is no upgrade path, so if you exceed the capacity of the appliance, you have to buy a bigger one, which is not helpful because it is quite a major cost. If you want to add more disks to the system, they'll say that you can't. If an older appliance ships with 2-terabyte disks and you point out that 10-terabyte disks are commercially available, they will say it is not possible, even though there is no technical reason why it cannot be done. So, they're not very flexible from that point of view. For IBM, it is good because you basically have to buy new appliances, but from a customer's point of view, it is a very expensive investment.

What other advice do I have?

Make sure that you have the buy-in from different teams in the company because you will need help from the network teams. You will potentially need help from IT. 

You need to have a strategy for how you onboard logs into the SIEM: do you take a risk-based approach, or do you onboard everything? You should take the time to understand the architecture and the implications of design choices. For instance, QRadar components communicate with each other using SSH tunnels. The normal practice in security is that if I put a device in a DMZ, then communication between a device on the normal network, which is a higher security zone, and the DMZ, which is a lower security zone, will be initiated from the high-security zone. You would not expect the device in the DMZ to initiate communication back into the normal network. In the case of QRadar, if you put your processors in the DMZ, then they have to communicate with the console, which means that you have to allow the processor to initiate that communication. This has consequences. If you have remote sites, or you plan to use cloud-based processors, collectors, etc., with an internal console, the same communication channels have to exist. So, it requires some careful planning. That's the main thing.

I would rate QRadar an eight out of 10 as compared to other products.

Disclosure: I am a real user, and this review is based on my own experience and opinions.