The primary use case is compliance requirements.
It is performing at the moment, but we are still in the process of implementing it.
We haven't fully integrated it or stood up the platform, so the benefits are not being realized yet.
The most valuable features would be the automation, reporting, and the support.
I do plan to use the full extent of the correlation and AI Engine to streamline our processes.
My big thing is ease of use. I don't like having to go to two different systems: the fat client that you have to install to configure it, and then the web console, which is just for reporting and analysis. These need to collapse into a single solution. Going through the web console in the future is the way to do it, because right now, it is a bit cumbersome.
If I remember correctly, there are some compatibility issues with different browsers. The user interface works only in Chrome, so in order to use something like this solution, we would have to have that extra browser. It would be nice if LogRhythm had full compatibility across all browsers, regardless of what platform users are on and whether they are on desktop or mobile devices.
I'm a little on the fence about stability, because the platform runs on Windows at the moment. There has been some finicky administration work, especially if we are going to try to integrate it with our own domain's policies, which need to be correctly reflected. In the instance that we have, it is not necessarily a good idea to have endpoint security on it, but when you have to meet compliance and follow rules, these are some of the exceptions. There needs to be a way to allow organizations to utilize these platforms and still be compliant.
I don't know what the demand is. I know the number of systems that we have. We try to forecast the demand ahead of time by coming up with a list of the services that we need in the environment, but there are still things which are probably yet to be seen.
As we run into systems which we were not aware of and which need custom integration, I don't know what the pain points will look like or if things will be overlooked: Is the system scalable enough that it will allow me to continue to log certain things without any restrictions? I don't know at this time, and I will find out once it happens.
So far, the technical support has been good.
I was hired in because I have the skill set to implement it. The original acquisition of the product was done by other people. Now, they have somebody who has the skill set and understands the technology deploying and configuring it, then going forward maintaining it.
For the development and maintenance, it will be just me. However, for the day-to-day log analysis, there will be a second person providing that function.
While we are aware of the playbooks, we still need to look into them.
We are close to a gig of messages a second, so quite a bit of data.
To capture your use cases, understand exactly what you are looking at ingesting. Do the research on what the company has done. For example, understand what everybody else has done previously with the solution.
I'm an admin and analyst, so our use cases cover a lot of log sources, mostly for applications.
Being able to see when one of our assets is down and being able to restart it really quickly has been a definite benefit. It has been really helpful in the general maintenance of our whole environment.
We're able to look at our environment and see how it's being affected, according to the log sources. We can immediately see how the system responds to things that our development team does.
The Web Console is my favorite. It enables me, at a glance, to see the health of the environments. That is really important to me and to us.
I would like to see more widgets. I just love the widgets on the Web Console, I love to play with them, so more would be better.
The stability has been great since the upgrade.
We just upgraded to 7.3.5 and, although I wasn't involved in that, it seems like everything has been working really well since then. It scaled really well and we are taking in new network monitors. That has been really easy.
We usually do end up having to remind technical support about our issues, get back in touch with them to see what the status is on our tickets. That has been frustrating in the past, but they do find solutions. Sometimes it takes a while. And sometimes that communication gets lost. Some of our tickets had to be escalated to engineers. They get a little bit lost, at times, when that happens to a ticket.
Overall, I would rate tech support at three out of five.
I would definitely recommend LogRhythm. Work with the LogRhythm team to help learn how your environment works. Use as much help as LogRhythm can provide in your initial setup, so you can understand your environment best.
We have more than 20 log sources. We average around 3,000 messages per second. We have hit 8,000 in the past, but not since the new upgrade in which we got more room. In terms of staff for deployment and maintenance, there are just two of us who share it. But when we're on-call, all of us use it. There are nine of us who use it every day when on-call.
I rate the solution at seven out of ten. I'm very happy with it. I love how powerful it is. However, the customer service is where the points come off. I know they're working on it.
We use it for centralized log management and for alerting. It's been working pretty well. We're on the beta program so what we're on right now has not been working quite as well lately. We're helping them find the bugs, but before this we didn't have any really major issues with it.
It makes everything quicker when it's all centralized. Anything we need to find, it brings to our attention. Even with other products we have that feed into it, instead of having to watch all of them we only have to watch one. For example, we have CrowdStrike; instead of having to pay attention to that solution - because its dashboard doesn't really pop when an alarm comes up - we can see issues with the red on the LogRhythm alarm. That is very nice.
We have seen a measurable decrease in the mean time to detect and respond to threats.
Being able to find everything in one place is really nice when you're doing your searches.
One thing we have mentioned to them before is that we'd like to be able to do searches, or drill-downs, directly from an alarm. When you click it and the Inspector tab slides out, that might be a good place to be able to click the host to search for the last 24 hours. I know the search is right there but it would be even nicer to just click that and then have an option to search something there.
Going into the beta, stability was very good, but in the beta it's not been as great for us lately.
There was a known bug where, after about five minutes, it would duplicate alarms, up to about 10,000. After 10,000 alarms in five minutes, everything shuts down. Also, some of the maintenance jobs got deleted when upgrading, so our database was filling up without deleting the old backups. Those are the two major issues so far.
I just took it over recently but we got it built to last. It's been the same since we put it up.
I open tickets frequently, especially in the beta program. To get the first response is usually a little slow, but once they're talking to you it's very good.
Figure out what you need it for before just getting everything you can into it. That's probably the main thing. We recently brought in an external firewall and it has everything enabled. So make sure it can do what you want and don't try to do more than what you need.
We have made a few playbooks, but we haven't done too much with them yet. For deployment and maintenance of the solution, it's just me doing the administration.
We're at 60 or 70 log sources right now. With some of the newer ones, we've had to open up tickets for them, like the newer Cisco Wireless. We've had issues with Windows Firewall and AdBlocker. We've had to get those fixed. We process about 600 messages per second.
In terms of the maturity of our security program, we got this solution right after we started up, so it has been growing with us. We're now at a point where we're happy with it and getting good value out of it.
We collect from our primary devices and our endpoints and we look to identify any concerns around regulatory requirements in business use. We have payment card industry regulations that we are monitoring, to make sure everything's going the way it's supposed to, as well as for HIPAA, HITECH, and general security practices.
In terms of seeing a measurable decrease in the mean time to detect and respond to threats, we live in the Web Console and we see things when they come in right away, and then we triage.
There's value in all of it. The most valuable is the reduction in time to triage. We take in around 750 million logs a day. We have a lot of products and that would be a lot of different panes of glass that we would have to look through otherwise. By centralizing, we can triage and take steps much more quickly than if we tried to man all the interfaces that come with the products.
There are two improvements we'd like to see. I mentioned these last year and they haven't implemented them yet.
The first one is service protection. I have Windows administrators who will remove the agent when they think that that is what's fouling up their upgrade or their install or their reconfiguration, etc. The first thing they do is turn off the antivirus, shut down the firewall, and take off anything else. They don't realize that the LogRhythm agent is just sitting there monitoring. Most antivirus products have application protection features built in where, if I'm an admin on a box, I can't uninstall the antivirus. I need to have the antivirus admin password to do that.
Why does the LogRhythm agent not have that built-in so that I don't have well-intended admins removing things or shutting off agents? I don't like that.
The second one is, you can imagine my logging levels vary. We do about 750 million a day and some days we do 715 million. Some days we do 820 million or 1.2 billion. But there's no way to drill in and find out: "Where did I get 400,000 extra logs today? What was going on in my environment that I was able to absorb that peak?" I have no way to identify it without running reports, which will produce a long-running PDF that I have to somehow compare to another long-running PDF. I have to analyze it and say, "Well, last month, the Exchange entity was only averaging this many logs. Now it jumped up this much. It could have been that." But then, if I find something that spiked, I still have to make sure nothing else bottomed out, because there might be a 600,000 log delta if something else wasn't producing as many logs as it normally does.
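As a rough sketch of the kind of drill-down I'm describing - assuming I could export per-log-source daily counts to CSV (the file names and columns here are hypothetical, nothing the product gives me today) - comparing two days and surfacing both the spikes and the drops would look something like this:

```python
# Hypothetical sketch: diff per-log-source daily counts from two exported
# CSVs (columns: log_source,count) to see where an extra 400,000 logs came
# from, and whether anything else bottomed out at the same time.
import csv

def load_counts(path):
    # Read a CSV of log_source,count into a dict keyed by source name.
    with open(path, newline="") as f:
        return {row["log_source"]: int(row["count"]) for row in csv.DictReader(f)}

yesterday = load_counts("counts_day1.csv")   # hypothetical export
today = load_counts("counts_day2.csv")       # hypothetical export

deltas = {
    src: today.get(src, 0) - yesterday.get(src, 0)
    for src in set(yesterday) | set(today)
}

# Sort by absolute change so big drops show up next to big spikes; a source
# that went quiet can mask where the extra volume actually originated.
for src, delta in sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:20]:
    print(f"{src:40s} {delta:+,}")
```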
I would like to see profiling and behavior awareness around systems, like they've begun to do around users with UEBA.
It's a well-written platform. That being said, with our log levels, we ultimately have almost 30 servers involved. Some of them are very large servers. It will bury itself quickly if there's a problem.
I find the product to be well-written and very efficient. However, sometimes the error logging is not altogether helpful. For example, on an upgrade, a Data Processor, a Windows box, was throwing an error code like 1083. Then it just stopped and died right out of the installer, and nobody looked. We searched through Google and what it means is the Windows Firewall wasn't turned on so that the installer could create a rule for the product. Why wouldn't they bubble up that description so that I wouldn't have to call support and I could just know, "Okay, the firewall wasn't turned on. Turn it back on. Re-run the installer and keep going"?
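To illustrate the kind of check I ended up doing by hand - this is a hypothetical pre-flight script, not anything the installer provides - verifying that the Windows Firewall service is running before kicking off the install would have saved the support call:

```python
# Hypothetical pre-flight check before running the installer: confirm the
# Windows Firewall service (MpsSvc) is running so the installer can create
# its firewall rule instead of dying with an opaque error code.
import subprocess
import sys

def firewall_running():
    # "sc query MpsSvc" reports the service state on Windows.
    result = subprocess.run(
        ["sc", "query", "MpsSvc"], capture_output=True, text=True
    )
    return "RUNNING" in result.stdout

if not firewall_running():
    sys.exit("Windows Firewall is not running; start it, then re-run the installer.")

print("Windows Firewall is running; safe to proceed with the install.")
```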
There have been many times where I've been disappointed, where I'll ramp an agent up to Verbose and it will say, "LogRhythm critical error, the agent won't bind to a NIC," or the like. I end up with no really actionable or identifiable information coming in, even though I've ramped up the logging level.
There's room for the solution to grow in those situations, especially with regards to a large deployment where it can quickly bury itself if it can't bubble-up something meaningful. I need to be able to differentiate it from other stuff that can be triaged at a much lower priority.
The scalability is good. We're deployed in two data centers at the moment. We had a little bit of difficulty implementing a disaster recovery situation because it was leveraging only Microsoft native DNS and it wouldn't work with the Infoblox DNS deployment that we use in our environment. They've been working on that behind the scenes. That's one of the things that is queued up for me next.
Scalability, volume-wise, the product works very well. As far as the DR piece goes, I think there's room to improve that.
Tech support is good. There are a lot of guys that know what's going on. Sometimes though, I've stood my ground saying, "I don't want to do that." If we have a problem with a server, we can bounce it and maybe it starts running right, but then we don't know what was wrong. We can't do anything about it in the future except bounce it again because that's what worked last time. Sometimes I need to push them and say, "Okay, I want to identify what's wrong. I want to see If I can write a rule that will show me when something's happening," or "I want to figure out if there's something wrong with my scaling and my sizing."
I like support. I think they're customer-focused. But sometimes it seems they've got a lot of tickets in the queue and they want to do the "easy-button." I push back more on some of that. It could just be a situation where the logs aren't going to have that information, and they already know that, but they don't want to say, "Well, our logging is not sufficient. This is the best way forward."
What I find is that there are die-hard Splunkers. The problem is that Splunk is not affordable at a large scale. QRadar is not any better. It's just as bad. LogRhythm, for the price point, is the most reasonable, when you begin to compare apples to apples.
From a performance standpoint, I have no problems recommending LogRhythm because it allows me to get in under the hood and tweak some things. It also comes with stuff out-of-the-box that is usable. I think it's a good product. Things like this RhythmWorld 2018 User Conference help me understand the company's philosophy and intentions and its roadmap, which gives me a little more confidence in the product as well.
Regarding playbooks, we have Demisto which is a security orchestration automation tool, and we're on LogRhythm 7.3. Version 7.4 is not available yet because of the Microsoft patch that took it down. We're looking to go to 7.4 in our test environment and to deploy up to that. I'm not quite sure how its automation, or the playbook piece, will compare with Demisto, which is primarily built around that area and is a mature product. However, from a price point, it is probably going to be very competitive.
In terms of the full-spectrum analytics, some of the visualizations that we have available via the web console are, as others have expressed, short-lived, since they're just a snapshot in time, whereas deploying Kibana will, perhaps, give us a trend over time, which we also find to be valuable. We're exploiting what is native to the product, but we're looking to improve that by going with either Kibana or the ELK Stack to enrich our visualizations and depict greater time periods.
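For example, once the logs are sitting in an Elasticsearch index - the host, index pattern, and field names below are assumptions for illustration, not the product's actual schema - a simple date-histogram query gives a per-day trend instead of a point-in-time snapshot:

```python
# Sketch: pull a 30-day, per-day log-count trend from Elasticsearch so the
# view isn't just a snapshot in time. Host, index pattern, and timestamp
# field are hypothetical placeholders.
import requests

ES_URL = "http://elasticsearch.example.internal:9200"   # hypothetical host
INDEX = "logs-*"                                         # hypothetical index pattern

query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-30d/d"}}},
    "aggs": {
        "per_day": {
            "date_histogram": {"field": "@timestamp", "calendar_interval": "day"}
        }
    },
}

resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=query, timeout=30)
resp.raise_for_status()

for bucket in resp.json()["aggregations"]["per_day"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])
```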
We have somewhere north of 22,000 log sources and we average a little over 12,000 messages per second.
The staff for deployment and maintenance is myself - I'm the primary owner of this product - and I have one guy as a backup. The rest of my team will use it in an analysis role. However, they're owning and managing other products. It's a very hectic environment. We're probably short a few FTEs.
One thing that we've yet to implement very well is the use of cases and metrics, because oftentimes, if we see something that we know is a false positive at a glance, we're not going to make a case out of it. We might not close it for a day or two because we know it's nothing, and because we're busy with other things since we are a little bit short on staff.
In terms of our security program maturity, we have a fairly mature environment with a lot of in-depth coverage. The biggest plus of LogRhythm is that we can custom-write the rules based on the logs and then speed up time to awareness, the mean time to detect. I can create an alarm for virtually anything I can log.
We have a small population of users, but we are large physically and geographically spread out with a lot of devices on our network. We need all that logging capability going into one spot where we can see it and correlate events across all our infrastructure with a small staff.
We're still struggling to get a real return on it and finding something that isn't false noise.
There have been a few things, such as weird service accounts that have an encrypted password which are locking things out. However, we haven't had a big security event success with it as of yet. We could be missing things here, not seeing what is going on.
We do a lot of the alerting, as far as user accounts. We have NetFlow information going into it, so we can examine a lot of traffic patterns and anomalies, especially if something stands out and is not the baseline. This helps a lot.
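As a rough illustration of what I mean by something not being the baseline - the CSV layout here is hypothetical, not an actual export format from the product - a simple z-score over each host's own traffic history flags the ones that stand out:

```python
# Hypothetical sketch: flag hosts whose latest daily traffic volume deviates
# sharply from their own baseline. Assumes a CSV of NetFlow-style rollups
# with columns: host,day,bytes, rows in date order (placeholder schema).
import csv
import statistics
from collections import defaultdict

history = defaultdict(list)   # host -> list of daily byte counts, in date order
with open("netflow_daily.csv", newline="") as f:   # hypothetical export
    for row in csv.DictReader(f):
        history[row["host"]].append(int(row["bytes"]))

for host, series in history.items():
    if len(series) < 8:
        continue                               # not enough history for a baseline
    baseline, latest = series[:-1], series[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # avoid divide-by-zero on flat history
    z = (latest - mean) / stdev
    if abs(z) > 3:                             # roughly: well outside the baseline
        print(f"{host}: {latest:,} bytes today (z-score {z:+.1f})")
```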
We still have a lot of noise, so this is a problem. We are having a hard time visually sifting through it. We need help dialing it in. We don't have the in-house expertise. Do we hire someone just for this purpose and have them sit there all day, every day doing that? It is almost at that point. We are looking at Optiv as a solution right now.
It is so robust. There are so many moving pieces that you can't dabble in them. This is the problem that we are struggling with. You have to have somebody who works with it, and that is their job. Maybe a bigger company could have a whole team which could do this, but we don't have the capability right now.
I would like to see the client and the web client merged, so all the administrative functions are in the same web interface. It is just clunky right now. If you leave it running, it slows down your machine. However, we are still on version 7.3.
It seems to be stable.
It should meet our needs going forward. It seems like it is a mature enough product.
As far as what it takes, I don't know if it's worth the effort to get it on all the desktops, like every single user desktop and laptop reporting to it, or if it is better just to target the domain controllers, etc.
I haven't had to use them too much. We will find out after we go online with Optiv.
I have my sales engineer with LogRhythm, who has been helpful. She reaches out to me every couple months just to see if we need anything and offers assistance, because we'd already used up our block of hours when we first provisioned this solution.
We probably will contact them, if we go with Optiv, then they can help us upgrade.
We did an on-premise solution. If I had to do it again, I would probably do a cloud-based solution. They basically shipped two boxes which were essentially ready to go. Then, I worked with an engineer who had a block of hours and he got the HA capability going. We got it dialed in and tied it up with the mainframe.
Our team is in the process this week of doing a health check and trying to get everything up to speed. We are doing an upgrade, because we are still on 7.3. We need to be upgraded to 7.4.
We have been using it for about a year. We are probably only about 75 percent there. We need help getting it dialed in, having some of the noise tuned out, and getting the alerts set up properly, so we can work off-hours on different triggers. This is where we are struggling because we need to sleep, and we are blind during that time. So, we need something to help us with that.
The installation wasn't too bad. LogRhythm did most of it. I just had to do the stuff that was specific for our environment.
We went back and forth between LogRhythm, Splunk, and AlienVault.
I liked LogRhythm mostly for how it integrated with the network infrastructure. It was my decision, and I'm not 100% sure that I picked the right one.
LogRhythm works well with our network-centric environment. However, it may not be the best for other things.
I am rating the solution a six out of ten, because we have not gotten it to work yet. With all its components, there is such a learning curve.
I haven't gotten far enough along in the process to know if the solution has a shortcoming or if it is our shortcoming with somehow getting it dialed in.
It is for security monitoring.
It has helped us centralize and have better visibility into devices on our network. We are better able to respond to threats in a timely manner.
I would like to have threat indexing and a cloud version.
When we had version 7.2.6, there were a lot of issues deploying that version and with the indexing. The indexer was unstable. So, we were not able to use the platform when we were on that version until we were able to upgrade to 7.3.4. That is when it became more useful to us.
Now, the stability is good. Right now, it is more a matter of fine-tuning the alerts and rules that we have, so we can reduce the hit on the XM's performance.
In terms of capacity, we have the same XM appliance. We still haven't gone beyond that appliance, deployed another indexer, or moved to a distributed architecture.
Tech support has been good. They have fixed whatever has been bothering me when I contact them.
I do the deployment and maintenance for the solution.
We have seen a measurable decrease in the mean time to detect and respond to threats.
Definitely consider LogRhythm. There are a lot of players in the market, but LogRhythm is a solid solution.
We don't have the playbooks; they are in version 7.4. We just upgraded to version 7.3.4. We are going to wait before we upgrade again due to performance issues.
We have around 22,000 log sources and average 5,000 messages per second.
Our primary use case would be for compliance. We needed a check in the box for compliance. Right now, it's performing and doing its job, allowing us to say that we are compliant with HIPAA, PCI, etc.
It has improved the way our organization functions. It has allowed us to dive deeper into our network and figure out what is going on by parsing logs properly and being able to reduce the time it takes to work cases down from seven days to approximately two days.
LogRhythm has increased productivity because all the tools that we need are in the web UI, allowing us to find threats on our network fast and efficiently.
Our security program is still in its infancy. There is a lot of work that needs to be done. We finally were able to get our SIEM. A few things that we need to do are data loss protection, user behavior analytics, and other features that LogRhythm offers, which we're probably going to invest in in the future. The program could use some work, but it is pretty solid now.
The most valuable feature is the Threat Intelligence Services (TIS).
We would like to see more things out of the console into the web UI. I guess this is what they are doing in 7.4.
In the three weeks that we have had it, we have had 99 percent uptime. It is a very stable platform.
It is scalable. They don't charge for going over your messages per second. It does scale with the business.
Technical support could use a little work in the terms of responding back. The feedback that we received is they do need a little more staff, but every issue that we've opened a ticket up for has been resolved.
We did not have a previous solution that we were using.
The initial setup is both straightforward and complex, as it requires a lot of work. It's very straightforward and very organized; our consultant guided us as to what we needed to do, but the entire thing is complex. One misstep or incorrect character can bring the whole thing down.
I do all the deployment and maintenance.
The sales engineers and salespeople who come in and scope out what you need are very knowledgeable. They are not there to upsell you. They get you what you need for what you have, so everything runs perfectly. The consultants are extremely knowledgeable. Getting LogRhythm up took less than a week. It's a very solid solution.
When it comes time to renew, they say, "This is what you are using. This is what we can do for you." So, they work with you on pricing.
There were multiple competitors. We almost went with Splunk, but LogRhythm ended up being the best for the price. It ended up being everything we needed in one solution.
Everyone needs a SIEM. Go with LogRhythm.
We are not using the full-spectrum analytic capabilities yet, as we are brand new.
We have not used any of the playbooks. We do have them. We find them to be very detailed and organized. We just need to find a way to implement them.
I run about 45 log sources, with 12 of them being domain controllers, aka DNS.
Messages per second fluctuate between 3,000 and 9,000. We are still trying to figure out why. We think it is our very chatty domain controllers, as we do deal with the Hard Rock and the Seminole tribe, but I would say that we average about 5,000.
Most important criteria when selecting a vendor: customer service. Do they care about our business as much as we care about our business? In other words, do they care about our data as much as we care about our data?
We use it to alarm our help desk.
We are starting to use it for SmartResponse. We have been using SmartResponse for about a year. Now, we are starting to push that toward the help desk, so the junior analysts can do more.
It allows us to automate a lot of things with a smaller team.
Our appliance is a little older, so we need to upgrade it. We are going to probably move to the software-only version. However, the issues that we have are our own fault because we didn't buy the right-size appliance.
We are not that big of a company. We are only at about 800 events per second.
We have had a couple of custom logs built, but we don't call in that much.
The initial setup is easy with the physical appliance.
We have two people who are setting it up and doing the admin side.
Make sure you size the appliance correctly.
We use Ansible and Terraform for infrastructure, so it's the same concept as the playbooks. We are looking to use the playbooks going forward.
We have about 1,500 log sources. We do about 25 million logs a day. Obviously, they're not all events.
