We have an engineering team working on the back end to receive data, do data modeling, and create dashboards. That's been pretty useful.
Splunk Enterprise Security has helped our organization a lot. In the past, we relied on every single product having its own kind of audit trail information, and we had to go and look for it. For example, in the Windows environment, we had to use Event Viewer a lot to look for certain things, like the System, Application, and Security logs. In Linux, we had to use the log files, and for certain applications in the Linux environment, we had to look at their logs as well.
And that's just the operating system side; it doesn't even cover the infrastructure, like network devices. When we centralize logs, we put everything in one location.
Our advanced users can write SPL queries for anything they want. Executives or higher-up management users need to look for certain things, like how many systems are missing patches this month, or who logged in today, from where, what they did, and how often they re-authenticated to the systems.
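As a rough illustration, a query along these lines could answer the "who logged in today and from where" question. The index, sourcetype, and field names (such as Source_Network_Address) are assumptions that depend on how your Windows logs are onboarded, not our actual configuration:

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 earliest=@d
    | stats count AS logins values(Source_Network_Address) AS source_ips
            earliest(_time) AS first_login latest(_time) AS last_login BY user
    | sort -logins

EventCode 4624 is the Windows successful-logon event, and earliest=@d restricts the search to events since midnight.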
We have a lot of data from the business, data from our devices, and more. When we put it all into ES, it gives us the ability to look at certain functions. It provides more insight into our data: where it's traveling from, between which endpoints, and what users are doing with it.
We also look into performance. We use other monitoring tools as well, and that data is also piped into Splunk. We have a centralized platform that we can go to for everything we need, rather than having to go to each individual system, like a Cisco syslog server or the Forcepoint console. It is a centralized platform that gives us more insight into our data and into what's happening in general.
It is very important that Splunk Enterprise Security provides end-to-end visibility into our environment because, at any given point in time, we want to know what's happening to the data. Data privacy is the primary concern. We want to make sure that authorized users only get access to what they are authorized to, so that data does not leak out or travel by a different path. Again, we get a lot of data in there, and we understand more about our data so we can improve the business in certain aspects.
We know that during certain times of the day, a lot of people access a server or website.
That gives us more insight into where we need more network bandwidth or where we need to upgrade network devices. We understand more about our data, like how many people access the data lakehouse. And that's just the performance side.
On the security side, we would know who's accessing it and from where. Are they authorized to do so, or is there a suspicious access pattern from locations they're not supposed to be in?
So again, we get the data, we centralize it, and we can do data mining. We can pull out anything from there rather than looking all over the place, like, "I want to find out whether he's working today, whether someone is using his account, or whether he accessed data from devices in two different places."
In Splunk Enterprise, we can either do that manually or have our engineers create an audit dashboard. Or, if you are an advanced user, you can write SPL queries that will give you anything you need.
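For example, a sketch of a search for spotting an account being used from more than one place in a day might look like the following; the index, the account name "jdoe", and field names such as src and dest are placeholders and assume CIM-style field mappings from the relevant add-ons:

    index=os_auth user="jdoe" action=success earliest=@d
    | stats dc(src) AS distinct_sources values(src) AS sources values(dest) AS systems BY user
    | where distinct_sources > 1

The dc() function counts distinct source addresses, so anything greater than one flags a user authenticating from multiple locations.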
The alert volume depends on the users. If they do what they're supposed to, then there's nothing to talk about. If not, it comes down to how you manage the data, educate your users, and control your systems. Based on that, Splunk might play a zero, fifty percent, or seventy-five percent role.
In a way, it has helped improve our organization's business resilience. It's a way for us to predict the pattern of data access and other things going on.
Having a way to do that is fine if we have enough resources to do it, because we have so much data but no one is really monitoring it. If we get alerts in the middle of the night and we don't have anyone to handle them, it's not going to help.
It's another aspect that we worry about the most: where our data is flowing.
Now that we've centralized our log information into Splunk, we want it to be well secured, because anyone with access to it could now work out the pattern of who accesses our data and from where.
We put all of our logs and data into Splunk: network switches, firewalls, web-based protection, and so on. In general, every component within the infrastructure sends data to Splunk.
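As a simplified sketch of what that onboarding can look like in inputs.conf, something along these lines covers a Linux host and the network devices; the index names and the choice to receive device syslog directly on a UDP input (rather than through a dedicated syslog tier) are assumptions for illustration:

    # On a universal forwarder installed on a Linux host: watch the auth log
    [monitor:///var/log/secure]
    index = os_linux
    sourcetype = linux_secure

    # On a heavy forwarder or indexer: accept syslog from switches and firewalls
    [udp://514]
    index = network
    sourcetype = syslog
    connection_host = ip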
Then we have an engineering team transforming, manipulating, and analyzing the data to create front-end dashboards that present it in a meaningful way.
With the newly announced version eight, we're going to get a single point-and-click front page that shows a whole lot of the information we need in the right panel, without having to drill down or click here and there for more details.
We've been using it for quite a few years now.
The solution of choice depends on the engineers and teams. If they manage Linux, they're comfortable with certain tools for reading the logs. In a Windows environment, it also depends on the engineers: if they favor a certain tool, they would use it. But the point was to cut down costs and consolidate all the software.
Splunk was not that big years ago, but then we started seeing them put more investment into it and make the tool more useful.
We're not using the cloud version yet. This is just the enterprise product on-premises.
Splunk could improve its pricing. People like certain features, and sales uses the features they provide, especially the automated ones, to hook customers into paying for the expensive license.
Everyone does it, like Microsoft and Cisco. Initially, you try out the free version, but once you get it into your shop and move it into production, you start relying on it and don't want to get out. Then you start paying a lot more for it.
Splunk is on the right path. It's good, but it does not provide everything that we need; there's a lot more to it. Ideally, it would detect things in real time, but we're always behind, just looking at the log information.
If you have a network device and a Splunk Enterprise instance, you have to send data to it, so you're relying on network connections.
If you're using a cloud service or anything where Splunk is not on-premises, there's high latency, and if that network connection is down, that's it: you don't know what's going on. So even if you have it on-prem, you're still working after the fact.
When you look at Splunk, you're looking at things that have already happened. It's not something that actively goes out there and does something for you.
If I had to give it a number from one to ten, given how far they've come, I'd give it a five or six. Logging and monitoring is just one part of the business; how you receive those alerts and act on them is another part, when I look at the overall infrastructure and infrastructure management.