We use Cribl for data normalization, which involves standardizing data from various sources before sending it to a SIEM. This helps reduce costs associated with SIEM ingestion. Additionally, we use Cribl to sanitize data by removing or masking sensitive information from certain fields.
Cribl also filters out unnecessary events and data, which further reduced our SIEM ingestion costs.
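To illustrate the kind of transformation these pipelines perform, here is a minimal Python sketch of filtering out low-value events and masking a sensitive field before forwarding to a SIEM. This is purely conceptual: the field names, severity values, and masking rule are my own illustrative assumptions, not Cribl syntax or schema.

```python
import re

# Hypothetical event records; field names are illustrative, not Cribl's schema.
events = [
    {"source": "firewall", "severity": "info", "user_email": "alice@example.com"},
    {"source": "firewall", "severity": "critical", "user_email": "bob@example.com"},
]

def mask_email(email: str) -> str:
    """Mask the local part of an email address, keeping the domain."""
    return re.sub(r"^[^@]+", "***", email)

def normalize_and_filter(events):
    """Drop noisy events and mask sensitive fields before SIEM ingestion."""
    out = []
    for ev in events:
        if ev.get("severity") == "info":  # filter out low-value events
            continue
        ev = dict(ev)
        ev["user_email"] = mask_email(ev["user_email"])  # sanitize PII
        out.append(ev)
    return out

print(normalize_and_filter(events))
# -> [{'source': 'firewall', 'severity': 'critical', 'user_email': '***@example.com'}]
```

Dropping the "info" events before they ever reach the SIEM is where the ingestion savings come from; the masking step is what keeps sensitive values out of downstream indexes.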
You can use Cribl to route the same data to different destinations. For instance, if a company uses multiple SIEMs and needs data in each, Cribl makes it easy to direct that data to various destinations. Setting up API connections to get data into the platform is easy. Cribl offers a cloud version, allowing different workspaces to segregate various functions within a company or organization.
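The multi-destination routing described above can be sketched as a simple fan-out: each event is matched against a predicate per destination and may be delivered to several of them. The destination names and match rules below are hypothetical examples of mine, not Cribl route configuration.

```python
# Hypothetical router: deliver each event to every destination whose
# predicate matches. Names and rules are illustrative only.
destinations = {
    "splunk":   lambda ev: True,                      # send everything
    "sentinel": lambda ev: ev["type"] == "security",  # security events only
}

def route(event):
    """Return the list of destinations this event should be sent to."""
    return [name for name, match in destinations.items() if match(event)]

print(route({"type": "security", "msg": "failed login"}))  # -> ['splunk', 'sentinel']
print(route({"type": "metrics", "msg": "cpu 80%"}))        # -> ['splunk']
```

The key point is that routing is non-exclusive: the same security event goes to both SIEMs without the sources having to know about either destination.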
The documentation could be better. It needs more regular updates, as new features often render existing information outdated. Additionally, there are inconsistencies between the documentation for Cribl Cloud and Cribl on-premises. This can be confusing, as features may differ, and you can easily misunderstand something if you use documentation intended for one version while working with the other. Consolidating and improving the clarity of the Cribl Cloud documentation would be very helpful.
I have been using Cribl for a year and a half.
It is highly scalable. If you need more cloud worker groups, you're just a click or two away from adding them, though at extra cost.
Depending on the license, Cribl usually provides a Customer Success Manager to assist with any questions or issues during onboarding. They are very responsive, and their support is quite helpful.
We employed a hybrid strategy, setting up Cribl Cloud as the head node. For data processing, we used worker nodes within the client's environment, closer to the data sources. This setup allowed us to process data locally before sending it to its destination. For cloud assets, such as SaaS applications like Salesforce, we used the cloud-hosted Cribl instance to handle that information, while the on-premises data was processed by the hybrid worker nodes.
We encountered delays due to third-party issues, which extended the timeline to six or seven months. Without those delays, it likely would have taken around three months, depending on how quickly we could obtain API keys, authorizations from networking teams, and so on. Under ideal circumstances, three months is the more accurate estimate.
You need to maintain the pipelines that process data before it reaches its destination. When onboarding new data sources, it is important to manage and rotate API keys as needed. Keeping on top of these tasks makes deployments faster and more efficient.
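As a minimal sketch of the key-rotation hygiene mentioned above, the snippet below flags API keys older than a rotation threshold. The key names, issue dates, and 90-day threshold are all assumptions for illustration; in practice the inventory would come from your secrets store.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Hypothetical key inventory mapping integration name -> key issue date.
keys = {
    "salesforce": now - timedelta(days=120),
    "sentinel":   now - timedelta(days=10),
}

MAX_AGE = timedelta(days=90)  # assumed rotation policy

def keys_due_for_rotation(keys, now):
    """Return the names of API keys older than the rotation threshold."""
    return sorted(name for name, issued in keys.items() if now - issued > MAX_AGE)

print(keys_due_for_rotation(keys, now))  # -> ['salesforce']
```

Running a check like this on a schedule turns key rotation from an ad-hoc chore into part of routine pipeline maintenance.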
If you want to reduce log ingestion or route data to multiple destinations, consider using an on-premises or cloud solution. Your choice will depend on your organization’s network constraints. For example, if critical assets on your network need to connect to the internet, your network team might have restrictions. Weigh the benefits of cloud versus on-premises options to determine what best fits your needs.
With less data coming into our system, queries now run faster and more efficiently because we are working with more streamlined data than before.
The QuickConnect feature is great for testing and lets you rapidly set up a proof of concept, which is very beneficial; it can also be useful in production environments. Another significant feature is the recent Microsoft Sentinel integration. The provided pack simplifies the setup process, making it much easier than the previous method, where you had to manually handle tasks like finding API keys. This integration makes the setup much more efficient.
Overall, I rate the solution a seven out of ten.