Share your experience using Comodo Dome Data Loss Prevention

The easiest route: we'll conduct a 15-minute phone interview and write up the review for you.

Use our online form to submit your review. It's quick and you can post anonymously.

Your review helps others learn about this solution
The PeerSpot community is built upon trust and sharing with peers.
It's good for your career
In today's digital world, your review shows you have valuable expertise.
You can influence the market
Vendors read their reviews and make improvements based on your feedback.
Examples of the 84,000+ reviews on PeerSpot:

Andrei Predoiu - PeerSpot reviewer
DevOps Engineer at a wholesaler/distributor with 10,001+ employees
Real User
Top 10
At the end of the day, no secrets or confidential keys are getting into our GitHub undetected
Pros and Cons
  • "GitGuardian Internal Monitoring has helped increase our secrets detection rate by several orders of magnitude. This is a hard metric to get. For example, if we knew what our secrets were and where they were, we wouldn't need GitGuardian or these types of solutions. There could be a million more secrets that GitGuardian doesn't detect, but it is basically impossible to find them by searching for them."
  • "Right now, we are waiting for improvement in the RBAC support for GitGuardian."

What is our primary use case?

Our main use case is operational security. We have a big IT platform, and a lot of it is built in-house. We are not a focused IT company; we are a retailer. We have a lot of developers, at a lot of different levels, working on a lot of different projects. For example, with the fashion brands, it is just, "Oh, we want to do this new app," and then they put it on our GitHub. Suddenly, we see all kinds of API keys and secrets in there. This solution is very useful for us because GitGuardian lets us know about them, and then we can take care of it.

It is on the cloud. We gave GitGuardian access to our organization and codebase. It just scans it on an ongoing basis.

How has it helped my organization?

Since using GitGuardian Internal Monitoring, we have found potentially enterprise-destroying secrets in our GitHub. For example, if our Git got compromised or we had a rogue employee, they could get a lot of business-critical data, disrupt the business, or put us at very high risk. More or less, we are now somewhat protected against that. Now, we are at a stage where we are just keeping an eye on it. As soon as a developer pushes something or does something, we get informed and act on it. The exposure time is very small. Before GitGuardian, we had secrets for years that we did not know about.

As soon as a new secret is detected, we get an email. We look at it, then contact the developers. It is a very fast process. For example, if it is my code and I pushed it, that is the fastest scenario. It is my problem. GitGuardian finds it, then I can fix it myself. I don't have to call another team, talk to them, etc. It could be within a minute that it is remediated.

What is most valuable?

The most valuable feature is automatic secrets detection, which is quite intelligent. It gives us very few false positives, which is definitely worth it at the end of the day as no secrets or confidential keys are getting into our GitHub undetected.

The majority of false positives are things like test credentials or dummy data that we put in for testing. It is not really feasible for GitGuardian to always understand which is dummy data and which is real data, but they do well in terms of false positives, and they work quite hard on them. Sometimes it understands that something is dummy data.
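
To give an idea of what that triage looks like, here is a minimal sketch of the kind of heuristic a team could apply on top of a scanner's findings. It is my own illustration, not anything GitGuardian ships; the dummy values and path hints are assumptions:

```python
# Values and path fragments that usually indicate dummy data rather
# than a real leak. Purely illustrative; tune for your own codebase.
DUMMY_VALUES = {"changeme", "password", "example", "dummy", "test", "xxx"}
TEST_PATH_HINTS = ("test/", "tests/", "fixtures/", "mock", "example")

def looks_like_test_credential(secret: str, file_path: str) -> bool:
    """Return True if a detected secret is probably a test credential."""
    lowered = secret.lower()
    if any(value in lowered for value in DUMMY_VALUES):
        return True
    # Secrets living in test or fixture directories are usually dummies.
    return any(hint in file_path.lower() for hint in TEST_PATH_HINTS)

print(looks_like_test_credential("hunter2-test", "tests/conftest.py"))  # True
```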

For certain types of secrets, they can check whether a key is actually in use. They will go to Microsoft, for example, and check whether the key is valid there, then tell you, "Hey, this secret is actually live." This is another feature that they are working on that I like.
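
How GitGuardian implements that check is their business, but the general idea is easy to sketch. For AWS credentials, for example, the STS GetCallerIdentity call requires no permissions, so it succeeds for any valid key pair and fails for a revoked one (a sketch using the boto3 library, not GitGuardian's code):

```python
import boto3
from botocore.exceptions import ClientError

def aws_key_is_live(access_key_id: str, secret_access_key: str) -> bool:
    """Probe whether a leaked AWS key pair is still active."""
    sts = boto3.client(
        "sts",
        aws_access_key_id=access_key_id,
        aws_secret_access_key=secret_access_key,
    )
    try:
        # GetCallerIdentity works for any valid credentials and raises
        # a ClientError for invalid or deactivated ones.
        sts.get_caller_identity()
        return True
    except ClientError:
        return False
```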

The breadth of the solution's detection capabilities is really good. We haven't stumbled upon a piece of code where we had to ask, "Why isn't GitGuardian picking this up?" That's for sure. If there are secrets in our Git that we don't know about and GitGuardian hasn't found them, I don't think it is likely that we ever will.

What needs improvement?

For remediation, GitGuardian is quite good at pointing out all the incidents and helping us handle them. However, remediation is mostly in our hands; we have to go in and reset the secrets ourselves. If they could detect secrets before they end up in our GitHub, that is the only meaningful improvement over what they already have.
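
That "catch it before it lands" idea can be prototyped locally with a Git pre-commit hook. Here is a toy sketch (save it as .git/hooks/pre-commit and make it executable); the two patterns are publicly documented formats, and a real scanner would check far more:

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: abort the commit if the staged diff contains
an obvious secret. A sketch of the idea, not a real scanner."""
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
]

def main() -> int:
    # Only look at lines being added in this commit.
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in diff.splitlines():
        if not line.startswith("+"):
            continue
        for pattern in PATTERNS:
            if pattern.search(line):
                print(f"Possible secret in staged change: {line.strip()}")
                return 1  # non-zero exit aborts the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```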

Right now, we are like the SRE team for the company. We need to monitor all the secrets, because when we give somebody access, they either see nothing or everything in GitGuardian. We would like to be able to tune it so that developers can see the secrets that GitGuardian detected in their own repositories and teams. Then, they could manage it themselves; we wouldn't have to be in the middle anymore. We could just supervise and make sure that they fix it. For example, if they don't care about their secrets getting spilled into Git, then we need to get our stick and chase them around the office.

For how long have I used the solution?

I have been using it for a year.

What do I think about the stability of the solution?

We have not had any issues at all with stability.

We do not do maintenance, per se. We do need to react to all the incidents that the solution finds, and we have to triage them if we find false positives or test credentials. It is reacting to GitGuardian's information. We don't have to do anything else.

Four or five people from my team are monitoring the solution.

What do I think about the scalability of the solution?

I haven't seen any problem with its scaling. We pay per repository, or something like that, but otherwise it is very agile and fast. 

How are customer service and support?

We didn't have to use the technical support for anything. The solution has worked great and we haven't had any issues. We have just had questions, specifically regarding RBAC and self-service type of stuff, but that is more roadmap development.

Which solution did I use previously and why did I switch?

We actually have a lot of tools for developers to handle and manage their secrets for whatever applications or code they develop, but not all of the teams and developers know how to use those properly. This causes secrets to end up in our codebase. Before we had GitGuardian, we did not understand that certain teams had this blind spot. We thought, "Oh, they know what they are doing. They just forgot, made a mistake, or committed some code by accident." However, we found out some of them had some learning to do.

How was the initial setup?

The initial setup is very quick and simple to do. It takes a few minutes and about 10 clicks to do it.

What was our ROI?

As an engineer, I am not paying for it. We just implement and use it. After using it in the trial, we went into a long-term contract with GitGuardian. That is definitely the business deciding that it is worth it. It is paying for itself.

We realized the benefits from it immediately. We started with a 30-day demo and said, "Just clean up our repository. We will be happy with that." However, it was so useful. It was immediately obvious that it would save our bacon in many situations. We decided to keep it.

GitGuardian Internal Monitoring has helped increase our secrets detection rate by several orders of magnitude. This is a hard metric to get. For example, if we knew what our secrets were and where they were, we wouldn't need GitGuardian or these types of solutions. There could be a million more secrets that GitGuardian doesn't detect, but it is basically impossible to find them by searching for them.

There are obvious benefits, but they are very hard to count in the security business. Until you get hacked or compromised, your costs are zero, but then you are destroyed. So the relationship between cost and time savings is a hard thing to measure. GitGuardian is doing a lot of the hard work here. Secrets in our codebase were one of the biggest holes in our security, and GitGuardian plugs that hole completely.

We had issues that were ongoing for a long time before we had GitGuardian. I remember that it once took us a month to understand that we did something very bad. GitGuardian would have caught that in a minute. A month in, it was a lot harder to remediate because the solution was pushed out to other teams. It was used by a bunch of people, then we had to take it down and reset everything, etc. It was a much bigger downtime than it could have been.

What's my experience with pricing, setup cost, and licensing?

It could be cheaper. When GitHub's own secrets monitoring solution reaches general availability, GitGuardian might be in a little bit of trouble from the competition, and maybe then they might lower their prices. The GitGuardian solution is great; I'm just concerned that they're not GitHub.

Which other solutions did I evaluate?

We played around with others. GitHub has a big advantage because they are GitHub. Their focus is on zero false positives, but we would rather have a few false positives and get everything.

We tried TruffleHog once. I don't remember why, but it didn't work quite right for us. We did see a lot more secrets being detected by GitGuardian than TruffleHog.

We ran GitGuardian and TruffleHog in parallel. We noticed that GitGuardian was finding a bunch of random secrets that TruffleHog did not. I think that GitGuardian is using machine learning, or something like that, to understand Azure, AWS, and Google API keys, or other standard secrets very commonly pushed into GitHub. It even figures out random API keys or secrets that developers made up by themselves and put in their code. Other solutions do not detect these unless we write a specific rule for them, but how can we write a rule for something that a developer just thought up in their head?
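
I can only guess at what GitGuardian does internally, but a classic way to catch made-up, random-looking secrets without a per-format rule is Shannon entropy. A minimal sketch, with an illustrative threshold:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = (s.count(c) for c in set(s))
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts)

# Candidate tokens: long runs of base64-ish characters.
TOKEN = re.compile(r"[A-Za-z0-9+/_=-]{20,}")

def high_entropy_tokens(line: str, threshold: float = 4.0):
    """Yield substrings random-looking enough to be secrets."""
    for match in TOKEN.finditer(line):
        if shannon_entropy(match.group()) >= threshold:
            yield match.group()

# A made-up token trips the detector; ordinary prose does not.
print(list(high_entropy_tokens('db_pass = "9xQ2rT7vKpL3mW8sZaB4cD6e"')))
```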

GitGuardian's surveillance perimeter is better for removing blind spots than any of the other products that we tested.

With the Git solutions, we spent a lot of time doing research. Because we have a big contract with GitHub, we were leaning heavily towards them. GitHub relies on some very hard-coded rules that they build themselves about, "What does a secret look like? What does a password look like? What does a key look like?" If you want to catch new types of secrets, you need to write the rules yourself or wait until GitHub adds them. GitGuardian, by contrast, is very flexible. It will show you, "Hey, we think this might be something that you should look at." Then, we just say, "No, it's not," or, "Oh my God. That is definitely something that we should look at." That is the main advantage of GitGuardian.
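
To make the trade-off concrete, a rules-only scanner looks roughly like the sketch below. The prefixes are publicly documented token formats; anything a developer invents themselves matches none of the rules, which is exactly the gap described above:

```python
import re

# Every known secret format needs its own hand-written rule.
RULES = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "Slack bot token": re.compile(r"\bxoxb-[0-9A-Za-z-]{20,}\b"),
}

def match_known_rules(text: str) -> list[str]:
    return [name for name, rule in RULES.items() if rule.search(text)]

# A home-grown secret sails straight through a rules-only scanner:
print(match_known_rules('token = "9xQ2rT7vKpL3mW8sZaB4cD6e"'))  # []
```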

This is where GitHub is at a disadvantage. One of our biggest issues is that a secret doesn't just live in the code as it is right now. GitGuardian searches the whole history of your code changes in every repository, and if we ever pushed a secret, even if we deleted it, it is still in the history, because that is how Git works. We can reset those keys and secrets, and even delete them from the history itself; we can rewrite the history so that, if you search for it now, the secret was never there to begin with. What we cannot do is delete them from pull requests and such. Those pull requests are controlled by GitHub, and only GitHub can delete them. We actually have to call GitHub support to erase the secrets from our pull requests. So it's not really GitGuardian's problem; it's GitHub's.
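
The history point is easy to verify on any local clone. This sketch greps every commit ever made for a pattern, so it finds secrets that were deleted from the current code long ago (slow on big repositories, but it shows the principle):

```python
import subprocess

def commits_containing(pattern: str, repo: str = ".") -> list[str]:
    """Return the hash of every commit whose tree contains `pattern`."""
    revs = subprocess.run(
        ["git", "-C", repo, "rev-list", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    hits = []
    for rev in revs:
        result = subprocess.run(
            ["git", "-C", repo, "grep", "-l", pattern, rev],
            capture_output=True, text=True,
        )
        if result.returncode == 0:  # git grep exits 0 on a match
            hits.append(rev)
    return hits

# Deleted from HEAD or not, any commit that ever held an AWS-style
# key prefix will show up here.
print(commits_containing("AKIA"))
```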

What other advice do I have?

We don't use it for monitoring our developers' public activities; we just focus on our own secrets. We are slowly building up our operational security, and our security in general, around Git. Right now, we are waiting for improvement in the RBAC support for GitGuardian.

I would say, "Good luck," to someone who says secrets detection isn't a priority; their priorities are probably wrong. One of the easiest ways to get intruded on, and to lose a lot of money as a company, is to have your secrets leaked somehow.

Secrets detection is one of the most important parts of a security program for application development. There are a few stages that application development goes through:

  1. on the developer's machine
  2. in the code repository
  3. packaged as an application
  4. running somewhere.

All of these stages have to be secured and taken care of. The application itself needs to be secure from a hacker coming in and trying to brute-force it or exploit the software. All of these steps need to be airtight, since your security is only as strong as your weakest link; that is how you make very modern, secure applications. However, if your secrets are in your GitHub and anybody can see them, then anyone who has access to one application or code repository can see your secrets. They can then take those and do a lot of stuff with them.

I would rate it nine out of 10. It would be a 10 if it had RBAC.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Raheel Naveed - PeerSpot reviewer
Senior Consultant DIS-InfoSec at Systems Limited
Real User
Highly effective in safeguarding sensitive data in cloud storage
Pros and Cons
  • "The incident response options and reporting features are particularly strong, with the inclusion of Incident Classification Assessment (ICA) for integrated reporting."
  • "Agent configuration should be improved for easier interaction for users, particularly by allowing configuration changes to be done on a grid."

What is our primary use case?

Symantec Data Loss Prevention (DLP) is highly effective in safeguarding sensitive data in cloud storage. Its granular features allow us to tailor implementations according to specific use cases and customer requirements.

How has it helped my organization?

Symantec Data Loss Prevention primarily targets confidential data. We utilize the indexing feature, Exact Data Matching (EDM), along with Endpoint Discover to analyze confidential files. This involves scanning their content using OCR and managing data access accordingly.
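
Symantec's OCR pipeline is proprietary, but the general OCR-then-match shape is easy to sketch with the open-source pytesseract library (a stand-in for illustration, not what Symantec uses; the file name and pattern are assumptions):

```python
import re

from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed

# Illustrative pattern: 16-digit, card-like numbers, grouped or not.
CARD_LIKE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scanned_image_has_sensitive_data(path: str) -> bool:
    """OCR an image (e.g., a scanned document) and look for
    card-number-shaped strings in the extracted text."""
    text = pytesseract.image_to_string(Image.open(path))
    return bool(CARD_LIKE.search(text))

# Hypothetical file name, for illustration only.
if scanned_image_has_sensitive_data("scanned_contract.png"):
    print("Possible confidential data found; apply the DLP policy")
```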

What is most valuable?

Symantec Data Loss Prevention (DLP) doesn't inherently classify data, but it provides robust features for managing policies and responding to incidents effectively. The incident response options and reporting features are particularly strong, with the inclusion of Information Centric Analytics (ICA) for integrated reporting. While DLP itself doesn't classify data, it can respond to classified files based on configured policies, and it can be integrated with classification tools like GoldenGate, data visibility tools, or Microsoft Information Protection (MIP) for enhanced data protection and management.

What needs improvement?

Agent configuration should be improved for easier interaction for users, particularly by allowing configuration changes to be done on a grid. I would like to see OCR (Optical Character Recognition) features extended to endpoint devices in Symantec DLP. Currently, OCR is only available for network channels, but many users also require OCR functionality on endpoints, especially for scenarios involving data migration or interaction with USB devices. Enhancements for OCR support on endpoints would be beneficial for technical support and implementation on these devices.

For how long have I used the solution?

I have been using Symantec Data Loss Prevention for the past 7 years. 

What do I think about the stability of the solution?

I would rate stability 9 out of 10. The solution has proven to be smooth and reliable in maintaining data security and protection.

What do I think about the scalability of the solution?

I would rate Symantec Data Loss Prevention (DLP) as a 9 for scalability. Our customers range from small to enterprise businesses, with varying numbers of endpoints and data levels. 

How are customer service and support?

The service quality declined after the platform transition. There's room for improvement by assigning support engineers based on region to ensure better assistance.

How would you rate customer service and support?

Positive

How was the initial setup?

I would rate setting up Symantec Data Loss Prevention (DLP) 7 out of 10. It can be challenging initially, especially for newcomers to the Symantec solution; it typically takes three or four implementations to become comfortable with the setup process. Following the prescribed steps is crucial, as it leads to a smoother experience and a better understanding of the setup requirements.

The deployment process for Symantec DLP is relatively short and can vary depending on specific requirements. If implementing only the core features, it typically takes about two days. However, if additional components like the endpoint and network channels are included, it may extend to three or four days.

What's my experience with pricing, setup cost, and licensing?

In terms of pricing, Symantec DLP offers reasonable rates compared to other products like Forcepoint or Jargill, making it a good investment, with ROI typically seen within two years.

What other advice do I have?

I recommend using Symantec DLP, especially for its smooth integration, tiered architecture, and efficient troubleshooting, earning it a solid 9 out of 10 from my experience.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer.