
Read reviews of Veracode alternatives and competitors

Ramesh Raja
Senior Security Architect at a tech services company with 5,001-10,000 employees
Real User
Top 5
Continuously looks at application traffic, adding to the coverage of our manual pen testing
Pros and Cons
  • "We use the Contrast OSS feature that allows us to look at third-party, open-source software libraries, because it has a cool interface where you can look at all the different libraries. It has some really cool additional features where it gives us how many instances in which something has been used... It tells us it has been used 10 times out of 20 workloads, for example. Then we know for sure that OSS is being used."
  • "Contrast Security Assess covers a wide range of applications like .NET Framework, Java, PSP, Node.js, etc. But there are some like Ubuntu and the .NET Core which are not covered. They have it in their roadmap to have these agents. If they have that, we will have complete coverage."

What is our primary use case?

We use the solution for application vulnerability scanning and pen testing. We have a workflow where we deploy a Contrast agent to the apps our development teams bring us. Contrast then continuously monitors those apps.

When any development team comes to us and asks, "Hey, can you take care of the Assess piece, run a pen test, and do vulnerability scanning for our application?" we have a workflow for that and deploy a Contrast agent to their app. Because Contrast continuously monitors the app, notifications from Contrast go to the developers who are responsible for fixing that piece of the code. As soon as they see a notification, especially when it's a high or critical one, they go back into Contrast, look at how to fix it, and make changes to their code. It's quite easy to then go back to Contrast and say, "Hey, just consider this as fixed and if you see it come back again, report it to us." Since Contrast continuously looks at the app, if the finding doesn't come back in the next two days, then we say, "Yeah, that's fixed." It's been working out well in our model so far.

We have pre-production environments where dedicated developers look at it. We also have some of these solutions in production, so that we can switch back if needed.

It's hosted in their cloud and we just use it to aggregate all of our vulnerabilities there.

How has it helped my organization?

If an app team is going to deploy new features to prod, they put in a ticket saying, "We are including these features in our 2.0 release." The ticket comes to our team. We deploy Contrast Security and then we do a bunch of manual pen tests. While we're doing the manual pen tests, Contrast will have a bunch of additional findings, because Contrast is sensor-based. It's an agent-based solution that continuously looks at traffic coming in and going out of the application. When my team does manual penetration tests, Contrast looks through those flows, and that makes our coverage better. It goes hand-in-hand with our pen-test team: while the team tests the application, Contrast is looking at that traffic. Another application, like a Qualys, doesn't go hand-in-hand with a manual pen-test team that way. Contrast really helps us because it's like another resource looking at traffic and at logs; like a watchman looking at traffic going in and going out. I literally consider it another resource looking at traffic, day in and day out.

Contrast has also reduced the number of false positives we have to deal with, by something like 10 to 20 percent over the 18-plus months that we've had it.

The solution is accurate 90 percent of the time. Most of the time, when Contrast has identified top vulnerabilities in the OWASP Top 10, our manual pen-test team has gone in and said, "Yes, for sure." There were times when, because of resourcing issues, we did not have people pen-testing, and we would just say, "Okay, we'll see what Contrast says." And sure enough, Contrast would come back with 10 to 20 critical vulnerabilities. Then we would backtrack and have the manual team do some pen tests. They would come back and say, "Yes, it has literally identified most of them," things like SQL injection, which is in the OWASP Top 10. So we've seen that happen in the past, and that's why I feel the accuracy of Contrast is pretty good.

The advantage of using Contrast is that it is continuous.

I've seen some of the development teams completely take up Contrast themselves and work in Contrast. For example, a developer will be notified of an issue and will fix the code. He will then go back to Contrast and mark it as remediated. Then, he will keep watching the portal. He will be notified if the same vulnerability is found. We have seen teams that completely like the information that Contrast provides and they work independently with Contrast, instead of having a security team guiding them and holding their hands. There are times when we do hold hands for some of the teams, but it really depends on the software developers' maturity and secure coding practices.

In addition, it definitely helps save us time and money by letting us fix software bugs earlier in the software development lifecycle. It really depends on where you put Contrast. If you put Contrast in your Dev environment, sure enough, as soon as the developer deploys his code and QA is testing it in that environment, it will immediately flag and say, for instance, "You're not using TLS 1.2." The developer will go back and make those changes. It really depends on what model you have and where you want to use Contrast to your advantage. A lot of teams put it in the development environment or a pre-production environment and get to fixing vulnerabilities before something is released.

I've also seen the other side of the fence where people have deployed it in production. The vulnerabilities keep coming. Newer hacks develop over time. When teams put it in prod and an exploit happens, they can use Contrast Protect and block it on the other side. You can use it as you need to use it.

The time it saves us is on the order of one US-based FTE, a security person at an average pay level. At a bare minimum, Contrast does the work of a resource like that. It's like having a CISSP guy, in the US, on our payroll. That's how we quantify it in our team and how we did so in our project proposal.

What is most valuable?

Contrast has a feature called Protect. When a real exploit comes through, we can look at it and say, "Hey, yeah, this is a Cross-Site Scripting or SQL Injection," and then we can block it.

Another especially valuable feature is the stack trace. I've been in the application security space for about 15-plus years now. I saw it when it was a baby, when people thought of it as the "icing on the cake," something they would look at only when they had money: "Yeah, we can now look at security." Now, security is a part of the SDLC. So when Contrast identifies a vulnerability, it provides very important information, like the stack trace and variables.

It also has another feature called IAST, interactive application security testing. When I started out I was actually an embedded developer, and now I'm managing an OWASP team. I've seen both ends of the spectrum, and I feel that the information Contrast provides for every vulnerability is really cool and amazing, enabling us to go and fix the vulnerabilities.

It also has features so you can tweak a policy. You can make a rule saying, "Hey, if this vulnerability comes back, it is not an issue." Or you can go and change some code in a module and tell Contrast, "This is per-design." Contrast will cleverly identify and recognize that it was marked as per-design. It will not come back and say that's a vulnerability.

We use the Contrast OSS feature that allows us to look at third-party, open-source software libraries, because it has a cool interface where you can look at all the different libraries. It has some really cool additional features, like showing us how many times a library has actually been used. For example, out of a total of, say, 500 calls, how many times has the OSS actually been used? It tells us a library has been used in 10 out of 20 workloads, for example. Then we know for sure that the OSS is being used. There are tools that will tell you something is included, but sometimes developers include libraries that are never used. Contrast goes one step further and tells you how many times something has actually been used.

I can't quantify the effect of the OSS feature on our software development, but it gives us a grading from A to F. In this evolving security world, customers come back to us and say, "Hey, do you guys have a pen test report?" We can go back to Contrast, pull all this stuff, and provide it to customers.

What needs improvement?

Contrast Security Assess covers a wide range of technologies like .NET Framework, Java, PHP, Node.js, etc. But there are some, like Ubuntu and .NET Core, which are not covered. They have it in their roadmap to have these agents. If they have that, we will have complete coverage.

Let's say you have .NET Core in an Ubuntu setup. You probably don't have an agent that you could install at all. If Contrast builds those up and provides wide coverage, that will make it a masterpiece. So they should explore more of the technologies that they don't support yet. That should also include some of the newer and future technologies. For example, Google is coming up with its own OS. If they can support agent-based or sensor-based technology there, that would really help a lot.

For how long have I used the solution?

I have been using Contrast Security Assess for a year and a half.

What do I think about the stability of the solution?

There isn't much to quantify about the stability. It runs on autopilot, like a process monitor that keeps looking at traffic. Once you put it on there, it just hangs in there until the infrastructure team decides to move the old apps from PCF to another environment. Once it has been deployed, it's done. It's all auto-maintained.

What do I think about the scalability of the solution?

It depends on how many apps a company or organization has, but whatever apps you have, you can scale it to those apps. It has wide coverage. Once you install it on an app server, even if the app is very convoluted and has too many workflows, that is no problem. Contrast is per app. It's not like when you install source-code tools, where they charge by lines of code, per KLOC. Here, it's per app. You can pick 50 apps or 100 apps and then scale it. If the app is complex, that's still no problem, because it's all per app.

We have continuously increased our license count with Contrast because of the ease of deployment and the ease of remediating vulnerabilities. We had a fixed set for one year. When we renewed about six months ago, we purchased extra licenses, and we intend to ramp up and keep going. It will be based on the business cases and the business apps that come out of our organization.

Once we get a license for an app, folks who are project managers and scrum masters, who also have access to Contrast, get emails directly. They know they can put defects right from Contrast into JIRA. We also have other tools that we use for integration, like ThreadFix, and risk, compliance, and governance tools. We take the results and upload them to those tools for the audit team to look at.

How are customer service and technical support?

They have a cool, amazing support team that really helps us. I've seen a bunch of other vendors where you put in tickets and they get back to you after a few days. But Contrast responds really fast. From the word "go," Contrast support has been really awesome.

That's their standard support; they don't have premium support. I've worked with different vendors, doing evaluations, and Contrast is top of the line there.

Which solution did I use previously and why did I switch?

Before Contrast we were using regular manual pen-testing tools like Burp and other common tools. We switched to Contrast because the way it scans is different. Back in those days, security would do a pen test on Friday or Saturday — over the weekend when the traffic is less. We used to set aside time. Contrast doesn't work that way. It's continuous scanning. We install an agent and it continuously does it. Continuous is way better than having a separate time where you say, "We're going to scan at this time." The Dev-SecOps model is continuous and Contrast fits well there. That's why we made the switch.

Contrast is above par compared to the different tools that I've used in the past, like Veracode. I saw false positives and false negatives with all those tools, but Contrast is better than all the others I've used.

How was the initial setup?

The initial setup was straightforward. At the time, I was doing a proof of concept of Contrast Security to see how it works. It was fairly simple. Our company has a bunch of apps in various environments. Initially, we wanted to make sure that it works for .NET, Java, and PCF before we procured it. It was easy.

Our implementation strategy was coverage for a complete .NET application and then coverage for a complete Java application, in and out, where you find all the vulnerabilities and you have all the different remediation steps. Then we set up meetings with the app teams to go over some of it and explain things. And then, we had a bunch of apps in PCF. These were the three that we wanted: .NET, Java, and PCF. They are our bread and butter. We did all three in 45 days.

From our side, it was just me and another infrastructure guy involved.

What about the implementation team?

We only worked with Contrast. There were times when Contrast worked with Pivotal, internally, for PCF. But they pulled it off because they have a fairly good agreement with Pivotal and its support team. Initially, we had a few issues with deploying a Contrast tile to Pivotal. But Contrast worked things out with Pivotal and got all of it up for us. It was easy for us to just deploy the tile and bind the application. Once the application is bound, it's all about the vulnerabilities and remediation.

What was our ROI?

We expect to see ROI with the architecture team, the infrastructure team, and the development teams, especially when it comes to how early in our development cycle vulnerabilities are found and remediated. That plays a big part because the longer it takes to find a software vulnerability, the substantially higher your cost to market will be.

What's my experience with pricing, setup cost, and licensing?

I like the per-application licensing model, but there are reasons why some solutions want to do per KLOC. For us, especially because it's per app, it's really easy. We just license the app and we look at different vulnerabilities on that app and we remediate within the app. It's simpler.

If you have to go to somebody, like a Dev manager and ask him, "Hey, how many thousands of lines of code does your application have?" he will be taken aback. He'll probably say, "I don't know." It's difficult to cost-segregate and price things in that kind of model. But if, like with Contrast, they say, "Hey, your entire application — however big it is, we don't care. We're just going to use one license," that is simpler. This type of license model works better for us.

Which other solutions did I evaluate?

Before choosing Contrast Assess, we looked at Veracode and Checkmarx. 

Contrast does things continuously, so it's more of an IAST. Checkmarx didn't. Using it, you would have to upload a .war file and then it would do the analysis. You would then go back to the portal and see the vulnerabilities there.

It was the same with Veracode. With a SAST piece or a DAST piece, you have to have some specific timing in some workflows, and then you upload all of the stuff to their portal and wait for results. The results would only come after three or five days, depending on how long it takes to scan that specific workflow.

The way the scanning is done is fundamentally different in Contrast compared to how those solutions do it. You just install Contrast on the app server and voilà. Within five minutes you might see some vulnerabilities when you use that application workflow.

What other advice do I have?

If you are thinking about Contrast, you should evaluate it for your specific needs. Companies are different and the way they work is different. I know a bunch of companies that still have the Waterfall model. So evaluate and see how it fits in your model. It's very easy to go and buy a tool, but if it does not fit well in your processes and in your software development lifecycle, it will be wasted money. My strongest advice is: see how well it fits in your model and in your environment. For example, are developers using more of pre-production? Are they using a Dev sandbox? How is QA working and where do they work? It should work in your process and it should work in your business model.

"Change" is the lesson I have taken away by using Contrast. The security world evolves and hackers get smarter, more sophisticated, and more technology-driven. Back in the day when security was very new, people would say a four-letter or six-letter password was more than enough. But now, there is distributed computing, where they can have a bunch of computers trying to compute permutations and combinations of your passwords. As things change, Contrast has adapted well to all the changes. Even five years ago, people would sit in a war room and deploy on weekends. Now, with the DevOps and Dev-SecOps models, Contrast is set up well for all the changes. And Contrast is pretty good in providing solutions.

Contrast is not like other, traditional tools where, as you write the code, they immediately tell you there is a security issue. But when you have the plugin and something is deployed and somebody is using the application, that's when it's going to tell you there's an issue. I don't think it has an on-desktop tool that tells the developer about an issue as he writes the code, like Veracode Greenlight. It is more of an IAST.

We don't have specific people for maintenance. We have more of a Dev-SecOps model. Our AppSec team has four people, so we distribute the tasks and share them with the developers. We set up a Teams integration or a notification with them, so that as soon as Contrast finds something, they get notified. We try to integrate teams and integrate notifications. Our concern is more about when a vulnerability is found and how long it takes for the developer to fix it. We have worked all of that out with Power BI, so it actually shows us, when a vulnerability is found, how long it takes to remediate it. It's more like autopilot. It's not a maintenance type of thing.

I would rate Contrast at nine out of 10. I would never give anything a 10, but Contrast is right up there.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
VP of Engineering at a tech vendor with 11-50 employees
Real User
Top 10
Scans our thousands of dependencies every time we build and rechecks them daily, making us aware of what's going on
Pros and Cons
  • "We're loving some of the Kubernetes integration as well. That's really quite cool. It's still in the early days of our use of it, but it looks really exciting. In the Kubernetes world, it's very good at reporting on the areas around the configuration of your platform, rather than the things that you've pulled in. There's some good advice there that allows you to prioritize whether something is important or just worrying. That's very helpful."
  • "There is always more work to do around managing the volume of information when you've got thousands of vulnerabilities. Trying to get those down to zero is virtually impossible, either through ignoring them all or through fixing them. That filtering or information management is always going to be something that can be improved."

What is our primary use case?

Our use case is basically what Snyk sells itself as, which is for becoming aware of and then managing any vulnerabilities in third-party, open-source software that we pull into our product. We have a lot of dependencies across both the tools and the product services that we build, and Snyk allows us to be alerted to any vulnerabilities in those open-source libraries, to prioritize them, and then manage things.

We also use it to manage and get visibility into any vulnerabilities in our Docker containers and Kubernetes deployments. We have very good visibility of things that aren't ours that might be at risk and put our services at risk.

Snyk's service is cloud-based and we talk to that from our infrastructure in the cloud as well.

How has it helped my organization?

We are a business that sells services to other businesses. One of the things that we have to sell is trust. As a small company, we've had to go quite a long way to mature our development and security processes. We've been ISO 27001-certified for a while and we got that very early, compared to the life cycle of most businesses. But that's because when we're talking contracts with customers, when we're talking information security reviews with customers, it's really powerful to be able to say, "We have Snyk, we use it in this way." A lot of the questions just go away because people understand that that means we've got a powerful and comprehensive tool.

Certainly, from a finding-of-vulnerabilities perspective, it's extremely good. Our problem is scale. We have something like 7,000 dependencies in our code and we could go and check those ourselves, but that would be a huge waste of time. Snyk's ability to scan all of those every time we build, and keep a running status of them and recheck them daily, is extremely valuable for making us aware of what's going on. We've wired Snyk up into Slack and other things so that we get notifications of status, and that's useful.

It has reduced the amount of time it takes to find problems by orders of magnitude because it's scanning everything. Without the tool it would be horrific; we just couldn't do it. It takes seconds for a scan to run on each of our libraries and so that's an amazing performance improvement. Compared to having nothing, it's amazing.

In terms of developer productivity, because of the way our development community works, they're pulling in third-party libraries. They worry less about the choice of a third-party library because Snyk will inform them if there's a risk, and then they have to take action. We probably spend more time securing our product, but we get a more secure product, which is actually what we want.

Overall, knowing what the risks are, and being able to make considered judgments about those risks, means that we are much more comfortable that our product is secure. And when there are high-risk issues, we're able to take action very quickly. The time to resolution for anything serious that is discovered in downstream libraries is dramatically reduced, and that's really useful.

What is most valuable?

The core offering of reporting across multiple projects and being able to build that into our build-pipelines, so that we know very early on if we've got any issues with dependencies, is really useful.

We're loving some of the Kubernetes integration as well. That's really quite cool. It's still in the early days of our use of it, but it looks really exciting. In the Kubernetes world, it's very good at reporting on the areas around the configuration of your platform, rather than the things that you've pulled in. There's some good advice there that allows you to prioritize whether something is important or just worrying. That's very helpful.

In terms of actionable items, we've found that when you take a container that has been built from a standard operating system, it tends to be riddled with vulnerabilities. The advice is more akin to persuading you to go for something simpler, whether that's a scratch or an Alpine container, which has less in it. It's more a nudge philosophy than a specific, actionable item.

We have integrated Snyk into our software development environment. The way Snyk works is that, as you build the software in your pipelines, you can have a Snyk test run at that point, and it will tell you if there are newly-discovered vulnerabilities or if you've introduced vulnerabilities into your software. And you can have it block builds if you want it to; a minimal sketch of such a step appears below. Our integrations were mostly a language-based decision. We have Snyk integrated with Python, JavaScript/Node, and TypeScript code, among others, as well as Kubernetes. It's very powerful and gives us very good coverage on all of those languages. That's very positive indeed.
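
To illustrate that kind of build-pipeline step, here is a minimal sketch in TypeScript. It assumes the Snyk CLI is installed and authenticated in the build environment; the severity threshold shown is just one possible policy, not necessarily this reviewer's configuration.

```typescript
// Minimal sketch of a pipeline step that runs `snyk test` and blocks
// the build when vulnerabilities at or above the threshold are found.
// Assumes the Snyk CLI is installed and authenticated in this environment.
import { spawnSync } from "node:child_process";

const result = spawnSync("snyk", ["test", "--severity-threshold=high"], {
  stdio: "inherit", // stream Snyk's findings into the build log
});

// `snyk test` exits non-zero when it finds matching vulnerabilities,
// so propagating its exit code is enough to block the build.
process.exit(result.status ?? 1);
```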

We've got 320-something projects — those are the different packages that use Snyk. It could generate 1,000 or 2,000 vulnerabilities, or possibly even more than that, most of which we can't do anything about, and most of which aren't in areas that are particularly sensitive to us. One of our focuses in using Snyk — and we've done this recently with some of the new services that they have offered — is to partition things. We have product code and we have support tools and test tools. By focusing on the product code as the most important, that allows us to scope down and look at the rest of the information less frequently, because it's less important, less vulnerable.

From a fixing-of-vulnerabilities perspective, often Snyk will recommend just upgrading a library version, and that's clearly very easy. Some of the patching tools are a little more complicated to use. We're a little bit more sensitive about letting SaaS tools poke around in our code base, and we want a bit more control there, but it works. It's really good at focusing our attention in the right way. That's the key thing.

Where something is fixable, it's really easy. The reduction in the amount of time it takes to fix something is in orders of magnitude. Where there isn't a patch already available, then it doesn't make a huge amount of difference because it's just alerting us to something. So where it wins, it's hugely dramatic. And where it doesn't allow us to take action easily, then to a certain extent, it's just telling you that there are "burglaries" in your area. What do you do then? Do you lock the windows or make sure the doors are locked? It doesn't make a huge difference there.

What needs improvement?

One thing I have mentioned in passing: we have a security team and we have the development team, and those two teams work independently of each other. At the moment, if a developer ignores a problem, there's no way for our security team to easily review what has been ignored and make their own determination as to whether that's the right thing to do or not. That dual security-team review process is the thing that would make the most difference to me, and I'd love to see it.

Other than that, there is always more work to do around managing the volume of information when you've got thousands of vulnerabilities. Trying to get those down to zero is virtually impossible, either through ignoring them all or through fixing them. That filtering or information management is always going to be something that can be improved.

For how long have I used the solution?

We've been using Snyk for about 18 months.

What do I think about the stability of the solution?

The stability is pretty good.

We've had two challenges over the two years we've been using Snyk. One was the size of our projects in our JavaScript world, which meant that some of the tests would fail due to memory issues. They've done a lot of work on improving that, and we have found some workarounds.

Sometimes, because we're talking out to Snyk services, our pipelines fail because the Snyk end isn't running successfully. That doesn't happen very often, so it hasn't been a major impact, but there have been one or two cases where things didn't work there.

What do I think about the scalability of the solution?

The solution is scalable, absolutely. We plan to increase our usage of Snyk. As we grow, every developer will be put into it. Everything we build, all of our development, is using Snyk as the security scanning tool.

How are customer service and technical support?

Snyk's technical support is very good. We haven't used it much. I've engaged with customer success and some of the product managers and they're really keen to get feedback on things. 

We have had one or two things where we have talked to support and they have been very positive engagements.

Which solution did I use previously and why did I switch?

We were small enough that we didn't have a previous solution.

How was the initial setup?

The deployment was easy. When we were first evaluating Snyk, our automation engineer got a test account, installed it, and built it into our development pipelines without needing any support at all from Snyk. It was one of the more interesting sales engagements. They sent us an email, but we got it up and going and were using it in its trial mode without needing any assistance at all. That's clearly a demonstration of that ease of integration.

Working end-to-end, it took a couple of days for one person to get it wired up.

We followed the Snyk recommendations. We built a container that takes the Snyk service, and we run that in our build-pipeline. It dropped in very easily because of the way we were already operating.

In terms of developer adoption, we had to mandate it. So everybody uses it. It's built into all the pipelines. Generally, it's pretty good. The engineering team has 17 people and pretty much everybody is using Snyk as part of that. I don't think security is necessarily at the forefront of everybody's minds, and we're working on that. Snyk has helped.

We have a very complex infrastructure so the only challenge with Snyk is that it tells us a lot of information. They're pretty good at managing that, but you still have to take action. It's very good for knowing things, but it's also pretty good at being able to work out how to focus your attention.

That volume of information, where you get lots of things that are not important or not critical, tends to create a little bit of "blindness" to things. We're used to Snyk tests failing, alerting us to things that we're choosing to ignore at that moment because they're not fixable. That's one of the interesting challenges, to turn it into actionable information.

What was our ROI?

We had a lot of information security audits and we found that Snyk enabled sales because they weren't being blocked by InfoSec issues. That means that it probably paid for itself with the first customer deal that we were able to sign. We were able to show them that we had Snyk up and working really quickly, which was great. 

In terms of other metrics, it's slightly harder to measure, because it's allowing us to prevent problems before they become issues. But from a commercial engagement point of view, it was well worth it, very quickly.

What's my experience with pricing, setup cost, and licensing?

It's good value. That's the primary thing. It's not cheap-cheap, but it's good value. We managed to build a package of features that we were able to live with, in negotiation, and that worked really well. We did a mix and match. We got single sign-on and some of the other things.

The Kubernetes and container service, on top of the source-code service, was well worth it for us as a cloud deployment. The capability there has been really useful, but that's clearly an extra cost.

Which other solutions did I evaluate?

There are other tools that can perform some of the functions Snyk does. We did some analysis of competitors, including Black Duck by Synopsys and Veracode, but Snyk was clearly the most hungry and keen to assist, as a business. There were a lot of incumbent competitors who didn't really want our business. It felt like Snyk clearly did want to do the right thing, and they are continuing to improve and mature their product really fast, which is brilliant.

Snyk was at a good price, has very comprehensive coverage, and as a company they were much easier to engage with. It felt like some of the other competitors were very "big boys." With Snyk we had the software working before we'd even talked to a sales guy, whereas with other solutions, we weren't even allowed to see the software running in a video call or a screen-sharing session until we'd had the sales call. It was completely ridiculous.

What other advice do I have?

My advice is just try it. If you've got a modern development pipeline, it's really easy to wire up, if you've got somebody with the right skills to do that. We found with a development community, it's really easy to build these things. Get on with it and try it. It's really easy to trial and see what it's telling you about. That's one of the great upsides of that model: Play with it, convince yourself it's worth it, and then talk to them about buying it.

It's hard to judge Snyk's vulnerability database in terms of comprehensiveness and accuracy. It clearly is telling us a lot of information. I have no reason to doubt that it is very good, but I can't categorically back that up with my own empirical evidence. But I trust them.

I don't get the sense there are many false positives from Snyk, and that's a very positive thing. When it tells us something, it's almost certainly a real issue, or at least that a real issue has been found somewhere in the open-source world. 

What is always harder to manage is to know what to do if there is no resolution. If somebody has found a problem, but there is no fix, then we have a much more interesting challenge around evaluation of whether we should do something. Do we remove that library? Do we try and fix it ourselves, or do we just wait? That process is the more complicated one. It's less of a false positive issue and more an issue of a real finding that you can't do anything about easily. That can sometimes leave you ignoring things simply because there's no easy action to take, and that can be slightly dangerous.

The solution allows our developers to own security for the applications and the containers they run in the cloud, although that's still something we're working on. It's always a challenge to get security to be something that is owned by developers. The DevOps world puts a lot of responsibility on developers, and we're still working to help them know what they need to be doing and to have better processes. We still have a security oversight function that is trying to keep an eye on things. We're still maturing ourselves, as a team, into DevSecOps.

As for Snyk's lack of SAST and DAST, that's just another one of the tools in the toolkit. We do a lot of our own security scanning for application-level or platform-level attacks. We have pen tests. So the static application is not something that we've seen as particularly important, at this point.

Snyk is an eight out of 10. It's not perfect. There are little things that could clearly be improved. They're working on it as a company. They're really engaged. But the base offering is really good. We could also use it better than we are at the moment, but it's well worth it. It's brilliant.

The biggest lesson I have learned from using this solution is that there is a big gap between thinking your software is safe and knowing what the risks are. Information is power. You don't have to take action, but at least you are informed and can make a considered judgment if you take it seriously. That is what Snyk really provides.

The ethos of Snyk as a company is really positive. They're keen to engage with customers and do something in a slightly different way, and that younger, hungrier, more engaged supplier is really nice to work with. They're very positive, which is good.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Security Consultant at a tech services company with 11-50 employees
Consultant
Top 20
Straightforward to install and reports few false positives, but it should be easier to specify your own validation and sanitization routines
Pros and Cons
  • "The most valuable feature is that there were not a whole lot of false positives, at least on the codebases that I looked at."
  • "It should be easier to specify your own validation routines and sanitation routines."

What is our primary use case?

I am a consultant and I work to bring solutions to different companies. Static code analysis is one of the things that I assist people with, and Coverity is one of the tools that I use for doing that.

I worked with Coverity when doing a couple of different PoCs. For these, I got a few different teams of developers together, and we wanted to decide what made the most sense for each team as far as scanning technologies. Part of that is what languages are supported, part of that is how extensible it is, and part of that extensibility is whether the developers have time to actually create custom rules.

We also want to know things like what the professional services are like, and whether people typically need many hours of professional services to get the system spun up. Other factors include whether it is deployed on-premises or in the cloud, and which of those environments it can operate with.

One of the things is that there's not really a shining star among all of these tools. SAST tools have been getting more mature in the past decade, particularly in how fast they run, but also in the results they produce. Of course, framework and language additions that increase capability and improve results are also a consideration.

What is most valuable?

The most valuable feature is that there were not a whole lot of false positives, at least on the codebases that I looked at.

What needs improvement?

It should be easier to specify your own validation routines and sanitization routines.

For example, you have data coming into the application, perhaps through something really simple, like getting a parameter from a web page, such as your username when you go to a website to log in. Ultimately that data is consumed by something: it goes through some business logic and then, let's say, the username is entered into a database.

Well, what if I say my username is a piece of JavaScript calling alert('hello')? Now I've just entered JavaScript code as my username, and you should be able to sanitize that pretty easily with a number of different techniques, to remove the actual executable code from what was entered on the login page. However, once you do that, you want the program to understand that you are doing it, and then remove what looks like a true positive at first glance, because the data being consumed in the SQL exec statement is, in fact, not unsanitized. It's not just coming straight from the web.

Likewise, let's say you log in and then it says, "Hello, so-and-so." You can inject JavaScript code there and have it be executed when it says hello. So basically, I want the ability to say that this routine validates, and above and beyond that, that it validates data coming from any GET parameter on the web. You should be able to specify that a particular routine validates all of that, or that a particular routine validates any time we read data from a database, maybe an untrusted database.

So, if I reach for that data eight times and I say that this routine validates it once, I should also get the option to say it validates it the other seven times, or I could just say it's a universal validator. Obviously, a "God validator," so to speak, is not a good practice because you're sure to miss some edge cases, but having one routine validate three or four different occurrences is not rare and is often not a bad practice.
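
To make the scenario concrete, here is an illustrative TypeScript sketch of the kind of custom sanitization routine being described. The names are hypothetical; the point is that an analyzer should let you declare such a routine as a trusted sanitizer so that the flows it covers stop being flagged.

```typescript
// Hypothetical custom sanitizer: escapes the characters that would let
// injected markup such as <script>alert('hello')</script> execute.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Attacker-controlled input, e.g. a username from a login form.
const rawUsername = "<script>alert('hello')</script>";
const safeUsername = escapeHtml(rawUsername);

// The greeting is now inert text rather than executable code.
console.log(`Hello, ${safeUsername}`);
// -> Hello, &lt;script&gt;alert(&#39;hello&#39;)&lt;/script&gt;

// The wish expressed in this review: mark escapeHtml (and, say, a
// database-read validator) as a sanitizer once, and have the tool clear
// every occurrence it covers instead of flagging each sink separately.
```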

Another thing that Coverity needs to implement or improve is a graphical way to display the data flow. Being able to see an actual graphical view of the data coming in would be very useful. Let's say the first node would be "GET parameter from a webpage," then an arrow to another method like "validate user ID," and then another method, "get data about the user." Next, that goes into the database, and so forth. When that's graphically displayed, it is helpful for developers because they can better grab onto it.

The speed of Coverity can be improved, although that is true for any similar product.

What do I think about the stability of the solution?

It never crashed so stability has not been an issue.

What do I think about the scalability of the solution?

I have never used it for more than four relatively small to medium-sized projects at a time, so I've never needed to scale it.

How are customer service and technical support?

I have dealt with sales engineering, rather than technical support. They would sometimes provide a liaison to tech support if they didn't know the answer, but really, they guided us through the proof of concept and they knew that they were under a competitive evaluation against the other tools. They were able to resolve any issues that we came across and got us up and running fairly quickly, as far as I recall.

How was the initial setup?

Coverity is on the good side when it comes to setting it up. I think that it is pretty straightforward to get up and running.

What about the implementation team?

We implemented Coverity on our own, with guidance from Coverity.

What's my experience with pricing, setup cost, and licensing?

The price is competitive with other solutions.

Which other solutions did I evaluate?

In addition to Coverity, I have experience with Checkmarx, Fortify, Veracode, and HCL AppScan, which was previously known as IBM AppScan.

Checkmarx is probably the most extensible and customizable of these products, and you're able to use the C# language to do so, which a lot of developers are familiar with.

HCL AppScan is another tool that has customization capabilities. They are not as powerful but they are easier to implement because you don't need to write any code.

I cannot give an endorsement for any particular one. They all have their merits and it just depends on the requirements. Generally, however, all of these tools are getting better.

What other advice do I have?

My advice for anybody who is considering this product is to first look around your organization to see if it has already been implemented in another group. If you're a big organization then Coverity or a similar tool may already be in use. In cases like this, I would say that it is best to adopt the same tool because your organization has already gone down that path and there are no huge differences in the capabilities of these tools. Some of them do it in different ways and some do things that others don't, but you won't have the initial bump of the learning curve and you can leverage their experience.

I would rate this solution a seven out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Gustavo Lugo
Chief Solutions Officer at CleverIT B.V.
Reseller
Top 20
Easy to deploy and applicable for various uses
Pros and Cons
  • "It is an easy tool that you can deploy and configure. After that you can measure the history of your obligation and integrate it with other tools like GitLab or GitHub or Azure DevOps to do quality code analysis."
  • "In terms of what can be improved, the areas that need more attention in the solution are its architecture and development."

What is our primary use case?

I am now working in a consultancy company and I work with different clients in different industries. For this reason I implement, for example, a delivery pipeline with a process whereby we need to validate the quality gate of the code. Meaning, the developer creates the unit tests and the code coverage, and the gate enforces the code-coverage threshold that has been set. In other cases, we look at the technical debt, to see if there are any bugs in the applications: web applications, mobile applications, and different languages like C#, JavaScript, Java, et cetera.
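
As a sketch of what that quality-gate validation can look like in a pipeline, the TypeScript snippet below polls SonarQube's documented project_status web API and fails the step when the gate is not passed. The server URL, token variable, and project key are illustrative assumptions, not this reviewer's actual setup; it also assumes Node 18+ or another runtime with a global fetch.

```typescript
// Sketch of a pipeline step that checks a SonarQube quality gate.
// SONAR_URL, SONAR_TOKEN, and the project key are illustrative values.
async function checkQualityGate(projectKey: string): Promise<void> {
  // SonarQube accepts a user token as the username in HTTP basic auth.
  const auth = Buffer.from(`${process.env.SONAR_TOKEN}:`).toString("base64");
  const response = await fetch(
    `${process.env.SONAR_URL}/api/qualitygates/project_status?projectKey=${projectKey}`,
    { headers: { Authorization: `Basic ${auth}` } },
  );
  const body = await response.json();

  // The endpoint reports an overall gate status such as OK or ERROR.
  if (body.projectStatus.status !== "OK") {
    throw new Error(`Quality gate failed: ${body.projectStatus.status}`);
  }
}

checkQualityGate("my-web-app").catch((err) => {
  console.error(err);
  process.exit(1); // fail the pipeline stage
});
```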

We deploy SonarQube on-premises on a Linux server, and our pipelines were created with GitLab and Azure DevOps, meaning that Azure DevOps and GitLab are the tools that do the build and release process.

We use Microsoft Azure and Google Cloud Platform a little.

What is most valuable?

In terms of the most valuable feature, when you configure SonarQube you need to install an extension, and that extension depends on your version control system. You also need to install different extensions to work with specific languages. I work with these extensions across different projects.

What needs improvement?

In terms of what can be improved, the areas that need more attention in the solution are its architecture and development.

Additionally, the QA team also needs attention in different aspects. Think about the support area: when the support team has an incident, they need to do a hotfix. When they do that, they commit to version control. These commits trigger a new build process, and this process needs validation from SonarQube, because we need to validate the quality of the software product for different cases and different aspects.

For how long have I used the solution?

I have been using SonarQube for about four years, with different versions.

What do I think about the stability of the solution?

SonarQube works very well, but I prefer SonarCloud, because the tendency of the technology world is to think less about the infrastructure and more about the process and the value that the process provides.

What do I think about the scalability of the solution?

In terms of scalability, with proper configuration and deployment you can have high availability.

I have companies with 20 users and I have customers with 100 users. We work with big companies in Chile, in some cases national companies and in other cases international companies. The majority of the international companies have more than 1,000 users.

I have a technical DevOps team. The majority of the time we implement the trial version, so that we can show the value of the tool to our clients and they understand the pricing and the cost of the tool.

It depends on the maturity of the company. In some cases, we have companies that don't know about SonarQube, so we deploy it to show its value. In other cases we have clients with no SonarQube experience, but they know about code quality. In those cases we provide a license. In the majority of cases we provide the license, or the subscription for SonarCloud. Other clients get access to SonarQube directly.

How are customer service and technical support?

I have never used technical support from the SonarQube support team.

I work very well with the documentation you find on the internet.

How was the initial setup?

The initial setup is straightforward the majority of time. It takes about two hours.

What about the implementation team?

I work in a consultancy company so we do the implementation. We deploy for our customers.

Which other solutions did I evaluate?

We did evaluate other options, for example Q1 and Veracode. In specific cases we evaluated different aspects with different tools, and these were the top peers that we compared it to: Q1 and Veracode.

In terms of differences, Veracode is used more for the security side of development, and you can configure the gates with software security in mind. With Q1, the difference is the type of license: in Q1 you have projects and you pay per line. I know that SonarQube was changing its licensing plan; right now you pay for the number of lines that you scan, and you pay more as that extends. This is the difference between these three tools.

What other advice do I have?

I do recommend SonarQube because it is an easy tool that you can deploy and configure. After that you can measure the history of your application and integrate it with other tools like GitLab or GitHub or Azure DevOps to do quality code analysis.

On a scale of one to ten, I would give SonarQube an eight. To give it a 10 and not an eight, I would like to see architecture development and the QA area improved.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller
Balaji Senthiappan
Assistant Vice President at Hexaware Technologies Limited
Real User
Top 20
Great at reporting vulnerabilities, helps with security, and reveals development threats well
Pros and Cons
  • "The solution is good at reporting the vulnerabilities of the application."
  • "It would be ideal if I could try some pre-built deployment scenarios so that I don't have to worry about whether the configuration sector team is doing it right or wrong. That would be very helpful."

What is our primary use case?

Currently, we build our products for the banking industry and use this solution in that process.

From a development-cycle perspective, we look at the SQL injection findings, which basically show what a developer may have to address. Then, if there is still a problem, we address it at the architect level. That's at least what is initially reported by the customers when they do another round of review after we deliver our code.

What is most valuable?

The solution is good at reporting the vulnerabilities of the application. 

It can help us with security: SQL injection vulnerabilities, known vulnerabilities, et cetera. Any kind of threat that we get in the development cycle is what we look for, and this solution helps us find them.

What needs improvement?

I can't recall any features that are lacking. In my role as a service provider, I only go up to standards defined by somebody else. So far, this solution has met their standards.

So far I've not come across a scenario where we had to do any major rework because we didn't catch something soon enough in the queries that we are using.

It would be ideal if I could try some pre-built deployment scenarios so that I don't have to worry about whether the configuration sector team is doing it right or wrong. That would be very helpful.

Right now, I can't give it off to a team and expect them to give me a report that I'm happy with. I will give it to a team and they will have to have another person sit with them to make sure they have configured it right. Some kind of pre-designed templates, pre-designed guidelines, or patterns to complement the tool would go a long way in helping us use the solution.

For how long have I used the solution?

I've been using the solution for five or six years at this point.

What do I think about the stability of the solution?

From the perspective of the development cycle that we use, we find it stable enough. I don't use it in production, and I don't have sites running and updating all the time. Once a week, when I build a VM pack and push it into another environment, that's probably the time I use it. For me, it's stable enough.

How are customer service and technical support?

I haven't really used technical support. Therefore, I can't really speak to their level of responsiveness or knowledgeability.

Which solution did I use previously and why did I switch?

I'm not a security specialist; however, to be clear, we provide services. On a development project, we frequently run into various solutions. It's not just OWASP; it could be Veracode, for example, or multiple other tools.

How was the initial setup?

The initial setup is not necessarily straightforward; most setups are complex. You need a senior person who specializes in it, understands the setup in which it is running, and understands the tools they are going to use. You need to ask: do they know what to look for and support? I wouldn't say it's complex to use. That said, the resources are normally costly.

What's my experience with pricing, setup cost, and licensing?

In security, you'd expect the product is priced at a premium, so people don't check the pricing for the most part. In my case, I don't buy the product myself. I have the customers buy it for me. I'm not very worried about the price as a consultant.

What other advice do I have?

We are an IT service provider, which means that we use a variety of tools based on what our customer preferences are. 

There are, at most, I would say, about 20 companies that we have had the funds to use the solution with. OWASP is definitely in the top three tools that we would recommend to our teams as a frequently used tool; however, I don't believe we have any kind of a formal relationship with the company.

Multiple teams use it, and I have not heard anybody complain about anything to do with this particular solution. I would say it's pretty good. I would give it a rating of eight out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.