
Read reviews of Checkmarx alternatives and competitors

Ramesh Raja
Senior Security Architect at a tech services company with 5,001-10,000 employees
Real User
Top 5
Continuously looks at application traffic, adding to the coverage of our manual pen testing
Pros and Cons
  • "We use the Contrast OSS feature that allows us to look at third-party, open-source software libraries, because it has a cool interface where you can look at all the different libraries. It has some really cool additional features where it tells us how many instances of something have been used... It tells us it has been used in 10 of 20 workloads, for example. Then we know for sure that the OSS is being used."
  • "Contrast Security Assess covers a wide range of applications like .NET Framework, Java, PHP, Node.js, etc. But some, such as .NET Core on Ubuntu, are not covered. They have it in their roadmap to have these agents. Once they have that, we will have complete coverage."

What is our primary use case?

We use the solution for application vulnerability scanning and pen-testing. We have a workflow where we use a Contrast agent and deploy it to apps from our development team. Contrast continuously monitors the apps.

When any development team comes to us and asks, "Hey, can you run a pen test and do vulnerability scanning for our application?" we deploy a Contrast agent to their app through that workflow. Because Contrast continuously monitors the app, notifications from Contrast go to the developers who are responsible for fixing that piece of the code. As soon as they see a notification, especially a high or critical one, they go back into Contrast, look at how to fix it, and make changes to their code. It's then quite easy to go back to Contrast and say, "Consider this fixed, and if you see it come back again, report it to us." Since Contrast continuously looks at the app, if the finding doesn't come back in the next two days, we say, "Yeah, that's fixed." It's been working out well in our model so far.

We have pre-production environments where dedicated developers look at it. We also have it in some production environments, so we can switch back and forth.

It's hosted in their cloud and we just use it to aggregate all of our vulnerabilities there.

How has it helped my organization?

If an app team is going to deploy new features to prod, they put in a ticket saying, "We are including these features in our 2.0 release." The ticket comes to our team. We deploy Contrast Security and then do a bunch of manual pen tests. While we're doing the manual pen tests, Contrast turns up a bunch of additional findings, because it's an agent-based, sensor-based solution that continuously looks at traffic coming into and going out of the application. When my team does manual penetration tests, Contrast looks through those flows, and that makes our coverage better. It goes hand-in-hand with our pen-test team: when the manual pen-test team tests the application, Contrast is looking at that traffic. Another application, like Qualys, doesn't go hand-in-hand with a manual pen-test team that way. Contrast really helps us because it's like another resource looking at traffic and logs, a watchman watching traffic going in and out, day in and day out.

Contrast has also reduced the number of false positives we have to deal with, by something like 10 to 20 percent over the 18-plus months that we've had it.

The solution is accurate 90 percent of the time. Most of the time, when Contrast has identified top vulnerabilities in the OWASP Top 10, our manual pen-test team has gone in and said, "Yes, for sure." There were times when, because of resourcing issues, we did not have people pen-testing and they would just say, "Okay, we'll see what Contrast says." And sure enough, Contrast would come back with 10 to 20 critical vulnerabilities. Then we would backtrack and have the manual team do some pen tests. They would come back and say, "Yes, it has identified most of them;" things like a SQL injection, which is in the OWASP Top 10. So we've seen that happen in the past, and that's why I feel the accuracy of Contrast is pretty good.

The advantage of using Contrast is that it is continuous.

I've seen some of the development teams completely take up Contrast themselves and work in Contrast. For example, a developer will be notified of an issue and will fix the code. He will then go back to Contrast and mark it as remediated. Then, he will keep watching the portal. He will be notified if the same vulnerability is found. We have seen teams that completely like the information that Contrast provides and they work independently with Contrast, instead of having a security team guiding them and holding their hands. There are times when we do hold hands for some of the teams, but it really depends on the software developers' maturity and secure coding practices.

In addition, it definitely helps save us time and money by enabling us to fix software bugs earlier in the software development lifecycle. It really depends on where you put Contrast. If you put Contrast in your dev environment, sure enough, as soon as the developer deploys his code and QA is testing it in that environment, it will immediately flag and say, for instance, "You're not using TLS 1.2." The developer will go back and make those changes. It really depends on what model you have and where you want to use Contrast to your advantage. A lot of teams put it in the development environment or a pre-production environment and fix vulnerabilities before something is released.

I've also seen the other side of the fence where people have deployed it in production. The vulnerabilities keep coming. Newer hacks develop over time. When teams put it in prod and an exploit happens, they can use Contrast Protect and block it on the other side. You can use it as you need to use it.

The time it saves us is on the order of one US-based FTE, a security person at an average pay level. At a bare minimum, Contrast does the work of that resource. It's like having a CISSP-certified engineer, in the US, on our payroll. That's how we quantify it in our team and how we did so in our project proposal.

What is most valuable?

Contrast has a feature called Protect. When a real exploit comes through, we can look at it and say, "Hey, yeah, this is a Cross-Site Scripting or SQL Injection," and then we can block it.
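As an illustration of the class of exploit being described (a hypothetical sketch, not Contrast's implementation), here is the classic vulnerable-versus-safe SQL pattern in Java. A runtime agent like Protect instruments the application, so when a real injection payload reaches code like the unsafe method below, it can recognize and block it:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginDao {

    // VULNERABLE: user input is concatenated directly into the query.
    // A payload like  ' OR '1'='1  changes the meaning of the query;
    // this is the kind of exploit a runtime agent can detect and block.
    public boolean unsafeLogin(Connection conn, String user) throws SQLException {
        String sql = "SELECT id FROM users WHERE name = '" + user + "'";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next();
        }
    }

    // SAFE: a parameterized query keeps the payload as data, not SQL.
    public boolean safeLogin(Connection conn, String user) throws SQLException {
        try (PreparedStatement stmt =
                 conn.prepareStatement("SELECT id FROM users WHERE name = ?")) {
            stmt.setString(1, user);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```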

Another especially valuable feature is the stack trace. I've been in the application security space for about 15-plus years now. I saw it when it was in its infancy, when people thought of it as the "icing on the cake," something to be looked at only when there was money for it. Now, security is a part of the SDLC. So when Contrast identifies a vulnerability, it provides very important information, like the stack trace and variables.

It also has another feature called IAST, interactive application security testing. When I started out I was actually an embedded developer, and now I'm managing an OWASP team. I've seen both ends of the spectrum and I feel that the information Contrast provides for every vulnerability is really cool and amazing, enabling us to go and fix the vulnerabilities.

It also has features so you can tweak a policy. You can make a rule saying, "Hey, if this vulnerability comes back, it is not an issue." Or you can go and change some code in a module and tell Contrast, "This is per-design." Contrast will cleverly identify and recognize that it was marked as per-design. It will not come back and say that's a vulnerability.

We use the Contrast OSS feature that allows us to look at third-party, open-source software libraries, because it has a cool interface where you can look at all the different libraries. It has some really cool additional features where it tells us how many instances of something have been used. For example, out of, say, 500 total calls, how many times has the library actually been used? It tells us it has been used in 10 of 20 workloads, for example. Then we know for sure that the OSS is being used. There are tools that will tell you something is included, but developers can include libraries that are never actually used. Contrast goes one step further and tells you how many times something has been used.

I can't quantify the effect of the OSS feature on our software development, but it gives us a grading from A to F. In this evolving security world, customers come back to us and say, "Hey, do you guys have a pen test report?" We can go back to Contrast, pull all this stuff, and provide it to customers.

What needs improvement?

Contrast Security Assess covers a wide range of applications like .NET Framework, Java, PHP, Node.js, etc. But some, such as .NET Core on Ubuntu, are not covered. They have it in their roadmap to have these agents. Once they have that, we will have complete coverage.

Let's say you have .NET Core in an Ubuntu setup. You probably don't have an agent that you could install at all. If Contrast builds those agents and provides wide coverage, that will make it a masterpiece. So they should explore more of the technologies that they don't support, including some of the newer and future technologies. For example, Google is coming up with its own OS. If they can support agent-based or sensor-based technology there, that would really help a lot.

For how long have I used the solution?

I have been using Contrast Security Assess for a year and a half.

What do I think about the stability of the solution?

There isn't much to quantify about stability. It's on autopilot, like any agent; it's much like a process monitor that keeps looking at traffic. Once you put it on there, it just hangs in there until the infrastructure team decides to move the old apps from PCF to another environment. Once it has been deployed, it's done. It's all auto-maintained.

What do I think about the scalability of the solution?

It depends on how many apps a company or organization has, but whatever apps you have, you can scale it to them. It has wide coverage. Once you install it on an app server, even if the app is very convoluted and has too many workflows, that is no problem. Contrast is per app. It's not like source-code tools, which charge by lines of code, per KLOC. Here, it's per app. You can pick 50 apps or 100 apps and then scale it. If the app is complex, that's still no problem, because it's all per app.

We have continuously increased our license count with Contrast, because of the ease of deployment and the ease of remediating vulnerabilities. We had a fixed set for one year. When we updated about six months ago, we did purchase extra licenses and we intend to ramp up and keep going. It will be based on the business cases and the business apps that come out of our organization.

Once we get a license for an app, folks who are project managers and scrum masters, who also have access to Contrast, get emails directly. They know they can put defects right from Contrast into JIRA. We also have other tools that we use for integration, like ThreadFix, and risk, compliance, and governance tools. We take the results and upload them to those tools for the audit team to look at.

How are customer service and technical support?

They have a cool, amazing support team that really helps us. I've seen a bunch of other vendors where you put in tickets and they get back to you after a few days. But Contrast responds really fast. From the word "go," Contrast support has been really awesome.

That's their standard support; they don't have premium support. I've worked with different vendors, doing evaluations, and Contrast is top-of-the-line there.

Which solution did I use previously and why did I switch?

Before Contrast we were using regular manual pen-testing tools like Burp and other common tools. We switched to Contrast because the way it scans is different. Back in those days, security would do a pen test on Friday or Saturday — over the weekend, when traffic was lighter. We used to set aside time for it. Contrast doesn't work that way; it scans continuously. We install an agent and it continuously does its job. Continuous is way better than having a separate window where you say, "We're going to scan at this time." The DevSecOps model is continuous and Contrast fits well there. That's why we made the switch.

Contrast is above par with respect to the different applications that I've used in the past, like Veracode. I saw false positives and false negatives with all those tools. But Contrast is better than all the other tools that I've used.

How was the initial setup?

The initial setup was straightforward. At the time, I was doing a proof of concept of Contrast Security to see how it works. It was fairly simple. Our company has a bunch of apps in various environments. Initially, we wanted to make sure that it works for .NET, Java, and PCF before we procured it. It was easy.

Our implementation strategy was coverage for a complete .NET application and then coverage for a complete Java application, in and out, where you find all the vulnerabilities and you have all the different remediation steps. Then we set up meetings with the app teams to go over some of it and explain things. And then, we had a bunch of apps in PCF. These were the three that we wanted: .NET, Java, and PCF. They are our bread and butter. We did all three in 45 days.

From our side, it was just me and another infrastructure guy involved.

What about the implementation team?

We only worked with Contrast. There were times when Contrast worked with Pivotal, internally, for PCF. But they pulled it off because they have a fairly good agreement with Pivotal and its support team. Initially, we had a few issues with deploying a Contrast tile to Pivotal. But Contrast worked things out with Pivotal and got all of it up for us. It was easy for us to just deploy the tile and bind the application. Once the application is bound, it's all about the vulnerabilities and remediation.

What was our ROI?

We expect to see ROI with the architecture team, the infrastructure team, and the development teams, especially when it comes to how early in our development cycle vulnerabilities are found and remediated. That plays a big part because the longer it takes to find a software vulnerability, the higher your cost to market will be.

What's my experience with pricing, setup cost, and licensing?

I like the per-application licensing model, but there are reasons why some solutions want to do per KLOC. For us, especially because it's per app, it's really easy. We just license the app and we look at different vulnerabilities on that app and we remediate within the app. It's simpler.

If you have to go to somebody, like a Dev manager and ask him, "Hey, how many thousands of lines of code does your application have?" he will be taken aback. He'll probably say, "I don't know." It's difficult to cost-segregate and price things in that kind of model. But if, like with Contrast, they say, "Hey, your entire application — however big it is, we don't care. We're just going to use one license," that is simpler. This type of license model works better for us.

Which other solutions did I evaluate?

Before choosing Contrast Assess, we looked at Veracode and Checkmarx. 

Contrast does things continuously, so it's more of an IAST. Checkmarx didn't. With it, you would have to upload a .war file and then it would do its analysis. You would then go back to the portal and see the vulnerabilities there.

It was the same with Veracode. With the SAST piece or the DAST piece, you have to set aside specific timing for some workflows, upload all of the stuff to their portal, and wait for results. The results would only come after three days, or after five days, depending on how long it takes to scan that specific workflow.

The way the scanning is done is fundamentally different in Contrast compared to how those solutions do it. You just install Contrast on the app server and voilà — within five minutes you might see some vulnerabilities when you use that application workflow.

What other advice do I have?

If you are thinking about Contrast, you should evaluate it for your specific needs. Companies are different and the way they work is different. I know a bunch of companies that still have the Waterfall model. So evaluate and see how it fits in your model. It's very easy to go and buy a tool, but if it does not fit well in your processes and in your software development lifecycle, it will be wasted money. My strongest advice is: see how well it fits in your model and in your environment. For example, are developers using more of pre-production? Are they using a dev sandbox? How is QA working and where do they work? It should work in your process and it should work in your business model.

"Change" is the lesson I have taken away by using Contrast. The security world evolves and hackers get smarter, more sophisticated, and more technology-driven. Back in the day when security was very new, people would say a four-letter or six-letter password was more than enough. But now, there is distributed computing, where they can have a bunch of computers trying to compute permutations and combinations of your passwords. As things change, Contrast has adapted well to all the changes. Even five years ago, people would sit in a war room and deploy on weekends. Now, with the DevOps and Dev-SecOps models, Contrast is set up well for all the changes. And Contrast is pretty good in providing solutions.

Contrast is not like other, traditional tools where, as you write the code they immediately tell you there is a security issue. But when you have the plugin and something is deployed and somebody is using the application, that's when it's going to tell you there's an issue. I don't think it has an on-desktop tool where, when the developer writes this code, it's going to tell him about an issue at that time, like a Veracode Greenlight. It is more of an IAST.

We don't have specific people for maintenance. We have more of a DevSecOps model. Our AppSec team has four people, so we distribute the tasks and share them with the developers. We set up a Teams integration with them, or a notification, so that as soon as Contrast finds something, they get notified. We try to integrate teams and integrate notifications. Our concern is more about when a vulnerability is found and how long it takes the developer to fix it. We have worked all that out with Power BI, so it actually shows us, when a vulnerability is found, how long it takes to remediate. It's more like autopilot; it's not a maintenance type of thing.

I would rate Contrast at nine out of 10. I would never give anything a 10, but Contrast is right up there.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Security Architect at a financial services firm with 1,001-5,000 employees
Real User
Top 20
Effective at preventing vulnerable code from going into production, but static analysis is prone to false positives
Pros and Cons
  • "The policy reporting for ensuring compliance with industry standards and regulations is pretty comprehensive, especially around PCI. If you do the static analysis, the dynamic analysis, and then a manual penetration test, it aggregates all of these results into one report. And then they create a PCI-specific report around it which helps to illustrate how the application adheres to different standards."
  • "The static analysis is prone to a lot of false positives. But that's how it is with most static analysis tools... Also, the static analysis can sometimes take a little while. The time that it takes to do a scan should be improved."

What is our primary use case?

We use it to scan our web applications before we publish them to see if there are any security vulnerabilities. We use it for static analysis and dynamic analysis.

How has it helped my organization?

Veracode has helped immensely with developer security training and in building developer security skills. Before we implemented it, we would find a lot more vulnerabilities in our applications. Now, with Veracode, the developers have started doing a lot more secure coding and they have much better coding practices.

It has also helped our organization to review code quicker, about 50 percent quicker, and to deploy more secure code.

And when it comes to the solution's ability to prevent vulnerable code from going into production, so far, I haven't seen any instances in which we've had false negatives. So it's pretty effective at that.

What is most valuable?

Among the most valuable features are the ability to 

  • submit the software and get automated scan results from it
  • collaborate with developers through the portal while looking at the code
  • create compliance reports.

Otherwise, we would have to do working sessions with developers, pull together all the different findings, and then probably manage it all in a separate mechanism like Excel. And having to go through source code manually would be quite time-intensive and tedious.

The solution also provides you with some guidance as well as best practices around how vulnerabilities should be fixed. It points you in that direction and gives the developers educational cues.

In addition, the policy reporting for ensuring compliance with industry standards and regulations is pretty comprehensive, especially around PCI. If you do the static analysis, the dynamic analysis, and then a manual penetration test, it aggregates all of these results into one report. And then they create a PCI-specific report around it which helps to illustrate how the application adheres to different standards.

The solution also integrates with developer tools such as Visual Studio and Eclipse.

What needs improvement?

It's pretty efficient, but sometimes the static analysis is prone to a lot of false positives. But that's how it is with most static analysis tools. In some cases, they might have other mechanisms which would deal with a particular vulnerability, but it wouldn't be captured in the code. I would estimate the false positive rate at about 20 percent.
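As a hypothetical Java illustration of that pattern (not code from any reviewed codebase), the mitigating mechanism can live in a different layer from the flagged sink, so a static analyzer that only follows the code it can see reports a finding that is a false positive in practice:

```java
import java.util.Set;

public class ReportQuery {

    // The mitigating control lives here, possibly far from the query below.
    // A static analyzer that doesn't connect the two layers will still flag
    // the concatenation as SQL injection -- a false positive in practice.
    private static final Set<String> ALLOWED_COLUMNS =
            Set.of("name", "created_at", "status");

    static String validateColumn(String column) {
        if (!ALLOWED_COLUMNS.contains(column)) {
            throw new IllegalArgumentException("Unknown sort column: " + column);
        }
        return column;
    }

    // Typically flagged: string concatenation into SQL. Safe here because
    // every caller is required to go through validateColumn() first.
    static String buildQuery(String column) {
        return "SELECT * FROM reports ORDER BY " + column;
    }

    public static void main(String[] args) {
        System.out.println(buildQuery(validateColumn("status")));
    }
}
```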

Upon review, the developers understand the solution. But when they get the initial list of findings, it can be a bit daunting to them if it's not managed appropriately.

Also, the static analysis can sometimes take a little while. The time that it takes to do a scan should be improved. There are times when we need a quick turnaround but it will take a little while. We might have something scanning and not get a result until the following day. It's not too critical, but it does increase the delay. Most of the time, when developers submit their code, because of the way that we use it, it's because in their minds they're ready to have that code deployed into production. But the security testing, especially with the feedback, introduces additional time into the project, especially if a security fix is needed.

For how long have I used the solution?

I have been using Veracode for about two years.

What do I think about the stability of the solution?

There have been no issues with the stability. We haven't had any outages or any unavailability of the system, so far.

What do I think about the scalability of the solution?

We have about 40 developers but we use this product per project rather than per developer. All our projects will pass through this product. At any given time we have about 10 to 12 projects going on. Outside of developers, it's just the five security team members who also use Veracode.

Any increase of usage will be based on the business and if there are more software projects. Whenever there are additional software projects, we will then increase our usage.

How are customer service and technical support?

Their technical support is good, but we haven't really had to use it much, so far.

How was the initial setup?

The initial setup was pretty straightforward but, depending on the type of applications or the types of code that you're using, the setup requirements may be a little different. It takes a little getting used to, based on the environment in which you're working.

For example, Visual Studio might have specific requirements for packaging an application for scanning, whereas an Angular application would have different requirements. For me, as a non-developer, the issue would be around understanding those different requirements for each development environment.

Our deployment didn't take long; it took a couple of days. There were three people involved, including a developer, someone setting it up, and a code reviewer. By "setting it up" I mean putting in the applications and saying what each application does—providing the business rules of the application.

We didn't have a specific strategy for deploying it. The software is pretty straightforward, once you have the application bundles to be scanned. There's not a whole lot to do after the packaging.

Maintenance-wise, it doesn't take much because it's SaaS. We don't really do much on our end.

What about the implementation team?

We did it in-house with Veracode. Working with Veracode for the deployment was pretty easy, pretty straightforward.

What was our ROI?

We've seen ROI in that we've cut down on the number of penetration tests we've been doing by about 50 percent, and also because of the stage at which the vulnerabilities are found, before they get into production. That means the risk has also been reduced.

It has reduced the cost of application security for our organization, but more than it has reduced the cost, it provides better software assurance.

What's my experience with pricing, setup cost, and licensing?

In addition to the standard licensing fees there's a support cost and an implementation cost at the beginning.

Which other solutions did I evaluate?

This year I looked at other vendors in the market, including Synopsys, Contrast, and Checkmarx. What I didn't like about them is that their licensing models are based on how many developers you have. That wasn't a good fit for me. In addition, Checkmarx didn't have a SaaS solution.

What other advice do I have?

If you are doing pipeline-based implementation, it would be more complex than the way that I'm doing this, but I didn't see any real challenges that would be tool-specific or vendor-specific, with implementation.

Your development model will really determine what the best fit is for you in terms of licensing, because of the project-based licensing. If you do a few projects, that's more attractive. If you have a large number of developers, that would also make the product a little more attractive. But if you have maybe one or two developers doing many projects, then you might look more towards software that has a developer-centric model.

We don't use the Static Analysis Pipeline Scan because of the build process that our developers use. They don't really have an automated build pipeline in which they push the code to production. Also, with the false positive rate, it's a bit tricky when you implement that into the pipeline, as it might stop a developer from pushing code out to test. We use it more like a gate. The developers submit the code to us and then we scan it and review it with them.

The biggest lesson I've learned from using Veracode is that you need to manage it with the developers, so that you speak through the findings with them. It's not just a tool that you throw down their throats.

Overall, I would rate it at seven out of 10. Ideally, I would prefer a product that had the interactive testing, as well as the ability to scan a little faster.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Raja_Reddy
Manager at kellton
Real User
Top 20
Good integration and useful feedback features, such as the Quality Gate
Pros and Cons
  • "One of the most valuable features of SonarQube is its ability to detect code quality issues during development. There are rules covering various technologies—Java, C#, Python, everything—and these rules declare the coding standards and code quality. With SonarQube, everything is detectable during development and continuous integration, which is an advantage. SonarQube also has a Quality Gate, where the code should reach 85%. Below that, the code cannot be promoted to a further environment; it stays in the development environment only. So the checks are there, and SonarQube enforces them. It also provides suggestions on how the code can be fixed and methods of going about this, without allowing hackers to exploit the code. Another valuable feature is that it is tightly integrated with third-party tools. For example, we can see the SonarQube metrics in Bitbucket, the code repository. Once I raise the pull request, the developer, team lead, or even the delivery lead can see the code quality metrics of the deliverable so that they can make a decision. SonarQube will also cover all of the top OWASP vulnerabilities; however, it doesn't do penetration testing or hacker testing. We use other tools, like Checkmarx, to do penetration testing from the outside."
  • "SonarQube could be improved with more dynamic testing—basically, right now, it's a static code analysis scan. For example, when the developer writes the code and does the corresponding unit test, he can cover functional and non-functional cases. So SonarQube could be improved by helping to execute unit tests and test dynamically, using various parameters, to help detect any vulnerabilities. Currently, it'll just give the test case and say whether it passes or fails—it won't give you any other input or dynamic testing. They could use artificial intelligence to build a feature that would help developers identify and fix issues in the early stages, which would help us deliver the product and reduce costs. Another area with room for improvement is in regard to automating things, since the process currently needs to be done manually."

What is our primary use case?

Our primary use case of SonarQube is getting feedback on code. We are using Spring Boot and Java 8. We are also using SonarLint, which is an Eclipse IDE plugin, to detect vulnerabilities during development. Once the developer finishes the code and commits it into the Bitbucket code repository, the continuous integration pipeline runs automatically using Jenkins. As part of this pipeline, there is a build, unit tests, and a SonarQube scan. All the parameters are configured per project requirements, and the SonarQube scan runs immediately once the developer commits the code to the repository. The advantage of this is that we see immediate feedback: how many vulnerabilities there are, what the code quality is, the code quality metrics, and whether there are any issues with the changes we made. Since the feedback is immediate, the developer can rectify issues immediately and commit further changes. This helps us with product quality and with having fewer vulnerabilities in the early stages of development.
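As a rough sketch of how a pipeline step can act on that immediate feedback, the following Java snippet queries SonarQube's api/qualitygates/project_status Web API after a scan and fails the build when the Quality Gate is not passed. The server URL, project key, and token are placeholders, and a real step would parse the JSON properly instead of string-matching:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QualityGateCheck {

    public static void main(String[] args) throws Exception {
        // Placeholders -- substitute your own server, project key, and token.
        String server = "https://sonarqube.example.com";
        String projectKey = "my-spring-boot-app";
        String token = System.getenv("SONAR_TOKEN");

        // SonarQube accepts an access token as the username in HTTP Basic auth.
        String auth = Base64.getEncoder().encodeToString((token + ":").getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(server + "/api/qualitygates/project_status?projectKey="
                        + projectKey))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response JSON contains "projectStatus":{"status":"OK"|"ERROR",...}.
        boolean passed = response.body().contains("\"status\":\"OK\"");
        System.out.println(passed ? "Quality Gate passed" : "Quality Gate FAILED");
        if (!passed) {
            System.exit(1); // a non-zero exit fails the CI stage
        }
    }
}
```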

This solution is deployed on-premise. 

What is most valuable?

One of the most valuable features of SonarQube is its ability to detect code quality issues during development. There are rules covering various technologies—Java, C#, Python, everything—and these rules declare the coding standards and code quality. With SonarQube, everything is detectable during development and continuous integration, which is an advantage. SonarQube also has a Quality Gate, where the code should reach 85%. Below that, the code cannot be promoted to a further environment; it stays in the development environment only. So the checks are there, and SonarQube enforces them. It also provides suggestions on how the code can be fixed and methods of going about this, without allowing hackers to exploit the code.

Another valuable feature is that it is tightly integrated with third-party tools. For example, we can see the SonarQube metrics in Bitbucket, the code repository. Once I raise the pull request, the developer, team lead, or even the delivery lead can see the code quality metrics of the deliverable so that they can make a decision. SonarQube will also cover all of the top OWASP vulnerabilities; however, it doesn't do penetration testing or hacker testing. We use other tools, like Checkmarx, to do penetration testing from the outside.

What needs improvement?

SonarQube could be improved with more dynamic testing—basically, right now, it's a static code analysis scan. For example, when the developer writes the code and does the corresponding unit test, he can cover functional and non-functional cases. So SonarQube could be improved by helping to execute unit tests and test dynamically, using various parameters, to help detect any vulnerabilities. Currently, it'll just give the test case and say whether it passes or fails—it won't give you any other input or dynamic testing. They could use artificial intelligence to build a feature that would help developers identify and fix issues in the early stages, which would help us deliver the product and reduce costs.

Another area with room for improvement is in regard to automating things, since the process currently needs to be done manually.

Aside from other helpful features, the most important thing that SonarQube needs to do—the key feature—is to detect security vulnerabilities. The other features are helpful to the developer and the team in delivering the product faster, but security is a mandatory feature.

As for additional features, SonarQube covers most of the languages, but there is still room for improvement covering the latest version of the tech stack—for example, Java 13. They're still improving, and they're focusing on SonarCloud nowadays. Currently, we aren't using all the top quality features of SonarCloud. I also think it would be helpful if SonarQube could integrate with Jira, a work management tool, or other communication tools, like Skype or Microsoft Teams, so that a bot could report directly to the developer. 

For how long have I used the solution?

I have been using SonarQube for the past three years. 

What do I think about the stability of the solution?

The stability and performance of SonarQube are good. We use it on a daily basis, as part of our code development. 

As far as maintenance, it mainly happens while the product is being developed. There may be some features that can be enhanced based on customer feedback and the tech stack, such as how we can improve performance or have a deployment with zero downtime. There are so many technologies coming and so many things happening, and there is always room for improvement in the code and the product we develop. Our top considerations are quality and security, which are improved in a continuous process. There are many new features and enhancements coming in—for example, if you upgrade from Java 6, you can upgrade the tech stack, which will reduce the number of lines of code and improve performance.

What do I think about the scalability of the solution?

This solution is easy to scale. The instances on which we deploy it are easy to scale because we are using Ansible, Kubernetes, and Docker. We use it in production, not just as part of development, and the scalability is there.

In our organization, there are currently around 25,000 people working with SonarQube. 

Which solution did I use previously and why did I switch?

We also use Checkmarx and Snyk. One of the main differences between them and SonarQube is that they have dynamic testing and analysis, rather than static analysis. 

How was the initial setup?

The initial setup wasn't a complex process. It was straightforward, and I had no issues. The deployment happened automatically and the pipeline was complete in three minutes. It depends on the scale of the project, the number of code repositories, the number of modules you are deploying, and all that. I would say deployment should take five minutes, maximum. 

What about the implementation team?

We implemented this solution through an in-house team. Everything happens internally and we have our own internal tools, so there are no third parties involved in development.

What's my experience with pricing, setup cost, and licensing?

I'm not too aware of the pricing because a different team covers that, but SonarQube has been on the market for a very long time, so I would guess the pricing would be decent. 

What other advice do I have?

I rate SonarQube an eight out of ten. 

To those looking to implement SonarQube, I would advise you not to run it manually—integrate it with tools like Bitbucket and Jenkins and make it automatic. If you change one line of code, SonarQube should run automatically and give you the report. Don't go and run it manually and check the reports—it should run automatically against the entire code base, not just your particular module. So you need to configure that, as well as your project requirements and what code quality metrics will be achievable—like 85% or 95%—because you want code quality for a better product, without loopholes. You need to configure these things before starting to work with SonarQube.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Security Consultant at a tech services company with 11-50 employees
Consultant
Top 20
Straightforward to install and reports few false positives, but it should be easier to specify your own validation and sanitization routines
Pros and Cons
  • "The most valuable feature is that there were not a whole lot of false positives, at least on the codebases that I looked at."
  • "It should be easier to specify your own validation routines and sanitization routines."

What is our primary use case?

I am a consultant and I work to bring solutions to different companies. Static code analysis is one of the things that I assist people with, and Coverity is one of the tools that I use for doing that.

I worked with Coverity when doing a couple of different PoCs. For these, I get a few different teams of developers together and we decide what makes the most sense for each team as far as scanning technologies go. Part of that is what languages are supported, part of that is how extensible it is, and part of that extensibility is whether the developers have time to actually create custom rules.

We also want to know things like what the professional services are like, and whether people typically need many hours of professional services to get the system spun up. Other factors include whether it is deployed on-premises or in the cloud, and which of those environments it can operate in.

One thing to note is that there's not really a shining star among all of these tools. SAST tools have been getting more mature in the past decade, particularly in how fast they run, but also in the results they produce. Of course, framework and language additions that improve the results are also considered.

What is most valuable?

The most valuable feature is that there were not a whole lot of false positives, at least on the codebases that I looked at.

What needs improvement?

It should be easier to specify your own validation routines and sanitization routines.

For example, suppose you have data coming into the application—perhaps something really simple, like a parameter from a web page that is your username when you go to a website to log in. That data goes through some business logic and is ultimately consumed by something that, let's say, enters the username into a database.

Well, what if I say my username is something like <script>alert('hello')</script>? Now I've just entered JavaScript code as my username. You should be able to sanitize that pretty easily, with a number of different techniques, to remove the executable code from what was entered on the login page. However, once you do that, you want the tool to understand that you are doing it and remove what looks like a true positive at first glance, because the data being consumed in the SQL exec statement is, in fact, no longer unsanitized. It's not coming straight from the web.

Likewise, let's say you log in and the page says, "Hello, so-and-so." You could inject JavaScript code there and have it executed when the greeting is displayed. So basically, I want the ability to say that this routine validates, and, above and beyond that, that it validates data coming from any GET parameter on the web. You should be able to specify that a particular routine validates all of that, or that a particular routine validates any data read from a database, maybe an untrusted one.

So, if I reach for that data eight times and I say that this routine validates it once, I should also get the option to say it validates it the other seven times, or to say it's a universal validator. Obviously, a "God validator," so to speak, is not a good practice, because you're sure to miss some edge cases, but having one routine validate three or four different occurrences is not rare and is often not a bad practice.
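To make that concrete, here is a minimal Java sketch (hypothetical names, not Coverity's configuration syntax) of the kind of custom sanitization routine being described. The point is that once data has passed through sanitizeHtml, the tool should let you declare that routine a sanitizer so downstream sinks stop being reported:

```java
public class Greeting {

    // A custom sanitizer: HTML-encodes the characters that enable script
    // injection. In a SAST tool, you would want to register a routine like
    // this as a sanitizer so data that passes through it is no longer
    // treated as tainted.
    static String sanitizeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Simulates a username arriving as an untrusted GET parameter.
        String username = "<script>alert('hello')</script>";

        // Without encoding, this string would execute in the browser;
        // after encoding, it renders as harmless text.
        System.out.println("Hello, " + sanitizeHtml(username));
    }
}
```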

Another thing that Coverity needs to implement or improve is a graphical way to display the data flow. If you could see an actual graphical view of the data coming in, it would be very useful. Let's say the first node is "GET parameter from a web page," with an arrow to another method like "validate user ID," then another method, "get data about the user," then the database write, and so forth. When that's displayed graphically, it helps developers because they can better grasp it.

The speed of Coverity can be improved, although that is true for any similar product.

What do I think about the stability of the solution?

It never crashed so stability has not been an issue.

What do I think about the scalability of the solution?

I have never used it for more than four relatively small to medium-sized projects at a time, so I've never needed to scale it.

How are customer service and technical support?

I have dealt with sales engineering, rather than technical support. They would sometimes provide a liaison to tech support if they didn't know the answer, but really, they guided us through the proof of concept and they knew that they were under a competitive evaluation against the other tools. They were able to resolve any issues that we came across and got us up and running fairly quickly, as far as I recall.

How was the initial setup?

Coverity is on the good side when it comes to setting it up. I think that it is pretty straightforward to get up and running.

What about the implementation team?

We implement Coverity on our own, with guidance from Coverity.

What's my experience with pricing, setup cost, and licensing?

The price is competitive with other solutions.

Which other solutions did I evaluate?

In addition to Coverity, I have experience with Checkmarx, Fortify, Veracode, and HCL AppScan, which was previously known as IBM AppScan.

Checkmarx is probably the most extensible and customizable of these products, and you're able to use the C# language to do so, which a lot of developers are familiar with.

HCL AppScan is another tool that has customization capabilities. They are not as powerful but they are easier to implement because you don't need to write any code.

I cannot give an endorsement for any particular one. They all have their merits and it just depends on the requirements. Generally, however, all of these tools are getting better.

What other advice do I have?

My advice for anybody who is considering this product is to first look around your organization to see if it has already been implemented in another group. If you're a big organization then Coverity or a similar tool may already be in use. In cases like this, I would say that it is best to adopt the same tool because your organization has already gone down that path and there are no huge differences in the capabilities of these tools. Some of them do it in different ways and some do things that others don't, but you won't have the initial bump of the learning curve and you can leverage their experience.

I would rate this solution a seven out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Nawal Singh
Senior DevSecOps/Cloud Engineer at Valeyo
Real User
Top 20
Provides information about the issue as well as resolution, easy to integrate, and never fails
Pros and Cons
  • "It has a nice dashboard where I can see all the vulnerabilities and risks that they provided. I can also see the category of any risk, such as medium, high, and low. They provide the input priority-wise. The team can target the highest one first, and then they can go to medium and low ones."
  • "Its reports are nice and provide information about the issue as well as resolution. They also provide a proper fix. If there's an issue, they provide information in detail about how to remediate that issue."
  • "It would be great if they can include dynamic, interactive, and run-time scanning features. Checkmarx and Veracode provide dynamic, interactive, and run-time scanning, but Snyk doesn't do that. That's the reason there is more inclination towards Veracode, Checkmarx, or AppScan. These are a few tools available in the market that do all four types of scanning: static, dynamic, interactive, and run-time."
  • "We have to integrate with their database, which means we need to send our entire code to them to scan, and they send us the report. A company working in the financial domain usually won't like to share its code or any information outside its network with any third-party provider."

What is our primary use case?

We are using Snyk along with SonarQube, and we are currently more reliant on SonarQube.

With Snyk, we've been doing security and vulnerability assessments. Even though SonarQube does some of the same when we install the OWASP plugin, we are looking for a dedicated, expert tool in this area that can handle all the security for the code, not just one or two things.

We have the latest version, and we always upgrade it. Our code is deployed on the cloud, but we have attached it directly with the Azure DevOps pipeline.

What is most valuable?

It is a nice tool to check the dependencies of your open-source code. It is easy to integrate with your Git or source control. 

It has a nice dashboard where I can see all the vulnerabilities and risks that they provided. I can also see the category of any risk, such as medium, high, and low. They provide the input priority-wise. The team can target the highest one first, and then they can go to medium and low ones. 

Its reports are nice and provide information about the issue as well as resolution. They also provide a proper fix. If there's an issue, they provide information in detail about how to remediate that issue.

It is easy to integrate with our pipeline, and we just need to schedule our scanning. It does that overnight and sends the report through email in the early morning. This is something most of the tools have, but here it all comes in one package.
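As a small sketch of how such a scheduled scan can be wired up (hypothetical file names; it assumes the snyk CLI is installed and authenticated, for example via SNYK_TOKEN), a nightly job can shell out to snyk test --json and archive the report:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NightlySnykScan {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Run the Snyk CLI against the current project and capture its JSON report.
        Process proc = new ProcessBuilder("snyk", "test", "--json").start();

        byte[] report = proc.getInputStream().readAllBytes();
        int exitCode = proc.waitFor();

        // Archive the report where the scheduler can email or publish it.
        Files.write(Path.of("snyk-report.json"), report);

        // The Snyk CLI exits non-zero when vulnerabilities are found (or on
        // error), which a scheduler or pipeline can use to trigger alerts.
        System.out.println(exitCode == 0
                ? "No vulnerabilities found"
                : "Issues found -- see snyk-report.json");
    }
}
```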

It has never failed, and it is very easy, reliable, and smooth.

What needs improvement?

It would be great if they can include dynamic, interactive, and run-time scanning features. Checkmarx and Veracode provide dynamic, interactive, and run-time scanning, but Snyk doesn't do that. That's the reason there is more inclination towards Veracode, Checkmarx, or AppScan. These are a few tools available in the market that do all four types of scanning: static, dynamic, interactive, and run-time.

We have to integrate with their database, which means we need to send our entire code to them to scan, and they send us the report. A company working in the financial domain usually won't like to share its code or any information outside its network with any third-party provider. Such companies try to build the system in-house, and their enterprise-level licensing cost is really huge. There is also an overhead of updating the vulnerability database.

For how long have I used the solution?

It has been more than one and a half years. 

What do I think about the stability of the solution?

It is stable. I haven't had any problems with its stability.

What do I think about the scalability of the solution?

It is easy. We have integrated Snyk with two to four projects, and we do run scanning every week to check the status and improvement in the quality of our code.

Currently, only I am using this solution because I'm handling all the stuff related to infrastructure and DevOps stuff in my company. It is a very small company with 100 to 200 people, and I am kind of introducing this tool in our organization to have enterprise-level stuff. I have used this tool in my old organization, and that's why I am trying to implement it here. I am the only DevOps engineer who works in this organization, and I want to integrate it with different code bases.

How are customer service and technical support?

I've never used their technical support.

How was the initial setup?

It is really straightforward. If someone has set up a simple pipeline, they can integrate it in no time.

What's my experience with pricing, setup cost, and licensing?

Pricing-wise, it is not expensive as compared to other tools. If you have a couple of licenses, you can scan a certain number of projects. It just needs to be attached to them.

What other advice do I have?

I have been using this solution for one and a half years, and I definitely like it. It is awesome in whatever it does right now.

It is a really nice tool if you really want to do the dependency check and security scanning of your code, which falls under static code analysis. You can implement it and go for it for static code analysis, but when it comes to dynamic, interactive, and run-time scanning, you should look for other tools available in the market. These are the only things that are missing in this solution. If it had these features, we would have gone with it because we have already been using it for one and a half years. Now, the time has come where we are looking for new features, but they are not there.

Considering the huge database they have, all the binaries it scans, and other features, I would rate Snyk an eight out of 10. 

Disclosure: I am a real user, and this review is based on my own experience and opinions.