Buyer's Guide
Application Security Tools
July 2022

Read reviews of Veracode alternatives and competitors

VP of Engineering at a tech vendor with 11-50 employees
Real User
Top 10 Leaderboard
Scans our thousands of dependencies every time we build and rechecks them daily, making us aware of what's going on
Pros and Cons
  • "We're loving some of the Kubernetes integration as well. That's really quite cool. It's still in the early days of our use of it, but it looks really exciting. In the Kubernetes world, it's very good at reporting on the areas around the configuration of your platform, rather than the things that you've pulled in. There's some good advice there that allows you to prioritize whether something is important or just worrying. That's very helpful."
  • "There is always more work to do around managing the volume of information when you've got thousands of vulnerabilities. Trying to get those down to zero is virtually impossible, either through ignoring them all or through fixing them. That filtering or information management is always going to be something that can be improved."

What is our primary use case?

Our use case is basically what Snyk sells itself as, which is for becoming aware of and then managing any vulnerabilities in third-party, open-source software that we pull into our product. We have a lot of dependencies across both the tools and the product services that we build, and Snyk allows us to be alerted to any vulnerabilities in those open-source libraries, to prioritize them, and then manage things.

We also use it to manage and get visibility into any vulnerabilities in our Docker containers and Kubernetes deployments. We have very good visibility of things that aren't ours that might be at risk and put our services at risk.

Snyk's service is cloud-based and we talk to that from our infrastructure in the cloud as well.
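The day-to-day workflow described above maps onto a handful of Snyk CLI commands. This is only a sketch of typical usage (image names and file paths are illustrative, not the reviewer's actual setup):

```shell
# Scan the current project's open-source dependencies for known vulnerabilities:
snyk test

# Register the project so Snyk rechecks it daily and alerts on newly published CVEs:
snyk monitor

# Scan a built Docker image, including its base-image layers:
snyk container test myorg/myservice:latest

# Scan Kubernetes manifests for risky platform configuration:
snyk iac test deployment.yaml
```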

How has it helped my organization?

We are a business that sells services to other businesses. One of the things that we have to sell is trust. As a small company, we've had to go quite a long way to mature our development and security processes. We've been ISO 27001-certified for a while and we got that very early, compared to the life cycle of most businesses. But that's because when we're talking contracts with customers, when we're talking information security reviews with customers, it's really powerful to be able to say, "We have Snyk, we use it in this way." A lot of the questions just go away because people understand that that means we've got a powerful and comprehensive tool.

Certainly, from a finding-of-vulnerabilities perspective, it's extremely good. Our problem is scale. We have something like 7,000 dependencies in our code and we could go and check those ourselves, but that would be a huge waste of time. Snyk's ability to scan all of those every time we build, and keep a running status of them and recheck them daily, is extremely valuable for making us aware of what's going on. We've wired Snyk up into Slack and other things so that we get notifications of status, and that's useful.

It has reduced the amount of time it takes to find problems by orders of magnitude because it's scanning everything. Without the tool it would be horrific; we just couldn't do it. It takes seconds for a scan to run on each of our libraries and so that's an amazing performance improvement. Compared to having nothing, it's amazing.

In terms of developer productivity, because of the way our development community works, developers are pulling in third-party libraries. They worry less about the choice of a third-party library, but Snyk can inform them that there's a risk, and they then have to take action. We probably spend more time securing our product, but we get a more secure product, which is actually what we want.

Overall, knowing what the risks are, and being able to make considered judgments about those risks, means that we are much more comfortable that our product is secure. And when there are high-risk issues, we're able to take action very quickly. The time to resolution for anything serious that is discovered in downstream libraries is dramatically reduced, and that's really useful.

What is most valuable?

The core offering of reporting across multiple projects and being able to build that into our build-pipelines, so that we know very early on if we've got any issues with dependencies, is really useful.

We're loving some of the Kubernetes integration as well. That's really quite cool. It's still in the early days of our use of it, but it looks really exciting. In the Kubernetes world, it's very good at reporting on the areas around the configuration of your platform, rather than the things that you've pulled in. There's some good advice there that allows you to prioritize whether something is important or just worrying. That's very helpful.

In terms of actionable items, we've found that when you're taking a container that has been built from a standard operating system image, it tends to be riddled with vulnerabilities. Snyk's guidance is more akin to persuading you to go for something simpler, whether that's a scratch or an Alpine container, which has less in it. It's more a nudge philosophy, rather than a specific, actionable item.
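That nudge toward a slimmer base image usually means a multi-stage build. Here is a hypothetical Dockerfile sketch (the Go toolchain and binary name are illustrative, not from the review):

```dockerfile
# Build in a full-featured image, ship only the binary in a minimal base,
# so the scanner has far fewer OS packages to flag.
FROM golang:1.18 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# "scratch" contains no OS packages at all, so there are no OS-level CVEs to report.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```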

We have integrated Snyk into our software development environment. The way Snyk works is that, as you build the software in your pipelines, you can have a Snyk test run at that point, and it will tell you if there are newly-discovered vulnerabilities or if you've introduced vulnerabilities into your software. And you can have it block builds if you want it to. Our integrations were mostly a language-based decision. We have Snyk integrated with Python, JavaScript/Node, and TypeScript code, among others, as well as Kubernetes. It's very powerful and gives us very good coverage on all of those languages. That's very positive indeed.
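As an illustration of this kind of pipeline wiring (hypothetical; the reviewer's actual CI system is not named), a GitHub Actions job using Snyk's official action could look like the following. The `--severity-threshold` flag is what lets a team block builds only on serious findings:

```yaml
# Hypothetical CI job; secrets and thresholds are illustrative.
security-scan:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
    - uses: snyk/actions/node@master        # Snyk's published action for Node projects
      env:
        SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
      with:
        args: --severity-threshold=high     # fail the build only on high-severity issues
```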

We've got 320-something projects — those are the different packages that use Snyk. It could generate 1,000 or 2,000 vulnerabilities, or possibly even more than that, most of which we can't do anything about, and most of which aren't in areas that are particularly sensitive to us. One of our focuses in using Snyk — and we've done this recently with some of the new services that they have offered — is to partition things. We have product code and we have support tools and test tools. By focusing on the product code as the most important, that allows us to scope down and look at the rest of the information less frequently, because it's less important and less exposed.
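One common way to tame that volume is to filter the scanner's JSON output programmatically. This is a sketch, assuming output shaped like `snyk test --json` with a `vulnerabilities` list carrying `severity` fields; the real schema may differ between Snyk versions, and the IDs below are made up:

```python
# Hypothetical sample shaped like `snyk test --json` output.
report = {
    "vulnerabilities": [
        {"id": "SNYK-JS-A-1", "severity": "high", "packageName": "a"},
        {"id": "SNYK-JS-B-2", "severity": "low", "packageName": "b"},
        {"id": "SNYK-JS-C-3", "severity": "critical", "packageName": "c"},
    ]
}

# Keep only the issues worth immediate attention.
ACTIONABLE = {"critical", "high"}
actionable = [v for v in report["vulnerabilities"] if v["severity"] in ACTIONABLE]

for v in actionable:
    print(v["id"], v["packageName"])
```

In practice a team would feed the real report in from a file or pipe and route the filtered list to Slack or a ticketing system, which matches the partition-by-importance approach described above.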

From a fixing-of-vulnerabilities perspective, often Snyk will recommend just upgrading a library version, and that's clearly very easy. Some of the patching tools are a little more complicated to use. We're a little bit more sensitive about letting SaaS tools poke around in our code base. We want a little bit more sensitivity there, but it works. It's really good to be able to focus our attention in the right way. That's the key thing.

Where something is fixable, it's really easy. The reduction in the amount of time it takes to fix something is in orders of magnitude. Where there isn't a patch already available, then it doesn't make a huge amount of difference because it's just alerting us to something. So where it wins, it's hugely dramatic. And where it doesn't allow us to take action easily, then to a certain extent, it's just telling you that there are "burglaries" in your area. What do you do then? Do you lock the windows or make sure the doors are locked? It doesn't make a huge difference there.

What needs improvement?

One of the things I have mentioned in passing stems from the fact that we have a security team and a development team, and those two teams work independently of each other. At the moment, if a developer ignores a problem, there's no way for our security team to easily review what has been ignored and make their own determination as to whether that's the right thing to do or not. That dual security-team review process is something that I'd love to see.

Other than that, there is always more work to do around managing the volume of information when you've got thousands of vulnerabilities. Trying to get those down to zero is virtually impossible, either through ignoring them all or through fixing them. That filtering or information management is always going to be something that can be improved.

For how long have I used the solution?

We've been using Snyk for about 18 months.

What do I think about the stability of the solution?

The stability is pretty good.

We've had two challenges over the two years we've been using Snyk. One was the size of our projects in our JavaScript world. It meant that some of the tests would fail due to memory issues. They've done a lot of work on improving that, and we have found some workarounds.

Sometimes, because we're talking out to Snyk services, our pipelines fail because the Snyk end isn't running successfully. That doesn't happen very often, so it hasn't been a major impact, but there have been one or two cases where things didn't work there.

What do I think about the scalability of the solution?

The solution is scalable, absolutely. We plan to increase our usage of Snyk. As we grow, every developer will be put into it. Everything we build, all of our development, is using Snyk as the security scanning tool.

How are customer service and technical support?

Snyk's technical support is very good. We haven't used it much. I've engaged with customer success and some of the product managers and they're really keen to get feedback on things. 

We have had one or two things where we have talked to support and they have been very positive engagements.

Which solution did I use previously and why did I switch?

We were small enough that we didn't have a previous solution.

How was the initial setup?

The deployment was easy. When we were first evaluating Snyk, our automation engineer got a test account, installed it, and built it into our development pipelines without needing any support at all from Snyk. It was one of the more interesting sales engagements. They sent us an email, but we got it up and going and were using it in its trial mode without needing any assistance at all. That's clearly a demonstration of that ease of integration.

Working end-to-end, it took a couple of days for one person to get it wired up.

We followed the Snyk recommendations. We built a container that takes the Snyk service, and we run that in our build-pipeline. It dropped in very easily because of the way we were already operating.
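The "container that takes the Snyk service" pattern can be sketched with Snyk's published Docker images. This is illustrative only (the image tag and mount paths are assumptions, not the reviewer's configuration):

```shell
# Run the Snyk CLI from a container against the project mounted at /project.
docker run --rm \
  -e SNYK_TOKEN \
  -v "$(pwd):/project" \
  -w /project \
  snyk/snyk:node \
  snyk test --severity-threshold=high
```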

In terms of developer adoption, we had to mandate it. So everybody uses it. It's built into all the pipelines. Generally, it's pretty good. The engineering team has 17 people and pretty much everybody is using Snyk as part of that. I don't think security is necessarily at the forefront of everybody's minds, and we're working on that. Snyk has helped.

We have a very complex infrastructure so the only challenge with Snyk is that it tells us a lot of information. They're pretty good at managing that, but you still have to take action. It's very good for knowing things, but it's also pretty good at being able to work out how to focus your attention.

That volume of information, where you get lots of things that are not important or not critical, tends to create a little bit of "blindness" to things. We're used to Snyk tests failing, alerting us to things that we're choosing to ignore at that moment because they're not fixable. That's one of the interesting challenges, to turn it into actionable information.

What was our ROI?

We had a lot of information security audits and we found that Snyk enabled sales because they weren't being blocked by InfoSec issues. That means that it probably paid for itself with the first customer deal that we were able to sign. We were able to show them that we had Snyk up and working really quickly, which was great. 

In terms of other metrics, it's slightly harder to measure, because it's allowing us to prevent problems before they become issues. But from a commercial engagement point of view, it was well worth it, very quickly.

What's my experience with pricing, setup cost, and licensing?

It's good value. That's the primary thing. It's not cheap-cheap, but it's good value. We managed to build a package of features that we were able to live with, in negotiation, and that worked really well. We did a mix and match. We got single sign-on and some of the other things.

The Kubernetes and container service, on top of the source-code service, was well worth it for us as a cloud deployment. The ability there has been really useful, but that's clearly an extra cost.

Which other solutions did I evaluate?

There are other tools that can perform some of the functions Snyk does. We did some analysis of competitors, including Synopsys Black Duck and Veracode, but Snyk was clearly the most hungry and keen to assist, as a business. There were a lot of incumbent competitors who didn't really want our business. It felt like Snyk clearly did want to do the right thing and is continuing to improve and mature its product really fast, which is brilliant.

Snyk was at a good price, has very comprehensive coverage, and as a company they were much easier to engage with. It felt like some of the other competitors were very "big boys." With Snyk we had the software working before we'd even talked to a sales guy, whereas with other solutions, we weren't even allowed to see the software running in a video call or a screen-sharing session until we'd had the sales call. It was completely ridiculous.

What other advice do I have?

My advice is just try it. If you've got a modern development pipeline, it's really easy to wire up, if you've got somebody with the right skills to do that. We found that with a development community, it's really easy to build these things. Get on with it and try it. It's really easy to trial and see what it's telling you. That's one of the great upsides of that model: Play with it, convince yourself it's worth it, and then talk to them about buying it.

It's hard to judge Snyk's vulnerability database in terms of comprehensiveness and accuracy. It clearly is telling us a lot of information. I have no reason to doubt that it is very good, but I can't categorically back that up with my own empirical evidence. But I trust them.

I don't get the sense there are many false positives from Snyk, and that's a very positive thing. When it tells us something, it's almost certainly a real issue, or at least that a real issue has been found somewhere in the open-source world. 

What is always harder to manage is to know what to do if there is no resolution. If somebody has found a problem, but there is no fix, then we have a much more interesting challenge around evaluation of whether we should do something. Do we remove that library? Do we try and fix it ourselves, or do we just wait? That process is the more complicated one. It's less of a false positive issue and more an issue of a real finding that you can't do anything about easily. That can sometimes leave you ignoring things simply because there's no easy action to take, and that can be slightly dangerous.

The solution allows our developers to own security for the applications and the containers they run in the cloud, although that's still something we're working on. It's always a challenge to get security to be something that is owned by developers. The DevOps world puts a lot of responsibility on developers and we're still working to help them know; to have better processes and understand what they need to be doing. We still have a security oversight function who is trying to keep an eye on things. We're still maturing ourselves, as a team, into DevSecOps.

As for Snyk's lack of SAST and DAST, that's just another one of the tools in the toolkit. We do a lot of our own security scanning for application-level or platform-level attacks. We have pen tests. So static application testing is not something that we've seen as particularly important at this point.

Snyk is an eight out of 10. It's not perfect. There are little things that could clearly be improved. They're working on it as a company. They're really engaged. But the base offering is really good. We could also use it better than we are at the moment, but it's well worth it. It's brilliant.

The biggest lesson I have learned from using this solution is that there is a big gap between thinking your software is safe and knowing what the risks are. Information is power. You don't have to take action, but at least you are informed and can make a considered judgment if you take it seriously. That is what Snyk really provides.

The ethos of Snyk as a company is really positive. They're keen to engage with customers and do something in a slightly different way, and that younger, hungrier, more engaged supplier is really nice to work with. They're very positive, which is good.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Aggelos Karonis - PeerSpot reviewer
Technical Information Security Team Lead at Kaizen Gaming
Real User
Top 5
An easy, fast way to improve your code security and health
Pros and Cons
  • "In our most critical applications, we have a deep dive in the code evaluation, which was something we usually did with periodic vulnerability assessments, code reviews, etc. Now, we have real-time access to it. It's something that has greatly enhanced our code's quality. We have actually embedded a KPI regarding the improvement of our code health. For example, Contrast provides a baseline where libraries and the usability of the code are evaluated, and they produce a score. We always aim to improve that score. On a quarterly basis, we have added this to our KPIs."
  • "Personalization of the board and how to make it appealing to an organization is something that could be done on their end. The reports could be adaptable to the customer's preferences."

What is our primary use case?

Up to this point, as an information security company, we had very limited visibility over the testing of the code. We have 25 Scrum teams working but we were only included in very specific projects where information security feedback was required and mandatory to be there. With the use of Contrast, including the evaluation we did, and the applications we have included in the system, we now have clear visibility of the code.

How has it helped my organization?

In our most critical applications, we have a deep dive in the code evaluation, which was something we usually did with periodic vulnerability assessments, code reviews, etc. Now, we have real-time access to it. It's something that has greatly enhanced our code's quality. We have actually embedded a KPI regarding the improvement of our code health. For example, Contrast provides a baseline where libraries and the usability of the code are evaluated, and they produce a score. We always aim to improve that score. On a quarterly basis, we have added this to our KPIs.

We have a site that serves many different products. We have a sportsbook and casino, where a lot of casinos are using the provider's code. Our false positives are mainly due to missing data points, since we have not integrated the application on the provider's side. A request that is not checked on our side is checked on their side, leading to gaps in knowledge, which cause the false positives.

In regards to the applications that have been onboarded fully, we have had very effective results. Everything that it has identified has given us value, either in fixing it or knowing what's there and avoiding doing it again on other parts of our code. It's been very effective and straightforward.

What is most valuable?

The real-time evaluation and library vulnerability checks are the most valuable features, because we have a code that has been inherited from the past and are trying to optimize it, improve it, and remove what's not needed. In this aspect, we have had many unused libraries. That's one of the key things that we are striving to carve out at this point.

An additional feature that we appreciate is the report associated with PCI. We are Merchant Level 1 due to the number of our transactions, so we use it for test application compliance. We also use the OWASP Top 10 type of reports since it is used by our regulators in some of the markets that we operate in, such as Portugal and Germany.

The solution's automation via its instrumentation methodology is very effective, and it was a very easy integration. It is not affected by how many releases we perform, given the way we have designed our release methodologies. It has clear visibility over every release that we do, because it is the production code that is being evaluated.

The solution has absolutely helped developers incorporate security elements while they are writing code. The great part about the fixes is that they provide a lot of tips about what you should avoid doing in order to prevent future occurrences in your code. Even though the initial assessment is done by senior, more experienced engineers in our organization, we provide the fixes to more junior staff so they have a clear marker for what they shouldn't do in the future, so they are receiving a good education from the tool as well.

What needs improvement?

During the period that we have been using it, we haven't identified any major issues. Personalization of the board and how to make it appealing to an organization is something that could be done on their end. The reports could be adaptable to the customer's preferences, but this isn't a big issue, as it's something that customers can do as they build their experience with the tool.

In the initial approaches during the PoC and the preparation of the solution, it would have been more efficient if we had been presented with a wider variety of scenarios aimed at our main concern, which is system availability. However, once we fine-tuned things using the scenarios they provided later in our discussions, we fixed it and went ahead.

For how long have I used the solution?

We evaluated the product twice: once in PoC and once in a 30-day trial. Then, we proceeded with using it in production, where it's been for four months. Our initial approach was almost nine months ago. So, we had a fair bit of experience with them.

What do I think about the stability of the solution?

The application is very stable because it is on-premise. So, we have had no issues with it. The stability of the solution is at a level where we just have the health check run on it and nothing more is needed. We don't have issues with capacity. We do not have issues with very high level of requests nor delays. It is very smooth at this point. We fine tuned it during the testing week. After that, nothing changed. It handles the traffic in a very easy way. We just configure it through the Contrast tool, if needed, which is very straightforward.

The maintenance is very simple. We have had two patches applied. Therefore, I have only needed to involve our systems team two times during these four months for one hour of time. The health check of the system has been added to our monitoring team's task, therefore there is no overhead for us.

What do I think about the scalability of the solution?

At this point, we have provided access to 20 people in the Contrast platform. However, it is being used by more people than that because once a vulnerability is identified and marked as something that we should fix, then it's handled by a person who may not have access to Contrast and is only presented with a specific vulnerability in order to fix it. Top management receives the reports that we give them as well as the KPI's. So, it's used across the organization. It's not really limited to just the teams who have actual access to it.

At this point, we see great value for the applications that we have it on. We want to spread it across lower-criticality applications. That's a positive thing, because if we want to have it on a larger scale, we'll just add another web node and filter different apps onto it. It's very scalable and easy to manage. We are more than sure that it will cover the needs that we'll have in the future as well. We have weekly releases with no issues so far.

How are customer service and technical support?

Every time that we approach them with a request, we have had an immediate response, including the solution, with the exact point in the documentation. Therefore, they have been very helpful.

It was a very smooth completion of the paperwork with the sales team. That's a positive as well because we are always scared by the contract, but they monitor it on a very efficient level.

I really want to highlight how enthusiastic everyone is in Contrast, from day one of the evaluation up until the release. If we think that we should change something and improve upon it, then they have been open to listening and helping. That is something that greatly suits our mentality as an organization. 

Which solution did I use previously and why did I switch?

Prior to this, we did not have such a solution and relied on other controls.

Our initial thought was that we needed a SAST tool. So, we proceeded with approaching some vendors. What sparked the interest for Contrast is its real-time evaluation of requests from our users and identification of real-time vulnerabilities.

We have now established specific web nodes serving those requests. We get all the feedback from there along with all the vulnerabilities identified. Then, we have a clear dashboard managed by our information security team, which is the first step of evaluation. After that, we proceed with adding those pieces of the vulnerabilities to our software development life cycle.

Prior to using Contrast, we didn't have any visibility. There were no false positives; we had just the emptiness where even false positives would be a good thing. Then, within the first week of having the tool, 80 or 90 vulnerabilities had been identified, which gave us lots to do with minor false positives.

How was the initial setup?

The setup is very straightforward. Something that has worked greatly in their favor: The documentation, although extensive, was not very time consuming for us to prepare. We have a great team and had a very easy integration. The only problems that we stumbled onto was when we didn't know which solution would work better for our production. Once we found that out, everything went very smoothly and the operation was a success.

The final deployment: Once the solution was complete, it took us less than a day. However, in order to decide which solution we would go with, we had a discussion that lasted two or three working days but was split up over a week or so to have the feedback from all the teams. The deployment was very fast. It took one day tops.

What about the implementation team?

Their support was one of the best I have seen. They were always very responsive, which is something that we appreciate. When you assign a person and time to work the project, you want it to be as effective as can be and not have to wait for responses from the provider.

Their sales team gave us feedback from the solution architects. They wanted to be involved in order to help us with some specific issues that we were dealing with since we were using two different technologies. We wanted some clarifications there, but this was not customer support. Instead, it was more at a solution level.

The integration of the solution's automation via its instrumentation methodology was very simple. We had excellent help from the solution architects from the Security Assess team. We had the opportunity to engage many teams within our organization: our enterprise architects, DevOps team, systems team, and information security team members. Therefore, we had a clear picture of how we should implement it, not only systems-wise, but also in terms of organization-wide effect. At this point, we have embedded it in our software development life cycle (SDLC), and we feel that it brings value on a day-to-day basis.

We prepared a solution with the solution architect that we agreed upon. We had a clear picture of what we wanted to do. Once we put the pieces together, the deployment was super easy. We have a dedicated web node for that. So, it only runs that. We have clear applications installed on that node setup, so it's very straightforward and easy to set up. That's one of the key strengths of Contrast: It is a very easy setup once you decide what you want to do.
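Contrast Assess instruments applications with an in-process agent. For a JVM service, the wiring is typically a `-javaagent` flag at startup; the paths and jar names below are illustrative, and the reviewer's actual stack is not described:

```shell
# Attach the Contrast Assess agent to a JVM application at startup.
# Agent jar and YAML config paths are illustrative.
java -javaagent:/opt/contrast/contrast-agent.jar \
     -Dcontrast.config.path=/opt/contrast/contrast_security.yaml \
     -jar webapp.jar
```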

On our end, we had one person from the systems team; the enterprise architect, who consulted on which applications we should include; myself from information security; and DevOps, who was there just to provide information about the technologies we use on the CI/CD front. However, the actual involvement in the project through to implementation was the systems team along with me.

From their end, they had their solution architect, and their salesperson acted as a project manager, which helped tremendously with fast response times. There were just two people.

What was our ROI?

The solution has helped save us time and money by fixing software bugs earlier in the SDLC. Our code health and quality improve through removing unused libraries and stretches of extensive code where they are not needed. From many aspects, it has a good return on investment, because maintaining unused code and a large number of libraries greatly increases the cost of our software development.

What it saves is that when a developer writes something, he can feel free to post it for review, then release it. We are sure that if something comes up, then it will be raised by the automated tool and we will be ready to assess and resolve it. We are saving time on extensive code reviews that were happening in the past.

What's my experience with pricing, setup cost, and licensing?

For what it offers, it's a very reasonable cost. The way that it is priced is extremely straightforward. It works on the number of applications that you use, and you license a server. It is something that is extremely fair, because it doesn't take into consideration the number of requests, etc. It is only priced based on the number of applications. It suits our model as well, because we have huge traffic. Our number of onboarded applications is not that large, so the pricing works great for us.

There is a very small fee for the additional web node we have in place; it's a nonexistent cost. If you decide to apply it on existing web nodes, that is eliminated as well. It's just something that suits our solution.

Which other solutions did I evaluate?

We had an extensive list that we examined. We dove into some comparable solutions. We did have some excellent competitors, because they gave us a clear indication of what we wanted to do. We examined SonarQube and Veracode, which presented us with great products but were not a great fit for us at the time. These solutions gave us the idea of going with something much larger and broader than just a tool that produces findings. So, many competitors were examined, and we selected the one that best fit our way of doing things.

The main thing to note is that the key differentiator between Contrast and everything else we evaluated is the production value, since we had the chance to examine actual requests to our site using our code. Contrast eliminated the competition with its ability to evaluate the live aspects of requests as they are taken. That was something we weren't able to find in other solutions.

Some of the other competitive solutions were more expensive.

What other advice do I have?

I would recommend trying and buying it. This solution is something that everyone should try in order to enhance their security. It's a very easy, fast way to improve your code security and health.

We do not use the solution’s OSS feature (through which you can look at third-party open-source software libraries) yet. We have not discussed that with our solutions architect, but it's something that we may use in the future when we have more applications onboard. At this point, we have a very specific path to onboard more of those critical apps; then we will proceed to more features.

During the renewal, or maybe even earlier than that, we will go with more apps, not just three.

One of the key takeaways is that in order to have a secure application, you cannot rely on just the pentest, vulnerability assessments, and the periodicity of the reviews. You need the real-time feedback on that, and Contrast Assess offers that. 

We were amazed to see how much easier it is to be PCI-compliant once you have the correct solution applied to it. We were humbled to see that we have vulnerabilities which were so easy to fix, but we wouldn't have noticed them if we didn't have this tool in place.

It is a great product. I would rate it a nine out of 10. 

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Kevin Dsouza - PeerSpot reviewer
Intramural Official at Northeastern University
Real User
Top 20
Easy to set up with vulnerability analysis and is reliable
Pros and Cons
  • "The vulnerability analysis is the best aspect of the solution."
  • "The only thing that I don't find support for on Mend Prioritize is C++."

What is our primary use case?

We use Mend especially for code analysis. I work in the application security part of my company. Developers will build and push the code to the GitHub repository. We have a build server that pulls in the code, and we are using Jenkins to automate that to do the DevOps stuff.

Once the code is built, we create a product for that particular version on Mend. We are currently working with three different versions of our particular product. We create the products on Mend via White Source, which has a configuration file and a batch file that runs. The configuration file basically tells it what parameters to use, which server URL to use, which files to ignore, and which files to include.

For example, if I just have to scan Python, I can make changes in the configuration file itself to include just .py files and exclude all other files. If I have to do Python and C++, I can change the configuration file to include .py and .cpp files and exclude everything else. Once that configuration file is ready, we run a White Source batch file that connects to the server, reads the configuration file, scans all the files in the project, and then pushes the results to our Mend page.
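As a rough illustration of the kind of configuration described above, a Unified Agent config might look like the fragment below. The property names follow Mend's (WhiteSource's) Unified Agent conventions, but the specific values, paths, and exclusions here are assumptions for illustration, not taken from this environment:

```properties
# Hypothetical wss-unified-agent.config fragment
# Server to push scan results to (placeholder URL) and org key
wss.url=https://saas.whitesourcesoftware.com/agent
apiKey=<org-api-key>
productName=MyProduct
projectVersion=1.2.0

# Scan only Python sources; exclude everything else
includes=**/*.py
excludes=**/node_modules/** **/test/**
```

The agent is then typically run against a project directory with something like `java -jar wss-unified-agent.jar -c wss-unified-agent.config -d .`, which matches the "batch file that connects to the server and scans" flow the review describes.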

On our Mend page, once we go into the product page, we can see which libraries we have used and which of them have vulnerabilities. We can also set policies on Mend; we set policies for our organization on what to accept and reject. For each product, we also get the policy violations that the libraries trigger, and any new versions of the libraries that are available from the library's parent page - the parent page being the official developers of the library. We get the licenses we use with each library, and most importantly, we get vulnerability alerts regarding every library we use in our code.

Once the code is pulled, scanned, and pushed, we see the results in the UI. We go to the library alerts, where we can see the different severities and the different libraries with vulnerabilities. We normally sort by higher severity first and work down to lower severity. We check what can be ignored or is acceptable, what cannot be ignored, and what is high priority. The high-priority ones we flag and create a ticket for on JIRA. That's our platform for collaboration.

Once we create a ticket on JIRA, the developers can see it, and the QA team can see it and will go through it as well. They can tell whether the update or upgrade of the library is possible or not. They'll check its compatibility and see if it's actually doable. If it's not doable, they'll tell us, and probably the next version of the application will have the changes - not this one. We term that as acceptable, or within our domains of acceptance. Day to day, when a JIRA ticket is created, the developers get back to us with a yes or no. Mostly they say yes to upgrading the library. If so, they upgrade it to the next version and we scan it again. We do a weekly scan, so we'll check the next week whether that particular library has been upgraded and the vulnerability remediated.

What is most valuable?

The vulnerability analysis is the best aspect of the solution. It’s my main go-to.

We can't do static code analysis ourselves; done manually, it's a huge number of tasks to handle - close to impossible. Mend does the static code analysis of our projects and alerts on vulnerabilities whenever one is found. The vulnerability analysis includes a report as well, showing how many high, medium, and low severity vulnerabilities we have, how many are accepted, and how many libraries have no vulnerabilities at all.

Overall it's pretty good, and I have a high level of trust in it.

It’s stable and easy to set up.

What needs improvement?

All applications in the world that are created have room for improvement.

Within Mend itself, there’s Mend Prioritize, which prioritizes the vulnerability automatically by itself with relevance to our application. Mend Prioritize has support for five or six languages right now, including JavaScript, C, and C#. The only thing that I don't find support for on Mend Prioritize is C++, which they'll be working on since the product is under development. Once that's done, we can also add it into Mend Prioritize for our weekly scans, which will help us with our analysis and efforts for remediation.

It's everything we need right now. There's nothing extraordinary they need to add. We use it for one thing and focus on that, so they should not do anything else. We're fine with it as it is.

For how long have I used the solution?

I've been using Mend for six months now.

What do I think about the stability of the solution?

It’s quite stable. There are no bugs or glitches. It doesn’t crash or freeze. A lot of infrastructure is dependent on Mend right now, and it's not disappointing.

What do I think about the scalability of the solution?

It is a pretty scalable product.

The application security team uses it. That’s four people using it regularly.

Mend does a lot of things - it does SAST, SCA, and DAST as well. We are using just the SCA module, which is what we need, and we are using that module to its fullest. I hope we're doing the most efficient deployment of it.

How are customer service and support?

We’ve used technical support in the past. We had some issues with One RPM last month. That was sorted quickly.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We did not previously use any different solution prior to Mend.

We did look at other solutions. There was Veracode that we tried and Tenable. There was Qualys as well. However, we chose Mend, and we have had a license for three years right now.

How was the initial setup?

The initial setup was pretty easy.

The deployment didn’t take long. Within a day or two, it was done.

There's no maintenance and deployment of Mend as such.

What about the implementation team?

We have a license, so once the license was set up, once the server was set up, after that, we rolled it out by ourselves.

What was our ROI?

We’ve seen a terrific ROI. I’d rate the solution a 4.5 out of five in terms of delivering us ROI.

What's my experience with pricing, setup cost, and licensing?

I don’t have any information in regards to pricing.

What other advice do I have?

I would advise potential users to go through the documentation extensively. The documentation is pretty extensive. It's easy to miss some points in the initial setup itself. If the initial setup's gone wrong, it is difficult to debug it once the infrastructure is up. Therefore, start slow. If the deployment is done correctly, it's only a matter of two files after that for each project that you scan.

I’d rate the solution a nine out of ten.

Which deployment model are you using for this solution?

Private Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Security Consultant at a tech services company with 11-50 employees
Consultant
Top 20
Straightforward to install and reports few false positives, but it should be easier to specify your own validation and sanitation routines
Pros and Cons
  • "The most valuable feature is that there were not a whole lot of false positives, at least on the codebases that I looked at."
  • "It should be easier to specify your own validation routines and sanitation routines."

What is our primary use case?

I am a consultant and I work to bring solutions to different companies. Static code analysis is one of the things that I assist people with, and Coverity is one of the tools that I use for doing that.

I worked with Coverity when doing a couple of different PoCs. For these, I get a few different teams of developers together and we decide what makes the most sense for each team as far as scanning technologies. Part of that is which languages are supported, part of it is how extensible the tool is, and part of that extensibility is whether the developers have time to actually create custom rules.

We also want to know things like what the professional services are like, and whether people typically need many hours of professional services to get the system spun up. Other factors include whether it is deployed on-premises or in the cloud and, also, which of those environments it can operate with.

One thing to note is that there's not really a shining star among all of these tools. SAST tools have been getting more mature over the past decade, particularly in how fast they run, but also in the results they produce. Of course, framework and language additions that improve the capability and the results are also a consideration.

What is most valuable?

The most valuable feature is that there were not a whole lot of false positives, at least on the codebases that I looked at.

What needs improvement?

It should be easier to specify your own validation routines and sanitation routines.

For example, if you have data coming into the application, perhaps something really simple like it's getting a parameter from a web page that is your username when you go to a website to login, and then ultimately that's being consumed by something, the data goes through some business logic and then, let's say, it enters that username into a database. 

Well, what if I say my username is a piece of JavaScript that calls alert('hello')? Now I've entered JavaScript code as my username. You should be able to sanitize that pretty easily, with a number of different techniques, to remove the actual executable code from what was entered on the login page. However, once you do that, you want the tool to understand that you've done it and remove what looks like a true positive at first glance, because, in fact, the data being consumed in the SQL exec statement is not unsanitized - it's not coming straight from the web.

Likewise, let's say you log in, and then it says, "Hello," such and such. You can inject JavaScript code there and have it executed when it says hello. So basically, I want the ability to say that a given routine validates data, and above and beyond that, that it validates data coming from any GET parameter on the web. You should be able to specify that a particular routine validates all of that, or that a particular routine validates any time we read data from a database - perhaps an untrusted database.

So, if I reach for that data eight times and I say this routine validates it once, I should also get the option to say it validates it the other seven times, or to declare it a universal validator. Obviously, a "God validator," so to speak, is not good practice because you're sure to miss some edge cases, but having one routine validate three or four different occurrences is not rare and is often not bad practice.
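To make the scenario concrete, here is a minimal Python sketch of the kind of custom sanitization routine the reviewer wants the analyzer to recognize. This is not Coverity's API; `sanitize_username`, the SQLite schema, and the table name are invented for illustration:

```python
import html
import sqlite3

def sanitize_username(raw: str) -> str:
    # Custom sanitizer: HTML-escape the input so injected markup like
    # <script>alert('hello')</script> can no longer execute when echoed back.
    return html.escape(raw, quote=True)

def store_user(conn: sqlite3.Connection, raw_username: str) -> None:
    safe = sanitize_username(raw_username)
    # Parameterized query: the driver binds the value, so there is no SQL
    # injection from string concatenation; the escaping handles the XSS case.
    conn.execute("INSERT INTO users (name) VALUES (?)", (safe,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
store_user(conn, "<script>alert('hello')</script>")
```

An analyzer that lets you register `sanitize_username` as a trusted validation routine could then suppress the "tainted web data reaches SQL" finding on every call path that passes through it - which is exactly the capability being asked for here.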

Another thing that Coverity needs to implement or improve is a graphical way to display the data flow. If you could see an actual graphical view of the data coming in, it would be very useful. Let's say the first node is "GET parameter from a webpage," then an arrow to another method like "validate user ID," and then another method, "get data about the user." Next, that goes into the database, and so forth. When that's displayed graphically, it is helpful for developers because they can grasp it more easily.

The speed of Coverity can be improved, although that is true for any similar product.

What do I think about the stability of the solution?

It never crashed so stability has not been an issue.

What do I think about the scalability of the solution?

I have never used it for more than four relatively small to medium-sized projects at a time, so I've never needed to scale it.

How are customer service and technical support?

I have dealt with sales engineering, rather than technical support. They would sometimes provide a liaison to tech support if they didn't know the answer, but really, they guided us through the proof of concept and they knew that they were under a competitive evaluation against the other tools. They were able to resolve any issues that we came across and got us up and running fairly quickly, as far as I recall.

How was the initial setup?

Coverity is on the good side when it comes to setting it up. I think that it is pretty straightforward to get up and running.

What about the implementation team?

We implement Coverity on our own, with guidance from Coverity.

What's my experience with pricing, setup cost, and licensing?

The price is competitive with other solutions.

Which other solutions did I evaluate?

In addition to Coverity, I have experience with Checkmarx, Fortify, Veracode, and HCL AppScan, which was previously known as IBM AppScan.

Checkmarx is probably the most extensible and customizable of these products, and you're able to use the C# language to do so, which a lot of developers are familiar with.

HCL AppScan is another tool that has customization capabilities. They are not as powerful but they are easier to implement because you don't need to write any code.

I cannot give an endorsement for any particular one. They all have their merits and it just depends on the requirements. Generally, however, all of these tools are getting better.

What other advice do I have?

My advice for anybody who is considering this product is to first look around your organization to see if it has already been implemented in another group. If you're a big organization then Coverity or a similar tool may already be in use. In cases like this, I would say that it is best to adopt the same tool because your organization has already gone down that path and there are no huge differences in the capabilities of these tools. Some of them do it in different ways and some do things that others don't, but you won't have the initial bump of the learning curve and you can leverage their experience.

I would rate this solution a seven out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
Gustavo Lugo - PeerSpot reviewer
Chief Solutions Officer at CleverIT B.V.
Reseller
Top 5Leaderboard
Easy to deploy and applicable for various uses
Pros and Cons
  • "It is an easy tool that you can deploy and configure. After that you can measure the history of your obligation and integrate it with other tools like GitLab or GitHub or Azure DevOps to do quality code analysis."
  • "In terms of what can be improved, the areas that need more attention in the solution are its architecture and development."

What is our primary use case?

I am now working in a consultancy company and I work with different clients in different industries. For this reason I implement, for example, delivery pipelines with a process whereby we validate the quality gate on the code. Meaning, the developer creates the unit tests and the code coverage, and the quality gate validates that the coverage meets a specific threshold. In other cases, we look at the technical debt to see if there are any bugs in the applications - web applications and mobile applications, in different languages like C#, JavaScript, Java, et cetera.

We deploy SonarQube on-premises on a Linux server, and our pipelines are created with GitLab and Azure DevOps - meaning that Azure DevOps and GitLab are the tools that do the build and release process.
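As an illustration of wiring that quality gate into such a pipeline, a GitLab CI job could look like the sketch below. This is hypothetical: the project key, the `SONAR_HOST_URL` and `SONAR_TOKEN` variables, and the image tag are assumptions, not details from this setup:

```yaml
sonarqube-check:
  stage: test
  image: sonarsource/sonar-scanner-cli:latest
  script:
    - sonar-scanner
        -Dsonar.projectKey=my-project
        -Dsonar.host.url="$SONAR_HOST_URL"
        -Dsonar.login="$SONAR_TOKEN"
        -Dsonar.qualitygate.wait=true
```

With `sonar.qualitygate.wait=true`, the scanner waits for the server's quality gate result and fails the job if the gate fails, which is what blocks a release on code quality.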

We use Microsoft Azure and Google Cloud Platform a little.

What is most valuable?

In terms of the most valuable feature, when you set up SonarQube you need to install an extension. Which extension depends on your version control system and on the specific language you use. I work with different extensions across different projects.

What needs improvement?

In terms of what can be improved, the areas that need more attention in the solution are its architecture and development.

Additionally, the QA team also needs work in different aspects. Think about the support area: when the support team has an incident, they need to do a hotfix. When they do that, they make a commit in version control. That commit triggers a new build process, and this process needs validation from SonarQube, because we need to validate the quality of the software product in different cases and different aspects.

For how long have I used the solution?

I have been using SonarQube for about four years, with different versions.

What do I think about the stability of the solution?

SonarQube works very well, but I prefer SonarCloud, because the tendency in the technology world is to think less about the infrastructure and more about the process and the value that the process provides.

What do I think about the scalability of the solution?

In terms of scalability, with proper configuration and deployment you get high availability.

I have companies with 20 users and I have customers with 100 users. We work with a big company in Chile and in some cases national companies, in other cases international companies. With the international companies the majority of them are more than 1,000 users.

I have a technical DevOps team. The majority of the time we implement the trial version so that we show the value of the tool to our clients and they understand about the pricing and the cost of the tool.

It depends on the maturity of the company. In some cases, companies don't know about SonarQube, so we deploy it to show its value. In other cases we have clients with no SonarQube experience who nonetheless understand code quality; in those cases we provide a license. In the majority of cases we provide the license or the subscription for SonarCloud. Other clients get access to SonarQube directly.

How are customer service and technical support?

I have never used technical support from the SonarQube support team.

I work very well with the documentation you find on the internet.

How was the initial setup?

The initial setup is straightforward the majority of the time. It takes about two hours.

What about the implementation team?

I work in a consultancy company so we do the implementation. We deploy for our customers.

Which other solutions did I evaluate?

We did evaluate other options - for example, Q1 and Veracode. In specific cases we examined different aspects with different tools, and these were the top peers we compared it to: Q1 and Veracode.

In terms of differences, Veracode is used more for security in development, and you can configure the gates with software security in mind. With Q1, the difference is the type of license: in Q1 you have projects and you pay per line. I know that SonarQube was changing its licensing plan; now you pay for the lines of code that you scan. This is the difference between these three tools.

What other advice do I have?

I do recommend SonarQube because it is an easy tool to deploy and configure. After that, you can track the history of your application and integrate it with other tools like GitLab, GitHub, or Azure DevOps to do quality code analysis.

On a scale of one to ten, I would give SonarQube an eight. To give it a 10 and not an eight, I would like to see architecture development and the QA area improved.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller