Read reviews of Black Duck alternatives and competitors

Nicholas Secrier
Information Security Officer at a tech services company with 51-200 employees
Real User
Top 5 Leaderboard
Helps avoid the pain and the cost of trying to retrofit security in your code
Pros and Cons
  • "The dependency checks of the libraries are very valuable, but the licensing part is also very important because, with open source components, licensing can be all over the place. Our project is not an open source project, but we do use quite a lot of open source components and we want to make sure that we don't have surprises in there."
  • "Generating reports and visibility through reports are definitely things they can do better."

What is our primary use case?

We are using it to identify security weaknesses and vulnerabilities by performing dependency checks of the source code and Docker images used in our code. We also use it for open-source licensing compliance review. We need to keep an eye on what licenses are attached to the libraries or components that we have in use to ensure we don't have surprises in there.

We are using the standard plan, but we have the container scanning module as well, in a hybrid deployment. The cloud solution is used for integration with the source code repository which, in our case, is GitHub. You can add whatever repository you want to be inspected by Snyk and it will identify the issues and recommend solutions for them. We are also using it as part of our CI/CD pipelines; in our case it is integrated with Jenkins.
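As a rough illustration of that kind of pipeline gate (a minimal sketch, not our exact setup: it assumes the Snyk CLI is installed and authenticated on the build agent, and the severity threshold is just an example policy):

    import subprocess
    import sys

    # "snyk test" scans the project in the current directory and exits
    # non-zero when it finds issues at or above the chosen severity,
    # which is what lets a CI stage fail the build.
    result = subprocess.run(["snyk", "test", "--severity-threshold=high"])
    if result.returncode != 0:
        sys.exit("Snyk found issues at or above the threshold; failing the build.")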

How has it helped my organization?

As the developers work, they can run the checks and validate whether their work meets our expectations. Then they can address potential issues during development, rather than going through the whole process and then being pushed back and told, "Hey, you've got issues in here. This is an old component that is no longer supported," or "It's something that has a vulnerability." From that point of view, it's very valuable.

I'm not a developer, I'm an information security officer, but the false positive rate seems to be pretty good. Generally, when it picks up something, it's there. Snyk is not an antivirus. When it highlights something, there is a problem. Sometimes you can fix it, sometimes you cannot, but the good thing is that at least you are aware that there is a potential issue. If it's something serious, you can usually validate the issue against other databases by looking at the CVE. You've got enough information to identify whether it is a real problem or not. In the vast majority of cases, if you look at a dependency, it's pretty straightforward: it matches the database entry that was picked up, and you can have a look at more details.

Generally, security tools don't necessarily end up increasing productivity. What Snyk prevents is the waste of time or productivity. If you're trying to go back and fix issues that are caused by potential vulnerabilities discovered by a pen test, trying to retrofit security can be quite painful. From that point of view, you may go a little bit slower because it's an extra step, but at the same time you save time on the overall process and you avoid exposing the company to risk.

As a tool, Snyk allows us to identify areas where we need to improve, and this could be at the vulnerability level if there is a library that has a vulnerability. It also helps us with the licensing compliance, identifying if the new components that have been added to the code meet our company's open source compliance. In those ways it helps us as a company. The overall impact of Snyk depends on the way you use it. To me, it's the users, not Snyk, doing something.

We are a new company; we started roughly three years ago. But we knew security was a very important factor. We work with some very large companies and the privacy and security of their data are very important to them. Security was something that we knew we had to put in place from the beginning, as a way of demonstrating that we take things seriously, and also to satisfy the needs of our investors and clients when it comes to trusting us as a provider.

What is most valuable?

The dependency checks of the libraries are very valuable, but the licensing part is also very important because, with open source components, licensing can be all over the place. Our project is not an open source project, but we do use quite a lot of open source components and we want to make sure that we don't have surprises in there. That's something that we pay attention to.

The ease of use for developers is quite straightforward. They've got good documentation. It depends on the language that you use for development, but for what we have — Java, JavaScript, Python — it seems to be pretty straightforward.

It also has good integration with CI/CD pipelines. In the past we had it integrated with Concourse and now it's running on Jenkins, so it seems to be quite versatile.

What needs improvement?

They've recently launched their open source compliance. That's an area that is definitely of interest. The better the capability in that, the better it will be for everyone. There may be room to improve the level of information provided to the developers so they understand exactly why using, say, a GPL license is a potential issue for a company that is not intending to publish its code.

There is potential for improvement in expanding the languages they cover and in integrating with other solutions. SonarQube is something that I'm quite interested in, something that I want to bring into play. I know that Snyk integrates with it, but I don't know how well it integrates. I will have to see.

Generating reports and visibility through reports are definitely things they can do better.

For how long have I used the solution?

We've been using Snyk for nearly two years.

What do I think about the stability of the solution?

Generally, the stability of Snyk is fine. From time to time the reporting bits, when you look at them on the cloud, can be a little bit sluggish when you start having quite a bit of information in there. But there have been no major outages when we couldn't use it. I don't know if the sluggishness is internet-related or it's something within Snyk. They are based in the United States and I don't know if the traffic across the pond is causing any of these issues.

It's not something that you constantly use all the time. When you want to commit something, it runs on a schedule. When you put something through the pipeline, it runs. But again, there have been no outages or issues with the stability.

What do I think about the scalability of the solution?

We have had no issues with scalability. We haven't needed to do anything special to address that. So far, we have had no problems.

Usage, in our case, will depend on the number of developers that we have. So unless Snyk develops additional features, something more we can use, and we expand because of those capabilities, I don't see a massive increase in our user base. It's a development-orientated solution, with a small number of people from management who generally keep an eye on the reports from a compliance point of view. It all depends on our company. The only impact that will come from Snyk is if it comes out with new features that we would like to implement.

How are customer service and technical support?

We had some chats with technical support at the beginning. They seemed to be pretty responsive. Generally, you communicate with them in a support chat group. If you need more, you can have Zoom sessions. But we only speak with them now if one of the devs finds something that doesn't look right. We haven't spoken to them in a long time.

Which solution did I use previously and why did I switch?

Snyk replaced some potential candidates. We had some people looking at maybe using CoreOS Clair and there were some discussions about what we could use to scan our repository. But we didn't have anything officially in place. In fact, Snyk was one of the first solutions that I put in place as a paid solution for the security of our code.

Security is something that is quite important for us. We take security seriously and it's something that we baked in from the early stages. We try to shift it as far left as possible. Snyk is a result of our organization's approach towards security, rather than vice-versa. It's playing its role alongside our security needs.

How was the initial setup?

In our organization, I ask that things be done and people do them, so I wasn't directly involved in the setup. But the installation seemed to be quite straightforward. I don't get pushback from the dev community. My background is more infrastructure, I'm not a developer, so I can't comment on how easy it is to bring everything together. But when I worked with my devs, when we migrated from Concourse to Jenkins, it wasn't a huge undertaking and it didn't cause us too many headaches.

In terms of developer adoption, they have to use it because we asked them to use it. And once it's part of the pipeline, everything that they push through the pipeline goes through Snyk. It was a company decision to go that way.

The initial rollout took about one week. Most of the stuff was already in place. We just migrated from one pipeline provider to another. It was quite straightforward.

We have a bit of a hybrid approach. Some of it was in the cloud, and we haven't touched that. The integration of the container bit, the CLI integration, is done on our cloud and it's something we maintain. We tried to follow Snyk's recommendations. It has an API that you can use to run some scans, but their full-featured, recommended approach is to use the CLI, running your own instance of Snyk. So we have a container that's running Snyk, and whenever we run the scans we just call on that.
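A container-scanning call from the pipeline can be equally small. This is a sketch only; the image name is a placeholder and it assumes the Snyk CLI is available inside the scan container:

    import subprocess
    import sys

    IMAGE = "registry.example.com/myapp:latest"  # placeholder image name

    # "snyk container test" checks the image's OS packages; passing the
    # Dockerfile lets Snyk also suggest alternative base images.
    completed = subprocess.run(["snyk", "container", "test", IMAGE, "--file=Dockerfile"])
    sys.exit(completed.returncode)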

The deployment involved one or two people internally. When it was just GitHub, it was me and one developer. And when it came to infrastructure, it was me with an infra guy. It depends on the level of expertise that you have in-house and how comfortable people are with similar solutions. At the end of the day, to roll up a container image and pull that into your pipeline is quite straightforward. It's not difficult.

We don't do that much maintenance on Snyk. It's integrated. It's running in the background. We only touch it when we need to touch it. It's not like we need dedicated resources for that.

Between 50 and 70 people are using Snyk at a given time in our organization. Most of them are developers. We might have some QAs who look at something.

What was our ROI?

It hits ROI for us very well in a couple of areas that we want to address: to ensure that we don't have surprises when it comes to vulnerabilities on our dependencies — libraries and images. And from a compliance point of view, we don't want to be in a situation where we're forced to publish code because someone has decided to use libraries that would force us to either publish everything under GPL or put us in a situation where licenses are not compatible and we would have to redo part of the code.

The ROI is one of those things that is difficult to quantify. It's not something where you can say how much money you have saved. But looking at overall cost versus the benefit, it's worth the money.

Time-to-value is a difficult topic because the way that I see it, Snyk is a preventative measure. It's similar to insurance. How much money are you prepared to spend to address a potential risk? By having a solution like Snyk in place, you prevent your company from being an easy target and being exposed. It's not something you can easily quantify, but Snyk falls under the cost of doing business. You want to have something in there because the overall cost and the overall problems will be a lot greater.

What's my experience with pricing, setup cost, and licensing?

Pricing and licensing of Snyk is okay. Their model is based on the number of committers to your source code, which can be a little misleading at times. It can be misleading because we have some QAs and some BAs, for example, who sometimes go in and add comments. They're not writing code, but they're flagged as committers. That can cause some misunderstanding, but we discussed this with Snyk and explained the situation, and they were quite okay with it. So although the number of people they see in Snyk is slightly higher, they're not holding us with our backs against the wall, saying, "Hey, you're over your license."

The only cost is whatever you run on your cloud. If you deploy the CLI integration and you run Snyk you need to take into account the cost, but it's not huge.

Which other solutions did I evaluate?

There are a number of other solutions out there that you can use. We looked at Black Duck from Synopsys and CoreOS Clair for containers. I had a bit of a look at WhiteSource. Because we're using open source software, a lot of our devs like the open source ethos. They had different suggestions so we looked at a number of potential use case scenarios. These days, for example, GitHub also allows you to scan your repos for dependencies and vulnerabilities. AWS also has the ability to scan your base images. You can validate different things at different stages. But the main one for shifting security to the left is Snyk.

In terms of the comprehensiveness and accuracy of Snyk's vulnerability database, I looked at that in the past. When I picked Snyk as a solution and was looking at Black Duck and other companies, I knew Snyk had its own database and was doing quite a lot of research in that area. To me it seems to be quite good compared to other solutions, like GitHub or Amazon. I get more out of Snyk. Snyk picks up more, highlights more, than other solutions I've seen.

Both Black Duck and WhiteSource are more established companies but they're probably more expensive. I haven't looked at the overall costs, but as you throw more into them they tend to be more expensive. Snyk meets our requirements.

What other advice do I have?

If your company develops software, and if you are an open source consumer, you need to have something in place. Do your research and find the best solution. For us, Snyk worked. I am involved in a security working group with my counterparts at our investors. We discussed what we're doing and what we are using, and I discussed Snyk there. I discussed it with a couple of companies in particular, shared ideas, and recommended that they have a look at Snyk. Snyk offers a free tier, so you can take it for a ride and see if you like it. Once you're happy with it, you can move forward.

The biggest lesson I've learned from using Snyk is that it brings in a little bit of discipline in terms of what people can and cannot use. It also highlights the importance of security. It also adds a little bit of structure by surfacing potential issues. That's one of the most important factors. And having something like Snyk means you can validate and you can demonstrate, when meeting your clients and your investors, that you are meeting security needs and concerns.

In terms of the time it takes for developers to find and fix vulnerabilities, it depends on the type of issue. In some cases, for example, if there is an upgrade and there is a new version of the library, Snyk does make recommendations. If Snyk can do something for you, it will do it. It can automatically generate a pull request so you can do a library upgrade. If there is something quite straightforward regarding licensing, it will highlight that for you. But other issues are a little bit more complex. If it's a container image, for example, and there's no immediate image upgrade, maybe you want to do something like upgrade a library within the image. Some things are quite straightforward, and if Snyk can, it recommends them, and it's pretty easy, pretty straightforward. For other situations it will say you can manually upgrade this, but you'll have to do that process on your own.

Snyk's actionable advice when it comes to container vulnerabilities is aligned with the rest of the solution. We were one of the early users of containers. We have had Snyk in place for nearly two years, so when we started, containers were something very new for them. It's definitely better than other solutions which are free. Can it be better? Yes. As always, things can always be improved, but it's more or less on par with what we have on the regular dependency checks that we have on normal libraries as part of the source code.

If you look purely at the source code, we can do it with a SaaS application. You link your GitHub or your code repository with Snyk and, as you commit code, Snyk scans and reports. For containers, we tend to use the integration as part of the CI/CD pipeline as well. All the images are passed through and we're using CLI commands to run this. This requires a little bit of extra setup, but once you put it in place it tends to be quite straightforward and doesn't require any additional work. As for allowing developers to own security for the applications and the containers they run in the cloud, to be honest with you, in a lot of cases their main focus is on developing the app. The scanning is more on the infra side. When it comes to containers and streamlining the application installation, that usually falls on the infra team. They stay on top of that task. We have it integrated and we keep an eye out in case something has been flagged. I just ask them to make sure it's addressed as soon as possible.

We're using Qualys to do external scans and external assessments. We also do penetration testing. But at the end of the day, you have to look at what you want from a tool. Maybe there are other solutions out there that claim to do a lot more. I'm sure that there are other vendors that can potentially give you a more integrated and better view, but they come with additional costs and additional complications. It all depends on what you want to do and how you want to achieve that. For us, the purpose of Snyk was to look at the vulnerabilities in the code or Docker container images, and to address the licensing aspect. 

Some companies go with individual solutions for every single part. For example, they use Clair to scan just the containers and something else to scan just the code. They have linting tools and other things. We're not just using Snyk. For example, we also have linting tools for code quality. This is not something that Snyk is doing. We're trying to use what is suitable for us.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Patrick Lonergan
Associate General Counsel at CircleCI
Real User
Top 5
Provides contextualized, easily actionable intelligence that alerts us to compliance issues
Pros and Cons
  • "FOSSA provided us with contextualized, easily actionable intelligence that alerted us to compliance issues. I could tell FOSSA exactly what I cared about and they would tell me when something was out of policy. I don't want to hear from the compliance tool unless I have an issue that I need to deal with. That was what was great about FOSSA is that it was basically "Here's my policy and only send me an alert if there's something without a policy." I thought that it was really good at doing that."
  • "I wish there was a way that you could have a more global rollout of it, instead of having to do it in each repository individually. It's possible, that's something that is offered now, or maybe if you were using the CI Jenkins, you'd be able to do that. But with Travis, there wasn't an easy way to do that. At least not that I could find. That was probably the biggest issue."

What is our primary use case?

Our primary use case was license compliance. We were doing all the open-source scanning in our CI build using FOSSA. We had a step where FOSSA would be installed, and it would scan all the open-source libraries that were being used and then report back on what those licenses were. That would then be matched against the policies we had preset in the FOSSA UI and let us know if there were any license violations in our use of open source.
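A minimal sketch of that CI step (assuming the FOSSA CLI is installed on the build agent and the API key is supplied via the FOSSA_API_KEY environment variable; the analyze-then-test flow is FOSSA's documented pattern):

    import os
    import subprocess
    import sys

    assert os.environ.get("FOSSA_API_KEY"), "FOSSA_API_KEY must be set"

    # "fossa analyze" discovers and uploads the dependency graph;
    # "fossa test" then exits non-zero if the results violate the
    # policies configured in the FOSSA UI, which fails the CI build.
    for cmd in (["fossa", "analyze"], ["fossa", "test"]):
        completed = subprocess.run(cmd)
        if completed.returncode != 0:
            sys.exit(completed.returncode)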

How has it helped my organization?

Prior to FOSSA, we were really struggling to get open-source scanning set up on our repositories. We were actually using Flexera before we moved over, and we would run a scan on one of our repos and get around 10,000 results. I'm one person, and this is a tiny fraction of my job; I didn't know how I was ever going to get through all those results. Once I saw what FOSSA could do, we were up and running on a lot more repos much more quickly. It wasn't giving us tons of false positives; FOSSA was just giving us what we cared about. We had presets and it was matching against policies. That was a big thing.

FOSSA provides functionality that allows you to publish reports of the dependencies you use. If you were doing attribution for a mobile app, for example, you could iframe FOSSA's report of all the dependencies and use that as the attribution required for a mobile app or other distributed software. That was really nice; it was functionality that put them ahead at the time. Prior to using FOSSA, we would run these scans and figure out the dependencies, and then I had the engineers implement the list of all the attributions we needed in the mobile app. If something changed, I would have to have the engineers redo it, whereas with FOSSA, since those reports were regenerated every time a CI build was run, the list was always up to date. I didn't have to worry about the engineers updating it or keeping it current if something changed. That was a really nice piece of functionality that I liked.

FOSSA provided us with contextualized, easily actionable intelligence that alerted us to compliance issues. I could tell FOSSA exactly what I cared about and it would tell me when something was out of policy. I don't want to hear from the compliance tool unless I have an issue that I need to deal with. That was what was great about FOSSA. It was basically, "Here's my policy; only send me an alert if there's something out of policy." I thought that it was really good at doing that.

As soon as I got an alert from FOSSA, I could reach out to the engineers who worked on or owned that repo and say, "FOSSA's telling me that we're using this dependency that's out of our policy," or that it couldn't find a license for the dependency, or whatever it was, and it would tell me exactly what the issue was. If there was no license on a dependency, I could tell them exactly that. They could look into it and say, "Oh, actually there is a license. For some reason FOSSA wasn't picking it up." Or maybe the project is dual-licensed and FOSSA picked up the GPL license when it's actually available under another license, so it would be fine.

I felt that FOSSA told me exactly when there was an issue and what the issue was, and then I could work with the engineers to easily figure out if there truly was an issue that needed remediation, or if it was some sort of quirk in the process or the tool. The other thing that was helpful is that a lot of times people would come and say, "Send me a list. What are all the dependencies that we use on this project?" I could easily generate those reports in FOSSA. I could go in and see what all the dependencies were and whether each was a transitive or direct dependency. That's all really nicely done in FOSSA's UI. For open-source license compliance, FOSSA had the nicest UI of any of the products that I looked at. We tried a few. For me, on the legal team, that was really what I cared about.

I would describe FOSSA as holistic in that it helps us work across both the legal team and DevOps. Our engineers found it easy enough to use. I think a lot of engineers are willing to follow a policy, but they're not really interested in being in charge of managing it. They liked the fact that they could easily get into the tool and see if there was an issue, and that they didn't have to do a lot of tinkering with it to keep it running. Their favorite part was probably that it was easy enough for them to use and help me out, but it didn't require a lot of work on their part.

It enabled us to deploy software at scale. It's a huge company. We could keep doing what we were doing and feel that we were in compliance with all of our open-source obligations.

FOSSA also decreased the time our staff spent on troubleshooting. It helped us save time with staying on top of open-source license compliance. Once it was set up, it kind of ran itself. It only reached out to me with an issue when it thought there was one.

I would say it probably saved me on average five or six hours a week. It's allowed me to only spend a few hours a week doing things related to open source license compliance, which I thought was great.

What is most valuable?

The out-of-the-box policy setting was great. It was very closely aligned with what we needed. We had multiple policies depending on which code base we were scanning: we had some code that was software as a service and some code that was distributed, and we had different policies for those. The policy-setting in FOSSA is the number one reason I picked it, because setting up the policies, and having the different policies, was so easy and so intuitive. It was really exactly what we needed at my company, and the way we check against policy and licensing really meshed well with the way FOSSA did it.

I like that their result set was very tailored. Some other open-source license management tools, like Flexera, for example, would do a really in-depth, crazy scan where it gives you 10,000 results, and then you have to go through and check which results you actually care about and clear the stuff that you're not concerned about, which was too time-consuming. FOSSA is very tailored. It gives us the dependencies that we know we use.

FOSSA's result set was very tailored to what I cared about. I didn't have to spend a whole bunch of time clearing a whole bunch of false positives. I was really the only person on the legal team doing open-source compliance. I didn't have a whole team of compliance people to go through and look at a million potential false positives. I needed something that would just give me the information I cared about and then tell me if there was a change once I had approved the ongoing list.

In terms of its compatibility with the wide range of developer ecosystem tools, at my previous company we used it with three different CI tools: CircleCI, Travis, and Jenkins. It was set up to work best with CircleCI. I thought it was pretty easy to set up with all three, though it depended on the complexity of your CI setup. With Jenkins, for example, which is notoriously difficult to set up, the FOSSA setup was also pretty complex.

Overall, I thought it was pretty easy to set up. I did most of the coding myself and I'm not a software engineer anymore but I was still able to figure it out. It was pretty easy, pretty compatible, pretty user-friendly and certainly, for an actual true software developer, not a reformed one, it wouldn't be a problem for someone to set up and use.

It made it so that it was something that even a legal team could set up. It's a one-time setup and then you're just off and running unless you change something or add a new repository you want to do scanning on. It's great. Setting it up in the CI and having it run was one of the appeals. 

What needs improvement?

I wish there was a way that you could have a more global rollout of it, instead of having to do it in each repository individually. It's possible that's something that is offered now, or maybe if you were using Jenkins as your CI, you'd be able to do that. But with Travis, there wasn't an easy way to do that. At least not that I could find. That was probably the biggest issue.

Another thing is that they were super great to work with. I could contact them and the engineers were very responsive to the questions I had, or if there was some issue I found they were always helpful in working it out. I would say that the documentation would probably be another area that could use some work. If I was doing something that was undocumented, which I might know about because I had talked to one of the engineers at FOSSA, then our engineers were always a little worried that it wasn't documented and wondered whether they should be using an undocumented feature. I felt like the documentation often trailed the product functionality a little bit. If you were trying to solve problems on your own, sometimes it wasn't the easiest.

For how long have I used the solution?

I used FOSSA for a year. 

We had the FOSSA CLI integrated into our CI service, which in our case was Travis. On each build, we would install whatever the current version of the FOSSA CLI was.

What do I think about the stability of the solution?

We only ran into a few instances of downtime with them.

What do I think about the scalability of the solution?

The scalability was good. We had no issues with the scaling to our whole organization, aside from just the limits on my time to spend doing it.

I'm not sure how many people were actually using it, but all of the engineers had access. Then there were a few of us on the legal and security teams who all had access too.

I supported most of the engineering team because I was more technical and a more IP-focused attorney amongst other things. I had a lot of relationships with different people on our engineering team at my company. I would reach out to them directly. When we started using FOSSA we were one of the earlier customers. I got to know them, they were a couple of blocks away, and they were really accessible. I would end up reaching out to them via a Slack channel.

Which solution did I use previously and why did I switch?

We also used Flexera. I thought the setup was too complicated and the results weren't focused enough. It wasn't set up in our CI system. You'd have to manually run scans periodically instead of it being run every time that a build was run in CI. It wasn't scalable for us and it was not efficient enough for us with the team size that we had.

How was the initial setup?

The initial setup was straightforward. Depending on how many repos you have, it can be a bit time-consuming, at least in the Travis world, because you have to do it for every single project; this might just be because of how the setup works in Travis. There might be a simpler way, but we spent a decent amount of time getting it set up, only because we had around a thousand repos to set up. It wasn't so much that any individual setup was complicated. It was the number of projects that needed to be set up, and the fact that you had to do each one individually. The entire setup took a few months.

There is a simpler setup that FOSSA offers, which is like a more traditional scan where it's not set up in CI. That was set up where it scanned all of our repos. When we very first started with FOSSA, it did that in a matter of a few hours. We had results for all of our projects in a few hours. It was just the actual CI setup part that took a few months.

I had a priority list of things that I cared about. I evaluated the repos that I thought were a higher priority to know where we were. I had a list that I created and worked down from. They were either bigger, distributed projects, or for a variety of other reasons, I might've prioritized them and then just worked down through that list.

What about the implementation team?

We did not use a third-party for the implementation. Although I understand that FOSSA offers professional services to help with implementation, we decided to do it ourselves.

It was primarily me, with input from engineers. I realized that once I had a pretty good idea of how to set it up on a Go project, the way CI for a Go project was set up at my previous company, I could replicate that work. Usually, it would be me working with an engineer who was familiar with that type of project. Then I would just take it from there.

We don't need too many people for maintenance. I would reach out to whatever engineer I thought I needed, depending on what the project was, who owned it, what the issue was, and things like that.

All the engineers had access if they wanted to. I don't know how many of them used it.

What was our ROI?

We did see ROI. We had results the very first day that we had set it up. My confidence in what the results were was much higher with FOSSA than it had been with Flexera. I would say that we've had a nice return on investment just from the time spent by our team reviewing the results, plus our confidence in the accuracy of those results.

What's my experience with pricing, setup cost, and licensing?

In terms of pricing, I thought FOSSA was reasonable, but slightly more expensive than Flexera, if I recall correctly. On the other hand, you weren't having to do the IT hosting yourself. I certainly think that in terms of time saved, it was more than satisfactory.

Which other solutions did I evaluate?

We had also looked at Black Duck and that was pretty much it.

My recollection was that Black Duck was a lot like Flexera. It wasn't set up in CI and the result set was too big. Also, the setup was hard. We had to host Flexera ourselves: I had to have IT set up an AWS instance that I could then use for the Flexera setup. It was a lot of work.

What other advice do I have?

It's easy to use, it's easy to maintain, and it saves you time on your open-source license compliance work. I felt like the solution was very tailored for open-source license compliance, with their license policies.

I would rate FOSSA a nine out of ten. There were a few little things that could be improved, but overall for my use case, it was great.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Principal Consultant at a tech services company with 11-50 employees
Consultant
Top 10
Provides extensive guidance for writing secure code and pointing to vulnerable open source libraries
Pros and Cons
  • "Within SCA, there is an extremely valuable feature called vulnerable methods. It is able to determine within a vulnerable library which methods are vulnerable. That is very valuable, because in the vast majority of cases where a library is vulnerable, none of the vulnerable methods are actually used by the code. So, if we want to prioritize the way open source libraries are updated when a library is found vulnerable, then we want to prioritize the libraries which have vulnerable methods used within the code."
  • "Veracode has a few shortcomings in terms of how they handle certain components of the UI. For example, in the case of the false positive, it would be highly desirable if the false positive don't show up again on the UI, instead still showing up for any subsequent scan as a false positive. There is a little bit of cluttering that could be avoided."

What is our primary use case?

Software Composition Analysis (SCA) is used to detect vulnerabilities in open source libraries, which are used by our customers for their own product. 

We are a consulting company who provides consulting services to clients. We don't buy the software for our own internal use. However, we advise customers about which solutions will fit their environment.

Most of our clients use SCA for cloud applications. 

How has it helped my organization?

For application security, the SCA product from Veracode is a good solution. It has a good balance. Altogether, the balance between the outcome of the tool, the speed of the tool, and its cost make it a good choice. 

One of the reasons why we recommend Veracode is that SAST and SCA tools, independent of the vendor, should work seamlessly within the build pipeline, and Veracode does a good job in this respect.

In this day and age, all software is developed using a large number of open source libraries. It is kind of unavoidable; any application has a lot of embedded libraries. In our experience, many times customers don't realize that it is not just their own code that can be vulnerable, but also the open source libraries that they may take for granted. In many ways, this has been a learning experience for customers: there are other components, the open source libraries, to worry about, and SCA is an invaluable tool to address those issues.

What is most valuable?

SCA provides guidance for fixing vulnerabilities. It provides extensive guidance both for writing secure code and for pointing to the vulnerable open source libraries that are being used.

In terms of the time it takes to detect a vulnerability, both in the source code and in open source libraries, it is efficient.

Within SCA, there is an extremely valuable feature called vulnerable methods. It is able to determine within a vulnerable library which methods are vulnerable. That is very valuable, because in the vast majority of cases where a library is vulnerable, none of the vulnerable methods are actually used by the code. So, if we want to prioritize the way open source libraries are updated when a library is found vulnerable, then we want to prioritize the libraries which have vulnerable methods used within the code. 

The Static Analysis Pipeline Scan is faster than the traditional scan that Veracode has. All Veracode products are fast. I have no complaints. On average, a piece of code for a customer takes 15 to 20 minutes to build, versus three or four minutes for Veracode's Static Analysis Pipeline Scan. So, that is around 20 to 30 percent of the total build time, which is fairly fast.
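For reference, the pipeline scan ships as a standalone jar that is invoked inside the build. A sketch of a typical invocation (the jar path, artifact name, and credential variable names below are placeholders):

    import os
    import subprocess

    # --veracode_api_id, --veracode_api_key, and --file are documented
    # pipeline-scan options; a non-zero exit (findings above the
    # configured threshold) fails the build via check=True.
    subprocess.run(
        [
            "java", "-jar", "pipeline-scan.jar",
            "--veracode_api_id", os.environ["VERACODE_API_ID"],
            "--veracode_api_key", os.environ["VERACODE_API_KEY"],
            "--file", "target/app.jar",
        ],
        check=True,
    )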

What needs improvement?

Most of our time is spent configuring the SAST and SCA tools. I would consider that one of the weak points of the product. Otherwise, once the product is set up on the computer, it is fairly fast.

Like many tools, Veracode has a good number of false positives. However, there are no tools on the market at this point that can understand the scope of an application. For example, if I have an application with only internal APIs and no UI, Veracode cannot detect that. It might detect that the HTML bodies of requests are not sanitized and flag the application as prone to cross-site scripting and SQL injections. But, in reality, that is a false positive. It will be almost impossible for a tool to understand the scope unless we start using machine learning and AI. So, at this point, false positives are inevitable. Obviously, that doesn't make the developers happy, but I don't think there is a way around it, and it is not just Veracode. It's the nature of the problem, which cannot be solved with current technologies.

Once we explain to the developers why there are false positives, they understand. In Veracode, findings that are false positives can be flagged as such, so the next time they run the same scan, the same "vulnerability" will still be flagged as a false positive. Therefore, it's not that bad from that point of view.

Veracode has a few shortcomings in terms of how they handle certain components of the UI. For example, in the case of false positives, it would be highly desirable if a false positive didn't show up again in the UI, instead of still showing up in every subsequent scan flagged as a false positive. There is a little bit of cluttering that could be avoided. However, that is not necessarily a shortcoming of the product; I think it's more a shortcoming of the UI, the way it's visualized. Going forward, I personally don't want to see any more vulnerabilities that I have already flagged as false positives.

It does take some time to understand the way the product works and to be able to configure it properly. Veracode is aware of that. Because the SCA tool actually comes from a company that they acquired, SourceClear, the SCA tool and the SAST tool are not completely integrated at this point. You are still dealing with two separate products, which can cause some headaches. I did have a conversation with the Veracode development team not too long ago where I voiced my concerns. They acknowledged that they are aware of it and are working on it. Developers have limited amounts of time dedicated to learning how to use a tool, so they need quite a bit of help, especially when we're talking about this type of integration between SAST and SCA. I would really like to see better integration between the SAST and SCA tools.

For how long have I used the solution?

I have been using it for almost a year.

What do I think about the stability of the solution?

It is stable. One of the selling points is that it is a cloud solution. The maintenance is more about integrating Veracode into the pipeline. There is a first-time effort, then you can pretty much reproduce the same pipeline code for all the development teams. At that point, once everything runs in the pipeline, I think the maintenance is minimal.

What do I think about the scalability of the solution?

We have deployed the solution at medium-sized FinTech and technology companies with more than 100 employees.

How are customer service and technical support?

Their technical support is less than stellar. They have essentially two tiers: technical support and consulting support. With consulting support, you have the opportunity to talk to people who have intimate knowledge of the product, but that usually takes a bit of effort, so customers still tend to go through the initial technical support, which is less than stellar. We rarely get an answer from the technical support. They seem more like a first line of defense; in reality, they are not very helpful. Until we get to the second level, we can't accomplish anything. This is another complaint that I have brought up with Veracode.

Which solution did I use previously and why did I switch?

One of the reasons why we decided on Veracode is because they have an integrated solution of SAST and SCA within the same platform. Instead of relying upon two different, separate products, the attraction of using a Veracode was that we could use one platform to cover SAST and SCA. 

How was the initial setup?

The SAST tool is pretty straightforward; there is very little complexity. The pipeline works very well. The SCA tool is more complex to set up, and it doesn't integrate very well with the SAST tool. At the end of the day, you have essentially two separate products with two separate setups. Also, you have two different reports because the report integration is not quite there. However, I'm hopeful that they are going to fix that soon. They acquired SourceClear less than two years ago, so they are still going through growing pains of integrating these two products.

Setting up the pipeline is fairly straightforward. It works with a lot of the main languages, like Java, Python, etc. We have deployed it across several development teams. Once we created a pipeline and handed the code to the developers, they were able to make a little adjustment here or there, and then it worked.

What about the implementation team?

For both SCA and SAST tools, including documentation, providing the code, writing the code for the pipeline, and giving some training to the developers, a deployment can take us close to two weeks. 

Deploying automated process tools, like Veracode, Qualys, and Checkmarx, does take more effort than uploading the code manually each time.

What was our ROI?

As long as developers use the tool and Veracode consistently, that can reduce the cost of penetration testing.

What's my experience with pricing, setup cost, and licensing?

Checkmarx is a very good solution and probably a better solution than Veracode, but it costs four times as much as Veracode. You need an entire team to maintain Checkmarx. You also need on-premise servers. So, it is a solution more for an enterprise customer. If you have a small- to medium-sized company, Checkmarx is very hard to use, because it takes so many resources. From this point of view, I would certainly recommend for now, Veracode for small- to medium-sized businesses. 

Compared to other similar products, the licensing and pricing are definitely competitive. If you see Checkmarx as the market leader, then we are talking about Veracode being a fraction of the cost. You also have to consider your hidden costs: you need a team to maintain it, a server, and resources. From that point of view, Veracode is great because the cost is really a fraction of many competitors. 

Veracode provides a very good balance between a working solution and cost.

Which other solutions did I evaluate?

There are other products in the market. However, some of those products are extremely expensive or require a larger team to support them. Often, they have to be installed on-prem. Veracode is a bit more appealing for organizations that don't have large AppSec teams or where budget is a constraint. In this respect, SCA is a good solution.

We have been using Checkmarx for years, but mainly for their on-prem solution. They do have an offering in the cloud, but we haven't done any side-by-side tests in respect to speed. We did do a side-by-side comparison between Veracode and Checkmarx two or three years ago from a technical ability standpoint. At that time, Checkmarx came in a bit ahead of Veracode.

Checkmarx is more complex to set up because it is on-prem, with multiple servers, and there is a lot more involved in standing it up. If you have a larger budget and team, look into Checkmarx because it is a market leader. However, when it comes to price, I would choose Veracode for a smaller company, not a large enterprise.

Another consideration for Checkmarx, as an on-prem solution, is that you are pretty much assured that your code doesn't leave your company. With companies like Veracode, even if they say that you only upload the binary code, that's not quite true: the binary code can be reverse-engineered and the source code can essentially be reconstructed. For example, Veracode would not be suitable for a government agency or a government consultancy.

For DAST, our customers like to use Qualys Web Application Scanning. There are very few players out there that can test APIs, but Qualys is one of them. 

Another promising solution that allows for testing APIs is Wallarm. We have done a couple of PoCs with them.

We tested Black Duck a few years ago, but they only had a SCA solution. They didn't have a SAST solution. I think they do now have a SAST solution because they acquired another company, Fujita.

What other advice do I have?

I don't think that Veracode has helped developers with security training, but it does help developers have a reality check on the code they write and the open source libraries they use. That is the best value that developers can get from the product.

Veracode products can be run as part of the development pipeline. That is also valuable.

It integrates with tools like GitHub or Jenkins. At a high level, it does integrate with most pipeline tools. It would be a showstopper if security were not incorporated into the developer workflows. We are past the time when security engineers would run an SCA or DAST scan on the code and then hand the results off to the development team. What works instead is to inject a security tool into the development pipeline, which is why it is absolutely paramount that tools like Veracode be part of the build pipeline.

We limited our use to SAST and SCA. We haven't used any of the penetration testing, especially the DAST solution that they have. For that, they are behind the curve, meaning that there are other, more established products in the market. In my opinion, they don't have a viable product for DAST, because I believe they are not even testing APIs. So, it's not mature enough. We have also never used their pen testing because that is one of the services that we provide.

At this point, Veracode is one of the best solutions available, though it's not perfect by any means, but you have to work with whatever you have.

I will give the solution a seven (out of 10). When they integrate the SCA and SAST portions more tightly together, I could probably bump it up to an eight. Also, if they make improvements to the UI and the support, they can get a better rating. However, at this point, I would still pick Veracode for a company who doesn't have a million dollar plus budget.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Wes Kanazawa
Sr. DevOps Engineer at Primerica
Real User
Top 5
Enables our developers to proactively select components that don't have a vulnerability or a licensing issue
Pros and Cons
  • "The proxy repository is probably the most valuable feature to us because it allows us to be more proactive in our builds. We're no longer tied to saving components to our repository."
  • "It would be helpful if it had a more detailed view of what has been quarantined, for people who don't have Lifecycle licenses. Other than that, it's pretty good."

What is our primary use case?

We're using it to change the way we handle our open source. We used to actually save our open-source components, and now we're moving towards a firewall approach where we proxy Maven repos or npm repos, and we use those proxies so that we can keep ourselves from pulling in known-bad components at build time. We're able to be more proactive on our builds.

How has it helped my organization?

It's allowed our developers, instead of waiting till the last minute before a release, to know well ahead of time that the components are bad and they are able to proactively select different components that don't have a vulnerability or a licensing issue.

Also, the solution's data quality seems to be good. We haven't had any issues. We're definitely able to solve problems a lot faster and get answers to the developers a lot faster.

And Nexus Lifecycle integrates well with your existing DevOps tools. We were able to put it right into our build pipelines. We use Jenkins and we're able to stop the builds right in the actual build process whenever there's a quarantined item.
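Mechanically, the firewall blocks a quarantined component at the proxy, so it's the ordinary dependency-resolution step that breaks the build. A minimal sketch of such a build step (assumptions: an npm project and a build agent that resolves through the proxy repository):

    import subprocess
    import sys

    # With the firewall in front of the proxy repository, a quarantined
    # component simply fails to download, so the normal install step
    # exits non-zero and the pipeline stops.
    completed = subprocess.run(["npm", "ci"])
    if completed.returncode != 0:
        sys.exit("Dependency resolution failed - check the firewall quarantine list.")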

In addition, it has brought open-source intelligence and policy enforcement across our SDLC. It has totally changed the way we do our process. We have been able to speed up the approval process of OSS. Given the policies, we're able to say, "These are okay to use." We've been able to put in guardrails to allow development to move faster using the product. Our pipelines are automated and it is definitely a key component of our automation.

Finally, the developers like it because they're able to see and fix their issues right away. That has improved. For example, let's say a developer had to come to us and said, "Hey, scan this. I want to use it," and we scan it and it has a vulnerability. They've already asked us to do something that they could have done through the firewall product or Lifecycle. Suppose it takes us a day and then we turn around and say, "Okay, here are the results," and we say they can use this version of that product. They've got to download it and see if it works. So we're already saving a day there. But then let's say they have to send it off to security to get approval on something that security would probably approve anyways. It's just they didn't know security would approve it. They would have to wait two or three days for security to come back and give them an answer. So we're looking at possibly saving four days on a piece of code.

What is most valuable?

The proxy repository is probably the most valuable feature to us because it allows us to be more proactive in our builds. We're no longer tied to saving components to our repository.

The default policies are good; they're a great place to start when you are looking to build your own policies. We mostly use the default policies, perhaps with changes here and there. It's deceptively easy to understand. It definitely provides the flexibility we need. There's a lot more that you can get into; it definitely requires training to properly use the policies.

We like the integrations into developer tooling. We use the Lifecycle piece for some of our developers and it integrates easily into Eclipse and into Visual Studio code. It's a good product for that.

What needs improvement?

It would be helpful if it had a more detailed view of what has been quarantined, for people who don't have Lifecycle licenses. Other than that, it's pretty good.

For how long have I used the solution?

We've been using it for about a year now.

What do I think about the stability of the solution?

The stability has been great. We haven't had any issues in the year that we've had it running. So far, so good.

What do I think about the scalability of the solution?

It's probably not that scalable in its current state. That has to do with the way the applications are designed. I think that will change when they start working on the HA solutions for Nexus and Nexus IQ. But right now, it's not very scalable.

How are customer service and technical support?

Sonatype's technical support for this solution has been great. They answer my questions, even my stupid questions. I might be asking them, "Hey, how do I do this? I can't find it." and they'll say "Oh, it's just this button right here." They never make me feel too bad about it.

Which solution did I use previously and why did I switch?

We didn't have something that does what the firewall does. We used a different repository and used Nexus IQ to do the enforcement of policies by scanning OSS components individually. It's nice having it happen automatically on the repositories now.

How was the initial setup?

The directions on the site are good. Once you follow those, you're good. But if you're looking to set it up by just clicking around, you will probably have a hard time figuring it out. But it's easy once you know what you're doing.

From inserting the license file to proxying my first repos, it took about an hour, at the most.

We were doing a conversion. So the implementation strategy, if we're just talking about the firewall, was that we already had Nexus; we bought Nexus and the firewall at the same time. Once Nexus was installed and set up, it was a matter of importing our repositories from Artifactory Pro and then connecting the proxy repositories. I can't say there was any "super-strategy." It was just turning it on, getting it going, and then moving the developers over to it using their settings.xml files, etc. We had to set our .npmrc files to point to our new repository using the firewall, and for the repositories that have a firewall, it had to be turned on for them.
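For reference, pointing the build tools at the proxy repositories comes down to a couple of small config changes. A sketch with placeholder hostnames and repository names:

    # ~/.npmrc - route npm installs through the Nexus proxy repository
    registry=https://nexus.example.com/repository/npm-proxy/

    <!-- ~/.m2/settings.xml - mirror Maven traffic through Nexus -->
    <settings>
      <mirrors>
        <mirror>
          <id>nexus</id>
          <mirrorOf>*</mirrorOf>
          <url>https://nexus.example.com/repository/maven-public/</url>
        </mirror>
      </mirrors>
    </settings>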

What about the implementation team?

We did it ourselves.

What was our ROI?

We don't have any evidence that the solution improves the time it takes us to release secure apps to market because we haven't released an app yet, but I'm sure it will.

Just the dev happiness is already a type of ROI, in addition to how fast they're able to go using it.

What's my experience with pricing, setup cost, and licensing?

We pay yearly.

Which other solutions did I evaluate?

I looked at a few others, like Black Duck, and I was not impressed by them. I didn't get a chance to actually use Black Duck but everything I read said that Black Duck came up with more false positives than Sonatype.

What other advice do I have?

My advice would be to use it as soon as you can. Get it implemented into your environment as quickly as you can because it's going to help. Once you get it, get your devs on it because they're going to thank you for it.

All of our development is happening using the firewall. All our build pipelines are going through there. As far as licensed users go who can look at Nexus, we've got about 35. They range from devs to security personnel to DevOps people.

All our applications are moving over to it, so that's definitely going to increase the usage. We've got about another 200 applications on the board that will come into our greenfield process so they will be pulled straight into that repository using the firewall. It's definitely going to keep growing.

For deployment and maintenance of this solution there is really just our DevOps team of about four people, but I'm primarily responsible.

I would rate it a 10 out of 10. It does everything I need it to do.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Principal Software Architect at a tech services company with 10,001+ employees
Real User
Top 20
Scalable and stable, with a broad range of features
Pros and Cons
  • "The solution boasts a broad range of features and covers much of what an ideal SCA tool should."
  • "The initial setup could be simplified."

What is our primary use case?

To my knowledge, we are using the latest SaaS version.

What is most valuable?

The solution boasts a broad range of features and covers much of what an ideal SCA tool should. It covers containers. One can create teams and, should an issue be encountered, send an alert to the team's distribution list (DL).

I am quite happy with WhiteSource. It is very good and provides many things, including extensive reports involving vulnerabilities. 

What needs improvement?

I am not clear on whether WhiteSource provides an on-premises service. I know that its competitors provide on-premises and SaaS-based services for the same licensing fee and model, but I am not sure if this applies to WhiteSource as well. I believe it does not.

It is preferable to use on-cloud services, although an on-premises option should equally be available for those who would prefer not to go for SaaS-based hosting. The licensing model should be the same for the different options.

The initial setup could be simplified. 

For how long have I used the solution?

I have been using WhiteSource for more than a year. 

What do I think about the stability of the solution?

The solution is very stable. 

What do I think about the scalability of the solution?

It is a prerequisite that the solution is scalable, as it is SaaS-based.

How are customer service and technical support?

I have not had experience with customer support. 

How was the initial setup?

The initial setup was of intermediate complexity. It was neither complex nor straightforward; it could have been easier. Understandably, it involved a certain amount of configuration.

What's my experience with pricing, setup cost, and licensing?

I cannot comment on billing, as this was handled by other departments in my previous organization. 

As we were using a SaaS-based service, the solution must be scalable, although my understanding is that this depends on the licensing model one is using.

Which other solutions did I evaluate?

The reason I logged into the IT Central Station website is that I was looking for crisp documentation so that I could compare WhiteSource with Black Duck. I did not find what I was looking for. All I found was a conglomerate of user experiences, not the research reports I was searching for.

I am currently using both of these products.

What other advice do I have?

I rate WhiteSource as an eight out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.