We use both static and dynamic scanning. We run the code through the static scanner whenever we make any modifications, and periodically we also run dynamic scans across several applications. We use Veracode to check for specific vulnerabilities such as cross-site scripting. When we are checking for those vulnerabilities, we take the portion of code that is going to be generated and run the scanner on it.
Application Security Tools Scan Reviews
Showing reviews of the top ranking products in Application Security Tools, containing the term Scan
reviewer1542384 says in a Veracode review
Senior Project Manager at a computer software company with 501-1,000 employees
We are customers and end-users. We don't really have a business relationship with Veracode.
I'm more from the performance testing side of things. I've just added the security testing to my list of responsibilities recently.
We're using a mix of deployment models. We use both on-premises and cloud deployments.
It's a good tool. I've done some comparisons with both SAST and DAST. It gives us an end-to-end sort of feature that we appreciate. Rather than doing SAST with one tool and DAST with another tool, I prefer going with Veracode, which offers both.
You can learn both static and dynamic scans with a single tool. You could effectively negotiate a price and do both. If you have some simple apps, from a CAC standpoint, I'd recommend folks use Veracode.
I'd rate the solution at a seven out of ten.
reviewer1596348 says in a Veracode review
IT security architect at a consumer goods company with 10,001+ employees
The solution could improve its Dynamic Application Security Testing (DAST).
There could be better support for different languages. In some languages it is very difficult to prepare the solution for static analysis, and this procedure is really hard in a pipeline, such as GitHub. They should make it easy to scan projects in any language, like other vendors such as Checkmarx do.
We have found there are a lot of false positives, and the severity ratings we receive are different compared to other vendors' solutions. For example, in Veracode we receive a rating of low, but in other solutions we receive a rating of high when doing the same analysis.
Nachu Subramanian says in a Veracode review
Automation Practice Leader at a financial services firm with 10,001+ employees
The solution has issues with scanning. It decodes the binaries that we are trying to scan and then scans the decoded code for vulnerabilities, rather than scanning the code itself directly. They really need two different ways of scanning, one for static analysis and one for dynamic analysis, and they shouldn't decode the binaries to do the security scanning. It's a challenge for us and doesn't work too well.
As an additional feature, I'd like to see third-party vulnerability scanning as well as container image scanning and interactive application security testing (IAST). Those are some of the features that Veracode needs to improve. Aside from that, the API is very challenging to integrate with the different tools. I think Veracode can do better in those areas.
Reviewer339593 says in a Veracode review
Cybersecurity Executive at a computer software company with 51-200 employees
We utilize it to scan our in-house developed software, as a part of the CI/CD life cycle. Our primary use case is providing reporting from Veracode to our developers. We are still early on in the process of integrating Veracode into our life cycle, so we haven't consumed all features available to us yet. But we are betting on utilizing the API integration functionality in the long-term. That will allow us to automate the areas that security is responsible for, including invoking the scanning and providing the output to our developers so that they can correct any findings.
Right now, it hasn't affected our AppSec process, but our 2022 strategy is to implement multiple components of Veracode into our CI/CD life cycle, along with the DAST component. The goal is to bridge that with automation to provide something closer to real-time feedback to the developers and our DevOps engineering team. We are also looking for it to save us productivity time across the board, including security.
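The automation goal described here — invoking the scan through an API and handing the output to developers — can be sketched as a small post-processing step. This is a hypothetical illustration: the finding fields (`severity`, `file`, `cwe`) and the severity names are assumptions for the sake of the sketch, not the actual Veracode API schema; a real integration would fetch findings from the vendor's REST API.

```python
from collections import defaultdict

def summarize_findings(findings):
    """Group raw scanner findings by severity so developers get a
    prioritized, per-severity list instead of one flat dump."""
    by_severity = defaultdict(list)
    for f in findings:
        by_severity[f["severity"]].append(f)
    # Order the report from most to least urgent.
    order = ["high", "medium", "low"]
    return {sev: by_severity[sev] for sev in order if sev in by_severity}

# Sample findings standing in for an API response.
sample = [
    {"severity": "low", "file": "util.py", "cwe": "CWE-117"},
    {"severity": "high", "file": "auth.py", "cwe": "CWE-89"},
]
report = summarize_findings(sample)
```

In a pipeline, a step like this would run right after the scan completes, and the grouped output would be what gets forwarded to the development team for correction.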
It's a SaaS solution.
Veracode provides guidance for fixing vulnerabilities. It provides guidance to help us understand what it flags, and what we can do about it. It still takes some interpretation and insight on our side, but we aren't generally security experts, so we get good information from Veracode to help inform us.
The developers are able to understand the types of issues Veracode looks for, and then as they see that happen, it helps them to learn. It's good because they consider it the next time and hopefully, we don't need Veracode to flag the issue because there is no issue.
With respect to efficiency when it comes to creating secure software, Veracode is able to help us with very low overhead. There's not a lot of work needed on our side unnecessarily. Once we've wired everything together, it's seamless to get the scan done and get the results back and know what we need to do about them.
We use Veracode for some of our older, more monolithic software, as well as for our newer solutions, which are designed to be cloud-native. We've found Veracode useful in both use cases; first, with our huge monolithic software, as well as with our microservices cloud-native solutions.
In terms of AppSec, there are a lot of benefits that cloud-native design brings in terms of not only cost and scalability, but testability and security. Certainly, the design patterns of cloud-native are well aligned with delivering good security practices. Working with products that support cloud-native solutions is an important part of our evolution.
Using Veracode has helped with developer security training and skill-building. It's definitely a good way to create awareness and to deliver information that's meaningful and in context. It's not abstract or theoretical. It's the code that they've written yesterday that they're getting feedback on, and it is a pretty ideal way to learn and improve.
The static scan capability is very powerful. It's very good in terms of the signal-to-noise ratio. The findings that we get are meaningful, or at least understandable, and there's not a bunch of junk like some other code scanning tools can produce. Junk-filled results make it hard to find the valuable bits. Veracode is highly effective at finding meaningful issues.
The speed of the static scan is okay. It meets or exceeds our expectations. For our monolithic application, which is a million lines of code, it takes a while to scan, but that's totally understandable. If it could be done magically in five minutes, I wouldn't say that's bad. Overall, it's very reasonable and appropriate.
Veracode has policy reporting features for ensuring compliance with industry standards and regulations. We have one such policy configured and it's helpful for highlighting high-priority areas. We can address them and focus our efforts, which ensures that we're spending our time in the best way possible for security improvement. The policy is a good structure to guide results over time.
We use Veracode as one metric that we track internally. It gives us information in terms of knowing that we are resolving issues and not introducing issues. I cannot estimate metrics such as, for example, Veracode has made us 10% more secure. I can certainly say it's very important when we talk to our customers about the steps we follow. We do external pen tests, we do web app pen tests, and we also use Veracode. It's certainly very helpful in those conversations, where we can state that it is one of our security practices, but there's no outcome-based quantitative statistic that I can point to.
reviewer1705929 says in a Veracode review
Sr. VP Engineering at a computer software company with 51-200 employees
There are three areas where we started using Veracode immediately. One is static component analysis. The second is their static application security test, where they take a static version of your code and scan through it, looking for security vulnerabilities. The third piece is the DAST product or dynamic application security test.
We also use their manual pen-testing professional services solution in which they manually hit a live version of your product and try to break it or to break through passwords or try to get to your database layer—all that stuff that hackers typically do.
Using Veracode has helped to improve our organization in that we now have discipline in terms of periodically scanning our systems. We do this every six months, and it is done to meet our compliance requirements.
We are now at the point where it is integrated as part of our software lifecycle automation. I can't point to a particular example of how it has improved our product, although it has helped in terms of validating our product. Also, it has shown us the competency of our teams.
We use it for static scans. It is mandatory in our company for every sort of project.
Veracode gives the organization an understanding of the security bugs and security holes in our software, and tells us whether the software is production-ready. It is used as gate management, so we can quickly understand whether the software is suitable for deployment to production.
My job is to help projects by getting the data integrated in Veracode. I don't own the code or develop code. In this area, I am a little bit like an integration specialist.
We use Azure and AWS, though AWS is relatively fresh as we are now just starting to define guidelines and how the architecture will look. Eventually, within a half year to a year, we would like to have deployments there. I am not sure if dynamic scanning is possible in AWS Cloud. If so, that would be just great.
Our primary use case for Veracode is SAST and SCA in our SDLC pipelines. We also use it for DAST on a periodic basis and time-based scans on our staging system. We use the training modules for certifying all our developers annually.
In addition, we use Veracode to scan within our build pipeline. We do use Greenlight, which is their IDE solution for preventing vulnerability issues.
We are FedRAMP certified as a company, so we use Veracode as part of our certification process, along with ISO 27001 and various other certifications we have.
Earlier, we did not have any dedicated tool for the security analysis of our application. It was quite challenging for us because the application was accessed by users on a day-to-day basis, and security flaws could leave it prone to third-party attacks, malware, unauthenticated access, and so on. Veracode gives us a complete scanning report, which is very useful. It is informative and helpful for understanding the things that we need to focus on.
Within three months of its implementation, we realized that it is a very powerful solution, and it works perfectly for all the use cases of our applications. Scanning through the application code is a very big task, and Veracode does that perfectly. It enhances the development and the coding work and is helpful for the development team and the product team.
Now, there is peace of mind. All the static and dynamic scans are done by Veracode, and we are making sure that there are no security flaws in the application. The automation of the analysis is helpful and saves us time and cost.
reviewer2041767 says in a Veracode review
Senior Software Engineer at a tech vendor with 11-50 employees
I like Veracode's integration with our CI/CD. It automatically scans our code when we do the build. It can also detect any security flaws in our third-party libraries. Veracode is good at pinpointing the sections of code that have vulnerabilities.
Justin Swanson says in a Veracode review
Manager of Application Development and Integrations at a university with 1,001-5,000 employees
We use Veracode for dynamic, static, and software composition scanning. Veracode is a SaaS solution.
We initially had more than 15,000 vulnerabilities. Veracode helped us to regulate all the teams. I gave out consultant-level access and a basic level of access to developers. My manager and I trained the developers in secure coding practices.
DevSecOps is a process that helps improve security in software development, and from that perspective Veracode is a great way to improve security. From a DAST perspective, however, it is not as good because the results cannot be easily integrated into the CI/CD pipeline. Integration with Jenkins is seamless. It didn't make much of a difference for us, but it could be different for other applications on newer technology. Veracode can create issues in the Jira portal itself. For example, if we scan an application and Veracode reports 15 issues after the security scan completes, the solution will automatically create the related security tasks in Jira, which can be assigned to the appropriate developers. Veracode is good from that perspective, but it needs more evolution. The solution needs moderation because if a big module or issue pops up, we could get 10,000 issues. That would be a real complication from the Jira point of view.
When it comes to false positives, I used Veracode for two-and-a-half years and it has been fine and fair.
When our developers find a false positive it doesn't make much of a difference. They are just happy knowing what is wrong and right. Developers know how to code, but they don't know secure coding. We are generally there to guide them and most of the time, I used to do the false positive analysis by myself and not leave it to the developers. The developers would get a refined and concrete number of vulnerabilities to quickly work on. In some cases, the developers also find issues that we missed because we have to work on multiple applications at once.
I don't believe there's any cost related to the machine-learning side of Veracode, but it takes a lot of time because SAST issues generally can't be resolved by a junior or intermediate-level developer. Most of the time, these issues are resolved by people with five-plus years of experience because they are security issues. To understand the security complications, we need some knowledge of the architecture and design levels of the application. If we don't have design-level information, it's difficult to correct. Without a senior-level developer to guide us, it can cost us a lot, since the senior resources deployed here could be used elsewhere for more development activities. However, the mitigation guidance provided by Veracode and its detailed report are very good.
Veracode has helped fix flaws affecting our organization by making the applications a lot more secure.
I am a software engineer, and one of my clients needed Veracode for security requirements. We needed to send the code through some security tools to see if there are breaches or malicious code that could attack the company. In this case, the client used Veracode to scan third-party libraries from our application. Veracode was running on a private cloud using Azure.
We use it primarily for our application security concerns. We use the dynamic, static, and SCA scanning tools. We run our static scans after the code is compiled, and that gets uploaded automatically through our DevOps tool. We have installed an agent in one of our cloud servers that is behind a firewall to run the dynamic scan against the runtime. We run our SCA scans when we do the static scans, which is after compilation.
Qualys Web Application Scanning: Scan
NagarajSheshachalam says in a Qualys Web Application Scanning review
Lead Cyber Security engineer at a tech services company with 201-500 employees
My advice to those wanting to implement this solution is that if you have experience and knowledge with vulnerability management and reading through all the threats, this could be a good platform for you. If you are a new starter, this solution is not a good place to start.
I rate Qualys Web Application Scanning an eight out of ten.
reviewer1138395 says in a Qualys Web Application Scanning review
Sr Cybersecurity Leader at a non-tech company with 1,001-5,000 employees
There are two parts. We use Web Application Scanning licenses to constantly assess our websites. When there are any changes on our websites, Qualys checks to see if there is a vulnerability. We use a SecOps/DevOps methodology, so Qualys is integrated into the development cycle. Qualys runs every time we update the site.
I've been using Qualys Web Application Scanning for a year and a half.
The solution is primarily used purely as a web-based vulnerability scanning tool.
Anubhav Goswami says in an Acunetix review
Security Specialist at a tech services company with 11-50 employees
The solution is mostly used for vulnerability scanning purposes.
For such services, scalability is not relevant because you just scan your service and make a document of the problems that you have. After that, you have to take care of them and fix them. So, it's not like other services that have to be working 24/7. You only run it and receive information.
Its users vary because in some companies, the web is under the IT team, and in some companies, the web is under security, CISO, or something like this. It depends on how much personnel the company has to manage these tools.
The most valuable features of Acunetix are the UI and the simplicity of the scan results.
PortSwigger Burp Suite Professional: Scan
reviewer1526550 says in a PortSwigger Burp Suite Professional review
Lead Security Architect at a comms service provider with 1,001-5,000 employees
It's an individual tool that security professionals use for their manual pen-testing. We use it for capturing and intercepting the traffic between the browser and the application. We try to manipulate the application's traffic to verify that whatever input the application accepts is sanitized and validated, and that all inputs are handled correctly. We analyze the application for input validation.
Another use case is having a scanner module built-in where you can browse the entire application. The scanner can continuously scan the application for vulnerabilities based on OWASP Top 10 standards. Likewise, you can come to know what vulnerabilities are in the application. Later, you can go through the vulnerabilities one by one and triage them.
There are many different modules in Burp Suite. There is a Comparer module where you can compare the request and response. You have the Repeater module where you can repeat the sequences. They can be used for other test use cases such as dictionary attacks or brute-force attacks on the applications.
Basically, there are a wide variety of use cases and applications.
Nagaraj Sheshachalam says in a PortSwigger Burp Suite Professional review
Lead Cyber Security engineer at a manufacturing company with 10,001+ employees
Eldar Aydayev says in a PortSwigger Burp Suite Professional review
President & Owner at Aydayev's Investment Business Group
I have found this solution has more plugins than other competitors, which is a benefit. You are able to attach different plugins to the security scan to add features. For example, you can check to see if there are any payment systems on a server, or do username and password brute-force analysis. You are able to do many different types of scans, such as SQL injection. There are a lot of deep packet-analysis functions that give this solution more usability.
VinothKumar5 says in a PortSwigger Burp Suite Professional review
Senior Technical Architect at Hexaware Technologies Limited
The automated scan is what I find most useful because a lot of customers will need it. Not every domain will be looking for complete security, they just need a stamp on the security key. For these kinds of customers, the scan works really well.
Nirosh Anda says in a PortSwigger Burp Suite Professional review
Chief Info Sec Engineer at Sri Lanka CERT
We wish that the Spider feature still appeared in the same shape that it did in previous versions.
I believe we have development tools such as Acunetix. It would be nice if the report produced upon scanning highlighted all the weaknesses from the perspective of my application.
reviewer1753959 says in a PortSwigger Burp Suite Professional review
Application Security Engineer at a transportation company with 10,001+ employees
Burp Suite gives you a very good automated scanning tool, which gives you around sixty to seventy percent security coverage without having to use a security resource. Once the developer gets the report, they've got the PortSwigger lab to explain the vulnerability and have a POC right there, so it's very beneficial for developers.
reviewer1871559 says in a PortSwigger Burp Suite Professional review
Cyber Security Analyst at a comms service provider with 10,001+ employees
In some cases, we got a few false positives from the automatic scan. If that could be better, that would be ideal. The scanner could just be updated a bit more.
We'd like to have more integration potential across all versions of the product. The enterprise version seems to have better integration services than others.
The solution is primarily used for scanning the webpage and for the incoming traffic for the application.
In general, there's not much to complain about, but the stability of the tool is not good enough. I know that RAM utilization is something they're working on, but running a scan currently takes up too much memory. Resource utilization is an issue because when you're application testing, there are multiple threads and multiple application requests going on in the backend.
reviewer1966164 says in a PortSwigger Burp Suite Professional review
Cyber Security Specialist at a university with 10,001+ employees
PortSwigger Burp Suite Professional could improve the static code review.
In an upcoming release, PortSwigger Burp Suite Professional could give possible remedies for any issues it discovers after scanning an application. At this time it only provides the vulnerabilities; having possible remedies would be a benefit. It would be useful for developers, letting them fix the issue immediately.
Micro Focus Fortify on Demand: Scan
reviewer1468542 says in a Micro Focus Fortify on Demand review
Principal Solutions Architect at a security firm with 11-50 employees
Our clients use it for scanning their applications and evaluating their application security. It is mostly for getting the application security results in, and then they push the vulnerabilities to their development team on an issue tracker such as Jira.
I usually have the latest version unless I need to support something on an older version for a client. We're not really deploying any of these solutions except for kind of testing and replicating the situations that our clients get into.
Raghu Krishna Y says in a Micro Focus Fortify on Demand review
GM - Technology at an outsourcing company with 10,001+ employees
The most valuable features are the server and the scanning, and it has helped identify issues through the security analysis.
reviewer1529571 says in a Micro Focus Fortify on Demand review
Acquisitions Leader at a healthcare company with 10,001+ employees
It is a very easy tool for developers to use in parallel while they're doing the coding. It does auto scanning as we are progressing with the CI/CD pipeline. It has got very simple and efficient API support.
It is an extremely robust, scalable, and stable solution.
It enhances the quality of code all along the CI/CD pipeline from a security standpoint and enables developers to deliver secure code right from the initial stages.
Whenever we have a new application, we scan it using Micro Focus Fortify on Demand. We use a service connection from Azure DevOps to Micro Focus Fortify on Demand and then receive the information from the application tested.
We are using Micro Focus Fortify on Demand in two ways in most of our processes. We either use it from our DevOps pipeline in Azure DevOps, or the teams that are not yet onboarded to Azure DevOps run it manually by packaging the code and sending it to the security team, who then scan it.
We use two solutions for our application testing. We use SonarQube for next-level unit testing and code quality and Micro Focus Fortify on Demand mostly for vulnerabilities and security concerns.
reviewer1250178 says in a Micro Focus Fortify on Demand review
Security Information Manager at a tech services company with 10,001+ employees
The features that I have found most valuable include its security scan, the vulnerability finds, and the web interface to search and review the issues.
The pricing model is based on how many applications you wish to scan.
There are lots of limitations with certain code technologies. It cannot scan .NET properly either.
The vulnerability detection and scanning are awesome features.
Fortify is used for static scans, that is, code scanning.
I mainly use Fortify on Demand for static scanning.
Micro Focus is a bit heavy on resources and uses up a lot of my RAM. My machine tends to slow down when I use it. A beneficial additional feature would be scanning executable files. Currently, it scans the uncompiled code only. I'd also like to see support for additional languages and support for scanning libraries whether they're outdated or not. The solution scans for security vulnerabilities but not for outdated versions or policy violations.
reviewer1403718 says in an Invicti review
Lead Security Architect at a comms service provider with 1,001-5,000 employees
The dashboard is really cool, and the features are really good. It tells you about the software version you're using in your web application. It gives you the entire technology stack, and that really helps. Both web and desktop apps are good in terms of application scanning. It has a lot of security checks that are easily customizable as per your requirements. It also has good customer support.
reviewer1521882 says in a Checkmarx review
Information Security Architect at a tech services company with 1,001-5,000 employees
We are using multiple solutions for application security, and Checkmarx is one of them. We are a client-centric organization, and we are also providing support to clients for application security. Sometimes, we have our own production, and then we scan the customer information and provide application security. For a few clients, it is deployed on the cloud, and for a few customers, it is on-premises.
reviewer1398084 says in a Checkmarx review
Procurement Analyst at a pharma/biotech company with 10,001+ employees
We use the solution for scanning the code for security.
reviewer932058 says in a Checkmarx review
AVP, aPaaS Engineer at a financial services firm with 10,001+ employees
We are using Checkmarx for application code scanning, such as scanning for different leverages in the application code.
reviewer1108275 says in a Checkmarx review
Security at a tech services company with 51-200 employees
We use it for code scanning and security testing for our in-house application development. We are using its latest version.
The most valuable feature of Checkmarx is the user interface, it is very easy to use. We do not need to configure anything, we only have to scan to see the results.
The main thing we find valuable about Checkmarx is the ease of use. It's easy to initiate scans and triage defects.
The most valuable features of Checkmarx are difficult to pinpoint because of the way the functionalities and the features are intertwined, it's difficult to say which part of them I prefer most. You initiate the scan, you have a scan, you have the review set, and reporting, they all work together as one whole process. It's not like accounting software, where you have the different features, et cetera.
The software languages that they support are one of the largest in the market.
When something happens in a test, then you need to know why. In many cases, you would have to run a scan and find all the problems, and then hand that off to development and have development go back and rewrite that code. If you had an issue with a particular aspect where you have a limited amount of personnel or knowledgeable personnel, based on the language that an application was written in, well, then you would need some type of assistance in order to rewrite that code in that particular language, with the limited knowledge that developer might have had. I assisted with that and helped with educating the developer on how to write that code. It was a two-pronged effort.
The number one use case would be a failed pen test. Number two would be, "Hey, we have a waterfall dev approach to our SDLC today. We want to become more agile around speed and quality of code." The third would be to provide an appropriate availability of knowledge for training developers in secure coding.
The stability of Checkmarx could improve. We're having issues with it, but we don't want to upgrade to the newest version until we make sure that the issues we're having now aren't present in the newer version.
Scan reliability is sometimes impacted, and we have to restart the services to release scans from the queue.
Prakash Ganesan says in a Checkmarx review
Senior Engineer at a hospitality company with 10,001+ employees
We would like to be able to run scans from our local system, rather than having to always connect to the product server, which is a longer process.
As with other tools, if you want more, you have to pay more. You have to pay for additional modules or functionalities. For instance, if you want to do some scanning to external dependencies of the software, you have to buy another tool provided by Checkmarx.
You have to pay for licenses for the number of projects that you want to scan and the number of users. I think you have to pay licenses for three features: the number of users, the projects, and I don't remember the other one.
We are currently using the solution for scanning vulnerabilities.
We use it for non-functional insight because it's a security vulnerability scanner. We can run Checkmarx on our code base anytime. We integrated it as part of our build pipeline, and it helps us detect issues early. We have piloted it in a few applications for shift-left testing. From a metrics perspective, I am unsure how to quantify the benefit, but we did benefit.
reviewer1503354 says in a SonarQube review
Senior Software Engineering Manager at a computer software company with 10,001+ employees
We use SonarQube to scan our code for security.
The scalability depends on the use case. You cannot install it with minimal resources and expect it to run thousands of jobs. It is scalable based on your environment. How big is your project? How many APIs do you want to scan? How many APIs per minute, etc. Based on that information you need to first decide upfront how much memory or how much storage you want to give to it. You need to have clear data with you and then use the resources to design accordingly. I think it is highly scalable and can operate seamlessly if you give it the environment that is sufficient. You cannot expect magic from it.
We have some projects that have 150 users with ten teams using the solution.
reviewer1537167 says in a SonarQube review
Digital Solutions Architect at a tech services company with 1,001-5,000 employees
We are a large company with a $4 billion valuation, and we use the solution for static security scanning and code quality. I am currently building a pipeline for one of my customers, and for that we are utilizing this solution for static analysis.
reviewer1565832 says in a SonarQube review
DevOps Lead at a marketing services firm with 1,001-5,000 employees
The solution's SAST scanning is very shallow. That is something that can be improved.
I'm not sure if there is any plan for having DAST, as well, which is the dynamic scanning. If they offered that in SonarQube that would be ideal. I'd like to know if there is a plan or roadmap for Sonar to have that included. However, right now, at least, from the SAST perspective, it can improve.
The pricing could be reduced a bit. It's a little expensive.
SonarQube does not cover the BPM programming language. It only covers the Java layer from BPM WebMethods. When we faced this issue with one of our applications, we found that we were not able to scan the BPM code for configurations generated from WebMethods.
The BPM language is important and should be considered in SonarQube.
It utilizes a lot of server resources. I think this issue should be resolved because it takes approximately 20% of CPU utilization.
Reporting related to SonarQube only exists in the enterprise edition, and not in the Community Edition.
There are no limitations in the lines of code with the Community Edition, but with the Enterprise Version, there are limitations related to the lines of code.
I don't understand why you can use an unlimited number of lines of code with the Community Edition while the Enterprise Edition is limited.
Nachu Subramanian says in a SonarQube review
Automation Practice Leader at a financial services firm with 10,001+ employees
The most important feature is the software quality gate. When that's implemented we're able to streamline the product's quality. The other good features are SonarQube's code quality scanning and code coverage. If we use it effectively, we can capture the software code bugs early in the software development. It also helps us to identify the test coverage for the code that we're writing. It's a very, very important feature for the software developers and testers.
In the Community Edition, I don't think we have enough scalability options because it runs on only one instance, and it runs only one scan at a time. It doesn't even provide a setting for running multiple scans simultaneously. That's why we want to move to the Enterprise Edition: it gives you the possibility of parallel analysis, which could speed things up.
reviewer1599105 says in a SonarQube review
Senior Security Engineer at a financial services firm with 10,001+ employees
I was more focused on the security aspects and not on quality. SonarQube focuses a lot on security and provides some visibility in that area, but there could be more focus on remediation management. For example, it would be helpful if, from the security point of view, SonarQube indicated what type of remediation to apply when scans are run against different rule sets.
An official Docker image of SonarQube that could easily integrate into the pipeline would help users plug it in and out and use it directly without any custom configuration. I am not sure if this is already offered in an update, but it would be very helpful.
In an upcoming release of the solution, I would like to see more types of programming languages added and improvement in their SaaS offering to compete better with other enterprise solutions, such as Fortify.
reviewer1643052 says in a SonarQube review
Manager, Software Development Engineering at a computer software company with 51-200 employees
SonarQube does SAST and SCA pretty well. One of the important things for me, something that differentiates it from a solution like Checkmarx, was that SonarQube had SonarLint, which developers can use for local scanning. The product does well in scanning and vulnerability detection.
Warayuth Wongpaiboonwattana says in a SonarQube review
System Quality Assurance Manager at AIS - Advanced Info Services Plc.
We use SonarQube to run SAST scans on code for quality control, mostly in mobile applications, such as iOS and Android applications.
SonarQube is used for in-production scanning of applications. We are only doing unit testing to improve the overall quality of the code.
reviewer1078050 says in a SonarQube review
Staff DevOps Specialist at a computer software company with 201-500 employees
A little bit more emphasis on security and a bit more security scanning features would be nice.
It would also be nice if the discrepancy between the basic or free version and the enterprise version was less. In my opinion, some of the base functionality in the enterprise version should be in the basic version.
Currently, we have static code scanning, and we have the scanning of the Docker containers. It would be great if some sort of penetration testing could easily be implemented in SonarQube for deploying something and doing some basic security scans. Currently, we have to use third-party tools for that. If everything was all under one roof, it would be more comfortable, but I don't know if it is possible or feasible. It is a typical issue of centralization versus distribution. In our particular case, because we're using SonarQube for almost every other project, it would make sense, but that doesn't necessarily mean that it is the same case with everybody else.
Our primary use case of SonarQube is getting feedback on code. We are using Spring Boot and Java 8. We are also using SonarLint, an Eclipse IDE plugin, to detect vulnerabilities during development. Once a developer finishes the code and commits it to the Bitbucket code repository, the continuous integration pipeline runs automatically using Jenkins. This pipeline includes a build stage, unit tests, and a SonarQube scan. All the parameters are configured per project requirements, and the SonarQube scan runs immediately once the developer commits the code to the repository. The advantage of this is that we get immediate feedback: how many vulnerabilities there are, what the code quality is, the code quality metrics, and whether there are any issues with the changes we made. Since the feedback is immediate, the developer can rectify issues right away and communicate further changes. This helps us with product quality, and we have fewer vulnerabilities in the early stages of development.
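The flow this reviewer describes can be sketched as a declarative Jenkins pipeline. This is a minimal, hypothetical example, not the reviewer's actual setup: the stage names, the `MySonarServer` installation name, and the Maven commands are assumptions, and it relies on the standard steps provided by the Jenkins SonarQube Scanner plugin (`withSonarQubeEnv`, `waitForQualityGate`).

```groovy
// Hypothetical Jenkinsfile sketch: build, unit-test, and SonarQube scan
// triggered on every commit. 'MySonarServer' is an assumed name for a
// SonarQube installation configured in Jenkins (Manage Jenkins > System).
pipeline {
    agent any
    stages {
        stage('Build & Unit Test') {
            steps {
                sh 'mvn clean verify'   // assumes a Maven-based Spring Boot project
            }
        }
        stage('SonarQube Scan') {
            steps {
                withSonarQubeEnv('MySonarServer') {
                    sh 'mvn sonar:sonar'
                }
            }
        }
        stage('Quality Gate') {
            steps {
                // Fail the pipeline if the project does not pass the quality
                // gate, so the developer gets immediate feedback on the commit.
                waitForQualityGate abortPipeline: true
            }
        }
    }
}
```

In practice, the `waitForQualityGate` step is usually wrapped in a `timeout` block and requires a webhook on the SonarQube server pointing back at Jenkins; consult the plugin documentation for the details.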
This solution is deployed on-premise.
SonarQube is a code-scanning tool that ensures people follow the right coding standard. It detects any memory leaks or unwanted functions that have been written so developers can optimize the code for better performance. We don't know too much about how our customers use SonarQube because we just set it up for them. We show them how the reporting works and what to do to fix common issues.
reviewer1141026 says in a SonarQube review
Head of IT Security Department at a tech services company with 501-1,000 employees
reviewer841284 says in a SonarQube review
Lead Engineer at a healthcare company with 10,001+ employees
I have it integrated with our continuous integration server. On a scheduled basis, typically in the middle of the night, it'll do performance scans so that the results are available and viewable by the developers on the website. The scans are done automatically by using a continuous integration server, which is TeamCity.
We are using version 5.6.6. It is a very old version, but that's what we've been using. We haven't gotten around to updating it.
reviewer1158774 says in a SonarQube review
Senior Technical Architect at a tech services company with 501-1,000 employees
We are using SonarQube for scanning our services for issues as part of our IT department.
reviewer1526550 says in a SonarQube review
Lead Security Architect at a comms service provider with 1,001-5,000 employees
This solution has helped with the integration and building of our CICD pipeline. Without any scans or assessments, the pipeline and build are not complete. One of the good features of SonarQube is the many languages it supports including Java, dotNET, Typescript and HTML CSS. It also allows us to set custom quality gates and rules.
We are using the latest version.
We use the solution for regular code scanning for C and C++, as well as for MISRA rules.
reviewer937347 says in a Klocwork review
Sr. Test Engineering Manager - Embedded Linux SW / RF at a comms service provider with 51-200 employees
I am utilizing Kiuwan for quick and efficient scans, specifically static scanning for web applications. This includes checking the application's code base and dependencies, known as SAST scans. In the first quarter, there is a "code-based security and insight" tab where we can review the application's code for any vulnerabilities arising from dependencies. We then analyze these vulnerabilities and provide solutions for mitigating them.
Fortify Application Defender: Scan
Warayuth Wongpaiboonwattana says in a Fortify Application Defender review
System Quality Assurance Manager at AIS - Advanced Info Services Plc.
We use Fortify Application Defender for scanning our whole repository source code for security. We have more than 4,000 repositories in our company.
reviewer1317438 says in a Mend review
Business Process Analyst at a financial services firm with 1,001-5,000 employees
We have ended our relationship with WhiteSource. We were using an agent that we built in the pipeline so that you can scan the projects during build time. But unfortunately, that agent didn't work at all. We have more than 500 projects, and it doubled or tripled the build time. For other projects, we had the failure of the builds without any known reason. It was not usable at all. We spent maybe one year working on the issues to try to make it work, but it didn't in the end.
We should be able to integrate it with the IDE and shift left, so that developers can see the scan results without waiting for the build to fail.
AnandHosamani says in a Mend review
FOSS Coordinator at a manufacturing company with 5,001-10,000 employees
I use the solution for free and open source scanning.
reviewer1252050 says in a Mend review
AVP at a computer software company with 5,001-10,000 employees
I would recommend using WhiteSource. It has an edge over other tools in the market and is a faster solution.
WhiteSource is easy to integrate with the CICD pipeline and runs standalone scans as it is a SaaS deployment. Integration of this solution does not require much time or knowledge.
I would rate this solution a nine out of ten.
We started the trial version of WhiteSource last week. We concluded the trial this week, and we will begin using the fully licensed solution later in the week.
We use WhiteSource for automating open-source vulnerability management: finding the open-source libraries that were used and fixing them. Additionally, we set up policies to disallow developers from using risky open-source components in our solutions. We are able to scan and fix vulnerabilities in our containers; if there are any licenses that violate open-source usage or put our product at risk, we make sure that we either remove or remediate the open-source components with risky licenses. Those are the main three use cases.
WhiteSource stood out mainly for the way it approached scanning code. Some of these solutions often send the code somewhere else to be scanned, whereas WhiteSource allows us to scan wherever our tenant is. The reason we chose this solution was to look at the security analysis of these third-party libraries.
Finding vulnerabilities is pretty easy. Mend (formerly WhiteSource) does a great job of that and we had quite a few when we first put this in place. Governance up until that time had been manual and when we tried to do manual governance of a large codebase, our chances of success were pretty minimal. Mend (formerly WhiteSource) does a very good job of finding the open-source, checking the versions, and making sure they're secure. They notify us of critical high, medium, and low impacts, and if anything is wrong. We find the product very easy to use and we use it as a core part of our strategy for scanning product code moving toward release.
We use Mend (formerly WhiteSource) Smart Fix. I’d say pretty much everything in Mend (formerly WhiteSource) is easy to use. We really don't have too much difficulty using the product at all. I've implemented other scanners and tools and had much more trouble with those products than we've ever had with Mend (formerly WhiteSource). That’s extremely important. It's hard to sell to some of these teams to put any level of overhead on top of their product development efforts and the fact that Mend (formerly WhiteSource) is as easy as it is to use is a critical aspect of adoption here. It scores very highly on that scale.
Mend (formerly WhiteSource) Smart Fix helps our developers fix vulnerable transitive dependencies. It's all very helpful to our development community. First of all, we're able to find that there are issues. Second of all, we're able to figure out very quickly what needs to be done to remediate the issues.
Mend (formerly WhiteSource) helped reduce our mean time to resolution since adopting it. A lot of it is process improvement and technical aspects that can tell us how to go about remediating the issues. We get that out of Mend (formerly WhiteSource). Making the developers aware that these issues are there and insisting they be corrected and making the effort to do that visibly is very valuable to us.
Overall, Mend (formerly WhiteSource) helped dramatically reduce the number of open-source software vulnerabilities running in our production at any given point in time. I won't give metrics, however, it's fair to say that our state before and after Mend (formerly WhiteSource) is dramatically different and moved in a positive direction.
Mend's ability to integrate our developer's existing workflows, including their IDE repository and CI is good. Azure DevOps is really important. That's what the pipelines are. That's a very important piece of the entire puzzle. If this was just an external scanner where periodically we'd go through and scan our repos and give them a report, we’d do that with pen testing products, for example, for security testing. The problem is, by the time they get those reports, they've already shipped the code to multiple environments and it's too late to stop the train. With these features being baked into the pipelines like this, they know immediately. As a result, we're able to quickly take action to remediate findings.
We use WhiteSource for scanning open-source libraries (software composition analysis, or SCA), covering both vulnerabilities and open-source licenses. We deployed WhiteSource with Azure DevOps.
It is used to manage open-source associated risks. I'm a consultant, and I provide consultancy and management services in the domain of open-source risk management. I use this product as a part of the services to my customers. I'm not using it in my company because my company is not developing anything.
Its deployment is hybrid where scans are on-premise and the knowledge base is on the cloud.
We use Mend especially for code analysis. I work in the application security part of my company. Developers will build and push the code to the GitHub repository. We have a build server that pulls in the code, and we are using Jenkins to automate that to do the DevOps stuff.
Once the code is built, we create a product for that particular version on Mend. We are currently working with three different versions of our particular product. We have the products created on Mend via WhiteSource, which has a configuration file and a batch file that runs. The configuration file basically tells it what parameters to use, which server URL to use, which files to ignore, and which files to use.
For example, if I just have to scan Python, I can make changes in the configuration file itself to include just .py files and exclude all other files. If I have to scan Python and C++, I can change the configuration file to include .py and .cpp files and exclude all the others. Once that configuration file is ready, we run a WhiteSource batch file that connects to the server, reads the configuration file, scans all the files in the project, and then pushes the results to our Mend page.
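As a rough illustration of the include/exclude mechanism described above, here is a hypothetical fragment of a Mend (WhiteSource) Unified Agent configuration file. The product and project names are placeholders, and the exact keys and glob syntax should be checked against the Unified Agent documentation:

```properties
# wss-unified-agent.config (sketch; names and values are placeholders)
wss.url=https://saas.whitesourcesoftware.com/agent
apiKey=<org-api-key>
productName=MyProduct
projectName=MyProduct-v3

# Scan only Python and C++ sources; skip tests and everything else
includes=**/*.py **/*.cpp
excludes=**/test/**
```

The scan itself is then typically kicked off from the batch file with something like `java -jar wss-unified-agent.jar -c wss-unified-agent.config`, after which the results appear on the Mend product page.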
On our Mend page, once we go into the product page, we can see which libraries we have used and which of them have vulnerabilities. We can also set policies on Mend; we set some policies for our organization about what to accept and reject. For each product, we also get the policy violations the libraries trigger, and any new versions of the libraries that are available from each library's parent page - the parent page being the official developers of the library. We get the licenses used with each library, and most importantly, we get vulnerability alerts for every library we use in our code.
Once the code is pulled, scanned, and pushed, we get the UI. We go to the library alerts. Once we go to the library alerts, we can see the different severities and the different libraries with vulnerabilities. We normally just sort according to higher severity first and go down to lower severity. We check what can be ignored or what is acceptable and what cannot be ignored, and what is of high priority. Ones that are a high priority, we flag and create a ticket on JIRA. That's our platform for collaboration.
Once we create a JIRA ticket, the developers and the QA team can see it, and they will go through it as well. They can tell whether an update or upgrade of the library is possible. They'll check its compatibility and see if it's actually doable. If it's not doable, they'll just tell us, and probably the next version of our application will have the changes - not this one. We term that as acceptable, or within our domain of acceptance. Day to day, when a JIRA ticket is created, the developers get back to us saying yes or no. Mostly they say yes to upgrading the library. If so, they upgrade it to the next version and we scan it again. We do a weekly scan, so the next week we check whether that particular library was upgraded and the vulnerability has been remediated.
reviewer1915362 says in a Mend review
IT Service Manager at a wholesaler/distributor with 51-200 employees
The tool is now a mandatory part of our organization to use as a benchmark, giving us a technical advantage. When we acquire other companies, we look to determine if Mend is applicable to them and bring them into our culture of using the solution where possible. We can leverage it for financial benefits when implemented and used to scan on the technical front. We consider Mend a permanent integration with our company for the foreseeable future, so we decided to reinvest in the solution by renewing our contract twice up to this point.
reviewer1928817 says in a Mend review
Sr. Manager at a financial services firm with 10,001+ employees
Earlier, Mend was used as a tool behind the scenes for periodic vulnerability checks. It was more reactive. We only began exploring its full potential once we started integrating it with GitHub, because that helped us control and manage the process. It centrally controls all the code going into production. We have built-in rules against license policy violations and vulnerabilities. We don't allow code to go to production if it hasn't met those criteria.
Mend has a SaaS environment where all the data is stored, but we do all scanning and remediation work on a component that scans and identifies dependencies. It can be deployed on-prem or on the cloud using their containers. Then, it talks to the SaaS platform for the final identification of vulnerabilities and license composition.
They have also devised a smart tree containing other tools we plan to evaluate. We haven't used their SAST solution yet, but we're considering it and comparing it to the other SAST tool we use. We use Mend Renovate, which was previously an open-source product. The merge confidence feature is part of Renovate. Most of our people are focusing on vulnerability remediation. It gives an excellent idea of how we can move forward with a change.
Mend is deployed on the AWS cloud, and we have multi-region enabled. It is deployed active-active in both regions. This is a heavy implementation. The company has a centralized GitHub platform where every developer and team manages their code. There are more than 5,000 users. Changes appear in the report, and actions are happening internally based on that. They may not all be going to the Mend platform to see these results. There are only maybe 5,000 active commits happening monthly. The number of records per project enabled for our company is nearly 60,000.
HCL AppScan: Scan
reviewer1428084 says in a HCL AppScan review
Principal Architect, Application Build Security. at a transportation company with 10,001+ employees
HCL AppScan is primarily used to improve application security. We are transitioning from DevOps to DevSecOps.
We are attempting to integrate these tools into our CICD pipeline in order to meet our business use cases. If we notice that the tool is missing a business feature, we highlight it and work to have it fixed or implemented. That is how we go about it. We don't ask for any generic features because those will be handled by the product team. We are here to identify our gaps and then have them implemented by the vendor team.
AppScan is only used for web scanning; we do not use it for anything else.
The most valuable feature of HCL AppScan is its code scanning.
reviewer1676757 says in a HCL AppScan review
Innovation manager at a computer software company with 51-200 employees
I have a set project, and I'm writing an application for monitoring server status, and I tried several times to scan it with AppScan in order to understand if there are vulnerabilities in my code.
I mainly use AppScan for vulnerability scanning and database bridging.
We are evaluating other options like Fortify and Checkmarx. We have worked with Fortify before. The advantage of this solution over HCL is its cloud setup. It is a solution that integrates well with other products. It also produces fewer false positives. Our main requirement is that it should easily integrate with the CI/CD pipeline. The second requirement is that it should easily integrate with the developer environment. These were the two main things that HCL AppScan does not provide.
Sonatype Nexus Firewall: Scan
reviewer1534461 says in a Sonatype Nexus Firewall review
Senior Cyber Security Architect and Engineer at a computer software company with 10,001+ employees
With the security concerns around open source, the management and vulnerability scanning, it's relatively new. In today's world more and more people are going through the open source arena and downloading code like Python, GitHub, Maven, and other external repositories. There is no way for anyone to know what our users, especially our data scientists and our developers, are downloading. We deployed Sonatype to give us the ability to see if these codes are vulnerable or not. Our Python users and our developers use Sonatype to download their repositories.
Given the confidentiality of our customer, we keep everything on-prem. We have four instances of Sonatype running, two Nexus Repositories and two IQ Servers, and they're both HA. If one goes down, then all the data will be replicated automatically.
Sonatype Nexus Lifecycle: Scan
reviewer1535436 says in a Sonatype Nexus Lifecycle review
Senior Architect at a insurance company with 1,001-5,000 employees
Shubham Shrivastava says in a Sonatype Nexus Lifecycle review
Engineering Tools and Platform Manager at BT - British Telecom
IQ Server is part of BT's central DevOps platform, which is basically the entire DevOps CI/CD platform. IQ Server is a part of it covering the security vulnerability area. We have also made it available for our developers as a plugin on IDE. These integrations are good, simplistic, and straightforward. It is easy to integrate with IQ Server and easy to fetch those results while being built and push them onto a Jenkins board. My impression of such integrations has been quite good. I have heard good reviews from my engineers about how the plugins that are there work on IDE.
It basically helps us in identifying open-source vulnerabilities. This is the only tool we have in our portfolio that does this. There are no alternatives. So, it is quite critical for us. Whatever strength Nexus IQ has is the strength that BT has against any open-source vulnerabilities that might exist in our code.
The data that IQ generates around the vulnerabilities and the way it is distributed across different severities is definitely helpful. It does tell us what decision to make in terms of what should be skipped and what should be worked upon. So, there are absolutely no issues there.
We use both Nexus Repository and Lifecycle, and every open-source dependency after being approved across gets added onto our central repository from which developers can access anything. When they are requesting an open-source component, product, or DLL, it has to go through the IQ scan before it can be added to the repo. Basically, in BT, at the first door itself, we try to keep all vulnerabilities away. Of course, there would be scenarios where you make a change and approve something, but the DLL becomes vulnerable. In later stages also, it can get flagged very easily. The flag reaches the repo very soon, and an automated system removes it or disables it from developers being able to use it. That's the perfect example of integration, and how we are forcing these policies so that we stay as good as we can.
We are using Lifecycle in our software supply chain. It is a part of our platform, and any software that we create has to pass through the platform, So, it is a part of our software supply chain.
Ingmar Vis says in a Sonatype Nexus Lifecycle review
Product Owner Secure Coding at a financial services firm with 10,001+ employees
We use it in the pipeline. Software development is done in a pipeline with automated steps. One of those steps is quality assurance, for which we use, among others, Sonatype, and this is done automatically. Based upon the outcome of this scan, the software product can proceed to the next step, or it is blocked and needs to be rebuilt with updates.
We are using Nexus IQ Server 114, and we're about to upgrade to 122.
Katrin Schenker says in a Sonatype Nexus Lifecycle review
Software Engineer at a manufacturing company with 10,001+ employees
Before we had Nexus Lifecycle, our software developers needed to clear each download from open source libraries. That meant they needed to scan the library on a separate PC, and then they would integrate it into their solutions, but it would be local and not available for the other developers. Now, we have an automatic process for downloading open source libraries, and this has removed a huge effort for all of our software developers. That is the big advantage, that we have an automated software development pipeline, which is something we did not have before. All of our developers are happy to have the solution.
Another benefit is connected to the fact that we also have applications we host for external users and those users can obtain a very good report about which external, open source libraries we are using, and their security status.
reviewer1329402 says in a Sonatype Nexus Lifecycle review
Technical Consultant at a computer software company with 10,001+ employees
We are using Sonatype Nexus Lifecycle within our company for scanning our products with the Jenkins pipeline.
reviewer1418712 says in a Sonatype Nexus Lifecycle review
Lead Member Of Technical Staff at a tech vendor with 10,001+ employees
We use this product for scanning containers and binary artifacts, and to scan for vulnerabilities. It provides software composition analysis, mainly for application security. I'm the lead member of technical staff, and we are customers of Sonatype.
The most valuable features of the Sonatype Nexus Lifecycle are the evaluation of the unit test coverage, vulnerability scanning, duplicate code lines, code smells, and unnecessary loops.
reviewer1960260 says in a Sonatype Nexus Lifecycle review
Section Chief at a government with 201-500 employees
We're using Sonatype Nexus Lifecycle to scan for vulnerabilities in our continuous integration and deployment pipelines. We're also using the solution as part of our IDEs for developer support.
Tenable.io Web Application Scanning: Scan
reviewer1674711 says in a Tenable.io Web Application Scanning review
Senior Cyber Security Specialist at a tech services company with 1,001-5,000 employees
Tenable.io Web Application Scanning is very useful for scanning container exposure, and also for scanning all of the external IP addresses for any organization using Tenable predefined scanners.
reviewer1248330 says in a Tenable.io Web Application Scanning review
Security Specialist at a tech services company with 51-200 employees
I work for a security company, and I implement Tenable for our customers. I just implement this technology. I'm not working with the users.
Our main use case is for implementing and starting scans for the whole company or a specific host. It is used for creating reports or dashboards for the vulnerabilities of the whole company. As a product for web application scanning, the results are uploaded to the cloud, and the management is on the cloud, but we can implement an on-premises scanner, or we can scan the on-premises web applications of our customers.
I rate Tenable.io Web Application Scanning eight out of ten. Tenable.io is still reliable, and we would recommend it depending on your needs. Tenable.io is a general solution, so it may not have specific features you need for your use case.
reviewer1990596 says in a Tenable.io Web Application Scanning review
Director of Cyber Security at a outsourcing company with 501-1,000 employees
We are using Tenable.io Web Application Scanning for security assurance, vulnerability management, and patch management.
Our primary use case for the solution is automated scanning. It doesn't require scripting knowledge or any additional suites or tools. It is fully automated; we provide the credentials and URL, and the tool does all the scanning and shows the results per the requirement.
The solution is primarily used for vulnerability scanning.
We are using it for our internal network and scanning for access in our internal network.
For example, sometimes there are vulnerabilities in the system that are unexpected. We use this for finding those vulnerable cases in the system, like Log4j, and then deal with them. If a patch is required, we advise on patching the system to remove the vulnerability. I put in patch requests and handle reports.
It is a nice tool to check the dependencies of your open-source code. It is easy to integrate with your Git or source control.
It has a nice dashboard where I can see all the vulnerabilities and risks that they provided. I can also see the category of any risk, such as medium, high, and low. They provide the input priority-wise. The team can target the highest one first, and then they can go to medium and low ones.
Its reports are nice and provide information about the issue as well as resolution. They also provide a proper fix. If there's an issue, they provide information in detail about how to remediate that issue.
It is easy to integrate with our pipeline, and we just need to schedule our scanning. It does that overnight and sends the report through email early in the morning. This is something most tools have, but here all of these come in a package together.
It never failed, and it is very easy, reliable, and smooth.
reviewer1649319 says in a Snyk review
Cloud Security Engineer at a manufacturing company with 10,001+ employees
Snyk is a code analysis tool. It is a vulnerability finding tool. We use it for those purposes, to detect issues that are particular to our use cases.
Snyk is configured in our local IDE environment. Our team and many other teams use it to run a scan before they deploy anything to production.
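Scanning before deployment, as the reviewer describes, is typically also gated in CI. Below is an illustrative GitLab CI job (the job name, stage, image, and severity threshold are assumptions; `snyk test` and the `--severity-threshold` flag are standard Snyk CLI usage):

```yaml
# Illustrative sketch: fail the pipeline if Snyk finds high-severity
# issues in the project's dependencies before anything is deployed.
snyk_scan:
  stage: test
  image: node:20
  script:
    - npm install -g snyk
    - snyk auth "$SNYK_TOKEN"          # token stored as a CI variable
    - snyk test --severity-threshold=high
```

A failing `snyk test` exits non-zero, which stops the pipeline before the deploy stage runs.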
The reporting mechanism of Snyk could improve. Reporting is available only at the higher license tier. Adjusting the policy that governs how these reports are generated is something that could also improve. For instance, if a component uses a certain license, it receives a rating, and that rating remains the same for every use case. Whether you are using the component internally or externally, you cannot adjust the rating to your use case; it will always alert as a risky license. The license areas of the reporting, and the ability to adjust them, can be improved.
Having all scans built into a single solution would be useful, and perhaps the capability of scanning actual code snippets, rather than only reading the manifest, would be very useful.
I have used Snyk in my present and past workplace, along with Veracode, Checkmarx, and GitHub Advanced Security. The main product that really brought Snyk to market was software component scanning for third-party components; however, I like the new things that they're doing as well.
They've got container scanning, which they're just now starting to do, and they're also bringing in new use cases such as static analysis (i.e. SAST) and secrets scanning, although I don't know exactly what's happening on that side of things.
In my previous workplace, we had about 100 users as it was still being scaled up and it was a relatively new product at the time. As for the version number, we use the latest version of Snyk since it is a cloud-based SaaS offering which is always kept up to date.
The main functionality that we found useful is scanning. A main feature of Snyk is that when you go with SCA, you get properly done software composition analysis, also from the licensing and open-source perspective. A lot of companies use open-source libraries or frameworks in their code, which is a big security concern. Snyk covers all of this and provides you with a proper report about whether any open-source code or framework that you are using is vulnerable. In that way, Snyk is very good compared to other tools.
Contrast Security Assess: Scan
reviewer1494855 says in a Contrast Security Assess review
Senior Customer Success Manager at a tech company with 201-500 employees
A good use case is a development team with an established DevOps process. The Assess product natively integrates into developer workflows to deliver immediate results. Highly accurate vulnerability findings are available at the same time as functional/regression testing results. There is no wait for time-consuming static scans.
Assess works with several languages, including Java and .NET, which are common in enterprise environments, as well as Node.js, Ruby, and Python.
reviewer1605099 says in a Contrast Security Assess review
Director of Threat and Vulnerability Management at a consultancy with 10,001+ employees
The way that it has improved our application security process is that we are no longer performing scans of specific environments to provide point-in-time vulnerability data. Instead, we're gathering vulnerability data from multiple environments in real time. That's a fundamental change in terms of how our program operates and how we identify vulnerabilities in applications. It gives us greater visibility and it gives us visibility much faster, while allowing us to identify issues throughout the environment, and not in just a single location.
Assess has also reduced the number of false positives we encounter. Because it is observing application traffic and it's not dependent on a response from a web server or other information, it tends to be more accurate.
Assess can identify vulnerabilities associated with application libraries where we would otherwise be dependent on other third-party solutions. It provides us visibility that we didn't have before, which is very helpful. This tends to be an area where our application owners are less focused. They're generally interested in whether or not their application has a vulnerability that is the result of code that they've written. They tend to ignore whether or not they've inherited a vulnerability from a library that they're using. Our ability to point out to them that they are using a vulnerable library is information they didn't have before.
It helps us save time and money by fixing software bugs earlier in the software development cycle, although that's difficult to quantify unless you have a metric for the resource impact of a vulnerable application, or an incident that occurs because an application was vulnerable. But we are certainly identifying vulnerabilities earlier in the process and feel that we are identifying vulnerabilities more accurately.
GitGuardian Internal Monitoring: Scan
Danny says in a GitGuardian Internal Monitoring review
Chief Software Architect at a tech company with 501-1,000 employees
In general, we use GitGuardian as a safety net. We have our internal tools for validating that there is no sensitive data in there. GitGuardian is a more general and robust solution to double-check our work and make sure that if we are committing something, it only contains development IDs and not anything that is production-centric or customer-centric.
The main way in which we're using it at the moment is through the GitHub integration. It is deployed through our code review process. When pull requests are created, they connect with GitGuardian, which runs the scan before there is a review by one of our senior devs. That means we can see if there are any potential risk items before the code goes into the main branch.
reviewer1692456 says in a GitGuardian Internal Monitoring review
DevSecOps Engineer at a computer software company with 1,001-5,000 employees
I think GitGuardian scales well. It's adequately scaled for what we are using it for right now. I don't see that growing. Right now, we just have it hooked up to our source, and it can handle that. Now, if we were to expand into possibly doing the Splunk use case, that might bring in an API. In that case, I'm not sure what the performance impact would be, but I don't think it would be that bad. You throw a couple of extra nodes out there, and it should be fine. It's currently being used by all of our developers. Everyone who commits code is using it. It scans all of our code.
Don Magee says in a GitGuardian Internal Monitoring review
Security Engineer at a tech services company with 11-50 employees
The scanning on pull requests has been the most useful feature. When someone checks in code and they are waiting for another engineer to approve that code, they have a tool that scans it for secrets. There are three places where engineers could realize that they are about to do something dangerous:
- On their own machine. They have to set up tools on their machine to do that, and a lot of the time, they are not going to do that.
- On pull requests before it gets into our main code branch.
- Once it is already in our code branches, which is the least optimal place. This is where we can inject a check before it makes it into our main code branch. This is the most valuable spot since we are stopping bad code from making it into production.
The solution's detection accuracy is 90% to 95%; false positives are rare. The only time it is not accurate is when we purposely check in fake secrets for unit tests, and that is on us. They give us the ability to fix this by excluding the test directory, but we are just too nervous to do that.
We just weren't doing this before we had GitGuardian. It has enabled us to do something that we weren't able to do before. If we were doing it manually, then we might have spent 200 hours doing this manually over the past year. So, we just wouldn't do it if we didn't have something like GitGuardian.
The solution has significantly reduced our mean time to remediation, by three or four months. Previously, we wouldn't know about a leak until we did our quarterly or semi-annual review and scanned for secrets.
We have seen a return on investment. The amount of time that we would have spent manually doing this definitely outpaces the cost of GitGuardian. It is saving us about $35,000 a year, so I would say the ROI is about $20,000 a year.
Our main use case is operational security. We have a big IT platform. A lot of it is built in-house. We are not a focused IT company. We are a retailer. We have a lot of developers with a lot of different levels and projects. For example, with fashion brands, it is just, "Oh, we want to do this new app," and then they put it on our GitHub. Suddenly, we see all kinds of API keys and secrets in there. This solution is very useful for us because GitGuardian lets us know about them, then we can take care of it.
It is on the cloud. We gave GitGuardian access to our organization and codebase. It just scans it on an ongoing basis.
The main benefit is that, previously, secrets would be leaked and nobody would ever hear about it. Now, we actually have alerts and the opportunity to follow up with researchers to deal with these problems. It has provided the opportunity to collaborate on remediation rather than not knowing there are issues.
In addition, we do a review of security alerts when we open-source software. We used to have a script that we wrote that we would run to scan these repositories. It would produce a lot of noise. Now, we go to GitGuardian and immediately we have a dashboard that tells us what vulnerabilities there are.
GitGuardian has helped to modestly increase security team productivity whenever we do a review of open-source software for security leaks. Previously, that would take about an hour per repository and now it takes five minutes. We have 1,500 repositories, which is a lot. We're open-sourcing them weekly, so it doesn't amount to a huge number of hours, but it's turned something from fairly inconvenient, that had the potential to take an hour out of someone's day, to something that's just quick, easy, minimal, and more effective.
It has also helped to decrease false positives.
GitGuardian makes us more confident that our sensitive secrets aren't being leaked. I estimate our secret-detection rate is around three times as accurate as what we got with the previous open-source tool. In the past, we had to manually add regular expressions, etc. The other valuable thing is that it scans all Git history, so we can find old commits that might have sensitive information in them.
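The manual regex-based approach the reviewer describes replacing can be sketched as follows. The patterns below are illustrative examples of the kind of hand-maintained rules such a script accumulates, not GitGuardian's actual detectors:

```python
import re

# Illustrative hand-rolled patterns; a dedicated tool ships hundreds of
# curated, validated detectors instead of a manually grown list like this.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan_text(text):
    """Return (pattern_name, matched_string) pairs found in text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

if __name__ == "__main__":
    sample = 'key = "AKIAABCDEFGHIJKLMNOP"'
    print(scan_text(sample))
```

The maintenance burden is the point: every new credential format means another regex, which is why the reviewer estimates a dedicated scanner is roughly three times as accurate.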
GitGuardian has probably increased the security team's productivity tenfold. It's hard to quantify. Using after-the-fact detection as an example, we didn't know about information in our Git history until we came across it. We went from nothing to an excellent solution for finding secrets in our Git history. It's also completely shifted the burden from our team to the development teams in terms of what to do when these issues arise again.
It's equivalent to a security engineer reviewing every pull request to look for secrets. We have dozens and dozens of pull requests and commits daily, and GitGuardian performs a security review of each commit. We couldn't scale by having one person perform all that work. GitGuardian saves the security team about four to six hours per incident.
It supported our shift-left strategy by reducing our overall operational burden. The developer receives a GitGuardian alert, and they're often aware of it and addressing the issue by the time I'm triaging it.
I would want to see some form of code security scanning implemented.
The development team pushes the code into a repository, and the CI/CD pipeline will perform the build. We need open-source libraries to perform the builds. It would be helpful to have the ability to link to open-source libraries like npm libraries. I don't know if GitHub Actions provides this. I would like to see that in GitHub Actions if they don't.
If you know the language for your build, it would be wonderful if GitHub automatically provided the link to those language-specific libraries so we don't need to search for the library.
For example, if I'm using Node.js, I should be in a position to link to the npm libraries associated with that version so my build using the CI pipeline works well. Then the resulting build artifacts must go into an artifact repository. We have to depend on JFrog or Sonatype to provide binary repositories. Git has the repository technology, so why not offer a binary repository feature?
GitHub has a static code repository; now, GitHub Actions provides CI/CD. The resulting packages should stay somewhere. I don't know whether they have added this or not because I have not explored the GitHub Actions. They're all public libraries, and the result of the build or CI pipeline is a deployment-ready package. Where will we keep them? That's where we need a binary repository.
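For the npm case the reviewer mentions, GitHub Actions does cover part of this today: the `actions/setup-node` action pins a Node.js version and caches the npm dependencies a build pulls in. The workflow below is a minimal illustrative sketch (workflow, job, and script names are assumptions):

```yaml
# Illustrative GitHub Actions workflow: resolve and cache npm
# dependencies for a Node.js build without manual library setup.
name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm        # caches the npm dependency download between runs
      - run: npm ci          # installs exactly what package-lock.json pins
      - run: npm run build
```

Publishing the resulting package still requires an external registry or artifact store, which is the gap the reviewer is pointing at.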
In addition to the binary repository, I think they could also include some vulnerability scans to ensure the code we deliver is clean. SonarQube is a static code analysis tool we use. There are tools from Fortify or Veracode that can ensure there is no security vulnerability in the code. It's a core tenet of a complete CI practice. It would be wonderful if they could add this functionality.
It’s quite stable. There are no bugs or glitches. It doesn’t crash or freeze.
When we are coding and there is some unsafe code in the repository, GitHub is able to automatically scan it and tell us, "You have a vulnerability somewhere. Maybe a certain dependency you are using has a vulnerability." We can then cut such vulnerabilities before we release the software.
reviewer1500300 says in a GitLab review
UAS Innovation Group Lead at a computer software company with 11-50 employees
It would be really good if they integrated more features in application security.
I would also like to see scanning for more vulnerabilities, allowing people a one-stop glance at the security state of the application.
KulbhushanMayer says in a GitLab review
Co Founder and Technical Architect at Think NYX Technologies LLP
The SaaS setup is impressive, and it has DAST solutioning. It also has dependency check and scanning mechanisms. With other solutions, we would have to configure them and set them up as third-party tools, but GitLab is straightforward. GitLab is a single solution that helps us do everything we need.
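The single-solution point the reviewer makes maps onto GitLab's built-in CI templates: the security scans are enabled by including templates GitLab ships, rather than wiring up third-party tools. A minimal illustrative `.gitlab-ci.yml` fragment:

```yaml
# Illustrative fragment: each include enables a GitLab-shipped security
# scan job in the pipeline. DAST additionally needs a target (for example
# a DAST_WEBSITE variable) configured for the project.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/DAST.gitlab-ci.yml
```

Feature availability varies by GitLab tier, so which of these jobs actually run depends on the license.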