We performed a comparison between BlazeMeter and OpenText Silk Test based on real PeerSpot user reviews.
Find out what your peers are saying about Tricentis, OpenText, Perforce and others in Functional Testing Tools.

"I really like the recording. When I use JMeter, scripting with a lot of recording takes me a long time to get used to. With BlazeMeter, recording is quick."
"They have good support documentation and when we have contacted them, they helped to guide us."
"BlazeMeter's most valuable feature is its cloud-based platform for performance testing."
"Its most valuable features are its strong community support, user-friendly interface, and flexible capacity options."
"In our company, various teams use BlazeMeter, particularly appreciating its cloud license software, which supports up to 5,000 users. BlazeMeter's cloud capabilities allow us to load test or simulate traffic from any location worldwide, such as Europe, North America, South America, Australia, and even specific cities like Delhi. So, with one cloud license, we can simulate user load from various locations globally."
"The product's initial setup phase was straightforward."
"It has a unique programming dashboard that is very user-friendly."
"BlazeMeter can be used for both API and performance testing; it is a multi-faceted tool."
"The feature I like most is the ease of reporting."
"Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts."
"The ability to develop scripts in Visual Studio, Visual Studio integration, is the most valuable feature."
"The statistics that are available are very good."
"A good automation tool that supports SAP functional testing."
"The scalability of the solution is quite good. You can easily expand the product if you need to."
"The major thing it has helped with is to reduce the workload on testing activities."
"For a new user of BlazeMeter, it might be difficult to understand it from a programming perspective."
"There should be some visibility into load testing. I'd like to capture items via snapshots."
"The product currently doesn't allow users to run parallel thread groups, making it an area that should be considered for improvement."
"From a performance perspective, BlazeMeter needs to be improved...BlazeMeter lacks extensions for WebSockets or Java Applets."
"Integration with APM tools like Dynatrace or AppDynamics needs to be improved."
"BlazeMeter has room for improvement in terms of its integration with GitLab, particularly in the context of CI/CD processes. While it has multiple integrations available, the level of integration with GitLab may need further enhancements. It is known to work well with Git and Jenkins, although the extent of compatibility with GitLab is uncertain."
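For readers weighing the CI/CD integration mentioned above, here is a minimal, hypothetical sketch of a load-test definition for Taurus (`bzt`), BlazeMeter's open-source test runner, which is a common way to drive BlazeMeter tests from a pipeline. The file name, target URL, and load figures are illustrative assumptions, not an official recipe.

```yaml
# load-test.yml — illustrative Taurus scenario (values are assumptions)
execution:
- executor: jmeter        # run the scenario through JMeter under the hood
  concurrency: 50         # 50 virtual users
  ramp-up: 1m             # reach full load over one minute
  hold-for: 5m            # sustain full load for five minutes
  scenario: homepage

scenarios:
  homepage:
    requests:
    - https://example.com/   # placeholder endpoint
```

In a GitLab CI job, this could be invoked as, for example, `bzt load-test.yml -report` inside the `blazemeter/taurus` Docker image, with the `-report` flag pushing results to BlazeMeter's online reporting (assuming an API key is supplied via a CI variable).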
"The Timeline Report panel has no customization options. One feature I missed was a time filter, which I had in ELK. For example, you can only filter requests that took less than 5 seconds."
"BlazeMeter needs more granular access control. Currently, BlazeMeter controls everything at a workspace level, so a user can view or modify anything inside that workspace depending on their role. It would be nice if there was a more granular control where you could say, "This person can only do A, B, and C," or, "This user only has access to functional testing. This user only has access to mock services." That feature set doesn't currently exist."
"We moved to Ranorex because the solution did not easily scale, and we could not find good, short-term third-party help. We needed a bigger pool of third-party contractors that we could draw on for specific implementations. Silk didn't have that, and we found what we needed for Ranorex here in the Houston area. It would be good if there were more community support. I don't know if Silk runs a user conference once a year or how they set up partners. We need to be able to talk to somebody more than just on the phone. It really comes right down to that. The generated automated script was highly dependent upon screen position and other keys that were not as robust as we wanted. We found the automated script generated by Ranorex and the other key information about a specific data point to be more robust. It handled the transition better when we moved from computer to computer and from one size of the application to another. When we restarted Silk, we typically had to recalibrate screen elements within the script. Ranorex also has some of these same issues, but when we restart, it typically is faster, which is important."
"Could be more user-friendly on the installation and configuration side."
"They should extend some of the functions that are a bit clunky and improve the integration."
"Everything is very manual. It's up to us to find out exactly what the issues are."
"The pricing is an issue; the program is very expensive. That is something that can improve."
"The support for automation with iOS applications can be better."
"The solution has a lack of compatibility with newer technologies."
BlazeMeter is ranked 9th in Functional Testing Tools with 41 reviews while OpenText Silk Test is ranked 25th in Functional Testing Tools. BlazeMeter is rated 8.2, while OpenText Silk Test is rated 7.6. The top reviewer of BlazeMeter writes "Reduced our test operating costs, provides quick feedback, and helps us understand how to build better test cases". On the other hand, the top reviewer of OpenText Silk Test writes "Stable, with good statistics and detailed reporting available". BlazeMeter is most compared with Apache JMeter, Tricentis NeoLoad, OpenText LoadRunner Cloud, OpenText LoadRunner Professional and Perfecto, whereas OpenText Silk Test is most compared with Selenium HQ, OpenText UFT One, OpenText UFT Developer, Apache JMeter and froglogic Squish.
See our list of best Functional Testing Tools vendors and best Test Automation Tools vendors.
We monitor all Functional Testing Tools reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.