We performed a comparison between OpenText LoadRunner Enterprise and ReadyAPI Performance based on real PeerSpot user reviews.
Find out in this report how the two Performance Testing Tools solutions compare in terms of features, pricing, service and support, ease of deployment, and ROI.
"You can test a huge variety of applications, not just web-based systems, but SAP, Oracle, web services, pretty much anything out in the marketplace, including mobile-based testing."
"It is mostly user-friendly and usable."
"This product is better oriented to large, enterprise-oriented organizations."
"The product is very user-friendly."
"It's a very powerful tool."
"I like how you can make modifications to the script on LoadRunner Enterprise. You don't have to go into the IDE itself."
"The most valuable feature is the Vuser protocols."
"One of the most valuable features of this solution is recording and replaying, and the fact that there are multiple options available to do this."
"The performance and reporting of this solution have been its most valuable features."
"ReadyAPI automation can help us validate the functionality of most web services, allowing us to find out the exact number of defects before deployment to the user interface."
"We find the product to be scalable."
"It produces good reports, improved compared with SoapUI's. It also has built-in security testing; you just need to switch it on and check the security tests. My team has never used it, but I know ReadyAPI provides those facilities as well."
"The initial deployment process is easy."
"We can scale."
"It's like a centralized interface that allows us to increase the quality of our APIs."
"I think better or more integration with some of the monitoring tools that we're considering would help."
"After they get over the acquisition, the first improvement is going to be tailoring it for their existing stack of other products. How would LoadRunner work for Documentum? How would it work for Business Network? How would it work for other apps? They can have a pre-package or a guide because they are all in the same family as opposed to being outside."
"For such an experienced team as mine, who have been with the product for over ten years, sometimes working with technical support is not that easy."
"The cost of the solution is high and can be improved."
"I think better support for cloud-based load generators would help. For example, integrate with Amazon AWS so you can quickly spin up a load generator in the cloud, use it, spin it down."
"The solution can be improved by making it more user-friendly, and by including autocorrelation capability."
"An area for improvement in Micro Focus LoadRunner Enterprise is that it should take multiple executions of a particular scenario and provide automatic trending for them. That would be a very useful feature, letting users see how many executions of a scenario have happened and how each performed, with the data visible within the Performance Center dashboard. For example, there's one scenario I run multiple times in a month, and if I run it five times, there's no way for me to see the trend and find out how those five executions went. It would be great if Performance Center had a view of all five executions, transaction by transaction. If Micro Focus LoadRunner Enterprise showed the time trends from one execution to another and how each performed, it would be an immense feature, and it should be visible to every user. Reporting should also be simpler. If I ran a scenario once now and then ran it again, I should be able to schedule that scenario, and if it is executed multiple times, there should be an option for a single view showing all the transactions, how they performed, the trend graph over a particular time, and so on."
"The debugging feature needs to include graphs."
"I'm not sure if they have the same level of documentation for performance and security testing."
"I want the solution to be able to monitor Apache Kafka activity as well."
"This is an area for improvement with the tool. We unnecessarily use JMeter for some website testing, which we would like to avoid by introducing this tool for API and load testing because it provides load testing features."
"We need some time to understand and configure the solution."
"This solution could be improved by offering AI-based testing in addition to API testing. For example, we would like to have machine learning testing because, when testing applications, manual work could be completed automatically using this functionality."
"The solution’s interface could be improved."
"It is very slow sometimes."
OpenText LoadRunner Enterprise is ranked 5th in Performance Testing Tools with 81 reviews while ReadyAPI Performance is ranked 10th in Performance Testing Tools with 7 reviews. OpenText LoadRunner Enterprise is rated 8.4, while ReadyAPI Performance is rated 8.2. The top reviewer of OpenText LoadRunner Enterprise writes "Saves time and effort, and makes it easy to set up scenarios and execute tests". On the other hand, the top reviewer of ReadyAPI Performance writes "Straightforward to install with the ability to add multiple assertions but the price is too high". OpenText LoadRunner Enterprise is most compared with OpenText LoadRunner Professional, OpenText LoadRunner Cloud, OpenText Silk Performer, Tricentis NeoLoad and Apache JMeter, whereas ReadyAPI Performance is most compared with SmartBear LoadNinja and Apache JMeter. See our OpenText LoadRunner Enterprise vs. ReadyAPI Performance report.
See our list of best Performance Testing Tools vendors and best Load Testing Tools vendors.
We monitor all Performance Testing Tools reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.