
OpenText Silk Test vs Perfecto comparison

 


Executive Summary
Updated on Dec 15, 2024


Categories and Ranking

OpenText Silk Test
Ranking in Functional Testing Tools: 19th
Ranking in Test Automation Tools: 19th
Average Rating: 7.6
Reviews Sentiment: 6.8
Number of Reviews: 17
Ranking in other categories: Regression Testing Tools (8th)

Perfecto
Ranking in Functional Testing Tools: 15th
Ranking in Test Automation Tools: 22nd
Average Rating: 8.4
Reviews Sentiment: 7.4
Number of Reviews: 23
Ranking in other categories: Performance Testing Tools (17th), Mobile App Testing Tools (8th)
 

Mindshare comparison

As of May 2025, in the Functional Testing Tools category, the mindshare of OpenText Silk Test is 1.0%, down from 1.3% the previous year. The mindshare of Perfecto is 4.6%, down from 5.7% the previous year. Mindshare is calculated from PeerSpot user engagement data.
 

Featured Reviews

SrinivasPakala - PeerSpot reviewer
Stable, with good statistics and detailed reporting available
During performance testing, we need to come up with load strategies before commencing the test. We monitor the test while it runs, and afterward we gather all the outcomes; if anything is new, we do lots and lots of interpretation and analysis across the various servers, looking at response times and impact. Whatever we observe during the test then has to be acted on: we have to work out exactly what the issues were and how they can be reduced. Everything is very manual. It's up to us to find out exactly what the issues are. The solution needs better monitoring, especially of CPU.
Roland Castelino - PeerSpot reviewer
Its reporting allows us to have a clear view regarding what tests have been executed
The most valuable would be their Live Stream analysis, where I can see the live analysis of all the executions on a single device or multiple devices, as well as track them. The live analysis and reporting would be the single most valuable feature.

We leverage Perfecto's reporting and analytics a lot. From the CI Dashboard, it is mainly the status: the pass and failure counts and time consumption, e.g., how much time did an average test or script take? Along with that, it provides the historical view compared to the previous result, e.g., am I a pass or a fail? The stack trace is also very important. Whenever a pass occurs, we don't look beyond that. However, whenever a failure occurs, the stack trace information it gives us is pretty critical for figuring out where the failure lies. It gives a summary of the pass/fail count, the total test count, the historical view, the time consumption for each test as well as the total tests, and the stack trace of the failure.

Perfecto's analytics are very important since we use them on a daily basis. We run our executions daily, and after every execution, we pull information from the Perfecto reporting system and share it with our stakeholders. Having this information accurately reported is pretty important for us, so everybody is aware of the current status of the product. That way, we can evaluate the health of the product or environment against what has been executed, which helps make those real-time decisions and highlights the impact to the business.

I found Perfecto pretty easy to use when executing across platforms. The main reason is that the same script or test automation needs minimal changes to execute on multiple platforms. It is also easy for me to set up an execution on one platform, then on another, either in parallel or one after the other. Parallel execution saves me time. Once the execution has been completed across these different configurations, I can always check and compare, e.g., what are the differences and consistencies?

We utilize Perfecto's cloud-based lab to test across devices, browsers, and OSs. I use it occasionally for manual testing, though other team members use it more frequently than I do; I use it mainly for executing my automated tests. We have the Perfecto lab, cloud devices, and machines. I can program my test to execute against any of those devices, which gives me more confidence in my product. I can compare and see how my product or application behaves functionally and from a UI point of view across these different devices, which helps me a lot.

The device lab is extremely important to our testing operations. We rely on having multiple devices up and running all the time. There are multiple reasons why an execution may get triggered:
* CodeCommit.
* A scheduled job.
* An on-demand request by any stakeholder.
We need the lab to be available, with devices up and running, for those executions to take place. The devices also allow parallel execution, rather than waiting for a sequential device to become free and available, so volume is definitely key. It also gives us an opportunity to compare execution across platforms. Same-day access to new devices is extremely important to us, since we analyze that data every single day after execution.

Perfecto provides their own framework called Quantum Framework. That is one option. The other option is, if I want my own framework, I can have a Java-based Maven project, take the Selenium, Appium, and REST Assured libraries, and utilize that open-source framework. It is easy for us to connect to Perfecto no matter what framework we use, as long as it has these core libraries in it. I can design and structure it any way that I want; the execution will happen in Perfecto regardless, since they support these tools and libraries. It is pretty neat that way. We are not dependent on one particular framework to use Perfecto. While there are still some framework limitations, we can use most open-source frameworks, design and craft them any way we want, and then just pass the execution to Perfecto.
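That last point is easy to picture in code. Below is a minimal, hypothetical sketch of the kind of Java/Maven setup the reviewer describes: a plain Selenium RemoteWebDriver whose execution lands in the Perfecto cloud simply because the driver points at Perfecto's grid endpoint. The tenant name ("mycloud"), the endpoint pattern, and the capability names (e.g., "securityToken") are assumptions drawn from Perfecto's public documentation, not details from this review.

```java
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class PerfectoConnectionSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder tenant name; the endpoint pattern follows Perfecto's docs.
        URL endpoint = new URL(
            "https://mycloud.perfectomobile.com/nexperience/perfectomobile/wd/hub");

        DesiredCapabilities caps = new DesiredCapabilities();
        // Capability names are assumptions based on Perfecto's public docs.
        caps.setCapability("securityToken", System.getenv("PERFECTO_TOKEN"));
        caps.setCapability("platformName", "Android");
        caps.setCapability("browserName", "Chrome");

        // A standard Selenium remote session; nothing Perfecto-specific
        // beyond the endpoint and capabilities above.
        RemoteWebDriver driver = new RemoteWebDriver(endpoint, caps);
        try {
            driver.get("https://example.com");
            System.out.println("Title: " + driver.getTitle());
        } finally {
            driver.quit(); // releases the cloud device back to the lab
        }
    }
}
```

An Appium client would follow the same pattern; only the endpoint and capabilities change, which is why the reviewer can stay framework-agnostic.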

Quotes from Members

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Pros

"The scalability of the solution is quite good. You can easily expand the product if you need to."
"The major thing it has helped with is to reduce the workload on testing activities."
"The feature I like most is the ease of reporting."
"A good automation tool that supports SAP functional testing."
"The ability to develop scripts in Visual Studio, Visual Studio integration, is the most valuable feature."
"It's easy to automate and accelerate testing."
"The statistics that are available are very good."
"Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts."
"Mobile testing is the most valuable feature as it has reduced dependency on physical devices. We are located offshore and we don't have the physical devices, and shipping physical devices after every new release would be a difficult task. But with Perfecto, it is easy."
"Perfecto has affected our software quality in a good way. It has allowed us to execute on-demand and on-choice. We also track the number of issues that we find in the product. Every single day, we tag the issues that we found. For example, if something was found by automation, that means it was found by a Perfecto execution. Over time, we realized the real value in tracking those numbers. We can see now that we have clearly been finding issues earlier. It has allowed us to catch our defects earlier, thus improving the quality of our applications."
"The automated test reporting functionality is the most valuable feature. We use the CI Dashboard. It's very important as it is the main reporting tool for our automated tests."
"I also like the reporting functions. We are constantly downloading these reports and sharing them with our final customers. They help us understand what kind of bugs are happening through the applications. The recording feature is handy because it lets us see a video of the process we run through the pipeline and discover the point at which the automation is breaking."
"There are a whole bunch of things that I like about the solution, but I really love the interaction it has with mobile devices, the testing capabilities, as well as reporting capabilities that we get from the application. The reports are very detailed."
"The most valuable aspect of the solution is that it covers all types of devices on the market allowing you to test different versions of an operating system."
"In terms of cross-platform testing, they offer all of it, every device available in the market. It covers real scenarios that mimic production so that we don't miss out on any devices that our clients might be using to run the applications we develop. It's been great and very helpful."
"We're working in Agile and we need results ASAP. The fact that the lab provides same-day access to new devices is extremely important to us."
 

Cons

"The solution has a lack of compatibility with newer technologies."
"The pricing could be improved."
"Could be more user-friendly on the installation and configuration side."
"The support for automation with iOS applications can be better."
"They should extend some of the functions that are a bit clunky and improve the integration."
"Everything is very manual. It's up to us to find out exactly what the issues are."
"The pricing is an issue, the program is very expensive. That is something that can improve."
"We moved to Ranorex because the solution did not easily scale, and we could not find good and short term third-party help. We needed to have a bigger pool of third-party contractors that we could draw on for specific implementations. Silk didn't have that, and we found what we needed for Ranorex here in the Houston area. It would be good if there is more community support. I don't know if Silk runs a user conference once a year and how they set up partners. We need to be able to talk to somebody more than just on the phone. It really comes right down to that. The generated automated script was highly dependent upon screen position and other keys that were not as robust as we wanted. We found the automated script generated by Ranorex and the other key information about a specific data point to be more robust. It handled the transition better when we moved from computer to computer and from one size of the application to the other size. When we restarted Silk, we typically had to recalibrate screen elements within the script. Ranorex also has some of these same issues, but when we restart, it typically is faster, which is important."
"We don't use Perforce's BlazeMeter with Perfecto. From my perspective, it's not really relevant."
"The monitoring features, in particular network traffic monitoring, could be improved."
"Previously, we used the cradle. Every time the mobile was blocking it, we would have to ask Perfecto to provide another one. That took a lot of time away from us."
"There could be some improvements done on the interface. At times, there has been a bit of a struggle when finding things on the interface. A UI revamp would be a better option in future. That UI hasn't changed much in a long time, so I think they could just make it a bit better so that people could find stuff easily and intuitively."
"When using devices on the cloud, it lags quite a bit at times. I know that these are real devices that are being projected on our laptop screens and monitors, but if the speed could be improved, that would be good."
"I'm hoping that Perfecto will come up with browser testing as well because it would be easier to access it."
"Going by the dashboard or analytics capabilities that Perfecto or Perforce is looking to offer in its roadmap, it will certainly help if they also cater to executing and enabling decision-making, rather than just focusing on standard testing metrics such as execution, efficiency, and defect rate. These are good metrics, but they don't necessarily enable decision-making for SLTs. Any improvements in the dashboards and reporting tools should focus on metrics or SLAs that can help with decision-making."
"I would like to see the inclusion of machine learning features. If we can have that, it will be a better tool."
 

Pricing and Cost Advice

"We paid annually. There is a purchase cost, and then there is an ongoing maintenance fee."
"Our licensing fees are on a yearly basis, and while I think that the price is quite reasonable I am not allowed to share those details."
"I am not sure about its pricing, but from our perspective, licensing has been easy. Anytime I have new users or requests for users that want to get added, it's a very simple process. I just give the architectural owner of the product the name and email address, and they're able to easily add a new user. We don't have any issues in regards to getting licenses, but I don't have any insights into pricing."
"Perfecto's price is excellent compared to other products with similar features. It was the lowest of the three we evaluated. We also established a partnership with Perfecto, so they provide discounts when we sell Perfecto projects and licenses to our customers."
"Pricing-wise, it is fine. It is not as expensive as what we used to have in the past from HP, IBM, and others. It is decently priced."
"Although Perfecto is a good product for us to use, it is a bit expensive. It takes management a bit of work to find the appropriate funding for us to keep Perfecto. I imagine there could be some way to make it more accessible."
"Perfecto is about 30-40% cheaper than Device Anywhere. That was a big reason why we switched. Perfecto also solves some of the issues that we had with Device Anywhere. We have grown by 100% since we started to use Perfecto, and now we have devices roaming. When we look at the competition, we would still stick with Perfecto."
"This is an expensive solution compared to others, by 30% to 40%."
"Perfecto has definitely saved us on the costs and efforts of having to maintain our own virtual test environment. We lost about 20 devices in the past to maintenance and audit. That was a massive loss for us, as a company, because we were giving devices to someone, but don't know whether we would get it back or not. Having those virtual labs, we don't need to worry about these kinds of things. We are easily saving $5,000 to $10,000 a month on device costing."
"Pricing is an area where Perfecto can do a little better. When we obtain additional licenses, we enter into negotiations with them."
 

Top Industries

By visitors reading reviews

OpenText Silk Test
Computer Software Company: 21%
Financial Services Firm: 18%
Manufacturing Company: 10%
Government: 6%

Perfecto
Financial Services Firm: 22%
Computer Software Company: 20%
Manufacturing Company: 9%
Insurance Company: 5%
 

Company Size

By reviewers
Large Enterprise
Midsize Enterprise
Small Business
 

Questions from the Community

What is your experience regarding pricing and costs for Silk Test?
The pricing depends on the license used, and it is similar to others in the market.
What is your primary use case for Silk Test?
The product is used for manual, functional, and performance testing. I'm using the tool for loading data into ERP systems.
 


Also Known As

OpenText Silk Test: Segue, SilkTest, Micro Focus Silk Test
Perfecto: Perfecto Mobile, Perfecto Web
 


Sample Customers

OpenText Silk Test: Krung Thai Computer Services, Quality Kiosk, Müller, AVG Technologies
Perfecto: Virgin Media, Paychex, Rabobank, R+V, Discover