
OpenText ALM / Quality Center vs OpenText Silk Test comparison

 

Comparison Buyer's Guide

Executive Summary

Review summaries and opinions

 

Categories and Ranking

OpenText ALM / Quality Center
Average Rating: 8.0
Reviews Sentiment: 6.6
Number of Reviews: 207
Ranking in other categories: Application Lifecycle Management (ALM) Suites (4th), Test Management Tools (1st)

OpenText Silk Test
Average Rating: 7.6
Reviews Sentiment: 6.8
Number of Reviews: 17
Ranking in other categories: Functional Testing Tools (20th), Regression Testing Tools (8th), Test Automation Tools (20th)
 

Mindshare comparison

While both are Application Lifecycle Management solutions, they serve different purposes. OpenText ALM / Quality Center is designed for Application Lifecycle Management (ALM) Suites and holds a mindshare of 5.6%, up from 5.5% last year.
OpenText Silk Test, on the other hand, focuses on Functional Testing Tools and holds a 1.0% mindshare, down from 1.3% last year.
 

Featured Reviews

Paul Grossman - PeerSpot reviewer
Range of supported technology expands, but odd IDE design still leaves newbies and pro users alike disappointed.
There are always new features and more support for new and legacy technology architectures with each release. The bad news is that a growing list of long-standing issues with the product rarely gets addressed. While I have a larger list of issues that make day-to-day work harder than it needs to be, these are the top five that I wish would capture someone's attention in upcoming releases. All of them hit the tool's ROI pretty hard.

#1) Jump To Source, the silent code killer: In older QTP versions, a double-click on any function in the Toolbox window would take the developer to the function's source code, while a drag from the Toolbox would add it to the code window. Since 12.0, a double-click on a function in UFT's Toolbox window now ADDS the function (same as a drag) to the Code window, at whatever random location the cursor happens to be, even if it is off screen, and it will replace sections of code if they are highlighted. We are not sure what the intention was, but our best practice is to avoid the Toolbox window entirely, to avoid the real danger of losing days of work and needless bug hunts. Jump to Source is not all bad: a right-click on any function called from a script takes us to the code source, which is great. But it only half works: in a library, it works only for functions declared within the same library. Our advanced designs have well over twelve libraries, so a lot of extra time is spent searching the entire project for a function's source on a daily basis. Lastly, while we can add custom methods to objects, a Jump To Source from these methods is long overdue, so again our only option is to search the entire project.

#2) Object Spy: It needs to support multiple instances so that you can compare multiple objects' properties side by side. It also lacks a Refresh button, which would let automation engineers quickly identify property changes of visible and invisible objects. Or HP could skip to option #3.

#3) Add RegEx integer support for the .Height or .Width object properties when retrieving object collections. If this were possible, our framework could return collections containing only visible objects with a .Height property greater than zero. (Side note: the .Visible property has not returned a False value for us in nearly five years, a recent developer decision rather than a product issue.) Eliminating the need to separate non-visible objects from visible ones would decrease execution time dramatically. (Another side note: our experiments with RegEx on integer-based .Height properties found that we could get a collection of just invisible objects, exactly the opposite of what we needed. A conceptual sketch of the filtering we want appears after this review.)

#4) The shortcut to a treasure trove of sample code has been inexplicably removed in the latest release, 14.0. This impedes new users from easily learning the tool's advanced capabilities. In fact, the only users daring enough to go find it now will be those reading this review.

#5) Forced return to script code. This again is a no-brainer design flaw. Say we run a script and throw an error somewhere deep in our function library; it happens. In prior QTP versions, when the Stop button was clicked, the tool would leave you right at the point where the error occurred so it could be fixed. In recent releases, UFT always takes us back to the main script, far from the code that needed immediate attention.
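Item #3 above asks for object collections that are pre-filtered to visible objects by their .Height value. As a rough illustration of that idea only, here is a minimal Python sketch; the UiObject class and visible_only helper are invented for this example and are not part of UFT or Silk Test. The point is that filtering once, at collection time, on a height greater than zero removes the invisible entries before the main test loop ever sees them, which is where the execution-time savings the reviewer describes would come from.

from dataclasses import dataclass

@dataclass
class UiObject:
    # Hypothetical stand-in for a recorded UI test object; height is 0 when
    # the control exists in the page but is not actually rendered on screen.
    name: str
    height: int

def visible_only(objects):
    # Keep only objects whose rendered height is greater than zero, which is
    # the filter the reviewer wants applied when the collection is built.
    return [obj for obj in objects if obj.height > 0]

if __name__ == "__main__":
    collection = [
        UiObject("SubmitButton", 24),
        UiObject("HiddenSpinner", 0),   # invisible: would otherwise be skipped inside the test loop
        UiObject("SearchBox", 30),
    ]
    for obj in visible_only(collection):
        print(obj.name)  # prints SubmitButton and SearchBox only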
SrinivasPakala - PeerSpot reviewer
Stable, with good statistics and detailed reporting available
For performance testing engagements, we need to come up with load strategies before commencing the test. We help monitor the test, and afterward we pull together all the outcomes; when findings are new, we do a great deal of interpretation and analysis across various servers to look at response times and impact. Whatever observations we make during the test need to be acted on. It is up to us to work out exactly what the issues were and how they can be reduced. Everything is very manual. The solution needs better monitoring, especially of CPU.

Quotes from Members

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Pros

"The test-case repository and linkage through to regression requirements will absolutely be a key component for us. We haven't got it yet, but when we've got an enterprise regression suite, that will be a key deliverable for them. We will be able to have all of the regression suite in one place, linked to the right requirements."
"I like that it integrates with the Jira solutions."
"It's basically the way to show the work that we do as QA testers, and to have a historical view of those executions."
"We can get an entire project into a single repository where we can view all the data in detail. This is where we keep all our test cases where everyone can reference them. This provides everyone access to the test cases and artifacts via the cloud. There is no need to contact anyone."
"The stability is very good."
"The integration with UFT is nice."
"This solution is open and very easy to integrate. The interface is good too."
"Ability to customize modules, particularly Defect Tracking module on company specific needs"
"The ability to develop scripts in Visual Studio, Visual Studio integration, is the most valuable feature."
"The scalability of the solution is quite good. You can easily expand the product if you need to."
"The major thing it has helped with is to reduce the workload on testing activities."
"Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts."
"A good automation tool that supports SAP functional testing."
"The feature I like most is the ease of reporting."
"It's easy to automate and accelerate testing."
"The statistics that are available are very good."
 

Cons

"HP-QC does not support Agile. It is designed for Waterfall. This is the number one issue that we're facing right now, which is why we want to look for another tool. We're a pharmaceutical services company, so we require electronic signatures in a tool, but this functionality isn't available in HP-QC. We don't have 21 CFR, Part 11, electronic signatures, and we need compliant electronic signatures. Some of the ALM tools can toggle between tabular format and document format for requirements, but the same feature is not available in this solution. There is also no concept of base-lining or versioning. It doesn't exist."
"It is not a scalable solution."
"There is room for improvement in the scalability and stability of the solution."
"As soon as it's available on-premises we want to move to ALM Octane as it's mainly web based, has the capability to work with major tests, and integrates with Jenkins for continuous integration."
"The integration could be improved because with Agile technology you are working more quickly than with a top-down methodology."
"The UFT tests don't work very well and it seems to depend on things as simple as the screen resolution on a machine that I've moved to."
"There's room for improvement in the requirements traceability with Micro Focus ALM Quality Center. That could use an uplift."
"Micro Focus ALM Quality Center should improve the reports. Reporting on tax execution progress against the plan. However, they might have improved over two years since I have used the solution."
"The pricing could be improved."
"The solution has a lack of compatibility with newer technologies."
"The pricing is an issue, the program is very expensive. That is something that can improve."
"They should extend some of the functions that are a bit clunky and improve the integration."
"Everything is very manual. It's up to us to find out exactly what the issues are."
"Could be more user-friendly on the installation and configuration side."
"The support for automation with iOS applications can be better."
"We moved to Ranorex because the solution did not easily scale, and we could not find good and short term third-party help. We needed to have a bigger pool of third-party contractors that we could draw on for specific implementations. Silk didn't have that, and we found what we needed for Ranorex here in the Houston area. It would be good if there is more community support. I don't know if Silk runs a user conference once a year and how they set up partners. We need to be able to talk to somebody more than just on the phone. It really comes right down to that. The generated automated script was highly dependent upon screen position and other keys that were not as robust as we wanted. We found the automated script generated by Ranorex and the other key information about a specific data point to be more robust. It handled the transition better when we moved from computer to computer and from one size of the application to the other size. When we restarted Silk, we typically had to recalibrate screen elements within the script. Ranorex also has some of these same issues, but when we restart, it typically is faster, which is important."
 

Pricing and Cost Advice

"The pricing is expensive nowadays."
"It is an expensive tool. I think one needs to pay 10,000 USD towards the perpetual licensing model."
"Most vendors offer the same pricing, though some vendors offer a cheaper price for their cloud/SaaS solution versus their on-premise. However, cloud/SaaS solutions result in a loss of freedom. E.g., if you want to make a change, most of the time it needs to be validated by the vendor, then you're being charged an addition fee. Sometimes, even if you are rejected, you are charged because it's a risk to the entire environment."
"HPE has one of the most rigid, inflexible, and super expensive license models."
"I don't know the exact numbers, but I know it is pricey. When we talked to the sales reps we work with from our company, they say, "Well, Micro Focus will never lose on price." So, they are willing to do a lot of negotiating if it is required."
"ALM Quality Center is a little bit costly."
"Sure, HP UFT is not free. But consider what you get for that cost: A stable product that is easy to use; the kitchen sink of technology stack support; decades of code (which in many cases actually is free); a version that is a stepping stone to an easier Selenium design; and a support base that is more that just the kindness of strangers."
"Pricing could be improved as it's high-priced. I don't exactly know the pricing point, but previously, I know that it was really high so less people were able to use it for their projects."
"Our licensing fees are on a yearly basis, and while I think that the price is quite reasonable I am not allowed to share those details."
"We paid annually. There is a purchase cost, and then there is an ongoing maintenance fee."
 

Top Industries

By visitors reading reviews
OpenText ALM / Quality Center
Educational Organization: 69%
Financial Services Firm: 6%
Manufacturing Company: 5%
Computer Software Company: 4%

OpenText Silk Test
Computer Software Company: 20%
Financial Services Firm: 18%
Manufacturing Company: 10%
Government: 6%
 

Company Size

By reviewers
Large Enterprise
Midsize Enterprise
Small Business
 

Questions from the Community

What do you like most about Micro Focus ALM Quality Center?
The most valuable feature is the ST Add-In. It's a Microsoft add-in that makes it much easier to upload test cases into Quality Center.
What is your experience regarding pricing and costs for Micro Focus ALM Quality Center?
The on-premises setup tends to be on the expensive side. It would be cheaper to use a cloud model with a pay-per-use licensing model.
What needs improvement with Micro Focus ALM Quality Center?
We work with Jira now, and there are some very good workflows. There could be more configurable workflows regarding test case creation approval. I see a stable tool that remains relevant in the mar...
What is your experience regarding pricing and costs for Silk Test?
The pricing depends on the license used. The pricing is similar to others in the market.
What is your primary use case for Silk Test?
The product is used for manual, functional, and performance testing. I'm using the tool for loading data into ERP systems.
 

Also Known As

Micro Focus ALM Quality Center, HPE ALM, Quality Center, Micro Focus ALM
Segue, SilkTest, Micro Focus Silk Test
 

Overview

 

Sample Customers

Airbus Defense and Space, Vodafone, JTI, Xellia, and Banco de Crédito e Inversiones (Bci)
Krung Thai Computer Services, Quality Kiosk, Müller, AVG Technologies