Buyer's Guide
Test Automation Tools
September 2022
Get our free report covering Tricentis, Katalon Studio, Ranorex, and other competitors of SmartBear TestComplete. Updated: September 2022.
632,611 professionals have used our research since 2012.

Read reviews of SmartBear TestComplete alternatives and competitors

Automation Test Consultant at a computer software company with 10,001+ employees
Consultant
Reduces test execution time, performs well for non-web-based applications, but the AI features need to be improved
Pros and Cons
  • "I find UFT One to be very good for thick clients, which are non-browser applications."
  • "The artificial intelligence functionality is applicable only on the web, and it should be expanded to cover non-web applications as well."

What is our primary use case?

I am a consultant in my organization and one of the tasks that I perform is to assist other users with technical issues. Specifically, with UFT One, I am currently evaluating the AI features. I want to experiment with them and find out how it all works so that we can take that information to our customers.

How has it helped my organization?

The fact that UFT One covers multiple technologies helps in terms of end-to-end scenarios. When we have process flows, workflows, or scenarios that span multiple technologies, we don't have to branch out and use multiple tools. This is very helpful.

The platform supports both API and GUI usage, although we have only used it for GUI.

The continuous testing across the software lifecycle is good. When we have done continuous testing, we connect to remote machines and execute the tool. The only problem we encountered was that when the remote system was not visible, or no user was logged in, there were some issues. However, it has been several months since we tried this.

We have not really put the AI capabilities into practice yet because it is currently only applicable for web-based applications. Our customers have pre-existing tools that already perform this work.

In general, UFT has helped to reduce our test execution time. In particular, with our non-web ecosystem, the execution time has been reduced considerably.

At this point, UFT has not helped us to decrease defects because we are not creating new test cases; rather, we are automating existing test cases with it. It might help with regression testing, where the number of defects is much higher.

We also use UFT One for SAP test scenarios.

What is most valuable?

I find UFT One to be very good for thick clients, which are non-browser applications. For browser applications, we have a good number of non-commercial alternatives. However, for thick clients, whether they are Java, Mainframe, SAP, or .NET, this solution works pretty well.

The introduction of artificial intelligence in UFT is a step in the right direction.

Automating manual processes with UFT has helped to increase our test coverage. Not every feature is applicable to us, but there are provisions in the latest version that can increase testing coverage.

We perform some of our tests in virtual machines, and UFT gives us control over the machine configuration, such as allocating specific resources. That said, our virtual machines are configured by another team before they are provided to us, so we don't use UFT to control them.

What needs improvement?

The AI functionality has a lot of room for improvement, as it has just started. For example, when a particular object is found, you have to scroll down, rather than have it done automatically.

The artificial intelligence functionality is applicable only on the web, and it should be expanded to cover non-web applications as well.

For how long have I used the solution?

I have been using Micro Focus UFT One for between six months and one year. More generally, I have used UFT for approximately 12 years.

What do I think about the stability of the solution?

The stability is pretty good with respect to the traditional functionality, which has existed for years. Some of the new features might not be as stable. In particular, I have observed a little instability with the AI features. I think that is acceptable given that they are new.

What do I think about the scalability of the solution?

This product is scalable in some regards and not others. 

As for extending the execution of tests to other machines, you have to install UFT on every machine and get it started, which may not be very scalable. However, it is scalable in terms of generally extending coverage to other applications. Essentially, once you start automating an application, you can continue to build on that as new requirements or scenarios come in.

How are customer service and technical support?

I have not personally dealt with customer support, although when I was helping one of our customer teams, there was a problem that I could not resolve and I asked them to raise a ticket. Unfortunately, the issue was not resolved. I was told that the answer from the Micro Focus support team was not helpful.

Five or six years ago, I did deal with UFT support, but it was not for the UFT One product.

I have interacted with the Micro Focus design team, giving my input as to how AI is important. I was told that it's going to be available in upcoming releases.

Which solution did I use previously and why did I switch?

I have used other tools including Tricentis Tosca, and I find that one, in particular, to be better for testing web-based applications. There are other tools including TestComplete, but I would recommend UFT One for non-web applications.

Tricentis Tosca is nice because it is a scriptless tool; you don't need to know scripting in order to get it to work. It is more UI-based, a new person can usually do well with it, and there is not much of a learning curve. This is in contrast to UFT One, where you need to know the scripting language in order to automate tests.

What about the implementation team?

I assist our clients in setting up their operations, such as helping to identify objects or setting up the scripting. However, I do not help with the actual deployment.

What other advice do I have?

In the past, UFT One did not support integration with third-party applications such as Jenkins and Bamboo. However, there are now some plugins that are available.

My advice for others who are considering this product is that if they are looking to automate non-web applications, then it is a good choice. For web-based applications, I would recommend another tool, such as Tricentis Tosca.

I would rate this solution a seven out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner
Director of Engineering at an energy/utilities company with 51-200 employees
Real User
Top 5
A stable solution with good scripting features, but it needs better scalability and a bigger pool of third-party contractors
Pros and Cons
  • "Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts."
  • "We moved to Ranorex because the solution did not easily scale, and we could not find good, short-term third-party help. We needed to have a bigger pool of third-party contractors that we could draw on for specific implementations."

What is our primary use case?

We used it for data-driven automated tests that have numeric calculations with high precision requirements. We probably are using the version from two years ago.

How has it helped my organization?

It was implemented to solve a very large and specific test scenario with 24,000 test cases. It did that, and the company was quite happy with this solution, but it did not easily scale, and we could not find good, short-term third-party help. So, we moved to Ranorex. We have now canceled the maintenance for Silk.

That one test scenario has been very valuable. We still use the data results from that. We used it to validate Ranorex. It has helped keep the company on the automated test path.

What is most valuable?

Scripting is the most valuable. We are able to record and then go in and modify the script that it creates. It has a lot of generative scripts.

What needs improvement?

We moved to Ranorex because the solution did not easily scale, and we could not find good, short-term third-party help. We needed to have a bigger pool of third-party contractors that we could draw on for specific implementations. Silk didn't have that, and we found what we needed for Ranorex here in the Houston area. It would be good if there were more community support. I don't know if Silk runs a user conference once a year or how they set up partners. We need to be able to talk to somebody more than just on the phone. It really comes down to that.

The generated automated script was highly dependent upon screen position and other keys that were not as robust as we wanted. We found the automated script generated by Ranorex and the other key information about a specific data point to be more robust. It handled the transition better when we moved from computer to computer and from one size of the application to the other size. When we restarted Silk, we typically had to recalibrate screen elements within the script. Ranorex also has some of these same issues, but when we restart, it typically is faster, which is important.

For how long have I used the solution?

I've been with the company for a little over two years. They were using it when I got here. They have used it for several years.

What do I think about the stability of the solution?

It was stable. It didn't crash and ran as expected.

What do I think about the scalability of the solution?

We had one specific large-scale job on which we needed to have automated tests. We had 24,000 test cases, which were too many to do by hand in a timely way. We got Silk Test set up, and it ran. We wanted to run another 24,000 general test cases, but we didn't find cloning to be as effective as we would have wanted. It was easier with Ranorex. That might have been because we were able to hire a third-party consultant to come in for three weeks and get that kicked off for us, whereas we couldn't find that help with Silk.

How are customer service and technical support?

On the phone, they were fine, but we needed a full-time consultant for three weeks. We could not find that through Silk or their contractor base. 

Which solution did I use previously and why did I switch?

I believe they used something called TestPartner.

How was the initial setup?

It was done before my time.

What about the implementation team?

It was all done in-house. We had a limited number of licenses. We took it off maintenance a couple of times; that's probably the same challenge with any tool.

Everyone engaged with it worked in proper quality assurance, with the exception of one developer whose job was to set up the DLL link between Silk and our products. His role was limited. He got it set up, and he was done. On an ongoing basis, it was all on our SQA testers.

What's my experience with pricing, setup cost, and licensing?

We paid annually. There is a purchase cost, and then there is an ongoing maintenance fee.

Which other solutions did I evaluate?

We now use Ranorex, and we had looked at Ranorex, TestComplete, and LEAPWORK. One of the deciding factors for Ranorex was a recommendation from a respected colleague in a different company.

Generally speaking, Silk Test was fine and better than Ranorex in some ways. The biggest thing was that we were able to get some short-term and very specifically-focused help when needed with Ranorex, but we couldn't get that with Silk. Otherwise, the tool has many comparable features.

What other advice do I have?

It is a fine product. It is just like any other tool. It is a powerful tool, and it needs commitment. Our way to get that on top of our workload was to find a short term contractor. If you've got the manpower to commit to being there to get it started, it will be just fine. There is no real big objection to Silk Test. We just needed some other help with the designs.

I would rate Silk Test a seven out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Director Test Services at a tech services company with 201-500 employees
Real User
Top 20
Great REST API features, good technical support, and makes it easy to onboard new testers
Pros and Cons
  • "The REST API features allowed integrated testing for select products to quickly make calls and test the UIs with API calls while the CLI allows us to matrix the grid function across browsers."
  • "The accessibility reporting features could be more robust to be reported at the script level and allow users to map down to the step level."

What is our primary use case?

We use it for web application testing of about nine different products, and some applications have multiple API calls. We have successfully used the API validations to enter or set values using the API and confirmed those results in the UI and exports. We use test data within the JavaScript editor, and we use the Command Line Interface (CLI) and scheduler to vary the combination of users for some of our test suites. We have recently added accessibility testing of those customer-facing web applications as part of our releases.

How has it helped my organization?

We are able to quickly train and onboard new testers by getting them certified on Testim's certification training, which enables our Testers to understand how Testim works. That training makes it easier for them to work on and troubleshoot our existing regression suites for our web applications. This enables them to learn and start to envision tests for new features for existing applications. 

The ability to add shared steps across scripts and edit them improved our ability to create and edit scripts.

What is most valuable?

The new accessibility features allow us to set standards across products and use existing scripts for testing. 

The REST API features allowed integrated testing for select products to quickly make calls and test the UIs with API calls while the CLI allows us to matrix the grid function across browsers.  

We also use the scheduler to trigger runs when environments are expected to be available so we do not need to manually trigger regressions. Shared steps allow testers to leverage repeated steps across tests.

What needs improvement?

The accessibility reporting features could be more robust to be reported at the script level and allow users to map down to the step level.  

Some lists have values that are returned in different orders and once captured within a validation, the sort order can change. Is there a way for Testim to ignore the sort order and validate the list?  

Sharing steps across projects would be helpful for teams as some products are similar in features and configurations, so sharing steps upfront instead of recording them would benefit our teams. 
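On the sort-order point above: since Testim exposes a JavaScript editor for custom steps, one workaround today is a custom validation that compares the lists after sorting them. The sketch below is only illustrative under that assumption; the function name and the `expected`/`actual` inputs are hypothetical, not part of Testim's API:

```javascript
// Hedged sketch of an order-insensitive list validation for a custom
// JavaScript step. Assumes both lists arrive as arrays of strings.
function listsMatchIgnoringOrder(expected, actual) {
  if (expected.length !== actual.length) {
    return false;
  }
  // Sort copies so the original arrays are left untouched.
  const sortedExpected = [...expected].sort();
  const sortedActual = [...actual].sort();
  return sortedExpected.every((value, i) => value === sortedActual[i]);
}

// Same values in a different order still validate.
console.log(listsMatchIgnoringOrder(['a', 'b', 'c'], ['c', 'a', 'b'])); // true
console.log(listsMatchIgnoringOrder(['a', 'b'], ['a', 'x']));           // false
```

Comparing length first also catches the case where one list contains duplicates that the other lacks.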

For how long have I used the solution?

I've been using the solution for 3 years.

What do I think about the stability of the solution?

Testim reports metrics to us showing about 600 code changes within our 4,000 tests, and those changes should not affect our test results.

What do I think about the scalability of the solution?

We have been able to scale new scripts each month with no issues and have been running 200 tests monthly for over two years.

How are customer service and technical support?

The Testim technical support is very responsive and worked with a specific team member on the First Databank side to answer hard technical questions and took enhancement requests as needed. 

We were also included in their TDK beta program to implement beta applications before other customers if we chose to do so.

Which solution did I use previously and why did I switch?

Rational Robot was used many years ago and we dropped it as the pixel compares were awful and the pricing was bad.

How was the initial setup?

The initial setup was straightforward and we were able to start recording scripts on Day 1. Once we were given access to the software, we recorded a sample script and had the team working on sample scripts for demo purposes the same day in order to compare software packages.

What about the implementation team?

We implemented directly with the Testim Implementation Team and they were able to answer questions and worked with us each month to keep us on track for projects.

What was our ROI?

The solution offers strong name recognition and code-free automation. 

Updates are easy and forgiving on slow UI responses. 

You can generate videos for defect reporting. 

There are Baseline/Result screenshot comparisons.

What's my experience with pricing, setup cost, and licensing?

I'd advise users to take advantage of the Implementation meetings, monthly discussions, and pro licensing.

Which other solutions did I evaluate?

Yes, we looked at SmartBear's TestComplete.

What other advice do I have?

Be advised that Chrome is the primary browser for Testim; however, it works fine.

Which deployment model are you using for this solution?

Public Cloud
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Shweta Mukkawar
Technical Lead at a tech services company with 1,001-5,000 employees
Real User
Top 20
Good reporting, easy-to-use interface, and the APIs are useful
Pros and Cons
  • "The reporting is really nice."
  • "The UI does not have the option of automating the scroll bars."

What is our primary use case?

We have been using this solution in my organization. Several clients have come back to us asking about different automation tools and our views on which automation tools can be used in their respective projects.

We have been evaluating the tool for these clients. The evaluations were done for our clients but performed within the organization only.

What is most valuable?

The most valuable feature is the UI.

We work primarily on evaluating the UI. We also evaluated Tosca BI and, to a lesser extent, the Tosca APIs, and think they are very useful.

The reporting is really nice. Many clients ask for an automation tool that can generate good reports for them, and Tosca is the one tool that facilitates that option.

What needs improvement?

The scope of BI testing is limited. If they could provide a few more options for database testing using Tosca alone, that would be a great thing, because there are a lot of clients who do not actually want to go for BI but do have database testing needs.

I know that Tosca provides the feature, but it is very minuscule.

The UI does not have the option of automating scroll bars. There are workarounds for that, but, for example, if I open two tabs that show the same page, it creates another difficulty in scanning those options.

It would be great to see this included in the next Tricentis release.

For how long have I used the solution?

I have been acquainted with this solution for approximately eight months.

What do I think about the stability of the solution?

So far, what we have used has been stable.

I have read some reviews where they have expressed that they are not happy with the stability, but so far, I have not faced any such issues.

What do I think about the scalability of the solution?

The scalability is good. If I had to rate it out of five, I would say 3.5 to 4.

We have anywhere from 500 to 750 people who are using this solution in our organization.

How are customer service and technical support?

Technical support is good. Their turnaround time is usually within 24 to 48 hours, but normally we have a response within 24 hours.

Which solution did I use previously and why did I switch?

Previously, we did not work with any other solution.

How was the initial setup?

We have always worked with the demo licenses, which included support from the Tricentis teams. They have always been able to set up the licensing.

I would say that it is not that straightforward.

On average, it did not take more than two to three hours if you know the process properly. Otherwise, getting it started will require at least six hours.

What's my experience with pricing, setup cost, and licensing?

The disadvantage is that it is very expensive.

I would like to see better pricing packages. There are several features, but USD 11,000 for one license is expensive. If there were more interaction and the license cost were a little lower, it would be better and I would rate it higher.

What other advice do I have?

Whether I recommend this solution depends on the client: their requirements, their parameters, and what is important to them.

If the client wants good support, wants a good database included with the Automation Testing Suite, and is ready to spend the money, then we would definitely suggest Tricentis Tosca as a good option.

Again, it is dependent on the client's requirements and what they would want in an automation tool.

If you have Linux or Mac machines, then it gets very difficult to implement Tosca, so I would suggest caution there.

For testing, some clients want to migrate miscellaneous scripts and use Tosca for those migrations, but it is very difficult.

My suggestion would be to go with the Tricentis Suites and the Selenium Automation Suites.

From a positive perspective, I would want people to use the reporting from Tosca. They have very good reporting; the feature is user-friendly and easy to use.

I would rate Tricentis Tosca an eight out of ten.

Disclosure: My company has a business relationship with this vendor other than being a customer: partner
Portfolio Manager at a tech services company with 10,001+ employees
Real User
Useful multi-technology platform, scalable, but usability could improve
Pros and Cons
  • "The most valuable feature of Katalon Studio is that everything can be managed from one platform."
  • "Katalon Studio should improve its usability; it still needs some improvement so that users can easily use it to build their automation suite. It requires some initial work to set it up. There should be more keywords in the library to limit the coding requirements; this would allow a non-technical person to easily start using it, which would be better."

What is our primary use case?

Katalon Studio is used to support multiple technology platforms, such as BGTs and web clients.

What is most valuable?

The most valuable feature of Katalon Studio is that everything can be managed from one platform.

What needs improvement?

Katalon Studio should improve its usability; it still needs some improvement so that users can easily use it to build their automation suite. It requires some initial work to set it up. There should be more keywords in the library to limit the coding requirements; this would allow a non-technical person to easily start using it, which would be better.

There are a couple of areas, such as test data and service virtualization, that should be integrated into this one; then it would be a complete solution for automation testing.

For how long have I used the solution?

I have been using Katalon Studio for approximately one year.

What do I think about the stability of the solution?

When I used Katalon Studio initially, there were some stability issues. However, I did a POC and it is very stable. It has been stable for the past four to five years.

What do I think about the scalability of the solution?

The scalability of Katalon Studio is good.

I'm part of a centralized team where I have approximately nine different tools. We use the solution wherever the requirements are in a project; we do not use it on a day-to-day basis.

How are customer service and support?

The support we have received has been good.

Which solution did I use previously and why did I switch?

I used multiple different solutions previously, such as TestComplete, Selenium, and Micro Focus UFT. Katalon Studio is similar to Micro Focus UFT in supporting multiple platforms.

How was the initial setup?

The full deployment of Katalon Studio can be heavy. If we have to install the complete Android Studio, it takes a lot of space on the system. Otherwise, the setup is not that complex. They have improved over the years, and it is much easier now, but space is one constraint it has. Once you install Android Studio completely, it can impact the performance of your machine.

What about the implementation team?

I did the implementation of Katalon Studio myself.

What was our ROI?

I have seen a return on investment from using Katalon Studio. After approximately 10 cycles we can see a good return. We have seen a 30 to 40 percent cost savings.

What's my experience with pricing, setup cost, and licensing?

The cost of Katalon Studio is expensive but it is less than some of the other solutions, such as Micro Focus UFT or SmartBear TestComplete. The cost is the main reason we are using Katalon Studio.

What other advice do I have?

I advise others to try this solution if they're looking at a different technology stack of applications; this is one solution they can use instead of using multiple solutions.

Katalon Studio is an integrated platform for multiple technologies in one place; you will receive many features, such as web application and mobile testing.

I rate Katalon Studio a seven out of ten.

Disclosure: My company has a business relationship with this vendor other than being a customer: Partner