reviewer1215417 - PeerSpot reviewer
Senior Director of Quality Engineering at a tech vendor with 1,001-5,000 employees
Real User
Nov 24, 2019
Gives us more efficiencies and overall improvement in transparency and visibility of the testing progress
Pros and Cons
  • "The main thing that really stuck out when we started using this tool is the linkability of qTest to JIRA, and the traceability of tying JIRA requirements and defects directly to qTest. So when you're executing test cases, if you go to fail it, it automatically links and opens up a JIRA window. You're able to actually write up a ticket and it automatically ties it to the test case itself."
  • "The Insights reporting engine has a good test-metrics tracking dashboard. The overall intent is good... But the execution is a little bit limited... the results are not consistent. The basic premise and functionality work fine... It is a little clunky with some of the advanced metrics. Some of the colorings are a little unique."

What is our primary use case?

The primary use case is the overall testing process and management of our test cases, as far as the design, creation, review, and archiving of them goes. We use it to manage their overall status.

We are cloud users, so we've got the latest and greatest version. They transparently push updates to us.

How has it helped my organization?

The solution’s reporting enables test team members to research errors from the run results. We do have some metrics and dashboards set up which allow the testers themselves to get good visibility into where things are at, and which allow others to see "pass," "failed," and "blocked."

qTest has been very useful for us. It's helped in productivity. It's helped in automating a lot due to the seamless integration with JIRA. It has taken us to the next level, in a very positive way, in the management of our overall test cases. It has been outstanding.

In comparison to managing test cases in spreadsheets or other tools we've used in the past, qTest is saving us a couple of hours a day.

Investing in Insights, to have one location for a dashboard of all reports and metrics, has allowed us to minimize the number of reports or URLs which other stakeholders have had to go to in order to get status on the testing. There has definitely been an improvement there.

Use of the solution also provides our team with clear demarcations for which steps live in JIRA and which steps live in qTest. Test cases and tickets are assigned to test plans, etc. through the tools within qTest and they are all linked back.

What is most valuable?

The main thing that really stuck out when we started using this tool is the linkability of qTest to JIRA, and the traceability of tying JIRA requirements and defects directly to qTest. So when you're executing test cases, if you go to fail it, it automatically links and opens up a JIRA window. You're able to actually write up a ticket and it automatically ties it to the test case itself.

It has seamless integration with other key defect-tracking or ticket-tracking tools, with overall good API integrations.

What needs improvement?

The Insights reporting engine has a good test-metrics tracking dashboard. The overall intent is good, compared to other test tracking or test management tools. But the execution is a little bit limited. The overall solution is good, but the results are not consistent. The basic premise and functionality work fine. When you try to extend the use of it a little bit more, it struggles. It is a little clunky with some of the advanced metrics. Some of the colorings are a little unique. They are currently working on a new flavor for Insights.

We do have dashboards and links set up that our executive level accesses. Overall, the numbers are accurate, based on what we're putting into it. But where we lose integrity, or the overall perception of things, is when the colors start changing or when red is used to mean good. That's when executives lose respect for it. We've used it as a dashboard during key deployments. And then, as progress is being made and the reports are being updated, colors start to change and that distracts from the overall intent of the progress reporting.

We chose to leverage Insights so that we didn't have to manually create charts in a Google Sheet or Excel, since we don't have the resources, time, or bandwidth to do that. That is what excited us about Insights. But then, it just didn't meet our expectations.

We have voiced our concerns to Tricentis and they definitely have empathy. We talk about it and they keep us updated. Through an acquisition, they're going to leverage their analytics tool. We are excited about that, once it launches.

We have also discussed with our account manager a couple of possible enhancements here and there, but nothing critical or major. One example is when you're trying to link test cases to requirements; a lot of the time there is duplication between the two, and sometimes you want to tie some of the same test cases to the same requirements. An enhancement would be a quick way to copy those links over directly, without having to manually link every single one again. We have instances where a large chunk of test cases are tied, re-used, and similar. When you get upwards of 15 or 20, being able to copy the links from one and switch them over to another would limit some of the tediousness of doing them all manually. It's not a major concern; it would just be nice as a quick way to do it.

Another example is that with the charts — and again, great intention — you can put in a date range and apply it. Then you get to another screen and come back. After updating several charts, the date range is gone again. You have to go back in, sometimes two or three times, before that date range is saved.

Buyer's Guide
Tricentis qTest
December 2025
Learn what your peers think about Tricentis qTest. Get advice and tips from experienced pros sharing their opinions. Updated: December 2025.
879,310 professionals have used our research since 2012.

For how long have I used the solution?

It's just about a year since we procured licenses. We've been using it for about 11 months.

What do I think about the stability of the solution?

Stability with qTest is not an issue at all. We've had no downtime and no complaints, along those lines, with anything at all. qTest, by all means, is definitely one of the top test management tools out there.

What do I think about the scalability of the solution?

We're not a big shop so for our situation it's fine. We haven't seen any bandwidth issues with running in the cloud. People are accessing this tool across the globe and we've had no complaints or issues.

We don't plan on rolling it out further until we see the analytics portion of it. Our plan is that we will pick back up again at the start of the calendar year, once we see, at the end of this year, what analytics has to offer and once we get that working. Then we'll go back to the drawing board on how we can use it and then we'll roll it out and provide training.

How are customer service and support?

They have been doing okay in terms of the suggestions we make. It depends on the level of severity of what has occurred and what changes are needed. But they're responsive. We get communications from the support team pretty regularly, and our account manager is pretty good about following up on things.

For the most part, first-tier support has to ask some basic questions, but they're pretty good. There is room for improvement on communication response time from first-tier support. What we do is we wind up copying our account manager on tech support requests so she can assist in following up a little bit quicker. Ideally, we shouldn't have to do that, but we have learned to do that and it does make it a lot faster.

Which solution did I use previously and why did I switch?

We worked with a customized plugin within JIRA, not even a basic, off-the-shelf version. It was an in-house created module that was built to integrate. They couldn't afford to buy a plug-in, so they made one. That was why we started looking for a new solution. It was horrible. I would have preferred Excel.

How was the initial setup?

Because we have used tools like this in the past, we knew what we were getting into and we hit the ground running. So the initial setup was pretty straightforward. Compared to vendors we've worked with in the past, they've been extremely responsive, especially on the client success side of things. We've had that type of support and they have made sure that our needs are met. They have set us up with training and the like and that has been a really good experience.

Our deployment of the solution took a couple of months. Our complexity was that the test cases were being managed as tickets within JIRA and not necessarily using a test management plugin. The conversion of the test cases, and ensuring they were being transferred and translated into a single entity of the test case, was quite a big project.

What we were using before was a JIRA plugin. Given the way it was designed, we had to extract everything into Excel and then import it. That part of the tool works phenomenally. It's just that we had well over 20,000 test cases to deal with. We wanted to make sure we organized them into libraries. So it took a bit of time to get everything in place in proper order, to make sure that we didn't just dump everything in there.

We had one person doing the initial deployment. On Tricentis' side, there were two people involved in training us as well as our client support person. At this point, there are just two of us who are managing the tool. We tag team, but being that I am the senior director of the organization, I've tried to become the subject matter expert. I didn't really have anybody to delegate it to. That's why it's been a challenge that Insights is not behaving for us.

We've got 50-some licenses, but we probably see a peak of no more than 15 to 20 concurrent users. We're a medium-size company with about 1,300 employees. Mostly it's quality engineers who are using it. Developers have access to help with test cases. We're trying to get scrum masters in there to use Insights, but with the challenges we've had with it we've backed off that roll-out.

qTest is being used quite extensively. But there are just two of us who mostly use Insights. It's good in its ability to correlate all of the results coming from a double-digit number of scrum teams across the globe. We can see the status of that testing.

For our team, the adoption of the solution has been fantastic. It has been well-received. You couldn't ask for a more straightforward, user-friendly, easy-to-use tool on the qTest side, from a user perspective.

What was our ROI?

We have absolutely seen ROI. Before, we didn't have good visibility and transparency.

Don't get me wrong about Insights. For basic "not run," "pass/fail"-type metrics it is fine. It gives us much more visibility than we had in the past in terms of the ability to collaborate on the design, review, tracking, and archiving of the test cases, and the basic results of some of the sprints.

What's my experience with pricing, setup cost, and licensing?

We're paying a little over $1,000 for a concurrent license. One of the solutions we looked at was about half of that but that one is very much a bare-bones test management tool.

There are no additional costs. We pay a flat yearly rate for each license.

Which other solutions did I evaluate?

We looked into SmartBear and Zephyr, and not that we would purchase Quality Center, but it was used as a benchmark.

The main reason for going with qTest was not only that their test management application is more feature-rich and a good solution compared to others, but the ability to create a dashboard and report on a ton of metrics. We could have saved a lot of money, but I pushed hard for paying a premium to get the Insights dashboard.

What other advice do I have?

The biggest lesson I've learned from using the solution, because of the Insights challenge, is that next time I would do a more formal trial. They are aware there are issues with it, and they are going to work on them.

Absolutely use it for its test management capabilities, without a doubt, but have an alternative solution for your reporting metrics.

Your testing using the tool is not going to change the result of the testing. It's just that the means are more efficient. Our testing scope has been the same and our processes have all been the same. But we're implementing a tool that's a little more organized. We're not really going to become better testers just because we're tracking things a little bit differently. It gives us more efficiencies and an overall improvement in the transparency and visibility of testing progress and its status. qTest has been pretty rock-solid.

Which deployment model are you using for this solution?

Private Cloud
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Assistant Vice President, IT Quality Assurance at an insurance company with 5,001-10,000 employees
Real User
Nov 21, 2019
Very much a QA-centric application; using it is pretty seamless if you're a QA engineer
Pros and Cons
  • "Being able to log into Defects, go right into JIRA, add that defect to the user story, right there at that point, means we connect all of that. That is functionality we haven't had in the past. As a communication hub, it works really well. It's pretty much a closed loop; it's all contained right there. There's no delay. You're getting from the defect to the system to JIRA to the developer."
  • "I would really love to find a way to get the results, into qTest Manager, of Jenkins' executing my Selenium scripts, so that when I look at everything I can look at the whole rather than the parts. Right now, I can only see what happens manually. Automation-wise, we track it in bulk, as opposed to the discrete test cases that are performed. So that connection point would be really interesting for me."

What is our primary use case?

It's our primary tool for managing testing across the Guardian enterprise.

How has it helped my organization?

It's helped us in having a web interface and being intuitive for how testing is done. If you're a tester, it makes a lot of sense. Instead of an application that we try to modify to make use of in QA, this is very much a QA-centric application. How to use this and what it's referring to are pretty seamless if you're a QA engineer. To that end, it has really increased the productivity of my team. In an agile world, being able to create suites of test cases, and copy them from one project to another project, is really important.

The on-demand reporting has also helped. To be able to just look at defect counts, and how much progress was made for the day, or where we stand overall with the project, is really important. All of that has really simplified things for us quite a bit.

On a weekly basis, for reporting it has definitely saved at least 50 percent of our time, if not more.

In terms of it helping to resolve issues when they occur, being able to log into Defects, go right into JIRA, add that defect to the user story, right there at that point, means we connect all of that. That is functionality we haven't had in the past. As a communication hub, it works really well. It's pretty much a closed loop; it's all contained right there. There's no delay. You're getting from the defect to the system to JIRA to the developer.

It very much provides our team with clear demarcations for which steps live in JIRA and which steps live in qTest. That takes away the discussions like, "What are we going to use? How are we going to communicate? Where will the data be?" Any of that preparation time is behind us. It's now the default for how the teams function and what they do. That's a really powerful process. The fact that it's a reusable, repeatable process makes everybody much more comfortable and trusting of the data that they're getting. They can then focus on the issues at hand. qTest really becomes a tool, and the best thing about a tool is not knowing you're using it. To that end, it's doing a really great job.

I can't say that we've seen a decrease in critical defects in releases since we started using qTest, but we have more visibility into our test coverage, block test cases, daily activities, etc. But I can't say that it's done anything to necessarily improve the quality of code.

Overall, it has helped to increase testing efficiency by around 30 percent. A lot of that, again, is due to being able to reuse things and being able to get to the metrics quickly. I can't overemphasize how easy it makes things.

What is most valuable?

We are using qTest Manager and qTest Insights, primarily.

We have a global testing team, so we needed a centralized, easy, web-based interface for accessing all of our testing and being able to manage test cases. We ship over 400 projects a year, so we needed something that was going to scale to that. That's what the Manager piece is for.

The Insights piece is for obtaining where we are in terms of status and finding out about metrics, etc. It provides insight into the status of my testing.

Also, we are fully integrated with JIRA, back and forth. We're an agile shop: JIRA does all of our user stories, etc., and is the main source for defects. qTest Manager is the testing hub. The integration between the two has been great, pretty much seamless. We did run into one defect with volume, but the 9.7.1 release fixed that.

What needs improvement?

They're coming out with a new feature now, an analytics module. I, personally, slide more toward the metrics/analytics side of things, so anything they can do to come up with a reliable template where I can look at all of my metrics at a project-, quarter-, or enterprise-level, would be fantastic. So that's my "nirvana" goal. I don't want to have to go to Tableau. I have a lot of hopes in their analytics module.

And I would really love to find a way to get the results, into qTest Manager, of Jenkins' executing my Selenium scripts, so that when I look at everything I can look at the whole rather than the parts. Right now, I can only see what happens manually. Automation-wise, we track it in bulk, as opposed to the discrete test cases that are performed. So that connection point would be really interesting for me.
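
One way to close that gap, as a sketch: have the Jenkins pipeline push each Selenium result into qTest Manager as an automation test log. The endpoint path, payload field names, site URL, and IDs below are assumptions modeled on qTest Manager's public v3 REST API, not something described in this review; verify them against your qTest version's API documentation before relying on them.

```python
import json
import urllib.request
from datetime import datetime, timezone

# NOTE: the URL, IDs, endpoint path, and field names are hypothetical;
# check your qTest site's API docs for the exact contract.
QTEST_URL = "https://yourcompany.qtestnet.com"  # hypothetical site URL
PROJECT_ID = 123                                # hypothetical project id
TEST_RUN_ID = 456                               # hypothetical test-run id

def build_auto_test_log(test_name: str, passed: bool,
                        started: datetime, ended: datetime) -> dict:
    """Shape one Selenium/JUnit result into an auto-test-log payload."""
    return {
        "name": test_name,
        "status": "PASS" if passed else "FAIL",
        "exe_start_date": started.isoformat(),
        "exe_end_date": ended.isoformat(),
    }

def post_test_log(token: str, payload: dict) -> int:
    """POST one automation result against an existing test run."""
    url = (f"{QTEST_URL}/api/v3/projects/{PROJECT_ID}"
           f"/test-runs/{TEST_RUN_ID}/auto-test-logs")
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    log = build_auto_test_log("LoginPageTest", True, now, now)
    # post_test_log(token, log)  # call with a real bearer token
    print(log["status"])
```

A post-build step in Jenkins could loop over the JUnit XML produced by the Selenium run and call `post_test_log` once per test case, which would make the automated results visible alongside the manual ones in Manager.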

We have between 150 and 200 users who are all QA. Project managers might cycle in sometimes for metrics, but we publish our metrics. You can embed scripts that come out of Insights, which is a really great feature. It's a feature I would really like to see them work on more, to make sure their APIs are bi-directional or timely. It's a little unclear if they refresh at a certain point in time or when I click it. That is one area that is a little murky.

For how long have I used the solution?

We're just about to start our third year using qTest.

What do I think about the stability of the solution?

We hit a wall before the 9.7.1 upgrade — we waited too long to upgrade. We hit a volume where we started seeing that JIRA and qTest were out of sync a little bit. It seemed to be a timing thing. But once we upgraded, that all went away. 

In terms of stability, I don't think it's ever crashed. Sometimes the Insights module is slow to load up but I think that is a timeout issue.

What do I think about the scalability of the solution?

I have a better feeling about scalability with 9.7.1 than I did prior to that. We should be okay. There will become a time, though, where we're going to have to consider archiving data, and how we want to do that. That would be a great feature for them to have over time, to be able to go back and archive.

We continue to bring a lot of projects into the department, so the volume of projects that qTest will manage for us continues to increase. We're starting to integrate it a lot with other products. An example would be SmartBear — we do a lot of API testing there. Anything that Tricentis would build, API-wise, along those lines would be really helpful.

We use NeoLoad for all our performance testing and that integrates with AppDynamics, so I don't know that we would need to integrate them, but it would be nice if it were an option. We definitely continue to use JIRA. We'll continue to expand on that platform. There's a lot of potential.

How are customer service and technical support?

I speak very highly of the company, especially the QASymphony folks who were merged into Tricentis. There was some merger pain in terms of availability. We found that our calls were cycling. But they recognized that pretty quickly and definitely helped us get on the right path. 

When we were doing the upgrade, we were able to get slots scheduled fairly easily. 

So tech support is as I expect it to be, at this point. 

I have names of people whom I can call. That's always nice. It's not just "1-800-qTest." As a vendor they're attentive. They've been up here a few times and we definitely have a view into their roadmap. I find that as much as you're willing to give, you'll get.

Which solution did I use previously and why did I switch?

We were using the HP suite. We switched because of price point and ease of use. We went into agile quickly, as an enterprise, and HP wasn't at an agile point at that time. We needed to make a switch.

qTest is much more intuitive and straightforward. There's not a lot of complexity to it. HP opened up the world so there were far too many features than we needed. That became a burden over time. HP's integration with JIRA was difficult. It was a thick client and it was very difficult to use the web interface and have good response times. I could go on and on, but you get the gist of it.

How was the initial setup?

We did a prototype two years ago and demo'ed it. It definitely played strong. The price point was right and then we started road-mapping it in 2018. We started implementing in October of 2018. We have a lot to do here so it took us until June of 2019 to get us all to steady-state. But it went without hitches, and that's probably due to a combination of how much planning we put into it and its ease of use.

You need to plan it. You need to know what your JIRA templates look like. You need to know what your JIRA workflow is, and then you need to understand what you want qTest Manager to look like. If you're integrating with JIRA, that will be the defining piece in how all of that structure will look. Once you understand that — and fortunately, I have control over both in my department, so we are really intimate with what our JIRA template looks like — it really maximizes how it integrates and the efficiency of how to get to where we wanted to go. It sounds like it took a long time, but it was really a lot of planning time and then we did the cutover. We also had a lot of training that we put into it. Having done this before, qTest was, by far, one of the easiest ones I've done.

We do internal audits on our own. We look back quarterly and say, "Are we meeting our own processes? Do we have reliable, reputable standards with our projects and the metrics, the way we count things? Are we consistent?" I do think you have to measure yourself, in addition to measuring your projects. That's really helped us significantly.

As for adoption of the tool, it's been really easy. It has simplified a lot of things. Things are right there. You can quickly drill through and it's pretty intuitive to pick up. There's not a lot of complexity around it. There are not a lot of unnecessary fields. The training on it and the adoption of it have been a lot easier than with HP.

What about the implementation team?

The deployment was all my department. We have a third-party vendor, Cognizant, that we work with. We have an 80/20 split: 80 percent of the department is Cognizant, 20 percent is Guardian. This touched everybody in the department, and we're somewhere between 150 and 200 people. But we had a core team of about a dozen people who mapped and planned it all out, and then they touched the rest of the department as their projects migrated over.

I have two to two-and-a-half people maintaining it.

What was our ROI?

I definitely see ROI in that I have testers who are focused more on doing really complex testing, rather than writing test cases. The reusable regression suite is always a good thing; to be able to copy and move tough cases from one project to another. I don't want my testers to be rewriting things.

What's my experience with pricing, setup cost, and licensing?

I have not looked at it recently, but our license price point is somewhere between $1,000 and $2,000 a year. It's pretty low when you think about what we used to have. We haven't had any additional costs from Tricentis. We do the hosting on Amazon, so that's our cost.

Which other solutions did I evaluate?

We actually did a bake-off between Tricentis and QASymphony. And then we got the best of both worlds when Tricentis acquired QASymphony.

We looked at Zephyr and Xray but they were really too small-scale for the enterprise that we have. They probably would have saved us a lot of money, but our efficiency would have really fallen off.

What other advice do I have?

What I've learned from using the solution is "don't be afraid of change." HP was the blockbuster of our industry. There are a lot of great options out there. Do your due diligence and be brave.

Also, have a plan. It's not something that you want to go into and figure out as you're going. You need to really sit down and consider where you are, where you want to go, and what variables are going to help you figure out how to implement this. It's just like any other software package. You need to have a plan. You need to have a training plan. You need to make sure your team understands the opportunity and what they're going to get out of it. It can be scary, so you have to manage change as much as you have to manage implementation.

In terms of using qTest to look into failures, we haven't really enabled that part of it, yet. We use Selenium to do all of our automation, and that's a little different than using the Tricentis application. We're a Java shop and .NET shop, so we wanted to go with an open-source tool so we could hire Java developers for automation. We also use Selenium for open-source test automation. We know there are some exploratory options in qTest, and we will start setting the roadmap for 2020 in that direction. We definitely want to expand what we're using within the product, now.

Getting our upgrade to 9.7.1 was really significant for us. This past year has been a migration year. We got to steady-state around June. I wanted to get everyone to steady-state, spend some time with it, get our upgrade behind us, and then start to expand out to use some other pieces of functionality in 2020.

We use qTest to provide results to executives, but it's usually sanitized through my team a little bit, just because we're still getting used to the cycles and execution. We can do multiple runs and we have to get down to what the actual results are, not the overall multiple runs' results. I use qTest for that and that information gets cleaned up before it goes out to executives. The information it provides them is accurate.

qTest is an eight out of ten at this point. For me, it's been the metrics. Every company counts things differently. To understand their reports, out-of-the-box, and to align the solution to where I want it to be — do I massage it to maintain the metric that I have or do I have to wait for a breaking point or do I redefine my calculation — is where those two points go. That has taken a little more than I would've expected. All the data is there. It's just a matter of how you're layering it out for your company.

Knowing what our calcs are and knowing what the qTest calcs are, and where they diverge, would've been really great. We were a little more naive than we had planned for.

I hope Tricentis keeps it alive and well. It's a great little product that is going to quickly grow to be something that gets out there with the big boys.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Testing Lead Manager at an energy/utilities company with 501-1,000 employees
Real User
Nov 21, 2019
Helps us resolve issues faster because everyone is working off of the same information in one location
Pros and Cons
  • "qTest helps us compile issues and have one place to look for them. We're not chasing down emails and other sources. So in the grand scheme of things, it does help to resolve issues faster because everyone is working off of the same information in one location."
  • "I really can't stand the Defects module. It's not easy to use. ALM's... Defects Module is really robust. You can actually walk through each defect by just clicking an arrow... But with the qTest Defects module you can't do that. You have to run a query. You're pretty much just querying a database. It's not really a module, or at least a robust module. Everything is very manual."

What is our primary use case?

We are using qTest to store and house test scripts and test design. We're using the test execution to execute and we're using the Defects module to track defects.

How has it helped my organization?

This is an SAP implementation and until we brought in qTest the team had no tool. They were doing everything manually in Excel, all their tests and execution. I came on board to help lead. We've done multiple test cycles. We're in UAT right now. They did one integration test cycle without the tool and we've done two with the tool. It's helped with productivity when you compare it to doing it manually in Excel.

The solution's reporting has enabled test team members to research errors from the run results, for the most part. In terms of its effect on their productivity, they went from a completely manual Excel testing solution to this tool. We're able to execute 550 test scripts within a six-week period. Some of these are simple tests but some of them are robust integration scenarios.

Our company had an assessment done by an outside organization before the tool was in use. When they did their assessment they said they were not seeing good metrics. When they did the second assessment, they asked, "What's changed since our last assessment? We're seeing numbers, we're seeing reports, we're seeing data. Everything looks stable and not all over the place." They definitely noticed the change, and that's an outside organization.

qTest helps us compile issues and have one place to look for them. We're not chasing down emails and other sources. So in the grand scheme of things, it does help to resolve issues faster because everyone's working off of the same information in one location.

Overall, the solution has increased testing efficiency by about 60 percent, compared to what they were doing before.

What is most valuable?

We get the most benefit from the

  • test library
  • test execution.

There is also a high level of reporting provided in the tool and you can extract reporting out of the tool. That has been very beneficial.

Ease of use is another helpful aspect. Right now, during the UAT, we have a lot of business users using the tool. They test the UAT scripts. The team quickly adapted to the tool. I was able to roll it out and I was actually commended by several internal people, as well as external consultants who were doing assessments, as to how things had changed. They were shocked that I was able to implement the tool so quickly. I had never used the tool before either. I received a trial copy of the tool and was able to quickly implement it and have it up and running for the team.

What needs improvement?

The Insights reporting engine is a little challenging to use in terms of setting up a report and getting the data. It took me a while to understand how to use the tool. 

I'm mainly extracting the data out of the tool. I'm not necessarily using any of the dashboards in the tool. There are some fields that I did not make site-specific because I had to get things up and running quickly. Those fields are in both the Test Run area and Defects. If a field is project-specific rather than site-specific, you can't get it out of Insights. That's a limitation they need to figure out. They shouldn't have that limitation in the tool.

In addition, I really can't stand the Defects module. It's not easy to use. Micro Focus ALM used to be called QC. That solution's Defects module is really robust. For example, let's say you have a defect and you have a query. You can actually walk through each defect by just clicking an arrow. You go through that defect, add your updates, click the "next" arrow, and walk down through them. But with the qTest Defects module you can't do that. You have to run a query. You're pretty much just querying a database. It's not really a module, or at least not a robust one. Everything is very manual. By contrast, qTest's test design and test execution modules are very robust. They just missed the boat on the Defects module. From what I've heard and from what I can understand, other people are using JIRA or something else to do their defect tracking, and we're not. I needed a tool to do everything. That's their weakest link.

For how long have I used the solution?

We have been using the product for a few months.

What do I think about the stability of the solution?

Overall, qTest is stable. Sometimes we see some performance slowdowns, a hiccup or glitch-type of pause. But for the most part, it has been operating. We haven't felt any pain yet.

What do I think about the scalability of the solution?

Right now, it's handling everything we're throwing at it. Since we're in UAT, this will be the highest number of people in the tool and probably the most activity in the tool, and it's been supporting things without any issues.

We went from 30 licenses to 60 licenses during this four-month period of time. I don't think that number will be increased. Once this project is over, the number of consultants will be reduced and the number of people involved will be reduced.

How are customer service and technical support?

Technical support has been fine, acceptable. Their responses have come in an appropriate amount of time, for the most part.

There are just those two limitations that I've uncovered, as compared to other tools that I've used. So a lot of my interactions are like, "Hey, I want to do this," and they say, "Oh, you can't do that," or "the tool doesn't support that." That's the thing I have run into the most. It's not a support issue, it's just a tool issue. Functionality.

Which solution did I use previously and why did I switch?

Cost and time were the main reasons I went with qTest. If I were to have my choice, I probably would have implemented the Micro Focus product because I am familiar with it and know it can do everything I wanted to do. But that would likely have been overkill; way more than this project needed, and it was much more costly. 

I was looking at another tool, the SmartBear QAComplete tool that I had used on a previous project. I didn't necessarily like that tool, but its cost was less than either qTest or HP QC/ALM. But once I got my hands on qTest, I definitely liked it better than the QAComplete product.

The ease of use and the interface helped push me toward qTest. I had also called a friend and he said, "You have to look at QASymphony or Tricentis. This qTest is good." I said, "Are you sure?" He said, "Yes, it's good. Trust me." That helped push me over the top.

How was the initial setup?

The initial setup was challenging. There are certain areas where it's very strict in how you have to set up your project. There are some strict guidelines that you have to follow. You have to have a release and a test plan. You can't do certain things within the test design module or test execution module. There are only certain ways that you can set up a folder structure, whether it's related to a cycle or a test suite or a module. I would prefer fewer restrictions. The restrictions are what made it complicated.

Deployment itself was done over a weekend. This is not on-prem, it's in the cloud. I set up the structure and then had to understand how to load the test scripts. It was very fast.

Our implementation strategy was to get it done as soon as possible. It was very off-the-cuff. There was no time to plan. I landed here right before this test cycle was supposed to start and I knew that if we left it as a manual execution we would fail miserably. For me, the plan was to identify, learn, and implement a tool, all within less than a week. It took me two weeks, including training myself. There was no plan other than "we need a tool."

What about the implementation team?

I used Justin, I used one of the support people, and I had one meeting with one of their people. I had no more than four hours of support and a couple of emails. 

Overall, my experience with their support during deployment was good. I was asking some questions and needed to take an approach that either they didn't agree to or didn't understand why I was doing it that way. The one person I remember talking to was so tied up in the Agile methodology that she couldn't see outside the Agile box, and that's what I needed. We weren't coming from an Agile methodology.

What was our ROI?

Over the four months there has been ROI. If I crunched the numbers I would probably find it has paid for itself already.

What's my experience with pricing, setup cost, and licensing?

The price I was quoted is just under $60,000 for 30 licenses, annually, and that's with a 26.5 percent discount.
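As a rough, illustrative check on the quoted deal (this is just arithmetic on the figures above, not vendor pricing), the per-seat cost and the implied undiscounted list price work out as follows:

```python
# Illustrative arithmetic only, based on the quote described above.
quoted_annual = 60_000        # just under $60,000/year, per the quote
seats = 30
discount = 0.265              # 26.5 percent discount

per_seat = quoted_annual / seats                  # annual cost per license
implied_list = quoted_annual / (1 - discount)     # price before the discount

print(f"per seat: ${per_seat:,.0f}/year")         # per seat: $2,000/year
print(f"implied list: ${implied_list:,.2f}/year") # implied list: $81,632.65/year
```

In other words, the discount brings the effective price to about $2,000 per concurrent license per year.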

Which other solutions did I evaluate?

I've used QAComplete from SmartBear. I've used HP QC, or ALM, from Micro Focus. I also used an old IBM Rational test manager, which I think was called SQA.

I think qTest was really built to support Agile, where the other tools were built to support traditional Waterfall but were easily adaptable to Agile. qTest is probably going to struggle a bit before they can truly support non-Agile implementations.

What other advice do I have?

The biggest lesson I have learned from using qTest is that every tool has limitations and you need to be able to adapt and overcome and not be stuck with one way of doing things. You have to find out where the tool shines and where it doesn't and make sure that the team feels the least amount of pain when you do implement it.

This solution has been implemented for one particular project. We have 60 concurrent licenses available and we have about 120 users who have been given access. Their roles in the project are either business analysts or quality testers. But these people also have their roles within the business. Some are managers within finance, some are directors, some are AP specialists, some are AR specialists. The project is a financial system implementation so we have a sampling of users from all departments executing scripts.

Since implementing the tool, we've seen a decrease in critical defects but I don't know if I can attribute it to the tool. I don't know if that's possible. It might be a stretch. But we definitely have seen a drop in critical defects. Over the last four months we have seen a 40 to 60 percent drop.

For deployment and maintenance of the solution, it's just me. I'm the one who picked the tool, I'm the one who implemented the tool, I'm the main administrator of the tool, and I am leading all of the testing efforts.

Setting up the users is pretty simple. I would recommend it. If you're looking for something quick, easy to use, and robust, it's definitely a very good tool. If I could get them to upgrade the Defects module, I would be very happy.

I do love it. I'm giving it a nine out of ten just because I don't think any tool out there is a ten, but Tricentis is close.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Sr. Manager Quality Assurance at a tech vendor with 1,001-5,000 employees
Real User
Nov 21, 2019
Provides a central point of reference for tracking bugs and failures, who owns the issue and its status
Pros and Cons
  • "The test automation tracking is valuable because our automated testing systems are distributed and they did not necessarily have a single point where they would come together and be reported. Having all of them report back to qTest, and having one central place where all of my test executions are tracked and reported on, is incredibly valuable because it saves time."
  • "I wouldn't say a lot of good things about Insights, but that's primarily because, with so many test cases, it is incredibly slow for us. We generally don't use it because of that."

What is our primary use case?

I use it for test case management. I manage testers and I use qTest in order to schedule and track test case execution within my testing group.

We're on the cloud version.

How has it helped my organization?

The solution’s reporting enables test team members to research errors from the run results. That has definitely sped up productivity because it allows multiple engineers to be aware of the failures, all at once and in one place. There's no duplication of effort because everybody knows what's going on and who's working on it, through qTest, as opposed to people seeing an email that something's wrong. In the latter scenario they might all run off to try to fix it and then you're duplicating effort through a lot of people working on it and not communicating with each other. Having qTest as the central point when there's a failure means we can easily track if a bug has been created on the issue, who owns it, who created it, and what its status is. All of those are linked right in qTest so you can automatically see if this failure is being tracked and who is tracking it.

Previously we were using a product called Zephyr. It did not have history based on the test cases, at least not the history the way I wanted to track it. It didn't show all the defects that were generated by a test case and it didn't track and display those defects' statuses within JIRA. qTest, specifically with its link to JIRA — so that my test cases are continuously linked either back to the requirement that generated them or to any defects that were created because of them — is what allows me to be much more efficient, because I'm no longer running between multiple systems. I'm not saying to my testers, "Hey, who's working on this? What was the problem with that? Why didn't we run this?" All of that information is located right there in the solution. 

My personal efficiency has been increased because I have a single point of truth within qTest, to always be able to see what the status of my tests is. My team's efficiency has been increased, again, because of the lack of duplication of their efforts. They always know what's assigned to them and what they own and what its status is. And they don't have to manually connect test cases from one system to the next, because they're automatically linked and the information is automatically shared. There are a lot of efficiencies built into that link between qTest and my ticketing systems, as well as, of course, by using qTest in my automation systems. Those links are really what has turned things up.

qTest has probably doubled our efficiency. There has been a 100 percent improvement in the time the testers and I spend on managing our test cases.

We have also used the product for our execution of open-source test automation frameworks. In our case specifically, that would be Cypress and pytest. I wouldn't say that ability has affected productivity. I don't think it has a multiplying effect when it comes to doing automation faster. Its multiplier comes after you've created the automation. At that point, executing it and getting the results are a lot faster. We still execute test case automation the same way we always did. We put a JSON file into Jenkins and Jenkins executes the test cases. But now, instead of just executing them and being done with it, it executes them and reports the results back to qTest. It's the same process, just with an extra step. Because of that reporting, we have a central point of truth. We don't have to look at Jenkins and try to figure out what happened, because it's not a very good interface to get an overall view of the health of a system. That's what qTest is.
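The "extra step" described above — Jenkins executing the suite and then reporting the results back to qTest — can be sketched roughly as follows. Note this is a hypothetical illustration, not the documented qTest API: the endpoint path, project ID, and field names (`test_run_id`, `status`, `exe_start_date`, and so on) are assumptions for the sketch, and the real contract should be taken from the qTest API reference for your version.

```python
# Hypothetical sketch: after the CI job runs the tests, push each result to
# qTest so it becomes the central point of truth. Field names and the endpoint
# path are illustrative assumptions, not the documented qTest API.
from datetime import datetime, timezone

def build_test_log(run_id, status, note=""):
    """Build one test-log record for a qTest test run (field names assumed)."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "test_run_id": run_id,
        "status": {"name": "PASSED" if status == "passed" else "FAILED"},
        "exe_start_date": now,
        "exe_end_date": now,
        "note": note,
    }

def report_results(results, post=None):
    """results: iterable of (run_id, status) pairs.
    post: an HTTP callable such as requests.Session().post; when it is None
    the payloads are simply returned, which is useful for a dry run."""
    logs = [build_test_log(run_id, status) for run_id, status in results]
    if post is not None:
        for log in logs:
            # Project ID and path are placeholders for this sketch.
            post(f"/api/v3/projects/1/test-runs/{log['test_run_id']}/test-logs",
                 json=log)
    return logs
```

Calling `report_results([(101, "passed"), (102, "failed")])` with no `post` argument returns the two payloads for inspection, which is a convenient way to verify the mapping before wiring it into the Jenkins job.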

In addition, the solution provides our team with clear demarcations for which steps live in JIRA and which steps live in qTest. Flowing, say, requirements within JIRA into test cases within qTest, there is a distinct difference between those two systems. Being able to build off of the requirements that are automatically imported allows my people to generate test cases faster and in a more organized manner, because they're based on information that's being given to them by project management via the requirements. It makes it clearer where each step lives within the process, and that is an efficiency-increaser.

Finally, since we started using qTest we have seen a decrease in critical defects in releases, although not a lot. We didn't really take on qTest to reduce the number of defects. We took on qTest to be better organized and efficient in our quality assurance processes. I had no expectation that qTest was going to decrease the number of defects we had. It was definitely going to increase the efficiency and the speed at which we were able to do our testing. That does then decrease the number of defects and issues that we run into on a regular basis. Over the first year there was probably a 50 percent decrease and over the second year we've seen about ten to 20 percent. It's not significant but, again, it was never expected to be a significant decrease.

What is most valuable?

Among the most valuable features are 

  • test automation tracking
  • JIRA linking
  • defect tracking
  • reporting.

The test automation tracking is valuable because our automated testing systems are distributed and they did not necessarily have a single point where they would come together and be reported. Having all of them report back to qTest, and having one central place where all of my test executions are tracked and reported on, is incredibly valuable because it saves time. It allows me to just look at and use one place, and one reporting solution, to track all my executions and the defects that are generated from those.

The integration with JIRA allows us to have an integration between both our automation testing systems, such as Jenkins, through qTest, and into JIRA. It allows all that data to be transferred and distributed among all the different stakeholders within the organizations. That way I don't even have to do reporting. They can just look in JIRA and see what the testing results were. It's very simple for me. It makes my life a little easier so I don't have to generate so many reports.

What needs improvement?

I wouldn't say a lot of good things about Insights, but that's primarily because, with so many test cases, it is incredibly slow for us. We generally don't use it because of that. It has good features, but as soon as we started using qTest at scale, Insights became unusable. I do know that they're planning on replacing it next month. It's the one bad side of the application and they're replacing it, so at least they're listening to their customers. They know when they've got a problem, and that's a good thing.

In addition, within Insights, the report creation could be more versatile and intuitive. Generally, the reporting tools could be made more streamlined and easier to access by people outside of the organization. If I have one complaint about qTest, it's its reporting. Again, that is something that's being replaced here soon, so it'll be an invalid point within a month.

It has already been fixed in the on-premises version. The hosted version has yet to have the replacement. I don't know what the replacement's going to be like. I haven't used it so I can't really judge it.

For how long have I used the solution?

I've been using qTest for over two years.

What do I think about the stability of the solution?

There are some optimizations that could be applied. There is a bit of lag when you're getting up into the hundreds of thousands and even millions of records, but that is to be expected. 

Stability-wise it has always been available. I actually can't think of a time when it wasn't available when we needed it. The stability itself has been 100 percent. The optimization is an area for improvement.

What do I think about the scalability of the solution?

The scalability has definitely been impressive. We've got a global organization with so many different teams and I don't hear any complaints from any of them. They're all up and running on this product, all around the world. So we've scaled extensively. The different teams don't really affect each other, but we're all using the same system. We don't really notice that there are 30 different product teams using the system. You only see your own.

It's extensively used in the sense that all the QA organizations within the different product teams — we're looking at 15 to 20 different product teams, each with five to ten quality assurance engineers, and some of them with up to 30 or 50 engineers — all of them are using the product at least as their test case management system. Some of them have different implementations when it comes to their automations. Some have different implementations when it comes to their ticketing system integrations. But all of them are equally supported by the product in different project scenarios and product configurations.

It requires zero people for maintenance because it's cloud.

How are customer service and technical support?

Tech support is incredibly responsive and has always come back very quickly and helped us find issues. They have gone out of their way to make sure that we are served as best as we possibly can be. I feel like I'm in really good hands with them. That definitely started from the time at which we took on and transitioned to qTest, in the way that they helped us get up to speed with information and support.

Which solution did I use previously and why did I switch?

We used JIRA and both the Zephyr and the Xray plugins. The scalability of those plug-ins was usually fine. They scaled along with JIRA, and JIRA is endlessly scalable. Reporting is where they would fall down. JIRA doesn't have the greatest reporting and most of the reporting is manual. When you're looking at reporting within qTest, most of it is already built for you. It has canned reports that already exist and which don't require a lot of effort. Mind you, that is where qTest somewhat falls down as well, on the reporting side of things, but it is still head-and-shoulders above the open-source solutions.

The decision to move to qTest was due to the way we had our implementation. We had no central, single enterprise-class test case management solution available to any of our teams. As they grew and became more extensive, they found either that the low-budget solutions they were using, or the open-source solutions they were using, or the complete lack of solutions they had, were simply not adequate. The decision was made at that time by upper management that we needed to find a central, enterprise-class solution for test case management. 

How was the initial setup?

The initial setup was very straightforward since Tricentis did most of the work for us. We're using a hosted cloud product so for us it was, "Here's your username and password." 

We received extensive support from, at that time, QASymphony, and Tricentis now, in getting up and running, understanding the product, and getting the information that we needed to make the best possible use of the product and to be successful. QASymphony and Tricentis have excelled at making sure that we are successful. I have a regular meeting with my success manager and she's always on call to be able to help us with issues.

Globally, for our organization, it took about six months for complete adoption. That was not Tricentis' fault. That was just how long it took us to get everybody up to speed and onboard. If it came down to how long it took Tricentis to do the deployment, it was probably a day and we were up and running and ready to go. There was not really a lot of configuration required on their side. The effort to get a large, global organization transitioned from one tool to another is not trivial. With Tricentis' help we were able to do it in what I would call an "impressive" six months.

Our implementation strategy was varied. Globally, we have many different projects and project teams and they all were using different tools. Some were simply using spreadsheets, while others were using tools like Zephyr. All of them chose to transition over to the central qTest test case management system. Each team had a very different implementation and that's definitely where Tricentis' support shined.

What was our ROI?

We have definitely seen return on our investment, simply through the efficiencies of the process. It's a tool that everybody knows how to use and it's global, so there's a good support network. And the support network from Tricentis is so extensive and useful to everybody around the world. Simply through the increased efficiencies of our test case management system, we have seen a return on the investment. That's not even taking into account the improvements in quality within our products, which is immeasurable.

What's my experience with pricing, setup cost, and licensing?

There is an upfront, yearly cost for concurrent licenses, meaning we're not limited to a specific number of users, only to a specific number of users online at a certain time. That works really well for us because we're a global organization. We'll have people online in San Diego, and those licenses then can be used later in the day by people online in Tel Aviv. It's been a really great licensing model for us.

I believe that there is a maintenance cost as well. I'm not really involved in the payment of that, so I don't really know what it would be.

Which other solutions did I evaluate?

An evaluation was opened up to search for the proper solution. qTest was the winner. 

What other advice do I have?

The biggest lesson I have learned is that the transitioning process is only difficult if you drag it out. Transitioning over to a new product needs to happen quickly. It needs to be a top-down decision and the information needs to be disseminated to everybody in a quick and efficient manner. We saw that happen easily with the qTest product and that sold me on the lesson that I learned, when it comes to implementing new, global-enterprise software.

qTest is a great solution. It should definitely be at the top of your list when you're looking at test case management solutions. It's really the service and support that comes from Tricentis that sets it apart. In addition to that, its integration with systems that we are already using is the force multiplier that allows qTest to be even more efficient than just another tool that people have to use on a regular basis. It has become the only tool that they have to use on a regular basis.

In our company, executives or business users don't review results provided by qTest because that would be a kind of an Insights thing and we don't really use that. They do see the testing status in their tool of choice because we have the links to JIRA, so that they don't have to review the testing status within qTest. They don't log into qTest at all. They see the information that they want through our links with the ticketing system.

The solution doesn't really help us to quickly solve issues when they occur, but I don't really feel like that's its job. Its job isn't to help me solve issues. Its job is to make sure that I'm aware that there are issues that need solving, and that that information is distributed to all the people who need it, right when it happens. There are some things in there that help me figure out what's going on and what I need to do to fix a problem; it depends, of course, on the problem. But I don't feel that qTest's job is to help me solve problems. qTest's job is to make sure that I'm aware of the status of problems, that there are problems, and whether or not they're being worked on.


Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
NancyMcClanahan - PeerSpot reviewer
Quality Assurance Team Lead at a healthcare company with 10,001+ employees
Real User

I agree with you on the issue/defect resolution. I think we can improve that function with a workflow activated, though we are still working on consistency in that area within the company. Good evaluation.

reviewer970854 - PeerSpot reviewer
Sr. Product Manager - Intelligent Automation & RPA at a financial services firm with 5,001-10,000 employees
Real User
Nov 10, 2021
Great for test management and automates a lot of the testing functions
Pros and Cons
  • "Works well for test management and is a good testing repository."
  • "Could use additional integration so that there is a testing automation continuum."

What is most valuable?

The solution works very well for test management and it also automates a lot of the testing functions so that you don't have to manage them in Excel spreadsheets. It doesn't go all the way to automated testing, but it becomes a good testing repository. If an organization is not fully ready to take advantage of automated testing, qTest is a good first step. The value with qTest is that it has nice hooks into JIRA. When it comes to test management, it has good integration. 

What needs improvement?

I'd like to see better integration in the platform so that there is a testing automation continuum, where customers can easily mature through qTest and Tosca functionalities.

How was the initial setup?

It took a little bit of back and forth to get started but since then, it's working very well for us. We had to work with our support team and it took a little longer than expected. 

What other advice do I have?

I rate this solution eight out of 10. 

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Senior Architect at a manufacturing company with 1,001-5,000 employees
Real User
Nov 28, 2019
Helps us to quickly come up with a test plan, but overall, it's not as intuitive to use as it could be
Pros and Cons
  • "The most valuable feature is reusing test cases. We can put in a set of test cases for an application and, every time we deploy it, we are able to rerun those tests very easily. It saves us time and improves quality as well."
  • "You can add what I believe are called suites and modules. I opened a ticket on this as to what's the difference. And it seems there's very little difference. In some places, the documentation says there's no difference. You just use them to organize how you want. But they're not quite the same because there are some options you can do under one and not the other. That gets confusing. But since they are very close to the same, people use them differently and that creates a lack of consistency."

What is our primary use case?

We use it for QA software that we build. 

How has it helped my organization?

It boosts productivity because we're able to quickly come up with a test plan, as opposed to doing it from scratch each time or from something homegrown.

What is most valuable?

The most valuable feature is reusing test cases. We can put in a set of test cases for an application and, every time we deploy it, we are able to rerun those tests very easily. It saves us time and improves quality as well.

It also helps us to identify defects before we get them into production. And, overall, it has increased testing efficiency by 30 percent in terms of time.

What needs improvement?

The information that qTest provides to executives could be better. If there are tests that have a lot of steps in them, people will go through and do seven out of eight steps, but it doesn't show the test is complete. So from a metrics perspective, what executives normally see is that it looks like nothing was done, even though they did seven out of the eight steps.

In addition, you can add what I believe are called suites and modules. I opened a ticket on this as to what's the difference. And it seems there's very little difference. In some places, the documentation says there's no difference. You just use them to organize how you want. But they're not quite the same because there are some options you can do under one and not the other. That gets confusing. But since they are very close to the same, people use them differently and that creates a lack of consistency. My preference would be that qTest establish the way they do it and everybody has to do it that way, so everything is done the same way.

In response to my ticket, they said that they are the same and that you can choose whichever one to best organize how you want to organize. But the problem is that everybody in the organization makes a different choice. And they sent me a link to the documentation. Some of the documentation does say that there are some differences. There was one thing, like importing tests or something, that we could do under one but not under the other. That really made it a mess. That's the only really big concern I have had.

For how long have I used the solution?

We've been using qTest for between six months and a year.

What do I think about the stability of the solution?

It seems very stable.

What do I think about the scalability of the solution?

Scalability gets to be a little bit of a mess. I've never seen a performance issue but, as we continue to add projects, especially for somebody who has access to a lot of the projects or is an administrator who has all the projects, it feels a little bit unorganized. There's too much stuff. When I create projects, for example, they're in my dropdown forever, as far as I know. That just creates a huge list of projects. I would like, when a project is done, to get it out of my face.

How are customer service and technical support?

Tech support did answer promptly. My issue is not the fault of the tech support. The tech support did fine. The issue I described above is the only time I've contacted them.

Which solution did I use previously and why did I switch?

In this organization, Tricentis qTest was the first test management tool we used. In my last job we used Micro Focus Quality Center. Both it and qTest are a pain; they're pretty similar.

How was the initial setup?

The initial setup is a little bit wonky. What you need to do to get the job done is not intuitive. It takes more time to train people than if it were a little bit simpler.

Getting all the products set up and getting all the testers assigned took a while.

The adoption of qTest in our organization has been average. People aren't against it; they comply. But again, the fact that we don't have a formal QA team is our biggest obstacle. When we ask people on the business side to use it, they are pretty good about using it, as long as we show them how.

What was our ROI?

It does what it's supposed to do. I don't know what the organization paid for it, but it is getting the job done that it's supposed to get done.

What other advice do I have?

I would recommend planning how you're going to organize using it and have everybody organized the same way as they use it. A lot of times you see this in software: They build in flexibility thinking they're doing you a favor because they're making it flexible and thinking you can use it the way you want. But if you have ten users, those ten users each use it ten different ways. If there's no flexibility at all, the ten users use it the same way. To me, that's almost better. Even if it's not exactly how we want, at least it's the same. Uniformity, over being able to choose exactly how I use it, would be my preference.

The biggest lesson I've learned from using qTest is that we need dedicated QA people. What will happen is something like the following. I have a developer, Amos, who, thinking he's doing the right thing, goes in and loads up 20 tests and then gives that to the business to test. And they think, "Hey, the expectation is that I do exactly what this thing says." The problem is that we then only test it from the perspective of the developer. We're not actually getting the business to think about what they should look at or, better yet, developing a dedicated QA team which knows to look for defects. It's a myopic perspective on testing. And because of that, we do not find as many defects as we otherwise would. That is not a qTest issue, though. If we had a dedicated testing team using qTest, that would be ideal.

We have not seen a decrease in critical defects and releases since we started using it, but I wouldn't blame qTest for that. It's more that we do not have a dedicated QA team. My management team seems to think that qTest is a substitute for a dedicated QA team, and we have the developers and the business side use it to test. But developers and business people are not as good at finding defects as a dedicated QA team is.

In terms of maintenance and for administration of the solution, we don't have anybody dedicated to those tasks. People do the maintenance needed to get done whatever they need done. It's mostly me who creates projects, adds users, etc.

We have 56 users, who are primarily developers and on the business side.

Overall, it gets the job done, but it's a struggle to do it. It's not as intuitive to use as it could be.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Automation Lead at a computer software company with 51-200 employees
Real User
Nov 27, 2019
We're spending less time trying to find defects and doing manual testing
Pros and Cons
  • "The most important feature which I like in qTest manager is the user-friendliness, especially the tabs. Since I'm the admin, I use the configuration field settings and allocate the use cases to the different QA people. It is not difficult, as a QA person, for me to understand what is happening behind the scenes."
  • "As an admin, I'm unable to delete users. I'm only able to make a user inactive. This is a scenario about which I've already made a suggestion to qTest. When people leave the company, I should be able to delete them from qTest. I shouldn't have to have so many users."

What is our primary use case?

We have licenses for qTest Manager and Flood. We use Flood for performance testing. We use the Manager on a day-to-day basis for storing the test cases and linking them with the NPM, with the Selenium automation test cases, and we schedule runs through qTest.

We also have Jira Cloud and connectivity using the CI/CD pipeline. We connect qTest with Jira and set up our runtime and regression automation. Manual is done using just the Manager and the automation is done using Selenium and Selenide.

All the user stories are done in JIRA. We take those user stories from JIRA and input them into qTest. From there, people write the test cases related to each and every user story, and these test cases reside in qTest. qTest is then connected to a Linux box and Selenium. Since we have connected qTest to the automation, Selenium runs the suite. We create defects in JIRA and connect them to qTest. This is how we link the entire package.
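A pipeline like the one described above usually closes the loop by posting each Selenium run's outcome back to qTest over its REST API. The sketch below shows that shape in Python; the tenant URL is a placeholder, and the endpoint path and payload field names follow qTest's v3 API conventions as I understand them, so they are assumptions to verify against your own instance rather than a tested client.

```python
import datetime
import json
from urllib import request

# Hypothetical tenant URL -- replace with your own qTest instance.
QTEST_BASE = "https://yourcompany.qtestnet.com"

def build_test_log(status, note, started, finished):
    """Build the result payload for one test run.

    Field names ('status', 'exe_start_date', 'exe_end_date') follow
    qTest's v3 API conventions but should be confirmed against your
    tenant's API documentation -- this is a sketch, not a verified client.
    """
    return {
        "status": status,                       # e.g. "PASS" or "FAIL"
        "note": note,
        "exe_start_date": started.isoformat(),
        "exe_end_date": finished.isoformat(),
    }

def submit_test_log(project_id, test_run_id, payload, token):
    """POST the result to the (assumed) test-logs endpoint."""
    url = (f"{QTEST_BASE}/api/v3/projects/{project_id}"
           f"/test-runs/{test_run_id}/test-logs")
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:  # real network call; needs a live tenant
        return json.load(resp)
```

In a CI/CD setup like the one the reviewer describes, a call to `submit_test_log` would sit at the end of the Selenium job, keyed by the run IDs that qTest assigned when the suite was scheduled.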

How has it helped my organization?

The solution's reporting enables test team members to research errors from the run results.

Our executives have started to review results provided by qTest, but that process is not completely done. We are in the process of implementing it for the higher officials and showing it on their screens. Everything is in the cloud and they can just click on things and it says, "Okay, these passed and these failed."

The speed with which our team understood the tool and started implementing and using it has drastically improved things. I'm sure we will improve our use of it over the next couple of years and use the tool to the maximum.

The solution is helping increase testing efficiency. We spend less time trying to find defects and doing manual testing.

qTest is definitely doing a good job of meeting our requirements and meeting the needs of our higher officials for understanding how the tests are being run.

What is most valuable?

The most important feature which I like in qTest Manager is the user-friendliness, especially the tabs. Since I'm the admin, I use the configuration field settings and allocate the use cases to the different QA people. It is not difficult, as a QA person, for me to understand what is happening behind the scenes. Looking at the code and looking at the Node.js or the NPM connection, it is so easy for anyone to understand the CI/CD pipeline.

The terminology used on the tabs, like "test plan" or "test cases" or "regression" or "requirements," make it easy for any layman to understand: "Okay, this is the model. This is how QA works. This is how the lifecycle of a product moves." There isn't any tough terminology which might make a new user say, "Okay, what is this? I don't understand."

It also provides the easiest way in which I can set up automation. It is really easy to do, compared with other ALM tools.

The integration with JIRA is superb. It was easy for my DevOps manager to go ahead and create integration between JIRA and qTest.

What needs improvement?

As an admin, I'm unable to delete users. I'm only able to make a user inactive. This is a scenario about which I've already made a suggestion to qTest. When people leave the company, I should be able to delete them from qTest. I shouldn't have to have so many users.

There are more improvements that could be made, such as giving users an easier way to access the tool.

For how long have I used the solution?

We have been using qTest for just under a year.

We have not completely implemented everything, all the features. Although we know what qTest has, we have not explored the data and the dashboard and the tabs. So we are just using 60 percent of the tool's assets. We are still waiting for our own stable releases to happen and then we can say, "Okay, automation is done, manual is done."

What do I think about the stability of the solution?

Initially, we had a few issues, but now it's stable. It doesn't give us any problems. For the past six months at least, I haven't had to create support tickets as often as I used to in the six months before.

The tool was new for us and it was pretty difficult for us to understand certain things. But now, we know what it is and how to implement it. We know how to integrate with JIRA, with Selenium, etc. Everything has settled down.

What do I think about the scalability of the solution?

The solution is scalable.

We currently have ten users using licenses out of our total of 12 licenses, and they use it on a daily basis. It's used extensively to create the test cases, run automations, and create defects in JIRA. 

Currently, we don't have any plans to increase our usage. Five staff members are required for the deployment and maintenance. They are the people who schedule the automation runs and who do all the other jobs on a daily basis.

How are customer service and technical support?

Technical support is pretty good. During the first six months I was creating tickets and tried to get the answers immediately through email. If it was not possible for me to understand their answer, they immediately scheduled a meeting. So at the maximum, my problem would be resolved over the course of a week. The support is really good.

Which solution did I use previously and why did I switch?

Previously, we used Micro Focus ALM. Now, we have divided our products internally: an old product where we use Micro Focus, and a new product for which we wanted a newer tool to be implemented, which is qTest.

How was the initial setup?

The initial setup was a little bit difficult. Once we started with qTest, we had to migrate all our test cases from Micro Focus ALM. That's where we had a few difficulties in implementing this.

We had the help of a migration manager from Tricentis who really helped us out. At that particular stage, I had difficulty setting this up. Once it was done I was so relieved. It did take time. We thought it would take a week's time, but it took a month to finish the entire task. 

The code didn't work as it was supposed to in the wizard for the migration. It's true that our company's repository in Micro Focus ALM was very large, so it was difficult for us to take everything from there. We had to break the repository in half, and we had a lot of issues with IT here, and with Tricentis there. Everything got settled, but it was not quick.

What about the implementation team?

Our experience with the Tricentis consultant was good. 

It's just that our setup took a lot of time. We had a lot of difficulty, initially, in migrating the entire project. We needed to activate the product in ALM and then deactivate it again. It was kind of a mess. But the support engineer would coordinate with me, even outside of office hours. We sat together in meetings and tried to clear things up. He was a pretty good guy who really helped us set this up.

Now that it is so user-friendly and so easy to work with, that's only because he gave us the initial foundation for the product. We're really thankful for that.

What was our ROI?

It's too soon for us to see return on investment.

What's my experience with pricing, setup cost, and licensing?

We signed for a year and I believe we paid $24,000 for Flood, Manager, and qTest Insights. We paid an extra $4,000 for the migration support.

Which other solutions did I evaluate?

Being a lead manager, I shopped around among many ALM tools, trying to understand which would really meet the needs of our newer product. We found qTest was the most user-friendly, and I can even say the most popular. The cost-effectiveness was also part of it. All of that helped us choose it.

Comparing Micro Focus and qTest, the cost of qTest is far less. Secondly, the cloud base and the fact that I am able to see everything on one screen is helpful. Although Micro Focus is updating as the time goes by, it's not as easy and as user-friendly as qTest.

There's reporting in both solutions. qTest Insights has more customization. Although ALM has some customization, it's not so easy to set up. You need to write a type of VBScript code to do more customization. But in Insights, it's easier for me to customize my reports.

We use both solutions, but the team that started using qTest is entirely different. The team is new and the product is new, so they didn't have any difficulty adopting this tool. The other team, which was using Micro Focus ALM, is still using it. We have not changed any team's structure: qTest is used by the newer team and Micro Focus ALM by the older team.

We looked at Inflectra SpiraTest and TestRail. SpiraTest is definitely competitive with qTest. We found everything that was in qTest was in SpiraTest as well. But there were flaws in terms of the terminology used by Inflectra. It would not be easy for any QA person to really understand. That was one of the differences we found. And the initial support which I needed from SpiraTest — I did have to mail them every day — was not what I wanted. I was not getting immediate answers to my questions.

As for TestRail, its integration with JIRA was not as easy as we thought it would be. That was one of the flaws in TestRail which caused us to give up on it and we moved to qTest.

What other advice do I have?

Go for it, take a shot at it. Try it out with the 30-day free trial. If you really find it to be a good fit for your company, the productivity and the cost, go ahead and choose it. It's definitely a good tool.

The biggest thing we've learned from this tool is its ease of use. You can manage the entire application lifecycle by moving among the different tabs and options. With one screen, it is easy for a QA person to get into it.

We have not used Insights that much. We have used it to some extent but we haven't gone into the details in the graphics and the reporting. Because our own product is changing so often — the versions and the management and the configuration of the product are changing — we do not have a stable release for our product. So we are not set up completely with Insights. We are in the process of doing so.

About 40 percent of what we do is still manual testing; only 60 percent is automated. The basic aim is for at least 80 percent automation.

Our team that works on qTest Manager is located in Ukraine, so a team leader there could provide more elaborate answers than me; I'm leading it from our head office. The team in Ukraine are the people using it on a day-to-day basis.

I would rate qTest at seven out of ten. To make it a ten there are a few things here and there which could be easier for the user, like giving popups in between operations. When I want to delete something it asks me, "Are you sure you want to delete?" But it does not do that everywhere. So there are some small things, here and there, which could really improve the tool. It is supported in Chrome, Firefox, Safari and IE11. I would like to see more browser compatibility options, like using it in Edge. And when I move to different browsers, the format of the tool is not consistent.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Product QA Manager at a computer software company with 201-500 employees
Real User
Nov 21, 2019
Provides us with visibility into test results as well as better accountability within the QA team
Pros and Cons
  • "The integration with Selenium and other tools is one of the valuable features. Importing of test cases is also good."
  • "We feel the integration between JIRA and qTest could be done even better. It's not as user-friendly as qTest's other features. The JIRA integration with qTest needs to mature a lot... We need smarter execution with JIRA in the case of failures, so that the way we pull out the issues again for the next round is easy... Locating JIRA defects corresponding to a trait from the test results is something of a challenge."

What is our primary use case?

We have multiple teams working at Reflexis and test management is a critical aspect. We wanted to be able to maintain tests. We have multiple releases to be sent to customers and to internal teams.

We use JIRA for defect management and for our internal project-tracking purposes, but for test management we primarily use qTest.

How has it helped my organization?

qTest has created a lot of transparency across the different teams. Everyone now has access to the tool, so there is visibility, internally, from one team to another, regarding the results. When the builds are being sent out, people know how stable a build is and what the quality of that release is like. This information is very transparent and available to everyone who has access.

The way it's optimizing things is through the transparency within the teams. For example, we have an engineering QA team and then we need to send the build release to the implementation QA team. They are also able to review things. They get to know what things have passed or failed. And when we need to share with customers or others, they get very good information. They know that these builds have taken care of these things.

With respect to accountability, it provides clear information: which defects and test cases each person has worked on, and whether they have passed or failed.

As a QA team, there is more accountability. Now we are able to see what the test cases are that are assigned to us for QA, how much has been executed, and what has passed and what has failed. Later, those things can be evaluated, so it improves the accountability of the tester and creates more transparency in the results.

qTest has improved our time to release. With the automated testing which we are able to integrate with qTest, people are able to go through things immediately. We haven't seen a big change in time to release, but there is a gradual change. It has definitely improved release time, but that still needs to improve a lot. Release times have improved by 20 to 25 percent, roughly. We expect that to increase a lot. A few teams have adopted qTest completely, while other teams have started to adopt it in their work. Those things are going on in parallel. As more teams come into qTest, release time should definitely improve, in the longer run.

In addition, the automation integration that we do has been valuable. Because it has APIs, whenever we run an automation test it is automatically updated in qTest. Those efforts have been taken care of, especially with the transparency that it provides when we need to share the results or the release status with other teams. That is certainly a big plus we get from qTest.
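The automatic result updates described above typically rely on a thin adapter between the automation framework and qTest: the framework reports outcomes in its own vocabulary ("passed", "skipped"), which must be translated into whatever statuses the qTest project has configured. The sketch below shows one such adapter; the status names on the right-hand side are placeholders, since the actual values depend on each project's qTest configuration.

```python
# Hypothetical mapping from Selenium/pytest-style outcomes to qTest
# statuses. The right-hand names must match the statuses configured in
# your qTest project -- treat them as placeholders, not canonical values.
OUTCOME_TO_QTEST = {
    "passed": "PASS",
    "failed": "FAIL",
    "skipped": "SKIP",
    "error": "FAIL",  # report framework errors as failures, not passes
}

def to_qtest_status(outcome: str) -> str:
    """Translate a framework outcome into a qTest status.

    Unknown outcomes default to FAIL so that a misreported result is
    never silently counted as passing.
    """
    return OUTCOME_TO_QTEST.get(outcome.lower(), "FAIL")
```

Defaulting unknown outcomes to a failure is a deliberately conservative choice: it is better for the dashboards the reviewer mentions to flag a result for human review than to show a false pass.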

What is most valuable?

The integration with Selenium and other tools is one of the valuable features. Importing of test cases is also good. 

The way we structure the test cases and the way we structure the execution cycles and the way we are able to integrate the requirements with the test cases and then generate reports, they're all pretty awesome.

There is a qTest reporting engine and they have Insights which is separate from the standard, conventional reports. Insights is pretty good. Once you get into it and start to understand how it has been designed, you will be able to take advantage of all the features.

The reporting is awesome. The way you get the stats and other critical information from the test reports in qTest is good.

What needs improvement?

We feel the integration between JIRA and qTest could be done even better. It's not as user-friendly as qTest's other features. The JIRA integration with qTest needs to mature a lot. We have some concerns and we have some challenges as we try to work with those features. This is an area where, if we see more improvements, we will be very happy.

We need smarter execution with JIRA in the case of failures, so that the way we pull out the issues again for the next round is easy. Currently, we have some challenges and complexities around that. Locating JIRA defects corresponding to a trait from the test results is something of a challenge. It impacts productivity. The reason is that the team spends more time on mapping it again for new execution failures. If that is taken care of, it will actually save a lot of QA effort.
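Until the integration matures, one common workaround for relinking defects to a failed run is a team naming convention: tag each defect filed from qTest with a label that encodes the run, then query JIRA for those labels. The sketch below builds such a JQL query using JIRA's standard search syntax; the `qtest-run-<id>` label scheme is purely a hypothetical convention, not a built-in feature of either tool.

```python
def jql_for_failed_run(test_run_id: int, project_key: str) -> str:
    """Build a JQL query finding open defects tagged for one test run.

    Assumes the team labels defects 'qtest-run-<id>' when filing them
    from qTest -- an agreed naming convention, not a feature of qTest
    or JIRA themselves.
    """
    return (f'project = {project_key} AND issuetype = Bug '
            f'AND labels = "qtest-run-{test_run_id}" '
            f'AND resolution = Unresolved')
```

The resulting string can be passed to JIRA's issue-search endpoint (`/rest/api/2/search?jql=...`) or pasted into the JIRA UI, so testers can pull up the unresolved defects for a run without re-mapping them by hand.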

I'm not sure if someone is working on that. We had raised this point during our evaluation, so it was probably discussed at some point in time, that they will get at it, but we don't have a clear version by which it will be taken up.

Also, Insights is not that easy to use for someone who has just started working with qTest. You need to know what all the fields are and have some background on Insights. It's not that user-friendly for someone who's just starting to work with it. People should be trained so they know what all the various features are inside it. Then people will be able to appreciate it.

For how long have I used the solution?

We've been using it less than a year.

What do I think about the stability of the solution?

The stability has been good. It's definitely serving our purposes, and that's one of the reasons we went for qTest.

What do I think about the scalability of the solution?

We see it helping us in the long-run as well. qTest seems to be adding more and more new features.

We have about 40 to 50 team members using it right now. We plan to slowly increase the number of users. It's a gradual process. We are planning to scale it. We are not currently reaching the peak of 25 concurrent users, most of the time. It rarely gets to the max. We average 15 to 20 users at any point in time.

There is no immediate plan to increase our licenses. As more teams and more members come into play, and when we hit the peak very frequently, we may increase the number of licenses.

How are customer service and technical support?

We have the option to contact tech support but, so far, except for a couple of times, we haven't had a reason to contact them. Tech support is good. They have set up a good infrastructure and process, so things are getting addressed quickly.

Which solution did I use previously and why did I switch?

Previously we had TestLink, but we found many challenges with it when we had to run automated tests. qTest has good features that help us maintain tests and share them with others with ease. The UIs are good and give the testers a lot of flexibility when working with them. Those are some of the main reasons we chose qTest for our test management.

We did an extensive evaluation of qTest. We had multiple people from Tricentis helping us during our evaluation process. It has been adding value to our organization.

How was the initial setup?

The initial setup was straightforward. I was not involved in that process. It was done by the IT team in discussion with qTest counterparts. But overall, I didn't see any challenges. It was planned for a specific day, and it was completed on that day.

There was one person from our side and one person from Tricentis involved.

The adoption has been good. The organization is impressed with the features and the value that it will add to our QA processes. That's definitely a positive. It's definitely doing what we were expecting. We haven't seen any concerns from the end-users or management.

What was our ROI?

We have definitely seen ROI. One area of return is due to the simplicity of use. It brings a defined process to the team. TestLink, which we used previously, was not very usable for the testers in terms of maintaining the test cases or creating them. It was taking a lot of time. People are able to work with qTest and are able to focus more on the actual testing, rather than maintaining things due to complexities. Those are the areas it has improved.

We haven't seen dollar savings, but it is definitely adding value to the teams.

What's my experience with pricing, setup cost, and licensing?

It is pretty costly, from what I remember. It's quite a few times more costly than other tools on the market. We compared it to the other leading test management tools. We went for it because of the features and the value it could add to our organization.

Which other solutions did I evaluate?

We have evaluated several other tools. But the features, especially the requirements being integrated with the test cases, are pretty awesome. Many tools do not have the features and, even if they have those features, they are not as simplified as they are in qTest. That's one of the primary reasons qTest has been very useful for us.

Open-source solutions don't have as many features and their usability is also not as good.

Multiple people in our company evaluated other solutions and, based on all their input, we finally chose qTest. 

What other advice do I have?

Do a cost-benefit analysis. qTest is more costly than other tools. If you have multiple teams, it's going to be essential, and it's worth buying qTest. Apart from that, if cost is not a factor, there are more benefits from qTest and it's definitely a tool you can go for.

All the features we have used are pretty impressive and good. The JIRA integration is the only thing that, if it is very critical, you need to plan accordingly.

It's a good investment for the implementation of the QA process. It creates more accountability in the team and also makes a lot of things easy for the managers as well. It simplifies a lot of QA processes. These are the things we've learned from using the solution. As we start having other teams use the tool, they should also be able to see and take advantage of these things.

Not many business users are using qTest. We share reports with them and they use them for management and other purposes. Primarily, qTest is used by the QA team only. But people take the reports as a starting point for discussion for things like product-improvement purposes. The business users rarely go into the tool to get to the various details they need. Mostly the reports are PDFs that we generate. That becomes the source for them instead of them logging into it and getting information.

The IT team maintains it along with all the software that we have installed on our premises. That team is taking care of it. But we hardly have any maintenance requests for qTest. There have been a couple of times where we had outages but, apart from that, we have hardly had any maintenance requests for qTest.

We haven't seen any change in the number of defects. It mainly creates transparency, and accountability has been increased.

It's easily understandable, including the reports. It's pretty comprehensive and provides all the essential details that we need to publish from any of the teams.

I would rate qTest at nine out of ten. It's a perfectly good tool. It definitely serves its purpose and I can definitely recommend it.

Which deployment model are you using for this solution?

On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Buyer's Guide
Download our free Tricentis qTest Report and get advice and tips from experienced pros sharing their opinions.
Updated: December 2025