What is our primary use case?
When I first started here, my goal was to get a test case management tool. The testers were using spreadsheets, so the idea was to set up a strategy and plan to not only create a testing process, but also to improve that process over time. One of my suggestions was that we get a tool that would keep us ready for the future as we moved toward automation. Now, after all the manual testing, we have moved to Cloud Application testing for third-party Cloud solutions, as well as complete testing for our Electronic Medical Records system and its integrations. It has allowed us to expand the Quality Assurance of all software used at Parkview Health.
How has it helped my organization?
We can actually track without having to have spreadsheets, which really improved our process by probably 180 percent. That was a biggie. We were able to put test cases into one area so that everybody can see them, for every module and every application that we use.
The solution's reporting does a good job of enabling test team members to research errors from the run results. You can drill down and narrow the results. Since we are not a Dev shop, our releases are projects such as opening offices, changing a piece of functionality, or receiving a release from our vendor. We're able to see a report of the execution, and that too is getting better. It has improved our productivity but has not totally gotten rid of some of the manual work that we do.
qTest has also helped us to quickly solve issues when they occur. I've seen demos of the automation and the like for Tosca. That's going to be interesting and an eye-opener for my company, given that "medical" is very slow at adopting new technology.
We have also seen a decrease in critical defects in releases. When we started off we probably had close to a 60 percent defect rate, and the last time I did my metrics, a couple of months ago, it had gone down to 36 percent. It's because everything is right there. It's visible. You have to be accountable for what you do and what you mark up. In our company, it's been a huge culture change that they actually have to keep track of what's not working, in one location.
Overall, the solution has increased testing efficiency by a good 95 percent.
What is most valuable?
The most valuable features are the execution side of it and the tracking of defects. I like the way it structures a project. We have a PMO that does project management. That project management then triggers a process that tells us that it's going to be tested at a certain date. We're able to put the test cases into qTest or modify something that's already there, so it's a reusable-type of environment. It is very important that we can do that and change our test data as needed, especially for our EHR system. I also like the flexibility in how it can be used for DevOps or non-DevOps operations of Quality Assurance and Testing.
What needs improvement?
We have used the Insights reporting engine but, within the last six months or so, since Tricentis took it over, they've started to improve that. We had some custom fields to match our process dates, and to track who is the project manager of the release, and who the test coordinator is. That way, we can keep track of what kind of testing is being done for that particular project. The Insights engine would not show us any of the custom fields when we first started using it. I've been working with them to improve that factor for Insights.
The next phase is that by the end of the year, they're supposed to release a new analytical tool within Insights or change Insights to be that analytics tool. I'm looking forward to that because I do all my analytics with exports from qTest and exports from our ITSM/ITIL system, Cherwell. I then make my reports out of them, so it will be very welcome to have that functionality.
I do some reporting for executives and business users from qTest. I go to Insights, do a query on the fields I want them to see, and then export that into Excel. I get the graphs, do a screen print, put it into a report, and send it off in a PowerPoint presentation. The quality of that data needs help. I use it fairly regularly for defect reporting because it does show an excellent view of the defects that are associated with a project and whether they're open or closed. I'm looking forward to the new Analysis tool that is coming to Cloud customers soon.
Reporting shouldn't be so difficult. I shouldn't have to write so many queries to get the data I'm looking for, for a set of metrics about how many releases we had. I still have to break those spreadsheets out of there to get the data I need.
Also, qTest doesn't have a workflow engine, except for defects. I'd like to see more of something of that nature. It might help improve efficiency as we move into the future, especially when automation comes in.
For how long have I used the solution?
We have been using qTest for almost five years. We are able to track all types of testing: Application, Unit, Functional, and Integrated, or any combination of these, across types of releases.
What do I think about the stability of the solution?
We just had an outage on Monday. They had a data center outage. Outages happen a couple of times a year, so it's not bad. It's up about 97 percent of the time.
What do I think about the scalability of the solution?
I don't think I have any scalability issues with it. Right now, we have concurrent licenses, which seem to be plenty. We've not had a problem with that. It has only happened once or twice where there were 35 licenses used at the same time. The tool tells you, and then you have to wait until somebody signs off. That's easy to manage. We don't have any issues yet. I'm not saying that we won't, once automation is there.
We have 200 users and a total of 35 concurrent licenses. Generally, the users are analysts or Epic analysts, as well as managers, directors, and people involved in network validation. We have a tester lab, project administration, project manager, quality assurance, and quality assurance leads, as well as some people who have report-read-only access. Some of our vendors also have access. They have their user profiles because I limit their access in terms of what they can see and what they can do.
There are two of us involved in deployment, upgrading, and maintenance of qTest. I'm the lead, and I have a test coordinator who helps me.
We plan on increasing usage as we add more systems to it, and once we add automation. I will analyze how many licenses we have versus what we will be running at that time and will determine if we need any more.
How are customer service and support?
We've used Tricentis technical support quite often. The way I understand it, they have tier-one, tier-two, and tier-three. Their development people are probably their third level.
They answer quickly, but sometimes they ask questions that I cannot answer because it's part of their tool. Last week I finally told them, "Just go out and look at our system. Follow my instructions and just go out and look. You have it. It's your cloud."
Which solution did I use previously and why did I switch?
I have used Micro Focus LoadRunner at a couple of locations. The Micro Focus tool is very complex and not as user-friendly as qTest. I knew that implementing the HP tool would be more difficult, and its price is much higher than qTest's.
So the factors were both price and usability of the tool, because some of the people who do our testing are not IS people. They don't understand the software development lifecycle. You have to make it simple for them to use, and I can do that within qTest.
How was the initial setup?
The initial setup was easy, but you have to make sure that you follow the process that is associated with how you manage your testing overall.
Our deployment took about six months. That was because of the data that we had to load. The strategy was to make sure it was working for all of Epic, which is our EHR system. We wanted to get that part done first. We then started adding the third-party applications and got all of that data into it. We waited about a year to make sure that the third-party applications had their regression test cases in the system, and we still add new application data as we go forward, separate from the implementation of our EHR.

In terms of the adoption of the solution within our company, it's much more user-friendly. It allows everything to be in a central location. The data management becomes more critical because you have everything right there at your fingertips, versus a spreadsheet which could be located anywhere.
What about the implementation team?
What was our ROI?
We have absolutely seen return on our investment.
What's my experience with pricing, setup cost, and licensing?
For the 35 concurrent licenses, we pay something like $35,000 a year. There are no additional costs to the standard licensing fees, until we get into Tosca.
We have the Elite version, which allows us to have Insights, Parameters, Explorer Sessions, Pulse, Launch, and Flood.
Which other solutions did I evaluate?
We evaluated five tools and narrowed it down to three, and qTest was ranked number one. SmartBear, the HP tool, and Tricentis were the top three. Back then, it was QASymphony; this was before Tricentis bought them out.
The solution had to manage test plans, requirements, and test design. We wanted to make sure we could revise test cases as we moved forward with releases. Because we're not centralized as a testing organization — we have other groups that do our testing — it had to be able to get those groups involved cohesively. It had to track defects. Also, we do not have a project management tool, nor do we do DevOps projects. But we are continually doing different releases of all types of medical software, so we wanted to be able to manage our releases for all of our software. Ninety percent of our software is medical, but we also have things such as supplier management. We wanted to be able to keep all of our test cases in one location. We wanted it to be easy to share. We also wanted it to have a good roadmap toward the future. It needed to be able to integrate with other tools and support single sign-on.
What other advice do I have?
Make sure you set it up the way you do your business. The process is essential, not just the tool that you're using to manage it. The biggest lesson I have learned from using qTest is that I should have used it years ago. We should have had this a long time ago, not just five years ago.
I send out periodical reports of all the metrics that we do, usually twice a year. We use other tools for keeping track of tasks that have to be done on each one of the projects. We use Microsoft Planner. It makes it easier for people to actually do their assignments and then let us know that the tasks are completed. If we had the JIRA tool or something of that nature, that would help the process. But, at this time, we don't use that functionality.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Other
*Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.