
BlazeMeter Overview

BlazeMeter is the #7 ranked solution in top Performance Testing Tools, #11 in top Test Automation Tools, and #15 in top Functional Testing Tools. PeerSpot users give BlazeMeter an average rating of 8.0 out of 10. BlazeMeter is most commonly compared to Apache JMeter. BlazeMeter is popular among the large enterprise segment, accounting for 72% of users researching this solution on PeerSpot. The top industry researching this solution is computer software, accounting for 27% of all views.

What is BlazeMeter?

BlazeMeter ensures delivery of high-performance software by enabling DevOps teams to quickly and easily run open-source-based performance tests against any mobile app, website or API at massive scale to validate performance at every stage of software delivery.

The rapidly growing BlazeMeter community has more than 100,000 developers and includes prominent global brands such as Adobe, Atlassian, Gap, NBC Universal, Pfizer and Walmart as customers. Founded in 2011, the company is headquartered in Palo Alto, Calif., with its research and development in Tel Aviv.

BlazeMeter was previously known as JMeter Cloud.

BlazeMeter Customers

DIRECTV, GAP, MIT, NBCUniversal, Pfizer, StubHub

BlazeMeter Pricing Advice

What users are saying about BlazeMeter pricing:
  • "The overall product is less costly than our past solutions, so we've absolutely saved money."
  • "The product isn't cheap, but it isn't the most expensive on the market. During our proof of concept, we discovered that you get what you pay for; we found a cheaper solution we tested to be full of bugs. Therefore, we are willing to pay the higher price tag for the quality BlazeMeter offers."
BlazeMeter Reviews

    Bala Maddu - PeerSpot reviewer
    Mobile Network Automation Architect at BT - British Telecom
    MSP
    Reduced our test operating costs, provides quick feedback, and helps us understand how to build better test cases
    Pros and Cons
    • "Scheduling is the most valuable feature. You can run the test 24/7 and then integrate it to the on-premises internal APIs. How it connects to the internal APIs and how it secures the data is very important for us and has definitely helped us."
    • "Version controlling of the test cases and the information, the ability to compare the current version and the previous version within Runscope would be really nice. The history shows who made the changes, but it doesn't compare the changes."

    What is our primary use case?

    We use this solution as a tester. When it comes to 5G, there are loads of changes because we're trying to build the first 5G core network with the standalone architecture. Everything is based on APIs and API-based communications with a new HTTP/2 protocol. When we build the core network, we constantly change and tweak the network.

    When it comes to testing, whether it's with Postman or any other tool, normally we run the test, make sure it works, and then move on. I was pretty impressed with Runscope because we can keep the test running 24/7 and are able to see feedback at any time.

    A proper feedback loop is enabled through their graphical user interface. We can add loads of validation criteria. As a team, if we make changes and something fails on the core service, we can actually find it. 

For example, we had a security patch that was deployed on one of the components. Runscope immediately identified that the network node failed at that API layer. The monitoring capability allows us to provide fast feedback.

We can also trigger it with Jenkins Pipelines. We can integrate it into our DevOps quite easily, and they have webhooks. The validation criteria are quite simple. Most of the team loves it, and the stakeholders love the feedback loop as well. They can look at it, run it, and see what's happening.

    The final solution will be across four different locations. The performance will run in a specific location. Runscope will run across different locations and test different development environments. At the moment, it's only on two environments. One is a sandbox where we experiment, and one is a real environment where we test the core network.

    There are around 10 to 15 people using the application, but some of them only view the results. They're not always checking whether it works or not. We have multiple endpoints.

    We use the solution on-premises.

    How has it helped my organization?

    The on-the-fly test data improved our testing productivity a lot. The new test data features changed how we test the applications because there are different things we can do. We can use mock data or real data. We can also build data based on different formats. 

For example, an IMEI number should be a 15-digit number. If you need various combinations of it, BlazeMeter can generate them as long as we provide regular expressions and say, "The numbers should be in this format." Mobile subscriber identities, which are pretty common in the telecom world, are easy. This solution has changed how we test things. Fundamentally, it helped us a lot.
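To make that pattern-driven generation concrete, here is a minimal sketch in plain Python (not BlazeMeter itself) that produces and validates 15-digit IMEI-style values against a declared format; real IMEIs also end in a Luhn check digit, which is omitted here.

```python
import random
import re

# Simplified 15-digit IMEI-style format, per the example above
# (real IMEIs also carry a Luhn check digit, omitted here).
IMEI_PATTERN = re.compile(r"^\d{15}$")

def generate_imei_like() -> str:
    """Return a random 15-digit identifier as a string."""
    return "".join(random.choice("0123456789") for _ in range(15))

if __name__ == "__main__":
    for _ in range(5):
        value = generate_imei_like()
        # Validate each generated value against the declared format.
        assert IMEI_PATTERN.fullmatch(value)
        print(value)
```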

Previously, most of the test projects that I delivered before I moved into automation used to take months. Now, the entire API test is completed within minutes. Because we look at millisecond latency, the tests don't take long; each run is less than a minute.

    The moment those tests run on schedule, I don't really need to do anything. I just concentrate on what other tests I can add and what other areas I can think of. 

    Recently, I have seen BlazeMeter's other products in their roadmap, and they're really cool products. They use some AI and machine learning to build new API level tests. I don't think it's available to the wider market yet, but there are some really cool features they're developing.

    BlazeMeter reduced our test operating costs by quite a lot because normally to do the same level of testing, we need loads of resources, which are expensive. Contractors and specialists are expensive, and offshore is quite expensive. However we do it, we have to spend a lot of time. Running those tests manually, managing the data manually, and updating the data manually take a lot of time and effort. With this project, we definitely save a lot in costs, and we give confidence to the stakeholders.

For previous projects and even smaller projects, we used to charge 100K to 200K for testing. We're using BlazeMeter for massive programs, and the cost is a lot less.

    What is most valuable?

    Scheduling is the most valuable feature. You can run the test 24/7 and then integrate it into the on-premises internal APIs. How it connects to the internal APIs and how it secures the data is very important for us and definitely helped us.

    It enables the creation of test data that can be used both for performance and functional testing of any application. Within the performance module of BlazeMeter, they have a different capability that supports performance testing. We have the performance test run on schedule, which is quite nice. It uses something called the Taurus framework. We built our own containers with the Taurus framework, but we moved to BlazeMeter because of the security vulnerabilities with Log4j. 

    They've been more proactive in fixing those, but it was quite hard for us. We did it over five days, but they came back with the fixes in two days. We realized that their container solutions are much more secure. But at the same time, when it comes to Runscope, they have yet to add the data-driven approach, but they are really good. They support test data creation in their functional module, but there could be a few improvements made in test data management on Runscope.

    The ability to create performance and functional test data that can be used for testing any application is very important to our organization because we're looking at big loads of customers moving onto 5G standalone architecture. We're also looking at Narrowband IoT, machine-to-machine communications, and vehicle-to-vehicle communications. 

    All of these require the new low latency tests, so that if we ship a piece of telecom equipment and move the customers onto the new 5G architecture, we can be confident enough to say, "Yes, this works perfectly."

    Also, running those tests continuously means we can give assurance to our stakeholders and customers that we can build the applications in a way that can support the load. There are more than 20 million customers in the UK, and there's growing traffic on private networks and on IoT. As the technology shifts, we need to give assurance to our customers.

    The ease of test data creation using BlazeMeter is the best part of the solution. I worked with them on the test data creation and how they provided feedback in the early days. It was really good. They have implemented it on the performance and mock services. Originally, we managed the test data on CSVs and then ran it with JMeter scripts. It was good, but the way BlazeMeter created mocks with regular expressions and the test data is quite nice. It reduced some of the challenges that we had, and managing some of the data on cloud is really good. 

    The features are really cool, and it also shifts the testing to the left because even before you have the software, you can build a mock, build the test cases in Runscope, and work on different API specifications. Then, you can actually test the application before it is deployed and even before any development. That feedback is quite useful.
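To illustrate that mock-first workflow, here is a minimal sketch that stands up a tiny local mock of a not-yet-built API so test cases can run before any development; the endpoint, port, and payload are hypothetical, and an actual BlazeMeter mock service would typically be generated from an OpenAPI specification instead.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical endpoint and payload, standing in for a component
# that has not been built yet.
MOCK_RESPONSES = {
    "/subscribers/12345": {"imsi": "234150000000001", "status": "REGISTERED"},
}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = MOCK_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

if __name__ == "__main__":
    # Test cases point at http://localhost:8080 until the real service
    # is deployed, then switch over via environment variables.
    HTTPServer(("localhost", 8080), MockHandler).serve_forever()
```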

BlazeMeter provides the functional module. They provide the performance testing, and it's all based on JMeter, which is really nice. JMeter is an open-source tool. You can upload your JMeter scripts into the performance tab and run them from there. It's really brilliant and gives us the ability to run the test from anywhere in the world.

    Runscope provides the capability to run test cases from different locations across the world, but we use it on-premises, which is quite nice. The reporting capability is really good. When the test fails, it sends a message. When it passes again, it sends a message. We know what's happening. The integration back into Teams is interesting because you can put the dashboard on Teams, which is nice.

    It's really important that BlazeMeter is a cloud-based and open-source testing platform because for some of the functionalities, we don't always need to rely on BlazeMeter reporting. Their reporting is really good. Having the ability to use open-source tools means we can also publish it to our internal logging mechanisms. We have done loads of integrations. We also worked with them on developing the HTTP/2 plugin, which is now available open-source. 

    The way they have collaborated and how they support open-source tools is really brilliant because that's how we came to know that the JMeter HTTP/2 plugin was provided by BlazeMeter, so we contacted them. We already tried that open-source tool and it was working at that stage. We started off with the mocks, using open API specifications. They also provide free trial versions.

With the shift-left, we build a mock and then start to use Runscope to validate those test cases. At that stage, even before the application is deployed, we know that we can actually get something moving. When the real application becomes available within that sprint, we already have cases that have been validated against mocks, and we immediately configure them with the real applications and real environment variables. The majority of the time they work; sometimes we need to update the data first, and at that stage, we get the test cases to work. The moment we do that, we put them on a 24/7 schedule, every hour or every half an hour, depending on the number of changes we make on the specific nodes. We always know whether or not it works.

    This solution absolutely helps us implement shift-left testing. We really started building our core network this year. Last year, it was all about the planning phase. We almost got our APIs and everything automated with the mocks. We started to use the feedback loop and knew which ones worked. We did a lot of work around our own automation frameworks and with Runscope. 

    We stopped some of the work we did on our own automation frameworks and slowly started to move them into BlazeMeter. We knew that as long as the tool supported it, we would continue with that. If we hit a problem, then we would see. At this stage, a majority of the work is done on the BlazeMeter set of tools, which is really nice because we started off with our own JMeter data framework test.

    BlazeMeter competes with the tools we have built in-house, and there's no way we can match their efficiency, which is why we slowly moved to BlazeMeter. The team loves it.

We also use BlazeMeter's ability to build test data on-the-fly. Sometimes when we run the test, we realize that some of the information has to be changed. I just click on it and it opens in a web interface. I update the number in my columns because the CSV is also displayed as a table. For us, it's a lot easier. We don't have to go back into Excel, open a CSV, manipulate the data, do a Git check-in, etc.

I like that the on-the-fly test data meets compliance standards because you get that feedback immediately, and it's not like they're holding the data somewhere else. We can also pull in the data from our own systems. It's all encrypted, so it's secure.

    Generating reports off BlazeMeter is also quite nice. You can just click export or you can click on executed reports.

    What needs improvement?

Overall, it's helped our ability to address test data challenges. The test data features on their own are very good, but version control for test data isn't included yet. I think that's an area for improvement.

We can update the test data in the cloud. That's a good feature. There's also test data management, which is good. Runscope doesn't have test data management yet; Mock Services do, and performance testing has it. We can do the same test through JMeter, validating the same criteria, but the feedback from Runscope is quite visible. We can see the request and the response, what data comes back, and add the validation criteria. We can manage the test environments and test data, but running the same API request for multiple sets of test data is missing. We cloned the test cases multiple times to run them. They need to work on that.

    Version controlling of the test cases and the information, the ability to compare the current version and the previous version within Runscope would be really nice. The history shows who made the changes, but it doesn't compare the changes.

In the future, I would like to see integrations with GitLab and external Git repos so we could have some sort of version control outside as well. There is no current mechanism for that. The ability to directly import OpenAPI specifications instead of converting them to JSON would be nice. There are some features they could work on.


    For how long have I used the solution?

    I have been using this solution for more than a year and a half.

    I came across BlazeMeter because I was looking for something around mock services. I was also looking for a product or tool that tests HTTP/2, particularly HTTP/3 because the 5G core network is built on HTTP/2. I couldn't find a tool other than BlazeMeter that supports it.

    I tried to build mock services and tested the solution. Once I was happy, I also realized they have BlazeMeter Runscope, so I wanted to try it.

    What do I think about the stability of the solution?

    It's stable. I wouldn't say any application is without bugs, but I haven't seen many. We had issues once or twice, but it was mostly with browser caching. There haven't been any major issues, but there were improvements that could be made in a couple of areas. They were always happy to listen to us. They had their product teams, product owners, and product managers listen to our feedback. They would slowly take the right feedback and try to implement some of the features we wanted. They always ask us, "What is your priority? What will make the best impact for you as a customer?" We give our honest feedback. When we say what we need, they know that many other customers will love it.

They were also really good with the Log4j vulnerabilities. They came back with a fix less than two days after that came out. We had to turn off the services, but it was all good because Runscope didn't have an immediate impact. It was the performance container. They had some vulnerabilities because the original JMeter uses some of those Log4j packages. They had to fix Log4j in JMeter and then update their container.

    What do I think about the scalability of the solution?

    It's very scalable. The solution is built for scalability. I didn't know that we could even move into this sort of API world. I used to think, "We do those tests like this." JMeter provides its own sort of capability, but with BlazeMeter, there's a wow factor.

    We plan to increase coverage as much as possible.

    How are customer service and support?

    I would rate technical support 10 out of 10.

    BlazeMeter absolutely helps bridge agile and COE teams. We had some of the BlazeMeter team invited into our show and tell when we started. They saw our work and were quite happy. We showed them how we build our test cases. They also provided the feedback loop and told us what we could improve in different areas.

    We also have a regular weekly call with them to say, "These are the things that are working or not working," and they take that feedback. We'll get a response from them within a few weeks, or sometimes in a few days or a few hours, depending on the issue. If it's a new feature, it might take two or three weeks of additional development. If it's a small bug, they get back to us within hours. If it's a problem on our side, they have somebody on their team for support. I was really surprised to see tools provided to do that because I haven't seen anything like that with other tools. When there's a problem, they respond quickly.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    We switched because we started off with a BDD framework that was done in-house. We realized that the number of security vulnerabilities that come off Docker containers was a risk to us.

We still continue with that work because we had to move toward mutual TLS as well. We have that working at the moment, along with BlazeMeter. We've tried Postman, but it didn't support HTTP/2 when we looked a year and a half ago.

    How was the initial setup?

I did most of the initial setup. We had to go through proxies and more when we connected to it. I currently use the Docker-based one because they support Docker and Kubernetes. At the moment, it's deployed in one or two locations: one is a sandbox for experimenting, and one is an actual development site, which is really good.

    The initial deployment was very easy. It took a few hours. I forgot the proxy part, but once I did that, it was all good.

We can deploy a mock to build the application, and if we want to do it on-premises, as long as we have a Linux-based server, we can do it in 15 or 20 minutes. I was surprised; the moment it showed the hundreds of API combinations it would generate, I was a bit shocked, but then I understood what it was doing.

    I have a team of engineers who work on the solution and the different APIs that we need to support. I have two engineers who are really good with BlazeMeter. They were part of the virtualization team. There are a few engineers who started off with learning JMeter from YouTube and then did BlazeMeter University.

    Most of the time, maintenance is done on the cloud. Because we are behind the proxy, we recently realized that when they did an upgrade, the upgrade failed and it took down the service. We provided that feedback, so the next time they do automated upgrades, we won't have any issues. Other than that, we haven't had any issues.

    What was our ROI?

    Since deployment, we use this solution every day. We have seen the value from the beginning because it helped us build our automation frameworks. It helped us understand how we can build better test cases, better automation test cases, how the feedback loop is enabled, etc. 

It's saved us a lot of time. It reduces the overall test intervals. We can run the test quite quickly. We can provide confidence to stakeholders. When trying to move toward DevOps and new ways of working, the feedback loops need to be fast. When we deploy a change, we want to get fast feedback. That's very important, and BlazeMeter allows us to do that.

    We know that we can always trigger the test through Runscope on demand. At any point in time, it'll give us fast feedback immediately. It's quite easy to integrate with tools like Jenkins and Digital.ai, which is an overall orchestrator.

We tried to go the Jenkins route, but we realized that we don't even need to do that. The solution provides nice APIs that work with this sort of CI/CD. They have webhooks and different ways of triggering it. They have built-in Jenkins plugins for JMeter, BlazeMeter, etc. They understand how the automation frameworks and tools work.
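As a sketch of what triggering a run through those APIs can look like from a pipeline step: the endpoint path and key-based authentication below follow BlazeMeter's published v4 REST API, but the test ID and credentials are placeholders, so verify the details against current documentation before relying on them.

```python
import os
import requests

# Credentials injected by the CI system, never hard-coded (placeholders here).
AUTH = (os.environ["BLAZEMETER_KEY_ID"], os.environ["BLAZEMETER_KEY_SECRET"])
TEST_ID = "1234567"  # placeholder test ID

# Start a test run; path per BlazeMeter's public v4 REST API.
resp = requests.post(
    f"https://a.blazemeter.com/api/v4/tests/{TEST_ID}/start",
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
master_id = resp.json()["result"]["id"]
print(f"Started test run, master id {master_id}")
```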

    Their Taurus framework, which they built for the open-source community, is quite brilliant on its own, but BlazeMeter offers much more. Although it's built on the Taurus framework, you can still have test levels, you can group tests, etc.

    What other advice do I have?

    I would rate this solution 10 out of 10. 

    We try to avoid scripting. We use the scriptless testing functionality about 95% of the time. With JMeter, you don't need a lot of scripting. I don't need to know a lot of automation or programming at this stage to use it.

    We haven't faced any challenges in getting multiple teams to adopt BlazeMeter. I created a sandbox for my own team where they can experiment. People really wanted access to it, so I added more and more people, and the designers are now part of it.

    For others who are evaluating this solution, my advice is to do the BlazeMeter University course first before you start to use the product. It will give you a general understanding of what it is. It only takes half an hour to an hour.

    You don't always need to finish the course or pass the exam, but doing the course itself will definitely help. They have a JMeter basic and advanced course and a Taurus framework course. They have an API monitoring course, which will help for Runscope, and one for mocks. Most of the courses are quick videos explaining what the product does and how it works. At that stage, you can go back and build your first automation test case on JMeter or Runscope. It's brilliant.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Service Virtualization Developer at a tech services company with 10,001+ employees
    Real User
    Mock Services and API monitoring help us reduce cycle times, but MQ protocol and JDBC are needed
    Pros and Cons
    • "With the help of the Mock Services, we are overcoming everything. Wherever we are facing issues, whether they will be long term or temporary, by implementing the Mock Services we can bypass the faulty components that are not needed for our particular testing."
    • "One problem, while we are executing a test, is that it will take some time to download data. Let's say I'm performance testing with a high-end load configuration. It takes a minimum of three minutes or so to start the test itself. That's the bad part of the performance testing... every time I rerun the same test, it is downloaded again... That means I have to wait for three to four minutes again."

    What is our primary use case?

I'm working for a telecommunications client. We are using BlazeMeter's Mock Services as a priority, along with performance testing and API monitoring. These functions are each used on different projects.

    How has it helped my organization?

    One of the projects for this client is completely based on Mock Services and, with the help of that functionality, we are able to complete end-to-end testing with all the dependent components.

There are third-party suppliers who provide components that need to be tested, but most of the time we do not get those components on time. Normally, we would wait until the development is completed and only then go for automation. But now, with the help of the Mock Services, once we get the source code, we upload it directly into the Mock Services and into the API monitoring. Once we get the tracers, we are ready to point to the actual instances and test things immediately. By the time we get to that stage, we are ready for the market. That means that even before getting the complete component, we can start working with something. There is no need to wait. As a result of the Mock Services, the time it takes us to develop API automation is minimized.

Also, we had a number of performance testing tools, but we had to integrate third-party applications for generating reports. That was a pain point for us when it came to showcasing things to stakeholders so that they could be aware of what was happening. But now, everything is available once the performance testing is completed. We can immediately see the reports, and when a test is running, I can share the execution page with anyone else as a link. That means they can view exactly what is happening, moment by moment, regarding the response time, request time, and latency. That feature is not available in some of the other applications, and possibly not in any of them.

One of the areas where BlazeMeter benefits us is test cycle times. In the past, if there was a defect in a component, we would have to wait until the issue was fixed. Even though we were not testing that particular component, because of the dependency on that component, we would have to wait until the issue was fixed. If it ended up going beyond the deadline for the release cycle, we would leave that test case for the next release.

    With the help of the Mock Services, we are overcoming everything. Wherever we are facing issues, whether they will be long term or temporary, by implementing the Mock Services we can bypass the faulty components that are not needed for our particular testing. In that way, we are able to reduce our cycle times. In addition, we have some physical devices and network devices in our testing. It takes a week to create physical devices in a virtual way. Instead, with the Mock Services we are creating them in a minute, and that helps our end-to-end testing to be completed on time. The benefit of BlazeMeter's Mock Services is that it takes us through our testing early in the cycle.

    In a single line of business, in a particular call flow, if we have 1,000 test cases per release, 100 to 200 of them are with the help of the Mock Services. That saves us time, money, and manpower.

    And before we had BlazeMeter's API monitoring, if there were 10 components and anything was down, we would not be aware. We would not send a heartbeat every second to all of the components to check whether they were down or up. The API monitoring is a real benefit for us because we are able to schedule it for every 30 minutes or every hour, and we can keep on monitoring a component. If there is a failure, we will immediately be notified by email, even on the weekend. We can take action and report the situation to the data analyst and to the component people so that they can immediately work on fixing it.
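Conceptually, that scheduled monitoring behaves like the loop sketched below; this is a plain-Python illustration of the idea, not Runscope itself, and the component endpoints and alert hook are hypothetical.

```python
import time
import requests

# Hypothetical component health endpoints to watch.
COMPONENTS = {
    "billing": "https://internal.example.com/billing/health",
    "catalog": "https://internal.example.com/catalog/health",
}
INTERVAL_SECONDS = 30 * 60  # every 30 minutes, as described above

def notify(name: str, detail: str) -> None:
    # Stand-in for the email notification the platform sends on failure.
    print(f"ALERT: {name} check failed: {detail}")

def check_once() -> None:
    for name, url in COMPONENTS.items():
        try:
            requests.get(url, timeout=10).raise_for_status()
        except requests.RequestException as exc:
            notify(name, str(exc))

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(INTERVAL_SECONDS)
```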

    The API monitoring is one of the most excellent tools we have come across because of the scheduling and the results. We are able to analyze how stable a component is based on that monitoring.

    What is most valuable?

    The most valuable features for us are the API monitoring and the Mock Services.

    Another good thing is that we can upload JMX files and schedule and monitor performance testing. We are able to share results and see reports that we can't get in JMeter. In that way, the performance testing is good.

    In terms of the range of test tools, when there are API calls we can do automation testing, functional testing, performance testing, and use the Mock Services to create a situation that the APIs are down. We are able to handle everything that has to do with APIs. Whatever we have to test—the functionality, the behavior—we are able to do so with the help of BlazeMeter.

    What needs improvement?

    One problem, while we are executing a test, is that it will take some time to download data. Let's say I'm performance testing with a high-end load configuration. It takes a minimum of three minutes or so to start the test itself. That's the bad part of the performance testing.

    I don't think they can reduce that time because that's the functionality they have implemented in our BlazeMeter performance testing. But it's a pain point whenever we are running performance testing in a call or a demo, as well as in our live testing when all the business people are there.

    The first time I run a given test, if it takes three minutes to download onto my server that's understandable. But every time I rerun the same test, it is downloaded again, because once the test is completed the files that were downloaded are removed. That means I have to wait for three to four minutes again.

    We also had a call last week regarding secret keys. In JMX we have some Backend Listeners, such as Kibana, and there are usernames and passwords for them that we have to manually enter. When we upload the JMX file into BlazeMeter for performance testing, the usernames and passwords are viewable. Anyone who has access to BlazeMeter can download the JMX file and the usernames and passwords are visible to all those people. That's an issue with the performance testing.

    Also, all the competitors have MQ protocol support, which is lacking in BlazeMeter's Mock Services. Having MQ protocol support in the Mock Services would be great for us. JDBC, the database communication, is also lacking. If we had those things, we would be completely satisfied with BlazeMeter's Mock Services. 

And for the API monitoring, we are missing a data-driven approach. If, for a single API call, we have 50 to 100 test cases, there should be no need for us to create multiple steps or duplicate the test steps. Instead, if a data-driven approach were available, we could add the test data to an Excel sheet and feed it into a single test step to achieve what we need. We have raised this concern with the Perforce team as well, and they said they are working on it.
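The data-driven pattern being requested, one test step driven by many rows of test data, can be sketched client-side like this; the CSV file name, columns, and endpoint are hypothetical (the reviewer mentions Excel, but CSV keeps the sketch dependency-free).

```python
import csv
import requests

# Hypothetical test-data file, one row per case for a single API call:
# msisdn,expected_status
# 447700900001,200
# 447700900002,404
with open("test_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        resp = requests.get(
            f"https://api.example.com/subscribers/{row['msisdn']}",
            timeout=10,
        )
        expected = int(row["expected_status"])
        verdict = "PASS" if resp.status_code == expected else "FAIL"
        print(f"{row['msisdn']}: expected {expected}, "
              f"got {resp.status_code} -> {verdict}")
```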

    For how long have I used the solution?

    I've been using BlazeMeter for a year.

    What do I think about the stability of the solution?

    It's stable. Sometimes we do face issues, but they are understandable things.

    Every month or two months, something will happen in the back end. The UI will say, for example, that performance testing is down due to this or that reason, and that they are fixing it. Sometimes it affects our testing. We will be in a demo or in a call with our stakeholders where we are presenting and something will be down.

    We will raise a support ticket and they will say they are analyzing it and fixing it. They won't take much time, but at that time, it's a pain point. But it happens in all tools. Because it is a cloud tool it's expected, but it's not happening very frequently, so we are happy with it.

    How are customer service and support?

    We have weekly calls with the BlazeMeter support team, and that's a great thing. During those calls they will ask if there are any issues and whether we need something resolved. If we raise any concerns, they immediately help us during that call. If not, they will ask us to raise a ticket and they follow up on it on both sides—on the support side and with us. They will give us updates. I haven't seen any other companies do that. I have been amazed with the basic support.

    We also get weekly updates on whatever the roadmap contains and the new features they are going to be implementing. If we have any doubts we address them in the call. We are using some other tools, but we haven't seen this much support from any other company. When it comes to support, Perforce is the best company I have ever come across.

    How would you rate customer service and support?

    Positive

    Which other solutions did I evaluate?

    We haven't had a chance to use the cloud services because of security issues related to our company. We only use the on-prem server. But the cloud services are one of the best things about BlazeMeter when comparing it with its competitors.

    We have older tools, like CA DevTest, that we are still using due to dependencies on JMX, MQ, and JDBC steps that are not available with BlazeMeter. With DevTest we are able to handle a lot of the custom extensions. Instead of the Mock Services, we were using the CA DevTest Service Virtualization tool. We want to move completely to BlazeMeter but we can't because of those dependencies.

CA DevTest is the main competitor, but it doesn't have performance testing available. Both solutions have pluses and minuses.

    DevTest is hard to use. It has too many features for Service Virtualization. If a beginner is trying to learn something in DevTest, it's hard. It might take a month or two months to get some understanding of what the DevTest tool does. BlazeMeter is very simple. Even for beginners, they give some options in the Mock Services. If you're a beginner, you can create a Mock Service and it gives you a description for each and every step. This way, beginners can easily adopt BlazeMeter.

    In addition to the step-by-step demos, there is the BlazeMeter University. When we onboard people into BlazeMeter, we ask them to go through those courses. For example, if we are asking them to work on API monitoring, we have them do the course on API monitoring. Once they get the certification, we have them work on the API monitoring. With the BlazeMeter University, there is no need for us to have a separate KB on how it will work or how it will respond. Onboarding people into BlazeMeter is not a problem for us.

    What other advice do I have?

We were using the functional testing for APIs, but it has been disabled in our organization. I asked what the purpose of disabling it was, and they said it was to make sure that everyone is using the API monitoring. Although we requested that they enable it again for our purposes, so far we haven't had much chance to explore the API functional testing.

Overall, I would rate the solution at seven out of 10 because I have sent some requirements for API monitoring and performance testing on Mock Services separately, to separate teams, for things that should be introduced into BlazeMeter. Until those things are available, I am not able to use some of the components.

    Which deployment model are you using for this solution?

    On-premises
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    Ryan Mohan - PeerSpot reviewer
    Quality Assurance Manager at a financial services firm with 10,001+ employees
    Real User
    Enterprise performance testing platform that gives us a centralized place to execute load tests, do reporting, and have different levels of user access control
    Pros and Cons
    • "The orchestration feature is the most valuable. It's like the tourist backend component of BlazeMeter. It allows me to essentially give BlazeMeter multiple JMeter scripts and a YAML file, and it will orchestrate and execute that load test and all those scripts as I define them."
    • "BlazeMeter needs more granular access control. Currently, BlazeMeter controls everything at a workspace level, so a user can view or modify anything inside that workspace depending on their role. It would be nice if there was a more granular control where you could say, "This person can only do A, B, and C," or, "This user only has access to functional testing. This user only has access to mock services." That feature set doesn't currently exist."

    What is our primary use case?

    Our primary use case for BlazeMeter is performance testing. We leverage BlazeMeter as our enterprise performance testing platform. Multiple teams have access to it, and we execute all of our load tests with BlazeMeter and do all the reporting through it. We also use it for mock services.

We have a hybrid deployment model. The solution is hosted and maintained by BlazeMeter. We also have on-premise locations within our network that allow us to load test applications behind our corporate firewalls. That's for test environments and non-production applications that are not externally available. It's a hybrid model that is mostly SaaS, but the on-premises component allows us to execute those load tests and report the results back to the BlazeMeter SaaS solution.

The cloud provider is GCP. BlazeMeter also grants access to Azure and AWS locations from which you can execute load tests. They engage with all three of the major cloud providers.

    How has it helped my organization?

    BlazeMeter gives us a centralized place to execute load tests, do reporting, and have different levels of user access control. BlazeMeter has a full API, which is the feature that's given us a lot of value. It allows us to integrate with BlazeMeter in our CI/CD pipelines, or any other fashion, using their APIs. It helps increase our speed of testing, our reporting, and our reporting consistency, and gives us a central repository for all of our tests, execution artifacts, and results.

    BlazeMeter added a mock services portion. We used to leverage a different product for mock services, and now that's all done within BlazeMeter. Mock services help us tremendously with testing efforts and being able to mock out vendor calls or other downstream API calls that might impact our load testing efforts. We can very easily mock them out within the same platform that hosts our load tests. That's been a huge time saver and a great value add.

    BlazeMeter absolutely helps bridge Agile and CoE teams. It gives us both options. BlazeMeter is designed so that we can grant access to whoever needs it. We can grant access to developers and anyone else on an Agile team. It allows us to shift left even farther than a traditional center of excellence approach would allow us.

It absolutely helps us implement shift-left testing. One of the biggest features for shifting left is BlazeMeter's full, open API. Regardless of the tools we're leveraging to build and deploy our applications, we can integrate them with BlazeMeter, whether that's Jenkins or some other pipeline technology. Because BlazeMeter has a full API, it lets us start tests, end tests, and edit tests. If we can name it, it can be done via the API. It tremendously helps us shift left, run tests on demand, and run tests as part of code builds.
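A pipeline gate built on that API might look like the sketch below, which polls a started run until it ends; the endpoint and status field follow BlazeMeter's published v4 REST API, but the exact names are assumptions to verify against current documentation.

```python
import os
import time
import requests

AUTH = (os.environ["BLAZEMETER_KEY_ID"], os.environ["BLAZEMETER_KEY_SECRET"])
MASTER_ID = os.environ["BLAZEMETER_MASTER_ID"]  # run ID captured at test start

# Poll the run until it ends; endpoint/field names per the public v4 API
# at the time of writing -- verify before relying on them.
while True:
    resp = requests.get(
        f"https://a.blazemeter.com/api/v4/masters/{MASTER_ID}/status",
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    if resp.json()["result"].get("status") == "ENDED":
        break
    time.sleep(60)

print("Run ended; fetch the report here and apply pass/fail criteria.")
```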

Overall, using BlazeMeter decreased our test cycle times, particularly because of the mock service availability and the ease with which we can stand up mock services or, in the case of an Agile approach, our development teams can stand up mock services to aid them in their testing.

    It's fast, and the ability to integrate with pipelines increases our velocity and allows us to test faster and get results back to the stakeholders even quicker than before.

    The overall product is less costly than our past solutions, so we've absolutely saved money.

    What is most valuable?

The orchestration feature is the most valuable. It's like the Taurus backend component of BlazeMeter. It allows me to essentially give BlazeMeter multiple JMeter scripts and a YAML file, and it will orchestrate and execute that load test and all those scripts as I define them.
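To illustrate that orchestration, the sketch below builds a Taurus-style YAML that runs two JMeter scripts in one execution and launches it with the bzt CLI; it assumes Taurus (`bzt`) and PyYAML are installed via pip, and the script names and load figures are hypothetical.

```python
import subprocess
import yaml  # PyYAML; Taurus itself ships the `bzt` command

# Two JMeter scripts orchestrated in one run -- names and load
# figures are hypothetical.
config = {
    "execution": [
        {"scenario": "checkout", "concurrency": 50, "ramp-up": "2m", "hold-for": "10m"},
        {"scenario": "search", "concurrency": 20, "ramp-up": "1m", "hold-for": "10m"},
    ],
    "scenarios": {
        "checkout": {"script": "checkout.jmx"},
        "search": {"script": "search.jmx"},
    },
}

with open("load-test.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# bzt reads the YAML, runs both scripts, and aggregates the results.
subprocess.run(["bzt", "load-test.yml"], check=True)
```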

    The reporting feature runs parallel with orchestration. BlazeMeter gives me aggregated reports, automates them, and allows me to execute scheduled tests easily on my on-premise infrastructure.

BlazeMeter's range of test tools is fantastic. BlazeMeter supports all sorts of different open-source tools, like JMeter and Gatling, as well as WebDriver-based testing, with scripts in formats like Python and YAML. If it's open-source, BlazeMeter supports it for the most part.

    It's very important to me that BlazeMeter is a cloud-based and open-source testing platform because, from a consumer perspective, I don't have to host that infrastructure myself. Everything my end users interact with in the front-end UI is SaaS and cloud-based. We don't have to manage and deploy all of that, which takes a lot of burden off of my company.

    The open-source testing platform is fantastic. They support all of the open-source tools, which gives us the latest and greatest that's out there. We don't have to deal with proprietary formats. A secondary bonus of being open-source and so widely used is that there is a tremendous amount of help and support for the tools that BlazeMeter supports.

    What needs improvement?

    BlazeMeter needs more granular access control. Currently, BlazeMeter controls everything at a workspace level, so a user can view or modify anything inside that workspace depending on their role. It would be nice if there was a more granular control where you could say, "This person can only do A, B, and C," or, "This user only has access to functional testing. This user only has access to mock services." That feature set doesn't currently exist.

    For how long have I used the solution?

    I have used this solution for almost five years.

    What do I think about the stability of the solution?

    The stability has absolutely gotten better over the years. They had some challenges when they initially migrated the platform to GCP, but most of those were resolved. Overall, they have very high availability for their platform. If there's an issue, they have a status page where they publish updates to keep customers in the loop. 

    If you email their support team or open a ticket through the application, they're always very quick to respond when there's a more global uptime issue or something like that. Overall, they have very high availability.

    How are customer service and support?

    Technical support is absolutely phenomenal. I've worked with them very closely on many occasions. Whether it's because we found a bug on their side, or an issue we're having with our on-premises infrastructure, they're always there, always willing to support, and are very knowledgeable.

    I would rate technical support as nine out of ten.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

We previously used HP Performance Center. We used HP Virtual User Generator as a predecessor to JMeter for our scripting.

    We switched because it's a very outdated tool and toolset. BlazeMeter is a more modern solution. It supports many more tools, and it allows us to solve problems that were blocked by the old solution. 

The BlazeMeter platform is designed for CI/CD: it's continuous integration- and continuous delivery-friendly, Agile-friendly, and it supports all of the modern software development methodologies.

    Our old solution didn't really cooperate with that. It didn't have the API or any of the test data functionality that we've talked about with generating or pulling test data. It didn't have any of the mock services. BlazeMeter gave us the kind of one-stop-shop option that allows us to accelerate our development and velocity within our Agile space.

    How was the initial setup?

    From my company's side, I'm the "owner" of BlazeMeter. I worked with a support team to set up the on-premises infrastructure. I still work with them.

    Deployment was straightforward and simple. We pulled some Docker images and deployed them. The whole on-premise deployment methodology is containerized, whether it's standalone unit servers running Docker or a Kubernetes deployment, which allows you to deploy on-premise BlazeMeter agents through a Kubernetes cluster and your own GCP environment or on-premises Kubernetes environment.
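A containerized agent rollout along those lines could be scripted with the Docker SDK for Python, as in the sketch below; the image name and environment variables are modeled on BlazeMeter's documented private-location agent and are assumptions here, since the BlazeMeter UI generates the exact run command with real IDs.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Image and variable names are assumptions modeled on BlazeMeter's
# private-location agent; take the real values from the BlazeMeter UI.
container = client.containers.run(
    "blazemeter/crane",
    detach=True,
    restart_policy={"Name": "always"},
    environment={
        "AUTH_TOKEN": "<token-from-blazemeter-ui>",
        "HARBOR_ID": "<private-location-id>",
        "SHIP_ID": "<agent-id>",
    },
)
print(f"Agent container started: {container.short_id}")
```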

    What about the implementation team?

    We worked directly with BlazeMeter.

    Which other solutions did I evaluate?

We evaluated Load.io and a couple of other solutions. When we brought on BlazeMeter five years ago, they were absolutely the leader of the pack, and I believe they still are. They have a much more mature solution and an enterprise feel. The whole platform is much more developed and user-friendly than some of the other options we evaluated.

    I don't know if there are any features in other platforms that BlazeMeter didn't have; it was mostly the other way around. There were things BlazeMeter had that other platforms didn't have, and existing relationships with the company that used to own BlazeMeter, Broadcom.

    What other advice do I have?

    I would rate this solution an eight out of ten. 

    It's a fantastic solution and can do so many things. But unless you have a team that's already very experienced with JMeter and BlazeMeter, there will be some ramp-up time to get people used to the new platform. Once you're there, the features and functionality of BlazeMeter will let you do things that were absolutely not feasible on your previous platforms.

    We don't really leverage the actual test data integration and creation functionality, but we leverage some of the synthetic data creation. BlazeMeter will let you synthetically generate data for load tests, API, or mock services. We have leveraged that, but we have not leveraged some of the more advanced functionality that ties in with test data management.

    The ability to create both performance and functional testing data is not very important to us. A lot of the applications we test are very data-dependent and dependent on multiple downstream systems. We don't leverage a lot of the synthetic data creation, as much as some other organizations might.

    We don't extensively use BlazeMeter's ability to build test data on-the-fly. We use it to synthetically generate some test data, but a majority of our applications rely on existing data. We mine that in the traditional sense. We don't generate a lot of synthetic test data or fresh test data for each execution.

    BlazeMeter hasn't directly affected our ability to address test data challenges. We don't directly leverage a lot of the test data functionality built into BlazeMeter, but we're trying to move in that direction. We have a lot of other limitations on the consumer side that don't really let us leverage that as much as we could. It certainly seems like a great feature set that would be very valuable for a lot of customers, but so much of our testing is done with existing data.

    We haven't had any significant challenges with getting our teams to adopt BlazeMeter. There were just typical obstacles when trying to get people to adopt anything that's new and foreign to them. Once most of our users actually spent time using the platform, they really enjoyed it and continued to use it. 

There were no significant hurdles. Their UI is very well-designed and user-friendly. Perforce puts a lot of effort into designing its features and functionalities to be user-friendly. I've participated in a few sessions with them for upcoming features and wireframes of new functionalities.

    Which deployment model are you using for this solution?

    Hybrid Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Google
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    QA Automation & Perform Lead (C) at Canadian Tire
    Real User
    A highly stable cloud-based tool with an impressive depth and breadth of functionality
    Pros and Cons
    • "Using cloud-based load generators is highly valuable to us, as we can test from outside our network and increase load generation without having to upscale our hardware as much. The cloud load generator is there when we need it and is the feature we leverage the most."
    • "We encountered some minor bugs, and I would like to have the ability to add load generators to workspaces without having to use APIs. We can't do that now, so we're beholden to the APIs."

    What is our primary use case?

    We use the solution for enterprise performance testing of various technologies including web services, APIs, and web GUIs.

    We deployed the solution to increase our performance testing footprint, which we needed to upscale for the maturity of our operation. 

    We have six on-prem load generators on our network, and the rest of our deployment is in the cloud. It's a very simple architectural design.

    How has it helped my organization?

    BlazeMeter opened up performance testing for us. Our old solution was a client-based performance testing tool, and for staff to access it, they needed to remotely connect to a Windows VM and book time with that controller. Now our tool is web-based, and we onboarded 12 to 14 teams to BlazeMeter, which would not have happened before. Our CoE team was the go-to for performance testing, but the solution has opened up the practice to the whole enterprise, making teams more self-sufficient, and that's the most significant benefit. Performance testing is no longer segregated to one team.

    What is most valuable?

    Using cloud-based load generators is highly valuable to us, as we can test from outside our network and increase load generation without having to upscale our hardware as much. The cloud load generator is there when we need it and is the feature we leverage the most.

We have a very high opinion of the range of test tools the solution provides; it has a great deal of potential, and we are just scratching the surface of it currently. As our maturity and skill set with the product increase, we'll be able to leverage it more. For example, we don't really use mock services yet. We know how to, but we're still set in some of our ways.

BlazeMeter being cloud-based and open-source is vital; it was one of our top priorities when choosing a solution. Much like the rest of the world, we're moving away from the old paradigm of the Windows days, where we would bring up a server, get Windows licenses and an operating system, and maintain it all. With BlazeMeter, most of that is done for us, and we don't have to worry about infrastructure. We have on-prem load generators for teams needing to run load tests from within our network, and we need to maintain that capacity. However, we don't have to host anything outside of the load generators in the network, so the maintenance effort and cost are much less than they would be with a legacy system.

    The solution does bridge Agile and CoE teams. It's a shift-left tool, and testing comes in much earlier than in the past. BlazeMeter is a valuable asset in this regard. 

    The tool helped us to implement shift-left testing. Many of our teams with the required skillset can include performance testing as part of their build runs. This may not be high-level testing; internally, we refer to it as early performance testing. It allows teams to confirm the software is functioning correctly early, which was not the case before. We would wait until a certain point in the SDLC before running a performance check, and now we're able to implement that much earlier in the process. 

    We currently don't have any stats on changes in our test cycle times, but there is no doubt in my mind that BlazeMeter improved our software quality.

We have not faced challenges in getting multiple teams to adopt BlazeMeter. We onboarded around 50 users in three quarters, which is incredible considering we had two performance testers before implementing the solution. Our only challenge is skill sets; our staff want to adopt the tool and understand its importance, but they may not have the resources or skill set to do so. Those with the necessary skill set are onboarded as soon as their project is greenlighted.

    What needs improvement?

    Our biggest challenge is the skill set required to operate the solution because we used to have a centralized performance testing team. Now we've opened it up to other teams; some needed to onboard new resources. The solution is simple and user-friendly, but we still need the right staff to use it.

    We encountered some minor bugs, and I would like to have the ability to add load generators to workspaces without having to use APIs. We can't do that now, so we're beholden to the APIs.

    For how long have I used the solution?

    We have been using the solution for about nine months.

    What do I think about the stability of the solution?

    The solution is very stable. We had a few issues with users getting 404 errors recently, but that's the first time we have encountered any problems in three quarters. 

    What do I think about the scalability of the solution?

The scalability is incredible. We could scale it as big or as small as we want, with our license being the sole limitation. The resources run as Docker containers built from Docker images. We could scale within a few minutes if necessary.

    How are customer service and support?

    The technical support is excellent. When we had hiccups during deployment, they responded quickly with effective solutions for us.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    We used other tools and switched because they weren't as user-friendly. BlazeMeter offered us the ability to increase our performance testing footprint without requiring a high level of performance testing expertise from our QA staff. Additionally, our old solutions were client-based, and BlazeMeter is cloud-based, providing all the advantages that come with that.

    How was the initial setup?

    The deployment is very straightforward. That was one of our criteria, as we didn't want a complex new enterprise solution rollout. There were a few bumps during deployment, but most of that was on our side. BlazeMeter is relatively simple compared to other enterprise solutions we implemented.

    Less than ten staff were involved in the deployment. We used Linux Enterprise to house the six on-premise load generators, and there were a couple of employees responsible for Docker, our solutions architect, and myself as the admin.

    What was our ROI?

    I don't have a concrete figure, but I can say once we sunset our old solution, that will save us a significant amount of money on infrastructure, licensing, and maintenance. I also think there is an ROI associated purely with the increased quality of our software, thanks to BlazeMeter.

    What's my experience with pricing, setup cost, and licensing?

    The product isn't cheap, but it isn't the most expensive on the market. During our proof of concept, we discovered that you get what you pay for; we found a cheaper solution we tested to be full of bugs. Therefore, we are willing to pay the higher price tag for the quality BlazeMeter offers.

    Which other solutions did I evaluate?

    We carried out a proof of concept with four tools, including BlazeMeter. It's more stable and mature, with well-documented APIs. BlazeMeter University was a significant consideration for us due to our requirements; it helped us roll out the solution to multiple teams. The APIs for the other solutions seemed like an afterthought.

    What other advice do I have?

    I would rate the solution an eight out of ten. 

    The solution enables the creation of test data for both performance and functional testing, but our use is focused on performance testing. We don't currently use it for functional testing, though we are talking about using test data management for that. We have our in-house automation framework, so the ability to create both functional and performance test data isn't a high priority for us.

    We don't use BlazeMeter's ability to build test data on-the-fly, not because we aren't aware of it, but because we are still at the early stages with the solution. Until fairly recently, just one other person and I were in charge of performance testing for the entire company, so having self-sufficient teams is an immense change for us as an organization.

    I would say it's critical to have the appropriate skill sets among the staff; we could deploy just about any solution in an enterprise, but it won't be used to its full capacity without the proper skills. BlazeMeter showed us how little performance testing we were doing before and how vital increasing that footprint is. We've onboarded 50 users; that's 50 users who were not engaged less than a year ago and can all carry out performance testing.

    This solution can work very well for enterprise companies with a more advanced skill pool to draw from. For beginners in this area, specific skills such as JMeter scripting are required to use the application. It's easier to use than most solutions but requires a particular skill set to deploy and operate successfully. A good solutions architect and QA leads are essential in evaluating any product.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    QA Automation Engineer with 201-500 employees
    Real User
    Top 20
    The action groups allow us to reuse portions of our test and update multiple tests at once
    Pros and Cons
    • "The feature that stands out the most is their action groups. They act like functions or methods and code, allowing us to reuse portions of our tests. That also means we have a single point for maintenance when updates are required. Instead of updating a hundred different test cases, we update one action group, and the test cases using that action group will update."
    • "The performance could be better. When reviewing finished cases, it sometimes takes a while for BlazeMeter to load. That has improved recently, but it's still a problem with unusually large test cases. The same goes for editing test cases. When editing test cases, it starts to take a long time to open those action groups and stuff."

    What is our primary use case?

    We have a couple of use cases for BlazeMeter. One is performance testing. It allows us to aggregate the execution and reporting of our performance tests. We can also create automated functional tests relatively quickly compared to writing tests in a coded platform like Java.

    Around 20 people in the QA department are using BlazeMeter to test Mendix-based applications. We're doing regression testing on 22 applications, and we have at least two environments that we interact with regularly: a development environment and a pre-production environment.

    How has it helped my organization?

    Before BlazeMeter, we didn't have a performance test aggregator. Teams were running one-off JMeter tests that weren't stored in a repository. JMeter can generate some reporting, but it's nowhere near as nice as what BlazeMeter provides, and BlazeMeter's reports are more readily understood by the development teams we work with and by management. That part is great.
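
    For context, this is roughly what one of those one-off local runs looked like, sketched in Python; the file names are illustrative. The `-e -o` flags produce JMeter's built-in HTML dashboard, which is the reporting we found lacking compared to BlazeMeter.

    ```python
    # Hedged sketch of a one-off, non-GUI JMeter run.
    import subprocess

    subprocess.run(
        [
            "jmeter",
            "-n",                       # non-GUI mode
            "-t", "checkout_flow.jmx",  # test plan (illustrative name)
            "-l", "results.jtl",        # raw sample log
            "-e", "-o", "report/",      # generate the HTML dashboard
        ],
        check=True,
    )
    ```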

    We initially purchased the tool for performance testing, but we discovered that we had access to functional testing, so we started using that. That's been great for a lot of the same reasons. It increases visibility and gets everybody on the same page about which tests can run and the status of our regression and functional tests.

    BlazeMeter can create test data for performance and functional testing. We don't have much use for that currently, but I could see that being useful for individual functional tests in the future. It's nice to have automatic data generation for test cases.

    We haven't used BlazeMeter for shift-left testing. The functional testers embedded with the sprint teams don't do automation. That's all kicked down the road, and the automation is done outside of the sprint. While there is a desire to start attacking things that way, it never really got any traction.

    I believe BlazeMeter has also reduced our test times, but I can't quantify that.

    It's helped us with our test data challenges. BlazeMeter's implementation is great, and I don't want to detract from that, but our applications have some custom quirks, and we work on a different platform than many others do, so the test data features probably haven't been as beneficial to us as they would be for most.

    What is most valuable?

    The feature that stands out the most is their action groups. They act like functions or methods in code, allowing us to reuse portions of our tests. That also means we have a single point for maintenance when updates are required. Instead of updating a hundred different test cases, we update one action group, and the test cases using that action group will update.

    The process is pretty straightforward. You can enter data into spreadsheets or use their test data generation feature. You can create thousands of data points if you want. We aren't currently using it to create that much data, but it could easily be used to scale to that. The solution includes a broad range of test tools, including functional tests, performance tests, API testing, etc. They're continuously expanding their features. 
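
    To illustrate the spreadsheet route, here is a small sketch of pre-generating data for a data-driven test; the fields and volumes are illustrative, not our actual data model.

    ```python
    # Hedged sketch: generating spreadsheet-style test data in bulk.
    import csv
    import random

    with open("test_users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "email", "order_total"])
        for i in range(1000):  # scale to thousands of rows as needed
            writer.writerow([
                f"user{i:04d}",
                f"user{i:04d}@example.com",
                round(random.uniform(5.0, 500.0), 2),
            ])
    ```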

    I also like that it's a cloud-based solution, which gives me a single point of execution and reporting. That's great because we can take links to executed test cases and send those to developers. If they have questions, the developers can follow that link to the test and duplicate it or run the test for themselves.

    A cloud solution can be a little slower than an on-premises client or maintaining test cases locally on our machines. However, we've also run into issues with local maintenance: sometimes people forget to push the latest changes to the repository. That's not a problem with BlazeMeter because we're doing all the work in the cloud.

    Out of all the functional tests, scriptless testing has been the standout piece for my team because it's cloud-based. It's easy for everybody to get into the navigation, and it's pretty intuitive. There's a recorder that's already built into it. It's easy to get started writing test cases with scriptless testing.

    BlazeMeter's object repository provides a single point of update for us with regard to locators or selectors for our web elements. It's the same with the action groups. It's incredibly valuable to have reusable action groups that give us a single point for maintenance. It saves a ton of maintenance time.
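
    The value of that single point of update is easiest to see in a generic sketch; this shows the pattern, not BlazeMeter's internal format. When a selector changes, one entry is edited instead of a hundred test cases.

    ```python
    # Hedged sketch of the object-repository pattern: selectors live in
    # one place, and every test resolves elements through it.
    SELECTORS = {
        "login.username": "#username",
        "login.password": "#password",
        "login.submit":   "button[type='submit']",
    }

    def locator(name: str) -> str:
        """Resolve a logical element name to its current selector."""
        return SELECTORS[name]
    ```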

    What needs improvement?

    The performance could be better. When reviewing finished cases, it sometimes takes a while for BlazeMeter to load. That has improved recently, but it's still a problem with unusually large test cases. The same goes for editing test cases. When editing test cases, it starts to take a long time to open those action groups. 

    For how long have I used the solution?

    We've been using BlazeMeter for a little more than a year now.

    What do I think about the stability of the solution?

    BlazeMeter is pretty solid. The only complaint is performance. When we get massive tests, we run into some issues.

    What do I think about the scalability of the solution?

    We've never had issues with scalability. We've got hundreds of tests in BlazeMeter now, and we haven't had a problem aside from some performance problems with reporting. 

    How are customer service and support?

    I rate BlazeMeter support ten out of ten. The BlazeMeter team has been fantastic. Anytime we need something, they're always on it fast. We have regular meetings with the team where we have an opportunity to raise issues, so they help us find solutions in real-time. That's been great.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    We were previously using Java and Selenium. We implemented BlazeMeter for the performance testing. When we discovered the functional test features, it was easy to pick up and start using. It was an accident that we stumbled into. Our use grew out of an initial curiosity of, "Let's see if we can create this test." And, "Oh, wow. That was really quick and easy." And it grew from there into a bunch more tests.

    How was the initial setup?

    Our DevOps team did all the setup, so I wasn't involved. We have faced challenges getting our functional test teams to engage with BlazeMeter. They don't have automation experience, so they're hesitant to pick it up and start using it. We've made a couple of attempts to show them how to get started with scriptless testing, but the incentive has not been good enough. Generally, it's still the regression team that handles the automation with BlazeMeter, as well as whatever else we're using.

    After deployment, we don't need to do much maintenance. Sometimes, we have to update test cases because they break, but BlazeMeter itself is low-maintenance.

    What was our ROI?

    We've seen a return. I don't know exactly how many test cases are in BlazeMeter now, but we've added quite a few functional test cases. It's also the tool our performance testers use right now, in conjunction with JMeter.

    What's my experience with pricing, setup cost, and licensing?

    I can't speak about pricing. My general evaluation isn't from that standpoint. I make the pitch to the leadership, saying, "I think we should get this," and somebody above me makes a decision about whether we can afford it.

    Which other solutions did I evaluate?

    We looked at other solutions for performance testing, not functional testing.

    A few points about BlazeMeter stood out. One was BlazeMeter's onboarding team. They seemed more helpful and engaged. We had a better rapport with them initially, and their toolset integrated well with JMeter, the solution we were already using. It's also a much more cost-effective solution than the other options.

    What other advice do I have?

    I rate BlazeMeter nine out of ten. There's still some room to grow, but it's a pretty solid product. If you're comparing this to other tools and you're thinking about using BlazeMeter for functional testing, take a look at the action groups, object library, and test data generation features. Those three things make your day-to-day work a lot easier. It simplifies creating and maintaining your tests. 

    Which deployment model are you using for this solution?

    Public Cloud

    If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

    Other
    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Robinson Caiado Guimarães - PeerSpot reviewer
    Sales Leader at Better Now
    Real User
    Helps us implement shift-left testing, and ability to build test data on-the-fly helps us increase test coverage
    Pros and Cons
    • "One thing that we are doing a lot with the solution, and it's very good, is orchestrating a lot of JMeter agents. This feature has helped us a lot because we can reuse other vendors' performance scripts that they have used with JMeter before."
    • "I believe that data management and test server virtualization are things that Perforce is working on, or should be working on."

    What is our primary use case?

    We are using it to execute performance testing for our website and mobile applications, including e-commerce solutions, internet banking, and more.

    Our applications are in a cloud environment and have a lot of users using them at the same time. We are looking to create a better experience for those users. We execute this kind of performance testing to establish a baseline for response times, which we then work to reduce.

    We are both a customer of BlazeMeter and a partner that implements it for other customers.

    How has it helped my organization?

    It helps us to work faster when doing performance testing. Because we're faster, we can better use our resources. We have more capacity for our backlog.

    We have better visibility into test metrics by managing the reports and dashboards that present the results of test execution. That helps us to evaluate the performance parameters that the application should achieve. It has been great seeing the results that we are achieving.

    It has definitely helped us to implement shift-left testing. By doing performance tests faster and earlier in the process, we have the opportunity to prevent performance issues in production. That aspect is very good.

    In addition, the Scriptless Testing functionality means we can build a performance testing team in which some members don't need extensive programming or development experience. We can create a mixed team of professionals with great expertise and less experienced people, so that we can prepare the latter for the future.

    Another benefit is the ability to build test data on-the-fly. That helps us increase test coverage. It's one of the main functionalities that helps us with that. It has also decreased our test cycle times by almost 35 percent, but I do believe that it could be even more than that.

    What is most valuable?

    One thing that we are doing a lot with the solution, and it's very good, is orchestrating a lot of JMeter agents. This feature has helped us a lot because we can reuse other vendors' performance scripts that they have used with JMeter before.
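
    As an illustration of reusing an existing vendor script, Taurus (the open-source `bzt` tool that BlazeMeter builds on) can take a plain .jmx file and push it to the cloud for orchestration. The file name is illustrative, and the exact flags should be checked against the Taurus documentation.

    ```python
    # Hedged sketch: handing an existing JMeter script to BlazeMeter via Taurus.
    import subprocess

    subprocess.run(["bzt", "vendor_script.jmx", "-cloud"], check=True)
    ```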

    It can also be used for both performance and functional testing of any application and we can manage different kinds of data to better cover the performance and functional testing of the applications. For example, if we need different kinds of data to test the different uses of an application, we can use this platform to help us with that. 

    Another thing we like is that while it's difficult sometimes to connect this kind of performance test solution with application performance monitoring solutions, BlazeMeter has been great. Our experience with that has been great.

    BlazeMeter is the perfect tool for us because it works with cloud, open-source, and other vendors' solutions. It plays well with the other testing solutions in both our and our customers' ecosystems; for example, it's very good for performance testing mobile solutions and works well with other test data management solutions. It's very easy to integrate into the software testing processes and tools that our customers are using.

    It's also very good at bridging Agile and CoE teams. One thing that we are exploring right now, to make sure it works correctly, is the integration of BlazeMeter in the continuous integration and continuous delivery journey.
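
    A typical shape for that CI/CD integration is a pipeline step that starts an existing BlazeMeter test through the REST API and waits for the run to finish. The sketch below follows the v4 API style, but the status handling is an assumption, so confirm field names and terminal values against the current documentation.

    ```python
    # Hedged sketch: triggering a BlazeMeter test from a CI/CD pipeline.
    import time
    import requests

    BASE = "https://a.blazemeter.com/api/v4"
    AUTH = ("API_KEY_ID", "API_KEY_SECRET")  # placeholder credentials
    TEST_ID = 987654                         # illustrative test ID

    run = requests.post(f"{BASE}/tests/{TEST_ID}/start", auth=AUTH, timeout=30)
    run.raise_for_status()
    master_id = run.json()["result"]["id"]

    while True:  # poll until the run ends, then gate the pipeline on it
        result = requests.get(
            f"{BASE}/masters/{master_id}/status", auth=AUTH, timeout=30
        ).json()["result"]
        if result.get("status") == "ENDED":  # assumed terminal value
            break
        time.sleep(30)
    ```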

    What needs improvement?

    I believe that test data management and service virtualization are things that Perforce is working on, or should be working on.

    For how long have I used the solution?

    I have been using BlazeMeter for three to four years.

    What do I think about the stability of the solution?

    I don't recall having any issues with its stability. I'm extremely satisfied with that.

    What do I think about the scalability of the solution?

    The scalability is great. We can have a lot of virtual users when we need them.

    We use BlazeMeter in different environments because we have customers with on-prem and cloud solutions.

    How are customer service and support?

    I'm very happy with the technical support. They have answered all our needs.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    We went with BlazeMeter because we can easily access the platform through the internet and because it is very easy to incorporate it into our processes.

    What was our ROI?

    The return on investment is guaranteed with BlazeMeter. If you only look at the price, you will only see one side of the financial equation. We like to evaluate the return on investment in terms of reduced testing time, increased productivity, and the ability to deliver many more features into production with BlazeMeter.

    Which other solutions did I evaluate?

    I've worked with other performance testing, service virtualization, and test data management solutions. I have been working in the software testing area since 2000.

    People love using JMeter, which is an open-source solution, but I prefer BlazeMeter because we can easily orchestrate a lot of testing. It has additional features that help make our performance and non-functional-requirements testing better, because we can integrate it with functional testing. And there are service virtualization and test data management features in the same solution. In a good number of cases, we only need BlazeMeter to do many things, rather than using two or three tools to do the same job. It's a single workplace where we can do many things.

    What other advice do I have?

    My advice would be not to look only at the price as the only point of decision. Evaluate if the tool is a good fit with your business and challenges.

    We are using the SaaS solution. For companies that want to start, it's a better approach because it's easy to use: Pay and go. Of course, in large accounts, we are facing companies that are looking not only for cloud and private cloud, but for hybrid solutions where it is installed on-premises and in the cloud. BlazeMeter is up to this kind of challenge.

    Adoption is a challenge that we face with every new tool or process. Because BlazeMeter is easy to use, we can adopt it faster. For developers and Agile teams, it's very helpful when we can use something quickly. That has helped a lot in the adoption.

    The level of maintenance is okay. It is nothing that would create barriers for us. We have a team of eight people involved in the solution, and their roles include administration and performance testing.

    BlazeMeter is one of the top three tools I have found. It doesn't have any negative points compared to the others.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor. The reviewer's company has a business relationship with this vendor other than being a customer: Partner
    PeerSpot user
    Technology services specialist at a financial services firm with 1,001-5,000 employees
    Real User
    Brings agility and efficiency, and with real-time data, helps in understanding the behavior of an application at all stages of the test
    Pros and Cons
    • "For me, the best part is that we can graphically see the test result at runtime. It helps us understand the behavior of the application during all stages of the test."
    • "The Timeline Report panel has no customization options. One feature that I missed was not having a time filter, which I had in ELK. For example, there are only filter requests for a time of less than 5 seconds."

    What is our primary use case?

    Previously, to perform performance tests, we had to connect servers in the cloud, configure them to perform the test, and plot the results on a dashboard. BlazeMeter came to simplify all this work.

    In terms of deployment, we are using local Linux servers (RHEL 7), and for the cloud, we are using EC2 servers with Amazon Linux. Our cloud provider is Amazon AWS.
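
    As a sketch of what standing up one of those cloud load-generator hosts can look like with boto3, see below; the AMI ID, instance type, and tags are placeholders rather than our real values.

    ```python
    # Hedged sketch: provisioning an EC2 host for a load generator.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")
    ec2.create_instances(
        ImageId="ami-xxxxxxxx",   # placeholder Amazon Linux AMI
        InstanceType="c5.xlarge", # illustrative size
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "bzm-load-generator"}],
        }],
    )
    ```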

    How has it helped my organization?

    With BlazeMeter, our main gains were in agility and efficiency in the execution of performance tests and delivery of post-test reports.

    It has helped us to implement shift-left testing. It has certainly helped us to speed up the tests, and with that, we gained time to carry out tests in all development cycles.

    It has the ability to build test data on-the-fly, and this on-the-fly test data meets compliance standards, which is very important for us. Real-time data helps us understand the behavior at each level of the test. So, we can define numbers that an application needs to achieve in the test to classify it as being OK or not. This data helps a lot in the real-time investigation. By looking at each level, we can identify the exact moment of degradation or “break”.
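
    That "OK or not" classification is essentially a threshold gate over the live metrics. A minimal sketch, with illustrative thresholds rather than our actual targets:

    ```python
    # Hedged sketch: classifying live test metrics against target numbers.
    def classify(p90_ms: float, error_rate: float) -> str:
        """Return "OK" if the app meets its targets, else "DEGRADED"."""
        if p90_ms <= 800 and error_rate <= 0.01:
            return "OK"
        return "DEGRADED"

    print(classify(p90_ms=640, error_rate=0.004))  # -> OK
    ```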

    It decreased our test cycle times. I believe that we saved at least 50% of the time in preparation for the execution. Using BlazeMeter has greatly simplified our performance testing experience, especially the preparation part.

    What is most valuable?

    For me, the best part is that we can graphically see the test result at runtime. It helps us understand the behavior of the application during all stages of the test.

    BlazeMeter is a cloud-based and open-source testing platform, which is very important for us because we can be sure that we're using a tool that follows market trends and stays up-to-date.

    What needs improvement?

    The Timeline Report panel has no customization options. One feature I miss is a time filter, which I had in ELK; for example, the ability to show only requests that took less than 5 seconds.

    For how long have I used the solution?

    I have been using this solution for approximately 1 year.

    What do I think about the stability of the solution?

    It is very stable. We haven't seen any instability or unavailability issues so far.

    What do I think about the scalability of the solution?

    It is scalable to our needs. The only team that uses BlazeMeter is ours, whose mission is to bring performance testing to projects and squads.

    How are customer service and support?

    They are very good. In the beginning, they held a workshop with our team, and whenever we ask questions, we are attended to without any problem. I would rate them a ten out of ten.

    How would you rate customer service and support?

    Positive

    Which solution did I use previously and why did I switch?

    We didn't use any other solution. We performed the tests manually.

    As soon as we got to know this tool, we realized how important it would be and the benefits it would bring to the company. Its main benefits have been gains in agility and efficiency. 

    For the performance tests that we carry out in the company, we only use BlazeMeter. I don't know any other tools. My view of BlazeMeter is that it is a very mature tool that delivers what it has set out to deliver in an excellent way.

    How was the initial setup?

    I was not involved in its deployment. In terms of maintenance, the only maintenance is setting up new servers for use. This configuration is usually performed by us in the Performance team.

    What was our ROI?

    I don't have access to the information about its cost. So, I can't say if we have seen an ROI and if we have reduced our test operating costs.

    Which other solutions did I evaluate?

    We did not review other products.

    What other advice do I have?

    BlazeMeter brings agility and efficiency in the preparation and execution of performance tests. With this, we gain time which is used to increase the scope of tests and anticipate possible problems.

    BlazeMeter didn't help bridge Agile and CoE teams in our case because we have a dedicated performance team, so no professionals who work with Agile were involved. We gained agility and efficiency, but without involving any external team.

    I would rate BlazeMeter a nine out of ten.

    Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
    PeerSpot user
    Technology Specialist at a financial services firm with 10,001+ employees
    Vendor
    Top 20
    Easy to navigate and offers reasonable pricing but needs grouping features
    Pros and Cons
    • "The stability is good."
    • "The should be some visibility into load testing. I'd like to capture items via snapshots."

    What is our primary use case?

    We primarily use the solution for performance testing. 

    What is most valuable?

    The solution is easy to navigate. 

    The testing it offers is very useful and compatible with many platforms.

    We don't have to maintain any infrastructure, as that is already provided by BlazeMeter. 

    The real user monitoring feature is great.

    I like that you can integrate JMeter script into the solution and run your load while testing a number of users.

    The solution works easily with end-user experience monitoring, and you can run scripts right in the solution.

    The initial setup is pretty straightforward. 

    The stability is good. 

    The solution can scale. 

    We have found the technical support to be quite helpful.

    The pricing is quite reasonable.

    What needs improvement?

    We have already provided some sort of feedback for our BlazeMeter vendor who is directly interacting with us. We would like, for example, for there to be some sort of grouping features available. 

    There should be more visibility into load testing. I'd like to capture items via snapshots.

    While the solution is in the cloud, it would be good to also offer an on-premises option.

    For how long have I used the solution?

    I've been using the solution for about a decade.

    What do I think about the stability of the solution?

    We've found the stability to be reliable. We're not running into any performance issues at all. There are no bugs or glitches. It doesn't crash or freeze. 

    What do I think about the scalability of the solution?

    We are planning to scale up to more BlazeMeter products very soon; the solution scales well. We want to scale up to ten products a month.

    How are customer service and technical support?

    Technical support has been very good so far. They are helpful and responsive. We are satisfied with the assistance overall.

    Which solution did I use previously and why did I switch?

    We also use LoadRunner.

    How was the initial setup?

    The solution is very straightforward to implement. It's not overly complex or difficult. 

    The deployment only takes one week or so. 

    What's my experience with pricing, setup cost, and licensing?

    The pricing of the solution is pretty good. It's not overly expensive.

    We pay to license the solution annually. There are many types of license models; we have chosen the VUH (virtual user hours) model.

    What other advice do I have?

    We're a customer and an end-user.

    I'd advise companies to consider multiple technologies to support their goals.

    I'd rate the solution at a seven out of ten.

    Which deployment model are you using for this solution?

    Public Cloud
    Disclosure: I am a real user, and this review is based on my own experience and opinions.