We use this solution for testing. When it comes to 5G, there are loads of changes because we're trying to build the first 5G core network with the standalone architecture. Everything is based on APIs and API-based communication using the new HTTP/2 protocol. As we build the core network, we constantly change and tweak it.
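To give a rough idea of what that API-based, HTTP/2 communication looks like from a test script, here is a minimal sketch; the host name and the discovery endpoint are purely illustrative and not our actual deployment.

```python
# Illustrative sketch only: the 5G standalone core exposes service-based
# interfaces as HTTP/2 REST APIs, so a test can call them like any other web API.
# The host name and NF discovery path below are examples, not our real network.
import httpx  # requires: pip install "httpx[http2]"

with httpx.Client(http2=True, base_url="https://nrf.example.internal") as client:
    # Ask a (hypothetical) NRF which network functions of a given type are registered.
    resp = client.get(
        "/nnrf-disc/v1/nf-instances",
        params={"target-nf-type": "SMF", "requester-nf-type": "AMF"},
    )
    print(resp.http_version)          # "HTTP/2" when the server negotiates it
    print(resp.status_code, resp.json())
```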
When it comes to testing, whether it's with Postman or any other tool, normally we run the test, make sure it works, and then move on. I was pretty impressed with [Runscope] because we can keep the test running 24/7 and are able to see feedback at any time.
A proper feedback loop is enabled through their graphical user interface. We can add loads of validation criteria. As a team, if we make changes and something fails on the core service, we can actually find it.
For example, we had a security patch that was deployed on one of the components. [Runscope] immediately identified that the network node failed at that API layer. The monitoring capability allows us to provide fast feedback.
We can also trigger it with Jenkins Pipelines. We can integrate it into our DevOps quite easily, and they have webhooks. The validation criteria are quite simple. Most of the team love it, and the stakeholders love the feedback loop as well. They can look at it, run it, and see what's happening.
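As a rough sketch of that kind of pipeline integration (assuming a trigger-URL style kickoff rather than our exact Jenkins setup; the environment variable name and the "env" parameter are made up for illustration):

```python
# Illustrative sketch only: a CI step kicks off the hosted API tests by calling
# the test's trigger URL and fails the build if the trigger is rejected.
# API_TEST_TRIGGER_URL and the "env" parameter are hypothetical names.
import os
import sys
import requests

trigger_url = os.environ["API_TEST_TRIGGER_URL"]  # stored as a CI secret

resp = requests.post(trigger_url, params={"env": "sandbox"}, timeout=30)
if resp.status_code >= 300:
    print(f"Trigger failed: {resp.status_code} {resp.text}")
    sys.exit(1)

print("Test run triggered; results appear on the dashboard and via webhooks.")
```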
The final solution will be across four different locations. Performance testing will run in a specific location. [Runscope] will run across the different locations and test different development environments. At the moment, it's only on two environments: one is a sandbox where we experiment, and one is a real environment where we test the core network.
There are around 10 to 15 people using the application, but some of them only view the results. They're not always checking whether it works or not. We have multiple endpoints.
We use the solution on-premises.
The on-the-fly test data improved our testing productivity a lot. The new test data features changed how we test the applications because there are different things we can do. We can use mock data or real data. We can also build data based on different formats.
For example, an IMEI should be a 15-digit number. If you need various combinations of it, BlazeMeter can generate them as long as we provide regular expressions and say, "The numbers should be in this format." Mobile subscriber identities, which are pretty common in the telecom world, are easy. This solution has changed how we test things. Fundamentally, it has helped us a lot.
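As a simple illustration of the idea, this sketch produces the kind of 15-digit, regex-constrained values we ask the tool for; it is not how BlazeMeter generates data internally.

```python
# Illustrative sketch only: generate IMEI-like values and check them against the
# same kind of pattern we give the tool ("a 15-digit number"). This only shows
# the regex-driven idea, not BlazeMeter's own data generation.
import random
import re

IMEI_PATTERN = re.compile(r"^\d{15}$")

def random_imei_like() -> str:
    """Return a random 15-digit string (format only, no check-digit logic)."""
    return "".join(random.choice("0123456789") for _ in range(15))

samples = [random_imei_like() for _ in range(5)]
assert all(IMEI_PATTERN.fullmatch(s) for s in samples)
print(samples)
```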
Previously, most of the test projects I delivered before I moved into automation used to take months. Now, the entire API test completes within minutes. Because we're looking at millisecond latency, the tests themselves don't take long; a run takes less than a minute.
Once those tests are running on a schedule, I don't really need to do anything. I just concentrate on what other tests I can add and what other areas I can think of.
Recently, I have seen BlazeMeter's other products on their roadmap, and they're really cool. They use some AI and machine learning to build new API-level tests. I don't think they're available to the wider market yet, but there are some really cool features they're developing.
BlazeMeter reduced our test operating costs by quite a lot because, normally, doing the same level of testing needs loads of resources, which are expensive. Contractors and specialists are expensive, and offshoring is costly as well. However we do it, we have to spend a lot of time. Running those tests manually, managing the data manually, and updating the data manually take a lot of time and effort. With this project, we definitely save a lot in costs, and we give confidence to the stakeholders.
For previous projects, even smaller ones, we used to charge 100K to 200K for testing. We're using BlazeMeter for massive programs, and the cost is a lot less.