Buyer's Guide
Network Monitoring Software
May 2023
Get our free report covering Cisco, Dynatrace, and other competitors of ThousandEyes. Updated: May 2023.
708,544 professionals have used our research since 2012.

Read reviews of ThousandEyes alternatives and competitors

Network Engineer at GNCU
Real User
Top 20
Incredibly easy to use, cuts our resolution time, and automatically takes care of configuration management and backups
Pros and Cons
  • "It is useful for configuration management and automated backup. It is one of my favorite features because it is low-hanging fruit, and it is easy to accomplish, but on a network where we've got infrastructure devices in hundreds, it is an arduous task to keep on top of. Auvik does it all automatically, so that's probably one of my favorites because it is important, and it just does it automatically. I don't even have to think about it."
  • "Currently, with Auvik's support, I'm troubleshooting some of the information gathered on Cisco devices through SNMP V3. Auvik is not able to pull some of the important information that it uses to draw the map, which is kind of shocking because it is Auvik. So, it is their platform, and it is monitoring Cisco devices, which are obviously very prevalent in the world. Auvik is having a hard time gathering such important information over SNMP V3, which is a networking standard, and on super popular device brand and model. They're actively working with me on that piece. It seems that network device management using SNMP V3 could use a little tuning."

What is our primary use case?

I used to work at a managed service provider, and we needed a network topology mapping solution, which is how we discovered Auvik. We tried it out and used Auvik until that MSP was bought out. I then left the MSP world and became a network engineer at Greater Nevada Credit Union, where I am now.

We pretty much use it for topology mapping. We use it for mapping out the network and then monitoring the availability of the network infrastructure devices. There is also alerting whenever there are problems. So, we basically use it for monitoring, alerting, and troubleshooting. We also use it for configuration management and automated backup.

It is a managed solution, so they handle all of the platform upgrades and all that stuff. We are on whichever version they are currently running.

How has it helped my organization?

It alerts us whenever there are problems, such as a site is down, an individual device is offline, or there are performance issues. So, it provides alerting and assists in troubleshooting when there is not a site-wide or a network-wide issue.

When they started it, Auvik was intended to be an MSP-focused tool. So, you set up different networks in Auvik as if they are distinct entities or different companies. I've deployed Auvik such that it treats all of our different locations as different networks, even though everything is basically tied together in one big wide area network. The net effect here is that network discovery is so effective it discovers all of the same subnets over and over again across all different networks that I have configured in Auvik. It normally wouldn't be a problem in an MSP world because those networks are not connected to one another. It is kind of an annoyance for me, but it really just kind of highlights how effective it is. Its discovery mechanism is very effective. I haven't had too many scenarios where Auvik didn't discover a particular subnet. It mostly just boils down to whether or not we've configured the network correctly so that something isn't just like a hidden Easter egg. 

Prior to Auvik, we weren't tracking any kind of KPIs relative to the network, performance, uptime, etc. There wasn't even the ability to do that because there just wasn't a solution in place. Now that we've implemented this platform, it has given us the ability to do so after our IT organization reaches that maturity level. The ability is there, and the data is there, but we're not there yet. So, it has given us the ability to track those kinds of KPIs. Beyond that, given that we are a 100% Cisco network, it very simply tracks contract status, support status, and all that stuff. I can very easily run a report and confirm the software and the firmware version that all of the devices are running to make everything consistent and get all of our switches and routers on the standard software version. We're approaching that templatized network look. It is one of the things that I could have done manually. I could physically log in to every device and figure out what they're on and then go through the upgrade process. Now, it's a little bit more simplified because I can just run one report and see that everything is on different versions. I can then standardize the version across the board.

It automatically updates our network topology. There are certain things that we have to do as dictated by the NCUA. We are a credit union, and the NCUA is the federal regulatory body that oversees our operations. When we get audited every six months or so, the NCUA basically has a long list of things that they check. They'll say, "Are you performing configuration backups of your network devices?" I would say that we do, and they would ask me to show it to them. For that, all I have to do is bring up Auvik and say, "Here's the device. Our entire network is managed by this platform, and here is an example of a configuration backup for a particular switch. Here is every configuration change since the platform was implemented." Directly above that pane in the browser window is the topology. One of the other things they ask about is whether we have network topology diagrams, to which I say that we do, but not in the traditional sense. Once upon a time, most folks just manually maintained Visio diagrams of how the network was physically and logically connected, but you just can't rely on those because the network changes. In a network of this size, probably not a single day passes when I don't make a configuration change. The help desk folks also deploy new workstations regularly, and Auvik automatically discovers those new devices and automatically updates the maps. So, it is a living document at that point, which makes it useful because it is always accurate. I don't have to manually go in and add a new device.

It has decreased our mean time to resolution primarily because I'm notified of problems much quicker. Previously, if there was a problem, a user would call the help desk to look into it. If the help desk wasn't really sure about what was going on, they escalated it to the network guy. I then looked into it and said, "Oh, I see." Now, instead of that, I'm getting a notification from the tool at the same time a user notices a problem, and then I start looking into it. By the time the help desk hits me up, I'm like, "Yeah, this should be good now." So, in that capacity, it has definitely improved the mean time to resolution. It has probably cut our resolution times in half.

It helps us to put out fires before people/end users even know there is a problem. There have been some scenarios where it has alerted on things, and there was no perceived impact by the end-users. If there was a failed power supply in a switch that maybe had redundant power supplies, we would get a notification that one of those power supplies has died. We can then proactively replace that failed device before the spare tire blows out, and the network goes down.

We're a credit union, and we've got an online banking website, ATMs, ITMs, etc. We have another department that handles all of those member or customer-facing technologies. Previously, if there was a network outage somewhere, it used to be that they were basically unaware of it until they started getting reports that members are calling in and saying that the e-branch is down, and they can't log in to the e-branch. That team does not use Auvik, but I have included them in the outage alerting. So, they get an email when a branch goes down, or there are problems. They don't get notifications for high broadcast traffic, but when there are obvious problems, they get a notification. For example, when a site goes down, we know that the ITMs aren't going to be working, and they're going to get notified at some point by members, but Auvik would have already sent them an alert saying that the XYZ branch is down. So, they can already anticipate that there are going to be ITM issues because the whole site is offline.

It provides automated, out-of-the-box device configuration backups. These are the compulsory administrative tasks for the stuff you rarely need, but if you ever need it and don't have it, you're in big trouble. It does the automated backup, and it does it so reliably that I've never had to manage configurations manually. If I were doing that manually, it would probably take five minutes per device to do a configuration backup. Across a hundred devices, that would be 500 minutes a month. So, it saves me a fair amount of time. It also saves me from needing to employ somebody to do a very repetitive task. This is what technology does. It replaces dumb functions so that humans can go and do things that are not so easily automated. The device configuration part also saves money, but the only reason it saved money was that it was something we weren't doing before Auvik. We were not spending money to back up configurations because we were not really backing up configurations. So, it didn't really replace anything. It just implemented something that needed to be done but wasn't being done.
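To make the time comparison concrete, here is a minimal sketch of the per-device chore that Auvik automates, written with the open-source Netmiko library. The host list, credentials, and file naming are hypothetical placeholders rather than details from this review, and it only illustrates manual-style scripting, not how Auvik itself works.

```python
# A rough sketch of manually scripting Cisco config backups (the task the
# reviewer says Auvik handles automatically). Hosts and credentials are made up.
from datetime import date
from netmiko import ConnectHandler

DEVICES = ["10.10.1.1", "10.10.2.1"]  # in practice, roughly 100 switches/routers

for host in DEVICES:
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="backup-svc",       # hypothetical service account
        password="example-secret",
    )
    running_config = conn.send_command("show running-config")
    conn.disconnect()

    # One timestamped file per device; diffing these over time is a crude
    # version of the change history Auvik keeps automatically.
    with open(f"{host}_{date.today()}.cfg", "w") as backup_file:
        backup_file.write(running_config)
```

Even scripted, this still has to be maintained, scheduled, and monitored per device, which is exactly the overhead the reviewer values not having.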

It enabled us to consolidate or replace other tools. We got rid of the managed service provider and saved approximately 100K a year, and it replaced SolarWinds and Uptime. Uptime was another platform similar to Auvik, but it was nowhere near as feature-rich. We're paying around 17K a year for Auvik, and SolarWinds and Uptime combined were probably in the neighborhood of 25K a year. So, it has saved around 8K a year.

What is most valuable?

It is useful for configuration management and automated backup. It is one of my favorite features because it is low-hanging fruit, and it is easy to accomplish, but on a network where we've got hundreds of infrastructure devices, it is an arduous task to keep on top of. Auvik does it all automatically, so that's probably one of my favorites because it is important, and it just does it automatically. I don't even have to think about it.

It is incredibly easy to use. That was one of the things that helped motivate the switch. We were basically told that we couldn't use SolarWinds anymore, and we had to adopt something new. I already knew Auvik, but considering that I'm the only network engineer here, the simplicity of the platform was important so that the rest of the IT team could use it to find information. It was important to have an interface that was intuitive and information that was accessible and usable by folks who aren't networking nerds.

Given that you can deploy it so quickly and so easily, its time to value is very quick. I can start getting meaningful information out of it almost immediately.

What needs improvement?

Sometimes, we get requests for exporting a map of the network. I can export a map, but it exports it as a PDF, which is basically just a drawing. There is no context. When you're looking at the map, you can hover over things and drill into devices and see all kinds of information, but when you export it to a PDF, it is just a flat image. It is a picture of it, and if you don't know what you're looking at, it doesn't necessarily make any sense. This may be something that has already improved. The exportability piece was one thing that was kind of a gripe, but it is not all that important. If the NCUA wanted to see proof that we have network topology diagrams, I can just show them the tool. In the worst-case scenario, I can give them read-only access to log into our Auvik tenant, and then they can see all of that stuff for themselves.

Currently, with Auvik's support, I'm troubleshooting some of the information gathered on Cisco devices through SNMP V3. Auvik is not able to pull some of the important information that it uses to draw the map, which is kind of shocking because it is Auvik. It is their platform, and it is monitoring Cisco devices, which are obviously very prevalent in the world. Auvik is having a hard time gathering this important information over SNMP V3, which is a networking standard, on a super-popular device brand and model. They're actively working with me on that piece. It seems that network device management using SNMP V3 could use a little tuning.
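To illustrate what "gathering information over SNMP V3" involves at the protocol level, here is a minimal sketch using the classic pysnmp 4.x high-level API. The user, keys, target address, and OID are placeholders, and it shows generic SNMPv3 polling only, not how Auvik queries devices internally.

```python
# Minimal SNMPv3 GET against a network device (pysnmp 4.x hlapi).
# Credentials and target are hypothetical placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

iterator = getCmd(
    SnmpEngine(),
    UsmUserData(
        "monitor-user",
        authKey="example-auth-key",
        privKey="example-priv-key",
        authProtocol=usmHMACSHAAuthProtocol,  # authentication: SHA
        privProtocol=usmAesCfb128Protocol,    # privacy: AES-128
    ),
    UdpTransportTarget(("10.10.1.1", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
)

error_indication, error_status, error_index, var_binds = next(iterator)
if error_indication:
    # Timeouts and auth/priv mismatches surface here; unsupported or missing
    # OIDs come back in error_status or the var_binds values instead.
    print(f"Polling failed: {error_indication}")
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```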

For how long have I used the solution?

I probably started to use it in 2016 or 2017. 

What do I think about the stability of the solution?

It is very stable. There were occasions where I got a notification that Auvik failed to pull a device for its configuration information to see if there was a change, and then, it'll magically resolve itself after 15 or 20 minutes. So, there were some instances that made me wonder why that happened, but, generally, it has been very stable. I don't know if I've ever seen an Auvik outage.

What do I think about the scalability of the solution?

It is super simple to scale. To add a site, we deploy all of the equipment. After the equipment is deployed, I deploy a collector at that new site, and we're off and running.

The only folks that use the platform are in the IT department, but we've also got another department in the technology wing of the organization. This department handles all of those member or customer-facing technologies, such as online banking website, ATMs, ITMs, etc. They do not use Auvik, but I have included them in the outage alerting. So, they get an email when a branch goes down or there are problems. The cybersecurity team also uses it a little bit, and we also have our systems engineers, who actually manage the server infrastructure. There are probably about 15 users across those different roles.

It is being used everywhere across the entire network. There is nowhere to really increase its usage. As things change, they may warrant increasing its usage. There are probably some opportunities to increase the use with TrafficInsights and things like that. 

How are customer service and technical support?

Aside from the ticket that I'm working on right now, I didn't have to reach out to them too much. So, the jury is still out, and we'll see how they do on this. They haven't given up and are still looking into it. So, for now, I would give them a solid eight out of 10.

Which solution did I use previously and why did I switch?

When I joined this organization, they didn't have much for monitoring the network, but they had already purchased SolarWinds licensing. When the SolarWinds breach happened, we got a kind of edict from the NCUA to discontinue any relationships that we might have with SolarWinds. So, I said, "Okay, not a problem. I know Auvik." We adopted Auvik, and we've been using Auvik since then.

How was the initial setup?

Its initial setup was very easy. The configurations were already in place on our network devices to allow management over SNMP. All it took was to deploy the tool and then give it the necessary information to begin the network discovery. After that, it just started populating information. So, it was very easy.

Auvik doesn't use anything proprietary in terms of how it interacts with the network; there is no proprietary stuff that you really have to learn. It uses the same protocols that everything else uses. So, there wasn't any complicated platform-specific stuff that we needed to get in place to make it work. Deploying the tool is as simple as installing software or spinning up a virtual machine. It took us about a day. It was very quick.

Its setup was much quicker than other solutions because you don't have to set up the front-end. All you have to do is deploy little collectors. You don't have to set up the interface you interact with or stand that server up. That's usually the part that is a real pain because you have to spin up your own servers, install the software, and give it enough resources; the interface is clunky and slow, and you've got to tune the virtual machine. That advantage obviously applies to any hosted service, but it was definitely a contributing factor to the speed and the ease of deploying it. It was like everything is already there; you just start plugging your information into it and let the collectors discover and populate it for you.

In terms of the implementation strategy, with Auvik or network monitoring tools, we, sort of, have two different approaches. The first approach is that we can deploy it so that one collector or one group of collectors monitors the entire network, and we have one map that shows the entire network. Prior to working at GNCU, I was working at a managed service provider, and GNCU was one of our customers. I had done a lot of project work for GNCU, but they were not a managed customer. So, we didn't deploy our toolset on their network, and therefore, we didn't have any visibility. However, in order to do some of the project work that I was planning for them, I needed that kind of information. I needed topology, and I needed to know subnets and things like that. So, we temporarily deployed Auvik back then into GNCU's network. We just deployed the collector, and let it discover the entire network. We gave it about a day to go and do all that discovery and draw the whole map out. After that, I kind of realized it was clunky because the map was so big. It was detailing the network that spans around 30 different locations. 

Another approach is to break each site down into its own network instead of doing one big network map. This is the approach that we followed when we implemented it at GNCU back in December. In this approach, each site is its own customer, which made the map for each site much smaller. It also made it much easier to navigate and see the things that we wanted to do. So, in the end, this was the approach that we ended up using. It is nice that you have that option instead of having just one way.

In terms of maintenance, it is a managed platform, so we don't maintain anything there. The only thing we do is that when we make changes to the network or deploy a new device, we go in and make sure that Auvik discovers the new device, is able to log in, makes a backup of the configurations, and starts polling it over SNMP. The platform itself requires zero maintenance.

In terms of the impact of this level of maintenance on our operations as compared to other solutions I've used in the past, with SolarWinds, when a new version came out, we had set it in a way to kind of automate it to an extent. When an update was available, we would upload it manually, apply it, and make sure that everything was working. It wasn't overly arduous. There were patches, modest updates, and stuff like that. For full version upgrades, a lot of times, it was easier to just deploy a new server, install the new version, and then get it set up. We don't have to do that now. It is almost like a thing that you used to do back in the day before SaaS solutions.

What about the implementation team?

We implemented it ourselves.

What was our ROI?

We have not done an ROI. I also cannot quantify exactly how much it has saved because I don't remember exactly what we were paying for SolarWinds, but it is similar to what we were paying for SolarWinds. When we were using SolarWinds, after we had got it deployed and configured the way that we wanted, we probably wouldn't have ever gone back to Auvik, despite me knowing it and liking Auvik. That's because we had already made the investment in that platform, but then the breach happened, and we had no choice. So, there wasn't a meaningful saving in switching from SolarWinds to Auvik. 

Prior to me coming on board, GNCU had kind of outsourced the network part to two different organizations. One of those organizations just did the monitoring and management piece. They were charging us about 100K a year for that managed service. By implementing Auvik, we basically duplicated what they were doing, which has a very measurable impact. I didn't have access to their platform, so I needed something that I could use to monitor and manage the network. So, by getting rid of that managed service provider, we saved approximately 100K a year.

What's my experience with pricing, setup cost, and licensing?

Their licensing model is basically per managed device. You pay X amount per managed device, and managed devices are limited to switches, routers, firewalls, and wireless LAN controllers. So, the only things that we pay for are our switches, routers, firewalls, and wireless LAN controllers, but there are orders of magnitude more devices that Auvik manages that we don't pay for. It also manages servers, workstations, and phones. Auvik will gather KPIs from anything that is connected to the network if it can be managed via a standard like SNMP or WMI. There are no costs in addition to the standard licensing fees.

Auvik doesn't nickel-and-dime you. SolarWinds nickel-and-dimes you to death. Everything had a different license, and you needed that license for every device, no matter what it was, down to even the interface level. It was ridiculous. Auvik does it monthly. So, it is per device and per month, with the option to pay annually at some percent savings, which is what we do. We pay annually right now. It is something like 17K dollars a year.
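As a rough illustration of how a per-device, per-month model rolls up to an annual figure like that, here is a small back-of-the-envelope calculation. The device count, per-device rate, and annual discount below are hypothetical; only the approximate 17K-per-year total comes from the review.

```python
# Hypothetical per-device/per-month licensing roll-up (illustrative only).
billed_devices = 100           # switches, routers, firewalls, WLCs only
rate_per_device_month = 15.0   # made-up list rate, USD
annual_discount = 0.05         # made-up discount for paying annually

monthly = billed_devices * rate_per_device_month
annual = monthly * 12 * (1 - annual_discount)
print(f"Monthly: ${monthly:,.0f}, prepaid annual: ${annual:,.0f}")
# Monthly: $1,500, prepaid annual: $17,100 -- in the ballpark of ~17K/year
```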

Auvik might have even been a little bit more expensive than SolarWinds, but that was only because we had not added some of the things that Auvik does to the SolarWinds licensing. So, eventually, the SolarWinds product probably would've been a little bit more expensive if it were an apples-to-apples comparison in terms of features.

Which other solutions did I evaluate?

I had checked ThousandEyes. I had also checked Cisco DNA Center, which was more costly, and the network was just not there yet. Some of our devices don't support management via Cisco DNA Center. So, we were not there yet. Someday, I'd like to be able to get there, but for what we needed, Auvik was just the easiest answer.

What other advice do I have?

I would advise others to check it out. It doesn't hurt. They give you a two-week free trial. You can just say that you want to try it, and then you try it. There is no haggling back and forth with sales. They give you access to the platform for two weeks. For us, I had done the trial just to get it implemented, and then they extended the trial for us free of charge for another two weeks so that we could get all the approvals in place to adopt the platform and start paying for it. They make it super easy, so try it out.

The automation of network mapping hasn't really enabled junior network specialists to resolve issues directly or freed up senior-level team members to perform higher-value tasks, but that is not because of the tool. It is because of the proficiency level of our team. We don't have junior network staff. There is just me. Our help desk folks are our junior staff, and it is just not in their wheelhouse yet. It goes back to that organizational operational maturity. We've got the help desk that helps the end-users, and then we've got the engineers who deploy and are the highest escalation point. It kind of goes from zero to 60: the help desk gets a ticket, checks something out, decides it must be a network thing, and it comes right over to me. I'll try to use those opportunities as teaching opportunities: "Hey, log in to Auvik, and then you can see here that the device is online. We've got some other monitoring tools that we use as well for workstations and virtual infrastructure to see that it is not a network issue, and here's how you can dig through Auvik to see it." It increases the proficiency level of our staff. The tools assist with that change and with them improving. A network engineer can tell the help desk guy until he is blue in the face about how things work, but when you have something to visualize, you can look at metrics and performance indicators. That helps provide a little bit of context to the topics that I'm talking about, and then they can use those things. So, the proficiency is definitely improving, and the tool helps with that.

We have not used the TrafficInsights feature. We have a cybersecurity team, and they have a tool called Darktrace, which is TrafficInsights on steroids. It has got some AI or machine learning built into the platform, and it does some really gee-whiz stuff. Because of the presence of that tool, I haven't gone into configuring TrafficInsights yet. It is on my list of things to do because it is just convenient to have all of your data that you might want to access available in one window, as opposed to having to log into another device and learn how to use another device or another tool. So, eventually, I'll get around to that TrafficInsights so that the information is available.

If there is anything that Auvik has taught me, and it is also one of my general rules of thumb, it is that when something is not working as expected, the problem is not necessarily with that thing. For example, if I'm having a problem with Auvik, it is usually not indicative of a problem with Auvik. Similarly, a problem that is impacting users is not necessarily a problem with the network itself. It tends to point to something not being configured correctly on the network. It kind of highlights our own mistakes.

For an advanced network operations center, Auvik is very easy to use and super easy to deploy. It is intuitive, and its features are very useful to an extent. When it comes to a more advanced network team, there are things that Auvik doesn't do. Doing those things would make it awesome, but they would just make the platform more complex and probably less easy to use. So, for the fundamentals, Auvik does a fantastic job. Once you go beyond the fundamentals, Auvik still does a pretty good job, but there are some things that I would not be surprised if the platform never does. That's because it is not intended to be Cisco DNA Center. It is intended to be a broad platform that supports everything to a degree.

For an unsophisticated or a very small network team, I would give it a nine out of 10 because of ease of use. A managed service provider is a good example because the folks who consume the product are not network specialists. They primarily used it for backup, mapping, KPIs, and assisting in troubleshooting. For mid-range organizations, it is a solid nine. For advanced networking teams, it is probably a five because it is not going to give you all the information that you want. It is not going to do all of the things that you might want it to do, but the things that it does, it does very well.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Cliff Chapman - PeerSpot reviewer
Architect at Linkstatus Ltd
Real User
Excellent support, easy configuration, and a reliable tool to know what the problem is and where the problem is
Pros and Cons
  • "The main feature that we use is what they call Delivery, which is the testing of network paths end-to-end."
  • "They should try and make diagnostics run a bit quicker. When the problem occurs on a network, AppNeta runs automatic diagnostics on the end-to-end path. The path it was testing only to the destination, it now runs the same test to all of the devices and all the intermediate devices. Depending on the number of intermediate devices, it can take several minutes to run. If we're trying to find or diagnose a problem that only lasts two or three minutes, it may be that the diagnostics is still running by the time the problem is cleared. The only thing, which I have also mentioned to AppNeta in the past, is that there should be much faster and much more lightweight diagnostics, which can be completed within 30 seconds or one minute, rather than in 5 to 10 minutes."

What is our primary use case?

Most of the time, we use it for testing the performance of networks for voice and video. It's not designed to be exclusively that, but that's what we use it for most of the time because network performance for voice and video is notoriously difficult to monitor and manage.

We've built a business around AppNeta because in the early days, it was, and it still is, a unique tool in the way it operates, and we do a lot of consulting work for much bigger companies. Most of our work comes from a company that specializes in video and voice solutions, but we have done a lot of work with other ISPs. Essentially, we've built the business around what AppNeta can do.

How has it helped my organization?

It works well for visibility into the internet and cloud environments. AppNeta works from a source probe, and it sends a low bandwidth stream of traffic to a target. If the target is not another probe, it has to use ICMP. One slight difficulty is that a lot of the cloud vendors don't allow ICMP into their cloud infrastructure. The way around that is to install a software probe in the cloud. So, AppNeta has installed software probes in nearly every Microsoft Azure location, and those probes are publicly accessible, which means we can reach them by using a different protocol. If we need to test somewhere that's not Azure, such as Amazon Web Services or Google, then we would need to install a software probe in the cloud. So, it's one more step than we've had to use in the past, but that's because of what cloud providers will allow and will not allow into their data centers.

In terms of active and passive monitoring for alerting us to the deterioration of digital experiences before users are impacted, the delivery side is always active because that's the way it works. The passive side is used to monitor the operational traffic that's on the network. It works fine, but we don't use it very much because there are always security implications. So, we only use the active side of the tool. The passive side looks at the operational traffic that's already on the network, which can become tricky security-wise because when you do that, you are able to see all of the traffic unless it's encrypted. We don't tend to use that. We have used it, and it's useful to work out notifications and alerting. From the active side, we can see if the utilization is high, but we can't tell what's causing the high utilization. If we can get security authorization, we can use the passive side to find out which applications are using all the bandwidth. Typically, we're mostly focused on the active side of the testing, and the alerting and the notifications are very useful.

We use the Automatic User Geo-Location feature. It happens automatically. It's not something that you can turn on or off. We do use it, but I'm always a little concerned about how accurate it might be. That has nothing to do with AppNeta. For example, if I try to find out where my home broadband is, quite often, it'll show me in London, whereas actually, I live 80 miles away from London. It's just where the IP address is logged. We do use it, but we use it cautiously. It's good for the remote workforce. We've done a lot of troubleshooting work for people working from home. Over the last couple of years, so many people have started working from home, and they've had to rely on consumer broadband rather than business broadband. Often, these people have no experience in networking or troubleshooting. We're able to get them a software agent that can be installed on their machine, and once it's on, we just remotely manage it from a central location. So, once a user has installed the probe, which is no more difficult than any other Windows or Mac application, they don't have to do anything else. We take it over from there. When AppNeta first brought out the work-from-home licensing, it included 25 agents, and recently, they bumped that up to 50 agents. It's pretty useful. For the price of one standard license, you get 50 agents that you can either put in for people working from home or put in a branch office as if it were a hardware probe. We found it to be very flexible for that purpose.

In terms of the ease of using the Automatic User Geo-Location feature to determine if a problem is user-specific or region-specific, it depends on how many agents are around. We do know what the ISP is because it automatically tells us the ISP as well as the geolocation. If we see people in similar locations having problems with the same ISP, we can pin that down to an ISP problem. It's a question of comparing and contrasting what all the users are doing, but for the end-to-end piece, AppNeta already has a system to work out where a problem might be, whether it's at the local end with the user, at the remote destination end, or somewhere in between. It always has that ability. There's a combination of ways to find out where a problem might be getting caused, who's causing it, and why.

It's very good for our Mean Time to Identify (MTTI) for performance issues across business-critical apps, locations, and users. Obviously, when you're dealing with an end-to-end path, the devices and the connections are not all owned by the same people. The local part of the network is owned by the customer or a third party. There may be one or more ISPs, and there might be an ISP at the local end and a different ISP at the remote end because we work globally. So, we may be testing between the US and the Far East, or with the UK and South Africa. There's always at least one telco, and normally, more than one telco. Although we can pinpoint the problem and find out who owns it, it doesn't mean we can fix it fast because it all depends on other third parties accepting what we say and getting on and fixing it for themselves. However, normally, if there's a problem that lasts more than, for example, 10 minutes, we pretty much know exactly where the problem is.

It hasn't had much effect on our mean time to resolve (MTTR) because we're not normally in a position to resolve it. We're all pretty much external consultants. We can explain to people where the problem is, but we don't have direct relationships with telcos, where we can instruct them to go and do this. The MTTR for a problem is a bit unpredictable, but that doesn't have anything to do with AppNeta. That has to do with convincing the person or the organization that owns the problem area to get on and fix it.

We have seen a reduction in open tickets using AppNeta. Before people used AppNeta, they didn't have a reliable method to find out where a problem was. Sometimes, when we work for a particular client, a problem ticket has been open for a month because the problem is periodic, and it is unpredictable when it happens. When it does happen, they don't have the right tool in place to monitor it. Then we come in and install AppNeta, and generally, we can get that ticket closed within a few days of becoming involved. It's just a matter of how quickly people involve us in the problem. If we were involved right from the beginning, I'm pretty confident it would result in tickets getting closed earlier. Sometimes, tickets are closed within minutes because we've done our bit and said what the problem is and where the problem is. There's not really any point in holding the ticket open any longer because the resolution is up to some other third party like an ISP or telco.

What is most valuable?

The main feature that we use is what they call Delivery, which is the testing of network paths end-to-end. They do provide synthetic web transactions, and they also provide the ability to actually look at traffic on the network, but we don't use web transactions very much. That's essentially because the customers we work with aren't interested in that side of the business. 

What needs improvement?

They should try and make diagnostics run a bit quicker. When a problem occurs on a network, AppNeta runs automatic diagnostics on the end-to-end path. Where it was testing the path only to the destination, it now runs the same test to all of the intermediate devices. Depending on the number of intermediate devices, it can take several minutes to run. If we're trying to find or diagnose a problem that only lasts two or three minutes, the diagnostics may still be running by the time the problem has cleared. The only thing, which I have also mentioned to AppNeta in the past, is that there should be much faster and more lightweight diagnostics that can be completed within 30 seconds or one minute, rather than in 5 to 10 minutes.

Currently, when we have short-duration problems, we use a different tool, but we only use that tool for short-duration problems. With AppNeta, as long as the problem lasts more than a few minutes, say 10 to 15 minutes, we can normally tell where the problem is. However, most of the problems that we deal with are intermittent. They're very rarely a permanent condition that needs to be addressed. That makes them more difficult to troubleshoot. We would look to see at least two or three events and hope they show the same results to raise our confidence that we've actually found the problem, rather than just a problem.

For how long have I used the solution?

Broadcom acquired AppNeta earlier this year, but I've been using AppNeta for about 12 years.

What do I think about the stability of the solution?

It's stable.

What do I think about the scalability of the solution?

It's scalable. In terms of its usage, we use AppNeta exclusively for end-to-end network and app visibility across managed and unmanaged networks. We have looked at other products in the past, but AppNeta is still the one we typically use as our first choice. We do use some other products specifically for troubleshooting rather than long-term continuous monitoring, but 99% of what we do uses AppNeta only. For only 1%, we use a couple of other products.

How are customer service and support?

It has always been excellent. It's always pretty much a 10 out of 10.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We didn't use any other tool previously. We worked in much more general-purpose networking management, and when we were introduced to AppNeta a long time ago, we saw that we could quickly build a business specifically around it. We didn't suddenly change to AppNeta. We just created a brand-new business using AppNeta.

There are other products that do synthetic web transactions, but for the network performance side, which AppNeta calls Delivery, AppNeta uses a pretty unique technology. AppNeta has been going for nearly 20 years. They used to be called Apparent Networks, and they changed their name around 10 or 12 years ago. They've been using pretty much the same technology, which I don't believe anybody else has. That's why we've always used AppNeta.

How was the initial setup?

It's a cloud-based portal, but the probes that we install at the customer sites can be software or hardware. The software has been very valuable over the last couple of years because, with COVID, so many customers closed some sites. So, we couldn't ship a hardware probe to a particular site because there was nobody there to receive it or install it. With a software probe, we just downloaded from the portal and installed it, and we were up and running within minutes. It's a software-as-a-service, but we can use dedicated hardware or software probes that are installed with the user.

Typically, we don't go on-site and install anything, but we do pre-configure any equipment so that the customer can just plug it in, and if the firewall requirements have been met, then it'll just work. It's simple to do that. AppNeta does offer a private on-prem version, but we never used it.

The process for configuring it is straightforward. We just ask the customer for various addressing information. It's a very simple device. It only needs about six pieces of information that the customer gives us. We configure it and ship it. If it's a software probe, we don't even need anything like that because the customers just download and install it themselves. There's no preconfiguration that we need to do. When the probe comes online on the portal, then we just configure the destination targets that we want to run the path test on, and we just press go. That's it.

We allow about an hour to configure the probe, but the configuration itself only takes a few minutes. We like to leave it running for a little while just to make sure everything is okay with it on the software and hardware side. Once the customer installs the hardware or the software, we can be up and running and measuring performance within about half an hour.

What about the implementation team?

For the configuration, there are only two of us. Either of us can do it. It just depends on who has got the hardware probe. A lot of the work we do is fairly long-term, so we might do a network assessment for the pre-deployment of video and voice, which might take two or three months if it's a very large network. Quite often, what happens is that once the customer sees what can be done with the probes for an assessment or a troubleshooting engagement, typically, they'll keep the probes on anyway. We only do the configuration once per probe, and that configuration may well last several years.

In terms of maintenance, the software upgrades are handled by AppNeta themselves. It does require maintenance, but AppNeta manages that centrally, and it's pretty much transparent.

What was our ROI?

We have seen an ROI. Most of our work involves voice and video. Voice and video problems can make a call practically useless. People have spent a fortune on installing voice and video room products, and even individual products now. The return on investment from getting those applications working properly is much greater than the return on the investment in the actual tool itself. We're not exclusively voice and video. If you have a web app that is running very slowly, we can use AppNeta to work out if the problem is at the user end, the network, or the actual application server itself. All of that is invaluable.

Every customer spends a massive amount of money on all the applications that they run. If the applications don't run correctly, they're not very productive, and all of that investment in those applications is wasted. If you have one application that the entire company uses, with thousands of people using it, the return on the investment to get that running properly again is almost incalculable.

What's my experience with pricing, setup cost, and licensing?

That's a little difficult because the licensing costs are different depending on the type of probe that you have. We typically don't get involved in the commercial side, but the list price is probably something like $3,000 for a small probe. However, that gives all of the features that the probe can do, whether or not you use them. In the old days, up until two or three years ago, each of the separate features was a separately licensable module so that you could add things that you wanted, and you didn't have to add things that you didn't want. They've changed all that now, and everything the probe can do is a part of the base license. It's a bit tricky to specify the pricing because if you do use all of the features, which we don't, then it's really good value for money, but as there's nothing else like it, it's still an essential purchase. We don't seem to have any problem with customers seeing the value in paying that.

The small probe is probably around $3,000 and the very large probe that they make for massive data centers might be $50,000 or $60,000. It's a subscription model, so the payment is per year. 

One of the other products that we looked at was ThousandEyes. It has some similarities to AppNeta, but we specifically did not go for it because it had a very unpredictable licensing model. You had to decide how many times you wanted to run a test. If you decide you are going to run a test every five minutes, and that turns out to be not good enough and you want to change the testing to every one minute, you effectively use up five times the number of licenses. It's very unpredictable what the cost of the ThousandEyes product is going to be. With the type of work we do, testing once every 5 minutes or every 10 minutes is nowhere near adequate.
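The arithmetic behind that concern is simple: test runs, and therefore whatever units or licenses they consume, scale inversely with the test interval. A tiny sketch, with no actual ThousandEyes pricing assumed:

```python
# Runs per month at different test intervals; tightening 5 min -> 1 min
# multiplies consumption by five. No real unit prices are assumed here.
minutes_per_month = 60 * 24 * 30

for interval_min in (5, 1):
    runs = minutes_per_month // interval_min
    print(f"Every {interval_min} min -> {runs:,} runs/month")
# Every 5 min -> 8,640 runs/month
# Every 1 min -> 43,200 runs/month
```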

What other advice do I have?

To those evaluating this solution, I'd recommend that they get some help from someone like ourselves because AppNeta does take a certain amount of interpretation of what the results are telling you. That might not be immediately obvious if somebody has never used it before, and they try to start working on it without any training. Since Broadcom acquired AppNeta, the training documentation has improved quite a lot. I would definitely recommend looking at the Broadcom training website for AppNeta because it can get you up to speed very much quicker than what used to be the case a few years ago.

I'd rate it a 10 out of 10. We've used it all the time. We were lucky in that we've always had really close access to the people in AppNeta, and we've made suggestions about product improvements and things like that. In general, they've always done it within a few months or a year. Now that they're Broadcom, I don't know if we'll have that close relationship, but as far as I can see, nothing has changed with the way we operate together. It has always been a really good, healthy, and cooperative relationship between ourselves and the AppNeta people.

Disclosure: My company has a business relationship with this vendor other than being a customer: MSP
Carl Funk - PeerSpot reviewer
Senior Manager at a training & coaching company with 10,001+ employees
Real User
The UI is well designed, so it's easy to get the visibility you want.
Pros and Cons
  • "Catchpoint helped us establish that something is in a provider network, so we could tell our customers to check their internet provider because the traffic is not getting to us. You need to be gentle when you tell them that, but the fact that we could do it was crucial."
  • "There's still too much manual involvement in getting customized test configurations out there. It's good, but it still takes a lot of effort. In other words, it's when you need to configure it to collect a specific variable and that kind of thing."

What is our primary use case?

Our DevOps and our development team used Catchpoint exclusively for synthetic testing of API and URL endpoints. Our DevOps team is like a composite team for the solution users. Their job was to operate the tool and build the test, and they were the primary folks using it daily. 

The developers used Catchpoint for pre-production testing to ensure that the tests ran and gave them the needed data. They made improvements to the systems that were being monitored. We had around 100 users on the whole platform. The third group was my team, the platform owners. We were in there running daily reports. 

When we started, it was pretty light because we were trying to evolve our thinking, and then it grew. The contract was renegotiated when I was leaving, and we were increasing the test count even higher than we had, indicating a higher level of interest in what we could get out of it. 

Catchpoint is natively built for the cloud. There are also ways to deploy an agent internally for tests that sit behind firewalls and internal systems, but the platform is definitely SaaS.

How has it helped my organization?

Integration is essential. We can tell what's going on in our infrastructure and all the other events in our environments simultaneously. Our hand-built, in-house integration solution had a lot of overhead, while Catchpoint was essentially ready out of the box. We only needed to configure a connector. That was a huge win.

Also, the outside-in tests from our previous provider would trigger an alert. Someone would look at that test failure and say, "Well, the site's up. What's wrong here?" We couldn't eliminate ourselves as the culprit. We couldn't tell if traffic was being routed through an entirely different AS across the internet because someone between us and the test site had a provider issue.

Catchpoint helped us establish that something is in a provider network, so we could tell our customers to check their internet provider because the traffic is not getting to us. You need to be gentle when you tell them that, but the fact that we could do it was crucial.

What is most valuable?

Catchpoint's UI is well designed, which was a major selling point for us. It's easy to get the visibility you want. The essential piece for us was the external integration with other tools. We needed to get alerts and test result data into other systems we utilized to roll up that information about how our enterprise was working. 

Lastly, we appreciate the visibility Catchpoint gives us into the provider networks we were traversing when our test ran from an endpoint in a data center to our internal applications.

What needs improvement?

There's still too much manual involvement in getting customized test configurations out there. It's good, but it still takes a lot of effort. In other words, it's when you need to configure it to collect a specific variable and that kind of thing. 

The other issue is the cost. The more data you collect, the more expensive it becomes. You sell your organization by saying we can get this feature set, but then you have to walk that back because we'll need more money to run every test.

This is hard to pin down in your initial scoping. You provide Catchpoint with a series of tests and get a cost estimate, not realizing all the data you might have to collect long term. That was a big deal for us because we partly switched on the promise of saving money.

For how long have I used the solution?

I no longer use Catchpoint, but I used it for approximately a year and a half.

What do I think about the stability of the solution?

Catchpoint is incredibly reliable. I rate the stability 10 out of 10.  

What do I think about the scalability of the solution?

SaaS products are built for scalability, so you never see the infrastructure behind them. The only limitation on scalability is the on-prem element, but honestly, you could CI/CD pipeline that and remove the scalability question there. 

We never got around to it when I was there, but it's all on the customer to do that. If you're looking at it from the perspective of the core product, it's entirely scalable. I would rate Catchpoint 10 out of 10 for scalability.

How are customer service and support?

I love Catchpoint support. I'd rate them 10 out of 10. They're incredibly easy to work with, and you don't need to go through layers of bureaucracy to get to people who can answer your questions. I'm connected to their CEO on LinkedIn, and I communicate with him occasionally to let him know how things are going. We never had a problem getting the answers we needed, including post-sales.

Which solution did I use previously and why did I switch?

Before Catchpoint, we had a custom-scripted integration solution. We switched because it was cheaper, integration was more straightforward, and the UI was better. We were trying to create a UX that hadn't existed before because we understood the need to evolve. We felt we could lower the cost, and Catchpoint would allow us to see into provider networks. 

Ironically, we ended up disabling a lot of that functionality because it became too expensive. Ultimately, we only selected basic tests to stay under our budget. But in the beginning, it was a reason we switched.

It wasn't the only reason, but we had a significant visibility issue with China. We didn't have good node coverage in China with our previous provider. The third reason relates to integration. We needed to seamlessly put products together and have them tell our end-to-end story.

One of our goals was to transition away from managing by exception to actually utilizing the tool. That's why UI became so important. It was the first time we could get teams to use it proactively. It wasn't just, "Oh, there's an alert. Let me go into the console. Okay, yeah. I know what that is. Let's go fix it."

How was the initial setup?

I rate Catchpoint a solid eight out of 10 for ease of deployment. We migrated somewhere around 2,000 tests in less than a month. While it took us a few weeks, we migrated a ton of stuff in those 30 days.

What about the implementation team?

I don't know if it is standard per se, but we had a project manager assigned to our migration. You tell them the tool you're moving from. We didn't pay any extra for this other than what was already in the contract. 

We told them, "Here are the types of tests we have, and this is what we need them to be." We provided them with technical guidance for questions about the tests because some things don't convert directly. 

There were times when they would reach out to us and say, "Okay. What about this group of tests? What were you trying to do here?" They helped us a lot with the conversion. We had to do very little in-house.

What was our ROI?

Abandoning the old solution involved switching tools and transitioning from our custom integration. I look at that holistically. Without either one of those being optimized, there would be problems.

We could deploy a test around 25 to 30 percent faster with Catchpoint than we could before. Their test-writing language for complex cases needed work, but it was still better than what we had. I can't even put a number on the ability to detect whether we were looking at a provider issue or our own problem. We couldn't automate it.

We could eventually triage and troubleshoot it, but we couldn't see it in real-time. That was a considerable return on investment. The ability to definitively tell a customer that the issue is on their end is invaluable because it helps us talk them down off a cliff. The capacity to identify those issues regionally, nationally, and globally was an unbelievable return on our investment.

What's my experience with pricing, setup cost, and licensing?

The pricing is based on consumption and works on a point scale. For example, let's say I want to look at www.google.com, and I'm going to test it to see if it's there. It will bring back all this data that tells me how long it took to connect and how long it took to get the first byte. It will list all the resources on the page, showing that they all work and there are no broken links.

It brings that data back. That test has an assigned point value depending on what you decide to extract from that test. If all I do is check to see whether it's available, it might be one point. I don't know the exact point values off-hand. This is just an example.

If I decide to add performance checking and all those time metrics I just mentioned, that might be 1.5 points. It'll slap on an extra 0.5 because you're pulling back more data and taxing their systems further. You can also add screenshots showing what the test saw at each point, which some tools call snippets. That typically adds another point, or more in some cases, because it is very intensive.

Using google.com as an example, if I have the test log in for me, that's a step in the test. It also gets a point value because each action can have all the same data recorded. A multi-step or transaction-based test could cost you seven or eight points every time it runs. You buy an allotment of points up front, and as you consume those points, your balance goes down until you run out.

You buy an allotment of points for a year on a contract with the expectation that you have planned your tests and what you expect to do with them. They will give you a rounded estimate, and you negotiate the price of your contract based on the points you buy. They are tiered, so you will get better deals as you put more testing into the system. Smaller contracts tend to be more expensive per point than larger contracts where you get volume discounts.
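A small worked example may make the point model easier to budget. The point values below mirror the reviewer's own illustrative numbers (roughly 1 for availability, +0.5 for performance metrics, +1 for screenshots, 7 to 8 for a multi-step transaction); the test mix and run frequencies are hypothetical.

```python
# Estimating how fast a hypothetical test portfolio burns through a prepaid
# point allotment. Point costs follow the illustrative values described
# above; the counts and intervals are made up.
RUNS_PER_DAY = {5: 288, 15: 96}   # runs/day at 5-minute and 15-minute intervals

tests = [
    # (points per run, runs per day, number of tests like this)
    (1.0, RUNS_PER_DAY[15], 50),  # availability-only checks
    (1.5, RUNS_PER_DAY[15], 30),  # availability + performance timings
    (2.5, RUNS_PER_DAY[15], 10),  # + screenshots
    (8.0, RUNS_PER_DAY[5],  5),   # multi-step transaction tests
]

daily_burn = sum(points * runs * count for points, runs, count in tests)
annual_need = daily_burn * 365
print(f"Daily burn: {daily_burn:,.0f} points; "
      f"annual allotment needed: {annual_need:,.0f} points")
```

Running this kind of estimate before contract negotiation is essentially the "know your tests" planning the reviewer recommends later in the review.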

The price is reasonable, but we didn't save as much as we'd hoped. When we pitted them against Dynatrace, our Dynatrace sales guy was ready to negotiate. It's like buying a used car. We didn't save a ton, but we felt like the feature set we got kind of made up for that.  

Value-wise, ThousandEyes would've saved us almost 40 percent, but there were a couple of hangups with that product that my leadership was unwilling to overlook. I thought they were minor, especially considering we're always trying to save money. However, the leadership of the DevOps team decided that saving that much money wasn't worth it.

Ironically, ThousandEyes is now priced higher than Catchpoint. They have tried to regain our business, but now they're more expensive. There's an ebb and flow. If they're trying to get new business, they will probably treat you better than they do if you're an ongoing customer.

I give Catchpoint three out of five for affordability. They're transparent, there's no haggling, and they're willing to undercut anybody else in the industry, but I don't think there's anything special about their pricing model. I don't believe it's exceptionally fair either; it's more of a you-get-what-you-pay-for type of thing.

It's not easy to figure out the point scale, but there are built-in tools to quickly simulate a test and determine what it will cost. I think they are highly transparent. I don't remember there being a single hidden fee. There is a cost if you attempt on-prem testing, but no mysteries there. 

Which other solutions did I evaluate?

Catchpoint is one of several that can perform this same function. I generally consider the raw capabilities of all these tools to be somewhat commodified at this point. They're relatively mature, and I don't see much difference. 

You have an endpoint somewhere and configure a test. It runs from the endpoint and tells you if your stuff is available. That's generally what these tools do. Most importantly, we don't need people sitting in front of this tool all the time. They don't have to sit there and watch it.

We evaluated other options like ThousandEyes, and our existing product was Dynatrace, which was formerly branded as Gomez. There was one more. We took the three and did a bake-off.

What other advice do I have?

I give Catchpoint eight out of 10. It's excellent software, but when I start thinking of quality software, it's not the first thing that comes to mind. Depending on your cost model, I would've recommended ThousandEyes before they increased their prices. I don't think it's overwhelmingly good, but it's a solid eight.

My advice to Catchpoint users is to know your tests. If you don't, you will struggle when they try to estimate your point usage. Ultimately, we had to disable several features because we had calculated that poorly. Plan for increased adoption, because you can buy more points ad hoc, but they're not as cheap as buying them up front. At the same time, don't overbuy, because anything unused is wasted.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Other
Disclosure: I am a real user, and this review is based on my own experience and opinions.
Mahesh Doshi - PeerSpot reviewer
IT at an energy/utilities company with 10,001+ employees
Real User
Top 5
Very good reporting and alerting abilities but inadequate enhancement support
Pros and Cons
  • "The alerting feature is very good because it allows you to set MOS alerts at various network junctures or data points."
  • "Sometimes the solution does not register devices properly and that is a bug."

What is our primary use case?

Our company used the solution to provide a good view of traffic movement and issues along the way. For example, the solution can look at a voice quality issue between point A and point B and determine the path, exact issue, and issue point.

Unfortunately, we did not receive the required enhancement support, so we had to stop using the solution.

What is most valuable?

The reporting abilities are valuable. 

The alerting feature is very good because it allows you to set MOS alerts at various network junctures or data points. The MOS level measures the quality of voice and video against industry standards. Some junctures or data points might be more important to monitor than others. For example, you might want to receive alerts when the MOS value degrades in a CEO's office or on a VIP phone.
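As a rough illustration of that kind of MOS-based alerting, the check below applies a stricter threshold to VIP locations. This is not the product's actual configuration or API; the site names and threshold values are made-up assumptions.

```python
# Illustrative MOS-threshold alerting sketch; not the product's actual API.
# MOS (Mean Opinion Score) runs from roughly 1 (bad) to 5 (excellent);
# values around 4.0 and above are generally considered good voice quality.
from typing import Optional

# Hypothetical per-site thresholds: stricter for VIP locations.
MOS_THRESHOLDS = {
    "ceo-office": 4.2,
    "vip-phones": 4.0,
    "branch-default": 3.5,
}

def check_mos(site: str, measured_mos: float) -> Optional[str]:
    """Return an alert message if the measured MOS falls below the site's threshold."""
    threshold = MOS_THRESHOLDS.get(site, MOS_THRESHOLDS["branch-default"])
    if measured_mos < threshold:
        return f"ALERT: {site} MOS {measured_mos:.2f} is below threshold {threshold:.2f}"
    return None

# Example usage with made-up measurements.
for site, mos in [("ceo-office", 3.9), ("branch-default", 3.7)]:
    alert = check_mos(site, mos)
    if alert:
        print(alert)
```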

What needs improvement?

Sometimes the solution does not register devices properly and that is a bug. 

Device utilization between links needs to be added. 

Improvements are needed for the syslog and NetFlow messages. 

The technical support is not great. 

For how long have I used the solution?

I used the solution for two years. 

What do I think about the stability of the solution?

The solution is stable with no issues. 

What do I think about the scalability of the solution?

The solution is scalable based on the license you purchase. For example, our license covered 100 device points. 

How are customer service and support?

If enhancements are not required, then support for the product is good.

The solution was a new product so we wanted some enhancements. For example, we had some issues registering a device. We placed a support call and were hoping they would give us a timeline for resolving the issue one way or the other. But support refused to provide a timeline and that was where everything fell apart. The support system was not great and we rarely got support for required enhancements. 

Overall, technical support is rated a five out of ten. 

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

We previously used Route Explorer, which gave us visibility into BGP peering for all routers around the globe.

How was the initial setup?

The setup is not that difficult. Installation support was not great but our technical staff figured it out. 

What about the implementation team?

We implemented the solution in-house. 

The initial deployment took three months. At that point, the first piece of the solution was working.  

We wanted to use other features but couldn't get them to work so we approached technical support. Enhancements, product change requirements, and support requirements took an additional three months. 

What's my experience with pricing, setup cost, and licensing?

The solution is fairly expensive compared to other products and there is not much flexibility for reducing costs. 

Which other solutions did I evaluate?

About five years ago, we were using Route Explorer, but there were some shortcomings. We started exploring options, and around that same time, Cisco purchased the solution. Since we are a Cisco house, we were introduced to the solution as an alternative and opted to get started with a minimal license.

The solution was not exactly a one-to-one replacement for Route Explorer, but gave us a very good view of traffic movement and issues. 

What other advice do I have?

Before choosing the solution, it is important to rank your requirements from high to low priority. The solution has a lot of features but at a high price. If you will only use 30% of the solution to meet your requirements, then it would be better for you to explore other products. Find the solution that best fulfills your requirements. 

The solution itself is great. If I could use it properly, then I would rate it a seven out of ten. Unfortunately, support is a factor when determining usability or cost effectiveness. We did not receive the needed support so had to stop using the solution. 

For that reason, my overall rating for the solution is a five out of ten. 

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
ASM Naushad Alam - PeerSpot reviewer
Network Manager at a financial services firm with 1,001-5,000 employees
Real User
Top 5Leaderboard
Allows any number of customizations but lacks functionality for finding root causes
Pros and Cons
  • "The solution allows you to configure and customize how you want to collect information from servers or other systems."
  • "The solution needs to add features for finding loopholes or problems and their root causes."

What is our primary use case?

Our company is a financial organization, and we use the solution to check connectivity, CPU utilization, and hard disk utilization for all of our servers. We monitor networks to learn traffic conditions. We use threshold features to compare servers and routers and to catch each server's utilization before it reaches 100%, as sketched below.
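As a minimal sketch of that threshold idea, the snippet below classifies utilization so a server surfaces as a warning well before it hits 100 percent. The threshold values and server readings are assumptions for illustration; in practice the monitoring tool evaluates these thresholds itself.

```python
# Minimal utilization-threshold sketch; thresholds and readings are illustrative.

WARN_AT = 80.0      # percent utilization that should raise a warning
CRITICAL_AT = 95.0  # percent utilization that should raise a critical alert

def classify_utilization(pct_used: float) -> str:
    """Classify CPU or disk utilization so problems surface before reaching 100%."""
    if pct_used >= CRITICAL_AT:
        return "critical"
    if pct_used >= WARN_AT:
        return "warning"
    return "ok"

# Example with made-up server readings.
servers = {"db-01": 92.5, "app-02": 63.0, "file-03": 97.1}
for name, used in servers.items():
    print(f"{name}: {used:.1f}% used -> {classify_utilization(used)}")
```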

The solution is currently installed in our IT division. We have 6,000 employees, 50 divisions, and 500 servers. The solution monitors all firewalls including HCI, Check Point, Palo Alto, Cisco, and VMware. 

What is most valuable?

The solution allows you to configure and customize how you want to collect information from servers or other systems. There are many options for any type of customization such as for servers, health checks, and traffic conditions. 

What needs improvement?

The solution needs to add features for finding loopholes or problems and their root causes. 

For how long have I used the solution?

I have been using the solution for three years. 

What do I think about the stability of the solution?

We have not yet purchased the commercial version, so we lack deep technical knowledge of the solution. We do not yet fully know its key points or key features. We just use what we use, alongside WhatsUp Gold.

Based on our use only, stability is rated a seven out of ten. 

What do I think about the scalability of the solution?

The solution is definitely scalable, and you get those benefits with the commercial version. We are using the freeware version right now but plan to purchase it or get premium support from the OEM.

How are customer service and support?

Technical support is not provided with the freeware version so we don't have any support activity with the OEM. 

How was the initial setup?

Another team member handles setup but it is not complicated. 

What about the implementation team?

We implement the solution in-house. First, we install the solution on a VM. Later, we manage the server and install the VM there.

What's my experience with pricing, setup cost, and licensing?

The solution is open source so it is free.

Which other solutions did I evaluate?

We have also been using WhatsUp Gold for ten years. Zabbix is a Linux solution, and WhatsUp Gold is a Windows solution.

We have a basic license for WhatsUp Gold and have purchased upgrades. In our country, there is no local expert tied to the OEM, and the support team that is provided is not acceptable to us.

The solution, SolarWinds, and WhatsUp Gold are good for monitoring servers but lack the functionality to find problems or root causes for any system, application, or service. 

ThousandEyes and AppDynamics find actual problems with networks, applications, or services. We are looking more at these products because our goal is to find all loopholes. 

For example, the solution or WhatsUp Gold might identify a packet loss, but ThousandEyes or AppDynamics will drill down to the underlying problem, such as an HTML issue or a Cisco network problem. This approach is much more interesting.

What other advice do I have?

The solution is popular in any IT sector because it allows for any number of customizations. 

Since we are using the freeware version, we do not know all of its key features or how to use them. That is not necessarily a fault of the solution; it may just be our lack of knowledge.

Based on our use, the solution is rated a seven out of ten. 

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.