When you look in the Layer 7 environment, you can actually see the code operating between the two parties. It could be a client and a server, a web server and a database server, or one database server and another database server. You can look at whatever those application components are and see how they're interoperating. If, for some reason, there's a runaway command or something that's inefficient, you can see the command that's being executed and the parties it's operating against. I did that with the infrastructure team and the application development team, and we were able to very quickly remedy problems with the application that the organization had been facing for an extended period of time, even before my project was initiated.

I've recently looked at their current offering and see that they can inspect Layer 7 network traffic to see what commands are being written and passed or returned. That's quite useful. It will help identify latency and whether it's related to the network traffic or the code itself. That, in turn, helps people debug more quickly. We can rectify issues in days as opposed to months. I like that we have quantifiable data in order to get true measures.

The solution provides more visibility into the monitoring of traffic. It helps address blind spots. It develops an intelligent fabric that gives you a more realistic view of the true traffic within the environment. When it comes to visibility into the infrastructure, it is imperative that the people applying these probes understand the reference architecture and understand their segmentation model. If an organization has a compliance responsibility, then normally the segmentation models are somewhat defined. If, for some reason, the organization is open, and there aren't too many like that anymore, then there are no problems.
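The latency question mentioned earlier, whether slowness comes from the network or from the code, can be sketched as a small exercise. This is a hypothetical illustration, not any vendor's API: it assumes we have per-command timestamps captured by probes on both sides of the wire, and the record fields and values are made up.

```python
# Hypothetical sketch: attributing per-command latency to the network
# versus the application code, assuming Layer 7 probes give us
# timestamps on both the client and server sides. All fields and
# values are illustrative.
from dataclasses import dataclass

@dataclass
class CommandTrace:
    command: str        # the Layer 7 command observed, e.g. a SQL query
    client_send: float  # seconds, client-side probe
    server_recv: float  # seconds, server-side probe
    server_send: float
    client_recv: float

def attribute_latency(trace: CommandTrace) -> dict:
    # Time spent on the wire, in both directions
    network = (trace.server_recv - trace.client_send) + \
              (trace.client_recv - trace.server_send)
    # Time the server spent executing the command
    processing = trace.server_send - trace.server_recv
    return {
        "command": trace.command,
        "network_ms": round(network * 1000, 1),
        "processing_ms": round(processing * 1000, 1),
        "bottleneck": "code" if processing > network else "network",
    }

trace = CommandTrace("SELECT * FROM orders", 0.000, 0.002, 0.950, 0.952)
print(attribute_latency(trace))
# Processing time dominates here, so the slow part is the query, not the wire.
```

With numbers like these in hand, the conversation between the infrastructure team and the application development team stops being speculative.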
Once you start to segment the environment and try to understand the relationships between the different assets in it, segmentation might block traffic and you might not be able to see it. When you're dealing with a Cisco fabric, if you have one host hanging off a distribution switch and another host hanging off the same distribution switch, that traffic may never hit the core switch. Sometimes people analyze NetFlow off the core, but if something is operating through a distribution switch, you will never see that traffic when you're dealing with a Cisco fabric. I define that as a Layer 2 blind spot. To rectify it, you have to have probes in the segments whose traffic never travels through the core switch, so that you see the full amount of traffic. Once you set up the fabric, it becomes one large network within your network environment, and it's not tracking any traffic within it until it hits a port somewhere.

Alerting is becoming more critical over time. I've been in this business for a long time. Twenty years ago, we'd be in a data center, we'd have a perimeter network, and we'd be done. The bottom line was that it would be very difficult for someone to come in and compromise my environment. Then we extended our environments from on-premises into colocation. Now we actually have traffic that goes over a wide area network, so our security profile changes over time. At first, we would normally do it through Layer 2 relationships or VPN-type environments, but now we're doing it over the internet. The instant we poke a hole through to the internet, even though we have a tunnel within it, we're exposed to a higher-threat environment. Now that we're in the cloud, we're going through a higher-threat environment. A couple of years ago, there was an exploit that focused on the chip itself. So even if I'm using a cloud provider, leveraging their hypervisor, and I have my own tenancy, at the end of the day everything runs through a processor.
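The Layer 2 blind spot described earlier can be sketched as a toy model: if NetFlow is collected only at the core switch, any conversation between two hosts on the same distribution switch never transits the core and is simply invisible. The topology and flows below are invented for illustration.

```python
# Toy model of the Layer 2 blind spot: NetFlow collected only at the
# core switch misses flows that stay local to a distribution switch.
# Topology and flows are made up for illustration.
TOPOLOGY = {            # host -> distribution switch it hangs off
    "host-a": "dist-1",
    "host-b": "dist-1",
    "host-c": "dist-2",
}

FLOWS = [                    # (src, dst) conversations on the network
    ("host-a", "host-b"),    # stays on dist-1: never hits the core
    ("host-a", "host-c"),    # crosses dist-1 -> core -> dist-2
]

def seen_at_core(flow):
    src, dst = flow
    # Only inter-switch traffic transits the core in this model
    return TOPOLOGY[src] != TOPOLOGY[dst]

visible = [f for f in FLOWS if seen_at_core(f)]
blind = [f for f in FLOWS if not seen_at_core(f)]
print("core NetFlow sees:", visible)
print("blind spot:", blind)  # host-a <-> host-b is invisible at the core
```

The fix is exactly what the text says: put probes below the core, on the segments whose traffic never reaches it.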
So when that processor exploit came through, the problem was wide open. At the end of the day, now more than ever, monitoring is important. In one case, somebody noticed a spike in traffic: somebody had compromised the environment. It was a ransomware attack. Because of that leading indicator, plus consideration of the compute environment as well, they could shut down the attack; if they hadn't had that capability, they would have been taken advantage of. The ability to look for those leading indicators, which can be fed back or introduced into your SIEM environment, makes sure that you're responding to any threats that may occur, and they are more prevalent now than ever before. The user interface they have right now is very powerful.
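The leading-indicator idea, flagging a traffic spike against a rolling baseline so the alert can be forwarded to a SIEM, can be sketched as follows. This is a minimal illustration; the window size, threshold multiplier, and traffic numbers are assumptions, not anything from a particular product.

```python
# Minimal sketch of a traffic-spike leading indicator: compare each
# sample against a rolling baseline (mean + k * standard deviation).
# Window size and multiplier are illustrative choices.
from statistics import mean, stdev

def spike_alerts(samples, window=5, multiplier=3.0):
    """Return indices of samples exceeding baseline mean + multiplier * stdev."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        threshold = mean(baseline) + multiplier * stdev(baseline)
        if samples[i] > threshold:
            alerts.append(i)
    return alerts

# Steady traffic (Mbps), then a sudden ransomware-sized spike:
traffic = [100, 102, 98, 101, 99, 100, 97, 480]
print(spike_alerts(traffic))  # -> [7]: only the final sample trips the alert
```

In practice an alert like this would be emitted as an event for the SIEM to correlate with other signals, rather than acted on in isolation.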