What is our primary use case?
I recently worked on a huge project for a new entity of a major semiconductor company. We had a greenfield deployment where we were building everything from scratch. The primary use case was to build a solution that meets the following requirements:
- Provides Zero Trust Network Access for all remote users.
- Provides seamless performance.
- Avoids the bottlenecks of traditional VPN concentrators, which become a single point of failure when all global traffic is funneled through one concentrator.
As a secondary use case, we did a couple of integrations with Cisco Viptela, an SD-WAN solution, for traffic optimization, traffic steering, branch-to-branch connectivity, and branch-to-cloud connectivity. We had to ensure adequate performance and Zero Trust, and have metrics and security compliance with all standard regulatory frameworks, such as GDPR for the European region. This was a huge deployment with a budget of close to two million dollars.
The plugin version is 2.1.086 (Innovation), and the platform version is 2.1.
How has it helped my organization?
It protects all app traffic so that users can gain access to all apps. There are definitely a lot of integrations. Prisma Access also inherits the App-ID capability from Palo Alto's Next-Gen firewalls, which is a unique selling point for Palo Alto. So, it inherently has the capability to see and monitor all traffic and understand all applications. If an application is tunneled through different ports or protocols just to masquerade the traffic and bypass traditional security controls, it won't work. Technically, you cannot bypass any of the security controls that Palo Alto has.
The Single Pass Parallel Processing (SP3) still works with Prisma Access. So, you can have all the integration that you want. It also integrates very well with Prisma SaaS, which is a new solution from Palo Alto.
It can build IPsec tunnels with any vendor's equipment, whether it is a very small router or a firewall. With regards to protocols, traditional IPS had a couple of restrictions in terms of inspection and other things, but Prisma Access understands every application and every packet. It can see the higher layers of a session. It is a great product to work with.
It secures both web-based and non-web-based apps. Traditionally, I used to have problems with web-based and non-web-based traffic. Prisma Access is a full tunnel, and it is fairly agnostic to the type of traffic. It recognizes everything such as a torrent, FTP, or UDP session. It recognizes web applications, non-web applications, or custom applications. We have a couple of applications that are Java-based, custom developed, and custom managed. It is capable of recognizing every application.
It understands all applications and all standard and custom signatures that you can configure. With regards to data leaks, it has network DLP functionality. So, you can configure regex or other patterns to inspect the traffic and look for sensitive data, such as credit card numbers and social security numbers. You can define the patterns and set up monitoring for notification, as in the sketch below.
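To make the pattern idea concrete, here is a minimal Python sketch of the kind of matching such a DLP rule performs. This is purely illustrative: the regexes, the Luhn check, and the function names are my own, not Prisma Access configuration syntax, where data patterns are defined through the management interface.

```python
import re

# Illustrative detection patterns; real DLP rules are defined in the product's
# management interface, not in Python.
CREDIT_CARD = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")   # 13-16 digits, optional separators
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")              # US SSN in XXX-XX-XXXX form

def luhn_ok(digits: str) -> bool:
    """Luhn checksum to reduce false positives on credit card matches."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(payload: str) -> list[str]:
    """Return a list of findings for a chunk of inspected traffic."""
    findings = []
    for m in CREDIT_CARD.finditer(payload):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            findings.append(f"credit-card: {m.group()}")
    findings.extend(f"ssn: {m.group()}" for m in SSN.finditer(payload))
    return findings

# Example: both values below are well-known test numbers, not real data.
print(scan("Card 4111 1111 1111 1111 and SSN 078-05-1120 in transit"))
```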
It provides all capabilities in a single, cloud-delivered platform.
It provides traffic analysis, threat prevention, URL filtering, and segmentation. We use it less for segmentation because we are also using their firewalls, and on the transport side, we are using SD-WAN. We cannot do away with any of these features, simply because we expect this platform to provide Next-Gen filtering capabilities. URL filtering is definitely important because we don't want to buy another dedicated solution. Threat prevention, which covers antivirus, anti-spyware, and all IPS functionalities, is absolutely mandatory for us. Technically, it does everything that a typical Next-Gen firewall is supposed to do, but it does it in the cloud, so you get all the scalability and visibility. We absolutely want all these features, and that was perhaps one of the reasons why we went with Prisma Access instead of another product.
It provides millions of security updates per day, which is important to us. There is AutoFocus, which is their threat intel platform, and we also get a lot of updates from Unit 42, their threat research team. We have incorporated that into our platform. It is absolutely essential for us to at least know about all known threats so that we can take steps to fix them well in advance. During the recent attacks involving SolarWinds and other solutions, we got timely feeds and notifications from Palo Alto automatically through the signature updates, as well as proactive updates from Palo Alto technical support. This is absolutely necessary for us, and it keeps all known threats at bay.
Our implementation is still in progress. We use its Autonomous Digital Experience Management (ADEM) features for performance-based monitoring, checking latency, and checking the end-user experience based not only on a couple of traditional synthetic metrics but also on actual user traffic. We don't have a standard benchmark to compare it with, but we definitely have complete visibility into who is doing what and what type of end-user experience they are getting. If someone is working from Seattle and needs to connect to Oregon, we definitely don't want the traffic to go all the way to some distant data center and take a zig-zag route; we want it to follow an optimal path. It provides us actionable insights into what's happening, and we can take corrective measures in the long run.
ADEM provides real and synthetic traffic analysis. We have a security operations team that ingests these insights into SIEM/SOAR platforms that do automatic remediation. This is quite crucial because suboptimal routing totally destroys the end-user experience. We also check where users are concentrated. Especially now, when most users are working from home or remotely, we need such insights so that we can enable the right points of presence within Prisma Access to ensure a better end-user experience.
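As a rough illustration of what "ingest and remediate" can look like downstream, here is a minimal Python sketch under assumed names: the metric fields, the threshold, and the playbook action are all hypothetical stand-ins for whatever the SIEM/SOAR integration actually exports, not ADEM's real schema or API.

```python
# Hypothetical check a SIEM/SOAR pipeline might run on ADEM-style metrics.
from dataclasses import dataclass

@dataclass
class SessionMetric:
    user: str
    pop: str            # point of presence the user is connected to
    latency_ms: float   # observed round-trip latency

LATENCY_THRESHOLD_MS = 150.0  # illustrative SLA threshold

def find_suboptimal(metrics: list[SessionMetric]) -> list[dict]:
    """Flag sessions whose latency suggests a suboptimal path for remediation."""
    alerts = []
    for m in metrics:
        if m.latency_ms > LATENCY_THRESHOLD_MS:
            alerts.append({
                "user": m.user,
                "pop": m.pop,
                "latency_ms": m.latency_ms,
                "action": "review-pop-assignment",  # e.g., a SOAR playbook trigger
            })
    return alerts

samples = [
    SessionMetric("alice", "us-west", 42.0),
    SessionMetric("bob", "eu-central", 210.0),  # likely routed through a distant PoP
]
print(find_suboptimal(samples))
```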
What is most valuable?
The model itself is great. It is a managed firewall. If you look at it purely from a technical standpoint, it is a globally distributed, managed firewall platform that sits on top of Google Cloud and AWS. It has a global presence, and that is one of the most important things because the client for whom I was building this design has a presence across the globe, including China, where there are a few constraints. Its presence and performance are super awesome.
It is a natural transition from Palo Alto Next-Gen firewalls. Of course, people who would be managing this platform need some knowledge transfer and training, but it is not a huge leap. That's the beauty of it.
It is geographically dispersed, and it sits on top of Google and AWS platforms. Therefore, you don't face the standard issues, such as latency or bandwidth issues, that you usually face in the case of on-prem data centers.
It is fairly simple in terms of administration. It has evolved from Palo Alto Next-Gen firewalls, which have been in the market for more than a decade, and essentially only the naming conventions differ. The web interface and the way of managing things are fairly easy.
It does everything they promise for this particular product. It has all the features they say it has. We are leveraging quite a few features, there are not many that we are not using, and all of them work the way they say.
Whatever we've configured is working as promised in terms of security, and I'm fairly certain about the security that it provides. From the security aspect, I would rate it a 10 out of 10.
What needs improvement?
It is a managed firewall, so when you run into issues and have to troubleshoot, there is a fair amount of restriction. You hit a couple of restrictions where you have no visibility into what is happening on the Palo Alto-managed infrastructure, and you need to get on a call with Palo Alto's technical support and have them work with you to fix the problem. I would definitely like them to work on visibility into what happens inside Palo Alto's infrastructure. It is not about getting our hands onto their infrastructure to troubleshoot or fix problems; it is just about getting more visibility, which would help us guide technical support folks to the area where they need to work.
For how long have I used the solution?
I've been using this solution for about one and a half to two years. I've been extensively designing, implementing, and troubleshooting it, and working with Palo Alto on feature and update suggestions.
What do I think about the stability of the solution?
The solution itself is fairly stable. We never faced any outages because of the underlying platform. So, its stability has been good, but I would like more visibility into what is going on inside Palo Alto's infrastructure.
They have also been fine in terms of any maintenance they have had to do outside the scheduled maintenance window.
What do I think about the scalability of the solution?
It is scalable. It sits on top of Google Cloud and Amazon AWS, so it is geographically distributed. The only place where we have connection issues is in China, but this is not because of Prisma Access. It is more related to the data privacy and regulatory restrictions that China has.
When we started two months ago, we had about 5,500 users, probably with more than 1,000 concurrent. We have 15 or 16 sites, we're growing at quite a good pace, and we will end up with somewhere close to 30 sites.
How are customer service and support?
We have a premium/enterprise license. We never had any problems with getting support, especially on weekdays. Having a premium/enterprise license definitely adds a few points. I would rate them somewhere between a seven and an eight. That's because there is a lack of visibility into what happens inside the infrastructure, and because we can't pinpoint a specific area to them, they need some time to look at where things are wrong.
With regards to backend maintenance, they have their own schedule of maintenance for their infrastructure. They keep us updated about that well in advance. The preventative maintenance and the communication from them have been fairly smooth, and we never had any issues.
How was the initial setup?
It was fairly straightforward. We started with a couple of proof of concepts, and we've been adding things. We are gradually getting new locations, new sites, and new deployments, and we never faced any challenges in terms of the capabilities of the platform. It has been fairly smooth.
This was a huge implementation with a couple of dozen sites, and it involved design, bill of materials, procurement, and implementation. The design phase took about two months, and the implementation took about a month.
The beauty of it is that a team of just five people managed the entire design and implementation. When it goes to the operations stage, we will definitely need more people because there are different pieces to it, but for design and implementation, five people managed everything.
What about the implementation team?
We implemented it ourselves.
What was our ROI?
This was a greenfield deployment, and we built it from scratch. So, there isn't much of a comparison between what used to happen in the past and what is happening now. However, because it is an OpEx-based or typical cloud-based model where you get charged for whatever you are using, it would potentially bring down the cost of consumption in terms of bandwidth. For example, if we have currently enabled all features, and tomorrow, we find a feature to be redundant and we don't want to use it for a particular location or data stream, we can do away with a couple of controls. We will only get charged for what we are using. It is security as a service and network as a service. As of now, I don't have the exact numbers for the savings that we are looking at, but down the line, it would definitely translate to huge savings in terms of OpEx and CapEx.
All such platforms require skilled professionals, and because it is derived from traditional Palo Alto firewalls, it is easy to learn. You don't need to spend a lot on training, and as of now, that's definitely a very important factor for us.
What's my experience with pricing, setup cost, and licensing?
We created a bill of materials and passed it on to a third party. It probably was WWT, but it was sourced by the client itself.
Based on what I have heard from others, it is a pricey solution compared to its peers, but I am not sure. However, considering the features that it offers, it breaks even. You get whatever they promise.
Which other solutions did I evaluate?
We had used Zscaler for a proof of concept, but we wanted segmentation capabilities within the data center as well as for on-prem locations. We wanted to have local segmentation capabilities. We wanted a solution that scales inside the cloud but also on-prem. Zscaler didn't have that model in the past, so we went ahead with Prisma Access. That was the only PoC that we did in addition to Prisma Access.
With regards to other integrations, the integrations with Cisco SD-WAN still exist, but those are not competitors of Prisma Access; they are just integrations.
What other advice do I have?
If you are making the natural transition from a purely on-premises model to a hybrid model with a significant number of sites, or you are moving toward Zero Trust Network Access to provide a decentralized VPN solution, you should definitely go for it. It provides the entire security stack, so you don't have to keep adding different solutions and then try permutations to make them work together. Prisma Access does everything beautifully. You don't need a lot of training or skill development to manage the solution because it has evolved from Palo Alto Next-Gen firewalls.
For DLP, we are not using Prisma Access because it is a network DLP. Being a semiconductor company, we needed a couple of controls to ensure that the entire flow of communication is very well defined. Therefore, we use different tools that auto-discover sensitive data, and then we put controls around it. For example, we have endpoint DLP, network DLP, and email DLP. We don't want to rely on Prisma Access alone because it sits outside of our perimeter; we want control as close to the source as we can get.
It didn't enable us to deliver better applications because this implementation was done in a silo. This project was not done very sequentially. It has been quite sporadic. The way the solution was built, applications were not at the center. We built it with a top-down approach. It was our first cloud-deployment model, and we haven't faced any problems with any of the standard applications. All the custom apps that we are bringing from the original plan are working the way they're supposed to. So, we never faced any challenges with regards to the performance or the security after deploying these applications. The entire setup is fairly agnostic to the types of applications that we already have, and a couple of them are not standard applications like Office 365, Workday, etc. They are fairly custom apps that you use in your lab environment or manufacturing utilities, and it works with them.
I would rate it a nine out of 10. Except for the visibility part, it is great. I am taking a few other client projects that are for Fortune 100 companies, and I am doing a lot of refreshes for them. Prisma Access is definitely going to be at the top of my list. It is not because I know this product inside out; it is because of the experience that our clients are getting with it, the security it provides, and the proactive updates that Palo Alto is pushing for Prisma Access.
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner.