Our primary use cases are for container security and for auditing purposes.
We have multiple clusters.
Palo Alto enables us to know what security threats are happening in the background.
It provides the visibility and control we need regardless of how complex or distributed our cloud environment becomes.
Prisma Cloud gives us a single tool to protect all of our cloud resources and applications, without our having to manage and reconcile separate security and compliance reports.
It has also enabled us to reduce runtime alerts.
Prisma Cloud provides risk clarity at runtime and across the entire pipeline. It shows issues as they're discovered during the build phases.
The most valuable features are code security and container security.
It gives us awareness of any security breaches and whether there are any vulnerabilities.
Palo Alto provides security scanning for multi- and hybrid-cloud environments. We need to know where there is a threat, and Palo Alto monitors and reports on it.
It can be integrated with any alerting tool that has sufficient automation capability, and it can pull some metrics without an agent.
There are some operational issues, but testing with it is good.
The UI is the worst.
I have been using Palo Alto Networks for two years.
The stability is good. I would rate it an eight out of ten.
The scalability is good.
Their technical support isn't on an expert level. They need to improve.
Deployment takes around two to four weeks. Fully understanding the product takes around six months.
The initial setup was straightforward.
It does not require regular maintenance; you only need to update the agent around every six months.
I would rate Prisma Cloud by Palo Alto Networks a seven out of ten.
We are using the CSPM, CWP, and Code Security modules across our team. We are using the CSPM for our compliance system and the CWP for container security.
We are using multiple cloud accounts and the solution helps us with posture management. We have identified things that have optimized our posture across those accounts. We now have a single tool to protect all of our cloud resources.
We have also been able to integrate security into the CI/CD pipeline with touchpoints into existing DevOps processes. At runtime, it gives us risk clarity; the modules are really good and we have seen a decrease in alert investigation times.
Integration is very easy. And because it supports security that spans multi- and hybrid-cloud environments, it's very easy to use.
It's also a very good tool for helping us take a preventive approach to cloud security. The CSPM part is very easy.
It's pretty good when it comes to protecting the full cloud-native stack, but it depends on how you configure it and the kinds of rules you implement.
When it comes to compliance, the issue is that when we are exporting the reports, there is only a single compliance option. If I need to report on multiple compliance requirements, that feature isn't available. For example, I made a single report for ISO 27000 but I can't correlate it with GDPR.
Also, for the different modules we have to set up different policies. There should be a single console where we can implement and define all the rules in one go.
It provides visibility and control across our distributed cloud environments, apart from network segmentation. The network segmentation modules have very limited functionality.
And onboarding multiple Unix platforms is a little complex.
I have been using Prisma Cloud by Palo Alto Networks for one and a half years.
Overall, it's stable.
It's scalable.
The initial setup was slightly complex, when it came to integrating everything.
Almost all the CSPM tools are pretty expensive. I also explored Orca but it is also pretty expensive.
As of now, we are going to continue with this product, but we are also exploring the market. New tools keep coming out, so we have to keep up with the available tools and technologies and see what other kinds of features are out there.
From the security automation point of view, it's a fairly good tool, but it still needs some enhancements. Sometimes, it becomes somewhat complex to implement everything.
Overall, Prisma Cloud is a pretty good tool. The only part that stands out for improvement is the reporting.
We use Prisma Cloud in several ways and there are a lot of use cases. The first way that we use it is for inventory. It keeps a near real-time inventory of virtual compute, storage, and services. Second, we use it for monitoring and alerting on misconfigurations or other items of security significance. Next is compliance. We use it to monitor compliance with the Center for Internet Security (CIS) benchmarks.
Prisma provides security that spans multi- and hybrid-cloud environments. We have it configured to watch for compliance in AWS, the Google Cloud Platform, and very soon, Azure as well. This is important to us because our risk management organization mandated that we maintain this overwatch capability in any of our clouds that have virtual compute, storage, or workloads.
Prisma's comprehensiveness for protecting the full cloud-native stack is excellent.
Its comprehensiveness for securing the cloud-native development lifecycle is excellent. For us, the deploy functionality is not applicable, but the build and run capabilities are. It positively affects our operations and gives us optics that we wouldn't otherwise have, at the speed of the cloud.
Prisma provides the visibility and control that we need, regardless of how complex our environments are. This very much boosts our confidence in our security and compliance postures. It's also been deemed acceptable as a sufficient presence and efficacy of control by our internal auditors and external regulators alike.
This solution has enabled us to integrate security into our CI/CD pipelines and add touchpoints as a control stop in the release chain. The touchpoints are seamless and very natural to our automation.
Prisma Cloud is a single tool that we can use to protect all of our cloud resources without having to manage and reconcile several security and compliance reports. It unifies and simplifies the overall operations.
Using this tool provides us with risk clarity across the entire pipeline because we use it as a pre-deployment control, ensuring that the run state is known and the risk posture is known at runtime. Our developers use this information to correct issues using our tools for YAML, JSON, CloudFormation templates, and Terraform.
Prisma does so much pre-screening that it limits the number of runtime alerts we get. This is because those pre-deployment code controls are known before the run state.
The investigations capabilities enhance our process and lower incident response and threat detection time. However, it is an enabler and it is run in parallel with our SIEM, which is Splunk. Most of what we're going to do, investigation-wise, is going to be in Splunk, simply because there's better domain knowledge about the use of that tool in Splunk's query language.
The most valuable feature is the continuous cloud compliance monitoring and alerting. The way Prisma works is that it has a tentacle from Palo Alto's AWS presence into ours. That tentacle is an application programming interface, an API listener. That listener goes in and is entitled to look at all of Amazon Web Services' logging facilities. It can then do event correlation, and it can tattletale on misconfigurations such as an S3 storage bucket being made publicly available. We wouldn't otherwise be aware of that if Prisma didn't watch for it and alert on it.
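For illustration only, here is a minimal sketch of how that kind of API-driven alerting can be consumed programmatically: log in, then pull the open alerts for a public-bucket policy. The endpoint paths, header names, and filter fields below are assumptions for illustration, not taken from this review or from verified product documentation.

```python
# Illustrative sketch only: polling a CSPM API for open "public S3 bucket" alerts.
# The endpoint paths, header names, and filter fields are assumptions.
import os
import requests

BASE_URL = os.environ.get("CSPM_API_URL", "https://api.example-cspm.io")  # hypothetical

def get_token(session: requests.Session) -> str:
    # Hypothetical login call that exchanges access keys for a short-lived token.
    resp = session.post(f"{BASE_URL}/login", json={
        "username": os.environ["CSPM_ACCESS_KEY"],
        "password": os.environ["CSPM_SECRET_KEY"],
    })
    resp.raise_for_status()
    return resp.json()["token"]

def open_public_bucket_alerts(session: requests.Session, token: str) -> list:
    # Hypothetical alert query: open alerts for a "publicly readable S3 bucket" policy.
    resp = session.get(
        f"{BASE_URL}/v2/alert",
        headers={"x-auth-token": token},
        params={"alert.status": "open", "policy.name": "S3 bucket publicly readable"},
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

if __name__ == "__main__":
    with requests.Session() as s:
        for alert in open_public_bucket_alerts(s, get_token(s)):
            print(alert.get("id"), alert.get("resource", {}).get("name"))
```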
Prisma provides cloud workload protection and cloud network security in a single pane of glass, and these items are very important to us. It also provides cloud infrastructure entitlement management but identity and access management is not something that we use Prisma for. We implemented a PoC but we opted to use another tool for that use case.
The security automation capabilities provided by this product are excellent and industry-leading. Palo Alto bought a company called Twistlock, which makes a pre-deployment code scanner. They added its functionality to the feature set of Prisma in the form of this compute module. Now, we're able to use the Twistlock capability in our automation, which includes our toolchains and pipelines.
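As a rough illustration of what a pre-deployment scan step in a pipeline can look like, here is a sketch that fails the build when the scanner returns a non-zero exit code. The scanner invocation, flags, and console address shown are assumptions for illustration, not verified twistcli syntax.

```python
# Illustrative CI step: fail the build if a pre-deployment image scan fails.
# The scanner binary name, flags, and console URL below are assumptions.
import subprocess
import sys

IMAGE = "registry.example.com/myapp:latest"  # hypothetical image under test

def scan_image(image: str) -> int:
    # Hypothetical invocation of a console-connected image scanner.
    cmd = [
        "twistcli", "images", "scan",
        "--address", "https://console.example.com",  # hypothetical console URL
        "--details",
        image,
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    rc = scan_image(IMAGE)
    if rc != 0:
        print("Image scan failed policy thresholds; blocking the release.")
        sys.exit(rc)
    print("Image scan passed; continuing the pipeline.")
```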
This tool provides excellent features for preventative cloud security. We use all of the auto-remediation capabilities that Prisma offers out of the box. That "see something, do something" auto-remediation capability within Prisma keeps our human responders from having to do anything. It's automated, meaning that if it sees something, it will right the wrong because it has the entitlement to do that with its Prisma auto-remediation role. It's great labor savings and also closes off things much quicker than a human could.
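To make the "see something, do something" idea concrete, here is a minimal sketch of the kind of fix such an auto-remediation role applies, in this case turning on the S3 public-access blocks for a flagged bucket. This is not Prisma Cloud's own remediation code; it assumes AWS credentials with permission to modify the bucket.

```python
# Conceptual sketch of an auto-remediation style fix; not the product's own code.
# Assumes AWS credentials with permission to modify the flagged bucket.
import boto3

def block_public_access(bucket_name: str) -> None:
    """Turn on all four S3 public-access blocks for a flagged bucket."""
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

if __name__ == "__main__":
    block_public_access("example-flagged-bucket")  # hypothetical bucket name
```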
Palo just keeps bolting on valuable features. They just show up in the console, and they have their little question mark, down in the lower right-hand corner, that shows what's new, and what's changed for August or September. They just keep pouring value into the tool and not charging us for it. We like that.
We would like to have the detections be more contemporaneous. For example, we've seen detections of an overprivileged user or whatever it might be in any of the hundreds of Prisma policies, where there are 50 minutes of latency between the event and the alert. We'd always want that to be as quick as possible, and this is going to be true for every customer.
The billing function, with the credits and the by-workload-licensing and billing, is something that is a little wonky and can be improved.
We began using Prisma Cloud in October or November 2018, when it was still known as RedLock.
Stability-wise, it has been perfect.
The scalability is excellent. Palo keeps adding cloud support, such as for Alibaba, Oracle, and others.
We have approximately 5,500 employees. Our deployment is all-encompassing overwatch to all of our AWS accounts, of which there are 66. We also have two or three different folders within GCP.
We do have plans to increase our usage. This includes using it for more of its capabilities. For example, there is a workload protection link that we haven't fully embraced. There are also some network security features and some dashboarding and geo-mapping capabilities that we could make better use of.
The technical support is excellent. We have premium support with Palo Alto and I never have any critique for the quality or speed of support.
We have used this solution from the outset of our cloud journey. It began with Evident.io, then it became RedLock, and then it became Prisma Cloud.
The initial setup is very straightforward. We did it several times.
The first one was deployed to AWS, which probably took about an hour. Years later, as we adopted the Google Cloud, it was configured in probably half an hour.
Palo provides the necessary setup instructions and you can't go wrong, as long as you have the role entitlement set up for Prisma. The handshake only takes about an hour.
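For readers unfamiliar with that handshake, here is a hedged sketch of the customer-side step it describes: creating a cross-account, read-only role that the monitoring service can assume. The vendor account ID and external ID are placeholders, and the exact role requirements come from the vendor's own setup instructions.

```python
# Sketch of the customer-side "handshake": create a cross-account, read-only role.
# The account ID and external ID are placeholders for illustration.
import json
import boto3

VENDOR_ACCOUNT_ID = "111122223333"   # hypothetical vendor AWS account
EXTERNAL_ID = "example-external-id"  # hypothetical external ID from onboarding

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam = boto3.client("iam")
role = iam.create_role(
    RoleName="CloudMonitoringReadOnly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# SecurityAudit is an AWS-managed, read-only auditing policy.
iam.attach_role_policy(
    RoleName="CloudMonitoringReadOnly",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)
print("Role ARN to paste into the console:", role["Role"]["Arn"])
```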
Our deployment was done entirely in-house.
We have three people, full-time, who are responsible for the maintenance. Their role is policy management, meaning the rule sets. These are written in RQL, the RedLock Query Language, along with the out-of-the-box policies, which are ever dynamic. When there's a new policy, we have to go in and rationalize it with our cyber organization.
We have to scrutinize the risk rating that Palo puts on it, decide when we're going to turn it on or off, and consider the resulting incident response procedures associated with the alert happening.
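As an illustration of what those rule sets and the rationalization workflow can look like, here is a small sketch that keeps RQL-style policies under version control together with an internal re-rating. The query text follows the general shape of RQL but is an assumption, not copied from the product's policy library.

```python
# Illustration of rule sets kept under version control, mapped to an internal
# risk rating. The RQL-style query below is an assumption, not a product policy.
CUSTOM_POLICIES = {
    "ssh-open-to-world": {
        "rql": (
            "config from cloud.resource where cloud.type = 'aws' "
            "AND api.name = 'aws-ec2-describe-security-groups' "
            "AND json.rule = ipPermissions[*].ipRanges[*] contains 0.0.0.0/0"
        ),
        "vendor_severity": "high",
        "internal_severity": "critical",   # our cyber org's re-rating
        "enabled": True,
        "response_procedure": "IR-014: revoke the offending ingress rule",
    },
}

def policies_needing_review(policies: dict) -> list:
    # Flag policies where the internal rating disagrees with the vendor rating.
    return [name for name, p in policies.items()
            if p["vendor_severity"] != p["internal_severity"]]

print(policies_needing_review(CUSTOM_POLICIES))
```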
One metric that would be meaningful in this regard is that our company has had no cloud-based compromise.
You can expect a premium price because it is a premium quality product by a leading supplier.
We are a strategic partner with Palo Alto, meaning that we use all of their solutions. For example, we use their NG firewalls, WildFire, Panorama, Prisma, and all of their stuff. Because Prisma was an add-on for us, we get good pricing on it.
There are costs in addition to the standard licensing fees. The credits consumption billing model is new and we're going to be using more of the features. As we embrace further and we start to use these workload security protections, those come at an incremental cost. So, I would say that our utilization, and thus the cost, would trend up as it has in the past.
We evaluated several other products such as DivvyCloud, Dome9, and a product by Sophos.
We did a full comparison matrix and rationalization of each of the capabilities. Our sister company was using DivvyCloud at the time and as we do from time to time, we conferred with them about what their likes and dislikes were. They were moderately pleased with it but ultimately, we ended up going with Palo Alto.
My advice for anybody who is considering this product is to give it a good look. Give it a good cost-balance rationalization versus the cost of a compromise or breach, because it's your defense mechanism against exposure.
I would rate this solution a ten out of ten.
We had an internal debate regarding our firewall solution for the cloud. Initially we had a vendor that suggested we could build a whole environment using the Azure firewall, but we had requirements for Zero Trust architecture. We are essentially like a bank. We were planning to host some PCI services in the cloud and we were planning to create all the zones. When we looked at the feature set of Azure, we were not able to find Layer 7 visibility, which we had on our firewalls, and that is where the debate started. We thought it was better to go with a solution that gives us that level of visibility. Our team was comfortable with Palo Alto as a data center firewall, so we went for Prisma Cloud.
The comprehensiveness of the solution for protecting the full cloud-native stack is pretty good. It is doing a good job in three areas: identification, detection, and the response part is also very clear. We are able to see what is wrong, what is happening, and what we allowed, even for troubleshooting. If something goes bad, we need to check where it went bad and where it started. For example, if there is an issue that seems to be performance-related, we are able to look at the logs and the traffic flow and identify if the issue really is performance-related or if it is a security issue. Because we are new to the cloud, we are using a combination of different features to understand what is going on, if the application owner does not know what is wrong. We use the traffic analysis to find out what it was like yesterday or the day before and what is missing. Perhaps it is an authentication issue. We use it a lot for troubleshooting.
We have implemented Palo Alto's SOAR solution, Demisto, and have automated some of the things that our SOC team identified, related to spam and phishing. Those workflows are working very well. Things that would take an analyst between three and six hours to do can now be achieved in five to eight minutes because of the automation capabilities.
Overall, the Palo Alto solution is extremely good for helping us take a preventative approach to cloud security. One of the problems that we had was that, in the cloud, networking is different from standard networking. Although only a portion of our teams is trained on the cloud side, our engineers who were already using the Palo Alto platform were able to adapt quickly. We were able to take our own engineers, who were trained on the data center, and have them working on Prisma Cloud very quickly. But when we initially tried to do that with Azure itself, we had a lot of difficulty because they did not have the background in how the Azure cloud works.
Also, when you have a hybrid cloud deployment, you will have something on-prem. Maybe your authentication or certain applications are still running on-prem and you are using your gateway to communicate with the cloud. A lot of troubleshooting happens in both the data centers. When we initially deployed, we had separate people for the cloud and for the local data centers. This is where the complication occurred. Both teams would argue about a lot of things. Having a single solution, we're able to troubleshoot very quickly. The same people who work on our Palo Alto data center firewalls are able to use Prisma Cloud to search and find out what went wrong, even though it's a part of the Azure infrastructure. That has been very good for us. They were easily able to adapt and, without much training, they were able to understand how to use Prisma Cloud to see what is happening, where things are getting blocked, and where we need to troubleshoot.
The solution provides the visibility and control we need, regardless of how complex or distributed the cloud environments become. If you have traffic passing through multiple zones and you have your own data center as well, that is where it does the magic. Using Prisma Cloud, we're able to quickly troubleshoot and identify where the problem is. Suppose that a particular feature in Office 365 is not working. The packet capture capability really helps us. In certain cases, we have seen where Microsoft has had bugs and that is one area where this solution has really helped us. We have been able to use the packet capture capability to find out why it was not working. That would not have been possible in a normal solution. We are using it extensively for troubleshooting. We are capturing the data and then going back to the service provider with the required logs and showing them the expected response and what we are getting. We can show them that the issue is on their side.
When it comes to Zero Trust architecture, it's extremely good for compliance. In our data center, we did a massive project on NSX wherein we had seven PCI requirements. We needed to ensure that all the PCI apps pass through the firewall and that they only communicate with the required resources and that there was no unexpected communication. We used Prisma Cloud to implement Zero Trust architecture in the cloud. Even in between the subnets, there is no communication allowed. Only what we allowed is passing through the firewall. The rest is getting blocked, which is very good for compliance.
If I have to generate a report for the PCI auditor, it is very simple. I can show him that we have the firewall with the vulnerability and IPS capabilities turned on, and very quickly provide evidence to him for the certification part. This is exactly what we wanted and is one of the ways in which the solution is helping us.
Another of the great things about Prisma Cloud is that the management console is hosted. That means we are not managing the backend. We just use Prisma Cloud to find out where an issue is. We can go back in time and it is much faster. If you have an appliance, the administration and support of it are also part of your job. But when you have Prisma Cloud, you don't care about those things. You just focus on the issues and manage the cloud appliances. This is something that is new for us and extremely good. Even though we have a lot of traffic, the search capabilities are very fast, making the solution extremely good for troubleshooting.
Because the response is much faster, we're able to quickly find problems, and even things that are not related to networking but that are related to an application. We are able to help the developers by telling them that this is where the reset packet is coming from and what is expected.
We are using the new Prisma Cloud 2.0 Cloud Security Posture Management features. For example, there are some pre-built checklists that we utilize. It really helps us identify things, compared to Panorama, which is the on-prem solution. There are a lot of elements that are way better than Panorama. For instance, it helps us know which things we really need to work on, identifying issues that are of high importance. The dashboards and the console are quite good compared to Panorama.
If one of our teams is talking about slowness, we are able to find out where this slowness is coming from, what is not responding. If there is a lock on the database, and issues are constantly being reported, we are able to know exactly what is causing the issue in the backend application.
The main feature is the management console which gives us a single place to manage all our requirements. We have multiple zones and, using UDR [user-defined routing] we are sending the traffic back to Palo Alto. From there we are defining the rules for each application. What we like about it is the ease of use and the visibility.
The application visibility is amazing. For example, sometimes we don't know what a particular custom port is for and what is running on it. The visibility enables us to identify applications, what the protocol is, and what service is behind it. Within Azure, it is doing a great job of providing visibility. We know exactly what is passing through our network. If there is an issue of any sort we are able to quickly detect it and fix the problem.
The solution provides Cloud Security Posture Management, Cloud Workload Protection, Cloud Network Security, and Cloud Infrastructure Entitlement Management in a single pane of glass. When it comes to anomaly detection, because we have Layer 7 visibility, if there is something suspicious, even though it is allowed, we are able to identify it using the anomaly detection feature. We also wanted something where we could go back in time, in terms of visibility. Suppose something happened two hours back. Because of the console, we are able to search things like that, two hours back, easily, and see what happened, what change might have happened, and where the traffic was coming from. These features are very good for us in terms of investigation.
In addition, there are some forensic features we are utilizing within the solution, plus data security features. For example, if we have something related to financial information, we can scan it using Prisma Cloud. We are using a mixture of everything it offers, including network traffic analysis, user activity, and vulnerability detection. All these things are in one place, which is something we really like.
Also, if we are not aware of what the port requirements are for an application, which is a huge issue for us, we can put it into learning mode and use the solution to detect what the exact port requirements are. We can then meet to discuss which ones we'll allow and which ones are probably not required.
The only part that is actually tough for us is that we have a professional services resource from Palo Alto working with us on customization. One of the things that we are thinking about is, if we have similar requirements in the future, how can we get that capability in-house? The professional services person is a developer and he takes our requirements and writes the code for the APIs or whatever he needs to access. We will likely be looking for a resource for the Demisto platform.
The automation also took us time, more than we thought it would take. We had some challenges because Demisto was a third-party product. Initially, the engineer who is with us thought that everything was possible, but later on, when he tried to do everything, he was not able to do some things. We had to change the strategy multiple times. But we have now reached a point where we are in a comfort zone and we have been able to achieve what we wanted to do.
Also, getting new guys trained on using the solution requires some thought. If someone is already trained on Palo Alto then he's able to adapt quickly. But, if someone is coming from another platform such as Fortinet, or maybe he's from the system side, that is where we need some help. We need to find out if there is an online track or training that they can go to.
Related to training is the fact that changes made in the solution are reflected directly in the production environment. As of now, we are not aware of any method for creating a demo environment where we can train new people. These are the challenges we have.
We have been using Prisma Cloud by Palo Alto Networks for about eight months.
We have not had many issues with the solution's stability, and whatever challenges we have had have been in the public cloud. But with the solution itself there has only been one issue we got stuck on and that was NAT-ing. It was resolved later. We ran into some issues with our design because public internet access was an issue, and that took us some time. But it was only the NAT-ing part where we got stuck. The rest has all been smooth.
As of now, we have not put a load on the system, so we will only know about how it handles that when we start migrating our services. For now, we've just built the landing zones and only very few services are there. It will take like a year or so before we know how it will handle our load.
This is our main firewall solution. We are not relying on the cloud-based firewall as of now. All our traffic is going through Prisma Cloud. Once we add our workloads, we will be using the full capacity of the solution.
We have not had any issues up to now.
We initially tried to use the Azure firewall and the VPC that is available in Azure, but we had very limited capabilities that way. It was just a packet filtering solution with a lot of limitations and we ended up going back to Palo Alto.
The initial setup was straightforward. There was an engineer who really helped us and we worked with them directly. We did not have any challenges.
The initial deployment took us about 15 days and whatever challenges we had were actually from the design side. We wanted to do certain things in a different way and we made a few changes later on, but from the deployment and onboarding perspectives, it was straightforward.
We have a team of about 12 individuals who are using Prisma Cloud, all from the network side, who are involved in the design. On the security side, three people use it. We want to increase that number, but as I mentioned earlier, there is the issue of how we can train people. For maintenance, we have a 24/7 setup and we have at least six to eight engineers, three per shift. Most of them are from the network security side, senior network security engineers, who mainly handle proxy and firewall.
Our implementation strategy included using a third-party vendor, Crayon, who actually set up the basic design for us. Once the design was ready, we consulted with the Palo Alto team telling them that this was what we wanted to implement: We will have this many zones and these are the subnets. It didn't take much time because we knew exactly what our subnets were but also because the team that was helping us had already had experience with deployment.
Our experience with Crayon went well. Our timeline was extremely short and in the time that was available they did an excellent job. We reached a point where the landing zones were ready and whatever issues we had were resolved.
I can't say much about the pricing because we still have not started using the solution to its full capabilities. As of now, we don't have any issues. Whatever we have asked for has been delivered.
If you pay for three years of Palo Alto, it's better. If you're planning on doing this, it's obviously not going to be for one year, so it's better if you go with a three-year license.
The only challenge we have is with the public cloud vendor pricing. The biggest lesson I have learned is around the issues related to pricing for public cloud. So when you are doing your segmentation and design, it is extremely important that you work with someone who knows and understands what kinds of needs you will have in the future and how what you are doing will affect you in terms of costs. If you have multiple firewalls, the public cloud vendor will also charge you. There are a lot of hidden costs.
Every decision you make will have certain cost implications. It is better that you try to foresee and forecast how these decisions are going to affect you. The more data that passes through, the more the public cloud will charge you. If, right now, you're doing five applications, try to think about what 100 or 250 applications will cost you later.
If we had gone with the regular Azure solution, some of the concerns were the logging, monitoring, and search capabilities. If something was getting blocked how would we detect that? The troubleshooting was very complicated. That is why we went with Prisma Cloud, for the troubleshooting.
Microsoft is not up to where Palo Alto is, right now. Maybe in six months or a year, they will have some comparable capabilities, but as of now, there is no competitor.
Before choosing the Palo Alto product, we checked Cisco and Fortinet. In my experience, it seemed that Cisco and Fortinet were still building their products. They were not ready. We were lucky that when we went to Palo Alto, they had already done some deployments. They already had a solution ready on the marketplace. They were quickly able to provide us a demo license and walk us through the capabilities and our requirements. The other vendors, when we started a year ago, were not ready.
If you have compliance requirements such as PCI or ISO, going with Palo Alto would be a good option. It will make your life much easier. If you do not have Layer 7 visibility requirements and you do not have auditing and related requirements, then you could probably survive by going with a traditional firewall. But if you are a midsize or enterprise company, you will need something that has the capabilities of Prisma Cloud. Otherwise, you will have issues. It is very difficult to work with the typical solution where there is no log and you don't know exactly what happened and there is too much trial and error.
Instead of allowing everything and then trying to limit things from there, if you go with a proper solution, you will know exactly what is blocked, where it is blocked, and what to allow and what not to allow. In terms of visibility, Prisma Cloud is very good.
One thing to be aware of is that we have a debate in our environment wherein some engineers from the cloud division say that if we had an Azure-based product, the same engineer who is handling the cloud, who is the global administrator, would have visibility into where a problem is and could handle that part. But because we are using Palo Alto, which has its own administrators, we still have this discussion going on.
Prisma Cloud also provides security spanning multi- and hybrid-cloud environments, which is very good for us. We do not have a hybrid cloud as of now, but we are planning, in the future, to host infrastructure on different cloud providers. As of now we only have Azure.
Because Zero Trust is something new for us, we have actually seen a significant increase in alerts. Previously, we only had intra-zone traffic. Now we have inter-zone traffic. Zero Trust deployments are very different from traditional deployments. It's something we have to work on. However, because of the increased security, we know that a given computer tried to scan something during office hours, or who was trying to make certain changes. So alerts have increased because of the features that we have turned on.
We primarily use Prisma Cloud as a cloud security posture management (CSPM) module. Prisma Cloud is designed to catch vulnerabilities at the config level and capture everything on a cloud workload, so we mainly use it to identify any posture management issues that we are having in our cloud workloads. We also use it as an enterprise antivirus solution, so it's a kind of endpoint security solution.
Our setup is hybrid. We use SaaS also. We mostly work in AWS, but we have customers who work with GCP and Azure as well. About 60 percent of our customers use AWS, 30 percent use Azure, and the remaining 10 percent are on GCP. Prisma Cloud covers the full scope. And for XDR, we have an info technology solution that we use for the Gulf cloud. So we have the EDF solution rolled out to approximately 500 instances right now.
Prisma Cloud is used heavily across all our production teams. Some might not use the product directly, since our team is the service owner and we manage Prisma. Our team has around 10 members, and they are the primary users. On the engineering side, there are another 10 team members who use it. Those are the people who work hands-on with Prisma Cloud. Aside from that, there are some product teams that use Prisma indirectly. If we detect something wrong with their products, we take care of it, but I don't think they have active accounts on Prisma Cloud.
Prisma Cloud has been helpful from a security operations perspective. When a new product is getting onboarded or we are creating a new product, specifically when we need to create a new peripheral, it's inevitable that there will be some kind of vulnerability due to posture management. Everything we produce goes through CI/CD and it's largely automated. Still, there are some scenarios where we see gaps, and we can discover where those gaps exist, like if someone left an open port or an instance got compromised.
These kinds of situations are really crucial for us, and Prisma Cloud handles them really well. We know ahead of time if a particular posture is bad and we have several accounts in the same posture. Prisma gives us a deep dive with statistics and metrics, so we know which accounts are doing badly in terms of posture, how many accounts are out of alignment with the policy strategy, and how many are not compliant. It also helps us identify who might be doing something shady.
So we get some good functionality overall in that dashboard. Their dashboard is not customizable, however, so that's a feature we'd like to see. At the same time, what they do provide on their dashboard is pretty helpful. It enables us to make our posture management more mature. We're able to protect against or eliminate some potential incidents that could have happened if we didn't have Prisma.
Prisma Cloud is quite simple to use. The web GUI is powerful. Prisma Cloud scans the overall architecture of the AWS network to identify open ports and other vulnerabilities, then highlights them. It's really good at managing compliance. We get out-of-the-box policies for SOC 2, FedRAMP, and other compliance frameworks, so we do not need to tune most of the rules because they are quite compliant and useful, and they don't generate too many false positives.
And in terms of Prisma Cloud's XDR solution, we do not have anything else in scope at present that can give us the same in-depth visibility at the endpoint level. So if something goes bad on the endpoint, Prisma's XDR solution can really go deep down to identify which process is doing malicious activity, what the network connection was, how many times it has been opened, and who is using that kind of solution or that kind of process. So it's a long chain and its graphical representation is also very good. We feel like we have power in our hands. We have full visibility into what is happening at the endpoint level.
When it comes to securing new SaaS applications, Prisma Cloud is good. If I had to rate it, I would say seven out of 10. It gives us really good visibility. In the cloud, if you do not know what you are working with or you do not have full visibility, you cannot protect it. It's a good solution, at least to cover CSPM. We have other tools, like Qualys, that take care of vulnerability management at the operating-system level, but when it comes to the configuration level, Prisma is the best fit for us.
Prisma Cloud's dashboards should be customizable. That's very important. Other similar solutions are more elastic, so you have the power to create customized dashboards. In Prisma Cloud, you cannot do that. Prisma also should allow users to fully automate the workflow for an identified alert. Right now, it can give us a hint about what has happened and there is an option to remediate it, but for some reason, that doesn't work.
Another pain point is integration with ticketing solutions. We need bidirectional integration of Prisma Cloud and our ticketing tool. Currently, we only have one-way integration. When an alert appears in Prisma Cloud, it shows up in our ticketing tool as well. But if someone closes that ticket in our ticketing tool, that alert doesn't resolve in Prisma Cloud. We have to do it manually each time, which is a waste of time.
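To make the missing piece concrete, here is a hedged sketch of what bidirectional sync could look like: a webhook handler that resolves the corresponding alert when a ticket is closed. The webhook payload shape and the dismiss endpoint are assumptions for illustration.

```python
# Sketch of the missing bidirectional sync: when a ticket closes in the ticketing
# tool, resolve the matching alert via the CSPM API. Payload shape and the
# dismiss endpoint are assumptions for illustration.
from flask import Flask, request
import requests

app = Flask(__name__)
CSPM_URL = "https://api.example-cspm.io"   # hypothetical
CSPM_TOKEN = "..."                         # injected from a secret store in practice

@app.route("/ticket-webhook", methods=["POST"])
def ticket_closed():
    event = request.get_json(force=True)
    if event.get("status") == "closed" and event.get("cspm_alert_id"):
        # Hypothetical "dismiss alert" call, keyed by the alert ID stored on the ticket.
        requests.post(
            f"{CSPM_URL}/alert/dismiss",
            headers={"x-auth-token": CSPM_TOKEN},
            json={
                "alerts": [event["cspm_alert_id"]],
                "dismissalNote": event.get("ticket_id", ""),
            },
        ).raise_for_status()
    return {"ok": True}
```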
I am not sure how much Prisma Cloud protects against zero-day threats. Those kinds of threats work in different kinds of patterns, like identifying some kind of CVE, that kind of stuff. But considering the way it works for us, I don't think it would be able to capture a zero-day threat if it is a vulnerability, because Prisma Cloud doesn't actually capture vulnerabilities; it captures errors in posture management. That's a different thing. I don't know if there is any zero-day that Prisma can identify in AWS instantly. Probably, we could ask them to create a custom policy, but that generally takes time. We haven't seen a scenario where we actually had to handle a zero-day threat with Prisma Cloud, because that is mostly covered by Qualys.
I've been using Prisma Cloud for almost two years now.
Prisma Cloud is quite stable. At times, it goes down, but that's very rare. We have some tickets with them, but when we see some issues, they sort it out in no time. We do not have a lot of unplanned downtime. It happens rarely. So I think in the last year, we haven't seen anything like that.
Prisma Cloud is quite scalable. In our current licensing model, we're able to heavily extend our cloud workload and onboard a lot of customers. It really helps, and it is on par with other solutions.
I think Prisma Cloud's support is quite good. I would rate them seven out of 10 overall. They have changed their teams. The last team was comparatively not as good as the one we have right now. I would rate them five out of 10, but they have improved a lot. The new team is quite helpful. When we have an issue, they take care of it personally if we do not get an answer within the terms of the SLA. We tend to escalate to them and get a prompt answer. The relationship between our management and their team is quite good as well.
We have a biweekly or weekly call with their tech support team. We are in constant communication about issues and operating problems with them. It's kind of a collab call with their tech support team, and we have, I think, a monthly call with them as well. So whenever we have issues, we have direct access to their support portal. We create tickets and discuss issues on the call weekly.
Transitioning to the new support team was relatively easy. They switched because of the internal structure and the way they work. Most of the engineering folks work out of Dublin and we are in India. The previous team was from the western time zone. That complicated things in terms of scheduling. So I think the current team is right now in Ireland and it's in the UK time zone. That works best for us.
We have an engineering team that does the implementation for us, and our team specifically handles the operations once the product is set up. The product is then handed over to us for the daily BAU work, accessing the security side, the CSPM kind of module. We are not involved directly in the implementation. When the product gets onboarded, it's handed over to us. We handle the management side, like if you need to create a new rule or you need to find teams for the rule. But the initial implementation is handled by our engineers.
I would rate Prisma Cloud six out of 10. I would recommend it if you are using AWS or anything like that. It's quite a tool and I'm impressed with how they have been improving and onboarding new features in the past one and a half years. If you have a proper logging system and can implement it properly within your architecture, it can work really well.
If you are weighing Prisma Cloud versus some CASB solution, I would say that it depends on your use case. CASBs are a different kind of approach. When someone is already using a CASB solution, that's quite a mature setup while CSPM is another side of handling security. So if someone has CASB in place and feels they don't need CSPM, then that might be true for a particular use case at a particular point in time. But also we need to think of the current use case and the level of maturity at a given point in time and consider whether the security is enough.
Previously, we were primarily using Amazon Web Services in a product division. We initially deployed RedLock (Prisma Cloud) as a PoC for that product division. Because it is a large organization, we knew that there were Azure and GCP for other cloud workloads. So, we needed a multi-cloud solution. In my current role, we are primarily running GCP, but we do have some presence in Amazon Web Services as well. So, in both those use cases, the multi-cloud functionality was a big requirement.
We are on the latest version of Prisma Cloud.
It is very important that Prisma Cloud provides security spanning multi-cloud environments, where you have multiple cloud environments across Amazon, Azure, and GCP. Being able to centralize all those assets, have visibility, and set policies and rules within one dashboard when you have multiple cloud accounts is a big advantage.
The comprehensiveness of Prisma Cloud for securing the entire cloud-native development lifecycle was shown when Palo Alto bought Twistlock and integrated some of the container security pieces, particularly for containers, Docker, and Kubernetes, building them into the Prisma Cloud Compute tab. Having that functionality from Twistlock, which is more focused on Docker and containers, filled in some of the space where the original Prisma RedLock piece was a little more focused on just the API, e.g., passive scanning. The integration of Twistlock into Prisma Cloud Compute definitely expanded this functionality into the container and Docker space, which is a big growth area in the cloud as well.
Prisma Cloud has enabled us to take a very strong preventive approach to cloud security. One of the hardest things with cloud is getting visibility into workloads. With Prisma Cloud, you can go in and get that visibility, then set up policies to alert on risky behavior, e.g., if there are security groups or firewall ports open up. So, it is very helpful in preventing configuration errors in the cloud by having visibility. If there are issues, then you can find them and fix them.
It educates and trains cloud operators on how to better design their different cloud and infrastructure deployments. Prisma Cloud has very good remediation steps built in. So, if you do find an issue, they will give you steps like, "Here is how you go into the Console and make this change to close out this issue and prevent it in the future." So, it is a strong tool for the prevention and protection of the cloud, in general.
We have gone in and done some tuning to remove alerts that were false positives. That reduced some of the alerts. Then, as our team has gone in and fixed issues, we have seen from the metrics and tracking of Prisma Cloud that alerts have been reduced.
The compliance tabs were helpful just to have visibility into the assets, as were the asset management tabs. In the cloud, everything is very dynamic and ephemeral. So, being able to see a dynamic asset inventory of what we have in our cloud environments was a huge plus, just to have that visibility in a dashboard instead of having to dump things into a spreadsheet. If you try to do asset inventory in spreadsheets, then five minutes later it changes because the cloud is dynamic. So, the asset inventory and compliance tabs are strong.
When the cloud team makes a change that may introduce some risk, then we get alerts.
We pretty heavily used the Resource Query Language (RQL) and the investigate tab to find what instances and cloud resources are externally facing and might be higher risk, looking for particular patterns in the resources.
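For illustration, here are examples of the kind of investigate-style queries described above, expressed as RQL-like strings. They follow the general shape of RQL but are assumptions, not queries copied from the product.

```python
# Examples of investigate-style queries kept handy, written as RQL-like strings.
# These follow the general RQL shape but are assumptions for illustration.
INVESTIGATE_QUERIES = {
    # EC2 instances with a public IP attached (externally facing)
    "externally_facing_instances": (
        "config from cloud.resource where cloud.type = 'aws' "
        "AND api.name = 'aws-ec2-describe-instances' "
        "AND json.rule = publicIpAddress exists"
    ),
    # Network traffic from the internet reaching workloads on database ports
    "inbound_db_traffic_from_internet": (
        "network from vpc.flow_record where source.publicnetwork IN "
        "('Internet IPs') AND dest.port IN (3306, 5432)"
    ),
}

for name, rql in INVESTIGATE_QUERIES.items():
    print(f"{name}:\n  {rql}\n")
```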
Prisma Cloud provides the following in a single pane of glass within a dashboard: Cloud Security Posture Management, Cloud Workload Protection, Cloud Network Security, and Cloud Infrastructure Entitlement Management. Without that, it is particularly challenging, especially in a multi-cloud environment, where you would have to log into Google Cloud and look for your infrastructure and alerting within Google, and then switch over to Amazon and log into the AWS Console to do some work with Amazon. Having central visibility across multiple cloud environments is definitely important when you have different sources and different dashboards for the clouds; they will still be separate, but you get some centralization within that dashboard.
The solution’s security automation capabilities are definitely good. We use some of the automation within the alerting, where if Prisma Cloud detected a change and there was a certain threshold, e.g., if it was above a medium or a high risk issue, then we would send off an alert that would go to our infrastructure team/Slack channel, creating a Jira ticket. The automation with Slack and Jira have been very good feature points.
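Here is a minimal sketch of that routing logic: escalate medium and high findings to a Slack channel and open a Jira ticket. The alert payload fields, Slack webhook URL, and Jira project key are assumptions for illustration.

```python
# Sketch of the alert routing described above: forward medium/high findings to
# Slack and open a Jira ticket. Payload fields and URLs are assumptions.
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical
JIRA_URL = "https://jira.example.com"                              # hypothetical
SEVERITIES_TO_ESCALATE = {"medium", "high"}

def route_alert(alert: dict) -> None:
    if alert.get("severity", "low") not in SEVERITIES_TO_ESCALATE:
        return
    summary = f"[{alert['severity'].upper()}] {alert['policyName']} on {alert['resourceName']}"
    # Post to the infrastructure team's Slack channel.
    requests.post(SLACK_WEBHOOK, json={"text": summary}).raise_for_status()
    # Open a Jira ticket for tracking and follow-up.
    requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        auth=("svc-account", "app-token"),  # placeholders
        json={"fields": {
            "project": {"key": "INFRA"},
            "issuetype": {"name": "Task"},
            "summary": summary,
            "description": alert.get("recommendation", ""),
        }},
    ).raise_for_status()

route_alert({
    "severity": "high",
    "policyName": "Security group allows 0.0.0.0/0 on port 22",
    "resourceName": "sg-0abc123",
    "recommendation": "Restrict ingress to the corporate CIDR.",
})
```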
The Prisma Cloud tool identifies for the security team the resource in the cloud that is the offender, such as, the context, the resource in the cloud, what is the cloud account, and the cloud environment that the resource is in. Then, there is always very good context on remediation, e.g., how do we go in and fix that issue? Do we either go through automation or log into the Cloud Console to do some remediation? The alerts include the context that is needed as well as the risk ranking and severity, whether it is a high, medium, or low issue.
The Prisma Cloud Console always has good remediation steps, whether it is going into the Console, updating a Cloud Formation, or Terraform scripts. The remediation guidance is always very helpful from Prisma Cloud.
Some of the usability within the Compute functionality needs improvement. When Palo Alto added the Twistlock functionality, they added a Compute tab on the left side of the navigation, and some of that navigation is a little dense. There is a lot of navigation with tabs and dropdowns. So, improving some of the areas where there is a very dense amount of buttons and drop-down menus is probably the only thing, and that comes from having a lot of features. Because there are so many buttons, navigating around the platform can be a little challenging for new users.
They could improve a little bit of the navigation, where I have to kind of look through a lot of the different menus and dropdowns. Part of this just comes from it having so many awesome features. However, the navigation can sometimes be a little bit like, "I can't remember where the tab was," so I have to click and search around. This is not a big negative point, but it is definitely an area for improvement.
I started using this solution when it was still called RedLock. Before Palo Alto bought RedLock, I used RedLock for about a year and then for another year or two once Palo Alto bought them, rebranding them as Prisma Cloud. So, I have been using it for about three or four years.
It is very stable and solid. We haven't really had any issues with the dashboard. The availability is there. The ability to log in and get near real-time data on our cloud environment is very good. Overall, the stability and accessibility has been good.
We use it pretty much daily, several days a week. We are licensed for 200 workloads in Prisma Cloud.
We are definitely still working on maturing some of our operations. We have a pretty small infrastructure team; just two engineers who are focused on infrastructure. We are trying to automate as much as we can, and Prisma Cloud supports most of that. There are still some cases where you have to log into the Console and do some clicking around. However, for the most part, we are trying to automate as much as we can to scale those operations with a very small infrastructure and security team.
Their customer and technical support is very good. They helped us on scoping, getting an estimate for how many workloads and resources that we had. Their support team helped us through some issues on the configuration in the API on the Defender side. We had a couple questions that came up and the customer success and support engineers were very responsive and helpful.
The sales team was really good. We leveraged some of our relationships, working extensively with some of the leadership at Palo Alto in Unit 42 on their threat team. The sales team gave us a pretty good deal right before the end of the year, last year. So, we were able to get a good discount, so we were able to get the purchase done. Overall, it was a good experience.
This was a new implementation for our company.
Deploying the baseline for Prisma Cloud, its API configuration, was straightforward. To set up the API roles and hook in the API connectivity, we were able to do that within a couple of hours. The Prisma Cloud piece at the API level was very quick. The Defender agents were a bit more complicated because we had to deploy the Compute Defender agents into our containers, Docker, and Kubernetes. That was a little more complex, because we were deploying, not just connecting an API. We were deploying agents within our environment. So, the API side was very simple and fast. The Defender side was a bit more complicated.
We are still working on expanding and deploying some more Defender agents. The API piece was deployed within about a week, which was very fast. On the Defender side, with the infrastructure team's input, it took us several weeks to get the Defender agents deployed.
When we deployed Prisma Cloud, we established some security baselines with our infrastructure team for what was running in the cloud. They were using some automation and scripting and thought everything was okay with the script: you just run a script and it deploys the server and infrastructure in the cloud. What we found was that there were some misconfigurations. They had a default script that was opening up some ports that were not needed. So, we worked with the infrastructure team, went back, and said, "Okay, these ports were uncovered by our Prisma Cloud scanning. Is there a business use? Is there any valid reason for these ports to be open?" The team said, "No, we don't really need these ports." It was just a default that had been added into the script they use to deploy in Google or AWS. So, we worked with them to go back and change some of their defaults and scripts. Now, in future cases, when they deploy the Terraform script, it makes sure that those ports are automatically closed.
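As a hedged illustration of the kind of follow-up check this enables, here is a sketch that verifies no security group still opens the unneeded ports to the world after the Terraform defaults were tightened. The port list and region are assumptions.

```python
# Illustrative follow-up check: after defaults are tightened, verify no security
# group still opens the unneeded ports to the world. Ports/region are assumptions.
import boto3

UNNEEDED_PORTS = {8080, 8443}  # hypothetical ports the default script used to open

def world_open_violations(region: str = "us-east-1") -> list:
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                open_to_world = any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
                )
                if open_to_world and perm.get("FromPort") in UNNEEDED_PORTS:
                    findings.append((sg["GroupId"], perm.get("FromPort")))
    return findings

print(world_open_violations())
```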
We purchased directly from Palo Alto. We didn't use a system integrator. We purchased directly from them and went through their support team. I have a good relationship with the sales and customer success team at Palo Alto just from past relationships. So, we did a direct purchase.
We will eventually see return on investment just out of the automation and the ability to scale the platform up.
We have reduced alert investigation times by approximately a couple hours a week.
The pricing is good. They gave us some good discounts right at the end of the year based on the value that it brings, visibility, and the ability to build in cloud, compliance, and security within one dashboard.
We did look at a couple other vendors who do similar cloud workload protections. Based on the relationships that we have with Palo Alto, we knew that Palo Alto was kind of the leader in this space. We had hands-on experience with the tool and Palo Alto was also a customer of ours. So, we had some strong relationships and Palo Alto was the leader.
We did some demos with different tools that were not as comprehensive. We had some tools that we looked at which just focused more on the container side and some that focused more on the cloud API layer. Since Prisma Cloud has unified some of these different pieces into one platform, we ultimately decided that Prisma Cloud was going to be the best solution for us.
It is a good tool. Work with your stakeholders and cloud teams to implement Prisma Cloud within as many environments as you can to get that rich amount of data, then come up with a strong strategy for integrations and alerting. Prisma Cloud has a lot of integrations out-of-the-box, like ServiceNow, Jira, and Slack. Understand what your business teams need as well as what your engineering and developers need. Try to work on the integrations that allow for the maximum amount of integration and automation within a cloud environment. So, work with your business teams to come up with a plan for how to implement it in your cloud, then how to best integrate the tooling and alerting.
While Prisma Cloud does have the ability to do auto-remediation, which is a part of their automation, we didn't turn any of that on now because those features have a tendency to sometimes break things. For example, it will automatically shut down a security group or server that can sometimes have an impact into availability. So, we don't use any of the auto-remediation features, but we do have automation setup with Jira and Slack to create tickets and events for our ticketing and infrastructure teams/Slack channels.
We definitely want to continue to explore and build-in some of the Shift Left principles, getting the tool into our dev cycles earlier. We do have some plans to expand more on the dev side. I am hiring an AppSec engineer who will be focused more on the development and AppSec side. That is something that is in our roadmap. It has just been something that we have been trying to work on and get into our backlog of a lot of projects.
I would rate this solution as a nine out of 10.
We have a very large public cloud estate. We have nearly 300 public cloud accounts, with almost a million things deployed. It's pretty much impossible to track all of the security and the compliance issues using anything that would remotely be considered homegrown—scripts, or something that isn't fully automated and supported. We don't have the time, or necessarily even the desire, to build these things ourselves. So we use it to track compliance across all of the various accounts and to manage remediation.
We also have 393 applications in the cloud, all of which are part of various suites, which means there are at least 393 teams or groups of people who need to be held accountable for what they have deployed and what they wish to do.
It's such a large undertaking that automating it is the only option. To bring it all together, we use it to ensure that we can measure and track and identify the remediation of all of our public cloud issues.
The solution provides risk clarity at runtime and across the entire pipeline, showing issues as they are discovered during the build phases. Our developers are able to correct them using the tools they use to code. It gives our developers a point to work towards. If the information provided by this didn't exist, then we wouldn't be able to give our developers the direction that they need to go and fix the issues. It comes back to ownership. If we can give full ownership of the issues to a team, they will go fix them. Honestly, I don't care how they fix them. I don't really mind what tools they use.
It is reducing run-time alerts. It's still in the process of working on those, but we have already seen a significant decrease, absolutely.
The entire concept is the right thing for us. It's what we need. The application is the feature, so to speak. What it does is what we want it for: looking across the various cloud estates and providing us with information about what's going on in our cloud, where it is, and when it happened. The product is the most valuable feature. It's not a be-all and end-all product. That doesn't exist. But it's a product with a very specific purpose, and we bought it for that very specific purpose.
When it comes to protecting the full cloud native stack—the pure cloud component of the stack—it is very good.
One of the main reasons we like Prisma Cloud so much is that they also provide an API. You can't expect to give someone an account on Prisma Cloud, or on any tool for that matter, and say, "Go find your things and fix them." It doesn't work like that. We've got to be able to clearly identify who owns what in our organization so that we can say, "Here's a report for your things and this is what you must go and fix." We pull down the information from the API that Prisma Cloud provides, which is multi-cloud, multi-account—hundreds and hundreds of different types of alerts graded by severity—and then we can clearly identify that these alerts belong to these people, and they're the people who must remediate them. That's our most important use case, because if you can't identify users, you can't remediate. No user is going to sit there going through over a million deployed things in the public cloud and say, "That one's mine, that one's not, that's mine, that's not." It's both the technology that Prisma Cloud provides and the ability to identify things distinctly, that comprise our use case.
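For illustration, here is a minimal sketch of that ownership-report flow under stated assumptions: pull open alerts from the API, map each resource to an owning team via a tag, and emit one report per team. The endpoint path, field names, and the "owner" tag are assumptions, not taken from this review.

```python
# Sketch of the ownership-report flow: pull alerts, group by an "owner" tag,
# and emit per-team findings. Endpoint and field names are assumptions.
from collections import defaultdict
import requests

BASE_URL = "https://api.example-cspm.io"   # hypothetical
TOKEN = "..."                              # obtained from a prior login call

def fetch_open_alerts() -> list:
    resp = requests.get(
        f"{BASE_URL}/v2/alert",
        headers={"x-auth-token": TOKEN},
        params={"alert.status": "open"},
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

def reports_by_owner(alerts: list) -> dict:
    per_team = defaultdict(list)
    for alert in alerts:
        resource = alert.get("resource", {})
        owner = resource.get("data", {}).get("tags", {}).get("owner", "unassigned")
        per_team[owner].append({
            "severity": alert.get("policy", {}).get("severity"),
            "policy": alert.get("policy", {}).get("name"),
            "resource": resource.get("name"),
            "account": resource.get("accountId"),
        })
    return per_team

for team, items in reports_by_owner(fetch_open_alerts()).items():
    print(team, len(items), "open findings")
```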
It also provides the visibility and control we need, regardless of how complex or distributed our cloud environments become. It doesn't care about the complexity of our environment. It gives us the visibility we need to have confidence in our compliance. Without it, we would have no confidence at all.
It is also part of our DevOps processes and we have integrated security into our CI/CD pipeline. To be honest, those touchpoints are not as seamless as they could be because our processes do rely on multiple tools and multiple teams. But it is one of the key requirements in our DevOps life cycle for the compliance component to be monitored by this. It's a 100 percent requirement. The teams must use it all the time and be compliant before they move on to the next stage in each release. It is a bit manual for us, but that's because of our environment. It's given our SecOps teams the visibility they need to do their jobs. There's absolutely no chance that those teams would have any visibility, on a normal, day-to-day basis, simply because the SecOps teams are very small, and having to deal with hundreds of other development teams would otherwise be impossible for them.
Based on my experience, the product—especially the interface and some of the product identification components—is not as customizable as it could be. But it makes up for that with the fact that we can access the API and then build our own systems to read the data and then process and parse it and hand it to our teams. At that point, we realized, "Okay, we're never going to have it fully customizable," because no team can expect a product, off-the-shelf, to fit itself to the needs of any organization. That's just impossible.
So customization from our perspective comes through the API, and that's the best we can do because there is no other sensible way of doing it. The customization really lives in the API, because that's what you end up using.
In terms of the product having room for improvement, I don't see any product being perfect, so I'm not worried about that aspect. The RedLock team is very responsive to our requirements when we do point out issues, and when we do point out stuff that we would like to see fixed, but the product direction itself is not a big concern for us.
We've been using it since before it was called Prisma Cloud. We're getting on towards two years since we first purchased it.
The stability of Prisma Cloud is very good. I have no complaints along those lines. It seems to fit the requirements and it doesn't go down. Being a SaaS product, I would expect that. I haven't experienced any instability, and that's a good thing.
Again, as a SaaS product, I would expect it to just scale.
We regularly use Palo Alto technical support for the solution. I give it a top rating. They're very good. They have a very good customer success team. We've never had any issues. All our questions have been answered. It has been very positive.
We did not have a previous solution.
The initial setup was very straightforward. It's a SaaS product. All you have to do is configure your end, which isn't very hard. You just have to create a role for the product and, from there on, it just works, as long as the role is created correctly. Everything else you do after that is managed for you.
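As an example of what "creating a role for the product" can look like, here is a minimal sketch for an AWS account. The principal account ID, external ID, and policy choice are placeholders; in practice you take the exact values from the onboarding flow:

```python
# Minimal sketch, assuming an AWS account: create the cross-account role that a
# SaaS CSPM assumes to read your configuration. The principal account ID,
# external ID, and policy set below are placeholders -- use the real values
# from the onboarding wizard for your tenant.
import json

import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # placeholder SaaS account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "REPLACE-WITH-EXTERNAL-ID"}},
    }],
}

role = iam.create_role(
    RoleName="PrismaCloudReadOnlyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Read-only role assumed by the CSPM service",
)

# SecurityAudit is an AWS-managed read-only policy; the onboarding docs may ask
# for additional permissions depending on the modules you enable.
iam.attach_role_policy(
    RoleName="PrismaCloudReadOnlyRole",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)

print("Role ARN to paste into the console:", role["Role"]["Arn"])
```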
We have continuously been deploying it on new accounts as we spin them up. Our deployment has been going on since year one, but we've expanded. Two years ago we probably had about 40 or 50 cloud accounts. Now, we have 270 cloud accounts.
We have a team that is dedicated to managing our security tools. Something this big will always require some maintenance from our side: new accounts, and talking to internal teams. But this is as much about management of the actual alerts and issues as it is about anything else. It's no longer about whether the tool is being maintained. We don't maintain it. But what we do is maintain our interaction with the tool. We have two people, security engineers, who work with the tool on a regular basis.
It's a non-functional ROI. This isn't a direct-ROI kind of tool. The return is in understanding our security postures. That's incredibly important and that's why we bought it and that's what we need from it. It doesn't create funds; it is a control. But it certainly does stop issues, and how do you quantify that?
Pricing wasn't a big consideration for us. Compared to the work that we do, and the other costs, this was one of the regular costs. We were more interested in the features than we were in the price.
If a competitor came along and said, "We'll give you half the price," that doesn't necessarily mean that's the right answer, at all. We wouldn't necessarily entertain it that way. Does it do what we need it to do? Does it work with the things that we want it to work with? That is the important part for us. Pricing wasn't the big consideration it might be in some organizations. We spend millions on public cloud. In that context, it would not make sense to worry about the small price differences that you get between the products. They all seem to pitch it at roughly the same price.
Before the implementation of Prisma Cloud, there were only two solutions in the market. The other one was Dome9. We did an evaluation and we chose this one, and they were both very new. This is a very new concept. It pretty much didn't exist until Prisma Cloud came along.
The Prisma Cloud solution was chosen because of the way it helped integrate with our operations people, and our operations people were very happy with it. That was one of the main concerns.
Both solutions are very good at what they do. They approach the same problem from different directions. It was this direction that worked for us. Having said that, certain elements of Prisma Cloud were definitely more attractive to us because they matched up with some of our requirements. I'm very loath to say one product is better than the other, because it does depend on your requirements. It does depend on how you intend to use it and what it is, exactly, that you're looking for.
You need to identify how you'll be using it and what your use cases are. If you don't have a mature enough organizational posture, you're not going to use it to actually fix the issues because you won't have the teams ready to consume its information. You need to build that and that needs to be built into the thinking around that product. There's no point having information if you're not going to act on it. So understand who is going to act on it, and how, and then you've got a much better path to understanding your use for this. There's no point in buying a product for the sake of the product. You need the processes and the workflows that go with it and you need to build those. It's not good enough to just hope that they will happen.
The solution doesn't secure the entire spectrum of compute options because there are other Palo Alto products that secure containers, for example. This is very specifically focused on the configuration of the public cloud instances. It doesn't look inside those instances. You would need something else for that. You don't want to be using other products to do this. You don't want to mistake this for something that does everything. It doesn't. It is a very specific product and it is amazingly good at what it does.
We do integrate it with our workflow as part of the process of getting an application onto the internet. It does integrate with our workflow, giving us a posture as part of the workflow. But it is not a workflow tool.
It definitely does multi-cloud. It does the three major ones plus Alibaba Cloud. It doesn't reach into hybrid cloud, in the sense that it doesn't understand anything non-cloud. We don't use it to provide security, although it is very good for that. We already have an advanced security provision posture, because we are a very large organization. We just use it to inform us of security issues that are outside our other controls.
Prisma Cloud doesn't provide us with a single tool to protect all of our cloud resources and applications in terms of security and compliance reports because we have non-cloud-related tools being folded into the reports as well. Even though it works on the cloud, and is excellent at what it does, we integrate it with our Qualys reports, for example, which is the scanning on our hosts. Those hosts are in the cloud, but this doesn't touch them. There's no such thing as a single security tool, frankly. It's basically part of our portfolio and it's part of what every organization needs, in my opinion, to be able to manage their cloud security postures. Otherwise, it would just never work.
The main reason why we are using Prisma Cloud is to identify any compliance issues. We have certain compliance requirements across our different resources, for example, that something should be completely inaccessible, that logging should be enabled, and that certain features should be enabled. So, we are using it to identify any such gaps in our cloud deployment. Basically, we are using it as a Cloud Security Posture Management (CSPM) tool.
It is a SaaS solution.
One of the things that we have been able to do with Prisma Cloud is generate real-time alerts and share them with our technology team. For certain resources, such as databases, we have certain P1 requirements that need to be fulfilled before our resource goes live. With Prisma, if we identify any such resource, then we just raise an alert directly with the support team, and the support team gets working on it. So, the turnaround time between us identifying a security gap and then closing it has gone down drastically, especially with respect to a few of the resources for which we have been able to put this plan into motion. We have reduced the timeline by 30%. That's because the phase of us identifying the gaps manually and then highlighting them to the team is gone, but the team still needs to remediate them. Of course, there is a provision in Prisma Cloud where I can reduce it further by enabling auto-remediation, but that is not something that we have gone for as an organization.
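As a sketch of how that kind of real-time escalation could be wired up (the alert endpoint, filter names, and webhook here are illustrative assumptions, not our exact setup):

```python
# Minimal sketch: forward high-severity database alerts from the CSPM API to an
# internal webhook so the technology team is notified in near real time.
# The alert endpoint, filter parameters, and webhook URL are placeholders.
import os

import requests

API = "https://api.prismacloud.example"                     # placeholder tenant URL
TOKEN = os.environ["PRISMA_TOKEN"]                          # token from a prior /login call
WEBHOOK = "https://chat.internal.example/hooks/cloud-p1"    # hypothetical internal hook

alerts = requests.get(
    f"{API}/v2/alert",
    headers={"x-redlock-auth": TOKEN},
    params={
        "alert.status": "open",
        "policy.severity": "high",                          # assumed filter names
        "cloud.type": "azure",
    },
    timeout=60,
).json().get("items", [])

# Only escalate alerts on database-like resources (simple keyword match for the sketch).
for alert in alerts:
    resource = alert.get("resource", {})
    if "sql" in resource.get("resourceType", "").lower():
        requests.post(
            WEBHOOK,
            json={
                "text": f"P1 cloud alert: {alert.get('policy', {}).get('name', 'unknown policy')} "
                        f"on {resource.get('name', 'unknown resource')}"
            },
            timeout=30,
        )
```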
We are using it to find any gaps, create custom policies, or search in our cloud because even on the cloud portal, you don't get all the details readily available. With Prisma, you have the capability of searching for whatever you're looking for from a cloud perspective. It gives you easy access to all of your resources so you can find any attribute, or any specific value within an attribute, that you're looking for. Based on my experience with Azure and Prisma, search is much easier via Prisma than via the cloud portal itself.
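For example, a config search driven through the API might look roughly like this; the query syntax and endpoint are assumptions to show the idea, not an exact recipe:

```python
# Minimal sketch, with assumed endpoint and query syntax: run a config search
# through the API instead of clicking through the cloud portal. The RQL-style
# query string and the /search/config path are illustrative; adjust them to the
# query language and API version of your tenant.
import os

import requests

API = "https://api.prismacloud.example"        # placeholder tenant URL
TOKEN = os.environ["PRISMA_TOKEN"]

# Example intent: list Azure storage accounts and check one specific attribute.
query = (
    "config from cloud.resource where cloud.type = 'azure' "
    "AND api.name = 'azure-storage-account-list' "          # assumed api.name value
    "AND json.rule = allowBlobPublicAccess is true"          # assumed attribute path
)

results = requests.post(
    f"{API}/search/config",
    headers={"x-redlock-auth": TOKEN},
    json={"query": query, "limit": 100},
    timeout=60,
).json()

for item in results.get("data", {}).get("items", []):
    print(item.get("name"), item.get("accountName"))
```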
As a pure-play CSPM, it is pretty good. From the data exposure perspective, Prisma Cloud does a fairly good job. Purely from the perspective of reading the configs, it is able to highlight any data exposures that I might be having.
There are two main things that Palo Alto should look into. The first is the reporting piece, and the second one is the support.
Currently, custom reports are available, but I feel that those reports are targeting just the L1 or L2 engineers because they are very verbose. So, for every alert, there is a proper description, but as a security posture management portal, Prisma Cloud should give me a dashboard that I can present to my stakeholders, such as CSO, CRO, or CTO. It should be at a little bit higher level. They should definitely put effort into reporting because the reporting does not reflect the requirements of a dashboard for your stakeholders. There are a couple of things that are present on the portal, but we don't have the option to customize dashboards or widgets. There are a limited set of widgets, and those widgets don't add value from the perspective of a security team or any professional who is above L1 or L2 level. Because of this, the reach of Prisma Cloud in an organization or the access to Prisma Cloud will be limited only to L1 and L2 engineers. This is something that their development team should look into.
Their support needs to be improved. It is by far one of the worst support experiences that I have seen.
We are using Azure Cloud. With AWS, Prisma is a lot more in-depth, but with Azure, it's still developing. There are certain APIs that Prisma is currently not able to read. Similarly, there were certain APIs that it was not able to read six months ago, but now, it is able to read those APIs, pick up those resources, and give us proper security around them. Function apps were one of those things that were not there six months ago, but they are there now. So, it is still improving in terms of Azure. It is much more advanced when it comes to AWS, but unfortunately, we are not using AWS. A problem for us is that in terms of protecting data, one of the key concepts is the identification of sensitive data, but this feature is currently not enabled for Azure. This feature is there for AWS, and it is able to read your S3 buckets in the case of AWS, but for Azure, it is currently not able to do any identification of your storage accounts or read data on the storage to give security around that. So, that is one of the weak points right now, and from a data exfiltration perspective, it needs some improvement.
It is currently lacking in terms of network profiles. It is able to identify new resources, and we do get continuous alerts from Prisma when there is an issue, but there have been a few issues or glitches. I had raised a case with Palo Alto support, but the ticket was not going anywhere, so I just closed the ticket. From a network security group's point of view, we had found certain issues where it was not able to perform its function properly when it comes to the network profile. Apart from that, it has been working seamlessly.
I've been using Prisma Cloud for around six months.
It is a stable platform. Especially with it being a SaaS platform, it just has to make API calls to the customers' cloud portals. I haven't found any issues with regard to stability, and I don't foresee any issues with stability based on the architecture that Prisma has.
It is pretty scalable. The only limitation is the licensing. Otherwise, everything is on the cloud, and I don't see any challenges with respect to scalability. I would consider it as a scalable solution.
Currently, there are around eight to 10 people who are working with Prisma, but we are still bringing it up to maturity. So, mainly, a couple of my colleagues and I are working with Prisma. The others have the account, but they are not active with respect to Prisma. Almost all of us are from InfoSec.
The support from Palo Alto needs to be improved a lot. It is by far one of the worst support services that I have seen. It takes a lot of time for them to come back, and nothing conclusive happens on the ticket as well.
There was a ticket for which I called them for three months, and nothing was happening on that ticket. They were just gathering evidence that I had already shared. They asked for it again and again, and I got frustrated and just closed the ticket because I was just wasting my time. I was not getting any response. I was seeing no progress toward getting my issue resolved even after three months. This is not just for one ticket. There have been a couple of other tickets where I've faced similar issues with Palo Alto. So, support is definitely something that they should look into.
Today, I won't recommend Palo Alto Prisma to someone because I'm not confident about their support. Their support is tricky. I would rate them a three or four out of 10. They are polite and have good communication skills, but my requirement from the support team is not getting fulfilled.
We haven't used any other product.
I've been involved with the entire implementation of Prisma Cloud. I've manually done the implementation of Prisma in my current organization in terms of fine-tuning the policies, reviewing the policies, and basically bringing it up to maturity. We have not yet achieved maturity with the product. We have also encountered some problems with the product because of which the implementation has been a bit delayed.
The integration piece is pretty straightforward. In terms of the availability of the documentation, there is no issue. If you reach the right document, your issue gets resolved automatically, and you don't have to go to the support team. That was pretty smooth for me.
The initial integration barely took half a day. You just have to make some changes on your cloud platform, get the keys, and just put the keys manually. We had a lot of subscriptions, and when we were doing the integration, tenant-level integration was not available. So, I had to manually integrate or rather onboard each subscription. That's the reason why it took me half a day. It might have even been just a couple of hours.
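For illustration, the kind of loop that replaces that manual, subscription-by-subscription work might look like this; the onboarding endpoint and payload fields are assumptions, and tenant-level onboarding would remove the need for it entirely:

```python
# Minimal sketch of a loop that onboards many subscriptions one by one instead of
# clicking through them manually. The /cloud/azure endpoint and payload fields
# are assumptions for illustration only.
import os

import requests

API = "https://api.prismacloud.example"        # placeholder tenant URL
TOKEN = os.environ["PRISMA_TOKEN"]

subscriptions = [
    # (subscription_id, friendly_name) -- values would come from your Azure tenant
    ("00000000-0000-0000-0000-000000000001", "prod-core"),
    ("00000000-0000-0000-0000-000000000002", "prod-data"),
]

for sub_id, name in subscriptions:
    payload = {
        "cloudAccount": {"accountId": sub_id, "name": name, "enabled": True},
        "clientId": os.environ["AZURE_CLIENT_ID"],        # app registration details
        "key": os.environ["AZURE_CLIENT_SECRET"],
        "tenantId": os.environ["AZURE_TENANT_ID"],
        "monitorFlowLogs": False,
    }
    resp = requests.post(
        f"{API}/cloud/azure",                             # assumed onboarding endpoint
        headers={"x-redlock-auth": TOKEN},
        json=payload,
        timeout=60,
    )
    print(name, resp.status_code)
```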
As of now, we have not seen an ROI because we are not yet mature. We have not yet reached the maturity level that we want to reach.
My colleague had reviewed other solutions like Aqua and Cloudvisory. One of the reasons for selecting Prisma was that we have planned a multi-cloud approach, and based on our analysis, we felt that Prisma would be better suited for our feature requirements. The other reason was that we already have quite a few Palo Alto products in our environment, so we just thought that it would be easier for us to do integrations with Prisma. So, these were the two key reasons for that decision.
Currently, there are not many options to choose from across different products. So, from that perspective, Prisma is pretty decent. It works how CSPMs are supposed to work. They have to read up the config, and then throw you an alert if they find any misconfiguration. So, from that perspective, I didn't find it to be that different from other CSPMs. The integration pieces and other things are pretty simple in Prisma Cloud, which is something that we can take into account when comparing it with others.
I would recommend others to consider a CSPM product, whether they go with Prisma or another flavor of CSPM. It also depends on the deployment that the organization has, the use case, and the budget. For an organization similar to mine, I would definitely recommend going for CSPM and Palo Alto Firewall.
I would advise others not to go with the higher level of Prisma support. They should go for third-party professional services because, in my experience, they have a better understanding of the product than the Prisma support team. Currently, we have one of the higher levels of support, and we are not getting the return on that support. If we went for a lower tier of support, we could save that money and put it toward a third-party professional service. That would be a better return on investment.
Prisma Cloud hasn't helped us to identify cloud applications that we were unaware that our employees were using. That has not been the case so far because when we had initially done the deployment, we had done it at the subscription level rather than at the tenant level. So, in our case, it is quite the opposite where there would be subscriptions that the client is not aware of. I think Prisma has come up with a release wherein we can integrate our cloud on a tenant level rather than the subscription level. That is something that we will be doing going forward.
I would rate this solution a seven out of 10.
