Our primary use case is to certify blueprints. We work on both the CSPM and the CWPP parts of it. We monitor the compute infrastructure and certify the project.
As for CSPM, we certify against the NIST 800-53 compliance standard.
For the compliance part, we have found the pie chart, where we can see all of the compliance standards in one go, to be a valuable feature.
Prisma Cloud's monitoring features, such as the Compute compliance dashboard and the vulnerability dashboard, where we get a clear visualization of our Docker containers, have also been valuable. We get layer-by-layer information that shows exactly where an image is noncompliant. They update the dashboards quite frequently.
Their data security feature is quite good as well.
Their training modules are good, and my team is okay with them.
Microsegmentation still needs improvement.
For data security, they have only specific regions like the US, and they need to move to Asia as well.
The most important issue has to do with Compute licensing and cost. They charge seven workloads to monitor one compute instance, and that is quite expensive. It makes it difficult to move fully to the Compute part because of the workload-based pricing.
Their training modules need more live examples. We have to refer to the YouTube channel or follow Palo Alto to find references. If they referred to the YouTube channel in their training and indicated that it can be consulted for further information, that would be good.
Their portal does not show which services are available in each region. While searching, it's very hard to find out in which location a service is enabled, so it would be great to have a list of services for each region.
I've been using Prisma Cloud for eight months. It is a SaaS solution.
It's stable as of now; it has not been down in the last eight months.
It is scalable as of now. We have 20 VMs.
Technical support is good. From what I've observed, though, different regions seem to have different SMEs (subject matter experts), and different people have different knowledge. So there is definitely a gap between the different SMEs.
We were using AWS products.
We switched because of Twistlock, for compute security. The Prisma Cloud dashboard is powerful, and it gives you at-a-glance compliance status against many standards. We can also write our own custom policies if we want to build our own standard. So there are lots of benefits with Prisma Cloud.
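For illustration, here is a minimal sketch of what running a custom RQL query through the CSPM API can look like. It assumes the public api.prismacloud.io base URL, an access-key pair, and the /login and /search/config endpoints; the RQL itself is representative rather than copied from any built-in policy.

```python
import requests

BASE = "https://api.prismacloud.io"  # tenant/region-specific in practice

# Authenticate with an access-key pair to obtain a short-lived JWT.
token = requests.post(
    f"{BASE}/login",
    json={"username": "<ACCESS_KEY_ID>", "password": "<SECRET_KEY>"},
    timeout=30,
).json()["token"]

# Illustrative RQL: S3 buckets whose ACL grants access to all users.
rql = (
    "config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' "
    "AND json.rule = 'acl.grants[*].grantee contains \"AllUsers\"'"
)

resp = requests.post(
    f"{BASE}/search/config",
    headers={"x-redlock-auth": token},
    json={
        "query": rql,
        "timeRange": {"type": "relative",
                      "value": {"amount": 24, "unit": "hour"}},
    },
    timeout=30,
)
for item in resp.json().get("data", {}).get("items", []):
    print(item.get("name"), item.get("accountId"))
```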
It's a SaaS solution, so the initial setup is pretty straightforward. We are still onboarding, and most of the customers are in the dev environment as of now, not production. So it was quite smooth. They have the configuration files, the CloudFormation templates, on the portal.
The licensing cost is a bit high on the compute side. We get a corporate discount, which helps reduce overall cost. In some cases, you may need to have two licenses to onboard a project, which would make it expensive.
If your specialization involves blueprint certification against a compliance standard, then you can go with Prisma Cloud. It is very powerful for data loss prevention, and I would rate it at seven on a scale from one to ten.
When we did a POC, we realized that this product was able to give us insights into how consumers or services are activated. We could tell if, in certain cases, there was any kind of manual issue, such as a misconfiguration. The solution helps us reconfigure items and figure out what reconfiguration needs to be done, et cetera. Our target was to enhance the security portion of our AWS cloud.
The security features are quite good.
The monitoring part is excellent. It completely monitors our users: what they are doing, at what time, and where they are logged in from. If a user is logged in from India and, five minutes later, tries to log in from Singapore, for example, that would not be possible, so we would know something was wrong. It can pick up questionable behavior that might otherwise be missed.
The reporting is great.
It's very user-friendly. You can easily make customized dashboards as well.
We can easily restrict the users if we need to. We can even restrict them from accessing certain applications or services.
If anything tries to come in from a malicious IP, it will block it.
The initial setup is easy.
We've found the solution to be stable and reliable.
The solution does offer pretty good integration options.
Technical support is quite helpful.
The remediation part could be better. It should be able to remediate automatically on the basis of its artificial intelligence. If there are alerts, it should act directly and contain the malicious threat, in a container or something. Instead of waiting for approval, it should act immediately. There should be no need for manual input when there is a threat at hand.
The ability to scale is limited as it is a SaaS product.
The licensing is a bit confusing.
We've used the solution for a while. It was previously RedLock, and we have been using it since it was known by that name, so around two years now. Then Palo Alto bought it, and we now use it under the new name.
The stability and reliability are excellent. There are no bugs or glitches. It does not crash or freeze. It's great.
The scalability isn't infinite. It's limited.
That said, we haven't really tested it as we haven't added any users or anything into the solution yet.
We have found the technical support to be helpful and responsive. Originally, when we needed assistance with integrating it into our AWS cloud, we contacted them and they helped us immediately. It was a very positive experience. We were very satisfied.
The initial setup is very easy. It's not overly complex. A company should be able to handle it without any issues.
We pay a licensing fee on a yearly basis.
It is not costly. However, it is priced based on the number of instances. The problem is: what is the number of instances? We don't know. They seem to do it by the number of workloads, however, we're unclear as to what defines a workload. They need to improve on the licensing front and be clearer about the whole thing.
I've never evaluated any other services.
We are Palo Alto partners.
I'd advise companies that get big and have a lot of servers or critical applications in their cloud to invest in this solution.
I would rate the solution at a nine out of ten.
When we started using this tool, the name was Twistlock, not Prisma Cloud. We had a container team responsible for modernizing our environment, and they created an on-prem solution using Red Hat OpenShift. They started using Twistlock as a way to manage the security of this on-prem environment.
My team, which was the security team, inherited the ownership of the tool to manage all the security problems that it was raising.
When we started using containers on the cloud, our cloud provider was Azure. We also started migrating our security solutions for the cloud, but that was at the end of my time with the company, so I didn't participate much in this cloud process.
We were also sending the logs and alerts to Splunk Cloud. We were managing all the alerts generated by policies and vulnerabilities and the threats from the web. That way, we had a pipeline system sending these alerts to a central location where our investigation team would look at them. So we used the system to manage both cloud and on-prem and connect them.
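As an illustration of that pipeline, here is a minimal sketch of forwarding one alert to Splunk over the HTTP Event Collector; the HEC URL, token, index, and the shape of the alert payload are all placeholders, not taken from our environment.

```python
import requests

SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "<HEC_TOKEN>"

# Stand-in for an alert received from Prisma Cloud (e.g. via webhook).
alert = {
    "policyName": "Security group allows internet traffic to RDP",
    "severity": "high",
    "resource": "sg-0123456789abcdef0",
    "account": "prod-aws",
}

# Wrap the alert in Splunk's HEC event envelope and ship it.
resp = requests.post(
    SPLUNK_HEC,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json={"event": alert, "sourcetype": "prismacloud:alert", "index": "security"},
    timeout=30,
)
resp.raise_for_status()
```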
We had one team that didn't have any security whatsoever. We helped them to add Prisma Cloud to scan their environment. It was a big issue in the company at the time, because they had a huge environment which was not following the security rules of the company. They didn't have any security. Prisma Cloud helped us to start raising alerts and vulnerabilities. That was a successful case because in the timeframe of one to two weeks, we installed the tool and were teaching the team how to manage it, find their vulnerabilities, and how to fix them. We were able to help a team that was totally vulnerable to have a security solution.
Overall, it covered all the stages that we hoped it would cover.
The solution also reduced our runtime alerts. I don't have the exact numbers but I would say it lowered the number of issues by 70 percent. Our strategy was that we started using the tool for some small applications, and then we started using it for other teams. For the small applications, I can't guarantee the reduction was 70 percent because those solutions were managed by the security team which had smart people who were security conscious.
We used the policy features to manage users so that they would not have secrets in their containers. We also used the vulnerabilities, the CVEs, that were being raised by the tool.
The CVEs are valuable because we used to have a tool to scan CVEs, at the language level, for the dependencies that our developers had. What is good about Prisma Cloud is that the CVEs are not only from the software layer, but from all layers: the language, the base image, and you also have CVEs from the host. It covers the full base of security.
The compliance is good because it has a deep view of the container. It can find stuff that only administrators would have access to in our container. It can go deep down into the container and find those policy issues.
We also started looking for the WaaS (Web-Application and API Security) solution, but we didn't implement it during the time I was at the company. We tested it. What's good about the WaaS is that it's almost a miracle feature. You can find SQL injection or cross-site scripting and defend against that by setting up Prisma Cloud and turning on the feature.
Prisma Cloud also provided risk clarity at runtime and across the entire pipeline, showing issues as they were discovered during the build phases. It provided a good rating for how to prioritize a threat, but we also had a way to measure risk in our company that was a little bit different. This was the same with other scanning tools that we had: the risk rating was something that we didn't focus too much on because we had our own way to rate risk. Prisma Cloud's rating was helpful sometimes, but we used our risk measurement more than the tool's.
One problem was identifying Azure Kubernetes Services. We had many teams creating Kubernetes systems without any security whatsoever. It was hard for us to identify those clusters because Prisma Cloud could not identify them. From what I heard from Palo Alto at the time, they were building a new feature to identify those. It was an issue they were already trying to fix.
In addition, when it comes to access for developers, I would like to have more granular settings. For example, in our company we didn't want to display hosts' vulnerabilities to developers, because the infrastructure or containers team was responsible for host vulnerabilities or the containers. The developers were only responsible for the top application layer. We didn't want to provide that data to the developers because A) we thought it was sensitive data and B) because it was data that didn't belong to developers. We didn't want to share it, but I remember having this problem when it came to the granularity of granting permissions.
They need to make the settings more flexible to fit our internal policies about data. We didn't want developers to see some data, but we wanted them to have access to the console because it was going to help them. One possibility was to develop our own solution for this, using the API. But that would add complexity. The console was clean and beautiful. It has the radar where you can see all the containers. But we just didn't want to show some data. It was a pain to have to set up the access to some languages and some data.
Another thing that was a pain was that, in our on-prem environment, there was a tool that sometimes generated a temporary container to be used just for a build, and Prisma would raise compliance issues for this container that would die shortly after. It was hard to suppress these kinds of alerts because it was hard to find a standard or a rule that fit this scenario. The tool was able to manage the whole CI/CD pipeline, including the build, even these temporary build containers, but sometimes it raised too much unnecessary data.
Also, one of the things that was sometimes hard to understand is how to fix an issue. We managed to do so by testing things ourselves, because we are developers, but a little more explanation about how to fix something would help. The tool did more to show what the problem was than to explain how to fix it.
I used Prisma Cloud by Palo Alto Networks for about a year and a half.
It's pretty much stable, as much as containers are stable. It is more about the container solution itself, or how Kubernetes is managed and the state of health of the containers. As Prisma is a container solution itself, it was as good as the Kubernetes environment could make it.
I don't know about the Prisma Cloud SaaS solution because we didn't use it, but the on-prem solution was as reliable as our Kubernetes system was. It was really reliable.
It's pretty scalable because of the API. I liked how simple the console was and how simple the API was. There was no complexity; it was straightforward. The API documentation was also very good so it was pretty easy to scale. You could automate pretty much everything. You could automate the certificate information, you could automate the access for developers, and a lot of other stuff. It was a pretty modern solution. Using APIs and containers, it was pretty scalable.
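To give a feel for that simplicity, here is a minimal sketch against the Compute (formerly Twistlock) console's REST API; the console URL, credentials, and the exact response fields are assumptions to verify against your own console's API documentation.

```python
import requests

CONSOLE = "https://twistlock-console.example.com:8083"
AUTH = ("<API_USER>", "<API_PASSWORD>")

# List scanned images and flag those carrying critical CVEs.
images = requests.get(f"{CONSOLE}/api/v1/images", auth=AUTH, timeout=30).json()
for image in images or []:
    counts = image.get("vulnerabilityDistribution", {})
    if counts.get("critical", 0) > 0:
        print(image.get("id"), "critical CVEs:", counts["critical"])
```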
We used their technical support many times and it was very good. The engineers there helped us a lot. They were engaged and interested in helping, and they were polite and they were fast. When we raised an issue to high priority, they answered faster. I would rate their support at five out of five.
Prisma Cloud was the only solution we had for container security. We had other tools such as SAST and DAST tools, as well as open source management tools. Those intersected somewhat with what Prisma does, but Prisma had access to the whole environment, so it's a little bit different.
We used the API from Prisma Cloud. We had a Jenkins pipeline with a lot of scripts to automate the installation of Prisma Cloud and the patching updates as well.
In our company, the security team had about 10 people, but only two were responsible for Prisma Cloud. As I mentioned, we inherited ownership of it from the containers team. In the containers team, we had a guy who was our main contact and who helped us. For example, when we needed to access a certain environment, he had to manage access so that the tool could have the privileged access it needed in the container environment. So overall, there were three people involved with it.
We used Prisma Cloud extensively. We used it across the whole on-prem environment and partially on cloud. We were at around 10 or 20 percent of the cloud. I think that nowadays they have probably reached much more than that, because we were just beginning on the cloud at the time.
Smaller companies should probably use the SaaS. I know that Azure and the cloud providers already have different ways to use tools in an easy manner so that you don't need to manage the infrastructure. So smaller companies should look into that. The infrastructure solution would be more for big companies, but I would recommend the solution for big companies. I would also recommend it for small companies. In terms of budget, sometimes it's hard to prioritize what's more important, but Prisma fits into different budget levels, so even if you have a small environment you can use Prisma's SaaS solution.
I was pretty satisfied with it. My impression of Prisma Cloud was pretty good. It's an amazing tool. It gives the whole view of your container environment and connection with multiple platforms, such as Splunk. It is a good solution. If I had my own company and a container environment, I would use it. It can fit a huge container environment with a lot of hosts, but it can also fit a small container environment. Azure also provides built-in solutions to install Prisma in your application. So there are different solutions for various container environments. The company I was in had huge container environments to monitor, on-prem and in the cloud, and the tool fit really well. But the tool also fits small environments.
Primarily the intent was to have a better understanding of our cloud security posture. My remit is to understand how well our existing estate in cloud marries up to the industry benchmarks, such as CIS or NIST, or even AWS's version of security controls and benchmarks.
When a stack is provisioned in a cloud environment, whether in AWS or Azure or Google Cloud, I can get an appreciation of how well the configuration is in alignment with those standards. And if it's out of alignment, I can effectively task those who are accountable for resources in clouds to actually remediate any identifiable vulnerabilities.
The solution is really comprehensive. Especially over the past three to four years, I was heavily dependent on AWS-native toolsets and config management. I had to be concerned about whether there were any permissive security groups or scenarios where logging might not have been enabled on S3 buckets, or if we didn't have encryption on EBS volumes. I was quite dependent on some of the native stacks within AWS.
Prisma not only looks at the workloads for an existing cloud service provider, but it looks at multiple cloud service providers outside of the native stack. Although the native tools on offer within AWS and Azure are really good, I don't want to be heavily dependent on them. And with Google, where they don't have a security hub where you can get that visibility, then you're quite dependent on tools like Prisma Cloud to be able to give you that. In the past, that used to be Dome9 or Evident.io. Palo Alto acquired Evident.io, and that became rebranded as this cloud posture management solution. It's proven really useful for me.
It integrates capabilities across both cloud security posture management and cloud workload protection. The cloud security posture management is what it was initially intended for: looking at the configuration of cloud service workloads for AWS, Azure, Google, and Alibaba. And you can look at how the configuration of certain workloads aligns with standards such as CIS, NIST, and PCI.
And that brings our DevOps and SecOps teams closer together. The engineering aspect is accountable for provisioning dedicated accounts for cloud consumers within the organization. There might be just an entity within the business that has a specific use case. You then want to ensure that they take accountability for building their services in the cloud, so that it's not just a central function or that engineering is solely responsible. You want something of a handoff so that consumers of cloud within the organization can also have that accountability, so that it's a shared responsibility. Then, if you're in operations, you have visibility into what certain workloads are doing and whether they're matching the standards that have been set by the organization from a risk perspective.
You've also got the software engineering side of the business and they might just be focused on consuming base images. They may be building container environments or even non-container environments or hosting VMs. They also have a level of accountability to ensure that the apps or packages that they build on top of the base image meet a certain level of compliance, depending on what your business risk-appetite is. So it's really useful in that you've got that shared accountability and responsibility. And overall, you can then hand that off to security, vulnerability management, or compliance teams, to have a bird's-eye view of what each of those entities is doing and how well they're marrying up to the expected standards.
Prior to Prisma Cloud, you'd have to have point solutions for container runtime scanning and image scanning. They could be coupled together, but even so, if you were running multiple cloud service providers in parallel, you could never really get the whole picture from a governance perspective. You would struggle to determine, "Okay, how are we doing against the CIS benchmark for Azure, GCP, and AWS, and where are the gaps that we need to address from a governance and compliance perspective, so as to reduce our risk and the threat landscape?" Now that you've got Prisma Cloud, you can get that holistic view in a single pane of glass, especially if you're running multiple cloud workloads or a number of cloud workloads with one cloud service provider. It gives you the ability to look at private, public, or hybrid offerings. It saves me having to go to market and run a number of proofs of concept for point solutions. It's an indication of how the market has matured and how Palo Alto, with Prisma Cloud in particular, understands what their consumers and clients want.
It can certainly help reduce alert investigation times, because you've got the detail that comes with the alert, to help remediate. The level of detail offered up by Prisma Cloud, for a given engineer who might not be that familiar with a specific type of configuration or a specific type of alert, saves the engineer having to delve into runbooks or online resources to learn how to remediate a particular alert. You have to compare it to a SIEM solution where you get an event or an alert is triggered. It's usually based on a log entry and the engineer would have to then start to investigate what that alert might mean. But with Prisma Cloud and Prisma Cloud Compute, you get that level of detail off the back of every event, which is really useful.
It's hard to quantify how much time it might save, but think about the number of events and what it would be like if they didn't have that level of detail on how to remediate, each time an event occurred. Suppose you had a threshold or a setting that was quite conservative, based on a particular cloud workload, and that there were a number of accounts provisioned throughout the day and, for each of those accounts, there were a number of config settings that weren't in alignment with a given standard. For each of those events, unless there was that level of detail, the engineer would have to look at the cloud service provider's configuration runbooks or their own runbooks to understand, "Okay, how do I change something from this to this? What's the polar opposite for me to get this right?" The great thing about Prisma Cloud is that it provides that right out-of-the-box, so you can quickly deduce what needs to be done. For each event, you might be saving five or 10 minutes, because you've got all the information there, served up on a plate.
For me, what was valuable from the outset was the fact that, regardless of which cloud service provider you're with, I could segregate visibility of specific accounts to account owners. For example, in AWS, you might have an estate that's solely managed by yourself, or there might be a number of teams within the organization that manage it.
You can also integrate with Amazon Managed Services. You can also get a snapshot in time, whether that's over a 24-hour period, seven days, or a month, to determine what the estate might look like at a certain point in time and generate reports from that for vulnerability management forums. In addition to that, I can get a snapshot of what I deemed were the priority vulnerabilities, whether it was identity access management, key rotation, or secrets management. Whatever you deem to be a priority for mitigating threats for your environment, you can get that as a snapshot.
You can also automate how frequently you want reports to be generated. You can then understand whether there has been any improvement or reduction in vulnerabilities over a certain time period.
The solution also enables you to send logs to your preferred SIEM provider so that you've got a better understanding of how things stack up with event correlation and SIEM systems.
If you've got an Azure presence, you might be using Office 365, and you might also have a presence in Google Cloud for the data, specifically. You might also want to look at scenarios where, if you're using tools and capabilities for DevOps, like Slack, you can plug those into Prisma Cloud as well to understand how well they marry up against vulnerabilities. You can also use it to drive vulnerability alerts out to Slack. That way, you're looking at what your third-party SaaS providers are doing in relation to certain benchmarks. That's really useful as well.
In addition, an engineer may provision something like a shared service, a DNS capability, a sandbox environment, or a proof of concept. The ability to filter alerts by severity helps when reporting on the services that have been provisioned. They'll come back as a high, medium, or low severity and then I ensure that we align with our risk-appetite and prioritize higher and medium vulnerabilities so that they are closed out within a given timeframe.
When it comes to root cause, Prisma Cloud is quite intuitive. If you have an S3 bucket that has been set to public but, realistically, it shouldn't have been, you can look at how to remediate that quite intuitively, based on what the solution offers up as a default setting. It will offer up a way to actually resolve and apply the correct settings, in line with a given standard. There's almost no thinking involved. It's on-point and it's as if it offers up the specific criteria and runbooks to resolve particular vulnerabilities.
That assists security, giving them an immediate way to resolve a given conflict or misalignment. The time-savings are really incomparable. If you were to identify a vulnerability or a risk, you might have to draw up what the remediation activity should look like. However, what Prisma Cloud does is that it actually presents you with a report on how to remediate. Alternatively, you can have dynamic events that are generated and applied to Slack, for example. Those events can then be sent off to a JIRA backlog or the like. The engineers will then look at what that specific event was, at what the criteria are, and it will tell them how to remediate it without their having to set time aside to explain it. The whole path is really intuitive and almost fully automated, once it's set up.
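As a concrete example, the sort of fix it points you to for a public S3 bucket boils down to a few lines. This is a minimal boto3 sketch with a placeholder bucket name, not the tool's own remediation code:

```python
import boto3

# Block all four flavors of public access on the offending bucket.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-misconfigured-bucket",  # placeholder; taken from the alert
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```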
One scenario, in early days, was in trying to get a view on how you could segregate account access for role-based access controls. As a DevSecOps squad, you might have had five or six guys and girls who had access to the overall solution. If you wanted to hand that off to another team, like a software engineering team, or maybe just another cloud engineering team, there were concerns about sharing the whole dashboard, even if it was just read-only. But over the course of time, they've integrated that role-based access control so that users should only be able to view their own accounts and their own workloads, rather than all of the accounts.
Another concern I had was the fact that you couldn't ingest the accounts into Prisma Cloud in an automated sense. You had to manually integrate them or onboard them. They have since driven out new features and capabilities, over the last 12 months, to cater for that. At an organizational level you can now plug that straight into Prisma Cloud, as and when new accounts are provisioned or created. Then, by default, the AWS account or the Azure account will actually be included, so you've got visibility straight away.
The lack of those two features was a limitation as to how far I could actually push it out within the organization for it to be consumed. They've addressed those now, which is really useful. I can't think of anything else that's really causing any shortcomings. It's everything and more at the moment.
I've been using Prisma Cloud for about 12 months now.
It's pretty straightforward to run an automated setup, if you want to go down that route. The capabilities are there. But in terms of how we approached it, it was like a plug-and-play into our existing stack. Within AWS, you just have to point Prisma Cloud at your organizational level so that you can inherit all the accounts and then you have the scanning capability and the enforcement capability, all native within Prisma Cloud. There's nothing that we're doing that's over and above, nothing that we would have to automate other than what is actually provided natively within Prisma Cloud. I'm sure if you wanted to do additional automation, for example if you wanted to customize how it reports into Slack or how it reports into Atlassian tools, you could certainly do that, but there's nothing that is that complex, requiring you to do additional automation over and above what it already provides.
I haven't gone about calculating what the ROI might be.
But just looking at it from an operational engineering perspective and the benefits that come with it, when it comes to the governance and compliance aspects of running AWS cloud workloads, I now put aside half an hour or an hour on a given day of the week, or on alternate days of the week. I use that time to look at what the cloud security posture is, generate a number of reports, and hand them off to a number of engineering teams, all a lot quicker than I used to be able to do two or three years ago.
In the past, at times I would have had to run Trusted Advisor from AWS, to look at a particular account, or run a number of reports from Trusted Advisor to look at multiple accounts. And with Trusted Advisor, I could never get a collective view on what the overall posture was of workloads within AWS. With Prisma Cloud, I can just select 30 AWS accounts, generate one report, and I've got everything I need to know, out-of-the-box. It gives me all the different services that might be compliant/non-compliant, have passed/failed, and that have high, medium, or low vulnerabilities. It has saved me hours being able to get those snapshots.
I can also step back by putting an automated report in place and receiving it on a weekly basis. I've also got visibility into when new accounts are provisioned, without having to keep tabs on whether somebody has just provisioned a new account or not. The hours that are saved with it are really quite high.
As it stands now, I think things have moved forward somewhat. Prisma and the suite of tools by Palo Alto, along with the fact that they have integrated Prisma Cloud Compute as a one-stop shop, have really got it nailed. They understand that not all clients are running container workloads. They bring together point solutions, like what used to be Twistlock, into that whole ecosystem, alongside a cloud security posture management system, and they'll license it so that it's favorable for you as a consumer. You can think about how you can have that presence and not then be dependent on multiple third-parties.
Prisma Cloud was originally designed for cloud security posture management, to determine how the configuration of cloud services aligns with given standards. Through the evolution of the product, they then integrated a capability they call Prisma Cloud Compute. That is derived from point solutions for container and image scanning. It has those capabilities on offer within a single pane of glass.
Before Prisma Cloud, you'd have to go to either Twistlock or Aqua Security for container workloads. If you were going open source, obviously that would be free, but you'd still be looking at independent point solutions. And if you were looking at governance and compliance, you'd have to look at the likes of Dome9, Evident.io, and OpenSCAP, in combination with Trusted Advisor. But the fact that you can just lean into Prisma Cloud, have those capabilities readily available, and have licensing priced based on workloads, makes it a favorable licensing model.
It also makes the whole RFP process a lot more streamlined and simplified. If you've got a purchasing specialist in-house, and then heads-of-functions who might have a vested interest in what the budget allocation is, from either a security perspective or from a DevOps cloud perspective, it's really quite transparent. They work the pricing model in your favor based on how you want to actually integrate with their products. From my exposure so far, they have been really flexible on whatever your current state is, with a view to what the future state might be. There's no hard sell. They "get" the journey that you're on, and they're trying to help you embrace cloud security, governance, and compliance as you go. That works favorably for them as well, because the more clients that they can acquire and onboard, the more they can share the experience, helping both the business and the consumer, overall.
Prior to Prisma Cloud, I was looking at Dome9 and Evident.io. Around late 2018 to early 2019, Palo Alto acquired Evident.io and made it part of their Prisma suite of security tools.
At the time, the two that were favorable were Evident.io and Dome9, side-by-side, especially when running multiple AWS accounts in parallel. At the time, it was Dome9 that came out as more cost-effective. But I actually preferred Evident.io. It just happened to be that we were evaluating the Prisma suite and then discovered that Palo Alto had acquired Evident.io. For me that was really useful. As an organization, if we were already exploring the capabilities of Palo Alto and had a commercial presence with them, to then be able to use Prisma Cloud as part of that offering was really good for me as a security specialist in cloud. Prior to that, if as an organization you didn't have a third-party cloud security posture management system for AWS, you were heavily dependent on Trusted Advisor.
My advice is that if you have the opportunity to integrate and utilize Prisma Cloud, you should, because it's almost a given that you can't get another cloud security posture management system like it. There are competitors striving to achieve the same types of things. However, when it comes to the governance element, for a head of architecture or a head of compliance or even at the CSO level, without that holistic view you are potentially flying blind.
Once you've got a capability running in the cloud, and the associated demand that comes through from the business to provision accounts for engineers or technical service owners or business users, the given is that not every team or user that wants to consume cloud workloads has the required skill set to do so. There's a certain element of expertise that you need to securely run cloud workloads, just as is needed for running applications or infrastructure on-premise. Unless you have an understanding of what you're opening yourself up to, the risk element of running cloud workloads, such as a potential attack or compromise of a service, then from an organizational perspective it's only a matter of time before something is leaked or compromised, and that can be quite expensive to manage. There are a lot of unknowns.
Yes, they do give you capabilities, such as Trusted Advisor, or you might have OpenSCAP or you might be using Forseti for Google Cloud, and there are similar capabilities within Azure. However, the cloud service providers aren't native security vendors. Their workloads are built around infrastructure- or platform-as-a-service. What you have to do is look at how you can complement what they do with security solutions that give you not just the north-south view, but the east-west as well. You shouldn't just be dependent on everything out-of-the-box. I get the fact that a lot of organizations want to be cloud-first and utilize native security capabilities, but sometimes those just don't give you enough. Whether you're looking at business-risk or cyber-risk, for me, Prisma Cloud is definitely out there as a specialist capability to help you mitigate the threat landscape in running cloud workloads.
I've certainly gone from a point where I understood what the risk was in not having something like this, and that's when I was heavily dependent on native tools that are offered up with cloud service providers.
The first release that came out didn't include the workload management, because what happened, I believe, was that Palo Alto acquired Twistlock. Twistlock was then "framed" into cloud workload management within Prisma Cloud. What that meant was that you had a capability that looks at your container workloads, and that's called Prisma Cloud Compute, which is all available within a single pane of glass, but as a different set of capabilities. That is really useful, especially when you're running container workloads.
In terms of securing the entire development life cycle, if you integrate it within the Jenkins CI/CD pipeline, you can get the level of assurance needed for your golden images or trusted image. And then you can look at how you can enforce certain constraints for images that don't match the level of compliance required. In terms of going from what would be your image repository, when that's consumed you have the capability to look at what runtime scanning looks like from a container perspective. It's not really on par with, or catering to, what other products are looking at in terms of SAST and DAST capabilities. For those, you'd probably go to the market and look at something like Veracode or WhiteHat.
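A minimal sketch of that enforcement step in a pipeline, wrapping twistcli from Python; the flags shown are typical of twistcli's images scan subcommand, but verify them against your console version, and pull credentials from CI secrets rather than literals:

```python
import subprocess
import sys

# Scan the freshly built image; twistcli exits non-zero when the image
# violates the thresholds configured on the console, which is what lets
# the pipeline enforce the golden-image constraint.
result = subprocess.run(
    [
        "twistcli", "images", "scan",
        "--address", "https://twistlock-console.example.com:8083",
        "--user", "<CI_USER>",          # use CI secrets in practice
        "--password", "<CI_PASSWORD>",
        "--details",
        "myapp:latest",
    ],
)
sys.exit(result.returncode)
```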
It all depends on the way an organization works, whether it has a distributed or centralized setup. Is there like a central DevOps or engineering function that is a single entity for consuming cloud-based services, or is there a function within the business that has primarily been building capabilities in the cloud for what would otherwise be infrastructure-as-a-service for internal business units? The difficulty there is the handoff. Do you look at running it as a central function, where the responsibility and the accountability is within the DevOps teams, or is that a function for SecOps to manage and run? The scenario is dependent on what the skill sets are of a given team and what the priorities are of that team.
Let's say you have a security team that knows its area and handles governance, risk, and compliance, but doesn't have an engineering function. The difficulty there is how do you get the capability integrated into CI/CD pipelines if they don't have an engineering capability? You're then heavily relying on your DevOps teams to build out that capability on behalf of security. That would be a scenario for explaining why DevOps starts integrating with what would otherwise be CyberOps, and you get that DevSecOps cycle. They work closer together, to achieve the end result.
But in terms of how seamless those CI/CD touchpoints are, it's a matter of having security experts that understand that CI/CD pipeline and where the handoffs are. The heads of function need to ensure that there's a particular level of responsibility and accountability amongst all those teams that are consuming cloud workloads. It's not just a point solution for engineering, cloud engineering, operations, or security. It's a whole collaboration effort amongst all those functions. And that can prove to be quite tricky. But once you've got a process, and the technology leaders understand what the ask is, I think it can work quite well.
When it comes to reducing runtime alerts, it depends on the sensitivity of the alerting that is applicable to the thresholds that you set. You can set a "learning mode" or "conservative mode," depending on what your risk-appetite is. You might want it to be configured in a way that is really sensitive, so that you're alerted to events and get insights into something that's out of character. But in terms of reducing the numbers of alerts, it all depends on how you configure it, based on the sensitivity that you want those alerts to be reporting on.
I would rate Prisma Cloud at eight out of 10. It's primarily down to the fact that I've got a third-party tool that gives me a holistic view of cloud security posture. At the click of a button, I can determine the current status of our threat landscape in either AWS or Azure, at a config level and at a workload level, especially with regards to Prisma Cloud Compute. It's all available within a single pane of glass. That's effectively what I was after two or three years ago. The fact that it has now come together with a single provider is why I'd rate it an eight.
We are using it for monitoring our cloud environment and detecting misconfigurations in our hosted accounts in AWS or Azure.
As the security operations team, our job is to monitor for misconfigurations and potential incidents in our environment. This solution does a good job of monitoring those for us and of alerting us to misconfigurations before they become potential security incidents or problems.
We've set the tool up so that it provides feedback directly to the teams responsible for their AWS or cloud accounts. It has been really helpful by getting information directly to the teams. They can see what the problem is and they can fix it without us having to go chase them down and tell them that they have a misconfiguration.
The solution secures the entire spectrum of compute options such as hosts and VMs, containers and Containers as a Service. We are not using the container piece as yet, but that is a functionality that we're looking forward to getting to use. Overall, it gives us fantastic visibility into the cloud environment.
Prisma Cloud also provides the data needed to pinpoint root cause and prevent an issue from occurring again. A lot of that has to do with the policies that are built into the solution and the documentation around those policies. The policy will tell the user what the misconfiguration is, as well as give them remediation steps to fix it. It speeds up our remediation efforts. In some cases, when my team, the security team, gets involved, we're not necessarily experts in AWS and wouldn't necessarily know how to remediate the issue that was identified. But because the instructions are included as part of the Prisma Cloud product, we can just cut and paste them and provide them to the team. And when the teams are addressing these directly, they also have access to those remediation instructions and can refer to them to figure out what they need to do to remediate the issue and to speed up remediation of misconfigurations.
In some cases, these capabilities could be saving us hours in remediation work. In other cases, it may not really be of value to the team. For example, if an S3 bucket is public facing, they know how to fix that. But on some of the more complex issues or policies, it might otherwise take a lot more work for somebody to figure out what to do to fix the issue that was identified.
In terms of the solution’s ability to show issues as they are discovered during the build phases, I can only speak to post-deployment because we don't have it integrated earlier in the pipeline. But as far as post-deployment goes, we get notified just about immediately when something comes up that is misconfigured. And when that gets remediated, the alert goes away immediately in the tool. That makes it really easy in a shared platform like this, where we have shared responsibility between the team that's involved and my security operations team. It makes it really easy for us to be able to go into the tool and say, "There was an alert but that alert is now gone and that means that the issue has been resolved," and know we don't have to do any further research.
For the developers, it speeds up their ability to fix things. And for my team, it saves us a ton of time in not having to potentially investigate each one of those misconfigurations to see if it is still a misconfiguration or not, because it's closed out automatically once it has been remediated. On an average day, these abilities in the solution save my team two to three hours, due to the fact that Prisma Cloud is constantly updating the alerts and closing out any alerts that are no longer valid.
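To illustrate how one can confirm that alerts have closed themselves out, here is a minimal sketch against the CSPM alerts API; the /v2/alert endpoint and parameter names follow the public API docs, but treat them as assumptions to verify:

```python
import requests

BASE = "https://api.prismacloud.io"

# Authenticate, then pull alerts resolved in the last 24 hours.
token = requests.post(
    f"{BASE}/login",
    json={"username": "<ACCESS_KEY_ID>", "password": "<SECRET_KEY>"},
    timeout=30,
).json()["token"]

resp = requests.get(
    f"{BASE}/v2/alert",
    headers={"x-redlock-auth": token},
    params={
        "timeType": "relative", "timeAmount": "24", "timeUnit": "hour",
        "alert.status": "resolved", "detailed": "true",
    },
    timeout=30,
)
for alert in resp.json().get("items", []):
    print(alert["id"], alert["policy"]["name"], alert["status"])
```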
The policies that come prepackaged in the tool have been very valuable to us. They're accurate and they provide good guidance as to why the policy was created, as well as how to remediate anything that violates the policy.
The Inventory functionality, enabling us to identify all of the resources deployed into a single account in either AWS or Azure, or into Prisma Cloud as a whole, has been really useful for us.
And the investigate function that allows us to view the connections between different resources in the cloud is also very useful. It allows us to see the relationship traffic between different entities in our cloud environment.
The integration of the Compute function into the cloud monitoring function—because those are two different tools that are being combined together—could use some more work. It still feels a little bit disjointed.
Also, the permissions modeling around the tool is improving, but is still a little bit rough. The concept of having roles that certain users have to switch between, rather than have a single login that gives them visibility into all of the different pieces, is a little bit confusing for my users. It can take some time out of our day to try to explain to them what they need to do to get to the information they need.
I have been using Palo Alto Prisma Cloud for about a year and a half.
We really have had very few issues with the stability. It's been up, it's been working. We've had maybe two or three very minor interruptions of the service and of our ability to log in to it. In each case it was half an hour or an hour, at most, during which we were unable to get into it, and then it was resolved. There was usually information on it in the support portal, including the reason for it and the expectation around when they would get it back up.
It seems to scale fine for us. We started out with 10 to 15 accounts in there and we're now up to over 200 accounts and, on our end, seemingly nothing has changed. It's as responsive as it's ever been. We just send off our logs. Everything seems to integrate properly with no complaints on our side.
We have nearly 600 users in the system, and they're broken out into two different levels. There are the full system administrators, like myself and my team and the security team that is responsible for our cloud environment as a whole. We have visibility across the entire environment. And then we have the development teams and they are really limited to accessing their specific accounts that are deployed into Prisma Cloud. They have full control over those accounts.
For our cloud environments, the adoption rate is pretty much 100 percent. A lot of that has to do with that automated deployment we created. A new account gets started and it is automatically added to the tool. All of the monitoring is configured and everything else is set up by default. You can't build a new cloud account in our environment without it getting added in. We have full coverage, and we intend to keep it that way.
Tech support has been very responsive. They are quick to respond to tickets and knowledgeable in their responses. Their turnaround time is usually 24 to 48 hours. It's very rare that we would open anything that would be considered a high-priority ticket or incident. Most of the stuff was lower priority and that turnaround was perfectly acceptable to us.
This is our first tool of this sort.
The initial setup was really straightforward. We then started using the provided APIs to do some automated integration between our cloud environment and Prisma Cloud. That has worked really well for us and has streamlined our deployment by a good deal. However, what we found was that the APIs were changing as we were doing our deployment. We started down the path we created with some of those integrations, and then there were undocumented changes to the APIs which broke our integrations. We then had to go back and fix those integrations.
What may have happened were improvements in the API on the backend and those interfered with what we had been doing. It meant that we had to go back and reconfigure that integration to make it work. My understanding from our team that was responsible for that is that the new integration works better than the old integration did. So the changes Palo Alto made were an improvement and made the environment better, but it was something of a surprise to us, without any obvious documentation or heads-up that that was going to change. That caught us a little bit out and broke the integration until we figured out what had changed and fixed it.
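For context, here is a minimal sketch of the kind of account-onboarding call we automated. The field names mirror the public AWS cloud-account endpoint as documented at the time, but as noted above, the APIs have changed, so verify them first:

```python
import requests

BASE = "https://api.prismacloud.io"

token = requests.post(
    f"{BASE}/login",
    json={"username": "<ACCESS_KEY_ID>", "password": "<SECRET_KEY>"},
    timeout=30,
).json()["token"]

# Register a newly created AWS account so monitoring starts by default.
new_account = {
    "accountId": "123456789012",
    "name": "team-x-dev",
    "enabled": True,
    "externalId": "<EXTERNAL_ID>",  # from the IAM role created for Prisma Cloud
    "roleArn": "arn:aws:iam::123456789012:role/PrismaCloudReadOnlyRole",
    "groupIds": ["<ACCOUNT_GROUP_ID>"],
}
resp = requests.post(
    f"{BASE}/cloud/aws",
    headers={"x-redlock-auth": token},
    json=new_account,
    timeout=30,
)
resp.raise_for_status()
print("Onboarded", new_account["accountId"])
```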
There is only a learning curve on the Compute piece, specifically, and understanding how to pivot between that and the rest of the tool, for users who have access to both. There's definitely a learning curve for that because it's not at all obvious when you get into the tool the first time. There is some documentation on that, but we put together our own internal documentation, which we've shared with the teams to give them more step-by-step instructions on what it is that they need to do to get to the information that they're looking for.
The full deployment took us roughly a month, including the initial deployment of rolling everything out, and then the extended deployment of building it to do automated deployments into new environments, so that every new environment gets added automatically.
Our implementation strategy was to pick up all of the accounts that we knew that we had to do manually, while we were working on building out that automation to speed up the onboarding of the new accounts that we were creating.
We did all of that on our own, just following the API documentation that they had provided. We had a technical manager from Palo Alto with whom we were working as we were doing the deployment, but the automated deployment work that we did was all on our own and all done internally.
At this point, we really don't have anybody dedicated to deployment because we've automated that process. That has vastly simplified our deployment. Maintenance-wise, as it is a SaaS platform, we don't really have anybody who works on it on a regular basis. It's really more ad hoc. If something is down, if we try to connect to it and if we can't get into the portal or whatever the case may be, then somebody will open a ticket with support to see what's going on.
We have seen ROI although it's a little hard to measure because we didn't have anything like this before.
The biggest areas of ROI that we've seen with it have been the uptake by the organization, the ease of deploying the tool—especially since we got that full automation piece created and taken care of—as well as the visibility and the speed at which somebody can start using the tool. I generally give employees about an hour or two of training on the tool and then turn them loose on it, and they're capable of working out of it and getting most of the value. There are some things that take more time to get up to speed on, but for the most part, they're able to get up to speed pretty quickly, which is great.
The pricing and the licensing are both very fair.
There aren't any costs in addition to the standard licensing fees, at this time. My understanding is that at the beginning of 2021 they're not necessarily changing the licensing model, but they're changing how some of the new additions to the tool are going to be licensed, and that those would be an additional cost beyond what we're paying now.
The biggest advice I would give in terms of costs would be to try to understand what the growth is going to look like. That's really been our biggest struggle: we don't have an idea of what our future growth on the platform is going to be. We go from X number of licenses to Y number of licenses without a plan for how we're going to get from one to the other, and a lot of that comes as a bit of a surprise. It can make budgeting a real challenge. If an organization knows what it has in place, or can get an idea of what its growth is going to look like, that would really help with the budgeting piece.
We had looked at a number of other tools. I can't tell you off the top of my head what we had looked at, but Prisma Cloud was the tool that we had always decided that we wanted to have. This was the one that we felt would give us the best coverage and the best solution, and I feel that we were correct on that.
The big pro with Prisma Cloud was that we felt it gave us better visibility into the environment and into the connections between entities in the cloud. That visualization piece is fantastic in this tool. We felt like that wasn't really there in some of the other tools.
Some of the other tools had a little bit better or broader policy base, when we were initially looking at them. I have a feeling that at this point, with the rate that Palo Alto is releasing new policies and putting them into production, that it is probably at parity now. But there was a feeling, at the time, among some of the other members of the team that Palo Alto came up short and didn't have as many policies as some of the other tools that we were looking at.
I would highly recommend automating the process of deploying it. That has made just a huge improvement on the uptake of the tool in our environment and in the ease of integration. There's work involved in getting that done, but if we were trying to do this manually, we would never be able to keep up with the rate that we've been growing our environment.
The biggest lesson I've learned in using this solution is that we were absolutely right that we needed a tool like this in our environment to keep track of our AWS environment. It has identified a number of misconfigurations and it has allowed us to answer a lot of questions about those misconfigurations that would have taken significantly more time to answer if we were trying to do so using native AWS tools.
The tool has an auto-remediation functionality that is attractive to us. It is something that we've discussed, but we're not yet comfortable using it. It would be really useful to be able to auto-remediate security misconfigurations. For example, if somebody were to open something up that should be closed, and that violated one of our policies, we could have Prisma Cloud automatically close it. That would give us better control over the environment without having to have anybody manually remediate some of the issues.
Prisma Cloud also secures the entire development lifecycle from build to deploy to run. We could integrate it closer into our CI/CD pipeline. We just haven't gone down that path at this point. We will be doing that with the Compute functionality and some of the teams are already doing that. The functionality is there but we're just not taking advantage of it. The reason we're not doing so is that it's not how we initially built the tool out. Some of the teams have an interest in doing that and other teams do not. It's up to the individual teams as to whether or not it provides them value to do that sort of an integration.
As for the solution's alerts, we have them identified at different severities, but we do not filter them based on that. We use those as a way of prioritizing things for the teams, to let them know that if it's "high" they need to meet the SLA tied to that, and similarly if it's "medium" or "low." We handle it that way rather than using the filtering. The way we do it does help our teams understand what situations are most critical. We went through all of the policies that we have enabled and set our priority levels on them and categorized them in the way that we think that they needed to be categorized. The idea is that the alerts get to the teams at the right priority so that they know what priority they need to assign to remediating any issues that they have in their environment.
I would rate the solution an eight out of 10. The counts against it would be that the Compute integration still seems to need a little bit of work, as though it's working its way through things. And some of the other administrative pieces can be a little bit difficult. But the visibility is great and I'm pretty happy with everything else.
When we migrated our workloads from on-prem to the cloud, we used Prisma Cloud to tell us whether our workloads were PCI compliant.
Prisma Cloud ensures that our organization is PCI compliant.
The most valuable feature is its cloud security posture management. Prisma Cloud is very easy to use and gives us daily reports.
The user interface should be improved and made easier to use.
We have been using Prisma Cloud by Palo Alto Networks for five years.
The stability is good.
The scalability is good.
Prisma Cloud’s customer support is good.
We have seen an ROI with respect to time and metrics.
Regarding Prisma Cloud's pricing, we started small, and then we just kept on growing.
Before choosing Prisma Cloud, we evaluated SolarWinds as an option. We chose Prisma Cloud because SolarWinds wasn't enterprise-level software.
The solution has a moderate level of ease of use. Prisma Cloud has helped free 50% of our staff's time to work on other projects. Many tasks were done manually before, but now things are faster with Prisma Cloud.
We are trying to learn about new cybersecurity issues and what other solutions are available to combat them.
Overall, I rate Prisma Cloud an eight out of ten.
I use it for testing and visibility.
Palo Alto has helped our organization improve its security posture.
CSPM is the most valuable feature.
They should improve the user experience. It is complicated to integrate the solution with a public cloud provider.
I have been using the solution for two years.
I’m happy with the stability of the solution.
The solution has strong scalability.
We have seen an ROI on the solution. We have full inventory visibility and a full security posture.
The pricing of the solution is fair.
I attend the RSA conference to close gaps. Attending the conference impacts our cybersecurity purchases because it helps us build a roadmap for future evolution. Overall, I rate the solution a seven out of ten.
Our use case for the solution is monitoring our cloud configurations for security. That use case, by itself, is huge. We use the tool to monitor the security configuration of our AWS and Azure clouds. Those configurations include storage, networking, and IAM, and the tool also monitors for malicious traffic.
We have about 50 users and most of them use it to review their own resources.
If, for a certain environment, someone configures a connection to the internet, like Windows RDP, which is not allowed in our environment, we immediately get an alert that says, "Hey, there's been a configuration of Windows Remote Desktop Protocol, and it's connected directly to the internet." Because that violates our policy, and it's also not something we desire, we will immediately reach out to have that connection taken down.
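The same kind of check can be reproduced outside the tool. As an illustration only, and not Prisma Cloud's policy-driven detection logic, here is a boto3 sketch that flags security groups exposing RDP to the internet:

```python
import boto3

def find_world_open_rdp(region: str):
    # Report security groups that allow RDP (TCP 3389) from 0.0.0.0/0.
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg["IpPermissions"]:
                world_open = any(r.get("CidrIp") == "0.0.0.0/0"
                                 for r in perm.get("IpRanges", []))
                # "FromPort" is absent when a rule covers all protocols/ports.
                covers_rdp = (perm.get("FromPort", 0) <= 3389
                              <= perm.get("ToPort", 65535))
                if world_open and covers_rdp:
                    offenders.append(sg["GroupId"])
    return offenders
```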
We're also integrating it into our CI/CD pipeline. There are parts we've integrated already, but we haven't done so completely. For example, we've integrated container scanning into the CI/CD. When they build a container in the pipeline, it's automatically deployed and the results come back to our console, where we monitor them. The beauty of it is that we give our developers access to this information. That way, as they build, they get near real-time alerting that says, "This configuration is good. This configuration is bad." We have found that very helpful because it provides instant feedback to the development team. Instead of finding out in a later review, "Oh, this is not good," they already know: "Oh, we should not configure it this way, let's configure it more securely another way." They know because the alerts arrive in near real-time.
That's part of our strategy. We want to bring this information as close to the DevOps team as possible. That's where we feel the greatest benefit can be achieved. The near real-time feedback on what they're doing means they can correct it there, versus several days down the road when they've already forgotten what they did.
And where we have integrated it into our CI/CD pipeline, I am able to view vulnerabilities through our different stages of development.
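A typical shape for that pipeline scanning step is a thin wrapper around the twistcli binary that ships with the Compute Console, sketched below in Python. The flag names are the commonly documented ones; verify them against your Console version.

```python
import subprocess
import sys

def scan_image(image: str, console_url: str, user: str, password: str) -> None:
    # Run twistcli against the Compute Console; a nonzero exit code means the
    # image exceeded the vulnerability/compliance thresholds set in the Console.
    result = subprocess.run([
        "twistcli", "images", "scan",
        "--address", console_url,
        "--user", user,
        "--password", password,
        "--details",  # emit per-finding detail into the CI log for developers
        image,
    ])
    if result.returncode != 0:
        sys.exit(f"{image} failed the Prisma Cloud Compute scan")
```

Failing the build on a bad scan is what turns the console findings into the near real-time developer feedback described above.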
It has enhanced collaboration between our DevOps and SecOps teams by being very transparent. Whatever we see, we want them to see. That's our strategy. Whatever we in security know, we want them to know, because it's a collaborative effort. We all need each other to get things fixed. If they're configuring something and it comes to us, we want them to see it. And our expectation is that, hopefully, they've fixed it by the time we contact them. Once they have fixed it, the alert goes away. Hopefully, it means that everyone has less to do.
We also use the solution's ability to filter alerts by level of severity. Within our cloud, we have accounts that are managed, and certain groups are responsible for them. We're able to direct the alerting and the reporting to the people who are managing those groups or those cloud accounts. The ability to filter alerts by severity definitely helps our team understand which situations are the most critical. Alerts are rated high, medium, or low. Of course, we go after the "highs" and tell them to fix those immediately, or as close to immediately as possible. We send the "mediums" and "lows" to tickets. In some instances, they've already fixed them because they've seen the issue and know we'll be knocking on the door. They realize, "Oh, we need to fix this or else we're going to get a ticket." They want to do it the right way, and this gives them the information to make the proper configuration.
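A sketch of what that routing can look like against the alert API follows. The endpoint path, query parameters, and response fields are assumptions based on the v2 alert API; check your tenant's API reference before relying on them.

```python
import requests
from collections import defaultdict

PRISMA_API = "https://api.prismacloud.io"  # host varies by tenant region

def open_high_alerts_by_account(token: str):
    # Pull open high-severity alerts from the last 24 hours, grouped by the
    # cloud account that owns the offending resource.
    headers = {"x-redlock-auth": token}
    params = {
        "detailed": "true",
        "alert.status": "open",
        "policy.severity": "high",
        "timeType": "relative", "timeAmount": "24", "timeUnit": "hour",
    }
    resp = requests.get(f"{PRISMA_API}/v2/alert", headers=headers, params=params)
    resp.raise_for_status()
    grouped = defaultdict(list)
    for alert in resp.json().get("items", []):
        account = alert.get("resource", {}).get("accountId", "unknown")
        grouped[account].append(alert["id"])
    return grouped
```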
Prisma Cloud also provides the data needed to pinpoint root cause and prevent an issue from occurring again. When there's an alert for an issue, the event tells you how to fix it. It will say, "Go to this, click on this, do this, do that." It will tell you why you got the alert and how to fix it.
In addition, the solution's ability to show issues as they are discovered during the build phases is really good. We have different environments. Our lower environments, dev, QA, and integration, don't hold any data, while the upper environment has actual production data. There's a gradual progression from the lower environments until, hopefully, they figure out what to do and then move into the upper environment. We see the alerts come in and we see how they're configuring things. It gives us good feedback through the whole life cycle as they're developing a product, and we see that in near real-time through the whole development cycle.
I don't know if the solution reduces runtime alerts, but its monitoring helps us be more aware of vulnerabilities that come into the stack. Attackers may be using new vulnerabilities, and Prisma Cloud has increased the visibility of any new runtime alerts.
It does reduce alert investigation times because of the information the alerts give us. When we get an alert, it tells us the source, where it comes from. We're able to identify things because it uses a protocol called NetFlow. It tracks the network traffic for us and says, "This alert was generated because these sources are creating this traffic," or "It's coming internally from these devices," and it names them. For example, we run vulnerability scanning weekly in our environment to scan for weaknesses and report on them. At times, the vulnerability scanner may trigger an alert in Prisma. Prisma will say, "Oh yeah, something is scanning your environment." We're able to use this Prisma information to identify the resources that have been scanning our environment, recognize them quickly as our vulnerability scanner, and dismiss the alert based on the information Prisma provides. Prisma also provides the name or ID of a particular service or user that may have triggered an alert. We're able to reach out to that individual and ask, "Hey, is this you?" because of the information provided by Prisma, without having to dig through tons of logs to identify who it was.
Because Prisma gives us the information and we don't have to do individual research, it easily saves us at least one to two hours per day, and probably more.
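For the scanner case above, the triage logic reduces to checking whether an alert's source addresses all fall inside the scanner's known ranges. A small illustration follows, with made-up CIDRs and a simplified alert shape rather than the exact Prisma payload:

```python
import ipaddress

# Made-up ranges for illustration; substitute your scanner's real CIDRs.
SCANNER_RANGES = [ipaddress.ip_network(n) for n in ("10.20.0.0/24", "10.21.5.0/28")]

def is_internal_scanner(source_ips) -> bool:
    # True only if every source address belongs to a known scanner range,
    # in which case the alert can be dismissed as routine scanning.
    return all(
        any(ipaddress.ip_address(ip) in net for net in SCANNER_RANGES)
        for ip in source_ips
    )
```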
One of the most valuable features is the monitoring of configurations for our cloud, because cloud configurations can be done in hundreds of ways. We use this tool to ensure that those configurations do not present a security risk by granting excessive rights or by punching a hole into the internet that we're not aware of.
One of the strengths of this tool stems from the fact that we, as a security team, are not configuring everything. We have a decentralized DevOps model, so we depend on individual groups to configure their environments for their development and product needs. That means we're not aware of exactly what they're doing because we're not there all the time. However, we are alerted to things such as a connection being opened to the internet that brings traffic in. We can then ask questions like, "Why do you need that? Did you secure it properly?" We have found it highly beneficial for monitoring those configurations across teams and our DevOps environment.
We're not only using the configuration, but also the containers, the container security, and the serverless function. Prisma will look to see that a configuration is done in a particular, secure pattern. When it's not done in that particular pattern, it gives us an alert that is either high, medium, or low. Based on those alerts, we then contact the owners of those environments and work with them on remediating the alerts. We also advise them on their weaker-than-desirable configuration and they fix it. We have people who are monitoring this on a regular basis and who reach out to the different DevOps groups.
It scans our containers in real time. Also, as they're built, it looks into the container repository where the images are built, telling us ahead of time, "You have vulnerabilities here, and you should update this code before you deploy." And once a container is deployed, it scans for vulnerabilities in production as the container is running. We're also moving into serverless, which runs off code, like Azure Functions and AWS Lambdas, essentially small strips of code. We're using Prisma to monitor that too, making sure that the serverless functions are configured correctly and that we don't have commands and functions in there that are overly permissive.
The challenge that Palo Alto and Prisma have is that, at times, the instructions in an event are a little bit dated and they're not usable. That doesn't apply to all the instructions, but there are times where, for example, the Microsoft or the Amazon side has made some changes and Palo Alto or Prisma was not aware of them. So as we try to remediate an alert in such a case, the instructions absolutely do not work. Then we open up a ticket and they'll reply, "Oh yeah, the API for so-and-so vendor changed and we'll have to work with them on that." That area could be done a little better.
One additional feature I'd like to see is more of a focus on API security. API security is an area that is definitely growing, because almost every web application has tons of APIs connecting to other web applications with tons of APIs. That's a huge area, and I'd love to see a little bit more growth there. For example, when it comes to the monitoring of APIs within the cloud environment: who has access to the APIs? How old are the API keys? How often are those APIs accessed? That would be good to know, because there could be APIs that are never really accessed, and maybe we should get rid of them. Also, what roles are attached to those APIs, and which resources are they connected to? An audit and inventory of API usage would be helpful.
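To give a sense of what such an audit could report, here is a small boto3 sketch that inventories IAM access keys, their age, and their last use, which is the kind of data the reviewer wants surfaced natively. This is plain AWS IAM, not a Prisma Cloud feature.

```python
import boto3
from datetime import datetime, timezone

def audit_access_keys() -> None:
    # For every IAM user, report each access key's age and last-used time.
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                age_days = (now - key["CreateDate"]).days
                last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                last_used = last["AccessKeyLastUsed"].get("LastUsedDate")
                print(f'{user["UserName"]} {key["AccessKeyId"]}: '
                      f'{age_days}d old, last used {last_used or "never"}')
```

Keys that are old and never used are the natural candidates for removal, which is exactly the cleanup decision the reviewer describes.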
I've been using Palo Alto Prisma for about a year and a half.
It's a stable solution.
The scalability is average.
Palo Alto's technical support for this solution is okay.
We did not have a previous solution as such. It was the same product, then called RedLock, which was subsequently acquired by Palo Alto.
The initial setup took a day or two and was fairly straightforward.
As for our implementation strategy, we implemented it ourselves, with support from Prisma.
In terms of maintenance, one FTE would be preferable, but we do not have that.
One thing we're very pleased about is how the licensing model for Prisma is based on work resources. You buy a certain amount of work resources and then, as they enable new capabilities within Prisma, it just takes those work resource units and applies them to new features. This enables us to test and use the new features without having to go back and ask for and procure a whole new product, which could require going through weeks, and maybe months, of a procurement process.
For example, when they brought in containers, we were able to utilize containers because it goes against our current allocation of work units. We were immediately able to do piloting on that. We're very appreciative of that kind of model. Traditionally, other models mean that they come out with a new product and we have to go through procurement and ask, "Can I have this?" You install it, or you put in the key, you activate it, and then you go through a whole process again. But this way, with Prisma, we're able to quickly assess the new capabilities and see if we want to use them or not. For containers, for example, we could just say, "Hey, this is not something we want to spend our work units on." And you just don't add anything to the containers. That's it.
The biggest lesson I have learned while using the solution is that you need to tune it well.
The Prisma tool offers a lot of functionality and a lot of configuration. It's a very powerful, feature-rich tool. For people who want to use this product, it's definitely a good one. But be aware that, because it's so feature-rich, doing it right and using all the functionality requires somebody with a dedicated amount of time to manage it. It's not complicated, but it will certainly take time for dedicated resources to fully utilize all that Prisma has to offer. Ideally, you should be prepared to assign someone as an SME to learn it and have that person teach others on the team.
I would rate Prisma Cloud at nine out of 10, compared to what's out there.
