Prisma Cloud by Palo Alto Networks Overview

Prisma Cloud by Palo Alto Networks is the #1 ranked solution in top Web Application Firewalls, Container Security Solutions, Cloud Workload Protection Platforms, Microsegmentation Software tools, Cloud Security Posture Management (CSPM) tools, and Cloud-Native Application Protection Platform (CNAPP) tools. PeerSpot users give Prisma Cloud by Palo Alto Networks an average rating of 7.6 out of 10. Prisma Cloud by Palo Alto Networks is most commonly compared to Microsoft Defender for Cloud. Prisma Cloud by Palo Alto Networks is popular among the large enterprise segment, which accounts for 72% of users researching this solution on PeerSpot. The top industry researching this solution is computer software, whose professionals account for 20% of all views.
Prisma Cloud by Palo Alto Networks Buyer's Guide

Download the Prisma Cloud by Palo Alto Networks Buyer's Guide including reviews and more. Updated: November 2022

What is Prisma Cloud by Palo Alto Networks?

Prisma Cloud is a comprehensive cloud-native security platform (CNSP) that provides security and compliance coverage for infrastructure, applications, data, and all cloud-native technology stacks throughout the development lifecycle. Prisma Cloud safeguards cloud operations across hybrid and multi-cloud environments, all from a single, unified solution, using a combination of cloud service provider APIs and a unified agent framework.

The move to the cloud has changed all aspects of the application development lifecycle, with security being foremost among them. Security and DevOps teams face a growing number of entities to secure as organizations adopt cloud-native approaches. Constantly changing environments challenge developers to build and deploy at a rapid pace without compromising on security. Prisma Cloud by Palo Alto Networks delivers complete security and compliance coverage across the development lifecycle on any cloud environment, enabling you to develop cloud-native applications with confidence.

Prisma Cloud Features

Prisma Cloud offers comprehensive security coverage in all areas of the cloud development lifecycle:

  • Code security: Protect configurations, scan code before it enters production, and integrate with other tools.

  • Security posture management: Monitor posture, identify and remove threats, and provide compliance across public clouds.

  • Workload protection: Secure hosts and containers across the application lifecycle.

  • Network security: Gain network visibility and enforce microsegmentation.

  • Identity security: Enforce permissions and secure identities across clouds.

Benefits of Prisma Cloud

  • Unified management: All users work from the same dashboards, built through shared onboarding, so cloud security can be managed from a single agent framework.

  • High-speed onboarding: Multiple cloud accounts and users are onboarded within seconds, rapidly activating integrated security capabilities.

  • Multiple integration options: Prisma Cloud can integrate with widely used IDE, SCM, and CI/CD workflows early in development, enabling users to identify and fix vulnerabilities and compliance issues before they enter production. Prisma Cloud supports all major workflows, automation frameworks, and third-party tools.
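
To make the shift-left idea concrete, here is a minimal sketch of a pre-production gate that scans infrastructure-as-code before it merges. Checkov, a widely used open-source IaC scanner, is used purely as an illustration of the pattern; it is not necessarily the tooling your Prisma Cloud integration provides, and the repository path is a placeholder.

```python
# A minimal pre-production gate: scan infrastructure-as-code in the repository
# and block the change if the scanner reports failed checks. Checkov is used
# only as an illustration of the pattern; substitute whatever scanner your
# Prisma Cloud integration provides. The directory path is a placeholder.
import subprocess
import sys

IAC_DIR = "infrastructure/"   # placeholder path to the Terraform/IaC files


def main() -> None:
    # Checkov exits non-zero when any check fails, which is enough for a simple gate.
    result = subprocess.run(["checkov", "-d", IAC_DIR])
    if result.returncode != 0:
        sys.exit("IaC checks failed; fix the findings before merging")
    print("IaC checks passed")


if __name__ == "__main__":
    main()
```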

Reviews from Real Users

Prisma Cloud stands out among its competitors for a number of reasons. Two major ones are its integration capabilities and its visibility, which make it very easy for users to get a full picture of the cloud environment.

Alex J., an information security manager at Cobalt.io, writes, “Prisma Cloud has enabled us to take a very strong preventive approach to cloud security. One of the hardest things with cloud is getting visibility into workloads. With Prisma Cloud, you can go in and get that visibility, then set up policies to alert on risky behavior, e.g., if there are security groups or firewall ports open up. So, it is very helpful in preventing configuration errors in the cloud by having visibility. If there are issues, then you can find them and fix them.”

Luke L., a cloud security specialist for a financial services firm, writes, “You can also integrate with Amazon Managed Services. You can also get a snapshot in time, whether that's over a 24-hour period, seven days, or a month, to determine what the estate might look like at a certain point in time and generate reports from that for vulnerability management forums.”

Prisma Cloud by Palo Alto Networks was previously known as Palo Alto Networks Prisma Cloud, Prisma Public Cloud, RedLock Cloud 360, RedLock, Twistlock, and Aporeto.

Prisma Cloud by Palo Alto Networks Customers

Amgen, Genpact, Western Asset, Zipongo, Proofpoint, NerdWallet, Axfood, 21st Century Fox, Veeva Systems, Reinsurance Group of America

Archived Prisma Cloud by Palo Alto Networks Reviews (more than two years old)

Security Architect at a computer software company with 11-50 employees
Real User
Looks across our various cloud estates and provides information about what's going on, where it is going on, and when it happened
Pros and Cons
  • "One of the main reasons we like Prisma Cloud so much is that they also provide an API. You can't expect to give someone an account on Prisma Cloud, or on any tool for that matter, and say, "Go find your things and fix them." It doesn't work like that... We pull down the information from the API that Prisma Cloud provides, which is multi-cloud, multi-account—hundreds and hundreds of different types of alerts graded by severity—and then we can clearly identify that these alerts belong to these people, and they're the people who must remediate them."
  • "Based on my experience, the customization—especially the interface and some of the product identification components—is not as customizable as it could be. But it makes up for that with the fact that we can access the API and then build our own systems to read the data and then process and parse it and hand it to our teams."

What is our primary use case?

We have a very large public cloud estate. We have nearly 300 public cloud accounts, with almost a million things deployed. It's pretty much impossible to track all of the security and the compliance issues using anything that would remotely be considered homegrown—scripts, or something that isn't fully automated and supported. We don't have the time, or necessarily even the desire, to build these things ourselves. So we use it to track compliance across all of the various accounts and to manage remediation. 

We also have 393 applications in the cloud, all of which are part of various suites, which means there are at least 393 teams or groups of people who need to be held accountable for what they have deployed and what they wish to do. 

It's such a large undertaking that automating it is the only option. To bring it all together, we use it to ensure that we can measure and track and identify the remediation of all of our public cloud issues.

How has it helped my organization?

The solution provides risk clarity at runtime and across the entire pipeline, showing issues as they are discovered during the build phases. Our developers are able to correct them using the tools they use to code. It gives our developers a point to work towards. If the information provided by this didn't exist, then we wouldn't be able to give our developers the direction that they need to go and fix the issues. It comes back to ownership. If we can give full ownership of the issues to a team, they will go fix them. Honestly, I don't care how they fix them. I don't really mind what tools they use.

It is reducing runtime alerts. We're still in the process of working through those, but we have already seen a significant decrease, absolutely.

What is most valuable?

The entire concept is the right thing for us. It's what we need. The application is the feature, so to speak. What it does is what we want it for: looking across the various cloud estates and providing us with information about what's going on in our cloud, where it is going on, and when it happened. The product is the most valuable feature. It's not a be-all and end-all product. That doesn't exist. But it's a product with a very specific purpose. And we bought it for that very specific purpose.

When it comes to protecting the full cloud native stack—the pure cloud component of the stack—it is very good.

One of the main reasons we like Prisma Cloud so much is that they also provide an API. You can't expect to give someone an account on Prisma Cloud, or on any tool for that matter, and say, "Go find your things and fix them." It doesn't work like that. We've got to be able to clearly identify who owns what in our organization so that we can say, "Here's a report for your things and this is what you must go and fix." We pull down the information from the API that Prisma Cloud provides, which is multi-cloud, multi-account—hundreds and hundreds of different types of alerts graded by severity—and then we can clearly identify that these alerts belong to these people, and they're the people who must remediate them. That's our most important use case, because if you can't identify users, you can't remediate. No user is going to sit there going through over a million deployed things in the public cloud and say, "That one's mine, that one's not, that's mine, that's not." It's both the technology that Prisma Cloud provides and the ability to identify things distinctly, that comprise our use case.
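
As a rough illustration of the API-driven workflow this reviewer describes, the sketch below pulls open alerts and buckets them by cloud account and severity so they can be handed to the owning teams. The endpoint paths, query parameters, auth header, and field names are assumptions about the Prisma Cloud CSPM API; verify them against your tenant's API documentation before relying on anything like this.

```python
# A sketch of pulling alerts from the Prisma Cloud CSPM API and grouping them by
# cloud account and severity so they can be routed to the owning teams.
# NOTE: the endpoint paths, query parameters, auth header, and response field
# names below are assumptions -- verify them against your tenant's API docs.
import collections

import requests

API = "https://api.prismacloud.io"   # the API host is region-specific in practice
ACCESS_KEY = "YOUR_ACCESS_KEY"       # placeholder credentials
SECRET_KEY = "YOUR_SECRET_KEY"


def login() -> str:
    """Exchange an access key pair for a short-lived API token (assumed /login endpoint)."""
    resp = requests.post(f"{API}/login",
                         json={"username": ACCESS_KEY, "password": SECRET_KEY},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()["token"]


def open_alerts(token: str) -> list:
    """Fetch open alerts from the last 24 hours (assumed /v2/alert endpoint and params)."""
    resp = requests.get(f"{API}/v2/alert",
                        headers={"x-redlock-auth": token},
                        params={"timeType": "relative", "timeAmount": "24",
                                "timeUnit": "hour", "alert.status": "open",
                                "detailed": "true"},
                        timeout=60)
    resp.raise_for_status()
    return resp.json().get("items", [])


def bucket_by_owner(alerts: list) -> dict:
    """Group alert IDs by (cloud account, severity) so each team sees only its slice."""
    buckets = collections.defaultdict(list)
    for alert in alerts:
        account = alert.get("resource", {}).get("accountId", "unknown")   # assumed field names
        severity = alert.get("policy", {}).get("severity", "unknown")
        buckets[(account, severity)].append(alert.get("id"))
    return buckets


if __name__ == "__main__":
    grouped = bucket_by_owner(open_alerts(login()))
    for (account, severity), ids in sorted(grouped.items()):
        print(f"{account:>20}  {severity:<8}  {len(ids)} alert(s)")
```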

It also provides the visibility and control we need, regardless of how complex or distributed our cloud environments become. It doesn't care about the complexity of our environment. It gives us the visibility we need to have confidence in our compliance. Without it, we would have no confidence at all.

It is also part of our DevOps processes and we have integrated security into our CI/CD pipeline. To be honest, those touchpoints are not as seamless as they could be because our processes do rely on multiple tools and multiple teams. But it is one of the key requirements in our DevOps life cycle for the compliance component to be monitored by this. It's a 100 percent requirement. The teams must use it all the time and be compliant before they move on to the next stage in each release. It is a bit manual for us, but that's because of our environment. It's given our SecOps teams the visibility they need to do their jobs. There's absolutely no chance that those teams would have any visibility, on a normal, day-to-day basis, simply because the SecOps teams are very small, and having to deal with hundreds of other development teams would be impossible for them on a normal basis.

What needs improvement?

Based on my experience, the product, especially the interface and some of the product identification components, is not as customizable as it could be. But it makes up for that with the fact that we can access the API and then build our own systems to read the data, process and parse it, and hand it to our teams. At that point, we realized, "Okay, we're never going to have it fully customizable," because no team can expect an off-the-shelf product to fit itself to the needs of any organization. That's just impossible.

So customization, from our perspective, comes through the API, and that's the best we can do because there is no other sensible way of doing it. In effect, the customization lives inside the API, because that's what you end up using.

In terms of the product having room for improvement, I don't see any product being perfect, so I'm not worried about that aspect. The RedLock team is very responsive to our requirements when we do point out issues, and when we do point out stuff that we would like to see fixed, but the product direction itself is not a big concern for us.

For how long have I used the solution?

We've been using it since before it was called Prisma Cloud. We're getting on towards two years since we first purchased it.

What do I think about the stability of the solution?

The stability of Prisma Cloud is very good. I have no complaints along those lines. It seems to fit the requirements and it doesn't go down. Being a SaaS product, I would expect that. I haven't experienced any instability, and that's a good thing.

What do I think about the scalability of the solution?

Again, as a SaaS product, I would expect it to just scale.

How are customer service and support?

We regularly use Palo Alto technical support for the solution. I give it a top rating. They're very good. They have a very good customer success team. We've never had any issues. All our questions have been answered. It has been very positive.

Which solution did I use previously and why did I switch?

We did not have a previous solution.

How was the initial setup?

The initial setup was very straightforward. It's a SaaS product. All you have to do is configure your end, which isn't very hard. You just have to create a role for the product and, from there on, it just works, as long as the role is created correctly. Everything else you do after that is managed for you.
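
For readers who want a concrete picture of that role-creation step, here is a minimal boto3 sketch under stated assumptions: the trusted account ID and external ID are placeholders supplied by the onboarding flow (which normally generates a CloudFormation template that does this for you), and SecurityAudit stands in for whatever read-only permission set the product actually requires.

```python
# A minimal boto3 sketch of the "create a role" onboarding step the reviewer
# describes. The trusted account ID and external ID are placeholders supplied by
# the onboarding flow, and SecurityAudit is a representative read-only policy
# rather than the exact permission set the product requires.
import json

import boto3

PRISMA_ACCOUNT_ID = "111111111111"                        # placeholder: SaaS account that assumes the role
EXTERNAL_ID = "replace-with-external-id-from-onboarding"  # placeholder

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{PRISMA_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam = boto3.client("iam")

role = iam.create_role(
    RoleName="PrismaCloudReadOnlyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Cross-account read-only role for cloud posture monitoring",
)

# Attach a broad read-only audit policy; the vendor's template may scope this differently.
iam.attach_role_policy(
    RoleName="PrismaCloudReadOnlyRole",
    PolicyArn="arn:aws:iam::aws:policy/SecurityAudit",
)

print("Role ARN to register with the platform:", role["Role"]["Arn"])
```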

We have continuously been deploying it on new accounts as we spin them up. Our deployment has been going on since year one, but we've expanded. Two years ago we probably had about 40 or 50 cloud accounts. Now, we have 270 cloud accounts.

We have a team that is dedicated to managing our security tools. Something this big will always require some maintenance from our side: new accounts, and talking to internal teams. But this is as much about management of the actual alerts and issues as it is about anything else. It's no longer about whether the tool is being maintained. We don't maintain it. But what we do is maintain our interaction with the tool. We have two people, security engineers, who work with the tool on a regular basis.

What was our ROI?

It's a non-functional ROI. This isn't a direct-ROI kind of tool. The return is in understanding our security postures. That's incredibly important and that's why we bought it and that's what we need from it. It doesn't create funds; it is a control. But it certainly does stop issues, and how do you quantify that?

What's my experience with pricing, setup cost, and licensing?

Pricing wasn't a big consideration for us. Compared to the work that we do, and the other costs, this was one of the regular costs. We were more interested in the features than we were in the price.

If a competitor came along and said, "We'll give you half the price," that doesn't necessarily mean that's the right answer, at all. We wouldn't necessarily entertain it that way. Does it do what we need it to do? Does it work with the things that we want it to work with? That is the important part for us. Pricing wasn't the big consideration it might be in some organizations. We spend millions on public cloud. In that context, it would not make sense to worry about the small price differences that you get between the products. They all seem to pitch it at roughly the same price.

Which other solutions did I evaluate?

Before the implementation of Prisma Cloud, there were only two solutions in the market. The other one was Dome9. We did an evaluation and we chose this one, and they were both very new. This is a very new concept. It pretty much didn't exist until Prisma Cloud came along.

The Prisma Cloud solution was chosen because of the way it helped integrate with our operations people, and our operations people were very happy with it. That was one of the main concerns.

Both solutions are very good at what they do. They approach the same problem from different directions. It was this direction that worked for us. Having said that, certain elements of Prisma Cloud were definitely more attractive to us because they matched up with some of our requirements. I'm very loath to say one product is better than the other, because it does depend on your requirements. It does depend on how you intend to use it and what it is, exactly, that you're looking for.

What other advice do I have?

You need to identify how you'll be using it and what your use cases are. If you don't have a mature enough organizational posture, you're not going to use it to actually fix the issues because you won't have the teams ready to consume its information. You need to build that and that needs to be built into the thinking around that product. There's no point having information if you're not going to act on it. So understand who is going to act on it, and how, and then you've got a much better path to understanding your use for this. There's no point in buying a product for the sake of the product. You need the processes and the workflows that go with it and you need to build those. It's not good enough to just hope that they will happen.

The solution doesn't secure the entire spectrum of compute options because there are other Palo Alto products that secure containers, for example. This is very specifically focused on the configuration of the public cloud instances. It doesn't look inside those instances. You would need something else for that. You don't want to be using other products to do this. You don't want to mistake this for something that does everything. It doesn't. It is a very specific product and it is amazingly good at what it does.

We do integrate it with our workflow as part of the process of getting an application onto the internet. It does integrate with our workflow, giving us a posture as part of the workflow. But it is not a workflow tool.

It definitely does multi-cloud. It does the three major ones plus Alibaba Cloud. It doesn't reach into hybrid cloud, in the sense that it doesn't understand anything non-cloud. We don't use it to provide security, although it is very good for that. We already have an advanced security provision posture, because we are a very large organization. We just use it to inform us of security issues that are outside our other controls.

Prisma Cloud doesn't provide us with a single tool to protect all of our cloud resources and applications in terms of security and compliance reports because we have non-cloud-related tools being folded into the reports as well. Even though it works on the cloud, and is excellent at what it does, we integrate it with our Qualys reports, for example, which is the scanning on our hosts. Those hosts are in the cloud, but this doesn't touch them. There's no such thing as a single security tool, frankly. It's basically part of our portfolio and it's part of what every organization needs, in my opinion, to be able to manage their cloud security postures. Otherwise, it would just never work.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
LukeLynch - PeerSpot reviewer
Cloud Security Specialist at a financial services firm with 501-1,000 employees
Real User
Gives me a holistic view of cloud security across multiple clouds or multiple cloud workloads within one cloud provider
Pros and Cons
  • "You can also integrate with Amazon Managed Services. You can also get a snapshot in time, whether that's over a 24-hour period, seven days, or a month, to determine what the estate might look like at a certain point in time and generate reports from that for vulnerability management forums."
  • "In addition to that, I can get a snapshot of what I deemed were the priority vulnerabilities, whether it was identity access management, key rotation, or secrets management. Whatever you deem to be a priority for mitigating threats for your environment, you can get that as a snapshot."
  • "It's not really on par with, or catering to, what other products are looking at in terms of SAST and DAST capabilities. For those, you'd probably go to the market and look at something like Veracode or WhiteHat."

What is our primary use case?

Primarily the intent was to have a better understanding of our cloud security posture. My remit is to understand how well our existing estate in cloud marries up to the industry benchmarks, such as CIS or NIST, or even AWS's version of security controls and benchmarks.

When a stack is provisioned in a cloud environment, whether in AWS or Azure or Google Cloud, I can get an appreciation of how well the configuration is in alignment with those standards. And if it's out of alignment, I can effectively task those who are accountable for resources in clouds to actually remediate any identifiable vulnerabilities.

How has it helped my organization?

The solution is really comprehensive. Especially over the past three to four years, I was heavily dependent on AWS-native toolsets and config management. I had to be concerned about whether there were any permissive security groups or scenarios where logging might not have been enabled on S3 buckets, or if we didn't have encryption on EBS volumes. I was quite dependent on some of the native stacks within AWS.

Prisma not only looks at the workloads for an existing cloud service provider, but also at multiple cloud service providers outside of the native stack. Although the native tools on offer within AWS and Azure are really good, I don't want to be heavily dependent on them. And with Google, which doesn't have a security hub to give you that visibility, you're quite dependent on tools like Prisma Cloud to provide it. In the past, that used to be Dome9 or Evident.io. Palo Alto acquired Evident.io, and it was rebranded as this cloud posture management solution. It's proven really useful for me.

It integrates capabilities across both cloud security posture management and cloud workload protection. The cloud security posture management is what it was initially intended for, looking at configuration of cloud service workloads for AWS, Azure, Google, and Alibaba. And you can look at how the configuration of certain workloads align to standards of CIS, NIST, PII, etc.

And that brings our DevOps and SecOps teams closer together. The engineering aspect is accountable for provisioning dedicated accounts for cloud consumers within the organization. There might be just an entity within the business that has a specific use case. You then want to go to ensure that they take accountability for building their services in the cloud, so that it's not just a central function or that engineering is solely responsible. You want something of a handoff so that consumers of cloud within the organization can also have that accountability, so that it's a shared responsibility. Then, if you're in operations, you have visibility into what certain workloads are doing and whether they're matching the standards that have been set by the organization from a risk perspective.

You've also got the software engineering side of the business and they might just be focused on consuming base images. They may be building container environments or even non-container environments or hosting VMs. They also have a level of accountability to ensure that the apps or packages that they build on top of the base image meet a certain level of compliance, depending on what your business risk-appetite is. So it's really useful in that you've got that shared accountability and responsibility. And overall, you can then hand that off to security, vulnerability management, or compliance teams, to have a bird's-eye view of what each of those entities is doing and how well they're marrying up to the expected standards.

Prior to Prisma Cloud, you'd have to have point solutions for container runtime scanning and image scanning. They could be coupled together, but even so, if you were running multiple cloud service providers in parallel, you could never really get the whole picture from a governance perspective. You would struggle to actually determine, "Okay, how are we doing against the CIS benchmark for Azure, GCP, and AWS, and where are the gaps that we need to address from a governance and a compliance perspective so as to reduce our risk and the threat landscape?" Now that you've got Prisma Cloud, you can get that holistic view in a single pane of glass, especially if you're running multiple cloud workloads or a number of cloud workloads with one cloud service provider. It gives you the ability to look at private, public, or hybrid offerings. It saves me having to go to market and also run a number of proofs of concept for point solutions. It's an indication of how the market has matured and how Palo Alto, with Prisma Cloud in particular, understands what their consumers and clients want.

It can certainly help reduce alert investigation times, because you've got the detail that comes with the alert, to help remediate. The level of detail offered up by Prisma Cloud, for a given engineer who might not be that familiar with a specific type of configuration or a specific type of alert, saves the engineer having to delve into runbooks or online resources to learn how to remediate a particular alert. You have to compare it to a SIEM solution where you get an event or an alert is triggered. It's usually based on a log entry and the engineer would have to then start to investigate what that alert might mean. But with Prisma Cloud and Prisma Cloud Compute, you get that level of detail off the back of every event, which is really useful.

It's hard to quantify how much time it might save, but think about the number of events and what it would be like if they didn't have that level of detail on how to remediate, each time an event occurred. Suppose you had a threshold or a setting that was quite conservative, based on a particular cloud workload, and that there were a number of accounts provisioned throughout the day and, for each of those accounts, there were a number of config settings that weren't in alignment with a given standard. For each of those events, unless there was that level of detail, the engineer would have to look at the cloud service provider's configuration runbooks or their own runbooks to understand, "Okay, how do I change something from this to this? What's the polar opposite for me to get this right?" The great thing about Prisma Cloud is that it provides that right out-of-the-box, so you can quickly deduce what needs to be done. For each event, you might be saving five or 10 minutes, because you've got all the information there, served up on a plate.

What is most valuable?

For me, what was valuable from the outset was the fact that, regardless of which cloud service provider you're with, I could segregate visibility of specific accounts to account owners. For example, in AWS, you might have an estate that's managed solely by yourself, or there might be a number of teams within the organization that manage it.

You can also integrate with Amazon Managed Services. You can also get a snapshot in time, whether that's over a 24-hour period, seven days, or a month, to determine what the estate might look like at a certain point in time and generate reports from that for vulnerability management forums. In addition to that, I can get a snapshot of what I deemed were the priority vulnerabilities, whether it was identity access management, key rotation, or secrets management. Whatever you deem to be a priority for mitigating threats for your environment, you can get that as a snapshot.

You can also automate how frequently you want reports to be generated. You can then understand whether there has been any improvement or reduction in vulnerabilities over a certain time period.
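
As a small, tool-agnostic illustration of that trend reporting, the sketch below compares two exported snapshots and prints the change in finding counts per severity. The snapshot format, a JSON list of findings with a severity field, is a hypothetical stand-in for whatever your scheduled report export actually contains.

```python
# Compare two exported report snapshots and show whether the number of findings
# per severity improved or got worse. The input format is hypothetical: a JSON
# list of findings, each with a "severity" field. Adapt the loader to whatever
# your scheduled report export actually contains.
import json
from collections import Counter


def severity_counts(path: str) -> Counter:
    """Count findings per severity in one snapshot file."""
    with open(path) as fh:
        findings = json.load(fh)
    return Counter(f.get("severity", "unknown").lower() for f in findings)


def print_delta(previous_path: str, current_path: str) -> None:
    """Print the change per severity between two snapshots."""
    prev, curr = severity_counts(previous_path), severity_counts(current_path)
    for severity in sorted(set(prev) | set(curr)):
        delta = curr[severity] - prev[severity]
        trend = "improved" if delta < 0 else ("worse" if delta > 0 else "unchanged")
        print(f"{severity:<10} {prev[severity]:>5} -> {curr[severity]:<5} ({trend})")


# Example usage:
# print_delta("report-last-week.json", "report-this-week.json")
```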

The solution also enables you to ingest logs to your preferred SIEM provider so that you've got a better understanding of how things stack up with event correlation and SIEM systems.

If you've got an Azure presence, you might be using Office 365 and you might also have a presence in Google Cloud for the data, specifically. You might also want to look at scenarios where, if you're using tools and capabilities for DevOps, like Slack, you can plug those into Prisma Cloud as well to understand how well they marry up to vulnerabilities. You can also use it for driving out instant vulnerabilities into Slack. That way, you're looking at what your third-party SaaS providers are doing in relation to certain benchmarks. That's really useful as well.
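
Prisma Cloud has its own Slack integration, so the sketch below is only meant to show the shape of that hand-off: a small function that turns an alert payload into a Slack message via an incoming webhook. The webhook URL is a placeholder and the alert field names are assumptions, not an exact schema.

```python
# A standalone sketch of handing an alert off to Slack through an incoming
# webhook, to show the shape of the integration. The webhook URL is a
# placeholder, and the alert field names are assumptions about the notification
# payload rather than an exact schema.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"   # placeholder


def post_alert_to_slack(alert: dict) -> None:
    """Format an alert dict into a one-line Slack message and post it."""
    text = (f"*{alert.get('severity', 'unknown').upper()}* "
            f"{alert.get('policyName', 'Unnamed policy')} "
            f"in account `{alert.get('accountId', '?')}` "
            f"on resource `{alert.get('resourceName', '?')}`")
    resp = requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=15)
    resp.raise_for_status()


# Example payload (field names are illustrative only):
post_alert_to_slack({
    "severity": "high",
    "policyName": "S3 bucket is publicly readable",
    "accountId": "123456789012",
    "resourceName": "example-logs-bucket",
})
```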

In addition, an engineer may provision something like a shared service, a DNS capability, a sandbox environment, or a proof of concept. The ability to filter alerts by severity helps when reporting on the services that have been provisioned. They'll come back as a high, medium, or low severity and then I ensure that we align with our risk-appetite and prioritize higher and medium vulnerabilities so that they are closed out within a given timeframe.

When it comes to root cause, Prisma Cloud is quite intuitive. If you have an S3 bucket that has been set to public but, realistically, it shouldn't have been, you can look at how to remediate that quite intuitively, based on what the solution offers up as a default setting. It will offer up a way to actually resolve and apply the correct settings, in line with a given standard. There's almost no thinking involved. It's on-point and it's as if it offers up the specific criteria and runbooks to resolve particular vulnerabilities.

That assists security, giving them an immediate way to resolve a given conflict or misalignment. The time-savings are really incomparable. If you were to identify a vulnerability or a risk, you might have to draw up what the remediation activity should look like. However, what Prisma Cloud does is that it actually presents you with a report on how to remediate. Alternatively, you can have dynamic events that are generated and applied to Slack, for example. Those events can then be sent off to a JIRA backlog or the like. The engineers will then look at what that specific event was, at what the criteria are, and it will tell them how to remediate it without their having to set time aside to explain it. The whole path is really intuitive and almost fully automated, once it's set up.

What needs improvement?

One scenario, in early days, was in trying to get a view on how you could segregate account access for role-based access controls. As a DevSecOps squad, you might have had five or six guys and girls who had access to the overall solution. If you wanted to hand that off to another team, like a software engineering team, or maybe just another cloud engineering team, there were concerns about sharing the whole dashboard, even if it was just read-only. But over the course of time, they've integrated that role-based access control so that users should only be able to view their own accounts and their own workloads, rather than all of the accounts.

Another concern I had was the fact that you couldn't ingest the accounts into Prisma Cloud in an automated sense. You had to manually integrate them or onboard them. They have since driven out new features and capabilities, over the last 12 months, to cater for that. At an organizational level you can now plug that straight into Prisma Cloud, as and when new accounts are provisioned or created. Then, by default, the AWS account or the Azure account will actually be included, so you've got visibility straight away.

The lack of those two features was a limitation as to how far I could actually push it out within the organization for it to be consumed. They've addressed those now, which is really useful. I can't think of anything else that's really causing any shortcomings. It's everything and more at the moment.

For how long have I used the solution?

I've been using Prisma Cloud for about 12 months now.

How was the initial setup?

It's pretty straightforward to run an automated setup, if you want to go down that route. The capabilities are there. But in terms of how we approached it, it was like a plug-and-play into our existing stack. Within AWS, you just have to point Prisma Cloud at your organizational level so that you can inherit all the accounts and then you have the scanning capability and the enforcement capability, all native within Prisma Cloud. There's nothing that we're doing that's over and above, nothing that we would have to automate other than what is actually provided natively within Prisma Cloud. I'm sure if you wanted to do additional automation, for example if you wanted to customize how it reports into Slack or how it reports into Atlassian tools, you could certainly do that, but there's nothing that is that complex, requiring you to do additional automation over and above what it already provides.

What was our ROI?

I haven't gone about calculating what the ROI might be.

But just looking at it from an operational engineering perspective and the benefits that come with it, when it comes to the governance and compliance aspects of running AWS cloud workloads, I now put aside half an hour or an hour on a given day of the week, or on alternate days of the week. I use that time to look at what the client security posture is, generate a number of reports, and hand them off to a number of engineering teams, all a lot quicker than I used to be able to do two or three years ago.

In the past, at times I would have had to run Trusted Advisor from AWS, to look at a particular account, or run a number of reports from Trusted Advisor to look at multiple accounts. And with Trusted Advisor, I could never get a collective view on what the overall posture was of workloads within AWS. With Prisma Cloud, I can just select 30 AWS accounts, generate one report, and I've got everything I need to know, out-of-the-box. It gives me all the different services that might be compliant/non-compliant, have passed/failed, and that have high, medium, or low vulnerabilities. It has saved me hours being able to get those snapshots.

I can also step aside by putting an automated report in place and receive that on a weekly basis. I've also got visibility into when new accounts are provisioned, without my having to keep tabs on whether somebody has just provisioned a new account or not. The hours that are saved with it are really quite high.

What's my experience with pricing, setup cost, and licensing?

As it stands now, I think things have moved forward somewhat. Prisma and the suite of tools by Palo Alto, along with the fact that they have integrated Prisma Cloud Compute as a one-stop shop, have really got it nailed. They understand that not all clients are running container workloads. They bring together point solutions, like what used to be Twistlock, into that whole ecosystem, alongside a cloud security posture management system, and they'll license it so that it's favorable for you as a consumer. You can think about how you can have that presence and not then be dependent on multiple third-parties.

Prisma Cloud was originally intended for cloud security posture management, to determine how the configuration of cloud services aligns with given standards. Through the evolution of the product, they then integrated a capability they call Prisma Cloud Compute. That is derived from point solutions for container and image scanning. It offers those capabilities within a single pane of glass.

Prior to the given scenario with Prisma Cloud, you'd have to either go to Twistlock or Aqua Security for container workloads. If you were going open source, obviously that would be free, but you'd still have to be looking at independent point solutions. And if you were looking at governance and compliance, you'd have to look at the likes of Dome9, Evident.io, and OpenSCAP, in a combination with Trusted Advisor. But the fact that you can just lean into Prisma Cloud and have those capabilities readily available, and have an account manager that is priced based on workloads, makes it a favorable licensing model.

It also makes the whole RFP process a lot more streamlined and simplified. If you've got a purchasing specialist in-house, and then heads-of-functions who might have a vested interest in what the budget allocation is, from either a security perspective or from a DevOps cloud perspective, it's really quite transparent. They work the pricing model in your favor based on how you want to actually integrate with their products. From my exposure so far, they have been really flexible on whatever your current state is, with a view to what the future state might be. There's no hard sell. They "get" the journey that you're on, and they're trying to help you embrace cloud security, governance, and compliance as you go. That works favorably for them as well, because the more clients that they can acquire and onboard, the more they can share the experience, helping both the business and the consumer, overall.

Which other solutions did I evaluate?

Prior to Prisma Cloud, I was looking at Dome9 and Evident.io. Around late 2018 to early 2019, Palo Alto acquired Evident.io and made it part of their Prisma suite of security tools.

At the time, the two that were favorable were Evident.io and Dome9, side-by-side, especially when running multiple AWS accounts in parallel. At the time, it was Dome9 that came out as more cost-effective. But I actually preferred Evident.io. It just happened to be that we were evaluating the Prisma suite and then discovered that Palo Alto had acquired Evident.io. For me that was really useful. As an organization, if we were already exploring the capabilities of Palo Alto and had a commercial presence with them, to then be able to use Prisma Cloud as part of that offering was really good for me as a security specialist in cloud. Prior to that, if as an organization you didn't have a third-party cloud security posture management system for AWS, you were heavily dependent on Trusted Advisor.

What other advice do I have?

My advice is that if you have the opportunity to integrate and utilize Prisma Cloud, you should, because you can't really get another cloud security posture management system like it. There are competitors that are striving to achieve the same types of things. However, when it comes to the governance element, for a head of architecture, a head of compliance, or even at the CSO level, without that holistic view you are potentially flying blind.

Once you've got a capability running in the cloud and the associated demand that comes through from the business to provision accounts for engineers or technical service owners or business users, the given is that not every team or every user that wants to consume the cloud workload has the required skill set to do so. There's a certain element of expertise that you need to securely run cloud workloads, just as is needed for running applications or infrastructure on-premise. However, unless you have an understanding of what you're opening up to (the risk element of running cloud workloads, such as potential attacks or compromise of service), from an organizational perspective it's only a matter of time before something is leaked or something gets compromised, and that can be quite expensive to have to manage. There are a lot of unknowns.

Yes, they do give you capabilities, such as Trusted Advisor, or you might have OpenSCAP or you might be using Forseti for Google Cloud, and there are similar capabilities within Azure. However, the cloud service providers aren't native security vendors. Their workloads are built around infrastructure- or platform-as-a-service. What you have to do is look at how you can complement what they do with security solutions that give you not just the north-south view, but the east-west as well. You shouldn't just be dependent on everything out-of-the-box. I get the fact that a lot of organizations want to be cloud-first and utilize native security capabilities, but sometimes those just don't give you enough. Whether you're looking at business-risk or cyber-risk, for me, Prisma Cloud is definitely out there as a specialist capability to help you mitigate the threat landscape in running cloud workloads.

I've certainly gone from a point where I understood what the risk was in not having something like this, and that's when I was heavily dependent on native tools that are offered up with cloud service providers. 

The first release that came out didn't include the workload management, because what happened, I believe, was that Palo Alto acquired Twistlock. Twistlock was then "framed" into cloud workload management within Prisma Cloud. What that meant was that you had a capability that looks at your container workloads, and that's called Prisma Cloud Compute, which is all available within a single pane of glass, but as a different set of capabilities. That is really useful, especially when you're running container workloads.

In terms of securing the entire development life cycle, if you integrate it within the Jenkins CI/CD pipeline, you can get the level of assurance needed for your golden images or trusted image. And then you can look at how you can enforce certain constraints for images that don't match the level of compliance required. In terms of going from what would be your image repository, when that's consumed you have the capability to look at what runtime scanning looks like from a container perspective. It's not really on par with, or catering to, what other products are looking at in terms of SAST and DAST capabilities. For those, you'd probably go to the market and look at something like Veracode or WhiteHat.
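
Below is a tool-agnostic sketch of the kind of pipeline gate the reviewer describes for golden images: run the scan step, read its JSON report, and fail the build when high-severity findings appear. The scan command and report schema are placeholders; Prisma Cloud Compute ships its own scanner CLI, so take the real invocation and output format from your Console's documentation.

```python
# A tool-agnostic sketch of a CI gate for golden images: run the scan step, read
# its JSON report, and fail the build when high-severity findings appear. The
# scan command and the report schema are placeholders -- take the real scanner
# invocation and output format from your Console's documentation.
import json
import subprocess
import sys

IMAGE = "registry.example.com/app:candidate"   # image built earlier in the pipeline
REPORT_PATH = "scan-report.json"

# Placeholder: replace with the real scan invocation for your scanner of choice,
# configured to write a JSON report to REPORT_PATH.
SCAN_COMMAND = ["echo", "scan placeholder for", IMAGE]


def count_high_findings(report_path: str) -> int:
    """Count high/critical findings in a JSON report (hypothetical schema)."""
    with open(report_path) as fh:
        report = json.load(fh)
    return sum(1 for f in report.get("vulnerabilities", [])
               if f.get("severity", "").lower() in ("high", "critical"))


if __name__ == "__main__":
    subprocess.run(SCAN_COMMAND, check=True)   # a failing scan already fails the pipeline
    try:
        highs = count_high_findings(REPORT_PATH)
    except FileNotFoundError:
        sys.exit("scan report not found; failing the gate")
    if highs > 0:
        sys.exit(f"build blocked: {highs} high/critical finding(s) in {IMAGE}")
    print("image passed the compliance gate")
```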

It all depends on the way an organization works, whether it has a distributed or centralized setup. Is there like a central DevOps or engineering function that is a single entity for consuming cloud-based services, or is there a function within the business that has primarily been building capabilities in the cloud for what would otherwise be infrastructure-as-a-service for internal business units? The difficulty there is the handoff. Do you look at running it as a central function, where the responsibility and the accountability is within the DevOps teams, or is that a function for SecOps to manage and run? The scenario is dependent on what the skill sets are of a given team and what the priorities are of that team. 

Let's say you have a security team that knows its area and handles governance, risk, and compliance, but doesn't have an engineering function. The difficulty there is how do you get the capability integrated into CI/CD pipelines if they don't have an engineering capability? You're then heavily relying on your DevOps teams to build out that capability on behalf of security. That would be a scenario for explaining why DevOps starts integrating with what would otherwise be CyberOps, and you get that DevSecOps cycle. They work closer together, to achieve the end result. 

But in terms of how seamless those CI/CD touchpoints are, it's a matter of having security experts that understand that CI/CD pipeline and where the handoffs are. The heads of function need to ensure that there's a particular level of responsibility and accountability amongst all those teams that are consuming cloud workloads. It's not just a point solution for engineering, cloud engineering, operations, or security. It's a whole collaboration effort amongst all those functions. And that can prove to be quite tricky. But once you've got a process, and the technology leaders understand what the ask is, I think it can work quite well.

When it comes to reducing runtime alerts, it depends on the sensitivity of the alerting that is applicable to the thresholds that you set. You can set a "learning mode" or "conservative mode," depending on what your risk-appetite is. You might want it to be configured in a way that is really sensitive, so that you're alerted to events and get insights into something that's out of character. But in terms of reducing the numbers of alerts, it all depends on how you configure it, based on the sensitivity that you want those alerts to be reporting on.

I would rate Prisma Cloud at eight out of 10. It's primarily down to the fact that I've got a third-party tool that gives me a holistic view of cloud security posture. At the click of a button I can determine what the current status is of our threat landscape, in either AWS or Azure, at a conflict level and at a workload level, especially with regards to Prisma Cloud Compute. It's all available within a single pane of glass. That's effectively what I was after about two or three years ago. The fact that it has now come together with a single provider is why I'd rate it an eight.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Devin Charters - PeerSpot reviewer
Sr. Security Operations Manager at a healthcare company with 5,001-10,000 employees
Real User
Provides feedback directly to teams responsible for AWS or cloud accounts, enabling them to fix issues independently
Pros and Cons
  • "The policies that come prepackaged in the tool have been very valuable to us. They're accurate and they provide good guidance as to why the policy was created, as well as how to remediate anything that violates the policy."
  • "The integration of the Compute function into the cloud monitoring function—because those are two different tools that are being combined together—could use some more work. It still feels a little bit disjointed."

What is our primary use case?

We are using it for monitoring our cloud environment and detecting misconfigurations in our hosted accounts in AWS or Azure.

How has it helped my organization?

As the security operations team, our job is to monitor for misconfigurations and potential incidents in our environment. This solution does a good job of monitoring those for us and of alerting us to misconfigurations before they become potential security incidents or problems.

We've set the tool up so that it provides feedback directly to the teams responsible for their AWS or cloud accounts. It has been really helpful by getting information directly to the teams. They can see what the problem is and they can fix it without us having to go chase them down and tell them that they have a misconfiguration.

The solution secures the entire spectrum of compute options such as hosts and VMs, containers and Containers as a Service. We are not using the container piece as yet, but that is a functionality that we're looking forward to getting to use. Overall, it gives us fantastic visibility into the cloud environment.

Prisma Cloud also provides the data needed to pinpoint root cause and prevent an issue from occurring again. A lot of that has to do with the policies that are built into the solution and the documentation around those policies. The policy will tell the user what the misconfiguration is, as well as give them remediation steps to fix the misconfiguration. It speeds up our remediation efforts. In some cases, when my team, the security team, gets involved, we're not necessarily experts in AWS and wouldn't necessarily know how to remediate the issue that was identified. But because the instructions are included as part of the Prisma Cloud product, we can just cut and paste them and provide them to the team. And when the teams are addressing these directly, they also have access to those remediation instructions and can refer to them to figure out what they need to do to remediate the issue and to speed up remediation on misconfigurations.

In some cases, these capabilities could be saving us hours in remediation work. In other cases, it may not really be of value to the team. For example, if an S3 bucket is public facing, they know how to fix that. But on some of the more complex issues or policies, it might otherwise take a lot more work for somebody to figure out what to do to fix the issue that was identified.

In terms of the solution’s ability to show issues as they are discovered during the build phases, I can only speak to post-deployment because we don't have it integrated earlier in the pipeline. But as far as post-deployment goes, we get notified just about immediately when something comes up that is misconfigured. And when that gets remediated, the alert goes away immediately in the tool. That makes it really easy in a shared platform like this, where we have shared responsibility between the team that's involved and my security operations team. It makes it really easy for us to be able to go into the tool and say, "There was an alert but that alert is now gone and that means that the issue has been resolved," and know we don't have to do any further research.

For the developers, it speeds up their ability to fix things. And for my team, it saves us a ton of time in not having to potentially investigate each one of those misconfigurations to see if it is still a misconfiguration or not, because it's closed out automatically once it has been remediated. On an average day, these abilities in the solution save my team two to three hours, due to the fact that Prisma Cloud is constantly updating the alerts and closing out any alerts that are no longer valid.

What is most valuable?

The policies that come prepackaged in the tool have been very valuable to us. They're accurate and they provide good guidance as to why the policy was created, as well as how to remediate anything that violates the policy. 

The Inventory functionality, enabling us to identify all of the resources deployed into a single account in either AWS or Azure, or into Prisma Cloud as a whole, has been really useful for us.

And the investigate function that allows us to view the connections between different resources in the cloud is also very useful. It allows us to see the relationship traffic between different entities in our cloud environment.

What needs improvement?

The integration of the Compute function into the cloud monitoring function—because those are two different tools that are being combined together—could use some more work. It still feels a little bit disjointed.

Also, the permissions modeling around the tool is improving, but is still a little bit rough. The concept of having roles that certain users have to switch between, rather than have a single login that gives them visibility into all of the different pieces, is a little bit confusing for my users. It can take some time out of our day to try to explain to them what they need to do to get to the information they need.

For how long have I used the solution?

I have been using Palo Alto Prisma Cloud for about a year and a half.

What do I think about the stability of the solution?

We really have had very few issues with the stability. It's been up, it's been working. We've had maybe two or three very minor interruptions of the service and our ability to log in to it. In each case it was half an hour or an hour, at most, during which we were unable to get into it, and then it was resolved. There was usually information about it in the support portal, including the reason for it and the expectation around when they would get it back up.

What do I think about the scalability of the solution?

It seems to scale fine for us. We started out with 10 to 15 accounts in there and we're now up to over 200 accounts and, on our end, seemingly nothing has changed. It's as responsive as it's ever been. We just send off our logs. Everything seems to integrate properly with no complaints on our side.

We have nearly 600 users in the system, and they're broken out into two different levels. There are the full system administrators, like myself and my team and the security team that is responsible for our cloud environment as a whole. We have visibility across the entire environment. And then we have the development teams and they are really limited to accessing their specific accounts that are deployed into Prisma Cloud. They have full control over those accounts.

For our cloud environments, the adoption rate is pretty much 100 percent. A lot of that has to do with that automated deployment we created. A new account gets started and it is automatically added to the tool. All of the monitoring is configured and everything else is set up by default. You can't build a new cloud account in our environment without it getting added in. We have full coverage, and we intend to keep it that way.

How are customer service and technical support?

Tech support has been very responsive. They are quick to respond to tickets and knowledgeable in their responses. Their turnaround time is usually 24 to 48 hours. It's very rare that we would open anything that would be considered a high-priority ticket or incident. Most of the stuff was lower priority and that turnaround was perfectly acceptable to us.

Which solution did I use previously and why did I switch?

This is our first tool of this sort.

How was the initial setup?

The initial setup was really straightforward. We then started using the provided APIs to do some automated integration between our cloud environment and Prisma Cloud. That has worked really well for us and has streamlined our deployment by a good deal. However, what we found was that the APIs were changing as we were doing our deployment. We started down the path we created with some of those integrations, and then there were undocumented changes to the APIs which broke our integrations. We then had to go back and fix those integrations.
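
As a rough sketch of the automated onboarding described here, the snippet below registers a newly provisioned AWS account with the platform over its REST API. The /login and /cloud/aws endpoints and the payload fields are assumptions about the CSPM API, and, as the reviewer notes, the API can change, so always code against the currently documented version.

```python
# A sketch of the automated onboarding the reviewer describes: when the account
# vending pipeline creates a new AWS account, register it with the monitoring
# platform over its REST API so coverage starts immediately. The /login and
# /cloud/aws endpoints, the auth header, and the payload fields are assumptions
# about the CSPM API -- code against the currently documented version.
import requests

API = "https://api.prismacloud.io"
ACCESS_KEY, SECRET_KEY = "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY"   # placeholders


def api_token() -> str:
    """Exchange an access key pair for an API token (assumed /login endpoint)."""
    resp = requests.post(f"{API}/login",
                         json={"username": ACCESS_KEY, "password": SECRET_KEY},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()["token"]


def onboard_aws_account(token: str, account_id: str, role_arn: str, name: str) -> None:
    """Register a newly provisioned AWS account (assumed endpoint and payload shape)."""
    payload = {
        "accountId": account_id,
        "name": name,
        "roleArn": role_arn,   # the cross-account role created during account provisioning
        "enabled": True,
        "groupIds": [],        # account-group assignment, if account groups are used
    }
    resp = requests.post(f"{API}/cloud/aws",
                         headers={"x-redlock-auth": token},
                         json=payload,
                         timeout=60)
    resp.raise_for_status()


# Typically called from the account-vending pipeline right after the account and
# its read-only role are created, e.g.:
# onboard_aws_account(api_token(), "210987654321",
#                     "arn:aws:iam::210987654321:role/PrismaCloudReadOnlyRole",
#                     "payments-dev")
```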

What may have happened were improvements in the API on the backend and those interfered with what we had been doing. It meant that we had to go back and reconfigure that integration to make it work. My understanding from our team that was responsible for that is that the new integration works better than the old integration did. So the changes Palo Alto made were an improvement and made the environment better, but it was something of a surprise to us, without any obvious documentation or heads-up that that was going to change. That caught us a little bit out and broke the integration until we figured out what had changed and fixed it.

There is only a learning curve on the Compute piece, specifically, and understanding how to pivot between that and the rest of the tool, for users who have access to both. There's definitely a learning curve for that because it's not at all obvious when you get into the tool the first time. There is some documentation on that, but we put together our own internal documentation, which we've shared with the teams to give them more step-by-step instructions on what it is that they need to do to get to the information that they're looking for.

The full deployment took us roughly a month, including the initial deployment of rolling everything out, and then the extended deployment of building it to do automated deployments into new environments, so that every new environment gets added automatically.

Our implementation strategy was to pick up all of the accounts that we knew that we had to do manually, while we were working on building out that automation to speed up the onboarding of the new accounts that we were creating.

What about the implementation team?

We did all of that on our own, just following the API documentation that they had provided. We had a technical manager from Palo Alto with whom we were working as we were doing the deployment, but the automated deployment work that we did was all on our own and all done internally.

At this point, we really don't have anybody dedicated to deployment because we've automated that process. That has vastly simplified our deployment. Maintenance-wise, as it is a SaaS platform, we don't really have anybody who works on it on a regular basis. It's really more ad hoc. If something is down, if we try to connect to it and if we can't get into the portal or whatever the case may be, then somebody will open a ticket with support to see what's going on.

What was our ROI?

We have seen ROI although it's a little hard to measure because we didn't have anything like this before.

The biggest areas of ROI have been the uptake by the organization, the ease of deploying the tool (especially since we created that full automation piece), and the visibility and the speed at which somebody can start using the tool. I generally give employees an hour or two of training on the tool and then turn them loose on it, and they're capable of working in it and getting most of the value. There are some things that take more time to get up to speed on, but for the most part, people get up to speed pretty quickly, which is great.

What's my experience with pricing, setup cost, and licensing?

The pricing and the licensing are both very fair.

There aren't any costs in addition to the standard licensing fees, at this time. My understanding is that at the beginning of 2021 they're not necessarily changing the licensing model, but they're changing how some of the new additions to the tool are going to be licensed, and that those would be an additional cost beyond what we're paying now.

The biggest advice I would give in terms of costs is to try to understand what your growth is going to look like. That's really been our biggest struggle: we don't have an idea of what our future growth on the platform will be. We go from X number of licenses to Y number of licenses without a plan for how we're going to get from X to Y, and a lot of that comes as a bit of a surprise. It can make budgeting a real challenge. If an organization knows what it has in place, or can get an idea of what its growth is going to look like, that will really help with the budgeting piece.

Which other solutions did I evaluate?

We had looked at a number of other tools. I can't tell you off the top of my head which ones, but Prisma Cloud was the tool we had always decided we wanted to have. This was the one we felt would give us the best coverage and the best solution, and I feel that we were correct on that.

The big pro with Prisma Cloud was that we felt it gave us better visibility into the environment and into the connections between entities in the cloud. That visualization piece is fantastic in this tool. We felt like that wasn't really there in some of the other tools. 

Some of the other tools had a somewhat better or broader policy base when we were initially looking at them. I have a feeling that at this point, with the rate at which Palo Alto is releasing new policies and putting them into production, it is probably at parity now. But there was a feeling at the time, among some of the other members of the team, that Palo Alto came up short and didn't have as many policies as some of the other tools we were looking at.

What other advice do I have?

I would highly recommend automating the process of deploying it. That has made a huge improvement in the uptake of the tool in our environment and in the ease of integration. There's work involved in getting that done, but if we were trying to do this manually, we would never be able to keep up with the rate at which we've been growing our environment.

The biggest lesson I've learned in using this solution is that we were absolutely right that we needed a tool like this in our environment to keep track of our AWS environment. It has identified a number of misconfigurations and it has allowed us to answer a lot of questions about those misconfigurations that would have taken significantly more time to answer if we were trying to do so using native AWS tools.

The tool has auto-remediation functionality that is attractive to us. It is something we've discussed, but we're not really comfortable using it yet. It would be really useful to be able to auto-remediate security misconfigurations. For example, if somebody were to open something up that should be closed, and that violated one of our policies, we could have Prisma Cloud automatically close it. That would give us better control over the environment without having anybody manually remediate some of the issues.
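
To make the auto-remediation idea concrete, here is a minimal sketch of the kind of action such a remediation would take for the example given (an exposure that violates policy), written with boto3 rather than Prisma Cloud's built-in remediation. In a real workflow the security-group ID and port would come from the alert that fired.

```python
"""Hand-rolled example of a remediation action: close a publicly exposed port."""
import boto3

def close_public_ingress(security_group_id: str, port: int, region: str = "us-east-1") -> None:
    ec2 = boto3.client("ec2", region_name=region)
    # Revoke the specific rule that exposes the port to the whole internet.
    ec2.revoke_security_group_ingress(
        GroupId=security_group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

# Example: close_public_ingress("sg-0123456789abcdef0", 3389)
```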

Prisma Cloud also secures the entire development lifecycle, from build to deploy to run. We could integrate it more closely into our CI/CD pipeline; we just haven't gone down that path at this point. We will be doing that with the Compute functionality, and some of the teams are already doing it. The functionality is there but we're just not taking advantage of it. The reason is that it's not how we initially built the tool out. Some of the teams have an interest in doing it and other teams do not. It's up to the individual teams whether that sort of integration provides them value.

As for the solution's alerts, we have them identified at different severities, but we do not filter them based on that. We use the severities as a way of prioritizing things for the teams, to let them know that if an alert is "high" they need to meet the SLA tied to that, and similarly if it's "medium" or "low." We handle it that way rather than using the filtering, and it does help our teams understand which situations are most critical. We went through all of the policies we have enabled, set our priority levels on them, and categorized them the way we think they need to be categorized. The idea is that the alerts get to the teams at the right priority so that they know what priority to assign to remediating any issues in their environment.
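
A small sketch of the severity-to-SLA routing described here, assuming the alerts have already been pulled from the console (via API or export) into dictionaries; the field names and SLA windows are illustrative only.

```python
"""Group alerts by owning account group and attach a remediation deadline."""
from collections import defaultdict
from datetime import datetime, timedelta, timezone

# Hypothetical SLA windows per severity; tune these to your own policy.
SLA_WINDOWS = {
    "high": timedelta(days=2),
    "medium": timedelta(days=14),
    "low": timedelta(days=30),
}

def assign_slas(alerts):
    now = datetime.now(timezone.utc)
    by_team = defaultdict(list)
    for alert in alerts:
        severity = str(alert.get("severity", "low")).lower()
        deadline = now + SLA_WINDOWS.get(severity, SLA_WINDOWS["low"])
        by_team[alert.get("accountGroup", "unassigned")].append(
            {**alert, "remediateBy": deadline.isoformat()}
        )
    return dict(by_team)

# Example:
# assign_slas([{"severity": "High", "policyName": "RDP open to internet",
#               "accountGroup": "payments-team"}])
```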

I would rate the solution an eight out of 10. The marks against it are that the Compute integration still seems to need a little bit of work, as though it's still working its way through things, and some of the other administrative pieces can be a little difficult. But the visibility is great and I'm pretty happy with everything else.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Sr. Information Security Manager at a healthcare company with 201-500 employees
Real User
Integrates into our CI/CD pipeline giving devs near real-time alerting on whether a configuration is good or bad
Pros and Cons
  • "It scans our containers in real time. Also, as they're built, it's looking into the container repository where the images are built, telling us ahead of time, "You have vulnerabilities here, and you should update this code before you deploy." And once it's deployed, it's scanning for vulnerabilities that are in production as the container is running."
  • "The challenge that Palo Alto and Prisma have is that, at times, the instructions in an event are a little bit dated and they're not usable. That doesn't apply to all the instructions, but there are times where, for example, the Microsoft or the Amazon side has made some changes and Palo Alto or Prisma was not aware of them. So as we try to remediate an alert in such a case, the instructions absolutely do not work. Then we open up a ticket and they'll reply, "Oh yeah, the API for so-and-so vendor changed and we'll have to work with them on that." That area could be done a little better."

What is our primary use case?

Our use case for the solution is monitoring our cloud configurations for security. That use case, itself, is huge. We use the tool to monitor security configuration of our AWS and Azure clouds. Security configurations can include storage, networking, IAM, and monitoring of malicious traffic that it detects.

We have about 50 users and most of them use it to review their own resources.

How has it helped my organization?

If, for a certain environment, someone configures a connection to the internet, like Windows RDP, which is not allowed in our environment, we immediately get an alert that says, "Hey, there's been a configuration of Windows Remote Desktop Protocol, and it's connected directly to the internet." Because that violates our policy, and it's also not something we desire, we will immediately reach out to have that connection taken down.
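
For illustration only, the condition behind this kind of alert can also be expressed directly with boto3; Prisma Cloud ships its own policy for this, so the sketch below just shows the underlying check (a security-group rule exposing TCP 3389 to 0.0.0.0/0).

```python
"""Stand-alone check for RDP exposed directly to the internet."""
import boto3

def find_public_rdp_groups(region: str = "us-east-1") -> list:
    ec2 = boto3.client("ec2", region_name=region)
    offenders = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for group in page["SecurityGroups"]:
            for perm in group.get("IpPermissions", []):
                proto_ok = perm.get("IpProtocol") in ("tcp", "-1")
                from_port = perm.get("FromPort")
                to_port = perm.get("ToPort")
                # Does this rule cover port 3389 (or all ports)?
                covers_rdp = (from_port is None or
                              from_port <= 3389 <= (to_port or from_port))
                # Is it open to the whole internet?
                open_world = any(r.get("CidrIp") == "0.0.0.0/0"
                                 for r in perm.get("IpRanges", []))
                if proto_ok and covers_rdp and open_world:
                    offenders.append(group["GroupId"])
    return offenders

# Example: print(find_public_rdp_groups())
```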

We're also integrating it into our CI/CD pipeline. There are parts we've integrated already, but we haven't done so completely. For example, we've integrated container scanning into the CI/CD. When they build a container in the pipeline, it's automatically scanned and the results come back to our console where we're monitoring them. The beauty of it is that we give our developers access to this information. That way, as they build, they get near real-time alerting that says, "This configuration is good. This configuration is bad." We have found that very helpful because it provides instant feedback to the development team. Instead of doing a review later on where they find out, "Oh, this is not good," they already know: "Oh, we should not configure it this way, let's configure it more securely another way." They know because the alerts are in near real-time.
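
A sketch of what such a pipeline step can look like: after the image is built, it is scanned and the build fails on policy violations so developers get the feedback immediately. It assumes the Prisma Cloud Compute CLI (twistcli) is available on the build agent; exact flag names can vary by console version, so treat this as a template rather than a reference.

```python
"""Wrap an image scan in a CI step and fail the build on policy violations."""
import os
import subprocess
import sys

def scan_image(image_tag: str) -> None:
    cmd = [
        "twistcli", "images", "scan",
        "--address", os.environ["COMPUTE_CONSOLE_URL"],
        "--user", os.environ["COMPUTE_USER"],
        "--password", os.environ["COMPUTE_PASSWORD"],
        "--details",          # print individual vulnerability/compliance findings
        image_tag,
    ]
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # The CLI exits non-zero when the image fails the configured thresholds.
        print(f"Image {image_tag} failed the scan; see findings above.")
        sys.exit(result.returncode)

if __name__ == "__main__":
    scan_image(sys.argv[1])   # e.g. registry.example.com/app:build-123
```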

That's part of our strategy. We want to bring this information as close to the DevOps team as possible. That's where we feel the greatest benefit can be achieved. The near real-time feedback on what they're doing means they can correct it there, versus several days down the road when they've already forgotten what they did.

And where we have integrated it into our CI/CD pipeline, I am able to view vulnerabilities through our different stages of development.

It has enhanced collaboration between our DevOps and SecOps teams by being very transparent. Whatever we see, we want them to see. That's our strategy. Whatever we in security know, we want them to know, because it's a collaborative effort. We all need each other to get things fixed. If they're configuring something and it comes to us, we want them to see it. And our expectation is that, hopefully, they've fixed it by the time we contact them. Once they have fixed it, the alert goes away. Hopefully, it means that everyone has less to do.

We also use the solution's ability to filter alerts by levels of security. Within our cloud, we have accounts that are managed and certain groups are responsible for them. We're able to direct the alerting and the reporting to the people who are managing those groups or those cloud accounts. The ability to filter alerts by levels of security definitely helps our team understand which situations are the most critical. Alerts are rated high, medium, and low. Of course we go after the "highs" and tell them to fix them immediately, or as close to immediately as possible. We send the "mediums" and "lows" to tickets. In some instances, they've already fixed them because they've seen the issue and know we'll be knocking on the door. They realize, "Oh, we need to fix this or else we're going to get a ticket." They want to do it the right way and this gives them the information they need to make the proper configuration.

Prisma Cloud also provides the data needed to pinpoint root cause and prevent an issue from occurring again. When there's an alert, the event tells you how to fix the issue. It will say, "Go to this, click on this, do this, do that." It tells you why you got the alert and how to fix it.

In addition, the solution's ability to show issues as they are discovered during the build phases is really good. We have different environments. Our lower environments, dev, QA, and integration, don't have any data, and then we have the upper environment, which actually has production data. There's a gradual progression as teams go from the lower environments and, once they've figured out what to do, into the upper environment. We see the alerts come in and we see how they're configuring things. It gives us good feedback through the whole lifecycle as they're developing a product, and we see that in near real-time through the whole development cycle.

I don't know if the solution reduces runtime alerts, but its monitoring helps us to be more aware of vulnerabilities that come in the stack. Attackers may be using new vulnerabilities and Prisma Cloud has increased the visibility of any new runtime alerts.

It does reduce alert investigation times because of the information the alerts give us. When we get an alert, it tells us the source, where it comes from. We're able to identify things because it uses a protocol called NetFlow. It tracks the network traffic for us and says, "This alert was generated because these attackers are generating the traffic," or "It's coming internally from these devices," and it names them. For example, we run vulnerability scanning weekly in our environment to scan for weaknesses and report on them. At times, a vulnerability scanner may trigger an alert in Prisma. Prisma will say, "Something is scanning your environment." We're able to use that information to identify the resources that have been scanning our environment, quickly recognize them as our vulnerability scanner, and dismiss the alert. Prisma also provides the name or ID of a particular service or user that may have triggered an alert. We are able to reach out to that individual to ask, "Hey, is this you?" without having to look into tons of logs to identify who it was.

Because Prisma gives us the information and we don't have to do individual research, it easily saves us at least one to two hours per day, and probably more.

What is most valuable?

One of the most valuable features is monitoring of configurations for our cloud, because cloud configurations can be done in hundreds of ways. We use this tool to ensure that those configurations do not present a security risk by providing overly excessive rights or that they punch a hole that we're not aware of into the internet.

One of the strengths of this tool is that we, as a security team, are not configuring everything. We have a decentralized DevOps model, so we depend on individual groups to configure their environments for their development and product needs. That means we're not aware of exactly what they're doing because we're not there all the time. However, we are alerted to things such as when they open up a connection to the internet that's bringing traffic in. We can then ask questions, like, "Why do you need that? Did you secure it properly?" We have found it highly beneficial for monitoring those configurations across teams and our DevOps environment.

We're not only using the configuration, but also the containers, the container security, and the serverless function. Prisma will look to see that a configuration is done in a particular, secure pattern. When it's not done in that particular pattern, it gives us an alert that is either high, medium, or low. Based on those alerts, we then contact the owners of those environments and work with them on remediating the alerts. We also advise them on their weaker-than-desirable configuration and they fix it. We have people who are monitoring this on a regular basis and who reach out to the different DevOps groups.

It scans our containers in real time. Also, as they're built, it's looking into the container repository where the images are built, telling us ahead of time, "You have vulnerabilities here, and you should update this code before you deploy." And once it's deployed, it's scanning for vulnerabilities that are in production as the container is running. And we're also moving into serverless, where it runs off of codes, like Azure Functions and AWS Lambdas, which is a strip line of code. We're using Prisma for monitoring that too, making sure that the serverless is also configured correctly and that we don't have commands and functions in there that are overly permissive.

What needs improvement?

The challenge that Palo Alto and Prisma have is that, at times, the instructions in an event are a little bit dated and they're not usable. That doesn't apply to all the instructions, but there are times where, for example, the Microsoft or the Amazon side has made some changes and Palo Alto or Prisma was not aware of them. So as we try to remediate an alert in such a case, the instructions absolutely do not work. Then we open up a ticket and they'll reply, "Oh yeah, the API for so-and-so vendor changed and we'll have to work with them on that." That area could be done a little better.

One additional feature I'd like to see is more of a focus on API security. API security is an area that is definitely growing, because almost every web application has tons of APIs connecting to other web applications with tons of APIs. That's a huge area and I'd love to see a little bit more growth in that area. For example, when it comes to the monitoring of APIs within the clouded environment, who has access to the APIs? How old are the APIs' keys? How often are those APIs accessed? That would be good to know because they could be APIs that are never really accessed and maybe we should get rid of them. Also, what roles are attached to those APIs? And where are they connected to which resources? An audit and inventory of the use of APIs would be helpful.
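
As one concrete flavor of the audit being asked for here, the boto3 sketch below inventories AWS IAM access keys with their age and last-use time. It sits outside Prisma Cloud and is only meant to illustrate the kind of report the reviewer has in mind.

```python
"""Inventory AWS IAM access keys: owner, age, last use, and status."""
from datetime import datetime, timezone
import boto3

def audit_access_keys() -> list:
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    report = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                last_used = iam.get_access_key_last_used(
                    AccessKeyId=key["AccessKeyId"]
                )["AccessKeyLastUsed"].get("LastUsedDate")
                report.append({
                    "user": user["UserName"],
                    "keyId": key["AccessKeyId"],
                    "ageDays": (now - key["CreateDate"]).days,
                    "lastUsed": last_used.isoformat() if last_used else "never",
                    "status": key["Status"],
                })
    return report

# Keys that are old and never used are good candidates for removal.
```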

For how long have I used the solution?

I've been using Palo Alto Prisma for about a year and a half.

What do I think about the stability of the solution?

It's a stable solution.

What do I think about the scalability of the solution?

The scalability is "average".

How are customer service and technical support?

Palo Alto's technical support for this solution is okay.

Which solution did I use previously and why did I switch?

We did not switch from a different solution. It was the same product, called RedLock, which was then purchased by Palo Alto.

How was the initial setup?

The initial setup took a day or two and was fairly straightforward.

As for our implementation strategy, it was to:

  • add in the cloud accounts
  • set up alerting
  • fine tune the alerts
  • create process to respond to alerts
  • edit the policies.

In terms of maintenance, one FTE would be preferable, but we do not have that.

What about the implementation team?

We implemented it ourselves, with support from Prisma.

What's my experience with pricing, setup cost, and licensing?

One thing we're very pleased about is how the licensing model for Prisma is based on work resources. You buy a certain amount of work resources and then, as they enable new capabilities within Prisma, it just takes those work resource units and applies them to new features. This enables us to test and use the new features without having to go back and ask for and procure a whole new product, which could require going through weeks, and maybe months, of a procurement process.

For example, when they brought in containers, we were able to utilize containers because it goes against our current allocation of work units. We were immediately able to do piloting on that. We're very appreciative of that kind of model. Traditionally, other models mean that they come out with a new product and we have to go through procurement and ask, "Can I have this?" You install it, or you put in the key, you activate it, and then you go through a whole process again. But this way, with Prisma, we're able to quickly assess the new capabilities and see if we want to use them or not. For containers, for example, we could just say, "Hey, this is not something we want to spend our work units on." And you just don't add anything to the containers. That's it.

What other advice do I have?

The biggest lesson I have learned while using the solution is that you need to tune it well.

The Prisma tool offers a lot of functionality and a lot of configuration. It's a very powerful tool with a lot of features. For people who want to use this product, I would say it's definitely a good product to use. But please be aware also, that because it's so feature rich, to do it right and to use all the functionality, you need somebody with a dedicated amount of time to manage it. It's not complicated, but it will certainly take time for dedicated resources to fully utilize all that Prisma has to offer. Ideally, you should be prepared to assign someone as an SME to learn it and have that person teach others on the team.

I would rate Prisma Cloud at nine out of 10, compared to what's out there.

Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
PeerSpot user
Governance Test and Compliance Officer at Thales
Real User
We are able to filter alerts by security level so our teams understand which situations are critical
Pros and Cons
  • "I was looking for a vulnerability scanner and I was looking for one place in which I could find everything. This tool not only does vulnerability scanning, but it also gives me an asset management tool."
  • "We would like it to have more features from the risk and compliance perspectives."

What is our primary use case?

I was looking for one tool which, as a WAF, could provide me with information about my applications, and with features that let me oversee things.

We use the solution's ability to filter alerts by levels of security and it helps our teams understand which situations are the most critical. Based on the priorities that I get for my product, I can filter the notices the team needs to work on, to those that require immediate attention. That means it's easier for me to categorize and understand things exactly, on a single dashboard. I can see, at one point in time, that these are my 20 applications that are running. Out of them, I can see, for example, the five major vulnerabilities that I have — and it shows my risk tolerance — so I know that these five are above my risk tolerance. I know these need immediate attention and I can assign them to the team to be worked on immediately.

How has it helped my organization?

Instead of going for multiple tools, this tool has helped me to have one platform where I can have all the features and information I'm looking for.

The tool is working on the principles of governance, risk, and compliance as well. It even helps me in application-level firewall security. It's not just a single tool. It has helped me find out details about multiple things.

The integration with user tools is pretty easy; it's user-friendly.

In terms of a reduction in alerts, it has helped me avoid putting unnecessary time into things that can be figured out at a glance. I would estimate the reduction in alerts at about 40 percent.

What is most valuable?

I was looking for a vulnerability scanner and I was looking for one place in which I could find everything. This tool not only does vulnerability scanning, but it also gives me an asset management tool.

It has been good in my test environment when it comes to scanning my infrastructure.

What needs improvement?

We would like it to have more features from the risk and compliance perspectives.

We did want the governance side of it, but the licensing costs for that are very high. As a result, I have to integrate this solution with a couple of additional tools. For example, suppose I wish to assign something to an organization or to another person. To do that I have to integrate it with something like Jira or Confluence, where I can ask them to provide the pieces of information. If the licensing costs were a little lower, I would have been able to assign it then and there. As it is, I need to assign things from one platform to another, the one where the engineering team is working, and I have to keep checking between the two platforms to see whether or not something has been done.
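
For illustration, a hand-off like the one described can be scripted against the Jira REST API so findings are assigned where the engineering team already works. The site URL, project key, and credentials below are placeholders.

```python
"""Push a finding into Jira so it can be assigned to the engineering team."""
import os
import requests

JIRA_URL = os.environ["JIRA_BASE_URL"]        # e.g. https://yourcompany.atlassian.net
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

def create_finding_ticket(summary: str, description: str, project_key: str = "SEC") -> str:
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]                  # e.g. "SEC-123"

# Example:
# create_finding_ticket("Public storage bucket detected",
#                       "Bucket xyz violates the public-access policy ...")
```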

For how long have I used the solution?

We have been using Prisma Cloud by Palo Alto Networks for five months, testing it and evaluating it during that time. We are planning to purchase it.

I have been evaluating this product from the point of view of DevOps. I have not been evaluating it from the security operations point of view.

Prisma Cloud actually has two solutions. One is a cloud-based solution and the other is their on-premise solution. I have had a look at and tested both of these tools.

What do I think about the stability of the solution?

It's a stable product.

What do I think about the scalability of the solution?

It's scalable. We discussed that with them. We also discussed the scenario where I want to move from one cloud environment to another, or if I make some other changes. How flexible is the tool as far as working with different cloud environments goes? And it is perfectly fine in that regard.

If we deploy it, I will be using it quite extensively for my day-to-day vulnerability scans.

How are customer service and technical support?

I would rate their technical support at nine out of 10. They have been very supportive. Every time I have called them they have been there for me.

Which solution did I use previously and why did I switch?

I was using multiple tools from here and there: one tool for vulnerability scans, one for risk management. But this has provided an answer not just for one requirement but for multiple requirements that I have.

How was the initial setup?

The initial setup was easy. I got help from their technical department and the product is more or less plug-and-play. If you meet the specifications required for the cloud, and your products are running on those supported configurations, then it becomes quite easy. You just have to install it and it's good to go in your infrastructure.

Since I did it for my development center only, I just had to run one installer and then the agents were installed automatically by a script. For the whole environment, it could not have taken more than a day or two.

What's my experience with pricing, setup cost, and licensing?

Security tools are not cheap. This one is a little heavy on the budget, but so are all the other security tools I have evaluated.

There are no additional costs to the standard licensing fees for Prisma Cloud.

Which other solutions did I evaluate?

I looked at Trend Micro Cloud One Workload Security. Both it and Palo Alto Prisma Cloud are good for container-level security and scanning. But the financial part of it and budgeting play an important role.

With Prisma, it's not just one feature. It has also provided me with solutions for a couple more of my requirements. That was not the case with Trend Micro. In addition, Prisma Cloud was easy for me to figure out. The only con I see in Prisma Cloud is that because of its cost, I have to use multiple tools.

What other advice do I have?

It's a good tool. I would tell anybody to give it a shot. It's easy, it's user-friendly; it's like a plug-and-play tool.

I am a single point of contact for this solution right now. I'm working on it with my entire management to review things, and I have to coordinate because of the multiple platforms they have. Roles have been assigned at different levels: there is a consultant's role, a reviewer's role, and an implementer's role. The implementer is the one who is supposed to be working with them.

Root cause analysis needs to be done at my own level. The solution does inform me that a predicted vulnerability exists and which asset it could be affecting, but the intelligence has to be provided by the security consultant.

If something becomes visible during the build phase, we already have a pretty good area where we can change the product so that it does not impact the production environment.

The solution provides an integrated approach across the full lifecycle to provide visibility and security automation and, although we have not started using that part of it yet, it will definitely enable us to take a preventive approach to cloud security when we do use it.

Overall, it provides all the pieces of information that you require, in one place and time. I think it's going to be good to work with them.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Senior Manager at a computer software company with 501-1,000 employees
Real User
Flags cloud compliance issues for us
Pros and Cons
  • "One of the most valuable features is the compliance of RedLock, which we are using for any issues with security. It flags them and that's the primary objective of that feature."
  • "The feedback that we have given to the Palo Alto team is that the UI can be improved. When you press the "back" button on your browser from the Investigate tab, the query that you're working on just disappears. It won't keep the query on the "back" button."

What is most valuable?

Prisma Cloud has multiple components. We are already using RedLock, and it has Twistlock included in it. It also has PureSec, which should be pretty useful for our cloud security.

One of the most valuable features is the compliance of RedLock, which we are using for any issues with security. It flags them and that's the primary objective of that feature. We are still working on implementing the other features that were integrated into Prisma Cloud from Twistlock and PureSec.

What needs improvement?

The feedback that we have given to the Palo Alto Networks team is that the UI can be improved. When you press the "back" button on your browser from the Investigate tab, the query that you're working on just disappears. It won't keep the query on the "back" button.

Also, the way the policies are structured and the alerts are created could be better. It requires a lot of manual work to search through the policies when creating an alert.

These are minute nuances. They are not major issues and are more about convenience than they are product bugs.

For how long have I used the solution?

We are still working with the Palo Alto Networks representatives to implement our rollout.

What do I think about the stability of the solution?

Because we are already using Palo Alto Networks firewalls, we expect Prisma Cloud to be pretty stable.

How was the initial setup?

It's a team effort and multiple people will be involved.

What other advice do I have?

It's definitely a good product. If a company is heavily into the public cloud environment, they must look to use a product like this to gain good visibility into their security. It will also help with the compliance of how they are doing things in the cloud. It's definitely a good, must-have tool.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user1272177 - PeerSpot reviewer
Manager - cybersecurity at a comms service provider with 10,001+ employees
Real User
Sophisticated, easier, more user-friendly, and has a flexible deployment
Pros and Cons
  • "I would say Twistlock is a fairly sophisticated tool."
  • "In terms of improvement, there are some small things like hardening and making sure the Linux resources are deployed well but that's more at an operational level."

What is our primary use case?

In terms of our use cases, we are a telecom firm and we work a lot with telecom firms around the world, so we have a lot of solutions other than Twistlock. We have applications and consumer-based solutions that we run on a daily basis, as well as heavily regulated processes. We found it's better to move our core applications, rather than our user systems, onto containers because they're quick, effective, easy to deploy, and easy to maintain. But because of the sanctions we face, security is heavily regulated and a very core part of the entire environment, so we had to look for a solution that would help automate the security part, because it was almost impossible to do that manually.

What needs improvement?

In terms of improvement, there are some small things, like hardening and making sure the Linux resources are deployed well, but that's more at an operational level. Day-to-day we do find a lot of issues, and having a tool to help us with them is what we want, because handling them manually is not feasible for us. Other than that, we are not really looking for any other add-ons or plug-ins, because that was our core problem.

For how long have I used the solution?

We have been using Twistlock for just under five months. 

What do I think about the scalability of the solution?

We deployed it on-prem, on our own infrastructure. How we scale it is primarily in our hands, because we could run it across all of our data centers and multiply the licenses; it was fairly easy to acquire. We have a working relationship with Palo Alto, and we have not faced any direct issues with scalability so far because we are running it on our own premises.

How are customer service and technical support?

We have people from Palo Alto. We have not had any major issues that required us to reach out, but there are times we create service tickets that go to Palo Alto, because Twistlock builds on a lot of open-source components, so sometimes there are glitches there and we reach out to them. More often than not it's just that. We've never had any major or drastic issues that we needed to escalate to L1 and L2.

How was the initial setup?

The initial setup was very complex. We have more than 10,000 servers on-premises, and that excludes what we have off-prem and in our cloud deployments.

What about the implementation team?

We used an integrator, because we got the product from Palo Alto. We also have a network firewall from them.

What other advice do I have?

I would say Twistlock is a fairly sophisticated tool. It's not the most user-friendly, so if somebody wants to use it for their deployment, their firm needs to have the right people on the team who know how to use it, because it's not a plug-and-play kind of software like Aqua Security, which is a little more plug and play. I think Aqua is easier, more user-friendly, and has a more flexible kind of deployment. But if you can configure it well, Twistlock is a lot better than Aqua Security at providing you real-time statistics.

I would rate it an eight out of ten. 

I recommend a two-month POC. It's fairly new, but so far it's been pretty good.

Which deployment model are you using for this solution?

On-premises
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user1206177 - PeerSpot reviewer
Sr. Manager IT Operations at a tech vendor with 5,001-10,000 employees
Real User
Provides cross-cloud security but it isn't so user-friendly
Pros and Cons
  • "The product is quite good for providing multi-clouds or cross-cloud security from a single-pane -of-glass."
  • "Palo Alto should work on ease-of-use and the user-friendliness to be more competitive with some competing products."

What is our primary use case?

We use cloud solutions generally for client demos of products.  

How has it helped my organization?

It has not been implemented, but Prisma or Dome9 will provide us with better cloud security and less administration time for our cloud instances. 

What is most valuable?

RedLock is quite good for providing multi-cloud or cross-cloud security.

What needs improvement?

In our testing, we have found the Check Point product CloudGuard Dome9 to be more user-friendly at this point. Palo Alto Prisma's interface was not as user-friendly, and Palo Alto should work on this part of its solution to be more competitive on ease of use. I do not feel Palo Alto is short on any features, but if we compare the two side by side, I think the user interface for Palo Alto needs to be improved to make it at least as good as Dome9.

For how long have I used the solution?

We just started evaluating it, so we have just been using it for a little more than a month doing some evaluations and proof of concept.  

What do I think about the stability of the solution?

The product is stable.  

What do I think about the scalability of the solution?

We have not tested scalability extensively to this point because our cloud accounts are not being used so much that it warrants scaling it up. We only dedicated a small amount of resources for the product at this point while exploring it.  

There are up to 10 users of RedLock in our company, and never more than 10 at this point.

How are customer service and technical support?

We worked with both the Palo Alto and Check Point technical support teams during our evaluations. So we were connected to the technical team at Palo Alto. Their technical support was excellent. The presales team was very proactive and helped us in every aspect we needed to resolve our queries during implementation and they provided knowledge to our team internally. The technical support from both vendors was very good. This was not a problem.  

Which solution did I use previously and why did I switch?

We have been using the native security solutions from each of the clouds or cloud service partners we deal with, but they have limited functionality. That is why we began to look into other options. 

How was the initial setup?

The initial setup was not too easy and yet not too complex; it was pretty good. The deployment took a couple of days and required only one person. For maintenance, it requires a team of engineers with different roles and responsibilities: we have someone from the network team, someone from the infosec [information security] team, someone from the cloud team, and someone from our Unix team. One person from each team has been assigned roles and responsibilities for exploring Prisma. The team monitors the system on a day-to-day basis, checks for threats, and then, according to what they find, decides on any necessary course of action.

What about the implementation team?

Our company did the deployment ourselves with an internal team. We did not use an integrator or consultant.  

Which other solutions did I evaluate?

We did not use any specific or dedicated cloud security product before evaluating the options we chose to review. Currently, we do not have any specific product that we purchased specifically for cloud security. Recently we came across Palo Alto Prisma Cloud Security and Check Point Cloud Guard Dome9 products and we chose to evaluate both and engage in POCs.  

We wanted to find some solution where we could see all our cloud accounts and manage them in one single pane of glass. When we used the native solutions that were in place through our cloud providers, we had to manage several different clouds by going to each individually. These dedicated products have everything for cloud security management in one place and we can monitor all our cloud activity from there. There is also the benefit that the functionality of dedicated products is more robust.  

Currently, we have stopped using RedLock. We are focusing on exploring Dome9 by Check Point. We have found it very easy to use and the interface is quite user-friendly.  

What other advice do I have?

The advice I would give to someone seriously considering these cloud security products is to be careful with the procedures you use while testing them. During the setup phase, there were not many challenges. But while integrating the cloud accounts, I would recommend that users initially provide only read-only access, not read-write access, just as a precaution. Users should also be cautious not to expose cloud data to vendors, whether that is Dome9, Palo Alto, or whoever the vendor will be.

On a scale from one to ten, where one is the worst and ten is the best, I would rate the Palo Alto product overall as a seven out of ten. I would currently rate Dome9 an eight out of ten. Palo Alto's rating could improve with enhancements to ease of use.

Which deployment model are you using for this solution?

Hybrid Cloud
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
PeerSpot user
DevOps Solutions Lead at a tech services company with 501-1,000 employees
MSP
Good runtime mechanism, and very good network mapping
Pros and Cons
  • "The runtime mechanism on the solution is very useful. It's got very good network mapping between containers. If you have more than one container, you can create a content data link between them."
  • "The innovation side of the solution could be more efficient and more detailed."

What is our primary use case?

We primarily use the solution in a cluster scenario, for runtime management of containers.

How has it helped my organization?

We have three containers in our organization. Two were legacy containers and we added a third. The solution helped with our DevOps pipeline and allowed us to inspect and analyze the product.

What is most valuable?

The runtime mechanism on the solution is very useful. It's got very good network mapping between containers. If you have more than one container, you can create a content data link between them.

What needs improvement?

I'm not sure about areas for improvement on the solution, however, I do think the compliance and dashboarding could be better.

The innovation side of the solution could be more efficient and more detailed.

For how long have I used the solution?

I've been using the solution for two months.

How was the initial setup?

The initial setup was reasonably complex. The container security makes it a complex implementation. If I were to rate the complexity out of ten I'd give it a seven.

What other advice do I have?

We use the cloud deployment model.

I'd rate the solution nine out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
CTO at Aymira Healthcare Technologies, LLC
Real User
Ensures compliance and keeps us free of bad actors
Pros and Cons
  • "The most valuable feature is that the rule set is managed and that it can be run on a regularly scheduled basis."
  • "The pricing for the solution needs improvement."

What is our primary use case?

The primary use case for this solution was to run the rule set for the CIS 20 framework and HIPAA compliance.

How has it helped my organization?

This solution will ensure that we've got a more secure environment, mitigating any sort of bad actors coming in and either destroying or disrupting the environment.

What is most valuable?

The most valuable feature is that the rule set is managed and that it can be run on a regularly scheduled basis.

What needs improvement?

The pricing for the solution needs improvement.

What do I think about the stability of the solution?

The stability of this solution is very good. Very favorable.

What do I think about the scalability of the solution?

We have four people involved with this solution. They are administrators and DevOps resources.

The solution is currently used across our entire environment. I bought licenses for one hundred hosts and I only have twenty-eight. So, there will be no incremental cost for me until I exceed one hundred hosts, which is a long way away.

How are customer service and technical support?

Technical support is very good. They have been very responsive to various requests in the past.

Which solution did I use previously and why did I switch?

We did not use another solution prior to this one.

How was the initial setup?

The initial setup was very straightforward. RedLock was very helpful in setting up the environment. The deployment took approximately two hours.

Two people are required for deployment and maintenance.

What about the implementation team?

We worked with a reseller. They are Rocus Networks out of Charlotte, North Carolina. We had a very good experience with them.

What's my experience with pricing, setup cost, and licensing?

Our licensing fees are $18,000 USD per year. There are no costs in addition to the standard licensing fees.

Which other solutions did I evaluate?

We evaluated the Dome9 solution in addition to this one. RedLock was selected based on Rocus' recommendation.

What other advice do I have?

This is a product for which I had a very specific need, and my security partner recommended it. This product is one of the leaders. I would, however, suggest that you do a POC before implementing this solution.

It has very good support in all of the cloud environments. I think that they offer a lot of functionality in supporting that space. I don't think that this product is perfect, but it fits my needs perfectly.

I would rate this solution a nine out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Real User
Has provided us with insight into the dynamic topology of our containers

What is our primary use case?

Our primary use case for this solution is for container security and monitoring.

How has it helped my organization?

It has helped us understand the dynamic topology of our containers and manage security through the application of policies that our pipelines apply straight from Git.

What is most valuable?

The most valuable feature is the automated forensics.

What needs improvement?

I would like to see the inclusion of automated counter-attack, although this is probably illegal.

For how long have I used the solution?

More than one year.
Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Cloud Architect, Oracle ACE, Oracle DBA at Pythian
MSP
Helps secure our client's Linux workloads on any infrastructure, with end-to-end encryption
Pros and Cons
  • "The dynamic workload identity creation, attestation, and assignment is the best feature. In addition, the application dependency map across heterogeneous environments for compliance is a striking feature."
  • "More documentation with real-world use cases would be helpful."

What is our primary use case?

Our client needed a solution which would be a true implementation of the concept "Trust, but verify," and Aporeto fulfills that notion as it decouples security from the network and infrastructure. It secures microservices in a nifty and seamless way.

How has it helped my organization?

Aporeto has accelerated our client's expansion to the cloud. With Aporeto, they have secured their Linux workloads on any infrastructure with end-to-end encryption and have a path for modernizing with a security layer that is future-proofed.

What is most valuable?

The dynamic workload identity creation, attestation, and assignment is the best feature. In addition, the application dependency map across heterogeneous environments for compliance is a striking feature.

It integrates quite well with the AWS products as it uniquely fingerprints each workload. Aporeto is designed to combine metadata from the orchestration layer, the container, the operating system, and the AWS instance identity document. By combining these information sources, along with dynamic attributes such as image scanner inputs, Aporeto is designed to create a strong cryptographic identity for each workload. It authenticates and authorizes all network communications within a virtual private cloud (VPC), across VPCs independent of their region or availability zone, and across cloud environments.

What needs improvement?

More documentation with real-world use cases would be helpful. Another useful feature would be greater transparency and visibility into the security checks being implemented.

What do I think about the stability of the solution?

In AWS, it scales with the cloud and we have found no issues at all with the stability.

What do I think about the scalability of the solution?

Aporeto is now available in AWS where it efficiently deploys, manages, and secures applications at scale on various platforms including Kubernetes, Docker, Linux, and Mesos, among others.

What's my experience with pricing, setup cost, and licensing?

The purchasing process was easy and quick. It is a very economical solution.

We chose to procure this solution via AWS Marketplace because that's where we get all other solutions and to make sure it's supported by AWS.

What other advice do I have?

I would rate it as a nine out of ten, due to its cloud-facing features which fit in nicely with the whole cloud ecosystem.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Buyer's Guide
Download our free Prisma Cloud by Palo Alto Networks Report and get advice and tips from experienced pros sharing their opinions.
Updated: November 2022