What is our primary use case?
PowerScale (formerly Isilon) is effectively a giant NAS. We have two clusters, one for production workloads and one for Disaster Recovery and Business Continuity purposes. These clusters are installed in separate data-centers, physically located in two different places in the country. Both clusters were deployed at the same time when we first adopted the solution, and we have been growing them at an almost equal rate ever since.
Our production cluster is attached to our High-Performance Computing (HPC) environment, and this was the primary use case in the beginning: to provide scale-out storage for the Bioinformatics team, who do omics analysis on plant and seafood organisms that we do scientific research on. As time went on, we expanded our use of the platform for other user groups in the organization.
Eventually, PowerScale became the de facto solution for anything related to unstructured data or file-based storage. Today, we also use the platform to host users’ home directories, large media files, and any data that doesn't fit neatly anywhere else, such as in a SharePoint library or a structured database. Nowadays, almost everyone in the organization is a direct or indirect user of the platform. The bulk of the storage, however, continues to be consumed by our HPC environment, and Bioinformaticians remain our largest users. But we also have data scientists, system modellers, chemists, and machine-learning engineers, to name a few.
Our company has multiple sites throughout the country and overseas, with the two primary data-centers supporting our Head Office and most of the smaller sites. Some of these sites, however, have a need for local storage, so our DR/BCP PowerScale cluster receives replicated data from both our production cluster as well as these other file servers.
How has it helped my organization?
Before PowerScale, we used a different EMC product; I believe it was a VNX 5000 series, primarily a block storage array with some NAS functionality. We did not have an HPC environment; however, we did have a group of servers that performed approximately the same function.
Back in those days, raw storage had to be partitioned into multiple LUNs, and presented as several independent block devices because of size limitations of the storage array. When one of these devices started to run out of space, it was extremely cumbersome and time-consuming to shift data away from it, which slowed down our science. We wanted a solution that would free our users from the overhead of all of that data wrangling. Isilon was a good fit because it enabled us to effectively consolidate five separate data stores into a single filesystem, providing a single point of entry to our data for all of our users.
PowerScale helped us consolidate our former block storage into a full-fledged, scale-out file storage platform with great success. We then decided to expand our use cases further, replacing some of the ancillary Windows file servers that provided network file shares in our Head Office. We now have a single platform for all our unstructured data needs at our main locations.
We have not explored PowerScale's cloud-enabling features yet, but they are on our roadmap. The fact that those features exist out of the box and can be enabled as required is another reason the platform is so versatile.
The switch to PowerScale was transformative. Before we implemented it, users had to constantly move their data between different storage platforms, which was time-consuming and created a high barrier to entry for getting the most out of our centralized compute. Distributed, parallel processing is challenging enough; adding data wrangling on top of it created massive cognitive overload. Scientists are always under pressure to deliver on time, and deadlines are unforgiving. The easier we can make leveraging technology for them, the better.
We officially launched our current HPC environment shortly after we introduced Isilon, supporting approximately 20 users. Today, that number has grown more than seventeen-fold to over 350 users across all of our sites. In an organization with nearly 1,000 employees, that's more than a third of our workforce! I credit PowerScale as one of the critical factors behind that growth. PowerScale simplified data management because it allows you to present the same data via multiple protocols (e.g., SMB, NFS, FTP, HTTP), tremendously reducing our users’ cognitive overhead.
Before adopting PowerScale, we also faced capacity constraints in our environment. I had to constantly ask end-users to clean up and remove files they no longer needed. Our block data stores were constantly sitting at around 90% utilization. Expanding the storage array was not only expensive: every time we wanted to provision additional space, we had to decide whether it was justified to re-architect the environment or to add yet another data store. And going with the latter option meant going back to our users again to free up space before more capacity could be added. All of this wasted massive amounts of time that could have otherwise been spent running jobs and doing science.
Once we introduced scale-out storage, capacity upgrades and expansion became straightforward. The procurement process was simplified because now we can easily project when we will hit 90% storage utilization, and our users have visibility of how much storage they are individually consuming thanks to accounting-only quotas, which help keep storage usage down. PowerScale provides a lot of metrics out of the box, which are easy to navigate and visualize using InsightIQ, and most recently DataIQ.
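To illustrate how simple that projection can be, here is a back-of-the-envelope sketch. It is not a PowerScale or InsightIQ feature, and the function name and sample numbers are hypothetical: it fits a linear trend to monthly utilization samples and solves for the point where the trend crosses 90%.

```python
# Hypothetical capacity-projection sketch: fit a straight line to
# monthly utilization samples and estimate when usage crosses 90%.
# (Illustrative only; InsightIQ/DataIQ provide trending out of the box.)

def months_until_threshold(samples, threshold=0.90):
    """samples: utilization fractions, one per month, oldest first.
    Returns months from the last sample until the linear trend crosses
    `threshold`, or None if usage is flat or shrinking."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Least-squares slope and intercept.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    crossing = (threshold - intercept) / slope  # month index where trend hits threshold
    return max(0.0, crossing - (n - 1))

# Example: utilization grew from 70% to 78% over five months.
print(round(months_until_threshold([0.70, 0.72, 0.74, 0.76, 0.78]), 1))  # → 6.0
```

In practice we read the same answer off the InsightIQ trend graphs, but the arithmetic behind the projection really is this simple.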
I can certainly recommend PowerScale for mission-critical workloads; it is a powerful but simple platform with little administration overhead. We use it in production for a variety of use cases, and it would be hard for our organization to operate effectively without it.
What is most valuable?
When we selected Isilon as our preferred storage provider, many considerations came into play, but the deciding factor was how little administration it requires. We no longer need a dedicated storage administrator looking after it. Instead, our Systems Engineers can handle the day-to-day operations without requiring in-depth expertise in storage management. The simplicity of the solution was a strong selling point when we first started looking into it. For example, when you have replicated clusters, you must ensure that you can actually failover between them in the event of a disaster. PowerScale makes setting up and checking the status of replication schedules extremely simple.
Over time, we started using more and more of its capabilities. I believe the most valuable feature we started using, beyond the initial scope for the solution, is the multi-protocol system that allows you to access the same set of files using different network protocols like NFS or SMB. PowerScale’s Unified Permission Model ensures that data security and access permissions are honoured regardless of whether the client is a Windows desktop or a Linux server. Our users can now access the data they need for their research, without having to deal with multiple credentials depending on the environment they are using, or having to rely on specific clients. The same file can be opened and edited from Windows Explorer or from the Linux command line, and we can guarantee that the ownership and permissions of that file will remain consistent. It reduces friction and cognitive overhead, which is what I value the most.
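Conceptually, the value of that Unified Permission Model is that a single canonical set of permissions is evaluated the same way no matter which protocol the request arrives on. The toy sketch below illustrates that idea only; the ACL, principals, and function are hypothetical, and this is not how OneFS actually implements its permission mapping.

```python
# Conceptual sketch of multiprotocol permission checking: one canonical
# ACL evaluated identically for an NFS (POSIX) or SMB (Windows) client.
# Hypothetical data and logic; NOT the OneFS implementation.

CANONICAL_ACL = {
    # principal -> set of rights granted on a hypothetical file
    "alice": {"read", "write"},
    "researchers": {"read"},
}

def allowed(user, groups, right):
    """True if `user` (or any of their groups) holds `right`."""
    principals = {user, *groups}
    return any(right in CANONICAL_ACL.get(p, set()) for p in principals)

# The answer is the same regardless of the protocol the request came in on:
assert allowed("alice", ["researchers"], "write")      # e.g., an SMB client
assert allowed("bob", ["researchers"], "read")         # e.g., an NFS client
assert not allowed("bob", ["researchers"], "write")
```

The point is that there is one source of truth for who may do what, rather than separate, drifting permission sets per protocol.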
Data security and availability are also included in the solution, out of the box. Of course, you still need to know how to configure the different features for your use case, but from a data security and availability perspective you can leverage replication schedules, snapshotting, increased redundancy at rest, and all of those features we now consider must-haves. With PowerScale, I can have peace of mind that if a specific directory needs to be protected, it will be protected.
What needs improvement?
The only thing I think PowerScale could do better is the HTTP data access protocol. At present, you cannot protect access to data via HTTP or HTTPS the same way you can secure access through other protocols like NFS or SMB. Either a file is accessible to everyone in the organization, or it cannot be accessed at all; there is no in-between. HTTP is not treated as a first-class data access protocol, so the Unified Permission Model, which would allow a user to authenticate before accessing a private file, does not apply.
However, with the recent introduction of S3 support in OneFS 9, I believe the necessary plumbing is already there for HTTPS to be elevated to a first-class protocol in the future, because both protocols sit behind a web server under the hood. It does not sound like it would be too complicated to implement, and it would be a valuable feature that is currently missing.
For how long have I used the solution?
We started exploring storage solutions for our environment back in 2012. We have been using PowerScale for nearly 10 years now.
What do I think about the stability of the solution?
PowerScale has never failed us. It has been running with almost 100% uptime since it was first installed. We have only had to shut down the entire cluster once, when we were moving data-centers. In earlier versions, you sometimes had to reboot the entire cluster for significant OS upgrades; today, rolling upgrades are the norm, with only a single node ever down at a time.
What do I think about the scalability of the solution?
At the beginning, we procured four initial nodes, which amounted to about 400 TiB of usable space. We now have just shy of 2 PiB of total installed capacity in each cluster. Our storage usage has grown considerably, moving from terabytes to petabytes, and I have no doubt that we will be able to continue growing at the same rate or faster in the future. The original Isilon was already designed to scale to multiple petabytes, and PowerScale will only push that further. We highly value being able to grow our capacity without having to worry about platform limits.
PowerScale now also offers more choice when it comes to mixing and matching different types of storage nodes within the same cluster. For example, you can add all-SSD or NVMe nodes alongside old-fashioned SAS disks, which is worth considering when performance is critical in your environment. In our case, the performance we get without these newer nodes is sufficient for our needs. The best part is that should we ever need to provide a faster pool of disks, there is no administration overhead to do so: just add the new node types, set the tiering rules you want, and let the system rebalance itself. No partitioning, no moving data around yourself. It is transparent to end-users as well as administrators. You can even tier data to a cloud pool for archiving if you want! This simplicity is, again, one of the main reasons we decided to stay on the platform.
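Conceptually, a tiering rule is just a top-down policy match on file attributes. The sketch below is purely illustrative; the tier names and thresholds are made up, and on OneFS you would express this declaratively as SmartPools file pool policies rather than writing code.

```python
# Hypothetical file-pool policy sketch: route a file to a tier by age
# and size, evaluated top-down like policy rules. (Illustrative only;
# real clusters use SmartPools policies, not application code.)

def pick_tier(age_days, size_bytes):
    """Return a tier name for a file; first matching rule wins."""
    if age_days > 365:
        return "archive"            # cold data -> cheapest pool (or cloud)
    if size_bytes > 1 << 30 and age_days > 30:
        return "capacity-hdd"       # large, cooling files -> SAS pool
    return "performance-ssd"        # everything else stays on the fast tier

assert pick_tier(400, 10_000) == "archive"
assert pick_tier(60, 2 << 30) == "capacity-hdd"
assert pick_tier(5, 2 << 30) == "performance-ssd"
```

The appeal of the real feature is exactly this first-match-wins simplicity: the administrator declares the rules and the cluster moves the data in the background.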
How are customer service and support?
I needed technical support on a few occasions, specifically while implementing multi-protocol access for Linux and Windows clients. There was an instance when my engagement with support had to run for longer than I expected, but that was because the solution I wanted to achieve was highly complex from a technical perspective. We had to escalate the issue a few times to the next tier of engineers until they came through with a solution. It was always an excellent customer service experience, and I can certainly recommend Dell EMC Support to anyone who asks.
That said, we only tend to contact Support when we cannot resolve issues or find the answers we need in the product knowledge bases or the community forums. The product information available online is both comprehensive and of excellent quality.
How was the initial setup?
The initial setup was straightforward. Since it was a greenfield implementation, we did not run into any issues. EMC, later acquired by Dell to form Dell EMC, even let us evaluate the platform in our own data-center, so by the time we decided to procure the solution, all we had to do was revert to “factory settings”. The longest part of the process was migrating around 84 TiB of data from our old data stores, as happens with any data migration exercise. But once the data had been relocated, it became a matter of simply pointing the servers to the new data store entry points. Users were happy to take it from there, and were certainly overjoyed at the additional space they had to work with.
What about the implementation team?
It was a long time ago now, so the details are fuzzy, but we dealt with EMC directly, with the help of an integrator for some of the initial design and implementation. EMC was our primary point of contact for platform-specific support when we first started, and their guidance around the different features of the platform was invaluable.
Today, that same integrator continues to help us with ongoing procurement, simplifying decisions around which of the many available node types might be the best suited to our environment, or ensuring that we stay on top of our node refresh cycle as older ones reach end of life.
What's my experience with pricing, setup cost, and licensing?
Price was also a significant factor in our decision to go with PowerScale. The team at EMC, now Dell EMC, came through with a highly competitive offer that tipped the scales towards their solution. There was only one other solution around the same price point, but it could not match PowerScale on features. That other solution is no longer on the market.
The licensing model is interesting because it is essentially “pay to unlock”. Most of the available features are software-defined, so they are already present in OneFS, the underlying operating system, waiting for you to activate them as needed. There are a few additional costs, however. NDMP backups require you to install fibre cards, which are sold separately. Then, of course, you have the cost of tape and off-site storage, but you would have those same costs on most other platforms. Luckily, we do not need to back up the whole cluster, because we can rely on cluster replication and snapshots (on both source and target clusters) to achieve our RPOs. But we do have a legal requirement to preserve some data for an extended period, so we use tape for that.
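As a rough illustration of the RPO reasoning behind that decision (the numbers and function names are hypothetical, and this deliberately simplifies how replication scheduling really behaves): with interval-based replication, the worst-case data-loss window is roughly the replication interval plus the time a sync takes to complete.

```python
# Back-of-the-envelope RPO check for interval-based replication.
# Hypothetical figures; a real assessment would also account for
# change rate, snapshot schedules, and failed-sync retries.

def worst_case_rpo_hours(repl_interval_h, avg_sync_duration_h):
    """Data written just after a sync starts waits a full interval,
    then must finish transferring before it is safe on the target."""
    return repl_interval_h + avg_sync_duration_h

def meets_rpo(repl_interval_h, avg_sync_duration_h, target_rpo_h):
    return worst_case_rpo_hours(repl_interval_h, avg_sync_duration_h) <= target_rpo_h

assert worst_case_rpo_hours(6, 1) == 7      # 6-hourly syncs taking ~1 h
assert meets_rpo(6, 1, 8)                   # comfortably inside an 8 h RPO
assert not meets_rpo(12, 2, 8)              # 12-hourly syncs would not be
```

Working through this kind of arithmetic is what convinced us that replication plus snapshots, rather than full cluster backups, was sufficient for everything without a legal retention requirement.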
Which other solutions did I evaluate?
We evaluated three other competing solutions based on multiple criteria. Some of those solutions no longer exist or have evolved into a different offering. We went through a rigorous evaluation process that assessed the platforms’ scalability, ease of use versus complexity to administer, performance, and of course TCO. Isilon blew all the others out of the water. It was an easy decision for us to make based on the criteria we set.
What other advice do I have?
I give Dell EMC PowerScale a high 9 out of 10. It is not quite a 10, mainly because we do not have a use for all the features it provides, which you still need to be aware of from a security point of view (e.g., to ensure they do not introduce unexpected risk). The ecosystem has also grown somewhat more complex in terms of the many different node types you can have. This gives you a lot of flexibility, but it goes slightly against the idea of simplicity that was so attractive initially.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.