Senior Technical Consultant at Amplus
Addresses the customer's need for a global rather than discrete file system
What is our primary use case?
We use the solution to organize and manage data. Some of its applications are geared toward companies in the oil and gas sector; for instance, it supports SIP solutions that perform scanning and comprehensive seismic analysis. Other customers include broadcast companies with vast historical assets that aim to manage their content libraries efficiently. The solution primarily focuses on data management and storage.
How has it helped my organization?
PowerScale addresses the customer's need for a global rather than discrete file system. It resolves performance issues and offers comprehensive support. PowerScale could expand further in areas such as HSM and integration with tape libraries.
What is most valuable?
Dell offers pairing and optical services within the same infrastructure. This means the same infrastructure can serve internal file system needs while also providing access to the public.
What needs improvement?
The solution should improve its pricing and features.
For how long have I used the solution?
I have been using Dell PowerScale (Isilon) as a consultant and reseller for seven years.
What do I think about the stability of the solution?
The product is stable.
What do I think about the scalability of the solution?
The solution is scalable and is suitable for enterprise customers.
How are customer service and support?
Support is very good.
How would you rate customer service and support?
Positive
What's my experience with pricing, setup cost, and licensing?
IBM is cheaper than Dell PowerScale.
What other advice do I have?
The maintenance depends on the time you are willing to invest in learning about the platform. It varies for each individual, and if you have people eager to learn, it can make a significant difference.
IBM builds its own disk management components, which helps control costs; they don't rely on purchasing from other vendors. Dell PowerScale, for example, doesn't manufacture the disks; they source them from suppliers or partners. They do not produce the disks themselves; they procure them.
IBM can utilize gateways that offer a similar file system to PowerScale. These gateways provide both block storage and file services. This is different from PowerScale because when purchasing PowerScale, you acquire building blocks including CPU and memory. This configuration lacks the flexibility to adapt to various infrastructures. While this setup can be configured, it may pose limitations.
You can customize security settings within the tool, including access and file-level permissions. This focuses on enabling 'write once' capabilities, making it challenging to alter data without appropriate authorization. It would be impossible to tamper with unless an individual gains access by obtaining administrator credentials.
Overall, I rate the solution an eight out of ten.
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller

CIO at an educational organization with 201-500 employees
We can easily deploy, manage, and maintain systems without needing a huge amount of expertise to facilitate them
Pros and Cons
- "Since it can scale so easily, as long as I have money to buy more nodes, I can grow it as big as I need to. That is important in our business. As sequencing technologies continue to evolve, and as those technologies evolve, the amount of data generation never gets smaller. It just always seems to get bigger. This is one of the absolute key aspects: We can grow on demand without having to forklift stuff."
- "The thing that they are working on now, and we are following closely is more native cloud integrations. The way that we envision workloads in the future is around moving compute to data instead of the other way around. So, we would like to have a single pane glass to manage storage across a variety of different platforms, including native cloud. That would be awesome."
What is our primary use case?
TGen is a nonprofit biomedical research institute. Our focus is primarily on genomics, translating discoveries in the field of genomics into treatments for patients.
It is central to our data storage of scientific data. We sequence the human genomes of folks with different diseases, primarily cancer but also other disorders, e.g., rare childhood disorders and people with mitochondrial diseases as well as neurological diseases. When you do this, it generates a considerable amount of data. Each time that a whole genome sequence is run, you generate anywhere from four to eight terabytes of data. For example, if you are looking at 1,000 patients, that can be anywhere from four to eight petabytes of data. TGen has about seven petabytes of storage being used for storing these genomes, which is a fair amount.
Isilon is an on-prem, scale-out storage. The nodes are linked together through a back-end high-speed interconnect.
We are running current versions of the software on the nodes. The nomenclature is sometimes not the easiest to follow because they still like to rebrand things.
How has it helped my organization?
It has given us the capability to focus on our prime objective, which is science, without having to necessarily be concerned about the back-end infrastructure that powers it. This is something we are always looking to achieve: Being able to focus on our prime mission without having technology get in the way. Scientists don't want to learn all about your storage system. They just want to do their science.
It is a critical piece for storing scientific data for our Institute. It is where we put our most valuable and precious data. We also leverage it for work on administrative data, spreadsheets, Word documents, etc. So, it is flexible. We access it via NFS and SMB. Those are the two primary methods of access that we use along with some others, such as S3 for some particular use cases.
Deploying and managing storage at a petabyte scale using Isilon is extremely simple. The user interface for management tasks is intuitive. The documentation is thorough and good, and if you get stuck, then the support is very capable. Overall, I have confidence that we can easily deploy, manage, and maintain systems without needing a huge amount of expertise to facilitate them.
PowerScale has helped us by consolidating the data without having it dispersed. Prior to this solution, we would have many different physically separate storage solutions. To do the science, sometimes data needs to go from one place to another. Moving your data at a petabyte scale, or even at hundreds of terabytes, is very time-consuming and expensive. By having the consolidation within these clusters, it has enabled us to very easily access and compute data without having to push it around to a bunch of different places.
We have a "thinly provisioned" workforce. One of the crucial aspects is that we can continue to scale a solution without having to add more humans to take care of it.
What is most valuable?
There is a reason that we chose this platform to store this priceless data. We know it is resilient. It also provides data protection that helps me sleep at night.
One of the most important factors about it is you can manage a lot of storage without a lot of people. Therefore, ease of management is really important for us because we are a nonprofit. We don't have a huge IT staff to support a pretty substantial IT infrastructure. So, ease of management is always a really crucial consideration.
Another aspect of the management that is super important is having the CloudIQ feature to monitor performance and other data remotely. We have four clusters that we manage. Having all those clusters, being able to have a single dashboard to take a look at the health of everything every morning, helps out a lot.
One of the nice things is that they have several different node types, spanning from super high-performance, flash-based storage nodes through to more of what we consider an archive tier. So, we are able to use the technology Dell EMC has labeled SmartPools, which will tier data automatically between different types of storage. We can ensure that hot data resides on the high-performance storage, whereas once data has gotten colder, it can be pushed off to the low-performance storage to help control costs.
We have used the solution’s support for the S3 protocol, but in a limited use case. We are looking to expand that because we are doing more work towards cloud-based solutions. So, having the flexibility of S3 is important as we design new workloads that will be more cloud-centric. They will be able to use that protocol to access data on nodes without necessarily having to go back and refactor everything.
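To illustrate what that kind of S3 access might look like in practice, here is a minimal, hypothetical Python sketch using boto3 pointed at a PowerScale S3 endpoint. The endpoint URL and port, bucket name, object keys, and credentials are placeholders I've assumed, not details from this environment.

```python
# Minimal sketch: reading data over S3 that is also reachable via NFS/SMB.
# The endpoint, bucket, key, and credentials below are assumed placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://powerscale.example.org:9021",  # assumed cluster S3 endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    verify=False,  # or a path to the cluster's CA bundle
)

# List objects under a project prefix and download one result file.
resp = s3.list_objects_v2(Bucket="genomics", Prefix="project-x/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("genomics", "project-x/sample-001.bam", "/tmp/sample-001.bam")
```

The point of the sketch is simply that cloud-centric tooling written against the S3 API can be pointed at the cluster's own endpoint instead of being refactored.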
It is good and efficient when maximizing storage utilization. The operating system behind it, called OneFS, provides granularity, data protection, and control. So, you can actually adjust the amount of overhead being consumed for your data protection, depending upon what your needs are. It is pretty efficient at keeping data protected. At the end of the day, that is one of the most important things: Knowing that your data is safe.
Dell EMC keeps adding more features to the solution’s OneFS operating system. We have been iterating with them for quite some time. The solution is continually improving and becoming more robust and reliable. One of the latest things that really helped us out was the ability to perform upgrades without having cluster-wide outages, which is huge because we don't want to shut down operations unless we absolutely have to. Having that was a really big win for us. This saved us time. More importantly, it has kept our labs functioning during upgrades, as opposed to having shut down sequencers for a day while we go through and upgrade everything, which is important.
What needs improvement?
Something that still could be improved upon is adding additional node types of different sizes to facilitate a better way to run in distributed offices. For example, we have a lab up in Flagstaff, but they don't have a lot of IT infrastructure. Therefore, it is not really appropriate to run this system at their location. So, we run it down here in Phoenix. It would be nice if there was a smaller solution that we could deploy up there that was still as cost-effective as the bigger solutions.
The thing that they are working on now, and that we are following closely, is more native cloud integrations. The way that we envision workloads in the future is around moving compute to data instead of the other way around. So, we would like to have a single pane of glass to manage storage across a variety of different platforms, including native cloud. That would be awesome.
For how long have I used the solution?
We were using the product before Dell EMC even bought Isilon. So, we have been using it for some time now.
What do I think about the stability of the solution?
We have run this product for so many years now. I can count on one hand the number of times where we have had any kind of issue that impacted availability. Usually, it turned out not to be the cluster but something else. It is extremely robust and continues to function.
We are not super aggressive in patching or anything. We believe that stability is number one. Availability is of the most critical importance, so that is really where we focus.
What do I think about the scalability of the solution?
Once you have set up your initial cluster, adding more capacity to it is extremely easy. It is so easy that one of our salespeople added a node to the cluster. Having a salesperson do something technical is always a little bit interesting, but they didn't have any problems at all. "Boom," and it works.
This is one of the nice things that goes back to that whole ease of management. Being able to add additional capacity is pretty simple: you just buy the nodes and plug them in, as long as you have enough of the right node types. If you meet those criteria, it is that easy to do.
Since it can scale so easily, as long as I have money to buy more nodes, I can grow it as big as I need to. That is important in our business. As sequencing technologies continue to evolve, the amount of data generated never gets smaller. It just always seems to get bigger. This is one of the absolute key aspects: We can grow on demand without having to forklift stuff.
I have done forklifting, and it is a drag. I don't want to do that again. We want to just keep being able to grow as we need to ensure our customers have the resources that they need to do their work.
How are customer service and support?
I have worked pretty closely with their engineers over a number of years. They have implemented several different items that we have suggested.
The technical support is excellent. They have good support teams within Dell EMC, but also the VARs that we use have been extremely good at helping us as well. We kind of have multiple different angles of support, and that is one of the reasons that we continue to invest in Dell EMC. They have a model that we can rely on for getting the right answers.
I would rate the technical support as a nine out of 10, because nobody is perfect.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
We got our first cluster in 2008. Before that, we were using JBODs connected to Linux hosts. This was a homegrown solution. Frankly, there wasn't really anything available at that time that could meet our needs which didn't cost millions of dollars. So, we went from something that was good enough to something that was much better.
We switched because we needed something that scaled much larger than what we could build and comfortably support. That was the number one reason. Number two was, at that time, I was still doing all the technical work, and I was the one building it. I had too many other things to do. So, I needed to find something that could be supported by other people, not just me. This was really getting something that we could run in a more enterprise-type fashion, as opposed to something that we built because we had to and there weren't any other options.
Today, we have two individuals responsible for storage. Not just this storage, but any other storage systems that exist. Previously, while the storage was a lot smaller, it still took about four of us working on it. By having a single platform, where we can run a variety of workloads on it, this enabled us to not have to continually grow our storage administration staff, even though our data footprint increased many fold over the years.
How was the initial setup?
It was straightforward. There wasn't anything super complex about it.
We just deployed a new cluster last year. It took around three to four months before it was really cranking in full production. Once a cluster is running in full production, it is adding value.
What about the implementation team?
Even to this day, if we still run into something that we are not sure about, we can call support or get local support, who generally get things addressed quickly and to our satisfaction.
What's my experience with pricing, setup cost, and licensing?
Since I have to manage all the budgets, I always want things to be less expensive. However, I would say the pricing is fair. Their costs are in alignment with their competitors. It is a good value for the money.
Like anything else, it could always be less expensive. That would be great. At the same time, I would like to make sure that they keep innovating.
Which other solutions did I evaluate?
We went pretty much straight to the Isilon product. At the time, there were no other products available that did what that product did. They were kind of unique.
We keep going back to them even though there are other products now that purport to have similar characteristics. We keep going back to them because it has been such a good experience. We have a high degree of confidence in Dell EMC being able to deliver a product that meets our needs. It is cost-effective and helps me sleep at night because a lot of the data is precious. Sometimes, you get samples that you would never be able to get again, where they are kind of a one-off thing. If you lose them, then they are gone forever. We have to bear that in mind. That is really why we continue to invest in this solution.
What other advice do I have?
I would rate it as nine and a half out of 10. One of the main reasons that we have been successful as an institute is because we have back-end infrastructure, e.g., scale-out storage. This lets scientists focus on doing science, which is really important.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Solutions Architect / Systems Engineer at Unique Digital, Inc.
Provides good flexibility and stores all our unstructured data
Pros and Cons
- "The solution's most valuable features are scalability and flexibility."
- "Dell PowerScale (Isilon) is a little bit pricey, and its pricing could be improved."
What is our primary use case?
We use the solution to store all our unstructured data.
What is most valuable?
The solution's most valuable features are scalability and flexibility. It allows us to scale storage capacity without downtime.
What needs improvement?
Dell PowerScale (Isilon) is a little bit pricey, and its pricing could be improved.
For how long have I used the solution?
I have been using Dell PowerScale (Isilon) for two years.
What do I think about the stability of the solution?
I rate the solution ten out of ten for stability.
What do I think about the scalability of the solution?
Around 30,000 users use the solution daily in our organization.
I rate the solution’s scalability ten out of ten.
How was the initial setup?
On a scale from one to ten, where one is difficult and ten is easy, I rate the solution’s initial setup ten out of ten.
What about the implementation team?
The solution's deployment process is pretty extensive. It has a dedicated back-end network and then connects to the data center network on the front end. The solution can be deployed in a few days. Dell services did the deployment for us.
What's my experience with pricing, setup cost, and licensing?
The solution's licensing cost varies based on capacity and performance requirements.
What other advice do I have?
I am using the latest version of the solution. We partner with many third-party software products that can be used for different types of data replication. I would have users analyze their data and put as much of it on Dell PowerScale (Isilon) as they can. The solution stores all the unstructured data related to all my projects. It's the core of our data center.
Overall, I rate the solution ten out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
Technical Project Manager at a tech services company with 201-500 employees
Easy to expand, helps consolidate data storage, and offers great support
Pros and Cons
- "I don't have to rebuild the cluster to add a node."
- "That said, for the other security features, it would be helpful if Tenable - and I know it's outside the scope of this question itself - had Isilon-specific plugins."
What is our primary use case?
It was a good fit for the system that we put in, given the amount of secondary data that was going to be generated on our system. Not only did it have the capacity for everything, but it also had the scale-up and scale-out features. We needed expansion without having to reimage the system. The larger we scaled it out, the better the IOPS and bandwidth. It checked all of the boxes in terms of what we really wanted to hit for a tier-two storage system.
What is most valuable?
I just heard my SME say today that OneFS is the best feature of the whole solution. The continuous improvements OneFS has made to keep up with industry standards, the ease with which it can be deployed, and the ease with which it can be upgraded are all key features of this system.
A key feature that I love is scalability. I don't have to rebuild the cluster to add a node. It can be scaled up and out without taking my system down.
PowerScale helps consolidate data storage and multiple applications into a single platform for easier manageability. As an example, I'd use the scenario of when I ingest data from a partner and then use the capabilities within Isilon to distribute the data across the other clusters in my enterprise. While we like to think that we're running an enterprise environment, their definition of enterprise and my definition of an enterprise are not the same. The idea here is that I'm able to take in data from one organization at one cluster, and then use the smart features and the other features of Isilon, one of the best operating systems, to redistribute that data to any other cluster that needs it.
The impact PowerScale has had on our company's storage efficiency has been really good. I just recently saw a report on this a few weeks ago. We're actually doing really well as far as compression and deduplication go. We've over-bought on capacity, based on the deduplication and compression that we're getting out of the system right now.
We really overbought on capacity. We have sites that are only 20% used. Then again, that goes back to the deduplication and compression we're getting out of Isilon. They should be at 45% to 50% consumption at this point, yet we're only using 20% of the capacity. The deduplication and compression are working well. When I go in for a lifecycle refresh, I will have a very hard time convincing leadership that I still need the capacity. When they start reading and seeing these reports, it'll create a problem for me, as I'll have to justify it. However, to be clear, it's a good problem to have.
PowerScale has helped free up our employees' time to focus on other business priorities. We were able to do things like due diligence and research on InsightIQ and DataIQ and were able to do product comparisons while not having to worry about Isilon. It's freed up the cycles on those guys really well. I've got them to a point now where I'm cross-training them into Avamar.
PowerScale has helped reduce our overall risk in that it's dependable. The data is always going to be there. I don't have to worry about my end users. It has reduced risk across the entire enterprise.
What needs improvement?
In terms of PowerScale's cybersecurity, including its ransomware protection, considering the environment that we're in, I don't really have to worry about ransomware. That said, for the other security features, it would be helpful if Tenable had Isilon-specific plugins for its compliance scans; that's what I'm looking for. Right now, the only plugins being used are the BSD plugins. When they scan across Isilon, they come back with all kinds of security findings that are false positives, which my team then has to go and chase down. As far as Isilon security is concerned, it's lovely. As far as being able to prove it, it's not so lovely. I don't know if there's a partnership between Tenable and Dell that could maybe bridge the gap on that one.
A recent development is that there's a key feature coming out in OneFS 9.3. However, when you try to get to OneFS 9.3 or 9.4, it has been pulled from the Dell download site, and we're referred back to 9.2.1 as the target code. The feature I'm looking for is in 9.3. If it's not going to be available to download, they should stop telling me about it.
For how long have I used the solution?
We've used the solution for six years.
What do I think about the stability of the solution?
The stability is awesome. There are a few drives every now and again, however, with the product itself, we haven't had any issues with it.
How are customer service and support?
Dell's support for PowerScale is awesome. My PowerScale SE is probably one of the best SEs I've had in recent history. If there's something I need or information that I'm looking for, I know exactly who to go to. They're really responsive. It's really cool.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
This was a greenfield build. Isilon and PowerScale are what we put in from the very beginning.
How was the initial setup?
I was not involved in the initial setup or deployment of this solution. My understanding is that it was pretty straightforward. We had a little bit of a rough spot when we went to do a OneFS upgrade; however, that was due to the hardening we had put in. When we had to back it off to do the upgrade, the hardening didn't back out as easily as it went in. That created a snafu, and we ended up undoing all of the hardening across the board. We created our own scripts to do it, and it was much easier to manage.
We never deployed just PowerScale. Every PowerScale installation went in with a complete stack that included the switching, the server side, VMware, and everything else that went along with building a stack. Isilon only occupied about three or four days of a six-week installation period. It was pretty easy on a per-installation basis.
What was our ROI?
We've seen ROI in terms of time. We're also implementing the new version of vROps in which we can see the cost of our different applications, and how they use the different features.
From a time perspective, I have seen a return on investment in just the fact that I can take people now and redirect them to other products. I'm not going to reduce staff, however, I am going to redirect to other product lines. I have one guy that went from being our storage SME to probably one of my top guys, as far as VMware is concerned as well. It's worked out nicely.
What's my experience with pricing, setup cost, and licensing?
The licensing is great. I'm not aware of the price point. As I was just telling my crew today, our job is to come up with solutions, not to worry about the price. The cost is management's problem. If they don't like the cost, they'll come back and tell us to find another solution. Up to this point, I'd say the price point is okay.
Which other solutions did I evaluate?
We did evaluate other options. I couldn't say exactly which ones. I wasn't necessarily on the program when they did the evaluation, and therefore, I don't know what products were evaluated. That said, there was an evaluation period done.
What other advice do I have?
In terms of versions, we have a mix of X410 and H500.
I’m not sure of the solution's flexibility for supporting various data workflows while keeping them protected. I would have to refer to my SME on that one. I don't really have feedback on that.
I don't know how much money we have invested, but from a productivity, stability, and ease-of-management perspective, I would absolutely back it 100% every time. It has never caused a hiccup. Of all the components in our IT system, it's probably the least troublesome. It has been a workhorse and solid since the day we put it in.
I'd rate it eight out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
System Team Leader at Deakin University
As you add more nodes in a cluster, you get more effective utilisation
Pros and Cons
- "The solution has simplified management by consolidating our workloads. Rather than managing all the different workloads on different storage arrays, Windows Servers, etc., we just have one place per data centre where we manage all their unstructured data, saving us time."
- "The replication could lend itself to some improvement around encryption in transit and managing the racing of large volumes of data. The process of file over and file back can be tedious. Hopefully, you never end up going into a DR. If you do go into a DR, you know the data is there on the remote site. However, in terms of the process of setting up the replicates and filing them back, that is just very tedious and could definitely do with some improvement."
What is our primary use case?
- Research data
- Departmental file shares
- Data centre storage: NFS
We have two data centres in our university. We have Cisco UCS, Pure Storage, and are heavily virtualised with VMware. PowerScale is our unstructured data storage platform. It provides scaled-out storage and our high-level NFS across applications. It also provides all the storage for our researchers and business areas, as well as students, on the network.
With the exception of block workloads, which is primarily VMware, Oracle Databases, etc., everything else it is on PowerScale. It definitely has allowed us to consolidate the ease of management.
How has it helped my organization?
With quotas, we can have fewer, larger pools of storage in the data centres; we typically only have one or two Isilon clusters. That gives us the ability to multi-tenant, or allocate data to different applications and isolate workloads. It is very efficient when managing that volume of storage. We are not tuning it every day or week. The only time that we are really doing anything with it is if we're planning an upgrade of some sort, several times a year. Outside of that, it just does what we want it to do.
We automate the vast majority of the things that we do on the Isilon clusters: provisioning of storage, allocation of storage, management of quotas for tens of thousands of students, and managing permissions. That is the level of support they have for their built-in APIs, which is probably a huge game changer for us in the way that we manage the storage. It makes things far more efficient inside of PowerScale.
Compared to doing it manually, what we have been able to automate using the API is saving us at least tens of hours a month versus when we used to get service requests. We have even been able to delegate out to different areas. If we have an area with whom we do file shares, we delegate out the ability for them to create new shares and manage their permissions themselves.
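To make that kind of API-driven automation more concrete, here is a minimal sketch of creating a directory quota through the OneFS platform (REST) API with Python and requests. The cluster address, service account, path, and exact request fields are assumptions for illustration; the real schema should be checked against the OneFS API documentation for your release.

```python
# Minimal sketch: creating a directory quota through the OneFS REST API.
# The cluster address, credentials, path, and request fields below are
# illustrative placeholders, not values from this review.
import requests

CLUSTER = "https://isilon.example.edu:8080"
AUTH = ("svc_storage", "PASSWORD")  # an account with quota privileges

quota = {
    "path": "/ifs/data/research/project-x",
    "type": "directory",
    "enforced": True,
    "include_snapshots": False,
    "thresholds": {"hard": 5 * 1024**4},  # 5 TiB hard limit, in bytes
}

resp = requests.post(
    f"{CLUSTER}/platform/1/quota/quotas",
    json=quota,
    auth=AUTH,
    verify="/etc/ssl/certs/isilon-ca.pem",  # or False in a lab
)
resp.raise_for_status()
print("Created quota:", resp.json())
```

Wrapping calls like this in a service request workflow is what lets share creation and quota changes be delegated without anyone touching the cluster UI.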
The solution allows us to manage storage without managing RAID groups or migrating volumes between controllers. We saw this in the big refresh that we did earlier in the year. After you have clicked the "Join" button and the new node has joined, you go to the old node, click remove, and then wait for it to finish. You don't have to configure anything when you add new node types; they are automatically configured. You can tune them and override things if you want, but there is no configuration required.
PowerScale has enabled us to maximise the business value of our data and gain new insights from it. It gives us the ability to have our data stored and presented via whatever protocol is required. Now, we can look at all these different protocols without having to move or duplicate the data.
The solution allows you to focus on data management, rather than storage management, so you can get the most out of your data. We looked at the types of data that we have on the cluster, then we just target it based on the requirements. We don't have to worry about building up different capabilities, arrays, RAID types, etc. We just have the nodes, and through simple policy, can manage it as data rather than managing it as different RAID pools and capacity levels. If someone needs some data storage, then we ask what their requirements are and we just target based on that. Therefore, we manage it as a workload rather than a disk type.
What is most valuable?
Their SmartQuotas feature is probably the thing that we use most heavily and consistently. Because it is a scaled-out NAS product, you end up with clusters of multiple petabytes. This allows you to have quotas for people and present smaller chunks of storage to different users and applications, managing oversubscription very easily.
We use the policy-based file placement, so we have multiple pools of storage. We use the cold space file placement to place, e.g., less-frequently accessed or replicated data onto archive nodes and more high-performance research data onto our high-performance nodes. It is very easy to use and very straightforward.
The node pools give us the ability to non-disruptively replace the whole cluster. With our most recent Gen6 upgrade, we moved from the Gen5 nodes to the Gen6 nodes. In January this year, we ended up doing a full replacement of every component in the system. That included storage nodes, switching, etc., which we were able to replace non-disruptively and without any outages to our end users or applications.
We use the InsightIQ product, which they are now deprecating and moving into CloudIQ. The InsightIQ product has been very good. You can break down performance right down to protocol latency by workstation. When we do infrequently have issues, we use it to track them down. It also has very good file system reporting.
For maximising storage utilisation, it is very good. As you add more nodes in a cluster, you typically get more effective utilisation. It is incredibly flexible in that you can select different protection levels for different files, not necessarily for file systems or blocks of storage, but actually on a per file basis. Occasionally, if we have some data that is not important, we might need to use a lower protection. For other data that is important, we can increase that. However, we have been very happy with the utilisation.
Dell EMC keeps adding more features to the solution’s OneFS operating system. We have used it for about 13 years, and the core feature set has largely stayed the same over that time, while being greatly improved. It has always covered the core NFS and SMB storage protocols, and they've broadened their scope with NFS v4, SMB3 Multichannel, etc. They are always bringing in newer protocols, such as S3. Typically, those new features, such as S3, don't require new licensing. They are just included, which is nice.
Over the years, the improvements to existing protocols have been important to us. When we first started using it, they were running open-source Samba for their SMB implementation under the covers, and they used the built-in NFS server in FreeBSD. The new implementations introduced in OneFS 7 brought huge increases in performance and have been very good, though not necessarily any new features. We even use HDFS on the Isilons at the moment. The continued improvement has been really beneficial.
It is incredibly easy to use the solution for deploying and managing storage at the petabyte scale. Compared with something like IBM Spectrum Scale, there just isn't the same concern about scaling horizontally. I couldn't think of an easier way to deploy petabyte NAS storage than using Dell EMC PowerScale.
What needs improvement?
The replication could lend itself to some improvement around encryption in transit and managing the resyncing of large volumes of data. The process of failover and failback can be tedious. Hopefully, you never end up going into a DR. If you do go into a DR, you know the data is there on the remote site. However, the process of setting up the replication and failing it back is just very tedious and could definitely do with some improvement.
There is a lack of object support, which they have only just rectified.
For how long have I used the solution?
About seven years.
What do I think about the stability of the solution?
The stability has been exceptional. I've been very happy with the stability of it. In the last six years, we have pretty much been disruption free. Prior to that, we have had one or two issues, which we worked with their support to fix.
We had a major refresh at the start of the year, when we replaced one petabyte at one site and half a petabyte at another site. This completely replaced everything and took us about a month. It was finished with one staff member overseeing the process and moving the data, roping in one or two other staff at different times to help with the physical racking.
They are quite heavy, so you always want to have two or three people involved. It has very minimal staff management required. For example, once the hardware is racked, it needs just one operator who joins the nodes, waiting for the data to move over. Internally, this is non-disruptive to the user.
Retiring the old nodes is more of a management task.
What do I think about the scalability of the solution?
Pretty much everyone touches the solution in some way or another. It has been a bit different right now with COVID-19, since a lot of people have been recently working remotely. In any given day, probably 12,000 people have been using it. That is just going by the number of active connections that we have from staff, students, and researchers at any time.
We can't see anyway that we would ever reach the limits of the product in terms of scalability and our workloads. We have no concerns around scalability.
It has a back-end network, and the most complicated part is managing that: getting switches with enough ports to plug the nodes into if you want to go big. It is not the actual management of the storage. As you add more nodes, the management overhead remains largely the same.
For larger scalability, I would be very comfortable with it. We would just have to do some good site planning to ensure that we have enough room for it.
Our usage is pretty extensive. It touches on almost every area of our organization. With the introduction of object support and support for Red Hat OpenShift, which they're releasing in OneFS 9.0, we are very keen to explore and extend the usage in those areas. That is part of the reason why we are upgrading our test cluster to OneFS 9.0: specifically to evaluate use with Red Hat OpenShift and Kubernetes in clouds. It definitely has a very strong place now in the data centre, and we don't see it going away anytime soon, as we see more workloads going onto it.
How are customer service and technical support?
The support has been mixed. If you get through to the right engineers, you can get problems resolved incredibly quickly. If you don't, you can go around in circles for a long time. We do typically have to escalate support tickets through account managers to get them positioned correctly. However, once that happens, issues are resolved pretty quickly and we're generally happy.
The technical support is average. They are certainly not the best that we have ever dealt with, but far from the worst. I would not recommend the product based on their tech support alone.
Which solution did I use previously and why did I switch?
Going back 13 years prior, we used to have a lot of Microsoft and Linux-based file servers all over the place. They were all siloed with a lot of wasted capacity. Consolidating all those down into a small handful of Isilon clusters has dramatically reduced the amount of silos that we have in the organization. In terms of reducing waste from having storage stuck in one silo or isolated area, it has made a huge improvement.
We have previously used IBM Spectrum Scale, which I don't think you can buy anymore. Briefly, eight years ago, we moved a large portion of the workload off Isilon onto Spectrum. That was the biggest regret that I have had in my career. We couldn't get back onto Isilon fast enough. It was a commercial decision to move away from Isilon, which wasn't the cheapest; however, it was far more mature than the IBM product. Spectrum cost us so much that what we saved in capital expenditure we then lost in productivity, overhead, and maintenance. It was just a disaster. The support that we received from IBM was the worst support I have ever received. I've been in this industry and job for about 17 years now, and I have never had a worse support experience than the one I had with IBM. It was a nightmare.
When we needed to get the issue with Spectrum fixed, there was no doubt about getting PowerScale. We couldn't get back on PowerScale fast enough. We just made that happen, and as soon as we did, all the fires were put out.
About 13 years ago, we were using six-terabyte nodes. Now, they're obviously a lot bigger than that. While scalability was definitely a key interest, the main driver for us was the ease of management: consolidating all the separate file servers with their own operating systems and RAID arrays into one pool of storage where we could allocate quotas and still manage capacity effectively, but centralise it and reduce waste. The ability to scale out was just icing on the cake, and definitely something we were very interested in. It's something we've utilised quite heavily over time, but the ease of management was the main driver.
How was the initial setup?
The initial setup has always been straightforward. The process of creating a new cluster is largely the same now as it was 13 years ago. You get your first node, then connect the serial port to it. You answer about 10 questions, then you're ready to go. The rest of the nodes are added by clicking a button. It's incredibly easy to set up, and it says a lot that the process has been the same for about 13 years. There's not really much to improve or simplify, because it is already incredibly simple.
Assuming the hardware was racked, you could have the cluster setup and your minimum three nodes joined within half an hour to 45 minutes.
The process of adding a node is very straightforward: It is pressing a button. This can take five minutes, then the process is complete. Once you have added new nodes, you can then remove old nodes.
Understand your workload. Make sure you size and cost it correctly for the amount of metadata you expect to see on it. Don't undersize your SSD.
For the whole replacement this year, I got one of our junior staff members, who had never actually used our PowerScale, to do the whole upgrade process. I just pointed him in the right direction. Because it was very easy, he managed to do it without any issues.
What about the implementation team?
We don't use any professional services. We always do it in-house.
Two people are needed for racking hardware. Only one person is needed to deploy it, as that process is very straightforward.
What was our ROI?
The solution has simplified management by consolidating our workloads. Rather than managing all the different workloads on different storage arrays, Windows Servers, etc., we just have one place per data centre where we manage all their unstructured data, saving us time.
PowerScale has reduced the number of admins that we need. It has allowed our admins to focus on adding value through automating tasks and streamlining operations for our customers, rather than focusing on the day-to-day and tuning RAID profiles. We can use our APIs to automate workflows for customers and have quicker turnaround times.
What's my experience with pricing, setup cost, and licensing?
The solution is expensive; it is not the cheapest solution out there. If you look at it from a total cost of ownership perspective, then it is a very compelling solution. However, if you're looking at just dollar per terabyte and not looking at the big picture, then you could be distracted by the price. It is not an amazing price, but it's pretty good. It is also very good when you consider the total cost of ownership and ease of management.
We added on a deduplication license. That is the only thing that we have added. That was a decision where it was cheaper for us to license the deduplication than it was to buy more storage, so we went with that approach. We just did an analysis and found this was the case.
We haven't really hit a workload or situation that we have had any issues catering for. Certainly, with the huge number of different node types now, we could position any sort of performance, from very cheap, deep archive through to high-performance, random workloads. I feel like we could respond very quickly to any business requirement that came up, assuming there was budget. Even if we didn't have budget, with the way our clusters are configured, we typically mix in high and low performance. We won't buy top-of-the-line high performance, but we will buy basic H500 nodes, which have a large number of spinning disks. That is what we standardize on for our high-performance tier.
Which other solutions did I evaluate?
13 years ago, it was called Isilon Systems. They were a startup in Seattle, while we are in Australia, and we were importing the hardware directly. At that time, there was nothing else that we were really looking at. We were just caught up in revolutionising the way we would be managing one pool of storage. Then, six to eight years ago, when we had that little stint on IBM Spectrum, we didn't go to market. We very heavily evaluated the IBM product and NetApp in cluster mode as alternatives. We ruled out NetApp from a management perspective as far too difficult to manage. The Spectrum product that we saw on paper, and from our evaluation of loaned hardware, seemed like it was going to be on par with Isilon. Little did we know the nightmare that would ensue.
The biggest lesson that we learned was from moving away from it onto the IBM product. The maturity of a product is very directly correlated to the amount of time you spend managing it, as it is a very mature product. We have been using it for 13 years, and the core has a very solid, mature foundation that has been built over that time.
We have dealt with Nimble Storage in the past. I would recommend Nimble Storage based on their support (at that time), as they had exceptional support. However, Dell EMC support is no worse than Cisco or any of the other vendors that we have had to deal with, but it is nothing special.
What other advice do I have?
Just don't underestimate how important a mature product is compared to something leading edge or new.
PowerScale is positioned primarily to serve the core within the data centre. We have PowerScale heavily centralized, both in our IT department and on our campuses. We don't really have any PowerScale storage in the cloud or at the edge because we have very good network connectivity. In terms of having the right tiers of storage, the level of flexibility that we have for adding different types of storage with different characteristics to our existing cluster is now the best it's ever been in the 13 years that we've managed it.
Between CloudIQ and DataIQ, they're replacing their legacy InsightIQ product. We haven't moved to CloudIQ yet to start looking at it.
Early on (we have been using the solution for 13 years), if you added a new node type, you would have to add three physical nodes to start a new pool and only end up with 66 percent utilisation on that storage pool. Whereas with the Gen6 hardware, you can have multiple smaller nodes in one rackmount chassis. Now, you can add a new storage type and gain much better storage efficiency off the bat.
The S3 protocol specifically comes in OneFS 9.0. We have a test cluster for it, which we are in the process of upgrading to have a look at their S3 support. However, I haven't used it yet. Typically, we use something like MinIO, which is an open source object gateway, and put that in front of the PowerScale cluster.
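For readers unfamiliar with that pattern, applications then talk ordinary S3 to the gateway rather than to the cluster directly. Below is a minimal, hypothetical sketch using the MinIO Python client; the gateway host, bucket name, prefix, and credentials are placeholders I've assumed, not details from this environment.

```python
# Minimal sketch: talking S3 to a MinIO gateway that fronts the PowerScale
# cluster's file data. The gateway address, bucket, and credentials are
# assumed placeholders, not values from this review.
from minio import Minio

client = Minio(
    "minio-gw.example.edu:9000",   # MinIO gateway in front of the cluster
    access_key="ACCESS_KEY",
    secret_key="SECRET_KEY",
    secure=True,
)

# Upload a file, then list what sits under the same prefix.
client.fput_object("research", "datasets/run-42/results.csv", "/tmp/results.csv")
for obj in client.list_objects("research", prefix="datasets/run-42/", recursive=True):
    print(obj.object_name, obj.size)
```

Once the native S3 support in OneFS 9.0 is in place, the same client code would presumably point at the cluster's own endpoint instead of the gateway.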
On the archive side, we still have the A200 nodes. While you can go with the A2000s or go deeper than that, we can manage pretty much anything thrown our way by not going too extreme in our pools by positioning data effectively. I think it's very good.
I would rate the solution as a nine out of 10.
Which deployment model are you using for this solution?
On-premises
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
Director, Marketing & Sales at a tech services company with 201-500 employees
Supports managed servers and gives a lot of operational flexibility
Pros and Cons
- "We use the solution internally to support our own managed servers and run our own support center."
- "The pricing could be reduced."
How has it helped my organization?
The solution has helped improve our service, lower the running cost of the solution, and made it more scalable so we could address more customers at the same time.
What is most valuable?
We use the solution internally to support our own managed servers and run our own support center. From a business point of view, I like the way the solution operates.
Dell PowerScale works well to help our organization manage and run its storage from any location. We used to have systems at different locations, which required a lot of coordination. We made it work, but with the current solution, one guy can do the work that earlier required two or three full-time equivalents.
Dell PowerScale has helped us to reduce or eliminate data silos.
I rate the solution's flexibility a six out of seven for supporting various data workloads while keeping them protected.
It did not really limit risk by itself. It gives a lot of operational flexibility in a way that reduces risk, because the solution is kept better up to date every time you use it and is consistent across the different data sources. That's good and adds value, but we were already doing that in the old situation. Dell PowerScale just made it easier.
When envisioning the future of our containerized solutions in terms of cloud integration, we will adopt a hybrid strategy. Some of our customers are not able to move to the cloud, where we have to support them. We're following what the customer can do. In many cases, their initial transition into the cloud was not as successful as they thought. In the Dutch market, people are now a little more business-minded while looking into the cloud. That's why I believe that hybrid will be the way forward.
As an integrator, the needs of our customers drive our decision-making process when it comes to selecting a cloud, on-premises, or hybrid environment for containerized applications.
What needs improvement?
The pricing could be reduced.
For how long have I used the solution?
I have been using Dell PowerScale (Isilon) for two years.
What do I think about the stability of the solution?
Dell PowerScale is a stable solution, and not many escalations were brought to my notice.
How are customer service and support?
The solution's technical support team is good, knowledgeable, and quick.
On a scale from one to ten, where one is bad and ten is good, I rate the solution's technical support seven and a half out of ten.
How would you rate customer service and support?
Neutral
What was our ROI?
I see a return on investment in that my people can do more for more customers in less time. The team now covers a larger number of customers than it did in the past, so I did not need to hire new staff for that.
What other advice do I have?
Dell PowerScale clearly delivers what it promised to deliver.
Overall, I rate the solution an eight out of ten.
Disclosure: My company has a business relationship with this vendor other than being a customer: Reseller
Senior System Engineer at Cincinnati Children's Hospital
Data storage and management system that offers reliability and the ability to share data across multiple channels
Pros and Cons
- "PowerScale helped free up our employees' time to focus on other business priorities. There are now automated jobs such as backing up and replicating data, that reduce the footprint we have. Those types of tasks were previously done manually."
- "Additional metadata reporting would be great. We have to use a separate tool to report on that. We would like to view the age of data and how long it has been since someone has accessed a file."
What is our primary use case?
We use this solution to facilitate sharing data access across multiple platforms. We are a children's hospital and have a lot of PHI data that is critical to keep secure.
How has it helped my organization?
One of the benefits that we have seen from our research department is quotas and chargeback. They are able to control costs based on the projects that they're given and the grants that they receive from the state and federal levels. They are able to track the quotas and chargebacks, which is made possible through Isilon.
Implementing Isilon has removed the previous silos that existed between different teams. Everyone has been able to virtually separate their resources, but still store them physically on the same box.
PowerScale helped free up our employees' time to focus on other business priorities. There are now automated jobs such as backing up and replicating data, that reduce the footprint we have. Those types of tasks were previously done manually.
Isilon also makes it possible to delete large amounts of data and fix Active Directory permissions. Previously, we would have to create scripts and run them manually. It has also reduced our risk of data loss and given us the ability to recover from snapshots and replicated data.
What is most valuable?
We have data that is accessed from multiple operating systems, by different modalities and departments in our company. The ability to serve up that data to all those different platforms is very useful.
One of the best features of Isilon is its reliable performance and ability to report on its performance. Reliability is really important in our environment, with a 24/7 shop that serves patients. In many instances, data access is critical.
Prior to Isilon, we had to access data from multiple different platforms. This solution offers unified storage and the ability to consolidate and migrate data which was a big step forward. It allowed us to cut costs by eliminating multiple platforms, putting it all on one array.
What needs improvement?
Additional metadata reporting would be great. We have to use a separate tool to report on that. We would like to view the age of data and how long it has been since someone has accessed a file.
For how long have I used the solution?
I have used this solution for eight years.
What do I think about the stability of the solution?
This is a stable solution.
What do I think about the scalability of the solution?
This solution's scalability in an on-premise environment is impressive. We continue to throw large workloads at it and performance has been pretty stable. It has multiple nodes, which is useful when we have outages or code upgrades. We're still able to perform those without interruption of service.
How are customer service and support?
The EMC field support is great. They're easily accessible. We have a specific person we call which is invaluable. We are able to open tickets online instead of spending hours on the phone, no matter what day or time. The only challenge we sometimes experience is a language barrier.
How would you rate customer service and support?
Positive
How was the initial setup?
The initial setup for this solution is complex. The F900 uses Dell PowerEdge Servers instead of the traditional nodes. We needed to disable memory allocation features on those servers. When we did that, with EMC support, it brought the cluster down and it was down for a couple of weeks.
The deployment involved a storage analyst, data center analyst, and EMC staff. The data center analyst handled the power requirements and cabling requirements. There are 15,000 users across multiple sites.
This solution requires three people to handle maintenance. Maintenance requires verifying whether jobs are successful, identifying failures, and ensuring that replication is occurring correctly. We do regular creation and deletion of shares, files, and folders.
What was our ROI?
We are able to better handle and rein in budgets by making departments responsible for the data they consume under the grants they receive. The deduplication of data has freed up some of the storage costs we have traditionally experienced. Some of the newer technology allows us to store more data on less equipment, which means a smaller footprint in our data center.
What's my experience with pricing, setup cost, and licensing?
This solution is priced slightly higher than others on the market, but it offers good quality. With its data reduction and compression, we were able to purchase less capacity. Costs have dropped because of the compression and deduplication rates.
Which other solutions did I evaluate?
We evaluated Pure Storage but their support was unreliable. We need fast and reliable support, and EMC has always proven that when we have an outage, they're there to help us.
What other advice do I have?
The user interface is very simple to use. Support is critical when deploying this solution. When we were deploying the F900, there were a lot of problems that were beyond our scope. We frequently needed to touch base with system engineers from EMC.
I would rate this solution a nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Director of IT at NatureFresh™ Farms
Allows us to see everything as one large volume, instead of having multiple volumes all over the place
Pros and Cons
- "The single pane of glass for both IT and for the end-user is a valuable feature. On the IT side, I can actually control where things are stored, whether something is stored on solid-state drives or spinning drives... The single pane of glass makes it very easy to use and very easy to understand. We started at 100 terabytes and we moved to 250 and it still feels like the exact same system and we're able to move data as needed."
- "There aren't many templates still coming out for it. They need to provide templates so we can copy and paste what we've done in the past to future, new things."
What is our primary use case?
We used it originally for archiving our video storage, and then we expanded it to include user shares. All of our unstructured data has been moved to PowerScale.
We have now expanded OneFS to use local S3 buckets, which use the same API as Amazon S3 but let us host the data onsite.
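Because those buckets speak the same S3 API, standard tooling can simply be pointed at the on-site endpoint. Here is a minimal sketch with the boto3 library, where the endpoint URL, credentials, and bucket name are placeholders rather than values from our environment.

import boto3

# Placeholder endpoint and credentials for an on-premises S3-compatible
# service; substitute the real endpoint and an access key created for it.
s3 = boto3.client(
    "s3",
    endpoint_url="https://powerscale.example.internal:9021",
    aws_access_key_id="LOCAL_ACCESS_KEY",
    aws_secret_access_key="LOCAL_SECRET_KEY",
)

BUCKET = "camera-archive"  # hypothetical bucket name

# Upload a file and list the bucket, exactly as you would against Amazon S3;
# only the endpoint differs.
s3.upload_file("clip-2025-06-01.mp4", BUCKET, "2025/06/clip-2025-06-01.mp4")

for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"], obj["Size"])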
In addition, we added PowerProtect Data Manager to back up the shares and stores, giving us a backup of everything in another location.
How has it helped my organization?
We moved our shares over. Now, instead of taking up a large amount of space on a virtual machine, our shares live on one appliance. The load on that virtual machine is much lower, and it makes it easy to future-proof, because we won't have to move the shares again in the next server migration.
We have saved about 30 percent on storage with it. And as we grow, we get more space, meaning the efficiency improves each time we add a node. We went from 75 percent efficiency to 82.5 percent efficiency when we expanded.
The solution provides us with the flexibility to add the right tier of storage at the right time for data that resides at the edge, core, or cloud, which is really nice. In one use case we put it out at the edge, and it was nice to have the Isilon there: it helped with camera storage and got the data back to the core in a reasonable time. It allowed us to go from the edge to the core and then up to the cloud, instead of trying to go from the very edge straight to the cloud.
PowerScale also allows us to manage storage without managing RAID groups or migrating volumes between controllers. It simplifies the storage. It allows us to see everything as one large volume instead of having multiple volumes all over the place.
And when it comes to the business value of our data, it allows us to see what's being used and how it's being used, and we can do so much more quickly and efficiently. As a result, we can better evaluate how we're storing the data.
It has also helped us to reduce data silos. We used to have four video servers out there, all storing data. On the home farm, now, we're down to one server storing data in one location, and that includes all the user shares.
All our data is in one place and that has increased performance. We could never afford to put everything on solid-state, so we let OneFS decide, based on usage, where the data is stored: on a fast drive or on a slow drive. It does that automatically in the background, instead of us having to move data manually and then have users change where they get it from.
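Conceptually, the placement decision is just a policy over recency of use, along the lines of the Python sketch below. This is only an illustration of the idea; the real policy lives in the array's SmartPools configuration, and the tier names and thresholds here are made up.

import time
from dataclasses import dataclass

# Illustrative tier names and thresholds; placeholders for the policy the
# array applies, not how OneFS actually implements it.
HOT_DAYS = 30    # accessed within the last month: keep on flash
WARM_DAYS = 180  # accessed within six months: spinning disk


@dataclass
class FileInfo:
    path: str
    last_access: float  # epoch seconds


def choose_tier(f, now=None):
    """Pick a storage tier from time since last access."""
    now = time.time() if now is None else now
    idle_days = (now - f.last_access) / 86400
    if idle_days <= HOT_DAYS:
        return "flash"
    if idle_days <= WARM_DAYS:
        return "spinning"
    return "archive"


if __name__ == "__main__":
    sample = FileInfo("/ifs/shares/report.xlsx", time.time() - 45 * 86400)
    print(choose_tier(sample))  # prints "spinning": idle for about 45 days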
In addition, it has simplified management by consolidating our workloads. It's all done in the same portal now. And while it hasn't reduced our number of storage admins, it has definitely reduced the time we spend looking at it, so we can focus on other efforts. It saves me about five hours a week.
Another benefit is that it allows us to focus on the data rather than where it's stored. Now, we don't have to worry about moving it around from place to place to get efficiencies out of the data. We just have it all in one place. The single interface, the SmartPools policy, decides where it needs to reside.
What is most valuable?
The single pane of glass for both IT and for the end-user is a valuable feature. On the IT side, I can actually control where things are stored, whether something is stored on solid-state drives or spinning drives, as well as the access users get. But the end-user doesn't distinguish the difference between a file and its folder; the end-user doesn't have to see the difference.
The single pane of glass makes it very easy to use and very easy to understand. We started at 100 terabytes and we moved to 250 and it still feels like the exact same system and we're able to move data as needed. There are no performance issues based on how large the storage is.
Adding a node is as simple as racking and stacking the items. It takes about two to three hours to put it into the rack. Once you have it all wired up, it takes about an hour or 90 minutes with Dell to configure things and make sure it's all working. Then you just redefine your policy for where you want the items stored. We recently expanded to include solid-state, a full F200 node, and redefined where we wanted those files stored, whether on the super-fast solid-state or in the slow archival tier. Then, overnight, it ran that script and moved all the files around to help increase performance.
We also use the CloudIQ feature to monitor performance and other data remotely. It gives us better insight into where the data's stored and the access times involved. It gives me a better understanding of what's really being accessed and helps me decide what I can move to slower drives first, and what needs to stay in the front-end and remain very fast.
What needs improvement?
There still aren't many templates coming out for it. They need to provide templates so we can copy what we've done in the past and apply it to new things in the future.
The refresh of the interface in version 9.3 did help with a lot of this. They are at least improving it.
For how long have I used the solution?
I have been using Dell EMC PowerScale for about a year and a half.
What do I think about the stability of the solution?
It's very stable. It's one of the first solutions that I feel comfortable working with during the business day, while people are using it, knowing that I can change things and it's not going to take the system down.
What do I think about the scalability of the solution?
One of the things I like most about it is that we can scale out now. If we need more space, we order more nodes and the file structure just expands. There are no more individual drives, new arrays, or moving things around; it will just be there.
The future-proofing of what we're doing is a great thing too, because in five years when we're ready to replace that node, just due to its age, we can put the new one in and tell it to archive the old unit. It will move all the files over, in the background, and then we will just remove the old unit. There's no more having to tell users that, "Oh, this whole share is moving and all this stuff is getting done."
How are customer service and support?
The technical support has been really good. It's pretty intuitive to put a ticket in, both through their email and through the calling system. It's usually pretty seamless to get to talk to somebody to actually resolve the issue.
Which solution did I use previously and why did I switch?
Before PowerScale it was just MD Storage Arrays, the standard, and the LUNs that you'd have anywhere. We eliminated that with this. We originally started with PowerScale for our video system. We were looking for a better system, in the long-term, to store our archival video and process it. We looked at unstructured data solutions and picked PowerScale for that and for the future-proofing.
Also, because we are a large Dell EMC shop, it allowed us to keep it all on the same platform. In looking to do things on a larger scale, it allowed us future compatibility, much more easily. Its ability to meet unpredictable future storage needs looks great. It feels like a great solution and it was the right direction for us.
How was the initial setup?
The first setup was pretty complex and a little different to do. Once we had the core system set up, the next deployment was much easier. The complexity came from changing our thought process, internally, regarding how we store files and how unstructured data really works, and then, how to efficiently use this.
Our deployment took about a week. We did a slow move-over, and we still continue to move anything we find over to it.
In terms of administration of the solution, for the most part it's just me who does a lot of the core work. All the users on the farm are using the system now, meaning about 350 people are accessing the data on the Isilon.
What about the implementation team?
We used the reseller, Dell EMC, for the deployment, and it was a great experience. They were there to help us and make sure we understood where we were going and what we were doing.
What was our ROI?
The fact that, with PowerScale, we could start with a few nodes and scale very large made it very cost-efficient for us. It allowed us to start out, see what it can do, and evaluate the product before we made a larger investment in it. We invested in it again three months later.
I'd like to say we have seen ROI because we're feeling like we're really starting to store data better and understand what's going on, more than we did a year-and-a-half ago.
What's my experience with pricing, setup cost, and licensing?
It's one of those situations where you have to find the right price for you. When we talked to the reseller, we were able to negotiate the right price for what we needed.
Which other solutions did I evaluate?
We looked at HPE and IBM.
I liked the interface of the PowerScale much better than the other ones. It was more intuitive. I logged on and could almost get to work with it right away. I felt like I could hop on and just start using it, whereas with the other ones I felt that there was a larger, steeper learning curve.
What other advice do I have?
Dell EMC keeps adding more features to the solution's OneFS operating system. The most recent addition was CloudPools, which allows us to back up to the public cloud the data we want to keep but no longer need on-prem. It has turned the system into a never-ending resource: we can now decide what we want to keep long-term without having to expand our storage system.
PowerScale is one of those things that will grow in your environment. Once you start with one thing, you'll quickly learn that it can do much more. That's the great thing about starting small with it: you can expand very quickly later on.
Which deployment model are you using for this solution?
On-premises
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Google
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.

Popular Comparisons
Dell PowerStore
Red Hat Ceph Storage
IBM FlashSystem
Pure Storage FlashBlade
NetApp FAS Series
Amazon EFS (Elastic File System)
HPE 3PAR StoreServ
Hitachi Virtual Storage Platform
Nutanix Unified Storage (NUS)
NetApp StorageGRID
Quick Links
Questions:
- EMC Isilon vs. Sonexion Scale-out Lustre Storage System
- How to backup Dell EMC PowerScale (Isilon) with Veeam or an alternative tool?
- What is the biggest difference between EMC Isilon and NetApp FAS Series?
- How would you compare the performance of DDN Storage vs Dell EMC PowerScale (Isilon)?
- When evaluating NAS, what aspect do you think is the most important to look for?
- What is the difference between NAS and SAN storage?
- What are the top 8 Network Attached Storage (NAS) devices?
- What advice do you have for people considering NAS storage?
- What is the best way to migrate shares from Windows Cluster Server to Cohesity?