All-Flash Storage Arrays Configuration Reviews

Showing reviews of the top-ranking products in All-Flash Storage Arrays that contain the term "Configuration".
NetApp AFF (All Flash FAS): Configuration
Systems Management Engineer at a legal firm with 201-500 employees

In simple terms, you just rack the hardware, load the code, and it's ready for configuration. That is pretty straightforward.

Sarith Sasidharan - PeerSpot reviewer
System Administrator at a government with 201-500 employees

It has a good interface. Its configuration and flexibility are also good.

MohanReddy - PeerSpot reviewer
Sr. Technology Architect at a pharma/biotech company with 10,001+ employees

I do remote support, so I'm not working on the data center side. We have an on-site team that could better describe the installation and deployment. However, my impression is that deploying AFF is straightforward. 

The architect is the main person working with the NetApp products, and he does a deep dive before touching any product. Our team has minimal exposure to NetApp because our work involves a mix of vendors. We have people working on the NetApp side, but not regularly. The architect spends a lot of time on NetApp in his day-to-day activities, and he makes the changes. He gives and receives recommendations about which product to use, whereas we provide remote support from a different region altogether. The implementation, changes, configuration, and decision-making are all done from headquarters.

And once it is implemented, the remote team logs in and does the navigation part. We check the array and identify any problems. If we find anything, we immediately reach out to the architect. He's the one who engages with NetApp and relays information to the remote team. That's how we learn as an organization. We spend time on the products to gain knowledge and experience with vendors.

Senior Consultant at a tech services company with 10,001+ employees

I have also worked on the IBM DS8000 series and some similar products from EMC.

IBM had released the 8700 with the AFF configuration. However, I was with another company at the time. The majority of my experience is with NetApp using the CLI, but with the IBM product, I was using the GUI. I prefer the CLI in both systems.

With respect to the pros and cons between the vendors, it is difficult for me to judge. Each filesystem has its benefits, depending on the vendor and the technology they use.

AIX and Storage Specialist at a computer software company with 1,001-5,000 employees

The initial setup is straightforward for the customer. We need to do more in-depth disk management and understand how the disks will be distributed. Otherwise, it is simple.

The implementation of NetApp with CIFS and NFS is quite quick to deploy. When they came out with the latest models, they provided us with three protocols. Going forward, this will be very useful.

It takes one to two days to deploy NetApp AFF. Apart from the basic configuration, there are many things that need to be done for the integration part, like antivirus integration, LAN configuration, and NDMP configuration. Those all take time. So it can be done in two days, but it might take more time depending on what needs to be done.

Pedro Paz - PeerSpot reviewer
System Engineer at Eni Energies et Services

The initial setup is complex. It should be easier.

The initial deployment took three days, and that was working on it two or three hours a day. We got two appliances, 2750s, at the end of last year and we completed the setup about three weeks ago. We set up the volumes and the v-servers. We are currently configuring the system and, in the next month or so, the appliance will be done and it will be transferred to the new site offshore.

Our deployment included initializing all the disks and doing the network configuration setup: the IPs, the netmask, the gateways, the DNS, et cetera. Then we had to apply the licenses for all the services. Next, we had to create the volume structure. Then we could start mounting the volumes on other devices so that we could integrate the storage itself with the rest of our system.

We have five people working on the solution.

SolidFire: Configuration
Cloud Architect at a computer software company with 51-200 employees

I'm also familiar with Pure, NetApp, and VNX.

Pure's arrays use a more traditional controller architecture, as opposed to the distributed architecture of SolidFire. It's also all-flash, just like SolidFire. It's even simpler than SolidFire in terms of deployment and management. They've got an active controller configuration so that upgrades are essentially transparent as you upgrade a node or scale. It's just the way the architecture is designed on the back end.

Pure Storage FlashArray: Configuration
DavidGrudek - PeerSpot reviewer
Systems Engineer at a tech services company with 1-10 employees

Only two weeks ago we set up a new solution in a new location that we're building. It's pretty straightforward. There are certain internal matters that only the vendor can handle, but that's fairly common with most good storage arrays. Besides this, it's really easy. The vendor is really simple to work with. You only need to provide them with a list of the IPs to use for management and replication.

I did not do the initial setup myself, as I'm in Chicago and it is handled in Omaha, Nebraska. I did have to coordinate everything, however. We were sent a form to fill out with the names and IPs to use. A technician's visit is then scheduled, and they ask where the rack should be placed. The array is then racked and cabled up, and all the initial IP configurations are introduced. At that point, you can take over and start carving out the volumes you want or creating the vVols, if desired. The process is really simple.

The technician's visit lasted an hour and a half. I've been doing this for a long time, so it took me perhaps another hour to configure everything, although the level of involvement can play a factor. We created only two, plus a vVol. Like I said, it's really easy.

Technical Consultant at Injazat Data Systems

The initial configuration, the base configuration, is really simple. Compared to other products, it's really easy. The racking and stacking will take time, but that has more to do with the data center operations, not specifically the Pure Storage configuration. I'll say this: once the base configuration of the storage array is connected with those cables and the IP is assigned, the setup process won't take long. It won't take more than 30 to 45 minutes to complete the base configuration, and then it will be ready for provisioning the contract data.

General Manager at PRACSO S.R.L.

We use FlashArray for proofs of concept and to let clients test on our infrastructure. We have six clients using the solution. Most of them are using the ActiveCluster configuration.

IT Contractor at a financial services firm with 51-200 employees

The initial setup is more complex than the Nimble. Most of the configuration must be done by the Pure engineer at the back end.

It took two or three weeks to deploy the solution.

You only need one person to deploy and maintain the solution.

Itamar Garcia - PeerSpot reviewer
SAP Services Manager at Think about IT

I'm a Pure Storage fan because I think this solution offers great performance. It provides fast access, is user-friendly, and it's easy for us to support this product. The product is also very powerful, with high-speed deduplication, which makes it easier to manage things. I understand that there is a new version of the operating system that includes a self-protect configuration to avoid getting hacked. It's crucial for us here in Brazil, where we need to be safe.

HPE Nimble Storage: Configuration
Chris Childerhose - PeerSpot reviewer
Lead Infrastructure Architect at ThinkON

The initial setup is straightforward. You connect the array to the network and power it up. You then run the Nimble Setup Manager which will detect the array on the network and allow you to complete an initial configuration. Once that is done you can then use the Web UI to finalize the configuration.

Systems Engineer at a tech services company with 51-200 employees

The initial setup is very simple, and in fact, the simplest of these solutions.

It takes approximately four hours to deploy including hooking up power, cabling, getting it set up on the rack, and configuration.

Infrastructure Specialist at a tech services company with 1-10 employees

We had some issues with the configuration and we contacted support. We were satisfied.

Technical Manager at a manufacturing company with 11-50 employees

The solution should allow for easier configurations.

Head Of Information Technology at Zambia National Building Society

We are just a user.

I'm learning to understand it. I originally was not a support administrator; however, over time, I found that it's actually straightforward. I'm looking forward to getting certified or trained in Nimble.

I would advise people to go for it. They just need to make sure that they understand the support around Nimble, in terms of knowing how to support the solution themselves. Obviously, you don't want every call that comes to you to require phoning Nimble to handle it for you. You want to resolve most issues quickly yourself and escalate only when necessary.

Also, ensure that you look at the configuration or proposed architecture from Nimble in terms of the best implementation approach, where you have the StoreOnce and the other supporting components, to get it right. Sometimes people rush to make it cheaper; however, that can eliminate certain key components, such as backup mechanisms.

So far, that solution is great. I'd rate it nine out of ten.

HPE 3PAR StoreServ: Configuration
Ricky Santos - PeerSpot reviewer
System Administrator at ON Semiconductor Phils. Inc.

The cloud-based monitoring tool, InfoSight, would be better if users were automatically enrolled in the cloud group based on the configuration information gathered or uploaded over the internet.

The auto-discovery of the system is not easy for first-time users.

Presales Engineer at a tech services company with 51-200 employees

Technical support is perfect.

We did a comparison in the market, and HPE was the best service provider for regular configuration and for remote configuration.

Senior IT Infrastructure & Data Center Operation Engineer at Ministry of Communications and Information Technology (MCIT), Egypt

It is straightforward. I power on 3PAR and take care of the cabling. 3PAR is managed by two components: a services processor and a server component. The server can be a virtual appliance or a physical appliance. 

For upgrading, I take different configurations from the services processor. I update a package on the servers, which makes it easy to upgrade in production. For initial configuration, I do an upgrade offline, and it is easy.

Systems Engineer at a tech services company with 51-200 employees

The initial setup is very simple.

The installation and configuration are quick. You can complete them in one or two hours, which is fast.

Storage Infrastructure Engineer at Cambridge Health Alliance

The configuration and flexibility should improve.

Solution Sales Manager at a computer software company with 11-50 employees

HPE has made a new system to substitute for 3PAR: the HPE Primera solution. It is a product similar to 3PAR, but it's more converged, together with the hybrid system. You can implement it on-premises or in the cloud. It has cloud connect functionalities, and the market is going that way.

If you want a solution better than HPE 3PAR StoreServ, you should use Primera.

I would advise those wanting to implement this solution to use a partner that has strong competencies. They need to be technical experts, both in hardware and software. Customers cannot implement this solution themselves.

In the beginning, it was allowed for customers to do the installation, but after one or two years, HPE decided that it was not allowed. It is not allowed today. You need a certified person to do the installation and configuration.

I rate HPE 3PAR StoreServ a nine out of ten.

Team Leader Presales at a comms service provider with 51-200 employees

The product is quite expensive. The costs involved are high.

We would like configuring storage to be a more straightforward procedure. There are quite a lot of options and parameters that we don't have knowledge of. We used an external company to do the configuration of the 3PAR storage for us.

Hitachi Virtual Storage Platform F Series: Configuration
Engineer at Secretaria de Educacion del Gobierno del Estado de Mexico

I have worked with this equipment for the last two years. When I worked at Hitachi Data Systems, I worked as a data architect and designed complete solutions, so I had a lot of interaction with the clients. I handle the solutions, the disk capacity, configuration, initial setup, definitions of the DP pools, assigning the volumes, creating the entire SAN, etc. I also manage the SAN switches. I worked for Pearson, Sonic, Mobiistar, Macromer, Mynorte, and Santander. Every time, I created the whole environment, both open systems and mainframe, in development. For example, at Ponavid, I created the whole solution, assigned the space, performed all the troubleshooting, and supported all the hardware and the performance.

Senior Manager -Datacenter Planning and Operations at a comms service provider with 1,001-5,000 employees

The installation and initial setup are not straightforward. We had to do a lot of configuration and required assistance from the vendor to complete it. In particular, there was some confusion about certain things.

The location and the sizing were also things that we had to plan for internally. 

Paolo Brega - PeerSpot reviewer
Business Developer Manager & Marketing Manager at TAI Software Solution

We have had only one instance of disk failure, and we received some calls from them asking about the incident and applying a new update. They were very good.

Their support team is absolutely on top of my list due to their knowledge and effectiveness. They can really assist you in any case. If you need any assistance with the configuration, they will be able to assist you. I would rate them a five out of five.

IBM FlashSystem: Configuration
Storemgr67 - PeerSpot reviewer
Storage Manager at a financial services firm with 10,001+ employees

They can improve its initial configuration. The initial configuration is currently very difficult. There are multiple choices or alternative ways to configure based on the use case and what you are targeting out of the device, that is, more capacity or more performance. These multiple alternatives cause a lot of confusion.

They should increase the processing part of the nodes. Currently, you can cluster up to eight nodes. From my experience and the workload that I am facing in my environment currently, I would like to see either a bigger or stronger node or a larger number of nodes that can be clustered together. We formally communicated to them that we need to see either this or that, and they are working on something.

UNIX Security Consultant at a retailer with 1,001-5,000 employees

The solution has a low number of NVMe host attachments, at 16 per IO group over Fibre Channel. This is far lower than competing products.

The 8.5 release for the 7300 and 9500 Flash Systems no longer allows IO group migrations. The replacement volume mobility is not as seamless as IO group migrations.

The Kubernetes CSI driver and the OpenStack Cinder driver still rely on SSH instead of native APIs for configuration changes. This reduces the limit of outstanding configuration changes that can be submitted to the storage in bulk.

The solution has not yet adopted Swordfish APIs, and its SMI-S APIs are legacy and deprecated. Swordfish consists of vendor-independent APIs made by the Storage Networking Industry Association (SNIA) that allow you to manage storage regardless of vendor. These new-generation APIs were released after ten years, but IBM has not yet jumped on board. With a multi-vendor environment like ours, implementations are easier with universal APIs.
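Because Swordfish builds on a plain REST/JSON model (it extends DMTF Redfish), a vendor-independent client can be very small, which is part of the reviewer's point. A minimal sketch in Python, assuming an illustrative Volume payload: the field names follow Redfish/Swordfish conventions, but the resource path and values here are made up.

```python
import json

# Illustrative Swordfish-style Volume resource (made-up values; field names
# follow the Redfish/Swordfish JSON conventions).
SAMPLE_VOLUME = json.dumps({
    "@odata.id": "/redfish/v1/Storage/array1/Volumes/vol0",
    "Id": "vol0",
    "Name": "prod-lun-01",
    "CapacityBytes": 1099511627776,  # 1 TiB
    "Status": {"Health": "OK", "State": "Enabled"},
})

def summarize_volume(payload: str) -> str:
    """Produce a one-line, vendor-independent summary of a volume resource."""
    vol = json.loads(payload)
    gib = vol["CapacityBytes"] / 2**30
    return f'{vol["Name"]}: {gib:.0f} GiB, health={vol["Status"]["Health"]}'

print(summarize_volume(SAMPLE_VOLUME))  # prod-lun-01: 1024 GiB, health=OK
```

The same parsing code would work against any array exposing Swordfish, whereas SSH-based automation has to be rewritten per vendor CLI.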

Red Hat Enterprise Linux clones such as CentOS, AlmaLinux, or Rocky Linux are not supported. All are binary-compatible and should be supported because they are fundamentally the same product with different branding.

It would be helpful to have a public page listing the minimum supported firmware levels for HBAs from different vendors. We have run into bugs with fiber channel cards that were solved with firmware updates. It was a laborious process to cross-reference vendor information so it would be helpful for IBM to provide recommended baselines for firmware. 

AANKITGUPTAA - PeerSpot reviewer
Consultant at Pi DATACENTERS

The initial setup is complex because it requires a more extensive knowledge base. There are multiple parts, from the storage and data sides, and the configuration and zoning of each of them.

Dell Unity XT: Configuration
Assistant Manager Specialist at a computer software company with 1,001-5,000 employees

The interface and configuration could improve.

Evangelos Nikolaidis - PeerSpot reviewer
Freelance IT Professional at an energy/utilities company with 201-500 employees
  • One of the most useful features for us was the deduplication. It had been challenging for us to store certain types of data and to use patterns of storage to reduce storage size.
  • The IOPS and the speed were also an important part of the solution.
  • In addition, there was a Unity machine that offers block-level and NAS, and we used the block-level storage.
  • We also use the site-to-site storage replication for the recovery site.
  • Finally, the device was flexible and we could change the configuration to meet our needs.
Consultancy Department Chief with 201-500 employees

The initial setup was completely straightforward. Anyone who can handle "next, next, next, finish" can deploy Unity. It was deployed within half an hour.

We installed it, connected it to fabric, updated the software, went through the initial configuration dialogue, and that was it. Most of the time in getting it up and running was spent on the data migration.

For deployment and maintenance it requires just one person. In our organization that person is our storage and virtual infrastructure administrator.

Francisco Gimo - PeerSpot reviewer
Management Information System Officer at a mining and metals company with 501-1,000 employees

We used Dell representatives for the implementation of the solution but ended up doing the configurations ourselves without needing the support. We did 80 percent of the deployment ourselves. We had two people involved in the deployment, and the solution has not required any maintenance so far.

We deployed not only the one Dell Unity XT but a bunch of them. It took us approximately one week for everything to be finished. Additionally, we deployed some hosts, switches, and other systems.

Huawei OceanStor: Configuration
Technical Support Executive at a comms service provider with 5,001-10,000 employees

We are a distributing partner of Huawei. I have experience in terms of installation, configuration, and professional services.

I'd rate the solution at a nine out of ten. 

I'd recommend the solution to others. I'd also advise on Dorado Storage, however, they would need to use a professional service. 

Yordan Velez Rodríguez - PeerSpot reviewer
Architect of solutions at Datrix

OceanStor could use improvement with configuration and synchronizing with other vendors. There is some latency in the migration due to the amount of data. 

I would like to see more integrations, including direct integration with solutions like Docker and Kubernetes. Also, if a customer needs to build a special solution or run an additional process for analyzing data directly on the storage using data technology, they should be able to do that.

Data Center Engineer at Emerging Communications Limited

Huawei OceanStor could improve the deployment stage, configuration, and documentation. It is difficult. Huawei OceanStor is not similar to Cisco or Dell; the documentation and procedures are not readily available to customers.

I would rate the difficulty level of Huawei OceanStor a three out of five.

Huawei OceanStor Dorado: Configuration
Solution Delivery Expert at a tech services company with 11-50 employees

The installation and configuration are quite simple.

The length of time required for deployment depends on your scenario. If you deploy it as a DR site, it will take between three and four hours to complete. This includes all of the features.

Dell SC Series: Configuration
Technical Manager at a manufacturing company with 11-50 employees

The configuration could be easier in Dell EMC SC Series.

Lenovo ThinkSystem DM Series: Configuration
Technical Specialist at Pouyan Pardazesh Tehran Co

The Lenovo ThinkSystem DM Series was very hard to configure. Installation of the product was very hard. These are its areas for improvement.

It would be easier to configure initially if you have direct access to the panel, so if they could remove the APIs for configuration, that would be preferable.

Lenovo ThinkSystem DE Series: Configuration
IT Consultant at a tech services company with 51-200 employees

The best thing about this solution is its price, which reduces our costs. Additionally, this solution's ease of configuration and management are valuable features.

Project Analysis, Design & Implementation at a comms service provider with 11-50 employees

I do the initial configuration of Lenovo ThinkSystem DE Series and then the customer manages the whole storage solution.

HPE Primera: Configuration
Asanka Karunanayake - PeerSpot reviewer
Head of Hosting & LAN Services at Lanka Communication Services (Pvt) Ltd.

The solution is quite straightforward. It's not complex. We had already done the sizing and we had already done the planning. The HP engineers were well aware of our needs. In the end, we just plugged in and gave them access, and they themselves did the configuration as per the initial requirements.

Service Manager at a tech services company with 10,001+ employees

I used Pure Storage Flash Array in the past, but our current customer uses HPE Primera for storage.

The maintenance of HPE Primera doesn't require staff, but we had one engineer who took care of its installation and configuration.

I'm rating HPE Primera nine out of ten.

Dell PowerMax NVMe: Configuration
VP Global Markets, Global Head of Storage at a financial services firm with 10,001+ employees

The deployment process is a standard procedure for deploying SAN, and that's with any vendor. I'd say that the process wasn't any different from deploying another solution. We've got our architecture and our blueprints. We worked with a solutions architect and that design drives the configuration, and then we go ahead and deploy that configuration.

Deployment took around three months. Some of this was due to internal processes, timing, and pandemic conditions. Over December, we were hampered with end-of-year change control freezes in place so some of the activity couldn't get done. All in all, I'd say we probably could have been done in about six to eight weeks.

I had three people working on this internally (not counting the non storage resources) as we deployed to two geographies in different time zones. 

Maintenance is just ongoing service, and that'd be the same as any technological asset. It has a mean time before failure. We monitor it on a daily basis, and alerts are actioned with the vendor. However, the platform does have five-nines availability and multiple layers of redundancy.

Storage Team Manager at a government with 10,001+ employees

For us, it's straightforward to set up. We've been doing this for a long time, so it's really easy for us to set up a new array in a data center. We had one that hit the dock about two weeks ago and it's already up and running and provisioning to customers. 

NetApp will say, "Well, that's two weeks. We can come in and do it in one day." But we explain, "No, you can't because there are internal processes that we have to go through." Every piece of equipment we get, even the PowerMax, goes through its paces. We don't just turn it on and hope for the best. We check and double-check all our configuration settings. But overall, PowerMax is easy to set up. They configure it at the factory, deliver it, put it in the data center, and then we hook it to our Fibre Channel fabric and Ethernet fabrics and we're good to go. Competitors will say, "Well, it's so much easier to migrate from one array to another on our platform, versus the Dell EMCs." That's not necessarily true. We have to look at what they are actually measuring and whether we are comparing apples to apples.

With VPLEX, we can do migrations on-the-fly, live. It's no longer a six-month to one-year effort to get off of one array and move to another. We just bring the other array in, present it to VPLEX, and VPLEX takes it from there.

For a new deployment of one PowerMax, we need one FTE. On a day-to-day basis, to manage all of our PowerMaxs, we need three FTEs. But that is across two different data centers with a total of 10 PowerMax/VMAX units. It's a pretty big installation. Across our organization we have 55,000 employees. Since our HR is on this solution, and that's how people get paid, it's like we have 55,000 people using it, in a sense. Most access is through an application, but in another sense, it's used by pretty much everybody in the state.

Jeff Dao - PeerSpot reviewer
Infrastructure Lead at Umbra Ltd.

With the SCM memory, it has been "set it and forget it." It is being used as a cache drive. There is very little configuration for us to do. We just know that it is working.

PowerMax NVMe's QoS capabilities give us a lot of visibility into taking a look at what could be a potential performance issue. However, because it is so fast, we haven't really noticed any slowdowns from the date of deployment even until today.

It is a very good storage appliance for enterprise-level, mission-critical IT workloads because of its high redundancy and parity drives. It gives us the ability to not worry about our data. If something were to go wrong, e.g., a drive pops, then we have our mission-critical warranty. We get a drive the same day and have it swapped by the next business day at the latest.

PowerMax NVMe has made it a lot easier to understand how much we are able to provision. It has made it a lot faster to provision new things. 90% of my time for provisioning has been reduced. Also, it has made it very easy to understand and see everything behind it versus the older heritage, where Dell EMC was very convoluted and hard to get working. Things that used to take an hour, probably now take five to 10 minutes.

Product Manager at a tech services company with 10,001+ employees

It is important for our clients that PowerMax provides NVMe scale-out capabilities. They are also getting great performance as compared to the old storage array model. 

Provisioning is faster and immediate. We can do immediate allocation and configuration. As compared to the old storage array model where it used to take half an hour, in PowerMax, we can do it in 5 to 10 minutes. It doesn't take that much time, and there isn't much delay in the PowerMax array.
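As a rough sanity check on those numbers (illustrative arithmetic only, using nothing beyond the times quoted above):

```python
def pct_faster(old_minutes: float, new_minutes: float) -> float:
    """Percentage reduction in provisioning time."""
    return (1 - new_minutes / old_minutes) * 100

# Times cited above: ~30 minutes on the old array vs. 5-10 minutes on PowerMax.
print(f"{pct_faster(30, 10):.0f}-{pct_faster(30, 5):.0f}% faster")  # 67-83% faster
```

So "half an hour down to 5-10 minutes" corresponds to roughly a two-thirds to five-sixths cut in provisioning time.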

Our workload is reduced because we are not dealing with any issues. We are not facing many issues on the PowerMax side as compared with the previous one.

Vince Vitro - PeerSpot reviewer
Sr Solutions Architect at a healthcare company with 1,001-5,000 employees

Setting up PowerMax is definitely complex. The initial configuration of the array itself is pretty simple, but once you start trying to connect hosts and set up replication, then it becomes a lot more work than it probably should be. It took a couple of days for the initial setup, but after that, there has been some ongoing work as we put more and more on there. 

Haseeb Sheikh - PeerSpot reviewer
Manager Private Cloud Solutions at ufone

The SRDF site-to-site replication for the volumes is the most important feature for us. That enables us to do site recovery and replication for our VMware infrastructure.

Along with that, the NVMe response time is very good. We used to have a VMAX 20K but we have just upgraded, and moved two or three generations ahead to PowerMax, and the response time is great. Because we are coming from a hybrid storage scenario, the performance of NVMe is a huge upgrade for us. The 0.4 millisecond response time means our application works great and we are seeing huge performance improvements in our VMware and physical environments.

Regarding data security, Dell EMC has introduced the CloudIQ solution with the PowerMax environment, and that enables live monitoring of the telemetry and security data of the PowerMax array. CloudIQ also has a feature called Cybersecurity, which monitors for security vulnerabilities or security events occurring on the array itself. That feature is very helpful. We have been able to do some vulnerability assessment tests on the array, which have helped us resolve issues regarding data security and security vulnerabilities. We are not using the encryption feature of the PowerMax because we didn't order the PowerMax configuration for it.

CloudIQ helps the environment and lets us manage the respective connected environments. A good feature in CloudIQ is the health score of each connected infrastructure. It gives you timely alerts and informs you when a health issue is occurring on the arrays and needs to be fixed. Those reports and health notices are also sent to Dell EMC support, which proactively monitors all the infrastructure and they will open service requests themselves.

In terms of efficiency, the compression we are currently receiving is 4.2x, which is very good efficiency. We are storing 435 terabytes of data in just 90 TB. In addition to what I mentioned about the NVMe performance, which is very good, we were achieving 150k IOPS on the VMAX, but on the PowerMax the same workload is hitting 300k-plus IOPS. That is sufficient for the workload and means the application is performing as required, according to the SLAs as defined on the PowerMax.
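The capacity figures quoted above can be checked with simple arithmetic. Note that dividing the cited capacities gives roughly 4.8:1 overall, somewhat higher than the quoted 4.2x; one plausible reading (an assumption, not stated in the review) is that 4.2x is compression alone, while storing 435 TB in 90 TB also reflects other reduction such as deduplication.

```python
def reduction_ratio(logical_tb: float, physical_tb: float) -> float:
    """Overall data-reduction ratio: logical data stored per unit of physical flash."""
    return logical_tb / physical_tb

# Figures cited above: 435 TB of data stored in 90 TB of physical capacity.
ratio = reduction_ratio(435, 90)
saved_pct = (1 - 90 / 435) * 100
print(f"{ratio:.1f}:1 overall reduction, {saved_pct:.0f}% physical capacity saved")
```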

When it comes to workload congestion protection, we have not faced any congestion yet in our environment. We have some spikes on Friday evenings, but they are being handled by PowerMax dutifully. It can beautifully handle up to 400k IOPS, even though it is only designed for 300k IOPS. That is another illustration of its good performance.

View full review »
Enterprise Architect at a healthcare company with 10,001+ employees

The initial setup was straightforward. We received great support on the ground from Dell. They are familiar with our data center and were able to prepare the site to install the equipment without any delays, so everything ran on schedule. Deployment took one day.

We worked with a project manager both internally and from Dell, and we ensured that we had all the necessary power, networking, and other connectivity ready for deployment to take place on schedule.

The deployment involved myself and another engineer, as well as Dell engineers. It took several days to get the configuration right from a layout perspective, but overall, it was straightforward.

View full review »
IBM FlashSystem 9100 NVMe: Configuration
Greg Brown
Consulting Client Executive at Jeskell Systems, LLC

It is very easy. We have a few novices, and even for them it is extremely easy. What it comes with today, it didn't have years ago. The current version ships with all the defaults, meaning you can plug it in and it will immediately satisfy a typical user. There is still fine-tuning, because a generalized configuration is not applicable everywhere; a setting that is typical and good for the majority of cases may not fit yours. Once you get it installed, you can start looking at the defaults and tweak them a bit. But because it comes with all the defaults, you could basically plug it in and use it. It is very easy to install.

The product is easy to install, but the data is hard to migrate. Any data that lives on existing systems has to be migrated from the existing storage, and you can't have a company go offline while the data is being moved. So the installation part is relatively simple, but getting the data over from the existing storage can be challenging. With this product, IBM provides a data migration facility to simplify the migration and allow applications to stay online while the data is being migrated.

In terms of maintenance, the machine is extraordinarily easy to maintain. The maintenance of it is almost self-contained, and there are several reasons for that. 

The first reason is that the modules simply don't fail. The major problem with disk maintenance has always been hard drives failing. Anyone who has had a laptop with a hard drive knows it can fail over time. That's why people back their data up. But with this new technology that IBM has introduced, the modules don't fail anymore. And if the drives don't fail, the major maintenance issue with storage goes out the window.

Another reason is that IBM provides proactive maintenance. If anything looks out of sorts, they dispatch parts and service people to fix it before it actually causes a problem at the user site. They have a monitoring system that can see error rates increasing before those errors cause an outage. That proactive maintenance is very helpful. When you do replace something, it's just a plug-in module, and it doesn't impact the application, which continues to run during the maintenance. That's really helpful. It is called concurrent maintenance, meaning you can repair something without causing an outage.

View full review »
Dell PowerStore: Configuration
Duco Rob
Founder and CEO at Desktoptowork

The setup process could be improved. We had some issues regarding configuration and the time it took to do things. It wasn't specifically the people we worked with, but more the process and how it's done. They can work on that.

I'm not satisfied with the process they used for the setup or the timeframe within which everything was done. For example, there were things that we needed to do together with Dell EMC. There were three meetings for doing three specific things, but we could have done them in one meeting. That's why the duration, from the moment we bought the hardware until we were able to use it, was two or three months. We had expected to be able to use the hardware sooner, but because of the implementation process, it took us longer than we wanted.

View full review »
Engineer at a financial services firm with 1,001-5,000 employees

Dell EMC's support was very efficient during the setup of the entire solution. They handled the complete setup in terms of connections and also helped us with the software configuration and moving data from our old solution into the new solution.

View full review »
Solutions Architect/ Consultant at IVT

This solution enables us to add compute and capacity independently, although we have not had to change our configuration.

PowerStore uses machine learning and automation for optimizing resources but we are just starting with it, so I don't know much about these features.

I would rate this solution a nine out of ten.

View full review »
Pavilion HyperParallel Flash Array: Configuration
Manager of Platform Software at a healthcare company with 51-200 employees

In our current configuration, we can only run the line controllers in high availability, active-standby mode, whereas we would like to see active-active implemented. That would get us more performance with a given number of line cards.

Their global namespace support is coming, and I believe it is based on NFS 4.1. We have a mix of Linux and Windows usage in the company, and getting an NFS 4.1 client on Windows is currently difficult because I don't think it's supported. This is not an issue with the Pavilion product directly; it's more a matter of the general environment. We would essentially like to see a Windows NFS 4.1 client supported so that we can take advantage of the Pavilion feature from both platforms.

Having a little more ease of use with the NFS global namespace vis-a-vis Windows would be an improvement.

View full review »
DDN IntelliFlash: Configuration
Lead Systems Engineer at a retailer with 5,001-10,000 employees

I wouldn't say I like anything about this solution. We are looking for a replacement with Dell EMC and Pure Storage. Tegile's performance, support, and features are horrible. It's going down.

The company has been bought multiple times. It looked okay at one point in time, about four years ago. Even though it wasn't one of the best, it still looked okay. But since the management has changed several times, it looks like it's going down the drain.

Performance is horrible now. Our original intent was to buy new storage in about two years, but since it became critically urgent, we decided to purchase a new one within two or three months.

It would be better if they improved the codebase. We very often have issues with their code, and I think that is the main pain point. The hardware is also horrible, because we frequently have either a controller failure or a SATADOM failure. Now and then, we also have a disk failure.

They have to get their act together. They have to make sure their hardware is robust, they have to make sure their code is good, and then we can think about new features and functionality. 

First, make the unit run properly, and then we can think about additions. Obviously, their support has to be knowledgeable, because when I told them, "We have latency issues; come troubleshoot them for us," nobody came. But if we tell them that we need to do a firmware upgrade, they say, "Okay, let's do a firmware upgrade." They come to do the firmware upgrade, and then they go. But even with the firmware upgrades, you never know when it will work properly and when it won't.

If there is a disk that needs to be replaced and we ask them to replace it, they'll say, "Okay, just share the remote session with us, and we'll run some commands to validate which disk is faulty. If it's really faulty, we will send the disk." We do that, and then they find the faulty disk and send a replacement.

They will do these minor things, but that's not what we are looking for; we are looking for more. For example, if there is latency, they should try to help us out and help the customer find where the latency is. It doesn't necessarily have to be in the SAN storage; it might be a configuration issue, or it might be something else. So they should help the customer find where the issue is. Unfortunately, that is not what we are getting from them, so they have to improve that a lot.

View full review »
Zadara: Configuration
Steve Healey
CTO at Pratum

One of the most valuable features is its integration with other cloud solutions. We have a presence within Amazon EC2 and we leverage compute instances there. Being able to integrate with compute, both locally within Zadara and with other cloud vendors such as Amazon, is very helpful, while also maintaining extremely low latency between those connections. We have leveraged 10-Gig direct connections to hook up the storage element within Zadara with cloud platforms such as Amazon EC2. That is one of the primary technical driving factors.

The other large one is the partnership and the managed service offering from Zadara. That means they have a vested interest and are able to understand any issues or problems that we have. They are there to help identify and work through them and come to solutions for us. We have a unique workload, so problems that we may have to identify and work through could be unique to us. Other customers that are just looking to manage a smaller amount of data would not ever identify or have to work through the kinds of things we do. Having a partner that is interested in helping to work through those issues, and make recommendations based on their expertise, is very valuable to us.

Zadara's dedicated cores and memory provide us with a single-tenant experience. We ourselves are multi-tenant, in that we manage multiple organizations and customers within our environment, and we send all of that data into that single-tenant environment within Zadara. We have a couple of different virtual private storage arrays, a couple of them in high-availability mode. The I/O engine type we're leveraging is the 2400s.

We also have disaster recovery set up on the other side of the U.S. for replication and remote mirroring. Being able to manage that within the platform allows us to add additional storage ourselves, to change the configuration of the VPSA to scale up or scale down, and to make any changes needed to meet budgetary needs. It truly allows us to manage things from a performance standpoint as well. We can also rely on Zadara, as a managed-services provider, to handle those requests on our behalf. When we need to submit a ticket and say, "Hey, can you add additional storage or volumes?" it's very helpful to have them leverage their time and expertise to perform that for us.

It is also very important that Zadara provides drive options such as SSD, NL-SAS, and SSD cache. For our workload in particular, we require our data to be not only accessible but fast. Typically, stored data that is hotter or more active is pushed onto faster storage, such as a flash cache. The flash cache we began with during our first year with Zadara worked pretty well initially. But our workload is a little unique, and after that first year, the volume of data exceeded what that type of cache logic can handle: it just looks at which data is most frequently accessed. Usually the "first in" data sits in that hot flash cache, but our workload was a bit more random than that, so we weren't getting as much benefit from the flash cache.

The fact that Zadara gives us the ability to add a hybrid of both SSD and SATA drives allows us to specifically designate which volumes and which data should be on the faster drives, while still taking budget constraints into account. That way, we can place data that is really being stored long-term, and not accessed, on the lower-performance drives. Having that hybrid capability has tremendously helped with the flexibility to manage our needs from both a performance and a cost perspective.

As far as I know, they also have solid support for the major cloud vendors, in addition to some others that I hadn't heard of. They certainly support Amazon EC2, Google, and Rackspace, among others. Those integrations are very important. Most organizations have some sort of cloud presence today, whether they're hosting servers, compute instances, or some other workload in the cloud. Being able to integrate with the cloud to obtain and store data, especially with all these next-generation threats like ransomware out there, is important. Having offsite backup and storage locations that you can push data to, or integrate with, is definitely key.

View full review »
CTO at a tech services company with 51-200 employees

We have dozens of customers and I cannot easily estimate the scale of their usage. With respect to our organization, there are many people who use Zadara, and the roles are varied.

Starting from the front end, we have the salespeople that are out there trying to sell our cloud services, and supporting them are the solution architects. The solution architects will be deeper into the technology and they help to design solutions. For example, they scope out opportunities and gather requirements. Often, they interact with Zadara to ask questions and for help to design certain bespoke solutions.

Then, working backward, there are the senior cloud engineers, who typically get more deeply involved in the design of the customer's environment. They also handle any of the complex builds.

There is also a team of what we call tier-two Net Apps. These are engineers that set up the customer's environment, in particular for the more standard configurations.

Finally, there is the NOC, who monitors the platform and takes any calls from customers if there were ever any issues.

View full review »
Chief Information Officer at a tech services company with 201-500 employees

With the 24/7 management that comes with Zadara cloud services, we know that we have somebody reliable on the other side that can assist us if we need help. We have asked them for assistance a couple of times and it was not outage-related but rather, it was related to how we can take advantage of a couple of things that they provide, such as snapshots. Each time, they have been able to help us and it was a very quick and very pleasant experience.

The vendor provides proactive monitoring and support where within the console, they will let us know if we are utilizing our hard drives incorrectly, perhaps if we are requesting too much throughput from the kind of hard drives that we have. They monitor our performance and will let us know, for example, if we have something misconfigured.

Of course, if a hard drive goes bad, they automatically replace it. We don't even have to know about it. That's quite amazing and I know this because having run systems like that in my past, I know that this is a major headache and I'm happy that it is removed from me. There are a lot of things that they handle automatically without us even knowing.

If there is a situation where we have a misconfiguration or something similar, where we have influence and the opportunity to improve, they let us know. It can be initiated in different ways such as an email report, part of the conversation with the customer success manager, or it can just be a console message that we see when we log in to see how things are going. The console shows us alerts and things like that to keep us informed.

Overall, they are knowledgeable and responsive and I would definitely rate their support a ten out of ten.

View full review »