In simple terms, you just rack the hardware, load your code, and it's ready for configuration. That's pretty straightforward.
All-Flash Storage Arrays Configuration Reviews
Showing reviews of the top-ranking products in All-Flash Storage Arrays, containing the term "Configuration"
NetApp AFF (All Flash FAS): Configuration
It has a good interface. Its configuration and flexibility are also good.
I do remote support, so I'm not working on the data center side. We have an on-site team that could better describe the installation and deployment. However, my impression is that deploying AFF is straightforward.
The architect is the main person working with the NetApp products, and he does a deep dive before touching any product. Our team has minimal exposure to NetApp because our work involves a mix of vendors. We have people working on the NetApp side but not regularly. The architect spends a lot of time on NetApp in his day-to-day activities, and he makes the changes. He takes and gives recommendations about which product to use, whereas we provide remote support from a different region altogether. The implementation, changes, configuration, and decision-making are all done from the headquarters.
And once it is implemented, the remote team logs in and does the navigation part. We check the array and identify any problems. If we find anything, we immediately reach out to the architect. He's the one who engages with NetApp and relays information to the remote team. That's how we learn as an organization. We spend time on the products to gain knowledge and experience with vendors.
Harish Manukonda says in a NetApp AFF (All Flash FAS) review
Senior Consultant at a tech services company with 10,001+ employees
I have also worked on an IBM DS8000 series and some similar products from EMC.
IBM had released the 8700 with the AFF configuration. However, I was with another company at the time. The majority of my experience is with NetApp using the CLI, but with the IBM product, I was using the GUI. I prefer the CLI in both systems.
With respect to the pros and cons between the vendors, it is difficult for me to judge. Each filesystem has benefits with respect to the vendor and the technology that they use.
reviewer1560501 says in a SolidFire review
Cloud Architect at a computer software company with 51-200 employees
I'm also familiar with Pure, NetApp, and VNX.
Pure's is a more traditional controller architecture, as opposed to the distributed architecture of SolidFire. It's also all-flash, just like SolidFire. It's even simpler than SolidFire in terms of deployment and management. They've got an active controller configuration so that upgrades are essentially transparent as you upgrade a node or scale. That's just the way the architecture is designed on the back end.
Pure Storage FlashArray: Configuration
Only two weeks ago we set up a new solution in a new location that we're building. It's pretty straightforward. There are certain internal matters that only the vendor can handle, but that's fairly common with most good storage arrays. Besides this, it's really easy. The vendor is really simple to work with. One need only provide them with a list of the IPs to use for management and replication.
I did not do the initial storage setup myself, as I'm in Chicago and it is handled in Omaha, Nebraska. I did have to coordinate everything, however. We were sent a form to fill out with the names and IPs to use. At that point, the arrival of a technician is scheduled, who asks where the rack should be placed. It is then racked, cabled up, and all the initial IP configurations are introduced. This is the point at which a person can take over and start carving out the volumes he wants or creating the V-Vault, should he so desire. The process is really simple.
The technician's visit lasted an hour and a half. I've been doing this for a long time, so it perhaps took me another hour to configure everything, although the level of involvement can play a factor. We created only two, plus a V-Vault. Like I said, it's really easy.
Prabakaran Kaliamurthi says in a Pure Storage FlashArray review
Technical Consultant at Injazat Data Systems
The initial configuration, the base configuration, is really simple. Compared to other products, it's really easy. The racking and stacking will take time, but that has more to do with the data center operations, not specifically the Pure Storage configuration. I'll say this: once the base configuration of the storage array is connected with those cables, once the IP is assigned, the setup process won't take long. It won't take more than 30 to 45 minutes to complete the base configuration, and then it will be ready for provisioning the contract data.
We use FlashArray for proofs of concept and to let clients test our infrastructure. We have six clients using the solution. Most of them are using the ActiveCluster configuration.
HPE Nimble Storage: Configuration
The initial setup is straightforward. You connect the array to the network and power it up. You then run the Nimble Setup Manager which will detect the array on the network and allow you to complete an initial configuration. Once that is done you can then use the Web UI to finalize the configuration.
reviewer1502937 says in a HPE Nimble Storage review
Systems Engineer at a tech services company with 51-200 employees
The initial setup is very simple, and in fact, the simplest of these solutions.
It takes approximately four hours to deploy including hooking up power, cabling, getting it set up on the rack, and configuration.
reviewer1589604 says in a HPE Nimble Storage review
Infrastructure Specialist at a tech services company with 1-10 employees
We had some issues with the configuration and we contacted support. We were satisfied.
HPE 3PAR StoreServ: Configuration
The cloud-based monitoring tool, InfoSight, would be better if users were automatically enrolled in the cloud/group based on the configuration or the information gathered or uploaded over the internet.
The auto-discovery of the system is not easy for first-time users.
reviewer1021158 says in a HPE 3PAR StoreServ review
Presales Engineer at a tech services company with 51-200 employees
Technical support is perfect.
We did a comparison in the market, and HPE was the best service provider for regular configuration and for remote configuration.
ITmanager10038 says in a HPE 3PAR StoreServ review
Senior IT Infrastructure & Data Center Operation Engineer at Ministry of Communications and Information Technology (MCIT), Egypt
It is straightforward. I power on 3PAR and take care of the cabling. 3PAR is managed by two components: a services processor and a server component. The server can be a virtual appliance or a physical appliance.
For upgrading, I take different configurations from the services processor. I update a package on the servers, which makes it easy to upgrade in production. For initial configuration, I do an upgrade offline, and it is easy.
reviewer1502937 says in a HPE 3PAR StoreServ review
Systems Engineer at a tech services company with 51-200 employees
The initial setup is very simple.
The installation and configuration are quick. You can complete them in one or two hours, which is fast.
Peter Sachs says in a HPE 3PAR StoreServ review
Storage Infrastructure Engineer at Cambridge Health Alliance
The configuration and flexibility should improve.
reviewer1751949 says in a HPE 3PAR StoreServ review
Solution Sales Manager at a computer software company with 11-50 employees
HPE has made a new system to substitute for 3PAR: the HPE Primera solution. It is a product similar to 3PAR, but it's more converged, together with the hybrid system. You can implement it on-premises or in the cloud. It has cloud-connect functionalities, and the market is going that way.
If you want a solution better than HPE 3PAR StoreServ, you will be using Primera.
I would advise those wanting to implement this solution to use a partner with high-scale competencies. They need to be technical experts in both hardware and software. Customers cannot implement this solution themselves.
In the beginning, customers were allowed to do the installation, but after one or two years, HPE decided it was no longer allowed. It is not allowed today. You need a certified person to do the installation and configuration.
I rate HPE 3PAR StoreServ a nine out of ten.
teamlead968247 says in a HPE 3PAR StoreServ review
Team Leader Presales at a comms service provider with 51-200 employees
The product is quite expensive. The costs involved are high.
We would like configuring storage in the product to be a more straightforward procedure. There are quite a lot of options and parameters that we don't have knowledge of. We used an external company to do the configuration of the 3PAR storage for us.
Hitachi Virtual Storage Platform F Series: Configuration
The installation of this solution is a little bit hard. In these kinds of products, the original companies are often interested in setting up the product in the customer's environment.
We are located in Iran, and there is a lot of tension and lack of communication because of our locale. There are restrictions in a lot of our data centers that preclude us from having special software from the original company, and because of that, we face a lot of problems when using these kinds of products.
The initial setup and configuration are usually done by the CE of the vendor, and after the initial configuration is complete, these products are sent to our data centers.
reviewer893490 says in a Hitachi Virtual Storage Platform F Series review
Technical Consultant at a wholesaler/distributor with 5,001-10,000 employees
This solution is not customer-oriented, so my advice for anybody who is implementing it is to acquire help for the installation, setup, and configuration.
I would rate this solution a seven out of ten.
German Vazquez says in a Hitachi Virtual Storage Platform F Series review
Engineer at Secretaria de Educacion del Gobierno del Estado de Mexico
I have worked with this equipment for the last two years. When I previously worked at Hitachi Data Systems, I worked for a data architect and designed complete solutions, so I had a lot of interaction with the clients. I handled the solutions, the capacity of the disks, configuration, initial setup, definitions of the DP pools, assigning the volumes, creating the entire SAN, etc. I also managed the SAN switches. I worked for Pearson, Sonic, Mobiistar, Macromer, Mynorte, and Santander. Every time, I created the whole environment, both open systems and the mainframe, in development. For example, at Ponavid, I created the whole solution, assigned the space, performed all the troubleshooting, and supported all the hardware and the performance.
IBM FlashSystem: Configuration
They can improve its initial configuration. The initial configuration is currently very difficult. There are multiple choices or alternative ways to configure based on the use case and what you are targeting out of the device, that is, more capacity or more performance. These multiple alternatives cause a lot of confusion.
They should increase the processing part of the nodes. Currently, you can cluster up to eight nodes. From my experience and the workload that I am facing in my environment currently, I would like to see either a bigger or stronger node or a larger number of nodes that can be clustered together. We formally communicated to them that we need to see either this or that, and they are working on something.
Dell EMC Unity XT: Configuration
reviewer1716639 says in a Dell EMC Unity XT review
Assistant Manager Specialist at a computer software company with 1,001-5,000 employees
The interface and configuration could improve.
- One of the most useful features for us was deduplication. It had been challenging for us to store certain types of data, and deduplication exploits patterns in the stored data to reduce the storage size.
- The IOPS and the speed were also an important part of the solution.
- In addition, there was a Unity machine that offers block-level and NAS, and we used the block-level storage.
- We also use the site-to-site storage replication for the recovery site.
- Finally, the device was flexible and we could change the configuration to meet our needs.
Egor Bobryshev says in a Dell EMC Unity XT review
Consultancy Department Chief with 201-500 employees
The initial setup was completely straightforward. Anyone who can handle "next, next, next, finish" can deploy Unity. It was deployed within half an hour.
We installed it, connected it to fabric, updated the software, went through the initial configuration dialogue, and that was it. Most of the time in getting it up and running was spent on the data migration.
For deployment and maintenance it requires just one person. In our organization that person is our storage and virtual infrastructure administrator.
Huawei OceanStor: Configuration
reviewer1675029 says in a Huawei OceanStor review
Technical Support Executive at a comms service provider with 5,001-10,000 employees
We are a distributing partner of Huawei. I have experience in terms of installation, configuration, and professional services.
I'd rate the solution at a nine out of ten.
I'd recommend the solution to others. I'd also advise on Dorado Storage, however, they would need to use a professional service.
OceanStor could use improvement with configuration and synchronizing with other vendors. There is some latency in the migration due to the amount of data.
I would like to see more iterations and direct integration with solutions like Docker and Kubernetes. Also, if the customer needs to make a special solution or have an additional process for analyzing data directly on the storage using data technology, they should be able to do that.
Huawei OceanStor Dorado: Configuration
reviewer1372173 says in a Huawei OceanStor Dorado review
Solution Delivery Expert at a tech services company with 11-50 employees
The installation and configuration are quite simple.
The length of time required for deployment depends on your scenario. If you deploy it as a DR site, it will take between three and four hours to complete. This includes all of the features.
Dell EMC SC Series: Configuration
reviewer1297803 says in a Dell EMC SC Series review
IT Director - Enterprise Storage and Data Protection at a manufacturing company with 10,001+ employees
We do the physical installation for our clients. We handle the initialization of the site. We get access and do the initial configuration where we configure the storage pool, the profile, and whatever is needed there. Then we do a series of tests. Functional tests, such as failover tests, are something we do for our clients during the implementation process.
Lenovo ThinkSystem DM Series: Configuration
reviewer1358325 says in a Lenovo ThinkSystem DM Series review
Technical Specialist at a tech vendor with 11-50 employees
It's important to know that this storage is an entry-level model. In comparison with other similar solutions, I think this one is the most expensive. Clients don't understand why they should pay so much money for the DM Series. It's possible to get cheaper solutions with similar configurations, like SSV, SaaS, HPE, MS-DOS.
I would rate this solution a nine out of 10.
HPE Primera: Configuration
With HPE Primera there is no additional cost, but it depends on what configuration we are talking about. If we are excluding support, it is very expensive. There is no showstopper on prices, just support or service. We are a Platinum Partner of HP Enterprise, so we give quotes for and configure HP Enterprise solutions. I don't know what happens in the US market, but in Georgia it is very expensive.
It is expensive compared with Dell Technologies.
The solution is quite straightforward. It's not complex. We had already done the sizing and we had already done the planning. The HP engineers were well aware of our needs. In the end, we just plugged in and gave them access, and they themselves did the configuration as per the initial requirements.
Dell EMC PowerMax NVMe: Configuration
Feisal Anooar says in a Dell EMC PowerMax NVMe review
VP Global Markets, Global Head of Storage at a financial services firm with 10,001+ employees
The deployment process is a standard procedure for deploying SAN, and that's with any vendor. I'd say that the process wasn't any different from deploying another solution. We've got our architecture and our blueprints. We worked with a solutions architect and that design drives the configuration, and then we go ahead and deploy that configuration.
Deployment took around three months. Some of this was due to internal processes, timing, and pandemic conditions. Over December, we were hampered by end-of-year change-control freezes, so some of the activity couldn't get done. All in all, I'd say we probably could have been done in about six to eight weeks.
I had three people working on this internally (not counting the non storage resources) as we deployed to two geographies in different time zones.
Maintenance is just ongoing service and that'd be the same as any technological asset. It has a mean time before failure. We monitor it on a daily basis. Alerts are actioned with the vendor. However, the platform does have five-nines of availability and multiple layers of redundancy.
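For a sense of scale, "five-nines" (99.999%) availability translates into a concrete downtime budget of only a few minutes per year. A quick back-of-the-envelope sketch, using plain arithmetic rather than any vendor-published figures:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31,557,600 seconds

def downtime_budget_seconds(availability: float) -> float:
    """Allowed downtime per year, in seconds, for a given availability fraction."""
    return (1.0 - availability) * SECONDS_PER_YEAR

budget = downtime_budget_seconds(0.99999)  # "five nines"
print(f"{budget:.0f} s/year, about {budget / 60:.1f} minutes/year")
# prints "316 s/year, about 5.3 minutes/year"
```

That budget, a little over five minutes of downtime per year, is why the multiple layers of redundancy mentioned above matter.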
Reviewer593747 says in a Dell EMC PowerMax NVMe review
Storage Team Manager at a government with 10,001+ employees
For us, it's straightforward to set up. We've been doing this for a long time, so it's really easy for us to set up a new array in a data center. We had one that hit the dock about two weeks ago and it's already up and running and provisioning to customers.
NetApp will say, "Well, that's two weeks. We can come in and do it in one day." But we explain, "No, you can't because there are internal processes that we have to go through." Every piece of equipment we get, even the PowerMax, goes through its paces. We don't just turn it on and hope for the best. We check and double-check all our configuration settings. But overall, PowerMax is easy to set up. They configure it at the factory, deliver it, put it in the data center, and then we hook it to our Fibre Channel fabric and Ethernet fabrics and we're good to go. Competitors will say, "Well, it's so much easier to migrate from one array to another on our platform, versus the Dell EMCs." That's not necessarily true. We have to look at what they are actually measuring and whether we are comparing apples to apples.
With VPLEX, we can do migrations on-the-fly, live. It's no longer a six-month to one-year effort to get off of one array and move to another. We just bring the other array in, present it to VPLEX, and VPLEX takes it from there.
For a new deployment of one PowerMax, we need one FTE. On a day-to-day basis, to manage all of our PowerMaxs, we need three FTEs. But that is across two different data centers with a total of 10 PowerMax/VMAX units. It's a pretty big installation. Across our organization we have 55,000 employees. Since our HR is on this solution, and that's how people get paid, it's like we have 55,000 people using it, in a sense. Most access is through an application, but in another sense, it's used by pretty much everybody in the state.
With the SCM memory, it has been set-it-and-forget-it. It is being used as a cache drive. There is very little configuration for us to do. We just know that it is working.
PowerMax NVMe's QoS capabilities give us a lot of visibility into taking a look at what could be a potential performance issue. However, because it is so fast, we haven't really noticed any slowdowns from the date of deployment even until today.
It is a very good storage appliance for enterprise-level, mission-critical IT workloads because of its high redundancy and parity drives. It gives us the ability to not worry about our data. If something were to go wrong, e.g., a drive pops, then we have our mission-critical warranty: we get a drive the same day and have it swapped by the next business day at the latest.
PowerMax NVMe has made it a lot easier to understand how much we are able to provision. It has made it a lot faster to provision new things. 90% of my time for provisioning has been reduced. Also, it has made it very easy to understand and see everything behind it versus the older heritage, where Dell EMC was very convoluted and hard to get working. Things that used to take an hour, probably now take five to 10 minutes.
Vipindas K.P says in a Dell EMC PowerMax NVMe review
Product Manager at a tech services company with 10,001+ employees
It is important for our clients that PowerMax provides NVMe scale-out capabilities. They are also getting great performance as compared to the old storage array model.
Provisioning is faster and immediate. We can do immediate allocation and configuration. As compared to the old storage array model where it used to take half an hour, in PowerMax, we can do it in 5 to 10 minutes. It doesn't take that much time, and there isn't much delay in the PowerMax array.
Our workload is reduced because we are not dealing with any issues. We are not facing many issues on the PowerMax side as compared with the previous one.
Setting up PowerMax is definitely complex. The initial configuration of the array itself is pretty simple, but once you start trying to connect hosts and set up replication, then it becomes a lot more work than it probably should be. It took a couple of days for the initial setup, but after that, there has been some ongoing work as we put more and more on there.
The SRDF site-to-site replication for the volumes is the most important feature for us. That enables us to do site recovery and replication for our VMware infrastructure.
Along with that, the NVMe response time is very good. We used to have a VMAX 20K but we have just upgraded, and moved two or three generations ahead to PowerMax, and the response time is great. Because we are coming from a hybrid storage scenario, the performance of NVMe is a huge upgrade for us. The 0.4 millisecond response time means our application works great and we are seeing huge performance improvements in our VMware and physical environments.
Regarding data security, Dell EMC has introduced the CloudIQ solution with the PowerMax environment, which enables live monitoring of the telemetry and security data of the PowerMax array. CloudIQ also has a feature called Cybersecurity, which monitors for security vulnerabilities or security events occurring on the array itself. That feature is very helpful. We have been able to do some vulnerability assessment tests on the array, which have helped us to resolve issues regarding data security and security vulnerabilities. We are not using the encryption feature of PowerMax because we didn't order a PowerMax configuration that includes it.
CloudIQ helps the environment and lets us manage the respective connected environments. A good feature in CloudIQ is the health score of each connected infrastructure. It gives you timely alerts and informs you when a health issue is occurring on the arrays and needs to be fixed. Those reports and health notices are also sent to Dell EMC support, which proactively monitors all the infrastructure and they will open service requests themselves.
In terms of efficiency, the compression we are currently receiving is 4.2x, which is very good efficiency. We are storing 435 terabytes of data in just 90 TB. In addition to what I mentioned about the NVMe performance, which is very good, we were achieving 150k IOPS on the VMAX, but on the PowerMax the same workload is hitting 300k-plus IOPS. That is sufficient for the workload and means the application is performing as required, according to the SLAs as defined on the PowerMax.
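For context on figures like the one above: data-reduction ratios are conventionally reported as logical (host-written) capacity divided by the physical capacity actually consumed. A minimal sketch of that arithmetic, using made-up numbers rather than the reviewer's figures:

```python
def data_reduction_ratio(logical_tb: float, physical_tb: float) -> float:
    """Data-reduction ratio: logical capacity stored per unit of physical capacity."""
    return logical_tb / physical_tb

# Hypothetical example: 420 TB of host data held in 100 TB of physical flash
ratio = data_reduction_ratio(420.0, 100.0)
print(f"{ratio:.1f}x")  # prints "4.2x"
```

The higher the ratio, the more effective capacity a given amount of flash provides, which is what makes deduplication and compression figures directly comparable across arrays.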
When it comes to workload congestion protection, we have not faced any congestion yet in our environment. We have some spikes on Friday evenings, but they are being handled by PowerMax dutifully. It can beautifully handle up to 400k IOPS, even though it is only designed for 300k IOPS. That is another illustration of its good performance.
IBM FlashSystem 9100 NVMe: Configuration
The initial setup is easy and not complicated.
From scratch, the rack and stack takes perhaps two days, and the configuration of the system takes an additional day.
Dell EMC PowerStore: Configuration
The setup process could be improved. We had some issues regarding configuration and the time it took to do things. It wasn't specifically the people we worked with, but more the process and how it's done. They can work on that.
I'm not satisfied with the process they used to do the setup and the timeframe within which everything was done. For example, there were things which we needed to do together with Dell EMC. There were three meetings for doing three specific things, but we could have done them in one meeting. That's why the duration, from the moment we bought the hardware until we were able to use it, was two or three months. We had expected that we would be able to use the hardware quicker. But because of the implementation process, it took us longer than we wanted.
reviewer1528422 says in a Dell EMC PowerStore review
Engineer at a financial services firm with 1,001-5,000 employees
Dell EMC's support was very efficient during the setup of the entire solution. They handled the complete setup in terms of connections and also helped us with the software configuration and moving data from our old solution into the new solution.
This solution enables us to add compute and capacity independently, although we have not had to change our configuration.
PowerStore uses machine learning and automation for optimizing resources but we are just starting with it, so I don't know much about these features.
I would rate this solution a nine out of ten.
Pavilion HyperParallel Flash Array: Configuration
reviewer1536714 says in a Pavilion HyperParallel Flash Array review
Manager of Platform Software at a healthcare company with 51-200 employees
In our current configuration, we can only run the line controllers in high availability, active-standby mode, whereas we would like to see active-active implemented. That would get us more performance with a given number of line cards.
Their global namespace support is coming, and I believe it is based on NFS 4.1. We have a mix of both Linux and Windows usage in the company, and getting an NFS 4.1 client on Windows is currently difficult because I don't think it's supported. This is not an issue with the Pavilion product directly; it's more about the general environment. We would essentially like to see a Windows NFS 4.1 client supported so that we can take advantage of the Pavilion feature from both platforms.
Having a little more ease of use with the NFS global namespace vis-a-vis Windows would be an improvement.
IntelliFlash: Configuration
reviewer1634190 says in an IntelliFlash review
Lead Systems Engineer at a retailer with 5,001-10,000 employees
I wouldn't say I like anything about this solution. We are looking for a replacement with Dell EMC and Pure Storage. Tegile's performance, support, and features are horrible. It's going down.
Multiple companies have bought it. It looked okay at one point in time, like four years ago. Even though it wasn't one of the best, it still looked okay. Since the management has changed several times, it looks like it's going down the drain.
Performance is horrible now. Our original intent was to buy new storage in about two years. But since it became a critical urgency for us, we decided to purchase a new one in two or three months.
It would be better if they improved the codebase. We very often have issues with their code, and I think that is the main pain point. The hardware is also horrible because we often have either a controller failure or a SATADOM failure. Now and then, we also have a disk failure.
They have to get their act together. They have to make sure their hardware is robust, they have to make sure their code is good, and then we can think about new features and functionality.
First, make the unit run properly, and then we can think about additions. Obviously, their support has to be knowledgeable, because when I told them, "We have latency issues, come troubleshoot them for us," nobody came. But if we tell them, "We need to do a firmware upgrade," then they say, "Okay, let's do a firmware upgrade." They will come to do the firmware upgrade, and then they will go. But with firmware upgrades, you never know when they will work properly and when they won't.
If there is a disk that needs to be replaced and we ask them to replace it, they'll say, "Okay, just share the remote session with us, and we'll run some commands to validate which disk is faulty. If it's really faulty, we will send the disk." We do that, and then they find the faulty disk and send a replacement.
They will do these minor things, but that's not what we are looking for. We are looking for more features and more functionality. For example, if there is latency, they should try to help us out and help the customer find where the latency is. It doesn't necessarily have to be only with the SAN storage; it might be a configuration issue, or it might be something else. They should help the customer find where the issue is. Unfortunately, that is not what we are getting from them, so they have to improve a lot there.
Zadara: Configuration
One of the most valuable features is its integration with other cloud solutions. We have a presence within Amazon EC2 and we leverage compute instances in there. Being able to integrate with compute, both locally within Zadara, as well as with other cloud vendors such as Amazon, is very helpful, while also being able to maintain extremely low latency between those connections. We have leveraged 10-Gig direct connections between them to be able to hook up the storage element within Zadara with the cloud platforms such as Amazon EC2. That is one of the primary technical driving factors.
The other large one is the partnership and the managed service offering from Zadara. That means they have a vested interest and are able to understand any issues or problems that we have. They are there to help identify and work through them and come to solutions for us. We have a unique workload, so problems that we may have to identify and work through could be unique to us. Other customers that are just looking to manage a smaller amount of data would not ever identify or have to work through the kinds of things we do. Having a partner that is interested in helping to work through those issues, and make recommendations based on their expertise, is very valuable to us.
Zadara's dedicated cores and memory provide us with a single-tenant experience. We are multi-tenant in that we manage multiple organizations and customers within our environment. We send all of that data to that single-tenant management aspect within Zadara. We have a couple of different virtual, private storage arrays, a couple of them in high-availability. The I/O engine type we're leveraging is the 2400s.
We also have disaster recovery set up on the other side of the U.S. for replication and remote mirroring. Being able to manage that within the platform allows us to add additional storage ourselves, to change the configuration of the VPSA to scale up or scale down, and to make any changes to meet budgetary needs. It truly allows us to manage things from a performance standpoint as well. We can also rely upon Zadara, as a managed-services provider, to manage those requests on our behalf. In the event that we needed to submit a ticket and say, "Hey, can you add additional storage or volumes?" it's very helpful to have them leverage their time and expertise to perform that on our behalf.
It is also very important that Zadara provides drive options such as SSD, NL-SAS, and SSD cache, for our workload in particular. We require our data to not only be accessible, but to be fast. Typically, most stored data that is hotter or more active is pushed onto faster storage, something like flash cache. The flash cache we began with during our first year with Zadara worked pretty well initially. But our workload being a little unique, after that, the volume of data exceeded the kind of logic that can be used in that type of cache. It just looks at what data is most frequently accessed. Usually the "first in" is on that hot flash cache, and our workload was a little bit more random than that, so we weren't getting as much of the benefit from that flash cache.
The fact that Zadara gives us the ability to add a hybrid of both SSDs and SATA drives lets us designate exactly which volumes and which data should sit on the faster drives while still respecting budget constraints. That way, we can accept lower performance on the drives housing data that is really being stored long-term and rarely accessed. Having that hybrid capability has tremendously helped with the flexibility to manage our needs from both a performance and a cost perspective.
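The budget-constrained placement decision described above can be sketched as a simple greedy policy: hottest volumes go to the fast tier until its (budget-limited) capacity runs out, everything else lands on the slow tier. This is a generic illustration, not Zadara's actual placement logic, and the volume names and numbers are hypothetical.

```python
def place_volumes(volumes, ssd_capacity_gb):
    """Greedy hybrid placement: hottest volumes fill the SSD tier
    first; the remainder go to NL-SAS.

    `volumes` is a list of (name, size_gb, accesses_per_day) tuples.
    """
    placement = {}
    remaining = ssd_capacity_gb
    # Accesses per GB is a simple "heat" metric; place hottest first.
    for name, size, accesses in sorted(
            volumes, key=lambda v: v[2] / v[1], reverse=True):
        if size <= remaining:
            placement[name] = "ssd"
            remaining -= size
        else:
            placement[name] = "nl-sas"
    return placement

# Hypothetical volumes: hot transactional data, warm logs, cold archive.
vols = [
    ("db-prod", 400, 90_000),
    ("logs", 800, 5_000),
    ("archive", 2_000, 100),
]
print(place_volumes(vols, ssd_capacity_gb=1_000))
# The hot database lands on SSD; logs and archive fall to NL-SAS
# because the remaining SSD budget cannot hold them.
```

The point of the sketch is the trade-off the reviewer describes: the fast tier is reserved for designated hot data, and long-term, rarely accessed data deliberately takes the performance hit on cheaper drives.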
As far as I know, they also have solid support for the major cloud vendors out there, in addition to some others I hadn't heard of. They certainly support Amazon EC2, Google, and Rackspace, among others. Those integrations are very important. Most organizations have some sort of cloud presence today, whether they're hosting certain servers, compute instances, or other workloads in the cloud. Being able to integrate with the cloud to retrieve and store data is important, especially with all the next-generation threats out there, such as ransomware. Having offsite backup and storage locations that you can push data to, or integrate with, is definitely key.
We have dozens of customers and I cannot easily estimate the scale of their usage. With respect to our organization, there are many people who use Zadara, and the roles are varied.
Starting from the front end, we have the salespeople that are out there trying to sell our cloud services, and supporting them are the solution architects. The solution architects will be deeper into the technology and they help to design solutions. For example, they scope out opportunities and gather requirements. Often, they interact with Zadara to ask questions and for help to design certain bespoke solutions.
Then, working backward, there are the senior cloud engineers, who would typically be more deeply involved in the design of the customer's environment. They would also do any of the complex builds.
There is also a team of what we call tier-two Net Apps: engineers who set up the customer's environment, in particular for the more standard configurations.
Finally, there is the NOC, which monitors the platform and takes any calls from customers if there are ever any issues.
Reviewer429856 says in a Zadara review
Chief Information Officer at a tech services company with 201-500 employees
With the 24/7 management that comes with Zadara cloud services, we know that we have somebody reliable on the other side that can assist us if we need help. We have asked them for assistance a couple of times and it was not outage-related but rather, it was related to how we can take advantage of a couple of things that they provide, such as snapshots. Each time, they have been able to help us and it was a very quick and very pleasant experience.
The vendor provides proactive monitoring and support: within the console, they will let us know if we are using our hard drives incorrectly, for example if we are requesting more throughput than the kind of drives we have can deliver. They monitor our performance and will let us know if we have something misconfigured.
Of course, if a hard drive goes bad, they replace it automatically; we don't even have to know about it. That's quite amazing. Having run systems like that in the past, I know this is a major headache, and I'm happy that it has been taken off my plate. There are a lot of things they handle automatically without us even knowing.
If there is a situation where we have a misconfiguration or something similar, where we have influence and an opportunity to improve, they let us know. That can come in different ways: an email report, a conversation with the customer success manager, or simply a console message we see when we log in to check how things are going. The console shows us alerts and the like to keep us informed.
Overall, they are knowledgeable and responsive and I would definitely rate their support a ten out of ten.