What is most valuable?
So we actually went with a three-tier solution: we have Nearline, we have Fast Class (Fibre Channel), and we have SSD, the flash. We began introducing flash as part of a 7400. We acquired a 7400, moved our F400 into DR, and with that 7400 we've been able to grow and expand based upon our needs.
We've been able to look at the data, look at the growth and the need for SSD, and move things around as needed. We're starting to introduce Adaptive Optimization (AO), but realistically we didn't have to jump in initially and put everything on all-flash. I know the sales force wanted us to, but at the end of the day we wanted to take a more cautious approach, and it's paid off for us.
How has it helped my organization?
One of the biggest benefits we experienced recently was an Oracle E-Business Suite R11 to R12 migration with a three-quarter terabyte database. Oracle came in and said it should take somewhere around 24 to 36 hours. At the end of the day, it took 10 hours, and a lot of that had to do with the 3PAR back-end storage system and our ability to transform the virtual volumes, the I/O, and the RAID configurations within minutes. In one instance we moved the entire 750 GB database virtual volume from Fast Class to SSD in six minutes.
What needs improvement?
During our migration we had a very choreographed, timed execution of transforming virtual volumes from one tier to the next. AO wasn't necessarily getting us there; it needs to observe and predict, and these were ad hoc, one-off workloads that would happen once and never again. So one of the things that's been thrown out is: give us some ability to choreograph that, to lay the sequence out in advance and then trigger it, fly-by-wire in a way, but have it pre-laid out.
What do I think about the stability of the solution?
We've had a number of drives fail over a three-year period, and before that we'd see drives fail on the EMC and other systems. On a 3PAR, however, the way a failure is predicted and the disk is preemptively evacuated has been a game changer for us. Rather than watching an entire RAID volume go offline or become poorly performing or unstable, we don't have that. Mechanical devices are going to fail; ideally, they don't impact your business. That's been one of the big things for us.
What do I think about the scalability of the solution?
Overall, with our ability to add storage and increase IOPS on demand and as needed, I can't ask for a whole lot more based upon the choices we made. There are, of course, more scalable 3PAR configurations out there than the one we landed on, but based upon what we utilize, we're still well within our limits. Of course, the beauty of storage in a business is that any time you build it, they find ways to fill it up, so we've continued to stay on top of that.
With the insight we get into disk usage, we're able to calculate our capacity properly with thin provisioning. We're not just stamping out storage, dedicating it wholly, and having no idea what our growth is, with capacity wasted over here and needed over there. We don't run into that; it's consumed through the thin provisioning capabilities across the platform. That's another aspect of scalability that I think you don't necessarily find in other systems.
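To make that concrete, here's a minimal sketch of the kind of capacity math thin provisioning lets you do; the volume names and figures are hypothetical, not our actual environment:

```python
# Hypothetical thin-provisioning capacity check: compare what has been
# exported to hosts (virtual size) with what is actually consumed on disk.
# All names and numbers below are illustrative only.

volumes = {
    # name: (exported_gb, used_gb)
    "oracle_data": (2048, 780),
    "mssql_data": (1024, 410),
    "exchange_db": (1536, 620),
}

raw_capacity_gb = 4096  # usable capacity behind these volumes

exported = sum(e for e, _ in volumes.values())
used = sum(u for _, u in volumes.values())

print(f"Exported: {exported} GB, used: {used} GB of {raw_capacity_gb} GB raw")
print(f"Overcommit ratio: {exported / raw_capacity_gb:.2f}x")
print(f"Actual utilization: {used / raw_capacity_gb:.1%}")
```

The point is simply that you plan growth against actual consumption rather than against capacity that was blindly pre-dedicated.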
How are customer service and technical support?
Realistically, we have seen steady upgrades, firmware updates, InServ updates, and good, solid response. When Heartbleed and a couple of other OpenSSL issues came out, within a month's time frame we were getting updates and being notified: here's the level you need to be running at. That's not necessarily the case with other vendors. It's been really good overall.
Which solution did I use previously and why did I switch?
Originally, we were running on an EMC CX700 and VNX 5300s. The back end was fronted with AIX P5/P6-series systems. We needed to bring our ERP system forward; poor performance dictated that we could no longer continue to do business the way we were doing it on that platform. So we looked at others, including EMC, Hitachi, and IBM, and HP 3PAR was actually late to the game when it came knocking.
How was the initial setup?
The biggest part with 3PAR is overcoming your pre-existing mindset. Coming into it originally, the whole idea of chunklets, and of not having dedicated storage groups or RAID types, took time to understand operationally in terms of what you could really do with it. In that sense, I would say there was some complexity. From a services standpoint, they came in, knocked it out, got it installed, and we integrated it into the environment and started migrating.
They've since made advancements in migration that would have made life easier for us back then, but they've listened and made improvements.
Which other solutions did I evaluate?
Realistically, we ended up choosing HP. It was the more expensive solution at the time, but given the need for performance, we also looked at a three-to-five-year roadmap, and the ability to continue to grow and to add additional storage tiers within the same frame played a big part in it for us.
What other advice do I have?
In comparing HP 3PAR against EMC and some of the others, the first thing is the ability to maximize the actual storage: thin provisioning, and the ability to use all of the disks across the storage system from an IOPS perspective. With a traditional monolithic array, you isolate storage groups and RAID groups to particular LUNs, and that's all the disk they have, so your spindles are limited. You move away from that.
Second was our ability, and at times our need, to transform the RAID level or the stripe size; our I/O, or our lack of knowledge of our I/O, dictated that. The third item is that the tools native to the 3PAR InServ gave us the ability to look at the I/O, whereas with Navisphere Analyzer and others, while the capabilities were there, we were either inhibited from a performance standpoint or we weren't getting all the data and visibility we needed.
Don't be afraid of the price tag, number one. If you're willing to really set out a roadmap and know the investment, look at what you're able to give back to the business. In our case, financial close used to take up to 18 days; it's now down to 10. We had individuals who would literally kick off FSG reports at night, go home, and then check back on them. The reports might fail, and they'd have to kick them off again. They couldn't run them ad hoc during the day; they could only run them during certain windows because the system wouldn't sustain it.
Now they can do that any time they want. So don't just look at the price tag of the infrastructure; look at what you can actually give back to the business, and see how you can facilitate the business's strategic direction.
I think peer reviewers are priceless. You can get all the marketing hype, but at the end of the day, seeing how somebody has pushed the boundaries of a product, used it in ways a development or product team could never envision, and watched it either live or die, and how it performed: those are the things you get out of community and peer reviews that you're not necessarily going to get from traditional marketing.
Finding a group of individuals you know is important, where you know the context of their backgrounds, because with any data, especially on the Internet, you have to understand where people are coming from, what their knowledge level is, and how truthful they're really willing to be. Having that trusted community is very important.
Disclosure: PeerSpot contacted the reviewer to collect the review and to validate authenticity. The reviewer was referred by the vendor, but the review is not subject to editing or approval by the vendor.
As a follow-up to point number 2 in the "Room for Improvement" section: I tried to clarify that AO was not getting us where we needed to be, and that we did in fact utilize the Dynamic Optimization (DO) functionality. The issue we saw, however, was the lack of a choreographed DO operation. Well over 30 DO operations were executed during the entire upgrade and chart-of-accounts update process. These were written out in a document and then had to be initiated manually at the appropriate times. At more than one point during the upgrade, weary eyes called into question whether or not the proper DO operation had been initiated. As a one-time operation, AO never would have touched these virtual volumes in a timely manner or to the degree required. I hope that clarifies our approach and reasoning a little more.
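The choreography we were after can be approximated by scripting the run book. Below is a minimal sketch, assuming the 3PAR CLI is reachable over SSH and that a `tunevv usr_cpg` move between CPGs is the appropriate DO operation for your InForm OS release (verify the exact syntax for your version); the host, CPG, and volume names are hypothetical:

```python
#!/usr/bin/env python3
"""Sketch of a choreographed sequence of Dynamic Optimization moves.

Assumes the 3PAR CLI is reachable via ssh and that `tunevv usr_cpg`
is the right DO command on your InForm OS release; verify the syntax
before use. Host, CPG, and virtual volume names are hypothetical.
"""
import subprocess
import time

ARRAY = "3paradm@array01"  # hypothetical CLI endpoint

# The run book: (minutes_to_wait_before_step, target_cpg, virtual_volume)
RUNBOOK = [
    (0,  "SSD_r5", "ebs_data_vv"),   # move data files to SSD first
    (30, "SSD_r5", "ebs_redo_vv"),   # redo logs follow once conversion starts
    (90, "FC_r5",  "ebs_arch_vv"),   # archive logs drop back to Fast Class
]

for wait_min, cpg, vv in RUNBOOK:
    time.sleep(wait_min * 60)            # hold until the planned step time
    print(f"Starting DO move of {vv} to {cpg} ...")
    subprocess.run(
        ["ssh", ARRAY, "tunevv", "usr_cpg", cpg, "-f", vv],
        check=True,                      # abort the choreography on failure
    )
    print(f"Completed: {vv} -> {cpg}")
```

Even a simple script along these lines removes the "weary eyes" problem: each step fires in order, at the planned time, exactly once.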
As for point number 3, there is a two-pronged issue here. We had already made an investment in specific drive sizes for the SSD, FC, and NL classes of drives. In addition, we run a large number of Oracle, MS SQL, and Exchange databases on this frame. Choosing separate drive classes allows us to slide certain VMFS volumes (the VMDKs are segregated among them based upon service, system, or I/O type) across the different tiers and make specific changes as needed.
As for the second item within point number 3: deduplication on SSD for such databases obviously becomes problematic for inline dedupe solutions, versus post-process. However, with post-process dedupe we can adversely impact other high-read-I/O systems, such as those building cubes, performing database maintenance, or running master data management processes. Thus, we took the approach of combining virtual volume thin provisioning, proper NUMA configurations, and customized allocation unit block sizes for XFS and NTFS (multiples of 16K), along with ensuring that settings such as IFI (instant file initialization) were in use within the VM guests.
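A small helper makes that multiples-of-16K rule for allocation unit sizes explicit (a sketch; the 16 KiB base reflects our own layout choice, and the candidate sizes are just examples, not a recommendation):

```python
# Validate candidate NTFS/XFS allocation unit sizes against our rule:
# the size must be a power of two and a multiple of 16 KiB (16384 bytes).

BASE = 16 * 1024  # 16 KiB, per our chosen alignment

def valid_allocation_unit(size_bytes: int) -> bool:
    power_of_two = size_bytes > 0 and (size_bytes & (size_bytes - 1)) == 0
    return power_of_two and size_bytes % BASE == 0

for candidate in (4096, 16384, 32768, 65536):
    status = "OK" if valid_allocation_unit(candidate) else "rejected"
    print(f"{candidate // 1024:>3} KiB allocation unit: {status}")
# 4 KiB is rejected; 16, 32, and 64 KiB satisfy the multiple-of-16K rule.
```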
Going forward, it is our hope that the combination of DO and increased use of AO will allow these specific high-I/O tablespaces, VMFS volumes, and 3PAR virtual volumes to traverse the various drive classes more efficiently during peak usage time frames. It may be seen as yesterday's approach; however, it works for us based upon our budget, staff, and current technology investment and roadmap. All that to say, we're not opposed to the all-in flash approach; we're just not convinced that the paint is dry.