it_user683436 - PeerSpot reviewer
Data Center Systems Engineer at a tech services company with 1,001-5,000 employees
MSP
The Dual Fabric design allows for online/in-service upgrades during production with no impact.
Pros and Cons
  • "The Dual Fabric design allows for online/in-service upgrades during production with no impact."
  • "The HTML5 interface is a much-needed improvement over the old Java interface, but it still needs a little work."

What is most valuable?

The Dual Fabric design allows for online/in-service upgrades during production with no impact. Also, single point of management for all server profile and firmware management allows for guaranteed uniformity in the datacenter.

How has it helped my organization?

I have deployed 2-3 dozen UCS systems and managed many more for customers. Customers always love the unified management, speed of setup, and the improved performance after migration of workloads to UCS servers.

What needs improvement?

The HTML5 interface is a much-needed improvement over the old Java interface, but it still needs a little work.

For how long have I used the solution?

I have deployed and managed Cisco UCS solutions for approximately 5 years.

Buyer's Guide
Cisco UCS B-Series
June 2025
Learn what your peers think about Cisco UCS B-Series. Get advice and tips from experienced pros sharing their opinions. Updated: June 2025.
857,028 professionals have used our research since 2012.

What do I think about the stability of the solution?

As with any system, there are very occasional bugs. But Cisco is quick to remedy any issue. Firmware is often already out to fix issues that we run into.

What do I think about the scalability of the solution?

I did not encounter any issues with scalability.

How are customer service and support?

The technical support is excellent.

Which solution did I use previously and why did I switch?

I have experience with HP, IBM and Dell rack servers. I switched to UCS when I joined a Cisco partner and learned to deploy UCS.

How was the initial setup?

When customers are first introduced to UCS, the setup is somewhat complex, yet the learning curve is reasonable.

What other advice do I have?

Cisco UCS is a fantastic product that is widely deployed, with excellent support. Additionally, Cisco has developed CVDs (Cisco Validated Designs) that assist partners and customers in properly deploying Cisco UCS with most major storage vendors. CVDs are highly detailed deployment guides, comprehensively tested by Cisco to ensure quick, highly reliable, and predictable deployments.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user229368 - PeerSpot reviewer
Sr. Network Engineer at a tech services company with 1,001-5,000 employees
Consultant
We can configure Service Profile and the way it combines LAN, SAN and the Server on the blade

What is most valuable?

For the UCS B-Series, it is the integration with devices like the FI and N5K. I also like UCSM, where we can configure Service Profiles, and the way it combines the LAN, SAN, and the server on the blade. I love it.

How has it helped my organization?

I am working with a Gold Partner company and we deploy this product to our customers, and so far we have deployed it in many clients and we have not received any complaints.

What needs improvement?

GUI had some trouble before with Java updates, but that is fixed now.

For how long have I used the solution?

One to two years.

What do I think about the stability of the solution?

No issues with stability.

What do I think about the scalability of the solution?

No issues with scalability.

How are customer service and technical support?

10/10.

Which solution did I use previously and why did I switch?

We use Dell, HP, UCS, all of them, but personally I like this product more.

How was the initial setup?

It is very easy and straightforward.

What's my experience with pricing, setup cost, and licensing?

Pricing and licensing depend upon the requirements of the clients, but the recommended approach is to go with a mix of Ethernet and Fabric ports.

Which other solutions did I evaluate?

Yes, we give options to the customers and proceed based on their choice.

What other advice do I have?

This is a complete solution with FI.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Data Center Practice Manager at The Plow Group
Real User
The hardware is easily swappable and, utilizing the boot from SAN option, you can always keep your server intact due to the service profiles.
Pros and Cons
  • "The hardware is easily swappable and, utilizing the boot from SAN option, you can always keep your server intact due to the service profiles."
  • "The UCS manager interface needs to be cleaned up a bit and can be streamlined, but no major complaints."

What is most valuable?

The UCS environment as a whole. The hardware is easily swappable and, utilizing the boot from SAN option, you can always keep your server intact due to the service profiles. So if your blade has failures and you have a hot spare, you can transfer the service profile to a new blade and be operational in mere minutes. Huge for uptime and perfect for environments like VMware ESXi hosts, which is what I use them for primarily.
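The failover the reviewer describes can be sketched abstractly. Below is a minimal Python model, assuming boot from SAN; the field names and identifiers are illustrative, not actual UCS object names:

```python
# Sketch: why service profiles make blade failover fast. The profile owns the
# server's identity (MACs, WWPNs, boot-from-SAN target); the blade itself is
# interchangeable hardware. All names here are illustrative.
profile = {
    "name": "esxi-host-01",
    "macs": ["00:25:B5:00:00:1A"],
    "wwpns": ["20:00:00:25:B5:00:00:1A"],
    "boot_lun": "san-lun-7",   # boot from SAN: no local state on the blade
}

# blade3 runs the profile; blade8 is an unassociated hot spare
blades = {"chassis1/blade3": profile, "chassis1/blade8": None}

def fail_over(blades, failed, spare):
    """Move the profile to the spare; the server identity follows it."""
    blades[spare] = blades[failed]
    blades[failed] = None
    return blades

fail_over(blades, "chassis1/blade3", "chassis1/blade8")
```

Because the blade holds no local state, re-associating the profile is essentially the whole recovery, which is why the swap takes minutes rather than a rebuild.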

How has it helped my organization?

We can be scalable to a greater degree using Cisco UCS. The options available and the connectivity to a Nexus switch with universal ports have been a game changer.

What needs improvement?

The UCS manager interface needs to be cleaned up a bit and can be streamlined, but no major complaints. Get off Java once and for all and release 3.2 so it can be all HTML 5.

For how long have I used the solution?

I have been using Cisco UCS since early 2011, so six and a half years.

What do I think about the stability of the solution?

B-Series blades, along with the C-Series rack mounts, are the most reliable server hardware platform I have worked with in my 20+ years in the industry.

What do I think about the scalability of the solution?

None. Cisco UCS, to this day, has been the most easily scalable server product I have encountered. Hyper-converged solutions have potential, yet have not shown me that they are scalable at an enterprise level the way the B Series UCS are at this time.

How are customer service and technical support?

Some of the best in the industry. Always helpful and mostly flexible.

Which solution did I use previously and why did I switch?

In the past, I have used rack mount and blade solutions from Dell, HPE, and IBM. None of them have come close to the combination of performance and reliability that I get from Cisco.

How was the initial setup?

Initial UCS setup is complex, but once you have your service profiles (templates) configured, adding new blades and provisioning boot LUNs is very easy. Cloning options make it even more so.

What's my experience with pricing, setup cost, and licensing?

Nothing shocking. Very straightforward. Make sure you work with a vendor partner that can get you a substantial discount off of list pricing.

Which other solutions did I evaluate?

I have evaluated dozens of server solutions (Dell PowerEdge, IBM X series and HPE ProLiant) and many, many more.

What other advice do I have?

Do it and don’t look back. Just make sure you get strong in-house knowledge of UCS early on, unless you are willing to outsource UCS support to an MSP. It is easily learnable, but there is a bit of a curve to support the overall UCS infrastructure at the start.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Technical Sales Architect at a tech services company with 501-1,000 employees
Consultant
The UCS Manager uses a single pane of glass to monitor, deploy and provision servers.
Pros and Cons
  • "Since its UCS release in 2009, Cisco has extended the core functionality with Central, a tool for managing multiple domains"
  • "Right now, the market is rapidly transitioning to solid-state media and the Cisco options tend to be less varied and more expensive than a broader slate of products from HP, Dell or IBM."

What is most valuable?

Previously, the physical trappings of Cisco UCS, Intel chipsets, and UCS Manager were the most useful parts of this server system. As we embrace new Intel CPUs, chipsets, and memory, we are gaining added value from the original UCS design, which was a software construct based on XML APIs and a suite of code that is really starting to blossom as a central automation vehicle. It scales to deliver new features and extended integration with the suite of security, management, and performance offerings Cisco has added to its portfolio.

While UCS hardware leveraged standard x86 designs, the use of a single pane of glass to monitor, deploy, and provision servers was a huge timesaver. Since the UCS release in 2009, Cisco has extended the core functionality with Central, a tool for managing multiple domains; Director, an automation tool; and Performance Manager. In the past few years, Cisco has been on a buying binge for the data center, snapping up Cliqr, Lancope, AppDynamics, ContainrX, and several others that are being integrated with in-house analytics tools like Tetration and external tools like Turbonomic to provide an incredibly powerful, secure automation platform that will be the foundation of a future autonomic server environment with adaptive security and dynamic self-diagnosis.

Cisco UCS Manager is embedded in the cost of the fabric interconnects and is used to manage the servers, chassis, and fabric. It also serves as a link point for integrating tools like Director, Performance Manager, and Central. Future additions to the UCS tool set are extensions that Cisco is still working out how best to offer to customers, whether for straight purchase or via subscription.

I encourage UCS users and those considering UCS adoption to dig into the subscription offerings and get some clarity on how they grow over time. For example, as powerful security tools like Stealthwatch (Lancope) are added, what other systems are required, and how are those subscriptions managed? When analytics are required, do you need a gigabuck third-party offering, or are you going to jump on Cisco's Tetration bandwagon and roll your own? I push for simplicity with Cisco. However, you need good data for that conversation. Talk to the apps, dev, and ops teams about what is needed today, where you are going, and what future needs will become, versus what might be nice to have. Once you understand where you are going, you are in a much better position to negotiate with a relative newbie like Cisco on how best to get there.

Things will only get better going forward. UCS Manager is an XML construct. Everything is in software and can scale and expand with increased hardware capability, while other architectures require extensive effort on each end to develop hardware, then update and test a new rev of software for reliability and consistency.
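Because UCS Manager exposes that software construct through its XML API, requests can be generated from any language's standard tooling. A sketch in Python, using the published aaaLogin and configResolveClass methods of the UCS XML API; the credentials and cookie value are placeholders:

```python
# Sketch: building Cisco UCS Manager XML API requests with the standard
# library only. Method and attribute names (aaaLogin, configResolveClass,
# inHierarchical) follow the published UCS XML API; credentials and the
# session cookie below are placeholders, and no request is actually sent.
import xml.etree.ElementTree as ET

def build_login(username: str, password: str) -> bytes:
    """Serialize an aaaLogin request; UCSM answers with an outCookie on success."""
    el = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(el)

def build_class_query(cookie: str, class_id: str) -> bytes:
    """Serialize a configResolveClass query, e.g. classId='computeBlade'."""
    el = ET.Element("configResolveClass", cookie=cookie,
                    classId=class_id, inHierarchical="false")
    return ET.tostring(el)

login_xml = build_login("admin", "secret")
query_xml = build_class_query("cookie-123", "computeBlade")
```

In practice these payloads are POSTed to the UCS Manager endpoint over HTTPS; the point here is only that the whole management surface is plain, scriptable XML.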

The big challenge for Cisco today is that they built UCS Manager for Cisco CCIEs, anxious and able to have every knob and dial available to tweak. As a result, UCS Manager is overly complex relative to its functions and features, and a lot of effort could go into streamlining and simplifying the user experience. However, after eight years in the market, huge acceptance of its increased ROI over competitive offerings, and an appreciation for what UCS provides in OpEx reduction, you can hire experienced UCS engineers instead of having to develop and train them, only to see them hired away by a competitor.


How has it helped my organization?

I have a client who is currently managing 1500 servers with two people for a mission-critical retail operation. Previous operations teams using HP and IBM servers required 4x more people to manage the same number of servers.

What needs improvement?

This product comes from Cisco, who is fourth in the worldwide supply chain. That means options take a bit longer to get to their platform, as they insist on doing their own quality validations. Right now, the market is rapidly transitioning to solid-state media and the Cisco options tend to be less varied and more expensive than a broader slate of products from HP, Dell or IBM.

Cisco UCS offers a scalable platform with tremendous OpEx advantages. However, Cisco does not have the storage play that Dell (with Cisco partner EMC in its fold) and HP have. With their long position in the marketplace on the PC supply-chain side, both Dell and HP source and deliver high-volume, low-cost, advanced enterprise solutions from previously consumer-focused suppliers like Samsung and Toshiba. Examples like SanDisk's 3.8TB SSD used in EMC VxRail products and the newly announced Samsung 15TB and 6.4TB 1M-IOPS SSDs come to mind. While Cisco still carries the earlier versions of similar technology from Fusion-io, the next-gen, lower-cost options from Samsung will take a while to be approved and provided by Cisco.

Cisco’s internal testing and validation processes, which assure UCS Manager compatibility, mean they lag both HP and Dell in delivering the newest storage paradigms, specifically the breadth of the SSD and NVRAM offerings. Both these trends (high-performance, high-capacity SSD and NVRAM) offer major changes in architectural models. For organizations that seek to push the bleeding edge in testing and development, UCS will lag in delivery by a quarter or two. This has little impact on mainstream enterprises, who will not adopt before a technology is thoroughly vetted by industry “pioneers”, usually mid-sized shops that “took a chance” on introducing a new platform into their relatively modest environment.

For how long have I used the solution?

I have used UCS since 2008, when the product was first released.

What do I think about the stability of the solution?

No issues with stability that we have not seen across other systems. In particular, due to Cisco networking dominance, the focus is on drivers that work with their products for all the competitors as well. Networking is typically the server area with the most work to be done – but this is the strength of Cisco.

What do I think about the scalability of the solution?

UCS originally promised to support 40 chassis per fabric – that has now been scaled down to 20 – which limits users to domains of “just 160” physical server blades. This has not proven to be an issue or obstacle. The release of UCS Central provides software to manage an array of fabrics so we can scale to thousands of physical servers.

How are customer service and technical support?

Customer Service:

This is a foundational core of the Cisco Data Center automation experience and is a far more robust platform than currently provided by competitors. Customer service from Cisco and its partner community is thus on par with the same exemplary service provided by its TAC teams for business critical network deployments.

Technical Support:

Leveraging Cisco Network Technical Monitoring – the ability to call for a case and get resolution - is a process we are well aware of and very comfortable with.

Which solution did I use previously and why did I switch?

HP was the incumbent, displaced by UCS, which has proven easier to manage, scale, and use. The HP system just had too many pieces, and the iLO lock-in was a major cost that the UCS architecture leapfrogged.

How was the initial setup?

Initial setup requires some training due to its scale. It's like driving a car versus driving a truck: you use the same driving skills, but there are a few things to be aware of. One of the nuances with UCS is that it is a fully abstracted, scalable environment, so you can set up your domain to accommodate a single server or 160 servers. This requires adopting standard naming conventions, IP addressing, etc. Once those are established, like a truck versus a car, you can haul a lot more freight with UCS.
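A small worked example of that planning step in Python. The subnet, the naming scheme, and the 8-blades-per-chassis assumption are illustrative choices, not Cisco defaults:

```python
# Sketch: the kind of up-front planning a UCS domain rewards -- a consistent
# naming convention and a pre-carved management IP range that can grow to the
# full 20-chassis / 160-blade domain. Subnet and names are examples only.
import ipaddress

mgmt_pool = list(ipaddress.ip_network("10.10.0.0/24").hosts())

def blade_plan(chassis: int, slot: int):
    """Deterministic name and management IP for a given chassis/slot."""
    name = f"ucs-c{chassis:02d}-b{slot:02d}"
    ip = mgmt_pool[(chassis - 1) * 8 + (slot - 1)]   # assume 8 blades/chassis
    return name, str(ip)

first = blade_plan(1, 1)    # first blade of the first chassis
last = blade_plan(20, 8)    # last blade of chassis 20 (blade #160)
```

Deciding these conventions once, before the first service profile is created, is what lets the domain later absorb new chassis without renumbering anything.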

What's my experience with pricing, setup cost, and licensing?

Obviously, the worst-kept secret with all vendors is to negotiate as close to fiscal year-end as possible. For Cisco, the year-end is July 31st, so they are well positioned for organizations deploying summer projects. The other issue is the move to bundle licenses. That is great for highly dense environments like a data center, but it makes much more sense for individual licenses for distributed environments like hundreds of storefronts or clinics distributed across a wide geographic area.

Which other solutions did I evaluate?

As stated earlier, we had HP. As a marquee client, they fought hard on equipment price to maintain their position. However, the decision was based on OpEx, which greatly favored UCS. Once we had a few systems in place and people trained up on their use, it was not long before HP was displaced. Because both the IBM and Dell management architectures were quite similar, we looked and got a few quotes, but did not see anything to justify further evaluation resources.

What other advice do I have?

The biggest issue is automation: how to move the mundane tasks from people to machines, and alert filters to improve management productivity and reduce overhead. Cisco is deploying a suite of products (Central, Director, Performance Manager, etc.), as are IBM, HP, and even Dell. However, UCS Manager provides such a robust base that the ability to scale and realize benefits is greater.

At the end of the day, the UCS product requires planning before just jumping in, due to its ability to scale. As a user, you need to evaluate naming conventions, IP addressing models and so forth – think about the entire enterprise as opposed to a single server or rack of servers.

Use very good hardware and innovative network elements, such as the VIC 10Gb cards that allow for traffic sequestration and partitioning across multiple virtual channels in a single link, and of course UCS Manager. I actually have a patent on similar IP from when we started blade server systems with an acquisition by Intel. The direct spin-off was the IBM BladeCenter, but due to the IBM investment in Tivoli, they never used our central management system. Cisco took a network-centric rather than compute-centric perspective as they embarked on their server designs and, with a clean sheet of paper, evolved a centralized manager for deployment and systems management that enables huge gains in management productivity.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Senior System Specialist at Burns & McDonnell
Real User
We use it in the converged infrastructure to push out profiles, firmware, and console access.

What is most valuable?

We are using it in the converged infrastructure with the common UCS Manager to push out:

  • Profiles
  • Firmware
  • Console access
  • VLAN configurations
  • Troubleshooting

How has it helped my organization?

Running in the VCE Vblock gives us the flexibility to deploy a large virtual workload of servers. We use a mix of mainly Windows servers and a few Linux appliances.

I had one blade server fail. The replacement was up and operating quickly after the blade server was swapped over.

What needs improvement?

Smaller locations are held up where a pair of converged infrastructure interfaces is required for redundancy.

To deploy a standard Cisco blade system with redundancy for maintenance and reliability, you have to purchase two converged infrastructure 6296 or 6396 interfaces/switches, the chassis, uplink interfaces, plus the blade servers to drop into one or more blade chassis. From my point of view, that initial cost is hard to justify for a small regional office, where we usually have the equipment in a dedicated network closet with the switches and servers.

Cisco does now have a “Mini” solution, where they have put the converged infrastructure and management into the chassis via the slots where the uplink interfaces normally install. From what I have read (but have not used or experienced), this setup can support multiple blades, and even external C-Series chassis in a converged environment, all sharing some form of external storage.

Most of my company's need is for data distribution from file-sharing servers, a domain controller, and possibly a local database server. I can cover all of this with one 2U server from another company, into which I can cram 3-6 TB of DAS/RAID disks for file storage, with enough RAM and CPU cores in two sockets to cover my compute/VM needs.

My demands for servers in most remote sites are different than most. Our end-users all have either a laptop or powerful CAD workstation to do their engineering on. We don’t do VDI via VDI terminals. We do use VDI for engineering apps in 2D on our VBlock and in C-Series UCS servers with NVidia shared video cards for CAD / 3D rendering in our VDI pools.


For how long have I used the solution?

The original M2 servers were in operation for more than five years. The new M4s have been up for under a year.

What do I think about the stability of the solution?

There was only one server failure during my use of 24 blades in my old system. There were 20 blades in my new/replacement implementation. In reality, this is a small installation.

What do I think about the scalability of the solution?

We have not encountered any scalability issues. We added blades and upgraded memory along the way. We had open slots in the chassis and added additional blades. We upgraded the RAM in existing systems for more VM headroom.

How are customer service and technical support?

There were no issues with technical support, as most was handled via VCE.

Which solution did I use previously and why did I switch?

We had standalone 2U servers from HPE that were tied to a SAN for shared storage.

Limited memory expansion was what we had previously. We did dual Vblock installations to absorb the multiple little clusters of VM hosts that we had on separate servers.

We still use HPE servers as standalone VMware hosts in smaller sites.

The newer generation HPE servers have very high disk capacity servers where we can get 3 TB of disk in a 2U host.

How was the initial setup?

The Vblock system was installed and operational at handover. We had to provide IP ranges for servers, management interfaces, etc. However, the VCE installation teams did the actual configurations of the hosts, SAN, and network connectivity.

What's my experience with pricing, setup cost, and licensing?

Although I was not completely involved in the pricing or licensing costs, I do have to monitor licensing allocation of VMware CPU licenses.

I know that Cisco licenses the number of ports and uplinks on various interfaces inside the Vblock. However, we have not done any upgrades beyond our initial purchase of the replacement Vblocks to run into any new licensing additions.

Which other solutions did I evaluate?

We looked at other considerations, such as BladeSystem from HPE and standalone server stacks, at least five years ago when we purchased the original set of Vblocks.

It was the only integrated system that fit our needs. It filled the requirement for new computing power, an updated network, and SAN storage. It also filled the expansion possibilities of a data center in a box with almost one point of contact for support.

What other advice do I have?

Look closely at your needs.

  • Do you need more computing power and memory or storage expansion possibility?
  • Do you need redundancy in installation sites HA/DRS?
  • If you do HA/DRS, does it need to be near real-time disk writes, or more managed recovery/failover?
Disclosure: My company has a business relationship with this vendor other than being a customer: We are one of the few that had the arrangement to purchase the Vblock directly from VCE and not via a third-party VAR, as when the original systems were put out for bid. After we had done all the specification work with the VCE configuration team, the VAR tried to tack on a percentage for passing the order from them to VCE, and it almost canceled the whole system.
PeerSpot user
it_user429375 - PeerSpot reviewer
Technical Solutions Architect at a tech services company with 501-1,000 employees
Real User
It changed our mindset to abstract the server, making it a stateless object for workloads.

What is most valuable?

Why pick a UCS blade over a Dell, HPE or Lenovo system? The answer depends on what application I need to run. If I want a small-scale, 3-4 server application space in a localized area, I want a rack mount, for a price advantage. If I need a larger-scale virtualized environment, I prefer blades, and for the lowest OpEx as I scale out, I find Cisco's UCS lets me manage a larger footprint with fewer people.

How has it helped my organization?

Previously, we focused on CPUs and servers, relying on the Intel cadence for change. With Cisco UCS, we became network-centric and changed our mindset to abstract the server, making it a stateless object for workloads. Managing blade servers logically lets us take full advantage of Moore's law – which started with 640 cores per fabric and now provides 5760 cores for B200-M4 blades in our standard 20 chassis pods; more workloads per pod, and fewer people to manage them. This has significantly improved our OpEx costs.

What needs improvement?

Cisco is behind as far as SSD qualifications and options allowed, relative to other vendors, but that is in keeping with their philosophy of a stateless working environment. If I add a unique storage attribute to my blades, I encumber it with a state that requires manual intervention to move around.

SSD evolution is coming hard and fast with higher density, lower cost options popping up each quarter. New form factors like M.2, U.2, Multi-TB, NVMe and now signs of Optane are emerging across a range of price points turning the once stolid server domain into the wild west. Dell and HPE have field qualification processes with vendors such that very soon after new products are shipping, they are available for use in their servers.

The process is slower for UCS, as Cisco must perform extensive validation to assure compatibility with UCS Manager. Does the device respond in time to the blade controller logic? Are there issues with time-outs for UCS Manager that might cause either type 1 or type 2 fault errors? Hence, the array of new SSD products is more robust with HPE and Dell than with Cisco.

This goes to the core difference in architectural philosophy between the Legacy server vendors and Cisco that calls for a stateless environment leveraging networked storage so that any workload can be readily moved to a new server as a more powerful system is deployed, or a fault occurs on the old server. If an HPE blade has a local boot option with a new 1TB SSD – then you cannot move that workload remotely to a new 2-socket 36-core blade. You have to have a technician go on site to physically pull the boot SSD from the older blade and insert it into a new blade, then confirm it got the right one. This adds labor cost and slows down the upgrade process – increasing OpEx costs to manage the legacy infrastructure.

For how long have I used the solution?

We have used this since inception in 2009.

What was my experience with deployment of the solution?

The change in mindset from building stateful servers to stateless devices managed across an intelligent fabric with logical abstraction took about a month for operations to come up to speed on; no looking back since.

What do I think about the stability of the solution?

We went through the original teething pains of any new system. In particular, once we had our operational epiphany on what the potential was, we were limited by how fast features could be added to UCS Manager. With XML extensions, UCS Central (a manager of managers) and UCS Director (automation), we have enough on our plate.

What do I think about the scalability of the solution?

Early on, we encountered scalability issues – UCS was to support 40 chassis – but it only did 10, then increased to 20. 20 chassis (160 servers) is more than enough as Moore's law, increased CPU core count and higher network bandwidth all made for the ability to place more workloads in a pod than we were comfortable with. So, it rapidly caught up.

How are customer service and technical support?

Customer Service:

Customer service is excellent.

Technical Support:

Technical support is excellent. Cisco understands what is needed and it plays to their networking strengths. Ironically, most of my previous rack system problems came down to network constraints as we ran into switch domain boundaries, VLAN mapping issues and so forth; the basic blocking and tackling for Cisco.

Which solution did I use previously and why did I switch?

We previously used HPE. They had a good blade system and good racks, but their iLO is expensive and gets very complex at scale.

How was the initial setup?

Initial setup was straightforward. More time was spent educating us on UCS Manager, the logical tool, service profiles and the other tools of automated provisioning than physical connectivity, which is child's play.

What about the implementation team?

We bought through a vendor, who showed us how to set up and some tricks of the trade to short circuit the learning process. Then, after a few months, we were cruising at scale.

What was our ROI?

ROI is not something we share, but I will note that we now use 2 persons to manage 1600 servers in two remote data centers. This is across 25 domains that can all be seen at once and, as alerts come in, drilled down and addressed from a web console.


Which other solutions did I evaluate?

Before choosing we also evaluated HPE, Dell, and IBM. We all found that, aside from the physical differences, they had the same architecture and OpEx; external management; local switch infrastructure in each chassis; complex routing rules when scaling domains; and challenges in provisioning new units. Once we learned the "UCS Way," we were more efficient.

Disclosure: My company has a business relationship with this vendor other than being a customer: My company and Cisco are partners.
PeerSpot user
Juan Dominguez - PeerSpot reviewer
Juan DominguezSenior Solutions Architect & Consultant at ZAG Technical Services
Top 20Consultant

Cisco UCS is definitely a system that overcomes the competition from many angles. Its single-pane management and policy-driven format are at the top of the field. I have designed and deployed HP and Dell; by far, Cisco UCS is the most flexible and scalable in my opinion. Excellent content in your write-up.

it_user413451 - PeerSpot reviewer
Infrastructure Consultant at a tech consulting company with 501-1,000 employees
Consultant
With the virtual NICs/HBA, we can redesign the IO schema without upgrading hardware. Configuring the hardware platform could be better.

What is most valuable?

  • Virtual NICs/HBA
  • Nexus FC/Ethernet convergence

How has it helped my organization?

Virtual NICs/HBAs allow us to completely redesign the I/O schema (network and storage) without needing to upgrade or acquire additional hardware and controllers.

What needs improvement?

  • Hardware platform configuration

For how long have I used the solution?

I have used this solution for three years. We are currently using Cisco UCS, Chassis Model UCS5108, I/O Modules UCS-IOM-2208XP, Fabric Interconnect Model UCS-FI-6248UP, and Cisco Nexus 5548.

What do I think about the stability of the solution?

We had some issues with certain NX-OS versions on the Fabric Interconnects.

What do I think about the scalability of the solution?

I have not encountered any scalability issues.

How are customer service and technical support?

Technical support is 7/10.

Which solution did I use previously and why did I switch?

I did not previously use a different solution, but I have prior experience with HPE BladeSystem. UCS looks more flexible and powerful to me.

How was the initial setup?

The initial setup was complex; UCS deployment and management require deep knowledge of the platform.

What's my experience with pricing, setup cost, and licensing?

It is expensive (like all converged platforms). From a cost perspective, UCS must be evaluated seriously in order to determine if the company requirements justify the acquisition. It is important to take into account that UCS is an end-to-end solution. Integration with Cisco Nexus is a must.

Which other solutions did I evaluate?

Before choosing this product, I did not evaluate other options, but in the convergence market, UCS stands out as a clear option to consider.

What other advice do I have?

Training, training, training and planning, planning, planning!

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
PeerSpot user
Technical Head at a tech services company with 51-200 employees
Real User
The most valuable feature is the UCS Manager which integrates everything.

What is most valuable?

The most valuable feature is the UCS Manager which integrates everything.

Both Java- and HTML5-based admin consoles are now available.

How has it helped my organization?

Its provisioning and ease of management have improved our functioning.

What needs improvement?

Power options for setting up Grid redundancy need further customization.

N+1 power supply redundancy is not applicable in some data centers.

For how long have I used the solution?

I've used it for 3 years.

What was my experience with deployment of the solution?

We've had no issues with deployment.


What do I think about the stability of the solution?

We've had no issues with stability.

What do I think about the scalability of the solution?

It's highly scalable. With everything set up, downtime is never an issue when adding blades.

How are customer service and technical support?

Cisco has been very supportive of us, as have the partners.

How was the initial setup?

It was a bit complex, since using the Fabric Interconnects was new to us.

What about the implementation team?

The vendor team was very helpful and trained us to use and manage the system.

What other advice do I have?

We used a VersaStack solution, so compute and storage were a breeze once setup was done.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user