Examples of the 104,000+ reviews on PeerSpot:

Samuel Rothenbuehler - PeerSpot reviewer
CTO at Axians Amanox
Real User
Top 5 Leaderboard
Dec 30, 2024
What you might not know about Nutanix that makes it so unique
Pros and Cons
  • "Nutanix has several unique capabilities to ensure linear scalability."
  • "There is a need is to be able to consume Nutanix storage from outside the cluster for other, non-Nutanix workloads."

What is our primary use case?

As a systems integrator, we have used Nutanix on a daily basis since 2013 as our main, strategic, and only infrastructure solution for virtualization and its related storage component. We can cover most use cases on Nutanix today, including VDI, server virtualization, big data, and mission-critical workloads.

How has it helped my organization?

For a systems integrator like us, Nutanix offers a highly standardized solution that can be deployed quickly compared to legacy three-tier, generation-one converged, and most competing hyper-converged solutions. This allows us to move fast with a small team of architects and implementation specialists on large projects.

What is most valuable?

Some years ago, when we started working with Nutanix, the solution was essentially a stable, user-friendly, hyper-converged solution offering a less feature-rich version of what is now called the distributed storage fabric. This is what competing solutions typically offer today, and for many customers it isn't easy to understand the added value Nutanix offers in comparison to other approaches (I would argue these capabilities should in fact be a requirement).

Over the years, Nutanix has added lots of enterprise functionality like deduplication, compression, erasure coding, snapshots, (a)sync replication, and so on. These are very useful, scale extremely well on Nutanix, and offer VM-granular configuration (if you don't care about granularity, apply them cluster-wide by default). But it is other, perhaps less obvious features (design principles, really) that should interest most customers a lot:

Upgradeable with a single click

This was introduced a while ago, around version 4 of the product. At first it was mainly used to upgrade the Nutanix software (Acropolis OS, or AOS), but today we use it for pretty much anything: the hypervisor, the system BIOS, disk firmware, and even sub-components of AOS. There is, for example, a standardized system check (around 150 checks) called NCC (Nutanix Cluster Check) which can be upgraded throughout the cluster with a single click, independently of AOS. The one-click process also supports granular hypervisor upgrades such as an ESXi offline bundle (which could be a patch release). The Nutanix cluster then takes care of the rolling reboot, vMotion, and so on in a fully hyper-converged fashion (e.g. it never reboots multiple nodes at the same time).

If you consider how this compares to a traditional three-tier architecture (including generation-one converged), you have a much simpler and well-tested workflow, and it is what you use by default. It also runs automatic prechecks and ensures that whatever you are updating is on the Nutanix compatibility matrix. It is worth mentioning that upgrading AOS (the complete Nutanix software layer) doesn't require a host reboot, since AOS isn't part of the hypervisor but is installed as a VSA (a regular VM). It also doesn't require any VMs to migrate away from the node during or after the upgrade. I love that fact, because bigger clusters tend to have hiccups with vMotion and similar techniques, especially if you have 100 VMs on a host, not to mention the network impact.
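The rolling, one-node-at-a-time workflow described above can be sketched roughly as follows. This is a hypothetical illustration in Python; the class and function names are mine and do not reflect Nutanix's actual one-click upgrade implementation or API:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    versions: dict = field(default_factory=dict)
    rebooted: int = 0

    def evacuate_vms(self):
        # Stand-in for vMotion/live migration; AOS upgrades skip this entirely.
        pass

    def upgrade(self, component, target):
        self.versions[component] = target

    def reboot(self):
        self.rebooted += 1

def rolling_upgrade(nodes, component, target, compatibility, needs_reboot):
    """Upgrade one node at a time so no two nodes are ever down together."""
    # Precheck: refuse anything not on the compatibility matrix.
    if target not in compatibility.get(component, ()):
        raise RuntimeError(f"{component} {target} not on compatibility matrix")
    for node in nodes:  # strictly sequential: a rolling upgrade, never parallel
        if needs_reboot:
            node.evacuate_vms()  # hypervisor/BIOS/firmware upgrades only
        node.upgrade(component, target)
        if needs_reboot:
            node.reboot()
```

The key design point the sketch captures is that a hypervisor or BIOS upgrade evacuates and reboots nodes one by one, while an AOS-style upgrade (running in a VSA) touches no host at all.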

Linearly scalable

Nutanix has several unique capabilities to ensure linear scalability. The key ingredients are data locality, a fully distributed metadata layer, and granular data management. The first is important especially when you grow your cluster. It is true that 10G networks offer very low latency, but that overhead counts towards every single read IO, so you should consider the sum of them (and you get a lot of read IOs out of every single Nutanix node!). If you look at current developments in persistent flash storage, you will see that network overhead will only become more important going forward.

The second key point is the fully distributed metadata database. Every node holds a part of the database (mostly the metadata belonging to its current local data, plus replica information from other nodes). All metadata is stored on at least three nodes for redundancy: each node writes to its neighbor nodes in a ring structure, and there are no metadata master nodes. No matter how many nodes your cluster holds (or will hold), there is always a fixed number of nodes (three or five) involved when a metadata update is performed (a lookup/read is typically local). In Big O notation, the per-update cost is effectively O(1) with respect to cluster size, and since there are no master nodes, there are no bottlenecks at scale.

The last key point is that Nutanix acts as an object store (you work with so-called vDisks), but the objects are split into small pieces (called extents) and distributed throughout the cluster, with one copy residing on the local node and each replica residing on other cluster nodes. If your VM writes three blocks to its virtual disk, they will all end up on the local SSD, and the replicas (for redundancy) will be spread out in the cluster for fast replication (they can go to three different nodes, avoiding hot spots). If you move your VM to another node, data locality (for read access) is automatically rebuilt, of course only for the extents the VM currently uses. You might think you wouldn't want to migrate those extents from the previous node to the now-local node, but since each extent has to be fetched anyhow, why not save it locally and serve it from the local SSD going forward instead of discarding it and reading it over the network every single time? This is possible because the data structure is very granular. If you had to migrate the whole vDisk (e.g. VMDK) because that is how your storage layer stores its underlying data, you simply wouldn't do it (imagine vSphere DRS migrating your VMs around while your cluster constantly migrates whole VMDKs).

If you wonder how all this matters when a rebuild (disk failure, node failure) is required, there is good news too. Nutanix immediately starts self-healing (rebuilding lost replica extents) whenever a disk or node is lost. During a rebuild, all nodes are potentially used as sources and targets. Since extents are used (not big objects), data is evenly spread out within the cluster. A bigger cluster increases the probability of a disk failure, but rebuilds are also faster because more nodes participate. Furthermore, a rebuild of cold data (on SATA) happens directly on all remaining SATA drives within the cluster (it doesn't touch your SSD tier), since Nutanix can directly address all disks and disk tiers in the cluster.
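The ring-style replica placement described above can be illustrated with a small sketch. This is purely illustrative (the function name and hashing scheme are mine, not Nutanix's actual metadata implementation); the point it demonstrates is that an update always touches a fixed number of nodes, however large the ring grows:

```python
import hashlib

def ring_replicas(nodes, key, rf=3):
    """Place a metadata entry on `rf` consecutive nodes of a ring.

    However many nodes the cluster has, an update always touches exactly
    `rf` nodes, so the per-update cost does not grow with cluster size.
    """
    ring = sorted(nodes)  # deterministic ring order (real systems hash node IDs)
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(ring)
    return [ring[(start + i) % len(ring)] for i in range(rf)]
```

Whether the cluster has 4 nodes or 32, the same key maps to exactly three consecutive ring positions, which is why there is no master-node bottleneck as the cluster scales.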

Predictable

Thanks to data locality, a large portion of your IOs (all reads, which can be 70% or more) are served from local disks and therefore only impact the local node. Writes are replicated for data redundancy, but they have second priority compared to the destination node's own local writes. This gives you a high degree of predictability: you can plan with a certain number of VMs per node and be confident that this will be reproducible when adding new nodes to the cluster. As I mentioned above, the architecture doesn't constantly read all data over the network, nor does it rely on metadata master nodes to track where everything is stored. Looking at other hyper-converged architectures, you won't get that kind of assurance, especially as you scale your infrastructure and the network can't keep up with all the read IOs and metadata updates crossing it. With Nutanix, a single VM can't take over the whole cluster's performance. It will have an influence on other VMs on the local node, since they share the local hot tier (SSD), but that's much better than today's noisy-neighbor and IO-blender issues with external storage arrays. If you have too little local hot storage (SSD), your VMs are allowed to consume remote SSD, with secondary priority compared to the other node's local VMs. That means no more data locality, but it is better than falling back to local SATA. Once you move some VMs away or the load gets smaller, you automatically get your data locality back. As described further down, Nutanix can tell you exactly how much of a virtual disk's data is local (and possibly remote), so you get full transparency there as well.

Extremely fast

I think it is well known that hyper-converged systems offer very high storage performance. There is not much to add here except that it is extremely fast compared to traditional storage arrays. And yes, an all-flash Nutanix cluster is as fast as (if not faster than) an external all-flash storage array, with the added benefit that you read from your local SSD and don't have to traverse the network/SAN (that, and of course all the other hyper-convergence benefits). Performance was the area where Nutanix focused most when releasing 4.6 earlier this year. The great flexibility of working with small blocks (extents) rather than whole objects on the storage layer comes at the price of much greater metadata complexity, since you need to track all these small entities throughout the cluster. To my understanding, Nutanix invested a great deal of engineering to make its metadata layer efficient enough to even beat the performance of an object-based implementation. As a partner, we regularly conduct IO tests in our lab and at our customers, and it was very impressive to see all existing customers gain 30-50% better performance by simply applying the latest software (using a one-click upgrade, of course).

Intelligent

Since Nutanix has full visibility into every single virtual disk of every single VM, it also has lots of ways to optimize how it deals with our data. This goes beyond the simple random-versus-sequential way of processing data; for example, it can prevent one application from taking over all system performance while others starve. During a support case, we can see all sorts of detailed information (I have a storage background, so I can get pretty excited about this): where exactly your applications consume their resources (local or remote disks), which block sizes are used, random versus sequential patterns, working-set size (hot data), and lots more, all with single-virtual-disk granularity. At some point they were even considering a tool that would look inside your VM and tell you which files (actually at sub-file level) are currently hot, because the data is there and just needs to be visualized.

Extensible

If you take a look at the upcoming functionality I write about further down, you can see just some examples of what is possible due to the very extensible and flexible architecture. Nutanix isn't a typical infrastructure company; it is more comparable to how Google, Facebook, and others engineer and build their data centers. Nutanix is a software company following state-of-the-art design patterns and modern frameworks, something I missed when working with traditional infrastructure. For about a year now they have been heavily extending what they call the app mobility fabric, which sits on top of the distributed storage fabric I mentioned above. This layer allows workloads to move between local hypervisors (currently KVM<->ESXi) and soon between private and public clouds as well. You can, for example, use KVM-based Acropolis Hypervisor clusters for all your remote offices to get rid of high vSphere licensing costs without losing the main functionality, and replicate the VMs to a central vSphere-based cluster. The replicated VMs can then be started on vSphere, and Nutanix takes care of the conversion. The hypervisor becomes a commodity, just like your x86 servers.

Visionary

When Nutanix released version 1 of its hyper-converged product in 2011, it was a great idea and a good implementation of it. Most people in IT didn't, however, expect that it would become the approach with the strongest focus throughout the industry. Today the largest players in IT infrastructure push their hyper-converged products and solutions more than any other, and while there are still other, less radical approaches (e.g. external all-flash storage), it is foreseeable that they will matter less and less for the bulk of IT projects. Nutanix is the leader in the hyper-convergence space, but having converged storage within your x86 commodity compute layer is by far not the only thing Nutanix has done since then. Their own included hypervisor is a pretty interesting alternative for anyone who doesn't want to spend lots of dollars on vSphere licenses. While it will not yet suit all of your use cases, you might actually be surprised at how much of the vSphere functionality you care about (distributed switch, host profiles, guest customization, HA, etc.) is already included out of the box, with the added value of greatly reduced complexity (yes, I am calling vSphere complex compared to Nutanix Acropolis Hypervisor).

Standardized

Since Nutanix is purchased solely as an appliance solution (even though they only make the software on top), you are always dealing with a pretested, preconfigured solution stack. You do have a choice when it comes to memory, CPU, disk, and GPU, and you get to select from three hardware providers (Nutanix directly, Dell, and Lenovo), but they are all predefined options. This makes it possible to guarantee a high level of stability and fast resolution of support cases. As a Nutanix partner this is worth a lot, since the experience we gain with one customer is valid for any other customer as well. It also allows us to be very efficient and consistent when implementing or expanding the solution, since we can put standardized processes in place to reduce possible issues during implementation to a minimum. Once the Nutanix hardware is rack-mounted at the customer, the software automatically installs the hypervisor of choice (KVM, Hyper-V, or ESXi) and configures all necessary variables (IP addresses, DNS, NTP, etc.). This is done by the cluster itself; the nodes stage each other over the local network.

And last but not least: With outstanding support

The support we get from Nutanix is easily the best of all the vendors we work with. If you open a case, you speak directly to an engineer who can help quickly and efficiently. Our customers sometimes open support cases directly (not through us), and so far the feedback has been great. One interesting aspect is the VMware support we receive from Nutanix, even though the licenses are not sold by them directly. They analyze all the ESXi/vCenter logs we send them. If the bug isn't storage related, we also open a case with VMware to continue investigating. Nutanix also has the ability to engage VMware by opening a support case themselves (Nutanix->VMware), which we have seen on multiple occasions. The last case we witnessed was a non-responsive hostd process (vCenter disconnects), where the first log analysis by Nutanix pointed to a possible issue with the Active Directory integration service. We then opened a VMware case which was handled politely, but after two weeks, when there wasn't much progress beyond collecting logs and more logs, we remembered what the Nutanix engineer had suggested, and there was our solution: disabling Active Directory integration did the trick. I'm not saying VMware support isn't good as well, but we are always glad that Nutanix takes a look at the logs too, because at the end of the day you are just happy if you can move on and work on other things, not support cases.

Note: I strongly encourage you to take a look at the Nutanix Bible (nutanixbible.com) where all mentioned aspects and many more are described in great detail.

What needs improvement?

Nutanix has the potential to replace most of today's traditional storage solutions. These include classic hybrid SAN arrays (dual- and multi-controller), NAS filers, and newer all-flash arrays, as well as object, big data, and similar use cases.

For capacity, it usually comes down to the price for large amounts of data where Nutanix may offer higher-than-needed storage performance at a price point that isn't very attractive. This has been addressed in the first step using storage-only nodes which are essentially intelligent disk shelves (mainly SATA) with their own virtual SDS appliance preinstalled. Storage nodes are managed directly by the Nutanix cluster (the hypervisor isn't visible and no hypervisor license is necessary). While this is going in the right direction, larger storage nodes are needed to better support "cheap, big storage" use cases. For typical big data use cases today's combined compute and storage nodes (plus optionally storage-only nodes) are already a very good fit! 

The Nutanix File Services (a filer with Active Directory integration) are a very welcome addition that customers get with a simple software upgrade. Currently this is available as a tech preview to all Acropolis Hypervisor (AHV) customers and will soon be released for ESXi as well. This is one example of a service running on top of the Nutanix distributed storage fabric, well integrated with the existing management layer (Prism), offering native scale-out capabilities and one-click upgrades like everything else. Customer demand for a built-in filer is big; they no longer want to depend on legacy filer technology. We are looking forward to seeing this technology mature and offer more features over the coming months and years.

Another customer need is to be able to consume Nutanix storage from outside the cluster for other, non-Nutanix workloads. These could include bare-metal systems as well as unsupported hypervisors (e.g. Xen Server). This functionality (called Volume Groups) is already implemented and available for use by local VMs (e.g. a Windows Failover Cluster quorum) and will soon be qualified for external access (it already works from a technical point of view, including MPIO multi-pathing with failover). It will be interesting to see whether Nutanix will allow active-active access to such iSCSI LUNs (as opposed to the current active-passive implementation) in upcoming releases. Imagine upgrading your Nutanix cluster (again, a simple one-click software upgrade) and all of a sudden having a multi-controller, active-active (high-end) storage array. (Please note that I am not a Nutanix employee and that these statements describing possible future functionality are to be understood as speculation on my side which might never become officially available.)

For how long have I used the solution?

We have been using this solution for three to five years.

Which deployment model are you using for this solution?

Hybrid Cloud

Disclosure: My company has a business relationship with this vendor other than being a customer. We have been a partner for over ten years, based in Switzerland. The author of this review previously worked for five years at a large storage vendor as a Systems Engineer specializing in storage, virtualization, and VCE converged infrastructure.
PeerSpot user
Director - Cloud Architecture at a computer software company with 10,001+ employees
Real User
Top 10 Leaderboard
Oct 8, 2021
It's easy to develop, maintain, and manage, and helps with task mining and process mining
Pros and Cons
  • "With UiPath in place, we are moving away from training people on the core applications, which are pretty complex. We've built a headless frontend, so it's really easy for anybody to join our company and be productive."
  • "Every human-dependent task has been automated and is now entirely done by a robot, fostering a culture change where people proactively find ways to automate hours of daily work so they can focus more on the company's strategic initiatives, creating a win-win for the company and the employees."
  • "Process mining is something that we are doing, but I think it needs to be more mature. UiPath is still new to process mining. They are decent in SAP ERP when it comes to process mining. But compared to Oracle's E-Business Suite and all, they are still pretty new."
  • "Process mining is something that we are doing, but I think it needs to be more mature."

What is our primary use case?

There are multiple use cases. For example, we have automated the shipping process for the operations department, including shipping repairs, production-line assembly, and line automation using attended robots.

Using unattended bots, we have automated engineering, finance, accounts receivable, invoicing, invoice validation, and reconciliation processes. For sales, we have automated order management and order inquiries. We also have proactive monitoring bots for IT, so daily IT jobs are monitored and the robot automatically takes action by triggering tickets in ServiceNow.

How has it helped my organization?

Every human-dependent task has been automated and is now entirely done by a robot. Still, the people who formerly did the job are part of the process. They check if the robot is doing it right or wrong. And then, we keep on retraining the robot for any new exceptions that come up, so we are fully equipped to handle new scenarios. So this has fostered a culture change within the organization where people are coming up with ways to automate what they were spending several hours a day doing. Once these tasks are automated, they can focus more on the company's strategic initiatives. And that's a win-win for the company as well as the employees.

With UiPath in place, we are moving away from training people on the core applications, which are pretty complex. We've built a headless frontend (touchless ERP), so it's really easy for anybody to join our company and be productive. A robot handles shipping, repair, and the assembly line instead of a human. Plus, these processes are executed with high accuracy, removing human error from the equation. That's how we are mitigating a lot of our human-related problems.

Furthermore, the culture that we've built has helped in terms of the principle of retire, recruit or resign. So when someone quits, retires, or we need to recruit, we first determine if we can automate their job functions. The RPA governance team tries to digitize the person's tasks as well as their knowledge and experience in the organization.

What is most valuable?

The attended bots have helped a lot, and we have leveraged the concept of touchless ERP. We can build a UI or a headless frontend, like a popup, that interacts with a human, and then the robot carries out its function. So the person doesn't need to be trained on any core applications like Oracle ERP or customer applications, which would entail additional screening responsibilities and invite the normal human tendency to make mistakes.

So we have built a headless frontend that is technology agnostic. And it interacts with the end-user in a user-friendly, straightforward fashion. So it's only four or five steps, whereas it was formerly 30 or 40 steps. And that's how this way of implementing RPA has helped the customer heavily. 

What needs improvement?

Process mining is something that we are doing, but I think it needs to be more mature. UiPath is still new to process mining. They are decent in SAP ERP when it comes to process mining. But compared to Oracle's E-Business Suite and all, they are still pretty new. And they need to come up with some model templates for how to adopt process mining within your organization. For example, they could use some accelerators to help onboard customers. Right now, it's not that easy. So process mining is something that they need to improve on for Oracle ERP.

UiPath has an excellent strategic roadmap when it comes to features. One thing they could add, if it's not already in the works, is AI Fabric, and if UiPath could bundle it into the primary licensing with Orchestrator, that would be great. Concerning artificial intelligence and NLP/machine learning, we would like to remove the current limitation around having a chatbot integrate with and synchronously interact with the robot in real time.

The ability to listen and reply in real time would be good to have. They are already doing it, but the ease of implementation and the integration options need to be more flexible, and the license should be bundled into their current offering so that it's more cost-effective.

Another thing is UiPath's OCR: we hope it reaches the same quality as FlexiCapture, a separate solution by ABBYY. It would be awesome if UiPath had native OCR functionality with the same level of maturity as FlexiCapture.

For how long have I used the solution?

I've been using UiPath for three years now.

What do I think about the stability of the solution?

UiPath Cloud Orchestrator and our on-premises bots have been 100% stable; in the last three years we have faced zero product-related issues. The platform is rock solid, stable, and robust, and upgrades have been very easy to do, with zero impact on existing running processes.

We have around 34 RPA processes built using UiPath, with a very good execution success rate of 94-96%; the remaining 4-6% of failures/exceptions are due only to data or network issues.

What do I think about the scalability of the solution?

Scalability is great with UiPath. We have a distributed load for robots, so the long-running processes run on a different unattended bot while the fast-running and business-critical bots run on a dedicated VM. That's how we have distributed the load for scalability. If you want to add a new bot, it should take less than 30 minutes to host or create a new machine, and then the deployment of the package or code takes less than 30 seconds. So it's hardly 30 minutes in total.

About 20 attended bots are deployed, some of them across two or three shifts. In total, around 40 people are using attended bots, and there are only two unattended bots. So UiPath is running more than 30 processes for 30 different people. We also have around eight business analysts spread out across various departments who use this too. So you're talking about 50-plus people connected right now, purely business analysts or staff on the shop floor, like operators and repair technicians. And the processes we have developed are helping around 60-plus business users.

So technically, with the deployment that we have done (around 24 robots), the utilization of the unattended robots is approximately 52 percent. What we have is sufficient for at least the next year; we can develop more processes with the existing licenses that we hold. So expansion-wise, we'll definitely expand more on the process side.

How are customer service and support?

UiPath technical support is excellent. They have a good knowledge base, and the support team resolves issues fast. Once you create a ticket on cloud technical support, it's pretty easy. You get an email, start communicating, and then it is solved in a day or so.   

The UiPath licensing and product teams have been beneficial. They gave us an option of a free trial where you can download it free and use it for 60 or 90 days. And that trial is extendable. So we got six months of free usage for UiPath Enterprise Edition. And that's how we were able to do things confidently. We could implement things to show to businesses and get funding. UiPath has been flexible when it comes to licensing costs, including free trials. Their e-Team comes in, helps you deploy, implement stuff, and solve technical challenges you have on the ground. 

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

We hadn't used any automation solutions before. It was always a human. 

How was the initial setup?

Out of all the automation solutions we tried, UiPath was the easiest to install and configure. Within less than 20 minutes, you can set up a machine with the robot and make it functional. For the initial setup, if the Windows machine is ready, you only need to install and set up the robot.

UiPath also has the simplest deployment versus all the other vendors. Deployment takes less than 30 seconds. It's just a click of the button from Orchestrator, where the packages are available to download for the robot. In addition, there is an auto-download feature that automatically installs the packages remotely on several machines.

The implementation strategy includes unattended bots in data centers, which run on their own. They sit in a secure facility on very hardened machines and only make certain outbound calls to Orchestrator in the cloud. There are also attended bots sitting all over the warehouses, with people using them on their own laptops. The implementation strategy for the attended bots isn't complex because they are overseen by daily users on their laptops and desktops, so it's deployed on those machines. Additional security and hardening are required for the unattended bots installed at the data-center level.

For implementation, we used some outside consultants to set up the governance framework for RPAs and the discovery mechanism. They also helped us identify hotspots, determine ROI, establish the recovery cost in months, and do value realization.

So after going live, they helped us look at things like cost and time savings as well as the actual recovery costs incurred in the process. They also helped evaluate the solution in terms of automatability index, productivity, efficiency, and quality. So, we analyzed all of these factors, which helped to prioritize the processes selected. We did all of this first and then converted everything into a factory model. What we call the digital factory team is responsible for development. And then, the governance and architecture teams orchestrate the development by going out there and doing task mining and process mining. For maintenance, we have five or six people for 32 processes.

What about the implementation team?

The level of expertise of our systems integrator, LTI, is excellent.

What was our ROI?

We have a Novigo automation framework that helps us calculate ROI from UiPath. It calculates ROI based on the time a human would take to perform the task manually versus the time needed for a bot to do it. Then the hours are converted into dollars. Finally, we can get the cost per transaction, per hour, and per dollar spent, per record. That helps you calculate an accurate ROI when identifying RPA use cases, and we can see the value realized every month. We calculate the number of records and hours saved through automation and convert that into a dollar amount per process. That's how we have been able to do a very accurate value realization based on the number of transactions processed by the robot.
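The ROI arithmetic described above can be sketched in a few lines. This is a hypothetical illustration; the function, parameter names, and figures are mine, not part of the actual Novigo framework:

```python
# Hypothetical sketch of the ROI arithmetic: hours a human would have spent,
# converted to dollars, minus the bot's running cost.

def monthly_roi(records, human_minutes_per_record, hourly_rate, bot_monthly_cost):
    hours_saved = records * human_minutes_per_record / 60   # human time avoided
    gross_savings = hours_saved * hourly_rate               # hours -> dollars
    return {
        "hours_saved": hours_saved,
        "net_savings": gross_savings - bot_monthly_cost,    # minus bot cost
        "cost_per_record": bot_monthly_cost / records,      # cost per transaction
    }
```

For example, 6,000 records a month at 5 human-minutes each and a $30 hourly rate yields 500 hours saved; subtracting a $2,000 monthly bot cost leaves $13,000 of net monthly savings (all figures invented for illustration).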

What's my experience with pricing, setup cost, and licensing?

The license is around $40,000 yearly. Technically, that's the lease price. But UiPath told us to go through our distributor, and by doing so, we've saved at least 50 percent of the costs. So it's now around $24,000. That is pretty cheap compared to Blue Prism or any other vendor out there that offers a combination of the same features. 

You do need to purchase an additional license to get Insights. We were beta testers of UiPath's Insights application, which is purely on the analytics side. A lighter version of Insights should be embedded into Orchestrator itself, inside the standard offering, without extra license costs. That way, we could track the number of processes, exceptions, success rates, and calculations in a single platform without buying any additional licenses.

Which other solutions did I evaluate?

Automation Anywhere, Blue Prism, WinAutomation, and UiPath were four we considered. We used Novigo Automation Framework Reports to test and evaluate each solution to see which one best suits our requirements. That's how we chose UiPath over the rest. It included deep-dive reports from Gartner, Forrester, Nelson Hall, and many other third-party analysts. These reports featured a highly detailed heat map as well as information about the cost-effectiveness of the deployment model, the development roadmap, and the maturity of the product.  So that helped us to easily compare UiPath to the others.

What other advice do I have?

I rate UiPath nine out of 10 because it's easy to develop, manage, and maintain. That's what sets it apart from Automation Anywhere, Blue Prism, or Power Automate. It's a simple drag-and-drop interface, which allows for low-code automation development, and that makes it quite different from the other options we have. For example, developing basic automation can take a few hours in UiPath versus a few days in Automation Anywhere. Implementation-wise, it's a pretty easy model when it comes to a cloud Orchestrator versus on-premises. Going to the cloud has reduced the burden of needing dedicated administrators; minimal administration is required to manage and maintain Orchestrator in the cloud.

That's the fastest way to onboard UiPath. But there is a lot of groundwork on the governance side. Before even investing in UiPath or any other RPA platform, you need a solid governance framework and a foundation that will help you do the right automation at the right time, versus just doing anything with the tool. The tool is designed to do all types of automation, but the automations need to be beneficial to the organization, with good ROI and value realization.

Which deployment model are you using for this solution?

Hybrid Cloud

Disclosure: My company does not have a business relationship with this vendor other than being a customer.