We use Quest SharePlex for a specific reason: we are migrating from on-premises to the cloud. SharePlex is a replication service, and we are using it to migrate our databases to AWS. Once everything is migrated, we won't be using the solution anymore. Our databases range from small to very large, and we were looking for a solution that could cut down the migration window, shrink the outage window, and reduce the overall cost of migrating to the cloud. For example, if there is a very large amount of data, e.g., five or 10 terabytes, and you try to migrate that database over a weekend, that is a risky proposition. We wanted a solution where we could do the migration ahead of the cutover time and then keep the databases in sync. That way, when we were ready to switch, we could simply stop the replication and bring everything up in the cloud. That was our main use case for SharePlex. We started on version 9.2, and now we are on version 10.
Our application is hosted in multiple data centers, and we primarily use SharePlex to keep the data replicated from one data center to the other. We use it for business continuity, so that if we run into any issues in the primary data center where the application is currently hosted, SharePlex puts us in a position to switch over to the secondary data center and fail over pretty much in real time. We have Cisco UCS servers running our Oracle Databases on Linux version 7. These are standalone databases, although I've been part of a different business unit in our company that used Real Application Clusters. Right now it's standalone Oracle 12c, and we are migrating to 19c. SharePlex replicates our Oracle data. It is IoT data, so the number of transactions is huge. SharePlex keeps the data in sync with the other data center, which has almost the same configuration of bare-metal servers, Linux, Oracle, and SharePlex. We primarily use SharePlex for the RDBMS, but we have also used SharePlex for PostgreSQL and for Kafka. Our implementation of SharePlex is entirely on-premises.
I don't know how easy it would be to change the architecture in an already implemented replication. For example, if we have architected a particular database migration a certain way and want to change that partway through, is that an easy or a difficult change? At one point we needed to change the architecture mid-migration, but we didn't do it. We thought, "This is possibly complicated. Let's not change it in the middle," because we were approaching our cutover date. That is one thing we should have checked with support about, or asked for training on. Also, it would help to have a separate section in Foglight showing out-of-sync tables, instead of having to dig through the "warning" messages.
I would like the solution to have some kind of machine learning and AI capabilities. Often, if we want to improve the performance of posting, we have to bump up a parameter. That means we need to stop the process, come up with the value we want to raise the parameter to, and then start SharePlex again. Machine learning and AI capabilities for these kinds of improvements would tremendously boost our productivity. We have discussed this with them, and they have said it is in their future pipeline, but without a definite date for when we will see this type of feature.
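For context, the manual tuning cycle described above looks roughly like the following sp_ctrl session. This is only a sketch: SP_OPO_EXAMPLE_PARAM is a placeholder, not a real SharePlex parameter name, and the value 2048 is illustrative; substitute the actual Post parameter and value you are tuning.

```shell
# Sketch of the stop/tune/restart cycle for the SharePlex Post process,
# driven through the sp_ctrl command-line interface via a here-document.
# SP_OPO_EXAMPLE_PARAM is a placeholder parameter name (assumption).
sp_ctrl <<'EOF'
status                              # check replication processes before the change
stop post                           # Post must be stopped before the parameter change
set param SP_OPO_EXAMPLE_PARAM 2048 # bump the Post tuning parameter (illustrative value)
start post                          # restart Post with the new setting
status                              # confirm Post is running again
exit
EOF
```

Automating when and how far to raise such parameters, rather than stopping and restarting Post by hand, is exactly the kind of step the machine-learning capability mentioned above would remove.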
It's really good value for the money. There are some things they could improve on, but in terms of the pricing, features, and support, as a holistic package, we are not thinking of anything else at this point in time.
It is not as expensive as Oracle GoldenGate and has worked really well within our budgets.
It's worth your time to take a hard look at pricing and compare costs, but you should also consider the simplicity of the tool for any administrator who is new to replication.