it_user510570 - PeerSpot reviewer
Technical Lead at a tech services company with 51-200 employees
Consultant
It caches part of the database. We can execute complex queries using EntryProcessors.

What is most valuable?

  • Entry processors
  • Distributed cache
  • Events

How has it helped my organization?

It has allowed us to greatly improve our response times in several services by caching part of the database into Coherence and executing complex queries using EntryProcessors.
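The EntryProcessor pattern described here runs the computation on the node that owns the entry instead of pulling the value back to the client; in Coherence the call is NamedCache.invoke(key, processor). As a rough single-JVM analogue (not Coherence's actual API), the JDK's Map.compute expresses the same idea of an atomic in-place mutation. A minimal sketch, with a ConcurrentHashMap standing in for the cache and a hypothetical account-balance entry:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Plain-Java analogue of Coherence's EntryProcessor pattern: the mutation
// runs atomically "where the entry lives" instead of a racy get/modify/put.
public class EntryProcessorSketch {
    // Hypothetical cached value: an account balance keyed by account id.
    static final Map<String, Double> cache = new ConcurrentHashMap<>();

    // Analogue of cache.invoke(accountId, new ApplyInterestProcessor(rate)).
    // Assumes the key is present; returns the updated balance.
    static double applyInterest(String accountId, double rate) {
        return cache.compute(accountId, (k, balance) -> balance * (1 + rate));
    }

    public static void main(String[] args) {
        cache.put("acct-1", 100.0);
        System.out.println(applyInterest("acct-1", 0.5)); // 150.0
    }
}
```

On a real grid the processor class must also be on the storage members' classpath and serializable, which is part of the configuration burden discussed below.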

What needs improvement?

Configuration is too complex. One reason POF configuration is complex is the XML files required to register the IDs of the classes. The general XML configuration files are also not easy to use; I'm not an expert on them, but I have heard complaints from some of my colleagues.

There is a lot of room for improvement with POF serialisation. It is slow compared to other serialization mechanisms. We have done some testing with Kryo and custom serialisation we built ourselves, and I have managed to serialise/deserialise 10 times faster than POF. And I haven't even tried using Unsafe, which means there is even more room for improvement.

POF serialisation also requires both XML files with the IDs of all the Java classes that are going to be stored and implementing the writeExternal/readExternal methods for all the fields of the classes. If you have a few classes, it is fine, but when you try to store complex messages like FIXML or FpML protocols, it becomes quite a nightmare. In our case, we have built a code generator that solves our problem, but it is not a simple solution.
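For readers unfamiliar with the boilerplate being described: Coherence's PortableObject interface requires a writeExternal/readExternal pair that enumerates every field with an explicit index, very much like the JDK's Externalizable enumerates fields by position. A runnable sketch of the equivalent JDK boilerplate (the Trade class is hypothetical), which gives a feel for why a code generator becomes attractive once classes get large:

```java
import java.io.*;

// JDK Externalizable round-trip: the same per-field write/read boilerplate
// that Coherence's PortableObject requires (Coherence additionally needs a
// POF type ID registered in an XML file for each class).
public class PofBoilerplateSketch {
    public static class Trade implements Externalizable {
        public String symbol;
        public int quantity;
        public Trade() {}                       // no-arg ctor required
        public Trade(String s, int q) { symbol = s; quantity = q; }
        @Override public void writeExternal(ObjectOutput out) throws IOException {
            out.writeUTF(symbol);               // field 0
            out.writeInt(quantity);             // field 1
        }
        @Override public void readExternal(ObjectInput in) throws IOException {
            symbol = in.readUTF();              // must mirror the write order
            quantity = in.readInt();
        }
    }

    // Serialize and deserialize through a byte array to prove the pair works.
    public static Trade roundTrip(Trade t) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(t);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Trade) in.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Trade copy = roundTrip(new Trade("ORCL", 100));
        System.out.println(copy.symbol + " " + copy.quantity); // ORCL 100
    }
}
```

With two fields this is manageable; with a FIXML-sized message model, every field of every class needs an indexed line in both methods, which is exactly the pain point the review describes.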

Support for writing the cache contents to disk and recovering them should be available in production. This feature allows writing the current content of the cache into a file on disk and repopulating the cache later from that file. This is very useful when, for any reason, there is a need to stop all the cache nodes for some time and restart them without losing information. The problem is that it is not, or at least was not, supported for production environments, which means we cannot really use it. Our solution was to use a backing database, but that is not trivial either, because the only way to represent our complex objects in the database was as binary blobs.

For how long have I used the solution?

I have used it for four years.

Buyer's Guide
Oracle Coherence
May 2025
Learn what your peers think about Oracle Coherence. Get advice and tips from experienced pros sharing their opinions. Updated: May 2025.
856,873 professionals have used our research since 2012.

What do I think about the stability of the solution?

We found some issues using the incubator libraries for database integration on writing and also using the feature to write cache contents to disk.

What do I think about the scalability of the solution?

I have not encountered any scalability issues.

How are customer service and support?

Technical support is 5/10; not very good, in Spain at least.

Which solution did I use previously and why did I switch?

I did not previously use a different solution.

Which other solutions did I evaluate?

We have recently evaluated other solutions such as Hazelcast and GridGain.

What other advice do I have?

Get a good expert on the technology, because the learning curve can be high.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user509847 - PeerSpot reviewer
Program Director and Architect at a tech services company with 501-1,000 employees
Consultant
It delivered a standalone caching solution that prioritized speed of data serialization and integrity. Integrating it into an overall solution was not easy.

What is most valuable?

The features that were of most value to us:

  • Uptime
  • Scalability
  • Speed
  • The ability to read-through and write-through to a backing datastore (something that other caches usually require a separate solution for)
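The read-through/write-through integration mentioned in the last bullet is what Coherence formalises in its CacheStore interface (load on a miss, store on a put). A minimal single-JVM sketch of the pattern, with plain HashMaps standing in for the backing database and the cache:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the read-through/write-through pattern that Coherence's
// CacheStore formalises: misses fall through to the backing store,
// writes go to both the cache and the store.
public class ReadWriteThroughSketch {
    final Map<String, String> backingStore = new HashMap<>(); // stand-in DB
    final Map<String, String> cache = new HashMap<>();
    int storeLoads = 0; // counts how many reads actually hit the store

    // Read-through: only consult the backing store on a cache miss.
    String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            storeLoads++;
            return backingStore.get(k);
        });
    }

    // Write-through: the store and the cache are updated together.
    void put(String key, String value) {
        backingStore.put(key, value);
        cache.put(key, value);
    }

    public static void main(String[] args) {
        ReadWriteThroughSketch c = new ReadWriteThroughSketch();
        c.backingStore.put("k", "from-db");
        System.out.println(c.get("k"));   // loaded from the store: from-db
        System.out.println(c.get("k"));   // served from the cache: from-db
        System.out.println(c.storeLoads); // 1
    }
}
```

With other caches this plumbing typically lives in a separate layer; having it built in is the advantage the reviewer is pointing at.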

How has it helped my organization?

An example of how the product improved organizational function was in the speed of request processing. The client had needs to support very high throughput, particularly at certain “peak” times of the year, both in terms of bandwidth for data streaming, as well as retail transactions. A robust standalone caching solution that prioritized speed of data serialization and data integrity was a must. Coherence delivered on that front.

What needs improvement?

Coherence’s issues (besides high monetary cost compared to other caching solutions) were mostly around the high learning curve required to use it properly, as well as the technical challenge of maintaining a separate artifact of mapped, POF-serializable data types for the cache to have available in its classpath.

In other words, integrating it into an overall solution was not easy from an integration and a code complexity standpoint. This caused developers to either put off integrating it for as long as possible, or otherwise struggle with it more than with something like memcached or ehcache.

For how long have I used the solution?

I have personally used the solution for approximately one year (it was in use much longer at the organization).

What do I think about the stability of the solution?

For the most part, we have not encountered stability issues.

What do I think about the scalability of the solution?

Scalability was not an issue.

Which solution did I use previously and why did I switch?

In this case, Coherence was the incumbent technology.

How was the initial setup?

While Coherence was already deployed on premises, integrating it into a new application was cumbersome.

What's my experience with pricing, setup cost, and licensing?

My understanding is that Coherence is not cheap, based on Oracle’s Technology Global Price List. However, as a contractor, I did not participate in decision-making related to cost.

Which other solutions did I evaluate?

I have evaluated/used other products since, and have concluded that Coherence offers slightly superior performance and integrated read/write-through at the cost of technical complexity. Complexity being a huge contributor to risk/cost for projects, I am more likely to use other products as a result.

What other advice do I have?

My advice to those looking to implement Coherence is to hire someone who has used it extensively in the past, and to create sufficient documentation internally to bring developers up to speed with how to integrate it into their applications. The learning curve to get comfortable with the configuration/deployment/mapping was the single biggest pain point for our project, and greater than I would expect of a third-party integration like this.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user506469 - PeerSpot reviewer
Enterprise Data Architect at a computer software company with 1,001-5,000 employees
Vendor
You can add and remove resources on demand without taking your production system down. It lacks decent visual management and monitoring tools.

What is most valuable?

Coherence allows linear online scalability across multiple commodity servers. It's a very important feature for SaaS providers, as it allows adding and removing resources on demand without taking your production system down. High availability is key to financial business continuity, and by distributing clusters across multiple Amazon Availability Zones, while still maintaining very good latency, Coherence is able to sustain hardware or even AZ failure without any interruption on the application side, provided that the cluster architecture is designed correctly.

How has it helped my organization?

We implement ticketing analytics using Coherence.

What needs improvement?

Coherence lacks decent visual management and monitoring tools. The free tool offered (JConsole) is not great quality, and is not designed for use in a web-based, SaaS environment.

Also, some information on the website is outdated and does not reflect the latest functionality or syntax.

We would like to have comprehensive, native, reliable, out-of-box replication between clusters; we have a DR center on the west coast and we would like to replicate data there, ideally in three clicks, without changing a lot of settings or extensive setup and development.

For how long have I used the solution?

I used the solution for six months.

What do I think about the stability of the solution?

I have not encountered any stability issues.

What do I think about the scalability of the solution?

We have not encountered any scalability issues on the cluster side, but we have seen problems with big messages on the .NET client.

How are customer service and technical support?

Technical support is 3 or even 2 out of 10. Working directly through an account rep or a contact on Coherence dev team helps a lot.

Which solution did I use previously and why did I switch?

I did not previously use a different solution.

Which other solutions did I evaluate?

Before choosing this product, we evaluated GridGain and Hazelcast.

What other advice do I have?

Get a cheaper price and consider your code stack – it fits best with Java-based companies.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user487638 - PeerSpot reviewer
CTO at a tech company with 501-1,000 employees
Vendor
The biggest improvement is in speed - including faster record retrieval and workflow processing.

Valuable Features

While you can have a successful career with Coherence just being a get/set man, its true power is realized when you leverage the full scale of the cluster as a whole and exploit its distributed processing capabilities.

For the use cases I’ve implemented, the features I used most frequently and have gone head-to-head with incumbents are as follows:

  • InvocationService - I have to admit that I took this for granted until I went up against IBM's eXtreme Scale. Most organizations want to preload/warm the cache, and the InvocationService allows you to issue commands to each member in a distributed manner. Parallelizing this activity gains economies of scale, since the load time and rebalancing can be kept to a minimum. Said another way, a million rows can be loaded in the time it takes to load 100,000 if you have 10 storage-enabled members. Each member is issued a command detailing which rows it is responsible for loading. Coherence provides a number of the libraries required to handle this, including 'retry' functionality hooks, and abstracts all the threading/concurrency logic, which would otherwise be a nightmare to sort out, as IBM learned on this project. This is in direct contrast to eXtreme Scale's capability, which relied on leveraging Java's Executor classes. Basically, they had to roll their own distributed processing engine while on-site.
  • Filters, Aggregators and EntryProcessors - Before MapReduce and Hadoop came on the scene in such force, Coherence had equivalent functionality that was much easier to use. Filters provide the ability to apply conditional boolean logic against your data out of the box. Many fail to realize how powerful this is. In the bake-off, eXtreme Scale had nothing close to this and therefore had to code it. The requirement was to port a stored procedure's logic, which took 30+ seconds to run, into something the grid could run. The implementation was based on an EntryProcessor that leveraged Filters and Aggregators. While I would love to say it was strategic coding ability, it wasn't; I merely used out-of-the-box tools. The end result was that the EntryProcessor, running a complex workflow, was an order of magnitude faster than IBM's get() call.
  • POF - Portable Object Format is Coherence's binary-optimized, proprietary serialization. It provides staggering object compaction. For example, an Item object that was 750 bytes with Java serialization is 31 bytes with POF. This has a rippling impact across the entire app, the cluster, and even your network, since it needs to handle the chatty cluster members.
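The Filter/Aggregator combination described above is declarative map-reduce over the grid: the filter selects entries on each storage member, and the aggregator combines the partial results. In Coherence this is roughly cache.aggregate(new EqualsFilter("getSymbol", sym), new DoubleSum("getAmount")). A single-JVM stream sketch of what that call expresses (the Order class and field names are hypothetical):

```java
import java.util.List;

public class FilterAggregatorSketch {
    static class Order {
        final String symbol; final double amount;
        Order(String symbol, double amount) { this.symbol = symbol; this.amount = amount; }
    }

    // Stream analogue of cache.aggregate(filter, aggregator): the filter
    // selects matching entries and the aggregator reduces them. On the grid,
    // each storage member runs this over its own partitions and the partial
    // sums are combined, which is why it parallelizes so well.
    static double sumForSymbol(List<Order> entries, String symbol) {
        return entries.stream()
                .filter(o -> o.symbol.equals(symbol)) // EqualsFilter analogue
                .mapToDouble(o -> o.amount)           // value extractor
                .sum();                               // DoubleSum analogue
    }

    public static void main(String[] args) {
        List<Order> data = List.of(new Order("ORCL", 10.0),
                new Order("IBM", 5.0), new Order("ORCL", 20.0));
        System.out.println(sumForSymbol(data, "ORCL")); // 30.0
    }
}
```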

Improvements to My Organization

The biggest improvement is in speed - everything is faster from record retrieval to workflow processing.

Room for Improvement

Tooling around complex cluster config files, so issues can be identified before the cluster is stood up - and subsequently collapses. Cluster management tools that are independent of WebLogic. Dynamic cluster config rollout and rollback; ideally this would be used in dev, as a prod cluster should be locked down. I'd also like some sort of out-of-the-box GUI that shows cluster member vitals: storage, heap, off-heap, watermarks, evictions, etc.

Monitoring and configuration could be easier, and support for streaming data windows and the like isn't available yet. Moreover, native cron (scheduling) capabilities and an async API would be nice to have, but those gaps can be filled with third-party libraries. Lastly, native security features would alleviate some concerns and workarounds; however, I fully understand the impact on performance.

Use of Solution

I’ve used Coherence since 2008. I transitioned into consulting, where I led a number of projects across several organizations to define, install, and integrate clusters for maximum impact on critical business systems.

Customer Service and Technical Support

I have only needed an assist from Oracle once, and the issue turned out to be a config problem. The organization had a healthy support agreement, and Oracle was able to turn it around quickly. Perhaps one of the reasons I haven’t engaged them more is that I jumped into the community early on. I attended every Coherence SIG [Special Interest Group] meeting that I could and became friendly with a few of the developers.

Initial Setup

Coherence is very easy to get running locally. Standing up, or even defining, a cluster is another task entirely. Each cluster has many ‘knobs’ to dial in. While this offers great flexibility, one should exercise caution when getting into areas of the config that are not understood.

The objective of the project and the required performance need to be kept in sight. Here are some questions to help drive the configuration files:

  • Is your project read-heavy or write-heavy? This dictates whether you should have more smaller nodes or fewer larger ones.
  • Should members be storage-enabled or not?
  • How much data does the app generally use? Would a near cache be beneficial?
  • How often is your reference data used, and how much is there? That determines whether it should be replicated or not.
  • How many members should there be? Do I need to use a prime number somewhere? Why?
  • Do I need eviction policies, and what should they be based on?
  • How do I tell if my cluster is too chatty?
  • How will other apps leverage the cluster?
  • Should I use WKA [well-known addresses]? Will that prevent new members from joining?

It goes on and on and we didn’t touch DR or monitoring.

Implementation Team

I’ve done both in-house and vendor-assisted implementations, and in most cases the projects didn’t have proper momentum until an SME was introduced and the questions above could be addressed. Most folks apply relational thinking to a cluster, and that generally doesn't end well. While you can use rich objects, I’d look for a different model - something flat. Otherwise you need to strictly define your cache strategy to keep hierarchies together (hard to do).

Other Solutions Considered

A side-by-side POC was done with IBM's eXtreme Scale on one project. I also have experience with GemFire - and wish I didn’t.

Other Advice

Take the time to learn it and test all assumptions. For example, I was using push replication [PR] to satisfy a client's disaster recovery [DR] requirement. All of a sudden the primary cluster collapsed - it ran out of memory despite having high watermarks configured. As it turned out, the DR site connection had gone down and PR calls had started to queue; the high-watermark calculation did not know about the PR queue. This was a very subtle use case, as I hadn’t considered what would happen to the PR calls if the other end wasn't available.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user488157 - PeerSpot reviewer
Technology Consultant at a comms service provider with 10,001+ employees
Vendor
Customer Facing Application response time has improved. The error and exception handling is not that great as it can be difficult to debug issues.

Valuable Features

As well as using HotCache to synchronise a Coherence cache with database tables in real-time, it can also be used to warm the cache by loading an initial dataset. The nice thing about this approach is that cache warming is just an extension to the setup for cache synchronisation.

Improvements to My Organization

The customer-facing application response time has improved by 95%. Also, during releases the portal does not go down, as the data is pulled from the cache rather than the database.

Room for Improvement

Monitoring API needs to be improved and needs to be user friendly. Also, the error and exception handling is not that great as it can be difficult to debug issues.

Use of Solution

We have been using this solution for five years.

Deployment Issues

We had issues with the GAR file naming convention.

Stability Issues

There have been issues with the cache configuration file in older versions, as well as node eviction and timeout errors.

Scalability Issues

To add additional capacity, the cluster has to be fully recycled, which causes downtime for the environment.

Customer Service and Technical Support

Oracle Customer Support works when we escalate the issue; otherwise, first-level support is not that good.

Initial Setup

The latest version is straightforward, as there is lots of configuration done through the WebLogic console.

Implementation Team

We implemented it ourselves. Before implementation, review the requirements thoroughly, because if the cache sizing is not correctly defined, it creates a major bottleneck. The size of the JVM depends on the size of the cache.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user488163 - PeerSpot reviewer
Consultant at a financial services firm with 1,001-5,000 employees
Vendor
By allowing for data distribution and replication through clustering, it improves the reliability of information systems.

What is most valuable?

  • Query response time
  • Clustering, data distribution, data affinity

How has it helped my organization?

Coherence has improved response times for queries of sizeable data sets. Also by allowing for data distribution and replication through clustering, it improves the reliability of information systems.

What needs improvement?

  • An API allowing for ‘joins’ between different caches, similar to DB joins.
  • A more streamlined configuration. There is a multitude of proxy, node, extended client, etc. scripts and config files that need to be maintained. What about making this less of a hassle in future by bringing more consistency into the configuration process?
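In the absence of a join API, the usual workaround for the first wish above is a client-side hash join: query one cache, collect the foreign keys, then bulk-fetch from the second cache (getAll in Coherence) and stitch the rows together. A plain-Java sketch of the pattern, with Maps standing in for the two caches (the Order/customer names are hypothetical):

```java
import java.util.*;

public class CacheJoinSketch {
    static class Order {
        final String orderId; final String customerId;
        Order(String orderId, String customerId) { this.orderId = orderId; this.customerId = customerId; }
    }

    // Client-side "hash join" across two caches: iterate (or query) the order
    // cache, then look up the referenced customers - a bulk getAll(keys) on a
    // real grid - and combine, since the grid offers no join operator itself.
    static List<String> ordersWithCustomerNames(Map<String, Order> orders,
                                                Map<String, String> customers) {
        List<String> joined = new ArrayList<>();
        for (Order o : orders.values()) {
            String name = customers.get(o.customerId);
            if (name != null) joined.add(o.orderId + ":" + name); // inner join
        }
        Collections.sort(joined); // deterministic output for the demo
        return joined;
    }

    public static void main(String[] args) {
        Map<String, Order> orders = Map.of("o1", new Order("o1", "c1"),
                                           "o2", new Order("o2", "c2"));
        Map<String, String> customers = Map.of("c1", "Alice", "c2", "Bob");
        System.out.println(ordersWithCustomerNames(orders, customers)); // [o1:Alice, o2:Bob]
    }
}
```

The catch on a real grid is that both lookups cross the network, which is exactly why a native join API (or data affinity to co-locate related entries) would be valuable.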

For how long have I used the solution?

I've been using it for three years.

What do I think about the stability of the solution?

We had one instance when we experienced intermittent network failure. The issue was not reproducible, for obvious reasons. Coherence failed to live up to its SLA: it was not able to recover, but instead got into a state where new nodes were created while the old ones were still there but, for some reason, no longer recognized as part of the cluster. Oracle support was not something to write home about, i.e., there were constant requests for more info (logs, timelines, etc. - which were provided) and never a feeling that the problem was understood, or at least that there was any serious attempt at investigating or reproducing it on Oracle’s side.

How are customer service and technical support?

Medium to good. Sometimes prompt, competent responses; at other times, support was lacking.

Which solution did I use previously and why did I switch?

It was a company decision as this is a commercial product with guarantee of support.

How was the initial setup?

It was complex. There are a multitude of configuration files and shell scripts, most of which could be copied and pasted. There is no uniformity of approach, nor a tool to allow for proper management of the configuration.

What was our ROI?

ROI is reasonably good, since no cheaper alternative satisfying company requirements was identified.

What's my experience with pricing, setup cost, and licensing?

The product is considered expensive, so the company will be on the lookout for a replacement if feasible. I'm not involved in licensing discussions.

What other advice do I have?

My advice would be for Oracle to prepare a database of existing configurations from clients. This would give future clients templates for various solutions instead of reinventing the wheel. Generally, Oracle fails in this area compared to open-source solutions. It is very painful to start from scratch with few or no concrete solutions posted online (full solutions with commercial value).

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
PeerSpot user
Consultant at a tech consulting company with 51-200 employees
Consultant
Oracle needs to continue to develop and add to Coherence, avoiding the all too common bloat in enterprise software

On the 12th July, Oracle announced the 12c release of the full Cloud Application Foundation (CAF) stack.

Since Oracle are trying to bring all their products in line with each other, Oracle Coherence has leaped up in version number from 3.7.1 to 12.1.2 despite 12c being only one major release after 3.7.1.

Major is certainly the operative word here. Oracle has put a lot of effort into increasing the added value of running Coherence with WebLogic. Here’s a summary of the changes to Coherence in 12c.

Managed Coherence Servers

The first change I want to highlight is a biggie: gone are the days of ActiveCache and Coherence*Web; Managed Coherence Servers and GAR (Grid Archive) files are here to stay! If you have a big investment in ActiveCache already, have no fear: it isn’t retired just yet, but it is being phased out to give you a chance to refactor those existing Coherence*Web applications.

The idea is to enhance the use of Coherence with WebLogic by optimizing packaging and deployment and providing application isolation and “lifecycle events” (see below!) thanks to the ability to deploy Grid Archives to a Managed Coherence Server. A further advantage, Oracle says, is that Grid Archives can be used by standalone Coherence customers too!

So the next question is…what is a “Grid Archive”?

Grid Archive (GAR)

A GAR file is simply a directory structure for Coherence configuration files which can be packaged and referenced as a module by other applications.
  • GARs must contain at least two folders: lib and META-INF in the root directory.
  • The META-INF directory must contain a coherence-application.xml file
  • GARs need to be packaged in an EAR to be referenced by other modules.

GoldenGate HotCache

A major aspect of any cache is consistency of data. Coherence has always been very good at keeping in sync with backend data sources. HotCache fits particularly well with Coherence in that it monitors the database for changes and then pushes them into the cache. The really clever thing about this, though, is that extra overhead is avoided by making sure that only stale changes get pushed, lowering latency.

Live Events

We’ve seen the usefulness of cache event processing in Coherence before (Steve even presented on it at JavaOne). The implementation in 12c has changed a little, but the theory remains the same, as do the sorts of events available to process. Register an event interceptor with the cache and you can process events relating to the cache data, the cache itself (monitoring the movement of partitions, for instance), or “lifecycle events” - a notification that a ConfigurableCacheFactory instance has either been activated or disposed.
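The interceptor model described above is, at its core, a callback registered with the cache that receives a typed event. In Coherence 12c the real interface is com.tangosol.net.events.EventInterceptor, registered through the cache configuration; the following is a plain single-JVM sketch of the shape, not the Coherence API itself:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the Live Events shape: interceptors registered with the cache
// are invoked with a typed event whenever an entry is inserted or updated.
public class LiveEventsSketch {
    interface EventInterceptor { void onEvent(String type, String key, String value); }

    static class ObservableCache {
        final Map<String, String> data = new HashMap<>();
        final List<EventInterceptor> interceptors = new ArrayList<>();

        void register(EventInterceptor i) { interceptors.add(i); }

        void put(String key, String value) {
            String type = data.containsKey(key) ? "UPDATED" : "INSERTED";
            data.put(key, value);
            for (EventInterceptor i : interceptors) i.onEvent(type, key, value);
        }
    }

    public static void main(String[] args) {
        ObservableCache cache = new ObservableCache();
        List<String> seen = new ArrayList<>();
        cache.register((type, k, v) -> seen.add(type + ":" + k));
        cache.put("a", "1");
        cache.put("a", "2");
        System.out.println(seen); // [INSERTED:a, UPDATED:a]
    }
}
```

The real interceptors additionally see partition-transfer and lifecycle events, and can run pre-commit (able to veto a change) as well as post-commit.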

REST Enhancements

The Coherence REST API has been updated in more than one area:

  • Run multiple REST applications
    • Configure multiple context paths in the cache config and your application server can run multiple REST applications. As simple as that!
  • REST security
    • Very necessary: Coherence REST security uses both authentication and authorization. Authentication support includes HTTP basic, client-side SSL certificate, and client-side SSL certificate together with HTTP basic. Authorization is implemented using Oracle Coherence*Extend-style authorization.
  • Support for named queries
    • Named queries are CohQL expressions configured for resources in the coherence-rest-config.xml file. In a nutshell, an expression is defined in the XML file and given a name; a GET request on the query name then returns the results of that query!

While all this is nice to have, and some of it very necessary (I’m looking at you, REST API and HotCache), one of the most appealing things about Coherence for me has always been its conciseness compared to competitors. The lightweight distribution of Coherence never held it back either; indeed, it has performed very well in the market for data grids (and distributed caches, for that matter) thanks to some great design.

If Oracle can maintain that philosophy as it continues to develop and add to Coherence, avoiding the bloat that people often assume comes with all enterprise software, Coherence certainly seems like it will continue to be a formidable player and a crucial component of Oracle’s wider cloud strategy.

Disclaimer: The company I work for is a partner with several vendors, including Oracle.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user