Terracotta is a product used for clustering. It is not used standalone in our architecture; rather, it sits inside the Software AG architecture to cluster requests and share request and state data between Integration Servers.
The distributed caching mechanism helps reduce database load and accelerate data access times in my projects. If we have responses or APIs that are hit many times, we can cache them to improve performance. Another aspect of Terracotta is that it can store session data, so sessions and data remain consistent across all the instances we use.
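For illustration, here is a minimal sketch of how a frequently requested API response could be cached in a Terracotta-backed Ehcache cache from a client JVM such as an Integration Server. The host name, port, entity name, server resource name, and cache alias are placeholders, not our actual configuration, and the exact builder methods differ slightly between Ehcache 3.x versions.

```java
import java.net.URI;

import org.ehcache.Cache;
import org.ehcache.PersistentCacheManager;
import org.ehcache.clustered.client.config.builders.ClusteredResourcePoolBuilder;
import org.ehcache.clustered.client.config.builders.ClusteringServiceConfigurationBuilder;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.MemoryUnit;

public class ResponseCacheSketch {

    public static void main(String[] args) {
        // Connect this client to the Terracotta server; names below are placeholders.
        PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
            .with(ClusteringServiceConfigurationBuilder
                .cluster(URI.create("terracotta://tc-node1:9410/is-cluster"))
                .autoCreate(server -> server.defaultServerResource("main")))
            .withCache("apiResponses",
                CacheConfigurationBuilder.newCacheConfigurationBuilder(String.class, String.class,
                    ResourcePoolsBuilder.newResourcePoolsBuilder()
                        .with(ClusteredResourcePoolBuilder.clusteredDedicated("main", 32, MemoryUnit.MB))))
            .build(true);

        Cache<String, String> apiResponses = cacheManager.getCache("apiResponses", String.class, String.class);

        // Serve the cached response if present; otherwise call the backend once and cache it.
        String key = "GET /customers/1001";
        String response = apiResponses.get(key);
        if (response == null) {
            response = callBackend(key);      // expensive database or service call
            apiResponses.put(key, response);  // now visible to every instance connected to the cluster
        }
        System.out.println(response);

        cacheManager.close();
    }

    // Placeholder standing in for the real backend call.
    private static String callBackend(String key) {
        return "{\"customerId\":1001,\"name\":\"Example\"}";
    }
}
```

With this pattern, repeated hits on the same API key are served from the shared cache instead of the database, which is where the load reduction comes from.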
The best feature of Terracotta is its clustering capability. In an architecture with two or three Integration Servers (IS), each IS instance normally keeps session data in its own memory. If a user session is tied to a specific IS instance and that instance fails, or the load balancer redirects the user to another instance, the session data is lost, which hurts the user experience. This is where Terracotta plays its role: it acts as a distributed in-memory data store, so all the instances in the architecture connect to the Terracotta server and share session data across all of the Integration Servers.
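A small sketch of that idea, assuming a clustered cache manager built as in the previous example and a hypothetical "sessions" cache alias configured against it; both instances run the same code, so whichever one the load balancer picks can read what the other wrote.

```java
import org.ehcache.Cache;
import org.ehcache.CacheManager;

public class SessionFailoverSketch {

    // Called on the instance that owns the user's request (for example IS1).
    static void saveSession(CacheManager cacheManager, String sessionId, String state) {
        Cache<String, String> sessions = cacheManager.getCache("sessions", String.class, String.class);
        sessions.put(sessionId, state);   // written to the Terracotta server, not only to local heap
    }

    // Called on another instance (for example IS2) after the load balancer redirects the user.
    static String loadSession(CacheManager cacheManager, String sessionId) {
        Cache<String, String> sessions = cacheManager.getCache("sessions", String.class, String.class);
        return sessions.get(sessionId);   // still available even if the original instance has failed
    }
}
```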
The impact of Terracotta's high availability and failover capabilities on our organization's data consistency and reliability is significant. In our current architecture, we implemented two Terracotta nodes: one active node and one mirror node. If the first node fails or something happens to it, requests fail over to the other node.
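On the client side, one way to make that failover transparent is to list both Terracotta nodes in the connection URI so the cache manager can reconnect to the surviving node. This is only a sketch under assumed names (tc-node1, tc-node2, the "is-cluster" entity, the "main" server resource), and the multi-host URI form and builder methods should be checked against the Terracotta and Ehcache versions in use.

```java
import java.net.URI;

import org.ehcache.PersistentCacheManager;
import org.ehcache.clustered.client.config.builders.ClusteringServiceConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;

public class FailoverAwareClientSketch {

    public static void main(String[] args) {
        // Both the active and the mirror node appear in the URI (placeholder host names).
        URI clusterUri = URI.create("terracotta://tc-node1:9410,tc-node2:9410/is-cluster");

        PersistentCacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
            .with(ClusteringServiceConfigurationBuilder.cluster(clusterUri)
                .autoCreate(server -> server.defaultServerResource("main")))
            .build(true);

        // Caches created against this manager keep working after one node goes down.

        cacheManager.close();
    }
}
```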
There are many failover scenarios: the load balancer might fail to route a request to the first Terracotta node, there might be a network issue, or a server might be down. In each case, the request is routed to the other node.
Terracotta's ability to integrate with existing enterprise technologies and support multi-tier architectures has helped accommodate our organization's business needs by handling failover and clustering requests across the data nodes. If we have many nodes in our architecture, we can cluster the data and share sessions between all of them, which enhances the user experience.