

Returns depend on the applications you deploy and the benefits you realize, which in turn depend on how many applications you deploy, what kinds of applications they are, and what their requirements are.
At least fifteen to twenty percent of our time has been saved using Teradata, which has positively affected team productivity and business outcomes.
Independent research showed that Teradata VantageCloud users achieved an average ROI of 427% across three years with payback under a year, demonstrating the platform's ability to deliver a strong financial return.
We have realized a return on investment, with a reduction of staff from 27 to eight, and our current return on investment is approximately 14%.
Support responded promptly and handled my issues well.
I would rate them eight out of ten, where ten is best and one is worst.
The customer support for Teradata has been great.
They are responsive and knowledgeable, and the documentation is very helpful.
Customer support is very good, rated eight out of ten under our essential agreement.
In my view, it is quite scalable in terms of the volume of data it can handle and stream.
Whenever we need more resources, we can add that in Teradata, and when not needed, we can scale it down as well.
This flexibility allows organizations to scale according to their needs, balancing performance, cost, and compliance requirements.
This expansion can occur without incurring downtime or taking systems offline.
Its massively parallel process architecture allows the platform to distribute workload efficiently, enabling organizations to run heavy analytic queries without compromising speed or stability.
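To make the massively parallel processing (MPP) idea concrete, here is a minimal Python sketch of how such a system hash-distributes rows across parallel workers (Teradata calls them AMPs) so that each slice can aggregate independently before a final combine step. The column names and row data are hypothetical, and this is an illustration of the distribution technique, not Teradata's actual implementation.

```python
# Sketch of MPP-style hash distribution: rows are spread across workers
# by hashing a distribution key, each worker aggregates its own slice,
# and the partial results are combined at the end.
import hashlib
from collections import defaultdict

def amp_for(key: str, n_amps: int) -> int:
    """Map a distribution-key value to a worker via a stable hash."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n_amps

def distribute(rows, key_col, n_amps=4):
    """Spread rows across n_amps slices by hashing the key column."""
    slices = defaultdict(list)
    for row in rows:
        slices[amp_for(row[key_col], n_amps)].append(row)
    return slices

def parallel_sum(slices, value_col):
    """Each slice sums locally; a final step combines the partial sums."""
    partials = [sum(r[value_col] for r in rows) for rows in slices.values()]
    return sum(partials)

# Hypothetical data: 10 customer rows with amounts 0..9.
rows = [{"cust": f"c{i}", "amount": i} for i in range(10)]
slices = distribute(rows, "cust")
total = parallel_sum(slices, "amount")  # equals the serial sum, 45
```

Because the hash is stable, rows with the same key always land on the same worker, which is what lets heavy aggregate queries run in parallel without cross-worker coordination until the final combine.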
I find the stability to be almost a ten out of ten.
The workload management and software maturity provide a reliable system.
Easier cluster setup, more straightforward configuration, and higher-level API abstractions would improve it.
As for additional improvements, error handling could be better: when we encounter errors specific to our response structures or tables, clearer error messages and handling mechanisms would help.
Observability and monitoring are areas that could be enhanced.
I want to highlight two areas for improvement: first, storing data in various formats without requiring a tabular structure, to accommodate unstructured data; and second, AI/ML features that better integrate GenAI and LLM concepts, along with user-friendly capabilities such as text-to-SQL.
Unlike SQL Server and Oracle, which have built-in replication capabilities, Teradata lacks similar functionality.
The most challenging aspect is finding Teradata resources, so we are focusing on internal training and looking for more Teradata experts.
I thought Confluent would stop me when I crossed the credits, but it did not, and then I got charged.
Teradata is much more expensive than SQL Server, which performs well and costs less.
Initially, it may seem expensive compared to similar cloud databases; however, it offers significant value in performance, stability, and overall output once in use.
Role-based access control (RBAC), strong audit and compliance features, high availability, fault tolerance, and encrypted data at rest and in-transit are key features.
These features are important due to scalability and resiliency.
The Kafka Streams API helps with real-time data transformations and aggregations.
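As a rough illustration of the kind of real-time aggregation the Kafka Streams API performs (for example, `groupByKey().windowedBy().count()` in its Java DSL), here is a plain-Python sketch of a tumbling-window event count. The event keys and timestamps are hypothetical, and this is not the Streams API itself, just the underlying windowing idea.

```python
# Plain-Python sketch of a tumbling-window count, the kind of stateful
# aggregation Kafka Streams performs over a keyed event stream.
from collections import defaultdict

WINDOW_MS = 60_000  # 1-minute tumbling windows

def window_start(ts_ms: int) -> int:
    """Align a timestamp to the start of its tumbling window."""
    return ts_ms - (ts_ms % WINDOW_MS)

def windowed_counts(events):
    """Count events per (key, window-start) pair, like a windowed count()."""
    counts = defaultdict(int)
    for key, ts_ms in events:
        counts[(key, window_start(ts_ms))] += 1
    return dict(counts)

# Hypothetical (key, timestamp-ms) events: two views in the first minute,
# one click in the second minute.
events = [("page_view", 1_000), ("page_view", 59_000), ("click", 61_000)]
counts = windowed_counts(events)
# → {("page_view", 0): 2, ("click", 60000): 1}
```

In the real Streams API this state lives in fault-tolerant, changelog-backed stores and is updated continuously as records arrive, rather than computed in one batch pass.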
The best features Apache Kafka on Confluent Cloud offers would be the connection with various external systems through various languages such as Python and C#.
Teradata's security helps our organization meet compliance requirements such as GDPR and IFRS, and it is particularly essential for revenue contracting or revenue recognition.
Its architecture allows information to be processed efficiently while maintaining stable performance, even in highly demanding environments.
It facilitates data integration, where we integrate and analyze data from various sources, making it a powerful and high-quality reliable solution for the company.
| Product | Mindshare (%) |
|---|---|
| Apache Kafka on Confluent Cloud | 0.6% |
| Apache Flink | 9.8% |
| Databricks | 8.2% |
| Other | 81.4% |

| Product | Mindshare (%) |
|---|---|
| Teradata | 8.8% |
| Snowflake | 9.5% |
| Oracle Exadata | 7.6% |
| Other | 74.1% |

| Company Size | Count |
|---|---|
| Small Business | 6 |
| Midsize Enterprise | 3 |
| Large Enterprise | 8 |

| Company Size | Count |
|---|---|
| Small Business | 28 |
| Midsize Enterprise | 13 |
| Large Enterprise | 52 |
Apache Kafka on Confluent Cloud provides real-time data streaming with seamless integration, strong scalability, and efficient data processing. It is recognized for its real-time architecture, ease of use, and reliable multi-cloud operation while effectively managing large data volumes.
Apache Kafka on Confluent Cloud is designed to handle large-scale data operations across different cloud environments. It supports real-time data streaming, crucial for applications in transaction processing, change data capture, microservices, and enterprise data movement. Users benefit from features like schema registry and error handling, which ensure efficient and reliable operations. While the platform offers extensive connector support and reduced maintenance, there are areas requiring improvement, including better data analysis features, PyTRAN CDC integration, and cost-effective access to premium connectors. Migrating with Kubernetes and managing message states are areas for development as well. Despite these challenges, it remains a robust option for organizations seeking to distribute data effectively for analytics and real-time systems across industries like retail and finance.
In industries like retail and finance, Apache Kafka on Confluent Cloud is implemented to manage real-time location tracking, event-driven systems, and enterprise-level data distribution. It supports operations that require robust data streaming, such as CDC, log processing, and analytics data distribution, providing a significant edge in data management and operational efficiency.
Teradata is a powerful tool for handling substantial data volumes with its parallel processing architecture, supporting both cloud and on-premise environments efficiently. It offers impressive capabilities for fast query processing, data integration, and real-time reporting, making it suitable for diverse industrial applications.
Known for its robust parallel processing capabilities, Teradata effectively manages large datasets and provides adaptable deployment across cloud and on-premise setups. It enhances performance and scalability with features like advanced query tuning, workload management, and strong security. Users appreciate its ease of use and automation features which support real-time data reporting. The optimizer and intelligent partitioning help improve query speed and efficiency, while multi-temperature data management optimizes data handling.
In the finance, retail, and government sectors, Teradata is employed for data warehousing, business intelligence, and analytical processing. It handles vast datasets for activities like customer behavior modeling and enterprise data integration. Supporting efficient reporting and analytics, Teradata enhances data storage and processing, whether deployed on-premise or on cloud platforms.