Senior Data Engineer at a media company with 1,001-5,000 employees
Real User
Top 10
Nov 29, 2025
My main use case for Teradata is primarily storing data and managing the client base, and it was used for batch processing and parallel batch processing. A specific example of how I use Teradata for that is handling the daily transactions of various customers in the telecom domain: client transactions are stored in various clusters, and we use Teradata for extracting and loading that data. Extracting and loading data is my primary application of Teradata.
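A minimal Python sketch of the parallel batch pattern this reviewer describes, with a placeholder loader standing in for a real bulk-load utility (in practice Teradata tools such as FastLoad or TPT would do the loading; the batch sizes and record shape here are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def make_batches(rows, batch_size):
    """Split the day's transactions into fixed-size batches."""
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

def load_batch(batch):
    """Placeholder for a loader call (e.g. a bulk-load utility)."""
    return len(batch)  # pretend we loaded this many rows

def parallel_load(rows, batch_size=2, workers=4):
    """Load batches concurrently and return the total row count."""
    batches = make_batches(rows, batch_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        loaded = list(pool.map(load_batch, batches))
    return sum(loaded)

transactions = [{"cust": i, "amount": i * 10} for i in range(5)]
print(parallel_load(transactions))  # prints 5 (5 rows across 3 batches)
```

The point of batching is that each batch is independent, so the loads can run in parallel, which mirrors how parallel batch processing keeps daily transaction volumes manageable.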
My main use case for Teradata is mostly for data warehousing and analytics. Teradata is used to store customer data; specifically, accounting transactions from multiple sources. The data is kept for analytical purposes for users. Since the data is very large-scale, an MPP architecture RDBMS was needed to store it. Hence, Teradata was chosen.
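The MPP architecture mentioned above works by hashing each row's primary-index value to pick which parallel unit (AMP, in Teradata terms) stores it. A toy Python sketch of that distribution idea, with the number of units and the hash choice as illustrative assumptions:

```python
import hashlib

NUM_AMPS = 4  # hypothetical number of parallel units

def amp_for_key(primary_index_value):
    """Hash the primary-index value to pick a unit, as an MPP system would."""
    digest = hashlib.md5(str(primary_index_value).encode()).hexdigest()
    return int(digest, 16) % NUM_AMPS

def distribute(rows, key):
    """Group rows by the unit their key hashes to."""
    amps = {i: [] for i in range(NUM_AMPS)}
    for row in rows:
        amps[amp_for_key(row[key])].append(row)
    return amps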
Senior product analyst at a financial services firm with 10,001+ employees
Real User
Top 10
Oct 30, 2025
There are two major use cases for Teradata. One is that whatever data we cleanse and aggregate, we push it to Teradata for our business users. We create some ETL pipelines and automate them. The second use case is for data wrangling. Whatever data we publish to Teradata is used for various analyses, various SQLs, and a lot of dashboards sit on top of Teradata. My most recent project was for inferring the Net Promoter Score for one of the largest Australian banks where I used Teradata for ETL and data analysis. The entire cleaned data of the bank was stored in Teradata, wherein we had eight to ten different datasets coming in from different sources that were aggregated or converged into Teradata. Using that data, we developed certain business rules on top of that aggregated dataset, which was further fed into Tableau that sat on top of Teradata. Using that data, we were able to infer the customer Net Promoter Score for a rolling six-week average.
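The rolling six-week Net Promoter Score described above can be sketched in a few lines of Python. This assumes the standard NPS definition (percent of promoters scoring 9–10 minus percent of detractors scoring 0–6) and weekly lists of survey scores; the actual business rules the reviewer built in Teradata would of course be more involved:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

def rolling_nps(weekly_scores, window=6):
    """NPS over a rolling window of weeks, pooling all responses in the window."""
    out = []
    for end in range(window, len(weekly_scores) + 1):
        pooled = [s for week in weekly_scores[end - window:end] for s in week]
        out.append(round(nps(pooled), 1))
    return out

print(nps([10, 10, 9, 7, 3]))  # prints 40.0
```

In the pipeline the reviewer describes, a query over the aggregated Teradata tables would produce the weekly score lists, and a result like this would feed the Tableau dashboards.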
Interim and Fractional CIO and CTO at a consultancy with 11-50 employees
Real User
Top 10
Sep 30, 2025
My main use case for Teradata involves data analytics, data mapping, and data process improvement. I use Teradata for internal processes such as revenue reporting. We have utilized it for procure-to-pay, analyzing all the data points from procurement through settlements to payments: ensuring that the right data has been sent, checking the data status, and guaranteeing the cleanliness and quality of the data. We are also using Teradata for generative AI work related to data mapping, data content, and data distribution.
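The data-quality checks described above (right data sent, valid status, clean fields) could be sketched as a simple validation pass. The field names and status values here are hypothetical, just to show the shape of such a check:

```python
def quality_report(records, required_fields):
    """Count records with missing required fields or invalid statuses."""
    VALID_STATUSES = {"procured", "settled", "paid"}  # assumed status values
    issues = {"missing": 0, "bad_status": 0}
    for rec in records:
        if any(rec.get(f) in (None, "") for f in required_fields):
            issues["missing"] += 1
        if rec.get("status") not in VALID_STATUSES:
            issues["bad_status"] += 1
    return issues
```

In practice such checks would typically run as SQL against the warehouse tables; the Python version just makes the rule structure explicit.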
Teradata is primarily used for data warehousing across all customers. My clients have built applications that use Teradata, and their use varies from customer to customer, depending on the industry and database size. The primary function is as an OLAP analytical ecosystem.
Team Leader for Data Base Administrators at a government with 1,001-5,000 employees
Real User
Top 10
Nov 4, 2024
Our primary use case for Teradata is as a data warehouse; we store our data warehouse databases there, including the EDW. It is mainly used for our data warehouse environment, where we run a lot of analytics and heavy queries.
The use cases vary based on the projects. In most projects, I worked with ETL tools like Informatica alongside Teradata. However, there was a specific project where we built a real-time data warehouse for Active Directory data using Teradata. Oracle was the source system, with potential additional sources feeding into the Oracle database. We used Unix scripting to extract data from Oracle and leveraged a colleague's Unix expertise for the Teradata portion. We wrote ETL queries and performed data profiling before loading the data into the target data warehouse for further processing by other tools. Our task was to create an active data warehouse in Teradata.
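The pre-load data profiling mentioned above typically means computing basic per-column statistics on the extracted rows before they hit the target warehouse. A minimal sketch, assuming rows arrive as Python dicts (the real project used Unix scripting and ETL queries for this):

```python
def profile(rows):
    """Basic per-column profile: null count and distinct non-null count."""
    columns = {k for row in rows for k in row}
    report = {}
    for col in columns:
        values = [row.get(col) for row in rows]
        report[col] = {
            "nulls": sum(v is None for v in values),
            "distinct": len({v for v in values if v is not None}),
        }
    return report
```

A profile like this flags columns that are unexpectedly sparse or constant before the load, which is cheaper than discovering the problem in the warehouse.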
We used its capabilities for critical tasks, particularly recovery jobs. We capitalized on the database's ability to transfer entire blocks of data rather than just transactional information. This block-level approach proved significantly faster than other techniques such as UBC WAN.
Teradata is a powerful tool for handling substantial data volumes with its parallel processing architecture, supporting both cloud and on-premise environments efficiently. It offers impressive capabilities for fast query processing, data integration, and real-time reporting, making it suitable for diverse industrial applications.
Our use cases were related to banks and other financial institutions.
I use the solution in my company for reporting.
We use Teradata IntelliFlex for data warehousing, which is a way of presenting data for faster retrieval.