Data professional at a financial services firm with 1,001-5,000 employees
Real User
Apr 17, 2021
If I could give any advice to the people developing it, I would suggest that they really look at the enterprise features: being able to log what's going on, capture the current state of processing, and recover from error situations. So, there should be a focus on logging, recoverability, and monitoring. We should be able to monitor what's going on and, in case of any issues, recover and restart processing. For scalability and performance, I would suggest the Pushdown feature, so that the transformation runs directly on the data source instead of being calculated on the ETL server. For this, you need to be aware of the type of data source, because each database or storage platform, such as Hadoop, has its own SQL dialect; Microsoft, Oracle, and IBM each have their own. Based on the feedback I have received, the initial setup takes some time. It could perhaps be simpler.
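To illustrate the pushdown idea in general terms, here is a minimal sketch. It is not based on this product's actual API; the table and column names (orders, region, amount) are hypothetical, and SQLite stands in for whatever source database is in use. It contrasts pulling every row into the ETL layer with pushing the aggregation down to the source so only the summarized result leaves the database.

```python
# Hypothetical pushdown illustration; table/column names are made up.
import sqlite3  # stand-in for any SQL-capable source system

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES ('EU', 10.0), ('EU', 20.0), ('US', 5.0);
""")

# Without pushdown: pull every row into the ETL layer and aggregate there.
rows = conn.execute("SELECT region, amount FROM orders").fetchall()
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount

# With pushdown: the aggregation runs inside the source database itself,
# so only the summarized result crosses the wire. In practice the generated
# SQL must match each source's dialect (SQL Server, Oracle, Db2, Hive, ...).
pushed_down = dict(conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
).fetchall())

assert totals == pushed_down
```

The design point is simply that the second query moves the computation to where the data lives, which is what makes pushdown attractive for scalability and performance.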