Data Analyst at an insurance company with 501-1,000 employees
Real User
Top 10
Oct 30, 2025
I have been using Dremio for a year and a half. My main use case for Dremio is that I am able to access multiple databases and I can easily and quickly connect Dremio with my dashboards. In my recent project, my databases are full of information about our insurance operation, and I monitor my KPIs by connecting Dremio to my dashboards. I have multiple KPIs that I have to stay tuned to, and I need to access them very quickly and easily, which is how Dremio helps me. In my work with Dremio, I often come across databases that I don't know very much about, and in Dremio's web environment I am able to scroll and navigate through the databases by searching for, for example, a column name, so it is much easier to navigate with that kind of help.
Senior Consultant - Data Analytics at a comms service provider with 201-500 employees
Real User
Top 5
Oct 21, 2025
I have been using Dremio on and off as a data warehouse for the past three years. My main use case for Dremio is as a logical data warehouse, where we use Dremio with VDSs as an alternative to AWS Glue or Apache Hive. At the end of our ETL, after the data types and everything else have been cast, we make the data available in Dremio as VDSs and then move on to our further data warehousing schemes within Dremio. We use Dremio enterprise-wide now, and the key use case has been reducing our costs when it comes to data storage. Our main use case for Dremio is as a data warehouse, and the challenge it helped us solve is that physical data warehouses such as Redshift couple storage with compute, which creates hardware-scaling conflicts. Dremio helps us decouple the two and lets us catalog more. We can manage everything under one system.
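The logical-warehouse pattern this reviewer describes can be sketched in miniature: in Dremio, a VDS is a saved SQL view over physical sources, and the casting happens in the view so downstream consumers never touch the raw layer. The sketch below mimics that pattern with stdlib SQLite views; the table and column names are illustrative assumptions, not the reviewer's actual schema.

```python
import sqlite3

# Physical layer: raw data landed as-is, with amounts still stored as text
# (illustrative schema, not the reviewer's actual data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_claims (id INTEGER, amount TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO raw_claims VALUES (?, ?, ?)",
    [(1, "100.50", "east"), (2, "250.00", "west")],
)

# Logical layer: the view casts the data types once, playing the role of a
# Dremio VDS, so every consumer sees clean, typed columns.
conn.execute("""
    CREATE VIEW vds_claims AS
    SELECT id, CAST(amount AS REAL) AS amount, region
    FROM raw_claims
""")

total = conn.execute("SELECT SUM(amount) FROM vds_claims").fetchone()[0]
print(total)  # 350.5
```

Because the view is just stored SQL, storage and the query layer stay decoupled: the raw files can grow or move without the consumers' queries changing, which is the Redshift coupling problem the reviewer mentions.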
I use Dremio for proof of concept purposes. I haven't used it in a real-time project, however, I explore Dremio as a data virtualization application in the ecosystem. It is relatively new, possibly a one-year or two-year-old system.
We use Dremio for financial data analytics and as a data lake. We connect Dremio with Oracle, Docker, MySQL, and utilize it for Power BI. Additionally, we use it to process data from MongoDB, although we face occasional challenges with NoSQL integration.
Sr Manager at a transportation company with 10,001+ employees
Real User
Dec 6, 2023
We have been using it to build one of our frameworks. We primarily use Dremio to create a data framework and a data queue. It's being used in combination with DBT and Databricks.
Dremio is a platform that enables you to perform high-performance queries from a data lake. It helps you manage that data in a sophisticated way. The use cases are broad, but it allows you to make extremely good use of data in a data lake. It gives you data warehouse capabilities with data lake data.
I can visualize traffic from BI tools such as Tableau and have my tables and schemas on the same page. The data lake comprises everything. If I want one structure, I connect it to a big table in Hive, and the data team can read my SQL and work on my tables, schemas, and table structures, everything all in one place. Dremio is as good as any other Presto engine.
I have used this solution as an ETL tool to create data marts on data lakes for bridging. I have used it as a query layer for ad-hoc queries and for services that do not require sub-second latency to read data from very big data lakes. I have also used it to run simple ad-hoc queries, similar to Athena, Presto, or BigQuery. We do not have a large number of people using this solution because it is mainly set up as a service-to-service integration. We integrated a big workload when we started using Dremio, and this was very expensive. The migration is still in progress. As soon as this migration is finished, we plan to migrate the ad-hoc queries from our analytics team.
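The Athena/Presto-style pattern mentioned above, running ad-hoc SQL directly over raw files instead of maintaining a loaded warehouse copy, can be sketched with the standard library: a directory of CSV files stands in for the data lake, and an in-memory engine scans them on demand. File names and columns are illustrative assumptions, not the reviewer's actual data.

```python
import csv
import sqlite3
import tempfile
from pathlib import Path

# Stand-in "data lake": a directory of raw CSV files (illustrative layout).
lake = Path(tempfile.mkdtemp())
(lake / "events_2024.csv").write_text("user,latency_ms\na,120\nb,340\n")
(lake / "events_2025.csv").write_text("user,latency_ms\na,90\nc,500\n")

# Ad-hoc query layer: scan the lake's files into a throwaway in-memory
# table at query time, rather than loading a permanent warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, latency_ms INTEGER)")
for f in sorted(lake.glob("*.csv")):
    with f.open() as fh:
        rows = [(r["user"], int(r["latency_ms"])) for r in csv.DictReader(fh)]
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

result = conn.execute(
    "SELECT user, AVG(latency_ms) FROM events GROUP BY user ORDER BY user"
).fetchall()
print(result)  # [('a', 105.0), ('b', 340.0), ('c', 500.0)]
```

The trade-off matches the review: scanning at query time is simple and cheap to set up, but it is only suitable for workloads that do not need sub-second latency.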
Dremio offers a comprehensive platform for data warehousing and data engineering, integrating seamlessly with data storage systems like Amazon S3 and Azure. Its main features include scalability, query federation, and data reflection. Dremio's core strength lies in its ability to function as a robust data lake query engine and data warehousing solution. It facilitates the creation of complex queries with ease, thanks to its support for Apache Airflow and query federation across endpoints....
We use Dremio for data engineering.