My main use case for SingleStore is real-time analytics, along with data storage and serving for my ML workloads. It lets me run low-latency analytics and model-driven use cases at scale, which is difficult with OLAP or OLTP databases alone. One specific example of a model-driven use case is a data warehouse for a fintech that processes payments through TC worldwide. The integration with Azure Data Factory went smoothly. It serves me well because I can retrieve the data for my dashboards very quickly, and the data compression is effective. Overall, I find SingleStore very useful for managing and processing information for business intelligence: because the data arrives quickly, it loads correctly into the dashboards.
Senior Data Analyst at a comms service provider with 1,001-5,000 employees
Real User
Dec 6, 2023
The issue we encountered was that we could not efficiently extract and evaluate data from our existing databases, which created limitations in Tableau, as it struggled to handle datasets exceeding a hundred million records. We therefore explored alternative systems, initially attempting to access all data directly from the target systems. In pursuit of faster, more effective data management, we evaluated Exasol as an alternative OLAP system, but ultimately opted for SingleStore.
When I was working at Wag, MySQL had trouble scaling. We were using a db.m4.10xlarge RDS instance and three replicas. These machines are expensive, and with the growth the company was experiencing, it was clear we would not be able to scale properly. There were heavy writes hitting MySQL that should not have been there; an event table, for example. This table was close to 900M rows and took up more than 300GB. I decided to migrate this table to MemSQL, freeing up both storage and write capacity on MySQL. After the event table was migrated to MemSQL using the MemSQL built-in S3 pipeline, the table was compressed down to 30GB.
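For context, a migration like the one described above is typically driven by SingleStore's `CREATE PIPELINE ... LOAD DATA S3` DDL, which streams exported files from an S3 bucket into a table. The Python sketch below just assembles that DDL as a string; the pipeline name, table name, bucket path, region, and credential placeholders are hypothetical illustrations, not details from the review.

```python
# Sketch: assembling the SingleStore (MemSQL) S3 pipeline DDL used for a
# one-time table migration. The pipeline/table/bucket/region values and
# credential placeholders below are hypothetical.

def s3_pipeline_ddl(pipeline: str, table: str, bucket_path: str,
                    region: str) -> str:
    """Return a CREATE PIPELINE ... LOAD DATA S3 statement."""
    return (
        f"CREATE PIPELINE {pipeline} AS "
        f"LOAD DATA S3 '{bucket_path}' "
        f"CONFIG '{{\"region\": \"{region}\"}}' "
        "CREDENTIALS '{\"aws_access_key_id\": \"<key>\", "
        "\"aws_secret_access_key\": \"<secret>\"}' "
        f"INTO TABLE {table} "
        "FORMAT CSV FIELDS TERMINATED BY ','"
    )

ddl = s3_pipeline_ddl("events_migration", "events",
                      "my-bucket/events/", "us-east-1")
print(ddl)
# Once this DDL has been run on the cluster,
# START PIPELINE events_migration; begins ingesting the exported files.
```

After the pipeline finishes loading, it can be stopped and dropped; the columnstore format is what accounts for the kind of compression the reviewer reports.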
SingleStore delivers the performance you need for enterprise AI, providing a highly performant data platform for apps and analytics at scale. SingleStore enables organizations to scale from one to one million customers on one unified platform, and offers transparent pricing at https://www.singlestore.com/pricing/
SingleStore caters to over 400 customers globally, including major banks and tech companies in 50+ countries and 40+ verticals. It offers seamless scaling for...
The solution is primarily used for storing data.