
Apache Spark vs QueryIO comparison

 


Executive Summary

Review summaries and opinions

We asked business professionals to review the solutions they use. Here are some excerpts of what they said:
 

Categories and Ranking

Apache Spark
Ranking in Hadoop: 1st
Average Rating: 8.4
Reviews Sentiment: 7.7
Number of Reviews: 66
Ranking in other categories: Compute Service (4th), Java Frameworks (2nd)

QueryIO
Ranking in Hadoop: 15th
Average Rating: 8.0
Number of Reviews: 1
Ranking in other categories: none
 

Mindshare comparison

As of May 2025, Apache Spark holds a 17.8% mindshare in the Hadoop category, down from 21.4% the previous year. QueryIO holds 0.5%, down from 0.7% over the same period. Mindshare is calculated from PeerSpot user engagement data.
 

Featured Reviews

Ilya Afanasyev - PeerSpot reviewer
Reliable, scalable, and handles large amounts of data well
We use batch processing, and it works well with our formats and file versions. There's a lot of functionality. In our pipeline, each hour we copy the changes from MongoDB to a specific file. Previously, the pipeline copied all of the data from every table each time, even when nothing had changed. The tables hold a lot of data, and the latest MongoDB version makes it possible to read only the changed data. This reduced the cost and configuration of the cluster, and we saved about $150,000. The solution is scalable. It's a stable product.
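
As a rough illustration of the incremental copy described above, the sketch below reads only documents changed since the previous hourly run. The connection URI, database, collection, updated_at field, and output path are placeholders, and it assumes the MongoDB Spark connector (10.x option names) is on the cluster; the review does not say how the delta was actually implemented.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative sketch only -- names and paths below are placeholders,
    # not details from the review.
    spark = (
        SparkSession.builder
        .appName("hourly-mongodb-delta-copy")
        .config("spark.mongodb.read.connection.uri", "mongodb://mongo-host:27017")
        .getOrCreate()
    )

    # Watermark persisted by the pipeline between hourly runs.
    last_run = "2025-05-01T10:00:00Z"

    changed_docs = (
        spark.read.format("mongodb")
        .option("database", "appdb")
        .option("collection", "events")
        .load()
        .where(F.col("updated_at") > F.lit(last_run))  # only documents changed since the last run
    )

    # Append just the delta instead of re-copying every table in full each hour.
    changed_docs.write.mode("append").parquet("/warehouse/events/")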
MR
Stable with good connectivity and good integration capabilities
Data cleansing is not intuitive or user-friendly. When things have errors, you have to hunt them down rather than the solution simply showing you where to find them. I would recommend that they look at the Tableau Prep tool and see how it is pieced together; that's a great data cleansing tool. If Microsoft had something like that, we wouldn't even have to look at some of the other options.

There needs to be some simplification of the user interface; right now it's too complicated. There isn't a way to put controls on the solution, so anyone can use any part of it, and sometimes novices will go and try to create things without knowing enough about what is official and what is published. It would be ideal if we could segment off certain sections so that not everyone had access to the whole solution.

I'd like to see more of a mapping tool so that you could see how the reports are connected, similar to Tableau Prep and Naim. That would make for a pretty useful diagnostics check, and people would better understand the linkage between their datasets.

It would be nice if the solution offered some templates. That would make it even more plug and play and give people a good jumping-off point; after that, they could explore other bells and whistles as they get further into understanding the solution. The solution should also work with some virtualization; that would be a good added feature. If this product had those things, I wouldn't need to use other products.

Quotes from Members

 

Pros

"I like that it can handle multiple tasks parallelly. I also like the automation feature. JavaScript also helps with the parallel streaming of the library."
"The main feature that we find valuable is that it is very fast."
"Provides a lot of good documentation compared to other solutions."
"The most significant advantage of Spark 3.0 is its support for DataFrame UDF Pandas UDF features."
"ETL and streaming capabilities."
"The most valuable feature is the Fault Tolerance and easy binding with other processes like Machine Learning, graph analytics."
"The product's deployment phase is easy."
"Spark can handle small to huge data and is suitable for any size of company."
"Anyone who has even a little bit of knowledge of the solution can begin to create things. You don't have to be technical to use the solution."
 

Cons

"We've had problems using a Python process to try to access something in a large volume of data. It crashes if somebody gives me the wrong code because it cannot handle a large volume of data."
"Needs to provide an internal schedule to schedule spark jobs with monitoring capability."
"Technical expertise from an engineer is required to deploy and run high-tech tools, like Informatica, on Apache Spark, making it an area where improvements are required to make the process easier for users."
"More ML based algorithms should be added to it, to make it algorithmic-rich for developers."
"In data analysis, you need to take real-time data from different data sources. You need to process this in a subsecond, do the transformation in a subsecond, and all that."
"Dynamic DataFrame options are not yet available."
"The solution must improve its performance."
"When you are working with large, complex tasks, the garbage collection process is slow and affects performance."
"There needs to be some simplification of the user interface."
 

Pricing and Cost Advice

"We are using the free version of the solution."
"It is quite expensive. In fact, it accounts for almost 50% of the cost of our entire project."
"Licensing costs can vary. For instance, when purchasing a virtual machine, you're asked if you want to take advantage of the hybrid benefit or if you prefer the license costs to be included upfront by the cloud service provider, such as Azure. If you choose the hybrid benefit, it indicates you already possess a license for the operating system and wish to avoid additional charges for that specific VM in Azure. This approach allows for a reduction in licensing costs, charging only for the service and associated resources."
"Spark is an open-source solution, so there are no licensing costs."
"Since we are using the Apache Spark version, not the data bricks version, it is an Apache license version, the support and resolution of the bug are actually late or delayed. The Apache license is free."
"I did not pay anything when using the tool on cloud services, but I had to pay on the compute side. The tool is not expensive compared with the benefits it offers. I rate the price as an eight out of ten."
"Apache Spark is an open-source solution, and there is no cost involved in deploying the solution on-premises."
"It is an open-source solution, it is free of charge."
QueryIO: Information not available
 

Top Industries

By visitors reading reviews

Apache Spark:
Financial Services Firm: 27%
Computer Software Company: 13%
Manufacturing Company: 8%
Comms Service Provider: 6%

QueryIO: No data available
 

Company Size

By reviewers

Apache Spark: Large Enterprise, Midsize Enterprise, Small Business (percentage breakdown not shown)

QueryIO: No data available
 

Questions from the Community

What do you like most about Apache Spark?
We use Spark to process data from different data sources.
What is your experience regarding pricing and costs for Apache Spark?
Compared to other solutions like Doc DB, Spark is more costly because it requires significant investment in infrastructure, which can be expensive. While cloud...
What needs improvement with Apache Spark?
The Spark solution could improve in scheduling tasks and managing dependencies. Spark alone cannot handle sequential tasks, requiring environments like Airflow scheduler or scripts. For instance, o...
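
As a hedged sketch of the orchestration pattern the answer points to, the DAG below uses Airflow to run two Spark jobs in sequence. The job paths, schedule, and connection id are hypothetical, and it assumes a recent Airflow (2.4+) with the apache-spark provider installed.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="sequential_spark_pipeline",
        start_date=datetime(2025, 1, 1),
        schedule="0 2 * * *",   # run nightly at 02:00
        catchup=False,
    ) as dag:
        extract = SparkSubmitOperator(
            task_id="extract",
            application="/jobs/extract.py",
            conn_id="spark_default",
        )
        transform = SparkSubmitOperator(
            task_id="transform",
            application="/jobs/transform.py",
            conn_id="spark_default",
        )

        # Airflow enforces the ordering and retries that Spark alone does not manage.
        extract >> transform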
 

Comparisons

No data available
 

Overview

 

Sample Customers

Apache Spark: NASA JPL, UC Berkeley AMPLab, Amazon, eBay, Yahoo!, UC Santa Cruz, TripAdvisor, Taboola, Agile Lab, Art.com, Baidu, Alibaba Taobao, EURECOM, Hitachi Solutions
QueryIO: Information Not Available