

I would rate the technical support of Apache Spark an eight because when we had questions, we found solutions, and it was straightforward.
I have received support via newsgroups or guidance on specific discussions, which is what I would expect in an open-source situation.
The fact that I have never needed to contact support speaks well of the product: I simply don't face issues.
Google's support team is good at resolving issues, especially with large data.
Compared to other support systems, such as those in Braze, Tealium, Google, and others like Adobe, Google Cloud takes more time because it is a bigger company.
As a team lead, I'm responsible for handling five to six applications, but Google Cloud Dataflow seems to handle our use case effectively.
Google Cloud Dataflow can handle large data processing for real-time streaming workloads as they grow, making it a good fit for our business.
Google Cloud Dataflow has auto-scaling capabilities, allowing me to add different machine types based on pace and requirements.
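For context, Dataflow's autoscaling behavior and worker machine type are set through pipeline options at launch time. A typical invocation of a Beam Python pipeline might look like the following config fragment (the script name, project, region, and bucket are placeholders, not values from this review):

```shell
# Hypothetical Dataflow launch; substitute your own project, region, and bucket.
python my_pipeline.py \
  --runner=DataflowRunner \
  --project=my-project \
  --region=us-central1 \
  --temp_location=gs://my-bucket/tmp \
  --autoscaling_algorithm=THROUGHPUT_BASED \
  --max_num_workers=20 \
  --worker_machine_type=n1-standard-4
```

`--autoscaling_algorithm` and `--max_num_workers` bound how the service scales the worker pool, while `--worker_machine_type` selects the VM size for each worker.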
MapReduce needs to perform numerous disk input and output operations, while Apache Spark can use memory to store and process data.
Without a doubt, we have had some crashes because each situation is different, and while the prototype in my environment is stable, we do not know everything at other customer sites.
I have not encountered any issues with the performance of Dataflow, as it is stable and backed by Google services.
The job we built has not failed once over six to seven months.
The automatic scaling feature helps maintain stability.
Various tools such as Informatica, TIBCO, and Talend each offer specific capabilities, but their licensing can be costly.
I find that I lack the technical depth to make recommendations for future updates of Apache Spark.
Outside of Google Cloud Platform, it is difficult for others to adopt, and it may need more promotion as a standalone technology.
I feel they could introduce a feature that, when we have data in the tables, automatically creates a unique persona of the user, so we do not have to do that manually.
Processing a huge volume of data can cause job failures due to array-size limits.
It is part of a package received from Google, and the pricing is not too high.
The most important part is that everything can be connected, and the data exchange across overseas connections is fast and reliable.
Apache Spark is the solution, and within it, you have PySpark, which is the API for Apache Spark to write and run Python code.
The solution is beneficial in that it provides a stable, long-held understanding of the framework that does not vary day by day. That is very helpful in my prototyping activity as an architect assessing Apache Spark, Great Expectations, and Vault-based solutions against those proposed by clients such as TIBCO or Informatica.
It supports multiple programming languages such as Java and Python, enabling flexibility without the need to learn something new.
The integration within Google Cloud Platform is very good.
I can see what is happening with this script, how many users are affected, whether the script is working, what is failing, and how we can rectify issues with proper monitoring.
| Product | Mindshare (%) |
|---|---|
| Apache Spark | 13.6% |
| Cloudera Distribution for Hadoop | 14.8% |
| HPE Data Fabric | 10.5% |
| Other | 61.1% |

| Product | Mindshare (%) |
|---|---|
| Google Cloud Dataflow | 3.7% |
| Apache Flink | 8.9% |
| Databricks | 8.1% |
| Other | 79.3% |

| Company Size | Count |
|---|---|
| Small Business | 28 |
| Midsize Enterprise | 16 |
| Large Enterprise | 32 |

| Company Size | Count |
|---|---|
| Small Business | 3 |
| Midsize Enterprise | 2 |
| Large Enterprise | 11 |
Apache Spark is a leading open-source processing tool known for scalability and speed in managing large datasets. It supports both real-time and batch processing and is widely used for building data pipelines, machine learning applications, and analytics.
Apache Spark's strengths lie in its ability to process large data volumes efficiently through real-time and batch capabilities. With in-memory computation, it ensures fast data processing and significant performance gains. Its wide range of APIs, including those for machine learning, SQL, and analytics, makes it versatile in handling complex data operations. While popular for ease of use and fault tolerance, Spark's management, debugging, and user-friendliness could benefit from improvements. Better GUIs, integration with BI tools, and enhanced monitoring are desired, alongside shuffling optimization and compatibility with more programming languages.
What are Apache Spark's key features?
Organizations use Apache Spark predominantly for in-memory data processing, enabling seamless integration with big data frameworks. It is applied in security analytics and predictive modeling, and it helps facilitate secure data transmissions in AI deployments. Industries leverage Spark's speed for sentiment analysis, data integration, and efficient ETL transformations.
Google Cloud Dataflow provides scalable batch and streaming data processing with Apache Beam integration, supporting Python and Java. It's designed for efficient data transformations, analytics, and machine learning, featuring cost-effective serverless operations.
Google Cloud Dataflow is a robust tool for handling large-scale data processing tasks with flexibility in processing batch and streaming workloads. It integrates seamlessly with other Google Cloud services like Pub/Sub for real-time messaging and BigQuery for advanced analytics. The platform supports a wide array of data transformation and preparation needs, making it suitable for complex data workflows and machine learning applications. Despite its advantages, users have noted challenges such as incomplete error logs, longer job startup times, and some limitations in the Python SDK.
What are the key features of Google Cloud Dataflow?
Industries, especially retail and eCommerce, implement Google Cloud Dataflow for batch job execution, data transformation, and event stream processing. It aids in constructing distributed data pipelines for extensive analytics tasks, supporting large-scale data-driven decisions.
We monitor all Hadoop reviews to prevent fraudulent reviews and keep review quality high. We do not post reviews by company employees or direct competitors. We validate each review for authenticity via cross-reference with LinkedIn, and personal follow-up with the reviewer when necessary.