Find out what your peers are saying about Apache, Cloudera, Amazon Web Services (AWS) and others in Hadoop.
I would rate the technical support from Amazon as ten out of ten.
We get on-call support, screen-sharing support, and immediate support, so there are no problems.
They help with billing, cost determination, IAM properties, security compliance, and deployment and migration activities.
I have received support via newsgroups or guidance on specific discussions, which is what I would expect in an open-source situation.
The technical support is quite good and better than IBM.
Scalability can be provisioned using the auto-scaling feature, EC2 instances, on-demand instances, and storage locations like block storage, S3, or file storage.
Regular updates, patch installations, monitoring, logging, alerting, and disaster recovery activities are crucial for maintaining stability.
Apache Spark resolves many of the problems with MapReduce and Hadoop, such as the inability to run Python or machine learning workloads effectively.
Without a doubt, we have had some crashes because each situation is different, and while the prototype in my environment is stable, we do not know everything at other customer sites.
We faced challenges but overcame those challenges successfully.
The cost factor differs significantly. When you run a Spark application on EKS, you run at the pod level, so you can control the compute cost. But in Amazon EMR, to run a single application you have to launch an entire EC2 instance.
I have thoughts on what would be great to see in the product, such as AI/ML features or additional options.
There is room for improvement with respect to retries, handling the volume of data on S3 buckets, cluster provisioning, scaling, termination, security, and integration between services like S3, Glue, Lake Formation, and DynamoDB.
Various tools like Informatica, TIBCO, or Talend offer specific capabilities, but their licensing can be costly.
Integrating with Active Directory, managing security, and configuration are the main concerns.
Cost optimization can be achieved through instance usage, cluster sharing, and auto-scaling.
On a scale where one is high and ten is low, I would rate the price of Amazon EMR as good.
It can be deployed on-premises, unlike competitors' cloud-only solutions.
Amazon EMR helps in scalability, real-time and batch processing of data, handling efficient data sources, and managing data lakes, data stores, and data marts on file systems and in S3 buckets.
Amazon EMR provides out-of-the-box solutions with Spark and Hive.
We are using it to clean the data and transform the data in such a way that the end-user can get the insights faster.
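A clean-then-transform step of the kind described might look like the following minimal sketch in plain Python (illustrative only; the field names and records are invented, and no EMR cluster or Spark session is assumed):

```python
# Illustrative clean-and-transform step: drop malformed records,
# normalize fields, then aggregate so end users can query results directly.
raw = [
    {"user": " Alice ", "amount": "10.5"},
    {"user": "bob", "amount": "not-a-number"},  # malformed: dropped
    {"user": "alice", "amount": "4.5"},
]

def clean(rec):
    # Normalize the user field and parse the amount; reject bad rows.
    try:
        return {"user": rec["user"].strip().lower(),
                "amount": float(rec["amount"])}
    except ValueError:
        return None

cleaned = [r for r in (clean(rec) for rec in raw) if r is not None]

# Transform: total amount per user, ready for end-user consumption.
totals = {}
for r in cleaned:
    totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]

print(totals)  # {'alice': 15.0}
```

In a real EMR job the same clean/aggregate shape would typically be expressed as Spark DataFrame transformations, but the logic is the same.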
Not all solutions can deliver this data fast enough to be usable, except for solutions such as Apache Spark Structured Streaming.
The solution is beneficial in that it provides a stable, long-held understanding of the framework that does not vary day by day. That is very helpful in my prototyping activity as an architect assessing Apache Spark, Great Expectations, and Vault-based solutions against those proposed by clients, such as TIBCO or Informatica.
This is the only solution that can be installed on-premises.
| Product | Market Share (%) |
|---|---|
| Apache Spark | 13.4% |
| Cloudera Distribution for Hadoop | 14.0% |
| Amazon EMR | 10.4% |
| Other | 62.2% |


| Company Size | Count |
|---|---|
| Small Business | 6 |
| Midsize Enterprise | 5 |
| Large Enterprise | 12 |

| Company Size | Count |
|---|---|
| Small Business | 28 |
| Midsize Enterprise | 15 |
| Large Enterprise | 32 |

| Company Size | Count |
|---|---|
| Small Business | 16 |
| Midsize Enterprise | 9 |
| Large Enterprise | 31 |
Spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines, that is maintained in a fault-tolerant way. It was developed in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store the reduction results on disk. Spark's RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.
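The linear map-then-reduce dataflow described above can be sketched in plain Python as a word count (no Hadoop or Spark involved; the phase names are illustrative):

```python
from collections import Counter

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every line
    return [(word, 1) for line in lines for word in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts emitted for each word
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Linear dataflow: read input -> map -> reduce -> store results
lines = ["spark builds on rdds", "rdds are fault tolerant"]
word_counts = reduce_phase(map_phase(lines))
print(word_counts["rdds"])  # 2
```

In Hadoop MapReduce, the intermediate pairs and final counts would be written to disk between phases; Spark's RDDs instead keep such working sets in memory across operations, which is what makes iterative algorithms (like machine learning training loops) much faster.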