Senior Associate at a financial services firm with 10,001+ employees
Real User
Relatively fast when reading data into other platforms but can't handle queries with insufficient memory
Pros and Cons
  • "As compared to Hive on MapReduce, Impala on MPP returns results of SQL queries in a fairly short amount of time, and is relatively fast when reading data into other platforms like R."
  • "The key shortcoming is its inability to handle queries when there is insufficient memory. This limitation can be bypassed by processing the data in chunks."

What is most valuable?

Impala. As compared to Hive on MapReduce, Impala on MPP returns results of SQL queries in a fairly short amount of time, and is relatively fast when reading data into other platforms like R (for further data analysis) or QlikView (for data visualisation).

How has it helped my organization?

The quick access to data enabled more frequent data-backed decisions.

What needs improvement?

The key shortcoming is its inability to handle queries when there is insufficient memory. This limitation can be bypassed by processing the data in chunks.
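The chunking workaround can be sketched in plain Python: split one oversized query into per-range queries that each fit in memory, then run them one at a time. This is a hedged illustration only; the table and column names (sales, txn_id) are hypothetical, and the generated SQL would be submitted to Impala by whatever client you use (e.g. impyla), with results combined afterwards.

```python
# Sketch: bypass Impala's memory limit by splitting one large query
# into per-chunk queries over a numeric key. Table/column names are
# hypothetical; each chunk is small enough to fit in memory.

def chunk_ranges(lo, hi, chunk_size):
    """Yield [start, end) ranges covering lo..hi."""
    start = lo
    while start < hi:
        end = min(start + chunk_size, hi)
        yield start, end
        start = end

def chunked_queries(lo, hi, chunk_size):
    template = ("SELECT txn_id, amount FROM sales "
                "WHERE txn_id >= {start} AND txn_id < {end}")
    return [template.format(start=s, end=e)
            for s, e in chunk_ranges(lo, hi, chunk_size)]

# Four smaller queries instead of one memory-hungry scan:
queries = chunked_queries(0, 1000000, 250000)
```

Each chunk's result set would then be appended to the output (or to the downstream R/QlikView extract) before the next chunk runs.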

For how long have I used the solution?

Two-plus years.

Buyer's Guide
Apache Hadoop
April 2024
Learn what your peers think about Apache Hadoop. Get advice and tips from experienced pros sharing their opinions. Updated: April 2024.
767,847 professionals have used our research since 2012.

What do I think about the stability of the solution?

Typically, instability is experienced due to insufficient memory, caused either by a large job being triggered or by multiple concurrent small requests.

What do I think about the scalability of the solution?

No issues. This is by default a cluster-based setup, and hence scaling is just a matter of adding new data nodes.

How are customer service and support?

Not applicable to Cloudera. We have a separate onsite vendor to manage the cluster.

Which solution did I use previously and why did I switch?

No. Two years ago this was a new team and hence there were no legacy systems to speak of.

How was the initial setup?

Complex. The Cloudera stack by itself was insufficient. Integration with other tools like R and QlikView was required, and in-house programs had to be built to create an automated data pipeline.

What's my experience with pricing, setup cost, and licensing?

Not much advice, as pricing and licensing are handled at an enterprise level.

However, do take into consideration that data storage and compute capacity scale differently, and hence purchasing a "boxed"/"all-in-one" solution (software and hardware) might not be the best idea.

Which other solutions did I evaluate?

Yes. Oracle Exadata and Teradata.

What other advice do I have?

Try open-source Hadoop first but be aware of greater implementation complexity. If open-source Hadoop is "too" complex, then consider a vendor packaged Hadoop solution like HortonWorks, Cloudera, etc.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Aria Amini - PeerSpot reviewer
Data Engineer at Behsazan Mellat
Real User
Top 5, Leaderboard
A big-data engineering solution that integrates well into a variety of environments
Pros and Cons
  • "Its integration is Hadoop's best feature because that allows us to support different tools in a big data platform."
  • "It could be more user-friendly."

What is our primary use case?

We use the Apache Hadoop environment for use cases involving big data engineering. We have many applications, such as collecting, transforming, loading, and storing log event data for big organizations.

What is most valuable?

Its integration is Hadoop's best feature because that allows us to support different tools in a big data platform. Hadoop can integrate all of these features in various environments and supports use cases across all of the tools in the environment.

What needs improvement?

It could be more user-friendly. Other platforms, such as Cloudera, used for big data, are more user-friendly and presented in a more straightforward way. They are also more flexible than Hadoop. Hadoop's scrollback is not easy to use, either.

For how long have I used the solution?

I have used Apache Hadoop for three years, and I use Hadoop's open-source version.

What do I think about the stability of the solution?

Hadoop is stable because it runs on a cluster. If issues occur on some of the servers, Hadoop can continue operating.

What do I think about the scalability of the solution?

Apache Hadoop is very good for scalability because one of its main features is its scalability tool. For all the big data infrastructure, we have about ten employees working in the Hadoop environment as engineers and developers. One of our clients is a bank, and the Hadoop environment can retrieve a lot of data, so we could have an unlimited number of end users.

How was the initial setup?

The initial setup is, to some extent, difficult because additional skills are required, specifically knowledge of the operating system at installation. We need someone with professional skills to install the Hadoop environment. With one engineer with those skills, it takes ten days to two weeks to deploy the solution.

Two or three people are needed to maintain the solution. At least two people are required to maintain the Hadoop stack, in case of unexpected situations, like when something gets corrupted, and they need to solve the problem as fast as possible. Hadoop is easy to maintain because of its governance feature, which helps maintain all the Hadoop stacks.

Which other solutions did I evaluate?

Some competitors include Kibana from Elasticsearch, Splunk, and Cloudera. Each of them has some advantages and disadvantages, but Hadoop is more flexible when working in a big data environment. Compared to Splunk and Cloudera, Apache Hadoop is platform-independent and works on any platform. It is also open-source.

What other advice do I have?

We use Hadoop's open-source version and do not receive direct support from Apache. There are good resources on the web, though, so we have no problem getting help, but not directly from the company.

If you want to use big data on a larger scale, you should use Hadoop. But you could use alternatives if you're going to use big data to analyze data in the short term and don't need cybersecurity. You could use your cloud's features. For example, if you are on Google or Amazon Cloud, you could use in-built features instead of Apache Hadoop. If you are, like us, working with banks that don't want to use the cloud or some commercial clouds or have large-scale data, Hadoop is a good choice for you.

I rate Apache Hadoop an eight out of ten because it could be more user-friendly and easier to install. Also, Hadoop has changed some features in the commercial version.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
CEO at AM-BITS LLC
Real User
Top 10
A hybrid solution for managing enterprise data hubs, monitoring network quality, and implementing an AntiFraud system
Pros and Cons
  • "The most valuable features are its scalability, the ability to work with large amounts of information, and its open-source capability."
  • "The solution is not easy to use. The solution should be easy to use and suitable for almost any case connected with the use of big data or working with large amounts of data."

What is our primary use case?

This solution is used for a variety of purposes, including managing enterprise data hubs, monitoring network quality, implementing an AntiFraud system, and establishing a conveyor system.

What is most valuable?

The most valuable features are its scalability, the ability to work with large amounts of information, and its open-source capability.

What needs improvement?

The solution is not easy to use. The solution should be easy to use and suitable for almost any case connected with the use of big data or working with large amounts of data.

For how long have I used the solution?

I have been using Apache Hadoop for ten years. Initially, we worked with it directly, but now we use Cloudera and Bigtop. We are the solution provider.

What do I think about the stability of the solution?

The tool's stability is good. 

What do I think about the scalability of the solution?

We may have 15 people working on this solution.

I rate the solution’s scalability a ten out of ten.

How was the initial setup?

The setup is not easy for a financial or telecom company.

It takes around one month for basic development and around three to four months for an enterprise deployment. We require more than 50 engineers for the engineering work and more than 20 for the data engineering team.

In terms of production, the most significant aspects are security and staging, with a focus on either a one-month or three-month timeframe for security considerations.

What other advice do I have?

The best advice is not to start a project based on Apache Hadoop alone. It is a technology-driven solution and needs a skilled team.

Overall, I rate the solution an eight out of ten.

Disclosure: My company has a business relationship with this vendor other than being a customer: Partner
PeerSpot user
Analytics Platform Manager at a consultancy with 10,001+ employees
Real User
Parallel processing allows us to get jobs done, but the platform needs more direct integration of visualization applications
Pros and Cons
  • "Two valuable features are its scalability and parallel processing. There are jobs that cannot be done unless you have massively parallel processing."
  • "I would like to see more direct integration of visualization applications."

What is our primary use case?

We use it as a data lake for streaming analytical dashboards.

How has it helped my organization?

There is a lot of difference. I think the best case is that we are able to drill down to transactional records and really build a root-cause analysis for various issues that might arise, on demand. Because we're able to process in parallel, we don't have to wait for the big data warehouse engine. We process down what the data is and then build it up to an answer, and we can have an answer in an hour rather than 10 hours.

What is most valuable?

  • Scalability
  • Parallel processing

There are jobs that cannot be done unless you have massively parallel processing; for instance, processing call-detail records for telecom.
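The parallel-processing point can be illustrated with a toy map/reduce in plain Python: each node aggregates its own shard of call-detail records, and the partial results merge associatively, which is exactly why the workload scales across a cluster. This is a didactic sketch, not production code; the subscriber/duration fields are hypothetical.

```python
# Toy illustration of why CDR aggregation parallelizes well: each node
# aggregates its shard independently (the "map" step), and the partial
# aggregates merge in any order (the "reduce" step).
from collections import Counter

def map_shard(records):
    """Per-node step: total call seconds per subscriber in one shard."""
    totals = Counter()
    for subscriber, seconds in records:
        totals[subscriber] += seconds
    return totals

def reduce_totals(partials):
    """Merge partial aggregates from all nodes."""
    merged = Counter()
    for partial in partials:
        merged.update(partial)
    return merged

# Two shards that would live on two different data nodes:
shard_a = [("alice", 60), ("bob", 120)]
shard_b = [("alice", 30)]
result = reduce_totals([map_shard(shard_a), map_shard(shard_b)])
# result["alice"] == 90, result["bob"] == 120
```

In a real cluster the same shape runs as a MapReduce or Spark job over billions of records, with shards distributed across HDFS blocks.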

What needs improvement?

In general, Hadoop has a lot of different component parts to the platform - things like Hive and HBase - and they're all moving somewhat independently and somewhat in parallel. I think as you look to platforms in the cloud or into walled-garden concepts, like Cloudera or Azure, you see that the third party can make sure all the components work together before they are used for business purposes. That reduces a layer of administration, configuration, and technical support.

I would like to see more direct integration of visualization applications.

For how long have I used the solution?

More than five years.

What do I think about the stability of the solution?

In general, stability can be a challenge. It's hard to say what stability means. You're in an environment that's before production-line manufacturing, where none of the parts relate together exactly as they should. So that can create some instability.

To realize the benefit of these kinds of open-source, big-data environments, you want to use as many different tools as you can get. That brings with it all this overhead of making them work together. It's kind of a blessing and a curse, at the same time: There's a tool for everything.

How are customer service and technical support?

Apache is the open-source foundation that Cloudera and Hortonworks contribute code and some work to. I don't know that there is actually support and structure, per se, for Apache.

We have had premium support, at various times with various companies. From the three dominant companies I've worked with - Cloudera, Hortonworks, and MapR - there is a premium support package, but that still only covers their base distribution, not necessarily all the add-ons on top of it, which is really a big challenge: getting everything to work together.

Which solution did I use previously and why did I switch?

There are the older relational database technologies: Netezza, SQL Server, MySQL, Oracle, Teradata. All have some advantages and some disadvantages. Most notably, they are all significantly more expensive in terms of capital expense rather than operational expense. They are "walled gardens," so to speak, that are curated and have a distinct set of tools that work with them, without the bleeding-edge ingenuity that comes with an open-source platform.

Data warehousing is at least 30 years old. Big data, in its current form, has only been around for four or five years.

How was the initial setup?

There are capacities in which I have been responsible for setup, administration, and building the applications on those environments. Each of the components is relatively straightforward. The complexity comes from all the different components.

What other advice do I have?

Implement for defined use cases. Don't expect it to all just work very easily.

I would rate this platform a seven out of 10. On the one hand, it's the only place you can use certain functions, and on the other hand, it's not going to put any of the other ones out of business. It's really more of a complement. There is no fundamental battle between relational databases and Hadoop.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Data Analytics Practice head at bse
Real User
Top 20
Stable, highly scalable, but integration could improve
Pros and Cons
  • "The scalability of Apache Hadoop is very good."
  • "Integrating Apache Hadoop with lots of different technologies within your business can be a challenge."

What needs improvement?

Integrating Apache Hadoop with lots of different technologies within your business can be a challenge.

For how long have I used the solution?

I have been using Apache Hadoop for approximately nine years.

What do I think about the stability of the solution?

 Apache Hadoop is stable.

What do I think about the scalability of the solution?

The scalability of Apache Hadoop is very good.

What's my experience with pricing, setup cost, and licensing?

The price of Apache Hadoop could be less expensive.

What other advice do I have?

My advice to others is if you have a strong engineering team then this solution is excellent.

I rate Apache Hadoop an eight out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Infrastructure Engineer at Zirous, Inc.
Real User
Top 20
The Distributed File System stores video, pictures, JSON, XML, and plain text all in the same file system.

What is most valuable?

The Distributed File System, which is the base of Hadoop, has been the most valuable feature with its ability to store video, pictures, JSON, XML, and plain text all in the same file system.

How has it helped my organization?

We do use the Hadoop platform internally, but mostly it is for R&D purposes. However, many of the recent projects that our IT consulting firm has taken on have deployed Hadoop as a solution to store high-velocity and highly variable data sizes and structures, and be able to process that data together quickly and efficiently.

What needs improvement?

Hadoop in and of itself stores data with 3x redundancy and our organization has come to the conclusion that the default 3x results in too much wasted disk space. The user has the ability to change the data replication standard, but I believe that the Hadoop platform could eventually become more efficient in their redundant data replication. It is an organizational preference and nothing that would impede our organization from using it again, but just a small thing I think could be improved.
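For reference, the replication factor the reviewer mentions is controlled by the `dfs.replication` property in `hdfs-site.xml`. A minimal fragment lowering the cluster default from 3 to 2 might look like this (the value 2 is just an example of the trade-off being described):

```xml
<!-- hdfs-site.xml: lower the default block replication from 3x to 2x.
     New files inherit this; existing files keep their old factor. -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```

For files already written at the old factor, a command along the lines of `hdfs dfs -setrep -w 2 /data` changes their replication after the fact.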

For how long have I used the solution?

This version was released in January 2016, but I have been working with the Apache Hadoop platform for a few years now.

What was my experience with deployment of the solution?

The only issues we found during deployment were errors originating from between the keyboard and the chair. I have set up roughly 20 Hadoop clusters and almost all of them went off without a hitch, unless I configured something incorrectly in the pre-setup.

What do I think about the stability of the solution?

We have not encountered any stability problems with this platform.

What do I think about the scalability of the solution?

We have scaled two of the clusters that we have implemented; one in the cloud, one on-premise. Neither ran into any problems, but I can say with certainty that it is much, much easier to scale in a cloud environment than it is on-premise.

How are customer service and technical support?

Customer Service:

Apache Hadoop is open-source and thus customer service is not really a strong point, but the documentation provided is extremely helpful. More so than some of the Hadoop vendors such as MapR, Cloudera, or Hortonworks.

Technical Support:

Again, it's open source. There are no dedicated tech support teams that we've come across unless you look to vendors such as Hortonworks, Cloudera, or MapR.

Which solution did I use previously and why did I switch?

We started off using Apache Hadoop for our initial Big Data initiative and have stuck with it since.

How was the initial setup?

Initial setup was decently straightforward, especially when using Apache Ambari as a provisioning tool. (I highly recommend Ambari.)

What about the implementation team?

We are the implementers.

What's my experience with pricing, setup cost, and licensing?

It's open source.

Which other solutions did I evaluate?

We solely looked at Hadoop.

What other advice do I have?

Try, try, and try again. Experiment with MapReduce and YARN. Fine-tune your processes and you will see some insane processing results.

I would also recommend that you have at least a 12-node cluster: two master nodes, eight compute/data nodes, one Hive node (SQL), and one dedicated Ambari node.

Suggested specifications:
  • Master nodes: 4-8 cores, 32-64 GB RAM, 8-10 TB HDD
  • Data nodes: 4-8 cores, 64 GB RAM, 16-20 TB RAID 10 HDD
  • Hive node: around 4 cores, 32-64 GB RAM, 5-6 TB RAID 0 HDD
  • Ambari dedicated server: 2-4 cores, 8-12 GB RAM, 1-2 TB HDD

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
it_user340983 - PeerSpot reviewer
Infrastructure Engineer at Zirous, Inc.
Top 20, Real User

We have since partnered with Hortonworks and are researching into the Cloudera and MapR spaces right now as well. Though our strong suit is Hortonworks, we do have a good implementation team for any of the distributions.

Lucas Dreyer - PeerSpot reviewer
Data Engineer at BBD
Real User
Top 5, Leaderboard
Good standard features, but a small local-machine version would be useful
Pros and Cons
  • "What comes with the standard setup is what we mostly use, but Ambari is the most important."
  • "In the next release, I would like to see Hive more responsive for smaller queries and to reduce the latency."

What is our primary use case?

The primary use case of this solution is data engineering and data files.

The deployment model we are using is private, on-premises.

What is most valuable?

We don't use many of the Hadoop features, like Pig, or Sqoop, but what I like most is using the Ambari feature. You have to use Ambari otherwise it is very difficult to configure.

What comes with the standard setup is what we mostly use, but Ambari is the most important.

What needs improvement?

Hadoop itself is quite complex, especially if you want it running on a single machine, so to get it set up is a big mission.

It seems that Hadoop is on its way out and Spark is the way to go. You can run Spark on a single machine, and it's easier to set up.

In the next release, I would like to see Hive more responsive for smaller queries, with reduced latency. I don't think that this is viable, but if it is possible, then reduced latency on smaller queries for analysis and analytics would be welcome.

I would like a smaller version that can be run on a local machine. There are installations that do that but are quite difficult, so I would say a smaller version that is easy to install and explore would be an improvement.

For how long have I used the solution?

I have been using this solution for one year.

What do I think about the stability of the solution?

This solution is stable, but sometimes starting up can be quite a mission. With a full, proper setup it's fine, but it's a lot of work to look after, start up, and shut down.

What do I think about the scalability of the solution?

This solution is scalable, and I can scale it almost indefinitely.

We have approximately two thousand users; half of the users are using it directly, and another thousand use the products and systems running on it. Fifty are data engineers, fifteen are direct appliance users, and the rest are business users.

How are customer service and technical support?

There are several forums on the web, and Google search works fine. There is a lot of information available and it often works.

They also have good support in regards to the implementation.

I am satisfied with the support. Generally, there is good support.

Which solution did I use previously and why did I switch?

We used the more traditional database solutions, such as SAP IQ and data marts, but now it's changing more towards data science and Big Data.

We are a smaller infrastructure, so that's how we are set up.

How was the initial setup?

The initial setup is quite complex if you have to set it up yourself. Ambari makes it much easier, but on the cloud or local machines, it's quite a process.

It took at least a day to set it up.

What about the implementation team?

I did not use a vendor. I implemented it myself on the cloud with my local machine.

Which other solutions did I evaluate?

There was an evaluation, but the decision was to implement with a data lake and the Hortonworks Data Platform.

What other advice do I have?

It's good for what it is meant to do, namely a lot of big data, but it's not as good for low-latency applications.

If you have to perform quick queries for analysis or analytics, it can be frustrating.

It can be useful for what it was intended to be used for.

I would rate this solution a seven out of ten.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user
Real User
We are able to ingest huge volumes/varieties of data, but it needs a data visualization tool and enhanced Ambari for management
Pros and Cons
  • "Initially, with RDBMS alone, we had a lot of work and few servers running on-premise and on cloud for the PoC and incubation. With the use of Hadoop and ecosystem components and tools, and managing it in Amazon EC2, we have created a Big Data "lab" which helps us to centralize all our work and solutions into a single repository. This has cut down the time in terms of maintenance, development and, especially, data processing challenges."
  • "Since both Apache Hadoop and Amazon EC2 are elastic in nature, we can scale and expand on demand for a specific PoC, and scale down when it's done."
  • "Most valuable features are HDFS and Kafka: Ingestion of huge volumes and variety of unstructured/semi-structured data is feasible, and it helps us to quickly onboard a new Big Data analytics prospect."
  • "Based on our needs, we would like to see a tool for data visualization and enhanced Ambari for management, plus a pre-built IoT hub/model. These would reduce our efforts and the time needed to prove to a customer that this will help them."
  • "General installation/dependency issues were there, but were not a major, complex issue. While migrating data from MySQL to Hive, things are a little challenging, but we were able to get through that with support from forums and a little trial and error."

What is our primary use case?

Big Data analytics, customer incubation. 

We host our Big Data analytics "lab" on Amazon EC2. Customers are new to Big Data analytics so we do proofs of concept for them in this lab. Customers bring historical, structured data, or IoT data, or a blend of both. We ingest data from these sources into the Hadoop environment, build the analytics solution on top, and prove the value and define the roadmap for customers.

How has it helped my organization?

Initially, with RDBMS alone, we had a lot of work and few servers running on-premise and on cloud for the PoC and incubation. With the use of Hadoop and ecosystem components and tools, and managing it in Amazon EC2, we have created a Big Data "lab" which helps us to centralize all our work and solutions into a single repository. This has cut down the time in terms of maintenance, development and, especially, data processing challenges. 

We were using MySQL and PostgreSQL for these engagements, and scaling and processing were not as easy when compared to Hadoop. Also, customers who are embarking on a big journey with semi-structured information prefer to use Hadoop rather than a RDBMS stack. This gives them clarity on the requirements.

In addition, since both Apache Hadoop and Amazon EC2 are elastic in nature, we can scale and expand on demand for a specific PoC, and scale down when it's done.

Flexibility, ease of data processing, reduced cost and efforts are the three key improvements for us.

What is most valuable?

HDFS and Kafka: Ingestion of huge volumes and variety of unstructured/semi-structured data is feasible, and it helps us to quickly onboard a new Big Data analytics prospect.
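The ingestion side can be sketched briefly. The runnable part below is the serialization step, since Kafka messages are raw bytes; the producer calls are shown only as comments because they need a live broker. The kafka-python package, the broker address, the `iot-events` topic, and the event fields are all assumptions for illustration.

```python
# Sketch: serializing semi-structured events for Kafka ingestion into
# a Hadoop-backed pipeline. JSON keeps the payload self-describing for
# downstream HDFS/Hive consumers.
import json

def serialize_event(event: dict) -> bytes:
    # sort_keys makes the byte output deterministic, which helps
    # with testing and with downstream deduplication.
    return json.dumps(event, sort_keys=True).encode("utf-8")

payload = serialize_event({"sensor": "pump-7", "temp_c": 41.5})

# With a broker running, the producer side might look like
# (kafka-python package; topic and address are hypothetical):
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="broker:9092",
#                          value_serializer=serialize_event)
# producer.send("iot-events", {"sensor": "pump-7", "temp_c": 41.5})
```

From Kafka, a consumer or connector would land the events in HDFS for the analytics work described above.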

What needs improvement?

Based on our needs, we would like to see a tool for data visualization and enhanced Ambari for management, plus a pre-built IoT hub/model. These would reduce our efforts and the time needed to prove to a customer that this will help them.

For how long have I used the solution?

Less than one year.

What do I think about the stability of the solution?

We have a three-node cluster running on cloud by default, and it has been stable so far without any stoppages due to Hadoop or other ecosystem components.

What do I think about the scalability of the solution?

Since this is primarily for customer incubation, there is a need to process huge volumes of data, based on the proof of value engagement. During these processes, we scale the number of instances on demand (using Amazon spot instances), use them for a defined period, and scale down when the PoC is done. This gives us good flexibility and we pay only for usage.

How are customer service and technical support?

Since this is mostly community driven, we get a lot of input from the forums and our in-house staff who are skilled in doing the job. So far, most of the issues we have had during setup or scaling have primarily been on the infrastructure side and not on the stack. For most of the problems we get answers from the community forums.

How was the initial setup?

We didn't have any major issues except for knowledge, so we hired the right person who had hands-on experience with this stack, and worked with the cloud provider to get the right mechanism for handling the stack.

General installation/dependency issues were there, but they were not a major, complex problem. While migrating data from MySQL to Hive, things were a little challenging, but we were able to get through that with support from forums and a little trial and error. In addition, the old PoCs which were migrated had issues in directly connecting to Hive. We had to build some user functions to handle that.

What's my experience with pricing, setup cost, and licensing?

We normally do not suggest any specific distributions. When it comes to cloud, our suggestion would be to choose different types of instances offered by Amazon cloud, as we are technology partners of Amazon for cost savings. For all our PoCs, we stick to the default distribution.

Which other solutions did I evaluate?

None, as this stack is familiar to us and we were sure it could be used for such engagements without much hassle. Our primary criteria were the ability to migrate our existing RDBMS-based PoC and connectivity via our ETL and visualization tools. On top of that, we needed support for semi-structured data for ELT. All three of these criteria were a fit with this stack.

What other advice do I have?

Our general suggestion to any customer is not to blindly look and compare different options. Rather, list the exact business needs - current and future - and then prepare a matrix to see product capabilities and evaluate costs and other compliance factors for that specific enterprise.

Disclosure: I am a real user, and this review is based on my own experience and opinions.
PeerSpot user