SAPcon677 - PeerSpot reviewer
SAP Consultant at a construction company with 501-1,000 employees
Consultant
Good performance, but the interface should be easier to use
Pros and Cons
  • "This solution is very fast."
  • "The inclusion of a well-performing Time Machine is vital."

What is our primary use case?

We use this solution for database storage.

I am an SAP developer and consultant at my company. I examine the client's system and propose solutions that will ease their processes or make them faster. This involves programming, as well as other kinds of development.

We are using the on-premise deployment model.

What is most valuable?

This solution is very fast.

What needs improvement?

The backup solution and Time Machine should be more accurate, reliable, and easier to use. The inclusion of a well-performing Time Machine is vital.

If the interface were more comfortable and easier to use, it would be excellent. Sometimes, an incorrect request is taken to production, and it can corrupt everything in the production database.

When there are a large number of records to process in a transaction, it is not any faster than Oracle.

For how long have I used the solution?

One year.

What do I think about the stability of the solution?

This solution is very stable. We have been using it for one year and there have been no problems with the database.

How was the initial setup?

I was not involved in the setup of this solution. I only installed SAP HANA Express on my laptop, which was easy. The full version requires professional knowledge. It's not something you can install, like Microsoft Office, on any laptop.

What about the implementation team?

We hired a consulting firm in Turkey to set up our solution. The two machines were configured by SAP Turkey.

Which other solutions did I evaluate?

I have more than nineteen years of experience with the Oracle database, from version 7.2 through to RAC. I know the administration, as well as backup and recovery very well.

There are not many differences between Oracle and HANA. For example, for transactional purposes, HANA is very similar to Oracle.

We switched to HANA from Oracle because SAP systems are moving entirely to the HANA platform. There will be no support for SAP using Oracle.

What other advice do I have?

We do not use the HANA features, for example, embedded scripts. This is something that we may use in the future.

My advice to anybody looking to implement a relational database is to use Oracle rather than HANA. HANA consultants are very rare and therefore costly. My testing has also shown that Oracle In-Memory is much faster than HANA.

This is a good solution, but the vendor inaccurately promises that the database is ten-thousand times faster than Oracle.

I would rate this solution a seven out of ten.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Technical manager at SAFE-SAS
Real User
Huge bandwidth and offers real-time results
Pros and Cons
  • "It has a very huge bandwidth and data transfer."
  • "The solution is very expensive, however. The pricing depends on the number of users and many other factors that affect licensing."

What is most valuable?

It is an in-memory database that holds the entire content of the database: once the database is started, it is loaded into the server RAM. It has very high bandwidth and data transfer. Any query you run against this database returns results very fast; you get real-time output. This aspect is very helpful to me.

What needs improvement?

From the deployment-side, I don't have any issues with the solution and haven't heard of any problems from clients.

The solution is very expensive, however. The pricing depends on the number of users and many other factors that affect licensing.

For how long have I used the solution?

I implement this solution and I've been doing so since 2015.

What do I think about the stability of the solution?

It is very stable. I use the Linux operating system and find it to be quite stable.

What do I think about the scalability of the solution?

The solution is scalable; you have both horizontal and vertical options. You can upgrade the server itself if its memory is at capacity, and if the resources of one server are not enough because the database is large, you can expand by adding more servers into one big cluster, according to your requirements.

How are customer service and technical support?

I don't go through the official support team from SAP, but most of the time I use the website to find the answers I need. It's very detailed and most of the problems that I've faced in the past while handling the implementations I can find on the website or on the internet.

Which solution did I use previously and why did I switch?

Before using SAP HANA, we used other SAP products.

How was the initial setup?

The initial setup is straightforward. For a single standalone system, it takes about four to five days to implement from scratch. I often handle implementations, so for me it's straightforward because I have experience in this area. You do need a skilled team: you have to understand many areas if you want to deploy it yourself, and have experience with storage, networking, operating systems, etc.

I know SAP itself recommends having a certified person deploy SAP HANA.

What about the implementation team?

We are an integrator, so we handle the installation for clients.

What other advice do I have?

The SAP portfolio is huge. It covers all industries and fields and is very wide both horizontally and vertically. It has modules for every industry, field, and department: accounting, HR, production, and so on. They have a solution for each industry and each department in any organization.

Some applications are very sensitive to delay or latency, and for those types of applications I would recommend SAP HANA. However, if these are not concerns, there may be other database technologies that are more cost-effective than HANA.

I would rate this solution eight out of ten.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
SAP Enterprise Solution Architect at a tech services company with 1-10 employees
Real User
Provides us with predictive capabilities for asset maintenance, and real-time forecasts
Pros and Cons
  • "Provides us with predictive capabilities for asset maintenance, and real-time forecasts."
  • "Needs graphical programming without coding."

How has it helped my organization?

Provides us with predictive capabilities for asset maintenance and real-time forecasts.

What is most valuable?

Real-time database, near zero downtime for production business.

What needs improvement?

Graphical programming without coding.

For how long have I used the solution?

Three to five years.

What do I think about the stability of the solution?

System recovery in version 1.0 failed due to corrupt log files. Version 2.0 is stable now.

What do I think about the scalability of the solution?

It should have scalability from terabytes to petabytes/zettabytes/yottabytes for both scale-up and scale-out, with a multi-tenancy approach.

How is customer service and technical support?

Excellent.

How was the initial setup?

Gradual deployment from straightforward to complex: on-premises first, and then to the cloud platform.

What's my experience with pricing, setup cost, and licensing?

Set up a consortium of consulting partners and hardware vendors to define your technical landscape TCO (total cost of ownership), and then approach the OEM for pricing (on-premises, on cloud, or a hybrid model).

Check if you can bring your own licenses for some of the existing application licenses on the new platform, to reduce TCO.

Which other solutions did I evaluate?

Product was the first of its kind for us. However, we later evaluated other products: Oracle Exadata, Exalytics, Teradata, Hadoop, MongoDB.

What other advice do I have?

  • Check out the cloud option to reduce your initial cost of deploying the dev/test system.
  • Strategize on a sidecar approach; remember to try out the best-practice model company to give your business users a look and feel.
  • Maintain a non-disruptive approach while migrating, using a demo.
  • Try out the rapid deployment solutions (RDS) for industry-specific modules.
  • Start end-user training/simulations early on to reduce pushback.
  • Split go-live into two (technical go-live and then business go-live) to maintain stage-wise roll-out.
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user3372 - PeerSpot reviewer
Senior Manager of IT at a government with 1,001-5,000 employees
Vendor
Databases, Decisions and Disruption

By now most of us are well aware of the data explosion, that businesses are creating more data than they can effectively manage. This is not a new problem. Throughout history societies have always made efforts to create repositories to organize, analyze and store documents (recorded knowledge). Some of these ancient repositories still exist today in the form of “brick and mortar” libraries. But just like anything else in a consumer’s market, demand (Time-To-Solution) eventually becomes greater than the supply (Information Available/Accessible).

The global economy is currently undergoing a fundamental transformation. Market dynamics and business rules are changing at an ever increasing speed. Those responsible for keeping the company on track for the future have a massive need for high-quality data--both from inside and outside the company. Technology decision makers are facing the challenge of having to create infrastructures that leverage speed, scale and availability.

Data technology must assist in the removal of silos and support collaboration and the sharing of expertise across the company and with business partners. Successful companies will need access not only to their own "Data repository" but to data from various heterogeneous sources. Today, finding mission-critical data or even being aware of all potential sources is more a question of luck and intuition than anything else.

How important is your data to your organization? How does your organization use its data? How do they access and interact with it? Are the decisions being made from data, innovative or disruptive in nature? What’s the value and impact?

According to a Forbes article written by Caroline Howard, “People are sometimes confused about the difference between innovation and disruption. It’s not exactly black and white, but there are real distinctions, and it’s not just splitting hairs. Think of it this way: Disruptors are innovators, but not all innovators are disruptors — in the same way that a square is a rectangle but not all rectangles are squares”.

Database accessibility is critical for rapid but sensible, innovative, and disruptive decision making. A business database management system must be able to process both transactional and analytical workloads fully in memory. By bringing together OLAP and OLTP in a single database, your organization can benefit dramatically from a lower total cost up front, while gaining incredible speed that will accelerate its business processes and custom applications.

SAP HANA DB takes advantage of the low cost of main memory (RAM), data processing abilities of multicore processors and the fast data access of solid-state drives relative to traditional hard drives to deliver better performance of analytical and transactional applications.

Fusing SAP HANA with a scalable shared memory platform will enable businesses and government agencies running high-volume databases and multitenant environments to utilize high-performance DRAM that can offer up to 200 times the performance of flash memory to help deliver faster insight.

Here's my analogy: people go to the Super Bowl for one of two reasons, to watch or to participate. To be successful in today's global market, companies must effectively participate or risk being on the sidelines watching.

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user77352 - PeerSpot reviewer
Database Expert with 51-200 employees
Vendor
Very good for real-time data processing; however, it's not currently feasible to store the whole multi-terabyte DW

Since its introduction in 2011, SAP has been pushing HANA very heavily, and there is a lot of marketing buzz around this new product. For a freelance consultant focused on SAP Sybase database products, like me, it is next to impossible to ignore HANA in 2013. So, I decided not to rely on marketing slogans and to check what HANA is, what it can do, and, importantly, what HANA is NOT. I have put my first impressions into this blog post; hopefully other HANA-related posts will follow. Note that I'm not a HANA expert (yet), and I'm writing these lines as a person with a lot of experience with IQ and some other RDBMSs who is trying to learn HANA.

So, why compare HANA and IQ? Both are designed for data warehouse environments, both are column-based (with some support for row-based data), both provide data compression out of the box, and both are highly parallel. Years ago, much like SAP with HANA today, Sybase claimed that IQ processed data so fast that aggregation tables were not really needed, because aggregations could simply be performed on the fly. Well, experience with a number of big projects showed me how problematic that statement was, and it is only a single example.

According to SAP, the strong point of HANA is its ability to utilize the CPU cache, which is much faster than accessing main memory (0.5-15 ns vs. 100 ns). Currently, IQ and other Sybase RDBMSs lack this capability. Therefore, I decided to build a test environment that allows running queries that meet a number of conditions:

  • The query should be performed fully in memory; otherwise it is not fair to compare IQ and HANA. In HANA, queries are executed fully in memory once the relevant columns are loaded into RAM.
  • The query should perform a lot of logical I/O and should be hard to optimize using indexes. Otherwise, the effect of using the CPU cache may not be clear enough.
  • The query should take at least a few seconds to finish. Since both IQ and HANA (very unfortunately) don't provide the number of logical I/Os performed by a query, we can compare response times only. If the query finishes in a few milliseconds, the comparison of response times may be problematic.

Some notes about the test environment:
For IQ, I used a 16-core RHEL server with hyper-threading turned on (32 cores visible to the OS) and 140 GB RAM available. I used IQ 16.0 SP01 for my tests.

For HANA, I had to use the HANA SPS6 Developer Edition on a Cloudshare VM, which provides HANA on a Linux server with 24 GB RAM. However, only 19.5 GB is actually available from the Linux point of view (free -m output), and most of this memory is allocated by various HANA processes. In fact, less than 3 GB of RAM is available for user data in HANA. I only wish that SAP would allow us to download HANA and install it on any server that meets HANA's requirements for CPUs, but it seems that SAP's policy is to distribute HANA as part of appliances only, so I don't expect a free HANA download any time soon.

This brings us to an additional requirement for the test: the test dataset should be relatively small, because of the severe RAM restrictions imposed by HANA Developer Edition on Cloudshare.

Finally, I decided to base my tests on a relatively narrow table that represents information about phone calls (for those in the telecom industry, these are like short and very much simplified CDRs). Here is the structure of the table:

create table CDRs (
    CDR_ID    unsigned bigint,  -- Phone conversation ID
    CC_ORIG   varchar(3),       -- Country code of the call originator
    AC_ORIG   varchar(2),       -- Area code of the call originator
    NUM_ORIG  varchar(15),      -- Phone number of the call originator
    CC_DEST   varchar(3),       -- Country code of the call destination
    AC_DEST   varchar(2),       -- Area code of the call destination
    NUM_DEST  varchar(15),      -- Phone number of the call destination
    STARTTIME datetime,         -- Start time of the conversation
    ENDTIME   datetime,         -- End time of the conversation
    DURATION  unsigned int      -- Duration of the conversation in seconds
);

I developed a stored procedure that fills this table in SAP Sybase ASE row by row according to some meaningful logic, and prepared delimited files for IQ and HANA. The input files are available upon request. At first, I planned to run tests on a dataset with 900 million rows, but I finally had to go down to 15 million rows because of the VM memory limitations mentioned above.

Important note about the terminology. In IQ, inserting of the data from a delimited file into a database table is called LOAD, and retrieving of the data from a table to a delimited file is called EXTRACT. In HANA, the inserting is called IMPORT and the retrieving is called EXPORT. The term LOAD in HANA has a totally different meaning – it means loading of a whole table, or some of its columns, to the memory from disk, when the data is already in the database.
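
To make the terminology concrete, here is a rough sketch of what the two operations look like on each side. The file path, schema name, and options are purely illustrative and not taken from my actual test scripts:

    -- IQ: load a delimited file into the table ("LOAD" in IQ terminology)
    LOAD TABLE CDRs
        ( CDR_ID, CC_ORIG, AC_ORIG, NUM_ORIG, CC_DEST,
          AC_DEST, NUM_DEST, STARTTIME, ENDTIME, DURATION )
        USING FILE '/data/cdrs.csv'
        DELIMITED BY ','
        ESCAPES OFF;

    -- HANA: the equivalent operation is called IMPORT
    IMPORT FROM CSV FILE '/data/cdrs.csv' INTO "TEST"."CDRS"
        WITH RECORD DELIMITED BY '\n'
             FIELD DELIMITED BY ','
             THREADS 4;

    -- HANA: LOAD means something else entirely -
    -- bringing an existing table (or some of its columns) into memory
    LOAD CDRs ALL;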

IMPORT functionality in HANA is not similar to IQ at all. It actually consists of two phases: IMPORT and MERGE. During the first phase, the data is imported into a "delta store" in an uncompressed form. Then, the data from the "delta store" is merged into the "main store", where the table data actually resides. The merge is performed automatically when a configurable threshold is crossed (for example, when the size of the "delta store" becomes too big). To ensure that the imported data is fully inside the "main store", a manual MERGE may be required. The memory requirements during the MERGE process are quite interesting; maybe I will write about them in a different post. It is quite possible that you will be able to IMPORT the data but will not have enough memory to MERGE it; this happened to me a number of times during my tests. I recommend reading more about the HANA architecture here: http://www.saphana.com/docs/DOC-1073, Chapter 9.
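
As a quick illustration (a sketch, assuming the table lives in the current schema), the manual merge and a check of the delta store size look roughly like this:

    -- Force the delta store of CDRs to be merged into the main store
    MERGE DELTA OF CDRs;

    -- See how much memory the table occupies in main vs. delta
    SELECT table_name, memory_size_in_main, memory_size_in_delta
      FROM M_CS_TABLES
     WHERE table_name = 'CDRS';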

Given the significant difference between the test systems (a powerful dedicated server for IQ vs. a small VM for HANA), I didn't plan to compare data load performance between IQ and HANA. However, so far I have seen HANA performing the IMPORT using no more than 1.5 cores of the 4 available, thus underutilizing the available hardware. The MERGE phase, though, is executed in a much more parallel way. The bottom line is that IQ seems to outperform HANA in data loading, possibly by quite a lot. I will probably return to this topic in one of the following posts; additional tests with a larger dataset are required.

Now, we come to the data compression. Since IQ and HANA approach indexing quite differently, I chose to compare the compression without non-default indexes in either IQ or HANA. It appears that IQ provides better data compression: it needs 591 MB to store 15,000,000 rows, while HANA needs 748 MB to store the same data. HANA provides a number of compression algorithms for columns, which are chosen automatically according to the data type and data distribution. However, it seems that none of the compression algorithms offered by HANA includes the LZW-like compression used by IQ. I'd prefer to test the compression on a more representative data set (15,000,000 rows is way too small) and play with the different HANA compression algorithms. I hope one of the future posts will be dedicated to this topic.
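
For those who want to reproduce the size comparison, this is roughly how I would read the numbers on each side; the monitoring view and procedure names are the standard ones, not anything specific to my setup:

    -- HANA: total in-memory size of the column table, in bytes
    SELECT table_name, memory_size_in_total
      FROM M_CS_TABLES
     WHERE table_name = 'CDRS';

    -- IQ: per-table size report (KBytes, pages, compression ratio, etc.)
    sp_iqtablesize 'CDRs';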

Finally, the data is inside the database and we are ready to query it. To answer the test conditions mentioned above, I chose the following query:

select
    a.CDR_ID CDR_ID_1, b.CDR_ID CDR_ID_2,
    a.NUM_ORIG NUM_A, a.NUM_DEST NUM_B, a.STARTTIME STARTTIME_1, a.ENDTIME ENDTIME_1,
    a.DURATION DURATION_1,
    b.NUM_DEST NUM_C, b.STARTTIME STARTTIME_2, b.ENDTIME ENDTIME_2,
    b.DURATION DURATION_2
from CDRs a, CDRs b
where a.NUM_DEST = b.NUM_ORIG
  and datediff(ss, a.ENDTIME, b.STARTTIME) between 5 and 60
order by a.STARTTIME;

This query finds cases where person A called person B and then person B called person C almost immediately (within 60 seconds). This query has to perform a lot of logical I/O by its very definition. With my test data set, it returns 31 rows.

In IQ, this query takes 6.6 seconds while executed fully in memory and when all relevant indexes are in place. The query uses sort-merge join and runs with relatively high degree of parallelism, allocating about 60% of 32 CPU cores available.

In HANA, the same query takes only 1 second with no indexes in place! Remember that in my tests HANA is running on a small VM with just 4 virtual CPU cores! The query finishes so fast that I cannot measure the degree of parallelism. Creating indexes on NUM_ORIG and NUM_DEST reduces the response time to 900 ms.

A note about indexes in HANA: HANA offers only two index types and, by default, it chooses the index type automatically. In my tests, I have found that indexes improve query performance in HANA, sometimes significantly. Unfortunately, I have not found any indication of index usage in HANA query plans, even when some indexes were used by the query for sure. The role of the optimizer statistics in the query plan generation is also not very clear to me. I hope to prepare a separate post about query processing in HANA, stay tuned!

Another amazing and totally unexpected finding in HANA – index creation on NUM_DEST (varchar(15)) takes 194 ms. Index on DURATION (int) is created in 12ms!
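
For completeness, the indexes behind the timings above were created with plain CREATE INDEX statements along these lines (the index names are mine; as mentioned, HANA chooses the index type automatically unless you specify one explicitly):

    CREATE INDEX idx_cdrs_num_orig ON CDRs (NUM_ORIG);
    CREATE INDEX idx_cdrs_num_dest ON CDRs (NUM_DEST);
    CREATE INDEX idx_cdrs_duration ON CDRs (DURATION);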

My conclusions so far:

  1. HANA in-memory processing is not just about caching; it is much more than that. HANA allows us to achieve incredible performance for resource-intensive queries. Things that seem impossible with other databases, column-based or row-based, may become possible with HANA.
  2. Loading data into HANA requires careful resource and capacity planning. Merging the inserted data with the rest of the table may require much more memory than you probably thought. In particular, to perform the merge, both the old and new versions of the table should fit into memory.
  3. It is quite possible that storing aggregations in HANA is indeed not required, at least in most cases. Of course, I need a more representative result set to verify this.
  4. IQ and HANA can be used together in the same system, where they can solve different problems and store different data. HANA is very good for real-time data processing, or for queries that must be executed very quickly. However, it is not feasible to store a whole multi-terabyte data warehouse in HANA's memory in most cases, at least not in 2013. At this point, IQ enters the game. It is very efficient at massive data loading and data storage, and can answer queries with less strict response-time requirements very efficiently. In some scenarios, the raw data can be loaded into IQ and then, after some refining inside IQ, imported into HANA.

Update: see IQ query plan for my test case here: Download ABC_15mln_fully_in_memory

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
it_user410313 - PeerSpot reviewer
Works at a tech services company with 10,001+ employees
Real User

Does SmartBear support SAP?

it_user6855 - PeerSpot reviewer
CEO with 51-200 employees
Vendor
Why HANA Is Important

I’m not a great fan of SAP, or Oracle for that matter, but SAP’s HANA architecture is an unexpected innovation from a company that is rooted in serving the dull administrative needs of large organisations. In a nutshell HANA is an in-memory database capable of handling very large amounts of data with frightening speed. This is very timely, and more importantly will serve the needs of organisations for decades to come. While the focus is currently on the ability of HANA to address real-time analytics, the capability offered by HANA will serve us as we move into the feedback and control (cybernetics) era which has yet to unfold.

The current preoccupation with all forms of analytics (data mining, statistics, text mining, optimisation) and big data are predicated on very fast database systems. Traditional disk based technology is typically too slow and SAP has taken a simple idea – placing all data in much faster memory – and made it a reality. The idea is simple, but making it a reality is not. HANA enables many forms of business activity that were simply not possible before – real-time recommendations for customers, real-time tracking of very large distribution networks – and so on. This alone is enough to make HANA important for many businesses.

On the horizon however, and virtually unseen by most commentators, is the need to implement real-time feedback and control systems. It’s all very well to analyse current activity, but at which point is action called for, and what type of action will rectify a situation? Recommending additional purchases to customers in real time might not be optimal, and the response rate might start to drop off. At what point is remedial action needed, and how should the algorithms be modified? This is where we are headed – not just analysis, but analysis of analysis – a level of awareness within systems.

Massive computing ability is needed, and there simply is no way that slow disk-based technology will deliver the goods. HANA is a foundation for this move into a brave new world – and there are no real alternatives. There is a saying in technology markets that 'if it works it's already obsolete' – I would make HANA an exception to this rule. For many organisations it will be a solid investment that will see them move into an age of real-time, intelligent business systems. Who would have thought that such an innovation would come from a German software company rooted in dull software applications that serve the needs of business administration?

Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
reviewer2179353 - PeerSpot reviewer
Solution Architect at a tech services company with 51-200 employees
Real User
Easy-to-integrate relational database management solution
Pros and Cons
  • "It is very flexible to integrate with SaaS components."
  • "It is challenging to integrate it with third-party tools."

What is our primary use case?

We use the solution to store and migrate the data.

What is most valuable?

SAP HANA is very flexible and easy to integrate with SaaS components.

What needs improvement?

The solution could be more flexible. It is challenging to integrate it with third-party tools apart from SaaS components. Also, they should include a feature like local field in the next release.

What other advice do I have?

The solution's workflow system, applications, and functionalities are compatible with SaaS components. I advise others to first understand the kind of data validation, performance tooling, and use cases their business requires. They should look for other solutions if multiple data sources are involved.

I would rate it nine out of ten.

Which deployment model are you using for this solution?

On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
PeerSpot user
Technical manager at SAFE-SAS
Real User
Useful for real-time analytics, but the pricing could be better
Pros and Cons
  • "The solution is stable."
  • "The installation process could be more straightforward."

What is our primary use case?

We use this solution in conjunction with other solutions, primarily for real-time analytics.

What needs improvement?

The pricing on this product could be reduced, and the installation process could be more straightforward.

For how long have I used the solution?

We have been using the solution as integrators for about five years.

What do I think about the stability of the solution?

The solution is stable.

How are customer service and support?

I do not have any experience with customer service and support.

How was the initial setup?

The initial setup is not straightforward. It took us between seven and 15 days for deployment. We have over ten people using this solution in our organization.

What's my experience with pricing, setup cost, and licensing?

I am not sure about the licensing costs for this solution, but the feedback I have received is that it is expensive.

What other advice do I have?

I rate this solution an eight out of ten. It is a good product, but it is expensive.

Disclosure: My company has a business relationship with this vendor other than being a customer: Integrator
PeerSpot user