We use the solution to store and migrate the data.
Database Expert with 51-200 employees
Very good for real-time data processing; however, it's not currently feasible to store the whole multi-terabyte DW
Since its introduction in 2011, SAP has been pushing HANA very heavily, and there is a lot of marketing buzz around this new product. For a freelance consultant like me, focused on SAP Sybase database products, it is next to impossible to ignore HANA in 2013. So, I decided not to rely on marketing slogans but to check for myself what HANA is, what it can do, and, importantly, what HANA is NOT. I put my first impressions into this blog post; hopefully other HANA-related posts will follow. Note that I'm not a HANA expert (yet) and I'm writing these lines as a person with a lot of experience with IQ and some other RDBMSs who is trying to learn HANA.
So, why compare HANA and IQ? Both are designed for data warehouse environments, both are column-based (with some support for row-based data), both provide data compression out of the box, and both are highly parallel. Years ago, much like SAP for HANA today, Sybase claimed that IQ processed data so fast that aggregation tables were not really needed, because the aggregations could simply be performed on the fly. Well, experience with a number of big projects showed me how problematic that statement was, and it is only a single example.
According to SAP, the strong point of HANA is its ability to utilize the CPU cache, which is much faster than accessing main memory (0.5-15 ns vs. 100 ns). Currently, IQ and other Sybase RDBMSs lack this capability. Therefore, I decided to build a test environment that allows running queries that meet a number of conditions:
- The query should be performed fully in memory, otherwise it is not fair to compare IQ and HANA. In HANA, queries are executed fully in memory once the relevant columns are loaded into RAM.
- The query should perform a lot of logical I/Os and should be hard to optimize using indexes. Otherwise, the effect of using the CPU cache may not be clear enough.
- The query should take at least a few seconds to finish. Since both IQ and HANA (very unfortunately) don't provide the number of logical I/Os performed by a query, we can only compare response times. If the query finishes within a few milliseconds, comparing response times becomes problematic.
Some notes about the test environment:
For IQ, I used a 16-core RHEL server with hyper-threading turned on (32 cores visible to the OS) and 140 GB of RAM available. I used IQ 16.0 SP01 for my tests.
For HANA, I had to use the HANA SPS6 Developer Edition on a Cloudshare VM, which provides HANA on a Linux server with 24 GB of RAM. However, only 19.5 GB is actually available from the Linux point of view (free -m output), and most of this memory is allocated by various HANA processes. In fact, less than 3 GB of RAM is available for user data in HANA. I only wish SAP would allow us to download HANA and install it on any server that meets HANA's CPU requirements, but it seems that SAP's policy is to distribute HANA as part of appliances only, so I don't expect a free HANA download any time soon.
This brings us to an additional requirement for the test: the test dataset should be relatively small, because of the severe RAM restrictions imposed by the HANA Developer Edition on Cloudshare.
Finally, I decided to base my tests on a relatively narrow table that represents information about phone calls (for those in the telecom industry, these are like short and very much simplified CDRs). Here is the structure of the table:
create table CDRs (
    CDR_ID    unsigned bigint, -- Phone conversation ID
    CC_ORIG   varchar(3),      -- Country code of the call originator
    AC_ORIG   varchar(2),      -- Area code of the call originator
    NUM_ORIG  varchar(15),     -- Phone number of the call originator
    CC_DEST   varchar(3),      -- Country code of the call destination
    AC_DEST   varchar(2),      -- Area code of the call destination
    NUM_DEST  varchar(15),     -- Phone number of the call destination
    STARTTIME datetime,        -- Start time of the conversation
    ENDTIME   datetime,        -- End time of the conversation
    DURATION  unsigned int     -- Duration of the conversation in seconds
);
I developed a stored procedure that fills this table in SAP Sybase ASE row by row according to some meaningful logic, and prepared delimited files for IQ and HANA. The input files are available upon request. At first, I planned to run the tests on a dataset of 900 million rows, but I finally discovered that I had to go down to 15 million rows because of the VM memory limitations mentioned above.
An important note about terminology: in IQ, inserting data from a delimited file into a database table is called LOAD, and retrieving data from a table into a delimited file is called EXTRACT. In HANA, the inserting is called IMPORT and the retrieving is called EXPORT. The term LOAD in HANA has a totally different meaning: it means loading a whole table, or some of its columns, from disk into memory when the data is already in the database.
IMPORT functionality in HANA is not similar to IQ's LOAD at all. It actually consists of two phases: IMPORT and MERGE. During the first phase, the data is imported into a "delta store" in an uncompressed form. Then, the data from the "delta store" is merged into the "main store", where the table data actually resides. The merge is performed automatically when a configurable threshold is crossed (for example, the size of the "delta store" becomes too big). To ensure that the imported data is fully inside the "main store", a manual MERGE may be required. The memory requirements during the MERGE process are quite interesting; maybe I will write about them in a different post. It is quite possible that you will be able to IMPORT the data but will not have enough memory to MERGE it; this happened to me a number of times during my tests. I recommend reading more about the HANA architecture here: http://www.saphana.com/docs/DOC-1073, Chapter 9.
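To make the terminology concrete, here is a minimal sketch of the statements involved on both sides; the file path, schema name, and load options are illustrative assumptions, not the exact commands from my tests:

-- SAP Sybase IQ: bulk load from a delimited file (typical options, not exhaustive)
LOAD TABLE CDRs
    ( CDR_ID, CC_ORIG, AC_ORIG, NUM_ORIG, CC_DEST, AC_DEST, NUM_DEST,
      STARTTIME, ENDTIME, DURATION )
FROM '/data/cdrs.csv'
DELIMITED BY ','
ESCAPES OFF
QUOTES OFF;

-- SAP HANA: import the same delimited file; the rows land in the delta store first
IMPORT FROM CSV FILE '/data/cdrs.csv' INTO "TEST"."CDRS"
WITH FIELD DELIMITED BY ',';

-- SAP HANA: manually merge the delta store into the main store
MERGE DELTA OF "TEST"."CDRS";

-- SAP HANA: LOAD means something else entirely, namely bringing an already-stored table into memory
LOAD "TEST"."CDRS" ALL;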
Given the significant difference between the test systems (a powerful dedicated server for IQ vs. a small VM for HANA), I didn't plan to compare data load performance between IQ and HANA. However, so far I see HANA performing the IMPORT using no more than 1.5 of the 4 available cores, thus underutilizing the available hardware. The MERGE phase, though, is executed in a much more parallel way. The bottom line is that IQ seems to outperform HANA in data loading, possibly by quite a large margin. I will probably return to this topic in one of the following posts; additional tests with a larger dataset are required.
Now we come to data compression. Since IQ and HANA approach indexing quite differently, I chose to compare the compression without non-default indexes in either IQ or HANA. It appears that IQ provides better data compression: it needs 591 MB to store 15,000,000 rows, while HANA needs 748 MB for the same data. HANA provides a number of compression algorithms for columns, which are chosen automatically according to the data type and data distribution. However, it seems that none of the compression algorithms offered by HANA includes the LZW-like compression used by IQ. I'd prefer to test the compression on a more representative data set (15,000,000 rows is way too small) and play with different HANA compression algorithms. I hope one of the future posts will be dedicated to this topic.
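For anyone who wants to reproduce the size comparison, one way to check the footprint on each side is sketched below; sp_iqtablesize is a standard IQ procedure and M_CS_TABLES is a standard HANA monitoring view, but the schema name is an illustrative assumption and the exact output columns should be verified on your versions:

-- SAP Sybase IQ: report the space used by the table (KBytes, among other columns)
sp_iqtablesize 'CDRs';

-- SAP HANA: in-memory footprint of the column table
SELECT TABLE_NAME, RECORD_COUNT, MEMORY_SIZE_IN_TOTAL
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'TEST' AND TABLE_NAME = 'CDRS';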
Finally, the data is inside the database and we are ready to query it. To meet the test conditions mentioned above, I chose the following query:
select
    a.CDR_ID CDR_ID_1, b.CDR_ID CDR_ID_2,
    a.NUM_ORIG NUM_A, a.NUM_DEST NUM_B, a.STARTTIME STARTTIME_1, a.ENDTIME ENDTIME_1,
    a.DURATION DURATION_1,
    b.NUM_DEST NUM_C, b.STARTTIME STARTTIME_2, b.ENDTIME ENDTIME_2,
    b.DURATION DURATION_2
from CDRs a, CDRs b
where a.NUM_DEST = b.NUM_ORIG
  and datediff(ss, a.ENDTIME, b.STARTTIME) between 5 and 60
order by a.STARTTIME;
This query finds cases where person A called person B and then person B called person C almost immediately (between 5 and 60 seconds after the first call ended). This query has to perform a lot of logical I/O by its very definition. With my test data set, it returns 31 rows.
In IQ, this query takes 6.6 seconds when executed fully in memory with all relevant indexes in place. The query uses a sort-merge join and runs with a relatively high degree of parallelism, allocating about 60% of the 32 available CPU cores.
In HANA, the same query takes only 1 second with no indexes in place! Remember that in my tests HANA is running on a small VM with just 4 virtual CPU cores! The query finishes so fast that I cannot measure the degree of parallelism. Creating indexes on NUM_ORIG and NUM_DEST reduces the response time to 900 ms.
A note about indexes in HANA: HANA offers only two index types and, by default, it chooses the index type automatically. In my tests, I found that indexes improve query performance in HANA, sometimes significantly. Unfortunately, I have not found any indication of index usage in HANA query plans, even when some indexes were certainly used by the query. The role of optimizer statistics in query plan generation is also not very clear to me. I hope to prepare a separate post about query processing in HANA; stay tuned!
Another amazing and totally unexpected finding in HANA: creating an index on NUM_DEST (varchar(15)) takes 194 ms, and an index on DURATION (int) is created in 12 ms!
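For reference, the indexes behind these timings are plain single-column indexes; the index names below are my own illustrative choices, and HANA picks the index implementation automatically:

CREATE INDEX idx_cdrs_num_dest ON CDRs (NUM_DEST);
CREATE INDEX idx_cdrs_duration ON CDRs (DURATION);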
My conclusions so far:
- HANA in-memory processing is not just about caching; it is much more than that. HANA allows us to achieve incredible performance for resource-intensive queries. Things that seem impossible with other databases, column-based or row-based, may become possible with HANA.
- Loading data into HANA requires careful resource and capacity planning. Merging the inserted data with the rest of the table may require much more memory than you might have thought. In particular, to perform the merge, both the old and the new version of the table must fit into memory (see the monitoring sketch after this list).
- It may well be that storing aggregations in HANA is indeed not required, at least in most cases. Of course, I need a more representative data set to verify this.
- IQ and HANA can be used together in the same system, where they can solve different problems and store different data. HANA is very good for real-time data processing, or for queries that must be executed very quickly. However, in most cases it is not feasible to store a whole multi-terabyte data warehouse in HANA's memory, at least not in 2013. This is where IQ enters the game. It is very efficient in massive data loading and data storage, and can answer queries with less strict response time requirements very efficiently. In some scenarios, the raw data can be loaded into IQ and then, after some refining inside IQ, imported into HANA.
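As a sketch of the capacity check implied above (reusing the illustrative schema name from the earlier examples), HANA's M_CS_TABLES monitoring view breaks a table's footprint down into main store and delta store, which helps show how much data is still sitting uncompressed in the delta store before a merge:

-- Compare delta-store vs. main-store memory for the test table
SELECT TABLE_NAME,
       MEMORY_SIZE_IN_MAIN, MEMORY_SIZE_IN_DELTA,
       RAW_RECORD_COUNT_IN_MAIN, RAW_RECORD_COUNT_IN_DELTA
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'TEST' AND TABLE_NAME = 'CDRS';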
Update: see IQ query plan for my test case here: Download ABC_15mln_fully_in_memory
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
CEO with 51-200 employees
Why HANA Is Important
I'm not a great fan of SAP, or Oracle for that matter, but SAP's HANA architecture is an unexpected innovation from a company that is rooted in serving the dull administrative needs of large organisations. In a nutshell, HANA is an in-memory database capable of handling very large amounts of data with frightening speed. This is very timely and, more importantly, will serve the needs of organisations for decades to come. While the focus is currently on the ability of HANA to address real-time analytics, the capability offered by HANA will serve us as we move into the feedback and control (cybernetics) era, which has yet to unfold.
The current preoccupation with all forms of analytics (data mining, statistics, text mining, optimisation) and big data is predicated on very fast database systems. Traditional disk-based technology is typically too slow, and SAP has taken a simple idea, placing all data in much faster memory, and made it a reality. The idea is simple, but making it a reality is not. HANA enables many forms of business activity that were simply not possible before: real-time recommendations for customers, real-time tracking of very large distribution networks, and so on. This alone is enough to make HANA important for many businesses.
On the horizon however, and virtually unseen by most commentators, is the need to implement real-time feedback and control systems. It’s all very well to analyse current activity, but at which point is action called for, and what type of action will rectify a situation? Recommending additional purchases to customers in real time might not be optimal, and the response rate might start to drop off. At what point is remedial action needed, and how should the algorithms be modified? This is where we are headed – not just analysis, but analysis of analysis – a level of awareness within systems.
Massive computing ability is needed, and there simply is no way that slow disk-based technology will deliver the goods. HANA is a foundation for this move into a brave new world, and there are no real alternatives. There is a saying in technology markets that 'if it works it's already obsolete'; I would make HANA an exception to this rule. For many organisations it will be a solid investment that will see them move into an age of real-time, intelligent business systems. Who would have thought that such an innovation would come from a German software company rooted in dull software applications that serve the needs of business administration?
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
Solution Architect at a tech services company with 51-200 employees
Easy-to-integrate relational database management solution
Pros and Cons
- "It is very flexible to integrate with SaaS components."
- "It is challenging to integrate it with third-party tools."
What is our primary use case?
What is most valuable?
SAP HANA is very flexible and easy to integrate with SaaS components.
What needs improvement?
The solution could be more flexible. It is challenging to integrate it with third-party tools apart from SaaS components. Also, they should include a feature like a local field in the next release.
What other advice do I have?
The solution's workflow system application and functionalities are compatible with SaaS components. I advise others to understand the kind of data validation, performance tooling, and use cases their business requires. They should look for other solutions if multiple data sources are involved.
I would rate it nine out of ten.
Which deployment model are you using for this solution?
On-premises
Disclosure: My company does not have a business relationship with this vendor other than being a customer.
