SAP BW on HANA: Limitations

The advantages of using SAP BW on HANA are presented very often – very often. But besides all the advantages (which you definitely will get), what about the disadvantages, the limitations? As an open-minded person I always ask myself about the shadow side. Because there are always two sides – where light falls, there is always shadow. Please don’t get the following points wrong. It is my personal view on things that happened to me, and I’m always open to new ideas.

  1. In-Memory Technology

So everything is in-memory. The physical (column) storage, the calculations and everything else are in-memory, which makes loading data into memory unnecessary. But you always have to keep approximately 50 percent of your memory free for calculations. This also means your physical (column) storage is limited to a maximum of 50 percent. That is something I would call “easy to understand, hard to master”, as we are still talking about a data warehouse whose data volume will always grow (unless you do proper housekeeping, strictly apply the LSA++ concept, etc.).

So there will always be a point where you hit the 50 percent mark, and then you have to ask yourself: where do I need optimization? Do I really need all my data as physical data? Could some of it be calculated in-memory instead? Overall you ask yourself completely different questions than before (SAP BW not on HANA). And of course, you don’t want to scale out your SAP HANA every time memory seems to be running short.
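The 50 percent rule of thumb described above can be sketched as a simple headroom calculation. This is only an illustration of the sizing logic, not an official SAP formula; the numbers and function name are made up:

```python
# Hypothetical sizing check illustrating the ~50% rule described above:
# column-store data should stay below half of total memory so the other
# half remains free for intermediate calculation results.

def column_store_headroom(total_memory_gb: float, column_store_gb: float) -> float:
    """Return the remaining headroom (GB) before the 50% data budget is used up."""
    data_budget_gb = total_memory_gb * 0.5  # assumed rule of thumb, not an exact limit
    return data_budget_gb - column_store_gb

# Example: a 2 TB appliance already holding 800 GB of column-store data
print(column_store_headroom(2048, 800))  # 224.0 GB of data budget left
```

Once that headroom approaches zero, the questions above (archiving, housekeeping, virtualizing data) become unavoidable.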

 

  2. Row vs. Column Storage

Indeed, the compression is very impressive, but data is always loaded into the memory-hungry row-based delta storage before it is merged into the compressed column storage. That sounds fair, but since a SAP BW system always has initial or generally bigger data loads than any other system, you will constantly face “delta table warnings” on SAP HANA. Besides being annoying, this gets nerve-racking in the following scenario (which isn’t unlikely for a SAP BW system):

You load a bigger amount of data. The warning appears and you see the memory consumption increasing. Another load starts in SAP BW (perhaps the second background job), and memory consumption keeps increasing. Somebody executes a report and has to wait, because thread/session consumption is increasing as well (parallel processing on SAP HANA). Then the snapshot starts on SAP HANA (every 5 minutes) and SAP HANA performs a critical delta merge of a table that is still being updated by the data load. Conclusion: the system is extremely slow for the next 5-10 minutes.

So you always have to separate loading processes from reporting times, which isn’t unusual. But it also shows that resources on SAP HANA really are shared equally, and that necessary internal operations are always executed with a higher priority. Actually, I would say you only see the situation described above on SAP BW on HANA. And since SAP always mentions that SAP HANA is delivered perfectly configured, I wonder whether there is really just one perfect configuration – because SAP BW on SAP HANA might need a different one than other use cases.

I found an SAP Note stating that the separation of delta merges and snapshots for SAP BW on HANA is planned for Revision 72 of SAP HANA, which at least shows that SAP has recognized this kind of bottleneck.
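The delta-merge mechanics behind those warnings can be illustrated with a toy model. This is not a HANA API – just a sketch, with an assumed compression ratio, of why a big load inflates memory first and only shrinks after the merge:

```python
# Toy model of the delta-merge behaviour described above: inserts land in an
# uncompressed delta store; a merge moves them into the compressed main store.

class ColumnTable:
    COMPRESSION = 5.0  # assumed ratio; real ratios depend heavily on the data

    def __init__(self):
        self.delta_mb = 0.0  # uncompressed delta store (row-like, expensive)
        self.main_mb = 0.0   # compressed main store

    def insert(self, raw_mb: float):
        self.delta_mb += raw_mb  # loads inflate the delta store first

    def merge(self):
        self.main_mb += self.delta_mb / self.COMPRESSION
        self.delta_mb = 0.0  # memory is released only after the merge

t = ColumnTable()
t.insert(1000)     # a "bigger" load: 1000 MB raw
print(t.delta_mb)  # 1000.0 -> this is what triggers the delta table warnings
t.merge()
print(t.main_mb)   # 200.0 after compression
```

The painful phase in the scenario above is exactly the window between `insert` and `merge`, when several loads and the snapshot overlap.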

 

  3. Multi Temperature Data

Multi temperature is only available for the column storage; the row storage is always in-memory. But all SAP BW InfoProviders are based on the column storage. The problem with multi temperature data is that the movement works in only one direction – into memory. Whenever a column-based table is requested, it is loaded partially (when only a few columns are requested) or fully into memory. Normally you keep data in SAP BW for a reason, so most of the SAP BW data ends up in memory (except archived data). How does the data get colder again? Restart SAP HANA (normally only tables flagged for preload are loaded directly into memory after a restart), or run into an OOM (out of memory) situation, which forces SAP HANA to unload all low-priority tables from memory. Normally I won’t try to live on the edge (OOM) just to regularly unload data from memory. So you have multi temperature data, but only in one direction – into memory.
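The one-way direction can be sketched like this. Again a toy model, not HANA behaviour in any official sense – the column names are invented, and the point is only that access warms columns up while nothing ever cools them down by itself:

```python
# Sketch of the one-way "temperature" behaviour described above: any access
# loads a column into memory; nothing cools down without memory pressure
# or a restart. Names are illustrative only.

class Column:
    def __init__(self, name: str):
        self.name = name
        self.in_memory = False

    def read(self):
        self.in_memory = True  # access always warms the column up

cols = [Column(c) for c in ("ORDER_ID", "AMOUNT", "REGION")]
cols[1].read()  # a report touches only AMOUNT -> partial column load
print([c.in_memory for c in cols])  # [False, True, False]
```

There is no `cool_down()` in this model on purpose: that is exactly the asymmetry the section describes.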

 

  4. In-Memory Modeling or Pushing Down the Logic

First of all, not every piece of logic can be pushed down to the database. There are cases where the HANA model has such a long runtime that reporting directly on it becomes unattractive. For example, a big amount of data (10 million records) combined with non-equi joins is probably such a killer. Of course it is still really fast (3-4 minutes), and without in-memory it would take much longer, but end users normally expect their reporting (on an in-memory database) to run faster than 3-4 minutes.

So you still have to weigh whether the logic pushdown is worth it for real-time execution, or whether it might be better to persist the data physically via a transformation into an InfoProvider.
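Why non-equi joins are such a killer can be shown in plain Python, independent of HANA: an equi join can be answered with a hash lookup in roughly linear time, while a non-equi predicate such as `a < b` tends toward comparing row pairs. The data here is made up for illustration:

```python
# Equi vs. non-equi join cost, sketched with plain Python lists.

a = list(range(1000))
b = list(range(1000))

# Equi join via a hash table: one pass over each side (~linear).
index = {v: v for v in b}
equi = [(x, index[x]) for x in a if x in index]

# Non-equi join (a < b): nested loops, ~n*n comparisons.
non_equi_pairs = sum(1 for x in a for y in b if x < y)

print(len(equi))        # 1000 matched rows
print(non_equi_pairs)   # 499500 matched pairs from 1,000,000 comparisons
```

At 1,000 rows per side this is instant; at 10 million rows the quadratic comparison count is exactly why even an in-memory engine needs minutes.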

 

In conclusion, SAP BW on HANA is really great. But I also realized that not everything is easily possible now. Many people expect that SAP HANA is used only as a database and that everything in SAP BW is perfectly scaled for using SAP HANA as the database. But you also have the possibility to use the in-memory, SQL-near flexibility by modeling in SAP HANA on SAP BW tables. This is really a win-win situation because you have the possibility to respect t

 
