This article was written by DB Masters for Oracle for the Oracle for SAP Technology Update Newsletter, April/May 2016. Since the article appeared in an English-language newsletter, it is published here in English as an exception.
SAP BI with Oracle Database In-Memory POC
In-Memory database technology has been around since the 1990s. Numerous database vendors have come and gone while filling that niche. In 2011 SAP came up with the HANA In-Memory concept. The initial idea was to handle analytical queries efficiently through columnar data structures. Planned as a BWA replacement, the technology matched that use case well.
Later SAP expanded the concept to include OLTP. Technically, they chose to mix self-developed and acquired technologies such as P*Time, the TREX Search Engine, MaxDB (liveCache), and Sybase ASE for this purpose. Customers wanting to exploit the In-Memory technology of HANA require special hardware to run the system, plus a migration/conversion from the technology they know today to the current HANA flavor.
At nearly the same time, Oracle fully integrated In-Memory technology into the Oracle 12c database.
Oracle Database In-Memory promises high analytical processing performance and near-transparent implementation, and it requires no special hardware or data migration. When implemented properly, customers retain the performance, scalability, availability, and reliability they rely on today, with high-performance analytical capability added on top.
This paper explores the pre-certification testing and findings of one customer’s experience with Oracle Database In-Memory. Oracle Database In-Memory has been certified for use with SAP as of June 2015.
We at DB Masters in Austria are focused on solving customer business challenges through proper implementation of the Oracle Database and related technologies. We have experience with many large ERP systems and understand their special needs. However, our focus is Oracle-centric, independent of the applications running on it. We have been in business since 2000, serving customers across Europe, and our customer list is long.
When Oracle Database In-Memory was initially released by Oracle, one of our customers expressed interest in this technology. The company approved a proof of concept to evaluate the performance impact for their SAP BW application.
We took the “top” (longest running) SQL statements from their production SAP BI system and captured measurements for the following scenarios:
- The “current” query runtimes from the production environment as a baseline – not entirely accurate, since a lot of other activity was going on in the database.
- Oracle 11gR2 with both the original buffer cache size and a 256 GB cache. Statements were executed several times to warm up the cache, so we could eliminate I/O time from the execution.
- Oracle 12cR1 with both the original buffer cache size and a 256 GB cache, warmed up in the same way. This allowed us to compare Oracle 11gR2 and Oracle 12c directly. The runtime difference between Oracle 11g and 12c was negligible, so we can assume there are no performance differences for the test statements.
- Oracle Database In-Memory enabled, with all related objects placed into the In-Memory column store. Several configuration settings were tried – such as different In-Memory compression options – but in the end all results were obtained with FOR QUERY LOW compression.
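The In-Memory setup used in the last scenario boils down to a handful of documented commands; here is a minimal sketch (the table name is a hypothetical example of a BW fact table, and the 256 GB size matches our test configuration):

```sql
-- Reserve a column-store area inside the SGA (takes effect after an instance restart)
ALTER SYSTEM SET INMEMORY_SIZE = 256G SCOPE=SPFILE;

-- Mark a table for population, using the compression level we settled on
ALTER TABLE sapsr3."/BIC/FSALES" INMEMORY MEMCOMPRESS FOR QUERY LOW;

-- Check what is populated and how much memory it consumes
SELECT segment_name, inmemory_size, bytes_not_populated
  FROM v$im_segments;
```

In practice each candidate object gets such an `ALTER TABLE ... INMEMORY` statement; the optimizer then uses the column store transparently, without any application change.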
Here are the results over all queries – keep in mind that this is an older SAP BI version, not “optimized” for Oracle Database In-Memory technology:
| Type of test | Result |
| --- | --- |
| Runtime on production system | Between 2500 and 2750 seconds, depending on the current system load |
| 11g/12c test system, original buffer cache | About 1455 seconds |
| 11g/12c test system, 256 GB buffer cache | About 382 seconds |
| Oracle Database In-Memory, FOR QUERY LOW compression | About 118 seconds |
The numbers are not as impressive as those quoted when Larry Ellison announced Oracle Database In-Memory, but depending on the baseline, running up to 25 times faster does not look bad at all.
One question still comes up frequently: “Why not put the whole database into the buffer cache?” That alone would speed up the system by a factor of 7-8.
The answer is easy: most customers will not be willing to pay for servers with the required amount of memory. If your SAP BI database is 5, 10 or more TB in size, you would still need up to 50% of the database size for the buffer cache. Today there are servers with up to 32 TB of memory and more, but they are very expensive. Current Intel-based servers with 2 sockets can address up to 3 TB of memory; typically, affordable servers are populated with up to 1.5 TB.
The advantage of Oracle 12c Database In-Memory is that only the table data must fit into the SGA. This typically ranges between 1/3 and 1/2 of the database size, depending on the type of system. The fact that In-Memory table data is stored and accessed in compressed form reduces the footprint further. Our tests indicate that most SAP BI tables can be compressed to between 0.2 and 0.4 of the original table size.
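This sizing argument can be checked with simple arithmetic; a quick sanity check in Python, using the ranges quoted above (the function name and figures are illustrative):

```python
def inmemory_footprint_tb(db_size_tb, table_fraction, compression_ratio):
    """Estimated In-Memory column store size for the table data, in TB."""
    return db_size_tb * table_fraction * compression_ratio

# Worst case for a 10 TB database: half of it is table data,
# compressed only to 0.4 of its original size.
worst = inmemory_footprint_tb(10, 0.5, 0.4)   # 2.0 TB

# Best case: a third of the database is table data, compressed to 0.2.
best = inmemory_footprint_tb(10, 1 / 3, 0.2)  # ~0.67 TB

print(f"{best:.2f} .. {worst:.2f} TB")
```

With ratios from the middle of both ranges (say 0.4 of the database compressed to 0.3 of its size), the footprint lands around 1.2 TB – which is why a 1.5 TB server is a realistic target for a 10 TB database.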
A current Intel-based server with 2 sockets and 1.5 TB of memory should be able to hold a 10 TB SAP BI database completely in memory! Additionally, you can get rid of indexes that exist only to speed up lookups – especially the bitmap indexes needed for query performance. Fewer indexes to maintain will speed up data loading dramatically while reducing disk space and backup sizes.
Another advantage of Oracle Database In-Memory is how little time it takes to implement. Believe it or not, this customer POC was done within 5 hours – including creating a clone of the database on the test system and running all of our tests! Compare that to any SAP feature implementation project you like.
Deactivating Oracle Database In-Memory is also quite easy. A single command dynamically enables or disables the feature. This offers a quick and efficient fallback should any show-stopping issues arise.
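The fallback path mirrors the setup; a sketch using the same hypothetical table name as before:

```sql
-- Take a single table out of the column store (dynamic, no restart needed)
ALTER TABLE sapsr3."/BIC/FSALES" NO INMEMORY;

-- Or disable the feature instance-wide by releasing the memory area
-- (takes effect after an instance restart)
ALTER SYSTEM SET INMEMORY_SIZE = 0 SCOPE=SPFILE;
```

Either way, queries simply fall back to the buffer cache and disk; no data has to be migrated back.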
Last but not least – how does this affect HA/DR? With Oracle Database In-Memory, you can continue to use all high availability functionality such as Oracle RAC, Oracle Data Guard, and failover solutions from Oracle or other vendors. No changes. Recoverability also remains untouched. The feature is fully integrated with all of the Oracle HA and DR capabilities.
In summary, our experience with this new Oracle feature was very positive. It lived up to our expectations with respect to ease of implementation and performance benefits. It is now a certified option and nearly transparent to implement. Customers have a viable path to explore this technology at reasonable cost and with no business disruption.
About the Author:
Christian Pfundtner, CEO of DB Masters, has been working with Oracle databases since 1992 (Oracle 6) and was one of the first 4 OCMs in Europe. Our focus is on the Oracle Database – but unlike many of our competitors, we take care of all related layers, from the disk (storage) up to the user. We offer evaluations (business and current database requirements, current licensing state), recommendations (how to implement Oracle to address your needs and get the most out of your investment), and implementation, from performance tuning through troubleshooting.