
A short introduction
We at IBS had wanted to come to Habr since the days when computers looked like the one in this picture. But something was always missing: sometimes time, sometimes experts, sometimes interesting stories. Finally everything came together and we are launching our blog, mainly about our internal test lab (IBS InterLab), where IT infrastructure solutions are developed and technologies are tested for clients, plus a little about other areas of corporate IT. We will post rarely, at least this summer, but we will try to deliver the most useful material we can. Thanks.
Let's go
Today almost every large business works in one way or another with ERP or other heavy business applications. Naturally, over time the need arises to move them to virtual machines. Solving this problem in one fell swoop is a dangerous business, since along the way a whole bag of surprises turns up, each of which may well derail the entire project. To prevent this from happening, the IBS InterLab team tests various technologies against the client's tasks, and in the course of such studies we obtained results that may be interesting and useful.
How did it all start?
We performed comparative performance testing of the SAP + Oracle Database and SAP + HANA tandems for the tasks of one of our customers. We pursued several goals: to learn the peculiarities of the HANA DBMS, which is new to the Russian market, and to look at the capabilities of the Huawei-certified HANA computing complex (we will definitely cover this separately).
However, life is a dynamic thing, and very soon (by the standards of the universe, and of technological progress too) the universal converged complex SKALA-R, developed at IBS, appeared on the horizon. Although we had not yet had time to design dedicated comprehensive tests for it (we will certainly do so in the near future), our hands were simply itching to assess its real capabilities in combat conditions. It was important not just to check "works / doesn't work": we are 100% sure that everything works. We wanted to compare its performance against a well-known product. So we decided to continue testing the SAP + Oracle bundle on our platform, in order to compare it with the bundle test already performed on VMware.
Input data
The test complex used earlier had been implemented in the cloud, which only increased the itch in our hands, since it made the comparison somewhat easier. So an identical virtual machine was created in the SKALA-R cloud, differing only in that the previous one had run on a different hypervisor, while this test used the hypervisor from Parallels. This virtual machine was connected to a LUN assembled on a disk array running RAIDIX software. And it was with the latter, or rather with its scaling, that we had certain difficulties: we needed to correlate the performance of the two disk subsystems, the one used earlier and the one that is part of SKALA-R.
We solved this problem by directly comparing performance measurements in IOPS. It turned out that in the original test the actual performance of the virtual disk subsystem was 1500 IOPS, while the measured performance of the LUN allocated in SKALA-R was 1900 IOPS according to IOMeter, which practically matched the theoretical 1998 IOPS obtained from a RAID calculator. For completeness, it should be noted that we achieved the 1900 IOPS figure only after significantly increasing the size of the test data set: on small volumes the disk array cache kicked in too aggressively, and we were getting a prohibitive 37,100 IOPS.
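The theoretical 1998 IOPS figure came from a RAID calculator; the article does not give the actual array configuration, so the disk count, per-disk IOPS, workload mix and write penalty in the sketch below are purely illustrative assumptions about how such calculators typically estimate effective IOPS:

```python
# Illustrative sketch of how RAID calculators estimate effective
# front-end IOPS. The disk count, per-disk IOPS, read/write mix and
# write penalty here are assumptions, not the real SKALA-R config.

def effective_iops(disks: int, disk_iops: int,
                   read_fraction: float, write_penalty: int) -> float:
    """Effective front-end IOPS of a RAID group.

    Back-end capability is disks * disk_iops. Each front-end write
    costs `write_penalty` back-end I/Os (e.g. 2 for RAID 10,
    4 for RAID 5, 6 for RAID 6).
    """
    raw = disks * disk_iops
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * write_penalty)

if __name__ == "__main__":
    # Hypothetical example: 24 disks at ~150 IOPS each,
    # 70/30 read/write mix, RAID 6 write penalty of 6.
    print(round(effective_iops(24, 150, 0.7, 6), 1))  # 1440.0
```

The same formula, fed with the real disk parameters of the array, would yield a figure like the 1998 IOPS quoted above.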
With roughly this understanding of the ratio of hardware resources, we started testing. It should be noted once more that the database used in the tests was small (slightly less than 1 GB), which predetermined the strong influence of the cache on the test results.
Test process
In the course of testing we ran about three dozen queries of various kinds, implemented both through a program written in ABAP and through standard SAP R/3 functionality.
- The first group of tests. Here we used 2-3 tables with a large number of records and many fields (about 100), from which several selections were made. The filtering criteria were chosen so that the number of records returned varied several-fold between tests:
- Selection of individual fields (rows 1 and 3). Three fields were selected; the filtering conditions then ensured the return of either a large volume of records (several hundred thousand) or a small one.
- Selection of all fields of a record (rows 2 and 4) from the same table. The filtering conditions remained the same.
- Selection of all fields of a record (rows 5-7, 13, 15, 17) from the same table. Filtering was performed on key and non-key fields.
- Selection of a single field of a record (rows 8-12, 14, 16, 18) from the same table. Filtering was performed in the same way, on key and non-key fields.
- The second group of tests. Here, for rows 19-21, we used aggregate-type queries, that is, queries involving aggregation operations: the sum over the "transaction value" field was calculated while grouping records by balance unit and transaction currency. This task was included in the testing program because HANA is positioned as a system optimized for analytical workloads (namely, for performing aggregation queries).
- The third group of tests. The next round included selections without filtering (rows 22-25), carried out with SAP's built-in table-browsing tool (transaction SE16). The maximum number of returned records was capped at 50,000.
- The fourth group of tests. Finally, we ran synthetic tests that are part of the standard SAP R/3 functionality (rows 26-29) and our own data extractors for the SAP BW analytical reporting system (rows 30-34), which import data from R/3 into BW. The extractors selected data by specified criteria, mainly the periods for which documents had to be selected.
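The aggregation pattern used in the second group of tests can be shown with a minimal sketch. The article does not name the actual SAP tables, so the table and column names below are invented for illustration:

```python
# Minimal sqlite3 sketch of the second test group's aggregation
# pattern: summing a transaction amount while grouping by balance
# unit and currency. Table and column names are illustrative, not
# the real SAP schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        balance_unit TEXT,
        currency     TEXT,
        amount       REAL
    )
""")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [("1000", "RUB", 100.0),
     ("1000", "RUB", 250.0),
     ("1000", "USD",  40.0),
     ("2000", "RUB",  10.0)],
)

# Group by balance unit and currency, sum the transaction value.
rows = conn.execute("""
    SELECT balance_unit, currency, SUM(amount)
    FROM transactions
    GROUP BY balance_unit, currency
    ORDER BY balance_unit, currency
""").fetchall()
print(rows)
# [('1000', 'RUB', 350.0), ('1000', 'USD', 40.0), ('2000', 'RUB', 10.0)]
```

Column-oriented engines like HANA are optimized for exactly this shape of query, which is why it was included in the program.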
Each test was performed twice in order to evaluate the effect of caching. The results made it obvious that on the first run all data is read from the storage system, while on the second run the work is optimized by the caching mechanisms. At the same time, to eliminate the influence of the Oracle server cache, we flushed it before each new type of test.
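The two-run methodology can be sketched as a small timing harness: run the same workload cold (first run, data comes from storage) and warm (second run, data served from cache) and record both durations. The stand-in workload here is arbitrary Python code; the real tests ran SAP/Oracle queries:

```python
# Sketch of the cold/warm measurement methodology: each test is
# executed twice and both durations are recorded. The workload
# passed in is a stand-in; the real tests were SAP/Oracle queries.
import time

def timed(fn) -> float:
    """Return the wall-clock duration of one call to fn."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def run_test(query) -> tuple:
    cold = timed(query)   # first run: storage-bound
    warm = timed(query)   # second run: cache-assisted
    return cold, warm

if __name__ == "__main__":
    cold, warm = run_test(lambda: sum(range(1_000_000)))
    print(f"cold: {cold:.4f}s, warm: {warm:.4f}s")
```

In the real setup the "reset" step (flushing the Oracle cache) was inserted between test types, so that each first run was genuinely cold.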
And now, what it was all for: the test results. In theory they need no special explanation, since everything is clearly presented in the tables. But if you have any questions, you can always ask them in the comments, and we will respond promptly and fully. So, take a look.

Summing up
Since we do not claim absolute scientific rigor, we will draw our conclusions in fairly free form:
- Firstly, it is quite clearly visible how the Oracle software cache works: a single run is enough for the operation time to drop significantly. This can be chalked up as a point in Oracle's favor.
- Secondly, the time spent on each first run is roughly the same across platforms, and the ratio almost exactly matches the performance ratio of the disk subsystems (1500 IOPS / 1900 IOPS). It should be noted that subsequent runs of the same tests on SKALA-R show improved results, which is explained by the disk array cache joining in, something it apparently slacked on in the earlier tests.
- Thirdly, trying to be as impartial and objective as possible, we note that our SKALA-R coped with supporting ERP very well. We obtained quite adequate results on a platform with competitive functionality but a significantly lower cost than the usual solutions built on hypervisors and disk arrays from well-known manufacturers. And that is given that, for the purity of the experiment, we decided to run the tests with "default" settings.
If you have something to add, ask, or argue about, or, so to speak, an entry for our "book of complaints and suggestions", we will be happy to continue the discussion in the comments.
The post was prepared by experts Alexander Sashurin and Alexander Ignatiev, with the active participation of Andrei Sungurov. Thanks to them.
The IBS InterLab team.