Introduction
This article is aimed primarily at technical specialists in the field of software quality and performance. It describes in detail the capabilities of a high-performance option for logging Apache JMeter results to an Oracle database. The solution has been used under "combat" conditions on load testing projects carried out by employees of Performance Lab.
The presented solution provides centralized storage of the results of load tests that use JMeter and significantly reduces the time needed to process and analyze the results.
The engineer also gains access to the full functionality of the Oracle DBMS for working with the data.
Capabilities
The toolset developed for Apache JMeter makes it possible to log test results (transactions, response times, server responses, operation execution times) to an Oracle database in real time.
What is it for
The described approach makes it possible to automate the collection and processing of data when load tests with Apache JMeter are run frequently.
When the time available for processing test results is very limited and the volume of logs is large, this tool lets you quickly generate reports on the results of operations (counts, response times, etc.) in the form of tables or graphs. Processing the logs of a test several hours long takes only a few seconds: the time needed to execute an SQL query.
To process the results you can use the full functionality of the Oracle DBMS, producing reports of almost any structure, to suit every taste.
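For example, here is a minimal sketch of such a query. It assumes the JM_LOG_TABLE layout from the table creation script below, that SUCCESS stores 1 for a successful sample, and uses placeholder time boundaries:

SELECT label,
       COUNT(*) AS samples,
       ROUND(AVG(elapsed_time)) AS avg_ms,
       ROUND(PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY elapsed_time)) AS pct90_ms,
       SUM(CASE WHEN success = 1 THEN 0 ELSE 1 END) AS errors  -- error count, assuming SUCCESS = 1 means success
FROM jm_log_table
WHERE jm_date BETWEEN TIMESTAMP '2013-05-01 10:00:00'  -- placeholder test start
              AND TIMESTAMP '2013-05-01 12:00:00'      -- placeholder test end
GROUP BY label
ORDER BY label;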
Advantages
- High speed of processing test logs;
- The data for all tests are in one place - the database;
- Convenient report preparation - you set the range for which statistics are needed and execute an SQL query;
- Flexible log processing - the possibilities are limited only by the functionality of Oracle and the tester's imagination;
- High logging speed due to the absence of locks on database access when a large number of VUs are running, since writing is performed by only one thread (by default).
Disadvantages
- Requires deploying an Oracle DBMS; the server capacity needed depends on the planned load profile;
- The initial development of SQL query templates takes a certain amount of time;
- A BeanShell Listener must be added to the JMeter test plan, and the nesting level of transactions must be kept track of (for more, see the section "Writing to the shared queue");
- Processing the results while the test is running may increase the time it takes to log to the database (this depends on the database capacity and the presence of indexes).
Scheme of the logging system
The logic is simple:
- At the beginning of the test, a thread is started that creates one connection (by default) to the database;
- Logs are written to the shared queue;
- A synchronization thread periodically flushes the contents of the queue to the database.
The test plan for logging should contain two additional thread groups (Creating Connections and Synchronizing Logs). For a sample result to end up in the shared queue, you need to add a BeanShell Listener to the "Working Group"; its contents are discussed below.
When the test starts, the connection to the database is made once; after that, every second (the interval can be adjusted) the synchronization thread checks whether there is data in the queue and writes it to the database. Writing to the database runs in parallel with the main load and does not significantly affect it.
It is recommended to use the test plan template from the appendix (you can download it here) as a standard configuration (file "db_logger_plan.jmx").
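Roughly, the test plan structure looks like this (a sketch; the exact layout is in "db_logger_plan.jmx"):
- Thread Group "Creating Connections" (1 thread, 1 loop) - a BeanShell Sampler that performs the connect() call;
- Thread Group "Synchronizing Logs" (1 thread, looping with an interval of about 1 second) - a BeanShell Sampler that performs the insert() call;
- Thread Group "Working Group" (the load threads) - the samplers of the tested operations, each with a BeanShell Listener that puts its result into the queue.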
Connect to the database
To connect to the database, you must specify: host, port, SID, login, password, number of threads (preferably 1), and the name of the table in the database.
For example, the lines look like this:
import ru.perflab.jmeter.OracleDBLogger;
OracleDBLogger.INSTANCE.connect("127.0.0.1", 1521, "xe", "sa", "gfhjkm", 1, "jm_log_table");
Synchronization
A synchronization call writes the data from the queue to the database (PreparedStatement and addBatch are used). To call it, it is enough to specify the lines:
import ru.perflab.jmeter.OracleDBLogger;
ResponseCode = new String("count=" + OracleDBLogger.INSTANCE.insert());
The optimal launch interval is 1 second.
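A quick way to check during the test that the logs are actually reaching the database is to count the fresh rows; a sketch, assuming the jm_log_table table from the setup described below:

SELECT COUNT(*) AS fresh_rows
FROM jm_log_table
WHERE db_time_stamp > SYSTIMESTAMP - INTERVAL '1' MINUTE;  -- rows written in the last minute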
Writing to the shared queue
The queue consists of HashMaps; to add an element, you must add a BeanShell Listener with the following contents to the test:
import java.util.HashMap;
import ru.perflab.jmeter.OracleDBLogger;

HashMap p = new HashMap();
p.put("Timestamp", new java.sql.Timestamp(prev.getTimeStamp()));
p.put("CurrentTimestamp", new java.sql.Timestamp(prev.currentTimeInMillis()));
p.put("Time", prev.getTime());
p.put("Latency", prev.getLatency());
p.put("IdleTime", prev.getIdleTime());
p.put("Bytes", prev.getBytes());
p.put("SampleCount", prev.getSampleCount());
p.put("isSuccessful", prev.isSuccessful());
p.put("SampleLabel", prev.getSampleLabel());
p.put("Hostname", sampleEvent.getHostname());
p.put("ThreadName", prev.getThreadName());
p.put("AllThreads", prev.getAllThreads());
p.put("UrlAsString", prev.getUrlAsString());
p.put("Request", new String(prev.getDataEncodingWithDefault()));
p.put("ResponseData", new String(prev.getResponseData()));
p.put("DataType", prev.getDataType());
OracleDBLogger.INSTANCE.put(p);
See the test plan in the appendix (file "db_logger_plan.jmx").
The entire list of parameters is mandatory; if there is nothing to pass, specify empty values, but it is better not to change anything.
IMPORTANT! Sometimes you need to write to the database the indicators not of the sample itself but of its parent; in that case add prev.getParent(), for example prev.getParent().getDataType(). This can be necessary when there are sub-requests, for example HTTP sub-requests. To group sub-requests you can try using a Transaction Controller or a Simple Controller.
Deploying the database
Either Oracle XE or Enterprise is suitable for the system (the second option is preferable). The required server capacity depends on the intensity of the emulated load (the number of samples per second); the tool was used on a project under "combat" conditions, where the load was about 450 transactions (samples) per second. The database server had the following characteristics: 4 × Power7 (64-bit) SMT-4, 12 GB RAM; CPU utilization was ~15%, and there was also enough memory.
Script to create a table
Create a table in the database with the necessary triggers and sequences:
CREATE TABLE JM_LOG_TABLE
(
"T_ID" NUMBER(38,0) NOT NULL ENABLE,
"TIME_STAMP" TIMESTAMP(7), --prev.getTimeStamp(): sample start time
"JM_DATE" TIMESTAMP(7), --prev.currentTimeInMillis(): local time of the load station when the log was put into the queue
"DB_TIME_STAMP" TIMESTAMP(7), --database time when the log row was written to the table
"ELAPSED_TIME" NUMBER(38,0), --response time (ms)
"LATENCY" NUMBER(38,0),
"IDLE_TIME" NUMBER(38,0),
"BYTE_COUNT" NUMBER(38,0),
"SAMPLE_COUNT" NUMBER(10,0),
"SUCCESS" NUMBER(1,0),
"LABEL" VARCHAR2(1024 BYTE),
"HOSTNAME" VARCHAR2(200 BYTE),
"THREAD_NAME" VARCHAR2(200 BYTE),
"THREAD_COUNTS" NUMBER(38,0),
"URL" VARCHAR2(2048 BYTE),
"REQUEST_MSG" BLOB, --assumed to hold the request to the server, but it was not found how to get this parameter from JMeter; you can always adjust the listener
"RESPONSE_MSG" BLOB, --filled in only if there was an error
"DATA_TYPE" VARCHAR2(200 BYTE) --server response type (of little use)
);
create sequence jm_log_table_t_id_seq start with 1 increment by 1;

CREATE OR REPLACE TRIGGER JM_LOG_TABLE_T_ID_INSERT
BEFORE INSERT ON jm_log_table
FOR EACH ROW
BEGIN
  SELECT jm_log_table_t_id_seq.nextval, sysdate INTO :new.t_id, :new.db_time_stamp FROM dual;
END;
/
--Indexes are added as needed, so that restructuring the table does not cause problems when there is a large stream of logs.
CREATE INDEX JM_LOG_TABLE_JM_DATE ON JM_LOG_TABLE (JM_DATE);
Results
To collect statistics for a test, you need to execute an SQL query, which should be prepared in advance; for example, this can be done while the test is running. A big plus of this approach is that new queries need to be written fairly rarely, since the format of the data required for a report does not change much over the course of a project. Examples of queries are in the appendix to the article (folder "SQL_Requests_for_DB_Logger").
Output data format
As a result, executing the queries against the database produces a table of the following form:
From a table like this you can build something like the following graph:
Performance
The actual performance of the system will depend on the size of the test plan, the hardware of the load station, the hardware of the database server, and the communication channel.
So far it has not been possible to find the maximum, since the load station laptop did not have enough memory (only a 2 GB machine was at hand). On a real project there was also no opportunity to determine the ceiling, so it was only recorded that 1000 samples per second were logged without problems (higher rates were not needed for those tests).
Conditions. The “logger” test was conducted on two laptops:
Load station - Lenovo v360 (Pentium P6100, 2 × 2000 MHz, 2 GB RAM); 512 MB allocated to JMeter v2.8; Windows 7.
Oracle Database (XE 11g) - Lenovo Y550P (Core i7, 8 × 1.66 GHz, 4 GB RAM), Windows XP. Only one index was built for the table (jm_date).
Network - 1 Gbit via a cross-over cable.
The maximum that could be squeezed out of the load station was 1800 samples per second, after which JMeter reported Out of memory and "stopped".
The test plan contained a Dummy Sampler with a delay of 500 ms and 1000 threads.
The figure above shows the graph of transactions (samples) per second. It can be seen that problems with the load station's performance (not enough laptop power) appeared at a rate of 1800 samples per second, which is already a very good result.
Loading system resources on the database:
• CPU 10-15%,
• 1 GB free memory (25%),
• disk utilization 5-10%,
• network <1%.
At the load station:
• CPU 30%,
• memory was exhausted (consumed by JMeter and the OS).
To check the reliability of the solution, a test with 700 threads and a duration of 2 hours was run; throughput was 1,400 samples per second.
The figure shows the graph of transactions per second (1-second aggregation); as you can see, the load remained stable throughout the test, and the logging functionality did not affect the main JMeter requests.
The stability of the solution is confirmed by the graph above, which shows that the time to log to the database (1400 records per second) did not exceed 600 ms (pink line).
The execution time of the samplers emulating the load (Dummy Sampler) matched the 500 ms specified in the test plan, apart from rare deviations.
In this test, the laptops had enough power for both the database and JMeter to work.
Loading system resources on the database:
• CPU 10%,
• 1 GB free memory,
• disk utilization 5-7%,
• network <1%.
At the load station:
• CPU 30%,
• free memory ~ 25%.
Conclusion
This article presents one possible option for centralized storage of the results of numerous load tests performed with the Apache JMeter load tool and the Oracle DBMS.
Countless files (CSV, XML) with JMeter logs from completed tests can be replaced with a single database, where the results are stored safely and are accessible to all participants in the load testing project.
The proposed solution is easily incorporated into an existing test plan (two additional thread groups and a listener), does not require significant computing power for the database, and allows flexible and convenient processing and analysis of test results.
By implementing this solution in your project, you can free up the time spent analyzing results for more interesting things: talking with friends, tea/coffee, or whatever you like...
In the next article
It will describe a solution based on MS Excel which, by connecting to the database, builds tables with test results and graphs using the operating system's ODBC DataSource.
The Excel file contains a VBA macro that connects to the database and executes an SQL query to select test results into an Excel spreadsheet, from which the necessary graphs are automatically built.
This development turned out to be an excellent addition to the “logger” described in this article, since the processing of the results of each new test takes only a few seconds.