Performance Measurement of Play Framework 2.0
I have already written about the Typesafe Stack 2.0 software platform. That post dealt with one of the platform's components, the Akka 2.0 framework, which implements the actor model on the JVM. Today I want to write about the capabilities of another Typesafe Stack component, the Play 2.0 framework. Although its functionality has already been described here and here, in my opinion the performance of solutions running on Play 2.0 has not yet been covered.
The framework will be tested using a simple application built on top of it. The tests should answer the following questions: What is the maximum possible number of simultaneous connections? How much RAM do these connections consume? How many requests per unit of time can the application under test handle?
Test application
Before describing the application under test, the main architectural features of the Play 2.0 framework should be clarified. Play's HTTP server is based on the high-performance Netty library. This not only allows Play to be used out of the box, with no servlet container to configure, but also makes asynchronous processing of client requests possible. In the classic synchronous model, any incoming request that needs some computation before it can be answered occupies an operating system thread for the entire duration of that computation. Play, by contrast, allows the thread to be returned to the server pool while the computation runs, and a thread to be picked up again once the result is ready. In practice this means more clients can be connected simultaneously than in the synchronous model.
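The synchronous-vs-asynchronous distinction can be sketched with plain `scala.concurrent` Futures (a simplified illustration of the idea, not Play's actual server internals):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Synchronous handler: the calling thread blocks for the whole computation.
def handleSync(): String = {
  Thread.sleep(100) // stands in for a slow computation
  "result"
}

// Asynchronous handler: returns immediately; the computation runs on a
// pool thread, so the caller's thread is free to serve other requests.
def handleAsync(): Future[String] = Future {
  Thread.sleep(100)
  "result"
}

val pending = handleAsync() // returns at once, before the computation finishes
// ... the server thread could accept other connections here ...
val result = Await.result(pending, 1.second)
```

In Play the framework itself does the equivalent of `Await` for you, resuming the request when the promised value becomes available.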
The application under test performs three main functions:

- create a comet connection with the client's browser (`/wait?cid={connection_id}`)
- accept an incoming value and send it to the browser console of every open comet connection (`/put?v={value}`)
- close all existing comet connections (`/closeall`)
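The three endpoints above would map to entries in `conf/routes` roughly like this (reconstructed from the URLs and controller method names shown later; the exact file is in the linked repository, and the parameter names there may differ):

```
GET   /wait       controllers.Application.waitFor(cid)
GET   /put        controllers.Application.putValueAsync(v)
GET   /closeall   controllers.Application.closeAll
```

In Play's routes syntax, parameters that do not appear in the path (like `cid` and `v` here) are bound from the query string.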
The Akka 2.0 library was used during development. The application is written in Scala, since in my view Scala is more convenient than Java for working with Akka. Below I give only the main parts of the code, to show how simple working with connections in Play 2.0 is without burying the point of this post. All the code can be obtained from the git repository linked at the end of the publication.
Comet-connection actor
```scala
...
lazy val channel: PushEnumerator[String] = ch
...
def receive = {
  case Message(message) => {
    channel.push(message)
  }
  ...
  case Close => {
    channel.push("closed")
    channel.close()
    self ! Quit
  }
}
...
```
The `channel` variable is the data source for the comet connection (of type `Enumerator`), which, as shown below, is passed to the comet connection via the `Comet` adapter (of type `Enumeratee`). More about sources, transformers, and consumers of data streams in Play can be read here. Data is sent to the comet socket by calling `channel.push(message)`; the socket is closed by calling `channel.close()`.
Main application actor
The ConnectionSupervisor actor is responsible for creating comet connections, sending messages to the created connections, and closing all connections.
```scala
...
var connectionActors = Seq.empty[ActorRef]

def receive = {
  case SetConnect(connectionId) => {
    lazy val channel: PushEnumerator[String] = Enumerator.imperative(
      onComplete = self ! Disconnect(connectionId)
    )
    val connectionActor = context.actorOf(Props(new ConnectionActor(channel)), connectionId)
    connectionActors = connectionActors :+ connectionActor
    sender ! channel
  }
  case BroadcastMessage(message) => {
    connectionActors.foreach(_ ! Message(message))
  }
  case CloseAll => {
    connectionActors.foreach(_ ! Close)
  }
}
...
```
References to the created actors are stored in the connectionActors sequence (of type `Seq[ActorRef]`). When a connection is established, a channel is created and passed to a new ConnectionActor, which is then added to the list of actors. How messages are broadcast and connections are closed should be clear from the code.
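The registry pattern can be isolated into a tiny, Akka-free sketch (the `Connection` class here is a hypothetical stand-in for ConnectionActor, used only to show the `Seq` bookkeeping):

```scala
// A connection stand-in that records every message sent to it.
class Connection(val id: String) {
  var received: List[String] = Nil
  def send(msg: String): Unit = received = msg :: received
}

// Immutable Seq as the registry, rebound on each change,
// mirroring `connectionActors = connectionActors :+ connectionActor`.
var connections = Seq.empty[Connection]

def connect(id: String): Connection = {
  val c = new Connection(id)
  connections = connections :+ c
  c
}

// Broadcast simply visits every registered connection.
def broadcast(msg: String): Unit =
  connections.foreach(_.send(msg))

connect("1"); connect("2")
broadcast("hello") // every registered connection now holds "hello"
```

Inside an actor this rebinding of a `var` to a new immutable `Seq` is safe, because the actor processes messages one at a time.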
Actor storing the current value
It is assumed that a value is sent to `StorageActor`, some work is performed, and the value is sent to all comet connections and also returned to the client. This simulates the behavior of a real application in which a client makes a request and waits for a response.
```scala
...
var value = ""

def receive = {
  case Put(v) => {
    value = v
    connectSupervisor ! BroadcastMessage(value)
    sender ! value
  }
}
...
```
Application Controller
```scala
object Application extends Controller {
  ...
  def waitFor(connectionId: String) = Action {
    implicit val timeout = Timeout(1.second)
    AsyncResult {
      (ActorsConfig.connectSupervisor ? SetConnect(connectionId))
        .mapTo[Enumerator[String]].asPromise.map { chunks =>
          Ok.stream(chunks &> Comet(callback = "console.log"))
        }
    }
  }

  def broadcastMessage(message: String) = Action {
    ActorsConfig.connectSupervisor ! BroadcastMessage(message)
    Ok
  }

  def putValueAsync(value: String) = Action {
    implicit val timeout = Timeout(1.second)
    Async {
      (ActorsConfig.storageActor ? Put(value)).mapTo[String].asPromise.map { value =>
        Ok(value)
      }
    }
  }

  def closeAll = Action {
    ActorsConfig.connectSupervisor ! CloseAll
    Ok
  }
}
```
The methods of this controller are mapped to the URLs of incoming HTTP requests (the route definitions are in `conf/routes`). Of greatest interest here is the `waitFor` method, which creates a comet socket and attaches an `Enumerator[String]` channel to it. The channel is sent to the socket by the actor in response to a `SetConnect` message. Each message arriving on the channel is passed to the client's browser as a parameter of the callback specified in `Comet(callback = "console.log")`, in this case the `console.log` function.
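Under the hood, the `Comet` adapter wraps each pushed string in a small script tag that invokes the given callback in the enclosing page. The chunk format can be approximated by a plain function like the one below (a hypothetical sketch of the wire format, not Play's exact implementation, which also escapes the data and sends an initial padding chunk):

```scala
// Approximate shape of one comet chunk: a <script> tag that calls
// the configured callback with the pushed data as its argument.
def cometChunk(callback: String, data: String): String =
  s"""<script type="text/javascript">$callback('$data');</script>"""

val chunk = cometChunk("console.log", "42")
// the browser executes the script as soon as the chunk arrives,
// logging the value to the console
```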
On the client side, a comet connection is created using a hidden `iframe` element, for example:

```html
<iframe src='/wait?cid=1' style='display:none'></iframe>
```
Testing process
The application under test was launched on a virtual machine running Ubuntu 11.10 (32-bit) with 1 GB of RAM and 1 processor core (physical computer processor - Intel Core i5-2400 3.1GHz).
Testing with standard tools (JMeter, Visual Studio Load Test) failed: launching even 700 parallel threads overwhelmed the testing machine so badly that no significant load could be generated. Using a specialized tool such as the Gatling Stress Tool (whose architecture is also based on Akka) proved impossible because it lacks support for testing comet connections, and adding that support turned out to be a difficult task, since the developer documentation is still under construction. For these reasons, a custom testing tool was developed.
Test script
The scenario consists of three steps:
- a specified number of comet connections are created at a certain rate
- a specified number of values are sent at a certain rate
- the comet connections are closed by the corresponding request

The testing process collects the number of established comet connections, the average number of values received per comet connection, the number of value-send requests and how many of them succeeded, as well as the average execution time of these requests. During each step of the scenario, the CPU load and the amount of RAM occupied (by the comet connections) were recorded.
Test results
- Measurement of the maximum number of comet connections and the amount of RAM occupied.

In this group of tests, after the comet connections are established, a single value is sent, after which the connections are closed.
| Number of comet connections | Connection period (ms) | Memory occupied (MB) | Max. CPU load while establishing connections | Max. CPU load while sending value |
|---|---|---|---|---|
| 500 | 50 | 36 | 15% | 15% |
| 1000 | 40 | 56 | 17.5% | 15% |
| 3000 | 20 | 145 | 25% | 62% |
| 3000 | 10 | 142 | 40.6% | 61% |
| 3000 | 5 | 140 | 59.9% | 70.8% |
| 3000 | 3 | 138 | 98% | 69% |
| 6000 | 4 | 262 | 80.5% | 93.4% |
| 8000 | 4 | 394 | 73.7% | 80.9% |
| 10000 | 4 | 485 | 77.1% | 100% |
The CPU load while establishing comet connections with a period of 4 ms stays within reasonable limits, so adding more connections is only a matter of available RAM.
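As a back-of-the-envelope check (my own estimate from the table above, not a figure measured in the original tests), the marginal memory cost per connection can be computed from the last three rows:

```scala
// (number of connections, memory occupied in MB) from the table's last rows.
val rows = Seq((6000, 262.0), (8000, 394.0), (10000, 485.0))

// Convert MB to KB and divide by the connection count.
val kbPerConnection = rows.map { case (n, mb) => mb * 1024 / n }
// roughly 45-50 KB per comet connection
```

At around 50 KB per connection, a machine with a few GB of RAM could in principle hold tens of thousands of idle comet connections.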
- Measurement of the maximum rate of value-send requests and of short-term peak load.

As the previous tests show, sending values to comet connections is a resource-intensive operation, so in this group of tests the number of connections is reduced in order to measure the server's maximum throughput.
| Number of comet connections | Number of sent values | Send period (ms) | Max. CPU load while sending | Average request time (ms) | Number of rejected requests |
|---|---|---|---|---|---|
| 1000 | 10 | 100 | 85.7% | 163 | 7 |
| 1000 | 10 | 500 | 76.5% | 45 | 0 |
| 1000 | 10 | 200 | 100% | 374 | 2 |
| 500 | 10 | 200 | 77% | 39 | 0 |
| 100 | 10 | 100 | 25% | 19 | 0 |
| 100 | 100 | 50 | 68.9% | 35 | 0 |
| 100 | 100 | 30 | 100% | 250 | 0 |
| 10 | 100 | 20 | 31% | 12 | 0 |
| 10 | 1000 | 10 | 61.7% | 18 | 0 |
| 10 | 1000 | 5 | 98.7% | 47 | 0 |
| 1 | 1000 | 4 | 58.2% | 27 | 0 |
| 1 | 1000 | 2 | 92.3% | 33 | 0 |
| 1 | 10000 | 2 | 100% | 400 | 4636 |
| 1 | 8000 | 4 | 100% | 415 | 3217 |
| 1 | 5000 | 5 | 100% | 399 | 292 |
| 1 | 3000 | 7 | 100% | 129 | 0 |
| 1 | 2000 | 7 | 98% | 58 | 0 |
The tests show that the application copes with short-term peak loads of up to 500 requests per second and works normally under a load of 100-150 requests per second.
UPDATE:
Test application: git://github.com/tolyasik/testeeApp.git
Testing application: git://github.com/tolyasik/testerApp.git