At the end of 2014, we introduced a new product in our line of office controls: ASPxRichEdit. Our goal was to create a powerful tool for working with documents online. The requirements users place on a text editor (support for text and paragraph formatting styles, loading and saving documents in popular formats without losing content, print setup) all imply intensive interaction between the client and the server.
In this article, I will describe the approaches to testing this interaction that we used during development.

Tools used
When designing the architecture of any serious project, regardless of the platform and tools chosen, one point is crucial: all the readability, portability, and structure of the code are irrelevant if that code cannot be covered with tests. Tests should be easy and fast to write and run, using only the minimum of necessary code. Under those conditions, developers will write code and cover it with tests right away, or, following the "red/green/refactor" mantra, write the tests first and then implement the new functionality. But if writing tests requires arcane knowledge available only to the project architect, the code will not be covered with tests.
Choosing tools for testing the server and client code independently was easy for us: we settled on NUnit as the server-side test framework and Jasmine for testing the client code. As a runner for the client tests, we used the now almost standard Chutzpah.
Client-server interaction model
However, in the case of ASPxRichEdit, it was important to cover with tests not only the process of sending and processing requests, but also the synchronization of client and server state. The main task of integration testing here is to make sure that any state of the server model is correctly interpreted by the client. In turn, any change made to the document on the client must correctly reach the server and lead to the corresponding changes in the server model.
In our case, the client model largely mirrors the server one. The desktop version of the Rich Editor has been developed at DevExpress for more than eight years, so for the server part we decided not to reinvent the wheel (and step on all the rakes that come with it), and having a "mirror" model on the client simplifies synchronization. In my opinion, there is nothing especially exotic in this approach; the same situation can surely be found in many applications built on top of "old" server code. To ensure interoperability, we need code that can produce JSON from the server model and modify that model based on JSON coming from the client, along with code that solves the same tasks on the client. The easiest way to obtain such code is to auto-generate it, a job that Visual Studio's T4 Text Templates mechanism handles very well.
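To give an idea of what such a template looks like, here is a minimal T4 sketch that generates one direction of the conversion; the property list and the CharacterProperties type are hypothetical stand-ins, not the actual ASPxRichEdit model:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
// hypothetical property list; a real template would read it from the
// server model's metadata so that both sides always stay in sync
var properties = new[] { "fontName", "fontSize", "alignment" };
#>
// <auto-generated />
using System.Collections.Generic;

public partial class CharacterPropertiesConverter {
    public Dictionary<string, object> ToJsonObject(CharacterProperties source) {
        var result = new Dictionary<string, object>();
<# foreach (var property in properties) { #>
        result["<#= property #>"] = source.<#= char.ToUpper(property[0]) + property.Substring(1) #>;
<# } #>
        return result;
    }
}

Running the template yields plain C# that compiles with the rest of the server code, and a symmetric template can emit the client-side counterpart.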

Using PhantomJS for integration tests
So, we need to test how client requests are interpreted by the server and how the client reacts to the response received from the server. The server part of a test is written using the already mentioned NUnit, and to run the client part we decided to use PhantomJS: a full-fledged headless browser based on WebKit (JavaScript, CSS, DOM, SVG, Canvas) with no UI, yet fast and lightweight. This combination lets us test client initialization from the server model, the application of client changes on the server and of server model changes on the client, as well as possible collisions during state synchronization.
In general, a test is a fairly simple cycle. First, the server model is created and configured, and the working session generates the initial JSON for client initialization. (With real documents, the model is split into parts and only the first fragment is transferred on the first load, the rest being loaded asynchronously; until the server returns the remaining parts, the client performs layout calculations on the part it already has. In the tests, the documents are small, so the initialization JSON contains the full model.) Next, the server code launches PhantomJS with our libraries and a startup script. The script creates an instance of the client control and initializes it with the JSON object received from the server. The further logic varies depending on the purpose of the test.
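Schematically, assuming hypothetical helper names rather than the actual ASPxRichEdit test API, the cycle looks like this:

// a schematic sketch of one test cycle; every helper name here is illustrative
[Test]
public void TestCycleSkeleton() {
    var serverModel = CreateDocumentModel();                // 1. create and configure the server model
    string initialJson = CreateInitialJson(serverModel);    // 2. the session generates the full initialization JSON
    PrepareBootScript(initialJson, "getClientModelState");  // 3. bake the JSON and client actions into the startup script
    int exitCode;
    string consoleOutput = StartPhantomJSNoDebug(PhantomJSPath, "boot.js", out exitCode); // 4. run the client
    // 5. what happens next depends on the purpose of the test (see below)
}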

If we are testing model initialization, the resulting client model is immediately serialized back to JSON and written to the console, and the server code analyzes the console contents and verifies that the client model was created correctly. If we are testing the creation of JSON objects on the client and their interpretation on the server, the client performs the necessary operations, and all the requests, instead of being sent to the server, are again written to the console. The server code then reads the contents of this buffer, modifies the model, and checks that the incoming commands are processed correctly.
The described algorithm can be illustrated with a specific example of an integration test:
[Test]
public void TestParagraphProperties() {
    ChangeDocumentModel();
    // the continuation is a sketch reconstructed from the description below;
    // CheckClientModelState is an illustrative name
    var clientState = RunClientSide(new[] { "getClientModelState" });
    CheckClientModelState(clientState);
}
As you can see, the resulting test code is quite simple. After setting up the server model, we launch PhantomJS. The RunClientSide() function takes an array of actions to be performed on the client (for example, executing commands that change the model, or getting the serialized state of the client model). The result of each action is saved to the output array, for example:
function getClientModelState() {
    var model = control.getModel();
    buffer.push(JSON.stringify(model));
}
Next, the resulting array is serialized to JSON and written to console.log (that is, to the application's output):
function tearDownTest() {
    console.log(JSON.stringify(buffer));
}
Test runner implementation code:
string StartPhantomJSNoDebug(string phantomPath, string bootFile, out int exitCode) {
    StringBuilder outputSb = new StringBuilder();
    StringBuilder errorsSb = new StringBuilder();
    exitCode = -1;
    using (var p = new Process()) {
        var arguments = Path.Combine(TestDirectory, bootFile);
        // ... set up process properties
        p.OutputDataReceived += (s, e) => outputSb.AppendLine(e.Data);
        p.ErrorDataReceived += (s, e) => errorsSb.AppendLine(e.Data);
        p.Start();
        p.BeginOutputReadLine();
        p.BeginErrorReadLine();
        if (!p.WaitForExit(15000)) {
            p.Kill();
            p.WaitForExit();
            Assert.Fail("The PhantomJS process was killed after timeout. Output: \r\n" + outputSb.ToString());
        }
        else
            p.WaitForExit();
        exitCode = p.ExitCode;
    }
    if (!string.IsNullOrWhiteSpace(errorsSb.ToString()))
        Assert.Fail("PhantomJS errors output: \r\n" + errorsSb.ToString());
    return outputSb.ToString();
}
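A call site then looks roughly like this (a sketch; the PhantomJS path and the boot file name are illustrative):

int exitCode;
string output = StartPhantomJSNoDebug(@"C:\tools\phantomjs.exe", "boot.js", out exitCode);
Assert.AreEqual(0, exitCode, "PhantomJS exited with a non-zero code");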
If you need to see under a debugger what happens in the tests, the runner looks like this:
string StartPhantomJSWithDebug(string phantomPath, string bootFile, out int exitCode) {
    StringBuilder outputSb = new StringBuilder();
    StringBuilder errorsSb = new StringBuilder();
    exitCode = -1;
    using (var p = new Process()) {
        var arguments = Path.Combine(TestDirectory, bootFile);
        // the only difference from StartPhantomJSNoDebug: make PhantomJS
        // listen for a remote debugger on port 9001
        arguments = "--remote-debugger-port=9001 " + arguments;
        // ... the rest matches StartPhantomJSNoDebug
    }
    return outputSb.ToString();
}
The resulting JSON is then processed on the server, after which the actual testing is performed: checking the state of the client model, applying the JSON produced by the client code, and checking the state of the server model.
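For the second kind of test (client requests replayed on the server), this step can be sketched as follows; ApplyClientRequest, CheckServerModelState, and serverModel are illustrative names, not the actual test API, and JavaScriptSerializer lives in System.Web.Script.Serialization:

// the client wrote a JSON array with one entry per executed action
string[] results = new JavaScriptSerializer().Deserialize<string[]>(output.Trim());
foreach (string request in results)
    ApplyClientRequest(serverModel, request); // replay each client request on the server model
CheckServerModelState(serverModel);           // ordinary NUnit assertions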
Thus, with the help of PhantomJS, we were able to write integration tests that verify both the initial loading and the subsequent state synchronization of a complex client-server application.