
Today I'd like to talk about the wonderful testing framework TextTest. It is a cross-platform functional testing tool built around a record/replay paradigm. As the name suggests, its approach is text-oriented, which is rather unusual these days, but it makes tests easy to write and easy to read. Besides TextTest itself, we will look at StoryText, which is formally a separate GUI testing tool but becomes much more pleasant to use together with TextTest, and we will also mention a third module by the same author, CaptureMock.
So, how it all began: I needed a cross-platform library for testing a GUI written with Tkinter (the standard Python GUI module), with a theoretically possible migration to another toolkit later. After digging through Google I had almost despaired of finding anything suitable, until I came across a mention of TextTest, which could not only test interface logic on Tkinter but also work with a whole bunch of other GUI libraries. On top of that it contained so many other goodies that I immediately fell in love with it. So let's get started.
Brief summary
Current name: TextTest + the optional modules StoryText and CaptureMock
Old name: PyUseCase
Author: Geoff Bache
First commit: 04/02/2003
Documentation site: sourceforge
Source code: launchpad (the bug tracker is there as well)
License: GNU LGPL v3
As we can see, the project is mature: it is almost 10 years old, and yet it is still actively developed. The author says he works on it full time, and judging by the activity on Launchpad (several dozen commits per month) I am inclined to believe him. Besides him, one more person contributes to the project on a regular basis. In this article I will try to describe, at least in general terms, what they have managed to build over these 10 years.
Main idea
One way or another, the framework helps your application write plain text files that reflect all the important actions the program has taken. On the first run of a test you check that the program's output is correct and mark this set of output files as the "golden copy".
After you change your application, the test is run again and the newly produced set of files is compared with the original one. If everything matches, the test is considered passed. If it does not match, you look at the differences; if they are correct, you mark the new files as the "golden copy". It's that simple.
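Conceptually, the comparison with the golden copy boils down to an ordinary text diff. The sketch below is purely illustrative (it is not TextTest's code, and the function name compare_with_golden is made up), but it shows the idea:

import difflib

# Purely illustrative sketch: the golden-copy check is essentially a text diff.
def compare_with_golden(golden_path, new_path):
    golden = open(golden_path).readlines()
    new = open(new_path).readlines()
    # An empty diff means the test passed; otherwise you review the changes
    # and either fix the program or approve the new files as the golden copy.
    return list(difflib.unified_diff(golden, new, 'golden copy', 'new run'))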
So, what are the ways to create such files with information about the program:
- TextTest can save the program's stdout and stderr to files when it runs.
- You can tell TextTest which files the program generates while it works (for example, logs).
- With CaptureMock you can automatically generate wrappers for individual functions and even whole modules, including the standard Python libraries. For example, you can ask it to log math.fabs, and then every call to this function (its input and output values) will be written to a log.
- StoryText is a set of wrapper classes that transparently wrap your program's GUI toolkit, which makes it possible to record all the actions a person performs on a form, replay them later, and log the application's reaction.
Very simplistically it looks like this: we launch the program, click a button that changes some widget, and exit. When the test is replayed, StoryText launches the program, presses the same button itself and checks that the widget changed in the same way as when the test was recorded.
Such wrappers exist for PyGTK / Tkinter / wxPython / SWT / Eclipse RCP / GEF / Swing, and it is quite possible to extend the list yourself for other libraries.
As you can see, there is a rich set of ways to capture information about the program without changing a single line of its source code. Next I will try to demonstrate, in the most general terms and with examples, how to use this wealth; things should become clearer along the way. It is of course impossible to cover everything in one article, so most of the framework will stay behind the scenes, and the examples are necessarily simplified and short. But I hope you will at least get a general idea of what TextTest can do and perhaps become interested in it.
Installation
I will briefly describe what needs to be installed to work with the framework on Windows, dwelling on a few nuances; I hope that on other systems it is no harder.
For the tests we need to install python 2.6.
Then we download and install texttest 3.24; note that PyGtk is required for the GUI to work (the first time I unchecked it), and do not forget to leave the "StoryText for python" checkbox ticked (it is on by default).
For some reason CaptureMock is not included in that bundle, so we install it separately with the "easy_install capturemock" command (if you do not have easy_install, you can get it from the link; it will be useful to you beyond this article, and you can read more about it here).
Next, for StoryText to work correctly, add an environment variable ("System Properties" \ "Advanced" \ "Environment Variables") named TCL_LIBRARY with the value C:\Soft\Python26\tcl\tcl8.5 (change the path to your own).
Then restart the system so that the new environment variable takes effect.
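If you want to make sure the variable is actually visible after the restart, a quick sanity check from Python (just a suggestion, not a required step) could be:

import os

# Should print the Tcl path you set, e.g. C:\Soft\Python26\tcl\tcl8.5
print os.environ.get('TCL_LIBRARY')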
Create a project
We start TextTest; on the first launch it offers to create a project, and there are no special secrets there. You need to specify a project name, an extension for the configuration files and a name for the project folder, select what to test in the drop-down list (we will not have a GUI program here) and specify the path to the application under test (the script described later in the article can be downloaded from here). We end up with something like this:
Next, go to the folder with the TextTest projects (by default it is C:\Tests) and see what appeared there. Inside we find the file config.cfg, which describes the project settings; it is a text file with an ini-like syntax of "name: value" plus occasional sections. Let's fix it right away by changing the executable value to:
executable:${TEXTTEST_ROOT}/test.py
where TEXTTEST_ROOT is an environment variable that points to the current project directory.
Now you can put test.py next to config.cfg, for greater portability, so to speak. The full list of environment variables is here.
test.py contains three functions and one Tkinter GUI class, which we will test. The function to call and its arguments are chosen by running the script with different command-line parameters: the first parameter names the function, the rest are passed to it as arguments. In order not to repeat by hand everything described below, you can immediately download the final project
here, put the simpletests folder into C:\Tests, and then run the tests from it as you read.
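I have not reproduced test.py in full here, but judging by the parameters used below ("mul", "file", "math"), its bottom part presumably dispatches on the first command-line argument roughly like this (a sketch only; the real file may differ):

import sys

# mul, file_write and formula are the functions defined earlier in test.py
if __name__ == '__main__':
    name = sys.argv[1]       # which function to call: 'mul', 'file' or 'math'
    args = sys.argv[2:]      # the remaining parameters go to that function
    if name == 'mul':
        mul(int(args[0]), int(args[1]))
    elif name == 'file':
        file_write(args[0])
    elif name == 'math':
        formula(float(args[0]))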
First example. Test output to stdout
We will test the mul function from test.py
def mul(a, b):
    print 'params: %s, %s' % (a, b)
    result = a * b
    print 'result = %s' % result
    if result > 0:
        print 'positive'
    elif result == 0:
        print 'zero'
    else:
        print 'negative'
As you can see, the function is elementary: it multiplies two numbers and prints the parameter values, the result and the sign of the result to stdout. To call it, you run test.py with "mul" as the first parameter and two more parameters that become the function arguments.
Create a new test suite through the menu and call it, for example, "Suite_Mul"; create a test "Test_Negative" in it, specify "mul 1 -2" as the command-line parameters, and run it.
Since the test was run for the first time, TextTest will reasonably report that the contents of stderr.cfg and stdout.cfg have changed; after all, they did not exist before. We look at what ended up in them: the first one turned out to be empty, while the second should contain this text:
params: 1, -2
result = -2
negative
Well, that's correct, so click Save, storing the results as the "golden copy". When the test is run again, it passes.
You can try changing the program so that it gives a different result and see what the failure looks like. If you double-click the failed file, highlighted in red, you can clearly see what changed.
I think you can easily add tests for the other two cases, "Test_Positive" and "Test_Zero", for practice (see the sample output below). Then it is worth looking at which files and folders were added to the project, to convince yourself that its structure is logical and all files are easily readable by a human: no binary data, no XML, just plain text.
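For instance, if you give Test_Positive the parameters "mul 2 3" (my own choice of numbers, any positive result will do), then from the mul code above the golden stdout should look like this:

params: 2, 3
result = 6
positive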
Second example. Test output to a log file
We will test the file_write function
def file_write(s):
    f = open('log.txt', 'wt')
    f.write('%s %s-%s\n' % (time.strftime("%H:%M:%S"), s, s[::-1]))
It writes the current time to log.txt, followed by the string from the input parameter and then the same string reversed.
Add a new test suite "Suite_File" and a test "Test_File" in it with the parameters "file HelloWord".
To tell TextTest that the program will generate the file log.txt, add a config.cfg file (by hand, since the rather poor GUI will not help much here) to the newly created Suite_File folder with this text:
[collate_file]
logfile:log.txt
Run it and save the result. Run it a second time. Unless you are the fastest cowboy in the Wild West, you will see that the test failed, because this time log.txt contains a different time. A similar problem comes up quite often when we work with changing data: a log may contain the time, the date, an IP address, a machine name and so on. Fortunately, TextTest has a
means to filter the output files. We will limit ourselves to replacing the real time with 00:00:00, by adding this to the config.cfg created above:
[run_dependent_text]
logfile:[0-9][0-9]:[0-9][0-9]:[0-9][0-9]{REPLACE 00:00:00}
After that, changes in the time will no longer affect whether the test passes. You can run it and check.
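In effect, TextTest filters the collated file before comparing it with the golden copy. The snippet below only illustrates what that rule achieves (it is not TextTest's implementation, and normalise is a made-up name):

import re

# Replace anything that looks like HH:MM:SS with a fixed placeholder,
# so the comparison no longer depends on when the test was run.
def normalise(text):
    return re.sub(r'[0-9][0-9]:[0-9][0-9]:[0-9][0-9]', '00:00:00', text)

print normalise('12:34:56 HelloWord-droWolleH')  # 00:00:00 HelloWord-droWolleH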
Third example. Intercepting a function call
Now we will test the formula function
def formula(val):
    print math.floor(math.fabs(val))
And suppose that, on some whim, we want to know what values go into math.fabs and what it returns. The CaptureMock library will help us here: it can "wrap" math.fabs with its own code, which writes the incoming parameters to a log, executes the original math.fabs, and writes the result to the log as well.
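To make the idea of "wrapping" more concrete, here is a conceptual sketch of what such an interception amounts to. This is not CaptureMock's actual code (CaptureMock records to its own file rather than to a list), just an illustration of the mechanism:

import math

_real_fabs = math.fabs
_call_log = []

def _logged_fabs(x):
    result = _real_fabs(x)                        # call the original function
    _call_log.append('<-PYT:math.fabs(%s)' % x)   # record the input
    _call_log.append('->RET:%s' % result)         # record the output
    return result

math.fabs = _logged_fabs                          # substitute the wrapper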
Add a new test suite "Suite_Formula" and a test "Test_Formula" in it with the parameters "math -4.5". Then we hook up CaptureMock; for this, add a line to the main config.cfg file (the one at the root of simpletests):
import_config_file:capturemock_config
And in the Suite_Formula folder, add a capturemockrc.cfg file, in which we state that for Test_Formula we want to log math.fabs:
[python]
intercepts = math.fabs
[general]
server_multithreaded = False
After that, if we restart the editor, on the Running tab, in its Basic sub-tab, we will find the new "CaptureMock" settings. Switch the setting to "Record", which says that we want to use CaptureMock while recording the "golden copy", and run the test. After the run we will see that, in addition to the standard stderr and stdout, a new file pythonmocks.cfg has appeared with the following content:
<-PYT:math.fabs(-4.5)
->RET:4.5
i.e. the input and output values of the math.fabs function were recorded there.
After the first run, the setting in the "CaptureMock" block can be switched back to "Replay" (which, however, will happen automatically the next time the editor is restarted).
The example is, of course, somewhat contrived, but the functionality is extremely useful. Say you are writing a chat application that sends a lot of service data over TCP/IP. This data needs to be checked somehow, but how? You can hardly log every little thing by hand. With CaptureMock you can easily intercept the functions that send data over the network and log the data passed through them.
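For example, assuming your chat sends everything through a single function of your own, say mychat.network.send_data (a made-up name purely for illustration), the interception entry would follow the same pattern as the math.fabs one above:

[python]
intercepts = mychat.network.send_data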
CaptureMock can do many more interesting things, but the format of the article does not allow covering everything.
A little later, I will publish the final part of the article, where I will show how you can test the GUI and generate reports.
UPD: Continued. Part 2