
Static testing, or Saving Private Ryan

The release often sneaks up unnoticed, and any mistake discovered right before it threatens us with a schedule slip, hotfixes, work until morning, and frayed nerves. When such incidents started happening systematically, we realized we could not live like this any longer. We decided to build a comprehensive validation system to save Private Ryan: developer Artem, who went home at 9 pm, or 10, or 11, before the release... well, you get the idea. The goal was for the developer to learn about an error while the changes had not yet reached the repository and he had not yet lost the context of the task.


Today, changes are checked carefully: first locally, then by a series of integration tests on the build farm. In this article we will talk about the first stage of testing: static testing, which verifies that resources are correct and analyzes the code. It is the first subsystem in the chain and accounts for the majority of the errors found.

How it all began


The manual process of checking the game used to begin in QA a week and a half before the release. Naturally, the bugs found at this stage have to be fixed as quickly as possible.

For lack of time to do it properly, a temporary "crutch" gets added, which then takes root for a long time and grows other, not very popular, workarounds around it.
First of all, we decided to automate the detection of obvious errors: crashes, or the inability to perform a basic set of actions in the game (open the store, make a purchase, play a level). To do this, the game is launched in a special auto-play mode, and if something goes wrong, we know about it as soon as the test finishes on our farm.

But most of the errors found by testers and by our automated smoke test come down to a missing resource or incorrect settings of various systems. So the next step was static testing: checking that resources exist, that their interrelations hold, and that settings are valid, all without launching the application. This system was first launched as an additional step on the build farm and greatly simplified finding and fixing errors. But why waste build-farm resources if an error can be caught before the problem code even enters the repository? That is exactly what pre-commit hooks are for: they run just before a commit is created and sent to the repository.
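For Git, such a hook is simply an executable script placed in .git/hooks/pre-commit. Below is a minimal sketch of what one might look like, assuming a hypothetical ci/run_static_tests.py entry point (the article does not specify the actual VCS or hook layout used at Playrix):

#!/usr/bin/env python
# A minimal, hypothetical Git pre-commit hook; the script path below is made up,
# and the real hook and VCS integration may look different.
import subprocess
import sys


def get_staged_files():
    # Ask Git which files are staged for the commit being created
    output = subprocess.check_output(['git', 'diff', '--cached', '--name-only'])
    return output.decode('utf-8').splitlines()


def main():
    changed_files = get_staged_files()
    # Hand the changed files to the static test entry point (assumed name);
    # a non-zero exit code makes Git abort the commit
    return subprocess.call([sys.executable, 'ci/run_static_tests.py'] + changed_files)


if __name__ == '__main__':
    sys.exit(main())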

And yes, we are so cool that static testing before a commit and on the build farm is performed by the same code, which greatly simplifies its support.

Our efforts can be divided into three areas:


A separate task was organizing the launch of tests on the developer's machine. We had to minimize the local execution time (a developer should not wait ten minutes to commit a single line) and make sure that everyone who makes changes has the system installed.

Many requirements - one system


During development there is a whole set of builds that can be useful: with and without cheats, beta or alpha, iOS or Android. Each of them may need different resources, different settings, or even different code. Writing separate static test scripts for every possible build leads to a tangled system with many parameters. On top of that, it is hard to maintain and modify, and each project also grows its own set of crutches and reinvented wheels.

Through trial and error we arrived at a single system in which each test can take the launch context into account and decide whether to run at all, and what and how to check. For a test run we identified three basic properties: the build type, the platform, and the launch mode (a pre-commit hook with a list of changed files, or a full run on the build farm).



Static test system


The system kernel and the main set of static tests are implemented in Python. The basis is only a few entities: the testing context, the test case, and the runner.


The testing context is a broad concept. It stores both the build and run parameters we talked about above and the meta information that the tests fill in and use.

First we need to understand which tests to run. For that, the meta information contains the resource types we are interested in for this particular launch. The resource types are determined by the tests registered in the system. A test can be "associated" with one type or several, and if at commit time it turns out that the files this test checks have changed, the associated resource is considered changed. This fits our ideology nicely: run as few checks locally as possible. If the files a test is responsible for have not changed, there is no need to run it.

For example, there is a description of a fish that references a 3D model and a texture. If the description file has changed, we check that the model and texture it refers to exist. In all other cases there is no need to run the fish check.
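In the code examples later in the article, these descriptions are simplified to a single resources/models.xml in which each model refers to its texture. A made-up fragment of such a file might look like this:

<!-- Hypothetical fragment of resources/models.xml, for illustration only -->
<Models>
    <Model id="fish_01" texture="fish_01.png" />
    <Model id="fish_02" texture="fish_02.png" />
</Models>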

On the other hand, a change in one resource may require checking the entities that depend on it: if the set of textures, which is stored in xml files, has changed, then the 3D models must additionally be checked, because it may turn out that a texture required by a model has been deleted. The optimizations described above apply only locally, on the user's machine at commit time; on the build farm we assume that all files have changed and run all the tests.

The next problem is that some tests depend on others: you cannot check the fish before all the textures and models have been found. So we split the whole run into two stages:


In the first stage the context is filled with information about the resources that were found (in the fish example, with the identifiers of models and textures). In the second stage the saved information is used to simply check whether the required resource exists. A simplified context is shown below.

class VerificationContext(object):
    def __init__(self, app_path, build_type, platform, changed_files=None):
        self.__app_path = app_path
        self.__build_type = build_type
        self.__platform = platform
        # Resource types that are modified and expected in this run
        self.__modified_resources = set()
        self.__expected_resources = set()
        # Files changed in the commit; None when all files are considered changed
        self.__changed_files = changed_files
        # Registered resources: type -> {resource id: resource data}
        self.__resources = {}

    # Accessors used by the tests and the runner below
    @property
    def app_path(self):
        return self.__app_path

    @property
    def build_type(self):
        return self.__build_type

    @property
    def platform(self):
        return self.__platform

    @property
    def changed_files(self):
        return self.__changed_files

    def is_build(self, build_type):
        return self.__build_type == build_type

    def modify_resources(self, resources):
        self.__modified_resources.update(resources)

    def expect_resources(self, resources):
        self.__expected_resources.update(resources)

    def is_resource_expected(self, resource):
        return resource in self.__expected_resources

    def register_resource(self, resource_type, resource_id, resource_data=None):
        self.__resources.setdefault(resource_type, {})[resource_id] = resource_data

    def get_resource(self, resource_type, resource_id):
        if resource_type not in self.__resources or resource_id not in self.__resources[resource_type]:
            return None, None
        return resource_id, self.__resources[resource_type][resource_id]
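A quick usage sketch of this simplified context (the path and identifiers here are made up): the first stage registers what was found, the second stage queries it.

# Hypothetical usage of the simplified context; path and ids are illustrative
context = VerificationContext('/path/to/project', 'production', 'ios')

# Stage one: a test registers the textures it found on disk
context.register_resource('Texture', 'fish_01.png')

# Stage two: another test checks that a referenced texture was registered
texture_id, _ = context.get_resource('Texture', 'fish_01.png')
if texture_id is None:
    print('Texture fish_01.png was not found')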

Having identified all the parameters that affect a test run, we hid all of this logic inside a base class. In a specific test it only remains to write the check itself and the required parameter values.

class TestCase(object):
    def __init__(self, name, context, build_types=None, platforms=None, predicate=None,
                 expected_resources=None, modified_resources=None):
        self.__name = name
        self.__context = context
        self.__build_types = build_types
        self.__platforms = platforms
        self.__predicate = predicate
        self.__expected_resources = expected_resources
        self.__modified_resources = modified_resources
        # Decide right away whether this test should run in the current context
        self.__need_run = self.__check_run()
        self.__need_resource_run = False
        # Declare which resources this test modifies and which ones it expects
        self.__set_context_resources()

    @property
    def context(self):
        return self.__context

    def fail(self, message):
        print('Fail: {}'.format(message))

    def __check_run(self):
        build_success = self.__build_types is None or self.__context.build_type in self.__build_types
        platform_success = self.__platforms is None or self.__context.platform in self.__platforms
        hook_success = build_success
        if build_success and self.__context.is_build('hook') and self.__predicate:
            hook_success = any(self.__predicate(changed_file) for changed_file in self.__context.changed_files)
        return build_success and platform_success and hook_success

    def __set_context_resources(self):
        if not self.__need_run:
            return
        if self.__modified_resources:
            self.__context.modify_resources(self.__modified_resources)
        if self.__expected_resources:
            self.__context.expect_resources(self.__expected_resources)

    def init(self):
        """Called after every test has registered its resources in the context.
        If another test expects a resource this test modifies, the prepare stage
        must run even when the test itself is filtered out for this launch."""
        self.__need_resource_run = self.__modified_resources and any(
            self.__context.is_resource_expected(resource) for resource in self.__modified_resources)

    def _prepare_impl(self):
        pass

    def prepare(self):
        if not self.__need_run and not self.__need_resource_run:
            return
        self._prepare_impl()

    def _run_impl(self):
        pass

    def run(self):
        if self.__need_run:
            self._run_impl()

Returning to the fish example, we need two tests: one finds the textures and registers them in the context, the other checks that every model refers to a texture that was found.

import os
from xml.etree import ElementTree as etree


class VerifyTexture(TestCase):
    def __init__(self, context):
        super(VerifyTexture, self).__init__('VerifyTexture', context,
                                            build_types=['production', 'hook'],
                                            platforms=['windows', 'ios'],
                                            expected_resources=None,
                                            modified_resources=['Texture'],
                                            predicate=lambda file_path: os.path.splitext(file_path)[1] == '.png')

    def _prepare_impl(self):
        # Register every texture file found on disk in the context
        texture_dir = os.path.join(self.context.app_path, 'resources', 'textures')
        for root, dirs, files in os.walk(texture_dir):
            for tex_file in files:
                self.context.register_resource('Texture', tex_file)


class VerifyModels(TestCase):
    def __init__(self, context):
        super(VerifyModels, self).__init__('VerifyModels', context,
                                           expected_resources=['Texture'],
                                           predicate=lambda file_path: os.path.splitext(file_path)[1] == '.obj')

    def _run_impl(self):
        # Check that every model description refers to a texture registered by VerifyTexture
        models_descriptions = etree.parse(os.path.join(self.context.app_path, 'resources', 'models.xml'))
        for model_xml in models_descriptions.findall('.//Model'):
            texture_id = model_xml.get('texture')
            texture, _ = self.context.get_resource('Texture', texture_id)
            if texture is None:
                self.fail('Texture for model {} was not found: {}'.format(model_xml.get('id'), texture_id))

Distribution to projects


Playrix games are developed on our own engine, so all projects have a similar file structure and code written to the same rules. This means many tests are common: they are written once and live in the shared code. For a project it is enough to update its version of the testing system to pick up a new test.

To simplify integration, we wrote a runner that is fed a configuration file and the project-specific tests (more on them later). The configuration file contains the main information we wrote about above: the build type, the platform, and the path to the project.

from configparser import RawConfigParser


class Runner(object):
    def __init__(self, config_str, setup_function):
        self.__tests = []
        config_parser = RawConfigParser()
        config_parser.read_string(config_str)
        app_path = config_parser.get('main', 'app_path')
        build_type = config_parser.get('main', 'build_type')
        platform = config_parser.get('main', 'platform')
        # get_changed_files asks the version control system for the files
        # changed in the commit; outside the hook all files are considered changed
        changed_files = None if build_type != 'hook' else get_changed_files()
        self.__context = VerificationContext(app_path, build_type, platform, changed_files)
        setup_function(self)

    @property
    def context(self):
        return self.__context

    def add_test(self, test):
        self.__tests.append(test)

    def run(self):
        for test in self.__tests:
            test.init()
        for test in self.__tests:
            test.prepare()
        for test in self.__tests:
            test.run()

The beauty of the config file is that it can be generated automatically on the build farm for different builds. But passing settings for every test through this file would not be very convenient, so there is a separate configuration xml, stored in the project repository, which lists ignored files, masks for searching in the code, and so on.

Sample configuration file
[main]
app_path = {app_path}
build_type = production
platform = ios
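As mentioned above, this file can be produced automatically on the build farm. A minimal sketch of how that generation might look (the template and the function are illustrative, not the actual farm code):

# Hypothetical generation of the runner config on the build farm
CONFIG_TEMPLATE = '''
[main]
app_path = {app_path}
build_type = {build_type}
platform = {platform}
'''


def make_config(app_path, build_type, platform):
    # Substitute the parameters chosen for this particular build
    return CONFIG_TEMPLATE.format(app_path=app_path, build_type=build_type, platform=platform)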

Example of setup xml
<root>
    <VerifySourceCodepage allow_utf8="true" allow_utf8Bom="false" autofix_path="ci/autofix">
        <IgnoreFiles>*android/tmp/*</IgnoreFiles>
    </VerifySourceCodepage>
    <VerifyCodeStructures>
        <Checker name="NsStringConversion" />
        <Checker name="LogConstructions" />
    </VerifyCodeStructures>
</root>

In addition to the common part, projects have their own features and differences, so there are sets of project-specific tests that are plugged into the system through the runner setup. For the code from the examples, a couple of lines are enough to run it:

def setup(runner):
    runner.add_test(VerifyTexture(runner.context))
    runner.add_test(VerifyModels(runner.context))


def run():
    # The config lines are kept flush left so that RawConfigParser parses them correctly
    raw_config = '''
[main]
app_path = {app_path}
build_type = production
platform = ios
'''
    runner = Runner(raw_config, setup)
    runner.run()

The rakes we stepped on


Although Python itself is cross-platform, we regularly ran into problems with users' unique environments: the expected version may be missing, several versions may be installed at once, or there may be no interpreter at all. As a result the system works differently than we expect, or does not work at all. We went through several iterations to solve this:

  1. Python and all packages are installed by the user. But there are two "buts": not all users are programmers, and installing packages via pip install can be a problem for designers, and sometimes for programmers too.
  2. A script installs all the necessary packages. This is better, but if the user has the wrong Python installed, collisions can still occur.
  3. Deliver the correct version of the interpreter and the dependencies from the artifact store (Nexus) and run the tests in a virtual environment (a rough sketch follows below).
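A rough sketch of the third option, assuming a Unix layout and a made-up requirements file (delivering the pinned interpreter itself from Nexus is omitted here):

# Hypothetical bootstrap for option 3: isolated environment with pinned dependencies
import os
import subprocess
import venv


def run_tests_in_venv(env_dir, test_script):
    # Create an isolated virtual environment with pip available
    venv.EnvBuilder(with_pip=True).create(env_dir)
    pip = os.path.join(env_dir, 'bin', 'pip')
    python = os.path.join(env_dir, 'bin', 'python')
    # Install the pinned dependencies of the static test system
    subprocess.check_call([pip, 'install', '-r', 'ci/requirements.txt'])
    # Run the static tests with the interpreter from the virtual environment
    subprocess.check_call([python, test_script])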

Another problem is speed. The more tests there are, the longer it takes in total to check changes on the user's machine. Every few months we profile and optimize the bottlenecks: the context has been improved, a cache for text files has appeared, and the predicate mechanisms (which decide whether a given file is interesting to a test) have been refined.
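As an illustration of the caching idea, one way such a text-file cache might look (a simplified sketch, not the production code):

# Hypothetical per-run cache so that files shared by several tests are read once
class FileCache(object):
    def __init__(self):
        self.__contents = {}

    def read_text(self, path):
        # Read each file from disk only on the first request
        if path not in self.__contents:
            with open(path, 'r') as f:
                self.__contents[path] = f.read()
        return self.__contents[path]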

After that, the only remaining problem was how to roll the system out to all projects and get every developer to enable the pre-commit hooks, but that is a completely different story...

Conclusion


Along the way we stepped on plenty of rakes and fought hard, but in the end we got a system that finds errors at commit time, reduced the testers' workload, and made pre-release "the texture is missing" tasks a thing of the past. For complete happiness we still lack a simpler environment setup and optimization of individual tests, but the golems from the CI department are working hard on that.

The full sample code used as examples in this article can be found in our repository.

Source: https://habr.com/ru/post/452926/

