We constantly experiment with architecture, code, and performance, and we keep adding new functionality. We are gradually wrapping Yii with our own "architectural" layer: sharding, handling temporarily unavailable data, various caches, and much more. Yes, once this work is ready, it will be released as open source.
The purpose of the Continuous Integration (CI) setup described here is not testing. Its purpose is to protect us from destructive changes introduced by refactoring, new functionality, and architectural changes. It also defends us against "bad code", frequently recurring bugs, and botched merges.
For CI we use Jenkins on Debian. Setting it up took me 12 hours until it was fully operational. I do not spend a minute a day on maintaining it: I do not write tests for every little thing and I do not practice TDD. Nevertheless, CI works and saves us from stupid mistakes.
"Let's be more careful", "let's not make mistakes" - I appealed to the developers like that, but it helped only temporarily, and even then not 100%. People make mistakes, forget things, slip up. No, I have not invented a "silver bullet" for web projects, or even a small bullet for Yii - I just figured out how to stabilize my own application. Your application is different from mine, so my methods may not work for you, and they do not have to - I did not build them for your application. If they do work for you, treat it as a miracle or as luck. But the idea behind this kind of CI will work anywhere. Just the idea.
What's the idea
The idea is to regression-check the application for broken functionality without spending N hours a day on tests. Achieving this is simple: if you write one test for an "abstract entity", that test must pass for all of its "concrete" implementations. If you standardize the code so that its different parts implement a few abstractions - say, three - you can cover the whole codebase with three checks. Yes, these are "checks", not tests: I do not test the functionality, I check that "the code runs, does not crash, does not throw fatals". With properly written code, business logic rarely breaks in a way that does not cause a fatal error. At least in our case. We try to write code like this: if the logic works correctly, it works; if not, a FatalException is thrown or some other fatal error occurs. I think this "hard" approach is correct, because otherwise it becomes very difficult to find broken logic.
We standardized the code around the following abstractions: the model (already quite standard in Yii, with a perfectly clear interface of find, save, delete methods), the controller (also quite standard), the action, the component, and the library.
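In other words, one generic check is written against the abstraction and then executed against every concrete implementation. A purely illustrative sketch of the pattern (all names here are hypothetical, not taken from our code):

    <?php
    // One check written against an abstraction, executed against
    // every concrete implementation of it.
    interface Checkable                 // stand-in for "model", "controller", ...
    {
        public function healthCheck();  // "does it run at all, without fatals"
    }

    class UserModel implements Checkable
    {
        public function healthCheck() { /* save / find / delete a default row */ }
    }

    class PostModel implements Checkable
    {
        public function healthCheck() { /* same generic check, different class */ }
    }

    // The CI step: the same single check runs for every implementation.
    foreach (array('UserModel', 'PostModel') as $class) {
        try {
            $entity = new $class();
            $entity->healthCheck();
            echo "OK   $class\n";
        } catch (Exception $e) {        // only fatals/exceptions count as failures
            echo "FAIL $class: " . $e->getMessage() . "\n";
            exit(1);
        }
    }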
While models are simple, we had to tinker with controllers and actions. We decided that no external call (HTTP or console) should ever cause a fatal error (HTTP >= 500): a missing entity means 404, a malformed request means 400, no access means 403. If your controller blows up with a fatal error when it is called with a non-existent id or other broken parameters, that is wrong behavior from the point of view of the HTTP protocol: a user error is 4xx, not 5xx. You should not go fatal on malformed requests; you should give the user a meaningful error explaining what they are doing wrong.
The controller check itself is built on this principle: we construct the module and the controller, invoke the action, and look at what happens. ExceptionPage404 is fine (we did not pass any data in $_GET), but FatalException or a PHP error is bad - the check fails.
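For example, in Yii 1.x this convention might look like the following sketch (the Post model and the action are hypothetical, not taken from our code):

    <?php
    class PostController extends CController
    {
        public function actionView($id)
        {
            $model = Post::model()->findByPk((int) $id);
            if ($model === null) {
                // A missing entity is the user's problem, not ours:
                // answer 404 instead of letting the code go fatal (5xx).
                throw new CHttpException(404, 'The requested post does not exist.');
            }
            $this->render('view', array('model' => $model));
        }
    }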
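A minimal sketch of such a controller check, assuming Yii 1.x and catching the framework's CHttpException instead of our own ExceptionPage404 class (the routes and paths are made up):

    <?php
    require 'framework/yii.php';                              // hypothetical path
    Yii::createWebApplication('protected/config/test.php');   // hypothetical config

    // Hypothetical list of routes to smoke-check; in practice it could be
    // built by scanning the controllers/ directory.
    $routes = array('post/view', 'user/profile', 'cart/checkout');

    foreach ($routes as $route) {
        try {
            // createController() is standard Yii 1.x: it resolves a route
            // into a controller instance and an action id.
            $pair = Yii::app()->createController($route);
            if ($pair === null) {
                echo "FAIL  $route: route could not be resolved\n";
                exit(1);
            }
            list($controller, $actionId) = $pair;
            $controller->run($actionId);
            echo "OK    $route\n";
        } catch (CHttpException $e) {
            // 4xx is expected: we passed no parameters at all.
            echo "OK    $route ({$e->statusCode})\n";
        } catch (Exception $e) {
            // Anything else (FatalException, converted PHP errors, ...) fails the check.
            echo "FAIL  $route: " . get_class($e) . ': ' . $e->getMessage() . "\n";
            exit(1);
        }
    }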
We also standardized the components we write for Yii. In our case, every component is an extension of existing Yii functionality: adding sharding, a global database cache, and so on. Such functionality implements two abstractions at once - the Yii model and our extension component - so it is checked twice: once through all the models, and once separately as a component.
Libraries hold completely standalone functionality, unrelated to Yii, that implements only a specific piece of logic - most often interaction with other services of our service-oriented architecture. Checks and tests for them are a topic for a separate article; I will only say that we check them as separate projects in our CI and run "integration tests" inside the main application.
Implementation, using our project as an example
Our build has 4 steps:
- Deploy, migrations, installing/updating dependencies - this is not related to the idea described above, I just mention that it exists.
- Code quality check
- Interface code verification - implementation of the idea described above.
- A small number of unit tests for frequently recurring bugs (phpunit or selenium + phpunit). They rarely need maintenance or additions - that is why I wrote "I don't spend N hours a day on tests": I spend at most 1 hour a month writing 1 test for 1 annoying bug.
Step One - Deploy
It is checked in 2 variants: migration from the previous version (identical to the current state of production), and deployment from scratch (with automated installation of virtual machines, automatic Puppet configuration, and deployment of the application and the database).
I will not go into detail, since this is beyond the scope of this article and a completely different story.
Step Two - Code Quality
The first thing we check is "php -l" - does everything parse at all; without that, nothing else makes sense. The second thing we look for in the code is prohibited calls: die, var_dump, ini_set, exit. Then we look for the leftovers of a botched merge with plain fgrep: "<<<<<<<", ">>>>>>>", "=======" - such garbage sometimes slips through when someone did not notice a conflict and did not resolve it.
We also use regular expressions to look for the following (a simplified sketch of these checks follows the list):
- Methods that stretch over many screens of code.
- Overly nested code, like "5 nested ifs".
- Overly "loaded" code of the form print(preg_replace('/@/', '%', 'a' . substr(1, 5, $lala) . (int)(bool)$d)); - hard to read, hard to write, and nobody wants to look at it.
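A simplified sketch of this step, assuming the sources live in protected/ and treating the regexes (including the "deep nesting" heuristic) as illustrative starting points rather than our actual patterns:

    <?php
    // Simplified sketch of the "code quality" step. The forbidden-call regex,
    // the merge-marker regex and the "deep nesting" heuristic are illustrative.
    $checks = array(
        'forbidden call' => '/\b(die|var_dump|ini_set|exit)\s*\(/',  // misses die/exit without parentheses
        'merge conflict' => '/^(<{7}|={7}|>{7})/m',
        'deep nesting'   => '/^(\t|    ){6,}\S/m',                   // 6+ indentation levels
    );

    $failed = false;
    $iterator = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator('protected/')                 // hypothetical source dir
    );
    foreach ($iterator as $file) {
        if ($file->getExtension() !== 'php') {
            continue;
        }
        $path = $file->getPathname();

        // 1. Does the file parse at all?
        $out = array();
        exec('php -l ' . escapeshellarg($path) . ' 2>&1', $out, $code);
        if ($code !== 0) {
            echo "PARSE ERROR in $path\n";
            $failed = true;
        }

        // 2. Forbidden calls, merge leftovers, suspiciously deep nesting.
        $src = file_get_contents($path);
        foreach ($checks as $label => $regex) {
            if (preg_match($regex, $src)) {
                echo strtoupper($label) . " in $path\n";
                $failed = true;
            }
        }
    }
    exit($failed ? 1 : 0);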
Step Three - Interface Code Verification
It is divided into several "sub-steps": checking models, controllers, components, universal selenium checks (yes, we have those too - I will tell you a bit about them), and integration tests with the libraries.
I will describe in detail the simplest check and the most interesting one. The simplest is the models.
Any model should be savable, selectable, and deletable - that is exactly what it exists for. Especially for this check, we added to every model a static method that creates a "default valid model": a model that passes validation and can be created, saved to, and removed from the database.
Of course, we did not write 250 such methods for 250 models. We wrote a single method in the parent class: it reads the validation parameters from rules() and fills the fields with valid values. It took me about 2 hours.
As a result, for each model, in a loop, we do something like the following:
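A minimal sketch of how such a parent-class method might look, assuming Yii 1.x active records; the method name createDefault() and the handful of handled rule types are simplifications, not our actual code:

    <?php
    // Hypothetical base class shared by all project models (Yii 1.x).
    abstract class BaseActiveRecord extends CActiveRecord
    {
        /**
         * Builds a "default valid model": walks rules() and fills every
         * attribute with a value that should pass validation. Only a few
         * rule types are handled here to keep the sketch short.
         */
        public static function createDefault()
        {
            $className = get_called_class();        // late static binding, PHP >= 5.3
            $model = new $className();
            foreach ($model->rules() as $rule) {
                $attributes = array_map('trim', explode(',', $rule[0]));
                $validator  = $rule[1];
                foreach ($attributes as $attribute) {
                    switch ($validator) {
                        case 'email':
                            $model->$attribute = 'ci-check@example.com';
                            break;
                        case 'numerical':
                            $model->$attribute = 1;
                            break;
                        case 'length':
                            $max = isset($rule['max']) ? $rule['max'] : 10;
                            $model->$attribute = str_repeat('a', min(5, $max));
                            break;
                        case 'required':
                            if ($model->$attribute === null) {
                                $model->$attribute = 'default';
                            }
                            break;
                        // ... boolean, date, in, and other rule types
                    }
                }
            }
            return $model;
        }
    }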
$model = ModelClass::createDefault();
With this unsophisticated check we accomplished the following: we made sure that the sharding layer works, that the database cache does not interfere with normal operation, that the table in the database has migrated properly, and that the model can be saved into that table (and that the triggers do not fail).
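Put together, the per-model cycle might look roughly like this (a sketch that reuses the hypothetical createDefault() helper above; the class list and paths are made up):

    <?php
    require 'framework/yii.php';                              // hypothetical path
    Yii::createWebApplication('protected/config/test.php');   // hypothetical config

    // Hypothetical list of model classes; it could also be built by
    // scanning the models/ directory.
    $modelClasses = array('User', 'Post', 'Comment');

    foreach ($modelClasses as $class) {
        try {
            // Save, select back, delete: the three things every model must do.
            $model = $class::createDefault();
            if (!$model->save()) {
                throw new Exception('save() failed: ' . print_r($model->getErrors(), true));
            }
            $found = CActiveRecord::model($class)->findByPk($model->getPrimaryKey());
            if ($found === null) {
                throw new Exception('the saved model could not be selected back');
            }
            $found->delete();
            echo "OK    $class\n";
        } catch (Exception $e) {
            // Any exception or fatal error means the health check has failed.
            echo "FAIL  $class: " . $e->getMessage() . "\n";
            exit(1);
        }
    }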
The most interesting part is the selenium checks.
We studied our interface and came to a happy conclusion: it is quite standardized. There are 4 main kinds of user interaction:
- Global page change
- Switching a tab
- Opening a dialog box
- Submitting a form in a dialog box
The first three points were automated very easily: the A and BUTTON tags that change the global page carry a shared CSS class, those that switch tabs have a data-tab attribute, and those that open a dialog have a data-msgbox attribute. It was enough for us to write 3 nested loops: change the page (simply click the button), switch the tab (also a click), open the dialog (again just a click). At each nesting level we check whether the page content changed, whether the content of the tab's div changed, whether the dialog opened. Along the way we collect JS errors from the browser.
Forms were a little trickier. We had to add data-type attributes describing valid data to the form elements: data-type="email", "anyString", "checkboxChecked", "phone", "anyFile", and so on. And that's it! The forms are standardized and we have a common interface to all inputs: into fields marked email we put an email address, into phone fields a phone number, and so on for every field. We submit the form and check that the dialog closed without errors, which means the data was saved. Then we repeat the same thing with invalid data - for example, we put a phone number into the email field - and check that the form was not submitted and the user got an error.
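A rough sketch of those nested loops, using the facebook/php-webdriver client (not necessarily what we used); the .global-nav class, the .msgbox selector, the application URL, and the window.jsErrors collector are all made-up placeholders:

    <?php
    require 'vendor/autoload.php';          // assumes facebook/php-webdriver via Composer

    use Facebook\WebDriver\Remote\RemoteWebDriver;
    use Facebook\WebDriver\Remote\DesiredCapabilities;
    use Facebook\WebDriver\WebDriverBy;
    use Facebook\WebDriver\WebDriverKeys;

    $driver = RemoteWebDriver::create('http://localhost:4444/wd/hub',
                                      DesiredCapabilities::chrome());
    $driver->get('http://app.local/');      // hypothetical application URL

    // Loop 1: global pages. Collect their URLs first so that navigating
    // away does not leave us with stale element references.
    $pageUrls = array();
    foreach ($driver->findElements(WebDriverBy::cssSelector('a.global-nav')) as $link) {
        $pageUrls[] = $link->getAttribute('href');
    }

    foreach ($pageUrls as $url) {
        $driver->get($url);

        // Loop 2: every tab on this page.
        foreach ($driver->findElements(WebDriverBy::cssSelector('[data-tab]')) as $tab) {
            $tab->click();

            // Loop 3: every element that opens a dialog on this tab.
            foreach ($driver->findElements(WebDriverBy::cssSelector('[data-msgbox]')) as $opener) {
                $opener->click();
                if (count($driver->findElements(WebDriverBy::cssSelector('.msgbox'))) === 0) {
                    echo "FAIL: dialog did not open on $url\n";
                }
                // Close the dialog before the next click (Escape is a simplification).
                $driver->getKeyboard()->sendKeys(WebDriverKeys::ESCAPE);
            }
        }

        // Collect JS errors gathered by a window.onerror handler the page is
        // assumed to install (window.jsErrors is a made-up name).
        $errors = $driver->executeScript('return window.jsErrors || [];');
        if (!empty($errors)) {
            echo "JS errors on $url: " . implode('; ', $errors) . "\n";
        }
    }
    $driver->quit();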
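Filling and submitting a dialog form by data-type might then look like this (same assumptions as the previous sketch; the value map and selectors are illustrative):

    <?php
    use Facebook\WebDriver\Remote\RemoteWebDriver;
    use Facebook\WebDriver\Remote\RemoteWebElement;
    use Facebook\WebDriver\WebDriverBy;

    /**
     * Fills the inputs of an open dialog according to their data-type
     * attributes, submits the form, and reports whether the dialog closed.
     */
    function submitDialogWithValidData(RemoteWebDriver $driver, RemoteWebElement $dialog)
    {
        // Map of data-type values to valid sample data (illustrative).
        $validValues = array(
            'email'     => 'ci-check@example.com',
            'phone'     => '+1 555 0100',
            'anyString' => 'health check',
        );

        foreach ($dialog->findElements(WebDriverBy::cssSelector('input[data-type]')) as $input) {
            $type = $input->getAttribute('data-type');
            if ($type === 'checkboxChecked') {
                if (!$input->isSelected()) {
                    $input->click();                    // tick the checkbox
                }
            } elseif (isset($validValues[$type])) {
                $input->clear();
                $input->sendKeys($validValues[$type]);  // fill with a valid value
            }
        }
        $dialog->findElement(WebDriverBy::cssSelector('button[type=submit]'))->click();

        // If the dialog is still on the page, the valid data was not accepted.
        if (count($driver->findElements(WebDriverBy::cssSelector('.msgbox'))) > 0) {
            echo "FAIL: a form with valid data was not accepted\n";
        }
    }

For the invalid-data pass, the same kind of function would put a deliberately wrong value (say, a phone number into an email field) and check the opposite: the dialog must stay open and show the user an error.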
I spent about 1.5 hours adding the attributes to the form fields. And we have quite a few forms - it is simple work, and if you sit down and do it, it does not take long.
With this uncomplicated (or maybe actually quite cunning) method, we checked the entire UI for:
- Fatal errors when opening pages, tabs, and dialogs
- Fatal errors when submitting valid and invalid forms
- That forms are saved with valid data
- That forms return errors with invalid data
Unit tests and selenium tests
Honestly, there are few of them, very few. We add one only when a bug has recurred and the testers once again say "well, it's broken again!"
We never change old, already written tests - we develop the application with backward compatibility in mind. That is necessary not only for the sake of the tests: the application has an API for mobile and desktop clients, and it has to stay backward compatible.
So, what is next?
A bit later we standardized our JS code and covered it with checks as well (thanks to titulusdesiderio for testit - we adapted it to run under Node.js and test our JS there).
Later we also covered the CSS + HTML layout with checks: we detect broken layout by diffing screenshots.
I will write about all of this separately, if you are interested.
P.S. Before scolding me with "this is not testing", "it covers 5% of the functionality" and the like: we do not test. We do exactly a health check.
It is like checking in the store that a light bulb lights up: we do not check how hot it gets, do not measure the emitted light, do not try to screw it into an unsuitable socket - we just check that it turns on. We do the same with the code. A simple method that requires no maintenance.
UPD. The implementation described here is just an example, nothing more. The main thing is the idea of standardizing the parts of the system, the individual entities. Standard "nuts" can all be checked in one way; for non-standard ones you have to come up with something different.