I read about BDD and realized one thing: BDD is blah-blah-blah. It has no proper definition. Here, for example, is one definition:
BDD combines the basic techniques and practices of TDD with ideas from DDD in order to give programmers, testers, analysts, and managers a common process for interacting while they develop software.
All clear? Not to me. So instead I will tell you what we actually do and why, out of the things that might count as BDD.
When we start planning a feature, we describe it in terms of "system behavior", for example:
Given a Jira Issue with several bugs
When all those bugs are fixed
Then the Issue is closed
We write this directly on the board during planning, together with the Product Owner, in just enough detail that the customer (the Product Owner), the testers, and the developers can all understand what we should build and how to test it. In essence these are requirements/tests, recorded in this "short" (and, notice, not entirely clear) form, by the whole team, without spending much time on it. And no further documentation, just these words. I understand this is a rather "wild" setup and things are not always so simple, but in any case we try to isolate the business problem and write down the most concise solution, without going into details. The details come later: they can be negotiated at planning or clarified with the PO during development (yes, our customer answers questions quickly), but they are not documented and not tested, and nobody would read them anyway.
Just do not confuse a description of behavior with system documentation. Behavior can (and will) change. Today this thing does one thing, tomorrow another. The behavior description above becomes outdated with every sprint/iteration; you cannot rely on it to find out, some time later, how a feature actually works. That is what system documentation is for: it describes all the current functionality. No, we do not have such documentation; we keep everything in our heads and in the code. And so far, personally, I have felt no inconvenience. And what happens if one of those heads gets hit by a bus? I asked that question too. The answer I got: it will be bad, but we do not believe documentation would save us; nobody would ever read all of it, and it would hardly stay relevant enough to give answers the way a head does.
Now about the tests. There is a wonderful thing called Cucumber (and similar tools, such as Concordion). It lets each plain-text line (such as "all those bugs are fixed") have its own method in .NET, Java, Ruby, and other programming languages, and even lets you extract variables with regular expressions so that methods can be reused.
And thanks to Cucumber, all the tests take a wonderfully readable form, reflect the current state of the system, and can serve as a kind of system documentation; on top of that, they are already written at planning!
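To make that concrete, here is a minimal sketch of what Cucumber-JVM step definitions for the scenario above might look like. The io.cucumber.java.en annotations are the real Cucumber-JVM API; issueHelper(), workflowHelper(), and their methods are hypothetical stand-ins for a project's own test API, and the bug count is turned into a captured regex variable to show the reuse:

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

public class IssueSteps {

    private String issueId; // shared between steps through the step-definition object

    @Given("^a Jira Issue with (\\d+) bugs$")
    public void issueWithBugs(int numberOfBugs) {
        issueId = issueHelper().createIssue(); // hypothetical helper
        for (int i = 0; i < numberOfBugs; i++) {
            issueHelper().createBug(issueId);
        }
    }

    @When("^all those bugs are fixed$")
    public void allBugsAreFixed() {
        for (String bugId : issueHelper().bugsOf(issueId)) { // hypothetical helper
            workflowHelper().doAction(bugId, "fixed");
        }
    }

    @Then("^the Issue is closed$")
    public void issueIsClosed() {
        assertThat(issueHelper().getField(issueId, "status"), is("Closed"));
    }
}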
Everything seems fine, but we do not use Cucumber, because:
- Our test methods (the ones in the test API, as opposed to the ones with the @Test annotation) take complex values/objects (Map) as input, which are hard to write out in plain-text form and parse with regular expressions.
- Test methods return and use variables: for example, one returns the Primary Key of a created entity, and that PK is then used in other methods. And although "variables" can be supported in a plain-text language, it is still inconvenient and a big disadvantage.
- We use expressions as variables: for example, different builders that produce dates like "last Sunday" or "next Monday". By the way, we generate random test data wherever possible and base all the checks on it, so we use expressions and variables a lot.
- Context. Most of our actions are tied to the current user session. When we needed to create two parallel sessions, it caused no problems. With Cucumber, we would have had to introduce entirely new phrases/methods that work both with an explicit session and without one.
- Checking data. For this we use Hamcrest, and with just a few matchers we can express a pile of diverse and demanding conditions. With Cucumber, this would require a separate method with one line of code for each combination of matchers, and in the worst case one method per combination of matchers and tested value (see the sketch after this list).
- Refactoring. I refactor tests much more than production code (that is just how it works out), and the IDE helps me a lot by automating rather complex operations. How do you refactor a plain-text language? And what happens when there used to be one method, the code changed, and now it has to be split into two, with different variables or different names? Can that be done automatically across the whole base of plain-text scenarios?
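To make the list concrete, here is the promised rough sketch of the style these points describe. Every helper here (issueHelper(), randomText(), dates()) is a hypothetical stand-in; the Hamcrest matchers are the real ones:

import java.util.HashMap;
import java.util.Map;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.hasEntry;
import static org.hamcrest.Matchers.not;

public class TestApiStyleSketch {
    public void createAndCheckIssue() {
        // complex input: a Map of fields, with random data and a date expression
        Map<String, String> params = new HashMap<>();
        params.put("summary", randomText());          // random test data
        params.put("dueDate", dates().nextMonday());  // an expression, not a literal
        // the method returns the Primary Key of the created entity...
        String issueId = issueHelper().createIssue(params);
        // ...which is reused here, and a few matchers combine several conditions
        // that Cucumber would need a separate phrase/method for
        assertThat(issueHelper().getFields(issueId), allOf(
                hasEntry("status", "Open"),
                not(hasEntry("assignee", ""))));
    }
}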
Instead of spending effort on developing plain-text scripts for Cucumber, we write them in Java (like the rest of the project) and spend the effort on keeping the code clean and readable.
The above is not criticism of Cucumber: it is an excellent and necessary thing. It is just that in our case it would create more problems than it solves (my subjective opinion).
Consider the code sample that corresponds to the above description of the behavior:
String issueId = issueHelper().createIssue();
List<String> bugIds = new ArrayList<>();
// create a random number of bugs attached to the issue
int numberOfBugs = Random.getNumberBetween(1, 5);
while (numberOfBugs > 0) {
    bugIds.add(issueHelper().createBug(issueId));
    numberOfBugs--;
}
// fix every bug...
for (String bugId : bugIds) {
    navigator().goTo(bugId);
    workflowHelper().doAction("fixed");
}
// ...and check that the issue closed
navigator().goTo(issueId);
assertThat(getField("status"), is("Closed"));
When executing the test, this code displays the following:
Create Issue with arbitrary parameters (Key of new issue HR-17)
Create Bug on issue HR-17 (Key for new bug BG-26)
Create Bug on issue HR-17 (Key for new bug BG-27)
Go to the page on the key BG-26
Execute fixed action
Go to the page on the key BG-27
Execute fixed action
Go to the page on the key HR-17
Check that the value of the status is "Closed"
Such test output is very similar to the Cucumber scenarios from my previous project, and analysts/managers can read it (I checked). They can even do a kind of "code review" of how well the log matches the original BDD description.
That is how we get "BDD the other way around."
Sometimes analysts or developers do not like something in the course of such a review; then we can refactor the code in just a couple of clicks, for example by merging the "Go to the page by key" and "Execute action" methods into one.
You may have noticed that the test code contains no comments or special calls that could produce such a log. That is no accident.
All the log output is built on AspectJ, which lets you intercept any method call and wrap it with your own code. In our case, we log the method's description taken from its JavaDoc, substituting into it the values of the parameters the call was made with and, where possible, the method's return value.
We do the same thing as Cucumber, only exactly in reverse. In Cucumber, each method is associated with a regular expression (a pattern) used to extract variables from a text line; in our case, each method is associated with a template into which we substitute the variables to produce a line of the output log.
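In rough outline, such an aspect might look like the sketch below. The AspectJ annotation API is real; the pointcut package and the descriptionFor() lookup into JavaDoc descriptions parsed at build time are assumptions for illustration:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class StepLogAspect {

    // intercept every public method of the test API (illustrative pointcut)
    @Around("execution(public * com.example.testapi..*(..))")
    public Object logStep(ProceedingJoinPoint jp) throws Throwable {
        Object result = jp.proceed(); // run the intercepted method
        // descriptionFor() is a hypothetical lookup into descriptions extracted
        // from JavaDoc at build time, e.g. "Create Bug on issue {0} (Key for new bug {result})"
        String template = descriptionFor(jp.getSignature().toLongString());
        System.out.println(substitute(template, jp.getArgs(), result));
        return result;
    }

    // substitute parameter values and the return value into the template
    private String substitute(String template, Object[] args, Object result) {
        String line = template;
        for (int i = 0; i < args.length; i++) {
            line = line.replace("{" + i + "}", String.valueOf(args[i]));
        }
        return line.replace("{result}", String.valueOf(result));
    }

    private String descriptionFor(String signature) {
        return signature; // placeholder: the real table comes from parsed JavaDoc
    }
}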
What is the point of all of this?
From the point of view of BDD, documentation, and collaboration all together in one embrace: I do not know.
But I know for sure that a beautiful test log does make sense:
- It greatly helps to improve the quality of the test code. Really, it does. But only if you log the descriptions of method calls rather than forcing output by hand with log("I did something"). Then you have to extract methods that correspond to business concepts, and, look at that, the code gets less duplicated and better structured. An analyst with no programming experience can watch over this: he will tell you whether your code operates on business concepts or presses some strange buttons, i.e. has slid down to the low level of interface checkboxes. The tests come out shorter and clearer as a result.
- It lets you understand why a test failed. We can log both calls to the top-level methods, i.e. the ones written in the test scenario, and calls to internal methods. And not just the calls, but all the parameters and return values too.
- Sometimes it is easier to understand what a test does from the log than from the code. I am shocked by this myself: you try, you write beautifully, you refactor, and still, after a while, the code is not clear, and the "Russian" log helps. Well, that just means there is still room to improve the code. Sometimes, by the way, it helps to look at the BDD description, which we copy from the board into an annotation on every test. Those descriptions go into the test report, because they are smaller and clearer still. This convinces me once again that keeping such descriptions separate from the test scenario was the right decision.
- It makes you write at least some JavaDoc for the test API. That, in turn, makes writing tests easier, especially when more than one person writes them. By the way: when using Cucumber, can Ctrl+Space give you hints about the available methods, with descriptions of their parameters?
- You can get distracted and learn Aspect Oriented Programming along the way. As an idea for the future: I want to make it so that when calling doSomething(getSomething()), the log of doSomething() shows not just the bare result of getSomething() (for example, just 5), but that result substituted into the description from getSomething()'s JavaDoc, so that it is clear what this result (5) means and where it came from. I will definitely get around to this the next time I am thoroughly fed up with everything.
If anyone is interested in the technical details (how to parse JavaDoc, how to work with AspectJ), write a comment and I will prepare a separate post about it. In it I will also tell how to stuff a table of test data into JavaDoc in plain-text form (copied from Excel) and have the test method called once for each row of the table. This is exactly how Cucumber works, and analysts like it, but I do not see the charm of it myself. What do you think?
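Purely as an illustration of that last idea (the runner that extracts the table from the JavaDoc and feeds the rows to the method is hypothetical and not shown here):

/**
 * Executes a workflow action and checks the resulting status.
 *
 * action | expectedStatus
 * fixed  | Closed
 * reopen | Open
 */
public void checkAction(String action, String expectedStatus) {
    workflowHelper().doAction(action);
    assertThat(getField("status"), is(expectedStatus));
}
// A custom runner (hypothetical) would read the rows out of the JavaDoc table
// and call checkAction once per row, much like Cucumber's Scenario Outline
// with an Examples table.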
The moral of this story: experiment, write tests, and may the log help you.
