My last two articles were interviews with speakers from the most recent conference. It seemed interesting to me to talk to a man who at one point declined to speak at that conference "because of one small family circumstance". That man is Sergey Teplyakov: an MVP, the author of an excellent book on design patterns, an adherent of TDD, now a developer on the Tools for Software Engineers team at Microsoft and a maintainer of the Code Contracts library.
Under the cut there is a lot of text about conferences, TDD, pair programming, the architecture of Code Contracts, and Habr.
About conferences from the listener's point of view
Sergey, good afternoon. Why do people attend conferences? Why are they worth visiting?
I'll honestly admit that I never attended conferences unless I was speaking at them. But every time I was there, the main thing I took away was inspiration. Maybe that sounds overblown, but conferences and conversations with smart colleagues have always been the best motivation for me: to learn something new, to share something old, or simply to keep developing as a specialist and as a person.
You have read and reviewed a lot of books. What can you advise when it comes to conferences?
It is hard to give a specific answer to such a general question. It is important to understand what you need from a conference: to recharge your batteries? To meet and/or chat with interesting people? To pick up some specific knowledge? If it is the latter, then instead of a conference you should pick up a couple of solid books; if it is one of the first two, then you still need to find out how good and interesting the conference is and decide for yourself whether to go.
How should one evaluate a conference program?
A conference should be evaluated by its speakers: go for the person, not for the topic.
If Andrei Akinshin is speaking, I will go to listen to him no matter what he talks about, whether it is the internals of Stopwatch or a topic that has nothing to do with performance at all, because I am sure the quality will be high and I will learn a lot from it.
OK, and how do you evaluate the speakers?
Pretty simple, actually. If the speaker is known for good talks, go. If the speaker is unfamiliar but a quick search takes you to their blog, GitHub or Habr, you can judge the material for yourself. If the speaker is unfamiliar and nothing comes up, it is better to go to something else.
About conferences from the speaker's point of view
Attending makes sense, then. Can you tell us why people speak at conferences?
I think it is a matter of self-realization, or of confirming a certain level of development: "Yes, I have grown to the point where I am not afraid to stand in front of a lot of people and share my experience, both positive and negative. I am not ashamed to admit what I don't know, because I know enough useful things that are worth sharing." For me, speaking in general, and at conferences in particular, has been such an indicator of my own development. It all starts with a conversation over a cup of coffee, or with joining a new team, when you begin to communicate with colleagues more confidently, sharing experience and defending your opinion. After that come local talks at all sorts of meetups and user groups, and after those it is no longer scary to speak in front of several hundred people.
A separate plus is the validation of your knowledge during preparation. You know the phrase: if you want to learn something, teach it to someone else. Preparing a talk can take 10-20 times longer than the talk itself, and all that time you are purposefully digging into one topic. There are, of course, experts who can tell or write something interesting without a plan, but there are few of them. Off the top of my head I can only recall Chris Brumme, one of the architects of the CLR. He wrote most of his posts straight from his head, and once prefaced one with something like: "I used to pick topics that did not require additional research from me. I sent this article out for preliminary review, because I had no answers to some of the questions." So a conference is also a motivator to study something in more detail and to structure your knowledge.
And how do you realize that "now I know enough useful things, and I really have something to tell"?
As I said, the easiest way is to try the material out on a small, safe audience first. That can be coffee-break conversations, internal company presentations, blog posts or anything else of that kind.
It is very important to validate your knowledge on someone, so that it does not turn out that it is out of touch with reality, or not as deep or interesting as you thought. Another way to validate your experience is GitHub and other open source work. If you fixed a couple of bugs in Roslyn and proposed a design option that was approved by the project's old-timers or other contributors, that should be sufficient evidence of your experience and maturity.
So you don't prepare in detail? Only an outline?
An outline and, obligatorily, speaker notes (although I don't look at them during the talk), and that's all. The difference between giving a talk and writing a post is not that great: introduction, story, conclusion; you need to convey an idea, you estimate what the reader already knows, which places need explaining and which only need a mention. In a post, though, you can link out to something, whereas at a conference saying "guys, first read these three little articles and then come back" won't work.
Do you speak in the States?
Not yet. The priorities are different. Plus, my level of spoken English does not let me speak freely. As I said, I usually do not rehearse everything in advance, I just prepare a plan and talk around it, and for that the language matters a lot. It is hard to speak when you cannot say anything beyond what you prepared, when you cannot even joke. And you still need confidence. We have a tradition at MS: sometimes a team or a project gathers in a room over lunch, and someone tells colleagues about interesting technical things. So far I only speak there.
What is the audience like at those MS gatherings, compared to a conference?
Sometimes there are people in the audience who come to show off and measure themselves against the speaker. Among colleagues that usually does not happen, but at conferences it does. You have to be able to work with such comrades.
I am ashamed to admit it, but I once acted as such a comrade. How do you work with people like that?
You have to understand what the listener wants; perhaps, by asking a question, the person is genuinely trying to figure something out, for example, they are a specialist in an adjacent field, or simply a colleague with a wealth of experience in another language. If you think you can answer the question but it would take long or be complicated, suggest discussing it after the talk. If they push you toward the answer "I don't know", say "I don't know, I'll google it and get back to you". And sometimes you refer to Richter, and disputes start, like "Richter said it differently..." or "that's not how it was..."; such disputes are usually non-constructive, and it is enough to politely defer them until after the talk.
TDD
The classic scheme is test, code, refactoring. As far as I understand from your articles, you don't always follow it. How do you decide when to follow it and when not to? How do you keep from sliding into "oh well, the whole project is already without tests, so I won't write any for this feature either"?
Honestly, I practically do not use TDD in its classic form. I write a lot of tests, but I do not write them before the code. My cycle looks like this:
- We look at the problem and try to see the high-level components. Perhaps we take a pencil and draw boxes with the relationships between them. This lets you see the whole picture and which subcomponents it contains. After that you could sketch a high-level test case, but I usually do without it.
- Then we start to decompose the system bottom-up (before that we tried to break the problem down top-down). We take one of the leaf classes and start thinking about what it really is.
- After that I move on to the implementation, sketching the skeletons of the functions and possibly placing contracts.
- Then I turn to the implementation of a particular function, and only after writing it do I add a test.
It is not test-first at all, but it is more convenient for me to "see" the design in code and then validate its correctness with tests. Growing a design with the help of tests does not work for me; that constant switching between "test" and "code" is very distracting.
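A minimal sketch of that cycle in code (the types and the xUnit test below are invented for illustration, they are not from the interview): first the skeleton of a leaf class with contracts, then the implementation, and only then the test.

```csharp
using System.Diagnostics.Contracts;
using System.Globalization;
using Xunit;

public class Order
{
    public Order(string id, decimal amount) { Id = id; Amount = amount; }
    public string Id { get; }
    public decimal Amount { get; }
}

// The skeleton of a leaf class, with contracts sketched in first.
public class OrderParser
{
    // The body is written next; the test is added only after that.
    public Order Parse(string line)
    {
        Contract.Requires(!string.IsNullOrEmpty(line));
        Contract.Ensures(Contract.Result<Order>() != null);

        var parts = line.Split(';');
        return new Order(parts[0], decimal.Parse(parts[1], CultureInfo.InvariantCulture));
    }
}

// The test validates the design and implementation after the fact, rather than driving them.
public class OrderParserTests
{
    [Fact]
    public void Parse_ReturnsOrderWithIdAndAmount()
    {
        var order = new OrderParser().Parse("A-42;19.99");

        Assert.Equal("A-42", order.Id);
        Assert.Equal(19.99m, order.Amount);
    }
}
```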
Now to the second question, how to keep from sliding. That one is simpler.
A few years ago I was promoting ("selling") unit testing to a customer. And neither then nor now did I focus on the regression component, on design quality and so on. I tried to show how tests help here and now: "OK, we need to add a new feature. But instead of checking that it works by deploying a server and then clicking through 73 places in the UI, why not add a simple unit test that exercises the most complex part of the functionality? Yes, you have to think a bit about how to extract this feature into a separate component, but that is still faster than all that tedium with manual testing."
In other words, if I need to make any reasonably non-trivial addition, I will think about how to pull the new logic out into a separate class or method so that it can be easily tested. When everything is badly neglected this is not always easy and does not always work, but, strange as it may seem, it works for me in most cases.
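A sketch of what that extraction might look like (the domain and names are made up for illustration): the new rule lives in its own class, so a single unit test can exercise it without a server or UI.

```csharp
using Xunit;

// The new logic is pulled into its own class, independent of the server and the UI.
public static class DiscountRule
{
    public static decimal Apply(decimal total, bool isReturningCustomer)
    {
        if (total <= 0) return 0;
        var discount = isReturningCustomer ? 0.10m : 0.05m;
        return total * (1 - discount);
    }
}

public class DiscountRuleTests
{
    // One fast unit test instead of deploying and clicking through the UI.
    [Fact]
    public void ReturningCustomer_GetsTenPercentOff()
    {
        Assert.Equal(90m, DiscountRule.Apply(100m, isReturningCustomer: true));
    }
}
```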
It happens that everything is badly neglected and the new functionality is a variation of the old, with changes spread across a group of existing classes. In that case we think about how to cover all of that with integration tests, and then decide whether wedging tests into the existing code is justified: will there be significant changes in those pieces or not? If we are talking about bad code that we are going to develop actively, I will try to push for more sensible changes before making significant functional ones. If the code is dead and is not really evolving, it will have to be tested by hand.
When is it not worth writing tests? Or do you test everything, down to PowerShell scripts?
There are a number of things that make no sense to cover with tests. PowerShell scripts are one of them. Since PowerShell is inherently tightly coupled with the environment, it is very hard to test scripts in isolation. Besides, PowerShell is a slightly crazy thing, and without real data you simply won't know what to expect from it and whether your script works or not.
There are quite a few things that I do not test directly. If there is a lot of legacy code, then a few integration tests are needed, while unit tests with high coverage should be written for new components. There are things that should be covered by tests, but those will not be unit tests; they will be integration tests that mostly check basic validity. This includes databases and communication layers such as WCF.
As with many other things, you need to understand that tests are a tool, not an end in themselves. And your attitude to the tool needs to be adjusted as you work. If there are a lot of tests and they catch real regressions, that is good. If there are a lot of tests and they constantly break, that is bad. But it is important not to swing from one extreme to the other and not to kill all the tests in one fell swoop, declaring them useless. You need to think about how we test, what we test, and whether our design is good. Typically, fragile tests are a symptom of a more general problem: design problems, high coupling, and tests that check implementation details rather than the behavior of an abstraction.
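An illustration of that last point (a hypothetical class and tests, only to show the contrast): the first test is pinned to an implementation detail, the second checks behavior through the public abstraction.

```csharp
using System.Collections.Generic;
using Xunit;

public class PriceCache
{
    private readonly Dictionary<string, decimal> _items = new Dictionary<string, decimal>();

    public void Put(string sku, decimal price) => _items[sku] = price;

    public decimal? TryGet(string sku) =>
        _items.TryGetValue(sku, out var price) ? price : (decimal?)null;

    // Exposed only "for tests": a typical source of fragility.
    internal IReadOnlyDictionary<string, decimal> Items => _items;
}

public class PriceCacheTests
{
    // Fragile: pinned to an implementation detail (the backing dictionary).
    // Replacing the dictionary breaks it even though behavior is unchanged.
    [Fact]
    public void Put_AddsEntryToBackingDictionary()
    {
        var cache = new PriceCache();
        cache.Put("A-1", 10m);
        Assert.True(cache.Items.ContainsKey("A-1"));
    }

    // Behavioral: uses only the public abstraction, so it survives refactoring.
    [Fact]
    public void Put_ThenTryGet_ReturnsStoredPrice()
    {
        var cache = new PriceCache();
        cache.Put("A-1", 10m);
        Assert.Equal((decimal?)10m, cache.TryGet("A-1"));
    }
}
```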
Pair programming
How does writing tests combine with pair programming? You had a couple of articles about pair programming, but I did not notice a direct connection with tests.
In theory, working in a pair implies a separation of roles, where one person bangs out the tests and the other the logic. Unfortunately, I do not have much experience with classic pair programming in which we would follow that principle.
In practice I have always paired in whatever way was comfortable for both of us. That means the tests were written as we saw fit. Often these were integration tests covering fairly large pieces, after which we worked on the implementation together. We then added the full set of tests one by one.
But you have to understand that if TDD is to some degree a personal matter, then TDD in a pair is a matter for that pair. The pair must decide for itself how to be effective. In my work we were not shy about discussing what was worth doing right now and what was not. There should be no dogma.
How do you negotiate with aggressive partners? With shy, closed ones?
I do not have much experience; after all, pair programming is quite a rare practice in the industry, both in the States and in Ukraine. We have a lot of smart people on our projects, but I am not ready to pair with a random partner. In my opinion, pair programming is inefficient if one person is tidy in the code and the other is not, or one writes quickly and the other is deliberate, or one is aggressive and the other is shy. If I found myself in such a pair, I would try to avoid it. Very few people will suit me purely psychologically. That is specifically about pair programming. If I need to mentor someone, I think it will work with 90 percent of people; I will find an approach to them.
How did you start coding in a pair?
I took part in a hackathon and joined one of my colleagues to build the next version of Code Search, the search engine behind referencesource.microsoft.com; the previous version kept a lot of unnecessary data in memory. I had relevant experience with Elasticsearch, so I jumped in. You know how hackathons are: there is not enough time, a lot to do, there was no time or opportunity to split up the work, and we sat at one computer typing the same code. It turned out well, and since then we have coded some tasks as a pair, since a few months later we ended up on the same project. But once again, this is rare. In Ukraine I never heard of such a thing; at MS I have heard about one team practicing regular pair programming, but that is a drop in the ocean.
It has happened with colleagues that we argued about something a little longer than necessary, too uncompromisingly. Can such discussions be considered pair programming?
That always happens, especially when someone on the team is new. While you are still sizing each other up, pairing will not work. When you have already figured out who is worth what, when you realize "well, yes, a stubborn person, but a thoughtful one; well, yes, I am no gift either, but I also bring value", then you can try. Before that, communication will be quite difficult and there will be little benefit. If you worked with a pair programming pro who adapts very quickly, you could try earlier, but that phenomenon is even rarer than ordinary pairing.
How do you prepare for pair programming?
Not at all. Except that you need to grasp the context before starting work; beyond that, nothing is needed.
Does the language barrier get in the way?
Not at all. A common project, a common programming language, a common subject area. In the most extreme case, you can take a piece of paper and draw something on it.
The tooling barrier?
Yes, that is a classic. My buddy is an R# hater, and I am rather the opposite. Sometimes we laugh about it. You code something, put a curly bracket, and all the alignment snaps in with unfamiliar settings, and he goes "stop, hold on, don't boil over, Ctrl+Z, undo everything...". They seem like little things: different line breaks, brackets, hotkeys, but it is funny to watch the process. On the other hand, it is a good way to learn how someone else uses the tools. Recently I have been hearing phrases like "I don't want to admit it, but I like this R# feature". That is, at first the person disliked R# because of how it slows down on large projects, and now he is starting to appreciate it. We also had differences in our diff and merge tools; it was interesting to see the alternatives. In general, what I like most is the observation along the way, from the approach to design to the tools used. You are not reading a manual, you are watching how a living person copes with the task on the fly. Very convenient.
Code Contracts
You are one of the developers of Code Contracts. How did you start working on the project?
I have long been a fan of contract programming. My love for it began after I got acquainted with Bertrand Meyer's book "Object-Oriented Software Construction". Thinking about design in terms of responsibilities lets you simplify development significantly and makes it cleaner.
The Code Contracts library has existed on the .NET platform for a long time, and I had been using it for years. About a year before moving to MS, I wrote a plugin for R# to simplify contract programming, and I still use it. The plugin catches typical Code Contracts mistakes, lets you add contracts like Contract.Requires(arg != null) and does a number of other useful things. In my first year at MS I made a small tool for Application Insights. It turned out to be useful to Mike Barnett, one of the authors of Code Contracts. Then I moved to another team, and it turned out that this team uses Code Contracts heavily and is one of the largest users of Code Contracts inside MS. After the release of VS 2015, we could either abandon Code Contracts or fix the library so that it supported C# 6.0. As a result, I spent a month or two actively developing Code Contracts; it was during that period that I was most active on GitHub. In addition to new features, a number of things were fixed that Code Contracts had never properly supported, in particular postconditions in asynchronous methods. After that I became one of the maintainers of the library. I still work on the project in my spare time, but I put in much less time than I would like.
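For readers who have not used the library, typical usage of its API looks roughly like this (the class is invented for illustration; the async postcondition is the scenario mentioned above):

```csharp
using System.Diagnostics.Contracts;
using System.Threading.Tasks;

public class KeyStore
{
    public string Load(string key)
    {
        Contract.Requires(!string.IsNullOrEmpty(key));        // precondition
        Contract.Ensures(Contract.Result<string>() != null);  // postcondition

        return key.ToUpperInvariant();
    }

    // Postconditions in async methods are the case mentioned above: the rewriter
    // has to check the result of the returned Task rather than the Task itself.
    public async Task<string> LoadAsync(string key)
    {
        Contract.Requires(!string.IsNullOrEmpty(key));
        Contract.Ensures(Contract.Result<string>() != null);

        await Task.Yield();
        return key.ToUpperInvariant();
    }
}
```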
Why?
Other priorities.
A new project?
New, but related to Code Contracts.
I do not follow.
Code Contracts has several serious problems that make development very difficult. The project is large and complex: hundreds of thousands of lines of rather tangled code. In addition, the library is designed to analyze and modify IL code without being tied to a specific .NET language or compiler.
The library contains three main components:
- The infrastructure (Common Compiler Infrastructure, CCI), which decompiles IL into an object model (an abstract syntax tree, AST). The resulting AST is mutable and low level: no closures, no async, no iterator blocks.
- The static verifier, CC Check, which verifies the validity of contracts during the build. That is, it analyzes the AST, and if it sees a method with a NotNull "annotation", it tries to prove that null can never be passed to that method.
- The runtime IL rewriter, CC Rewrite, which transforms the dll files during the build. It converts calls to the Contract class into various constructs: throwing an exception, calling Debug.Assert, or simply deleting the whole statement. This is a separate utility, not connected to the verifier.
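A simplified before/after sketch of what the rewriter does to a precondition (the real output calls into the contracts runtime, throws the library's own exception type, calls Debug.Assert, or disappears entirely, depending on the configured checking level; the exception below is only a stand-in):

```csharp
using System;
using System.Diagnostics.Contracts;

public class Message { public string Body = ""; }

public class Mailer
{
    // What the developer writes:
    public void Send(Message message)
    {
        Contract.Requires(message != null);
        // ... actually send the message ...
    }

    // Roughly what CC Rewrite puts into the rewritten dll when runtime
    // checking is enabled (simplified, see the note above):
    public void SendRewritten(Message message)
    {
        if (!(message != null))
            throw new InvalidOperationException("Precondition failed: message != null");
        // ... actually send the message ...
    }
}
```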
This approach has an important characteristic: different compilers, and even different versions of the same compiler, generate different IL for the same syntactic constructs. And CCI provides only a thin wrapper over IL: there are no lambda expressions, iterators or asynchronous methods in it. You see that an object of some class is created, but you do not know whether it is a real class written by a programmer or one generated by the compiler as a result of using a lambda expression. More precisely, you can find out, because compilers use special code generation patterns. So ccrewrite analyzes the code, looks at type names and determines from them whether it is a closure class, a class for an iterator or an asynchronous method, or something else.
At the same time, these patterns change between compiler versions. For the same lambda, C# 5.0 generated a static field with a delegate, while in C# 6.0 a new kind of closure class appeared.
The developers of the C # language found out that the instance methods are cheaper than the static ones, so now for “non-catching” lambda expressions a singleton is generated instead of a static variable.A completely insane example can be given for asynchronous methods. The C # 6.0 compiler generates a different code depending on the number of await operators in the method: if there are no awaits, one will result in one code, if only one await is another, and so on.. — , Rewriter. — : , , , .
In general, the AST generated by CCI is too low-level, which translates into overly complex maintenance: with each new version of the language or compiler you have to do more and more difficult and thankless work.
Another problem is the negative impact of CC Rewrite on build time. CC Rewrite is non-deterministic: take one DLL, run CC Rewrite over it twice, and you are guaranteed to get two different binaries. For modern build systems this is a very big problem; in effect, caching and incremental builds are lost. If you use CC Rewrite and change one file, you are guaranteed to rebuild the entire solution; if you do not, you can rebuild only one project and its direct consumers. On hundreds of projects, using CC Rewrite increases build times several-fold.
On top of that, CC Rewrite patches the pdb files while rewriting, and it does not always do so correctly, which hurts debugging. Just recently a colleague of mine simply could not debug without turning CC Rewrite off: the call stack was shifted, local variables were wrong, the whole debugging experience fell apart.
Because of these problems, a colleague and I are currently working on a source-level rewriter (SL Rewriter). That is, we do not operate on IL, we transform the high-level code. Among the drawbacks: it is a utility for C# only.
So you take the Roslyn tree, change it and hand it to the compiler? An ordinary fixer?
Exactly. Roslyn provides exactly the mechanism we need: we take a Workspace, find all the references to contracts, and insert the fragments we need.
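A toy illustration of that idea (this is not the actual SL Rewriter, which is internal; it only shows the kind of Roslyn transformation being described): find `Contract.Requires(...)` statements in a syntax tree and replace them with plain if/throw statements at the source level.

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

// Rewrites statements of the form `Contract.Requires(condition);` into
// `if (!(condition)) throw ...;` purely at the source level; no IL is touched.
public sealed class RequiresRewriter : CSharpSyntaxRewriter
{
    public override SyntaxNode VisitExpressionStatement(ExpressionStatementSyntax node)
    {
        if (node.Expression is InvocationExpressionSyntax invocation &&
            invocation.Expression is MemberAccessExpressionSyntax member &&
            member.Expression.ToString() == "Contract" &&
            member.Name.Identifier.Text == "Requires" &&
            invocation.ArgumentList.Arguments.Count >= 1)
        {
            var condition = invocation.ArgumentList.Arguments[0].Expression.ToString();
            var replacement =
                $"if (!({condition})) throw new System.ArgumentException(\"Precondition failed: {condition}\");";
            return SyntaxFactory.ParseStatement(replacement).WithTriviaFrom(node);
        }

        return base.VisitExpressionStatement(node);
    }
}
```

Such a rewriter would be run over each document's syntax root (for example, `new RequiresRewriter().Visit(root)`) before the transformed tree is handed to the compiler.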
If you use Roslyn, are you limited to C# 6.0?
Pretty much. Actually, we are more concerned with integration with the compiler, but that is a topic for a separate conversation.
Where can one find the Source Level Rewriter?
For now it is an internal product that two people are working on. It is not ready yet, but as soon as it works correctly for our team, we will share it.
Any predictions about performance? Code Contracts is often blamed for slowness these days.
Look, CC Check is very slow. To make it work properly you have to put in a lot of annotations, and a build of a hundred thousand lines of code will take minutes, or even tens of minutes. Then you still need to analyze the output. In short, from a developer's point of view it is a slow and inefficient tool. More precisely, the idea is good, but the problem is very hard and is solved slowly. I know of literally a few teams actively using it. My team abandoned it about two years ago, and nobody wants to go back.
Now, CC Rewrite. As I said, it significantly affects the build process. There are known tricks, such as disabling it locally and using it only on the build machine, but the slowdown is still there.
Much more important is the influence on runtime. With contracts it is very easy to breed extra checks. If you have heavy invariants on a class, checks are added to the beginning and end of every public method. Or contracts on a collection mean you have to iterate the entire collection, which is especially noticeable if you have a chain of five methods each walking the collection. Or the recursion check that limits how deep contracts can call into contracts: it uses a thread-local access, which adds overhead. Just by turning off that check, we improved end-to-end performance by 15%. There are not too many details like this, but you should know about them.
On the other hand, we are currently building a build engine: it is system software with a considerable code base. We have to configure contracts for minimal impact on runtime performance, but we do not want to give up contracts: the contract messages in the crash dumps we receive provide a lot of useful information, and achieving that by other standard means is much harder.
In addition, it must be said that for most applications the runtime overhead will be minimal.
So for small projects the tool can be used without any tuning?
For small ones, yes. For medium-sized ones, you can keep only the precondition checks on public methods, and then the impact will be minimal. For key functionality that is carefully covered by tests, you can configure the build to strip all contracts from the runtime version. The tool is configurable, and it is quite possible (and worthwhile) to use it. Only large projects will have to tune it.
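To make the point about heavy invariants concrete, here is a sketch (the class is invented for illustration) of how an invariant over a collection multiplies work: runtime checking injects calls to the invariant method into the class's public surface, and every such call walks the whole list.

```csharp
using System.Collections.Generic;
using System.Diagnostics.Contracts;

public class MessageLog
{
    private readonly List<string> _lines = new List<string>();

    // A "heavy" invariant: every injected invariant check walks the entire
    // collection, so a chain of public calls re-scans the list again and again.
    [ContractInvariantMethod]
    private void Invariants()
    {
        Contract.Invariant(Contract.ForAll(_lines, line => line != null));
    }

    public void Append(string line)
    {
        Contract.Requires(line != null);
        _lines.Add(line);
    }

    public int Count => _lines.Count;
}
```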
About Habr
Why did you come to Habr in the first place?
Wow... that was a long time ago. I came to Habr in 2010, when rsdn was already dying and dou.ua already existed but was not yet so popular. Habr was actively developing at the time, with many deep technical posts and, most importantly, a high signal-to-noise ratio. It was interesting to read and interesting to write. Publishing there was the easiest way to discuss some technical problem. Yes, not all the comments were equally useful, but at that moment it was interesting to me.
Why did you leave Habr?
First of all, I have changed and my interests have changed. I continue to write in my blog, often choosing more philosophical topics, and a certain audience has formed there that knows me and my interests and that I enjoy. Writing in parallel in the blog and on Habr does not feel quite "right", and after my last "parallel" posts collected more comments on the blog, I practically stopped writing on Habr.
I still read Habr, but more often via social networks and other channels. The noise level when reading via RSS is too high for me: apparently I am not interested in everything that interests the Habr audience, which has become even more diverse. At the same time, I still regularly stumble upon excellent articles: thoughtful, deep and interesting.
: . - — . , : 11 .