At first everything was simple. Youth, enthusiasm. A handful of programmers hacked away at the project. As soon as a piece of code was ready, its author copied it to a shared virtual machine; now and then someone nudged the admin to install a package or fix a config. At some point everyone agreed it was time for a release. First a backup; then the most senior developer gathered all his willpower, copied the project onto the production server and, with the admin's help, got it working there. The team waited two days, made sure no queues of grateful users with hatchets were forming, and, proud of a job well done, went out for beer.
Then everyone matured a bit. Redmine/jira/etc, git/svn, jenkins, sphinx-docs/rubydoc/doxygen/etc, a wiki and unit tests appeared and somehow came into use. Subprojects emerged, the staging environment ("the stand") grew, and there were now several production servers. The admin picked up salt/puppet/etc, set up monitoring, sat in his den like a spider, edited configs on the salt-master and ran state.highstate from there.
Life
And this is the right time to sit down and think a little about life (of the project).
There are only seven stages in the life cycle.
- Conceptual design. At this stage, you need to understand what to do.
- Architectural design. At this stage, you need to understand how to do it.
- Implementation. This is direct coding and unit testing.
- Verification. Check that the program performs all its intended functions.
- Validation. Check that the program can actually be used. This does not automatically follow from the previous point.
- Commissioning. It usually includes rolling out the release, data migration, user training.
- The operation itself.
- Decommissioning.
Eight, actually. Everybody forgets the last item, but it is very important too (and not only for nuclear power plants). For a software project it means taking care of the data: at the stages before commissioning, make sure that all the necessary data can be extracted from the system; at the decommissioning stage, make sure it actually has been extracted.
This is the basic scheme adopted in systems engineering. Depending on scale, industry specifics and the PM's religious beliefs, the stages can be renamed, merged or, on the contrary, split further, but any sane process can be mapped onto this scheme. If the team practices agile, the scheme describes the life cycle of an individual story.
What is this all about? In this context, configuration management is the process of keeping the product in a consistent state. It begins somewhere around the end of the first stage and ends only with the death of the project. Moreover, if the process is neglected, that death can be sudden.
What could break?
Library versions. The team got together, sketched a class diagram, agreed to use libcrutch. One half of the team had been sitting on libcrutch-1.0 for ages; the other had only just heard of it and downloaded libcrutch-2.0 from the Internet. And this surfaces only during integration testing. You can even catch a bug on the difference between libcrutch-1.2.14 and libcrutch-1.2.15. Any LD_PRELOAD tricks or docker only add fuel to the fire. Even if the project is all microservices, the services may still exchange data produced by libcrutch, whose format differs between versions.
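A cheap first line of defense against this kind of drift is to keep a pinned list of component versions next to the project and diff it against what is actually installed. A minimal sketch (libcrutch, libbase, and both file names are made up; in real life the "installed" list would come from `rpm -qa`, `dpkg -l`, or `pip freeze`):

```shell
# Hypothetical pinned list, kept in git next to the project:
cat > pinned-versions.txt <<'EOF'
libcrutch 1.2.14
libbase 1.14.4
EOF

# What is actually on the machine (stubbed here; normally queried
# from the package manager):
cat > installed-versions.txt <<'EOF'
libcrutch 1.2.15
libbase 1.14.4
EOF

# Any drift fails loudly, before integration testing does:
if ! diff -u pinned-versions.txt installed-versions.txt; then
    echo "version drift detected" >&2
fi
```

Run on every stand, this catches the libcrutch-1.2.14 vs. 1.2.15 case above long before users do.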
Component version mismatch. Some people work on libbase, others on libManagementFacade. Along the way it turned out that libbase-1.14.3 had a small but treacherous bug. They talked it over, fixed it, forgot about it. Testing was done against libbase-1.14.4, but the release shipped with libbase-1.14.3.
Changes to the environment configuration. One POST request suddenly started taking a long time. They looked into it, decided it wasn't that important, let it run. The admin increased nginx's timeout for the backend response on the stand and forgot about it. The release rolled out, and the same bug had to be caught all over again, this time in combat conditions.
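The change in question looks something like this: one line in the stand's nginx config that never made it into production (a sketch; the location, upstream name and timeout value are made up):

```nginx
location /api/ {
    proxy_pass http://backend;
    # Raised from the 60s default on the stand; never propagated
    # to production, so the request times out there again.
    proxy_read_timeout 300s;
}
```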
Changed design decisions. The project started out on Windows; then, imbued with the ideas of RMS, the team decided to switch to Ubuntu, but not everyone got the message. When it came time to assemble the release, everyone brought deb packages, and the one who had been out of the loop brought an .exe.
Loss of functionality users care about. A new version arrived; there was much talk of the redesign, the new frameworks, the advanced algorithms. The users listened, nodded, and said: "That's all very nice, but you once made us a form at our request. It used to be the fifth sub-item under the third menu item. Where is it now?" Lost in some merge request.
What to do
Programmers are very lucky to have git. It takes the brunt of the blow and asks only a little of them:
- Identify all the components the project needs to function and make sure they are properly versioned. To a first approximation, a configuration is a list of components and their versions.
- Understand how the configuration is transferred from the stand to production.
- Start managing requirements. Strictly speaking, requirements management is a separate process. Within configuration management, you need to make sure that every component included in the release carries documentation that accurately describes the requirements on it and their statuses: fulfilled, not fulfilled, partially fulfilled, fulfilled with reservations.
- In any case, each component should have documentation describing what it does and how.
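The "list of components and their versions" from the first item can be as simple as a manifest file assembled at release time. A sketch under made-up names (the VERSION-file convention is illustrative; in a real setup the version would come from something like `git describe --tags` in each component's repository):

```shell
# Fake component checkouts, just for the sketch:
mkdir -p libbase libManagementFacade
echo 1.14.4 > libbase/VERSION
echo 2.3.1  > libManagementFacade/VERSION

# The configuration, first approximation: components and versions.
for component in libbase libManagementFacade; do
    printf '%s %s\n' "$component" "$(cat "$component/VERSION")"
done > release-manifest.txt

cat release-manifest.txt
```

The manifest itself then goes into git alongside the release, so "which versions shipped" is never a matter of memory.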
At the end of the conceptual design stage, when the domain experts say, "We need such-and-such a system!", the techies answer in unison, "We'll build it!", and the managers give the go-ahead, "We'll allocate the resources, do it!", you need to make sure that the agreed description of the system is extracted from the experts' heads, broken down into requirements, and captured in documentation. During development this description will change, so make sure it is versioned. A good option, if it's text, is to keep it in git.
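A minimal sketch of keeping that description in git with versioned, agreed-upon revisions (the repository name, file contents and tag name are all made up):

```shell
# A repository just for the system description:
mkdir -p concept-docs
git init -q concept-docs
git -C concept-docs config user.email "dev@example.com"
git -C concept-docs config user.name "Dev"

# The agreed description goes in as plain text...
echo "The system lets operators export all accumulated data." \
    > concept-docs/description.md
git -C concept-docs add description.md
git -C concept-docs commit -qm "agreed system description"

# ...and every agreed revision gets a version label:
git -C concept-docs tag concept-1.0
```

When the description changes during development, the tag history shows exactly which version of the description a given release was built against.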
At the architectural design stage, when the architect has explained how he sees it, make sure that this vision is extracted from his head and turned into documentation with a version label. If it's a notebook page with a diagram, scan it, put the file in the repository (or on the wiki), and link to it.
At the implementation stage, make sure the code is documented. It's a good idea to keep a separate document per module (in git) describing its requirements and the particulars of its behavior. Don't leave too much information behind in redmine/jira: after finishing a big feature, before the merge to master, make sure its description is properly carried over from the task tracker into the documentation. Some time later, within another task, the behavior may change, and piecing documentation together from several tasks will be hard. A task tracker does not give the complete picture.
User documentation is also best written at this same stage: keep it (if possible) in git and edit it in parallel with the code. If no dedicated technical writers are assigned to it, the context will evaporate, everyone will forget, and the documentation will simply never happen.
At the verification stage, the program is checked against its requirements. At the end, make sure every requirement has been assigned a status: fulfilled or not fulfilled.
At the validation stage, it is checked whether the program can actually be used. Make sure that every change made to the program's behavior here is immediately reflected in the documentation.
At the commissioning stage, the preparation and rollout of the release are checked. Make sure that exactly the right versions of all components are baked into it. The main workhorse here is salt/puppet. You can manage without them, with written installation instructions, but with them it's easier. Prepare them properly and ahead of time.
About the operation stage everything is clear: just follow the manufacturer's instructions.
At the decommissioning stage, make sure that all the necessary data has actually been extracted.
Now, about builds and salt/puppet. This is the second line of defense (right after git). The working scheme is roughly as follows:
- Make sure the situation with every third-party package is clear: where it came from, which version it is, which patches were applied. If some radish (that is, a bad person) puts the same version on physically different files, convince him he is wrong, or attach an extra version component to everything he produces.
- If all the rpms are dumped into a single repository, make sure it is clear which versions will be rolled out. One good option is a script that rebuilds the whole repository and stamps a version on the repository as a whole. Another is to state the version explicitly in the manifest / sls file. By the way, puppet has a bug: the package resource can't downgrade a package. Why they are not ashamed of it, I don't know.
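Pinning the version explicitly in an sls file looks roughly like this (the package name and version string are illustrative):

```yaml
# The state refuses to drift: exactly this version gets installed,
# so the stand and production cannot silently diverge.
libcrutch:
  pkg.installed:
    - version: 1.2.14-3.el7
```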
- All manifests / sls files are stored in git. Only what distinguishes the stand from production goes into pillar (for salt) or class parameters (for puppet): things like web-service URLs, parameters like shared_buffers for postgres, flags that enable debug mode. Everything else is ruthlessly hardcoded. These parameters are set once when the stand is deployed and rarely change afterwards. The sls files are treated as code: they are rolled out to the stand, tested, and transferred to production unchanged.
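A sketch of that split (all names and values are made up): the sls file hardcodes everything it can, and only the stand-vs-production differences come from pillar.

```yaml
# pillar/stand.sls -- only what distinguishes the stand from production:
backend_url: http://backend.stand.internal:8080
pg_shared_buffers: 128MB
debug_mode: true

# app/init.sls -- everything else is hardcoded; this file is
# byte-for-byte identical on the stand and in production:
app_config:
  file.managed:
    - name: /etc/app/app.conf
    - source: salt://app/files/app.conf
    - template: jinja
    - context:
        backend_url: {{ pillar['backend_url'] }}
```

Promoting a release then means promoting the tested sls files as-is and swapping only the pillar.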
That's all. Manage your configuration properly, and remember: a good process is one that sidesteps all the rakes and delivers an excellent result on the first try.