
Continuous Delivery & Sitecore: our implementation

I want to introduce you to our concept of Continuous Delivery (hereinafter, CD) as applied to the main CMS our company develops for: Sitecore. Our CD concept rests on three pillars: Git as the version control system, TeamCity as the build server, and the scripts and applications that carry out the delivery itself.

In this article, I will try to describe all the aspects involved.

Git


The choice of this version control system follows from our CD concept, in which each repository has three main branches: dev, acceptance and master. As the names suggest, each branch reflects the state of the code on one of three servers: the development server (dev), the acceptance server (acceptance) and production (master).

All development begins and proceeds in a separate branch rooted at master. When development is complete, the branch is merged into dev and the task is handed over for testing.
If any bugs are found, they are fixed in the branch, which is then merged again and retested.
Once the tester considers the task complete, the branch is merged into acceptance, delivered, and handed to the testers and the customer for verification. If any problems are found, the process above is repeated.
If the customer considers the task complete, the branch in which development was carried out is merged into master and delivered to production.
Git is therefore a very important part of our development and delivery process, thanks to its simple, fast branching and ease of management.
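The branch-promotion flow above can be sketched as plain git commands driven from Python against a throwaway repository; the feature-branch name and commit messages are made-up examples, not our actual conventions:

```python
# Sketch of the dev -> acceptance -> master promotion flow described above.
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command inside the given repository, failing loudly."""
    subprocess.run(["git", "-C", repo, *args], check=True,
                   capture_output=True, text=True)

repo = tempfile.mkdtemp()
git(repo, "init", "-q")
git(repo, "config", "user.email", "ci@example.com")
git(repo, "config", "user.name", "CI")
git(repo, "checkout", "-q", "-b", "master")
git(repo, "commit", "--allow-empty", "-q", "-m", "initial")
for branch in ("dev", "acceptance"):
    git(repo, "branch", branch)

# Development begins in a separate branch rooted at master:
git(repo, "checkout", "-q", "-b", "feature/task-1", "master")
git(repo, "commit", "--allow-empty", "-q", "-m", "feature work")

# One merge per stage: testing (dev), customer verification (acceptance),
# production (master).
for target in ("dev", "acceptance", "master"):
    git(repo, "checkout", "-q", target)
    git(repo, "merge", "-q", "--no-ff", "-m", f"merge into {target}",
        "feature/task-1")
```

In practice only the final merge into master triggers delivery to production; the earlier merges trigger delivery to the matching test environments.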

TeamCity and the delivery process, with the scripts and applications involved


TeamCity is easy to set up, free for small projects (no more than 20 build configurations), understands most common version control systems (including our favorite Git), has built-in runners for MSBuild, NAnt and Visual Studio, and exposes a REST API for retrieving data and managing configurations. In my opinion, it is one of the most convenient build servers in its class (as are JetBrains' other products). Another very handy TeamCity feature is the ability to configure parameters at the project or build level right in the web interface. Parameters can also be configured at the server level, with access to the host's system environment (for example, PATH).
A typical project build that delivers automatically and then verifies that the web application is alive (the page returns HTTP 200) consists of the following configuration aspects:
General settings: each build produces artifacts: an archive with the web application, an archive of packages (Sitecore uses specially created and archived XML files to move items between environments; we call them packages), and an archive of SQL scripts if the solution uses additional databases. There is also an auto-incrementing build counter, which TeamCity's built-in AssemblyInfo patcher uses to increment the build version.
Version control settings: ridiculously simple to configure in TeamCity. The only interesting point here is the ability to tag the results of a build, which, in theory, lets you return to exactly the code in which a problem existed or appeared. Where possible, incremental checkout is used to speed up the build and reduce traffic.
The build itself:
  1. Build and publish the solution using Visual Studio tools (the required version of VS must be installed on the server). We specify the path to the .sln file and a variable that selects the build configuration for it (each project has a build configuration per target environment). Publishing goes to a local folder on the server, because we could not organize delivery with the MS Publish Tool directly to the required server; the first and last obstacle I ran into was that the servers are in different domains. If MSBuild fails, the build stops and a notification is sent to the people whose commits participated in the build, the build engineer, and the testers.
  2. Preparing files and the target application for package installation (a script). Because Sitecore by default does not allow creating nodes whose names contain language-specific special characters or reserved characters, and development sometimes cannot do without them, this step delivers a special patch to the target application that permits those characters. In the same step, after delivery of the patch config (which causes the application to restart), wget fetches the site's main page. This ensures the application is up and ready for the next step.
  3. Delivering nodes to the Sitecore databases. A WCF service deployed in the web application accepts packages and installs them (recall that Sitecore uses specially created and archived XML files for delivery between environments, which we call packages). This automates the procedure as follows: packages intended for the target application are placed in a designated folder, which is also kept in source control. TeamCity collects data about the changes included in each build and exposes it via its REST API. The delivery application reads that XML from the REST API, selects the packages participating in the build and sends them to the WCF service, which installs them. All the data the collector application needs is passed through parameters configured at the project level, since they are the same for the whole repository. Unfortunately, there is one problem with the WCF service and the target application's settings: if a package is too large, or its installation takes more than 20 minutes, the connection is dropped. On error, the service returns a failure, the build stops, and notifications go to the same people as in the first step. The WCF service is called only when there is something to deliver to Sitecore, which also speeds up the build.
  4. Publishing nodes to the content database. Sitecore works with two databases: master, in which content is created, and web, from which content is served to the end user. A separate application was created to drive Sitecore's built-in process of transferring data from master to web, which is called publishing. It works on the same principle as the application from step 3. The developer commits a file that lists, line by line, the nodes to be published; the application retrieves these files via the REST API (files are selected from the commit from a specific folder in the repository, with the limitation that they must have a certain extension), reads their contents and sends them to the WCF service, which in turn publishes the nodes along with their children. Unlike the previous step, an error here does not stop the build. The WCF service is called only when there is something to publish in Sitecore, which again speeds up the build.
  5. Preparing the code and delivering it to the target server. In this step an ordinary cmd script packages the previously published solution and delivers the resulting archive over SSH to a temporary directory on the target server, where it is also unpacked. After that, App_Offline.htm is placed in the web application folder, which stops the web application and shows users a message that the application is currently being updated.
  6. Updating third-party databases (optional step). If the application uses not only standard Sitecore data but also additional databases, this step updates them with scripts stored in source control in a dedicated folder, by analogy with the files from steps 3 and 4. The right script is chosen by a special file-name format (version.name): if the file's version is higher than the database version (stored in the Extended Properties field), the script from that file is applied. Soon these update files will be handled by the collector application, as in steps 3 and 4. We also plan to move this step from the build host to the WCF service (though that means giving up App_Offline.htm), which should improve security.
  7. Delivering the code to the application. The script clears the folders with configs and binaries in the application, then delivers the code and accompanying files from the folder unpacked in step 5 to the web application folder. As its last action, the script removes App_Offline.htm.
  8. Using wget, we fetch a page of the site (usually the main one) to make sure the application is alive (HTTP 200). If the code differs from 200, the build is considered failed.
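Step 3's selection logic can be sketched as a simple filter over the build's changed files; the folder name and archive extension here are illustrative assumptions, not our project's actual conventions:

```python
# Hypothetical filter: pick Sitecore packages out of a build's changed files.
# "packages/" and ".zip" are made-up conventions for illustration.
def select_packages(changed_files, package_dir="packages/", ext=".zip"):
    """Return the changed files under the designated package folder."""
    return sorted(f for f in changed_files
                  if f.startswith(package_dir) and f.endswith(ext))

changed = [
    "src/Web/Default.aspx.cs",      # ordinary code change: ignored here
    "packages/news-templates.zip",  # package: sent to the WCF installer
    "publish/home.txt",             # publish list: handled in step 4
]
print(select_packages(changed))  # ['packages/news-templates.zip']
```

If the result is empty, the WCF service is simply not called, which is what keeps the build fast when a commit contains no packages.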
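The line-per-node publish files from step 4 can be parsed with something as small as the following sketch; the node paths are invented examples:

```python
def read_publish_items(file_text):
    """Each non-empty line names a Sitecore node to publish with its children."""
    return [line.strip() for line in file_text.splitlines() if line.strip()]

sample = """
/sitecore/content/Home
/sitecore/content/News

"""
print(read_publish_items(sample))
# ['/sitecore/content/Home', '/sitecore/content/News']
```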
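The version.name selection rule from step 6 amounts to a numeric comparison against the version stored in the database; the file names and stored version below are made-up examples:

```python
def scripts_to_apply(script_files, db_version):
    """Pick scripts named '<version>.<name>' whose version exceeds the
    database version, returned in ascending version order."""
    candidates = []
    for name in script_files:
        version = int(name.split(".", 1)[0])  # leading number before first dot
        if version > db_version:
            candidates.append((version, name))
    return [name for _, name in sorted(candidates)]

files = ["12.add-index.sql", "10.init.sql", "11.seed-data.sql"]
print(scripts_to_apply(files, db_version=10))
# ['11.seed-data.sql', '12.add-index.sql']
```

After each script runs, the stored version would be bumped to that script's version, so re-running the build applies nothing twice.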
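The final liveness check in step 8 uses wget on our build server; a hypothetical equivalent, with the URL left as a placeholder, might look like this:

```python
import urllib.error
import urllib.request

def site_is_alive(url, timeout=30.0):
    """Return True only if the page responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False
```

On the build server, a False result here would fail the build, mirroring the wget exit-code check.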

This is how the automatic delivery of web applications based on Sitecore CMS works.

Pros / cons / unrealized perks of this approach


Strictly speaking, there are many advantages, but the main one I would single out is that we removed the human, with his usual forgetfulness and laziness, from the delivery process. Automation has also sped delivery up significantly. Of course, the source is still a person (the developer): if he did not commit something, it will not be delivered; if he committed the wrong thing, the wrong thing will be delivered. But such purely human errors are, as a rule, caught in internal testing.
There are, however, cons as well as unrealized perks. I would like to highlight a few:
  1. An Internet connection is required for the entire duration of the build. If the connection fails, the build fails. In the steps before the fifth this is not a problem; but if the connection drops during step 5 or later, the application is left inoperable and can only be fixed by hand.
  2. I do not yet see a way to implement automatic rollback to the previous version if a step after the second fails: a database backup is made automatically, but restoring it is a rather long process, and versioning entire sites is impossible without third-party (and rather expensive) applications. As with the database, I do not yet see a way to roll back the code automatically.
  3. There is no 100% criterion that delivery succeeded: there are situations where the test page returns 200 while other pages fail with errors.
  4. Delivery over SSH is great for its stability and speed, but, unlike the MS Publish Tool, it leaves files in the target web application that were excluded from the solution. Not that this is a major problem; it is just plain ugly.


Source: https://habr.com/ru/post/154327/

