
TypeScript library automation

A disclaimer up front: this article does not offer a ready-to-use recipe. It is rather the story of my journey into the world of TypeScript and NodeJS, along with the results of my experiments. At the end of the article, though, there is a link to the GitLab repository, which you can look through and perhaps borrow whatever you like from it. You might even use my experience to build your own automated solution.

Why do you need it


So why bother creating libraries at all, or in this specific case, NPM packages?

  1. Reuse code between projects.

    It all started when I noticed my habit of creating a /tools folder in every project, and of taking most of that folder with me whenever I moved to a new project. So I asked myself: why not make an NPM package instead of copy-pasting, and then simply add it as a dependency to any project?
  2. A different life cycle. One of my applications depended on a large corporate component bundle. It could only be updated as a whole, even if just one component had changed. Changes in the other components could break something, and we did not always have enough estimated time to retest everything. This model is very inconvenient. When each package serves one purpose, or a small set of related purposes, it can be updated only when that is actually needed. New versions of a package are also released only when it has real changes, not simply alongside everything else.
  3. Separating secondary code from the core business logic. DDD has the principle of domain distillation: parts of the code that do not belong to the core domain are identified and isolated from it. And what better way to isolate code than to move it into a separate project?
    Incidentally, domain distillation is very similar to the SRP, just at a different level.
  4. Its own code coverage. In one project I worked on, code coverage was about 30%, while the library I extracted from it has coverage of about 100%. The project lost a few percentage points of coverage, but it had been in the red zone before the extraction and stayed there anyway. And the library keeps those enviable numbers to this day, almost a year and 4 major versions later.
  5. Open Source. Code that contains no business logic is the first candidate for extraction from a project, which means it can also be made open.

Starting new libraries is "expensive"


There is a problem, though: creating a git repository for a library is not enough. You also need to configure tasks so the project can be built, statically checked (lint), and tested. Besides testing, it is advisable to collect code coverage. On top of that, the package has to be published manually every time. And you still need to write a readme. The readme is the only part I cannot help with.
So, what can be done about all these tedious, uninteresting tasks?
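To make the repetitive wiring concrete, here is a hypothetical package.json fragment with the kind of scripts each new library needs; the script names and the tools behind them (tsc, tslint, mocha, nyc) are illustrative assumptions, not the seed project's actual configuration:

```json
{
  "scripts": {
    "build": "tsc",
    "lint": "tslint --project .",
    "test": "mocha --require ts-node/register \"src/**/*.spec.ts\"",
    "cover": "nyc npm test"
  }
}
```

Setting this up once per library is exactly the kind of chore the rest of the article automates away.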



First step: Seed


I started by creating a seed project: a sort of starter kit with the same structure as my first project whose code I had moved into a separate package. In it I created gulp tasks and scripts that build, test, and collect coverage for the package in one action. Now, to create another project, I only had to clone the seed into a new folder and change origin so that it pointed to a newly created GitHub repository (back then I was still using GitHub).
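The clone-and-repoint step can be sketched as follows; for illustration the snippet sets up a throwaway seed repository first, and the paths and repository URL are assumptions:

```shell
# Set up a throwaway seed repo for illustration; in real use it already exists.
tmp=$(mktemp -d) && cd "$tmp"
git init -q seed
git -C seed -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "seed"

# Clone the seed into a new folder and point origin at the new repository.
git clone -q seed my-new-lib
cd my-new-lib
git remote set-url origin git@github.com:me/my-new-lib.git
git remote -v   # origin now points at the newly created repository
```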

This way of creating projects has another advantage. Changes related to building or testing a project are now made once, in the seed project, and no longer need to be copy-pasted around. Instead, the next time I work on a downstream project, I add a second remote named seed and pull those changes from there.
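The seed-remote workflow looks roughly like this; again a throwaway seed and project are set up first for illustration, and the ../seed path is an assumption:

```shell
# Throwaway seed and project repos for illustration.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master seed
git -C seed -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "seed: improve build scripts"
git init -q -b master my-lib
git -C my-lib -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "initial"

cd my-lib
# One-time setup: register the seed as a second remote.
git remote add seed ../seed
# Later, whenever the seed gains build/test improvements:
git fetch -q seed
git -c user.name=me -c user.email=me@example.com merge -q --allow-unrelated-histories -m "take seed changes" seed/master
git log --oneline
```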

And this worked for me for a while, until I used the seed in a project with several developers. I wrote a three-step instruction: take the latest master, build it, publish it. And at some point, one of the developers somehow performed the first step and then the third. How is that even possible?

Second step: Automatic publishing


Even though that was a one-off mistake, manual actions like publishing are tedious, so I decided the process had to be automated. In addition, CI was needed to keep red commits out of master. At first I tried Travis CI, but ran into the following restriction: it treats a pull request into master the same as a commit to master, and I needed them to do different things.

One of my colleagues suggested taking a look at GitLab and its CI, and everything I wanted worked there.

I ended up with the following project management process, applied whenever I need to fix a bug, add new functionality, or create a new version:

  1. I create a branch from master and write code and tests in it.
  2. I create a merge request.
  3. GitLab CI automatically runs the tests in a node:latest container.
  4. The request goes through code review.
  5. After the request is merged, GitLab runs a second set of scripts. This set creates a tag on the commit with the version number. The version number is taken from package.json if it was manually increased there; otherwise the last published version is taken and auto-incremented.
  6. The script builds the project and runs the tests again.
  7. In the final steps, the version tag is pushed to the repository and the package is published to NPM.
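The version-selection rule from step 5 can be sketched in TypeScript. This is a simplified illustration: the function name is hypothetical, and real semver parsing, prerelease tags, and the NPM registry lookup are omitted:

```typescript
// Pick the version to tag and publish: use package.json's version if it was
// manually bumped past the last published one; otherwise auto-increment the
// last published patch version.
function nextVersion(packageJsonVersion: string, lastPublished: string): string {
  const parse = (v: string) => v.split(".").map(Number);
  const a = parse(packageJsonVersion);
  const b = parse(lastPublished);
  for (let i = 0; i < 3; i++) {
    if (a[i] > b[i]) return packageJsonVersion; // manually bumped in package.json
    if (a[i] < b[i]) break; // package.json lags behind the registry
  }
  const [major, minor, patch] = b;
  return `${major}.${minor}.${patch + 1}`; // auto-increment the patch
}

console.log(nextVersion("1.2.0", "1.2.5")); // → "1.2.6"
console.log(nextVersion("2.0.0", "1.2.5")); // → "2.0.0"
```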

Thus, the version specified in the tag always matches the version of the package published from that commit. For these operations to work, the GitLab project needs environment variables with the repository and NPM access keys.
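The two sets of scripts map naturally onto a two-stage pipeline. A hypothetical .gitlab-ci.yml along these lines illustrates the idea; the stage names, script names, and the release step are assumptions, not the project's actual configuration:

```yaml
stages:
  - test
  - publish

# First set: runs on merge requests, keeps red commits out of master.
test:
  stage: test
  image: node:latest
  script:
    - npm ci
    - npm run lint
    - npm test
  only:
    - merge_requests

# Second set: runs after the merge request lands in master.
publish:
  stage: publish
  image: node:latest
  script:
    - npm ci
    - npm run build
    - npm test
    - npm run release   # tags the commit, pushes the tag, publishes to NPM
  only:
    - master
```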

Last step: Automate everything


At this point I had already automated a lot, but creating a project still required many manual actions. This was progress, of course, since those actions were performed once per project rather than once per version. Still, the instruction consisted of 11 steps, and I myself made mistakes a couple of times while following them. So I decided that, having started automating, I should see it through to the end.

For this full automation to work, the computer needs to have 3 files in the .ssh folder. I figured this folder is reasonably secure, since the private key id_rsa is already stored there. That file is also used to let GitLab CI push tags to the repository.

The second file I put there is “gitlab”, which contains the access key for the GitLab API. And the third file is “npm”, the access key for publishing the package.

And here the magic begins. All you need to create a new package is to run one command in the seed folder: “gulp startNewLib -n [npmName]/[libName]”. Done: the package is created and ready for development and auto-publishing.

For example, the “vlr/validity” package was created this way.

This command does the following:

  1. Creates a project on GitLab.
  2. Clones the seed into a local folder next to the folder the command is run from.
  3. Changes origin to the project created in step 1.
  4. Pushes the master branch.
  5. Creates environment variables in the project from the files in the .ssh folder.
  6. Creates a firstImplementation branch.
  7. Changes the library name in package.json, then commits and pushes the branch.

All that remains after this is to add the code and create a merge request.
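Step 1 can be sketched through the GitLab REST API (POST /api/v4/projects). The token file location follows the article's convention; the function names and the host are assumptions:

```typescript
import { readFileSync } from "fs";
import * as os from "os";
import * as path from "path";

// Build the request for creating a project via the GitLab API.
function buildCreateProjectRequest(name: string, token: string) {
  return {
    url: "https://gitlab.com/api/v4/projects",
    options: {
      method: "POST",
      headers: { "PRIVATE-TOKEN": token, "Content-Type": "application/json" },
      body: JSON.stringify({ name }),
    },
  };
}

async function createGitLabProject(name: string): Promise<void> {
  // The API key is read from ~/.ssh/gitlab, as described above.
  const token = readFileSync(path.join(os.homedir(), ".ssh", "gitlab"), "utf8").trim();
  const { url, options } = buildCreateProjectRequest(name, token);
  const res = await fetch(url, options); // global fetch, Node 18+
  if (!res.ok) throw new Error(`GitLab API returned ${res.status}`);
}
```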

The result is something to be proud of: from the moment the decision is made to extract some code into a separate project until the first version is published takes about five minutes. Four of those are spent on two GitLab CI pipelines, and one minute on running the command above, moving the code over, and clicking the buttons in the GitLab interface to create and then merge the request.

There are some limitations: the GitLab name must match the name in npm. Also, this command, unlike the rest of the functionality in the seed project, works only on Windows.

If this seed project interests you, you can study it at the following link.

Source: https://habr.com/ru/post/450262/

