
Checklist for creating and publishing web applications

Nowadays, being able to develop a web application is not enough to actually ship it. An equally important aspect is setting up the tools for deploying and monitoring the application and for managing and administering the environment in which it runs. The era of manual deployment is fading into oblivion; even for small projects, automation tools can bring tangible benefits. When deploying “by hand”, we can easily forget to copy something over, overlook some nuance, or skip a forgotten test, and the list goes on for quite a while.

This article is meant for those who are just learning the basics of building web applications and want to get a grasp of the basic terms and conventions.

Building an application can be divided into two parts: everything that relates to the application code, and everything that relates to the environment in which that code runs. The application code, in turn, is also divided into the server side (the part that runs on the server: often business logic, authorization, data storage, and so on) and the client side (the part that runs on the user's machine: often the interface and the logic associated with it).

Let's start with the environment.
The foundation for the operation of any code, system, or software is the operating system, so below we will look at the most popular systems on the hosting market and give each a brief description:

Windows Server is the same Windows, but in a server variation. Some functionality available in the client (regular) version of Windows is absent here, for example certain statistics-collection services and similar software, but there is a set of utilities for network administration and the basic software for running servers (web, FTP, and so on). In general, Windows Server looks like regular Windows and quacks like regular Windows, but costs about twice as much as its regular counterpart. That said, given that you will most likely be deploying to a dedicated or virtual server, the final cost may increase, though not critically. Since Windows holds an overwhelming share of the desktop OS market, its server edition will be the most familiar to the majority of users.

Unix-like systems. Traditional work in these systems does not assume a familiar graphical interface, offering the user only a console as a means of control. For an inexperienced user, working in this format can be a challenge in itself; just exiting the text editor Vim, which is quite popular on these systems, is the subject of a question that has gathered more than 1.8 million views in six years. The main distributions of this family are:

- Debian: a popular distribution whose package versions are mainly oriented toward LTS (Long Term Support), which translates into rather high reliability and stability of the system and its packages;
- Ubuntu: ships all packages in their latest versions, which may affect stability but lets you use the functionality that comes with new releases;
- Red Hat Enterprise Linux: an OS positioned for commercial use; it is paid, but includes support from software vendors plus some proprietary packages and drivers;
- CentOS: an open-source variation of Red Hat Enterprise Linux, notable for the absence of the proprietary packages and the paid support.

For those who are just getting started in this area, my recommendation would be Windows Server or Ubuntu. With Windows it is first of all the familiarity of the system; with Ubuntu it is a higher tolerance for new versions and, as a consequence, fewer problems when launching projects built on technologies that require recent releases.

So, having decided on an OS, let's move on to the set of tools that allow you to deploy (install), update, and monitor the state of the application or its parts on the server.

The next important decision is where to host your application and the server for it. At the moment, the three most common ways are:

- hosting the application yourself, on your own machine playing the role of a server;
- renting a dedicated or virtual server from a hosting provider;
- renting capacity from a cloud provider.

Depending on the chosen path, what will differ later is mostly who is responsible for which area of administration. If you host the application yourself, you should understand that any interruptions in power, internet connectivity, the server itself, or the software deployed on it all rest on your shoulders. However, for learning and testing, this is more than enough.

If you do not have a spare machine capable of playing the role of a server, you will want to use the second or third way. The second case is identical to the first, except that you shift responsibility for the server's availability and its capacity onto the hoster's shoulders. Administration of the server and its software is still under your control.

And finally, the option of renting capacity from a cloud provider. Here you can set up automated control of almost anything without going deep into the technical details. In addition, instead of a single machine you can have several parallel instances which can, for example, be responsible for different parts of the application, while not differing significantly in cost from owning a dedicated server. And on top of that there are tools for orchestration, containerization, automatic deployment, continuous integration, and much more! Some of these things will be discussed below.

In general, the server infrastructure looks like this: there is a so-called “orchestrator” (“orchestration” is the process of managing several server instances), which manages environment changes on the server instances; a virtualization container (optional, but often used) that allows you to split the application into isolated logical layers; and Continuous Integration software that allows you to update the deployed code by means of “scripts”.

So, orchestration allows you to see server statuses, roll out or roll back server environment updates, and so on. At first, this aspect is unlikely to concern you: to orchestrate something you need several servers (you can do it with one, but why would you?), and to justify several servers you need a workload for them. The tool in this area that is on everyone's lips is Kubernetes, developed by Google.
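
To make this less abstract, here is a minimal sketch of what a Kubernetes deployment description looks like. The application name, image name (example/my-app:1.0), and port are placeholders invented for this illustration, not values from any real project:

```yaml
# deployment.yaml: a minimal illustrative Kubernetes Deployment.
# The names, image, and port below are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # keep three identical instances of the app running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Applied with kubectl apply -f deployment.yaml, this asks the cluster to keep three copies of the container running, restarting or rescheduling them as needed, which is exactly the kind of routine that orchestration takes off your shoulders.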

The next step is OS-level virtualization. The notion of “dockerization” has now become widespread; it comes from the Docker tool, which provides containers isolated from each other but running in the context of a single operating system. What this means: in each of these containers you can run an application, or even a set of applications, that will assume they are the only ones in the entire OS, without even knowing about the existence of anything else on the machine. This feature is very useful for running identical applications of different versions, or simply conflicting applications, as well as for separating pieces of an application into layers. Those layers can later be written into an image that can be used, for example, to deploy the application. That is, by installing this image and spinning up the containers it contains, you get a ready-made environment for running your application!

In your first steps you can use this tool both for learning purposes and to get very real benefits by splitting the application's logic into different layers. But it is worth saying here that not everyone needs dockerization, and not always. Dockerization is justified in cases when the application is “fragmented”, divided into small parts, each responsible for its own task, the so-called “microservice architecture”.
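
As a concrete illustration, here is a minimal sketch of a docker-compose.yml that splits a hypothetical application into two isolated containers: the application itself and its database. The image name and the credential below are made-up placeholders for this sketch, not production values:

```yaml
# docker-compose.yml: a minimal illustrative example of splitting an
# application into two isolated containers. All names are placeholders.
version: "3.8"
services:
  app:
    image: example/my-app:1.0       # hypothetical application image
    ports:
      - "8080:8080"                 # expose the app on the host's port 8080
    depends_on:
      - db                          # start the database container first
  db:
    image: postgres:13              # the database runs in its own container
    environment:
      POSTGRES_PASSWORD: example    # placeholder credential, not for production
    volumes:
      - db-data:/var/lib/postgresql/data   # persist data between restarts

volumes:
  db-data:
```

A single docker compose up -d brings up both containers; each of them “thinks” it is alone in its own OS, yet they can reach each other over the network that Compose creates.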

In addition to providing the environment, we also need competent deployment of the application itself: various code transformations, installation of the libraries and packages the application depends on, test runs, notifications about these operations, and so on. Here we need to pay attention to the concept of Continuous Integration (CI). The main tools in this area at the moment are Jenkins (CI software written in Java; it may seem somewhat complicated at the start), Travis CI (written in Ruby; subjectively somewhat simpler than Jenkins, although some knowledge of deployment configuration is still needed), and GitLab CI (written in Ruby and Go).
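
To give a taste of what such “scripts” look like, here is a minimal sketch of a GitLab CI pipeline with build, test, and deploy stages. The image name and the run-tests.sh script are hypothetical placeholders; CI_COMMIT_SHORT_SHA, on the other hand, is a variable that GitLab CI really does provide out of the box:

```yaml
# .gitlab-ci.yml: a minimal illustrative pipeline. The image name and
# the run-tests.sh script are placeholders, not part of any real project.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - docker build -t example/my-app:$CI_COMMIT_SHORT_SHA .

test:
  stage: test
  script:
    # run the project's (hypothetical) test script inside the freshly built image
    - docker run --rm example/my-app:$CI_COMMIT_SHORT_SHA ./run-tests.sh

deploy:
  stage: deploy
  script:
    - docker push example/my-app:$CI_COMMIT_SHORT_SHA
  only:
    - master                      # deploy only from the main branch
```

Every push triggers the pipeline, and if the build or the tests fail, the deploy stage simply never runs; that is exactly the safety net manual deployment lacks.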

So, having talked about the environment in which your application will work, it is finally time to see what tools the modern world offers us for creating these applications.

Let's start with the basics: the backend, the server side. The choice of language, set of built-in functionality, and predefined structure (framework) is determined mostly by personal preference, but the options are nevertheless worth considering (the author's opinion about languages is quite subjective, albeit with a claim to an unbiased description):


And the final part of our application, the most tangible one for the user, is the frontend: the face of your application, the part the user interacts with directly.

Without going into details, the modern frontend rests on three pillars: frameworks (and not-quite-frameworks) for building user interfaces. The three most popular are:

- React: strictly speaking a library rather than a framework, developed by Facebook;
- Angular: a full-fledged framework developed by Google;
- Vue.js: a progressive framework developed and maintained by its community.

Summarizing the above, we can conclude that deploying an application today is fundamentally different from how this process used to go. However, no one stops you from doing a “deploy” in the old manner. But is the little time saved at the start worth the huge number of rakes that a developer who chooses this path will have to step on? I think the answer is no. By spending a little more time getting acquainted with these tools (and no more than that is required, because you need to understand whether you need them in the current project or not), you can win it back by significantly reducing, for example, elusive errors that depend on the environment and manifest only on the production server, nightly investigations of what crashed the server and why it will not start, and much more.

Source: https://habr.com/ru/post/446642/

