
Serverless racks


Serverless is not about the physical absence of servers. It is not a "killer" of containers and not a passing trend. It is a new approach to building systems in the cloud. In today's article we will look at the architecture of serverless applications and at the roles played by the serverless service provider and by open-source projects. At the end we will talk about putting Serverless to use.

Suppose I want to write the server side of an application (say, an online store). It could be a chat, a content-publishing service, or a load balancer. Either way there will be plenty of headaches: I will have to prepare the infrastructure, define the application's dependencies, and think about the host operating system. Then I will need to update small components without affecting the rest of the monolith. And let's not forget about scaling under load.

But what if we take ephemeral containers in which the required dependencies are already preinstalled, and the containers themselves are isolated from each other and from the host OS? We split the monolith into microservices, each of which can be updated and scaled independently of the others. By placing the code in such a container, I can run it on any infrastructure. Better already.

And what if I do not want to configure the containers? I do not want to think about scaling the application. I do not want to pay for idle running containers when the load on the service is minimal. I want to write code. To focus on business logic and bring products to market at the speed of light.
Such thoughts led me to serverless computing. Serverless here does not mean the physical absence of servers, but the absence of infrastructure-management headaches.

The idea is that the application logic is broken down into independent functions. They are event-driven. Each function performs a single "microtask". All that is required from the developer is to upload the functions into the console provided by the cloud provider and wire them up to event sources. The code is executed on request in an automatically prepared container, and I pay only for the execution time.
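To make this concrete, here is a minimal sketch of such a function in the AWS Lambda style, where the provider calls a handler with the event payload. The handler name and event fields are illustrative, not any provider's required schema.

```python
import json

def handler(event, context=None):
    """One 'microtask': greet the caller named in the event payload.
    The provider invokes this for every incoming event and serializes
    the returned dict back to the caller."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally this is just a function call: `handler({"name": "Habr"})`; in the cloud, the event source supplies the argument.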

Let's see what the application development process will look like now.

From the developer’s side


Earlier we started talking about an application for an online store. In the traditional approach, the main logic of the system is handled by a monolithic application. And the server with the application runs constantly, even when there is no load.

To move to serverless, we split the application into microtasks and write a separate function for each of them. The functions are independent of each other and do not store state (they are stateless). They can even be written in different languages. If one of them fails, the application will not stop entirely. The architecture of the application will look like this:


The division into functions in Serverless resembles working with microservices. But a microservice can perform several tasks, while a function should ideally perform one. Imagine that the task is to collect statistics and display them at the user's request. In the microservice approach, the task is performed by one service with two entry points: one for writing and one for reading. In serverless computing, these will be two different functions that are not related to each other. The developer saves computational resources if, for example, statistics are updated more often than they are read.
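The statistics example above could be sketched as two unrelated functions. Names and storage here are illustrative only; in a real FaaS deployment the two functions do not share process memory, so the dict below would have to be a managed database.

```python
stats_store = {}  # stands in for an external database shared by both functions

def record_view(event, context=None):
    """Write path: triggered on every page view, increments a counter."""
    page = event["page"]
    stats_store[page] = stats_store.get(page, 0) + 1
    return {"recorded": page}

def get_views(event, context=None):
    """Read path: triggered by a user request, reads the counter."""
    page = event["page"]
    return {"page": page, "views": stats_store.get(page, 0)}
```

The two functions can now be scaled and billed independently: if writes vastly outnumber reads, only `record_view` instances multiply.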

Serverless functions must complete within a short timeout set by the service provider. For AWS, for example, the timeout is 15 minutes. This means that long-lived functions will have to be reworked to meet the requirement; this is how Serverless differs from other technologies that are popular today (containers and Platform as a Service).
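One common way to rework a long-lived task is to process it in chunks and re-invoke the function with the leftover work before the limit is hit. The sketch below assumes a `remaining_ms()` callback in the spirit of Lambda's `context.get_remaining_time_in_millis()`; the unit of work and the safety margin are made up for illustration.

```python
SAFETY_MARGIN_MS = 10_000  # stop well before the provider's hard timeout

def process_chunked(items, remaining_ms):
    """Process items until time runs low; report what must be re-queued."""
    done = []
    while items and remaining_ms() > SAFETY_MARGIN_MS:
        done.append(items.pop(0) * 2)  # placeholder unit of work
    if items:
        # A real function would re-invoke itself with `items`
        # (e.g. via a queue message); here we just return the leftovers.
        return {"done": done, "reinvoke_with": items}
    return {"done": done, "reinvoke_with": []}
```

With plenty of time left the whole batch completes; with almost none, everything is handed off to the next invocation.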

We assign an event to each function. An event is a trigger for an action:
| Event | Action the function performs |
| --- | --- |
| A product image is uploaded to the repository | Compress the image and upload it to a directory |
| The address of a physical store is updated in the database | Load the new location into the maps |
| A customer pays for goods | Start payment processing |
Events can be HTTP requests, streaming data, message queues, and so on. An event source is a change in data or the appearance of new data. In addition, functions can be run on a timer.
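The event-to-function wiring from the table above can be sketched as a dispatch map. In a real platform this mapping lives in the provider's configuration rather than in code; the event type strings and handler names here are invented for illustration.

```python
def compress_image(event):
    return f"compressed {event['key']}"

def update_map(event):
    return f"map updated: {event['address']}"

def start_payment(event):
    return f"charging order {event['order_id']}"

# each event source triggers exactly one function
TRIGGERS = {
    "storage.object_created": compress_image,
    "db.store_address_updated": update_map,
    "checkout.paid": start_payment,
}

def dispatch(event):
    """Route an incoming event to its assigned function."""
    return TRIGGERS[event["type"]](event)
```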

The architecture has taken shape, and the application has almost become serverless. Next we go to the service provider.

From the provider


Serverless computing is usually offered by cloud providers, under different names: Azure Functions, AWS Lambda, Google Cloud Functions, IBM Cloud Functions.

We use the service through the provider's console or personal account. Function code can be uploaded in one of the following ways:


Here we also configure the events that invoke the function. Different providers may offer different sets of events.



The provider has built and automated a Function as a Service (FaaS) system on its infrastructure:

  1. The function code is placed in storage on the provider's side.
  2. When an event occurs, containers with a prepared environment are automatically deployed on a server. Each function instance gets its own isolated container.
  3. The function is delivered from storage to the container, executed, and returns its result.
  4. As the number of parallel events grows, so does the number of containers: the system scales automatically. If users do not invoke the function, it stays inactive.
  5. The provider sets an idle period for containers: if no function calls arrive in a container during that time, it is destroyed.
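The lifecycle in the steps above can be modeled with a toy container pool: each invocation either reuses a live container (a warm start) or creates one (a cold start), and containers idle longer than the provider's limit are destroyed. Real schedulers are far more involved; this is purely illustrative.

```python
IDLE_LIMIT = 2.0  # seconds a container may sit unused before destruction

class Pool:
    def __init__(self):
        self.containers = []  # last-used timestamp per container

    def invoke(self, now):
        """Serve one event at time `now`; report the start type."""
        self._reap(now)
        if self.containers:           # reuse an existing container
            self.containers[0] = now
            return "warm"
        self.containers.append(now)   # deploy a fresh container
        return "cold"

    def _reap(self, now):
        # destroy containers that have been idle past the limit
        self.containers = [t for t in self.containers
                           if now - t <= IDLE_LIMIT]
```

A quick trace: the first call is cold, a prompt second call is warm, and a call after a long pause is cold again because the container was reaped.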

This way we get Serverless out of the box. We pay for the service on the pay-as-you-go model: only for the functions that are used, and only for the time when they are used.

To familiarize developers with the service, providers offer up to 12 months of free trial, but cap the total compute time, the number of requests per month, the budget, or the consumed capacity.
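The pay-as-you-go arithmetic is simple enough to sketch. The per-GB-second and per-request prices below are assumptions in the spirit of typical FaaS pricing, not any provider's current rates.

```python
PRICE_PER_GB_SECOND = 0.0000166667   # assumed compute price
PRICE_PER_MILLION_REQUESTS = 0.20    # assumed request price

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Estimate a month's bill: compute time plus request count."""
    compute = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return round(compute + requests, 2)
```

For instance, a million 200 ms invocations at 128 MB cost well under a dollar under these assumed rates, and a function that is never called costs nothing.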

The main advantage of working with a provider is the ability not to worry about infrastructure (servers, virtual machines, containers). For its part, the provider can implement FaaS both with its own in-house developments and with open-source tools. Let's talk about those next.

From the open source side


For the past couple of years, the open-source community has been actively working on Serverless tools, including contributions from the largest market players to the development of serverless platforms:


Work is also under way on serverless frameworks. Kubeless and Fission are deployed inside pre-provisioned Kubernetes clusters, while OpenFaaS works with both Kubernetes and Docker Swarm. The framework acts as a kind of controller: it prepares a runtime environment inside the cluster on request, then launches the function there.

The frameworks leave room for configuring the tool to fit your needs. In Kubeless, the developer can configure the function execution timeout (the default is 180 seconds). Fission, in an attempt to mitigate the cold-start problem, offers to keep some of the containers running at all times (at the cost of idle resources). And OpenFaaS offers triggers for every taste: HTTP, Kafka, Redis, MQTT, Cron, AWS SQS, NATS, and others.

Getting-started instructions can be found in the frameworks' official documentation. Working with them requires somewhat more skill than working with a provider: at a minimum, the ability to launch a Kubernetes cluster through the CLI; at most, involving other open-source tools (for example, the message broker Kafka).
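For a feel of what deploying to such a framework involves, here is a function body in the shape used by OpenFaaS's classic Python template, where the framework invokes a `handle(req)` function with the raw request payload. Treat the exact template contract as an assumption to check against the framework's own docs.

```python
def handle(req):
    """OpenFaaS-style entry point: `req` is the request body,
    and the return value becomes the response body."""
    return f"Hello from an OpenFaaS-style function: {req}"
```

Everything else (packaging the function into an image and registering it in the cluster) is handled by the framework's CLI and templates.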

Regardless of whether we access Serverless through a provider or with open-source tools, we get a number of advantages and disadvantages of the Serverless approach.

In terms of advantages and disadvantages


Serverless builds on the ideas of container infrastructure and the microservice approach, in which teams can work in a polyglot mode without being tied to a single platform. Building the system becomes simpler, and errors become easier to fix. Microservice architecture lets you add new functionality to the system much faster than with a monolithic application.

Serverless cuts development time even further by letting the developer focus exclusively on the application's business logic and writing code. As a result, time to market is reduced.

As a bonus we get automatic scaling under load, and we pay only for the resources used, and only while they are in use.

Like any technology, Serverless has flaws.

For example, one such drawback is the cold start time (up to 1 second on average for languages such as JavaScript, Python, Go, Java, and Ruby).

On the one hand, the cold start time in fact depends on many variables: the language the function is written in, the number of libraries, the amount of code, communication with additional resources (such as databases or authentication servers). Since the developer controls these variables, they can reduce the start time. On the other hand, the developer cannot control the container launch time: that depends entirely on the provider.
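One lever the developer does control is where initialization happens. A common pattern is to do expensive setup at module level so it runs once per container, and every warm invocation reuses the result. The "database connection" below is a stand-in object, not a real client.

```python
import time

def _connect():
    """Pretend this is an expensive handshake with a database."""
    time.sleep(0.01)
    return {"connected": True}

DB = _connect()  # runs once, during the cold start of the container

def handler(event, context=None):
    # warm invocations reuse DB instead of reconnecting each time
    return {"db_connected": DB["connected"], "query": event.get("q")}
```

The cold start still pays for `_connect()`, but every subsequent call in the same container skips it entirely.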

A cold start can turn into a warm one when a function reuses a container launched by a previous event. This situation arises in three cases:


For many applications a cold start is not a problem. Here you need to consider the type and purpose of the service. A one-second start delay is not always critical for a business application, but it can become critical for medical services. In that case the serverless approach will probably no longer be suitable.

The next disadvantage of Serverless is the short lifetime of a function (the timeout within which the function must complete).

But if you have to work with long-lived tasks, you can use a hybrid architecture and combine Serverless with another technology.

Not all systems will be able to work on a serverless scheme.

Some applications will still store data and state at run time. Some architectures will remain monolithic, and some functions will stay long-lived. However, just as cloud technologies once did, and containers after them, Serverless is a technology with a great future.

On that note, I would like to move on to the question of applying the Serverless approach.

Application side


Over 2018, the share of Serverless usage grew by half. Among the companies that have already adopted the technology in their services are market giants like Twitter, PayPal, Netflix, T-Mobile, and Coca-Cola. At the same time, you need to understand that Serverless is not a panacea, but a tool for solving a certain range of tasks:


Suppose there is a service visited by about 50 people. It runs on a virtual machine with weak hardware. Periodically the load on the service spikes significantly, and the weak hardware cannot cope.

You can add a load balancer to the system to distribute the load across, say, three virtual machines. At this stage we cannot predict the load precisely, so we keep some amount of resources running "in reserve" and overpay for their idle time.

In such a situation we can optimize the system with a hybrid approach: behind the load balancer we leave one virtual machine and add a link to a Serverless endpoint with functions. If the load exceeds a threshold, the balancer launches function instances that take over part of the request processing.
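The balancer's routing decision in this hybrid setup boils down to a threshold check. The capacity figure and target names below are invented for illustration; a real balancer would track health and latency, not just a counter.

```python
VM_CAPACITY = 50  # concurrent requests the single VM handles comfortably

def route(active_requests):
    """Decide where the balancer sends the next request:
    the baseline VM, or the serverless overflow endpoint."""
    if active_requests < VM_CAPACITY:
        return "vm"
    return "serverless-endpoint"
```

Below the threshold everything goes to the always-on VM; during a spike the overflow spills into functions that appear and disappear with the load.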


Thus, Serverless can be used where a large number of requests must be processed infrequently but intensively. In that case, running several functions for 15 minutes is more cost-effective than keeping a virtual machine or server running all the time.
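A rough calculation shows why. All prices below are assumptions for illustration only, not real provider rates.

```python
HOURS_PER_MONTH = 730
VM_PRICE_PER_HOUR = 0.05               # assumed always-on VM price
FN_PRICE_PER_GB_SECOND = 0.0000166667  # assumed FaaS compute price

def vm_monthly():
    """Cost of keeping one VM running all month."""
    return round(HOURS_PER_MONTH * VM_PRICE_PER_HOUR, 2)

def faas_monthly(bursts, burst_minutes, memory_gb):
    """Cost of running functions only during the bursts."""
    gb_seconds = bursts * burst_minutes * 60 * memory_gb
    return round(gb_seconds * FN_PRICE_PER_GB_SECOND, 2)
```

Under these assumed rates, a hundred 15-minute bursts at 1 GB cost a small fraction of the always-on VM, which bills for every idle hour as well.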

For all the advantages of serverless computing, before adoption you should first of all evaluate the application logic and understand which tasks Serverless can solve in your particular case.

Serverless and Selectel


At Selectel, we have already simplified working with Kubernetes in the virtual private cloud through our control panel. Now we are building our own FaaS platform. We want developers to be able to solve their tasks with Serverless through a convenient, flexible interface.

If you have ideas about what an ideal FaaS platform should be and how you want to use Serverless in your projects, share them in the comments. We will take into account your wishes when developing the platform.


Source: https://habr.com/ru/post/452266/

