Today I would like to talk about a technology that is actively gaining momentum in the IT world: one of the cloud technologies, namely serverless application architecture. Cloud technologies have lately been growing in popularity for a simple reason: they are easily accessible, relatively cheap, and require no initial capital, neither the knowledge needed to deploy and maintain infrastructure nor the money.
Serverless is becoming more and more popular, yet for some reason it gets very little coverage in the IT industry, unlike other cloud technologies such as IaaS, DBaaS, and PaaS.
For this article I used AWS (Amazon Web Services), as it is arguably the largest and most mature such service (based on Gartner's analysis for 2015).
First, the terminology:
Serverless is a serverless application architecture which, in fact, is not so serverless. The architecture is built on microservices, or functions (lambdas), each performing a specific task and running in logical containers hidden from prying eyes. That is, the end user is given only an interface for uploading the function (service) code and the ability to connect event sources to that function.
Taking Amazon's service as an example, the event source can be one of many of Amazon's own services: S3, SNS, HTTP requests through API Gateway, scheduled CloudWatch events, and so on.
Schematically, a microservice works as follows.
In fact, as soon as you upload the function code to Amazon, it is saved as a package on an internal file server (akin to S3). The moment the first event arrives, Amazon automatically launches a mini-container with the appropriate interpreter (or virtual machine, in the case of Java) and runs the uploaded code, passing the generated event body as an argument. As follows from the principles of microservices, such a function cannot hold state (it is stateless), since there is no access to the container and its lifetime is not guaranteed by anything. Thanks to this property, microservices scale horizontally with ease, depending on the number of requests and the workload. In practice, Amazon balances resources quite well, and a function "grows" quickly even under abrupt spikes in load.
Another advantage of such stateless execution is that payment for the service can be based on the actual execution time of a particular function. This convenient payment model, known in English-language literature as pay-as-you-go, makes it possible to launch startups and other projects without initial capital: there is no need to buy hosting just to host the code. You pay in proportion to how much the service is actually used (which also lets you calculate the monetization of your own service flexibly).
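A back-of-the-envelope sketch of the pay-as-you-go model; the prices below are the ones Amazon published around the time of writing ($0.20 per million requests plus $0.00001667 per GB-second, free tier not counted) and may well have changed, so treat them as illustrative:

```javascript
// Illustrative Lambda billing: you pay per invocation and per GB-second
// of execution time (memory allocated x seconds run).
var PRICE_PER_REQUEST = 0.20 / 1e6;   // $0.20 per million invocations
var PRICE_PER_GB_SECOND = 0.00001667; // $ per GB-second

function monthlyCost(invocations, avgDurationMs, memoryMb) {
  var gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  return invocations * PRICE_PER_REQUEST + gbSeconds * PRICE_PER_GB_SECOND;
}

// 3 million requests a month, 200 ms each, 512 MB of memory:
console.log(monthlyCost(3e6, 200, 512).toFixed(2)); // "5.60"
```

A few dollars a month for three million requests is exactly why the model is attractive to projects with no initial capital: with zero traffic the bill is zero.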
Thus, the advantages of such an architecture are:
- no servers of your own to deploy and maintain;
- automatic, practically unlimited horizontal scaling;
- pay-as-you-go billing, proportional to actual usage.

The disadvantages include:
- functions are stateless, with no access to the underlying container;
- the application fragments into many small pieces (functions, events, roles, security policies) that are hard to keep under control.
In general, the technology has its own niche of demand and its own consumer market. I find it very well suited to the early stage of startups, from the simplest blogs all the way to online games and beyond. The particular appeal here is independence from server infrastructure and unlimited, automatic performance scaling.
As mentioned above, one of the drawbacks of serverless architecture is the fragmentation of the application and the rather heavy management of all the required components: events, code, roles, and security policies. It has to be said that in projects even slightly more complicated than Hello World, keeping all of these components in order is a huge headache, and it not infrequently leads to services breaking with the next update.
To avoid this problem, some good people wrote a very useful utility of the same name: Serverless. The framework is tailored solely for use with the AWS infrastructure (and, while the 0.5 branch was built entirely around NodeJS, a big plus of the 1.* branch is its shift toward all AWS-supported languages). In what follows we will talk about the 1.* branch, since, in my opinion, its structure is more logical and flexible to use. Moreover, version 1 cleaned out most of the cruft and added support for Java and Python.
What makes this solution useful? The answer is very simple: the Serverless Framework concentrates all the necessary project infrastructure in one place, namely management of the code, testing, and the creation and control of resources, roles, and security policies. And since it is all in one place, it can easily be added to git for version control.
Having read the framework's basic installation and configuration instructions, you have probably already managed to install it, but to keep this article useful for beginners, let me list the necessary steps. If you have read this far, I hope you already have a CentOS console in front of you, so let's begin by installing NPM/Node (the serverless package is, after all, written in NodeJS).
I prefer NVM for managing Node versions:
curl https://raw.githubusercontent.com/creationix/nvm/v0.31.6/install.sh | bash
Reload your profile as instructed at the end of the installation:
. ~/.bashrc
Now install a Node/NPM build (in this example I use 4.4.5, simply because it was at hand):
nvm install v4.4.5
After a successful installation, it is time to set up access to AWS. In this article I will skip the creation of a dedicated AWS account for development and its role; detailed instructions can be found in the framework's manuals.
Usually, to use an AWS key, it is enough to set two environment variables:

export AWS_ACCESS_KEY_ID=<key>
export AWS_SECRET_ACCESS_KEY=<secret>
Suppose the account is set up and configured. (Please note that the Serverless framework requires administrator access to AWS resources; otherwise you can spend hours trying to figure out why things are not working the way you want.)
Install Serverless in global mode:
npm install -g serverless@beta
Create a project directory. And, at this stage - a small digression about the architecture of the project.
Let's move on to how a lambda function can be uploaded to Amazon. There are two ways: either edit the code directly in the inline editor of the AWS console, or prepare a zip package containing the code and all of its dependencies and upload that package.
In our case, Serverless uses the second method: it takes the existing project and builds the necessary zip package from it. Below is an example project layout for NodeJS; the same logic is easy to apply to other languages.
|__ lib
|__ handler.js
|__ serverless.env.yaml
|__ serverless.yml
|__ node_modules
|__ package.json
I would not like to overload the article, but, unfortunately, the documentation on configuring the framework is very incomplete and fragmented, so let me give an example from my own practice. The entire configuration of the service lives in a serverless.yml file with the following structure:
service: my-service # the name of the service
provider:
  name: aws
  runtime: nodejs4.3
  iamRoleStatements:
    $ref: ../custom_iam_role.json # JSON file with IAM role statements; only the Statements part is used. See http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_policy-examples.html
  vpc: # optional: VPC settings applied to every function (unless overridden per function)
    securityGroupIds:
      - securityGroupId1
    subnetIds:
      - subnetId1
  stage: dev # deployment stage (dev, prod, or anything else)
  region: us-west-2 # AWS region to deploy to
package: # what goes into the deployment package
  include: # directories and files to include; the entry-point file (handler) is packaged in any case
    - lib
    - node_modules # dependencies must be packaged too, node_modules included
  exclude: # directories and files to leave out of the package
    - tmp
    - .git
functions: # the functions of the service; each entry becomes a separate lambda
  hello:
    name: hello # the name of the deployed lambda function
    handler: handler.hello # file and exported function serving as the entry point
    memorySize: 512 # memory limit, MB
    timeout: 10 # execution timeout, seconds
    events: # event sources connected to this function
      - s3: bucketName
      - schedule: rate(10 minutes)
      - http:
          path: users/create
          method: get
          cors: true
      - sns: topic-name
    vpc: # per-function VPC settings
      securityGroupIds:
        - securityGroupId1
        - securityGroupId2
      subnetIds:
        - subnetId1
        - subnetId2
resources:
  Resources:
    $ref: ../custom_resources.json # JSON file describing additional CloudFormation resources
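For reference, the custom_iam_role.json file referenced above follows the standard AWS IAM policy statement format. A minimal illustrative statement (the bucket name here is hypothetical) granting a function read and write access to one S3 bucket might look like this:

```json
{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject"],
  "Resource": "arn:aws:s3:::my-example-bucket/*"
}
```

Keeping such statements as narrow as possible is good practice, even though the framework itself, as noted earlier, needs administrator access to deploy.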
For the most part, this configuration file closely resembles the configuration of Amazon's CloudFormation service, about which I may write in the next article. In short, CloudFormation is a service for managing all the resources in your Amazon account. Serverless relies entirely on it, and usually, if an obscure error occurs while deploying a function, detailed information about the error can be found on the CloudFormation console page.
One important detail about a Serverless project: you cannot include directories and files located higher in the directory tree than the project directory. That is, ../lib will not work.
Now we have a configuration, let's move on to the function itself.
Create the project with the default template:
sls create --template aws-nodejs
The function itself lives in the file handler.js. The principles of writing a function are described in Amazon's documentation, but in general terms the entry point is a function with three arguments: event (the event body), context (runtime information about the invocation), and callback (called to return a result or an error).
For example, let's try to create the simplest Hello World function:
exports.hello = function(event, context, callback) {
  // The first callback argument is the error; pass null on success.
  callback(null, {'Hello': 'World', 'event': event});
};
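Before deploying, the handler can be exercised locally with a tiny harness; this self-contained sketch (plain Node, no AWS involved) repeats the handler and invokes it roughly the way Lambda would:

```javascript
// A local copy of the hello handler plus a fake invocation.
// Node callback convention: first argument is the error, second the result.
var hello = function(event, context, callback) {
  callback(null, { Hello: 'World', event: event });
};

// Simulate what Lambda does: pass an event body and collect the result.
hello({ source: 'local-test' }, {}, function(err, result) {
  if (err) throw err;
  console.log(JSON.stringify(result));
  // prints {"Hello":"World","event":{"source":"local-test"}}
});
```

This kind of smoke test catches the classic mistake of passing the result as the callback's first argument, which Lambda would report as a function error.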
Now deploy it:
sls deploy
Thank you all for your attention; please share your impressions and thoughts in the comments! I hope you liked the article; if so, I will continue this series and dig deeper into the technology.
Source: https://habr.com/ru/post/309370/