We decided to create a small framework for serverless web applications on AWS. It might be more accurate to call it a starter kit rather than a framework, but the point is the same: to provide a foundation for the rapid development of serverless AWS applications. The code is published on GitHub and is open to any improvements, and there is plenty of room for them.
The article covers how to develop and test serverless applications locally, routing on the frontend and backend, the Amazon services involved and related topics. If that sounds interesting, read on!
Something like a preface
Until recently, the development of serverless applications was greatly complicated by the fact that there were no tools for full-fledged local testing of lambda functions and APIs. When building an application, you had to either work online all the time, editing code in the browser, or constantly archive and upload the source code of your lambda functions to the cloud.
In the summer of 2017 there was a breakthrough: AWS created a new, simplified standard for CloudFormation templates, which it called the Serverless Application Model (SAM), and at the same time launched the sam-local project. But first things first.
Amazon CloudFormation is a service that lets you describe all the AWS infrastructure your application needs in a template file in JSON or YAML format. This is an extremely handy thing, because without it you have to create many of the required resources by hand: Lambda functions, the database, the API, roles and policies...
With CloudFormation, the infrastructure can either be drawn in a special designer or written in the template by hand. Either way, the result is a template file from which, in a couple of clicks or with a single command, you can bring up everything the application needs. Later, if necessary, you change this template and apply the changes again with one command. This makes maintaining the application infrastructure much easier. In effect, it is infrastructure as code.
CloudFormation is beautiful: its templates allow you to describe almost 100% of AWS resources. But because of this versatility it is a rather verbose format, and templates can quickly grow to a considerable size. Realizing this, and aiming to make the creation of serverless applications easier, AWS created the new SAM format.
You can loosely think of regular CloudFormation templates as a low-level language and SAM templates as a high-level one that lets you describe the infrastructure of serverless applications with a simplified syntax. SAM templates are transformed by CloudFormation into regular templates through an intermediate transformation layer.
What is sam-local? It is a command-line tool that lets you work locally with serverless applications described by SAM templates. Sam-local allows you to test lambda functions, generate events from various AWS services, run API Gateway and validate SAM templates, all of it locally!
Sam-local uses a Docker container to emulate API Gateway and Lambda. It works as follows: on startup, sam-local looks for the SAM template file in the project folder, analyzes it and launches the resources described in the template inside the Docker container, exposing the API endpoints and wiring the Lambda functions to them. The environment is very close to real lambda functions: limits, the amount of memory used and the execution duration are shown.
It looks like this
Georgiy@Baltimore MINGW64 /h/dropbox/projects/aberp/lambda (master)
$ sam local start-api --docker-volume-basedir /h/Dropbox/Projects/aberp/lambda "aberp"
INFO[0000] Unable to use system certificate pool: crypto/x509: system root pool is not available on Windows
2018/04/04 22:33:49 Connected to Docker 1.35
INFO[0001] Unable to use system certificate pool: crypto/x509: system root pool is not available on Windows
2018/04/04 22:33:50 Fetching lambci/lambda:nodejs6.10 image for nodejs6.10 runtime...
nodejs6.10: Pulling from lambci/lambda
06c3813f: Already exists
967675e1: Already exists
daa0d714: Pulling fs layer
Digest: sha256:56205b1ec69e0fa6c32e9658d94ef6f3f5ec08b2d60876deefcbbd72fc8cb12f
Status: Downloaded newer image for lambci/lambda:nodejs6.10
Mounting index.handler (nodejs6.10) at http://127.0.0.1:3000/{proxy+} [OPTIONS GET HEAD POST PUT DELETE PATCH]
You can now browse to the above endpoints to invoke your functions.
You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template.
After that, calls to the local API and the corresponding lambda function invocations are shown in the console, in much the same form as the lambda functions write to the CloudWatch logs:
2018/04/04 22:36:06 Invoking index.handler (nodejs6.10)
2018/04/04 22:36:06 Mounting /h/Dropbox/Projects/aberp/lambda as /var/task:ro inside runtime container
START RequestId: 9fee783c-285c-127d-b5b5-491bff5d4df5 Version: $LATEST
END RequestId: 9fee783c-285c-127d-b5b5-491bff5d4df5
REPORT RequestId: 9fee783c-285c-127d-b5b5-491bff5d4df5 Duration: 476.26 ms Billed Duration: 500 ms Memory Size: 128 MB Max Memory Used: 37 MB
Sam-local is still in public beta, but in my experience it works quite stably.
All of this taken together lets you build a serverless application on your local computer, and it is no harder than creating a traditional web application.
I cannot fail to mention that sam-local has an alternative: the Serverless Framework. The Serverless Framework is quite popular, largely because there used to be no alternatives. I have no real experience with it, but as far as I know it does not provide as complete a local environment as sam-local. Sam-local is developed by AWS itself, while the Serverless Framework is made by a separate team of enthusiasts. In favor of the Serverless Framework, however, is the fact that it lets you build applications that are less tied to a specific vendor.
About the framework
As I already wrote, its purpose is to provide a quick start when creating new serverless applications. At the moment it only implements authorization based on web tokens. Next we plan to add error handling, work with forms and tabular data output, and set up a deployment mechanism. The goal is that in the future you can simply clone the AB-ERP repository and quickly start working on an application.
We build ERP systems, so we named it AB-ERP, by analogy with the names of our other products, AB-TASKS and AB-DOC. That said, AB-ERP is not tied to ERP systems: any serverless web application can be built on top of it.
The application consists of frontend code and backend code, so there are two folders in the project root: lambda (backend) and public (frontend):
+---lambda
|   +---api
|   +---core
\---public
    +---css
    |   \---core
    +---img
    +---js
    |   \---core
    \---views
AB-ERP works as a single-page application (SPA). When the application is deployed, the frontend code is placed in AWS S3 with CloudFront configured in front of it. This was described in my previous article about AB-DOC, in the “Development and Deployment” section.
When deployed, the backend code will be loaded into the AWS Lambda service.
AB-ERP uses MariaDB as its database. MariaDB is deployed in the AWS RDS service. If desired, AB-ERP can be reconfigured, for example, to work with AWS DynamoDB.
User files will be stored in AWS S3.
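To make this more concrete, here is a minimal sketch, purely for illustration and not taken from the AB-ERP code, of how a lambda function could query the MariaDB instance in RDS using the mysql npm package. The DB_* environment variables and the query are assumptions:

'use strict';
const mysql = require('mysql');

exports.handler = (event, context, callback) => {
    // connection settings are assumed to come from the Lambda environment configuration
    const connection = mysql.createConnection({
        host:     process.env.DB_HOST,      // RDS endpoint
        user:     process.env.DB_USER,
        password: process.env.DB_PASSWORD,
        database: process.env.DB_NAME
    });

    // a trivial illustrative query against MariaDB
    connection.query('SELECT NOW() AS now', (err, rows) => {
        connection.end();
        if (err) return callback(err);
        callback(null, { statusCode: 200, body: JSON.stringify(rows[0]) });
    });
};

Connection parameters here are hypothetical environment variables; in a real deployment they would be set in the Lambda configuration or a secrets store.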
This is what the application architecture looks like:

[Architecture diagram: CloudFront and S3 serve the frontend, API Gateway and Lambda form the backend, MariaDB runs in RDS, user files live in S3.]
Backend
At the moment everything is very, very simple: just one API Gateway resource and just one lambda function.
This is what the SAM template looks like:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example RESTful service
Resources:
  ABLambdaRouter:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs6.10
      Handler: index.handler
      Events:
        ABAPI:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: any
The SAM template declares a single resource, ABLambdaRouter, which is a lambda function. ABLambdaRouter is triggered by a single event, ABAPI, which comes from the API.
Our API Gateway resource accepts requests with any method (ANY) to any URL path (/{proxy+}). In other words, it acts as a plain two-way proxy. The lambda function, accordingly, has to take on the role of a router that executes different code depending on the request.
The router lambda function code begins like this (the full version is in the project repository):

'use strict';
const jwt = require('jsonwebtoken');
The API has a two-level hierarchy: the first level is the module, the second level is the action, so URLs look like api.app.com/module/action. The router function analyzes the pathParameters of the incoming request, tries to load the required module from the lambda/api folder and then hands the request over to the desired function in that module.
By default, functions in modules require authorization, so before calling a function from a module the router checks for a valid token in the X-Access-Token request header. If the token is valid, the function from the module is called; if not, a 403 error is returned.
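A minimal sketch of this routing logic might look roughly like the following. This is an illustration based on the description above, not the actual AB-ERP code; the module layout and the JWT_SECRET environment variable are assumptions:

'use strict';
const jwt = require('jsonwebtoken');

exports.handler = (event, context, callback) => {
    // with the /{proxy+} resource, event.pathParameters.proxy contains e.g. "module/action"
    const parts = ((event.pathParameters && event.pathParameters.proxy) || '').split('/');
    const moduleName = parts[0];
    const action = parts[1];

    let apiModule;
    try {
        // modules live in the lambda/api folder; real code should whitelist module names
        apiModule = require('./api/' + moduleName);
    } catch (e) {
        return callback(null, { statusCode: 404, body: JSON.stringify({ error: 'Unknown module' }) });
    }

    // check the token from the X-Access-Token header before dispatching
    const token = event.headers && event.headers['X-Access-Token'];
    jwt.verify(token, process.env.JWT_SECRET, (err, payload) => {
        if (err) {
            return callback(null, { statusCode: 403, body: JSON.stringify({ error: 'Invalid token' }) });
        }
        if (typeof apiModule[action] !== 'function') {
            return callback(null, { statusCode: 404, body: JSON.stringify({ error: 'Unknown action' }) });
        }
        // the module function builds the HTTP response itself
        apiModule[action](event, payload, callback);
    });
};

The sketch only shows the overall flow: parse the path, check the token, dispatch to a module function.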
Why did we choose this approach instead of creating many separate API Gateway resources and many lambda functions? First, and most importantly, because of the simplicity of configuring, deploying and generally working with such an architecture. Second, this approach minimizes cold starts of the function: if a function has not been called for a while, AWS removes its container, and the next call then takes longer to process.
There are also drawbacks to this approach. We cannot apply special settings to individual API resources at the API Gateway level.
One may ask: why do we need API Gateway at all, why not call Lambda directly from the browser? API Gateway provides many benefits. It can work as a CDN in edge-optimized mode, it caches responses, and it can answer OPTIONS requests itself without touching the backend (MOCK integration), all of which significantly speeds up the application. It also offers DDoS protection and the ability to control traffic with throttling limits. Finally, it lets you open the application's API to third-party developers.
Frontend
For the frontend we decided not to use a “big” framework like React, Vue.js or Angular.js, so we wrote a small router for our SPA ourselves.
The router stores a description of each page: which HTML template and which CSS and JS files it needs. When a page is requested, the router loads all the necessary files as plain text, merges them and inserts them into the div container of the application interface. When they are inserted into the container, the JavaScript of the page being opened is executed.
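To give an idea of how this works, here is a minimal sketch of such a router. It is a simplified illustration with hypothetical page names, file paths and container id, not the actual AB-ERP code:

// description of each page: its HTML template and the CSS/JS files it needs
const pages = {
    home:  { html: 'views/home.html',  css: ['css/home.css'],  js: ['js/home.js'] },
    tasks: { html: 'views/tasks.html', css: ['css/tasks.css'], js: ['js/tasks.js'] }
};

function openPage(name) {
    const page = pages[name];
    const files = [page.html].concat(page.css, page.js);

    // load all the files as plain text
    Promise.all(files.map(function (url) {
        return fetch(url).then(function (r) { return r.text(); });
    })).then(function (texts) {
        // insert the HTML template into the application container
        document.getElementById('app').innerHTML = texts[0];

        // innerHTML does not execute scripts, so styles and scripts are re-created
        // as real <style>/<script> elements; this is what runs the page JavaScript
        texts.slice(1).forEach(function (text, i) {
            const el = document.createElement(i < page.css.length ? 'style' : 'script');
            el.textContent = text;
            document.head.appendChild(el);
        });
    });
}

A real router also has to react to URL changes, for example via the hash or the History API; that part is omitted here.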
Setting up the environment
I have tried to describe, step by step, everything needed to run the project on your computer in the README on the project's GitHub page. If something does not work, write in the comments and we will try to help; the README will be updated accordingly.
For local testing, I wrote a small HTTP server in Node.js:
const express = require('express');
const app = express();

// serve existing files from the public folder
app.use(express.static('public'));

// anything that was not found is rewritten to the main SPA file...
app.use(function (req, res, next) {
    req.url = '/app.html';
    next();
});

// ...and served from the public folder again
app.use(express.static('public'));

app.listen(80, () => console.log('Listening..'));
Before you begin, you need to start it with the command node abserver.js. When a request comes in, the server looks for the requested file in the public folder and serves it if found. If the file is not found, it serves the main application file public/app.html. This is quite enough for the SPA to work; in production, Amazon CloudFront solves this task.
Conclusion
AB-ERP is still very raw. We welcome any suggestions and comments, and commits even more so.
So far only authorization is more or less implemented in AB-ERP. I plan to cover it in one of the following articles: what authorization options exist when working with API Gateway, and why we implemented neither a custom authorizer nor Cognito integration.
Some plans for the further development of the project. The key components of any data application are data entry forms and tables for displaying data, so the functionality for working with forms and tables will be added first.
There is an idea to standardize the work with forms (building the form on a page, validation on the backend and frontend, saving to the database) through the use of YAML templates. That is, a form would be described in a YAML template, and all the remaining work on the frontend and backend would be done by the AB-ERP code. For tables we will use the Datatables library, which we already used in our task tracker AB-TASKS.
The following tools helped me in writing this article:
- the online diagramming service draw.io
- the Windows command-line tree command, used to render the directory tree