
Our experience using AWS at launch

Our task was to ensure the smooth operation of Staply while minimizing costs and maintaining the flexibility and simplicity of the architecture.
In this article we describe the server configuration we used during the transition from closed beta to public availability: the period when the question of cost is most acute, since there is already load but not yet profit.




The lack of published material covering this medium-load stage, together with a request from synck, prompted us to write this article.
Our results:


From the beginning of development we used Amazon AWS EC2, which let us build our own architecture without hunting for ways around hosting restrictions (as with Heroku), while offering a broad range of cloud services.
To keep the service stable, we avoided sharp spikes in load and ramped traffic up gradually; this let us identify bottlenecks in the system and eliminate them in normal operation, without emergencies.

Amazon's recommended configuration:


Our configuration:


Daily bill: ~$8.21

Attention! Check your detailed bills regularly: Amazon splits the charges into many small line items, and a bill for something you no longer use can be an unpleasant surprise.


Server configuration

The service is written in Rails, although this matters little for this article.
At the beginning of development, a single t1.micro instance with swap enabled was enough. The AWS Free Tier lets you keep costs at zero during this period.

When the service opens to the public, it is better not to skimp: design the architecture from the very beginning for the possibility of a sharp increase in capacity. A distributed multi-server architecture lets you work on each element separately without affecting the rest (for example, when you need to restart an instance or scale it up).
Be sure to set up monitoring; New Relic's free plan is enough.

For the public launch of the project we brought up a second server, an m3.medium (1 Intel Xeon vCPU, 3.75 GB RAM, magnetic storage, tuned swappiness).

Server capacity was calculated based on the Rails application configuration:
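As an illustration of the kind of configuration such a calculation rests on, here is a hypothetical Unicorn setup; the worker count, timeout, and socket path are assumptions sized against the m3.medium's single vCPU and 3.75 GB of RAM, not the original values:

```ruby
# config/unicorn.rb -- a hypothetical example, not the original values.
# The worker count is an assumption sized against a single-vCPU m3.medium:
# each Rails worker is assumed to hold ~300 MB, so several fit in 3.75 GB.
worker_processes 4

# Load the app before forking so workers share memory via copy-on-write.
preload_app true

# Kill requests that hang longer than 30 seconds to free the worker.
timeout 30

# Listen on a Unix socket for the local reverse proxy.
listen '/tmp/unicorn.sock', backlog: 64
```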


For convenient routing of requests between the development and production servers, we used a single t1.micro instance running HAProxy with SSL termination configured.
Using private IPs instead of public ones in the HAProxy configuration significantly reduces latency between servers.
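A minimal haproxy.cfg along these lines might look as follows; the hostnames, ports, and private IPs are placeholders, and the ssl bind syntax requires HAProxy 1.5 or later:

```
# /etc/haproxy/haproxy.cfg -- a minimal sketch, not the actual config.
# Hostnames, ports, and private IPs are placeholders.
frontend https-in
    # SSL termination at the proxy (requires HAProxy 1.5+).
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    # Route by Host header: development traffic to its own backend.
    acl is_dev hdr(host) -i dev.example.com
    use_backend development if is_dev
    default_backend production

backend production
    # Private IPs keep traffic inside the internal network and cut latency.
    server app1 10.0.0.11:8080 check

backend development
    server dev1 10.0.0.12:8080 check
```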

Do not be afraid of premature optimization; now is the right time for it. Every bottleneck found in the service code, every hundredth of a second shaved off response time, lets you serve more clients and save money. Pay close attention to loops: in our case they held the greatest optimization potential. By moving everything unnecessary outside loop bodies, we reduced response time tenfold.
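A hypothetical before/after of the kind of loop fix described above (the model names and the notify helper are invented for illustration, not taken from the original code):

```ruby
# Before: a database query runs inside the loop, once per message.
messages.each do |message|
  author = User.find(message.user_id)  # repeated per-iteration work
  notify(author, message)
end

# After: the invariant work is hoisted out of the loop and done once.
authors = User.where(id: messages.map(&:user_id)).index_by(&:id)
messages.each do |message|
  notify(authors[message.user_id], message)
end
```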

To send emails, use Amazon SES.
It integrates cleanly into Rails via Action Mailer and provides a free quota of 10,000 messages per day.
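One common way to wire SES into Action Mailer is through its SMTP interface. A minimal sketch, assuming the us-east-1 SES endpoint and credentials stored in environment variables:

```ruby
# config/environments/production.rb -- a sketch; the region and the
# ENV variable names are assumptions, not the original configuration.
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
  address:              'email-smtp.us-east-1.amazonaws.com', # SES SMTP endpoint
  port:                 587,
  user_name:            ENV['SES_SMTP_USERNAME'], # SES SMTP credentials
  password:             ENV['SES_SMTP_PASSWORD'],
  authentication:       :login,
  enable_starttls_auto: true
}
```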

Files

Files are stored in S3, but you should not use it to serve static assets (scripts, styles, images): latency is higher than when serving those files from the instance itself.
File access latency:

Content served from S3 can be cached by adding the header {'Cache-Control' => 'max-age=315576000'}.
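With the aws-sdk-s3 gem, the header can be set at upload time; a sketch with a hypothetical bucket, key, and region:

```ruby
require 'aws-sdk-s3'

# A sketch with placeholder bucket and key names; the max-age value
# (~10 years, in seconds) matches the header recommended in the text.
s3 = Aws::S3::Client.new(region: 'eu-west-1')
s3.put_object(
  bucket:        'my-files-bucket',      # placeholder bucket name
  key:           'uploads/avatar.png',
  body:          File.open('avatar.png', 'rb'),
  cache_control: 'max-age=315576000'     # cache for up to ten years
)
```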

To reduce latency, Amazon offers the CloudFront service, which distributes content from S3 across regions.
Traffic served through CloudFront is also cheaper than traffic served directly from S3.
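In Rails, switching asset URLs over to a CloudFront distribution is a one-line change; the distribution domain below is a placeholder:

```ruby
# config/environments/production.rb
# Serve static assets through a CloudFront distribution; the hostname
# below is a placeholder for your own distribution's domain.
config.action_controller.asset_host = 'https://d1234abcd.cloudfront.net'
```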

Database

You can run the database on the same instance as the application server, but that couples the components more tightly and reduces the flexibility of the architecture. For the database we use an RDS db.m1.small instance with magnetic storage, which frees us from worrying about backups and configuration.
Clients had been using the service since the earliest beta, so the database already held data whose safety we had to ensure.
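Connecting Rails to RDS then comes down to pointing database.yml at the RDS endpoint. A sketch with placeholder host, database name, and credentials (the adapter shown is an assumption; MySQL on RDS works the same way):

```yaml
# config/database.yml -- a sketch; the endpoint, database name, and
# credentials are placeholders, and the adapter is an assumption.
production:
  adapter:  postgresql
  host:     mydb.abc123xyz.eu-west-1.rds.amazonaws.com  # RDS endpoint
  database: app_production
  username: <%= ENV['RDS_USERNAME'] %>
  password: <%= ENV['RDS_PASSWORD'] %>
  pool:     10
```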

Regions

Take into account the geography of your potential customers and the market you want to serve: you can try to reach the whole world at once, but the network latency will be serious.
All elements of the architecture must be in the same region.

Download speed between regions:

In practice, latency from St. Petersburg to servers in US-EAST can be six times higher than to servers in EU-WEST.

Building a simple, modular architecture from the very beginning creates high potential for smooth growth and a transition to high loads.

We are happy to hear your advice and comments; feedback from Habr has helped us seriously improve many aspects of our service.


Source: https://habr.com/ru/post/253063/

