
Amazon AWS: how do you optimize resources?

Good day!

I have been using Amazon AWS for quite a while, scaling compute capacity up when needed, and on the whole it satisfies me. In short, both permanent servers and spot instances have been launched for various tasks, using different instance types and automatic start/stop schedules.
To a first approximation, all of this is quite comfortable to manage with up to a dozen servers, as long as I personally understand the whole kitchen. The same goes for accounting, and for export/archiving/disposal of stored data.
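The start/stop scheduling mentioned above can be sketched with boto3. This is only an illustration under assumptions not stated in the post: the tag key `Schedule=office-hours` and the 09:00–19:00 window are hypothetical conventions, not the author's actual setup.

```python
# Sketch: stop/start EC2 instances outside working hours.
# The "Schedule=office-hours" tag and the 9:00-19:00 window are assumptions.
from datetime import datetime, time

OFFICE_START = time(9, 0)   # assumed start of the users' working day
OFFICE_END = time(19, 0)    # assumed end of the users' working day

def desired_state(now: time) -> str:
    """Return 'running' during office hours, 'stopped' otherwise."""
    return "running" if OFFICE_START <= now < OFFICE_END else "stopped"

def main():
    import boto3  # pip install boto3
    ec2 = boto3.client("ec2")
    # Only touch instances that opted into the schedule via the (assumed) tag.
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Schedule", "Values": ["office-hours"]}]
    )
    ids = [i["InstanceId"]
           for r in resp["Reservations"] for i in r["Instances"]]
    if not ids:
        return
    if desired_state(datetime.now().time()) == "stopped":
        ec2.stop_instances(InstanceIds=ids)
    else:
        ec2.start_instances(InstanceIds=ids)

if __name__ == "__main__":
    main()
```

Run from cron (or a scheduled task) every half hour; the tag filter keeps always-on servers out of the schedule.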

I wonder what your “gentleman's set” is for optimizing resources and, ultimately, costs?
Thank you for your comments and recommendations!

...

The question is about managing more than a hundred servers*, of which, say, the same dozen may be running at any one time, while each server drags along its own "ballast": AMIs, snapshot disks and other paraphernalia, plus Elastic IPs that migrate or can be "pinned" as permanent. You need to know all of this (or at least know where to look).
Still, size does matter: with "10" servers, significant savings come from running them only during users' working hours (up to ~60%), or from replicating data to a more powerful spot instance for fast processing and then terminating it into non-existence. With "100+", the question becomes far more substantial.
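At the "100+" scale, the first practical step is usually an inventory of the "ballast" that keeps billing after instances are gone. A minimal boto3 sketch, assuming the two cheapest wins: unattached EBS volumes and idle Elastic IPs (the summary function is pure, so it works on any `describe_*` payload):

```python
# Sketch: report "ballast" that costs money with no instance behind it --
# unattached EBS volumes and unassociated Elastic IPs. boto3 is assumed.
def summarize_ballast(volumes, addresses):
    """Pure summary over describe_volumes / describe_addresses payloads."""
    unattached = [v["VolumeId"] for v in volumes if v["State"] == "available"]
    # EC2-Classic addresses report InstanceId, VPC addresses AssociationId;
    # an address with neither is sitting idle (and billed hourly).
    idle_ips = [a["PublicIp"] for a in addresses
                if not (a.get("AssociationId") or a.get("InstanceId"))]
    return {"unattached_volumes": unattached, "idle_elastic_ips": idle_ips}

def main():
    import boto3  # pip install boto3
    ec2 = boto3.client("ec2")
    vols = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}])["Volumes"]
    addrs = ec2.describe_addresses()["Addresses"]
    for kind, items in summarize_ballast(vols, addrs).items():
        print(kind, len(items), items)

if __name__ == "__main__":
    main()
```

The same pattern extends to orphaned snapshots and deregistered AMIs; which ones are safe to delete is, of course, a policy question, not a script question.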

Perhaps you have implemented AMI/snapshot storage in Glacier, or have some clever schemes? The question is actually quite interesting: if I am not mistaken, Glacier lets you store archives/data outside the AWS console, and the idea of feeding AMIs to Glacier at $0.01 per GB is very tempting.
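AMIs cannot be placed in Glacier directly, but image exports and backup archives kept in S3 can be aged into Glacier with a bucket lifecycle rule. A sketch via boto3; the bucket name, prefix, and day counts are assumptions for illustration:

```python
# Sketch: transition archived images/backups in an S3 bucket to Glacier.
# Bucket name, prefix, and the 30-day threshold are hypothetical.
def glacier_rule(prefix="ami-exports/", days=30):
    """Build one S3 lifecycle rule moving objects under `prefix` to Glacier."""
    return {
        "ID": "archive-to-glacier",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
    }

def main():
    import boto3  # pip install boto3
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-backup-bucket",  # hypothetical bucket
        LifecycleConfiguration={"Rules": [glacier_rule()]},
    )

if __name__ == "__main__":
    main()
```

Retrieval from Glacier takes hours and costs extra, so this only pays off for archives you almost never need back, which matches the "disposal of stored data" step above.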

I would also like to think about the prospect of expansion, and about dividing up the labor; IAM was implemented for a reason. What if you were handed (or were planning for) a certain "100+" server pool, each server with its own ballast?
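The IAM-based division of labor can be sketched as a tag-scoped policy: each operator may only start/stop the instances carrying their team's tag. The `Team` tag key and the action list below are illustrative assumptions, not a prescription:

```python
# Sketch: an IAM policy limiting start/stop/reboot to EC2 instances tagged
# Team=<team> -- one way to split a "100+" pool between operators.
# The "Team" tag key is a hypothetical convention.
import json

def team_policy(team):
    """Allow start/stop/reboot only on instances whose Team tag matches."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances",
                       "ec2:RebootInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Team": team}
            },
        }],
    }

print(json.dumps(team_policy("data-processing"), indent=2))
```

Attach the generated document to a group per team; broad read-only access (`ec2:Describe*`) would be granted separately, since describe calls cannot be tag-scoped.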

...Or is it "Elasticfox", scripts and spreadsheets plus a database for collecting information from clients, so the loose ends can be tied up afterwards?

Or maybe it makes radical sense to move to a competitor for permanent residence to minimize costs? That, however, is already a migration task and would need serious justification. Does anyone have such experience?

A couple of tables follow; only RAM and CPU were compared, stretched somewhat "by the ears" to fit Amazon's pricing.

Clouds overseas:

| Type | Cloud | Name (instance type) | RAM, GiB | CPU | Windows usage (per hour) |
|------|-------|----------------------|----------|-----|--------------------------|
| S | Win Azure | Small (A1) | 1.70 | 1 | $0.090 |
| S | AWS | Small instance | 1.70 | 1 | $0.091 |
| S | HP Cloud | Small | 2.00 | 2 | $0.120 |
| S | Rackspace | 2 GB | 2.00 | 2 | $0.120 |
| S | SoftLayer | 2 Core + 2 GB RAM | 2.00 | 2 | $0.250 |
| M | Win Azure | Medium (A2) | 3.50 | 2 | $0.180 |
| M | AWS | Medium instance | 3.75 | 2 | $0.182 |
| M | HP Cloud | Medium | 4.00 | 2 | $0.240 |
| M | Rackspace | 4 GB | 4.00 | 2 | $0.240 |
| M | SoftLayer | 4 Core + 4 GB RAM | 4.00 | 4 | $0.390 |
| L | Win Azure | Large (A3) | 7.00 | 4 | $0.360 |
| L | AWS | Large instance | 7.50 | 4 | $0.364 |
| L | SoftLayer | 4 Core + 8 GB RAM | 8.00 | 4 | $0.440 |
| L | HP Cloud | Large | 8.00 | 4 | $0.480 |
| L | Rackspace | 8 GB | 8.00 | 4 | $0.480 |


Domestic clouds:

| Type | Cloud | Name (instance type) | RAM, GiB | CPU | Windows usage (per hour) |
|------|-------|----------------------|----------|-----|--------------------------|
| S | selectel.ru | Small | 1.70 | 1 | $0.063 |
| S | oversun.ru | Small | 2.00 | 2.6 | $0.070 |
| S | AWS | Small instance | 1.70 | 1 | $0.091 |
| S | scalaxy.ru | Small | 1.50 | 41 | $0.155 |
| M | selectel.ru | Medium | 3.75 | 2 | $0.132 |
| M | oversun.ru | Medium | 4.00 | 2.6 | $0.180 |
| M | AWS | Medium instance | 3.75 | 2 | $0.182 |
| M | scalaxy.ru | Medium | 4.00 | 41 | $0.321 |
| L | selectel.ru | Large | 7.50 | 4 | $0.264 |
| L | oversun.ru | Large | 8.00 | 5.2 | $0.323 |
| L | AWS | Large instance | 7.50 | 4 | $0.364 |
| L | scalaxy.ru | Large | 8.00 | 41 | $0.588 |

Data is sorted by type and price.
As the tables show, only "Windows" servers were compared, since they make up 99% of my fleet.
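Since the tiers differ in RAM, the raw hourly prices above are easier to compare when normalized to dollars per GiB of RAM per hour. A small script doing that for a sample of the "M" tier; the figures are copied from the comparison tables in this post:

```python
# Normalize hourly Windows prices to $/GiB-hour of RAM for the "M" tier.
# (RAM GiB, $/hour) pairs copied from the comparison tables above.
medium_tier = {
    "Win Azure": (3.50, 0.180),
    "AWS": (3.75, 0.182),
    "hpcloud": (4.00, 0.240),
    "selectel.ru": (3.75, 0.132),
    "oversun.ru": (4.00, 0.180),
}

def price_per_gib(entries):
    """Return {provider: hourly $ per GiB RAM}, cheapest first."""
    ranked = {name: round(price / ram, 4)
              for name, (ram, price) in entries.items()}
    return dict(sorted(ranked.items(), key=lambda kv: kv[1]))

for name, ppg in price_per_gib(medium_tier).items():
    print(f"{name}: ${ppg}/GiB-hour")
```

On these numbers selectel.ru comes out cheapest per GiB; of course this metric ignores CPU, disk, and traffic, so it is only one axis of the comparison.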

PS:
I apologize in advance for the wall of text, but the bare question "What is your 'gentleman's set' in AWS?" on its own somehow also sounds like a swear word.
* All useful opinions are of course welcome; "100+" is just a figurative number.

Source: https://habr.com/ru/post/178091/
