
Meet Otto, the heir to Vagrant

Otto is a new product from HashiCorp, the logical heir to Vagrant, designed to simplify developing and deploying programs in the modern world of cloud technologies. Conceptually a new approach to the problem, proven technology under the hood, and open source. A personal DevOps assistant for the developer.



Introduction


The first product of Mitchell Hashimoto, the founder of HashiCorp — the well-known Vagrant — laid the foundation for a whole chain of high-quality tools for automating development and deployment. Packer builds final virtual machine images, whether for VirtualBox, Docker, or Google Cloud. Terraform codifies the complex process of building entire cloud infrastructures as configuration, again without being tied to a specific provider. Consul and Serf handle communication in the cloud: service discovery, failure detection, and so on. Vault is a secure, distributed store for secrets, passwords, and sensitive data, with auditing, access control, and key revocation.

All of these products are open source, very well written, well documented, easy to install (being written in Go, they ship as single binaries), have intuitive interfaces and APIs, and do well the tasks they were created for. Used correctly and together, they make life easier for a modern developer working with microservices, clouds, virtual machines, and containers.
Developers no longer need to be DevOps pros to do simple things in the cloud — these programs try to take the main headache on themselves and/or shift it to a separate DevOps department. But these are still six separate programs that have to be mastered, with documentation to read and study. As a result, the good old copy-paste from the first page of Google results for “how to deploy [ruby/php/etc] to [aws/gae/do/etc]” is still the most common way to deploy a given stack to the cloud.

Moreover, whether it is copy-paste, a Packer or Terraform config someone wrote, or a Vagrantfile — all of them sooner or later become obsolete: versions change, URLs change, protocols get replaced, and so on. On top of that, Vagrant users have been asking since the very first versions for the ability to deploy an application, but a Vagrantfile is a completely different level of abstraction for describing such things.

All this led HashiCorp to the realization that a new approach to the problem was needed.

Codification vs. Fossilization


I am not sure how best to render these terms, so let it be “codification” versus “fossilization”. I was lucky to attend the presentation of Otto (and Nomad, HashiCorp’s scheduler) at the DigitalOcean office in New York, just a couple of days after both products were announced at the HashiConf conference, and these are exactly the terms Hashimoto used to describe Otto’s main idea and its conceptual difference from Vagrant.

What Vagrant, Packer, and Terraform do is “fossilize” the development environment. You write down everything needed to develop your program — all the settings, links, and commands — and this guarantees that even ten years from now, any developer will be able to bring up exactly the same development environment as today.

But what if, ten years from now, the URLs the right compiler or framework is downloaded from have changed? What if the world has switched to a new protocol, YTTP3? Everyone has to update their Vagrantfiles. Today Packer knows how to upload an image to Amazon or DigitalOcean and how to create a VPC, and you have carefully written it all down — but what if in a year Amazon changes its API, introduces a new network security model, or adds new features that automatically make your Packer or Terraform file obsolete?

Otto offers a conceptually new approach: “codifying” the process of creating the development environment and deploying the application. You tell otto what you want (“my Go application should run on AWS, talk to a MySQL database, and be reachable from outside on such-and-such port”), and otto does all the magic for you, knowing better than most how to do it right.

Sounds scary? Let’s look at it in more detail.

Details


Under the hood, otto uses the same Vagrant, Packer, Terraform, Consul, and Vault, and effectively removes the need to even know they exist. If something is not installed, otto itself will politely ask whether to download and install it for you.

From there, the standard workflow is very similar to working with Vagrant:
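As a sketch, the full cycle boils down to the following commands (names as of otto 0.1; later versions may differ):

```
otto compile   # read the Appfile, generate Vagrant/Packer/Terraform configs
otto dev       # bring up the local development environment
otto infra     # create the cloud infrastructure
otto build     # build the deployable application image
otto deploy    # launch the application in the cloud
```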


Otto uses a single file — Appfile — which for simple cases is not even required, since otto can, for example, guess the project type. The file format is HashiCorp’s HCL, which is easy to read and write. A sample Appfile:

```hcl
application {
  name = "otto-getting-started"
  type = "ruby"
}

project {
  name           = "otto-getting-started"
  infrastructure = "otto-getting-started"
}

infrastructure "otto-getting-started" {
  type   = "aws"
  flavor = "simple"
}
```

The “compile” phase (`otto compile`) reads the Appfile and (re)creates a .otto subdirectory.
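The exact contents depend on the application type, but the layout looks roughly like this (an approximation, not an exact 0.1 listing):

```
.otto/
├── appfile/        # the parsed, compiled Appfile
└── compiled/
    └── app/
        ├── dev/    # generated Vagrantfile for the dev environment
        ├── build/  # generated Packer template
        └── deploy/ # generated Terraform configuration
```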



This is an important point that reflects the difference between “codification” and “fossilization”. Every time the Appfile changes or otto is updated, the `otto compile` command regenerates all of these moving parts, creating the necessary configuration for Vagrant, Packer, and Terraform. If the “best practices” for installing dependencies and preparing the environment have changed, your otto environment is updated at the compile stage. If you do not run the compile command, otto works with the already compiled version of the Appfile.

The environment-preparation stage — `otto dev` — effectively replaces `vagrant init` and `vagrant up`. A virtual machine is brought up (so far only Ubuntu hashicorp/precise64, but in the future the OS will also be configurable), the network is configured, SSH keys are set up, dependencies and required packages are installed — in short, all the magic that lets any developer who has just joined the project run `otto dev ssh` and land in a ready development environment.
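In practice the dev stage comes down to a few subcommands (a sketch based on otto 0.1; verify the exact list with `otto dev -h`):

```
otto dev            # create and provision the development VM
otto dev ssh        # open a shell in the ready environment
otto dev address    # print the VM's address, e.g. to open the app in a browser
otto dev destroy    # tear the VM down when no longer needed
```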

When the program is ready, otto can take on deploying the application to the cloud. The developer no longer needs to know all the subtleties of configuring web servers, virtual private networks, and other details. The deployment cycle with otto consists of three steps: building the infrastructure, building the application image, and the deployment itself:


“Infrastructure” here means all the resources tied to a particular cloud provider. `otto infra` creates subnets, configures routing, bastion hosts, gateways, VPCs, and so on — things you would normally have to read about at length before understanding how they work. Otto takes this burden on itself: it “knows” how to work with all of this and does it in an optimal way while following best practices. Infrastructure variants are called “flavors” and describe different setups: “a simple machine in the cloud with SSH access”, “a private network with a public-facing IP”, and so on. Under the hood, all of this is done by Terraform.
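Flavors are selected in the Appfile’s infrastructure block. A sketch for AWS (the flavor name `vpc-public-private` is from the otto 0.1 docs as I recall them and worth verifying):

```hcl
infrastructure "otto-getting-started" {
  type   = "aws"
  # "simple":             a single machine reachable over SSH
  # "vpc-public-private": a VPC with public and private subnets,
  #                       a bastion host and NAT for outbound traffic
  flavor = "vpc-public-private"
}
```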

The next steps — `otto build` and `otto deploy` — build an image ready to run in the cloud and launch an instance. This may be an AMI, a Docker container, or anything else otto comes to support in the future.

Just like that. Now even a PHP website designer can check out a project, run otto, and launch a website in the cloud without knowing anything about how it all works under the hood.

And finally, a typical developer workflow ends with the `destroy` command.
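A sketch of the teardown commands, mirroring the stages above (names as in otto 0.1; check `otto help` for your version):

```
otto deploy destroy   # stop and remove the running application
otto infra destroy    # tear down the cloud infrastructure
otto dev destroy      # remove the local development VM
```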


Once again I want to emphasize that otto will evolve as cloud technologies, languages, and frameworks evolve and change, but the sequence of actions and the way you work with otto will remain the same.

Microservices


Modern cloud applications often use a microservice architecture in one form or another; each application frequently depends on others, and bringing up all the dependencies correctly can be very hard. Otto tries to take on this problem as well, using the concept of dependencies, declared in the Appfile as a URL pointing to the dependency — which is itself an otto project. For example, if the project depends on MongoDB:

```hcl
application {
  name = "otto-getting-started"
  type = "ruby"

  dependency {
    source = "github.com/hashicorp/otto/examples/mongodb"
  }
}
```

Once logged into the dev environment, we can reach MongoDB at the DNS address `mongodb.service.consul`.
All of this should, in theory, greatly simplify the development of services with many tangled dependencies.
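For example, from inside the dev environment the dependency can be reached by its Consul DNS name (a sketch; assumes the mongo client is available in the VM):

```
otto dev ssh
# inside the VM:
ping -c1 mongodb.service.consul        # resolves via Consul DNS
mongo --host mongodb.service.consul    # connect to the dependency
```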

Current limitations


Otto was released only a week ago, it is at version 0.1, and it does not yet support many things. At the moment there is support (i.e. the magic) for Go, PHP, Docker (for dependencies), Node.js, and Ruby, albeit in a very limited form. Deployment is currently supported only for Amazon, but other providers will be added soon. There is room for optimism here, since otto does not do this on its own but uses Terraform and Packer, which support Azure, CloudFlare, DigitalOcean, GAE, Heroku, OpenStack, and much more.

You can specify your own custom Vagrantfile or Terraform configs at every stage, which makes otto very extensible and applicable even to very non-standard and sophisticated setups.

Conclusions


At the time of this writing, otto is still a novelty, although under the hood it uses well-proven tools. Whether the idea of otto — a magical DevOps assistant for developers — proves itself, time will tell.

Personally, HashiCorp’s work has long left me with one impression: they know what they are doing, and they move toward their goal slowly but surely. Mitchell said in his talk that the idea of Otto had existed for a long time, but he understood that such a project could not be created from scratch. So, year after year, they prepared the ground — the building blocks for its implementation. Incidentally, Nomad is also one of those blocks, and very soon it will be supported in otto as well.

Moreover, development is very active, HashiCorp’s code is of very high quality, Hashimoto’s productivity is legendary, and the last few years have shown impressive progress. HashiCorp is building a whole ecosystem for convenient work in the cloud.

So keep your finger on the pulse.

Links


Otto has a great website and documentation: www.ottoproject.io
Hashicorp website: hashicorp.com

Source: https://habr.com/ru/post/268497/

