
Terraform provider Selectel



We have launched the official Terraform provider for working with Selectel. It allows users to fully manage their resources through the Infrastructure-as-Code approach.

Currently, the provider supports management of Virtual Private Cloud (hereinafter VPC) resources. In the future, we plan to add management of resources of other Selectel services.

As you already know, the VPC service is built on top of OpenStack. However, since OpenStack does not provide native tools for running a public cloud, we implemented the missing functionality in a set of additional APIs that simplify the management of complex composite objects and make operations more convenient. Some of the functionality available in OpenStack cannot be used directly and is instead available through our API.
The Selectel Terraform provider can currently manage the following VPC resources:


The provider uses our public Go library to work with the VPC API. Both the library and the provider itself are open source and are developed on GitHub:


To manage other cloud resources, such as virtual machines, disks, and Kubernetes clusters, you can use the OpenStack Terraform provider. Official documentation for both providers is available at the following links:


Getting started


To get started, install Terraform (instructions and links to installation packages are available on the official website).

The provider requires a Selectel API key, which is created in the account's control panel.

Manifests for working with Selectel are created with Terraform, or you can rely on the set of ready-made examples available in our GitHub repository: terraform-examples.

The repository with examples is divided into two directories:


After installing Terraform, creating the Selectel API key, and familiarizing yourself with the examples, proceed to the practical examples.

An example of creating a server with a local disk


Consider an example of creating a project, a user with a role, and a virtual machine with a local disk: terraform-examples/examples/vpc/server_local_root_disk.

The vars.tf file describes all the parameters that will be used when calling modules. Some of them have default values, for example, a server will be created in the ru-3a zone with the following configuration:

 variable "server_vcpus" {
   default = 4
 }

 variable "server_ram_mb" {
   default = 8192
 }

 variable "server_root_disk_gb" {
   default = 8
 }

 variable "server_image_name" {
   default = "Ubuntu 18.04 LTS 64-bit"
 }
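If needed, these defaults can be overridden without editing vars.tf, for example through a terraform.tfvars file in the same directory (a sketch: only the variable names come from the example above, the values here are illustrative):

```hcl
# terraform.tfvars is read by Terraform automatically; values here
# override the defaults declared in vars.tf (illustrative values)
server_vcpus        = 2
server_ram_mb       = 4096
server_root_disk_gb = 16
server_image_name   = "Ubuntu 18.04 LTS 64-bit"
```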

The main.tf file initializes the Selectel provider:

 provider "selectel" {
   token = "${var.sel_token}"
 }

Also in this file is the default value for the SSH key that will be installed on the server:

 module "server_local_root_disk" {
   ...
   server_ssh_key = "${file("~/.ssh/id_rsa.pub")}"
 }

If necessary, you can specify a different public key. The key does not have to be given as a file path; you can also pass its value as a string.
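For instance, the same argument could be set with the key passed inline as a string (a sketch; the key value is a placeholder):

```hcl
module "server_local_root_disk" {
  ...
  # public key passed as a literal string instead of file()
  server_ssh_key = "ssh-rsa AAAAB3NzaC1yc2E... user@example"
}
```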

Next, in this file, the project_with_user and server_local_root_disk modules are run, which manage the necessary resources.

Let us consider these modules in more detail.

Creating a project and a user with a role


The first module creates a project and a user with a role in this project: terraform-examples/modules/vpc/project_with_user.

The created user will be able to log in to OpenStack and manage resources in the project. The module is simple and manages all three entities:
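Schematically, the module boils down to something like the following (a sketch: the resource names are those of the Selectel provider, the argument values are illustrative):

```hcl
# project in which the user will work
resource "selectel_vpc_project_v2" "project_1" {
  name = "my-project"
}

# user that will log in to OpenStack
resource "selectel_vpc_user_v2" "user_1" {
  name     = "tf_user"
  password = "${var.user_password}"
}

# role binding the user to the project
resource "selectel_vpc_role_v2" "role_1" {
  project_id = "${selectel_vpc_project_v2.project_1.id}"
  user_id    = "${selectel_vpc_user_v2.user_1.id}"
}
```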


Creating a virtual server with a local disk


The second module manages the OpenStack objects required to create a server with a local disk.

Attention should be paid to some of the arguments that are specified in this module for the openstack_compute_instance_v2 resource:

 resource "openstack_compute_instance_v2" "instance_1" {
   ...
   lifecycle {
     ignore_changes = ["image_id"]
   }
   vendor_options {
     ignore_resize_confirmation = true
   }
 }

The ignore_changes argument tells Terraform to ignore changes of the id attribute of the image used to create the virtual machine. In the VPC service, most public images are updated automatically about once a week, and their id changes as well. This is due to a peculiarity of the OpenStack Glance component, in which images are considered immutable entities.

If a server or disk is created or modified with a public image id used as the image_id argument, then after that image is updated, re-running the Terraform manifest will recreate the server or disk. The ignore_changes argument avoids this situation.

Note: the ignore_changes argument appeared in Terraform a long time ago: pull #2525.

The ignore_resize_confirmation argument is needed to successfully resize the local disk, the number of cores, or the memory of the server. Such changes go through the OpenStack Nova component as a resize request. By default, after a resize request Nova puts the server into the verify_resize status and waits for the user to confirm it. This behavior can be changed, however, so that Nova does not wait for additional action from the user.

This argument lets Terraform skip waiting for the verify_resize status and instead expect the server to reach the active status after its parameters change. The argument is available since version 1.10.0 of the OpenStack Terraform provider: pull #422.

Resource creation


Before launching the manifests, note that our example runs two different providers, and the OpenStack provider depends on resources of the Selectel provider: without creating a user in a project, it is impossible to manage objects belonging to it. Unfortunately, for the same reason we cannot simply run terraform apply inside our example. We first have to apply the project_with_user module and only then everything else.

Note: this problem has not yet been resolved in Terraform; you can follow the discussion on GitHub in issue #2430 and issue #4149.

To create the resources, go to the terraform-examples/examples/vpc/server_local_root_disk directory; its contents should look like this:

 $ ls
 README.md  main.tf  vars.tf

We initialize the modules with the command:

 $ terraform init 

The output shows that Terraform downloads the latest versions of the providers used and checks all the modules described in the example.

First, we apply the project_with_user module. It requires manually providing values for the variables that have no defaults: sel_account, sel_token, and user_password.

The values for the first two variables should be taken from the control panel.

For the last variable, you can use any password.
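Instead of passing these values through environment variables each time, they can also be placed in a terraform.tfvars file (a sketch; keep such a file out of version control, since it contains credentials):

```hcl
# terraform.tfvars (illustrative placeholders — substitute your own values)
sel_account   = "SEL_ACCOUNT"
sel_token     = "SEL_TOKEN"
user_password = "USER_PASSWORD"
```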

To use the module, replace the SEL_ACCOUNT, SEL_TOKEN, and USER_PASSWORD placeholders with your values and run the command:

 $ env \
   TF_VAR_sel_account=SEL_ACCOUNT \
   TF_VAR_sel_token=SEL_TOKEN \
   TF_VAR_user_password=USER_PASSWORD \
   terraform apply -target=module.project_with_user

After launching the command, Terraform will show you what resources it wants to create and will require confirmation:

 Plan: 3 to add, 0 to change, 0 to destroy.

 Do you want to perform these actions?
   Terraform will perform the actions described above.
   Only 'yes' will be accepted to approve.

   Enter a value: yes

Once the project, user and role are created, you can start creating the remaining resources:

 $ env \
   TF_VAR_sel_account=SEL_ACCOUNT \
   TF_VAR_sel_token=SEL_TOKEN \
   TF_VAR_user_password=USER_PASSWORD \
   terraform apply

When creating resources, pay attention to the Terraform output with an external IP address where the created server will be available:

 module.server_local_root_disk.openstack_networking_floatingip_associate_v2.association_1: Creating...
   floating_ip: "" => "xxxx"

You can work with the created virtual machine via SSH using the specified IP.

Editing Resources


In addition to creating resources through Terraform, they can also be modified.

For example, let's increase the number of cores and the amount of memory of our server by changing the values of the server_vcpus and server_ram_mb parameters in examples/vpc/server_local_root_disk/main.tf:

 - server_vcpus = "${var.server_vcpus}"
 - server_ram_mb = "${var.server_ram_mb}"
 + server_vcpus = 8
 + server_ram_mb = 10240

Then we check what changes this will lead to using the following command:

 $ env \
   TF_VAR_sel_account=SEL_ACCOUNT \
   TF_VAR_sel_token=SEL_TOKEN \
   TF_VAR_user_password=USER_PASSWORD \
   terraform plan

The plan shows that Terraform will change the openstack_compute_instance_v2 and openstack_compute_flavor_v2 resources.

Please note that this will result in a reboot of the created virtual machine.

To apply the new virtual machine configuration, use the terraform apply command that we already ran earlier.

All created objects will be displayed in the VPC control panel :



In our examples repository, you can also see the manifests for creating virtual machines with network drives.

Kubernetes cluster creation example


Before moving on to the next example, let's clean up the previously created resources. To do this, in the root of the terraform-examples/examples/vpc/server_local_root_disk example, run the command that deletes the OpenStack objects:

 $ env \
   TF_VAR_sel_account=SEL_ACCOUNT \
   TF_VAR_sel_token=SEL_TOKEN \
   TF_VAR_user_password=USER_PASSWORD \
   terraform destroy -target=module.server_local_root_disk

Then run the command that removes the objects managed through the Selectel VPC API:

 $ env \
   TF_VAR_sel_account=SEL_ACCOUNT \
   TF_VAR_sel_token=SEL_TOKEN \
   TF_VAR_user_password=USER_PASSWORD \
   terraform destroy -target=module.project_with_user

In both cases, you will need to confirm the deletion of all objects:

 Do you really want to destroy all resources?
   Terraform will destroy all your managed infrastructure, as shown above.
   There is no undo. Only 'yes' will be accepted to confirm.

   Enter a value: yes

The next example is located in the terraform-examples/examples/vpc/kubernetes_cluster directory.

This example creates a project, a user with a role in the project, and brings up a single Kubernetes cluster. In the vars.tf file you can see default values such as the number of nodes, their characteristics, the Kubernetes version, and so on.

As in the first example, we first initialize the modules and create the resources of the project_with_user module, and then everything else:

 $ terraform init

 $ env \
   TF_VAR_sel_account=SEL_ACCOUNT \
   TF_VAR_sel_token=SEL_TOKEN \
   TF_VAR_user_password=USER_PASSWORD \
   terraform apply -target=module.project_with_user

 $ env \
   TF_VAR_sel_account=SEL_ACCOUNT \
   TF_VAR_sel_token=SEL_TOKEN \
   TF_VAR_user_password=USER_PASSWORD \
   terraform apply

Creation and management of Kubernetes clusters is handled through the OpenStack Magnum component. You can learn more about working with a cluster in one of our previous articles, as well as in the knowledge base.

While the cluster is being prepared, disks and virtual machines are created and all the necessary components are installed. Preparation takes about 4 minutes, during which Terraform displays messages like this:

 module.kubernetes_cluster.openstack_containerinfra_cluster_v1.cluster_1: Still creating... (3m0s elapsed) 

After the installation is complete, Terraform will notify you that the cluster is ready and display its identifier:

 module.kubernetes_cluster.openstack_containerinfra_cluster_v1.cluster_1: Creation complete after 4m20s (ID: 3c8...)

 Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

To manage the created Kubernetes cluster with the kubectl utility, you need to obtain the cluster access file. To do this, open the project created through Terraform in the list of projects of your account:



Next, follow the link of the form xxxxxx.selvpc.ru displayed below the project name:



For login, use the name and password of the user created through Terraform. If you have not changed vars.tf or main.tf of our example, the user name will be tf_user. As the password, use the value of the TF_VAR_user_password variable that was specified when running terraform apply earlier.

Inside the project, you need to go to the Kubernetes tab:



Here you will see the cluster created through Terraform. The file for kubectl can be downloaded on the "Access" tab:



The same tab contains installation instructions for kubectl and instructions for using the downloaded config.yaml.
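Schematically, assuming config.yaml was downloaded to the current directory, it comes down to something like:

```shell
# point kubectl at the downloaded access file (the path is an assumption)
export KUBECONFIG=$PWD/config.yaml
kubectl get nodes
```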

After installing kubectl and setting the KUBECONFIG environment variable, you can work with Kubernetes:

 $ kubectl get pods --all-namespaces
 NAMESPACE              NAME                                             READY  STATUS   RESTARTS  AGE
 kube-system            coredns-9578f5c87-g6bjf                          1/1    Running  0         8m
 kube-system            coredns-9578f5c87-rvkgd                          1/1    Running  0         6m
 kube-system            heapster-866fcbc879-b6998                        1/1    Running  0         8m
 kube-system            kube-dns-autoscaler-689688988f-8cxhf             1/1    Running  0         8m
 kube-system            kubernetes-dashboard-7bdb5d4cd7-jcjq9            1/1    Running  0         8m
 kube-system            monitoring-grafana-84c97bb64d-tc64b              1/1    Running  0         8m
 kube-system            monitoring-influxdb-7c8ccc75c6-dzk5f             1/1    Running  0         8m
 kube-system            node-exporter-tf-cluster-rz6nggvs4va7-minion-0   1/1    Running  0         8m
 kube-system            node-exporter-tf-cluster-rz6nggvs4va7-minion-1   1/1    Running  0         8m
 kube-system            openstack-cloud-controller-manager-8vrmp         1/1    Running  3         8m
 prometeus-monitoring   grafana-76bcb7ffb8-4tm7t                         1/1    Running  0         8m
 prometeus-monitoring   prometheus-75cdd77c5c-w29gb                      1/1    Running  0         8m

The number of nodes in the cluster can easily be changed through Terraform.
In the main.tf file, this value is specified as:

 cluster_node_count = "${var.cluster_node_count}" 

This value is substituted from vars.tf:



 variable "cluster_node_count" { default = 2 } 

You can either change the default value in vars.tf or specify the required value directly in main.tf:

 - cluster_node_count = "${var.cluster_node_count}"
 + cluster_node_count = 3
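Alternatively, the value can be overridden on the command line without editing any files, via the -var flag (a sketch using the same credential variables as before):

```shell
# -var takes precedence over the default declared in vars.tf
env \
  TF_VAR_sel_account=SEL_ACCOUNT \
  TF_VAR_sel_token=SEL_TOKEN \
  TF_VAR_user_password=USER_PASSWORD \
  terraform apply -var="cluster_node_count=3"
```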

To apply the changes, as in the first example, use the terraform apply command:

 $ env \
   TF_VAR_sel_account=SEL_ACCOUNT \
   TF_VAR_sel_token=SEL_TOKEN \
   TF_VAR_user_password=USER_PASSWORD \
   terraform apply

When the number of nodes changes, the cluster will remain available. After adding a node through Terraform, you can use it without additional configuration:

 $ kubectl get nodes
 NAME                               STATUS                     ROLES   AGE  VERSION
 tf-cluster-rz6nggvs4va7-master-0   Ready,SchedulingDisabled   master  8m   v1.12.4
 tf-cluster-rz6nggvs4va7-minion-0   Ready                      <none>  8m   v1.12.4
 tf-cluster-rz6nggvs4va7-minion-1   Ready                      <none>  8m   v1.12.4
 tf-cluster-rz6nggvs4va7-minion-2   Ready                      <none>  3m   v1.12.4

Conclusion


In this article, we covered the main ways of working with the Virtual Private Cloud through Terraform. We will be glad if you use the official Selectel Terraform provider and share your feedback.

Any bugs found in the Selectel Terraform provider can be reported via GitHub Issues.

Source: https://habr.com/ru/post/445162/

