
OpenStack Kilo, deployed by hand

Hello to all Habr readers!

In the last article, I talked about how you can quickly deploy a test environment using DevStack. In this post, I will tell you how to deploy your own OpenStack "cloud" on two machines (Controller, Compute) in the following configuration:


In general, this system will allow us to run multiple virtual machines (as many as the compute node's memory and CPU allow), create virtual networks, create virtual disks and attach them to VMs, and of course manage all of this through a convenient dashboard.
Caution! Lots of long listings of commands and configs ahead!


Let me say right away:

Do not mindlessly copy-paste. That will, of course, get the OpenStack environment from this guide up and running, but it will not teach you how to apply this knowledge in the field.

What will we use?


OS: Ubuntu 14.04 (You can use CentOS, but the guide will be based on Ubuntu).
OpenStack Edition: Kilo

Preparation

Network


The original manual uses 4 networks:
Management - 10.0.0.0/24 - VLAN 10
Tunnel - 10.0.1.0/24 - VLAN 11
Storage - 10.0.2.0/24 - VLAN 12
External - 192.168.1.0/24

In our case, the external network looks out into the home network, but in principle this interface could just as well face the world wide web; it all depends on what you are deploying the cloud for.

It would be very useful to have a working DNS server. I used dnsmasq.
 # cat /etc/hosts
 10.0.0.11   controller
 10.0.0.31   compute1
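Since dnsmasq reads /etc/hosts by default, a minimal configuration is enough. The sketch below is my assumption rather than part of the original setup: it presumes dnsmasq runs on the host answering at 10.0.0.1 and that the controller/compute1 entries above are present in that host's /etc/hosts as well.
 # cat /etc/dnsmasq.conf
 domain-needed
 bogus-priv
 expand-hosts
 listen-address=10.0.0.1
 # service dnsmasq restart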


Configuring interfaces
on the controller:
 # cat /etc/network/interfaces
 auto p2p1.10
 iface p2p1.10 inet static
     address 10.0.0.11
     netmask 255.255.255.0
     gateway 10.0.0.1
     dns-nameservers 10.0.0.1

 auto p2p1.11
 iface p2p1.11 inet static
     address 10.0.1.11
     netmask 255.255.255.0

 auto p2p1.12
 iface p2p1.12 inet static
     address 10.0.2.11
     netmask 255.255.255.0

 # external network
 auto p3p1
 iface p3p1 inet manual
     up ip link set dev $IFACE up
     down ip link set dev $IFACE down



on the compute node:
 # cat /etc/network/interfaces
 auto p2p1.10
 iface p2p1.10 inet static
     address 10.0.0.31
     netmask 255.255.255.0
     gateway 10.0.0.1
     dns-nameservers 10.0.0.1

 auto p2p1.11
 iface p2p1.11 inet static
     address 10.0.1.31
     netmask 255.255.255.0

 auto p2p1.12
 iface p2p1.12 inet static
     address 10.0.2.31
     netmask 255.255.255.0



We check that both machines see each other and can reach the network.
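A quick way to check (plain ping, nothing OpenStack-specific yet):
 # ping -c 3 compute1        # from the controller
 # ping -c 3 controller      # from the compute node
 # ping -c 3 openstack.org   # outbound connectivity and DNS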

NTP

On the controller:
 # apt-get install ntp -y
 # cat /etc/ntp.conf
 server ntp.oceantelecom.ru iburst
 restrict -4 default kod notrap nomodify
 restrict -6 default kod notrap nomodify
 # service ntp stop
 # ntpdate ntp.oceantelecom.ru
 # service ntp start


On compute node:
 # apt-get install ntp -y
 # cat /etc/ntp.conf
 server controller iburst
 # service ntp stop
 # ntpdate controller
 # service ntp start


Kilo repository


 # apt-get install ubuntu-cloud-keyring
 # echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list


Kilo is a fairly young release (April 2015). What I liked most in this release is the Russian localization of the Horizon interface.
More details can be read here .

Updating:
 # apt-get update && apt-get dist-upgrade -y 


SQL + RabbitMQ

The SQL server can be MySQL, PostgreSQL, Oracle, or any other database supported by SQLAlchemy. We will install MariaDB, as in the official manual.
 # apt-get install mariadb-server python-mysqldb -y
 # cat /etc/mysql/conf.d/mysqld_openstack.cnf
 [mysqld]
 bind-address = 10.0.0.11
 default-storage-engine = innodb
 innodb_file_per_table
 collation-server = utf8_general_ci
 init-connect = 'SET NAMES utf8'
 character-set-server = utf8
 # service mysql restart
 # mysql_secure_installation

If you have a spare HDD with decent performance, the database files can be placed on it; that will not be superfluous if you plan to grow the test bed with more compute nodes.

And of course RabbitMQ:
 # apt-get install rabbitmq-server
 # rabbitmq-plugins enable rabbitmq_management
 # service rabbitmq-server restart

We install RabbitMQ and enable the administrative web GUI, which is convenient for keeping an eye on the queues.

Create a user and grant it permissions:
 rabbitmqctl add_user openstack RABBIT_PASS
 rabbitmqctl set_permissions openstack ".*" ".*" ".*"
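To make sure the user and its permissions are really in place, you can query rabbitmqctl (the management web GUI enabled above normally listens on port 15672):
 # rabbitmqctl list_users
 # rabbitmqctl list_permissions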


Keystone

Keystone is the authorization center of OpenStack: all authentication and authorization goes through it. Keystone stores its data in a SQL database and also uses memcached.

Prepare the database:
 # mysql -u root -p
 CREATE DATABASE keystone;
 GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
 GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Naturally, do not forget to substitute your password, as elsewhere.

Disable automatic startup of the keystone service and install all the necessary components:
 # echo "manual" > /etc/init/keystone.override # apt-get install keystone python-openstackclient apache2 libapache2-mod-wsgi memcached python-memcache 


In the /etc/keystone/keystone.conf config we write the following lines:
 [DEFAULT]
 admin_token = ADMIN_TOKEN

 [database]
 connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone

 [memcache]
 servers = localhost:11211

 [token]
 provider = keystone.token.providers.uuid.Provider
 driver = keystone.token.persistence.backends.memcache.Token

 [revoke]
 driver = keystone.contrib.revoke.backends.sql.Revoke


ADMIN_TOKEN is generated with "openssl rand -hex 16".
Synchronize the local database with the SQL server:
 # su -s /bin/sh -c "keystone-manage db_sync" keystone 


Configuring Apache:
Apache configs
 # cat /etc/apache2/apache2.conf
 ...
 ServerName controller
 ...

 # cat /etc/apache2/sites-available/wsgi-keystone.conf
 Listen 5000
 Listen 35357

 <VirtualHost *:5000>
     WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone display-name=%{GROUP}
     WSGIProcessGroup keystone-public
     WSGIScriptAlias / /var/www/cgi-bin/keystone/main
     WSGIApplicationGroup %{GLOBAL}
     WSGIPassAuthorization On
     <IfVersion >= 2.4>
       ErrorLogFormat "%{cu}t %M"
     </IfVersion>
     LogLevel info
     ErrorLog /var/log/apache2/keystone-error.log
     CustomLog /var/log/apache2/keystone-access.log combined
 </VirtualHost>

 <VirtualHost *:35357>
     WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone display-name=%{GROUP}
     WSGIProcessGroup keystone-admin
     WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
     WSGIApplicationGroup %{GLOBAL}
     WSGIPassAuthorization On
     <IfVersion >= 2.4>
       ErrorLogFormat "%{cu}t %M"
     </IfVersion>
     LogLevel info
     ErrorLog /var/log/apache2/keystone-error.log
     CustomLog /var/log/apache2/keystone-access.log combined
 </VirtualHost>

 # ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled
 # mkdir -p /var/www/cgi-bin/keystone
 # curl http://git.openstack.org/cgit/openstack/keystone/plain/httpd/keystone.py?h=stable/kilo | tee /var/www/cgi-bin/keystone/main /var/www/cgi-bin/keystone/admin
 # chown -R keystone:keystone /var/www/cgi-bin/keystone
 # chmod 755 /var/www/cgi-bin/keystone/*
 # service apache2 restart
 # rm -f /var/lib/keystone/keystone.db


We change ServerName to the name of our controller.
We take the working WSGI scripts from the OpenStack repository.

Set up the endpoints. It is thanks to the endpoints that OpenStack knows where each service runs and how to reach it.

Add environment variables in order not to specify them each time in the keystone parameters:
 # export OS_TOKEN=ADMIN_TOKEN
 # export OS_URL=http://controller:35357/v2.0


Now we create the service:
 # openstack service create --name keystone --description "OpenStack Identity" identity 

And create the API endpoint:
 # openstack endpoint create --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region RegionOne identity 

RegionOne can be changed to any readable name; I will leave it as is so as not to complicate things.
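A quick sanity check at this point (the temporary OS_TOKEN/OS_URL variables are still in effect): the identity service and the endpoint we just created should show up in the listings.
 # openstack service list
 # openstack endpoint list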

Create projects, users and roles.

We will keep following the official manual, so that everything matches: admin and demo.
 # openstack project create --description "Admin Project" admin # openstack user create --password-prompt admin # openstack role create admin # openstack role add --project admin --user admin admin 

Come up with the admin password yourself. In order: we created the "Admin Project" project, the admin user and the admin role, and then tied the project and the user together through the role.

Now create the service project:
 # openstack project create --description "Service Project" service 


By analogy with admin, we create a demo:
 # openstack project create --description "Demo Project" demo # openstack user create --password-prompt demo # openstack role create user # openstack role add --project demo --user demo user 


Create environment scripts:
scripts
 # cat admin-openrc.sh
 export OS_PROJECT_DOMAIN_ID=default
 export OS_USER_DOMAIN_ID=default
 export OS_PROJECT_NAME=admin
 export OS_TENANT_NAME=admin
 export OS_USERNAME=admin
 export OS_PASSWORD=ADMIN_PASS
 export OS_AUTH_URL=http://controller:35357/v3
 export OS_IMAGE_API_VERSION=2
 export OS_VOLUME_API_VERSION=2

 # cat demo-openrc.sh
 export OS_PROJECT_DOMAIN_ID=default
 export OS_USER_DOMAIN_ID=default
 export OS_PROJECT_NAME=demo
 export OS_TENANT_NAME=demo
 export OS_USERNAME=demo
 export OS_PASSWORD=DEMO_PASS
 export OS_AUTH_URL=http://controller:5000/v3
 export OS_IMAGE_API_VERSION=2
 export OS_VOLUME_API_VERSION=2



Actually:
 # source admin-openrc.sh 
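As a sanity check, similar to the official guide's verification step, you can drop the temporary token variables and request a real token with the admin credentials (if the client complains about the identity API version, additionally export OS_IDENTITY_API_VERSION=3):
 # unset OS_TOKEN OS_URL
 # openstack token issue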

This completes the Keystone service setup.

Glance

Glance is the OpenStack tool for storing virtual machine templates (images). Images can be stored in Swift, in Glance's own repository, or somewhere else entirely; the main thing is that the image can be fetched over HTTP.

Let's start, as always, with MySQL:
 # mysql -u root -p
 CREATE DATABASE glance;
 GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
 GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';


Create the information about the future service in keystone:
 # openstack user create --password-prompt glance
 # openstack role add --project service --user glance admin
 # openstack service create --name glance --description "OpenStack Image service" image
 # openstack endpoint create --publicurl http://controller:9292 --internalurl http://controller:9292 --adminurl http://controller:9292 --region RegionOne image

We create the glance user and attach it to the admin role (all services will run under this role), then create the glance service and set up the endpoint.

Now we proceed to the installation:
 # apt-get install glance python-glanceclient 

and configuration:
 # cat /etc/glance/glance-api.conf
 [DEFAULT]
 ...
 notification_driver = noop

 [database]
 connection = mysql://glance:GLANCE_DBPASS@controller/glance

 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 auth_plugin = password
 project_domain_id = default
 user_domain_id = default
 project_name = service
 username = glance
 password = GLANCE_PASS

 [paste_deploy]
 flavor = keystone

 [glance_store]
 default_store = file
 filesystem_store_datadir = /var/lib/glance/images/



Whatever was in the [keystone_authtoken] section before needs to be deleted. GLANCE_PASS is the password of the glance user in keystone. filesystem_store_datadir is the path to the storage where our images will live. I recommend mounting either a RAID array or reliable network storage at this directory so as not to accidentally lose all our images to a disk failure.
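For example, mounting network storage there might look roughly like the sketch below; the NFS server name and export path are purely hypothetical, substitute your own (or mount a local RAID device instead):
 # apt-get install nfs-common
 # echo "nfs.example.local:/export/glance-images  /var/lib/glance/images  nfs  defaults,_netdev  0 0" >> /etc/fstab
 # mount /var/lib/glance/images
 # chown glance:glance /var/lib/glance/images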

In /etc/glance/glance-registry.conf we duplicate the same information in the [database], [keystone_authtoken], [paste_deploy] and [DEFAULT] sections.

Synchronize DB:
 # su -s /bin/sh -c "glance-manage db_sync" glance 


Restart the services and delete the local database:
 # service glance-registry restart
 # service glance-api restart
 # rm -f /var/lib/glance/glance.sqlite


The official manual loads cirros , which, in general, we do not need, so we will upload the Ubuntu image:
 # mkdir /tmp/images
 # wget -P /tmp/images http://cloud-images.ubuntu.com/releases/14.04.2/release/ubuntu-14.04-server-cloudimg-amd64-disk1.img
 # glance image-create --name "Ubuntu-Server-14.04.02-x86_64" --file /tmp/images/ubuntu-14.04-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare --visibility public --progress
 # rm -r /tmp/images
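To confirm the upload succeeded, list the registered images; the new image should be there with the status "active":
 # glance image-list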

You could upload all the images we need right away, but I will wait until we have the dashboard.
Overall - our Glance service is ready.

Nova

Nova is the core IaaS part of OpenStack; it is Nova that creates virtual machines automatically. Nova can interact with KVM, Xen, Hyper-V, VMware and Ironic (honestly, I do not quite understand how the latter works). We will use KVM; for other hypervisors the configs will differ.

Controller


Again we start with the database:
 # mysql -u root -p
 CREATE DATABASE nova;
 GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
 GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';


Add information to the keystone:
 # openstack user create --password-prompt nova
 # openstack role add --project service --user nova admin
 # openstack service create --name nova --description "OpenStack Compute" compute
 # openstack endpoint create --publicurl http://controller:8774/v2/%\(tenant_id\)s --internalurl http://controller:8774/v2/%\(tenant_id\)s --adminurl http://controller:8774/v2/%\(tenant_id\)s --region RegionOne compute


Install the necessary packages:
 # apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient 


/etc/nova/nova.conf
 [DEFAULT]
 ...
 rpc_backend = rabbit
 auth_strategy = keystone
 my_ip = 10.0.0.11
 vncserver_listen = 10.0.0.11
 vncserver_proxyclient_address = 10.0.0.11

 [database]
 connection = mysql://nova:NOVA_DBPASS@controller/nova

 [oslo_messaging_rabbit]
 rabbit_host = controller
 rabbit_userid = openstack
 rabbit_password = RABBIT_PASS

 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 auth_plugin = password
 project_domain_id = default
 user_domain_id = default
 project_name = service
 username = nova
 password = NOVA_PASS

 [glance]
 host = controller

 [oslo_concurrency]
 lock_path = /var/lib/nova/tmp



Synchronize the database, restart the services and delete the local database.
 # su -s /bin/sh -c "nova-manage db sync" nova # service nova-api restart # service nova-cert restart # service nova-consoleauth restart # service nova-scheduler restart # service nova-conductor restart # service nova-novncproxy restart # rm -f /var/lib/nova/nova.sqlite 


Compute node

Now we finally start working with the compute node. All described actions are valid for each computing node in our system.
 # apt-get install nova-compute sysfsutils 


/etc/nova/nova.conf
 [DEFAULT]
 ...
 verbose = True
 rpc_backend = rabbit
 auth_strategy = keystone
 my_ip = 10.0.0.31 #MANAGEMENT_INTERFACE_IP_ADDRESS
 vnc_enabled = True
 vncserver_listen = 0.0.0.0
 vncserver_proxyclient_address = 10.0.0.31 #MANAGEMENT_INTERFACE_IP_ADDRESS
 novncproxy_base_url = http://controller:6080/vnc_auto.html

 [oslo_messaging_rabbit]
 rabbit_host = controller
 rabbit_userid = openstack
 rabbit_password = RABBIT_PASS

 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 auth_plugin = password
 project_domain_id = default
 user_domain_id = default
 project_name = service
 username = nova
 password = NOVA_PASS

 [glance]
 host = controller

 [oslo_concurrency]
 lock_path = /var/lib/nova/tmp

 [libvirt]
 virt_type = kvm


MANAGEMENT_INTERFACE_IP_ADDRESS is the address of the compute node from VLAN 10.
In novncproxy_base_url, controller must correspond to an address that is reachable from your web browser; otherwise you will not be able to use the VNC console from Horizon.

Restart the service and delete the local copy of the database:
 # service nova-compute restart
 # rm -f /var/lib/nova/nova.sqlite


Check whether everything works correctly:
 # nova service-list
 +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
 | Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
 +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
 | 1  | nova-conductor   | controller | internal | enabled | up    | 2014-09-16T23:54:02.000000 | -               |
 | 2  | nova-consoleauth | controller | internal | enabled | up    | 2014-09-16T23:54:04.000000 | -               |
 | 3  | nova-scheduler   | controller | internal | enabled | up    | 2014-09-16T23:54:07.000000 | -               |
 | 4  | nova-cert        | controller | internal | enabled | up    | 2014-09-16T23:54:00.000000 | -               |
 | 5  | nova-compute     | compute1   | nova     | enabled | up    | 2014-09-16T23:54:06.000000 | -               |
 +----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

The fifth line says that we did everything right.

We have done the most important thing - now we have IaaS.

Neutron

Neutron is a network-as-a-service (NaaS) component. The official documentation gives a slightly different definition, but I think this one is clearer. Nova-network has been declared obsolete in recent OpenStack versions, so we will not use it; besides, Neutron's functionality is much broader.

Controller

We install the networking core on the controller, although the manual uses a third, dedicated network node. If there are many compute nodes (more than 10) and/or the network load is high enough, it is better to move the network server to a separate node.

As always, let's start with the database:
 # mysql -u root -p
 CREATE DATABASE neutron;
 GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
 GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';


Keystone:
 # openstack user create --password-prompt neutron
 # openstack role add --project service --user neutron admin
 # openstack service create --name neutron --description "OpenStack Networking" network
 # openstack endpoint create --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region RegionOne network


Install the necessary components:
 # apt-get install neutron-server neutron-plugin-ml2 python-neutronclient neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent 


It is also necessary to correct /etc/sysctl.conf
 # cat /etc/sysctl.conf
 net.ipv4.ip_forward=1
 net.ipv4.conf.all.rp_filter=0
 net.ipv4.conf.default.rp_filter=0
 # sysctl -p


/etc/neutron/neutron.conf
 [DEFAULT]
 ...
 rpc_backend = rabbit
 auth_strategy = keystone
 core_plugin = ml2
 service_plugins = router
 allow_overlapping_ips = True
 notify_nova_on_port_status_changes = True
 notify_nova_on_port_data_changes = True
 nova_url = http://controller:8774/v2

 [oslo_messaging_rabbit]
 rabbit_host = controller
 rabbit_userid = openstack
 rabbit_password = RABBIT_PASS

 [database]
 connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 auth_plugin = password
 project_domain_id = default
 user_domain_id = default
 project_name = service
 username = neutron
 password = NEUTRON_PASS

 [nova]
 auth_url = http://controller:35357
 auth_plugin = password
 project_domain_id = default
 user_domain_id = default
 region_name = RegionOne
 project_name = service
 username = nova
 password = NOVA_PASS


When editing the config, do not delete anything from it except commented-out lines.
/etc/neutron/plugins/ml2/ml2_conf.ini
 [ml2]
 type_drivers = flat,vlan,gre,vxlan
 tenant_network_types = gre
 mechanism_drivers = openvswitch

 [ml2_type_gre]
 tunnel_id_ranges = 1000:2000

 [ml2_type_flat]
 flat_networks = external

 [securitygroup]
 enable_security_group = True
 enable_ipset = True
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

 [ovs]
 local_ip = 10.0.1.11 #INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
 bridge_mappings = external:br-ex

 [agent]
 tunnel_types = gre



/etc/neutron/l3_agent.ini
 [DEFAULT]
 interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
 external_network_bridge =
 router_delete_namespaces = True



/etc/neutron/dhcp_agent.ini
 [DEFAULT]
 interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
 dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
 dhcp_delete_namespaces = True
 dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf



/etc/neutron/dnsmasq-neutron.conf
 dhcp-option-force=26,1454 

In the official documentation this setting is used for network equipment without jumbo frame support, but in general almost any dnsmasq setting can be put into this file.


Kill all running dnsmasq processes:
 # pkill dnsmasq 


/etc/neutron/metadata_agent.ini
 [DEFAULT]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 auth_region = RegionOne
 auth_plugin = password
 project_domain_id = default
 user_domain_id = default
 project_name = service
 username = neutron
 password = NEUTRON_PASS
 nova_metadata_ip = controller
 metadata_proxy_shared_secret = METADATA_SECRET



/etc/nova/nova.conf
 [DEFAULT]
 ...
 network_api_class = nova.network.neutronv2.api.API
 security_group_api = neutron
 linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
 firewall_driver = nova.virt.firewall.NoopFirewallDriver

 [neutron]
 url = http://controller:9696
 auth_strategy = keystone
 admin_auth_url = http://controller:35357/v2.0
 admin_tenant_name = service
 admin_username = neutron
 admin_password = NEUTRON_PASS
 service_metadata_proxy = True
 metadata_proxy_shared_secret = METADATA_SECRET

METADATA_SECRET is also an arbitrary string of 10 to 16 characters.
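One possible way to generate such a string (openssl rand -hex 8 produces 16 hex characters); the same value must go into both metadata_agent.ini and the [neutron] section of nova.conf:
 # openssl rand -hex 8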


We do not delete anything from nova.conf; we only add to it.

Synchronize the database and restart the services:
 # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron # service nova-api restart # service neutron-server restart # service openvswitch-switch restart 


Create a bridge and attach the external interface to it:
 # ovs-vsctl add-br br-ex
 # ovs-vsctl add-port br-ex p3p1
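You can verify that the bridge exists and that the external interface is attached to it:
 # ovs-vsctl show
 # ovs-vsctl list-ports br-ex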


Restart the Neutron agents:
 # service neutron-plugin-openvswitch-agent restart
 # service neutron-l3-agent restart
 # service neutron-dhcp-agent restart
 # service neutron-metadata-agent restart


Compute node


No comments.
 # cat /etc/sysctl.conf
 net.ipv4.conf.all.rp_filter=0
 net.ipv4.conf.default.rp_filter=0
 net.bridge.bridge-nf-call-iptables=1
 net.bridge.bridge-nf-call-ip6tables=1
 # sysctl -p


 # apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent 


/etc/neutron/neutron.conf
 [DEFAULT]
 ...
 rpc_backend = rabbit
 auth_strategy = keystone
 core_plugin = ml2
 service_plugins = router
 allow_overlapping_ips = True

 [oslo_messaging_rabbit]
 rabbit_host = controller
 rabbit_userid = openstack
 rabbit_password = RABBIT_PASS

 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 auth_plugin = password
 project_domain_id = default
 user_domain_id = default
 project_name = service
 username = neutron
 password = NEUTRON_PASS



/etc/neutron/plugins/ml2/ml2_conf.ini
 [ml2]
 type_drivers = flat,vlan,gre,vxlan
 tenant_network_types = gre
 mechanism_drivers = openvswitch

 [ml2_type_gre]
 tunnel_id_ranges = 1000:2000

 [ml2_type_flat]
 flat_networks = external

 [securitygroup]
 enable_security_group = True
 enable_ipset = True
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

 [ovs]
 local_ip = 10.0.1.31 #INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
 bridge_mappings = external:br-ex

 [agent]
 tunnel_types = gre



Restart openvswitch
 # service openvswitch-switch restart 


Add lines to /etc/nova/nova.conf
 [DEFAULT]
 ...
 network_api_class = nova.network.neutronv2.api.API
 security_group_api = neutron
 linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
 firewall_driver = nova.virt.firewall.NoopFirewallDriver

 [neutron]
 url = http://controller:9696
 auth_strategy = keystone
 admin_auth_url = http://controller:35357/v2.0
 admin_tenant_name = service
 admin_username = neutron
 admin_password = NEUTRON_PASS


Restart services:
 # service nova-compute restart
 # service neutron-plugin-openvswitch-agent restart


If I did not forget to mention anything, it should turn out like this:
 # neutron agent-list
 +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
 | id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
 +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
 | 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
 | 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
 | 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
 | 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
 | a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
 +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+


Network

Now we will do the initial setup of our networks. We will create one external network and one internal one.

Create a virtual network:
 # neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat 


We configure our external subnet:
 # neutron subnet-create ext-net 192.168.1.0/24 --name ext-subnet \
   --allocation-pool start=192.168.1.100,end=192.168.1.200 \
   --disable-dhcp --gateway 192.168.1.1

Our external network is 192.168.1.0/24, and 192.168.1.1 is the router that leads to the Internet. External addresses for our cloud will be handed out from the range 192.168.1.100-200.
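A quick check that the network and subnet were created the way we intended:
 # neutron net-list
 # neutron subnet-list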

Next, we will create an internal network for the demo project, so you should load variables for the demo user:
 # source demo-openrc.sh 


Now we create a virtual internal network:
 # neutron net-create demo-net
 # neutron subnet-create demo-net 172.16.1.0/24 --name demo-subnet --gateway 172.16.1.1

So our virtual network will be 172.16.1.0/24, and all instances in it will get 172.16.1.1 as their router.
Question: what is this router?
Answer: this is a virtual router.

The "trick" is that in Neutron you can build virtual networks with a sufficiently large number of subnets, which means they need a virtual router. Each virtual router can add ports to any of the available virtual and external networks. And this is really "strong"! We only assign access to networks to routers, and we manage all firewall rules from security groups. Moreover! We can create a virtual machine with a software router, configure interfaces to all necessary networks and control access through it (I tried using Mikrotik).
In general, Neutron gives plenty of imagination.

Create a virtual router, assign an interface to it in the demo-subnet and connect it to the external network:
 # neutron router-create demo-router
 # neutron router-interface-add demo-router demo-subnet
 # neutron router-gateway-set demo-router ext-net


Now our virtual router should respond to pings from the external network:
 # ping 192.168.1.100
 PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
 64 bytes from 192.168.1.100: icmp_req=1 ttl=64 time=0.619 ms
 64 bytes from 192.168.1.100: icmp_req=2 ttl=64 time=0.189 ms
 64 bytes from 192.168.1.100: icmp_req=3 ttl=64 time=0.165 ms
 64 bytes from 192.168.1.100: icmp_req=4 ttl=64 time=0.216 ms
 ...


At this point we already have a working cloud with networking.

Cinder (Block Storage)

Cinder is a service that provides the ability to manage block devices (virtual disks) and attach them to virtual instances. Virtual disks can be bootable, which can be very convenient for moving a VM to another compute node.

DB:
 # mysql -u root -p
 CREATE DATABASE cinder;
 GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
 GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';


Keystone:
 # openstack user create --password-prompt cinder
 # openstack role add --project service --user cinder admin
 # openstack service create --name cinder --description "OpenStack Block Storage" volume
 # openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
 # openstack endpoint create --publicurl http://controller:8776/v2/%\(tenant_id\)s --internalurl http://controller:8776/v2/%\(tenant_id\)s --adminurl http://controller:8776/v2/%\(tenant_id\)s --region RegionOne volume
 # openstack endpoint create --publicurl http://controller:8776/v2/%\(tenant_id\)s --internalurl http://controller:8776/v2/%\(tenant_id\)s --adminurl http://controller:8776/v2/%\(tenant_id\)s --region RegionOne volumev2


Install the necessary packages:
 # apt-get install cinder-api cinder-scheduler python-cinderclient 


Let's fix the config:
/etc/cinder/cinder.conf
 [DEFAULT]
 ...
 rpc_backend = rabbit
 auth_strategy = keystone
 my_ip = 10.0.0.11

 [oslo_messaging_rabbit]
 rabbit_host = controller
 rabbit_userid = openstack
 rabbit_password = RABBIT_PASS

 [database]
 connection = mysql://cinder:CINDER_DBPASS@controller/cinder

 [keystone_authtoken]
 auth_uri = http://controller:5000
 auth_url = http://controller:35357
 auth_plugin = password
 project_domain_id = default
 user_domain_id = default
 project_name = service
 username = cinder
 password = CINDER_PASS

 [oslo_concurrency]
 lock_path = /var/lock/cinder



Next, synchronize the database and restart the services:
 # su -s /bin/sh -c "cinder-manage db sync" cinder # service cinder-scheduler restart # service cinder-api restart 


Since our controller is also the storage node, the following actions are carried out on it as well.
Install the necessary packages:
 # apt-get install qemu lvm2 

Remember I mentioned two 500 GB disks in the configuration? We will make a RAID 1 array out of them. Technically we could simply build an LVM volume group from the two physical disks, but that option is bad because this is not an HA setup, so the failure of one of the disks could be critical. I will not go into how to create the RAID array here; it is easily googled, but a reference sketch is given right below. We assume that the resulting array is called /dev/md1.
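For reference only, a minimal RAID 1 creation might look like this; the member disks /dev/sdb and /dev/sdc are assumptions, substitute your real devices:
 # apt-get install mdadm
 # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
 # mdadm --detail --scan >> /etc/mdadm/mdadm.conf
 # update-initramfs -u
Once the array exists, we put LVM on top of it: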
 # pvcreate /dev/md1
 # vgcreate cinder-volumes /dev/md1

We initialized the array as an LVM physical volume and created the LVM group cinder-volumes.
Next, edit /etc/lvm/lvm.conf .
We find (or add) the following line:
 devices {
 ...
 filter = [ "a/md1/", "r/.*/"]

We assume that apart from the RAID partition we have nothing else on LVM. If the system partition is also on LVM, it has to be added too. For example, if our system is deployed on /dev/md0 with LVM on top of it, the config will look like this:
 devices {
 ...
 filter = [ "a/md0/", "a/md1/", "r/.*/"]

In general, for anyone who has dealt with LVM this should not be difficult.

Install the necessary packages:
 # apt-get install cinder-volume python-mysqldb 


add to config:
/etc/cinder/cinder.conf
 [DEFAULT]
 ...
 enabled_backends = lvm
 glance_host = controller

 [lvm]
 volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
 volume_group = cinder-volumes
 iscsi_protocol = iscsi
 iscsi_helper = tgtadm



And restart the services:
 # service tgt restart
 # service cinder-scheduler restart
 # service cinder-api restart
 # service cinder-volume restart
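With the admin environment loaded (source admin-openrc.sh), you can check that cinder-scheduler and cinder-volume both report their state as up:
 # cinder service-list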


Horizon (dashboard)

Horizon is the dashboard for OpenStack, written in Python 2.7 with Django as the engine. From here the entire OpenStack environment is managed: users/projects/roles, images, virtual disks, instances, networks, and so on.

Installation

 # apt-get install openstack-dashboard 

Installation can be done on a separate server with access to the Controller node, but we will install it on the controller.

Adjust the configuration /etc/openstack-dashboard/local_settings.py:
 ...
 OPENSTACK_HOST = "controller"
 ...
 ALLOWED_HOSTS = '*'
 ...
 CACHES = {
     'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': '127.0.0.1:11211',
     }
 }
 ...
 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
 ...
 TIME_ZONE = "Asia/Vladivostok"
 ...

TIME_ZONE: your time zone may be (and most likely will be) different; look up your own value here.

Restart Apache:
 # service apache2 reload 


Now you can go to http://controller/horizon. In my previous publication you can see screenshots of the dashboard. Ubuntu additionally installs the openstack-dashboard-ubuntu-theme package, which adds some links with a hint of Juju. If you want the original look back, you can simply remove the package.

You can also switch the interface language to Russian in the user profile, which makes working with the dashboard noticeably easier.

Done!

The publication turned out to be very bulky, but I did not want to split it up.
I hope the article will be useful to someone.
In the next publication (if my karma does not get pelted with tomatoes) I will describe a basic installation of a Chef server and a simple recipe.

Source: https://habr.com/ru/post/262049/

