
Configuring the virtual machine SPICE console in OpenStack

This article will be of interest to administrators of the OpenStack cloud platform. It covers how the virtual machine console is displayed in the dashboard. By default, OpenStack uses the noVNC console, which works at a reasonable speed within a local network, but is poorly suited for working with virtual machines running in a remote data center. In that case, the responsiveness of the console is, to put it mildly, depressing.

This article describes how to set up a much faster SPICE console in your OpenStack installation.

OpenStack supports two protocols for the virtual machine graphical console: VNC and SPICE. Out of the box, the dashboard ships with a web implementation of the VNC client, noVNC.

Far fewer people know about SPICE. In general, SPICE is a remote access protocol that supports many useful features, such as video streaming, audio, copy-paste, USB redirection, and more. However, the OpenStack dashboard uses the SPICE-HTML5 web client, which does not support all of these features, but it renders the virtual machine console very efficiently and quickly; that is, it does exactly what is needed.


The official OpenStack documentation (link1, link2) contains rather little information on configuring the SPICE console. Moreover, it states that "VNC must be explicitly disabled to get access to the SPICE console". This is not entirely true: rather, with the VNC console enabled you cannot use the SPICE console from the dashboard, but you can still use it via the API (I mean "nova get-spice-console" from python-novaclient). In addition, the SPICE console will only be available for new virtual machines; existing ones will keep using VNC until a hard reboot, resize, or migration.
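For reference, requesting a SPICE console URL through the API with python-novaclient looks like this (the instance name "myvm" is just a placeholder; source your cloud credentials first):

```shell
# Load OpenStack credentials (path is deployment-specific)
. openrc

# Ask Nova for a spice-html5 console URL for the instance "myvm"
nova get-spice-console myvm spice-html5
```

The command prints a URL pointing at the spicehtml5proxy, which you can open directly in a browser.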

In this article I used two OpenStack releases from Mirantis: Kilo (Mirantis OpenStack 7.0) and Mitaka (Mirantis OpenStack 9.0). As in most enterprise distributions, the configuration uses three controller nodes and HTTPS on the frontend. The hypervisor is qemu-kvm, the OS is Ubuntu 14.04 everywhere, and the cloud was deployed with Fuel.

The configuration affects both the controller nodes and the compute nodes. On the controller nodes, do the following.

Install the spice-html5 package itself:

apt-get install spice-html5 

Enter the following values into the Nova config:

/etc/nova/nova.conf
 [DEFAULT]
 ssl_only = True
 cert = '/path/to/SSL/cert'
 key = '/path/to/SSL/key'
 web = /usr/share/spice-html5

 [spice]
 spicehtml5proxy_host = ::
 html5proxy_base_url = https://<FRONTEND_FQDN>:6082/spice_auto.html
 enabled = True
 keymap = en-us

where <FRONTEND_FQDN> is the FQDN of your Horizon dashboard. Obviously, the certificate and key above must match FRONTEND_FQDN, otherwise a modern browser will not allow the SPICE widget to work. If your Horizon does not use HTTPS, the SSL settings can be omitted.
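It is worth verifying up front that the certificate really matches FRONTEND_FQDN; one way is to inspect its subject and SAN fields with openssl (the certificate path below is the same placeholder as in the config):

```shell
# The CN and/or Subject Alternative Name should contain FRONTEND_FQDN
openssl x509 -in /path/to/SSL/cert -noout -subject
openssl x509 -in /path/to/SSL/cert -noout -text | grep -A1 'Subject Alternative Name'
```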

For simultaneous operation of noVNC and SPICE, you need to do this trick:

 cp -r /usr/share/novnc/* /usr/share/spice-html5/ 

To work over HTTPS, you need to use secure WebSockets. For this, edit the file /usr/share/spice-html5/spice_auto.html: in the following part of the code, change "ws://" to "wss://".

/usr/share/spice-html5/spice_auto.html
  function connect() {
      var host, port, password, scheme = "wss://", uri;
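Instead of editing by hand, the same substitution can be done with sed (a sketch; it keeps a .bak backup of the original file):

```shell
# Switch the WebSocket scheme from ws:// to wss:// in the SPICE web client
sed -i.bak 's|scheme = "ws://"|scheme = "wss://"|' /usr/share/spice-html5/spice_auto.html
```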

Again, for simultaneous operation of noVNC and SPICE, you need to edit the upstart scripts /etc/init/nova-novncproxy.conf and /etc/init/nova-spicehtml5proxy.conf. In both scripts, comment out one line:

/etc/init/nova-spicehtml5proxy.conf
 script
     [ -r /etc/default/nova-consoleproxy ] && . /etc/default/nova-consoleproxy || exit 0
     #[ "${NOVA_CONSOLE_PROXY_TYPE}" = "spicehtml5" ] || exit 0

/etc/init/nova-novncproxy.conf
 script
     [ -r /etc/default/nova-consoleproxy ] && . /etc/default/nova-consoleproxy || exit 0
     #[ "${NOVA_CONSOLE_PROXY_TYPE}" = "novnc" ] || exit 0

Effectively, this removes the console type check based on the /etc/default/nova-consoleproxy file, so both proxies can run at the same time.

Now you need to fix the Haproxy configs:

/etc/haproxy/conf.d/170-nova-novncproxy.cfg
 listen nova-novncproxy
   bind <PUBLIC_VIP>:6080 ssl crt /var/lib/astute/haproxy/public_haproxy.pem no-sslv3 no-tls-tickets ciphers AES128+EECDH:AES128+EDH:AES256+EECDH:AES256+EDH
   balance source
   option httplog
   option http-buffer-request
   timeout http-request 10s
   server controller1 192.168.57.6:6080 ssl verify none check
   server controller2 192.168.57.3:6080 ssl verify none check
   server controller3 192.168.57.7:6080 ssl verify none check

/etc/haproxy/conf.d/171-nova-spiceproxy.cfg
 listen nova-spiceproxy
   bind <PUBLIC_VIP>:6082 ssl crt /var/lib/astute/haproxy/public_haproxy.pem no-sslv3 no-tls-tickets ciphers AES128+EECDH:AES128+EDH:AES256+EECDH:AES256+EDH
   balance source
   option httplog
   timeout tunnel 3600s
   server controller1 192.168.57.6:6082 ssl verify none check
   server controller2 192.168.57.3:6082 ssl verify none check
   server controller3 192.168.57.7:6082 ssl verify none check

where <PUBLIC_VIP> is the IP address that FRONTEND_FQDN resolves to.

Finally, we restart the services on the controller nodes:

 service nova-spicehtml5proxy restart
 service apache2 restart
 crm resource restart p_haproxy

here p_haproxy is the Pacemaker resource for Haproxy, through which many OpenStack services are routed.

On each compute node, you need to make changes to the Nova config:
/etc/nova/nova.conf

 [spice]
 spicehtml5proxy_host = ::
 html5proxy_base_url = https://<FRONTEND_FQDN>:6082/spice_auto.html
 enabled = True
 agent_enabled = True
 server_listen = ::
 server_proxyclient_address = COMPUTE_MGMT_IP
 keymap = en-us

here COMPUTE_MGMT_IP is the address of the management interface of this compute node (in Mirantis OpenStack there is a division into the public and management networks).

After that, you need to restart the nova-compute service:

 service nova-compute restart 
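To check that a freshly created (or hard-rebooted, resized, or migrated) instance really got a SPICE console, you can inspect its libvirt definition on the compute node (the domain name "instance-00000001" below is a placeholder; take the real one from virsh list):

```shell
# Find the libvirt domain name of the instance
virsh list

# A SPICE-enabled instance has a <graphics type='spice'> element
virsh dumpxml instance-00000001 | grep -A2 "graphics type='spice'"
```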

Now an important point. As I wrote above, we do not disable VNC, because existing virtual machines would then lose their console in the dashboard. However, if you are deploying a cloud from scratch, it makes sense to disable VNC completely. To do this, set the following in the Nova config on all nodes:

 [DEFAULT]
 vnc_enabled = False
 novnc_enabled = False

In any case, if we enable VNC and SPICE together in a cloud where virtual machines are already running, then after all the above steps nothing will change outwardly, either for existing virtual machines or for new ones: the noVNC console will still open. In the Horizon settings, the console type in use is controlled by the following setting:

/etc/openstack-dashboard/local_settings.py
 # Set Console type:
 # valid options would be "AUTO", "VNC" or "SPICE"
 # CONSOLE_TYPE = "AUTO"

By default the value is AUTO, which means the console type is selected automatically. But what does that mean? The answer is in one file, where the console priority is set:

/usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/instances/console.py
 ...
 CONSOLES = SortedDict([('VNC', api.nova.server_vnc_console),
                        ('SPICE', api.nova.server_spice_console),
                        ('RDP', api.nova.server_rdp_console),
                        ('SERIAL', api.nova.server_serial_console)])
 ...

As you can see, the VNC console has priority if it is available; only if it is not will the SPICE console be tried. It makes sense to swap the first two entries: then existing virtual machines will keep working over the slow VNC, while new ones will get the fast SPICE. Just what we need!
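Swapping them in console.py would look like this (a sketch of the same file as above, with only the first two entries reordered; the surrounding imports stay as they are in the original file):

```python
# /usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/instances/console.py
# SPICE is now tried first; VNC remains as a fallback for old instances
CONSOLES = SortedDict([('SPICE', api.nova.server_spice_console),
                       ('VNC', api.nova.server_vnc_console),
                       ('RDP', api.nova.server_rdp_console),
                       ('SERIAL', api.nova.server_serial_console)])
```

Restart Apache (which serves Horizon) after the change for it to take effect.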

Subjectively, the SPICE console is very fast. In text mode there is no lag at all, and in graphical mode everything is also quick; compared to VNC, the difference is night and day. I recommend it to everyone!

At this point the setup can be considered complete, but to finish I will show how both of these consoles look in the libvirt XML config:

  <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0' keymap='en-us'>
    <listen type='address' address='0.0.0.0'/>
  </graphics>
  <graphics type='spice' port='5901' autoport='yes' listen='::' keymap='en-us'>
    <listen type='address' address='::'/>
  </graphics>

Obviously, if you have network access to the compute node hosting the virtual machine, you can use any other VNC/SPICE client instead of the web interface, simply by connecting to the port from the configuration above (in this case, 5900 for VNC and 5901 for SPICE).
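For example, with the virt-viewer package installed, a native client connection could look like this (the compute node address is a placeholder, and the ports are the ones from the XML above):

```shell
# Native SPICE client from the virt-viewer package
remote-viewer spice://192.168.57.10:5901

# Or a plain VNC client for the VNC console
vncviewer 192.168.57.10:5900
```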

Source: https://habr.com/ru/post/319072/

