
Integrating Apache CloudStack with third-party systems: subscribing to events using Apache Kafka


This article discusses the approach to integrating Apache CloudStack (ACS) with third-party systems by exporting events to the Apache Kafka message queue broker.


In the modern world, it is almost impossible to provide a full-fledged service without integration between products. For network and cloud services, integration with billing systems, availability monitoring systems, customer support services and other infrastructure and business-oriented components is essential. A product is usually integrated with third-party systems in two complementary ways:



Thus, the product exposes an API that third-party services can use to interact with it, and it supports an extension mechanism that allows the product to interact with external systems through their APIs.


In the context of ACS, these functions are implemented by the following features:


  1. Standard API - allows you to interact with ACS in the usual way.
  2. API plugins - allow developers to describe their own API extensions designed to implement specific interactions.
  3. Event export - allows interaction with external systems when events occur within ACS that require actions from those systems.

So ACS provides all the necessary ways to interact. This article covers the third of them, event export. There are several situations where this kind of interaction is useful: for example, when a billing system must learn about newly created accounts, or when a notification (say, an SMS) must be sent whenever an account is created.



In general, when a product has no subsystem for notifying third-party systems of its events, there is only one way to solve this class of tasks: periodic API polling. Needless to say, this method works, but it can rarely be considered efficient.

ACS allows you to export events to message queue brokers in two ways: to RabbitMQ via the AMQP protocol and to Apache Kafka via its own protocol. We use Apache Kafka widely in our practice, so this article looks at how to connect event export from the ACS server to this system.


Exporting events to a message queue broker vs. explicitly calling a third-party API


When implementing such mechanisms, developers often choose between two options: exporting events to a message queue broker or explicitly calling a third-party API. In our view, the broker-based approach is considerably more advantageous, and it is much more flexible than an explicit API call (for example, REST with some specific protocol). This is due to the following properties of message queue brokers:


  1. fault tolerance;
  2. high performance and scalability;
  3. the possibility of deferred (delayed) processing.

These properties are very difficult to achieve when directly calling the processing code of third-party systems without compromising the stability of the calling subsystem of the product. For example, imagine that when an account is created you want to send an SMS notification. With a direct call to the sending code, the following classes of errors are possible:


  1. the called code fails and the notification is never sent;
  2. the called code executes for too long, and an error occurs on the side of the calling subsystem of the product because its handler pool overflows.

The only advantage of a direct call is that the time between exporting an event and a third-party system processing it may be shorter than with a message queue broker. However, this requires certain guarantees of processing time and reliability both on the side of the calling subsystem of the product and on the side of the called system.


When events are instead exported to a correctly configured (replicated) message queue broker, these problems do not occur. Moreover, the code that processes events from the broker queue and calls the API of a third-party system can be developed and deployed based on the average expected intensity of the event flow, without having to guarantee peak processing.


The flip side of using a message broker is the need to configure this service and to have experience administering it and solving its problems. Although Apache Kafka is a fairly trouble-free service, it is still recommended to devote time to setting it up, modeling emergency situations and developing response procedures.


Setting up ACS event export to Apache Kafka


This guide does not focus on how to configure Apache Kafka for a production environment; a lot of specialized manuals are devoted to that. We will focus on how to connect Kafka to ACS and test the export of events.


To deploy Kafka, we will use the spotify/kafka Docker container, which includes all the necessary components (Apache Zookeeper and Apache Kafka) and is therefore great for development purposes.


Installing Docker (following the official guide for CentOS 7) is trivial:


 # yum install -y yum-utils device-mapper-persistent-data lvm2
 # yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
 # yum makecache fast
 # yum install docker-ce

Apache Kafka configuration


Deploy the Apache Kafka container:


 # docker run -d -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=10.0.0.66 --env ADVERTISED_PORT=9092 spotify/kafka
 c660741b512a

Thus, Kafka will be available at 10.0.0.66:9092, and Apache Zookeeper at 10.0.0.66:2181.
You can test Kafka and Zookeeper as follows:


Create the topic "cs" and write the string "test" to it:


 # docker exec -i -t c660741b512a bash -c "echo 'test' | /opt/kafka_2.11-0.10.1.0/bin/kafka-console-producer.sh --broker-list 10.0.0.66:9092 --topic cs"
 [2017-07-23 08:48:11,222] WARN Error while fetching metadata with correlation id 0 : {cs=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

The warning simply indicates that the topic did not exist yet and was created automatically. Now read the message back:


 # docker exec -i -t c660741b512a /opt/kafka_2.11-0.10.1.0/bin/kafka-console-consumer.sh --bootstrap-server=10.0.0.66:9092 --topic cs --offset=earliest --partition=0
 test
 ^CProcessed a total of 1 messages

If everything happened as shown in the listings above, Kafka is functioning properly.
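
Zookeeper itself can also be checked directly, for example with its standard "ruok" four-letter command (a quick sketch; the "imok" reply means the server considers itself healthy):

 # echo ruok | nc 10.0.0.66 2181
 imok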


Apache CloudStack setup


The next step is to configure event export in ACS (the original documentation is here). Create a configuration file (/etc/cloudstack/management/kafka.producer.properties) for the Kafka producer used by ACS, with the following content:


 bootstrap.servers=10.0.0.66:9092
 acks=all
 topic=cs
 retries=1

A detailed description of the Kafka settings can be found on the official documentation page.


When using a replicated Kafka cluster, you must list all known servers in the bootstrap.servers line.
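
For example, for a hypothetical three-node cluster (the 10.0.0.67 and 10.0.0.68 addresses below are purely illustrative assumptions), the file might look like this:

 # hypothetical three-node cluster; 10.0.0.67 and 10.0.0.68 are illustrative addresses
 bootstrap.servers=10.0.0.66:9092,10.0.0.67:9092,10.0.0.68:9092
 acks=all
 topic=cs
 retries=1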

Create a directory for the Java bean that activates event export to Kafka:


 # mkdir -p /etc/cloudstack/management/META-INF/cloudstack/core 

And create the configuration file for the bean (/etc/cloudstack/management/META-INF/cloudstack/core/spring-event-bus-context.xml) with the following contents:


 <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:context="http://www.springframework.org/schema/context"
        xmlns:aop="http://www.springframework.org/schema/aop"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
                            http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                            http://www.springframework.org/schema/aop
                            http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
                            http://www.springframework.org/schema/context
                            http://www.springframework.org/schema/context/spring-context-3.0.xsd">
   <bean id="eventNotificationBus" class="org.apache.cloudstack.mom.kafka.KafkaEventBus">
     <property name="name" value="eventNotificationBus"/>
   </bean>
 </beans>

Restart the ACS management server:


 # systemctl restart cloudstack-management 

Exported events now arrive in the cs topic in JSON format; example events are shown below (formatted for readability):


 { "Role":"e767a39b-6b93-11e7-81e3-06565200012c", "Account":"54d5f55c-5311-48db-bbb8-c44c5175cb2a", "eventDateTime":"2017-07-23 14:09:08 +0700", "entityuuid":"54d5f55c-5311-48db-bbb8-c44c5175cb2a", "description":"Successfully completed creating Account. Account Name: null, Domain Id:1", "event":"ACCOUNT.CREATE", "Domain":"8a90b067-6b93-11e7-81e3-06565200012c", "user":"f484a624-6b93-11e7-81e3-06565200012c", "account":"f4849ae2-6b93-11e7-81e3-06565200012c", "entity":"com.cloud.user.Account","status":"Completed" } { "Role":"e767a39b-6b93-11e7-81e3-06565200012c", "Account":"54d5f55c-5311-48db-bbb8-c44c5175cb2a", "eventDateTime":"2017-07-23 14:09:08 +0700", "entityuuid":"4de64270-7bd7-4932-811a-c7ca7916cd2d", "description":"Successfully completed creating User. Account Name: null, DomainId:1", "event":"USER.CREATE", "Domain":"8a90b067-6b93-11e7-81e3-06565200012c", "user":"f484a624-6b93-11e7-81e3-06565200012c", "account":"f4849ae2-6b93-11e7-81e3-06565200012c", "entity":"com.cloud.user.User","status":"Completed" } { "eventDateTime":"2017-07-23 14:14:13 +0700", "entityuuid":"0f8ffffa-ae04-4d03-902a-d80ef0223b7b", "description":"Successfully completed creating User. UserName: test2, FirstName :test2, LastName: test2", "event":"USER.CREATE", "Domain":"8a90b067-6b93-11e7-81e3-06565200012c", "user":"f484a624-6b93-11e7-81e3-06565200012c", "account":"f4849ae2-6b93-11e7-81e3-06565200012c", "entity":"com.cloud.user.User","status":"Completed" } 

The first event is the creation of an account; the other two are the creation of users within that account. To check that events are arriving in Kafka, the easiest way is to use the method we already know:


 # docker exec -i -t c660741b512a \
     /opt/kafka_2.11-0.10.1.0/bin/kafka-console-consumer.sh --bootstrap-server=10.0.0.66:9092 --topic cs --offset=earliest --partition=0

If events arrive, you can begin developing integration applications in any programming language for which a consumer interface for Apache Kafka exists. The entire setup takes 15–20 minutes and presents no difficulty even for a beginner.
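
As a minimal sketch of such an integration application, the following Python script (assuming the third-party kafka-python package; the handle_event function and the filter on ACCOUNT.CREATE are illustrative assumptions, not anything prescribed by ACS) reads events from the cs topic and reacts to completed account creations:

 # A minimal sketch, assuming kafka-python (pip install kafka-python).
 import json

 from kafka import KafkaConsumer

 # Connect to the broker used throughout this article and start from the
 # oldest retained events; values are the JSON documents shown above.
 consumer = KafkaConsumer(
     'cs',
     bootstrap_servers='10.0.0.66:9092',
     auto_offset_reset='earliest',
     value_deserializer=lambda m: json.loads(m.decode('utf-8')),
 )

 def handle_event(event):
     # Hypothetical handler: a real application would call a billing or
     # notification system API here.
     print(event.get('event'), event.get('description'))

 for message in consumer:
     event = message.value
     # React only to completed account creations, as in the SMS example above.
     if event.get('event') == 'ACCOUNT.CREATE' and event.get('status') == 'Completed':
         handle_event(event)

Such a consumer can be stopped and restarted without losing events, since Kafka retains messages for its configured retention period; this is exactly what makes the deferred processing mentioned earlier possible.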


When setting up event export for a production environment, remember the following:


  1. The configuration must be done on each ACS management server;
  2. Kafka must be configured in a replicated setup (usually 3 servers and x3 replication);
  3. Apache Zookeeper must be configured in a replicated setup (usually 3 servers);
  4. The /etc/cloudstack/management/kafka.producer.properties settings should be chosen with regard to the required level of reliability of event delivery;
  5. Do not forget to set a retention period for old data in Kafka (for example, 1 month); a configuration sketch follows after this list.
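
A sketch for item 5, assuming the retention period is set broker-wide in server.properties (720 hours is roughly one month); in this Kafka version it can alternatively be set per topic with kafka-configs.sh:

 # broker-wide, in server.properties: retain data for ~720 hours (about 1 month)
 log.retention.hours=720

 # or per topic (retention.ms; 2592000000 ms = 30 days):
 # /opt/kafka_2.11-0.10.1.0/bin/kafka-configs.sh --zookeeper 10.0.0.66:2181 --alter --entity-type topics --entity-name cs --add-config retention.ms=2592000000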

Instead of a conclusion


It is possible that this article is not particularly valuable, especially since the documentation "seems to cover everything"; however, when I decided to write it, I was guided by the following considerations:


  1. When I started using ACS in practice, I did not find material like this; as a result, in one of our clouds polling is still used to communicate with the billing system.
  2. The functionality was verified against the newest ACS 4.9.2, which confirms that it works in this version of the product.
  3. The article was written in Russian, which can be useful for a number of administrators.
  4. Currently all the attention goes to OpenStack, and it is presented as if it had no alternative, although the main benefit it brings often goes only to the implementer. I wanted to draw attention to an alternative product that is widely used by a number of large organizations and provides convenient tools for integration.

I hope that readers will find the material interesting and useful for themselves.



Source: https://habr.com/ru/post/333928/

