While implementing a load balancer user interface for a virtual private cloud, I ran into significant difficulties. They made me think about the role of the frontend, which is what I want to share first, before backing up those reflections with a specific task.
The solution turned out, in my opinion, to be quite creative, and I had to find it within very tight constraints, so I think it may be interesting.
The role of the frontend
Let me say up front that I do not claim to hold the truth and am raising a contentious topic. I am somewhat depressed by the irony aimed at the frontend, and the web in particular, as something insignificant, and even more depressed that this irony is sometimes justified. The hype has died down by now, but there was a time when everyone was running around with frameworks, paradigms, and other such entities, loudly proclaiming that all of it was super-important and absolutely necessary, and in response received the jibe that the frontend just renders forms and handles button clicks, something you could knock together "on your knee."
Now things seem to have more or less returned to normal. Nobody is eager to announce every minor release of the next framework, and few people are still searching for the perfect tool or approach, since the limits of their usefulness are increasingly well understood. Yet even that does not stop people from bashing, say, Electron and the applications built on it, almost without grounds. I think this comes from a misunderstanding of the problem the frontend actually solves.
The frontend is not just a means of displaying information provided by the backend, nor just a means of processing user actions. The frontend is something more, something abstract, and any simple, clear definition of it inevitably loses part of the meaning.
The frontend lives within certain boundaries. In technical terms, it sits between the API provided by the backend and the API provided by the input/output facilities. In terms of tasks, it sits between the tasks solved by the UI and UX and the tasks solved by the backend. The result is a rather narrow specialization: the frontend is an intermediary layer. This does not mean that frontend developers cannot influence areas beyond that specialization, but precisely where such influence is impossible, the true task of the frontend arises.
This task can be expressed as a contradiction: the user interface is not obliged to match the data models and behavior of the backend, and the data models and behavior of the backend are not obliged to match the needs of the user interface. The task of the frontend is to resolve this contradiction. The greater the mismatch between the tasks of the backend and those of the user interface, the more important the role of the frontend. To make clear what I am talking about, I will give an example where this mismatch, for various reasons, turned out to be significant.
Formulation of the problem
OpenStack LBaaS is, broadly speaking, a set of software and hardware tools for load balancing between servers. What matters to me is that its implementation depends on objective factors, on its physical embodiment, and because of this there are peculiarities both in the API and in the ways of interacting with that API.
When developing a user interface, what matters first of all is not the technical peculiarities of the backend but its fundamental capabilities. The interface is created for the user: the user needs it to control the balancing parameters, and the user should not have to dive into the internals of the backend implementation.
The backend is mostly developed by the community, and its development can be influenced only to a very limited extent. One of the key features, to my mind, is that the backend developers are ready to sacrifice the convenience and simplicity of the controls for the sake of performance, and that is absolutely justified, since we are talking about load balancing.
There is one more subtle point that I want to address right away, to forestall some questions. OpenStack and its API are obviously not the be-all and end-all. You can always develop your own set of tools, or a "layer" that wraps the OpenStack API and exposes an API of your own, tailored to the user's tasks. The only question is expediency: if the tools already available allow the user interface to be implemented as intended, does it make sense to multiply entities?
The answer to this question is multifaceted, and for a business it comes down to the developers available, their workload, their competence, questions of responsibility, support, and so on. In our case, it was most expedient to solve part of the tasks on the frontend.
OpenStack LBaaS Features
I want to point out only those features that strongly influenced the frontend. Why these features arose, and what they are based on, is beyond the scope of this article.
I work with the documentation as it stands and have to accept its quirks. Anyone interested in what OpenStack Octavia looks like from the inside can consult the official documentation. Octavia is the name of the toolset created for load balancing in the OpenStack ecosystem.
The first feature I encountered in the course of development is the large number of models and relationships needed to display the balancer's state. The Octavia API describes 12 models, but only 7 of them are needed on the client side. The models are interconnected, often in a denormalized way; the image below shows an approximate diagram:
"Seven" does not sound very impressive, but in reality, to support the full operation of the interface at the time of writing, I had to use 16 data models and about 30 relationships between them, since Octavia is only the balancer and needs other OpenStack modules in order to work. And all of this serves just two pages of the user interface.
The second and third features are Octavia's asynchrony and transactionality. Data models have a status field that reflects the state of the operations being performed on the object.
The read operation is synchronous and has no restrictions, but create, update, and delete operations can take an indeterminate amount of time, because the data models have, roughly speaking, a physical embodiment.
After sending a creation request, we can see that the record has appeared and can read it, but until the creation operation fully completes, we cannot perform any other operation on that record; any such attempt results in an error. A change to an object can be initiated only when the object is in the ACTIVE status, and an object can be sent for deletion in the ACTIVE and ERROR statuses.
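A small sketch of how the frontend has to respect these rules before sending anything, reusing the ProvisioningStatus and Member types from the sketch above; `api` here is a hypothetical client, not a real library call:

```typescript
// A sketch of guarding mutations on the provisioning status, following
// the rules above. `api` is a hypothetical client.
declare const api: {
  updateMember(id: string, patch: { weight: number }): Promise<void>;
};

const canUpdate = (o: { provisioning_status: ProvisioningStatus }): boolean =>
  o.provisioning_status === 'ACTIVE';

const canDelete = (o: { provisioning_status: ProvisioningStatus }): boolean =>
  o.provisioning_status === 'ACTIVE' || o.provisioning_status === 'ERROR';

async function updateMemberWeight(member: Member, weight: number) {
  if (!canUpdate(member)) {
    // The backend would reject the request anyway; fail fast on the client.
    throw new Error(`Member ${member.id} is ${member.provisioning_status}`);
  }
  await api.updateMember(member.id, { weight });
}
```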
These statuses can arrive over WebSockets, which greatly simplifies handling them, but transactions are a much bigger problem. When an object is modified, all related models are drawn into the transaction as well. For example, when you change a Member, the Pool, Listener, and Loadbalancer associated with it are locked. Here is how it looks from the point of view of the events received over WebSockets:
- The first four events move the objects into the PENDING_UPDATE status; the target field contains the model name of the object involved in the transaction.
- The fifth event is simply a duplicate (I do not know what causes it).
- The last four move the objects back to the ACTIVE status. In this case it is a weight-change operation, and it takes less than a second, but sometimes it takes much longer.
The screenshot also shows that the order of events is not guaranteed to be strict. So, to initiate any operation, you need to know not only the status of the object itself but also the statuses of all the dependencies that will participate in the transaction.
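In code, this means every incoming status event must be folded into client-side state, and an operation may start only when the entire chain is ACTIVE. A minimal sketch, with illustrative names:

```typescript
// Sketch: apply incoming status events and check whether a whole
// dependency chain is ready before starting an operation.
interface StatusEvent {
  target: 'loadbalancer' | 'listener' | 'pool' | 'member'; // model name
  id: string;
  provisioning_status: ProvisioningStatus;
}

const statuses = new Map<string, ProvisioningStatus>();

// Duplicates (like the fifth event above) are harmless: the write is idempotent.
function applyEvent(ev: StatusEvent): void {
  statuses.set(ev.id, ev.provisioning_status);
}

// An operation may start only when the object itself and every dependency
// participating in the transaction are ACTIVE.
function chainReady(ids: string[]): boolean {
  return ids.every((id) => statuses.get(id) === 'ACTIVE');
}
```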
User Interface Features
Now put yourself in the place of a user who must somehow know that, to set up balancing between two servers, they have to:
- create a listener, in which the balancing algorithm is defined;
- create a pool;
- attach the pool to the listener;
- add references to the balanced ports to the pool.
Each time, they must wait for the current operation to complete, since each operation depends on all the previously created objects; a rough sketch of this cascade follows below.
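Here is that cascade in code, assuming hypothetical `api` and `waitForActive` helpers; each await hides a wait for the ACTIVE status of the locked chain:

```typescript
// A rough sketch of the cascade hidden behind a single interface action.
// `api` and `waitForActive` are hypothetical helpers.
declare const api: {
  createListener(body: object): Promise<{ id: string }>;
  createPool(body: object): Promise<{ id: string }>;
  createMember(poolId: string, body: object): Promise<{ id: string }>;
};
declare function waitForActive(loadbalancerId: string): Promise<void>;

async function setUpBalancing(lbId: string,
                              servers: { address: string; port: number }[]) {
  const listener = await api.createListener({
    loadbalancer_id: lbId, protocol: 'HTTP', protocol_port: 80,
  });
  await waitForActive(lbId);

  const pool = await api.createPool({
    listener_id: listener.id, protocol: 'HTTP', lb_algorithm: 'ROUND_ROBIN',
  });
  await waitForActive(lbId);

  for (const s of servers) {
    await api.createMember(pool.id, {
      address: s.address, protocol_port: s.port,
    });
    await waitForActive(lbId); // members must be added one at a time
  }
}
```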
According to an internal study, a typical user has only a rough idea that a balancer must have an entry point, must have exit points, and has parameters for the balancing being performed: an algorithm, weights, and so on. The user is not obliged to know what OpenStack is.
I can only imagine how hard to digest an interface would be in which the user had to track all of the backend peculiarities described above by himself. For a console this may be acceptable, since using one implies a deep immersion in the technology, but for the web such an interface is a horror.
On the web, the user expects to fill in one clear, logical form, press one button, wait, and have everything just work. One could probably argue with that, but I suggest concentrating on the features that affect the frontend implementation.
The interface was designed to use operations in cascades: a single action in the interface may involve several operations. The interface does not let the user perform actions that are impossible at the moment, but it does assume that the user should understand why they are impossible. The interface is a single whole, so its individual elements may use information from various dependent entities, including meta-information.

If we grant that some interface elements are not specific to the balancer, such as switches, accordions, tabs, and context menus, and assume that their principles of operation are clear from the start, then I think that for a user who has an idea of what load balancing is, it will not be too difficult to read most of the interface above and guess how to manage it. But picking out which parts of the interface hide the models of the balancer, the listener, the pool, the member, and the other entities is not the most obvious task.
Eliminating the contradictions
I hope I have managed to show that backend features map poorly onto the interface, and that they cannot always be removed from the backend. At the same time, interface features map poorly onto the backend, and they cannot always be eliminated without complicating the interface. Each of these areas solves its own problems. Eliminating these discrepancies, to provide the necessary level of interaction between the interface and the backend, is the frontend's job.
In my own practice I dove in head-first, not paying attention to, or rather not even trying to work out, the features described above, and either luck or experience helped (and the right vector was chosen). I have noted more than once that when using a third-party API or library, it is very useful to read the documentation beforehand: the more thoroughly, the better. Documentation often resembles other documentation, since people rely on other people's experience, but it is precisely the peculiarities of each individual system that are described there, and described in detail.
If I had initially spent a couple of extra hours studying the documentation, instead of pulling out the necessary bits by keyword search, I would have known in advance about the problems I was going to face, and that knowledge could have influenced the project's architecture at the earliest stages. Going back to fix mistakes made at the very beginning is very demoralizing, and without the full context you sometimes have to go back several times.
Alternatively, you can stubbornly stay the course, gradually producing more and more hacky code, but the bigger that pile grows, the harder it is to rake up in the end. Of course, when designing an architecture, you should not sink too deeply into it either, trying to account for every possible and impossible option and spending an enormous amount of time on it; it is important to find a balance. But a more or less thorough reading of the documentation is usually a very profitable investment of a fairly small amount of time.
Nevertheless, from the very beginning, seeing the large number of models involved, I realized that I would have to build a mapping of the backend state onto the client while preserving all the relationships. Once I had managed to bring all the necessary information to the client, relationships and all, the next step was to organize a task queue.
Data is updated asynchronously, the availability of operations is determined by a multitude of conditions, and cascades of operations are required; under such conditions you cannot do without a queue. In a nutshell, that is the entire architecture of my solution: a store mirroring the backend state, and a task queue.
Solution architecture
Because the number of models and relationships is open-ended, I built scalability into the store structure, generating it with a factory that returns a declarative description of the store's collections. Each collection has a service, a simple model class with CRUD. The relationship descriptions could have been moved into the model, as is done, for example, in RoR or in good old Backbone, but that would have required changing a large amount of code. So the description of the relationships lives next to the model class:
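A hedged sketch of what such a declarative description might look like; all names here are illustrative, not the project's actual code:

```typescript
// A sketch of a declarative collection description returned by a factory.
interface RelationSpec {
  type: 'one' | 'many';   // one-to-one or one-to-many
  collection: string;     // collection in which the dependency lives
  field: string;          // property that receives the resolved dependency
  idField: string;        // property holding the dependent ID (or ID list)
  test?: (a: object, b: object) => boolean; // custom link condition
}

interface CollectionSpec {
  name: string;
  endpoint: string;       // used to build the simple CRUD service
  relations: RelationSpec[];
}

// Factory: a pool references one listener and many members.
function poolCollection(): CollectionSpec {
  return {
    name: 'pools',
    endpoint: '/v2/lbaas/pools',
    relations: [
      { type: 'one',  collection: 'listeners', field: 'listener',
        idField: 'listener_id' },
      { type: 'many', collection: 'members',   field: 'membersList',
        idField: 'member_ids' },
    ],
  };
}
```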

In total, I ended up with two types of links: one-to-one and one-to-many. A reverse link can also be described. Besides the type, a link specifies the dependency's collection, the field to which the resolved dependency is attached, and the field from which the dependent object's ID is read (for a one-to-many link, a list of IDs is read). If the link condition between two objects is more complex than a simple reference, the factory description can include a test function over the two objects whose result determines whether the link exists. It all looks a bit like reinventing the wheel, but it works without extra dependencies and does exactly what it should.
The store has a waiting module for the addition and deletion of a resource; in essence, it handles one-time events, filtered by a condition, with a promise interface. When subscribing, you pass the event type (add or delete), a test function, and a handler. When the event occurs and the test returns a positive result, the handler runs, after which tracking stops. The event can also fire synchronously, at subscription time.
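A minimal sketch of such a waiting module, with illustrative names:

```typescript
// Sketch of the waiting module: a one-shot, condition-filtered
// subscription with a promise interface.
type StoreEvent = 'add' | 'remove';

class Waiter<T> {
  private subs: { event: StoreEvent; test: (o: T) => boolean;
                  resolve: (o: T) => void }[] = [];

  // Resolves once the first matching event arrives, then unsubscribes.
  wait(event: StoreEvent, test: (obj: T) => boolean): Promise<T> {
    return new Promise((resolve) => this.subs.push({ event, test, resolve }));
  }

  // Called by the store on every add/remove.
  emit(event: StoreEvent, obj: T): void {
    this.subs = this.subs.filter((s) => {
      if (s.event !== event || !s.test(obj)) return true; // keep waiting
      s.resolve(obj);                                     // one-shot: drop it
      return false;
    });
  }
}
```

A subscriber can then write, for example, `await waiter.wait('add', (obj) => obj.pool_id === pool.id)` and continue as soon as a matching object enters the store; the synchronous case from the text (the object is already present at subscription time) would add a pre-check before the subscription is stored.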
Using this pattern made it possible to resolve arbitrarily complex links between models automatically, and to do it in one place, which I called the tracker. When an object is added to the store, the tracker starts following its links. The waiting module lets it react to events and check whether a link exists between the tracked object and an object entering the store; if the object is already present in the store, the waiting module calls the handler immediately.
Such a store lets you describe any number of collections and the links between them. When objects are added or removed, the store automatically fills in or clears the properties that hold dependent objects. The advantage of this approach is that all links are described explicitly and a single system tracks and updates them; the disadvantage is the complexity of implementation and debugging.
On the whole, such a store is fairly trivial, and I wrote it myself, because embedding a ready-made solution into the existing code base would have been much harder, and bolting a task queue onto a ready-made solution would have been harder still.
All tasks, like collections, have a declarative description and are created by a factory. A task description may include launch conditions and a list of tasks to be added to the queue after the current one completes.
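A hedged sketch of such a task description; PARENT here is a plain string placeholder, and all names are illustrative:

```typescript
// A sketch of a declarative task description for creating a pool.
// 'PARENT' is a placeholder that the queue replaces with a real ID when
// the task is spawned by a cascade.
interface Dependency {
  collection: string;
  id: string;          // a concrete ID, or the 'PARENT' placeholder
  status?: string;     // status to wait for; defaults to 'ACTIVE'
  lock?: boolean;      // lock the object while the request is in flight
}

interface TaskSpec {
  action: string;
  dependencies: Dependency[];
  payload: Record<string, unknown>;
  next?: TaskSpec[];   // tasks enqueued after this one succeeds
}

// Everything here is plain data, so the whole queue serializes to JSON
// and can be restored after a failure.
const createPoolTask: TaskSpec = {
  action: 'pool.create',
  dependencies: [
    { collection: 'loadbalancers', id: 'lb-1', lock: true }, // known up front
    { collection: 'listeners', id: 'PARENT' }, // filled in by the cascade
  ],
  payload: { protocol: 'HTTP', lb_algorithm: 'ROUND_ROBIN' },
  next: [
    { action: 'healthmonitor.create', dependencies: [], payload: {} },
    // ...plus one 'member.create' task per pool member
  ],
};
```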
The example above describes the task of creating a pool. Its dependencies are a balancer and a listener; by default, they are checked for the ACTIVE status. The balancer object is locked, since tasks in the queue may be processed synchronously; the lock avoids conflicts in the window when the request has already been sent but the status has not yet changed, although it is expected to. If the pool is created as a result of a cascade of tasks, the real ID is substituted automatically in place of PARENT.
After the pool is created, tasks for creating a health monitor and for creating all of that pool's members are added to the queue. The result is a structure that can be fully serialized to JSON, which makes it possible to restore the queue after a failure.
Driven by the task descriptions, the queue itself watches all changes in the store and checks the conditions needed to run each task. As I already said, statuses arrive over WebSockets, so generating the events the queue needs is very simple; if necessary, attaching a timer-based data refresh would not be a problem (this was provided for in the architecture from the start, since WebSockets may, for various reasons, work unreliably). After a task completes, the queue automatically tells the store to refresh the links in the affected objects.
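A minimal sketch of that loop, reusing the TaskSpec shape from the sketch above; `statusOf` and `run` are hypothetical helpers over the store and the API client:

```typescript
// Sketch: the queue re-checks its tasks on every store change.
declare function statusOf(collection: string, id: string): string | undefined;
declare function run(task: TaskSpec): Promise<void>;

class TaskQueue {
  private tasks: TaskSpec[] = [];

  enqueue(task: TaskSpec): void {
    this.tasks.push(task);
    this.tick();
  }

  // Called on every store change, e.g. a status event from a WebSocket
  // or from a fallback timer-based refresh.
  tick(): void {
    for (const task of [...this.tasks]) {
      const ready = task.dependencies.every(
        (d) => statusOf(d.collection, d.id) === (d.status ?? 'ACTIVE'),
      );
      if (!ready) continue;
      this.tasks = this.tasks.filter((t) => t !== task);
      run(task).then(() => (task.next ?? []).forEach((n) => this.enqueue(n)));
    }
  }
}
```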
Conclusion
The need for scalability led to a declarative approach. The need to mirror the models and the links between them led to a single store. The need to process dependent objects led to a queue.
Combining these needs may not be the easiest implementation task (though that is a separate topic), but in architectural terms the solution is very simple. It eliminates all the contradictions between the tasks of the backend and those of the user interface, streamlines their interaction, and lays a foundation for other possible features on either side.
In the Selectel control panel, the balancing process is simple and straightforward, which lets customers of the service avoid spending resources on implementing a balancer themselves while retaining the ability to manage traffic flexibly.
Try our balancer in action now and leave your feedback in the comments.