
Fractal migration of a virtualized data center

When virtualizing computational processes, processing data, and clustering such processes into a single pool, there is a need for planned or emergency migration (movement) of processes and data to other physical media without loss. This is possible when a virtual application runs in parallel, and its data is formed, on at least two independent physical media. The redundancy of this approach keeps delays and losses to a minimum, but it does not allow a prompt response to massive system failures and cannot eliminate losses entirely. The application can be restored only after some time, which can critically affect the speed of decision making.
To eliminate this problem, a different method of backing up computations and data operations must be developed, one whose implementation removes all the shortcomings of the previous approach. Introducing the mathematical term "fractal", interpreted in the context of computational processes and the data they operate on, allows its properties to be applied in the future system.

So, if the computing system under consideration has properties that meet the requirements of the term "fractal", then these properties of the system can be called fractal.

The system complies with these properties under the following conditions:
- the system is self-similar;
- when scaled, its properties and values remain unchanged and do not degenerate into triviality;
- the system can be restored from a single fragment.
Introducing the concept of fractal migration is a logical and reasonable step: endowing the system's objects and processes with the principles of fractality while preparing them for transfer makes it possible to achieve the required coherence of the distributed states of executable code and data.
The fractal properties of computational processes imply not only redundancy of code, which must not lose its original form when scaled, but also the creation of parallel synchronous or asynchronous processes. Suppose the operational code, and the allocated memory area in which it executes, are made somewhat redundant and distributed as clones (replicas) across a certain space of virtual memory (combining several physical media of the data center) over a certain interval of cyclic computation and data-modification processes. These clones then execute synchronously or asynchronously with the parent code and are limited to the given space-time interval. Moreover, this cycle should exceed the transition time of migrating applications and data by the required margin, ensuring the integrity of the restored fragment.

Such clones, using periodically freed processor and memory resources, can be unlimited in number, bounded only by the lifetime of the cycle and of the replica itself. Their disordered, chaotic placement and reproduction, the cyclic nature of individual fractal fragments, their autonomy, and the limited duration of their life cycle create redundancy of computational processes while keeping the data center structure self-sufficient and efficient, with no idle capacity sitting in hot reserve. The probability of a fatal failure of computational processes is thereby reduced to zero, since these processes are continuously and endlessly reproduced across the entire space-time representation of the virtualized data center.
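As a rough illustration of such a cloning cycle, here is a minimal sketch in Python (all names and parameters are hypothetical, not taken from the article): a bounded set of replica processes is spawned with a fixed time-to-live and respawned each cycle, their starts deliberately unsynchronized.

```python
import multiprocessing as mp
import random
import time

def replica(replica_id, state, ttl):
    """Hypothetical replica: repeat the parent's cycle for at most `ttl` seconds."""
    deadline = time.time() + ttl
    while time.time() < deadline:
        # Stand-in for the real computation the parent code performs.
        state[replica_id] = state.get(replica_id, 0) + 1
        time.sleep(0.1)

def replica_cycle(n_replicas=4, ttl=2.0, rounds=3):
    """Keep `n_replicas` clones alive; respawn them on every cycle."""
    with mp.Manager() as mgr:
        state = mgr.dict()
        for _ in range(rounds):
            procs = [mp.Process(target=replica, args=(i, state, ttl))
                     for i in range(n_replicas)]
            for p in procs:
                time.sleep(random.uniform(0, 0.2))  # chaotic, unsynchronized start
                p.start()
            for p in procs:
                p.join()
        return dict(state)

if __name__ == "__main__":
    print(replica_cycle())
```

A real implementation would replicate actual code and memory regions across several physical media; the shared dictionary here merely stands in for the replicated state.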

This method implements the principles of thin clients, with applications and data in the client role, while the relevance of their "server" part is maintained constantly and is distributed throughout the system in the form of minimal fractal code fragments. Each such fragment carries full information about the entire application and can deploy that application from itself alone. The holographic effect, in which a whole static image can be regenerated from a part, manifests here in dynamically changing processes: each fragment holds a complete set of values corresponding to the system as a whole and also serves as the initial state for scaling. The objects it generates must, in turn, carry the entire informational content of the original code structures. Thus virtual processes (applications, data operations) run synchronously with their reflections in different memory areas within predetermined time intervals, and data exchange between processes is carried out on the fly.

The administrator only initially sets the boundaries of the operational virtual environment and the activity of its components; thereafter the system replicates and regenerates itself, depending on the tasks performed. Deciding what is true and what is false is based on an empirical analysis of the state of the replicas of data and processes. The system can independently activate various behavioral scenarios, generated in real time from standard and special utilities, commands, and instructions created both by humans and by the system itself. When analyzing a particular region of RAM and identifying processes that occupy memory on a secondary basis, because they have already lost their relevance or are secondary, the system can reclaim that memory automatically. Clustering processes and data, together with basic and alternative routing tables and scenarios, in a single virtual environment endowed with the properties of a fractal object solves the problem of hardware decoupling and allows the virtualized data center to move dynamically in its full composition.
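The "empirical analysis of the state of replicas" can be read as ordinary quorum voting over replica snapshots. A minimal sketch under that assumption (all names are illustrative):

```python
from collections import Counter

def resolve(replica_states):
    """Pick the majority value among replica snapshots; None if no quorum.

    `replica_states` is a list of hashable state snapshots, one per replica.
    """
    if not replica_states:
        return None
    value, votes = Counter(replica_states).most_common(1)[0]
    return value if votes > len(replica_states) // 2 else None

# Example: two replicas agree, one has diverged.
print(resolve(["v42", "v42", "v41"]))  # -> "v42"
```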

Fractal migration of the data center makes it possible to create an unlimited number of situational solutions to problems and collisions based on the administration of the data center as a whole. Cross-analysis of the control system's reactions to particular directives, with the accumulation of experience, is documented in instructions and scenarios. Applying these instructions and scenarios within the virtual data center then allows the dynamic management of the system to be partially or fully automated, without human intervention. In this way the system continuously develops its reactions and extends the range of instructions in its behavioral scenarios. This is the first step towards the system understanding its task and consciously choosing a way to solve it. The second step is the system's ability to draw conclusions from self-analysis and to make decisions, predicting many iterations of possible consequences.
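One way to picture the documented "instructions and scenarios" is a simple experience log mapping an observed condition to the reaction that resolved it; everything below is a hypothetical illustration, not the article's implementation:

```python
# Hypothetical experience log: condition signature -> best-known reaction script.
scenario_base = {}

def record(condition, reaction, succeeded):
    """Keep a reaction in the log only if it actually resolved the condition."""
    if succeeded:
        scenario_base[condition] = reaction

def react(condition):
    """Replay accumulated experience; None means no scenario is known yet."""
    return scenario_base.get(condition)

record("replica_divergence", "respawn_minority_replicas", succeeded=True)
print(react("replica_divergence"))  # -> respawn_minority_replicas
```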

It is impossible to foresee the many states, the interpretation of their relationships, and their consequences, so it is difficult to predict and embed behavioral situations within some fixed framework of source code and sets of instructions, rules, and the like. Instead, situational processes must be simulated on the basis of continuously generated system conflicts, building a knowledge base of the system's reactions and decisions, each of which is evaluated for use and either applied if appropriate or rejected as inappropriate at the given time. The management system must be equipped with an apparatus capable of self-improvement and of changing under the influence of various perturbations and provocations.
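The continuously generated conflicts suggest a fault-injection loop: provoke a disturbance, trial a candidate reaction, and admit it to the knowledge base only if it passes evaluation. A sketch under those assumptions (the conflict and reaction names are invented for illustration):

```python
import random

def inject_conflict():
    """Provoke a random, hypothetical disturbance in the virtual environment."""
    return random.choice(["replica_loss", "memory_pressure", "route_flap"])

def evaluate(conflict, reaction):
    """Stand-in fitness check: does the reaction actually resolve the conflict?"""
    suitable = {"replica_loss": "respawn", "memory_pressure": "reset_stale",
                "route_flap": "switch_route"}
    return suitable[conflict] == reaction

knowledge = {}
candidates = ["respawn", "reset_stale", "switch_route"]
for _ in range(100):
    conflict = inject_conflict()
    reaction = knowledge.get(conflict) or random.choice(candidates)
    if evaluate(conflict, reaction):
        knowledge[conflict] = reaction  # accepted into the knowledge base
    # otherwise the reaction is rejected as inappropriate at this time

print(knowledge)
```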
In a simplified understanding, such an apparatus can be a structure based on genetic algorithms emulating neural networks with fuzzy logic. The micro- and macro-patterns of such networks, built on the fractal properties of the computing environment, form a distributed, cascading, fully connected matrix with wandering centers of load actualization and data migration. The connectivity table is not stable; it changes dynamically, leaving snapshots of its states in the space-time series. The experience of mistakes and correct decisions is available at any level of the system.
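As a toy rendering of "genetic algorithms emulating neural networks with fuzzy logic", the sketch below evolves the weights of a single fuzzy rule by selection, crossover, and mutation; it is a simplified assumption, not the apparatus the author describes:

```python
import random

def fitness(weights, samples):
    """How well a one-rule fuzzy controller (weighted average -> [0,1]) fits samples."""
    err = 0.0
    for inputs, target in samples:
        out = sum(w * x for w, x in zip(weights, inputs)) / len(inputs)
        out = min(1.0, max(0.0, out))  # clamp to a fuzzy membership degree
        err += (out - target) ** 2
    return -err  # higher is better

def evolve(samples, pop=20, gens=50, dim=2):
    population = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, samples), reverse=True)
        parents = population[: pop // 2]          # selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)      # crossover by averaging
            children.append([(x + y) / 2 + random.gauss(0, 0.05)  # mutation
                             for x, y in zip(a, b)])
        population = parents + children
    return population[0]

# Toy training set: membership degree "system overloaded" from (cpu, mem) loads.
samples = [((0.9, 0.8), 1.0), ((0.1, 0.2), 0.0), ((0.5, 0.5), 0.5)]
print(evolve(samples))
```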

Source: https://habr.com/ru/post/98239/

