Overclockers armed with liquid nitrogen have repeatedly shown that modern chips can run stably at frequencies several times higher than nominal, with a corresponding gain in performance. Nevertheless, progress in the "gigahertz race" stalled long ago, and for good. The first Pentium 4 running above 3 GHz appeared back in 2002, almost ten years ago. Since then, process nodes have shrunk from 180 to 32 nm, yet even that has not allowed a significant increase in nominal operating frequencies. The culprit is the enormous heat dissipation of digital logic elements.
The "heat problem" rests on a deep connection between information entropy and thermodynamic entropy, and on the second law of thermodynamics, which forbids a decrease in the total entropy of a closed system. Any computation that reduces information entropy must increase thermodynamic entropy, that is, generate heat. In 1961 Rolf Landauer showed [pdf] that destroying one bit of information must release at least k ∙ T ln 2 joules of energy, where k is the Boltzmann constant and T is the temperature of the system. By itself this energy is tiny: at T = 300 K it is only 0.017 eV per bit, but summed over an entire processor it adds up to values on the order of one joule per second of operation, that is, on the order of one watt [Computerra No. 538]. In practice this theoretical minimum is multiplied by the non-zero resistance and other non-idealities of real semiconductors. The result is processors that out-heat clothes irons.
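As a sanity check on these numbers, the Landauer limit is easy to compute directly. A minimal sketch; the bits-per-second figure at the end is an illustrative assumption, not a value from the article:

```python
import math

# Landauer limit: erasing one bit dissipates at least k*T*ln(2) joules.
k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # room temperature, K
eV = 1.602176634e-19  # joules per electron-volt

e_bit_J = k_B * T * math.log(2)
e_bit_eV = e_bit_J / eV
print(f"{e_bit_J:.3e} J per bit ({e_bit_eV:.4f} eV)")

# Hypothetical chip erasing 1e21 bits per second (illustrative figure):
# even at the theoretical minimum this is on the order of watts.
bits_per_second = 1e21
print(f"{e_bit_J * bits_per_second:.2f} W at the Landauer limit")
```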
The destruction of information in modern processors happens constantly and at the lowest level, in the NAND gates that are the building blocks of any modern digital circuit. Taking two bits as input, the gate produces a one-bit result, from which the original arguments obviously cannot be recovered. More precisely, each operation of a NAND gate reduces information entropy by 1.189 bits and accordingly dissipates at least ~0.02 eV of heat. The situation with the now less popular NOR gate is similar.
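The 1.189-bit figure can be verified by comparing the input and output entropies of a NAND gate under uniformly random inputs. A quick sketch, not code from the article:

```python
import math
from itertools import product

def H(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Uniform distribution over the four input pairs of a NAND gate.
inputs = list(product((0, 1), repeat=2))
in_entropy = H([1 / 4] * 4)          # 2 bits

# Output distribution: NAND yields 0 only for input (1, 1).
outs = [1 - (a & b) for a, b in inputs]
p1 = outs.count(1) / 4               # = 3/4
out_entropy = H([p1, 1 - p1])        # ≈ 0.811 bits

loss = in_entropy - out_entropy
print(f"entropy lost per NAND operation: {loss:.3f} bits")  # ≈ 1.189
```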
Memory cells fare no better: any write destroys the previous value. To the programmer the old data is simply "lost", but at the physical level the laws of conservation of charge and energy do not let it vanish "just because". In fact, the old information is not destroyed at all: it is scattered into space in the form of heat and spurious radiation.
For over 30 years a method has been known for organizing computation without destroying information; it is not subject to the Landauer principle and in theory makes energy-efficient processors possible. This method is called conservative, or "preserving", logic. Unfortunately, no compact physical implementation in silicon has been found for it; the only known electronic realization uses MOS transistors together with inductors, which miniaturize poorly. On the other hand, this approach is natural for most types of quantum computers, including optical ones (Benioff's model, etc.).
Over the years, conservative logic has proved to be a useful mathematical abstraction even without a hardware implementation: block cellular automata are built on top of it, and they are very convenient for problems with limited resources. This branch of mathematics is developing very actively, and perhaps in a decade or so block cellular automata will be even better known among programmers than ordinary ones.
The "flap of the butterfly's wing" that led to this article was the question "what is this called in Russian?" raised by comrade Plakhov, to which even Yandex does not know the answer. Apart from the Computerra article from 2004 mentioned above, there is essentially no information in Russian about conservative logic, which is all the more regrettable because its theoretical foundations were laid by the great Soviet physicists Landau and Lifshitz 50 years ago, in 1961. To close this gap at least a little, on the one hand, and to avoid ad-libbing, on the other, I offer you a summary (a translation of the main theses) of the foundational article on conservative logic by Edward Fredkin and Tommaso Toffoli, published in the International Journal of Theoretical Physics, Vol. 21, 1982 (Fredkin, Edward and Toffoli, Tommaso, "Conservative Logic", Int. J. Theor. Phys. 21 (1982), 219–253; pdf). The article covers all the main points concerning the physics, logic, and circuit design of systems based on conservative logic.
Conservative logic
Conservative or preserving logic is a mathematical model for organizing computations based on two fundamental physical principles: reversibility of processes and conservation laws.
1. Physical basis
(The numbering of sections in this summary does not match the numbering of parts of the original article - translator's note.)
Any computation, whether performed by a person or by a machine, must obey certain physical laws. In particular, the speed at which information propagates is limited by the speed of light, and the amount of information in a finite system must itself be finite. There are subtler limitations as well.
All physical laws can be divided into dynamic and statistical. The former describe the behavior of individual objects in fully defined microscopic systems and, in classical mechanics, allow the outcome of any experiment to be predicted exactly. When there are too many objects to track each one, statistical laws are applied, describing the average behavior of homogeneous objects in a macrosystem. They allow us to assess the picture as a whole, but give no way to predict the behavior of any particular object.
At the micro level, our world has various fundamental homogeneities and symmetries, many of which give rise to "conservation laws" at the macro level. Thus, the homogeneity of time ensures the law of conservation of energy, the homogeneity of space the law of conservation of momentum, and the isotropy of space (symmetry of directions) leads to the law of conservation of angular momentum.
All fundamental dynamic physical laws are reversible, in the sense that reversing all coordinates runs the process backward. A billiards shot can be undone if every ball is launched back in exactly the opposite direction. Statistical physical laws, on the other hand, obey the principles of thermodynamics and are not reversible without reversing the arrow of time: a broken vase will never reassemble itself from fragments. In particular, the traditional computational model based on NAND or NOR gates working at the macro level is irreversible. Any computation in such a model requires energy expenditure in accordance with the Landauer principle.
However computation is organized, digital information is physically represented by numbered stable states of some systems.
Translator's note: the article is aimed at physicists, so the authors do not illustrate things that are obvious to them. The last sentence is about this: for example, a memory-cell capacitor can be in two logical states, "charged" or "discharged" (although, of course, the "physical" charge of a capacitor is nearly continuous, not Boolean). The stability of these logical states is provided by a special electronic regeneration circuit that tops up charged capacitors and drains discharged ones. At the same time, the energy difference between the "charged" and "discharged" states must be huge compared with the level of thermal noise, otherwise the stored values can be corrupted.
Although in reality these states may be electrical, optical, and so on, from here on we will call them "mechanical" to distinguish them from thermodynamic states. Thermodynamic states are the degrees of freedom of the individual atoms and molecules that make up a substance. One gram of any substance contains a huge number of such degrees of freedom, of the order of 10^23 (Avogadro's number), and they can only be accounted for by statistical methods and laws.
The second law of thermodynamics says that the total entropy of a closed system cannot decrease. The Landauer principle establishes the energy relationships between informational and thermodynamic entropy. Many algorithms lead to a decrease in the information entropy, but such a decrease must either be compensated for by the growth of thermodynamic entropy (heat generation), or should be prohibited.
Accordingly, two fundamentally different approaches to the physical organization of calculations are possible.
With the "conservative" or "preserving" approach, information is stored in mechanical states chosen so that entropy exchange with thermodynamic states is impossible. In such a system, any operation that reduces information entropy, in particular any information-destroying operation, must be forbidden. Therefore all operations must be reversible. Landau and Lifshitz showed that such a system must possess certain invariant additive quantities, for example charges or momenta "protected" by a conservation law; only then is the transfer of "charges" into "temperature" and back impossible. The total amount of these quantities in a closed system is constant, independent of the processes occurring within it.
In contrast to the conservative approach, with the usual "von Neumann" approach computations are irreversible from the start. To satisfy the second law of thermodynamics, there must be a path for exchanging entropy and energy between the mechanical and thermodynamic degrees of freedom. That path must be one-way (that is, irreversible), so that thermal fluctuations cannot destroy the information stored in the mechanical states. But since at the micro level all processes are reversible, a one-way flow of entropy can only be achieved if the energy difference between mechanical states is many orders of magnitude greater than between thermodynamic ones. In modern computing (as of 1982 - translator's note) the difference is 8 to 12 orders of magnitude.
Translator's note: imagine that our system consists of a very small W-shaped cup and a ball inside it. The ball has two stable positions, at the bottom of the left and right halves of the cup. These are the mechanical states; they can be numbered and used to store information. To "write" new information into the cup, in the sense of changing the ball's position, you must lift the ball over the hump in the center of the cup, spending a certain amount of energy on the ascent. Then, on the other side of the cup, this excess energy must be taken back or somehow damped, otherwise the ball will oscillate with a large amplitude around the equilibrium position. However the extraction of the excess energy is organized, part of it will still turn into heat, but the lower the hump, the smaller the energy loss.
Now imagine that in a real system both the ball and the cup experience constant thermal vibrations, whose strength is proportional to temperature. As long as the temperature is low, the ball, though not always sitting exactly at the equilibrium point, at least keeps returning to it. But as the temperature rises the vibrations grow, and at some point the ball gets a chance to spontaneously "jump over" the hump. This must never be allowed to happen, so the hump has to be very high, with the inevitable large energy cost of deliberately crossing it.
All of today's computing is built on the second, "irreversible" path. Irreversibility is deliberately built into the basic logic circuits, into the p-n junctions of transistors; even if a high-level algorithm is formally reversible, when it runs on modern processors at least some of its individual steps are irreversible at the lowest level. Irreversibility also means heavy heat dissipation, which limits the achievable performance.
2. Basic elements of conservative logic
Computational models built on the conservative principle must use basic logic elements satisfying certain conditions. In particular, they must be reversible and must conserve the additive quantities (charges, momenta). From the second requirement it follows that the number of inputs of an element must equal the number of outputs, which in turn means that functions are composed strictly one-to-one.
Translator's note: in conventional circuit design, the output of one logic element may be connected to the inputs of several other elements ("fan-out"). In conservative circuit design, each output must go to exactly one input of the next element; branching requires a special logic element (see below).
Conservative logic is based on two elements: the repeater and the Fredkin gate.
A repeater (unit wire; literally "elementary conductor") simply reproduces at its output, with a delay of one clock cycle, the signal applied to its input. It is the speed of the repeaters that determines the clock frequency of the circuit. Formally, the repeater function is:

y(t) = x(t-1)

Symbol:
The repeater can serve both for storage (when looped back on itself) and for moving information within the circuit, but its main task is synchronizing signals with the clock generator.
The inverse of a repeater is the same repeater, pointed in the opposite direction.
The values at the repeater outputs on a given clock cycle fully describe the internal state of a conservative-logic digital circuit; that is, the repeater outputs are the state variables of such a circuit. The sum of the values at the repeater outputs is exactly the conserved additive quantity (the "total charge") and, for a closed system, is constant. The total number of repeaters in a circuit can be regarded as its number of degrees of freedom.
Repeaters are distantly similar, but far from equivalent, to the delay elements of conventional "von Neumann" circuit design.
The Fredkin gate is a device with three inputs u, x1, x2 and three outputs v, y1, y2 that implements a controlled swap ("crossover") of two data lines (see figure). The output v always equals the input u. When u = 1, the outputs y1, y2 equal the inputs x1, x2; when u = 0, y1 = x2 and y2 = x1 (see table).
Fredkin gate truth table:
(u, x1, x2) => (v, y1, y2)
0,0,0 => 0,0,0
0,0,1 => 0,1,0
0,1,0 => 0,0,1
0,1,1 => 0,1,1
1,0,0 => 1,0,0
1,0,1 => 1,0,1
1,1,0 => 1,1,0
1,1,1 => 1,1,1
The Fredkin gate is its own inverse.
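The gate's behavior and its self-inverse property are easy to check in a few lines. A sketch following the convention above (u = 1 passes the data lines through, u = 0 swaps them):

```python
def fredkin(u, x1, x2):
    """Fredkin (controlled-swap) gate: passes x1, x2 through when u == 1,
    swaps them when u == 0; the control line u is always copied to v."""
    return (u, x1, x2) if u else (u, x2, x1)

# Reproduce the truth table from the article.
for u in (0, 1):
    for x1 in (0, 1):
        for x2 in (0, 1):
            print((u, x1, x2), "=>", fredkin(u, x1, x2))

# The gate is its own inverse: applying it twice restores the inputs.
for u in (0, 1):
    for x1 in (0, 1):
        for x2 in (0, 1):
            assert fredkin(*fredkin(u, x1, x2)) == (u, x1, x2)
```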
Thus, any circuit in conservative logic is a controlled conditional signal router: its outputs carry some controlled permutation of the input values from previous clock cycles.
3. Conservative circuitry
In conservative logic one can implement a Turing machine (Bennett, 1973) and a universal cellular automaton (Toffoli, 1980). Thus conservative logic can solve any problem computable in the Church-Turing sense.
The process of implementing arbitrary functions in conservative logic is very similar to the same process in traditional logic.
Just as in traditional circuit design, the implementation of some functions uses constants, that is, sources of the fixed values 0 or 1. On the other hand, a computation can produce not only the required results but also side results not needed further in a particular algorithm. These side results are called "garbage" or "sink" outputs. Since in conservative logic signals neither appear from nowhere nor vanish into nowhere, lines carrying "extra" results cannot simply be dropped; they must be disposed of in a special way (dissipated into space).
The figure shows the symbol for a block for calculating a certain function φ:
This is how the AND block is arranged (note that a copy of one argument is used as one of the sinks):
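One common way to wire AND from a single Fredkin gate feeds the constant 0 into x2; I cannot confirm from the text that the article's figure uses exactly this wiring, so treat it as an illustrative sketch:

```python
def fredkin(u, x1, x2):
    # u = 1 passes the data lines through, u = 0 swaps them.
    return (u, x1, x2) if u else (u, x2, x1)

def and_gate(a, b):
    """AND from one Fredkin gate with the constant 0 on x2.
    Returns (result, sinks): v is a copy of a, y2 carries (not a) and b."""
    v, y1, y2 = fredkin(a, b, 0)
    return y1, (v, y2)

for a in (0, 1):
    for b in (0, 1):
        result, sinks = and_gate(a, b)
        assert result == (a & b)
        print(a, b, "->", result, "sinks:", sinks)
```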
The next figure shows the implementation of the logical functions OR (a) and NOT (b), and of a signal splitter (c). The last two differ only in the logical role of their terminals.
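NOT and OR admit similar one-gate constructions with constants on the data lines. Again an illustrative wiring, not necessarily the exact one shown in the figures:

```python
def fredkin(u, x1, x2):
    # u = 1 passes the data lines through, u = 0 swaps them.
    return (u, x1, x2) if u else (u, x2, x1)

def not_gate(a):
    # Constants 0 and 1 on the data lines; y1 = NOT a, the rest is garbage.
    v, y1, y2 = fredkin(a, 0, 1)
    return y1

def or_gate(a, b):
    # Constant 1 on x1; y1 = a OR b, the rest is garbage.
    v, y1, y2 = fredkin(a, 1, b)
    return y1

assert [not_gate(a) for a in (0, 1)] == [1, 0]
assert [or_gate(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 1]
print("NOT and OR verified against their truth tables")
```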
The next diagram shows the simplest memory element, a JK flip-flop. The question mark denotes a sink whose value carries no useful meaning.
Next is a more complex circuit: a demultiplexer, which routes the signal X to one of four outputs Y0-Y3 according to the binary address A0, A1:
Any circuit from "ordinary" logic can be implemented in conservative logic. For example, the next three figures show implementations of a one-bit serial adder: the first in conventional logic (in older notation), the second and third in conservative logic. The second circuit is obtained by simple gate-for-gate substitution into the first, while the third is designed from scratch. The diagrams show the difference between repeaters and delay elements: in conservative circuits, signal synchronization is explicit.
4. The sink problem
The practical value of conservative logic, if it is implemented, for example, in quantum computers, depends entirely on the amount of garbage requiring disposal in real computations. Garbage disposal is precisely the energy-dissipating operation inherent in conventional circuitry. If every flip-flop of a processor needs its own sink, there will be no gain in heat dissipation at all.
It is impossible to avoid sinks entirely. For example, in many real-world algorithms the final result is a sum of numbers, but the summation operation itself is irreversible: knowing only the result, one cannot recover the original terms. In a general-purpose processor running many different programs and algorithms in turn, too much such "extra" information accumulates to simply discard. A special "entropy exchanger" will be required, to which the garbage results are delivered.
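The standard remedy, sketched below with illustrative names, is to keep one operand alongside the sum: the map (a, b) -> (a, a + b) is reversible, unlike plain addition:

```python
# Plain addition (a, b) -> a + b is irreversible: the sum alone does not
# determine the operands. A reversible variant keeps one operand and can
# be undone exactly. Illustrative sketch, not a construction from the article.

def add_rev(a, b):
    return a, a + b

def add_rev_inverse(a, s):
    return a, s - a

pairs = [(3, 4), (10, -2), (0, 0)]
for a, b in pairs:
    assert add_rev_inverse(*add_rev(a, b)) == (a, b)
print("reversible addition round-trips:", pairs)
```

The extra line carrying `a` is exactly the kind of "garbage" output the article discusses: it is the price of reversibility.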
Depending on the type of function computed, three scenarios are possible for the ratio between the number of required constants and sinks (see figure):
The third, best scenario is possible only if the computed function f is itself reversible and obeys the existing conservation laws.
There is a way to partially solve the garbage problem and, at the same time, to streamline the generation of constants, using inverted circuits. An "inverted" circuit is one that reverses a given circuit, that is, restores the inputs and constants from given outputs and sinks. Since all circuits in conservative logic are reversible, to invert a circuit it suffices to turn all repeaters in the opposite direction and relabel the inputs as outputs; or, equivalently, mirror the diagram (see figure).
The direct and inverted circuits can be connected in series, one after another. The resulting element duplicates its inputs with a long delay, requiring no constants and producing no sinks; but somewhere inside it, the results of computing the original function exist (see figure).
These computed values of the original function can be "extracted" by embedding "spy" splitters into the circuit:
To use such a circuit, constants still have to be supplied (more precisely, a "scratchpad" register that must be filled once with the required zeros and ones), and the input and output registers must be prepared correctly as well. The outputs of such a circuit are (a) a copy of the arguments, (b) the result, and (c) the inverse of the result; if any of these is not used elsewhere, it too will have to be disposed of.
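The compute-copy-uncompute idea can be sketched in miniature, using a single Fredkin gate as the "circuit" (since the gate is its own inverse, the mirrored circuit is the same gate; names are illustrative):

```python
# Run a reversible circuit forward, tap a copy of the result with a
# "spy splitter", then run the mirrored circuit to restore the inputs.
# Sketch under the assumption that the circuit is one Fredkin gate.

def fredkin(u, x1, x2):
    return (u, x1, x2) if u else (u, x2, x1)

def compute_copy_uncompute(u, x1, x2):
    forward = fredkin(u, x1, x2)   # direct circuit
    tapped = forward               # spy splitter: copy the intermediate result
    restored = fredkin(*forward)   # inverted circuit (self-inverse here)
    return tapped, restored

tapped, restored = compute_copy_uncompute(0, 1, 0)
assert restored == (0, 1, 0)       # inputs recovered intact
print("result copy:", tapped, "inputs restored:", restored)
```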
The main advantage of this method is that the number of "garbage" lines is proportional only to the input width, whereas the number of gates needed to implement most real functions, in any logic, grows roughly exponentially with the input width. In conventional circuitry all gates dissipate heat; in conservative circuitry only the sinks do, so for wide inputs the gain should be enormous.
5. The billiard-ball model
In this part, an abstract physical model is presented in which conservative logic is realizable.
The model is a plane along which billiard balls move without friction in strictly prescribed directions. The speeds, allowed directions, and sizes of the balls are chosen so that at discrete moments of time, corresponding to clock ticks, the balls can only occupy a small set of points forming a rectangular grid rotated by 45° (see figure). If balls land at adjacent points, they collide elastically.
A logical one is the presence of a ball; a zero is its absence.
Any grid point where the trajectories of two balls intersect can be declared an "interaction gate". The directions in which the balls leave it depend on whether both are present:
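In Boolean terms the interaction gate is a four-output function of the two "ball present" bits; the sketch below follows the standard Fredkin-Toffoli description (two deflected paths carrying A AND B, two pass-through paths):

```python
# Boolean model of the billiard-ball interaction gate: 1 = ball present.
# When both balls arrive they collide and leave on the two deflected paths;
# a lone ball passes straight through.

def interaction_gate(a, b):
    return (a & b,        # deflected path 1: both balls present
            a & b,        # deflected path 2: both balls present
            a & (1 - b),  # a passes undeflected
            (1 - a) & b)  # b passes undeflected

for a in (0, 1):
    for b in (0, 1):
        outs = interaction_gate(a, b)
        assert sum(outs) == a + b  # balls (the conserved quantity) survive
        print(a, b, "->", outs)
```

Note how the conservation law shows up directly: the number of output ones always equals the number of input balls.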
Fixed walls, or mirrors, can be placed on the plane. Using mirrors, the trajectories of balls can be rotated (a), shifted (b), delayed (c), and safely crossed (d). The last element (d) is called a "nontrivial crossover" and lets balls effectively pass through each other.
Controlled switches can also be built. The figure shows a block diagram of a simple switch, the symbol for the inverse switch element, and an implementation of the simple switch.
Finally, from simple switches and a nontrivial crossover (labeled "bridge"), one can construct the Fredkin gate (turning mirrors and delays are omitted for clarity):
As for an actual physical embodiment, gas molecules are better suited to the role of the balls than real billiard balls, though neither option has ever been realized.
The main advantage of the billiard-ball model is the simplicity of simulation: thanks to the specially chosen angles and speeds, the results of all collisions can be computed by simple Boolean functions, with no floating-point numbers, costly rounding operations, and so on. This makes it possible to quickly simulate even very large conservative-logic circuits on traditional computers.
6. "Difficult" questions about conservative logic
1. Are arbitrary computations possible in conservative logic? Yes: conservative logic is Turing-complete.
2. Are there physical effects on which conservative logic can be implemented? Yes, it can be implemented with electronic elements (discrete L/R/C components plus MOS transistors) (in practice, as of 2011, everything remains on paper and in laboratories).
3. Will the energy needed for garbage disposal exceed the potential gain from switching to conservative logic? A complete answer cannot be given without a specific physical model, but the garbage-recycling method described in section 4 leaves very good chances for a negative (that is, optimistic) answer.
4. Finally, the question of the system's robustness to weak noise remains open.
Instead of the conclusion (from the translator)
It is far from obvious that, by controllably permuting the wires in a bundle or cable between input and output, using nothing but "crossovers" and repeaters, one can obtain at the output the result of computing any function of the input. The underlying computational logic is unusual, but unlike other Turing-complete mathematical abstractions, such as Markov algorithms, it looks very "technological", ready for use in real processors. Quite possibly, quantum computers will work on something similar, adjusted for purely quantum "tricks". And if conservative quantum logic is combined with superconducting interconnects, we may succeed in building a computer that truly dissipates almost no energy. Well, the future will show.