Two English scientists,
Peter Bentley and
Christos Sakellariou, created a computer that, like the human brain, performs its computations not sequentially, but in segments processed in random order.
According to the scientists, this mechanism allows the computer to adapt to critical situations and work around them without stopping.
The concept of
Systemic Computation was described by Peter back in 2007 in his paper:
www0.cs.ucl.ac.uk/staff/P.Bentley/BEJ4.pdf

The essence of the problem with sequential computation is that if a processor suddenly runs into something “unknown”, it cannot continue the computation: it halts, and we see, for example, a blue screen or a core dump.
Peter tried to solve this problem by borrowing a mechanism from the living brain, which performs tasks regardless of any previous operations. His computer works with random chunks of memory, which contain both executable code and data.
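To get a feel for what “chunks of memory that contain both executable code and data, executed in random order” could look like, here is a minimal Python sketch. It only illustrates the general idea; the System class, its fields and the scheduling loop are assumptions made for this example, not Bentley’s actual model.

    import random

    # Hypothetical sketch of the "random chunks" idea: every "system" bundles
    # its own data together with the code that transforms that data, and a
    # scheduler picks systems in random order instead of running a fixed
    # sequence of instructions. Names and data here are illustrative only.

    class System:
        def __init__(self, name, data, transform):
            self.name = name
            self.data = data
            self.transform = transform  # executable behaviour stored with the data

        def interact(self):
            self.data = self.transform(self.data)
            return self.data

    systems = [
        System("doubler", 1, lambda x: x * 2),
        System("counter", 0, lambda x: x + 1),
        System("inverter", 1.0, lambda x: -x),
    ]

    # No predetermined order: each step runs a randomly chosen system,
    # so no system depends on the result of a previous instruction.
    for step in range(10):
        s = random.choice(systems)
        print(f"step {step}: {s.name} -> {s.interact()}")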
Today on phys.org there was news that Peter, along with Christos, created a prototype of such a computer using an
FPGA chip. Its main job is to make sure that the different segments (the developers call them “systems”) receive requests, in random order, and to allocate space for them to run. The FPGA chip thus acts as a resource manager and, in addition, routes the data flows between the “systems”.
Accordingly, in addition to fully parallel computation (no “system” has to wait for any other to finish its operations), we also get good resilience: if one of the systems “fails”, the rest keep working as if nothing had happened. But the developers propose to go even further and use some of the systems to check that the others are working correctly, restarting them (or performing a small reconfiguration) if an error occurs.
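To illustrate this self-repair idea, here is a small Python sketch: worker “systems” are picked in random order, any of them may fail, and a monitor “system” restarts the failed one while the others carry on. The class names, the failure probability and the loop length are assumptions made for the example, not details of the real FPGA prototype.

    import random

    # Illustrative sketch (not the authors' implementation): worker "systems"
    # are scheduled in random order, one of them may fail, and a monitor
    # "system" restarts the failed one while the rest keep running.

    class Worker:
        def __init__(self, name):
            self.name = name
            self.healthy = True
            self.result = 0

        def run(self):
            # Simulate an occasional random fault.
            if random.random() < 0.2:
                self.healthy = False
                raise RuntimeError(f"{self.name} failed")
            self.result += 1

    def monitor(workers):
        # A "system" whose only job is to check the others and restart them.
        for w in workers:
            if not w.healthy:
                print(f"monitor: restarting {w.name}")
                w.healthy = True   # restart / reconfigure the failed system
                w.result = 0

    workers = [Worker(f"system-{i}") for i in range(4)]
    for step in range(20):
        w = random.choice(workers)  # random, order-independent scheduling
        try:
            w.run()
        except RuntimeError as err:
            print(f"step {step}: {err}; the other systems keep working")
        monitor(workers)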
In general, according to the developers, this is a prototype of a computer that (in theory) cannot be crashed.
You can read more about this in this paper:
www0.cs.ucl.ac.uk/staff/ucacpjb/SABEC2.pdf