Greetings to professionals and fans of Artificial Intelligence alike. For a long time I didn't dare to post anything here more substantial than a comment. But digging further into theory and philosophy would be pointless without at least a hint of practice, so it's time to provide that hint. First, though, it wouldn't hurt to refresh your memory.
Previously in the series
... It was back in 1956 (by then Asimov's series of books about robots had already been published). A workshop was held in the USA at Dartmouth College, where the term Artificial Intelligence was proposed ... These days we talk more about "some characteristics of Artificial Intelligence" than about AI itself ... [1] ... Based on data about the structure of neurons, the cells of our brain, researchers tried to recreate their structure. That was a few years before the very workshop where AI was first spoken of ... Let's be honest: we want Artificial Intelligence to be as close to human as possible ... [2]
... It is much easier to create something that functions exactly like our intellect than to play God and invent from scratch a system that could develop autonomously (without anyone touching its architecture) from the moment of launch ... I admit, at first I had the idea of using a virtual world created on a computer as a three-dimensional interactive model ... [3] ... The process of thinking comes down to working with information ... Artificial intelligence is a matter of the ability to see what matters and discard everything unnecessary ... [4] ... Back then I still thought I would take a thick reference book on human physiology and find all the answers there ... In fact, at a low level there is a direct connection between motivation and emotion ... [5] ... If we want it to develop like a person and think like a person, then we need to create for it a world that matches the real one as closely as possible. Otherwise, we will get Alien-level intelligence from Alpha Centauri ... To that end, during training, whenever it crawls into the lit areas it will be "shocked", which it will not like at all ... [6]
A little philosophy
With memory refreshed, we can get down to business. To begin, let's summarize the philosophical arguments started in the previous notes. One of the main threads there was the idea that you need to decide which aspects and qualities of a person you want to see in an AI, focus on implementing those, and simply drop everything else. I understand that everyone has their own opinion on this matter; that is exactly why a single definition of AI does not exist. Everyone chooses for themselves how accurate and detailed a model of intelligence should be.
I made my choice in favor of recreating the basic functions of the brain in the context of working with information: perceiving information, evaluating it emotionally, forming images of objects, making decisions about actions, identifying patterns, and so on. In short, all the basic functions associated with information and memory. That may sound unremarkable, but the point is to make it all work together and autonomously. It can be compared to one of those games where you have to build a water pipe between points A and B: everything works only when all the fragments of the system fit together perfectly, and usually, among the huge number of possible combinations, only one is correct.
I keep talking about information and memory. Since all of this machinery needs to be transferred from our material world into a digital one, the question of form and content inevitably arises. In the real world, analog information reaches the brain in a variety of forms. A computer model will, one way or another, work with bytes, and accordingly all of its internal logic will be oriented toward bytes. The question is whether this "orientation" will affect, say, the model's ability to interact with the real world when connected to the body of a robot. Or will the artificial brain be permanently stuck in its artificial world, which is likewise built around bytes? In that case, to interact with it we would have to connect to its virtual world ourselves, the way we connect to multiplayer games. It's an interesting question, and I invite you to ponder it at your leisure.
For now, let's focus on the digital form of information and think about the brain's interfaces. It definitely needs an input and an output. You could, of course, invent some elaborate formats for the information flowing into and out of the brain, but that would only increase the coupling of the system and make it harder to add new information channels. A universal solution here is a simple array of double-precision values, say from 0 to 1. Anything can be represented this way. If it's a smell, the values describe the intensity of the different shades of that smell. If it's vision: the amount of light, distance, color, depending on how complex the vision is. Touch: intensity for different zones of sensitivity, and so on. Outgoing information is just as easy to interpret; the values might, for example, encode impulses for rotating the bones in the joints. The fact that the information is easy to interpret matters a great deal: it helps enormously when we want to dump the brain's state and understand what it was thinking at that moment.
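To make this concrete, here is a minimal sketch of that convention. The specific channel meanings are my own illustration, not a fixed format:

// Every channel is a double in [0, 1]; only the interpretation differs.
double[] smell = { 0.9, 0.1, 0.0 };  // intensities of three shades of a smell
double[] vision = { 0.7, 0.5, 0.2 }; // e.g. amount of light, distance, color
double[] motor = { 0.0, 0.3, 1.0 };  // e.g. encoded joint rotation impulses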
Some practice
Before focusing on the work of the brain itself, we need to create something for it to interact with. We could, of course, limit ourselves to a single class that generates faceless data sequences and abstract patterns, but then we would lose a wonderful opportunity to interpret the information. We would have no way to properly evaluate the brain's work, since there would be nothing to compare it against. We need to create a kind of real world with its own laws. Then it will be possible to simulate an analogue of any real situation, see how the artificial brain copes with it and, if necessary, correct its work.
Let me introduce the Alan Platform project. With it, you can create the components of an artificial world and then assemble various environments for an artificial organism out of them. The components of the organism itself are created the same way. The whole process is focused on working out the information transformations, because that is what the brain will work with. All other aspects are simplified and automated as far as possible.
All right, enough empty words; it's much easier to show everything with examples. Let's start by creating an object with specific properties.
var apple = new Component(
    new PropertySetInfo {
        Name = "Circle",
        OperatorName = "Some2DLayout",
        Properties = new PropertyInfo[] {
            new PropertyInfo { Name = "Radius", Value = 0.5 },
            new PropertyInfo { Name = "CenterX", Value = 0.0 },
            new PropertyInfo { Name = "CenterY", Value = 0.0 }
        }
    },
    new PropertySetInfo {
        Name = "RGB",
        OperatorName = "ColorOperator",
        Properties = new PropertyInfo[] {
            new PropertyInfo { Name = "Red", Value = 1.0 },
            new PropertyInfo { Name = "Green", Value = 0.0 },
            new PropertyInfo { Name = "Blue", Value = 0.0 }
        }
    });
With this code we created an apple ... well, a flat model of an apple: a red circle. What's interesting about this code? First, there is no visible implementation of these properties. A component is only a container holding the names of its properties and their values; it exists purely as an abstraction for various objects. Second, it is not yet clear what the OperatorName property means. Operators are responsible for managing properties. A single model may contain several of them, one for each type of information. In the code above, for example, Some2DLayout is responsible for location in space and size, while ColorOperator is responsible for color information. Here's what the ColorOperator code might look like:
[PropertySet( "RGB" , "Red" , "Green" , "Blue" )]
[PropertySet( "GrayScale" , "Value" )]
public class ColorOperator : Operator { }
Again, there is no hint of a property implementation. There are only declarations of the property sets controlled by this operator. The first argument is the name of the set; all subsequent arguments are the names of its properties. There can be any number of such sets, and they all describe, in one form or another, information of the same type: in this case, color. All of this is done purely for convenience, so that the resulting environment models are simple, understandable and, in a sense, close to their prototype, the real world.
So what does the operator do? First, it finds out which components declare property sets belonging to it. Then, for each such property, it creates a PropertyData object inside itself that stores the actual information about that property. This, too, is done for convenience: all the properties of all objects sharing one type of information live in one place, which makes them easier to manage.
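As a rough sketch of what this might look like inside (my own guess at the mechanics, not the platform's actual code):

public abstract class Operator
{
    // All property data of one information type, kept in one place.
    protected List<PropertyData> Data = new List<PropertyData>();

    // Hypothetical registration step: called for every component that
    // declares a property set whose OperatorName matches this operator.
    public void Register(Component component, PropertySetInfo set)
    {
        foreach (var info in set.Properties)
            Data.Add(new PropertyData(component.Id, info.Name, info.Value));
    }
}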
The following code snippet assembles the model:
var root = new Component();
root.Add(apple, plate, table); // plate and table are built the same way as apple
root.Add(new ColorOperator(), new Some2DLayout());
As you can see, a component is a container not only for properties but also for other components and operators. The root component has no special functions of its own; it simply combines all the elements of the world model into a single tree.
Now let me clarify what I meant by the focus on information transformations. In the model above (an apple lying on a plate, which lies on a table), all the information is divided between the two operators: one stores everything related to the color of objects, the other everything related to their location in space. It is with these operators that the brain will interact, though not directly but through intermediaries: sensors and actions.
Sensors are analogues of the sense organs, while actions are, in a sense, the body's functions, the operations it can perform on objects. The former convert the properties of objects into an array of double values that goes to the brain; the latter use the information received from the brain (a double[]) to change the values of object properties. Since each sensor and action works with one particular type of information, it must be associated with one of the operators. The following code demonstrates what a sensor implementation might look like.
[AssociatedOperator("ColorOperator")]
[ChannelsCount(3)]
public class ColorSensor : Sensor
{
    public override void Update(IEnumerable<PropertyData> data)
    {
        // Walk up the tree: the first Parent is the organism,
        // the second is the root component of the world.
        var layout = this.Parent.Parent.Layout; // Some2DLayout
        // Keep only the properties of components whose bounds
        // intersect the bounds of the sensor's organism.
        VisibleProperties = data
            .Where(x => layout.GetBounds(x.ComponentId)
                .Intersects(layout.GetBounds(this.Parent.Id)))
            .ToArray();
    }

    public override void Transmit()
    {
        // Convert the visible properties into three channel values
        // and pass the result on to the brain.
        double red = 0;
        double green = 0;
        double blue = 0;
        var redValues = VisibleProperties
            .Where(p => p.PropertyName == "Red" || p.PropertyName == "Value")
            .ToArray();
        if (redValues.Length > 0)
            red = redValues.Sum(p => p.Value) / redValues.Length;
        // ... (the same for green and blue)
        double[] data = { red, green, blue };
        ConnectedBrain.Update(this.Name, data);
    }
}
Let's take it in order. AssociatedOperator names the operator the sensor works with, the one whose properties it "understands". The sensor and the operator belong to different branches of the element tree, and this attribute helps the program link them. ChannelsCount is the number of values the brain should expect from this sensor. It is a useful thing too, because the brain automatically adjusts its internal structure to the number of incoming and outgoing channels, which makes it even more versatile.
The first method, Update, selects from the operator's full set of properties only those that should be in the sensor's "field of view". The operator calls it periodically whenever one of the objects moves in space. The second method, Transmit, converts the information from the properties into the universal form and passes the result to the brain. The operator calls it every time the value of a property visible to the sensor changes. Actions are implemented in a similar way, except that instead of Transmit they have DoAction, which performs the reverse operation: the double[] data received from the brain is used to modify the VisibleProperties. It is at this stage that the laws of the world come into play, allowing or forbidding property changes in particular situations.
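For symmetry, here is what an action might look like. This is only a sketch: the MoveAction class, its two channels, and the "speed limit" law are my own inventions for illustration.

[AssociatedOperator("Some2DLayout")]
[ChannelsCount(2)]
public class MoveAction : Action
{
    // The brain sends two values: desired shifts along X and Y.
    public override void DoAction(double[] data)
    {
        // A law of this world: the body cannot move more than 0.1 per step.
        var dx = Math.Min(data[0], 0.1);
        var dy = Math.Min(data[1], 0.1);
        foreach (var p in VisibleProperties)
        {
            if (p.PropertyName == "CenterX") p.Value += dx;
            if (p.PropertyName == "CenterY") p.Value += dy;
        }
    }
}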
There is one interesting line in the sensor code above: "var layout = this.Parent.Parent.Layout;". The double reference to the parent should make sense if you recall that all elements are combined into a tree: the first step gets a reference to the organism the sensor belongs to, and the second gets the root object the organism was added to. As for Layout, I haven't said anything about it yet. Let's fix that.
A Layout is a special kind of operator that manages information about the location of objects. It is needed to track when an object moves (that is, when the properties responsible for its coordinates change), resizes, or changes its orientation in space. Every element tree should contain one Layout, since almost all sensors and actions rely on its services. A Layout definition looks like this:
[PropertySet( "Circle" , "Radius" , "CenterX" , "CenterY" )]<br>[PropertySet( "Rectangle" , "Left" , "Top" , "Right" , "Bottom" )]<br> public class Some2DLayout : Layout<br>{<br> const int radius = 0;<br> const int centerX = 1;<br> const int centerY = 2;<br> const int left = 0;<br> const int top = 1;<br> const int right = 2;<br> const int bottom = 3;<br> <br> public TestLayout()<br> {<br> /* , <br> * . <br> * - , <br> * - . <br> */ <br> this .AddCenterMethod( "Circle" , p => new Center(p[centerX], p[centerY]));<br> this .AddCenterMethod( "Rectangle" , p => new Center(<br> (p[left] + p[right]) / 2,<br> (p[top] + p[bottom]) / 2<br> );<br> <br> // . <br> // . <br> this .AddBoundsMethod( "Circle" , p => new Bounds(<br> p[centerX] - p[padius],<br> p[centerY] - p[radius],<br> p[centerX] + p[radius],<br> p[centerY] + p[radius]<br> );<br> this .AddBoundsMethod( "Rectangle" , p => new Bounds(p[left], p[top], p[right], p[bottom]));<br> }<br>} <br><br> * This source code was highlighted with Source Code Highlighter .
The attributes work just as they do for an operator; the constants simply make the code above easier to read. In the Layout constructor you must register the methods used to calculate the center coordinates and the bounds for each property set. If you manage to choose your sets cleverly enough that these methods are identical for all of them, you can register just one pair, using "Default" as the property set name.
So, the AddCenterMethod and AddBoundsMethod methods take two arguments: the name of a property set and a delegate of the form Func<double[], Center> (or Func<double[], Bounds>). The delegate receives an array of all the property values in the set and must return the coordinates of the center (Center) or of the bounds (Bounds). After that you can retrieve the coordinates at any time by calling the corresponding Layout methods, GetCenter and GetBounds, and passing them the id of the component whose location you are interested in.
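Usage is then straightforward. Assuming the apple from the earlier example has been added to a tree containing this layout:

var layout = root.Layout;                // the Some2DLayout instance
var center = layout.GetCenter(apple.Id); // Center(0.0, 0.0) for our circle
var bounds = layout.GetBounds(apple.Id); // Bounds(-0.5, -0.5, 0.5, 0.5)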
What was it?
These were some of the features of the Alan Platform. Most of them already work; some are still waiting for someone to implement them. I have left the simulation of the system's internal time behind the scenes, since justifying the need for it would take up quite a lot of space and time. To pull all this information together, let me describe what working with the library will look like. First comes the development of extension libraries: creating the various operators, layouts, sensors and actions, everything responsible for working with information. Then comes design: building various objects and organisms out of these pieces and combining them into one closed system, an artificial environment. The latter can be done using XML (partially implemented), and in the future some kind of GUI could be added for it.
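To give a feel for it, an environment description might look roughly like this. This is a sketch only: the schema is not settled, and the element names here are my assumption.

<environment>
  <operators>
    <operator type="ColorOperator" />
    <operator type="Some2DLayout" />
  </operators>
  <component name="apple">
    <propertySet name="Circle" operator="Some2DLayout">
      <property name="Radius" value="0.5" />
      <property name="CenterX" value="0.0" />
      <property name="CenterY" value="0.0" />
    </propertySet>
  </component>
</environment>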
The platform itself deliberately contains no GUI; it only lets you build trees out of components, operators and the rest. The assumption is that interaction with the system should happen within its own laws. To that end, you can construct a separate organism controlled not by a brain model but by the user. It would be something like a model of a god: its sensors can cover the whole world and convert information about object properties for display on the screen, and its actions can likewise "reach" any object and change its properties, with those actions wired up to the user interface. At the same time, this organism has zero size and exists everywhere and nowhere at once. The organisms inhabiting the artificial world can only guess at its existence from indirect signs: walls that vanish spontaneously, food that appears, lightning strikes ...
Why build this whole edifice, you ask? Programming is essentially a simulation of our reality. I enjoy the process; I like building beautiful models. As I said at the beginning, to create something resembling the brain, you need to create something resembling the world that brain interacts with. At the same time, I wanted to make it as universal as possible, so that the slightest complication of the model would not require rewriting everything from scratch. That inevitably led to a certain amount of excess formalization and unification, but I think the result has not become too detached from the subject area and still follows its logic.
You can see the source code here; for now everything is scattered across various branches. There are no brain internals yet. It probably doesn't even qualify as a Technical Preview, but some of the ideas can already be traced. If this project seems interesting to you and you would like to take part in it, please message me privately. If all goes well, then in the very near future there will be material on implementing one of the brain's features: the formation and recognition of images.