Recently, a real-life ChipMan traveled to the States and brought back a one-of-a-kind report especially for Habr.
It is a sunny May day in Santa Clara, California, home of Intel headquarters: the height of the work week, but Jerry Bautista, general manager of business development at Intel Labs, agreed to talk to us about Intel Labs, the company's research division whose laboratories around the world are engaged in dozens of cutting-edge scientific projects.

The SC12 building, the headquarters of Intel Labs, is flooded with sunshine, and there is practically no one at the reception desk: it seems the entire lab staff, like heavy bombers, has flown off on a mission. We pick a meeting room by the cafeteria; the inscription on the wall sets the tone for the conversation: "Your grandchildren will not think that what we are doing here is strange".

What do we do at Intel Labs? The key topic of our research is how computing will be used in the future. Let me explain. Look, the more cores our processors have, the more computing power we get. Of course, if all we do is edit text and browse simple websites, such tasks do not require much processing power.
And what does require it?
That is exactly what we want to find out: what tasks will people of the future want to accomplish? What will they care and worry about? What becomes possible with completely different, far more advanced computing capabilities?
For example?
For example, what are commonly called natural interfaces: gesture recognition, face recognition, speech recognition. Take speech recognition: people have been working on this topic for decades, yet the results of that research still cannot really be used. Speech recognition is useless until its accuracy approaches one hundred percent, and the hardest part is squeezing out those last few percent, right? That is a big problem, and big problems like that are exactly what we deal with.
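(A quick aside for the technically minded reader: recognition quality is usually scored as word error rate, the word-level edit distance between the recognizer's output and a reference transcript, which is why the "last few percent" are so hard-won. Below is a minimal, purely illustrative sketch; the example sentences are invented.)

```python
# Word error rate: edit distance between a reference transcript and a hypothesis,
# divided by the number of reference words. Purely illustrative example data.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("turn the lights on in the kitchen",
                      "turn the light on in the kitchen"))  # ~0.14, i.e. about 86% accurate
```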
Pattern recognition in general is one of the biggest problems of modern computer science.
Oh yes, and it also applies to photos, video, and other kinds of multimedia data. I think many of us own the digital equivalent of cardboard boxes in which we keep hundreds of gigabytes of photos, and at best those photos are just dumped in heaps. Of course, you can tag them somehow ("my grandmother", "my aunt") or sort them by date, but that is a very rudimentary system; such pictures have no intelligible central catalog, so most people (I am no exception, I have four children) do not organize their digital photo archives at all, and all those hundreds of gigabytes of photos simply pile up.
But modern pattern recognition algorithms are appearing, aren't they?
True. There are ways to find out where a photo was taken by identifying famous buildings or objects that ended up in the background, say Notre-Dame, Red Square or another popular place: these are very well photographed objects, so the system can recognize them from almost any angle. This approach solves some of the problems with tagging images, because not all photos carry a geotag, that is, a link to geographic coordinates. There are now several camera models that record GPS coordinates as they shoot, but we already have years of accumulated images; how do you determine where those were taken? It is not such an easy task. But suppose we have found where a picture was taken, and there is topographic information about that place, even about its relief, and there are plenty of other pictures in the public domain, yours or other people's, and the system puts them all together so that you can literally walk into the past and view these photos in a simple way, as if flipping through the photo album of a journey.
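As an illustration of the landmark matching described here, the sketch below shows one common approach: matching local image features of an untagged photo against a small library of reference shots of famous landmarks. The file paths, the reference library and the match threshold are assumptions made up for this example; this is not Intel's system.

```python
# Minimal sketch: guess where an untagged photo was taken by matching ORB features
# against a tiny library of reference landmark shots. Paths and thresholds are illustrative.
import cv2

REFERENCE_LANDMARKS = {
    "Notre-Dame, Paris": "refs/notre_dame.jpg",   # hypothetical reference images
    "Red Square, Moscow": "refs/red_square.jpg",
}

def guess_landmark(photo_path, min_good_matches=40):
    """Return the best-matching landmark name, or None if nothing matches well."""
    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

    query = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    if query is None:
        return None
    _, query_desc = orb.detectAndCompute(query, None)
    if query_desc is None:
        return None

    best_name, best_score = None, 0
    for name, ref_path in REFERENCE_LANDMARKS.items():
        ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
        if ref is None:
            continue
        _, ref_desc = orb.detectAndCompute(ref, None)
        if ref_desc is None:
            continue
        # Lowe's ratio test: keep only matches that clearly beat the runner-up.
        pairs = matcher.knnMatch(query_desc, ref_desc, k=2)
        good = [p for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_score:
            best_name, best_score = name, len(good)

    return best_name if best_score >= min_good_matches else None

# print(guess_landmark("old_photos/IMG_0042.jpg"))  # hypothetical file name
```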
Sounds beautiful.
There are other things, such as security. We at Intel Labs understand that security needs serious computing power. There are already plenty of face recognition algorithms used by government services to find people on wanted lists, but they are a bit naive. Here is an example: soon you will fly back to Russia, you will arrive at the airport, and there will be cameras hanging there, filming you whether you like it or not. You may not even notice them, but they are still up there somewhere, recording you. If you are on a wanted list, an alert will go off somewhere inside the system. In reality, though, the number of wanted people is not that large, and they are rarely in the right place at the right time; the system looks for suspicious persons in an attempt to prevent potential trouble. But here is the thing: a lot of bad things are done by people who are not wanted at all. A crime or a terrorist act may be committed by some random person with no criminal record to date. So here is what you can do: watch what people do at the airport and try to draw conclusions about how suspicious their actions, captured by the cameras, really are.
But that is a huge field for research: black swans, wildcards, situations that can in principle happen but are not statistically predicted or detected, constructions of separate micro-events that eventually unfold into a catastrophe.
Yes, and they can be barely noticeable. But you are talking about data analysis at a very high level: analysis of e-mail, photos, vehicle movement patterns and website visit statistics in order to recognize a threat. That is indeed being done now, and it is a big research and practical area, but I want to tell you about a different approach. Look, you have a video recording in which you can see a person going about his own business. He has a bag; then in the next frame he no longer has the bag. Where did the bag go? What is in it; could it be dangerous? Is there an explosive device inside? You can try to track the actions of this suspicious person using footage from several airport cameras. Today this process is not automated at all; there are no software packages or algorithms that would allow it. But imagine: you look for a frame in which this person hands the bag to someone else, and you try to determine who that other person is. Are they about the same age? Is it a woman? Are there frames where they appear together? If the answers are yes, then they are most likely husband and wife, the husband handed the bag to his wife, and nothing bad has happened. But if the first person was moving alone, at some point "lost" the bag, then in one frame the bag turns up standing by itself in a secluded corner, and another camera shows the person hurriedly leaving the airport, that looks much more like a problem. You have to admit this is a very subtle, non-obvious situation, and no security officer will have the patience to keep track of situations like this. But an intelligent computer system can. What I want to show with this example is that we see tremendous possibilities in this kind of computation, where fairly simple but computationally intensive pattern recognition algorithms form the basis of a program that draws inferences about relatively long and complex chains of events. If such a program decided that something really suspicious was happening, it could notify the appropriate security officer and show him a summary report of what was going on.
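To make the reasoning layer he describes more concrete, here is a rough sketch: given per-frame detections from some upstream person/bag tracker (assumed to exist and not shown), flag the "bag left behind while its owner heads for the exit" pattern for a human officer. All data structures, thresholds and the "exit" camera name are hypothetical.

```python
# Sketch of the higher-level inference step: simple rules over tracker output,
# composed into a conclusion about a longer chain of events.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Observation:
    timestamp: float              # seconds since the start of recording
    camera_id: str
    person_id: Optional[str]      # tracker-assigned IDs; None if not in this frame
    bag_id: Optional[str]
    distance_m: Optional[float]   # person-to-bag distance when both are visible

def review_track(observations: List[Observation], abandon_after_s: float = 120.0):
    """Return a short report if a bag appears to have been left unattended."""
    obs = sorted(observations, key=lambda o: o.timestamp)
    last_together = None          # last time the person was right next to the bag

    for o in obs:
        if o.person_id and o.bag_id and o.distance_m is not None and o.distance_m < 2.0:
            last_together = o.timestamp
        # The bag is visible on its own, long after the owner was last beside it.
        if (o.bag_id and not o.person_id and last_together is not None
                and o.timestamp - last_together > abandon_after_s):
            left_via_exit = any(
                later.person_id and not later.bag_id and later.camera_id == "exit"
                for later in obs if later.timestamp > last_together)
            verdict = "owner headed for the exit" if left_via_exit else "owner not located"
            return (f"Unattended bag on camera {o.camera_id} "
                    f"since t={last_together:.0f}s ({verdict}); notify a security officer.")
    return None
```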
Another trend we see is the various social applications of computing, for example, care for the elderly. Suppose you have an elderly grandmother and you would like to receive regular images from her house, because she may need help at any moment. Issues of security and privacy immediately arise, though: such images must not fall into the wrong hands. Here is how to address them: using very low-resolution images, on which details are indistinguishable and which do not require bright light, you can analyze how the grandmother goes about her life. If she is visibly sleeping and waking at her usual times, going to the bathroom, cooking, walking, or watching TV, everything is in order. Or she gives herself insulin injections; things like that can also be monitored and logged.
After all, doesn't Intel Digital Health already have a similar product?
Yes, but not as sophisticated or versatile. I am talking about a more advanced device: as a result of the image analysis, it will simply send you a text message saying "grandmother is all right". You do not need to see the grandmother cooking; you just want to know that everything is fine with her, and she wants to know that no one is watching her. The system simply performs a comprehensive analysis of the images, draws conclusions about the grandmother's behavior, and then sends you a short message. All images from all the cameras stay in her house and are not transmitted anywhere, so her privacy is not endangered.
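A minimal sketch of the privacy-preserving scheme described here: raw low-resolution footage never leaves the house, a local classifier (not shown) produces an activity log, and only a one-line status goes out. The activity labels, the expected routine and the send_sms() transport are purely illustrative assumptions.

```python
# Sketch: compare a locally produced activity log against an expected daily routine
# and emit only a short status message. All labels and times are made up.
from datetime import time

EXPECTED_ROUTINE = {
    "wake_up":        (time(6, 0),  time(9, 30)),
    "cook_breakfast": (time(7, 0),  time(10, 30)),
    "insulin_shot":   (time(8, 0),  time(10, 0)),
    "go_to_bed":      (time(21, 0), time(23, 59)),
}

def summarize_day(activity_log):
    """activity_log: list of (activity_name, datetime) pairs produced by an
    on-device classifier working on low-resolution images (not shown here)."""
    first_seen = {name: None for name in EXPECTED_ROUTINE}
    for name, when in activity_log:
        if name in first_seen and first_seen[name] is None:
            first_seen[name] = when.time()

    missed, off_schedule = [], []
    for name, (start, end) in EXPECTED_ROUTINE.items():
        t = first_seen[name]
        if t is None:
            missed.append(name)
        elif not (start <= t <= end):
            off_schedule.append(name)

    if not missed and not off_schedule:
        return "Grandmother is all right: usual daily routine observed."
    return "Please check in: " + ", ".join(
        [f"{m} not observed" for m in missed] +
        [f"{o} off schedule" for o in off_schedule])

# send_sms(summarize_day(todays_log))   # hypothetical transport; the images stay local
```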
In other words, in the future we will be surrounded by smart devices.
True. At Intel we call this concept the "compute continuum", and it says that in the future serious computation will be done not only in the cloud but also locally. Our lives will be literally surrounded by computers, from cloud services running on supercomputers to small personal electronics like the phone you carry in your pocket. All devices and all applications will interact with each other through data exchange standards, social connections and the current user context. You see, there are several trends here at once: there is a very, very large amount of computing resources, Moore's law is doing very well, and we can keep putting a growing number of cores into computers and making use of them.
But Moore's law is not eternal.
Oh, we are regularly asked: when Moore's law stops, what will you do with all these transistors? Who needs such crazy computing power? But from the examples I have given it is clear that we can keep going for a long time; the point is not that this is convenient for Intel, it is that our society has quite objective needs for ever-increasing computing power. Take modern games, which are now hard to imagine without realistic physics for in-game objects, and that is a very resource-intensive task. When something explodes on screen, the fragments behave in an utterly convincing way, just like in reality: collisions are taken into account, and so are the driving forces and the elasticity of the impacts. When you show such a game to a twelve-year-old, you do not need to explain Newton's laws to him; he already knows them, and he is completely delighted with what he sees. And there is no need to explain to the child that computing all that beauty on the screen takes a powerful quad-core system.
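For a sense of what "realistic fragment physics" costs, here is a tiny sketch of one calculation a physics engine repeats for every colliding pair of fragments on every frame: a two-dimensional elastic collision. The masses and velocities are toy values chosen for illustration.

```python
# Vector form of the standard two-body elastic collision, in 2D.
import numpy as np

def elastic_collision_2d(m1, x1, v1, m2, x2, v2):
    """Return post-collision velocities of two circular fragments."""
    n = x1 - x2                          # line of impact
    dist2 = float(np.dot(n, n))
    if dist2 == 0.0:
        return v1, v2
    # Only the velocity components along the line of impact are exchanged.
    v1_new = v1 - (2 * m2 / (m1 + m2)) * (np.dot(v1 - v2, n) / dist2) * n
    v2_new = v2 - (2 * m1 / (m1 + m2)) * (np.dot(v2 - v1, -n) / dist2) * (-n)
    return v1_new, v2_new

# Two fragments flying toward each other after an explosion (toy numbers).
v1, v2 = elastic_collision_2d(
    1.0, np.array([0.0, 0.0]), np.array([ 2.0, 0.0]),
    3.0, np.array([1.0, 0.0]), np.array([-1.0, 0.0]))
print(v1, v2)   # momentum and kinetic energy are both conserved
```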
If you imagine a game similar to Cameron's Avatar, it will require fantastic computing power.
Exactly!
And are we far from such capacity?
Oh, that is a big topic: physics is not the only thing that matters, there is also behavior. We know that in many games, when the bad guys attack you, they act rather crudely according to a script. They simply run at you in a crowd and shoot. But we also know that nature does not work like that. Wolves and other predators, and sharks too, use hunting behavior, hunting patterns. When they hunt as a group, roles are distributed within it: one drives the prey, the others surround it; the pack acts cohesively, as one, long before the victim even sees it. We can use similar behavioral patterns in modern entertainment to make games even more believable. Picture quality really matters too. In short, when you start adding up physics, graphics and behavioral models, it becomes clear that the games of the future will be very, very complex and realistic and will require a completely new level of computing power. And then you will want to step into Avatar yourself, as a living person with a face and quite definite physical abilities in the game; it would be extremely interesting to find out how I would react if I were pursued by alien creatures that behave like sharks surrounding their prey. All of this takes games to a whole new level, requiring a completely different amount of computing power.
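A toy sketch of the "hunt as a pack" idea: instead of every enemy running straight at the player, each pack member is assigned its own slot on a ring around the prey, so the group closes in from all sides. The numbers and the simple steering rule are illustrative only, not any particular game engine's AI.

```python
# Toy 2D example: assign each pack member its own approach point around the prey,
# then step everyone toward their slot so the group encircles the target.
import math

def pack_targets(prey_pos, pack_size, ring_radius=5.0):
    """Spread approach points evenly on a ring around the prey."""
    px, py = prey_pos
    return [(px + ring_radius * math.cos(2 * math.pi * i / pack_size),
             py + ring_radius * math.sin(2 * math.pi * i / pack_size))
            for i in range(pack_size)]

def step_toward(pos, target, speed=1.0):
    """One simulation tick: move an agent a fixed distance toward its slot."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

# Five "wolves" encircle the prey before anyone attacks head-on.
prey = (0.0, 0.0)
wolves = [(20.0, 0.0), (18.0, 5.0), (22.0, -3.0), (15.0, 10.0), (25.0, 2.0)]
slots = pack_targets(prey, len(wolves))
for _ in range(30):
    wolves = [step_toward(w, s) for w, s in zip(wolves, slots)]
print(wolves)   # the pack ends up spread around the prey, not bunched in a single line
```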
These are the kinds of projects we work on at Intel Labs. There are a lot of them, but they all concern how we will want to use the computing resources of the future. Not the distant future, not even ten years from now, but five years, four years. Everything I am talking about is not science fiction but our immediate future.
I have just come back from a presentation of several Intel projects, and user context ran through them like a common thread. But there is a problem: modern mobile devices are often not compatible with each other. How do you make them talk to each other? Your friend shows up with an iPhone, you have an Android phone, and you just want to send him a few photos, simply drag them across with your finger onto his phone. Right now that is impossible: you have to write an e-mail and attach the photos, or upload them to Flickr, or open a Bluetooth session... a headache.
Yes, inside a platform you can do such things, but not outside it, not across neighboring platforms. Even transferring photos from a phone to a laptop is already a problem. There are several ways to approach this. One of them: whenever we create a piece of hardware, we always... oh, I should not say "always"... let's say, in the overwhelming majority of cases we try to create a reference design, a kind of complete test solution that developers of final products can use as a sample. So we usually show exactly how our hardware can be used in a real working system. For example, for chips that go into mobile devices we create reference devices that are in fact full-fledged mobile gadgets (a good example is Canoe Lake). Such gadgets have an interface, a display, an input system, and so on. But it is important to draw the line properly: for the most part we provide not final products but chips and chipsets; everything else is just a way to show how our chips can be used. The same goes for our software development: we write a fair number of applications, even games, which we then show from the inside so that ISVs, the software makers, can look into the code and understand how we did it. And we share those applications.
We also have extensive code optimization tools, multithreaded programming tools, and so on, so when developers understand what they want and get down to actual development, we can help them tune and improve their programs. That is how the process works: when you create something new, a learning curve immediately appears, and you have to accompany the novelty with usage examples and development tools. We have done this many times with our hardware products, and I know we do the same in software development.
Everything is clear with examples, but what about user experience, which is rather difficult to measure in any formal way? How can a programmer understand how that user experience is created if he has never been taught cognitive psychology and a host of related subjects? Does he really have to figure it all out from some files in a library of examples?
I think you're right. It is hard to do functionally; you need to look at ways of working with natural interfaces. And you are right again: right now there are no tools for this.
Is Intel Labs doing any research in this area?
Not at such a high level, alas. It is very difficult. You have to look beyond specific devices, to a higher level of abstraction: at the ecosystem, at certain patterns of good design.
Of course: user context, for one, cannot be implemented correctly without a view of the ecosystem as a whole, right?
Right. And that is a big problem. It can be solved for a very large client by spending a huge amount of time and resources, because you know that client's device will sell in large quantities; but here we have a large community of software developers, each working on their own products, and we simply cannot come to everyone individually. Apple has managed to do something similar quite successfully: I know they have application development courses that they teach at Stanford and elsewhere. Perhaps such courses are a good way to try to solve the problem with developers systematically.
NVIDIA did a pretty good job of this with CUDA, so I think it is fair to say that we at Intel could, of course, do this kind of thing as well. So yes, good question. If it turns out that the absence of such a systematic view, a systematic approach, really hinders the deployment of new platforms and ecosystems, I think we will address it.
Here is another point: ultimately Intel is trying to monetize all this fascinating research. Take user context: from a social networking point of view it is pretty cool to know where your friends are, but would you pay money for that knowledge, or for a service willing to provide it for a fee? So far context exists as a set of various geoservices, and all of them are free. But contextual advertising, say, could be a big thing?
True, but it raises a bunch of questions too. Say you get an ad because you are walking past a store; the store already knows your social circle, your age, that you are a man and that you are passing by, and now it messages you: there is cold beer inside, come on in. But doesn't that become intrusive at some point? Same question. In any case, we do try to commercialize some of our findings so that they turn into businesses or services, and that is the part of our work that is usually overlooked or rarely talked about.
So Intel Labs is not a product design department but rather a conceptual design department?
Yeah, right.
We cover the full range of work. Some of it runs too far ahead, and there is no particular idea yet of how to turn it into products. Another part of our development concerns the nearer future; those developments are more practical, and we are seriously thinking about how they can become products. There are short-term projects, there are long-term projects, and there are such esoteric developments that we are not at all sure they will ever work (laughs). Another point: I do not know how obvious this is to people outside the company, but quite a large part of the work at Intel Labs involves collaboration with the scientific world; we give real money to various universities for targeted research programs on topics of interest to us, and we try to support such research not only with money but also with guidance. Our laboratories are located all over the world and do locally rooted work, in China, Spain, Mexico, Russia. Each of these countries has its own areas of expertise, around which a particular Intel laboratory is formed. Of course, there is a link to local markets, which has to do with how products are tailored to specific market conditions. So we do think about the details, but of course the work at Intel Labs is built on collaboration between the labs.
But let me try to tell you about, and show you, some of our projects.
To be continued.