
“It's easier to answer than to remain silent” - a great interview with the father of transactional memory, Maurice Herlihy


Maurice Herlihy is a two-time winner of the Dijkstra Prize: the first for the paper “Wait-Free Synchronization” (Brown University) and the second, more recent, for “Transactional Memory: Architectural Support for Lock-Free Data Structures” (Virginia Tech). The Dijkstra Prize is awarded for works whose significance and influence have been visible for at least ten years, and Maurice is obviously one of the best-known specialists in the field. He currently works as a professor at Brown University, and his list of achievements runs a whole paragraph long. He is now engaged in blockchain research in the context of classical distributed computing.


Maurice has previously visited Russia: he spoke at the SPTCC summer school ( video ) and gave an excellent talk at a meetup of the JUG.ru community of Java developers in St. Petersburg ( video ).


This post is a long interview with Maurice Herlihy. It covers the following topics:



The interviewers are:


Vitaly Aksenov is currently a postdoc at IST Austria and a member of the Computer Technologies Department at ITMO University. He does research on the theory and practice of concurrent data structures. Before joining IST, he received a PhD from Paris Diderot University and ITMO University under the supervision of Professor Petr Kuznetsov.


Alexey Fedorov is a producer at JUG Ru Group, a Russian company that organizes conferences for developers. Alexey has taken part in preparing more than 50 conferences, and his résumé includes everything from a software-engineer position at Oracle (JCK, Java Platform Group) to a DevRel position at Odnoklassniki.


Vladimir Sitnikov is an engineer at Netcracker. For ten years he has worked on the performance and scalability of NetCracker OS — the software telecom operators use to automate network-management processes and network equipment. He is interested in Java and Oracle Database performance issues, and is the author of over a dozen performance improvements in the official PostgreSQL JDBC driver.


Interaction between academia and industry


Alexey: Maurice, you have worked in academia for a very long time, and the first question is about the interaction between the academic and industrial spheres. Could you tell us how that interaction has changed lately? What was it like 20-30 years ago, and what is happening now?


Maurice: I have always tried to work closely with commercial companies, because they have interesting problems. As a rule, they are not very interested either in publishing their results or in explaining their problems in detail to the world community; they are only interested in solving those problems. I worked at some such companies for a while. I spent five years working full-time in a research lab at Digital Equipment Corporation, which used to be a major computer company. I worked one day a week at Sun, at Microsoft, at Oracle, and a bit at Facebook. Now I am about to go on sabbatical (a professor at an American university is allowed to take such a leave for a year roughly every six years) and work at Algorand, a cryptocurrency company in Boston. Working closely with companies has always been rewarding, because that is how you learn about new and interesting things. You can be the first or second person to publish an article on a chosen topic, instead of gradually improving solutions to problems that everyone else is already working on.


Alexey: Can you tell us more about how this happens?


Maurice: Of course. You know, when I worked at Digital Equipment Corporation, Elliot Moss and I invented transactional memory. It was a very fruitful period, when everyone was becoming interested in information technology, including parallelism, even though multi-core systems did not exist yet. In the Sun and Oracle days I worked a lot on parallel data structures. At Facebook I was involved in their blockchain project, which I can’t talk about, but I hope it will become public soon. Next year, at Algorand, I will work in a research group studying smart contracts.


Alexey: In the past few years blockchain has become a very popular topic. Does that help your research? Perhaps it makes it easier to get grants, or gives access to the resources of companies operating in the industry?


Maurice: I have already received a small grant from the Ethereum Foundation. The popularity of the blockchain is very useful for inspiring students to work in this area. They are very interested and happy to take part, but sometimes they do not realize that research which sounds tempting from the outside turns out to involve really hard work. Nevertheless, I am very happy to use all this mystique around the blockchain; it helps attract students.


But that is not all. I am on the advisory board of several blockchain startups. Some of them may succeed and some may not, but it is always interesting to see their ideas, study them, and advise people. The most exciting part is when you warn people not to do something. Much of it seems like a good idea at first, but is it really?


A scientific foundation for blockchain research


Vitaliy: Some people think that blockchain and its algorithms have a future, and others say that it is just another bubble. Can you share your opinion?


Maurice: A lot of what is happening in the blockchain world does not work properly, some of it is outright scam, and a lot is overvalued. Nevertheless, I think there is a solid scientific basis for this research. The fact that the blockchain world is full of ideological differences shows the level of excitement and dedication. On the other hand, that is not particularly beneficial for scientific research. If you publish an article that points out the shortcomings of a particular algorithm, the reaction you get is not always fully scientific; often people just vent their emotions. I think the excitement in this area may seem attractive to some, but in the end there are real scientific and engineering questions that need to be addressed. There is a lot of computer science here.


Vitaliy: So you are trying to lay the foundation for the blockchain research, right?


Maurice: I try to lay the foundation for a solid, scientifically and mathematically sound discipline. And part of the problem is that sometimes you have to push back against the unnecessarily harsh positions of other people, or ignore them. Sometimes people ask why I work in an area that only terrorists and drug dealers are interested in. Such a reaction is as meaningless as the behavior of followers who blindly repeat your words. I think the truth is somewhere in the middle. Blockchain will still have a profound impact on society and the global economy, but it probably will not happen through today's technology. Modern technologies will develop, and whatever is called a blockchain in the future will become very important. It may not even look like modern blockchains; that is an open question.


If people invent new technologies, they will keep calling them blockchain. I mean, today's Fortran has nothing to do with the Fortran of the 1960s, but everyone still calls it Fortran. The same goes for UNIX. What is called a “blockchain” will still make its revolution, but I doubt this new blockchain will look like what everyone is so fond of using today.


Where do breakthrough ideas come from? Influence of popularity


Alexey: Has the popularity of blockchain led to new results from a scientific point of view? More interaction, more students, more companies in the field — has this increase in popularity produced any results?


Maurice: I became interested in this when someone handed me the official whitepaper of a company that had just raised quite a lot of money. It wrote about the Byzantine generals problem, which I know more than well, and what was written in the leaflet was obviously technically incorrect. The people who wrote it did not really understand the model behind the problem ... and yet the company raised a lot of money. The company later quietly replaced the leaflet with a much more correct version — and I will not say what the company was called. They still exist and are doing very well. This case convinced me, first, that blockchain is just a form of distributed computing. Second, the entry threshold (at least four years ago) was rather low. People working in this field were very energetic and intelligent, but they did not read scientific papers; they tried to reinvent well-known things and got them wrong. Today the drama has diminished.


Alexey: This is very interesting, because several years ago we saw a different trend. It is a bit like front-end development, where browser-interface developers were re-inventing whole technologies that were already popular on the backend: build systems, continuous integration, and so on.


Maurice: I agree. But this is not surprising, because truly breakthrough ideas always come from outside the established community. Recognized researchers, especially authorities in academia, are unlikely to do something truly groundbreaking. It is easy to write a paper for the next conference on how you slightly improved the results of your past work: go to the conference, get together with friends, talk about the same things. The people who arrive at breakthrough ideas almost always come from outside. They do not know the rules or the language, and yet ... If you are inside the established community, I advise you to pay attention to new things, to things that do not fit the big picture. In a sense, one can try to combine these external, more fluid constructions with methods we already understand. As a first step, try to create a scientific basis, and then modify it so that it can be applied to the new breakthrough ideas. I think blockchain is a great candidate for the role of a fresh breakthrough idea.


Alexey: Why do you think this happens? Because people “outside” do not have the particular barriers ingrained in the community?


Maurice: There is a pattern here. If you read the history of the Impressionists, or of painting in general, famous artists at one time rejected Impressionism and called it childish; a generation later, that previously rejected kind of art became the standard. What I see in my own field: the inventors of the blockchain were not interested in power, in piling up publications and citation indices — they just wanted to do something good. So they sat down and started doing it. They lacked a certain technical depth, but that is fixable. Coming up with new creative ideas is much harder than polishing and strengthening insufficiently mature ones. Thanks to these inventors, I now have something to do!


Alexey: This is similar to the difference between startups and legacy projects. We inherit a lot of limitations of thinking, barriers, special requirements, and so on.


Maurice: A good analogy is distributed computing. Think of blockchain as a startup, and distributed computing as a large established company. Distributed computing is in the process of acquiring and merging with the blockchain.


A PhD under the supervision of Barbara Liskov


Vitaliy: We still have a lot of questions! We studied your biography and came across a curious fact about your doctorate. Yes, it was a long time ago, but the topic seems important. You received your PhD under the supervision of Barbara Liskov herself! Barbara is very well known in the programming-languages community, and a very well-known figure in general. It would be logical for your research to be in the field of programming languages. How did you switch to parallel computing? Why did you decide to change the subject?


Maurice: At that time, Barbara and her group were just looking at distributed computing; it was a very new idea. There were also people saying that distributed computing was nonsense, that communication between computers was pointless. One of the issues in distributed computing that distinguishes it from centralized computing is fault tolerance. After lengthy research we decided that a programming language for distributed computing needs something like atomic transactions, because you can never be sure a remote call will succeed. Once you have transactions, the problem of concurrency control arises. Then there was a lot of work on obtaining highly parallel transactional data structures. Then, when I graduated, I went to Carnegie Mellon and started looking for a topic. It occurred to me that computing had moved from individual computers to networks of computers. The natural continuation of that progress would be multiprocessors — the word “multicore” did not exist yet. I thought: what is the equivalent of atomic transactions for a multi-core system? Certainly not ordinary transactions, because they are too big and heavy. And that is how I arrived at the idea of linearizability, and that is how I came up with all of wait-free synchronization. It was an attempt to answer the question of what the analogue of atomic transactions is for a multiprocessor system with shared memory. At first glance this work may look quite different, but in fact it is a continuation of the same theme.
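Editor's note: linearizability can be illustrated with a tiny sketch (ours, not Maurice's): a counter whose increments appear to take effect atomically at a single instant — the successful compare-and-swap — implemented without any locks.

```java
import java.util.concurrent.atomic.AtomicInteger;

class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Lock-free increment: retry the compare-and-swap until it succeeds.
    // Each successful CAS is the operation's linearization point - the
    // single instant at which the increment appears to take effect.
    public int increment() {
        while (true) {
            int current = value.get();
            if (value.compareAndSet(current, current + 1)) {
                return current + 1;
            }
        }
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeCounter c = new LockFreeCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(c.get()); // 40000: no increments are lost
    }
}
```

Because every operation takes effect at one atomic point, concurrent executions are indistinguishable from some sequential order — which is exactly the property that plays the role of atomic transactions on shared-memory multiprocessors.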


The world is waiting for multi-core


Vitaliy: You mentioned that at that time multi-core computers hardly existed, right?


Maurice: They simply did not exist. There were several so-called symmetric multiprocessors, mostly hanging off the same bus. It did not work very well, because every time a new company created something like that, Intel would release a single processor that outperformed the multiprocessor.


Alexey: Does this mean that in those old days it was more of a theoretical study?


Maurice: It was not theoretical so much as speculative research. It was not about working with lots of theorems; rather, we put forward hypotheses about an architecture that did not exist at the time. That is what research is for! No company would have done it; it was all from the distant future. In fact, that remained the case until 2004, when real multi-core processors appeared. Because processors overheat, you can make a processor smaller, but you cannot make it faster; because of that, there was a transition to multi-core architectures. And then it meant that suddenly there was an application for all the concepts we had developed in the past.


Alexey: Why do you think multi-core processors appeared only in the 2000s? Why so late?


Maurice: This is due to hardware limitations. Intel, AMD, and other companies are very good at increasing processor speed. But at some point processors became small enough that the clock frequency could no longer be raised, because the processors would start to burn. You can make them smaller, but not faster. What is in their power is, instead of one very small processor, to fit eight, sixteen, or thirty-two processors into the same package volume where only one used to fit. Now you have multithreading and fast communication between the cores, because they share caches. But you cannot make them run faster — there is a very specific speed limit. They keep gradually improving, but not by much. The laws of physics got in the way.


New world - new problems. NUMA, NVM and Hacking Architecture


Alexey: That sounds very reasonable. New multi-core processors brought new problems with them. Did you and your colleagues anticipate those problems? Perhaps you had studied them beforehand? In theoretical research it is often not easy to predict such things. When the problems did arrive, how well did they match your expectations and those of your colleagues? Or were they completely new, and you had to spend a lot of time solving them as they appeared?


Vitaliy: Let me add to Alexey's question: did you correctly predict processor architecture while you were studying the theory?


Maurice: Not 100%. But I think my colleagues and I did a good job of predicting multi-core with shared memory. I think we correctly predicted the difficulties of developing parallel data structures that work without locks. Such data structures were important for many applications, though not for all, but often you really need a non-blocking data structure. When we invented them, many claimed it was nonsense, that everything worked fine with locks. We foresaw fairly well that there would be ready-made solutions for many programming and data-structure problems. There were also more complex problems, such as NUMA — non-uniform memory access. In fact, they were not even considered before the invention of multi-core processors, because they were too specific. The research community worked on questions that were generally predictable. Some hardware problems tied to specific architectures had to wait in the wings for those architectures to actually appear. For example, nobody really worked on GPU-specific data structures, because GPUs did not exist yet. Although a lot of work was done on SIMD, those algorithms were ready to use as soon as suitable hardware appeared. Still, it is impossible to foresee everything.
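Editor's note: a classic example of the non-blocking data structures Maurice mentions is the Treiber stack. The sketch below is a minimal illustration: push and pop synchronize only through a compare-and-swap on the head pointer, so no thread ever blocks waiting for a lock held by another thread.

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber's lock-free stack: the only point of synchronization is a
// single compare-and-swap on the head reference.
class TreiberStack<T> {
    private static final class Node<T> {
        final T item;
        Node<T> next;
        Node(T item) { this.item = item; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T item) {
        Node<T> node = new Node<>(item);
        while (true) {
            Node<T> current = head.get();
            node.next = current;
            // If nobody changed head in the meantime, install the new node;
            // otherwise another thread interfered and we simply retry.
            if (head.compareAndSet(current, node)) return;
        }
    }

    public T pop() {
        while (true) {
            Node<T> current = head.get();
            if (current == null) return null; // empty stack
            if (head.compareAndSet(current, current.next)) return current.item;
        }
    }
}
```

A failed CAS means some other thread made progress, which is exactly the lock-free guarantee: the system as a whole never gets stuck, even if individual operations have to retry.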


Alexey: If I understand correctly, NUMA is a kind of compromise between cost, performance, and other factors. Any idea why NUMA appeared so late?


Maurice: I think NUMA exists because of problems with the hardware used to make memory: the farther away the components are, the slower it is to access them. On the other hand, the whole value of the abstraction is the uniformity of memory. So one of the characteristics of parallel computing is that all the abstractions are slightly broken. If access were perfectly uniform, all memory would be equidistant, but that is economically, and maybe even physically, impossible. So this conflict arises. If you write your program as if memory were uniform, it will most likely be correct, in the sense that it will not give wrong answers. But its performance will be nothing to write home about. Similarly, if you write a spinlock without understanding the cache hierarchy, the lock itself will be correct, but you can forget about performance. In a sense, you have to write programs that live on top of a very simple abstraction, yet you have to outsmart the people who gave you that abstraction: you must know that there is a memory hierarchy under the abstraction, that there is a bus between you and that memory, and so on. So there is a conflict between usefully simple abstractions, and it leads us to very concrete and pragmatic problems.
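Editor's note: Maurice's spinlock remark can be made concrete with a sketch (ours, for illustration). A naive spinlock hammers the bus with atomic operations; the test-and-test-and-set variant below first spins on a plain read, which is served from the local cache copy, and attempts the expensive atomic operation only when the lock looks free.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Test-and-test-and-set spinlock: correct either way, but far friendlier
// to the cache hierarchy than spinning directly on getAndSet.
class TTASLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (true) {
            // Spin on a read first: this hits the locally cached copy and
            // generates no coherence traffic while the lock is held elsewhere.
            while (locked.get()) { /* spin */ }
            // The lock looked free - only now try the atomic operation,
            // which invalidates other caches' copies.
            if (!locked.getAndSet(true)) return;
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

Both versions satisfy mutual exclusion; the difference is purely in how they interact with the hidden memory hierarchy — precisely the "outsmart the abstraction" point made above.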


Vitaliy: What about the future? Can you predict how processors will evolve further? There is an idea that one of the answers is transactional memory. You probably have something else in store as well.


Maurice: There are a couple of major problems ahead. One is that coherent memory is a wonderful abstraction, but it starts to break down in special cases. NUMA, for example, is a living example of something where you can keep pretending that uniform memory exists. In reality, no — the performance will make you cry. At some point architects will have to abandon the idea of a single memory architecture; you cannot pretend forever. New programming models will be needed that are simple enough to use and powerful enough to make the underlying hardware effective. This is a very difficult compromise, because if you show programmers the architecture actually used in the hardware, they will go crazy; it is too complicated and not portable. If you present an interface that is too simple, performance will be poor. So many very difficult trade-offs will have to be made to provide useful programming models applicable to really large multi-core processors. I'm not sure anyone other than a narrow specialist can program a 2000-core computer. And unless you are doing very specialized or scientific computation, cryptography, or something like that, it is still not at all clear how to do it right.


Another similar area is specialized architectures. Graphics accelerators have been around for a long time, but they have become a kind of classic example of how you can take a specialized kind of computation and run it on a dedicated chip. That adds its own problems: how you communicate with such a device, how you program it. I have recently been working on near-memory computing problems. You take a small processor and stick it onto a huge chunk of memory, so the memory runs at L1-cache speed, and then it talks to a device like a TPU — the processor is busy loading new tasks into the memory core. Designing data structures and communication protocols for this kind of thing is another interesting example. So specialized processors and hardware will keep improving for quite some time.


Alexey: What about non-volatile memory (NVM)?


Maurice: Oh, that is another great example! NVM will greatly change how we look at things such as data structures. In a sense, non-volatile memory promises to really speed everything up. But it will not make life easier, because most processors, caches, and registers are still volatile. When you restart after a crash, your state and the state of your memory will not be exactly what they were before the crash. I am very grateful to the people working on NVM — researchers will have something to do for a long time, figuring out the correctness conditions. Computations are correct if they can survive a crash in which the contents of caches and registers are lost, but main memory remains intact.
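Editor's note: the correctness condition Maurice describes can be illustrated with a toy model (ours, all names invented). "Volatile" state is lost on a crash, "persistent" state survives, and a redo log makes an update recoverable: the update counts as durable only once its log record has reached the persistent side.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of durable computation over NVM: `log` and `persistent`
// survive a crash, `cache` does not. Recovery replays the redo log,
// so the state observable after recover() matches the pre-crash state.
class NvmSketch {
    final Map<String, Integer> cache = new HashMap<>();   // volatile (lost on crash)
    final List<String[]> log = new ArrayList<>();         // persistent redo log
    final Map<String, Integer> persistent = new HashMap<>();

    void put(String key, int value) {
        cache.put(key, value);                            // fast volatile write
        log.add(new String[]{key, String.valueOf(value)}); // durably logged record
    }

    // A crash wipes everything volatile - here, only the cache.
    void crash() {
        cache.clear();
    }

    // Recovery replays the redo log into persistent state and rebuilds the cache.
    void recover() {
        for (String[] rec : log) persistent.put(rec[0], Integer.parseInt(rec[1]));
        cache.putAll(persistent);
    }

    Integer get(String key) { return cache.get(key); }
}
```

Real NVM data structures must additionally control when cache lines are flushed and fenced; this sketch only captures the "survive a crash where caches and registers are lost, but main memory is intact" criterion.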


Compilers vs. CPUs, RISC vs CISC, shared memory vs message passing


Vladimir: What do you think about the “compilers vs. processors” dilemma from the point of view of the instruction set? Let me explain for those not familiar with it: if we move to non-uniform memory or something similar, we could use a very simple instruction set and ask the compiler to generate complex code that exploits the newly available advantages. Or we could go the other way: implement complex instructions and ask the processor to reorder them and perform other manipulations. What do you think about that?


Maurice: I really have no answer to that question. This debate has been going on for four decades. There was a time when civil wars were fought between reduced and complex instruction sets. For a while the RISC people won, but then Intel rebuilt its engines so that a reduced instruction set is used internally while the full set is exported outward. Perhaps this is a topic where each new generation has to find its own compromises and make its own decisions. It is very difficult to predict which of these things will be better. So any prediction I make will be true for a certain time, then false for a while, and then true again.


Alexey: Is it common in the industry for some ideas to win over several decades and then lose over the next several? Are there other examples of such periodic shifts?


Maurice: In distributed computing, there are people who believe in shared memory and people who believe in message passing. Initially, in distributed computing, parallel computing meant message passing. Then someone discovered that programming with shared memory is much easier. The opposite side said that shared memory is too complicated, because it needs locks and the like, so it is worth moving to languages where nothing but message passing even exists. Then someone looked at what came out of that and said: “Wow, this message-passing implementation looks a lot like shared memory, because you create many, many of these little modules, they send messages to each other, and they all deadlock — let's build a shared-memory database instead!” All this repeats over and over, and it is impossible to say that either side is definitively right. One side will always dominate, because as soon as one of them almost wins, people invent, again and again, ways to improve the other.
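Editor's note: the duality Maurice describes is easy to demonstrate in a few lines (our sketch). The channel below exposes a pure message-passing interface — send and receive — yet underneath it is nothing but shared memory guarded by a monitor.

```java
// A one-slot message channel: the API is pure message passing,
// but the implementation is shared memory plus locking - one half
// of the shared-memory / message-passing duality.
class Channel<T> {
    private T slot;              // the shared memory both sides communicate through
    private boolean full = false;

    public synchronized void send(T message) throws InterruptedException {
        while (full) wait();     // wait until the receiver drains the slot
        slot = message;
        full = true;
        notifyAll();
    }

    public synchronized T receive() throws InterruptedException {
        while (!full) wait();    // wait until a message arrives
        T message = slot;
        full = false;
        notifyAll();
        return message;
    }
}
```

The reverse direction also holds: shared memory itself is implemented by cache-coherence messages on the bus, which is exactly why the debate keeps cycling.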


The art of writing fragile multi-threaded code


Alexey: This is very interesting. For example, when we write code, in whatever programming language, we usually create abstractions like memory cells that can be read and written. But in actual fact, at some physical level, this may look like sending a message over a hardware bus between different computers and other devices. It turns out that work happens on both levels of abstraction at once.


Maurice: It is absolutely true that shared memory is built on top of message passing — buses, caches, and so on. But it is difficult to write programs using message passing, so the hardware deliberately lies, pretending that you have some kind of uniform memory. That makes it easier for you to write simple, correct programs — until the performance starts to drop. Then you say: it looks like it is time to make friends with the cache. And then you start worrying about cache locality, and off it goes.




«The Art of Multiprocessor Programming»




Vitaliy: Billions! Just say “billions”!


Maurice: Yes, I should have said that. Now, in the era of startups and all that, I know how to write a business plan — how to lie a little about the size of the potential profit. But in those days it seemed naive, so I just said: “I don't know.” If you look at the publication history of the transactional memory paper, you will notice that a year later there were a few references to it, and then for about a decade nobody cited the article at all. Citations appeared around 2004, when true multi-core arrived. When people discovered that writing parallel code could make money, new research began. Ravi Rajwar wrote an article that in some sense introduced the mainstream to the concept of transactional memory. (Editor's note: the article has a second version, released in 2010 and freely available as a PDF.) Suddenly people understood exactly how all this could be used, how traditional lock-based algorithms could be accelerated. A good example of something that in the past seemed like a merely interesting academic problem. And yes, if you had asked me back then whether I thought all this would become important in the future, I would have said: of course, but it is unclear exactly when. Maybe in 50 years? In practice, it took just a decade. It is very nice when you do something and only ten years later people notice it.


Why you should conduct research in the field of distributed computing


Vitaliy: If we talk about new research, what would you advise readers: distributed computing or multi-core, and why?


Maurice: It is easy to get hold of a multi-core processor these days, but much harder to set up a real distributed system. I started working on them because I wanted to do something different from my PhD thesis. This is the advice I always give to beginners: do not write a sequel to your thesis — try to go in a new direction. Besides, multithreading is easy: I can experiment with my own fork running on a laptop without getting out of bed. But if I suddenly wanted to build a real distributed system, I would have to do a lot of work, recruit students, and so on. I am a lazy person and would rather work on multi-core. Experiments on multi-core systems are also easier to do than on distributed ones, because even in a silly distributed system there are too many factors that have to be controlled.


Vitaliy: What are you working on now in your blockchain research? Which articles should one pay attention to first?


Maurice: A very good article appeared recently that I wrote with my student Vikram Saraph, specifically for a talk at the Tokenomics conference in Paris three weeks ago. It is an article about practically useful distributed systems, in which we propose making Ethereum multithreaded. Currently smart contracts (code running on the blockchain) are executed sequentially. In an earlier article we described a way to use speculative transactions to speed the process up. We took many ideas from software transactional memory and said that if you make these ideas part of the Ethereum virtual machine, everything will run faster. But for that, the contracts must not have conflicts over data. We then assumed that in real life there really are no such conflicts — but we had no way to find out. Then it occurred to us that we had almost ten years of real contract history in our hands, so we downloaded the Ethereum blockchain and asked: what would have happened if these historical records had been executed in parallel? We found a significant speedup. In Ethereum's early days the speedup was very large, but today everything is somewhat more complicated, because there are more contracts and the likelihood of data conflicts requiring serialization has grown. But all this is experimental work with real historical data. The nice thing about the blockchain is that it remembers everything forever, so you can go back and study what would have happened if we had used different algorithms to run the code — how people in the past would have liked our new idea. Such research is much simpler and more pleasant to do, because there is a thing that watches everything and records everything. It is already something more like sociology than algorithm development.
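Editor's note: the scheme can be sketched very roughly (our simplification with invented names, not the algorithm from the paper). Each transaction runs speculatively against a snapshot while recording its read and write sets; a transaction whose reads overlap an earlier transaction's writes is considered conflicting and is re-executed on the committed state. In a real scheduler the speculative phase would run on multiple threads.

```java
import java.util.*;

// Rough sketch of speculative transaction execution with read/write-set
// conflict detection. All names are invented for illustration.
class SpeculativeScheduler {
    public interface Txn { void run(Store store); }

    public static class Store {
        final Map<String, Long> data;
        final Set<String> reads = new HashSet<>();
        final Set<String> writes = new HashSet<>();
        Store(Map<String, Long> data) { this.data = data; }
        public long get(String k) { reads.add(k); return data.getOrDefault(k, 0L); }
        public void put(String k, long v) { writes.add(k); data.put(k, v); }
    }

    public static Map<String, Long> execute(Map<String, Long> initial, List<Txn> txns) {
        // Phase 1: speculative runs, each against its own copy of the snapshot
        // (in a real system these would execute in parallel).
        List<Store> speculative = new ArrayList<>();
        for (Txn t : txns) {
            Store s = new Store(new HashMap<>(initial));
            t.run(s);
            speculative.add(s);
        }
        // Phase 2: commit in order; re-execute any transaction whose reads
        // were invalidated by an earlier transaction's committed writes.
        Map<String, Long> state = new HashMap<>(initial);
        Set<String> committedWrites = new HashSet<>();
        for (int i = 0; i < txns.size(); i++) {
            Store s = speculative.get(i);
            if (!Collections.disjoint(s.reads, committedWrites)) {
                s = new Store(state);      // conflict: rerun on current state
                txns.get(i).run(s);
            } else {
                for (String k : s.writes) state.put(k, s.data.get(k));
            }
            committedWrites.addAll(s.writes);
        }
        return state;
    }
}
```

When the historical transactions rarely touch the same keys, almost everything commits straight from the speculative phase — which is exactly the property the replay experiment measured.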


Has the development of algorithms stopped and how to live?


Vitaliy: Time for the last theoretical question! Do you feel that progress in concurrent data structures is shrinking every year? Do you think we have reached a plateau in our understanding of data structures, or will there be major improvements? Maybe there are some clever ideas that can completely change everything?


Maurice: We may have reached a plateau in data structures for traditional architectures. But data structures for new architectures are still a very promising area. If you want to create data structures for, say, hardware accelerators, then data structures for a GPU are very different from data structures for a CPU. When you develop data structures for blockchains, you need to hash pieces of data and then put them into something like a Merkle tree to prevent tampering. There has recently been a surge of activity in this area, and many people are doing very good work. But I think what will happen is that new architectures and new applications will lead to new data structures. Older applications and traditional architectures — perhaps there is not much room for research there. But if you step off the beaten path and look over the edge, you will see crazy things that the mainstream does not take seriously — and that is where all the exciting things really happen.
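Editor's note: the Merkle-tree construction Maurice mentions fits in a few lines (a sketch of ours): leaves are hashes of the data blocks, each inner node hashes the concatenation of its two children, and the root therefore commits to every block — changing any block changes the root.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Minimal Merkle tree: the root hash commits to every data block,
// so tampering with any block (or reordering them) changes the root.
class MerkleTree {
    static byte[] sha256(byte[] input) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(input);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    static byte[] concat(byte[] a, byte[] b) {
        byte[] out = new byte[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }

    public static String root(List<String> blocks) {
        List<byte[]> level = new ArrayList<>();
        for (String block : blocks) {
            level.add(sha256(block.getBytes(StandardCharsets.UTF_8)));
        }
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                // Duplicate the last hash when the level has an odd size.
                byte[] right = (i + 1 < level.size()) ? level.get(i + 1) : level.get(i);
                next.add(sha256(concat(level.get(i), right)));
            }
            level = next;
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : level.get(0)) hex.append(String.format("%02x", b & 0xff));
        return hex.toString();
    }
}
```

Beyond tamper detection, the tree also allows proving that one block belongs to the set by revealing only the logarithmic path of sibling hashes up to the root.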


Vitaly: So, to become a really famous researcher, you have to invent your own architecture :-)


Maurice: You can "steal" someone else's new architecture - that seems much easier!


Working at Brown University


Vitaly: Could you tell us more about Brown University, where you work? Not much is known about it in the context of information technology - less than about MIT, for example.


Maurice: Brown University is one of the oldest universities in the United States; I think only Harvard is a bit older. Brown is part of the so-called Ivy League, a group of eight of the oldest universities: Harvard, Brown, Cornell, Yale, Columbia, Dartmouth, Penn, and Princeton. It is an old, small, and slightly aristocratic university, with a focus on liberal-arts education. It is not trying to be like MIT; MIT is very specialized and technical. Brown is a great place to study Russian literature or classical Greek, and of course Computer Science. The emphasis is on a well-rounded education. Most of our students go to Facebook, Apple, or Google, so I think they have no problem finding jobs in industry. I came to Brown because before that I worked at Digital Equipment Corporation in Boston. It was a company that invented many interesting things but denied the importance of personal computers - a company with a difficult fate, whose founders had once been young revolutionaries; they learned nothing and forgot nothing, and so over about a decade they turned from revolutionaries into reactionaries. They loved to joke that the place for personal computers was in the garage - an abandoned garage, of course. It is quite obvious that they were destroyed by more flexible companies. When it became clear that the company was in trouble, I called a friend at Brown, which is about an hour from Boston. I did not want to leave Boston at the time, and there were not many openings at other universities - it was a time when there were far fewer vacancies in Computer Science than there are now. Brown had an opening; I did not have to move out of my house or relocate my family, and I really like living in Boston! So I decided to go to Brown. I like it. The students are wonderful, so I never even tried to go anywhere else.
On sabbaticals I worked at Microsoft for a year, spent a year at the Technion in Haifa, and now I will be at Algorand. I have colleagues everywhere, so the physical location of our classrooms is not so important. The most important thing is the students, and they are the best here. I have never tried to go anywhere else, because I am quite happy here.


Nevertheless, despite Brown's fame in the United States, it is surprisingly little known abroad. As you can see, I am now doing everything I can to fix that.


The difference between university and corporate research


Vitaly: Okay, the next question is about Digital Equipment. You were a researcher there. What is the difference between working in the R&D department of a large company and working at a university? What are the advantages and disadvantages?


Maurice: Over the years I have worked at Microsoft and worked closely with people at Sun Microsystems, Oracle, Facebook, and now Algorand. Based on that, I want to say that it is possible to do first-class research both in companies and at universities. The important difference is that in a company you work with colleagues. If I suddenly have an idea for a project that does not yet exist, I have to convince my peers that it is a good idea. If I am at Brown, I can tell my students: let's work on anti-gravity! They will either go to someone else or take up the project. Yes, I will need to find funding, write a grant application, and so on. But in any case there are always many students, and you can make decisions unilaterally. At a university, though, you will most likely not be working with people at your own level. In industrial research you first have to convince everyone that your project is worth taking on; I cannot order anyone to do anything. Both ways of working are valuable: if you are working on something truly crazy and your colleagues are hard to convince, it is easier to convince graduate students - especially if you pay them. If you are working on something that requires a lot of experience and deep expertise, you need colleagues who can say, "no, it happens that I know this area, and your idea is bad; nothing will come of it." That is very useful for not wasting time. Also, while in industrial labs you spend a lot of time writing reports, at a university you spend that time trying to find money. If I want my students to go somewhere, I have to find the money for it elsewhere. And the more senior you are at a university, the more time you have to spend raising money. So now you know what my real job is: professional beggar! Like one of those monks who walk around with a donation bowl. In general, these two activities complement each other.
That is why I try to live and stand firmly on my feet in both worlds.


Vitaly: It seems that convincing a company is harder than convincing other scientists.


Maurice: Harder, much harder. Moreover, it varies from field to field: some do full-scale research, while others focus narrowly on their own topic. If I went to Microsoft or Facebook and said, "let's do anti-gravity," they would hardly appreciate it. But if I said exactly the same thing to my graduate students, they would most likely get to work instantly - though then I would have a problem, since I would need to find the money for it. But as long as you want to do something that matches the company's goals, a company can be a very good place to do research.


Hydra and SPTDC


Vitaly: My questions are coming to an end, so let's talk a little about your upcoming trip to Russia.


Maurice: Yes, I look forward to returning to St. Petersburg.


Alexey: It is a great honor for me that you are with us this year. This is your second time in St. Petersburg, right?


Maurice: Actually, the third!


Alexey: I see, but at SPTDC it is exactly the second. Last time the school was called SPTCC; now we have changed one letter (C to D, Concurrent to Distributed) to emphasize that this year there are more topics related to distributed computing. Can you say a few words about your talks at the school and at the Hydra conference?


Maurice: At the school I want to talk about the basics of how blockchains work and what you can do with them. I want to show that blockchains are very similar to the multithreaded programming we all know, but with their own nuances, and it is important to understand these differences. If you make a mistake in a regular web application, it is merely unpleasant; if you write buggy code in a financial application, someone will definitely steal all your money. These are completely different levels of responsibility and consequences. I will talk a bit about proof-of-work, smart contracts, and transactions between different blockchains.
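For readers unfamiliar with the proof-of-work Maurice mentions, the core mechanism fits in a few lines: search for a nonce such that the hash of the block data plus the nonce meets a difficulty target. The sketch below is only an illustration - real blockchains use the same principle with far higher difficulty and binary targets, and the function names here are hypothetical.

```python
import hashlib
from itertools import count

def mine(block_data: bytes, difficulty: int = 4) -> int:
    """Brute-force a nonce whose hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(block_data: bytes, nonce: int, difficulty: int = 4) -> bool:
    """Checking a proof of work takes one hash, while finding it takes many."""
    digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the whole point: `mine` performs thousands of hashes on average, while `verify` performs one, so anyone can cheaply check that expensive work was done.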


Other speakers who also have something to say about blockchains will be presenting alongside me, and we agreed to coordinate with each other so that our talks fit together well. As for the conference talk, I want to explain to a broad engineering audience why you should not believe everything you hear about blockchains, why blockchains are a great area, how they fit in with other well-known ideas, and why we should look ahead to the future.


Alexey: In addition, I want to say that this will not be in the format of a meetup or user group, as it was two years ago. We decided to hold a small conference alongside the school. The reason is that after talking with Peter Kuznetsov, we realized the school is limited to just a hundred, maybe 120 people. At the same time, there are many engineers who want to talk with you, attend the talks, and are generally interested in the topic. That is why we created a new conference called Hydra. By the way, any ideas why it is called Hydra?


Maurice: Because it will have seven speakers? And if you cut off their heads, new speakers will grow in their place?


Alexey: A great way to grow new speakers! But actually, there is a story behind it. Remember the legend of Odysseus, who had to sail between Scylla and Charybdis? Hydra is something like Charybdis. The story is that I once spoke at a conference and talked about multithreading. The conference had only two tracks, and at the start of my talk I told the audience that they now had a choice between Scylla and Charybdis. Charybdis became my totem animal, because Charybdis has many heads, and my topic is multithreading. That is how conference names are born.


In any case, we have run out of questions and time. So thank you, friends, for the excellent interview, and see you at the SPTDC school and the Hydra 2019 conference!


You can continue the conversation with Maurice at the Hydra 2019 conference, which will be held on July 11-12, 2019 in St. Petersburg. He will give the talk "Blockchains and the future of distributed computing". Tickets can be purchased on the official website.


Source: https://habr.com/ru/post/458936/

