The other day marked 10 years since I started my strangest job.
It was 2005. My interest in developing a content management system in Java for the company that had recently bought our startup was steadily evaporating, while my real passion was compilers and language-infrastructure tools (mainly SBCL). Then I noticed an open position in exactly that area, which was quite a rare thing. I went through the interview quickly — so quickly that I didn't ask the right questions and ignored a few alarm bells.
I was expecting an exciting journey into the world of retrocomputing.
The weirdness begins
So: this was the former internal-tools department of a very large company, let's call it X. For some reason X had spun the department off and sold it to a medium-sized consulting company, let's call it Y. So I was going to work at Y. They were looking for people who understood compilers to take over the maintenance of their own C toolchain (not just the compiler, but also the linker, assembler, and so on). I misunderstood them at first: I assumed they had inherited this toolchain from X. But that was not the case. The compiler came from yet another big company, let's call it Z, which had dropped support for it. X then bought the sources from Z for a lot of money and handed them to Y so that Y would finally do something with them. In fact it was not even one toolchain, but two. Hurray, more compilers! :)
I started working in September, but some schedules slipped and we had no work for another month or two. That gave me plenty of time to acclimatize. Which was just as well, because from the moment I stepped into the building I felt I had somehow slipped into a parallel universe where it was still the early 80s. You know, the kind of place where you go looking for some old documentation and end up finding it inside an ancient home-grown version control system built on top of RCS.
On my very first day I discovered that company X was probably running the largest VAX cluster in the world to build its product. Yes: dozens of VAXes running VMS, working as a compile farm producing x86 code. Why did this company need its own C compiler at all? Well, they had their own incredible internal programming language, which you can picture as an imperative Erlang with Pascal syntax. Programs in this language were compiled to C, and building that C code required their own C compiler. I don't know how much code was written in this incredible internal language, but by my estimate it was at least 10 million lines.
Why did this process need its own C compiler? Compiling the C code produced binaries that could only run on (of course!) their own incredible internal operating system, written in the late 80s. This operating system used the 386's segment registers to emulate multithreading and message passing. To support all this, they needed a compiler that handled segment registers far more "cleverly" than ordinary compilers do. You may ask how sensible it was to rely on segment registers in 2005, when working with them was getting slower with every new processor generation, and x86-64 dropped most of their support altogether. Don't you worry about that! The company had a parallel project to port all of this code to Solaris.
After a few months of sitting on my hands and studying all this mysterious infrastructure, a parcel arrived at our compiler department. We were expecting the source code. But wait — why does the shipping note say it takes two people to carry? Did they print it out?!
Oh, if only it had been that simple! It was an entire server — the one we were supposed to use to build the compiler once we received its source code. It was a machine with an 80286 processor running Intel Xenix 286 3.5. The only way to exchange data with this astounding artifact was a serial port running at 9600 bps. Fortunately, the previous owners had been kind enough to install Kermit on its huge 40 MB hard drive, so at least I didn't have to mess with floppies. God, what an amazing piece of ancient history that machine and its operating system were!
You may ask whether it was wise to use a 15-to-20-year-old machine as the only way to build an application — sooner or later it would inevitably die. I even raised the issue and offered to image the hard disk and try running it in a virtual machine. The idea was rejected: the machine was old and fragile, and we couldn't risk taking it apart to image the disk — if anything broke, it would be very hard to replace. When the previous owners had gone hunting for this hardware in antique shops, they had found only two working units in the whole country.
Now is a good time to mention that I am, in fact, an old hand at this sort of thing. I cut my teeth on a SunOS 4 server, on which I later administered accounts for several hundred users. My personal mail in 2005 still went over UUCP. My most recent weekend (now, in 2015) went into trying to build a partially lost Lisp interpreter whose sources were written before I was born. In short, you get the idea: I have respect for old computers and programs.
But even by my standards, the level of computer archaeology in this compiler project was extreme. And, as it turned out, I had not yet reached the bottom of the rabbit hole.
After a few more weeks, the source code finally arrived. It was written in PL/M. (Wait, is that even a general-purpose programming language? No, you're kidding, right?) The last modification dates on the sources were from the 80s. The build instructions had been typed on a typewriter. Not printed — typed. Some components didn't build and required editing the makefiles. The hard disk was too small to build all the components at once, so the whole build process looked like this:
- Upload a component's sources to the server over the serial port (remember: 9600 bps)
- Unpack the archive
- Build it
- Download the result
- Delete everything
- Repeat for the five remaining components
Transferring the code for each component alone took an hour. And yet, after much head-butting, I managed to build it all — and the resulting binaries matched, bit for bit, the ones built 20 years earlier and used ever since.
It was terribly hard to understand why we were doing any of this. There was no documentation for the code except a couple of sheets of build instructions — we had to reverse-engineer almost every step. There had been no training or handover sessions when the code was transferred. And it was hard to imagine that, 20 years on, anyone who had worked on this code in the 80s was still at company Z. Nobody on our team knew PL/M. Going from changing a line of code to building and running a binary took at least an hour. There was no debugger at all. Want to add a single debug printf (or whatever it was called in PL/M)? Add it and wait an hour for the build. This kind of development was pure pain.
A month later, I voiced my concerns to management and was told not to worry:
- Actually, we don't even plan to make changes to this compiler; relatively little code is built with it. There's another one coming soon, more modern, and we'll concentrate on that.
- What? I just spent a month gnawing my way through PL/M and Xenix/286 over a 9600 bps Kermit connection, and now you're telling me none of it will be needed?
- Yes. We just wanted to make sure we got everything we paid for.
I didn't know whether to regret the time I had wasted more, or to be glad that I no longer had to wade through that swamp.
The sad part
Here ends the fun retrocomputing section. Now we come to the sad part: dysfunctional corporate politics. If you started reading this article just for laughs, you can skip to the end.
We received the source code of the second compiler. It, too, was no spring chicken. It was built with Visual Studio 6. No documentation, no tests. The absence of tests was explained by a third company's legal claims on their code; the absence of documentation was not explained at all.
This compiler was easy to build. But what could we do with it? I read through its code, tried to understand what each file did, ran some experiments, and wrote up notes. Then we convened a big meeting with lead engineers from every department of company X where this compiler was used. The goal was to find out what we should improve. The meeting was incredibly demotivating. Half the participants felt it was better not to touch anything, lest, God forbid, something break. Some were bolder, but couldn't name any important functionality they were missing. Then someone took pity on me and said that the current compiler wasn't very good at allocating segment registers, and that this hurt performance. Maybe we could improve that?
After the meeting, one of the managers told me that proposing projects that could help our customer was actually our job. And not minor improvements, but clearly formulated, planned tasks whose payoff could be measured in dollars — enough to cover all the operational costs of development and bring in some profit besides. It is impossible not to note the utter madness of developing a compiler this way, but that is what was required of us. How anyone at company X expected anything good to come of all this is a mystery to me.
But never mind all that. Our original project for taking over support of the two compilers had been well funded from the start, and its goals were formulated so vaguely that it was easy for us to argue that improvements were an essential part of the project's main tasks. And we had at least that one improvement that at least someone had expressed interest in.
So I implemented an extra optimization pass for segment registers, and even got an approving code review from the compiler's authors when they came over from company Z for a training session. It seemed to work — but, as I said above, we had no test suite, and creating one for a compiler is a lot of work (Great! We can propose that as a whole new project!).
We couldn't test our improvements on real code, because that required the very "incredible proprietary operating system" of company X. The only way to get performance measurements and evidence of correctness was to schedule time in company X's lab. We spent weeks and weeks trying to book lab slots that also matched the availability of the people from X we needed for the tests. Which is understandable: for them it made, by and large, no difference whether a new version of our compiler shipped or not — they had their own day jobs, unrelated to this. But for us it mattered: we needed to show that our changes were useful and correct. We really had improved performance, but without concrete numbers there was no way to prove it.
At that point it finally dawned on me that I had never had any real work all along. All these special compilers would become unnecessary the moment company X migrated off its incredible proprietary operating system, whenever that happened. Oh, of course, they would still have to be "supported" in case a bug turned up in some 20-year-old program and a fix was needed. But investing in new functionality for a system with such a limited remaining lifetime was catastrophically unprofitable. The company had spent a seven-figure sum (in dollars) on buying and supporting this project — and did it ever do anything useful with the code? No.
All of it was supposed to be maintained forever, even though in five years everything would supposedly be replaced by completely new systems. But nothing would ever be thrown away, no. It would all pile up and pile up, demanding support forever. And to do this crazy job you would keep perfectly good engineers capable of building compilers, instead of letting them work at the cutting edge in high-tech companies.
And where would it all end? I pictured myself in an interview some five years later, in 2010, trying to explain to a potential employer how my knowledge of Xenix, PL/M and VMS could help develop his product. There may come a time in an engineer's life when new technologies stop appealing to him — but letting that happen to you in your twenties is far too early :)
The ending
I quit without even lining up a new job first — I just hoped something would turn up. Vindicating my intuition, after my resignation letter but before my actual departure, ITA Software wrote to the SBCL mailing list that it wanted to hire someone to work on improving SBCL — which looked exactly like my dream job at the time. It all came together very nicely.
That's all. In the comments, feel free to share your own experience developing new software on something like an IBM 1401 right now, in 2015 — or anything in that spirit ;-)