
Time capsule

This text was written six years ago for a print computer magazine aimed at high-school students and other such young pioneers, and it keeps a deliberately slapdash style of presentation. The magazine and all of its sites have since sunk into oblivion, so the text had to be recovered from archive.org. Please take it as a greeting to the Komsomol members of 2020 from 1970. Some of the technologies described are now actually arriving; the progress of Android and the release of Windows Phone 8 hinted that it was time to chip this message out of its concrete. At the time of writing, the author was a student in a non-IT field who ran Linux at home as a hobby. No alcohol or drugs were involved in writing the article — not then and not now. The text has been lightly edited to meet Habr's requirements. Since the post made it through the Sandbox, I am publishing it to the hubs. I hope I picked the right theme and there will be no cruel sanctions for being off-topic.

Arguments on the topic "What would happen if there were no Windows?" come up regularly in the computer press. The trend did not bypass (the original publication) or My Computer (the magazine this piece was written for — author's note). Usually the discussion covers picking suitable software and/or licensing issues, based on the author's personal experience. Here I have tried to fill a different gap and imagine what would happen if hardware developers were not burdened with ensuring Windows compatibility at all. The article is the author's personal ravings and pretends to be absolutely nothing — see the title.

So, fast forward to an alternative history. The time is ours, i.e. 2006. The hardware parameters are much the same: desktop processors run at a few gigahertz, pocket ones at 100-600 MHz, memory goes up to a few hundred megabytes, hard disks to a hundred gigabytes or so. But Windows runs on only a third of all the computers in the world; another third belongs to Mac OS X, and the remaining third to Linux. Consequently, hardware developers ensure compatibility with the operating systems, and not the other way around. The other two systems are partially or fully open, so they run on anything.

Sketch one. The desktop computer.
What do we see when we switch the computer on? No, not the Power button — that comes before. Right: the BIOS splash screen. The BIOS is the most ancient and unnecessary piece of the modern PC. It initializes most of the hardware, loads the bootloader, hands control to it, and provides a kind of universal hardware driver layer. DOS programmers have already guessed that we are talking about interrupts. Those interrupts are needed only by the bootloader and are never used again once the OS kernel is up. On top of that, they are one more loophole for viruses. In our alternative world, the BIOS has become a smart firmware, standardized and not very dependent on the specific board. Apple owners will recognize that I am clumsily describing OpenFirmware. Since flash memory is cheap, this firmware occupies 32 MB and contains a universal bootloader along the lines of GRUB. The rest holds a mini-Linux from which you can repair a crashed operating system, push the recovered data out over the network, or restore from a backup. The firmware is distributed as open source, except for the modules specific to a given board.
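To make the rescue scenario concrete, here is a minimal sketch of what a session in such a firmware mini-Linux might look like, assuming busybox-style mount and rsync are baked into the image; the device names and the rescue-box host are made up for illustration.

```python
# Hypothetical rescue helper run from the firmware's mini-Linux (as root).
# `mount` and `rsync` are real tools; device names and hosts are invented.
import subprocess

def run(cmd):
    """Echo a command, run it, and fail loudly if it breaks."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Mount the crashed system's root partition read-only.
run(["mount", "-o", "ro", "/dev/sda2", "/mnt/broken"])

# 2. Push the user's data to a machine on the LAN.
run(["rsync", "-az", "/mnt/broken/home/", "rescue-box:/backups/laptop-home/"])

# 3. Or roll the root filesystem back from a known-good backup
#    (remount read-write first).
run(["mount", "-o", "remount,rw", "/mnt/broken"])
run(["rsync", "-az", "--delete", "rescue-box:/backups/laptop-root/", "/mnt/broken/"])
```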
The problem of a failing hard drive is also successfully solved. LVM (Logical Volume Manager) is in universal use, allowing partitions to be resized on the go without even interrupting reads and writes to them. Naturally, resizable partitions are supported by the bootloader and by all the operating systems. This LVM converts easily into software RAID, which a hardware RAID controller can then pick up, with the corresponding gain in speed. Combined with hot-plug support for any drive, this permits any manipulation of the disks whatsoever. It becomes possible to make backups without interrupting work: just attach a second disk as a RAID-1 mirror and simply stop writing to it at the right moment. If such a setup suddenly runs out of space, you can tear the RAID down and extend the data volume onto the second disk. By the way, hot-plugging is also available for ordinary IDE drives — through a combined 84-wire cable (a mix of an IDE ribbon cable and a power cable) and a simple passive adapter.
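On today's Linux this trick maps onto real LVM commands. A minimal sketch (run as root), assuming a volume group vg0 with a logical volume data and a freshly hot-plugged disk /dev/sdb; all the names are made up for illustration:

```python
# Sketch of the live-backup trick using real LVM commands via subprocess.
import subprocess

def lvm(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Bring the second disk into the volume group.
lvm("pvcreate", "/dev/sdb")
lvm("vgextend", "vg0", "/dev/sdb")

# Turn the plain volume into a two-way mirror (software RAID-1).
lvm("lvconvert", "--type", "raid1", "-m", "1", "vg0/data")

# ...wait for the mirror to sync, then split one leg off as a
# point-in-time backup ("stop writing to it at the right moment").
lvm("lvconvert", "--splitmirrors", "1", "--name", "data_backup", "vg0/data")

# If space runs out instead, grow the volume onto the second disk,
# resizing the filesystem on the fly (-r).
lvm("lvextend", "-r", "-L", "+100G", "/dev/vg0/data")
```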

Video. There are lots of changes here. The first thing that strikes you is the inscription on the card's box: "Memory: 256 MB installed, up to 2 GB can be used." It's very simple: memory can be divided flexibly between video and system. If a super-duper 3D scene is being rendered, extra memory can be thrown to the video card over PCI-E. Yes, such memory is slower than the onboard kind, but there is still a net gain in speed. Conversely, during heavy computation you can borrow some video memory, leaving 4 MB for the framebuffer, and use the rest as system memory. On the same principle, video processors are put to work on all sorts of calculations. Why let what is essentially an idle processor warm the air while you run fairly resource-intensive applications (say, video encoding or complex audio processing)? You can point the heap of transistors in the video chip at the heavy math and get a good speed boost. Or imagine, say, an HDTV decoder running almost entirely on the video processor. I would also predict the appearance of dual-processor boards and socketed video chips: you buy a GeForce 6200, save up some money (pass the exam session / return your empties / come into an inheritance), and swap the chip for a GeForce 6800.
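The offloading idea, in miniature: a conceptual Python sketch of dispatching heavy work to the video chip when one is present and falling back to the CPU otherwise. Here encode_on_gpu and gpu_available are hypothetical stand-ins, not a real API; only the NumPy CPU path actually computes.

```python
import numpy as np

def gpu_available() -> bool:
    # Hypothetical capability probe; a real system would ask the driver.
    return False

def encode_on_gpu(frames: np.ndarray) -> np.ndarray:
    # Hypothetical: ship the frames to the video chip's stream processors.
    raise NotImplementedError("illustrative stub")

def encode_on_cpu(frames: np.ndarray) -> np.ndarray:
    # Stand-in for a heavy per-frame transform done on the CPU.
    return (frames * 0.8 + 16).astype(np.uint8)

def encode(frames: np.ndarray) -> np.ndarray:
    """Run the heavy transform on the GPU when present, else on the CPU."""
    return encode_on_gpu(frames) if gpu_available() else encode_on_cpu(frames)

frames = np.zeros((25, 240, 320), dtype=np.uint8)  # one second of 320x240 video
print(encode(frames).shape)
```

In our world this is exactly the niche that CUDA and OpenCL later filled.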

Sound. I am no specialist in this area, so I apologize in advance for any mistakes. The changes here are similar to what happened in the video camp. A single, universally accepted library appeared — OpenAL — which, like its graphical counterpart OpenGL, lets you abstract away compatibility problems with any particular sound card. As a result, all sound operations run on a specialized (and very fast) processor; if the card is old and does not support every feature of the new standard in hardware, the missing ones are executed on the central processor (a sketch of that fallback is below). On that note, let's round off and move to the next part.
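A minimal sketch of the hardware/software fallback idea. HardwareMixer and SoftwareMixer are hypothetical classes standing in for the driver's DSP path and the CPU path; they are not OpenAL's actual API, just the shape of the abstraction.

```python
class SoftwareMixer:
    """CPU fallback: mix and position sources in software."""
    def play(self, source: int, position: tuple) -> None:
        print(f"CPU-mixing source {source} at {position}")

class HardwareMixer(SoftwareMixer):
    """Offload mixing and 3D positioning to the sound card's DSP."""
    def play(self, source: int, position: tuple) -> None:
        print(f"DSP-mixing source {source} at {position}")

def open_mixer(card_has_dsp: bool) -> SoftwareMixer:
    # The application only ever sees the abstract `play` interface;
    # whether the work lands on the DSP or the CPU is the library's problem.
    return HardwareMixer() if card_has_dsp else SoftwareMixer()

mixer = open_mixer(card_has_dsp=False)  # old card: silently fall back
mixer.play(source=1, position=(0.0, 0.0, -1.0))
```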

Sketch two. The laptop.
Here an astute reader should stop and think: "What is so fundamentally different about a laptop compared to a desktop computer?" In the world with Windows — nothing, apart from flexible power management and the compactness of the device. Compactness here includes building in all sorts of normally discrete devices, such as card readers and Bluetooth adapters. Given that, it is hard to come up with anything original. Unless you deviate from the usual architecture.

But we are not burdened with compatibility, so we can imagine a very light and low-power laptop built on four ARM chips. Instead of one fast and voracious Pentium / Athlon / G4, it carries four extremely economical and not particularly high-frequency ARM processors — for example, Intel XScale at 200-300 MHz or (even better) Texas Instruments OMAP. Better because of the coprocessor on the chip, which shows hurricane performance on multimedia tasks.

An example from life: the processor is an OMAP 311 at 126 MHz, the test machine a Palm Tungsten E, the player Kinoma. A video encoded in DivX at 320x240, at a bitrate of about 300 kbps and with no sound, plays at 250 fps in benchmark mode. But I digress. The point is that four such processors should cover almost any task a mobile user runs into. For lovers of heavy games out in the field there remain ordinary laptops with the standard battery life. The docking station problem is solved gracefully too. It is entirely natural that the user wants to use the laptop at home, but with all the comforts: a full-size keyboard, a mouse, a monitor, a fast and above all large disk, and a faster processor while we're at it. The docking stations themselves have changed their purpose, but more on that a bit further down.

Now, on coming home, the user simply connects the laptop and the desktop PC with a network cable (for aesthetes there is WiFi) and gets the laptop's desktop on the big monitor. How is this implemented? Elementary. Explained on the fingers: the desktop computer keeps a copy of the laptop's hard drive, synchronized with it on the fly. The same software is installed on that copy as on the laptop, only compiled for x86 (remember that the laptop and the desktop PC run on different architectures). The rest, I think, is obvious.
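The "copy of the drive, synced on the fly" part needs no exotic technology even in our world: rsync over the home network does the job. A minimal sketch; the host name and paths are made up for illustration, and a real implementation would watch filesystem events instead of polling.

```python
# Keep the desktop's copy of the laptop's home directory in sync with rsync.
import subprocess
import time

LAPTOP = "laptop:/home/user/"          # source: the machine from your bag
DESKTOP_COPY = "/mirror/laptop-home/"  # the desktop's on-the-fly copy

def sync_once() -> None:
    # -a: preserve metadata, -z: compress over the wire,
    # --delete: keep the copy an exact mirror.
    subprocess.run(["rsync", "-az", "--delete", LAPTOP, DESKTOP_COPY],
                   check=True)

# Poll for as long as the laptop is plugged into the home network.
while True:
    sync_once()
    time.sleep(60)
```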

This design is a success in lightweight laptops. Laptops of medium size and performance are built to the classic scheme, while for the most powerful machines the advanced docking station has been revived. Actually, it is hard to even call such a thing a dock: in effect, the dock is a motherboard with its own processor, hard drive, video card, and all the necessary ports and expansion slots, connected over a HyperTransport link. And we watch a laptop turn, with a flick of the wrist, into a rather powerful dual-processor workstation. It is worth noting that this is one of the few technologies here still supported by Windows. PDA manufacturers liked the "smart dock" idea too, and thought: "What if we try to replace the laptop with a PDA?" They tried. It worked. They liked it. Hence the third sketch...

Sketch three. The PDA.
The developers dug up (on some dusty mezzanine shelf) the idea of the expandable PDA, blew off the dust, adapted it to modern technology, and put it on the assembly line. Most machines ship with a CardBus slot, through which you can attach all sorts of expansion modules, up to and including disk drives and wired network cards. Support for a variety of network interfaces makes it possible to share resources between the desktop system and the PDA.

A simple example. For some reason you cannot watch a freshly bought DVD on the big screen, and the pocket machine has to be pressed into service. The user inserts the disk into the desktop's drive and starts the corresponding program. The program reads the disk, adjusts the picture to the PDA's screen on the fly, squeezes the video into DivX and the sound into Ogg Vorbis (so the multichannel audio survives), and throws the stream onto the network (a sketch of this is below). As a result, it is the desktop's processor that does the real work; the mobile processor is lightly loaded and therefore draws little power, and most of the playback math lands on the PDA's video processor — yes, hardware OpenGL made it here too. We get almost complete software compatibility with desktop systems, and the job of game developers gets noticeably easier: Quake runs fine both on a beefy gaming machine and on a pocket PC.
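In our world the nearest equivalent of that "corresponding program" is ffmpeg. A sketch under assumptions: the input is a VOB already ripped from the disk (decryption is out of scope), the PDA's address is invented, and NUT is used as the container simply because it streams anything:

```python
# Transcode a ripped DVD title for the PDA and throw it onto the network.
import subprocess

PDA_ADDR = "udp://192.168.1.50:1234"  # made-up address of the handheld

subprocess.run(
    [
        "ffmpeg",
        "-i", "movie.vob",        # title ripped from the disk in the drive
        "-vf", "scale=320:240",   # fit the picture to the PDA's screen
        "-c:v", "mpeg4",          # DivX-compatible MPEG-4 video
        "-b:v", "300k",
        "-c:a", "libvorbis",      # Ogg Vorbis audio
        "-f", "nut",              # ffmpeg's own stream-anything container
        PDA_ADDR,
    ],
    check=True,
)
```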

A dock was also developed for the PDA, to which you can connect an external monitor, mouse, and keyboard. But the developers went even further. There is no longer a huge gap between laptops and PDAs, because machines of the Handheld PC class have risen from the dead (essentially small subnotebooks built from PDA components), and the user can choose a machine that suits his pocket — in the most literal sense of the word. Where exactly to draw the line between laptops and pocket PCs is unclear, but I can offer one criterion for the PDA: the absence of a built-in disk drive (for external ones, see above). This guarantees a very short time to get the computer ready for use — in practice it is determined by how fast the device comes out of the pocket.

Interestingly, Windows for PDAs is nearly extinct here. Or rather, it is not even interesting — it is entirely natural. Really, what chance of survival does a system have that needs a 624 MHz processor just to draw its own interface at 640x480, and already stutters at 520 MHz? The reader may object that this is normal. Here is an argument that has already worked on many Windows lovers. The desktop operating system closest to it in design and architecture is Windows 2000. Strip it of all applications and disable every network service, antivirus, firewall, and other background task. Leave 128 MB of memory (typical for VGA handhelds) and set the screen to 640x480. Well, on what processor will the interface start to stutter while being drawn? And another example from personal experience. I had been eyeing PDAs for a long time and at that point still had doubts about the platform. An acquaintance bought an iPaq H1940 and decided to show me what the hardware could do. I chose to begin my acquaintance with the device with a decisive tap on the Start button. What came of it reminded me of trying to run Win98 on a 486: the menu was drawn in visible jerks. The owner's comment simply finished me off: it turned out the system had throttled the processor because the battery was running low. I strongly doubt the frequency had dropped from the usual 266 MHz to anywhere below 100 MHz. Naturally, after a demonstration of such amazing speed and usability, the purchase of a Pocket PC was no longer even discussed.

And now for the sad part. Think all of this awaits us in the bright future? Nope. The vast majority of these technologies have long since been implemented in hardware — all except the ARM notebook; of that, at least, I have not heard. In order, then. LinuxBIOS is a perfectly successful project, backed by an outfit no less than VIA: EPIA-series motherboards ship with a kernel stitched into the BIOS. "Smart firmware" exists on practically every platform except the PC, and the combination of the two may well be what Apple machines use. LVM is used effectively and has been written up in My Computer. HotSwap for ordinary IDE hard drives is supported; the description is out there on the Internet, you just have to search properly.

Redistributing memory in favor of the system has been supported since the days of the Intel 440-something chipset. What is the connection? Very simple. Some chipsets of that series cannot cache memory above a certain, rather small, limit, and using such memory kills performance outright. So programmers wrote a kernel module — a driver, simply put — that used this memory as a virtual disk, usually holding the swap file there or using it for other needs. This, by the way, is an example of software developers patching over the bugs and flaws of hardware developers. On this and the following topic, see the link (the first references to CUDA — author's note). About sound I cannot say anything; everything stated there is personal speculation.

Onward. The OMAP performance I measured myself; Palm owners can check it. For the advanced synchronization you need only a network cable or a WiFi access point — the software already exists. The sly dual-processor notebooks were mass-produced in 1992-1997 and were called the Apple PowerBook Duo; there is plenty of information on them, at the very least on Wikipedia.

The PDA expandable through a PCMCIA slot (well, there was no CardBus back then) — the iPaq 3000 and 5000 series. The Handheld PC is currently available in exactly one model, the NEC MobilePro 900 running Windows CE.NET; numerous older machines of this class serve perfectly well under Linux or NetBSD in the most varied roles. After all this, some disappointing conclusions suggest themselves: Microsoft is deliberately slowing the progress of computer hardware. Not even slowing it, rather steering it down the path of dumb frequency increases and support for one or two manufacturers. After all, 10-12 years ago Windows NT ran happily on several architectures. Why do you think the directory with the WinXP installer is called i386? Because alongside it there used to be alpha, powerpc, mips, ia64 (that one still exists for some editions of Win2000 and Win2003), and perhaps something else besides. The same is happening with handhelds: since 2003, the newest versions of WinMobile no longer support the fast, economical, 64-bit MIPS processors. There are two reasons. First, all these builds are completely incompatible at the software level; Open Source has no such problem — a program can be recompiled for almost any platform. The second reason is Windows drifting ever further from its roots. I mean that the older a Windows (NT and CE branches only), the more OS/2 and UNIX code it contains. Who was it who claimed it was written from scratch, that "Unix sucks and we took nothing from there"? Ever seen a blue screen with an error like "/devices/floppy0/sdfg.sys not found"? Ring a bell?

About the upcoming Windows Vista I had better keep quiet. According to reports from users (or should that be free beta testers?), it slows down very nicely on a machine with an Athlon64 3200+, 1 GB of RAM, and a GeForce 6600. What's more, the developers have already abandoned half of the promised super-duper features — WinFS, for example. And if we do see the new Windows sometime around 2008, it will be only a slightly prettified XP. On top of that, DirectX 9 sits in the system requirements, which spells incompatibility with professional video cards such as the nVidia Quadro or ATI FireGL. The server version, currently known as Windows Server 2007, I cannot imagine at all. There is one spoonful of honey in this barrel of... tar: we are promised a unification of drivers, at least the graphics ones. That would compensate for hardware makers' dislike of writing drivers for Linux and hand users good, if proprietary, drivers. Raving? No. The Ndiswrapper project thrives, letting Windows network card drivers (Windows 2000 and later) run under Linux and FreeBSD.

Source: https://habr.com/ru/post/154325/

