
Thin clients as they are

A few years ago, the director of the firm where I worked as a system administrator (a company selling computer hardware and doing some server work) had the idea of making and selling thin clients. After a number of not very successful attempts at fully outsourcing the development, I was pulled into the process as someone who understood what a thin client is for and how it should work (I did not write the software myself; I argued with the developers and, as far as possible, advised on what was needed in a thin client and what was not).

I worked with thin clients for two years. Now I am moving on to another job, but before I do, I will write down what I know about thin clients, both from the user's side and from the maker's side.

Let's start with the theory, or more precisely, with what thin clients are for. Before that, we will have to understand why all the fuss with remote desktops, workplaces, VDI, clouds, clusters, application farms and so on exists in the first place.

The main idea of remote access (whether it is plain RDP or non-trivial access to an application in the cloud through some intermediary server) is to separate the user's hardware (keyboard and monitor) from the data and applications. User hardware has a habit of dying, degrading, getting stolen, or being confiscated by members of (il)legitimate armed groups. That is one side of it. On the other hand, where administrative control works poorly, users may want to use the computer "the wrong way": install games, watch movies, and so on. Any attempt to protect against this either slides into extremism (riveting the case shut) or turns into an endless struggle between enterprising hackers and fed-up administrators.

An ordinary desktop (and it really doesn't matter whether it is a Mac, Windows or Linux) has every possible means to ruin the life of the system administrator, the head of the security department (if there is one), or even the CEO.

Data and applications are stored in a box under the table, where there is plenty of dust, and where it gets poked with all kinds of nonsense like flash drives. Inside the box the components are usually... of moderate reliability, and can die from that flash-drive poking, not to mention everything else: dust, dirt and so on. The data is vulnerable to unauthorized software and to strangers.

Put reliable components into every user's computer? RAID, daily backups? That is a dead-end path (although, yes, RAID1 in an accountant's computer may not be such a bad idea, especially in March of each year, when the deadline for filing the finished tax returns approaches).

The obvious solution was to move the data out of the computer. There are three fundamentally different ways of doing this: the application knows that the data lives on the server and works with it directly (any SQL-based application and all web applications work this way); the data lives on the server but its local presence is simulated (network shares); and synchronization between locally stored data and a remote copy (roaming profiles). Each method has its pros and cons.
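To make the first approach concrete, here is a minimal sketch of a "server-aware" application (assuming a PostgreSQL server and the third-party psycopg2 driver; the host name, credentials and table are made-up placeholders). The application knows the data lives on the server and never keeps a local copy:

# Sketch of a "server-aware" application: the data never touches the
# user's disk, the program talks to the database server directly.
# Host, credentials and schema below are illustrative placeholders.
import psycopg2

conn = psycopg2.connect(host="db.example.local", dbname="sales",
                        user="clerk", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("SELECT id, total FROM invoices WHERE status = %s", ("open",))
    for invoice_id, total in cur.fetchall():
        print(invoice_id, total)
conn.close()

# Contrast with the second approach (network shares): the program thinks
# it opens a local file, while the path actually points at a file server:
#   open(r"\\fileserver\docs\report.xlsx", "rb")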

"Direct operation of the application with the server" looks the most attractive, but it requires an application that "understands" this. At a time when our bank-client software still cannot unlearn running as administrator and insists on write access to its own directory, demanding that it work with servers in whatever way is convenient for the administrator is rather optimistic...

Network shares are more interesting, but they have serious performance limitations. If some program (say, an e-mail client) decides to re-index a 16 GB mail archive for several users at once, any reasonable piece of hardware will choke. And the user doesn't care what is happening where; what worries him is that things are slow _here_. The more users there are, the more acute the performance problem becomes.

Roaming profiles (on Windows) also have a significant drawback: a long profile load (the same 16 changed gigabytes) every morning and every evening. Plus a local copy, which may be of interest to the economically motivated armed groups that break into the office.

And, most importantly, none of these methods solves the problem of applications. Applications can be unbelievably capricious (I once dealt with a program for printing specific documents from a customs-declaration package that stopped working if the default keyboard layout was not Russian). Besides programmers' quirks, an application can also be objectively hard to set up, taking several hours to bring into working condition.

All this leads to the idea: why don't we move the applications to the same place as the data, that is, to the server? And turn the workplace into a flat, primitive board with buttons that can only show what the programs on the server have drawn.

This idea looked beautiful in the mid-1990s, when terminal solutions were just beginning to gain popularity (at first mostly from Citrix, then gradually from Microsoft). In reality there are plenty of problems there, and people are still busy solving them... But this article is about thin clients.

So, the thin client's job is to present the server's interface. Why a thin client, and not "just a workstation with a terminal session"? Besides the marketing noise about energy efficiency, reliability and silence (all true, but nobody cares), the main reason for using thin clients is standardization of workplaces, with their capabilities cut down to the level of "no more than necessary". The fewer freedoms, the fewer ways to get into trouble. Building a thin client yourself is possible, but much harder than it seems to someone who can configure an X server with rdesktop on autorun. (I hope to write more later about DIY thin clients and the problems awaiting those who try.)
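For reference, the DIY route mentioned above usually boils down to a session loop like the following (a minimal sketch, not any vendor's code: the terminal server address is a placeholder, and in practice this would typically be a small shell script launched from .xinitrc rather than Python):

#!/usr/bin/env python3
# Minimal DIY "thin client" session loop: keep a full-screen rdesktop
# session running and restart it whenever it exits (logoff, network drop).
# The terminal server address is a placeholder.
import subprocess
import time

TERMINAL_SERVER = "ts.example.local"

while True:
    subprocess.run(["rdesktop", "-f", TERMINAL_SERVER])  # -f = fullscreen
    time.sleep(2)  # brief pause before reconnecting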

There are many thin-client manufacturers on the market (too many, as it seems from the perspective of one of those manufacturers), but by and large all thin clients, from monsters of the HP level down to small local manufacturers (at one of which I worked), are more or less the same at their core.

Five years ago, all the functionality of a thin client could be reduced to "configure", "connect", "work". The list of protocols was modest: RDP, ICA, SSH, NX (NoMachine), X11/XDM, VNC... perhaps a web browser. Now the situation has begun to lean towards all kinds of clouds and virtualization, but from the thin client's point of view this means just one more client for yet another remote-access service. A much more serious change has been Web 2.0 applications, i.e. applications that store data on the server but run (partially) in the user's web browser. This subtle shift led to a significant rethinking of what a thin client should be: it should not only draw, it should also compute. However, this change reopened the question of the vulnerability of the workplace.

Another major change was the appearance of built-in SIP/Skype clients and attempts (not very successful yet) to do multimedia well (video in particular).

In terms of boot method, thin clients fall into two classes: local boot and network boot. For obvious reasons network boot is mostly a Linux affair: nobody wants to pull a gigabyte of Windows over the network, and I have not heard of PXE for CE (although it would be interesting, since an installed CE image is typically only 15-30 MB).

So, what are the advantages of local boot? The main one (and, in my experience, quite a serious one from the point of view of many buyers) is that it works out of the box. No DHCP/TFTP servers, no fuss with preparation. Plug it in, turn it on, enter the server address, work. The second plus is the absence of "morning lag" (when everyone starts booting at the same time).

The pluses of network boot are a somewhat lower price (by the cost of the DOM, a Disk-on-Module flash drive, i.e. 10-30 dollars) and automatic delivery of the latest version. The minuses are the morning lag, extra fuss with DHCP, and the resulting unsuitability of such thin clients for small branch offices (where the entire Internet connection is a single SOHO box, possibly over Wi-Fi).

With the exception of experimental solutions (which I will probably discuss later), all thin clients are built on one of three platforms: Windows CE, Linux and Windows Embedded Standard (a special version of Windows XP that differs quite a bit from the desktop one). Up to a certain point the championship flag was on the Linux side, but now the situation has turned towards CE 6. The reason: rdesktop does not support RDP version 6 or higher, and CE does. This is quite a significant advantage, if only because of microphone support (absent from RDP 5.2, and therefore from CE 5 and rdesktop). On the other hand, the VMware View client exists only for Linux and WES, and all its CE versions are "home-grown" (that is, not written by VMware) and suffer badly from... the fact that they were not written by the authors of the server. Another drawback of CE is rather poor web support: it exists, but IE6 on CE is even more terrifying than regular IE. Firefox on Linux and the full range of browsers on WES are much more pleasant.

Much more important for a thin client (than the boot mode) is what kind of management it offers.

Management can be:
  1. local (walk up to the thin client, press F2, land in the configurator);
  2. remote (connect to the thin client, configure it, disconnect);
  3. centralized (pick a group of thin clients, set the settings, and from then on it all happens by itself).


The first is a de facto standard for thin clients with local boot, but it is often absent from PXE thin clients (since they boot from the server and take their settings from there).

The second and third are a feature for locally booting clients, i.e. not every manufacturer has them at all. Centralized management deserves a separate mention. It is a much more complicated thing than it seems at first glance, because the point is not to push identical settings to all thin clients, but to be able to specify which settings are shared and which are not (for example, hardware settings differ for every device, while session settings may vary from group to group).
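To illustrate that layering (a sketch with invented setting names, not any real vendor's format), the effective configuration of a single device can be thought of as a merge of several levels, where the more specific level wins:

# Layered thin-client settings: global defaults are overridden by group
# settings, which are overridden by per-device settings.
# All names and values here are made up for illustration.
GLOBAL_DEFAULTS = {"protocol": "rdp", "server": "ts.example.local",
                   "resolution": "auto", "sound": False}

GROUP_SETTINGS = {
    "accounting": {"server": "ts-acc.example.local", "sound": True},
    "warehouse": {"resolution": "1024x768"},
}

DEVICE_SETTINGS = {
    # Hardware settings are naturally per-device.
    "tc-0042": {"group": "accounting", "resolution": "1280x1024"},
}

def effective_config(device_id: str) -> dict:
    device = DEVICE_SETTINGS.get(device_id, {})
    group = GROUP_SETTINGS.get(device.get("group", ""), {})
    config = dict(GLOBAL_DEFAULTS)
    config.update(group)  # group overrides global
    config.update({k: v for k, v in device.items() if k != "group"})  # device overrides group
    return config

print(effective_config("tc-0042"))
# -> {'protocol': 'rdp', 'server': 'ts-acc.example.local',
#     'resolution': '1280x1024', 'sound': True}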

For thin clients running Windows, the fanciest feature is integration with AD, i.e. the ability either to pull data from Active Directory or (this is the real aerobatics) to register thin clients as computers in the AD snap-in, apply group policies based on OU/site, and so on. I must say that, in some strange way, WES can do all of this "out of the box". However, WES, for all its visual appeal, is the worst platform for a thin client (because it is too similar to ordinary Windows and vulnerable to everything bad that can be done to Windows...).

Most manufacturers work in a very closed mode (i.e. they publish nothing, although they consume plenty from the open-source community). The only exception I know of is openthinclient.org, a rather nice and feature-rich PXE-based thin client with centralized management. Its drawback is size: about 150 megabytes are pulled over the network by every thin client.

(A continuation, about how thin clients are made, follows.)

Source: https://habr.com/ru/post/90670/

