
Tuning typical Windows server roles. Part two: terminal server and deduplication


We continue squeezing all the juice out of Windows Server. This time I will talk about setting up a remote desktop server, better known as a terminal server. As the cherry on top, we'll look at tuning the Windows volume deduplication feature.


Remote Desktop Server


The central role in a remote desktop infrastructure is the session host, Remote Desktop Session Host (hereinafter RDSH). I'll start with choosing hardware for this role.


With the processor and memory, everything is fairly simple: more memory and higher-frequency processors with a larger cache are better. More attention should be paid to the disk subsystem, because it often becomes the bottleneck. The main disk load can be divided into three groups:

  • the operating system and applications;

  • the paging file;

  • user profiles.

To improve performance, it makes sense to spread these groups across different physical disks. The paging file was already covered in the previous article, but user profiles will require some registry manipulation.


To change where profiles are stored, you need to change the following registry value:
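Specifically, this is the ProfilesDirectory value under HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList, which by default points to %SystemDrive%\Users. A minimal PowerShell sketch, assuming the new location is D:\Profiles (a hypothetical path):

# D:\Profiles is an example path; the value is of type REG_EXPAND_SZ
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList' `
    -Name 'ProfilesDirectory' -Value 'D:\Profiles'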



After that, the profiles of all new users will be created in the new location.


Roaming user profiles for the terminal server deserve a separate mention. You can configure them through Group Policy: Computer Configuration - Administrative Templates - Windows Components - Remote Desktop Services - Remote Desktop Session Host - Profiles - Set path for Remote Desktop Services Roaming User Profile. This policy lets you set where terminal server user profiles are stored, but only part of the profile is roamed; in particular, AppData\Local stays in the old location.

Besides changing where profiles are stored, it is useful to redirect a number of user folders to another location. This is usually done for performance and for backup planning: after all, "My Documents" and "Desktop" can contain important information, yet they do not require high performance. Redirection is conveniently configured through Group Policy.


Under the spoiler is a reminder of the best way to do it.

The policy itself is configured under User Configuration - Windows Settings - Folder Redirection, and alternative locations can be specified for most user profile folders.



Setting folder redirection policy


To avoid problems with creating folders and accessing them, I usually set the following permissions on the share with redirected folders (a PowerShell sketch follows below):


  • the Everyone group can create folders and read; these permissions apply only to the root folder;


  • the creator-owner has full access, but only to subfolders and files;


  • Administrators and SYSTEM have full access.


Permissions on a network share with redirected user folders.
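For reference, a sketch of setting these permissions with PowerShell, assuming the share root is D:\Redirected (a hypothetical path); this is an illustration, not a definitive implementation:

# Assumption: D:\Redirected is the root of the share with redirected folders
$path = 'D:\Redirected'
$acl = Get-Acl $path
$acl.SetAccessRuleProtection($true, $false)   # stop inheriting permissions from above
# Everyone: read + create folders, this folder only
$acl.AddAccessRule((New-Object Security.AccessControl.FileSystemAccessRule('Everyone', 'ReadAndExecute,CreateDirectories', 'None', 'None', 'Allow')))
# CREATOR OWNER: full control, subfolders and files only
$acl.AddAccessRule((New-Object Security.AccessControl.FileSystemAccessRule('CREATOR OWNER', 'FullControl', 'ContainerInherit,ObjectInherit', 'InheritOnly', 'Allow')))
# Administrators and SYSTEM: full control over everything
$acl.AddAccessRule((New-Object Security.AccessControl.FileSystemAccessRule('BUILTIN\Administrators', 'FullControl', 'ContainerInherit,ObjectInherit', 'None', 'Allow')))
$acl.AddAccessRule((New-Object Security.AccessControl.FileSystemAccessRule('NT AUTHORITY\SYSTEM', 'FullControl', 'ContainerInherit,ObjectInherit', 'None', 'Allow')))
Set-Acl $path $acl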


One of the main disk loads on a server with the RDSH role is synchronous writes. The reason, again, is the users: during normal operation the profile is accessed constantly, and user registry hives (%userprofile%\ntuser.dat) are loaded and unloaded. Besides building fast arrays of one kind or another, a write cache can help optimize performance.


If your RAID controller lacks a battery-backed cache, think about buying one, or else keep boundless faith in your UPS. The cache can also be configured from Windows itself: on the Policies tab of the disk properties in the Disk Management snap-in.



Configuring the Windows disk write cache.


The second potential bottleneck is the network subsystem. For optimization, it is worth splitting the network into a notional frontend and backend where possible: user connections go through the frontend, and connections to other servers, including the storage for redirected folders and roaming profiles, go through the backend.


User connections usually generate minimal traffic; for better backend performance, you can use adapter teaming (LACP) or install ten-gigabit adapters in the server.
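A minimal sketch of creating an LACP team with the built-in cmdlets, assuming two backend adapters named 'NIC1' and 'NIC2' (hypothetical names; the switch ports must also be configured for LACP):

# NIC1 and NIC2 are example adapter names; check yours with Get-NetAdapter
New-NetLbfoTeam -Name 'Backend' -TeamMembers 'NIC1', 'NIC2' `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic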


In configuring the operating system itself, the main point is removing everything unneeded, such as extra services and scheduled tasks.




129 - that's how many scheduled tasks there are in a freshly installed Windows Server 2016 with Office 2016.


One cannot fail to mention a terminal server capability called RemoteFX; configuring its behavior also affects performance.


RemoteFX is a whole set of technologies covering RDP compression, device redirection, and work with the video card and USB devices, both in virtual machines and on a terminal server.

A useful setting is RemoteFX compression. You can find it in Group Policy under Computer Configuration - Administrative Templates - Windows Components - Remote Desktop Services - Remote Desktop Session Host - Remote Session Environment - Configure compression for RemoteFX data.



Configure RemoteFX compression through group policies.


The setting has the following options:

  • Optimized to use less memory;

  • Balances memory and network bandwidth (the default);

  • Optimized to use less network bandwidth;

  • Do not use an RDP compression algorithm.

With a small number of users inside the local network, this tuning has little effect on speed. But when there are many users, or they connect remotely, it is worth considering.


Next to this setting are other options that affect speed: encoding settings and the maximum screen resolution, among others. Of particular interest is the policy for rendering with a graphics adapter: Use hardware graphics adapters for all Remote Desktop Services sessions. Starting with Windows Server 2016, RemoteFX can work not only with virtual video cards in virtual machines, but with a video card directly. Yes, you can now install a video card even in an ordinary remote desktop session host to speed things up.


Other display settings can be made on the client side, in .rdp files. Most of the options live on the Experience tab of the Remote Desktop Connection client.



Display settings that affect performance.


You can configure them manually, use the presets tied to an approximate connection speed, or leave it to automation. Most importantly, do not disable "Persistent bitmap caching": with this option enabled, the client keeps an image cache and downloads only the changes.
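For illustration, a few performance-related lines from a typical .rdp file; a sketch with assumed values rather than a recommendation:

compression:i:1
bitmapcachepersistenable:i:1
session bpp:i:16
connection type:i:2
disable wallpaper:i:1
disable full window drag:i:1
disable menu anims:i:1

Here connection type:i:2 corresponds to a low-speed connection profile, and session bpp:i:16 caps the color depth at 16 bits.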


Now let me turn to the performance of another terminal services role: the Remote Desktop Gateway.


The Remote Desktop Gateway (RDG) is convenient for connecting external clients without a VPN; in particular, it allows certificate-based authentication. A configuration example can be found in the relevant article; here I will limit myself to performance.


RDG can use TCP and UDP transports, but in most cases the standard RPC over HTTP is used. The following registry settings apply to it:


| Parameter | Path | Default value | Comment |
|---|---|---|---|
| MaxIoThreads | HKLM\Software\Microsoft\Terminal Server Gateway | Equal to the number of processors | Number of I/O threads the RD Gateway uses to process requests |
| MaxPoolThreads | HKLM\System\CurrentControlSet\Services\InetInfo\Parameters | 4 | Number of threads in the IIS thread pool |
| ServerReceiveWindow | HKLM\Software\Microsoft\Rpc | 64 KB | Maximum receive window on the server side; can range from 8 KB to 1 GB |
| ClientReceiveWindow | HKLM\Software\Microsoft\Rpc | 64 KB | Maximum receive window on the client side; can range from 8 KB to 1 GB |
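A hedged example of raising the RPC receive windows via PowerShell, assuming 128 KB suits your link (the value and its benefit depend on the environment):

$rpc = 'HKLM:\Software\Microsoft\Rpc'
# 128KB is an example value; the valid range is 8 KB to 1 GB
New-ItemProperty -Path $rpc -Name 'ServerReceiveWindow' -PropertyType DWord -Value 128KB -Force
New-ItemProperty -Path $rpc -Name 'ClientReceiveWindow' -PropertyType DWord -Value 128KB -Force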

Performance counters will help you work out where the problems are.



I will also say a few words about the third role, the Remote Desktop Virtualization Host. It is needed to deploy a Virtual Desktop Infrastructure (VDI), which is somewhat less popular than regular terminal servers.


Beyond the general recommendations for hypervisors (faster, higher, stronger), in a VDI infrastructure running pooled virtual desktops it makes sense to weed out unnecessary services and features in the client operating systems.



Besides disabling services in the template virtual machine, it is worth enabling deduplication to optimize storage in any VDI deployment variant. This can be done with a PowerShell command:


Enable-DedupVolume <volume> -UsageType HyperV 

Deduplication will help save a lot of space, and not only with virtual desktops.


Deduplication


Deduplication is a data-reduction technique in which duplicate data is stored only once.


Under the spoiler, a reminder of when and how to enable deduplication.

Microsoft recommends enabling deduplication in the following cases:


  • general-purpose file servers: shared user folders, redirected profile folders, and so on;


  • virtual desktop infrastructure (VDI) servers;


  • virtualized backup applications like Microsoft DPM.

In all other cases, the benefit of deduplication should be evaluated beforehand. The DDPEval.exe utility, which appears after you install the deduplication role on the server, will help with this.
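A minimal usage sketch: point DDPEval at a folder or volume and it estimates the potential savings (E:\Shares is a hypothetical path):

DDPEval.exe E:\Shares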



On a disk with a large number of 1C file databases, deduplication can save up to 70% of the space.


The deduplication component is installed through the GUI or with a PowerShell cmdlet:


Install-WindowsFeature -Name FS-Data-Deduplication


Deduplication is conveniently enabled with the Enable-DedupVolume cmdlet and its -UsageType parameter, which can be one of the following (see the example after the list):


  • HyperV - for storage of virtual machines and VDI;


  • Backup - for virtualized backup applications;


  • Default - for general-purpose file servers (the default value).
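A short sketch of enabling deduplication on a file server volume and checking the result; the drive letter E: is an assumption:

Enable-DedupVolume -Volume 'E:' -UsageType Default
Get-DedupStatus -Volume 'E:'    # shows saved space and the number of optimized files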

For more about deduplication and how its modes differ, see the Microsoft documentation.


To strike a balance between storage savings and performance, pay attention to deduplication's scheduled jobs and its fine-tuning.


The deduplication mechanism uses three types of scheduled jobs:


| Name | What it does | Default schedule |
|---|---|---|
| Optimization | Performs deduplication | Every hour |
| Garbage collection | Frees up disk space | Every Saturday at 02:35 |
| Integrity scrubbing | Finds and repairs corruption | Every Saturday at 03:35 |

You can view the scheduled jobs with the Get-DedupSchedule cmdlet.



Scheduled deduplication jobs by default.


Of course, such a schedule may be unacceptable, especially on heavily loaded servers. In that case, it is convenient to run the jobs only during idle hours. First, disable the built-in jobs with PowerShell:


 Get-DedupSchedule | ForEach-Object { Set-DedupSchedule -InputObject $_ -Enabled $false } 

Then add a new optimization job that runs after hours:


 New-DedupSchedule -Name "Optimization" -Type Optimization -DurationHours 8 -Memory 100 -Cores 100 -Priority High -Days @(1,2,3,4,5) -Start (Get-Date "2017-01-01 21:00:00") 

And garbage collection and integrity check jobs on non-working days:


 New-DedupSchedule -Name "WeeklyGarbageCollection" -Type GarbageCollection -DurationHours 23 -Memory 100 -Cores 100 -Priority High -Days @(6) -Start (Get-Date "2017-01-01 07:00:00")
 New-DedupSchedule -Name "WeeklyIntegrityScrubbing" -Type Scrubbing -DurationHours 23 -Memory 100 -Cores 100 -Priority High -Days @(0) -Start (Get-Date "2017-01-01 07:00:00")

You can read more about the syntax of the New-DedupSchedule cmdlet in the documentation.


Most deduplication tweaks, that is, most of the per-volume parameters, are configured with the Set-DedupVolume cmdlet.


Usage example:


 Set-DedupVolume -Volume F: -OptimizePartialFiles 

Under the spoiler you will find the possible parameters of the cmdlet.
| Parameter | Description | Valid values | Comment |
|---|---|---|---|
| ChunkRedundancyThreshold | Number of references to a chunk before it is copied into the hotspot area | Positive integers | There is usually no need to change the default, but raising the value may speed up volumes with highly duplicated data |
| ExcludeFileType | File types excluded from deduplication | Array of file extensions | Files such as multimedia deduplicate poorly; there is no point optimizing them |
| ExcludeFolder | Folders excluded from deduplication | Array of folder paths | Excluding some folders can improve performance |
| InputOutputScale | Degree of parallelization of I/O operations | 1-36 | On a heavily loaded server you can reduce deduplication I/O parallelism: optimization will be slower, but overall server performance will improve |
| MinimumFileAgeDays | Number of days after a file is created before it is considered eligible for optimization | Positive integers, including 0 | The default is 3; in some cases you can change it so more files are optimized |
| MinimumFileSize | Minimum size for a file to be considered eligible for optimization | Positive integers (bytes) greater than 32 KB | Deduplicating small files makes no sense |
| NoCompress | Do not compress deduplicated chunks | True/False | For a volume with lots of compressed data, such as archives or multimedia, it makes sense to turn compression off |
| NoCompressionFileType | File types that should not be compressed | Array of file extensions | There is no point compressing already compressed files |
| OptimizeInUseFiles | Deduplicate files that are in use | True/False | Worth enabling if the volume mostly holds large, constantly used files in which only parts change regularly; otherwise such files will not be optimized |
| OptimizePartialFiles | When enabled, MinimumFileAge applies to file segments rather than whole files | True/False | |
| Verify | Verify chunks byte by byte, not only by hash | True/False | Enabling this slows performance but gives a stronger guarantee of data integrity |

Besides the per-volume settings, there are settings for the deduplication service as a whole. Two parameters in particular are of interest, located in the registry at the following path:


 HKLM:\System\CurrentControlSet\Services\ddpsvc\Settings 

| Parameter | Description | Valid values | Comment |
|---|---|---|---|
| WlmMemoryOverPercentThreshold | Allows deduplication jobs to use more memory than the automatic estimate | Positive integers, as a percentage (for example, 300 means "three times more") | Changing this matters when deduplication jobs run at the same time as other resource-intensive tasks |
| DeepGCInterval | Interval between full garbage collection runs | Positive integers; -1 = disabled | Full garbage collection reclaims space more thoroughly but is more resource-intensive. In practice, the difference in freed space between a regular and a full run is around 5% |
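A hedged sketch of changing both values, assuming deduplication shares the server with other workloads (the numbers are illustrative, not recommendations):

$settings = 'HKLM:\System\CurrentControlSet\Services\ddpsvc\Settings'
# Example: make every 10th garbage collection a full one
New-ItemProperty -Path $settings -Name 'DeepGCInterval' -PropertyType DWord -Value 10 -Force
# Example: let jobs use up to twice the automatically estimated memory
New-ItemProperty -Path $settings -Name 'WlmMemoryOverPercentThreshold' -PropertyType DWord -Value 200 -Force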

Now that deduplication is set up, you can practice saying "deduplicate the deduplicated" three times fast, and then spend the budget not on disks but on new servers for the remote desktop farm. And start tuning those as well.


Have you ever had to tune terminal server performance like this, or do you keep everything "at the defaults"? Did it bring a tangible result?



Source: https://habr.com/ru/post/333476/

