This post is a continuation of the previous one.
First of all, I would like to note that it was warm in this hall, unlike hall A, where the keynote took place.
This post covers the following talks:
- “Unlimited computing capabilities for your organization - from HPC Server to Azure” by Oleg Kryuchkov
- “Personal Clouds: Desktop as a Service” by Igor Shastitko
The talk began with the observation that HPC (high-performance computing) is moving from a highly specialized niche toward universal applicability, in particular through integration into office infrastructure.
The simplification of HPC is proceeding along two main lines:
- turning HPC into a mass-market technology
- developing the partner ecosystem
In brief, the following main use cases were considered:
- engineering and scientific calculations
- accelerating Excel 2010 with an HPC cluster
- SOA applications (which do not require MPI)
HPC integration with SharePoint is also supported. The HPC calculations themselves are performed by a special add-on platform on top of Windows Server 2008 R2.
Windows HPC Server 2008 R2 includes:
- a complete, integrated clustering platform (OS + MS HPC Pack)
- an add-on on top of 64-bit Windows Server 2008 R2
- support for all classes of HPC tasks
HPC in action: integration with Excel 2010
The implementation of HPC in Excel 2010 was illustrated with a real case from an American insurance company: a calculation over 1700 records that previously took 14 hours completes in 2.5 minutes with HPC.
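For scale, a quick back-of-the-envelope check of the speedup implied by those figures (my own arithmetic, not from the talk):

```python
# Speedup implied by the insurance-company example:
# 14 hours before HPC vs. 2.5 minutes after.
before_minutes = 14 * 60          # 840 minutes
after_minutes = 2.5
speedup = before_minutes / after_minutes
print(speedup)  # 336.0, i.e. roughly a 336x speedup
```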
After integration, the HPC menu item appears in the Excel menu bar.
Using workstations as cluster nodes requires Microsoft Windows 7. The server allows fairly flexible configuration: resource limits (CPU usage; the speaker could not say whether memory usage can also be limited, but in any case, using machines simultaneously as cluster nodes and as workstations is not recommended), work schedules, and so on.
Let us look at how this Excel-HPC combination works. It involves the following parts:
- the client that needs to run the calculations
- the head node, which controls access to the cluster
- the broker node, which directly controls the compute nodes
- the ordinary compute nodes, which perform the calculations
In general, the calculation process looks like this:
1. The client sends a calculation request to the head node.
2. If the head node grants permission, the data goes to the broker, which distributes the tasks between the nodes as follows:
- if there is a large number of cells, each node processes a small block of records;
- if there are relatively few cells, but each one takes considerable time to process and a large number of functions is used, each node evaluates one or several functions.
3. The broker collects the results together and returns them to the client.
Notably, the client does not need to wait for the calculations to finish: you can close Excel and later simply fetch the ready-made calculated table from the broker at the press of a button.
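To make the flow concrete, here is a minimal sketch of how a broker might split work across compute nodes using the two strategies described above. This is purely illustrative Python (the real stack is .NET-based), and all names and thresholds here are my own, not the HPC Pack API:

```python
# Illustrative simulation of the Excel-HPC calculation flow:
# client -> head node (admission) -> broker (partitioning) -> compute nodes.

def broker_partition(cells, functions, num_nodes, many_cells_threshold=1000):
    """Split work between nodes using the two strategies from the talk."""
    if len(cells) >= many_cells_threshold:
        # Many cells: each node gets a small block of records.
        block = -(-len(cells) // num_nodes)  # ceiling division
        return [("records", cells[i:i + block])
                for i in range(0, len(cells), block)]
    # Few cells but heavy functions: each node evaluates one or a few functions.
    return [("functions", functions[i::num_nodes]) for i in range(num_nodes)]

def run_job(cells, functions, num_nodes):
    tasks = broker_partition(cells, functions, num_nodes)
    # Each "node" does its share of the work (a stand-in computation here);
    # the broker then collects the partial results for the client.
    partial = [len(payload) for _, payload in tasks]
    return sum(partial)

# Many cells: work is split into blocks of records.
print(run_job(cells=list(range(5000)), functions=["f"], num_nodes=4))       # 5000
# Few cells, several heavy functions: split by function instead.
print(run_job(cells=list(range(10)), functions=["f", "g", "h"], num_nodes=3))  # 3
```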
Now for the question that is probably already on many minds: how much does all this goodness cost? I am not well versed in financial matters, so there may be inaccuracies here. Still, the licenses and their prices:
- Head Node: $450
- Compute node OS (Windows HPC Server): $250
- HPC Enterprise: $945
- Broker: $450
- Workstation: $100
HPC Development
Visual Studio with MPI is used, along with the Task Parallel Library in .NET Framework 4.0 and a few other libraries and technologies that I did not have time to write down; those interested can look at techdays.in.ua, where, as I said, full lecture videos and all the materials will be uploaded soon.
The use of graphics libraries is also allowed. As far as I understand, it was with tools like these that Microsoft rendered Avatar using cloud-based HPC computing.
Administrative console
The administrative console deserves separate attention: it is quite a powerful and convenient tool for managing and monitoring the cluster. It provides:
- configuration
- monitoring
- a "heat map": information on the nodes on one screen, showing which nodes are misbehaving on which parameters (up to 3 metrics per node)
- - up to 100 nodes per screen
- - "map" or "list" mode
- - a color scale for each metric
- node management
- task management
- reports (a rich collection of ready-made templates, plus the ability to configure your own)
- diagnostics
Scheduling is done along the following main dimensions:
- nodes, processors, cores
- different types of tasks
- various policies
- various interfaces
- support for clusters of over 1000 nodes
- load balancing
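As an illustration of the load-balancing idea, here is a greedy scheduler that always assigns the next task to the currently least-loaded node. This is my own sketch of the general technique, not HPC Server's actual scheduler:

```python
import heapq

def balance(task_costs, num_nodes):
    """Greedy load balancing: each task goes to the least-loaded node."""
    # Min-heap of (current load, node id); ties break by node id.
    nodes = [(0, n) for n in range(num_nodes)]
    heapq.heapify(nodes)
    assignment = {n: [] for n in range(num_nodes)}
    for task, cost in enumerate(task_costs):
        load, n = heapq.heappop(nodes)
        assignment[n].append(task)
        heapq.heappush(nodes, (load + cost, n))
    return assignment

# Six tasks of varying cost spread across three nodes.
print(balance([5, 3, 8, 1, 2, 7], num_nodes=3))
```

The heaviest task ends up alone on one node, while cheaper tasks are packed together elsewhere.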
However, we all remember the figures quoted for IE9, and how that all ended, so I treated these charts with some skepticism.
Finally, the audience was shown a list of customers and partners, among which I noticed brands such as Nvidia, Intel, and Acer.
That concluded the first talk.
Talk 2: “Personal Clouds: Desktop as a Service”, Igor Shastitko
A brief description from the conference site, translated from Russian:
Microsoft has in its portfolio many technologies that make it easy to deploy and support workplaces in today's changing environments: application virtualization (App-V, MED-V, RemoteApp, Terminal Server), OS virtualization (Remote Desktop, Terminal Services, VDI), hardware virtualization (Hyper-V), and management (SCCM, SCVMM, SCOM). The talk discussed which services can be implemented to centralize workplaces and make them dynamic.
First, a few introductory words were said about the difficulties of deploying and migrating workstations and about the need for, and usefulness of, DaaS. Special attention was paid to the fact that VDI is not DaaS.
The ideal DaaS uses appropriate methods of isolation, virtualization, delivery, and management to meet the requirements.
When creating a DaaS strategy, the OS, applications, and data are separated. Each of these items may have a different implementation principle.
What a virtual workplace is and what it consists of:
- user state virtualization (data & settings)
- presentation virtualization
- application virtualization
- OS virtualization (not fully implemented yet; expected in 2012)
Gartner presents 10 architectural scenarios built on the following principles.
The OS can be:
- Local: as on a normal workstation.
- Streamed: the image is delivered to the target computer.
- Hosted: the OS runs on a server. A prime example is a terminal server or a thin-client server.
Applications are divided similarly to the OS:
- Distributed: installed locally.
- Streamed: stored on a server and, when needed, delivered to the client machine and run there.
- Centralized: applications reside and run on a remote server.
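The OS and application delivery modes form a grid of candidate architectures; a quick enumeration of it (my own sketch of the principle only; Gartner's actual taxonomy has ten scenarios, so this 3x3 grid is not their exact list):

```python
from itertools import product

os_modes = ["local", "streamed", "hosted"]
app_modes = ["distributed", "streamed", "centralized"]

# Each (OS mode, application mode) pair is a candidate architecture scenario.
scenarios = list(product(os_modes, app_modes))
print(len(scenarios))  # 9
for os_mode, app_mode in scenarios:
    print(f"OS: {os_mode:8} / apps: {app_mode}")
```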
Igor Shastitko: “Who entrusted you to ask this question?”
DaaS strategy development proceeds as follows:
- decapitalization of the infrastructure
- definition of user scenarios and their basic patterns
- overlaying the available strategies
- defining user / site profiles
- choosing the technologies to implement the specific needs
The presentation included several flowcharts for choosing the optimal DaaS strategy for a specific user. Those interested in the details can look for the presentation on techdays.
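A flowchart of that kind can be approximated by a simple decision function. Everything below (the profile fields, the branching order, and the specific technology pairings) is my own illustrative assumption, not the actual flowchart from the talk:

```python
def pick_daas_strategy(mobile, offline_needed, heavy_graphics):
    """Toy decision tree for choosing a desktop-delivery strategy.

    The branching logic is illustrative only, not Gartner's or Microsoft's.
    """
    if offline_needed:
        # Users who must work without a network need the OS and apps with them.
        return "local OS + streamed applications (App-V)"
    if heavy_graphics:
        # Heavy graphics workloads favor local execution over remoting.
        return "local OS + locally installed applications"
    if mobile:
        # Roaming users with connectivity: host the whole desktop centrally.
        return "hosted OS (VDI) + centralized applications"
    # Stationary, connected task workers: classic terminal services.
    return "hosted OS (Terminal Services) + centralized applications"

print(pick_daas_strategy(mobile=True, offline_needed=False, heavy_graphics=False))
```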
Results:
- no single solution suits everyone, if only because such an approach often has no logical justification
- streamline your environment according to the chosen strategy
- implement the appropriate virtualization technologies to isolate tasks at the required stack level (OS / App / Presentation / Settings & Data)
- DaaS v1 in the form of a Solution Accelerator will be released soon