I am Artem Klavdiev, technical lead of the Linxdatacenter HyperCloud hyperconverged cloud project. Today I will continue the story about the Cisco Live EMEA 2019 global conference, moving from the general to the particular: the announcements the vendor presented at the relevant sessions. This was my first time at Cisco Live; my mission was to attend the events of the technical program, immerse myself in the company's advanced technologies and solutions, and strengthen my standing among the specialists working with the Cisco product ecosystem in Russia.
Fulfilling this mission in practice turned out to be difficult: the technical program was extremely intensive. It is physically impossible to attend all the round tables, panels, workshops and discussions, split across many parallel sections and streams. Everything was covered: data centers, networking, information security, software solutions, hardware. Every aspect of the work of Cisco and its partners had its own section with a huge number of events. I had to follow the organizers' recommendations and put together a personal schedule of events, booking seats in the halls in advance.
I will dwell in more detail on the sessions that I managed to attend.
Accelerating Big Data and AI/ML on UCS and HX (accelerating AI and machine learning on the UCS and HyperFlex platforms)

This session was devoted to a review of Cisco platforms for developing artificial intelligence and machine learning solutions. It was a semi-marketing event interspersed with technical details.
The bottom line is this: IT engineers and data specialists today spend a significant amount of time and resources designing architectures that combine legacy infrastructure, several machine learning stacks and the software to manage it all.
This is the task Cisco aims to simplify: the vendor focuses on changing the traditional patterns of data center and workflow management by raising the level of integration of all the components required for AI/ML.
As an example, a joint Cisco and Google case study was presented: the companies combine the UCS and HyperFlex platforms with leading industry AI/ML software products such as Kubeflow to create a complete on-premises infrastructure.
The company described how Kubeflow deployed on top of UCS/HX, together with the Cisco Container Platform, turns the solution into what Cisco employees called the "Cisco/Google open hybrid cloud": an infrastructure in which AI workloads can be developed and run symmetrically on on-premises components and in Google Cloud at the same time.
Internet of Things (IoT) session

Cisco is actively promoting the idea that IoT needs to be developed, and developed on the basis of its own network solutions. The company presented its Industrial Router product: a special line of compact LTE switches and routers with increased fault tolerance, moisture protection and no moving parts. Such switches can be embedded in almost any object in the surrounding world: transport, industrial facilities, commercial buildings. The basic idea: "Deploy these switches at your sites and manage them from the cloud through a centralized console." The product line runs Kinetic software to streamline remote deployment and management. The goal is to make IoT systems more manageable.
ACI Multi-Site Architecture and Deployment (ACI, or Application Centric Infrastructure, and network microsegmentation)

This session covered the concept of an infrastructure built around network microsegmentation. It was the most difficult and detailed session I managed to attend. Cisco's general message was as follows: previously, the traditional elements of IT systems (network, servers, storage, etc.) were connected and configured separately, and the engineers' task was to bring everything together into a single, manageable working environment. UCS changed the situation: the network part was separated into its own layer, and servers became manageable centrally from a single panel. No matter how many servers there are, 10 or 10,000, any number is controlled from a single point, and control and data travel over a single wire. ACI makes it possible to consolidate management of both the network and the servers into one console.
Network microsegmentation is the most important function of ACI: it allows applications in a system to be separated granularly, each with its own level of communication with the others and with the outside world. For example, two virtual machines running under ACI cannot communicate with each other by default. Communication is opened only through a so-called "contract", which lets you spell out access lists in detail for fine-grained (in other words, micro) network segmentation.
Microsegmentation makes it possible to fine-tune any segment of an IT system, isolating individual components and linking them together in any configuration of physical and virtual machines. Endpoint groups (EPGs) of compute elements are created, and traffic filtering and routing policies are applied to them. Cisco ACI allows you to group the EPGs of existing applications into new microsegments (uSeg) and configure network policies or VM attributes for each specific element of a microsegment.
For example, you can assign web servers to a single EPG in order to apply the same policies to them. By default, all endpoints within an EPG are free to communicate with each other. However, if the web EPG includes both development and production web servers, it may be worth preventing them from talking to each other to keep failures from spilling over. Microsegmentation with Cisco ACI allows you to create a new EPG and automatically assign policies to it based on VM name attributes such as "Prod-xxxx" or "Dev-xxx".
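To make the EPG/contract model easier to picture, here is a minimal conceptual sketch in Python. It only illustrates the logic described above (name-based uSeg classification and contract-based allow rules); the EPG names, patterns and contract pairs are made up, and this is not the Cisco ACI API.

```python
import re

# Hypothetical, simplified model of ACI microsegmentation concepts
# (EPGs, uSeg attribute rules, contracts). Purely illustrative.

USEG_RULES = {
    "web-prod": re.compile(r"^Prod-"),   # assign by VM-name attribute
    "web-dev": re.compile(r"^Dev-"),
}

# Contracts: which EPG pairs are allowed to talk (everything else is denied).
CONTRACTS = {
    ("web-prod", "db-prod"),
}

def classify(vm_name: str) -> str:
    """Assign a VM to a uSeg EPG based on its name attribute."""
    for epg, pattern in USEG_RULES.items():
        if pattern.match(vm_name):
            return epg
    return "baseline"   # default EPG if no attribute rule matches

def allowed(src_epg: str, dst_epg: str) -> bool:
    """Traffic inside one EPG is allowed; between EPGs only via a contract."""
    if src_epg == dst_epg:
        return True
    return (src_epg, dst_epg) in CONTRACTS or (dst_epg, src_epg) in CONTRACTS

print(classify("Prod-web-01"))          # web-prod
print(allowed("web-prod", "web-dev"))   # False: no contract between prod and dev
print(allowed("web-prod", "db-prod"))   # True: a contract exists
```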
Of course, this was one of the key sessions of the technical program.
Effective Evolution of DC Networking (the evolution of data center networks in the context of virtualization technologies)

This session was logically linked to the network microsegmentation session and also covered container networking. In general, it dealt with migration from virtual routers of one generation to routers of the next, with architecture diagrams, connection schemes between different hypervisors, and so on.
The topics included the ACI-VXLAN architecture, microsegmentation, and the distributed firewall, which make it possible to configure a firewall for, say, a hundred virtual machines at once.
The ACI architecture lets you perform these operations not at the guest OS level but at the virtual network level: configuring a specific set of rules for each machine not manually from the OS but at the virtualized network level is safer, faster and less labor-intensive, and it gives better control over everything that happens in every network segment. What's new:
- ACI Anywhere makes it possible to extend policies to public clouds (for now AWS, with Azure to follow), as well as to on-premises elements or to the web, simply by copying the necessary configuration settings and policies.
- Virtual Pod is a virtual instance of ACI, a copy of the physical control module; using it seems to require a physical original (though this is not certain).
How this can be applied in practice: extending network connectivity into the large clouds. Multicloud is on the rise; more and more companies use hybrid configurations and face the need to configure networks separately in each cloud environment. ACI Anywhere now makes it possible to extend networks with a single approach, single protocols and single policies.
Designing Storage Networks for the NextFlash DC (SAN)
An interesting session about SAN networks with a demonstration of best practices for their configuration.
The top content: overcoming slow drain in SAN networks. It occurs when one of two or more storage arrays is upgraded or replaced with a more powerful configuration while the rest of the infrastructure stays the same. This leads to a "slowdown" of all applications running on that infrastructure. The FC protocol has no window-size negotiation mechanism of the kind the IP protocol has. Therefore, when the amount of data being sent is out of balance with the bandwidth and processing capacity of the channel, there is a chance of running into slow drain. The recommendation for overcoming it is to keep the bandwidth and speed of the host edge and the storage edge in balance, so that the aggregated link speed there is higher than in the rest of the fabric. Ways to identify slow drain were also considered, such as traffic segregation using VSANs.
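As a rough illustration of the balance the presenters were talking about, here is a small sketch that computes the edge-to-ISL oversubscription ratio for the host and storage edges. All port counts and speeds are made-up numbers, not figures from the session.

```python
# Hypothetical example: comparing host-edge and storage-edge oversubscription
# in an FC fabric. The numbers below are invented for illustration only.

def oversubscription(edge_ports: int, edge_speed_gbps: float,
                     isl_ports: int, isl_speed_gbps: float) -> float:
    """Ratio of total edge bandwidth to total ISL (uplink) bandwidth."""
    return (edge_ports * edge_speed_gbps) / (isl_ports * isl_speed_gbps)

# Host edge: 48 hosts at 16G FC, uplinked over 4 x 32G ISLs.
host_ratio = oversubscription(48, 16, 4, 32)       # 6.0 : 1

# Storage edge: 8 array ports at 32G FC, uplinked over 4 x 32G ISLs.
storage_ratio = oversubscription(8, 32, 4, 32)     # 2.0 : 1

print(f"host edge {host_ratio}:1, storage edge {storage_ratio}:1")
# A large mismatch between the two edges (for example after upgrading only
# the storage side) is the kind of imbalance that can lead to slow drain.
```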
Much attention was paid to zoning. The main recommendation for configuring a SAN is to follow the "1 to 1" principle (one initiator is zoned to one target). If the fabric is large, this generates a huge amount of work, and the TCAM is not infinite either, which is why the smart zoning and auto zoning options appeared in Cisco's SAN management software.
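To get a feel for why strict 1-to-1 zoning becomes hard work at scale, here is a back-of-the-envelope sketch comparing the number of zones for single-initiator/single-target zoning with the number of pairwise hardware entries in one big zone. The device counts are arbitrary, not from the session.

```python
# Hypothetical numbers: 200 host initiators, 16 storage target ports.
initiators, targets = 200, 16

# Strict 1-to-1 zoning: one zone per initiator-target pair that must talk.
# Assuming every host needs to reach every target:
one_to_one_zones = initiators * targets          # 3200 zones to create and maintain

# One big zone with all members: few zones, but hardware ACL (TCAM) entries
# are typically programmed per member pair, so the entry count explodes:
members = initiators + targets
pairwise_entries = members * (members - 1) // 2  # 23220 possible pairs

print(one_to_one_zones, pairwise_entries)
# Smart zoning addresses this by programming only initiator-to-target pairs
# within a zone, skipping initiator-to-initiator and target-to-target entries.
```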
HyperFlex Deep Dive session
Find me in the photo :-)

This session was devoted to the HyperFlex platform as a whole: its architecture, data protection methods, and various application scenarios, including new-generation tasks such as data analytics.
The main message: the platform's capabilities today allow it to be tailored to any task, scaling and distributing its resources across the workloads the business faces. The experts presented the main advantages of the hyperconverged architecture, the most important of which today is the ability to quickly deploy advanced technology solutions with minimal infrastructure configuration effort, reducing IT TCO and increasing productivity. Cisco delivers all of these benefits through its advanced network solutions and management and monitoring software.
A separate part of the session was devoted to Logical Availability Zones, a technology that increases the fault tolerance of server clusters. For example, if 16 nodes are assembled into a single cluster with a replication factor of 2 or 3, the technology will distribute the data copies across the nodes, containing the consequences of possible server failures at the cost of some usable capacity.
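A small back-of-the-envelope sketch of the space trade-off mentioned above; the per-node capacity is an assumption for illustration, not a figure from the session.

```python
# Hypothetical example: usable capacity of a hyperconverged cluster
# as a function of the replication factor (RF). Numbers are made up.

nodes = 16
raw_per_node_tb = 10.0           # assumed raw capacity per node
raw_total_tb = nodes * raw_per_node_tb

for rf in (2, 3):
    usable_tb = raw_total_tb / rf          # each block is stored rf times
    tolerated_failures = rf - 1            # copies that can be lost without data loss
    print(f"RF={rf}: ~{usable_tb:.0f} TB usable of {raw_total_tb:.0f} TB raw, "
          f"survives {tolerated_failures} simultaneous failure(s)")
```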
Results and conclusions

Cisco is actively promoting the idea that today absolutely all capabilities for configuring and monitoring IT infrastructure are available from the cloud, and that you should move to these solutions as soon as possible and on a large scale, simply because they are more convenient, eliminate a mountain of infrastructure issues, and make your business more flexible and modern.
As device performance grows, so do all the associated risks. 100-gigabit interfaces are already a reality, and you need to learn to manage the technology in line with the needs of the business and its competencies. Deploying IT infrastructure has become easy, but managing and developing it has become many times more complex.
At the same time, there seems to be nothing radically new at the level of basic technologies and protocols (it is all still Ethernet, TCP/IP, and so on), but multiple layers of encapsulation (VLAN, VXLAN, etc.) make the overall system extremely complex. Very complex architectures and problems now hide behind outwardly simple interfaces, and the price of a single mistake keeps rising. The easier it is to manage, the easier it is to make a fatal blunder. You should always remember that a policy you change is applied instantly and affects all devices in your IT infrastructure. Going forward, the introduction of new technological approaches and concepts such as ACI will require a radical overhaul of training and of processes within the company: simplicity comes at a high price. Progress brings risks of a completely new level and profile.
Epilogue

While I was preparing this article about the Cisco Live technical sessions, my colleagues from the cloud team managed to visit Cisco Connect in Moscow. Here is what they found interesting there.
Panel discussion on digitalization challenges
A talk by IT managers from a bank and a mining company. The summary: whereas IT specialists previously came to management to get purchases approved and succeeded only with a struggle, now it is the other way around: management chases after IT as part of enterprise digitalization. Two strategies stand out here. The first can be called the "innovator" strategy: finding novelties, filtering and testing them, and finding practical applications for them. The second, the "early follower" strategy, implies the ability to find cases from Russian and foreign colleagues, partners and vendors and apply them in your own company.

Stand "Data Centers with the New Cisco AI Platform Server (UCS C480 ML M5)"
The server contains 8 NVIDIA V100 GPUs, 2 Intel CPUs with up to 28 cores each, up to 3 TB of RAM and up to 24 HDD/SSD drives, all in a single 4U chassis with a powerful cooling system. It is designed to run artificial intelligence and machine learning applications; in particular, under TensorFlow it delivers 8 x 125 teraFLOPS of performance. On the basis of this server, a system for analyzing the routes of conference visitors was built by processing video streams.
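Just to decode the 8 x 125 teraFLOPS figure quoted above, a trivial sketch (the per-GPU number is the one quoted at the stand; the rest is arithmetic):

```python
# Aggregate performance quoted for the UCS C480 ML M5.
gpus = 8
tflops_per_gpu = 125          # per-V100 figure quoted above

total_tflops = gpus * tflops_per_gpu
print(f"{total_tflops} teraFLOPS = {total_tflops / 1000} petaFLOPS")  # 1000 TFLOPS = 1 PFLOPS
```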
New Nexus 9316D Switch
The 1U chassis accommodates 16 ports at 400 Gbps each, for a total of 6.4 Tbps.
For comparison, I looked up the peak traffic of MSK-IX, the largest traffic exchange point in Russia: 3.3 Tbps. In other words, a significant part of the Runet fits into one rack unit.
It supports L2, L3 and ACI.
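A quick sanity check of those numbers (the MSK-IX figure is the one quoted above; the rest is arithmetic):

```python
# Nexus 9316D headline numbers versus the MSK-IX peak mentioned above.
ports = 16
port_speed_gbps = 400

total_tbps = ports * port_speed_gbps / 1000        # 6.4 Tbps in 1U
msk_ix_peak_tbps = 3.3                             # peak traffic quoted above

print(f"{total_tbps} Tbps total, ~{total_tbps / msk_ix_peak_tbps:.1f}x the MSK-IX peak")
```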
And finally, a picture to attract attention: our talk at Cisco Connect.
The first article: Cisco Live EMEA 2019: trading in an old IT bicycle for a BMW in the clouds