Geneva, 1989: the beginning of the construction of one of the greatest and most expensive creations of modern times, the Large Hadron Collider. The project can fairly be called a feat of engineering, but the installation, a nearly 27-kilometer ring dug more than 90 meters underground on the Franco-Swiss border, is useless without enormous computing power and no less enormous data storage.

Such computing power comes to the European Organization for Nuclear Research (CERN) from an IT team running four cloud environments built on the open-source OpenStack software, which is rapidly becoming the industry standard for building clouds. CERN currently has four OpenStack clouds located in two data centers: one in Meyrin, Switzerland, the other in Budapest, Hungary.
The largest cloud, located in Meyrin, contains about 70,000 cores on 3,000 servers; the other three clouds contain roughly 45,000 cores in total. In addition, the CERN site in Budapest will be connected to the headquarters in Geneva by two 100 Gb/s communication links.

CERN began building its cloud environment back in 2011 on Cactus, an early release of the open-source OpenStack cloud software. The clouds went into production with the OpenStack Grizzly release in July 2013. Today, all four clouds run on the ninth release of the OpenStack platform, called Icehouse. CERN is now preparing to bring about 2,000 additional servers online, which is expected to increase the computing capacity of the cloud by an order of magnitude. Raising the collision energy of particles in the collider from 8 TeV (teraelectronvolts) to 13-14 TeV will make it generate far more data than it does now. Over the entire period of experiments, more than 100 petabytes of data have been collected, 27 of them in this year alone. According to plan, in the first quarter of 2015 this figure will grow to as much as 400 petabytes per year, and the cloud must be ready for it.
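A quick back-of-the-envelope calculation shows what a sustained rate of 400 petabytes per year means in practice; the sketch below derives everything from the figures quoted above:

```python
# Back-of-the-envelope: sustained ingest rate implied by 400 PB/year.
PB = 10**15                      # bytes in a petabyte (decimal)
SECONDS_PER_YEAR = 365 * 24 * 3600

yearly_bytes = 400 * PB
rate_bps = yearly_bytes / SECONDS_PER_YEAR   # bytes per second
rate_gbit = rate_bps * 8 / 10**9             # gigabits per second

print(f"{rate_bps / 10**9:.1f} GB/s sustained")  # roughly 12.7 GB/s
print(f"{rate_gbit:.0f} Gbit/s sustained")       # roughly 101 Gbit/s
```

Averaged over a year, that is on the order of the capacity of a single 100 Gb/s link, which helps explain why two such links to Budapest are planned.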

The CERN cloud architecture is a single system spanning the two data centers. Each data center, in Switzerland and in Hungary, hosts clusters, compute nodes, and controllers for those clusters. The cluster controllers report to the main controller in Switzerland, which in turn distributes the data flow between two load balancers.
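The fan-out described above can be pictured as a simple round-robin dispatch. The sketch below is purely a conceptual illustration of that pattern; the class and balancer names are hypothetical, not CERN's actual code:

```python
import itertools

class MainController:
    """Conceptual sketch: a top-level controller that spreads
    incoming requests across two load balancers in turn."""

    def __init__(self, balancers):
        # itertools.cycle yields the balancers round-robin, forever.
        self._cycle = itertools.cycle(balancers)

    def dispatch(self, request):
        balancer = next(self._cycle)
        return balancer, request

controller = MainController(["balancer-1", "balancer-2"])
targets = [controller.dispatch(f"req-{i}")[0] for i in range(4)]
print(targets)  # alternates: balancer-1, balancer-2, balancer-1, balancer-2
```

In a real deployment the dispatch would of course account for balancer health and load, not just alternate blindly.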

An OpenStack cloud is never built from the components of the OpenStack suite alone, and the CERN cloud is no exception. Other open-source components are used alongside it:
- Git: software version control system.
- Ceph: distributed object storage that runs on commodity servers.
- Elasticsearch: real-time distributed search and analytics.
- Kibana: visualization engine for Elasticsearch.
- Puppet: configuration management utility.
- Foreman: a tool to configure and control server resources.
- Hadoop: a distributed computing framework used for analyzing large volumes of data on server clusters.
- Rundeck: task scheduler.
- RDO: a software package for deploying OpenStack clouds on Red Hat-based Linux distributions.
- Jenkins: a continuous integration tool.
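To give a flavor of the Hadoop-style processing mentioned in the list, here is a toy map/reduce pass written in plain Python. It only illustrates the programming model that Hadoop applies at cluster scale; the sample "event" records are invented, not real experiment data:

```python
from collections import defaultdict

# Invented sample records standing in for detector event metadata.
events = [
    {"detector": "ATLAS", "size_mb": 120},
    {"detector": "CMS", "size_mb": 95},
    {"detector": "ATLAS", "size_mb": 80},
]

def map_phase(record):
    # Emit (key, value) pairs, as a Hadoop mapper would.
    yield record["detector"], record["size_mb"]

def reduce_phase(pairs):
    # Sum values per key, as a Hadoop reducer would.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

pairs = [pair for record in events for pair in map_phase(record)]
print(reduce_phase(pairs))  # {'ATLAS': 200, 'CMS': 95}
```

The point of the model is that the map and reduce phases parallelize naturally across many servers, which is what makes it suitable for petabyte-scale analysis.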
The configuration-management choice came down to Chef versus Puppet; both tools are mature and well integrated with other software. In the end, Puppet's rigorously declarative approach was judged better suited to this kind of work.
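The declarative idea can be sketched in a few lines of Python: instead of scripting the steps, you state the desired state, and a reconciliation loop works out which actions will converge the system toward it. The example below is a conceptual illustration only, not Puppet syntax or CERN's manifests:

```python
# Declarative configuration in miniature: describe the desired state;
# a reconciler computes the actions needed to reach it.
desired = {"ntp": "installed", "apache": "installed", "telnet": "absent"}
actual = {"ntp": "installed", "telnet": "installed"}

def reconcile(desired, actual):
    actions = []
    for package, state in desired.items():
        if state == "installed" and actual.get(package) != "installed":
            actions.append(("install", package))
        elif state == "absent" and actual.get(package) == "installed":
            actions.append(("remove", package))
    return actions

print(reconcile(desired, actual))  # [('install', 'apache'), ('remove', 'telnet')]
```

Because the input is a description of state rather than a sequence of commands, the same manifest can be applied repeatedly and safely, which is the property that made the declarative approach attractive here.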
The current system architecture is shown in the diagram below:

The CERN OpenStack environment is already quite large, but this is not the limit: in the not-too-distant future it is planned to at least double it, in step with the upgrade of the collider itself.
According to the plans, this will happen in the first quarter of 2015, since by that point the physicists will no longer have enough computing power to search for answers to the fundamental questions of the universe.
P.S. Peaceful elementary particles to all.