
The LHC increases uptime to 70% and sets records for the number of collisions


A small part of the CMS collaboration in front of a full-scale photograph of the Compact Muon Solenoid (CMS)

The Large Hadron Collider is handling more proton collisions than ever before: about 1 billion per second. That is a lot; the collider was not originally designed to be used so intensively. This year alone, the LHC has collected more data than in all previous years of operation combined.

The main reason for the increased experimental output is the collider's high reliability, even after raising the beam energy to 13 TeV. The LHC has had almost no downtime this year. Physicists are now trying to gather more information about the Higgs boson, an elementary particle produced roughly once per billion collisions.

“Each proton collision can be compared to a spin of a roulette wheel with several billion possible outcomes,” says Jim Olsen, a professor of physics at Princeton University who works on the Compact Muon Solenoid (CMS), one of the two large general-purpose particle detectors at the LHC.
Experiments at the Compact Muon Solenoid range from the search for Higgs bosons to hunts for extra dimensions of space and time and for particles that could interact with dark matter or make up part of it. In total, about 4,300 scientists, engineers, technicians and students from 179 laboratories and universities in 41 countries, including Russia, Ukraine and Belarus, work within the CMS collaboration.

The Compact Muon Solenoid is 21.6 meters long, 15 meters in diameter and weighs approximately 14,000 tons.

How data is collected from the detector


Most collisions in the collider produce nothing of interest, so a very large number of collisions is required to collect valuable scientific data. After a collision, the resulting particles fly apart. Some of them pass through several CMS layers, leaving “traces” (events) that the detector reads out at 40 MHz. Each event is roughly 1 megabyte of data, so at this rate the detector generates about 40 terabytes per second.
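
The 40 terabytes per second quoted above follows directly from the two numbers in the paragraph. A quick sketch of the arithmetic (Python; decimal units assumed for illustration):

```python
# Back-of-the-envelope check of the raw detector data rate.
# Both inputs (40 MHz event rate, ~1 MB per event) come from the article.
event_rate_hz = 40e6        # events read out per second
event_size_bytes = 1e6      # ~1 megabyte per event

raw_rate_tb_s = event_rate_hz * event_size_bytes / 1e12
print(f"Raw data rate: {raw_rate_tb_s:.0f} TB/s")  # -> 40 TB/s
```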

It is impossible to store such volumes. Fortunately, the detector has a built-in event filtering system that discards insignificant events. First, the level-1 hardware triggers, FPGAs on the detector itself, fire and reduce the number of events by a factor of roughly 1000. Then the level-2 software triggers take over: the data is sent over fiber to nearby servers running C++ software that performs high-level filtering of the signal. After the two filtering stages, approximately 1000 potentially interesting events per second remain for scientific analysis. For subsequent analysis, the detector therefore delivers only about 1 gigabyte per second, which is comparatively little.
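
A minimal sketch of that two-stage reduction, using the approximate factors from the article (the real CMS trigger logic is of course far more elaborate; the constant names here are purely illustrative):

```python
# Model of the two-level trigger chain: hardware (FPGA) level-1 triggers
# cut the rate ~1000x, then the software high-level triggers keep only
# ~1000 events per second for analysis.
COLLISION_RATE_HZ = 40e6   # raw event rate seen by the detector
L1_REDUCTION = 1000        # approximate level-1 hardware reduction factor
HLT_OUTPUT_HZ = 1000       # events/s surviving the software triggers
EVENT_SIZE_MB = 1.0        # ~1 MB per event

after_l1_hz = COLLISION_RATE_HZ / L1_REDUCTION   # ~40,000 events/s to the servers
to_storage_gb_s = HLT_OUTPUT_HZ * EVENT_SIZE_MB / 1000

print(f"After level-1 triggers: {after_l1_hz:,.0f} events/s")
print(f"Written out: ~{to_storage_gb_s:.0f} GB/s")  # -> ~1 GB/s
```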

Data that passes both levels of filtering is written to tape storage and also distributed over the high-speed LHC Computing Grid, a research network accessible to CMS collaborators around the world. In 2012 the collider sent about 25 petabytes per year to the grid, and the volume has grown since.

This data is analyzed in various ways. Scientists look for “anomalies” and try to find a theoretical basis for them, or, conversely, search for events whose existence theorists have predicted. For example, the existence of the Higgs boson follows from the Standard Model, and the thesis that a Higgs field is necessary for the consistency of the theory was formulated in the 1960s.

In 2012-2014, the CMS collaboration found traces of a particle with a mass of 125-126 GeV: the Higgs boson. The discovery was made possible by careful mining of the data collected from the detectors. These data were finally published in 2016.

Since April, the LHC has delivered about 2.4 quadrillion (2.4×10¹⁵) collisions to the ATLAS and CMS experiments. This unprecedented number is explained both by a gradual increase in the collision rate and by the LHC's higher uptime.

When scientists were planning the construction of the LHC, they assumed that scientific experiments would run at the collider only about 30% of the time. The rest of the time, engineers would be servicing the machine: checking systems, replacing fluids in the cryogenic cooling system, ramping the proton beams up to collision energy, and so on. In practice, the LHC is used far more intensively than expected: the collider now operates about 70% of the time. This year it has run stably and reliably, like clockwork, with almost no downtime.

The data stream from the LHC flows almost without stopping, like an avalanche. “We are taking about 10 times more data than last year,” says Paul Laycock, a physicist at the University of Liverpool who works on the ATLAS collaboration. “Over the course of Run 2 [the second operational run of the LHC; data-taking resumed in April 2016], more data has already been collected than during all of Run 1 [the first, three-year run]. Of course, the most important difference between the runs is that the collision energy has doubled.”

In the first few months of the second run, scientists collected as much data on the Higgs boson as in all three years of the first run. In two of the decay channels (the first two in the list below), the Higgs signal is already visible at a statistical significance of 10σ; see the sketch after the list for what that means in probability terms.

Recall that analysis of the Run 1 data revealed five decay channels of the Higgs boson:

  1. into two photons (γγ);
  2. into a ZZ pair, with subsequent decay into four leptons;
  3. into a WW pair;
  4. into a tau-lepton pair;
  5. into a b-anti-b quark pair.
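
For context on the 10σ figure mentioned above: discovery significance in particle physics is conventionally quoted as the one-sided tail probability of a standard normal distribution, i.e. the chance that background alone would fluctuate up to a signal at least this strong. A minimal sketch (Python with SciPy):

```python
# Convert "n sigma" significance into a p-value using the one-sided
# Gaussian tail convention common in particle physics.
from scipy.stats import norm

for n_sigma in (3, 5, 10):
    p = norm.sf(n_sigma)   # survival function: P(X > n_sigma)
    print(f"{n_sigma} sigma -> p = {p:.1e}")
# 5 sigma (p ~ 2.9e-7) is the conventional discovery threshold;
# 10 sigma (p ~ 7.6e-24) leaves essentially no room for a statistical fluke.
```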

The scientists' joy was overshadowed by minor technical difficulties. It turned out that the original LHC budget was not designed for such intensive scientific work in 2016. In particular, hard drives for data storage were purchased on the basis of an estimated uptime of 30%, not 70%. “Since the LHC is working better than even the most optimistic scenario assumed, we began to run out of disk space. We have to quickly consolidate old simulations and data to make room for the new collisions,” says Olsen.
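
It is easy to see how the 30% assumption broke the disk budget: annual storage scales linearly with live time. A rough estimate, assuming the ~1 GB/s post-trigger rate quoted earlier stays constant (real planning would account for varying rates and multiple data copies):

```python
# Rough annual storage estimate at the two uptime figures from the article.
POST_TRIGGER_GB_S = 1.0            # ~1 GB/s written after both trigger levels
SECONDS_PER_YEAR = 365 * 24 * 3600

for uptime in (0.30, 0.70):
    pb_per_year = POST_TRIGGER_GB_S * SECONDS_PER_YEAR * uptime / 1e6
    print(f"Uptime {uptime:.0%}: ~{pb_per_year:.0f} PB/year")
# 30% -> ~9 PB/year budgeted for; 70% -> ~22 PB/year actually arriving.
```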

A shortage of hard drives is a pleasant kind of problem, one that can simply be solved with money.

The roughly 2.4 quadrillion collisions recorded so far represent only about 1% of the data the LHC detectors are expected to deliver over the collider's entire lifetime: it is planned to run until 2037. Over these decades, scientists intend to carry out several upgrades to raise the collision energy above the current 13 TeV.

No one yet knows what we will see in collisions of beams at higher energy. “We only know that we have a scientific instrument unprecedented in human history, and if any particles are produced in collisions at the LHC, we will find them,” says Olsen.

Source: https://habr.com/ru/post/369737/

