Community News

Supporting the next generation of the Large Hadron Collider

Continuing our new series of blogs by GÉANT’s ‘travelling ambassadors’, Enzo Capone updates us on the latest meeting between the LHC experiments and the Research and Education Networking community. 

It’s been over a decade since the LHC first sent bunches of particles around its 27km-long tunnel, and since the Worldwide LHC Computing Grid began analysing the massive amounts of data produced by the four big detectors, each of which captures different aspects and flavours of the collisions between those ‘particle bunches’.

A lot has changed since then. Advances in technology have played a major role in re-shaping and re-focusing the initial ideas: not only the vastly improved computing power brought by new generations of CPUs, but, more profoundly, the exponential increase in network capacity and performance.

The current LHC computing model is heavily dependent on the underlying network that the worldwide Research and Education Networking (REN) community provides. The LHCOPN and LHCONE networks were created specifically to provide the best possible infrastructure for this community, and to this day they remain the most successful multidomain services offered by the global R&E networking community, with some 60 RENs worldwide delivering them to their users. The scope of LHCONE has also been expanded to serve additional High Energy Physics (HEP) experiments, such as BELLE2, NOVA, and the Pierre Auger Observatory.

The LHC facility itself is being continuously improved to reach ever higher levels of so-called ‘luminosity’ (simply speaking, the rate of particle collisions, which depends on how often the bunches cross, which depends on how precisely the beams are ‘focused’, which… you get the idea!). And the higher the luminosity, the more data is produced.
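For the mathematically inclined, the standard relation behind this (a textbook formula, not specific to any one experiment) is simply:

    dN/dt = L × σ

where dN/dt is the number of events per second for a given physics process, L is the instantaneous luminosity, and σ is that process’s cross-section. Increase the luminosity ten-fold and you get roughly ten times as many events, and with them ten times as much data to move, analyse and store.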

The next major iteration of this upgrade, which goes by the name of High Luminosity LHC (HL-LHC), is expected in 2026, and it will entail a huge advancement to both the collider and each experiment’s detectors: more events will be produced, and more data will be captured for each event. This step up will eventually impact the computing, through a massive increase in the data to be moved, analysed and stored.

On 13 January 2020, representatives of the computing activities of the High Energy Physics experiments (ATLAS, ALICE, CMS, LHCb, BELLE2) met with the REN community. The objective of the meeting was to evaluate how the network providers have supported the scientific activities so far and, more importantly, to gather information on the experiments’ future needs in terms of data production, so that the RENs can plan the upgrades of their infrastructures and services well ahead of time.

The LHCONE network has grown, both in geographical reach and in the number of connected sites, and it is time to revisit some of the assumptions and processes that have so far underpinned its design and operations, not least to build a more robust framework for security enforcement.

Perhaps the best way to show how the RENs have served the experiments so far is to let the experiments comment in their own words:

Nikola Hardi (ALICE): “The networking requirements for Run2 were fully satisfied! Always one step ahead of our needs.”

Alessandro Di Girolamo (ATLAS): “Networking is and has been one of the rock-solid, highly reliable building blocks of ATLAS computing successes.”

Danilo Piparo (CMS): “Network was a crucial ingredient for the success of CMS in Run1 and Run2. CMS counts on the same quality for Run3 and HL-LHC.”

Concezio Bozzi (LHCb): “The fast and reliable network provided to us in the past years is at the basis of our successful computing operations and ultimately of the physics productivity of LHCb”.

Then followed what we ‘REN people’ had all been waiting for: each experiment presented a more-or-less detailed picture of how the future upgrades will impact its data production. Without going into the nitty-gritty of each one, the general trend is to expect a ten-fold increase across the board between 2026 and 2030, compared to the last run of the LHC.

This is even more impressive considering the efforts every experiment is making to cut down the amount of data it produces, keeping only the most significant part: the part that will, hopefully, enable another incredible series of science-changing discoveries a few years down the line.

Enzo Capone is Head of Research Engagement at GÉANT

To learn more, visit impact.geant.org/CERN

 
