While tentative confirmation of the Higgs boson came on March 14, 2013, after a previously unknown particle with a mass between 125 and 127 gigaelectronvolts (GeV) was discovered back in July 2012, I thought I’d revisit the ongoing efforts at CERN and how exactly the organization manages all of the data it collects from the millions of collisions produced in the Large Hadron Collider (LHC) each second.

The detectors generate approximately one petabyte of data every second, and since none of today’s computing systems is capable of recording information at that rate, CERN devised a sophisticated trigger system that pares the stream down to roughly one in every ten thousand events; of those, only about one percent is then kept for analysis. Even with such drastic reductions, the four primary experiments running at the LHC still produce over 25 petabytes a year that must be stored.

The animation below, released by the organization, helps explain how this sheer volume of data is managed and distributed across a grid of computers in 36 collaborating countries around the world, which scientists use to run simulations and analyze the results being generated.
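As a rough sanity check on the reduction figures above, here’s a small back-of-the-envelope sketch in Python. The petabyte-per-second input rate and the two filter stages come straight from the numbers quoted in the post; the variable names and the assumption of year-round running are mine, so treat the output as an order-of-magnitude estimate rather than CERN’s own accounting:

```python
# Back-of-the-envelope estimate of the LHC's trigger-level data reduction.
# The input rate and the two keep fractions are the figures quoted above;
# the continuous year-round running is an illustrative assumption (the
# collider doesn't actually run all year), so this slightly overestimates.

RAW_RATE_BYTES_PER_SEC = 1e15    # ~1 petabyte per second off the detectors
FIRST_STAGE_KEEP = 1 / 10_000    # trigger keeps ~1 in every 10,000 events
SECOND_STAGE_KEEP = 1 / 100      # ~1% of those are kept for analysis

SECONDS_PER_YEAR = 3600 * 24 * 365

def surviving_rate(raw_rate_bytes_per_sec: float) -> float:
    """Data rate (bytes/second) remaining after both reduction stages."""
    return raw_rate_bytes_per_sec * FIRST_STAGE_KEEP * SECOND_STAGE_KEEP

per_second = surviving_rate(RAW_RATE_BYTES_PER_SEC)
per_year = per_second * SECONDS_PER_YEAR

print(f"Kept per second: {per_second / 1e9:.0f} GB")  # -> 1 GB/s
print(f"Kept per year:   {per_year / 1e15:.0f} PB")   # -> ~32 PB
```

That lands in the low tens of petabytes per year, right around the “over 25 petabytes” figure, which suggests the quoted reduction factors hang together.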