Professor Stephen Hawking, a central figure in black hole theory, during his recent visit to CERN, with colloquium organiser Luis Alvarez-Gaume on his left.
This time last year, talk of black holes overwhelmed the global news media. Closer to home, black holes are also making mischief – this time overwhelming the Trigger system.

It turns out that if a black hole event occurs in the first few months of data taking, we may actually be none the wiser. Not, as some tabloid newspapers were purporting, because we'll be swallowed into oblivion, but rather because they'll be masked as flawed events by the Trigger system.

The problem, according to Ignacio Aracena, who works on jets and missing ET, is not that there is nothing to trigger on. Quite the contrary: plenty of final-state particles will be produced, but to such an extent that the system will be inundated.

"We expect that black holes will decay into essentially all the Standard Model particles," says Ignacio. "But for black holes the number of jets is way higher [than for other events]. I'm not a black hole expert, but it's something like 10 jets with high transverse momentum."

Compare this to, for example, a supersymmetry event, where perhaps four or so jets, some missing transverse energy and a handful of leptons are expected, and you begin to get a sense of the challenge that black holes pose: they pretty much light up the whole detector.

"For the trigger, the main idea of having a sequential selection was to focus on interesting physics objects and then only do the reconstruction in the trigger in that region," Ignacio explains. Since only limited time is available to process events at Levels 1 and 2, reading out the whole detector simply isn't possible.

The situation right now is that the Trigger system is all but overwhelmed whenever Monte Carlo black hole events are run. Processing the jets and retrieving all the data for them just takes too long; the time-out built into the algorithms kicks in before processing is complete, and the data is instead dumped into the debug stream. This is a safety store where potentially interesting but problematic data – corrupted or noisy data, or events that crash during execution – is filed for later reprocessing offline.

"This debug stream handling will be done in quasi-real time," says Anna Sfyrla, who works on it, adding: "Events with time-outs will usually be recovered during this reprocessing." Recovered events are saved in datasets and made available for analysis, but so far there are no plans for them to be re-integrated into the online physics datasets.

"In the long term, we'll have to find a strategy to select these events," says Ignacio. Allowing the system to be snowed under trying to process black hole data, at the expense of picking out and processing other physics events, is not an option. "From an analysis point of view, of course it would be helpful to know that you have black hole events in a specific data set. But we have a broad physics program and you have to keep the whole system running."

Eventually, a specific trigger chain, or even a specific data stream, will likely be set up to select events with large jet multiplicity and high transverse energy.
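To make the bottleneck concrete, here is a minimal, purely illustrative Python sketch of the idea described above: a sequential selection works through an event's jets under a per-event time budget, and an event that exhausts the budget is routed to a debug stream instead of a physics stream. None of the names, thresholds, or timings below correspond to actual ATLAS trigger code; they are hypothetical stand-ins.

```python
import time

# Illustrative settings -- NOT actual ATLAS trigger parameters.
TIME_BUDGET_S = 0.040   # per-event processing budget
JET_PT_CUT = 100.0      # "high transverse momentum" threshold, in GeV
MIN_JETS = 10           # multi-jet signature mentioned in the article

def reconstruct_jet(seed):
    """Stand-in for regional jet reconstruction; each jet costs processing time."""
    time.sleep(0.005)   # simulate the cost of reconstructing one jet
    return {"pt": seed["pt"]}

def trigger_decision(event):
    """Reconstruct jets sequentially; give up if the time budget is exceeded."""
    start = time.monotonic()
    jets = []
    for seed in event["jet_seeds"]:
        if time.monotonic() - start > TIME_BUDGET_S:
            # Time-out: park the event for later offline reprocessing.
            return "debug_stream"
        jets.append(reconstruct_jet(seed))
    n_high_pt = sum(j["pt"] > JET_PT_CUT for j in jets)
    if n_high_pt >= MIN_JETS:
        # Hypothetical dedicated chain for black-hole-like multi-jet events.
        return "multijet_stream"
    return "physics_stream"

# A busy, black-hole-like event with many high-pT jets blows the budget...
busy_event = {"jet_seeds": [{"pt": 150.0}] * 12}
print(trigger_decision(busy_event))    # -> "debug_stream"

# ...while a quieter, SUSY-like event is processed within budget.
quiet_event = {"jet_seeds": [{"pt": 150.0}] * 4}
print(trigger_decision(quiet_event))   # -> "physics_stream"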
However, Ignacio concedes that with the current focus on really understanding the detector, its noise levels and its responses, "It's probably not something that we're going to claim to see in the first two years." That means that if black hole events occur at all, the debug stream is where they'll be discovered.

In the meantime, cosmic running is continually helping to improve the performance of the algorithms – an optimisation process that will continue with the arrival of beam and collisions. "In this context, any improvements we make, even while taking cosmic data, are going to benefit [the eventual online identification of black holes]," says Ignacio. "Having this finally sent to a specific data stream will be the sum of all the efforts that we're making right now and will do in the future."
Ceri Perkins
ATLAS e-News