Restarting the LHC

By 0x7df, Sun 22 March 2015, in category Physics

particle physics

Image: the Large Hadron Collider / ATLAS at CERN. © 2007 Image Editor, Flickr (CC-BY).

As the Large Hadron Collider (LHC) gears up, after a big upgrade, to start its second major run, this post is a reminder of the story so far.

What is it?

The Large Hadron Collider (LHC) is the largest particle accelerator in the world, costing over €6.3 billion.

For hadrons, read protons. (Protons are one of a class of sub-atomic particles called hadrons; hence the name.) The basic idea is to accelerate the protons until they have very high energies, and then smash them into each other; the result of each collision is the creation of a number of other particles, which can be studied. In particular, its main purpose was to find and study one specific particle: the Higgs boson.

How are the particles accelerated?

To get to the necessary high energies, the protons need a long track; so they're steered around a 27 km-long underground ring, typically circulating many times before colliding. The acceleration itself is done by radio-frequency cavities, while the steering round the circular track, and the focusing of the beams, is done by thousands of powerful superconducting magnets.

How much energy?

The energy of the protons being collided is measured in TeV - tera-electron-volts (probably easier just to say teravolts - people will know what you mean from the context). During most of the first run of the LHC, the proton beams had an energy of 3.5 TeV.

Now, 1 TeV is 10\(^{12}\) - or 1 trillion - electron-volts, where an electron-volt is defined as the amount of energy an electron would have if it were accelerated from rest through a potential difference of 1 V. That turns out to be 1.6 \(\times\) 10\(^{-19}\) J, so 3.5 TeV is 5.6 × 10\(^{-7}\) J. For context, a Joule is the energy it takes to lift 100 g (e.g. a small apple) through 1 m at the earth's surface; so, with an energy of 3.5 TeV each, it would take the combined energy of only 1.8 million of these protons to lift that apple...
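Those conversions are easy to check. Here's a minimal sketch in Python, using the same round numbers as above (the 100 g apple and 1 m lift are just the illustrative values from the text):

```python
# Reproducing the back-of-the-envelope arithmetic above.
eV = 1.6e-19                    # joules per electron-volt
beam_energy_J = 3.5e12 * eV     # 3.5 TeV expressed in joules
print(beam_energy_J)            # ~5.6e-7 J per proton

apple_lift_J = 0.1 * 9.8 * 1.0  # m * g * h: lift 100 g through 1 m
print(apple_lift_J / beam_energy_J)  # ~1.8 million protons' worth of energy
```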

After the upgrade, the proton energy will be 6.5 TeV.

The total collision energy is twice the beam energy, because two beams of equal energy are directed head-on at each other. So the first run had a collision energy of 7 TeV, and the post-upgrade second run will have a collision energy of 13 TeV. The design intent was always that the LHC would achieve 14 TeV, but because of technological problems (discussed later), CERN have kept the energy lower.

However, even the lower collision energy of 7 TeV was a big step up from what had been achieved before: the previous record, set by the 6.3 km Tevatron proton-antiproton collider at Fermilab in Illinois, was 1.96 TeV. And it turned out OK because, as we'll see, the LHC fulfilled its main purpose of discovering the Higgs boson.

What are the limiting factors on collision energy?

The beam energy, and therefore the collision energy, is limited by the magnetic field that steers the beam, and the size of the ring.

The magnetic force provides the centripetal acceleration, so:

$$ q_pvB = \frac{\gamma m_p v^2}{r} $$

where \(q_p\) and \(m_p\) are the charge and mass of the proton, respectively, \(v\) is its speed, \(B\) is the magnetic field strength, \(r\) is the radius of the circular motion, and \(\gamma\) is the relativistic factor that we must include to account for the fact that the protons are moving at an appreciable fraction of the speed of light. The energy of a relativistic proton is \(E = \gamma m_p c^2\), so \(\gamma m_p = E / c^2\) and:

$$ q_p B = \frac{E v}{r c^2} $$

Hence:

$$ E = \frac{q_p B r c^2}{v} \approx q_p B r c $$

where the approximation holds because the protons travel at very nearly the speed of light, \(v \approx c\).

A larger ring has a larger radius of curvature, meaning gentler bending, so particles could be given higher energies and still be kept on track with a given field strength. For example, using the LHC-type 9 Tesla magnets in a 100 km-circumference collider, rather than a 27 km one, would allow an increase of the collision energy to around 50 TeV. Clearly, the limitation on size is the cost.
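As a rough numerical check of that scaling, here's a short Python sketch. The dipole field (~8.3 T) and effective bending radius (~2.8 km) are my own round numbers, not figures from the post; the bending radius is smaller than the geometric radius of the 27 km ring because the dipoles only occupy part of the circumference.

```python
# Ultra-relativistic beam energy from E ~ q * B * r * c.
q  = 1.602e-19   # proton charge, C
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

def beam_energy_TeV(B, r):
    """Beam energy in TeV for bending field B (tesla) and bending radius r (m)."""
    return q * B * r * c / eV / 1e12

print(beam_energy_TeV(B=8.3, r=2.8e3))            # ~7 TeV: the LHC design beam energy
print(beam_energy_TeV(B=8.3, r=2.8e3 * 100 / 27)) # ~26 TeV per beam in a 100 km ring,
                                                  # i.e. ~52 TeV collision energy
```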

Alternatively, for a fixed-size ring, further increasing the beam energy means having to increase the field strength. The limiting factor for the field strength is local heating in the superconducting coils, caused in turn by movement due to the high fields themselves. The heating causes the coils to cease to be superconducting; this is called quenching. The magnet starts to conduct normally, and releases a large part of its stored energy as heat, which in turn boils off the liquid helium coolant and induces high pressures. (Note that, during preparations, quenching is done deliberately to increase the strength of the field that the magnets can support. The field is gradually increased until quenching occurs; when this process is repeated, the quenching occurs at a higher field strength. This is continued until the magnets can support a sufficiently high field for the beam energy required. The process is called training, and takes months.)

Furthermore, protons, like electrons and other charged particles, emit synchrotron radiation and thus lose energy when they travel in circular orbits such as in the LHC. The radiated power goes as the fourth power of \(\gamma\), and since the energy of a relativistic proton is \(\gamma m_p c^2\), the rate of energy loss is proportional to the fourth power of the beam energy. The loss is far worse for electrons and positrons than for protons, because at a given energy their much smaller mass means a much larger \(\gamma\).
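For reference, the standard textbook result for the power radiated by an ultrarelativistic particle of charge \(q\) and mass \(m\) moving on a circle of radius \(r\) is:

$$ P = \frac{q^2 c}{6 \pi \varepsilon_0 r^2} \gamma^4 = \frac{q^2 c}{6 \pi \varepsilon_0 r^2} \left( \frac{E}{m c^2} \right)^4 $$

which makes explicit both the fourth-power dependence on beam energy and the enormous penalty for light particles: at the same energy, an electron radiates \((m_p/m_e)^4 \approx 10^{13}\) times more power than a proton.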

Is the beam energy all that matters?

The other important aspect of a particle collider is the intensity of the beam, which dictates the rate of collisions. This is usually characterised by a quantity called the luminosity.

Working out the collision rate is fairly simple. It helps to know that the particles are actually grouped into bunches, rather than being a continuous stream. To count collisions, we can imagine one bunch as a stationary target with the other bunch passing through it (the kinematics don't matter for this counting argument; only the geometry does). The number of protons in the target bunch is \(n_T\), and the effective cross-sectional area of each proton is \(\sigma\), so the total area 'blocked out' by the target protons is \(n_T\sigma\). The actual cross-sectional area of the beam is \(\pi r^2\), so the proportion of this that's blocked out by protons is \(n_T\sigma / \pi r^2\) - the rest of the area is empty space. (This assumes that the protons in the bunch are spaced out enough that, if we were to look along the length of the 'target' bunch, none of the protons further away from us would be hidden behind any others. Because the size of the protons is so astonishingly tiny relative to the cross-sectional area of the beam, this is a fair assumption.)

Now, when the incident bunch of protons, \(n_I\), impacts the target bunch, the fraction of them that collide with a proton in the target is equal to the fraction of the total area that's blocked out by target protons; which we've already determined to be \(n_T\sigma / \pi r^2\). Hence there are \(n_In_T\sigma / \pi r^2\) collisions for every bunch crossing. If the rate of bunch crossings is \(f\), then the overall collision rate is \(fn_In_T\sigma / \pi r^2\).

The cross-section for inelastic scatter is about 60 mb, which is 6 × 10\(^{-30}\) m\(^2\). The diameter of the beam at the point of collision is about 16 microns, so its cross-sectional area is 2 × 10\(^{-10}\) m\(^2\). The ratio of these areas - and therefore the probability that any particular pair of protons (one from each bunch) collides - is about 3 × 10\(^{-20}\). The number of protons per bunch, \(n_I \approx n_T\), is about 10\(^{11}\), which suggests the number of collisions is 300 per bunch crossing. The bunches are crossed every 50 ns, so there are 2 × 10\(^7\) crossings per second, and so 6 billion collisions per second. (Actually, these figures are about a factor of 15 too high - CERN reports the number of collisions per bunch crossing to be about 20 for a bunch size of 10\(^{11}\), suggesting the effective cross-section should be more like 4 mb than the 60 mb used here.)

It's conventional to take the proton cross-section, which is dependent on the proton energy but otherwise doesn't vary with the characteristics of the accelerator, out of this equation; what's left is referred to as the luminosity, L:

$$ L = \frac{f n_I n_T }{\pi r^2} $$

which is expressed in collisions per unit area per unit time, i.e. it has units of \([T]^{-1} [L]^{-2}\).  With luminosity so defined, the collision rate is just the product of the accelerator's luminosity and the cross-section of the proton collision (at the particular proton energy in question).
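Plugging the numbers from the previous section into this formula is straightforward; the sketch below just reproduces that back-of-the-envelope arithmetic in Python (same round numbers, and the same caveat that the result comes out roughly a factor of 15 too high):

```python
# Luminosity and collision rate from the simple geometric model above.
import math

f      = 1 / 50e-9   # bunch-crossing rate: one crossing every 50 ns
n_i    = 1e11        # protons per bunch in the 'incident' beam
n_t    = 1e11        # protons per bunch in the 'target' beam
radius = 8e-6        # beam radius at the collision point (~16 micron diameter)
sigma  = 6e-30       # inelastic cross-section, ~60 mb, in m^2

area = math.pi * radius ** 2
L    = f * n_i * n_t / area   # luminosity, per m^2 per second
rate = L * sigma              # collision rate, per second

print(f"L    = {L:.1e} m^-2 s^-1")
print(f"rate = {rate:.1e} collisions per second")   # ~6e9, as in the text
```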

The total number of collisions during some time period is the integral of the collision rate over the time period. Of course, this is equal to the integral of the luminosity over the time period, times the cross-section (assuming that the cross-section remains constant over time, because it only depends on the beam energy, so it can be brought out of this integration):

$$ N = \int_0^T R dt $$
$$ N = \sigma \int_0^T L dt $$

The integrated luminosity has units of inverse area, \([L]^{-2}\).

The unit of area in typical use is the barn, which is approximately the cross-sectional area of a uranium nucleus (1 b is 10\(^{-28}\) m\(^2\)). A femtobarn is 10\(^{-15}\) b, so 1 fb = 10\(^{-43}\) m\(^2\). Hence, the luminosity can be expressed in fb\(^{-1}\) s\(^{-1}\), which, when multiplied by the collision cross-section in fb, gives a collision rate in s\(^{-1}\). The time-integrated luminosity is therefore typically expressed in inverse femtobarns. In fact, the total number of collisions is also usually quoted in inverse femtobarns (the true number of collisions would be this number times the cross-section of the proton-proton collision at whatever energy is being used). Taking the inelastic proton-proton cross-section at 7 TeV collision energy to be the ~60 mb used above, 1 fb\(^{-1}\) corresponds to about 6 × 10\(^{13}\) collisions.
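The unit conversion is easy to fumble, so here's a small sketch of it (again using the ~60 mb figure from earlier; a different choice of cross-section just rescales the answer):

```python
# From integrated luminosity in inverse femtobarns to a number of collisions.
mb_to_fb = 1e12           # 1 mb = 10^-3 b and 1 b = 10^15 fb
sigma_fb = 60 * mb_to_fb  # ~60 mb inelastic cross-section, in femtobarns

def n_collisions(integrated_lumi_inv_fb):
    """N = sigma * integrated luminosity; the fb and fb^-1 cancel."""
    return sigma_fb * integrated_lumi_inv_fb

print(f"{n_collisions(1.0):.1e}")  # 1 fb^-1 -> ~6e13 collisions
```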

What about the magnets?

There are several thousand superconducting magnets of various types, cooled by helium to 1.9 K. For example, there are 1,232 dipole magnets, which bend the beams around the ring, and 392 quadrupole-type magnets, which keep the proton beams focused along the straight sections.

So there were teething problems?

There were some early problems. On 27 March 2007 one of the quadrupole-type superconducting magnets failed a preliminary test. The magnet was deliberately being subjected to high pressures, similar to those that occur during quenching. The magnet that failed was built by Fermilab.

That was embarrassing, more for Fermilab than CERN, but things got worse after initial switch-on. Protons were first streamed in both directions around the tunnel on 10 September 2008, at 0.45 TeV beam energy. However, only nine days later, on 19 September 2008, an accident occurred in which the electrical connection between two of the 1,232 superconducting dipole magnets evaporated while carrying a current of 8.7 kA, as the magnets were being ramped up in preparation for running at 5 TeV per beam. An electrical arc ruptured the cooling system, allowing six tonnes of liquid helium to boil off and leak into the tunnel. The pressure spike created a shock wave that damaged a few hundred metres of the tunnel. Repairs cost €40 million: 53 magnets had to be brought back to the surface and either repaired or replaced.

On 23 November 2009, over a year later, the LHC returned to accelerating protons, initially at 0.45 TeV per beam again. On December 9 members of the ATLAS collaboration spotted 2.36 TeV collisions during test circulations. By December 16, there had been 50,000 collisions at this energy. Overall this was four years later than originally expected.

In March 2010, CERN announced its plans to keep the collision energy at 7 TeV for the next 18-24 months, or until an inverse femtobarn of data had been collected.

It was realised that the copper stabilisers surrounding the superconducting cables, which carry the current if the superconductor fails, had too high a resistance at some of the joints between magnets. There are 10,000 of these connections. The upgrade that has been going on over the past two years has involved, among other things, fitting shunts across these connections. The upgrade cost €124 million. However, despite this, CERN is still going to run the LHC at 13 TeV rather than the design energy of 14 TeV.

What are the experiments?

There are four interaction points around the ring. The two main experiments are the 7,000-tonne ATLAS and the 12,500-tonne CMS (Compact Muon Solenoid), which are general-purpose detectors involved in the search for the Higgs boson. A third, smaller experiment - LHCb - carries out detailed investigations into B-mesons. A fourth is ALICE, which is designed to study collisions between heavy nuclei, such as lead, which the LHC delivers in special runs.

What is it for?

The Higgs boson was the last missing piece of the Standard Model of particle physics, which was conceived in the early 1970s and remains the best description of particle physics. The existence of the particle was predicted by Peter Higgs in 1964, and Higgs (along with Francois Englert) won the Nobel Prize for physics in 2013, after the particle was discovered at the LHC. The significance of the Higgs is usually said to be that it explains how some particles acquire mass.

The idea is that there's a uniform scalar field pervading the universe, and that the interaction of particles with this field is what gives those particles their mass. The stronger a particle's interaction with the field, the greater its mass: photons have no interaction whatsoever, electrons have a weak interaction, and so on. This field has come to be known as the Higgs field, and the phenomenon of particles acquiring mass through coupling with this field as the Higgs mechanism. An analogy often given for the mechanism is that it's like the particles are wading through treacle or molasses: the more strongly a particle couples to the field, the more sluggishly it responds, which we perceive as mass. But Higgs himself has complained that this isn't an appropriate metaphor, as the mechanism isn't dissipative.

This mechanism was independently proposed by Francois Englert and Robert Brout at the Free University of Brussels, as well as Higgs, and also by Gerald Guralnik, Carl Hagen and Thomas Kibble at Imperial College. Higgs has always felt uncomfortable about his name being the one associated with the phenomenon; in fact, Brout and Englert published their paper two weeks before he did. However, Higgs was the one who proposed that the mechanism would have 'experimental consequences'; i.e. that, as a consequence of wave-particle duality, vibrations in the Higgs field ought to manifest as particles, in the same way that vibrations in the electromagnetic field manifest as photons.

The theory did not, however, predict the mass of the Higgs boson. Hence a succession of machines were conceived, in part, to verify the existence of the Higgs boson and measure its mass: the LEP (the Large Electron-Positron Collider, which previously occupied the tunnel the LHC now uses, and which ran from 1989 to 2000 at collision energies of up to about 200 GeV, studying the W and Z bosons as well as searching for the Higgs), the Superconducting Super Collider (which started to be built in Texas but was never completed due to cost over-runs), the Tevatron at Fermilab, and ultimately the LHC.

Prior to the LHC, theory suggested the mass of the Higgs was probably no more than 186 GeV, and the LEP's failure to find it demonstrated that the Higgs could not be less than 114 GeV. The Tevatron data at 1.96 TeV collision energy ruled out masses around 165 GeV, and suggested the 160-180 GeV range was unlikely. This left two regions - a lighter region of 114-160 GeV and a heavier region of 180-186 GeV - unexplored.

Finding the Higgs

On 30 March 2010, 18 months after start-up, the first collisions at 7 TeV were achieved, but at low luminosity. The original intention was to run for 18-24 months at 7 TeV. Serious data collection began in 2011; by summer 2011 the mass range of the Higgs had been narrowed down to 115-145 GeV, and by 13 December 2011 CERN was able to announce that the range had been restricted to 116-130 GeV, with 'an intriguing excess' at around 125 GeV.

Further measurements began in April 2012. At this point, the beam energy was increased from 3.5 TeV to 4 TeV. On 4 July 2012, the discovery of the Higgs boson, with a mass of around 125 GeV, was announced with 5\( \sigma\) significance by both ATLAS and CMS. This is the same mass as about 133 protons, or one caesium atom.
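(That comparison is just a ratio of masses: the proton's mass is about 0.938 GeV, and \(125 / 0.938 \approx 133\); a caesium atom, at about 133 atomic mass units, comes out at roughly the same figure.)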

A further three months of run time was announced beyond the scheduled shut-down at the end of 2012. The first run ended in February 2013, after three years.

What is 'five sigma' significance?

The protons are collided in bunches, which causes pile-up - multiple collisions within each bunch crossing. Furthermore, protons aren't elementary particles like electrons, but are composite particles made of quarks held together by gluons. Therefore, when they collide at these high energies, a lot of debris is created - Feynman likened it to 'smashing garbage cans into garbage cans'. This means there's a huge background of detected events that disguise the events actually being looked for; getting confidence about the signal in the light of this background requires a very large number of collisions to be integrated over a long period. This is why the luminosity is so significant; obviously the higher the luminosity, the higher the collision rate, and the less time it takes to accumulate the necessary data.

So, the determination of the existence of the Higgs was a statistical one, based on the agglomeration of a lot of data, rather than something that could be determined from analysing any one particular collision. There are many possible outcomes from a proton-proton collision at these energies, each one a different combination of outgoing particles. Each combination of outgoing particles is referred to as a decay channel, and the probability of each channel, and of the production of each possible particle, is predicted by the Standard Model and precisely known. Deviations from these known probabilities, indicating the presence of a new particle, need to be built up over a long period of time to establish confidence that it's not merely background. The number of particles actually detected in a given energy range over a given period of time is not fixed, but follows a Poisson distribution with a mean equal to the expected number based on the probability. There's therefore a finite likelihood of detecting more than the expected number even if there is no new particle in that energy range; this is just a statistical fluctuation. The excess has to be large to be attributed to the presence of a new particle; 5σ is the 'gold standard' (i.e. the excess has to be at least 5 times the standard deviation for it to be confidently attributed to a new particle). An excess of 3σ is usually referred to as only 'evidence for' it.
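To make the statistics concrete, here's a toy counting-experiment sketch in Python. The background and observed counts are invented for illustration and are not ATLAS or CMS figures; the point is just how a p-value gets translated into 'sigmas':

```python
# Toy significance calculation for a counting experiment.
from scipy.stats import poisson, norm

b = 100    # expected background events in some mass window (hypothetical)
n = 160    # observed events in that window (hypothetical)

# Probability of seeing n or more events from background fluctuations alone...
p_value = poisson.sf(n - 1, mu=b)
# ...re-expressed as an equivalent number of Gaussian standard deviations.
significance = norm.isf(p_value)

print(f"p-value ~ {p_value:.1e}, i.e. ~{significance:.1f} sigma")
print(f"5 sigma corresponds to p ~ {norm.sf(5):.1e}")   # ~3e-7
```

In practice the experiments combine many decay channels and account for systematic uncertainties, so the real analyses are far more involved than this, but the 5\(\sigma\) threshold is interpreted in essentially this way.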

How much data is this?

Each collision/event that is recorded represents of the order of a megabyte of data. Because only a small fraction of the theoretically-possible decay channels involve a Higgs boson being produced, many can be disregarded; hence only a small fraction of the events taking place need to be recorded, with the majority being discarded in real time by the trigger system. The data are initially stored at the on-site tape silo facility known as Tier 0; reconstructed data are then delivered to regional centres around the world.

What's next?

The upgrade involved, among other things, consolidating the 10,000 electrical connections between the magnets by fitting the shunts described above. These modifications will allow the LHC to operate safely at the higher collision energy of 13 TeV.

The other operational change will be that the bunch size will be reduced from 1.7 × 10\(^{11}\) protons per bunch down to 1.2 × 10\(^{11}\). This will reduce 'pile-up' - the simultaneous occurrence of numerous collisions in a single bunch crossing, which are hard to disentangle from each other when analysing the data. However, the bunches will be collided every 25 ns instead of every 50 ns, giving an overall increase in luminosity despite the reduced bunch size.

Now that the two-year upgrade is complete, preparations are underway for the next major run. The SPS - a 7 km-long accelerator that feeds protons into the LHC - began powering up in early July 2014.

The objectives of the next set of experiments are to study the Higgs further, but also to investigate dark matter. We've known for some time that ordinary, observable matter makes up only about 5% of the energy content of the universe; the rest is invisible, hard to detect, and we don't know what it is. The remainder comprises dark energy (roughly 70%) and dark matter (roughly 25%), which we know is there only through its gravitational pull. At the new, higher energies, LHC scientists hope to be able to produce and detect particles with the right properties to be dark matter.

The LHC is expected to operate into the 2030s.
