Scattering Amplitude Workshop at SCGP – the Amplituhedron!

I’ve just attended a workshop on Scattering Amplitudes at the Simons Center (posting here: http://scgp.stonybrook.edu/archives/7136, and check out the video here), so I thought I would put down a few thoughts on the most interesting recent result in this direction – the Amplituhedron. Naturally, since this is the Simons Center you need to translate “Scattering Amplitudes” to mean “scattering of strings and super-things”, but the techniques come from good old QFT.

Also, since this is not my direct area of expertise, this post will be written from the perspective of an “interested outsider”…anyway…

The point of all this focus on scattering amplitudes is that although Feynman diagrams give us a nice way of organizing complicated QFT calculations, in the end we always end up computing integrals that look like

\int d^4l \frac{\mathcal{N}}{l^2(l+p_1)^2(l+p_1+p_2)^2(l-p_4)^2}

(specifically, this is the scattering of 4 particles with momenta p_1, p_2, p_3, and p_4=-(p_1+p_2+p_3) at 1 loop with loop momentum l). The numerator \mathcal{N} might be some complicated thing, but when these particles are all massless, all the interesting information is contained at the poles (where l^2=0 or (l+p_1)^2=0, for example). There is nothing too new in wanting to understand these kinds of integrals – this was the point of “The S-matrix Program”, which led to all kinds of interesting work in the 1960s and 70s, but has died out a little since then. The revival has occurred because apparently when one restricts to N=4 super Yang-Mills, enough simplifications occur that further progress can be made.

This further progress has led to “the amplituhedron”, which has been propagandized as “the end of locality and unitarity in physics!” by Quanta magazine (link) – although is it worth noting that Quanta is an “editorially independent division of the Simons Foundation”? Anyway, it’s really discussed much better at Sean Carroll’s blog (link). Nearly everything is discussed much better there. This was a topic of much debate during the workshop, but last week I got the chance to see one of the original workers on the subject (Nima Arkani-Hamed) give a review of it. Having seen him talk, it’s easy to see why everyone is so excited – he is a fantastic speaker, very passionate and energetic. He also has some “colorful metaphors” to describe how he thinks and works, so it’s not surprising that the blogosphere has attached itself to him to deliver us from these 19th century notions that the universe should make sense.

So I will try to relay some of his talk to describe this beast (so this next part is not mine). Essentially, look at the integral above. If you were dumb, you might say “the integrand is a product of d(log l^2)’s”. If you were a little smarter, you would notice that not only does the numerator screw up that nice description, but also you can’t take the log of a dimensionful quantity. However, if you were *a lot* smarter, you would see there is a change of coordinates (which is very similar to the duality transformations that Feynman came up with back in the day) which takes the integrand to the form

\frac{d\alpha}{\alpha}\frac{d\beta}{\beta}\frac{d\gamma}{\gamma}\frac{d\rho}{\rho}

(This is the part that only seems to work for N=4 super-YM). The amplituhedron is the geometric shape which describes a form with log singularities on its boundaries, and thus completely encodes all the information about this scattering amplitude. The sweeping claim made by Arkani-Hamed and others is that *this describes everything*, so the universe can be reduced to a bunch of amplituhedra, which do not require locality and unitarity because these transformations are independent of them.
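To make the “dlog” language a little more concrete, here is my schematic paraphrase (not the precise change of variables from the talk):

```latex
% The one-variable identity behind the "dlog form" -- note it only
% makes sense when \alpha is dimensionless:
\frac{d\alpha}{\alpha} = d\log\alpha .
% The nontrivial claim is that a change of variables turns the whole
% box integrand (schematically, with D_i the four propagator
% denominators above) into a pure product of such factors:
d^4 l \, \frac{\mathcal{N}}{D_1 D_2 D_3 D_4}
  \;\longrightarrow\;
d\log\alpha \wedge d\log\beta \wedge d\log\gamma \wedge d\log\rho .
```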

(just a note that unitarity is still encoded when you actually *do* the integral, but my understanding is that if they can make this integrand into an honest volume form, unitarity will not be needed either).

So the work is very interesting and sounds totally reasonable – but what about the claim that “this describes everything”? Well, if you believe that QFT describes everything (seems reasonable – I guess you have to assume we have no souls), that scattering amplitudes in QFT describe everything (which they don’t – various topological properties are not detected by them), and further that supersymmetry is real (not my bag, but I think 80% of HEP physicists would agree), then the amplituhedron should “describe everything”. At least, it provides a method to construct any interaction by referring only to a specific geometric object. A rather fascinating idea, but don’t we already have such an object (at least for the standard model), called a fibre bundle?

Anyway, more questions remain, but my impression is that this is the most productive area of S-matrix work, since they have actually been able to analyze a scattering amplitude in terms of some very concrete geometry. I expect more interesting work and grand claims in the near future!

Some recent particle experiments – dark matter and neutrinos

Recently two new particle detectors have released their first sets of results – the Large Underground Xenon (LUX) experiment and IceCube. Both machines are trying to track down the extremely rare interactions of particles which only interact weakly with other matter. These are generally the kind of interactions you cannot study with something like the LHC, because even though many, many, (many) particles are produced every second at the LHC, the detector is too small to catch those that interact weakly. The solution is to build giant underground tanks of some material (in this case, xenon or ice), and look for the extremely rare collision of a weakly interacting particle with the detector material.

LUX is an experiment designed to look for dark matter. A class of dark matter candidates are WIMPs – Weakly Interacting Massive Particles. They are weakly interacting because they do not interact electromagnetically (they are “dark”), and they are massive because the claim is that they are responsible for a bunch of phenomena which can be explained by adding mass to astrophysical objects. I won’t rant too much about this, but the logic goes something like this:

  1. Model a system with Newtonian gravity (galactic rotation curves, gravitational lensing*, etc).
  2. Try to verify your model, and find that it doesn’t quite work.
  3. Without trying the full theory of general relativity, give up on a gravitational solution to a gravitational phenomenon.
  4. Propose a new form of matter which lies outside the standard model, which must be dark and massive.

* Yes, I guess you should argue this is not “Newtonian” since you are letting photons interact gravitationally even though they don’t have mass. This should probably say “Linearized GR”…
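As a toy illustration of steps 1–2, here is a minimal sketch (the mass and radii below are made-up round numbers, not a fit to any real galaxy): a Newtonian point-mass model predicts orbital speeds falling off like 1/√r outside the visible disk, whereas observed rotation curves stay roughly flat.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_ENCLOSED = 1e41   # hypothetical enclosed mass, kg (~5e10 solar masses)
KPC = 3.086e19      # metres per kiloparsec

def newtonian_circular_velocity(r_m, mass_kg=M_ENCLOSED):
    """Step-1 toy model: circular-orbit speed v = sqrt(G M / r),
    treating all the galaxy's mass as enclosed within radius r."""
    return math.sqrt(G * mass_kg / r_m)

# The prediction falls like 1/sqrt(r); the flat observed curves are
# the step-2 mismatch that dark matter (step 4) is invoked to fix.
for r_kpc in (5, 10, 20):
    v = newtonian_circular_velocity(r_kpc * KPC)
    print(f"r = {r_kpc:2d} kpc  ->  v = {v / 1e3:5.1f} km/s")
```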

So LUX fills this big underground tank in South Dakota with xenon and looks for WIMPs. If they exist, they should (very rarely!) perturb the xenon atoms, which will then produce a flash of light (“scintillate”) and signal a detection. Of course, all kinds of other things are flying into this vat of xenon, so the clever folks there focus the search on the very center of the detector, which should have the best shielding from the outside because there is so much xenon in the way.

Anyway, they recently announced that the first dataset is “consistent with the background-only hypothesis” at the 90% confidence level (http://arxiv.org/abs/1310.8214). In other words, they have not detected anything other than what the standard model predicts. This is in contrast to some other recent results from similar experiments. For those of us who love the standard model and think we just need to work on gravity a little harder, this is one for the win column. Even if I’m the only one counting it as such…
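For what a statement like “consistent with the background-only hypothesis at 90% CL” cashes out to, here is a minimal counting-experiment sketch (the event counts below are invented for illustration, not the actual LUX numbers):

```python
import math

def background_only_pvalue(n_obs, expected_background):
    """P(N >= n_obs) under a Poisson background-only hypothesis.
    A p-value above 0.10 means no excess at the 90% confidence level."""
    cdf = sum(
        math.exp(-expected_background) * expected_background**k / math.factorial(k)
        for k in range(n_obs)
    )
    return 1.0 - cdf

# Hypothetical numbers: observing 3 events on an expected background
# of 2.5 is entirely unremarkable.
p = background_only_pvalue(3, 2.5)
print(f"p-value = {p:.3f}")
```

(the real LUX analysis uses a profile likelihood over the full signal and background distributions, but the underlying logic is the same).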

The IceCube detector is based on the same principle, but is looking for neutrinos. These little guys are the last piece of the standard model which contains some uncertainty (now that we have found the Higgs, anyway…). We know they exist, we just need to fill in some details like their exact masses and exact character. They also interact only weakly, but in order to get the largest possible sample, you want the largest possible amount of matter in your detector. This material should also be relatively translucent, since you are looking for flashes of light. So, they drilled a bunch of holes in the Antarctic ice, lowered some cameras in, and looked for flashes of light from neutrinos interacting with frozen water! I think this is such a cool idea – their detector is effectively a cubic kilometer in size! The LUX detector is a cylinder 6 m tall with a radius of under 4 m, so IceCube is around a million times the volume. Despite being so damn huge, they found a paltry 28 neutrino candidates (http://www.sciencemag.org/content/342/6161/1242856). Which is actually more than twice what was predicted!
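The “around a million times” figure checks out as an order of magnitude, using only the dimensions quoted above:

```python
import math

# Rough detector volumes from the numbers quoted in the post.
lux_volume_m3 = math.pi * 4**2 * 6   # cylinder: radius ~4 m, height 6 m
icecube_volume_m3 = 1000.0**3        # roughly one cubic kilometre of ice

ratio = icecube_volume_m3 / lux_volume_m3
print(f"IceCube / LUX volume ratio: {ratio:.1e}")  # a few million
```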

At 28 neutrinos, I guess one can’t get overly excited about the numbers, but there is something quite striking about their energies – from 30 TeV to 1200 TeV. Compare this with the LHC, which is currently operating at a collision energy of 8 TeV – the most energetic of these neutrinos carry over 100 times the energy present in LHC collisions. These kinds of experiments allow us to probe energy scales which are generally impossible to reach on Earth. Basically, the Universe is much better at accelerating things than we could ever be.

In my opinion, these types of experiments are the future – they allow us to directly answer questions about unexplained phenomena, and they do it relatively cheaply. Although people are already talking about the next big machine after the LHC, without direct detection of dark matter such a prospect seems highly unlikely, and it would likely leave the neutrino problem completely untouched. It seems that astroparticle physics is a bit of a growth area for the physical sciences, and has the potential to open up entirely new “eyes” on the universe. Very exciting!