(my essay on this topic can be found here.)

Tegmark breaks reality into three pieces, and it will be easiest to see what’s going on if I show you the actual figure in the book (this is shamelessly stolen from Tegmark, and all credit is his. If it turns out he’s not ok with this, I hope he’ll let me know!)

The idea here is that our perception of reality (“Internal Reality”) is governed by our senses, like sight and touch and smell. We interact directly with a version of reality which we can all agree on called “Consensus Reality”, and that consensus reality is a result of something which is abstractly true, “External Reality”. In the book he makes the point that to determine the fundamental “theory of everything”, we don’t need to actually understand human consciousness, because that’s explicitly separated from consensus reality by our own perceptions.

While there certainly are elements to this hierarchy that I like, I actually think making these divisions is pretty arbitrary. I can easily ask my Physics I students questions which will break “consensus reality” but stay in the realm of classical physics. For instance, I recently asked someone “what is the acceleration of an object in projectile motion?” and they responded “in the direction of motion”, indicating the parabolic path. Ok: I asked a well-defined mathematical question and received an (incorrect) response that left the bounds of mathematical rigor, but it was about classical physics, and therefore solidly in Tegmark’s “consensus reality”. The student’s level of analysis was not high enough to understand that “acceleration” does not mean “velocity” (or whatever else they might have thought I meant), but it was within *their* consensus reality.

What am I driving at? Perhaps the reality we can all agree on is not mathematical, but only descriptive in nature. For instance, the student and I can both draw pictures of how an object moves in projectile motion because we’ve seen real-life objects move in projectile motion. On the other hand, if mathematics is objectively “right” then I can prove some versions of consensus reality incorrect (“The day is 24 hours long”). Of course, no one would really say “the day is 24 hours long” is *wrong*, just that if you define the day with respect to the background stars, you get something a little bit shorter.

So even if we split off the “perception of reality” piece from our hierarchy of reality, we still end up with some rather arbitrary definitions of reality, from purely mathematical up to descriptive. This suggests that reality should be viewed as a continuum, with no clear boundaries between abstractly true and subjectively true, which all occur at different levels of detail. So what can we use to determine which level we are talking about? I’ve called such a thing **the axiom of measurement**, and you can check out the link in the first paragraph if you want to read the original essay.

The idea is that in order to determine a standard of “truth”, we need a standard of “measurement”. I can verify the statement “objects in projectile motion move in parabolic motion” as long as I use a measurement tool which is not accurate enough to see the effects of air resistance. That defines our “consensus reality”. But once I build a better tool, I can prove our consensus reality wrong, which requires us to redefine it at each moment for each measurement. Thus we have a natural scale for truth, defined experimentally by whatever apparatus we have available.

For me, the bonus with this approach is that you *know when things are true*; they are true when you know an experiment can confirm them. What you lose is the concept of absolute truth, but it’s easy to argue that the concept of absolute truth has brought us nothing but trouble anyway!

(Just as a note, I think we necessarily lose absolute truth because we would have to be able to say “we will never design an experiment to prove this wrong”, but I don’t think we will ever be able to do that. Can anyone imagine an experiment to prove that 1+1 is not 2? I think it might strain the logical system I’m working in. Anyway, more thought on this is required.)

Of course, I’m really not trying to be super-critical of Tegmark; I actually like some of his analysis. But I think his splitting here is somewhat on the *homo*-centric side, since it includes human perceptions at all levels (after all, we didn’t even know about the transition between quantum and classical reality until ~100 years ago. I worry about a definition of reality which shifts in time!). If we include the experimental apparatus in the very definition of our theoretical model, we achieve consistency without having to worry about either cognitive science or a shifting consensus of reality.

Before getting into the science: SageMath is a free, open-source mathematics software system which includes components like Maxima, Python, and the GSL. It’s great because it’s extremely powerful and can be used right in a web browser, thanks to the Sage Cell Server. So I did all of this right in front of my students, to demonstrate how easy this tool is to use.

For the scientific background, I am going to do the same example of the driven, damped pendulum found in *Classical Mechanics* by John Taylor (although the exact same system can be found in *Analytic Mechanics*, by Hand and Finch). So, I didn’t create any of this science, I’m just demonstrating how to study it using Sage.

First, some very basic background. The equation of motion for a driven, damped pendulum of length $L$ and mass $m$ being acted upon by a sinusoidal driving force is

$$\ddot{\phi} + 2\beta\dot{\phi} + \omega_0^2\sin\phi = \gamma\,\omega_0^2\cos(\omega t),$$

where $\beta$ is the damping term and $\gamma$, the ratio of the forcing amplitude to the weight of the pendulum. In order to get this into Sage, I’m going to rewrite it as a system of first-order differential equations,

$$\dot{x} = -2\beta x - \omega_0^2\sin y + \gamma\,\omega_0^2\cos(\omega t), \qquad \dot{y} = x,$$

where $y = \phi$ and $x = \dot{\phi}$.

This is a typical trick to use numerical integrators, basically because it’s easy to integrate first-order equations, even if they are nonlinear.
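This reduction is easy to see outside of Sage as well. Here is a minimal plain-Python sketch (my own illustration, not from Taylor's book and not the Sage session below): the same pendulum equation written as two coupled first-order equations, advanced with a hand-rolled fourth-order Runge-Kutta step.

```python
import math

# The second-order equation  phi'' + 2*beta*phi' + w0^2*sin(phi) = gam*w0^2*cos(w*t)
# becomes two first-order equations with x = phi' and y = phi:
def deriv(t, state, beta, w0, gam, w):
    x, y = state
    xdot = -2*beta*x - w0**2*math.sin(y) + gam*w0**2*math.cos(w*t)
    ydot = x
    return (xdot, ydot)

def rk4_step(f, t, state, h, *args):
    """One classical Runge-Kutta step for a first-order system."""
    k1 = f(t, state, *args)
    k2 = f(t + h/2, [s + h/2*k for s, k in zip(state, k1)], *args)
    k3 = f(t + h/2, [s + h/2*k for s, k in zip(state, k2)], *args)
    k4 = f(t + h, [s + h*k for s, k in zip(state, k3)], *args)
    return [s + h/6*(a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]
```

Any integrator built for first-order systems (RK4, Sage's `desolve_system_rk4`, etc.) can then handle the problem, nonlinearity and all.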

It’s easiest to find chaos right near resonance, so let’s pick the parameters $\omega = 2\pi$ and $\omega_0 = 1.5\omega = 3\pi$. This means the $t$-axis will display in units of the drive period, 1 s. We also take $\beta = \omega_0/4 = 3\pi/4$. The first plot will be this system when the driving force is the same as the weight. That is, $\gamma = 1.0$, and code + result is Figure 1 shown below.

```
from sage.calculus.desolvers import desolve_system_rk4

x, y, t = var('x y t')   # x = phi-dot, y = phi
w  = 2*pi                # driving frequency omega
w0 = 3*pi                # natural frequency omega_0 = 1.5*omega
g  = 3/4*pi              # damping beta = omega_0/4
f  = 1.0                 # driving strength gamma

P = desolve_system_rk4([-2*g*x - w0^2*sin(y) + f*w0^2*cos(w*t), x],
                       [x, y], [0, 0, 0], ivar=t,
                       end_points=[0, 15], step=0.01)
Q = [[i, k] for i, j, k in P]   # keep the (t, phi) pairs
intP = spline(Q)                # interpolate for plotting
plot(intP, 0, 15)
```

Figure 2 is a plot with the driving force slightly bigger than the weight, i.e. $\gamma$ just above 1.

This demonstrates an *attractor*, meaning the steady-state solution eventually settles down to oscillate around a fixed value. We can check this is actually still periodic by asking Sage for the value of $\phi$ at $t = 30$ s, $t = 31$ s, etc., by calling this line instead of the plot command above

`[intP(i) for i in range(30,40)]`

(Note that we also have to change the range of integration from $[0, 15]$ to $[0, 40]$.) The output is shown in Figure 3; the period is clearly 1.0 s out to four significant figures.

Next, let’s increase the forcing a bit more. The result is shown in Figure 7. The attractor is still present (now oscillating around a different value), but the behavior is much more dramatic. In fact, you might not even be convinced that the period is still 1.0 s, since the peaks look to be different heights. We can repeat our experiment from above, and ask Sage to print out the value of $\phi$ at integer times between $t = 30$ s and $t = 40$ s. The result is shown in Figure 4. The actual period appears to be 2.0 s, since the value of $\phi$ does not repeat exactly after 1.0 s. This is called *Period Doubling*.

In Figure 8, I’ve displayed a plot with the driving force increased yet again, and it’s immediately obvious that the oscillatory motion now has period 3.0 s. We can check this by playing the same game, shown in Figure 6.

Now we are in a position to see some unique behavior. I am going to overlay a new solution onto this one, but give the second solution a different initial value, $\phi(0) = -\pi/2$ instead of $0$. The code I am adding is

```
P2 = desolve_system_rk4([-2*g*x - w0^2*sin(y) + f*w0^2*cos(w*t), x],
                        [x, y], [0, 0, -pi/2], ivar=t,
                        end_points=[0, 15], step=0.01)
Q2 = [[i, k] for i, j, k in P2]
intP2 = spline(Q2)
plot(intP, 0, 15) + plot(intP2, 0, 15, linestyle=":", color="red")
```

(I’ve kept the same symbols as the first code block: `g` is the damping and `f` the driving strength.)

The result is shown in Figure 8. Here we can see the first example of the sensitivity to initial conditions. The two solutions diverge markedly once you have a slightly different initial condition, heading towards two very different attractors. Let’s plot the difference between the two oscillators,

but now with only a very small difference in the initial conditions. The code follows:

```
# plot(intP, 0, 15) + plot(intP2, 0, 15, linestyle=":", color="red")
plot(lambda x: abs(intP(x) - intP2(x)), 0, 15)
```

This is shown in Figure 9. It clearly decays to zero, but that’s hard to see so let’s plot it on a log scale, shown in Figure 10.

```
# plot(intP, 0, 15) + plot(intP2, 0, 15, linestyle=":", color="red")
plot_semilogy(lambda x: abs(intP(x) - intP2(x)), 0, 15)
```

Now, let’s see what happens if we do this same thing, but push the force parameter $\gamma$ over the critical value. This is displayed in Figure 11. We get completely the *opposite* behavior: the two oscillators are driven *away* from each other despite their small initial separation. This is the essence of “Jurassic Park Chaos” – a small change in the initial conditions (like a butterfly flapping its wings in Malaysia) causes a large change in the final outcome (a change in the weather pattern over California).

The problem we were tackling had to do with the Planetary Nebula Luminosity Function (PNLF – there is even a Wikipedia page about this now!). As medium-sized and smaller (under 10 solar masses or so) stars reach the end of their lives, they turn into really pretty objects called Planetary Nebulae (PNe, and here are some cool Hubble pics). Massive stars a) evolve faster and b) make brighter PNe than their less massive siblings, so over time less and less bright PNe should be produced by any given population of stars. Further, the luminosity from a PN is primarily due to excitation from the central white dwarf, which also dims over time. Therefore, PNe in a single population of stars should generally be getting less luminous over time. Problem is, that is not observed, at all!

The figure above comes from Ciardullo (2006), and demonstrates the problem – all the brightest PNe have the same absolute magnitude, regardless of the age of the stellar population (which goes old to young from top to bottom). This allows you to use PNe as a secondary method to find astronomical distances, but it also shows that there is something fundamentally incorrect with the nice picture of stellar evolution I’ve presented above. The idea explored in my thesis was that as the population aged, stellar mergers produced a ready supply of massive blue stars (called “Blue Stragglers”) which would form the brightest PNe. The advantage of a model like this is that it does not require a significant amount of detailed physics, such as the effects of stellar rotation, wind, or other micro-astrophysics. It is simply a population synthesis approach – we essentially created stellar populations, used standard stellar evolutionary models, but included a small fraction of stars (around 10%) which merged to form more massive stars.
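To make the population-synthesis logic concrete, here is a deliberately crude toy version (my own sketch, with made-up lifetime and IMF scalings and hypothetical helper names; the actual thesis used real stellar evolutionary models). Stars are drawn from a Salpeter-like IMF, a star of mass m is assumed to live roughly 10*m^-2.5 Gyr, and merging a fraction of the stars in pairs keeps some objects above the main-sequence turnoff at late ages.

```python
import random

def sample_imf(n, alpha=2.35, m_min=0.8, m_max=8.0, rng=random):
    """Draw n masses (solar units) from dN/dm ~ m^-alpha by inverse transform."""
    a = 1 - alpha
    return [(m_min**a + rng.random()*(m_max**a - m_min**a))**(1/a)
            for _ in range(n)]

def max_live_mass(masses, age_gyr, merge_frac=0.0):
    """Most massive surviving star at a given age, with the crude lifetime
    scaling t ~ 10*m**-2.5 Gyr.  A merged pair survives (rejuvenated) if
    both components were below the turnoff mass when they merged."""
    pool = list(masses)
    n_pairs = int(merge_frac*len(pool)/2)
    pairs = [(pool.pop(), pool.pop()) for _ in range(n_pairs)]
    m_to = (10.0/age_gyr)**0.4          # main-sequence turnoff mass
    alive = [m for m in pool if m < m_to]
    alive += [m1 + m2 for m1, m2 in pairs if m1 < m_to and m2 < m_to]
    return max(alive) if alive else 0.0
```

With no mergers, the most massive surviving star (a proxy for the brightest PN) fades as the population ages; with roughly 10% of stars merged, the maximum surviving mass stays well above the turnoff, which is the effect the blue stragglers provide.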

First, let’s take a look at the “standard picture”, with no Blue Stragglers:

The ages of the stellar populations are shown in the upper left-hand corner (1-10 Gyr). It clearly displays the effect I talked about – the brightest PNe fade as the population ages.

Now let’s take a look at our basic model, including 10% blue stragglers into a population of several different ages:

As we expected, the brightest PNe held pretty constant for a variety of stellar population ages (1-10 Gyr, shown in the upper corner, with the 1 Gyr being a bit of an outlier). The absolute magnitude ended up being a little high, and the initial shape was shallower than the observations, but it was clear that the blue stragglers were able to keep the maximum luminosity of the PNLF relatively constant over a wide range in population ages.

It’s worth noting that the two populations of blue stragglers which we are discussing here are actually disjoint. Since PNe form from stars under 10 solar masses, the usual formation scenarios have no trouble making them. It’s only for the stars over 10 solar masses that the merging scenario is invoked as a creation mechanism. On the other hand, both of these merger scenarios are based on stars which form in binary systems, and then merge at a later time. So although the end masses are different, the formation mechanism from a blue straggler point of view is the same. It would be interesting to see if one could reproduce the required blue straggler fraction by using the initial binary population. Using both the PNLF and massive star formation considerations, one might be able to check this over the entire mass range of the initial mass function of binaries. Not something I can see spending time on at the moment, but an interesting question which might even make a nice undergraduate project!

If you are interested in reading the whole thesis, you can check it out here. What I’ve talked about above is only half the story – there is also the “dip” found in some PNLFs (but not M31, for instance), which the model tried to address as well.

The latest part of this story is that, using some of the interesting properties of cosmic strings, I’ve been able to use the *lack* of observational evidence for them to constrain the nature of the global topology of the universe. This is a pretty interesting idea because there are very few ways that we can study the overall shape of the universe (*shape* here means topology: does the surface look like a plane, a sphere, a donut, or what?). There are lots of ways we can study the local details of the universe (the geometry), because we can look for the gravitational effects from massive objects like stars, galaxies, black holes, etc. However, we have essentially no access to the topological structure, because gravity is actually only a local theory, not a global theory (I could write forever about this, but I’ll just leave it for a later post maybe…). We have zero theoretical understanding of the global topology, and our only observational understanding comes from studying patterns in the CMB. The trick with cosmic strings is that they actually serve to connect the local gravitational field (the geometry) to the global structure of the universe (the topology).

The game is this – take a spacetime with cosmic strings running around everywhere, and take a surface which intersects some of them. This surface can always be taken as flat, so the intersections are conical points. If you measure an angular coordinate around each such point, you won’t get $2\pi$; you’ll get something a bit smaller or a bit larger, since the surface is twisted up around the points. It turns out that if you add up all the twists, you had better get an integer – the *genus* of the surface. The genus is essentially the number of holes in the surface. A sphere has $g = 0$, a torus $g = 1$, two tori attached to each other have $g = 2$, and so on.
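For the record, the bookkeeping behind “adding up the twists” is just the Gauss–Bonnet theorem for a flat surface with conical points (standard notation, my summary rather than a quote from the paper): if the angle measured around the $i$-th point is $\theta_i$, then the angle deficits $\delta_i = 2\pi - \theta_i$ satisfy

$$\sum_i \delta_i = 2\pi\chi = 4\pi(1 - g),$$

where $\chi$ is the Euler characteristic of the surface. In particular, if the deficits are all tiny (weak or absent strings), their sum is forced to zero, which means $\chi = 0$ and $g = 1$.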

Now we consider what kind of observational evidence there is for cosmic strings. The short answer: none! People have been looking for them in the CMB, but so far they’ve only been able to say “if cosmic strings exist, they must be in such-and-such numbers and have energies of such-and-such.” If we use these limits, we find that to a very good approximation, **if cosmic strings exist, a surface passing through them must have genus 1, and therefore be a torus (the surface of a doughnut)!**

Ok big deal – but here’s where the foliations come in. For example, if we parametrize our spatial (3-dimensional) manifold with tori, the result is a 3-torus. So this actually implies that **space is not a sphere, but is a solid torus** (like a doughnut). The mathematics behind this statement are actually quite profound, and were worked out in the early days of foliation theory by the likes of Reeb, Thurston, and Novikov. But the idea is that such foliations of 3-manifolds are very stable, and a single closed surface greatly restricts the kinds of foliations allowed for the manifold as a whole.

The arXiv paper where I discuss this in more detail can be found here. This idea that space is not a sphere is not new, and there is actually some evidence for it in the CMB, in the form of a repeating pattern (or a preferred direction) in space. But my primary interest is pointing out that this is an independent way of measuring the topology of the universe, since it’s based on local observations of strings in the CMB rather than overall patterns. If strings don’t actually exist, the argument can still be used to study the presence of conical singularities, but I expect the restrictions on the topology are much less strict. Perhaps I’ll look further into that, but for the moment I’m happy with this. It’s a new way to determine information about the global topology of the universe, and it’s a great combination of pure mathematics, theoretical physics, and observational cosmology.

The backstory to this is that I was participating in a weekly mathematical physics seminar back at Florida State (although I use the word “seminar” pretty loosely – it was regularly attended by only myself and *one* other individual!), and in the process of preparing a presentation on some NCG topic, I came across “The Bost–Connes System”. This is a particular C*-algebra on which you can define some dynamics. What makes it special is that if you calculate the partition function for this dynamical system, you get the Riemann Zeta function! Since the partition function can be used to generate predictions for a statistical mechanical system, I wondered how possible it was to construct a real physical system with the same symmetry as the Bost–Connes system. Then you would have experimental access to (at least some features of) the Riemann Zeta. There is a great deal of mathematical importance attached to Zeta, including a $1 million prize for finding the zeros!
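(To state the hook precisely, in standard notation rather than quoting the original construction: the Bost–Connes algebra comes with a natural one-parameter time evolution, and the equilibrium partition function of that dynamical system at inverse temperature $\beta$ is

$$Z(\beta) = \sum_{n=1}^{\infty} n^{-\beta} = \zeta(\beta), \qquad \beta > 1,$$

which is exactly the Riemann Zeta function evaluated on real $\beta > 1$.)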

I wasn’t exactly thinking about which Benz to buy with my prize money yet, but I thought it was an interesting idea – experimental verification of a mathematical theorem. I wasn’t aware that anything like this had been done before. Normally the “flow of ideas” works the other way – constructions in mathematics find usefulness in physics, or theoretical models become interesting mathematical systems. It stuck in my head for a while; I did a few calculations to determine what the zeros of a partition function might look like, but nothing really came of it.

When this essay contest came around, I thought it might be an opportunity to share this idea. I figured that if it was going to be taken seriously, you needed to raise experimental verification to the level of mathematics – after all, if I prove a conjecture is true outside of the field in which the conjecture is stated, we should not take the proof very seriously! I needed to make experimental physics a subfield of mathematics. It turns out that this is pretty easy, and so that’s what the essay is about. If you take your physical model as a set of formal axioms, and add in an additional axiom which can be used to experimentally verify a theorem (I call this “an axiom of measurement”), you can formulate physics as a complete formal system. As a bonus, the axiom can be used to add a little more structure to the Platonist viewpoint on universal versus physical forms.

Now, the FQXi Essay Contests are *Contests*; the community and the public can vote on the quality of the essays, and the quality of the essays varies *widely*, since nearly anyone is allowed to submit an entry. I actually think my essay represents a pretty mainstream viewpoint about physics – that we are not really studying “nature” or “the universe” when we do physics, we are really studying a “model for the universe”, which is confirmed by our everyday observations as well as carefully constructed experiments. Since it’s not a new, dramatic viewpoint on any particular aspect of the relationship between the two fields, I don’t expect to be winning any awards. But I had an opinion with an interesting idea behind it, and an essay seemed like the ideal place to explore it.

Anyway, if you’re so inclined go over and check out my entry as well as all the others.

Of course, who cares, everyone on the internet is crazy. Well, this is my first experience so I’m recording it. I’m making some teaching videos for a partially flipped class we are teaching at Merrimack College. Last week, I posted a video about time dilation for my class to watch this week:

(click the youtube link on the bottom right to see the troll I am referring to),

Pretty quickly, I had someone named “Pentcho Valev” asking why the speed of light was constant. I was split between thinking “wow, someone doesn’t understand but really wants to know more!” and thinking “uh oh”. In retrospect, I should have known what was happening as soon as I read these two lines:

“To put it simply, the frequency shifts because the speed of light shifts.”

“An alternative explanation of the frequency shift (the only salvation for relativity) involves the assumption that the motion of the observer has somehow changed the wavelength of the incoming light. […] This assumption is so obviously absurd that relativists never state it explicitly. Yet without it relativity collapses.”

Doing my due diligence as a physicist and teacher, I attempted to reason with him. But, he’s a troll and it didn’t work. Meh, no big deal.

BUT, it turns out Pentcho Valev is a whole internet quack phenomenon! There is even an entire (albeit out-of-date) website outlining his “scholarly activities”:

http://bip.cnrs-mrs.fr/bip10/valevfaq.htm#embarrased

So there are lots of ways one can decide informally “they have made it” – that is, not an award or publication or something. Maybe you get recognized at a conference by someone who knows your work, or the subject of something you published is a topic of debate *without* you having to inject it into the discussion manually. Well, I’m trying to teach early-career STEM majors the basics of mechanics – how to solve problems, how to use conceptual and analytic reasoning, and how to avoid common pitfalls and misunderstandings. And I’ve had a famous troll pay attention.

I’m going to count this as “I’ve made it”.

Of course, the concept of $\pi$ as the ratio between the circumference and diameter of a circle is more than important – a cursory glance through the arXiv suggests it appears in nearly 85% of all papers on theoretical physics. What I mean is: are the *digits* of $\pi$ special? Is there anything actually significant hidden in the seemingly random digits of this all-important transcendental number?

This is a well-trodden topic among pseudo-intellectuals and science fiction writers alike. No less than the great Carl Sagan afforded a special significance to the digits of our friend $\pi$ at the very end of *Contact* (a part which didn’t make it into the movie). But is there any truth to this? Or even any evidence for it? In fact, how would you even go about trying to figure it out?

What got me thinking about this was a website I came across a few weeks back, talking about finding strings of specific numbers in the digits of $\pi$ – you can see it here. It’s a very cool page, which lets you do things like search for your SS number in the digits of $\pi$ (no joy for me there, I only get 8/9 numbers). However, down at the bottom they define something which I formalize as follows:

**Loop Sequences**: A loop sequence in a string $S$ of single-digit integers is a set of integers $\{n_1, n_2, \ldots, n_k\}$ such that, written as a string of single digits, the integer $n_i$ is found starting at position $n_{i+1}$ in the string $S$, and $n_k$ is found at position $n_1$.

This is perhaps best illustrated by an example. Let’s start with $n_1 = 169$. Turns out, starting at the 35th digit of $\pi$ (counting starting after the decimal point), we have …841971693993…, which contains 169 starting at the 40th position, so $169 \to 40$. I continue to do this and find $40 \to 70$, $70 \to 96$, and so on, until I find that I am looking at 169 again! This is a loop sequence.
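The chasing procedure is easy to automate. Here is a minimal sketch (my own illustration, not the script I actually ran); positions are 1-indexed, counting from the first digit after the decimal point:

```python
def find_loop(digits, start):
    """Follow n -> (1-indexed position of the first occurrence of str(n)
    in the digit string) until some number repeats; return the cycle,
    or None if the chain runs off the end of the known digits."""
    seen = []
    n = start
    while n not in seen:
        seen.append(n)
        pos = digits.find(str(n))   # 0-indexed; -1 if not found
        if pos < 0:
            return None             # need more digits to decide
        n = pos + 1                 # convert to 1-indexed position
    return seen[seen.index(n):]     # drop any non-repeating "tail"

# With enough digits of pi (as a string, without the leading "3."),
# find_loop(pi_digits, 169) should chase 169 -> 40 -> 70 -> ...
# all the way back around to 169.
```

Note that a chain can have a non-looping “tail” before it enters the cycle, which is why the function returns only the repeating part.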

That web page gives a single loop sequence, found by one Dan Sikorski. I wondered if there are more – how common are these loops, and what would it take to find them? This sounded like an interesting computational project (rather than tackling it theoretically, which might be possible but struck me as more difficult), so I thought I would look for loops in some mathematical constants, along with random numbers, to see if there was any evidence for $\pi$ being special.

In short, no, there is not.

Of course, since these numbers are infinitely long, *every* number you start with must loop back at some point. What I’m after is how common these loops are. So let’s start with a million digits of $\pi$, and try to find loops that contain any of the numbers 1 to 100000 (rather arbitrary, but my PC can handle this in under an hour, so it seems appropriate). The results are as follows:

For $\pi$, I found the following loops:

(this is self-referencing)

(this might be called “the Sikorski loop”)

(this is a new loop, but who knows if I’m the first to notice it!)

For the second constant I checked, I found the following loops:

(self-referencing)

(another self-referencing)

To see if this distribution is at all unusual, I generated 100 random strings of a million integers and did the same kind of search. The distribution for the random numbers, plotted with the loops I found in the two constants, is:

The random distribution probably looks exactly how one would expect it – smaller loops are far more common, and larger loops (>5 or so) are part of the statistical variation. Due to small-number statistics, it’s very hard to convince yourself that either constant is particularly special in terms of the distribution of its loops. It might be tempting to say that the length-20 loop lies outside the statistical variation, but you can see that I found loops of lengths 17, 18 and 31 in the random sample. For this reason, I would say this study does not suggest anything special about the digits of either constant.

I suppose one should go further and treat the statistics properly, and perhaps I’ll just run my laptop for a week and do the 10-million-digit version of this, but it’s a little hard to imagine that I will find any evidence to suggest that there are any cyclic patterns in the digits of these constants.


The first is on Overtones and Beats, which I demonstrate on my double bass. It’s a classic topic; I discuss how to produce the overtone series on a stringed instrument, and I also talk a little bit about beats.

The second is on Vibrato, and is just a simple illustration of what vibrato is and what it sounds like on a stringed instrument (my double bass again). I slow things down as best I can so you can really hear what is going on.

The third is on Sympathetic Resonance on a classical guitar. While I was playing one day I started noticing a variety of resonances occurring, so I thought it would be interesting to make a short video discussing the phenomenon. I’ve been thinking about sympathetic resonance from a compositional standpoint for a while, and it’s a topic which might interest students as well. Resonance on a guitar is not quite as dramatic as the Tacoma Narrows bridge, but I’m thinking it might be something they can relate to a little better…

The quality of the videos is good enough but not fantastic – I’ve done them now simply because now is when I have the time. If I have more time (and technology) in the future I might update/improve them.

Also, since this is not my direct area of expertise, the content of this post will be from the “interested outsider”…anyway…

The point of all this focus on scattering amplitudes is that although Feynman diagrams provide us a nice way of organizing complicated QFT calculations, in the end we always end up calculating integrals which look like

$$\int d^4\ell\; \frac{N(\ell)}{\ell^2\,(\ell+k_1)^2\,(\ell+k_1+k_2)^2\,(\ell-k_4)^2}$$

(specifically, this is the scattering of 4 particles with momenta $k_1, \ldots, k_4$ at 1-loop, with loop momentum $\ell$). The numerator might be some complicated thing, but when these particles are all massless, all the interesting information is contained at the poles (where $(\ell+k_1)^2 = 0$, for example). There is nothing too new in wanting to understand these kinds of integrals – this was the point of “The S-matrix Program”, which led to all kinds of interesting work in the 1960s and 70s, but has died out a little since then. The revival has occurred because apparently when one restricts to N=4 super Yang-Mills, enough simplifications occur so that further progress can be made.

This further progress has led to “the amplituhedron”, which has been propagandized as “the end of locality and unitarity in physics!” by Quanta magazine (link) – although it is worth noting that Quanta is an “editorially independent division of the Simons Foundation”. Anyway, it’s really discussed much better at Sean Carroll’s blog (link). Nearly everything is discussed much better there. This was a topic of much debate during this workshop, but last week I got the chance to see one of the original workers on the subject (Nima Arkani-Hamed) give a review of it. Having seen him talk, it’s easy to see why everyone is so excited – he is a fantastic speaker, very passionate and energetic. He also has some “colorful metaphors” to describe how he thinks and works, so it’s not surprising that the blogosphere has attached itself to him to deliver us from these 19th century notions that the universe should make sense.

So I will try and relay some of his talk to describe this beast (so this next part is not mine). Essentially, look at the integral above. If you were dumb, you might say “the integrand is a product of $1/\ell^2$’s”. If you were a little smarter, you would notice that not only does the numerator screw up that nice description, but also you can’t take the log of a dimensional number. However, if you were *a lot* smarter, you would see there is a change of coordinates (which is very similar to the duality transformations that Feynman came up with back in the day) which takes the integrand to the form

$$d\log f_1 \wedge d\log f_2 \wedge d\log f_3 \wedge d\log f_4$$

(This is the part that only seems to work for N=4 super-YM). The amplituhedron is the geometric shape which describes a form with log-singularities on its boundaries, and thus completely encodes all the information about this scattering amplitude. The sweeping claim made by Arkani-Hamed and others is that *this describes everything*: the universe can be reduced to a bunch of amplituhedra, which do not require locality and unitarity because these transformations are independent of them.

(Just a note that unitarity is still encoded when you actually *do* the integral, but my understanding is that if they can make this integrand into an honest volume form, unitarity will not be needed either.)

So the work is very interesting and sounds totally reasonable – but what about the claim that “this describes everything”? Well, if you believe that QFT describes everything (seems reasonable – I guess you have to assume we have no souls), that scattering amplitudes in QFT describe everything (which they don’t – various topological properties are not detected by them), and further that supersymmetry is real (not my bag, but I think 80% of HEP physicists would agree), then the amplituhedron should “describe everything”. At least, it provides a method to **construct any interaction by referring only to a specific geometric object**. A rather fascinating idea, but don’t we already have such an object (at least for the standard model), called a fibre bundle?

Anyway, more questions remain, but my impression is that this is the most productive area of S-matrix work – since they have actually been able to analyze a scattering amplitude in terms of some very concrete geometry. I expect more interesting work and grand claims in the near future!

**LUX** is an experiment designed to look for dark matter. One class of dark matter candidates are WIMPs – Weakly Interacting Massive Particles. They are weakly interacting because they do not interact electromagnetically (they are "dark"), and they are massive because the claim is that they are responsible for a bunch of phenomena which can be explained by adding mass to astrophysical objects. I won't rant too much about this, but the logic goes something like this:

- Model a system with Newtonian gravity (galactic rotation curves, gravitational lensing*, etc).
- Try to verify your model, and find that it doesn’t quite work.
- Without trying the full theory of general relativity, give up on a gravitational solution to a gravitational phenomenon.
- Propose a new form of matter which lies outside the standard model, which must be dark and massive.

* Yes, I guess you could argue this is not "Newtonian" since you are letting photons interact gravitationally even though they don't have mass. This should probably say "Linearized GR"…
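The mismatch in step two can be seen with a back-of-the-envelope Newtonian calculation for a galactic rotation curve (the mass and radii below are illustrative round numbers, not a fit to any particular galaxy):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_visible = 2e41  # kg, ~10^11 solar masses of visible matter (illustrative)
kpc = 3.086e19    # metres per kiloparsec

def v_circular(r):
    """Newtonian circular speed around the enclosed mass,
    treated as a point mass for radii outside the visible disk."""
    return math.sqrt(G * M_visible / r)

for r_kpc in (5, 10, 20, 40):
    v = v_circular(r_kpc * kpc) / 1e3  # convert to km/s
    print(f"r = {r_kpc:2d} kpc  ->  v = {v:5.1f} km/s")

# The Keplerian prediction falls off like 1/sqrt(r); measured curves
# instead stay roughly flat out to large radii, which is the discrepancy
# that dark matter (or modified gravity) is invoked to explain.
```

The prediction drops by a factor of sqrt(8) between 5 and 40 kpc, while observed curves barely drop at all.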

So LUX fills this big underground tank in South Dakota with Xenon and looks for WIMPs. If they exist, they should (very rarely!) perturb the Xenon atoms around, which will then produce a flash of light ("scintillate") and signal a detection. Of course, all kinds of other things are flying into this vat of Xenon, so the clever folks there focus the search in the very center of the detector, which should have the best shielding from the outside because there is so much Xenon in the way.

Anyway, they recently announced that the first dataset is "consistent with the background-only hypothesis" at the 90% confidence level (http://arxiv.org/abs/1310.8214). In other words, they have not detected anything other than what the standard model predicts. This is in contrast to some other recent results from similar experiments. For those of us who love the standard model and think we just need to work on gravity a little harder, this is one for the win column. Even if I'm the only one counting it as such…
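For flavor, here is what a 90% confidence statement means in the simplest counting-experiment setting (this is the textbook one-sided Poisson limit, not the likelihood machinery LUX actually uses):

```python
import math

def poisson_cdf(n_obs, mu):
    """P(N <= n_obs) for a Poisson-distributed count with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k)
               for k in range(n_obs + 1))

def upper_limit(n_obs, cl=0.90, mu_max=100.0):
    """Classical one-sided upper limit on the Poisson mean: the largest mu
    for which observing <= n_obs events still has probability 1 - cl."""
    lo, hi = 0.0, mu_max
    for _ in range(100):  # bisection on the monotonic CDF
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With zero observed signal events, the textbook 90% CL upper limit on the
# mean signal rate is ln(10) ~ 2.30 events.
print(upper_limit(0))
```

Seeing nothing doesn't mean the rate is zero – it means any signal with a mean above about 2.3 expected events would probably have shown up.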

The **IceCube detector** is based on the same principle, but is looking for neutrinos. These little guys are the last piece of the standard model which contains some uncertainty (now that we have found the Higgs, anyway…). We know they exist, we just need to fill in some details like their exact masses and exact character. They also interact only weakly, but in order to get the largest possible sample, you want the largest possible amount of matter in your detector. This material should also be relatively translucent, since you are looking for flashes of light. So, they drilled a bunch of holes in the Antarctic ice, lowered some cameras in, and looked for flashes of light from neutrinos interacting with frozen water! I think this is such a cool idea – their detector is effectively *a cubic kilometer* in size! The LUX detector, by comparison, is a cylinder 6 m tall with a radius of under 4 m, so IceCube is a few million times its size. Despite being so damn huge, they found a paltry 28 neutrino candidates (http://www.sciencemag.org/content/342/6161/1242856). Which is actually more than *twice what was predicted!*
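Checking that size comparison, taking the quoted LUX dimensions at face value:

```python
import math

# LUX active region: a cylinder about 6 m tall with radius just under 4 m
lux_volume = math.pi * 4.0**2 * 6.0  # ~300 m^3 (an upper bound)
icecube_volume = 1000.0**3           # one cubic kilometre, in m^3

ratio = icecube_volume / lux_volume
print(f"IceCube is roughly {ratio:.1e} times the volume of LUX")
```

The ratio comes out a bit over three million.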

At 28 neutrinos, I guess one can't get overly excited about the numbers, but there is something quite striking about their energies – from 30 TeV to 1200 TeV. Compare this with the LHC, which collided protons at 8 TeV in its most recent run – the most energetic of these neutrinos carry around 150 times the energy of an entire LHC collision. These kinds of experiments allow us to probe energy scales which are generally impossible to reach on Earth. Basically, the Universe is much better at accelerating particles than we could ever be.
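A quick sanity check on the energy comparison (using the LHC's 8 TeV collision energy from its 2012–13 run):

```python
lhc_collision_tev = 8.0  # LHC proton-proton centre-of-mass energy, TeV
nu_min_tev, nu_max_tev = 30.0, 1200.0  # IceCube candidate energy range

# Even the least energetic candidate carries a few times the energy of a
# full LHC collision; the most energetic carries 150 times as much.
print(nu_min_tev / lhc_collision_tev)
print(nu_max_tev / lhc_collision_tev)
```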

In my opinion, these types of experiments are the future – they allow us to directly answer questions about unexplained phenomena, and they do it relatively cheaply. Although people are already talking about the next big machine after the LHC, without direct detection of dark matter such a prospect seems highly unlikely, and it would likely leave the neutrino problem completely untouched. It seems that astroparticle physics is a bit of a growth area for the physical sciences, and has the potential to open up entirely new "eyes" on the universe. Very exciting!
