#MeasureEarth: Crowdsourced Data for the Radius of the Earth

In this post I am going to work through a crowdsourced data project for determining the radius of the Earth. It’s part of an exercise I gave to my Physics lab students as an introduction to data science; here I will cover the theory and then evaluate the data to see what radius it implies.

First, all credit where credit is due: this is the brainchild of Lauren Weiss, over at the RockyWorlds blog. She envisioned a world-wide initiative to measure the shadow of a meter stick at noon, mimicking the classical measurement of Eratosthenes in ~240 BC. It’s a great exercise for a middle school/high school science class, and I thought it would be really great if we really did get data from all over the world to mine as much as possible. This was the assignment I gave to my laboratory students.

Unfortunately, it didn’t end up being  “world-wide”. There were 33 entries, and although the US was well-covered, the only international data came from Mexico and the Canary Islands. However, that should not dissuade us from treating the data like data and seeing what the results are. After all, this project is really about the process of science – that’s why I gave it to my students, and that’s why I wanted to try and do a careful analysis of my own.

A quick description of what we are doing – measuring the shadow of a meter stick at as many locations around the globe at the moment the Sun is highest in the sky. It should not be hard to see that because the Earth is a curved surface, the length of the shadow will be different depending on where your observation was made. To see carefully how one can get the radius of the Earth from a set of these measurements, we need a little geometry.

Because the Sun is so far away, its rays arrive at the Earth’s surface travelling along essentially parallel lines. When they fall on a meter stick as shown, they cast shadows which form little triangles. The angle between the Sun’s rays and the meter stick can be found from \tan(\theta)=l_s/l_m, where l_s is the shadow length and l_m the length of the stick. This can be done at two different locations to find two different angles, \theta_1 and \theta_2. Looking at the picture, simple geometry relates those angles to the angle formed by the two radial lines from the meter sticks to the center of the Earth, \Delta \theta=\theta_2-\theta_1. Using this and the arclength definition of the angle, \Delta \theta=s/R, we can look up the distance s between the two points on the Earth’s surface and determine the radius of the Earth R.
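As a minimal numerical sketch of that last step (the shadow lengths and the separation here are made up, not taken from the dataset):

from math import atan
l_m = 1.0                      # length of the meter stick (m)
theta1 = atan(0.50 / l_m)      # shadow angle at site 1, for a hypothetical 0.50 m shadow
theta2 = atan(0.70 / l_m)      # shadow angle at site 2, for a hypothetical 0.70 m shadow
s = 9.4e5                      # assumed north-south separation between the sites (m)
R = s / (theta2 - theta1)      # Delta theta = s/R, so R = s/Delta theta
R / 1000                       # in km; roughly 6400 for these made-up numbers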

At least, in principle. There are several complications. The major one is that the above analysis really only works if the longitudes of the two locations are the same. Apparently that is how the “classical” measurement was done. In my class, I had the students pick groups of measurements which were done at approximately the same longitude, and that worked well. But I want to be able to use all available data to try and determine the best value for the radius of the Earth that I can, so I need to take the different longitudes into account.

Let me introduce the theoretical tool that we can use to do this: the Haversine formula:

\sin^2(d/(2R))=\sin^2(\Delta \phi/2)+\frac{1}{2}[\cos(\phi_1-\phi_2)+\cos(\phi_1+\phi_2)]\sin^2(\Delta \lambda/2)

(This isn’t quite the form of the Haversine function you’ll see in the Wikipedia link above, but it’s the form I used, and it is equivalent to the Wikipedia one.) Here \Delta \phi=\phi_2-\phi_1 is the difference in latitude, \Delta \lambda is the difference in longitude, d is the arclength distance between the two points, and R is the radius of the Earth.
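In code, that right-hand side (call it h) might look like the following sketch; the function name and argument choices are mine, not necessarily what is in the notebook:

from math import sin, cos
def h(lat1, lat2, dlon):
    # Right-hand side of the Haversine formula; all angles in radians.
    # Note that (1/2)[cos(lat1 - lat2) + cos(lat1 + lat2)] is identical to cos(lat1)*cos(lat2).
    return sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2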

My first plan failed: I used this formula to determine the distance between the two points, assuming the radius of the Earth was the known value. Then I used a second application of the Haversine formula to determine the distance s along the latitude line, which I then used in R=s/\Delta \theta with my measured values of the shadow angle \Delta \theta=\theta_2-\theta_1. The measured radii were all over the place, with an almost flat distribution from 0 up to (and over!) 100 000 km. The radius of the Earth is approximately 6370 km…

I believe the problem was that this approach directly compares the known latitude change \Delta \phi with the measured change in the shadow angle \Delta \theta, and a little study of that diagram will tell you they should be the same. But when you look at the data, the measured shadow angle changes were often very far off from the change in latitude (by factors of 3, 5, even 10!), which made the radius correspondingly erroneous.

So my next thought was to use the Haversine formula twice: once with the known values of the latitude, once with the measured values for the shadow. We would get two different measurements of the distance along the arc, and assuming they are supposed to be the same (as if we had gotten out our meter stick and walked between all the points, we should get the same answer both ways), we can determine the radius of the Earth implied by the difference. In a little more detail: call h(\phi,\lambda) the right-hand side of the Haversine formula above. We have two formulas for this ratio:

d/R_0=2\arcsin(\sqrt{h(\phi,\lambda)}),\qquad d/R=2\arcsin(\sqrt{h(\theta,\lambda)})

where R_0 is the known value of the radius of the Earth. Dividing these two equations by each other we can solve for the radius R implied by our shadow measurements:

R=R_0\frac{\arcsin \sqrt{h(\phi,\lambda)}}{\arcsin \sqrt{h(\theta,\lambda)}}
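With the h sketched above, a single pair of measurements would then be processed along these lines (again a sketch, with variable names of my own choosing):

from math import asin, sqrt
R0 = 6371.0   # accepted mean radius of the Earth, in km
def radius_from_pair(lat1, lat2, theta1, theta2, dlon):
    # Radius implied by one pair of measurements; all angles in radians.
    return R0 * asin(sqrt(h(lat1, lat2, dlon))) / asin(sqrt(h(theta1, theta2, dlon)))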

In this way we are not really doing a truly independent measurement of the radius of the Earth, but we are certainly checking what value our shadow measurements imply for it.

UPDATE: There is an additional complication here, which I missed in the original version of this post: latitude is measured from the equator, but the equator is tilted with respect to the plane of the Earth’s orbit around the Sun (that plane is called the ecliptic). This tilt is around 23 degrees, but the angle of the Sun relative to the equator changes throughout the year, because the Earth’s axis stays fixed with respect to the background stars (this is what gives us the seasons!). On this particular day, though, we can determine that angle directly from the data, simply by computing \alpha=\theta-\phi for each measurement and averaging. When I do that, I get \alpha=11.3^\circ. That changes the Haversine formula for the shadow angles to the following:

h(\theta,\lambda)=\sin^2(\Delta \theta/2)+\frac{1}{2}[\cos(\theta_1-\theta_2)+\cos(\theta_1+\theta_2-2\alpha)]\sin^2(\Delta \lambda/2)
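Since \alpha=\theta-\phi, this is the same as feeding the shifted angles \theta_i-\alpha into the original formula, so in code the correction can be as simple as the following (continuing the sketch above, with \alpha converted to radians):

from math import radians
alpha = radians(11.3)   # offset between shadow angle and latitude, found from the data as above
def radius_from_pair_corrected(lat1, lat2, theta1, theta2, dlon):
    # Same ratio as before, but with the shadow angles shifted back down to latitudes.
    return R0 * asin(sqrt(h(lat1, lat2, dlon))) / asin(sqrt(h(theta1 - alpha, theta2 - alpha, dlon)))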

Ok, so what happens when we try to apply this approach to the real-life data set? You can find the raw data here, and the cleaned versions discussed below here or here (fair warning – some of the angle calculations in the raw data were incorrect, so we had to recalculate them! Thanks great lab students!). I used Sagemath to do the analysis; I’ll discuss what I did but you can find the notebook here.

For our first pass, we find all the pairs (for 33 independent points we get 528 pairs – combinatorics!) and calculate the radius of the Earth implied by each one, as outlined above. The result was:

So obviously, a few things jump out at us about this plot. First, there is an overwhelming number of points that are very small – actually, these are zero. This happened because many of the data points were taken at the exact same location – Middleburg, VA. Obviously, these pairs cannot be used, because they are not separated by any distance on the Earth’s surface. My solution was to average all the points that were taken close together and count them as a single measurement. These kinds of decisions are generally called “applying data cuts”: you remove some part of the data for a good, stated reason.

But how do we know exactly what cuts to apply? It’s pretty convincing to average the points from exactly the same location, which removes those zeros, but what about that long tail from 0 to around 3000-4000 km? If the errors associated with this measurement are randomly distributed, we would expect this distribution to be normal – specifically, symmetric about the most common value. So we have some reason to suspect that long tail – the underestimation of the radius of the Earth – is due to additional points which are too close together to be considered in our analysis.

Looking back at the data, we have a set that looks like

San Francisco, CA, United States of America 37.775369 -122.421869
Hanford, California U.S.A. 36.370196 -119.646648
Los Altos, CA 37.392 -122.109
Hanford California USA 36.368977 -119.646669

So these measurements all came from California, and not particularly close to each other at that. But if you look at all the data less than 3000 km (what I would call “clear underestimates”), all the pairs associated with this list appear there. That tells me that these locations should probably be averaged together as well.

In the end, I combined a bunch of points into 2: Virginia and Maryland (18 points), and California (4 points). It ended up being differentiated by state, but that was just a coincidence – I asked “which pairs are well outside of the expected normal distribution?”, and the answer was consistently the pairs within Virginia and Maryland and within California. The resulting plot looks like

So this is still not obviously a normal distribution, but I didn’t see any geographical reason to combine any of the points further, so this represents “the best I can do”. The final result for the radius of the Earth in this dataset is

R=6402 \pm 126~km

(that error is indeed the error on the average, as it should be, and not the scatter of the sample. And I usually teach my students to report errors to 1 significant figure, with the best value rounded to match, so maybe I should really report this as

R=6400 \pm 100~km)
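(For reference, the error on the average is just the sample scatter divided by \sqrt{N}; something like the following sketch, with placeholder numbers standing in for the real per-pair radii:)

import numpy as np
radii = np.array([6200., 6450., 6380., 6600.])          # placeholder values, not the actual dataset
mean = radii.mean()
err_on_mean = radii.std(ddof=1) / np.sqrt(len(radii))   # standard error of the mean, not the sample std
mean, err_on_mean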

The accepted average radius of the Earth is around 6370 km, so this worked very well! Not only is the real value within our quoted uncertainty, it is actually extremely close to our average (within 0.5%!). Of course, our measurement is not really an independent estimate of the radius of the Earth, because we were not able to go out and measure the distance between all the points – we had to use the Haversine formula for that, which itself included the radius of the Earth! I don’t really see how this could lead to the overestimate we see here, but it is worth keeping in mind.

(As a sidenote: A previous version of this post did not include the difference in angle between the ecliptic and the equator, and the result was a constant overestimation of the radius by about 1000 km. That’s called a systematic error, and I was able to deal with it by finding the angle on that day within the data itself. )

So the last thing we should do, as responsible data scientists, is to look for additional patterns in the data. The simplest way of doing this is probably simply graphing all the variables against each other. Unfortunately, we only have a few choices here. For instance, here is the radius of the Earth as a function of the change in latitude between the two observation points:

That actually doesn’t seem to tell us much. I don’t see any convincing sign that the clustering is stronger at any particular change in latitude. If I plot the change in the shadow angle against the calculated radius of the Earth, I basically get the same thing as the plot above. The longitude plot is a bit more illuminating:

So here we can see a relatively clear trend: both smaller estimates for the radius of the Earth and more variation in the radius as the change in longitude becomes small. That echoes what we found when cleaning the data – points that are near each other (although apparently only nearby in longitude!) tend towards gross underestimates of the radius of the Earth.

One further plot we can look at is the difference in shadow angle against the difference in latitude.

That looks confusing, but I think it is showing us exactly what we expect. As the change in latitude gets larger, the change in shadow angle gets larger too, in the same direction and with some scatter (you can imagine a straight line running from the lower left to upper right of this plot). Since we expect all the points to lie right on that line (with some scatter), I think this is just confirming our expectations.

That’s pretty much all the additional analysis we can really do. We can’t really do anything with the “time” column in the raw data, because I ended up having to merge some points taken at different times.  When I started this, I was hoping that column might provide some interesting analysis because the instructions were “do this at noon”, but of course that’s not when the Sun is actually highest in the sky. For instance, “Solar noon” occurred at 12:30 in my location. Alas, we’ve lost most of that information, and now I’m not sure it would be much use to us.

And of course, we have to decide when to stop; the nature of this field is that you are never really done analyzing a particular data set, even one as small as this. We could try to tease some information out of the observation time, or go back and evaluate individual measurements, like try to determine the altitude of each observation. In reality, at some stage you have to determine the “level of truth” at which you are going to say “at this point, I am going to accept this information as fact”. I’ve referred to this basic concept as The Axiom of Measurement, and argued that it should be incorporated into our scientific lexicon.

So, that was quite a journey for something which was done reasonably accurately over 2000 years ago! But I think it’s a great illustration of crowdsourcing a scientific project, and also gave us a chance to play around with some of the tools and philosophies surrounding the field of data science.



#MeasureEarth: a world-wide experiment to determine whether Earth is flat or round

Just want to draw some attention to this cool world-wide data collection project happening on Oct 24. I am hoping there will be a lot of involvement, because I am planning on making this part of a laboratory class I teach here at Merrimack College. The more people who participate, the more data we will have to play with!


A group of people are claiming that the Earth is flat.  Meanwhile, much of humanity believes the Earth is round.  Who is right, and who is wrong?

Folks: we live in the Internet age, which means we can test these ideas with an EXPERIMENT.  To participate, all you need are three one-meter measuring sticks, a sunny day, some flat ground, and an Internet connection.  I hereby declare October 24th, 2017 as the day we #MeasureEarth.

What to measure

The goal is to determine the length of shadow your stick creates when you hold it straight up on a specific date at a specific time of day.  Then compare the length of shadow you measured with what people in other parts of the world measured.

Wait, why?

Do you expect people at other latitudes to measure the same length of shadow that you measure?  Why or why not?  What do you…


Livestream of Solar Eclipse 2017

As many of you know, there is going to be a total Solar Eclipse today, Aug 21, 2017. Here in Massachusetts we will not experience totality, but because I am running some outreach for the students at Merrimack College, I will be livestreaming through the eyepiece of our solar telescope. Hopefully the stream will be up from 1:30 pm to around 4:00 pm EST. The Youtube link is here:


To get this image, I am using a solar telescope with an H-alpha filter, which allows us to view things like solar prominences and other structures in the corona. I am viewing directly with my cell phone and a camera app called “Camera VF-5”.

Happy viewing, and also check out the NASA livestream, at https://eclipse2017.nasa.gov/eclipse-live-stream. Of course, they have a bunch more interesting things going on, but we are all at the mercy of the weather!

Knot Theory on Sage

I recently found myself needing to know some things about knots – calculating fundamental groups and polynomial invariants, specifically. Going to the sources (Fox’s 1962 classic “A Quick Trip Through Knot Theory” is really great mathematical reading) reveals there is a pretty straightforward algorithm for doing these kinds of things, but it seems like I’m going to have to work out a lot of them. So I thought “oh, maybe it’s easy to do this in SageMath?”

Well, it totally is, so I’m going to just show a few examples. Most of what I’m going to show can easily be found on the Sage help pages for links, but I’ll focus on specifically what I am interested in. Since I’m also a beginner at Sage, this will be a very basic tutorial.

First, we should make sure we can use the tools on Sage to reproduce known results, so I’ll do that with the most popular non-trivial knot, the trefoil (it also happens the trefoil is a (2,3) torus knot, and I’m going to be interested in torus knots in general). I’ve tested the following both on my local installation of Sage and on the live SageMathCell.

First, we need to tell Sage which knot we are interested in by giving it the linking of the arcs of the knot. Each arc needs a number, and we need an orientation. The figure on the right shows my picture for the trefoil. Notice that the definition of “arc” is “between two crossings”, so although in this particular projection arcs 3 and 5 are “the same line”, in the link representation they are represented differently. There are three crossings, so the way you tell Sage which knot you want is by specifying these three crossings as lists of the arcs, starting with the incoming undercrossing arc and going clockwise. This is done in the following manner:
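(The exact quadruples depend on how you number the arcs in your own figure; with one standard labeling of the trefoil, the call looks something like:)

K = Link([[1, 4, 2, 5], [3, 6, 4, 1], [5, 2, 6, 3]])   # one list of four arc labels per crossing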


So first (and maybe most impressively), you can get a nice plot of your knot:
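(Assuming the knot was stored as K above, that is just:)

K.plot()   # draws the planar diagram of the knot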


It’s easy to verify this is the same knot that I drew above, although the orientation is not specified. We can also find the fundamental group,
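(Again a one-liner on K:)

K.fundamental_group()   # Wirtinger-style presentation read off the crossings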


Finitely presented group < x0, x1, x2 | x1*x0^-1*x2^-1*x0,x0*x2^-1*x1^-1*x2, x2*x1^-1*x0^-1*x1 >

Which we can simplify to the traditional representation by storing it as a group first:
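(Something like:)

G = K.fundamental_group()
G.simplified()   # Tietze simplification down to two generators and one relator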

Finitely presented group < x0, x1 | x1*x0^-1*x1^-1*x0^-1*x1*x0 >

And, finally, we can easily find both the Alexander polynomial and the Jones polynomial:
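(These are built in as well; their outputs are shown below:)

K.alexander_polynomial()
K.jones_polynomial()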

t^-1 - 1 + t
1/t + 1/t^3 - 1/t^4

So although working through the Fox derivatives is kinda fun, this is clearly easier!

So the problem I’m actually interested in is the following: given a covering space p:M\to \mathbb{S}^3 branched over a knot K, what is the preimage of the knot, p^{-1}(K)? It turns out that there are some nice results in this area, but mostly dealing with the branched cover M rather than with the knot K. For instance, if you pick K carefully (the Borromean Rings, for example), you can construct any closed 3-manifold M as such a branched covering. The problem is, there aren’t many results on what the preimage of the knot actually is. What we do have is an algorithm for a presentation of the fundamental group (presented by Fox in that earlier article as a variation of the Reidemeister–Schreier algorithm). Basically, given a knot K and a representation of the group of the knot into a symmetric group S_n, we can determine a presentation of the fundamental group in the cover. So that doesn’t directly tell us the knot itself, but at least we can use a presentation of the fundamental group to calculate the knot invariants and maybe learn something about the knot in that manner.

The problem with trying to do exactly what we did above is Sage doesn’t know how to find the Alexander polynomial directly from a group presentation. However, it does know how to find the Alexander matrix, so as long as we are careful with polynomial rings we can use this to find the Alexander polynomials.

What I’m going to do is an example from Fox. This is actually example 1 in his article, which is the knot shown below. I’ve added orientations which correspond to the later calculations.

To find the group presentation, we take the generators specified in the picture and define some relations, which are rather like the link relations we used above. Around each crossing, going under the first undercrossing is the same as going under the other three crossings, taking orientations into account. For instance, you can read the first one off the figure,


What we need to do is figure out how to implement these relations in Sage to reproduce the same fundamental group.

The first step will be to define a free group on the generators; in Sage this is easy:
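(With the four generators from the figure:)

F = FreeGroup(4)   # free group on generators x0, x1, x2, x3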


Next we need to create our finitely presented group by taking the quotient of this free group with our relations. In Sage, each relation can be specified by first ordering the generators (1-4 in this case), and then specifying words by sequences of integers, where each integer represents a generator. The sign of the integer indicates the power to which the generator is raised. For instance, the word abc^{-1}a would be [1,2,-3,1] and the word a^3 would be [1,1,1]. Doing this for the knot above gives us
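(I’ll leave the actual relators to the figure, but the syntax looks like the following, using the two example words above as stand-ins:)

G = F / [[1, 2, -3, 1], [1, 1, 1]]   # illustration only: these are a*b*c^-1*a and a^3, not the knot's relators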


(Pay careful attention to the parentheses and brackets – Sage interprets them differently, and it matters whether you have a series of relators, as is the case here, or a single relator. For a single relator, the syntax is


So the next step is to find the Alexander matrix, which Sage knows how to do. But we want to first specify the ring over which we want to define the matrix. We really should use Laurent polynomials, since that’s how the Alexander matrix is defined, but the last step (taking determinants) is not implemented in Sage for Laurent polynomials, so the trick here is to use polynomials over an integer ring:
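(Something along the lines of:)

R.<t> = PolynomialRing(ZZ)   # ordinary integer polynomials in t, instead of Laurent polynomials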


With that done, we can find the Alexander matrix under the Abelianizing map, which sends each generator to a single generator t (which I defined above):
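(With the knot group stored as G, and every generator sent to t:)

M = G.alexander_matrix([t, t, t, t])
M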

[-t^2 + 2*t - 1             -t              t  t^2 - 2*t + 1]
[ t^2 - 2*t + 1 -t^2 + 2*t - 1             -t              t]
[             t  t^2 - 2*t + 1 -t^2 + 2*t - 1             -t]
[            -t              t  t^2 - 2*t + 1 -t^2 + 2*t - 1]

(You can also just call this with empty parentheses () and get the Alexander matrix before the Abelianization.)

Finally, we need to find the generator of the first elementary ideal of this matrix, which is the Alexander polynomial. I grabbed this code from someone smarter than me (a mathematician by the name of Nathan Dunfield):

alex_poly = gcd(M.minors(G.ngens() - 1))

There’s obviously nothing too magic about this line, I just wasn’t aware Sage knew about finding the GCD of polynomials. Anyway, we can check to make sure our answer is correct (that is, matches Fox’s original answer)

print alex_poly
t^6 - 5*t^5 + 10*t^4 - 13*t^3 + 10*t^2 - 5*t + 1

And it does!

So of course, I haven’t talked at all about the physics of these preimages of knots p^{-1}(K). This is related to a talk I gave at a recent conference The First Minkowski Meeting on the Foundations of Spacetime Physics, so I’ll post something on that as I work on the paper for the conference proceedings.

Primordial Black Holes as Dark Matter?

I was recently asked by a family friend “have you heard about this new idea that primordial black holes could explain dark matter?”

Well I hadn’t, so I did a little investigating and it’s a pretty clever idea. Part of the backstory here is “what can we do with gravitational waves?”, so that’s where I’ll start.

One of the surprising things about the very first direct observations of gravitational waves by LIGO is the masses of the constituent black holes. The first pair was 36 and 29 solar masses, the second was 14 and 8, and the third was 31 and 19. What was immediately understood to be important about these sources is that they are generally more massive than the stellar-mass black holes we’ve found previously (usually from X-ray studies; the maximum there is around 18 solar masses). Significantly, the larger-mass ones should also be *less* likely to form in stellar formation scenarios. So while we are only talking about 6 new black holes, we clearly need to know if they pose a problem for stellar formation models. (There are also some issues in regard to the spins of these black holes, but I won’t go down that particular rabbit hole.)

So people started looking at it, and found that it is generally possible to get these kinds of higher-mass black holes, but it does put some constraints on the formation scenarios. Basically, the problem is you need to make giant stars, which generally need to have low metallicity to form. However, the conditions that generate those stars (a high star formation rate in the past) generally turn out to produce higher overall metallicity more quickly. If you tune the star formation rate down a bit, so there are actually fewer large-mass stars, you keep the overall metallicity low enough to effectively create massive black holes. So it’s constraining, but not overly so.

But that’s actually not what I want to talk about – what about other formation scenarios for these black holes? Specifically, what about primordial black holes (PBH)? These are black holes that formed as a result of density fluctuations in the early universe. It turns out it’s pretty easy to produce black holes of this mass in this manner (and the spin, which I skipped talking about above, is a little easier to produce as well). So, cool, we have at least two different ways the universe can give us the black holes found by LIGO.

But, are there any other implications of primordial mass black hole production at this rate? Well, without a stellar companion, there would typically not be an accretion disk and we would have no way to observe these black holes. But of course – that’s exactly the condition we need for dark matter!

So, in a recent paper, Juan Garcia-Bellido and his collaborators (who include Sebastien Clesse, Andre Linde, and David Wands) have worked this out in a bit of detail (and apparently there are others working on this as well, such as Alexander Kashlinsky).

The idea that black holes (or other compact objects) could be a model for dark matter is not new, actually. We’ve been looking for microlensing due to compact objects in the galactic halo for years (these objects are called MACHOs), but have essentially found nothing. What’s interesting about the new models is that the mass distribution for primordial black holes in the 10-100 solar mass range sits right in a region of parameter space which has not been covered by previous studies:

As you can see in the figure (which comes from the paper), the lower limits on PBHs have a gap in between the lower-mass MACHO/EROS observations and the higher-mass WMAP3/FIRAS observations. It looks to me like that gap opens around 0.01 of a solar mass and carries up to around 100. That is a broad range for black holes, but look at the full range we are talking about here (25 orders of magnitude!).

So there are lots of other interesting details here, but what’s really fascinating about this new paper is that there is apparently a very large set of phenomenological signals we can use to test this hypothesis. It would affect the CMB, star formation in the early universe, X-ray transients, and a whole host of others. One particularly interesting idea is that rather than looking for lensing, we might try to look for the shift of the positions of stars over time. With the new plethora of data on stellar positions (like that from the GAIA satellite), it also might be the first time someone could actually attempt such a study. So there are a lot of interesting things to check.

As a sidenote, some of these black holes would of course develop an accretion disk through random interactions with stars or gas, and produce point sources that would emit in Gamma or X-ray range. Well, there actually is a large list of unidentified point sources in nearly all the X-ray catalogs. In fact, my undergraduate honors thesis was working on trying to identify unknown point sources in a Chandra X-ray image of the galactic center. The paper suggests that rather than looking at spectral characteristics, one should look for a correlation between the point sources and the expected dark matter distribution.

So, we’ve got LIGO finding a new class of black holes, which could have been created in the early universe, and a new model for dark matter. Given how much trouble the particle model for dark matter is having (sorry LHC!), we should be taking these new ideas seriously. And what’s great about this is that there are a *bunch* of great ways to look for this primordial black hole signal. Of course, maybe that means it won’t last long as an explanation for dark matter, but it’s something new to look at that doesn’t require any exotic new physics.

And, not to belabor the point, but all of this wouldn’t have been possible without LIGO. Thanks LIGO!


Comments on Max Tegmark’s Hierarchy of Reality

I’m in the middle of reading Max Tegmark’s recent book Our Mathematical Universe, which is (so far, I’m about halfway through) mostly about the idea that it’s possible the simplest (or most natural) interpretation of quantum mechanics directly leads to the conclusion that multiple universes must exist. I just finished reading an interesting “excursion” chapter in which he discusses the nature and perception of reality, and I would like to make some comments on it because it differs from my own work on the subject.

(my essay on this topic can be found here.)

Tegmark breaks reality into three pieces, and it will be easiest to see what’s going on if I show you the actual figure in the book (this is shamelessly stolen from Tegmark, and all credit is his. If it turns out he’s not ok with this, I hope he’ll let me know!)


Tegmark’s Hierarchy of Reality, from Our Mathematical Universe (Although I’m assigning the word “hierarchy” to it for my own devious purposes)

The idea here is that our perception of reality (“Internal Reality”) is governed by our senses, like sight and touch and smell. We interact directly with a version of reality which we can all agree on called “Consensus Reality”, and that consensus reality is a result of something which is abstractly true, “External Reality”. In the book he makes the point that to determine the fundamental “theory of everything”, we don’t need to actually understand human consciousness, because that’s explicitly separated from consensus reality by our own perceptions.

While there certainly are elements to this hierarchy that I like, I actually think making these divisions is pretty arbitrary. I can easily ask my physics I students questions which will break “consensus reality” but stay in the realm of classical physics. For instance, I recently asked someone “what is the acceleration of an object in projectile motion?” and they responded “in the direction of motion”, indicating the parabolic path. Ok, I asked a well-defined mathematical question and received an (incorrect) response that left the bounds of mathematical rigor, but it was about classical physics, and therefore solidly in Tegmark’s “consensus reality”. The student’s level of analysis was not high enough to understand that “acceleration” does not mean “velocity” (or whatever else they might have thought I meant), but it was within *their* consensus reality.

What am I driving at? Perhaps the reality we can all agree on is not mathematical, but only descriptive in nature. For instance, the student and I can both draw pictures of how an object moves in projectile motion because we’ve seen real-life objects move in projectile motion. On the other hand, if mathematics is objectively “right” then I can prove some versions of consensus reality incorrect (“The day is 24 hours long”). Of course, no one would really say “the day is 24 hours long” is *wrong*, just that if you define the day with respect to the background stars, you get something a little bit shorter.

So even if we split off the “perception of reality” piece from our hierarchy of reality, we still end up with some rather arbitrary definitions of reality, from purely mathematical up to descriptive. This suggests that reality should be viewed as a continuum, with no clear boundaries between abstractly true and subjectively true, which all occur at different levels of detail. So what can we use to determine which level we are talking about? I’ve called such a thing the axiom of measurement, and you can check out the link in the first paragraph if you want to read the original essay.

The idea is that in order to determine a standard of “truth”, we need a standard of “measurement”. I can verify the statement “objects in projectile motion move along parabolas” as long as I use a measurement tool which is not accurate enough to see the effects of air resistance. That defines our “consensus reality”. But once I build a better tool, I can prove our consensus reality wrong, which requires us to redefine it at each moment, for each measurement. Thus we have a natural scale for truth, defined experimentally by whatever apparatus we have available.

For me, the bonus with this approach is that you know when things are true; they are true when you know an experiment can confirm them. What you lose is the concept of absolute truth, but it’s easy to argue that the concept of absolute truth has brought us nothing but trouble anyway!

(Just as a note, I think we necessarily lose absolute truth because we would have to be able to say “we will never design an experiment to prove this wrong”, and I don’t think we will ever be able to do that. Can anyone imagine an experiment to prove that 1+1 is not 2? I think it might strain the logical system I’m working in. Anyway, more thought on this is required.)

Of course, I’m really not trying to be super-critical of Tegmark; I actually like some of his analysis. But I think his splitting here is somewhat on the homo-centric side, since it includes human perceptions at all levels (after all, we didn’t even know about the transition between quantum and classical reality until ~100 years ago. I worry about a definition of reality which shifts in time!). If we include the experimental apparatus in the very definition of our theoretical model, we achieve consistency without having to worry about either cognitive science or a shifting consensus reality.

Chaos and SageMath

This semester I’m teaching our Analytical Mechanics course, and I just finished a day on Introduction to Chaos. In this class, we are using the SageMath language to do some simple numerical analysis, so I used Sage to demonstrate some of the interesting behavior you find in chaotic systems. Since I’m learning Sage myself, I thought I would post the result of that class here, to demonstrate some of the Codes and the kinds of plots you can get with simple tools like Sage.

Before getting into the science: SageMath is a free, open-source mathematics software package which includes things like Maxima, Python, and the GSL. It’s great because it’s extremely powerful and can be used right in a web browser, thanks to the Sage Cell Server. So I did all of this right in front of my students, to demonstrate how easy this tool is to use.

For the scientific background, I am going to do the same example of the driven, damped pendulum found in Classical Mechanics by John Taylor (although the exact same system can be found in Analytic Mechanics, by Hand and Finch). So, I didn’t create any of this science, I’m just demonstrating how to study it using Sage.

First, some very basic background. The equation of motion for a driven, damped pendulum of length L and mass m being acted upon by a driving force F(t)=F_0\cos\omega t is

\ddot{\phi}+2\gamma\dot{\phi}+\omega_0^2\sin(\phi)=f\omega_0^2\cos(\omega t)

\gamma here is the damping term and f=F_0/(mg), the ratio of the forcing amplitude to the weight of the pendulum. In order to get this into Sage, I’m going to rewrite it as a system of first-order linear differential equations,

\dot y=x

\dot x=f\omega_0^2\cos(\omega t)-\omega_0^2\sin(y)-2\gamma x

This is a typical trick to use numerical integrators, basically because it’s easy to integrate first-order equations, even if they are nonlinear.

It’s easiest to find chaos right near resonance, so let’s pick the parameters \omega_0=3\pi and \omega=2\pi. This means the t-axis will display in units of the driving period, 1 s. We also take \gamma=3\pi/4. The first plot will be this system when the driving force is the same as the weight, that is, f=1; the code and result (Figure 1) are shown below.

from sage.calculus.desolvers import desolve_system_rk4
x, y, t = var('x y t')
f, w, w0, g = 1, 2*pi, 3*pi, 3*pi/4   # drive strength, drive frequency, natural frequency, damping (names assumed)
P = desolve_system_rk4([f*w0^2*cos(w*t) - w0^2*sin(y) - 2*g*x, x], [x, y], ics=[0, 0, 0], ivar=t, end_points=15)   # assumed reconstruction of the solver call; rows of P are [t, x, y]
Q = [[i, k] for i, j, k in P]   # keep the (t, phi) pairs
intP = spline(Q); plot(intP, 0, 15)   # interpolate phi(t) and plot it

Figure 2 is a plot with the driving force slightly bigger than the weight, f=1.06.

Figure 1, f=1

Figure 2, f=1.06

This demonstrates an attractor, meaning the steady-state solution eventually settles down to oscillate around \phi\approx 2\pi. We can check this is actually still periodic by asking Sage for the value of \phi at t=30 s, t=31 s, etc., by calling this line instead of the plot command above

[intP(i) for i in range(30,40)]

(Note that we also have to change the range of integration from [0,15] to [30,40].) The output is shown in Figure 3; the period is clearly 1.0 s out to four significant figures.


Figure 3: The value of \phi at integer steps between t=30 and t=40 for f=1.06.


Figure 4: The value of \phi at integer steps between t=30 and t=40 for f=1.073.


Figure 6: The value of \phi at integer steps between t=30 and t=40 for f=1.077.


Next, let’s increase the forcing to f=1.073. The result is shown in Figure 7. The attractor is still present (now with a value of around \phi=-2\pi), but the behavior is much more dramatic. In fact, you might not even be convinced that the period is still 1.0 s, since the peaks look to be different values. We can repeat our experiment from above, and ask Sage to print out the value of \phi for integer timesteps between t=30 and t=40. The result is shown in Figure 4. The actual period appears to be 2.0 s, since the value of \phi does not repeat exactly after 1.0 s. This is called period doubling.


Figure 7: f=1.073


Figure 8: f=1.077


In Figure 8, I’ve displayed a plot with f=1.077, and it’s immediately obvious that the oscillatory motion now has period 3.0 s. We can check this by playing the same game, shown in Figure 6.

Now we are in a position to see some unique behavior. I am going to overlay a new solution onto this one, but give the second solution a different initial value, \phi(0)=-\pi/2 instead of \phi(0)=0. The code I am adding is

P2 = desolve_system_rk4([f*w0^2*cos(w*t) - w0^2*sin(y) - 2*g*x, x], [x, y], ics=[0, 0, -pi/2], ivar=t, end_points=15)   # assumed: same solver call as before, but with phi(0) = -pi/2
Q2 = [[i, k] for i, j, k in P2]
intP2 = spline(Q2)
plot(intP, 0, 15) + plot(intP2, 0, 15, linestyle=":", color='red')

The result is shown in Figure 8. Here we can see the first example of the sensitivity to initial conditions. The two solutions diverge markedly once you have a slightly different initial condition, heading towards two very different attractors. Let’s plot the difference between the two oscillators, but with only a very small difference in the initial conditions, \Delta\phi(0)=1\times 10^{-4}. The code follows:

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":", color='red')
plot(lambda x: abs(intP(x)-intP2(x)),0,15)



Figure 8: Two oscillators with f=1.073 but with different initial values, \Delta\phi(0)=-\pi/2.


Figure 9: A plot of the difference in the oscillators over time, \Delta\phi(t), for f=1.077 and a very small initial separation, \Delta \phi(0)=1\times 10^{-4}.


This is shown in Figure 9. It clearly decays to zero, but that’s hard to see so let’s plot it on a log scale, shown in Figure 10.

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":", color='red')
plot_semilogy(lambda x: abs(intP(x)-intP2(x)),0,15)

Now, let’s see what happens if we do this same thing, but push the force parameter over the critical value of f=1.0829. This is displayed in Figure 11, for f=1.105. We get completely the opposite behavior: the difference between the oscillators grows, driven away from the small initial separation. This is the essence of “Jurassic Park chaos” – a small change in the initial conditions (like a butterfly flapping its wings in Malaysia) causes a large change in the final outcome (a change in the weather pattern over California).


Figure 10: A log plot of two oscillators with f=1.073 but with very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}.


Figure 11: Finally, a log plot of two overcritical oscillators (f=1.105) and very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}