Comments on Max Tegmark’s Hierarchy of Reality

I’m in the middle of reading Max Tegmark’s recent book Our Mathematical Universe, which is (so far, I’m about halfway through) mostly about the idea that it’s possible the simplest (or most natural) interpretation of quantum mechanics directly leads to the conclusion that multiple universes must exist. I just finished reading an interesting “excursion” chapter in which he discusses the nature and perception of reality, and I would like to make some comments on it because it differs from my own work on the subject.

(my essay on this topic can be found here.)

Tegmark breaks reality into three pieces, and it will be easiest to see what’s going on if I show you the actual figure in the book (this is shamelessly stolen from Tegmark, and all credit is his. If it turns out he’s not ok with this, I hope he’ll let me know!)

Tegmark’s Hierarchy of Reality, from Our Mathematical Universe (Although I’m assigning the word “hierarchy” to it for my own devious purposes)

The idea here is that our perception of reality (“Internal Reality”) is governed by our senses, like sight and touch and smell. We interact directly with a version of reality which we can all agree on called “Consensus Reality”, and that consensus reality is a result of something which is abstractly true, “External Reality”. In the book he makes the point that to determine the fundamental “theory of everything”, we don’t need to actually understand human consciousness, because that’s explicitly separated from consensus reality by our own perceptions.

While there certainly are elements to this hierarchy that I like, I actually think making these divisions is pretty arbitrary. I can easily ask my Physics I students questions which will break “consensus reality” but stay in the realm of classical physics. For instance, I recently asked someone “what is the acceleration of an object in projectile motion?” and they responded “in the direction of motion”, indicating the parabolic path. Ok, I asked a well-defined mathematical question and received an (incorrect) response that left the bounds of mathematical rigor, but it was about classical physics, and therefore solidly in Tegmark’s “consensus reality”. The student’s level of analysis was not high enough to understand that “acceleration” does not mean “velocity” (or whatever else they might have thought I meant), but it was within *their* consensus reality.

What am I driving at? Perhaps the reality we can all agree on is not mathematical, but only descriptive in nature. For instance, the student and I can both draw pictures of how an object moves in projectile motion because we’ve seen real-life objects move in projectile motion. On the other hand, if mathematics is objectively “right” then I can prove some versions of consensus reality incorrect (“The day is 24 hours long”). Of course, no one would really say “the day is 24 hours long” is *wrong*, just that if you define the day with respect to the background stars, you get something a little bit shorter (the sidereal day, about 23 hours and 56 minutes).

So even if we split off the “perception of reality” piece from our hierarchy of reality, we still end up with some rather arbitrary definitions of reality, from purely mathematical up to descriptive. This suggests that reality should be viewed as a continuum, with no clear boundary between the abstractly true and the subjectively true, each occurring at a different level of detail. So what can we use to determine which level we are talking about? I’ve called such a thing the axiom of measurement, and you can check out the link in the first paragraph if you want to read the original essay.

The idea is that in order to determine a standard of “truth”, we need a standard of “measurement”. I can verify the statement “objects in projectile motion move along parabolic paths” as long as I use a measurement tool which is not accurate enough to see the effects of air resistance. That defines our “consensus reality”. But once I build a better tool, I can prove our consensus reality wrong, which requires us to redefine it at each moment for each measurement. Thus we have a natural scale for truth, defined experimentally by whatever apparatus we have available.

For me, the bonus with this approach is that you know when things are true; they are true when you know an experiment can confirm them. What you lose is the concept of absolute truth, but it’s easy to argue that the concept of absolute truth has brought us nothing but trouble anyway!

(Just as a note, I think we necessarily lose absolute truth because we would have to be able to say “we will never design an experiment to prove this wrong”, and I don’t think we will ever be able to do that. Can anyone imagine an experiment to prove that 1+1 is not 2? I think it might strain the logical system I’m working in. Anyway, more thought on this is required.)

Of course, I’m really not trying to be super-critical of Tegmark; I actually like some of his analysis. But I think his splitting here is somewhat on the homocentric side, since it includes human perceptions at all levels (after all, we didn’t even know about the transition between quantum and classical reality until ~100 years ago. I worry about a definition of reality which shifts in time!). If we include the experimental apparatus in the very definition of our theoretical model, we achieve consistency without having to worry about either cognitive science or a shifting consensus reality.

Chaos and SageMath

This semester I’m teaching our Analytical Mechanics course, and I just finished a day on Introduction to Chaos. In this class, we are using the SageMath language to do some simple numerical analysis, so I used Sage to demonstrate some of the interesting behavior you find in chaotic systems. Since I’m learning Sage myself, I thought I would post the result of that class here, to demonstrate some of the code and the kinds of plots you can get with simple tools like Sage.

Before getting into the science: SageMath is free, open-source mathematics software which includes things like Maxima, Python, and the GSL. It’s great because it’s extremely powerful and can be used right in a web browser, thanks to the Sage Cell Server. So I did all of this right in front of my students, to demonstrate how easy this tool is to use.

For the scientific background, I am going to do the same example of the driven, damped pendulum found in Classical Mechanics by John Taylor (although the exact same system can be found in Analytical Mechanics by Hand and Finch). So, I didn’t create any of this science, I’m just demonstrating how to study it using Sage.

First, some very basic background. The equation of motion for a driven, damped pendulum of length L and mass m being acted upon by a driving force F(t)=F_0\cos\omega t is

\ddot{\phi}+2\gamma\dot{\phi}+\omega_0^2\sin(\phi)=f\omega_0^2\cos(\omega t)

\gamma here is the damping term and f=F_0/(mg) is the ratio of the forcing amplitude to the weight of the pendulum. In order to get this into Sage, I’m going to rewrite it as a system of first-order differential equations,

\dot y=x

\dot x=f\omega_0^2\cos(\omega t)-\omega_0^2\sin(y)-2\gamma x

This is a typical trick when using numerical integrators, basically because it’s easy to integrate first-order equations, even when they are nonlinear.

It’s easiest to find chaos right near resonance, so let’s pick the parameters \omega_0=3\pi and \omega=2\pi. This means the t-axis will display in units of the period, 1 s. We also take \gamma=3\pi/4. The first plot will be this system when the driving force is the same as the weight, that is, f=1; the code and the resulting plot (Figure 1) are shown below.

from sage.calculus.desolvers import desolve_system_rk4
x,y,t=var('x y t')  # y is the angle phi, x is its time derivative
w=2*pi              # driving frequency
w0=3*pi             # natural frequency
g=3/4*pi            # damping constant gamma
f=1.0               # drive strength F0/(m g)
# Integrate the first-order system [x', y'] with phi(0)=0, phi'(0)=0
P=desolve_system_rk4([-2*g*x-w0^2*sin(y)+f*w0^2*cos(w*t),x],[x,y],[0,0,0],ivar=t,end_points=[0,15],step=0.01)
Q=[[i,k] for i,j,k in P]  # keep only the (t, phi) pairs
intP=spline(Q)            # interpolate so we can plot a smooth curve
plot(intP,0,15)

Figure 2 is a plot with the driving force slightly bigger than the weight, f=1.06.

Figure 1, f=1

Figure 2, f=1.06

This demonstrates an attractor, meaning the steady-state solution eventually settles down to oscillate around \phi\approx 2\pi. We can check that this is actually still periodic by asking Sage for the value of \phi at t=30 s, t=31 s, etc., calling this line instead of the plot command above

[intP(i) for i in range(30,40)]

(Note that we also have to change the range of integration from [0,15] to [30,40].) The output is shown in Figure 3; the period is clearly 1.0 s out to four significant figures.

Figure 3: The value of \phi at integer steps between t=30 and t=40 for f=1.06.

Figure 4: The value of \phi at integer steps between t=30 and t=40 for f=1.073.

Figure 5: The value of \phi at integer steps between t=30 and t=40 for f=1.077.

Next, let’s increase the forcing to f=1.073. The result is shown in Figure 7. The attractor is still present (now with a value of around \phi=-2\pi), but the behavior is much more dramatic. In fact, you might not even be convinced that the period is still 1.0 s, since the peaks look to be different values. We can repeat our experiment from above, and ask Sage to print out the value of \phi for integer timesteps between t=30 and t=40. The result is shown in Figure 4. The actual period appears to be 2.0 s, since the value of \phi does not repeat exactly after 1.0 s. This is called period doubling.

Figure 7: f=1.073

Figure 8: f=1.077

In Figure 8, I’ve displayed a plot with f=1.077, and it’s immediately obvious that the oscillatory motion now has period 3.0 s. We can check this by playing the same game, shown in Figure 5.

Now we are in a position to see some unique behavior. I am going to overlay a new solution onto this one, but give the second solution a different initial value, \phi(0)=-\pi/2 instead of \phi(0)=0. The code I am adding is

P2=desolve_system_rk4([-2*g*x-w0^2*sin(y)+f*w0^2*cos(w*t),x],[x,y],[0,0,-pi/2],
ivar=t,end_points=[0,15],step=0.01)
Q2=[[i,k] for i,j,k in P2]
intP2=spline(Q2)
plot(intP,0,15)+plot(intP2,0,15,linestyle=":",color="red")

The result is shown in Figure 9. Here we can see the first example of sensitivity to initial conditions: the two solutions diverge markedly once you have a slightly different initial condition, heading towards two very different attractors. Let’s plot the difference between the two oscillators,
|\phi_1(t)-\phi_2(t)|,
but with only a very small difference in the initial conditions, \Delta\phi(0)=1\times 10^{-4}. The code follows:

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":",color="red")
plot(lambda x: abs(intP(x)-intP2(x)),0,15)

Figure 9: Two oscillators with f=1.073 but with different initial values, \Delta\phi(0)=-\pi/2.

Figure 10: A plot of the difference in the oscillators over time, \Delta\phi(t), for f=1.077 and a very small initial separation, \Delta\phi(0)=1\times 10^{-4}.

This is shown in Figure 10. It clearly decays to zero, but that’s hard to see, so let’s plot it on a log scale, shown in Figure 11.

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":",color="red")
plot_semilogy(lambda x: abs(intP(x)-intP2(x)),0,15)

Now, let’s see what happens if we do this same thing, but push the force parameter over the critical value of f=1.0829. This is displayed in Figure 12, for f=1.105. We get completely the opposite behavior: the two solutions are driven away from each other despite their very small initial separation. This is the essence of “Jurassic Park chaos” – a small change in the initial conditions (like a butterfly flapping its wings in Malaysia) causes a large change in the final outcome (a change in the weather pattern over California).
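As a quick check on this picture, one can estimate the rate of the exponential divergence (the largest Lyapunov exponent) directly from the two interpolated solutions. Here is a minimal sketch, assuming the splines intP and intP2 from the code above and a fitting window chosen by eye to lie in the exponential regime:

import numpy as np
# Sample |phi1(t) - phi2(t)| on a window before the difference saturates
ts = np.arange(1.0, 10.0, 0.1)
dphi = np.array([abs(intP(s) - intP2(s)) for s in ts])
# The slope of log|Delta phi| vs t estimates the Lyapunov exponent:
# negative below the critical drive strength, positive above it
lam, intercept = np.polyfit(ts, np.log(dphi), 1)
print(lam)

The window (here 1 s to 10 s) is the only real judgment call; past the saturation time the separation is bounded by the size of the attractor and the fit no longer makes sense.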

Figure 11: A log plot of two oscillators with f=1.073 but with very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}.

Figure 12: Finally, a log plot of two overcritical oscillators (f=1.105) with very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}.

Merging Stars, Star Formation, and Planetary Nebulae

I was reading the latest issue of Sky and Telescope this week and came across an article by Monica Young talking about the formation of massive stars (here’s a link to the highlights; you’ll need an account to actually read it). The gist of the article is that forming massive stars is difficult – as mass accumulates and nuclear reactions begin, the radiation pressure from the young (not yet massive) star will tend to blow material away, halting the growth. This happens around 10 solar masses, so it’s a bit mysterious how we end up with more massive stars than that (and we do – although they are rare, O-type stars are over 15 solar masses, and the most massive stars are over 25). The article covers a few modern approaches, most of which involve particular dynamics by which material is accumulated in a different physical location than the photon flux from the new star. But it was also mentioned that some massive stars are simply caused by merging younger stars, which was the topic of my master’s thesis! Since I’ve never written about it here (and it’s only been published at the academic library), I thought I would give a quick overview of the cute idea and nice results we worked out (“we” being myself and my adviser at the time, Robin Ciardullo).

The problem we were tackling had to do with the Planetary Nebula Luminosity Function (PNLF – there is even a Wikipedia page about this now!). As medium-sized and smaller (under 10 solar masses or so) stars reach the end of their lives, they turn into really pretty objects called planetary nebulae (PNe, and here are some cool Hubble pics). Massive stars a) evolve faster and b) make brighter PNe than their less massive siblings, so over time fewer and fewer bright PNe should be produced by any given population of stars. Further, the luminosity of a PN is primarily due to excitation from the central white dwarf, which also dims over time. Therefore, PNe in a single population of stars should generally be getting less luminous over time. Problem is, that is not observed, at all!

The brightest PNe in a population are the same luminosity, regardless of the age of the population.

The figure above comes from Ciardullo (2006), and demonstrates the problem – all the brightest PNe have the same absolute magnitude, regardless of the age of the stellar population (which goes old to young from top to bottom). This allows you to use PNe as a secondary method to find astronomical distances, but it also shows that there is something fundamentally incorrect with the nice picture of stellar evolution I’ve presented above. The idea explored in my thesis was that as the population aged, stellar mergers produced a ready supply of massive blue stars (called “Blue Stragglers”) which would form the brightest PNe. The advantage of a model like this is that it does not require a significant amount of detailed physics, such as the effects of stellar rotation, wind, or other micro-astrophysics. It is simply a population synthesis approach – we essentially created stellar populations, used standard stellar evolutionary models, but included a small fraction of stars (around 10%) which merged to form more massive stars.
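To give a flavor of what a population synthesis calculation involves, here is a toy sketch (illustrative only – not the actual thesis code): draw single-star masses from a Salpeter-like initial mass function, and let a fixed fraction of stars instead be the sum of two draws, standing in for blue-straggler mergers. The mass limits, IMF slope, and merger recipe here are all assumptions for demonstration purposes.

import random

def salpeter_mass(m_min=0.5, m_max=8.0, alpha=2.35):
    # Inverse-transform sample from a power-law IMF, dN/dm ~ m^(-alpha)
    u = random.random()
    a, b = m_min**(1 - alpha), m_max**(1 - alpha)
    return (a + u * (b - a))**(1.0 / (1 - alpha))

def make_population(n_stars=10000, merger_fraction=0.10):
    masses = []
    for _ in range(n_stars):
        if random.random() < merger_fraction:
            # A blue straggler: two ordinary stars merged into one
            masses.append(salpeter_mass() + salpeter_mass())
        else:
            masses.append(salpeter_mass())
    return masses

pop = make_population()
print(max(pop))  # mergers populate masses above the single-star cutoff

The real calculation then runs each mass through standard stellar evolutionary models to get a white dwarf (and hence PN) luminosity at each population age; the point of the sketch is just that a ~10% merger fraction maintains a supply of stars above the turnoff mass at late times.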

First, let’s take a look at the “standard picture”, with no Blue Stragglers:

Simulated PNLFs with no merging stars.

The ages of the stellar populations are shown in the upper lefthand corner (1-10 Gyr). It clearly displays the effect I talked about – the brightest PNe fade over time as the population ages.

Now let’s take a look at our basic model, including 10% blue stragglers into a population of several different ages:

The PNLF single burst models with 10% blue straggler fraction.

As we expected, the brightest PNe held pretty constant for a variety of stellar population ages (1-10 Gyr, shown in the upper corner, with the 1 Gyr being a bit of an outlier). The absolute magnitude ended up being a little high, and the initial slope was shallower than the observations, but it was clear that the blue stragglers were able to keep the maximum luminosity of the PNLF relatively constant over a wide range in population ages.

It’s worth noting that the two populations of blue stragglers which we are discussing here are actually disjoint. Since PNe form from stars under 10 solar masses, the usual formation scenarios have no trouble making them. It’s only for the stars over 10 solar masses that the merging scenario is invoked as a creation mechanism. On the other hand, both of these merger scenarios are based on stars which form in binary systems and then merge at a later time. So although the end masses are different, the formation mechanism from a blue straggler point of view is the same. It would be interesting to see if one could reproduce the required blue straggler fraction by using the initial binary population. Using both the PNLF and massive-star formation considerations, one might be able to check this over the entire mass range of the initial mass function of binaries. Not something I can see spending time on at the moment, but an interesting question which might even make a nice undergraduate project!

If you are interested in reading the whole thesis, you can check it out here. What I’ve talked about above is only half the story – there is also the “dip” found in some PNLFs (but not M31’s, for instance), which the model tried to address as well.

Cosmic Strings and Spatial Topology

Some of my recent work has focused on foliations of spacetime – which are essentially 1- or 2-dimensional parametrizations of 3- or 4-manifolds. I am mostly interested in them because my work often requires 1- and 2-dimensional embedded spaces, and you can use these initial spaces to construct entire foliations of the manifold. If you have a 2-dimensional space (a surface), you can always smooth it out by squishing all the curvature to a single point, called a conical point (you can try this – next time you get an ice cream cone, take the paper off the cone and try to flatten it. You will be able to do it everywhere except for one point, at the tip of the cone). Mathematically, a surface with a conical point looks exactly like a surface intersecting a cosmic string, so this is a long-winded way of explaining how my interest in foliations has led to an interest in cosmic strings!

The latest part of this story is that, using some of the interesting properties of cosmic strings, I’ve been able to use the lack of observational evidence for them to constrain the nature of the global topology of the universe. This is a pretty interesting idea because there are very few ways that we can study the overall shape of the universe (shape here means topology: does space look like a plane, a sphere, a donut, or what?). There are lots of ways we can study the local details of the universe (the geometry), because we can look for the gravitational effects from massive objects like stars, galaxies, black holes, etc. However, we have essentially no access to the topological structure, because gravity is actually only a local theory, not a global theory (I could write forever about this, but I’ll just leave it for a later post maybe…). We have zero theoretical understanding of the global topology, and our only observational understanding comes from studying patterns in the CMB. The trick with cosmic strings is that they actually serve to connect the local gravitational field (the geometry) to the global structure of the universe (the topology).

The game is this – take a spacetime with cosmic strings running around everywhere, and take a surface which intersects some of them. This surface can always be taken to be flat away from the intersections, which are conical points. If you measure an angular coordinate around each point, you won’t get 2\pi; you’ll get something a bit smaller or a bit larger, since the surface is twisted up around the points. It turns out that if you add up all the twists, you had better get a number fixed by an integer – the genus of the surface. The genus is essentially the number of holes in the surface. A sphere has g=0, a torus g=1, two tori attached to each other have g=2, and so on.
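To make the bookkeeping explicit (this is just the Gauss–Bonnet theorem for a closed surface, stated here without proof): if the surface S is flat away from conical points with cone angles \theta_i, then all of the curvature is concentrated in the angle deficits, which must sum to

\sum_i (2\pi-\theta_i) = 2\pi\chi(S) = 2\pi(2-2g)

so the total twist around the conical points is completely fixed by the genus g.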

Now we consider what kind of observational evidence there is for cosmic strings. The short answer: none! People have been looking for them in the CMB, but so far they’ve only been able to say “if cosmic strings exist, they must be in such-and-such numbers and have energies of such-and-such.” If we use these limits, we find that to a very good approximation, if cosmic strings exist, a surface passing through them must have genus 1, and therefore be a torus (the surface of a doughnut)!

Ok big deal – but here’s where the foliations come in. For example, if we parametrize our spatial (3-dimensional) manifold with tori, the result is a 3-torus. So this actually implies that space is not a sphere, but is a solid torus (like a doughnut). The mathematics behind this statement are actually quite profound, and were worked out in the early days of foliation theory by the likes of Reeb, Thurston, and Novikov. But the idea is that such foliations of 3-manifolds are very stable, and a single closed surface greatly restricts the kinds of foliations allowed for the manifold as a whole.

The arXiv paper where I discuss this in more detail can be found here. This idea that space is not a sphere is not new, and there is actually some evidence for it in the CMB, in the form of a repeating pattern (or a preferred direction) in space. But my primary interest is pointing out that this is an independent way of measuring the topology of the universe, since it’s based on local observations of strings in the CMB rather than overall patterns. If strings don’t actually exist, it can still be used to study the presence of the conical singularities, but I expect the restrictions on the topology are much less strict. Perhaps I’ll look further into that, but for the moment I’m happy with this. It’s a new way to determine information about the global topology of the universe, and it’s a great combination of pure mathematics, theoretical physics, and observational cosmology.

FQXi Essay Contest: The Axiom of Measurement

I’ve submitted an entry for the Foundational Questions Institute Essay Contest: “Trick or Truth: the Mysterious Connection Between Physics and Mathematics”. You can check out my entry here.

The backstory to this is that I was participating in a weekly mathematical physics seminar back at Florida State (although I use the word “seminar” pretty loosely – it was regularly attended by only myself and one other individual!), and in the process of preparing a presentation on some NCG topic, I came across “the Bost-Connes system”. This is a particular C^*-algebra on which you can define some dynamics. What makes it special is that if you calculate the partition function for this dynamical system, you get the Riemann Zeta function! Since the partition function can be used to generate predictions for a statistical mechanical system, I wondered whether it was possible to construct a real physical system with the same symmetry as the Bost-Connes system. Then you would have experimental access to (at least some features of) the Riemann Zeta. There is a great deal of mathematical importance attached to Zeta, including a $1 million prize for locating its zeros!
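The punchline can be stated without any of the C^*-algebra machinery (glossing over quite a bit of structure): in the natural representation, the Bost-Connes Hamiltonian has eigenvalues \log n for n=1,2,3,\dots, so the partition function at inverse temperature \beta is

Z(\beta)=\mathrm{Tr}\,e^{-\beta H}=\sum_{n=1}^{\infty}e^{-\beta\log n}=\sum_{n=1}^{\infty}n^{-\beta}=\zeta(\beta)

which converges for \beta>1, exactly where the Euler product for Zeta does.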

I wasn’t exactly thinking about which Benz to buy with my prize money yet, but I thought it was an interesting idea – experimental verification of a mathematical theorem. I wasn’t aware that anything like this had been done before. Normally the “flow of ideas” works the other way – constructions in mathematics find usefulness in physics, or theoretical models become interesting mathematical systems. It stuck in my head for a while; I did a few calculations to determine what the zeros of a partition function might look like, but nothing really came of it.

When this essay contest came around, I thought it might be an opportunity to share this idea. I figured that if it was going to be taken seriously, you needed to raise experimental verification to the level of mathematics – after all, if I prove a conjecture is true outside of the field in which the conjecture is stated, we should not take the proof very seriously! I needed to make experimental physics a subfield of mathematics. It turns out that this is pretty easy, and so that’s what the essay is about. If you take your physical model as a set of formal axioms, and add in an additional axiom which can be used to experimentally verify a theorem (I call this “an axiom of measurement”), you can formulate physics as a complete formal system. As a bonus, the axiom can be used to add a little more structure to the Platonist viewpoint on universal versus physical forms.

Now, the FQXi Essay Contests are Contests; the community and the public can vote on the quality of the essays, and the quality of the essays varies widely, since nearly anyone is allowed to submit an entry. I actually think my essay represents a pretty mainstream viewpoint about physics – that we are not really studying “nature” or “the universe” when we do physics, we are really studying a “model for the universe”, which is confirmed by our everyday observations as well as carefully constructed experiments. Since it’s not a new, dramatic viewpoint on any particular aspect of the relationship between the two fields, I don’t expect to be winning any awards. But I had an opinion with an interesting idea behind it, and an essay seemed like the ideal place to explore it.

Anyway, if you’re so inclined go over and check out my entry as well as all the others.

I have a troll!

As a practicing physicist I’ve met and interacted with a few people who can be considered Trolls – or maybe “quacks” is a more appropriate term, since rather than just arguing for the sake of arguing, they believe they have discovered some important feature about the world which they alone understand. I’ve been e-mailed by quacks, seen some quack seminars (always the best), and have now had a public debate with one – “public” as in the venue is the comment section on Youtube.

Of course, who cares, everyone on the internet is crazy. Well, this is my first experience so I’m recording it. I’m making some teaching videos for a partially flipped class we are teaching at Merrimack College. Last week, I posted a video about time dilation for my class to watch this week:

(click the youtube link on the bottom right to see the troll I am referring to),

Pretty quickly, I had someone named “Pentcho Valev” asking why the speed of light was constant. I was split between thinking “wow, someone doesn’t understand but really wants to know more!” and thinking “uh oh”. In retrospect, I should have known what was happening as soon as I read these two lines:

“To put it simply, the frequency shifts because the speed of light shifts.”

“An alternative explanation of the frequency shift (the only salvation for relativity) involves the assumption that the motion of the observer has somehow changed the wavelength of the incoming light. […] This assumption is so obviously absurd that relativists never state it explicitly. Yet without it relativity collapses.”

Doing my due diligence as a physicist and teacher, I attempted to reason with him. But, he’s a troll and it didn’t work. Meh, no big deal.

BUT, it turns out Pentcho Valev is an entire internet quack phenomenon! There is even an entire (albeit out-of-date) website outlining his “scholarly activities”:

http://bip.cnrs-mrs.fr/bip10/valevfaq.htm#embarrased

So there are lots of ways one can decide informally “they have made it” – that is, not an award or publication or something. Maybe you get recognized at a conference by someone who knows your work, or the subject of something you published is a topic of debate *without* you having to inject it into the discussion manually. Well, I’m trying to teach early-career STEM majors the basics of mechanics – how to solve problems, how to use conceptual and analytic reasoning, and how to avoid common pitfalls and misunderstandings. And I’ve had a famous troll pay attention.

I’m going to count this as “I’ve made it”.

Loops in the Digits of Pi

Is \pi special?

Of course, the concept of \pi as the ratio of the circumference to the diameter of a circle is more than important – a cursory glance through the arXiv suggests it appears in nearly 85% of all papers on theoretical physics. What I mean is: are the digits of \pi special? Is there anything actually significant hidden in the seemingly random digits of this all-important transcendental number?

This is a well-trodden topic among pseudo-intellectuals and science fiction writers alike. No less than the great Carl Sagan afforded a special significance to the digits of our friend \pi at the very end of Contact (a part which didn’t make it into the movie). But is there any truth to this? Or even any evidence for it? In fact, how would you even go about trying to figure it out?

What got me thinking about this was a website I came across a few weeks back, talking about finding strings of specific numbers in \pi – you can see it here. It’s a very cool page, which lets you do cool things like search for your Social Security number in the digits of \pi (no joy for me there, I only get 8 of the 9 digits). However, down at the bottom they define something which I formalize as follows:

Loop Sequences: A loop sequence \mathcal{L} in a string of single-digit integers \mathcal{S} is a set of integers \mathcal{L}=\{n_1,n_2,...,n_N\} such that, as a string of single digits, the integer n_j is found at position n_{j+1} in the string \mathcal{S}, and n_N is found at position n_1.

This is perhaps best illustrated by an example. Let’s start with n_1=169. Turns out, starting at the 35th digit of \pi (counting starting after the decimal point), we have …841971693993…, which contains 169 starting at the 40th position, so n_2=40. I continue to do this and find n_3=70, n_4=96, and so on, until eventually I find that I am looking at 169 again! This is a loop sequence.
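Chasing positions like this is easy to automate. Here is a minimal Python sketch of the search (illustrative, not my exact script), where digits is assumed to be a string holding the digits of \pi after the decimal point, with positions counted 1-based as above:

def find_loop(start, digits, max_steps=1000):
    # Follow n -> (1-based position of n in digits) until a number repeats
    seen = []
    n = start
    while n not in seen:
        if len(seen) >= max_steps:
            return None          # give up on very long walks
        seen.append(n)
        pos = digits.find(str(n))
        if pos == -1:
            return None          # n never appears in our finite digit string
        n = pos + 1              # convert to a 1-based position
    return seen[seen.index(n):]  # just the cycle, dropping any lead-in tail

# e.g. find_loop(169, pi_digits) should recover the loop discussed below,
# where pi_digits is a (hypothetical) preloaded digit string

Searching for every loop touched by the starting values 1 to 100000 is then just a matter of calling this in a loop and de-duplicating the cycles it returns.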

That web page gives a single loop sequence, found by one Dan Sikorski. I wondered if there are more – how common are these loops, and what would it take to find them? This sounded like an interesting computational project (rather than tackling it theoretically, which might be possible but struck me as more difficult), so I thought I would look for loops in some mathematical constants, along with random numbers, to see if there was any evidence for \pi being special.

In short, no, there is not.

Of course, since these numbers are infinitely long, every number you start with must loop back at some point. What I’m after is how common these loops are. So let’s start with a million digits of \pi, and try to find loops that contain any of the numbers 1 to 100000 (rather arbitrary, but my PC can handle this in under an hour so it seems appropriate). The results are as follows:

For \pi, I found the following loops:

\mathcal{L}_1(\pi)=\{1\} (this is self-referencing)

\mathcal{L}_2(\pi)=\{40,70,96,180,3664,24717,15492,84198,65489,3725,16974,41702,3788,5757,1609,62892,44745,9385,169\} (this might be called “the Sikorski loop”)

\mathcal{L}_3(\pi)=\{19,37,46\} (this is a new loop, but who knows if I’m the first to notice it!)

For e, I found the following loops:

\mathcal{L}_1(e)=\{20,111,431,602\}

\mathcal{L}_2(e)=\{118376,308486\}

\mathcal{L}_3(e)=\{44709\} (self-referencing)

\mathcal{L}_4(e)=\{57310\} (another self-referencing)

To see if this distribution is at all unusual, I generated 100 random strings of a million integers and did the same kind of search. The distribution for the random numbers, plotted with the loops I found in \pi and e, is:

Distribution of Loops in Random Digits, Pi, and e.

The random distribution probably looks exactly how one would expect – smaller loops are far more common, and larger loops (>5 or so) are part of the statistical variation. Due to small number statistics, it’s very hard to convince yourself that \pi and e are particularly special in terms of the distribution of their loops. It might be tempting to say that the long (19-element) loop in \pi lies outside the statistical variation, but you can see that I found loops of lengths 17, 18 and 31 in the random sample. For this reason, I would say this study does not suggest anything about the special character of the digits of \pi and e.

I suppose one should go further to try and deal with the statistics, and perhaps I’ll just run my laptop for a week and do the 10 million digit version of this, but it’s a little hard to imagine that I will find any evidence to suggest that there are any cyclic patterns in the digits of \pi or e.