Knot Theory on Sage

I recently found myself needing to know some things about knots – calculating fundamental groups and polynomial invariants, specifically. Going to the sources (Fox’s 1962 classic “A Quick Trip Through Knot Theory” is really great mathematical reading) reveals there is a pretty straightforward algorithm for doing these kinds of things, but it seems like I’m going to have to work out a lot of them. So I thought “oh, maybe it’s easy to do this in SageMath?”

Well, it totally is, so I’m going to just show a few examples. Most of what I’m going to show can easily be found on the Sage help pages for links, but I’ll focus on specifically what I am interested in. Since I’m also a beginner at Sage, this will be a very basic tutorial.

First, we should make sure we can use the tools on Sage to reproduce known results, so I’ll do that with the most popular non-trivial knot, the trefoil (it also happens that the trefoil is a (2,3) torus knot, and I’m going to be interested in torus knots in general). I’ve tested the following both on my local installation of Sage and on the live SageMathCell.

First, we need to tell Sage which knot we are interested in by giving it the linking of the arcs of the knot. Each arc needs a number, and we need an orientation. The figure on the right shows my picture for the trefoil. Notice that the definition of “arc” is “between two crossings”, so although in this particular projection arcs 3 and 5 are “the same line”, in the link representation they are represented differently. There are three crossings, so the way you tell Sage which knot you want is by specifying each of these three crossings as a list of the arcs around it, starting with the arc entering the undercrossing and going clockwise. This is done in the following manner:
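Something like the following (this is the standard PD code for the trefoil, in the Knot Atlas convention; the arc numbering in my figure may differ from it by a relabeling):

```python
L = Link([[1, 4, 2, 5], [3, 6, 4, 1], [5, 2, 6, 3]])
```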


So first (and maybe most impressively), you can get a nice plot of your knot:
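With the link stored as L above, this is a single call:

```python
L.plot()
```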


It’s easy to verify this is the same knot that I drew above, although the orientation is not specified. We can also find the fundamental group,
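Again a single call on the link, with the output shown below:

```python
L.fundamental_group()
```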


Finitely presented group < x0, x1, x2 | x1*x0^-1*x2^-1*x0, x0*x2^-1*x1^-1*x2, x2*x1^-1*x0^-1*x1 >

Which we can simplify to the traditional representation by storing it as a group first:
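Something like the following (simplified() is the method name in current Sage, if I have it right):

```python
G = L.fundamental_group()
G.simplified()
```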

Finitely presented group < x0, x1 | x1*x0^-1*x1^-1*x0^-1*x1*x0 >

And, finally, we can easily find both the Alexander polynomial and the Jones polynomial:
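Both are single calls on the link, producing the outputs below:

```python
L.alexander_polynomial()
L.jones_polynomial()
```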

t^-1 - 1 + t
1/t + 1/t^3 - 1/t^4

So although working through the Fox derivatives is kinda fun, this is clearly easier!

So the problem I’m actually interested in is the following: given a covering space p:M\to \mathbb{S}^3, branched over a knot K, what is the preimage of the knot p^{-1}(K)? It turns out that there are some nice results in this area, but mostly dealing with the branched cover M rather than with the knot K. For instance, if you pick K carefully (the Borromean Rings, for example), you can construct any closed 3-manifold M as such a branched covering. The problem is, there aren’t many results on what the preimage of the knot actually is. What we do have is an algorithm for a presentation of the fundamental group (presented by Fox in that earlier article as a variation of the Reidemeister–Schreier algorithm). Basically, given a knot K and a representation of the group of the knot on a symmetric group S_n, we can determine the representation of the group in the cover. So that doesn’t directly tell us the knot itself, but at least we can use a presentation of the fundamental group to calculate the knot invariants and maybe learn something about the knot in that manner.

The problem with trying to do exactly what we did above is Sage doesn’t know how to find the Alexander polynomial directly from a group presentation. However, it does know how to find the Alexander matrix, so as long as we are careful with polynomial rings we can use this to find the Alexander polynomials.

What I’m going to do is an example from Fox. This is actually example 1 in his article, which is the knot shown below. I’ve added orientations which correspond to the later calculations.

To find the group presentation, we take the generators specified in the picture and define some relations, which are rather like the link relations we used above. Around each crossing, going under the first undercrossing is the same as going under the other three crossings, taking orientations into account. For instance, you can read the first one off the figure,


What we need to do is figure out how to implement these relations in Sage to reproduce the same fundamental group.

The first step will be to define a free group on the generators; in Sage this is easy:
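For the four generators we need here:

```python
F = FreeGroup(4)
```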


Next we need to create our finitely presented group by taking the quotient of this free group with our relations. In Sage, each relation can be specified by first ordering the generators (1-4 in this case), and then specifying words by sequences of integers, where each integer represents a generator. The sign of the integer indicates the power to which the generator is raised. For instance, the word abc^{-1}a would be [1,2,-3,1] and the word a^3 would be [1,1,1]. Doing this for the knot above gives us
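Since I can’t reproduce Fox’s diagram here, the relator words below are placeholders illustrating the syntax only – they are not the actual relations for this knot:

```python
# placeholder relators to show the syntax, not the real relations
G = F / ([1, 2, -1, -3], [2, 3, -2, -4], [3, 4, -3, -1], [4, 1, -4, -2])
```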


(Pay careful attention to the parentheses and brackets – Sage interprets them differently, and it matters whether you have a series of relators, as is the case here, or a single relator.) For a single relator, the syntax is
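Roughly the following (note the nested brackets, so Sage sees a list containing a single word):

```python
G1 = F / [[1, 1, 1]]  # a single relator, here the word a^3
```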


So the next step is to find the Alexander matrix, which Sage knows how to do. But we want to first specify the ring over which we want to define the matrix. We really should use Laurent polynomials, since that’s how the Alexander matrix is defined, but the last step (taking determinants) is not implemented in Sage for Laurent polynomials, so the trick here is to use polynomials over the integer ring:
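A sketch of the ring setup (written without the Sage preparser shorthand):

```python
R = PolynomialRing(ZZ, 't')
t = R.gen()
```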


With that done, we can find the Alexander matrix under the Abelianizing map, which sends each generator to a single generator t (which I defined above):
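If I remember the interface correctly, the images of the generators are passed directly to alexander_matrix (this signature is my assumption – check the Sage documentation):

```python
M = G.alexander_matrix([t, t, t, t])
M
```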

[-t^2 + 2*t - 1             -t              t  t^2 - 2*t + 1]
[ t^2 - 2*t + 1 -t^2 + 2*t - 1             -t              t]
[             t  t^2 - 2*t + 1 -t^2 + 2*t - 1             -t]
[            -t              t  t^2 - 2*t + 1 -t^2 + 2*t - 1]

(You can also just call this with empty parentheses () and get the Alexander matrix before the abelianization.)

Finally, we need to find the generator of the first elementary ideal of this matrix, which is the Alexander polynomial. I grabbed this code from someone smarter than me (a mathematician by the name of Nathan Dunfield):

alex_poly = gcd(M.minors(G.ngens() - 1))

There’s obviously nothing too magic about this line; I just wasn’t aware Sage knew how to find the GCD of a set of polynomials. Anyway, we can check that our answer is correct (that is, that it matches Fox’s original answer):

print(alex_poly)
t^6 - 5*t^5 + 10*t^4 - 13*t^3 + 10*t^2 - 5*t + 1

And it does!

So of course, I haven’t talked at all about the physics of these preimages of knots p^{-1}(K). This is related to a talk I gave at a recent conference The First Minkowski Meeting on the Foundations of Spacetime Physics, so I’ll post something on that as I work on the paper for the conference proceedings.


Comments on Max Tegmark’s Hierarchy of Reality

I’m in the middle of reading Max Tegmark’s recent book Our Mathematical Universe, which is (so far, I’m about halfway through) mostly about the idea that it’s possible the simplest (or most natural) interpretation of quantum mechanics directly leads to the conclusion that multiple universes must exist. I just finished reading an interesting “excursion” chapter in which he discusses the nature and perception of reality, and I would like to make some comments on it because it differs from my own work on the subject.

(my essay on this topic can be found here.)

Tegmark breaks reality into three pieces, and it will be easiest to see what’s going on if I show you the actual figure in the book (this is shamelessly stolen from Tegmark, and all credit is his. If it turns out he’s not ok with this, I hope he’ll let me know!)

Tegmark's Hierarchy of Reality, from Our Mathematical Universe (Although I'm assigning the word "hierarchy" to it for my own devious purposes)


The idea here is that our perception of reality (“Internal Reality”) is governed by our senses, like sight and touch and smell. We interact directly with a version of reality which we can all agree on called “Consensus Reality”, and that consensus reality is a result of something which is abstractly true, “External Reality”. In the book he makes the point that to determine the fundamental “theory of everything”, we don’t need to actually understand human consciousness, because that’s explicitly separated from consensus reality by our own perceptions.

While there certainly are elements to this hierarchy that I like, I actually think making these divisions is pretty arbitrary. I can easily ask my physics I students questions which will break “consensus reality” but stay in the realm of classical physics. For instance, I recently asked someone “what is the acceleration of an object in projectile motion?” and they responded “in the direction of motion”, indicating the parabolic path. Ok, I asked a well-defined mathematical question and received an (incorrect) response that left the bounds of mathematical rigor, but it was about classical physics, and therefore solidly in Tegmark’s “consensus reality”. The student’s level of analysis was not high enough to understand that “acceleration” does not mean “velocity” (or whatever else they might have thought I meant), but it was within *their* consensus reality.

What am I driving at? Perhaps the reality we can all agree on is not mathematical, but only descriptive in nature. For instance, the student and I can both draw pictures of how an object moves in projectile motion because we’ve seen real-life objects move in projectile motion. On the other hand, if mathematics is objectively “right” then I can prove some versions of consensus reality incorrect (“The day is 24 hours long”). Of course, no one would really say “the day is 24 hours long” is *wrong*, just that if you define the day with respect to the background stars, you get something a little bit shorter.

So even if we split off the “perception of reality” piece from our hierarchy of reality, we still end up with some rather arbitrary definitions of reality, from purely mathematical up to descriptive. This suggests that reality should be viewed as a continuum, with no clear boundary between the abstractly true and the subjectively true – only descriptions at different levels of detail. So what can we use to determine which level we are talking about? I’ve called such a thing the axiom of measurement, and you can check out the link in the first paragraph if you want to read the original essay.

The idea is that in order to determine a standard of “truth”, we need a standard of “measurement”. I can verify the statement “objects in projectile motion move along parabolas” as long as I use a measurement tool which is not accurate enough to see the effects of air resistance. That defines our “consensus reality”. But once I build a better tool, I can prove our consensus reality wrong, which requires us to redefine it at each moment for each measurement. Thus we have a natural scale for truth, defined experimentally by whatever apparatus we have available.

For me, the bonus with this approach is that you know when things are true; they are true when you know an experiment can confirm them. What you lose is the concept of absolute truth, but it’s easy to argue that the concept of absolute truth has brought us nothing but trouble anyway!

(Just as a note, I think we necessarily lose absolute truth because we would have to be able to say “we will never design an experiment to prove this wrong”, and I don’t think we will ever be able to do that. Can anyone imagine an experiment to prove that 1+1 is not 2? I think it might strain the logical system I’m working in. Anyway, more thought on this is required.)

Of course, I’m really not trying to be super-critical of Tegmark – I actually like some of his analysis. But I think his splitting here is somewhat on the homocentric side, since it includes human perceptions at all levels (after all, we didn’t even know about the transition between quantum and classical reality until ~100 years ago. I worry about a definition of reality which shifts in time!). If we include the experimental apparatus in the very definition of our theoretical model, we achieve consistency without having to worry about either cognitive science or a shifting consensus reality.

Chaos and SageMath

This semester I’m teaching our Analytical Mechanics course, and I just finished a day on Introduction to Chaos. In this class, we are using the SageMath language to do some simple numerical analysis, so I used Sage to demonstrate some of the interesting behavior you find in chaotic systems. Since I’m learning Sage myself, I thought I would post the results of that class here, to demonstrate some of the code and the kinds of plots you can get with simple tools like Sage.

Before getting into the science: SageMath is free, open-source mathematics software which includes things like Maxima, Python, and the GSL. It’s great because it’s extremely powerful and can be used right in a web browser, thanks to the Sage Cell Server. So I did all of this right in front of my students, to demonstrate how easy this tool is to use.

For the scientific background, I am going to do the same example of the driven, damped pendulum found in Classical Mechanics by John Taylor (although the exact same system can be found in Analytical Mechanics, by Hand and Finch). So I didn’t create any of this science; I’m just demonstrating how to study it using Sage.

First, some very basic background. The equation of motion for a driven, damped pendulum of length L and mass m being acted upon by a driving force F(t)=F_0\cos\omega t is

\ddot{\phi}+2\gamma\dot{\phi}+\omega_0^2\sin(\phi)=f\omega_0^2\cos(\omega t)

\gamma here is the damping term and f=F_0/(mg), the ratio of the forcing amplitude to the weight of the pendulum. In order to get this into Sage, I’m going to rewrite it as a system of first-order differential equations,

\dot y=x

\dot x=f\omega_0^2\cos(\omega t)-\omega_0^2\sin(y)-2\gamma x

This is a typical trick to use numerical integrators, basically because it’s easy to integrate first-order equations, even if they are nonlinear.

It’s easiest to find chaos right near resonance, so let’s pick the parameters \omega_0=3\pi and \omega=2\pi. This means the t-axis will display in units of the period, 1 s. We also take \gamma=3\pi/4. The first plot will be this system when the driving force is the same as the weight, that is, f=1; the code and the result (Figure 1) are shown below.

from sage.calculus.desolvers import desolve_system_rk4
x,y,t = var('x y t'); f = 1; w0 = 3*pi; w = 2*pi; g = 3*pi/4
P = desolve_system_rk4([f*w0^2*cos(w*t)-w0^2*sin(y)-2*g*x, x], [x,y], ics=[0,0,0], ivar=t, end_points=15)
Q=[[i,k] for i,j,k in P]
intP = spline(Q); plot(intP, 0, 15)

Figure 2 is a plot with the driving force slightly bigger than the weight, f=1.06.

Figure 1, f=1

Figure 2, f=1.06

This demonstrates an attractor, meaning the steady-state solution eventually settles down to oscillate around \phi\approx 2\pi. We can check this is actually still periodic by asking Sage for the value of \phi at t=30 s, t=31 s, etc., by calling this line instead of the plot command above

[intP(i) for i in range(30,40)]

(Note that we also have to change the range of integration from [0,15] to [30,40].) The output is shown in Figure 3; the period is clearly 1.0 s out to four significant figures.


Figure 3: The value of \phi at integer steps between t=30 and t=40 for f=1.06.

Figure 4: The value of \phi at integer steps between t=30 and t=40 for f=1.073.
Figure 6: The value of \phi at integer steps between t=30 and t=40 for f=1.077.

Next, let’s increase the forcing to f=1.073. The result is shown in Figure 7. The attractor is still present (now with a value of around \phi=-2\pi), but the behavior is much more dramatic. In fact, you might not even be convinced that the period is still 1.0 s, since the peaks appear to have different values. We can repeat our experiment from above, and ask Sage to print out the value of \phi for integer timesteps between t=30 and t=40. The result is shown in Figure 4. The actual period appears to be 2.0 s, since the value of \phi does not repeat exactly after 1.0 s. This is called Period Doubling.


Figure 7: f=1.073


Figure 8: f=1.077

In Figure 8, I’ve displayed a plot with f=1.077, and it’s immediately obvious that the oscillatory motion now has period 3.0 s. We can check this by playing the same game, shown in Figure 6.

Now we are in a position to see some unique behavior. I am going to overlay a new solution onto this one, but give the second solution a different initial value, \phi(0)=-\pi/2 instead of \phi(0)=0. The code I am adding is

P2 = desolve_system_rk4([f*w0^2*cos(w*t)-w0^2*sin(y)-2*g*x, x], [x,y], ics=[0,0,-pi/2], ivar=t, end_points=15)
Q2=[[i,k] for i,j,k in P2]
intP2 = spline(Q2)
plot(intP,0,15)+plot(intP2,0,15,linestyle=":", color="red")

The result is shown in Figure 8. Here we can see the first example of sensitivity to initial conditions: the two solutions diverge markedly once you have a slightly different initial condition, heading towards two very different attractors. Next, let’s plot the difference between two oscillators with only a very small difference in the initial conditions, \Delta\phi(0)=1\times 10^{-4}. The code follows:

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":", color="red")
plot(lambda x: abs(intP(x)-intP2(x)),0,15)



Figure 8: Two oscillators with f=1.073 but with different initial values, \Delta\phi(0)=-\pi/2.


Figure 9: A plot of the difference in the oscillators over time, \Delta\phi(t), for f=1.077 and a very small initial separation, \Delta\phi(0)=1\times 10^{-4}.

This is shown in Figure 9. It clearly decays to zero, but that’s hard to see, so let’s plot it on a log scale, shown in Figure 10.

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":", color="red")
plot_semilogy(lambda x: abs(intP(x)-intP2(x)),0,15)

Now, let’s see what happens if we do the same thing, but push the forcing parameter above the critical value of f=1.0829. This is displayed in Figure 11, for f=1.105. We get completely the opposite behavior: the two oscillators are driven away from each other by their small initial separation. This is the essence of “Jurassic Park Chaos” – a small change in the initial conditions (like a butterfly flapping its wings in Malaysia) causes a large change in the final outcome (a change in the weather pattern over California).


Figure 10: A log plot of two oscillators with f=1.073 but with very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}.


Figure 11: Finally, a log plot of two overcritical oscillators (f=1.105) and very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}.

Cosmic Strings and Spatial Topology

Some of my recent work has focused on foliations of spacetime – which are essentially 1- or 2-dimensional parametrizations of 3- or 4-manifolds. I am mostly interested in them because my work often requires 1- and 2-dimensional embedded spaces, and you can use these initial spaces to construct entire foliations of the manifold. If you have a 2-dimensional space (a surface), you can always smooth it out by squishing all the curvature to a single point, called a conical point (you can try this – next time you get an ice cream cone, take the paper off the cone and try to flatten it. You will be able to do it everywhere except for one point, at the tip of the cone). Mathematically, a surface with a conical point looks exactly like a surface intersecting a cosmic string, so this is a long-winded way of explaining how my interest in foliations has led to an interest in cosmic strings!

The latest part of this story: using some of the interesting properties of cosmic strings, I’ve been able to use the lack of observational evidence for them to constrain the global topology of the universe. This is a pretty interesting idea, because there are very few ways that we can study the overall shape of the universe (shape here means topology – does space look like a plane, a sphere, a donut, or what?). There are lots of ways we can study the local details of the universe (the geometry), because we can look for the gravitational effects from massive objects like stars, galaxies, black holes, etc. However, we have essentially no access to the topological structure, because gravity is actually only a local theory, not a global theory (I could write forever about this, but I’ll just leave it for a later post maybe…). We have zero theoretical understanding of the global topology, and our only observational understanding comes from studying patterns in the CMB. The trick with cosmic strings is that they actually serve to connect the local gravitational field (the geometry) to the global structure of the universe (the topology).

The game is this – take a spacetime with cosmic strings running around everywhere, and take a surface which intersects some of them. This surface can always be taken as flat away from the strings, so the intersections are conical points. If you measure an angular coordinate around each point, you won’t get 2\pi, you’ll get something a bit smaller or a bit larger, since the surfaces are twisted up around the points. It turns out that if you add up all the twists, you had better get an integer determined by the genus of the surface. The genus is essentially the number of holes in the surface. A sphere has g=0, a torus g=1, two tori attached to each other have g=2, and so on.
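This bookkeeping is just Gauss–Bonnet: for a surface \Sigma that is flat except at conical points with cone angles \theta_i, the total angle deficit is fixed by the topology,

\sum_i (2\pi-\theta_i)=2\pi\chi(\Sigma)=2\pi(2-2g),

so adding up the twists, in units of 2\pi, gives the integer 2-2g, determined by the genus g.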

Now we consider what kind of observational evidence there is for cosmic strings. The short answer: none! People have been looking for them in the CMB, but so far they’ve only been able to say “if cosmic strings exist, they must be in such-and-such numbers and have energies of such-and-such.” If we use these limits, we find that, to a very good approximation, if cosmic strings exist, a surface passing through them must have genus 1, and therefore be a torus (the surface of a doughnut)!

Ok big deal – but here’s where the foliations come in. For example, if we parametrize our spatial (3-dimensional) manifold with tori, the result is a 3-torus. So this actually implies that space is not a sphere, but is a solid torus (like a doughnut). The mathematics behind this statement is actually quite profound, and was worked out in the early days of foliation theory by the likes of Reeb, Thurston, and Novikov. But the idea is that such foliations of 3-manifolds are very stable, and a single closed surface greatly restricts the kinds of foliations allowed for the manifold as a whole.

The arXiv paper where I discuss this in more detail can be found here. This idea that space is not a sphere is not new, and there is actually some evidence for it in the CMB, in the form of a repeating pattern (or a preferred direction) in space. But my primary interest is pointing out that this is an independent way of measuring the topology of the universe, since it’s based on local observations of strings in the CMB rather than overall patterns. If strings don’t actually exist, it can still be used to study the presence of conical singularities, but I expect the restrictions on the topology would be much less strict. Perhaps I’ll look further into that, but for the moment I’m happy with this. It’s a new way to determine information about the global topology of the universe, and it’s a great combination of pure mathematics, theoretical physics, and observational cosmology.

FQXi Essay Contest: The Axiom of Measurement

I’ve submitted an entry for the Foundational Questions Institute Essay Contest: “Trick or Truth: the Mysterious Connection Between Physics and Mathematics”. You can check out my entry here.

The backstory to this is that I was participating in a weekly mathematical physics seminar back at Florida State (although I use the word “seminar” pretty loosely – it was regularly attended by only myself and one other individual!), and in the process of working on presenting on some NCG topic, I came across “The Bost–Connes System”. This is a particular C^*-algebra, on which you can define some dynamics. What makes it special is that if you calculate the partition function for this dynamical system, you get the Riemann Zeta function! Since the partition function can be used to generate predictions for a statistical mechanical system, I wondered whether it was possible to construct a real physical system with the same symmetry as the Bost–Connes system. Then you would have experimental access to (at least some features of) the Riemann Zeta. There is a great deal of mathematical importance attached to Zeta, including a $1 million prize for locating its zeros!

I wasn’t exactly thinking about which Benz to buy with my prize money yet, but I thought it was an interesting idea – experimental verification of a mathematical theorem. I wasn’t aware that anything like this had been done before. Normally the “flow of ideas” works the other way – constructions in mathematics find usefulness in physics, or theoretical models become interesting mathematical systems. It stuck in my head for a while; I did a few calculations to determine what the zeros of a partition function might look like, but nothing really came of it.

When this essay contest came around, I thought it might be an opportunity to share this idea. I figured that if it was going to be taken seriously, you needed to raise experimental verification to the level of mathematics – after all, if I prove a conjecture is true outside of the field in which the conjecture is stated, we should not take the proof very seriously! I needed to make experimental physics a subfield of mathematics. It turns out that this is pretty easy, and so that’s what the essay is about. If you take your physical model as a set of formal axioms, and add in an additional axiom which can be used to experimentally verify a theorem (I call this “an axiom of measurement”), you can formulate physics as a complete formal system. As a bonus, the axiom can be used to add a little more structure to the Platonist viewpoint on universal versus physical forms.

Now, the FQXi Essay Contests are contests; the community and the public can vote on the essays, and the quality of the essays varies widely, since nearly anyone is allowed to submit an entry. I actually think my essay represents a pretty mainstream viewpoint about physics – that we are not really studying “nature” or “the universe” when we do physics, we are really studying a “model for the universe”, which is confirmed by our everyday observations as well as carefully constructed experiments. Since it’s not a new, dramatic viewpoint on any particular aspect of the relationship between the two fields, I don’t expect to be winning any awards. But I had an opinion with an interesting idea behind it, and an essay seemed like the ideal place to explore it.

Anyway, if you’re so inclined go over and check out my entry as well as all the others.

Loops in the Digits of Pi

Is \pi special?

Of course, the concept of \pi as the ratio between the circumference and diameter of a circle is more than important – a cursory glance through the arXiv suggests it appears in nearly 85% of all papers on theoretical physics. What I mean is: are the digits of \pi special? Is there anything actually significant hidden in the seemingly random digits of this all-important transcendental number?

This is a well-trodden topic among pseudo-intellectuals and science fiction writers alike. No less than the great Carl Sagan afforded a special significance to the digits of our friend \pi at the very end of Contact (a part which didn’t make it into the movie). But is there any truth to this? Or even any evidence for it? In fact, how would you even go about trying to figure it out?

What got me thinking about this was a website I came across a few weeks back, talking about finding strings of specific numbers in \pi – you can see it here. It’s a very cool page, which lets you do cool things like search for your SS number in the digits of \pi (no joy for me there, I only get 8/9 numbers). However, down at the bottom they define something which I formalize as follows:

Loop Sequences: A loop sequence \mathcal{L} in a string of single-digit integers \mathcal{S} is a set of integers \mathcal{L}=\{n_1,n_2,...,n_N\} such that, as a string of single digits, the integer n_j is found at position n_{j+1} in the string \mathcal{S}, and n_N is found at position n_1.

This is perhaps best illustrated by an example. Let’s start with n_1=169. Turns out, starting at the 35th digit of \pi (counting starting after the decimal point), we have …841971693993…, which contains 169 starting at the 40th position, so n_2=40. I continue to do this and find n_3=70, n_4=96, and so on, until eventually you find that you are looking at 169 again! This is a loop sequence.
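The chain-following step in this example can be sketched in a few lines of Python (the function name find_loop and the 1-indexed position convention are my choices, not anything from the original site):

```python
def find_loop(digits, seed):
    # Follow n -> (1-indexed) position of the decimal string of n in `digits`,
    # recording the chain until some number repeats; return the closing cycle.
    seen = []
    n = seed
    while n not in seen:
        seen.append(n)
        pos = digits.find(str(n))  # 0-indexed first occurrence, -1 if absent
        if pos == -1:
            return None            # chain runs off the digits we have on hand
        n = pos + 1                # convert to the 1-indexed convention above
    return seen[seen.index(n):]    # drop any non-repeating tail of the chain

# The first 50 digits of pi after the decimal point
PI50 = "14159265358979323846264338327950288419716939937510"
```

For instance, find_loop("100000", 1) returns the self-referencing loop [1]. With only the 50 digits of PI50, find_loop(PI50, 169) returns None, since the chain 169 -> 40 runs out of digits; fed a million digits, it should close into the 20-element loop listed below.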

That web page gives a single loop sequence, found by one Dan Sikorski. I wondered if there are more – how common are these loops, and what would it take to find them? This sounded like an interesting computational project (rather than tackling it theoretically, which might be possible but struck me as more difficult), so I thought I would look for loops in some mathematical constants, along with random numbers, to see if there was any evidence for \pi being special.

In short, no, there is not.

Of course, since these numbers are infinitely long (and every string of digits seems to show up in them eventually), any number you start with generates a chain that never terminates – though in principle it could wander upward forever rather than closing into a loop. What I’m after is how common these loops are. So let’s start with a million digits of \pi, and try to find loops that contain any of the numbers 1 to 100000 (rather arbitrary, but my PC can handle this in under an hour so it seems appropriate). The results are as follows:

For \pi, I found the following loops:

\mathcal{L}_1(\pi)=\{1\} (this is self-referencing)

\mathcal{L}_2(\pi)=\{40,70,96,180,3664,24717,15492,84198,65489,3725,16974,41702,3788,5757,1958,14609,62892,44745,9385,169\} (this might be called “the Sikorski loop”. And thanks to Jesse Chamberlain for pointing out a typographical error in an old version of this loop!)

\mathcal{L}_3(\pi)=\{19,37,46\} (this is a new loop, but who knows if I’m the first to notice it!)

For e, I found the following loops:

\mathcal{L}_1(e)=\{3624, 810\}

\mathcal{L}_2(e)=\{586, 1439, 4227, 19689, 76567, 4846, 16317, 32036\}

\mathcal{L}_3(e)=\{2543\} (self-referencing)

\mathcal{L}_4(e)=\{170596\} (another self-referencing)

(the original version of this post had an error in the digits of e, which was found by some students who are working on this project with me, D Vadala and J Gormley.)

To see if this distribution is at all unusual, I generated 100 random strings of a million integers and did the same kind of search. The distribution for the random numbers, plotted with the loops I found in \pi and e, is:

The random distribution probably looks exactly how one would expect it – smaller loops are far more common, and larger loops (>5 or so) are part of the statistical variation. Due to small-number statistics, it’s very hard to convince yourself that \pi and e are particularly special in terms of the distribution of their loops. It might be tempting to say that the length-20 loop in \pi lies outside the statistical variation, but you can see that I found loops of lengths 17, 18 and 31 in the random sample. For this reason, I would say this study does not suggest anything about the special character of the digits of \pi and e.

I suppose one should go further to try and deal with the statistics, and perhaps I’ll just run my laptop for a week and do the 10 million digit version of this, but it’s a little hard to imagine that I will find any evidence to suggest that there are any cyclic patterns in the digits of \pi or e.


Notes on Lattice Topological Field Theory in Three Dimensions, Part VII

In this final post I will explain the two primary examples found in the Dijkgraaf and Witten paper, and give some concluding remarks regarding my original interest in this topic – the application to loop quantum gravity.

Example Calculations, Continued.

The first example from the paper is Y\times \mathbb{S}^1 (the 2-sphere with three punctures times the circle), which can be used to generate many complicated examples. First we need a representation of this space as a simplicial complex:


The first picture shows that Y can be represented as a 2-complex with the three vertices identified. Taking the product of this with the circle gives us the final image. This can be decomposed into 3 simplices, as shown:


This is certainly confusing, since not only are the top and bottom identified, but all the vertices are identified as well (recall from the last post that orientation was an issue – that is not a problem here because of how this gluing happens; just stare at it long enough and it should seem obvious, even if it isn't). In any case, this is the easiest decomposition into tetrahedra (3D figures with four triangular faces). If the first simplex has positive orientation, the middle one must have negative orientation, since their faces are in contact – counterclockwise on the faces of the first results in clockwise on the middle. The third one is again positive, and using the order of the vertices we find the action to be


It’s a messy but very doable calculation to show that \delta c_h(g_1,g_2,g_3)=1 – c_h is a cocycle. In addition, we should check that under \alpha \to \alpha\delta\beta we get c_h\to c_h\delta\beta_h for some 1-cochain \beta_h. This will get a bit messy as well, but it has at least one subtlety so I’ll do it.
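Spelled out with the same multiplicative coboundary convention used for \delta\beta_h below, the 2-cocycle condition being checked is

```latex
\delta c_h(g_1,g_2,g_3)
  = \frac{c_h(g_2,g_3)\,c_h(g_1,g_2g_3)}{c_h(g_1g_2,g_3)\,c_h(g_1,g_2)} = 1
```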

\begin{aligned}c_h(g_1,g_2)&\to c_h(g_1,g_2)\frac{\beta (h,g_1g_2)\beta (g_1,g_2)}{\beta (h,g_1)\beta (hg_1,g_2)}\frac{\beta (g_1,g_2h)\beta (g_2,h)}{\beta (g_1,g_2)\beta (g_1g_2,h)}\frac{\beta (g_1h,g_2)\beta (g_1,h)}{\beta (g_1,hg_2)\beta (h,g_2)}\\    &\to c_h(g_1,g_2)\frac{\beta (h,g_1g_2)\beta (g_2,h) \beta (g_1,h)}{\beta (h,g_1)\beta (g_1g_2,h)\beta (h,g_2)}\end{aligned}

To cancel out a bunch of those terms I have used the fact that the gs are in the stabilizer subgroup of h – so g_i h=h g_i. This can also be seen from the simplicial complex representation above. So we see that the transformation works if \beta_h(g)=\beta(g,h)\beta(h,g)^{-1} is the 1-cochain which transforms c_h, since its coboundary \delta \beta_h is:

\delta \beta_h(g_1,g_2)=\beta_h(g_1)\beta_h(g_2)\beta_h(g_1g_2)^{-1}=\frac{\beta(g_1,h)}{\beta(h,g_1)}\frac{\beta(g_2,h)}{\beta(h,g_2)}\frac{\beta(h,g_1g_2)}{\beta(g_1g_2,h)}

The first equality comes from the definition of a coboundary, and the second uses the definition of the 1-cochain \beta_h(g). This matches the expression above.

The next example we will do is \mathbb{S}^1\times\mathbb{S}^1\times\mathbb{S}^1 (the three torus). This can be represented as the cube with opposite sides identified, which splits into 6 simplices, a pair of which is shown below.


The simplices come in pairs with opposite orientation and edge ordering. The pair I have shown contributes \alpha(g,h,k)\alpha(g,k,h)^{-1}, and by permuting the arguments it’s pretty easy to see the action is


As stated above, we can generate a higher-genus surface with the manifold Y\times \mathbb{S}^1; in the case of the 3-torus it is easy to see that it takes two copies of Y\times \mathbb{S}^1 to make \mathbb{S}^1\times\mathbb{S}^1\times\mathbb{S}^1 (just cut the cube along a diagonal and you get a prism). Therefore we can write the action in terms of the 2-cocycles W(h,g,k)=c_g(h,k)c_g(k,h)^{-1}.
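For reference – and assuming I have the orientations right – collecting all six tetrahedral contributions and averaging over the gauge group gives the Dijkgraaf–Witten partition function for the 3-torus as a sum over commuting triples:

```latex
Z(\mathbb{S}^1\times\mathbb{S}^1\times\mathbb{S}^1)
  = \frac{1}{|G|}\sum_{\substack{g,h,k \in G \\ gh=hg,\; hk=kh,\; gk=kg}}
    \frac{\alpha(g,h,k)\,\alpha(h,k,g)\,\alpha(k,g,h)}
         {\alpha(g,k,h)\,\alpha(k,h,g)\,\alpha(h,g,k)}
```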

TQFT and Loop Quantum Gravity

In loop quantum gravity we don’t have a lattice but rather a graph – something like an irregular lattice. The fields live in SU(2), and in a 2009 paper Bianchi constructed something which (at least superficially) looks very much like a TFT by considering the moduli space of flat SU(2) connections. However, the actual topological character is rather hidden in this approach, and in fact in all of the standard approaches to LQG (BF theory, etc.). This is where the motivation for topspin networks comes from – you can read more about that elsewhere on this site. The end result of incorporating topspin networks is additional S_n labels on the spin networks, which turns LQG into something resembling a gauge theory with a finite gauge group on an irregular lattice. The idea is to construct the partition function for a TQFT associated to a topspin network and use it to measure the topological contribution to the correlation function, which is a hot research topic right now. In addition, this should make the rather tenuous connection between group field theory and LQG clearer.

So…watch the arXiv!