Chaos and SageMath

This semester I’m teaching our Analytical Mechanics course, and I just finished a day on the introduction to chaos. In this class we are using SageMath to do some simple numerical analysis, so I used Sage to demonstrate some of the interesting behavior you find in chaotic systems. Since I’m learning Sage myself, I thought I would post the results of that class here, to show the code and the kinds of plots you can get with simple tools like Sage.

Before getting into the science: SageMath is a free, open-source mathematics software system which includes tools like Maxima, Python, and the GSL. It’s great because it’s extremely powerful and can be used right in a web browser, thanks to the Sage Cell Server. So I did all of this live in front of my students, to demonstrate how easy this tool is to use.

For the scientific background, I am going to work through the example of the driven, damped pendulum found in Classical Mechanics by John Taylor (the same system is also treated in Analytical Mechanics by Hand and Finch). So I didn’t create any of this science; I’m just demonstrating how to study it using Sage.

First, some very basic background. The equation of motion for a driven, damped pendulum of length L and mass m acted upon by a driving force F(t)=F_0\cos\omega t is

\ddot{\phi}+2\gamma\dot{\phi}+\omega_0^2\sin(\phi)=f\omega_0^2\cos(\omega t)

Here \gamma is the damping term and f=F_0/(mg) is the ratio of the forcing amplitude to the weight of the pendulum. To get this into Sage, I’m going to rewrite it as a system of first-order differential equations,

\dot y=x

\dot x=f\omega_0^2\cos(\omega t)-\omega_0^2\sin(y)-2\gamma x

This is a standard trick for using numerical integrators: first-order equations are easy to integrate numerically, even when they are nonlinear.
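As an aside, the same reduction can be integrated outside of Sage with a hand-rolled classical fourth-order Runge–Kutta stepper. Here is a minimal plain-Python sketch; the function names, step size, and default parameter values are my own choices (matching the parameters used below), not part of the Sage session:

```python
from math import sin, cos, pi

def rhs(t, x, y, f_drive, w0=3*pi, w=2*pi, g=3*pi/4):
    """Right-hand side of the first-order system: returns (dx/dt, dy/dt),
    where y is the angle phi and x is its time derivative."""
    return (f_drive*w0**2*cos(w*t) - w0**2*sin(y) - 2*g*x, x)

def rk4(f_drive, t_end, dt=1e-3, x0=0.0, y0=0.0):
    """Integrate the system with the classical RK4 scheme; returns phi(t_end)."""
    t, x, y = 0.0, x0, y0
    while t < t_end:
        k1 = rhs(t, x, y, f_drive)
        k2 = rhs(t + dt/2, x + dt/2*k1[0], y + dt/2*k1[1], f_drive)
        k3 = rhs(t + dt/2, x + dt/2*k2[0], y + dt/2*k2[1], f_drive)
        k4 = rhs(t + dt, x + dt*k3[0], y + dt*k3[1], f_drive)
        x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return y
```

With the drive switched off, a small initial angle decays back to zero as a damped pendulum should, which is a quick sanity check on the stepper.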

It’s easiest to find chaos near resonance, so let’s pick the parameters \omega_0=3\pi and \omega=2\pi. This means the t-axis will display in units of the drive period, 1 s. We also take \gamma=3\pi/4. The first plot will be this system when the driving force is the same as the weight, that is, f=1; the code and the result (Figure 1) are shown below.

from sage.calculus.desolvers import desolve_system_rk4
x,y,t=var('x y t'); w0=3*pi; w=2*pi; g=3*pi/4; f=1
P=desolve_system_rk4([f*w0^2*cos(w*t)-w0^2*sin(y)-2*g*x, x],[x,y],ics=[0,0,0],ivar=t,end_points=15,step=0.01)
Q=[[i,k] for i,j,k in P]
intP=spline(Q)
plot(intP,0,15)

Figure 2 is a plot with the driving force slightly bigger than the weight, f=1.06.

Figure 1, f=1

Figure 2, f=1.06

This demonstrates an attractor: the steady-state solution eventually settles down to oscillate around \phi\approx 2\pi. We can check that this is actually still periodic by asking Sage for the value of \phi at t=30 s, t=31 s, etc., by calling this line instead of the plot command above:

[intP(i) for i in range(30,40)]

(Note that we also have to extend the range of integration from [0,15] to [0,40].) The output is shown in Figure 3; the period is clearly 1.0 s out to four significant figures.


Figure 3: The value of \phi at integer steps between t=30 and t=40 for f=1.06.


Figure 4: The value of \phi at integer steps between t=30 and t=40 for f=1.073.


Figure 6: The value of \phi at integer steps between t=30 and t=40 for f=1.077.

Next, let’s increase the forcing to f=1.073. The result is shown in Figure 7. The attractor is still present (now with a value of around \phi=-2\pi), but the behavior is much more dramatic. In fact, you might not even be convinced that the period is still 1.0 s, since successive peaks reach different values. We can repeat our experiment from above and ask Sage to print the value of \phi at integer timesteps between t=30 and t=40. The result is shown in Figure 4: the actual period appears to be 2.0 s, since the value of \phi does not repeat exactly after 1.0 s. This is called period doubling.
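To make the period-doubling check less of an eyeball test, one could scan the sampled values for the smallest lag at which the sequence repeats. This helper is hypothetical (not part of the Sage session), but it shows the idea in plain Python:

```python
def apparent_period(samples, tol=1e-3):
    """Smallest lag k (in drive periods) with samples[i+k] ~ samples[i] for all i.

    `samples` holds phi at integer multiples of the drive period, after
    transients have died out. Returns None if no lag up to half the
    record length fits within the tolerance.
    """
    n = len(samples)
    for k in range(1, n // 2 + 1):
        if all(abs(samples[i + k] - samples[i]) < tol for i in range(n - k)):
            return k
    return None

# A period-2 sequence (like the f = 1.073 data) is caught at lag 2:
print(apparent_period([0.1, 0.5, 0.1, 0.5, 0.1, 0.5, 0.1, 0.5]))  # 2
```

Feeding it the list produced by [intP(i) for i in range(30,40)] would report 1, 2, or 3 drive periods for the three forcing values discussed here.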


Figure 7: f=1.073


Figure 8: f=1.077

In Figure 8, I’ve displayed a plot with f=1.077, and it’s immediately obvious that the oscillatory motion now has period 3.0 s. We can check this by playing the same game, shown in Figure 6.

Now we are in a position to see some unique behavior. I am going to overlay a new solution onto this one, but give the second solution a different initial value, \phi(0)=-\pi/2 instead of \phi(0)=0. The code I am adding is

P2=desolve_system_rk4([f*w0^2*cos(w*t)-w0^2*sin(y)-2*g*x, x],[x,y],ics=[0,0,-pi/2],ivar=t,end_points=15,step=0.01)
Q2=[[i,k] for i,j,k in P2]
intP2=spline(Q2)
plot(intP,0,15)+plot(intP2,0,15,linestyle=":",color='red')

The result is shown in Figure 8. Here we see the first example of sensitivity to initial conditions: starting from a different initial value, the two solutions diverge markedly, heading towards two very different attractors. Next, let’s plot the difference between the two oscillators, this time with only a very small difference in the initial conditions, \Delta\phi(0)=1\times 10^{-4}. The code follows:

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":",color='red')
plot(lambda x: abs(intP(x)-intP2(x)),0,15)



Figure 8: Two oscillators with f=1.073 but with different initial values, \Delta\phi(0)=-\pi/2.


Figure 9: A plot of the difference in the oscillators over time, \Delta\phi(t), for f=1.077 and a very small initial separation, \Delta\phi(0)=1\times 10^{-4}.

This is shown in Figure 9. It clearly decays to zero, but that’s hard to see on a linear scale, so let’s plot it on a log scale, shown in Figure 10.

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":",color='red')
plot_semilogy(lambda x: abs(intP(x)-intP2(x)),0,15)

Now, let’s see what happens if we do the same thing but push the force parameter over the critical value of f=1.0829. This is displayed in Figure 11, for f=1.105. We get exactly the opposite behavior: despite their tiny initial separation, the two oscillators are driven away from each other. This is the essence of “Jurassic Park chaos” – a small change in the initial conditions (like a butterfly flapping its wings in Malaysia) causes a large change in the final outcome (a change in the weather pattern over California).
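The straight-line growth on the log plot has a slope, and that slope is an estimate of the Lyapunov exponent of the system. Here is a minimal sketch of extracting it with a least-squares fit; the helper name and the synthetic exponential-growth data are mine, not output from the Sage runs:

```python
from math import log, exp

def lyapunov_slope(ts, seps):
    """Least-squares slope of ln(separation) versus t, i.e. the
    exponential growth rate of the separation between trajectories."""
    ys = [log(s) for s in seps]
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

# Synthetic separation growing as 1e-4 * exp(0.8 t): the fit recovers 0.8.
ts = [0.5 * i for i in range(20)]
seps = [1e-4 * exp(0.8 * t) for t in ts]
print(lyapunov_slope(ts, seps))
```

In the subcritical case (Figure 10) the fitted slope would come out negative; past the critical forcing (Figure 11) it turns positive, which is one common working definition of chaos.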


Figure 10: A log plot of two oscillators with f=1.073 but with very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}.


Figure 11: Finally, a log plot of two overcritical oscillators (f=1.105) with very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}.

Loops in the Digits of Pi

Is \pi special?

Of course, the concept of \pi as the ratio of the circumference of a circle to its diameter is more than important – a cursory glance through the arXiv suggests it appears in nearly 85% of all papers on theoretical physics. What I mean is: are the digits of \pi special? Is there anything actually significant hidden in the seemingly random digits of this all-important transcendental number?

This is a well-trodden topic among pseudo-intellectuals and science fiction writers alike. No less than the great Carl Sagan afforded a special significance to the digits of our friend \pi at the very end of Contact (a part which didn’t make it into the movie). But is there any truth to this? Or even any evidence for it? In fact, how would you even go about trying to figure it out?

What got me thinking about this was a website I came across a few weeks back about finding strings of specific numbers in \pi – you can see it here. It’s a very cool page, which lets you do things like search for your Social Security number in the digits of \pi (no joy for me there; I only get 8 of 9 digits). However, down at the bottom they define something which I formalize as follows:

Loop Sequences: A loop sequence \mathcal{L} in a string of single-digit integers \mathcal{S} is a set of integers \mathcal{L}=\{n_1,n_2,...,n_N\} such that, written as a string of single digits, the integer n_j is found at position n_{j+1} in the string \mathcal{S}, and n_N is found at position n_1.

This is perhaps best illustrated by an example. Let’s start with n_1=169. It turns out that starting at the 35th digit of \pi (counting from just after the decimal point) we have …841971693993…, which contains 169 starting at the 40th position, so n_2=40. Continuing in this way, I find n_3=70, n_4=96, and so on, until eventually you find yourself looking at 169 again! This is a loop sequence.
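In code, the "find the number, jump to its position, repeat" procedure is only a few lines of Python. This is a sketch, with str.find standing in for a search through a real digit file, and positions counted from 1 after the decimal point as in the example above:

```python
def follow_chain(digits, start):
    """Follow n -> (1-based position of the first occurrence of str(n)
    in digits) until a number repeats.

    Returns the cycle (the loop sequence) as a list, or None if some
    number in the chain never appears in the available digits.
    """
    seen = []
    n = start
    while n not in seen:
        seen.append(n)
        pos = digits.find(str(n))     # 0-based index, -1 if absent
        if pos < 0:
            return None
        n = pos + 1                   # count positions from 1
    return seen[seen.index(n):]       # keep only the cycle part

# Toy example on a made-up digit string: "5" first appears at position 3,
# and "3" first appears at position 5, so {5, 3} is a loop.
print(follow_chain("125436", 5))  # [5, 3]
```

Note the chain can wander before it closes (a rho shape), which is why only the tail from the first repeated number is returned.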

That web page gives a single loop sequence, found by one Dan Sikorski. I wondered if there are more – how common are these loops, and what would it take to find them? This sounded like an interesting computational project (tackling it theoretically might be possible, but struck me as more difficult), so I thought I would look for loops in some mathematical constants, along with random numbers, to see if there was any evidence for \pi being special.

In short, no, there is not.

Of course, since these numbers are infinitely long, every number you start with must loop back at some point. What I’m after is how common these loops are. So let’s start with a million digits of \pi and try to find loops that contain any of the numbers 1 to 100000 (a rather arbitrary cutoff, but my PC can handle it in under an hour, so it seems appropriate). The results are as follows:

For \pi, I found the following loops:

\mathcal{L}_1(\pi)=\{1\} (this is self-referencing)

\mathcal{L}_2(\pi)=\{40,70,96,180,3664,24717,15492,84198,65489,3725,16974,41702,3788,5757,1609,62892,44745,9385,169\} (this might be called “the Sikorski loop”)

\mathcal{L}_3(\pi)=\{19,37,46\} (this is a new loop, but who knows if I’m the first to notice it!)

For e, I found the following loops:



\mathcal{L}_1(e)=\{44709\} (self-referencing)

\mathcal{L}_2(e)=\{57310\} (another self-referencing loop)
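The search that produces lists like these can be sketched in plain Python (this is a sketch, not the exact script I ran; it keys each cycle by its smallest member so rotations of the same loop are only counted once):

```python
def find_loops(digits, max_seed):
    """Collect the distinct loop sequences reachable from seeds 1..max_seed.

    Each chain n -> (1-based position of the first occurrence of str(n)
    in digits) either runs out of the available digits or falls into a
    cycle; cycles are keyed by their smallest member so each loop is
    stored only once.
    """
    loops = {}
    for seed in range(1, max_seed + 1):
        chain, n, dead = [], seed, False
        while n not in chain:
            chain.append(n)
            pos = digits.find(str(n))   # 0-based index, -1 if absent
            if pos < 0:
                dead = True             # chain leaves the known digits
                break
            n = pos + 1                 # positions count from 1
        if not dead:
            cycle = chain[chain.index(n):]
            loops[min(cycle)] = cycle
    return list(loops.values())

# On a toy digit string every chain from seeds 1..6 closes quickly,
# and {5, 3} shows up as a genuine two-element loop.
print(find_loops("125436", 6))
```

On a million digits the inner str.find is the bottleneck; precomputing the first position of every short substring would speed this up considerably, at the cost of memory.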

To see whether this distribution is at all unusual, I generated 100 random strings of a million digits and ran the same kind of search. The distribution for the random numbers, plotted alongside the loops I found in \pi and e, is:

Distribution of Loops in Random Digits, Pi, and e.

The random distribution probably looks exactly how one would expect: smaller loops are far more common, and larger loops (longer than 5 or so) are part of the statistical variation. Due to small-number statistics, it’s very hard to convince yourself that \pi and e are particularly special in terms of the distribution of their loops. It might be tempting to say that the length-20 loop in \pi lies outside the statistical variation, but you can see that I found loops of lengths 17, 18 and 31 in the random sample. For this reason, I would say this study does not suggest anything special about the character of the digits of \pi and e.

I suppose one could go further and improve the statistics – perhaps I’ll run my laptop for a week and do the 10-million-digit version of this – but it’s a little hard to imagine finding any evidence of cyclic patterns in the digits of \pi or e.