Livestream of Solar Eclipse 2017

As many of you know, there is going to be a total solar eclipse today, Aug 21, 2017. Here in Massachusetts we will not experience totality, but because I am running some outreach for the students at Merrimack College, I will be livestreaming through the eyepiece of our solar telescope. Hopefully the stream will be up from 1:30 pm to around 4:00 pm EDT. The YouTube link is here:

https://gaming.youtube.com/user/cdustonable/live

To get this image, I am using a solar telescope with an H-alpha filter, which allows us to view things like solar prominences and other structures in the corona. I am viewing directly with my cell phone and a camera app called “Camera VF-5”.

Happy viewing, and also check out the NASA livestream, at https://eclipse2017.nasa.gov/eclipse-live-stream. Of course, they have a bunch more interesting things going on, but we are all at the mercy of the weather!

Knot Theory on Sage

I recently found myself needing to know some things about knots – calculating fundamental groups and polynomial invariants, specifically. Going to the sources (Fox’s 1962 classic “A Quick Trip Through Knot Theory” is really great mathematical reading) reveals there is a pretty straightforward algorithm for doing these kinds of things, but it seems like I’m going to have to work out a lot of them. So I thought “oh, maybe it’s easy to do this in SageMath?”

Well, it totally is, so I’m going to just show a few examples. Most of what I’m going to show can easily be found on the Sage help pages for links, but I’ll focus on specifically what I am interested in. Since I’m also a beginner at Sage, this will be a very basic tutorial.

First, we should make sure we can use the tools in Sage to reproduce known results, so I’ll do that with the most popular non-trivial knot, the trefoil (it also happens that the trefoil is a (2,3) torus knot, and I’m going to be interested in torus knots in general). I’ve tested the following both on my local installation of Sage and on the live SageMathCell.

First, we need to tell Sage which knot we are interested in by giving it the linking of the arcs of the knot. Each arc needs a number, and we need an orientation. The figure on the right shows my picture for the trefoil. Notice that the definition of “arc” is “between two crossings”, so although in this particular projection arcs 3 and 5 are “the same line”, in the link representation they are represented differently. There are three crossings, so the way you tell Sage which knot you want to know about is by specifying these three crossings as lists of the arcs, starting with the undercrossing arc and going clockwise. This is done in the following manner:

L=Link([[2,6,5,1],[6,3,4,5],[3,2,1,4]])

So first (and maybe most impressively), you can get a nice plot of your knot:

L.plot()

It’s easy to verify this is the same knot that I drew above, although the orientation is not specified. We can also find the fundamental group,

L.fundamental_group()

Finitely presented group < x0, x1, x2 | x1*x0^-1*x2^-1*x0, x0*x2^-1*x1^-1*x2, x2*x1^-1*x0^-1*x1 >

Which we can simplify to the traditional representation by storing it as a group first:

G=L.fundamental_group()
G.simplified()
Finitely presented group < x0, x1 | x1*x0^-1*x1^-1*x0^-1*x1*x0 >

And, finally, we can easily find both the Alexander polynomial and the Jones polynomial:

L.alexander_polynomial()
t^-1 - 1 + t
L.jones_polynomial()
1/t + 1/t^3 - 1/t^4

So although working through the Fox derivatives is kinda fun, this is clearly easier!
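As a quick aside (my own addition, not part of the original walkthrough): since the trefoil is the (2,3) torus knot, it can also be entered as the closure of a braid, which is convenient for torus knots in general. A minimal sketch, assuming the standard two-strand braid word for the (2,3) torus knot:

B=BraidGroup(2)             # braid group on two strands
T=Link(B([1,1,1]))          # closure of sigma_1^3 is the trefoil
T.alexander_polynomial()    # should again give t^-1 - 1 + t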

So the problem I’m actually interested in is the following: given a covering space p:M\to \mathbb{S}^3, branched over a knot K, what is the preimage of the knot, p^{-1}(K)? It turns out that there are some nice results in this area, but mostly dealing with the branched cover M rather than with the knot K. For instance, if you pick K carefully (the Borromean Rings, for example), you can construct any closed 3-manifold M as such a branched covering. The problem is, there aren’t many results on what the preimage of the knot actually is. What we do have is an algorithm for a presentation of the fundamental group (presented by Fox in that earlier article as a variation of the Reidemeister–Schreier algorithm). Basically, given a knot K and a representation of the group of the knot into a symmetric group S_n, we can determine a presentation of the group in the cover. So that doesn’t directly tell us the knot itself, but at least we can use a presentation of the fundamental group to calculate the knot invariants and maybe learn something about the knot in that manner.

The problem with trying to do exactly what we did above is Sage doesn’t know how to find the Alexander polynomial directly from a group presentation. However, it does know how to find the Alexander matrix, so as long as we are careful with polynomial rings we can use this to find the Alexander polynomials.

What I’m going to do is an example from Fox. This is actually example 1 in his article, which is the knot shown below. I’ve added orientations which correspond to the later calculations.

To find the group presentation, we take the generators specified in the picture and define some relations, which are rather like the link relations we used above. Around each crossing, going under the first undercrossing is the same as going under the other three crossings, taking orientations into account. For instance, you can read the first one off the figure,

dad^{-1}cda^{-1}d^{-1}ab^{-1}a^{-1}=1

What we need to do is figure out how to implement these relations in Sage to reproduce the same fundamental group.

The first step will be to define a free group on the generators; in Sage this is easy:

H=FreeGroup(4)

Next we need to create our finitely presented group by taking the quotient of this free group with our relations. In Sage, each relation can be specified by first ordering the generators (1-4 in this case), and then specifying words by sequences of integers, where each integer represents a generator. The sign of the integer indicates the power to which the generator is raised. For instance, the word abc^{-1}a would be [1,2,-3,1] and the word a^3 would be [1,1,1]. Doing this for the knot above gives us

G=H/(H([4,1,-4,3,4,-1,-4,1,-2,-1]),H([1,2,-1,4,1,-2,-1,2,-3,-2]),H([2,3,-2,1,2,-3,-2,3,-4,-3]),H([3,4,-3,2,3,-4,-3,4,-1,-4]))

(Pay careful attention to the parentheses and brackets – Sage interprets them differently, and it matters whether you have a series of relators, as is the case here, or a single relator. For a single relator, the syntax is

G=H/[H([....])]

.)
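As a quick sanity check of the integer-word convention (my own test, not part of Fox’s example), you can build a word in the free group H defined above and print it:

w=H([1,2,-3,1])    # the word a*b*c^-1*a in the generators x0,...,x3
w                  # should print x0*x1*x2^-1*x0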
So the next step is to find the Alexander matrix, which Sage knows how to do. But we first want to specify the ring over which we want to define the matrix. We really should use Laurent polynomials, since that’s how the Alexander matrix is defined, but the last step (taking determinants) is not implemented in Sage for Laurent polynomials, so the trick here is to use polynomials over the integer ring:

R.<t>=PolynomialRing(ZZ)

With that done, we can find the Alexander matrix under the Abelianizing map, which sends each generator to a single generator t (which I defined above):

M=G.alexander_matrix([t,t,t,t]);M
[-t^2 + 2*t - 1             -t              t  t^2 - 2*t + 1]
[ t^2 - 2*t + 1 -t^2 + 2*t - 1             -t              t]
[             t  t^2 - 2*t + 1 -t^2 + 2*t - 1             -t]
[            -t              t  t^2 - 2*t + 1 -t^2 + 2*t - 1]

(You can also just call this with empty parentheses () and get the Alexander matrix before the Abelianization.)

Finally, we need to find the generator of the first elementary ideal of this matrix, which is the Alexander polynomial. I grabbed this code from someone smarter than me (a mathematician by the name of Nathan Dunfield):

alex_poly = gcd(M.minors(G.ngens() - 1))

There’s obviously nothing too magic about this line – I just wasn’t aware Sage knew how to find the GCD of a set of polynomials. Anyway, we can check that our answer is correct (that is, matches Fox’s original answer):

print(alex_poly)
t^6 - 5*t^5 + 10*t^4 - 13*t^3 + 10*t^2 - 5*t + 1

And it does!
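For convenience, the whole calculation can be wrapped in a small helper (a sketch of my own, not from Fox or the Sage documentation; the function name is hypothetical). It just repeats the steps above: abelianize, take the Alexander matrix, and take the gcd of its minors.

def alexander_poly_from_presentation(G, t):
    # Alexander polynomial of a finitely presented knot group G,
    # computed as the gcd of the (n-1)x(n-1) minors of the
    # Abelianized Alexander matrix (every generator sent to t).
    n = G.ngens()
    M = G.alexander_matrix([t]*n)
    return gcd(M.minors(n-1))

# with G and R.<t> defined as above:
print(alexander_poly_from_presentation(G, t))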

So of course, I haven’t talked at all about the physics of these preimages of knots p^{-1}(K). This is related to a talk I gave at a recent conference, the First Minkowski Meeting on the Foundations of Spacetime Physics, so I’ll post something on that as I work on the paper for the conference proceedings.

Primordial Black Holes as Dark Matter?

I was recently asked by a family friend “have you heard about this new idea that primordial black holes could explain dark matter?”

Well I hadn’t, so I did a little investigating and it’s a pretty clever idea. Part of the backstory here is “what can we do with gravitational waves?”, so that’s where I’ll start.

One of the surprising things about the very first direct observations of gravitational waves by LIGO is the masses of the constituent black holes. The first pair was 36 and 29 solar masses, the second was 14 and 8, and the third was 31 and 19. What was immediately understood to be important about these sources is that they are generally more massive than the other stellar-mass black holes we’ve found previously (usually from X-ray studies; the maximum there is about 18 solar masses). Significantly, the larger-mass ones should also be *less* likely, according to stellar formation scenarios. So while we are only talking about 6 new black holes, we clearly need to know if that will pose a problem for stellar formation models. (There are also some issues in regard to the spins of these black holes, but I won’t go down that particular rabbit hole.)

So people started looking at it, and found that it was generally possible to get these kinds of higher-mass black holes, but it did put some constraints on the formation scenarios. Basically, the problem is that you need to make giant stars, which generally need low metallicity to form. However, the conditions that generate those stars (a high star formation rate in the past) tend to produce higher overall metallicity more quickly. If you tune the star formation rate down a bit, so there are actually fewer large-mass stars, you keep the overall metallicity low enough that massive black holes can still effectively form. So it’s constraining, but not overly so.

But that’s actually not what I want to talk about – what about other formation scenarios for these black holes? Specifically, what about primordial black holes (PBH)? These are black holes that formed as a result of density fluctuations in the early universe. It turns out it’s pretty easy to produce black holes of this mass in this manner (and the spin, which I skipped talking about above, is a little easier to produce as well). So, cool, we have at least two different ways the universe can give us the black holes found by LIGO.

But are there any other implications of primordial black hole production at this rate? Well, without a stellar companion, there would typically be no accretion disk, and we would have no way to observe these black holes. But of course – that’s exactly the condition we need for dark matter!

So, in a recent paper, Juan Garcia-Bellido and his collaborators (who include Sebastien Clesse, Andrei Linde, and David Wands) have worked this out in a bit of detail (and apparently there are others working on this as well, such as Alexander Kashlinsky).

The idea that black holes (or other compact objects) could be a model for dark matter is not new, actually. We’ve been looking for microlensing due to compact objects in the galactic halo for years (these objects are called MACHOs), but have essentially found nothing. What’s interesting about their new models is that the mass distribution for primordial black holes in the 10-100 solar mass range sits right in the region of parameter space which has not been covered by previous studies:

As you can see in the figure (which comes from the paper), the limits on PBHs have a gap between the lower-mass MACHO/EROS observations and the higher-mass WMAP3/FIRAS observations. It looks to me like that gap peaks around 0.01 of a solar mass and carries up to around 100. That is a broad range for black holes, but look at the full range we are talking about here (25 orders of magnitude!).

So there are lots of other interesting details here, but what’s really fascinating about this new paper is that there is apparently a very large set of phenomenological signals we can use to test this hypothesis. It would affect the CMB, star formation in the early universe, X-ray transients, and a whole host of other things. One particularly interesting idea is that rather than looking for lensing, we might try to look for the shift of the positions of stars over time. With the new plethora of data on stellar positions (like that from the GAIA satellite), this might also be the first time someone could actually attempt such a study. So there are a lot of interesting things to check.

As a sidenote, some of these black holes would of course develop an accretion disk through random interactions with stars or gas, and produce point sources that would emit in Gamma or X-ray range. Well, there actually is a large list of unidentified point sources in nearly all the X-ray catalogs. In fact, my undergraduate honors thesis was working on trying to identify unknown point sources in a Chandra X-ray image of the galactic center. The paper suggests that rather than looking at spectral characteristics, one should look for a correlation between the point sources and the expected dark matter distribution.

So, we’ve got LIGO finding a new class of black holes, which could be created in the early universe, and a new model for dark matter. Given how much trouble the particle model for dark matter is having (sorry LHC!), we should be taking these new ideas seriously. And what’s great about this is there are a *bunch* of great ways to look for this primordial black hole signal. Of course, maybe that means it won’t last long as an explanation for dark matter, but it’s something new to look at that doesn’t require any exotic new physics.

And, not to belabor the point, but all of this wouldn’t have been possible without LIGO. Thanks, LIGO!

 

Comments on Max Tegmark’s Hierarchy of Reality

I’m in the middle of reading Max Tegmark’s recent book Our Mathematical Universe, which is (so far, I’m about halfway through) mostly about the idea that it’s possible the simplest (or most natural) interpretation of quantum mechanics directly leads to the conclusion that multiple universes must exist. I just finished reading an interesting “excursion” chapter in which he discusses the nature and perception of reality, and I would like to make some comments on it because it differs from my own work on the subject.

(my essay on this topic can be found here.)

Tegmark breaks reality into three pieces, and it will be easiest to see what’s going on if I show you the actual figure in the book (this is shamelessly stolen from Tegmark, and all credit is his. If it turns out he’s not ok with this, I hope he’ll let me know!)


Tegmark’s Hierarchy of Reality, from Our Mathematical Universe (Although I’m assigning the word “hierarchy” to it for my own devious purposes)

The idea here is that our perception of reality (“Internal Reality”) is governed by our senses, like sight and touch and smell. We interact directly with a version of reality which we can all agree on called “Consensus Reality”, and that consensus reality is a result of something which is abstractly true, “External Reality”. In the book he makes the point that to determine the fundamental “theory of everything”, we don’t need to actually understand human consciousness, because that’s explicitly separated from consensus reality by our own perceptions.

While there certainly are elements to this hierarchy that I like, I actually think making these divisions is pretty arbitrary. I can easily ask my physics I students questions which will break “consensus reality” but stay in the realm of classical physics. For instance, I recently asked someone “what is the acceleration of an object in projectile motion?” and they responded “in the direction of motion”, indicating the parabolic path. Ok, I asked a well-defined mathematical question and received an (incorrect) response that left the bounds of mathematical rigor, but it was about classical physics, and therefore solidly in Tegmark’s “consensus reality”. The student’s level of analysis was not high enough to understand that “acceleration” does not mean “velocity” (or whatever else they might have thought I meant), but it was within *their* consensus reality.

What am I driving at? Perhaps the reality we can all agree on is not mathematical, but only descriptive in nature. For instance, the student and I can both draw pictures of how an object moves in projectile motion because we’ve seen real-life objects move in projectile motion. On the other hand, if mathematics is objectively “right” then I can prove some versions of consensus reality incorrect (“The day is 24 hours long”). Of course, no one would really say “the day is 24 hours long” is *wrong*, just that if you define the day with respect to the background stars, you get something a little bit shorter.

So even if we split off the “perception of reality” piece from our hierarchy of reality, we still end up with some rather arbitrary definitions of reality, from purely mathematical up to descriptive. This suggests that reality should be viewed as a continuum, with no clear boundaries between abstractly true and subjectively true, which all occur at different levels of detail. So what can we use to determine which level we are talking about? I’ve called such a thing the axiom of measurement, and you can check out the link in the first paragraph if you want to read the original essay.

The idea is that in order to determine a standard of “truth”, we need a standard of “measurement”. I can verify the statement “objects in projectile motion move in parabolic paths” as long as I use a measurement tool which is not accurate enough to see the effects of air resistance. That defines our “consensus reality”. But once I build a better tool, I can prove our consensus reality wrong, which requires us to redefine it at each moment for each measurement. Thus we have a natural scale for truth, defined experimentally by whatever apparatus we have available.

For me, the bonus with this approach is that you know when things are true; they are true when you know an experiment can confirm them. What you lose is the concept of absolute truth, but it’s easy to argue that the concept of absolute truth has brought us nothing but trouble anyway!

(Just as a note: I think we necessarily lose absolute truth because we would have to be able to say “we will never design an experiment to prove this wrong”, and I don’t think we will ever be able to do that. Can anyone imagine an experiment to prove that 1+1 is not 2? I think it might strain the logical system I’m working in. Anyway, more thought on this is required.)

Of course, I’m really not trying to be super-critical of Tegmark; I actually like some of his analysis. But I think his splitting here is somewhat on the homo-centric side, since it includes human perceptions at all levels (after all, we didn’t even know about the transition between quantum and classical reality until ~100 years ago, and I worry about a definition of reality which shifts in time!). If we include the experimental apparatus in the very definition of our theoretical model, we achieve consistency without having to worry about either cognitive science or a shifting consensus reality.

Chaos and SageMath

This semester I’m teaching our Analytical Mechanics course, and I just finished a day on Introduction to Chaos. In this class we are using the SageMath language to do some simple numerical analysis, so I used Sage to demonstrate some of the interesting behavior you find in chaotic systems. Since I’m learning Sage myself, I thought I would post the results of that class here, to demonstrate some of the code and the kinds of plots you can get with simple tools like Sage.

Before getting into the science: SageMath is a free, open-source mathematics package which includes things like Maxima, Python, and the GSL. It’s great because it’s extremely powerful and can be used right in a web browser, thanks to the Sage Cell Server. So I did all of this right in front of my students, to demonstrate how easy this tool is to use.

For the scientific background, I am going to do the same example of the driven, damped pendulum found in Classical Mechanics by John Taylor (although the exact same system can be found in Analytical Mechanics, by Hand and Finch). So, I didn’t create any of this science; I’m just demonstrating how to study it using Sage.

First, some very basic background. The equation of motion for a driven, damped pendulum of length L and mass m being acted upon by a driving force F(t)=F_0\cos\omega t is

\ddot{\phi}+2\gamma\dot{\phi}+\omega_0^2\sin(\phi)=f\omega_0^2\cos(\omega t)

\gamma here is the damping term and f=F_0/(mg), the ratio of the forcing amplitude to the weight of the pendulum. In order to get this into Sage, I’m going to rewrite it as a system of first-order linear differential equations,

\dot y=x

\dot x=f\omega_0^2\cos(\omega t)-\omega_0^2\sin(y)-2\gamma x

This is a typical trick to use numerical integrators, basically because it’s easy to integrate first-order equations, even if they are nonlinear.

It’s easiest to find chaos right near resonance, so let’s pick the parameters \omega_0=3\pi and \omega=2\pi. This means the t-axis will display in units of the drive period, 1 s. We also take \gamma=3\pi/4. The first plot will be this system when the driving force is the same as the weight, that is, f=1; the code and result (Figure 1) are shown below.

from sage.calculus.desolvers import desolve_system_rk4
x,y,t=var('x y t')
w=2*pi        # driving frequency
w0=3*pi       # natural frequency
g=3*pi/4      # damping constant gamma
f=1.0         # forcing amplitude relative to the weight
# first-order system: y is the angle phi, x is its time derivative
P=desolve_system_rk4([-2*g*x-w0^2*sin(y)+f*w0^2*cos(w*t),x],[x,y],[0,0,0],ivar=t,end_points=[0,15],step=0.01)
# keep the (t, phi) pairs, interpolate with a spline, and plot
Q=[[i,k] for i,j,k in P]
intP=spline(Q)
plot(intP,0,15)

Figure 2 is a plot with the driving force slightly bigger than the weight, f=1.06.

Figure 1, f=1

Figure 2, f=1.06

This demonstrates an attractor, meaning the steady-state solution eventually settles down to oscillate around \phi\approx 2\pi. We can check this is actually still periodic by asking Sage for the value of \phi at t=30 s, t=31 s, etc., by calling this line instead of the plot command above

[intP(i) for i in range(30,40)]

(Note that we also have to change the range of integration from [0,15] to [30,40].) The output is shown in Figure 3; the period is clearly 1.0 s out to four significant figures.
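For concreteness, here is one way to do that check (a sketch of my own, not the exact code from the post: I simply extend the integration out to t=40 so the spline is defined on the sampling window, reusing the variables defined above and f=1.06):

f=1.06
P=desolve_system_rk4([-2*g*x-w0^2*sin(y)+f*w0^2*cos(w*t),x],[x,y],[0,0,0],ivar=t,end_points=[0,40],step=0.01)
intP=spline([[i,k] for i,j,k in P])
[intP(i) for i in range(30,40)]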


Figure 3: The value of \phi at integer steps between t=30 and t=40 for f=1.06.


Figure 4: The value of \phi at integer steps between t=30 and t=40 for f=1.073.


Figure 6: The value of \phi at integer steps between t=30 and t=40 for f=1.077.

Next, let’s increase the forcing to f=1.073. The result is shown in Figure 7. The attractor is still present (now with a value of around \phi=-2\pi), but the behavior is much more dramatic. In fact, you might not even be convinced that the period is still 1.0 s, since the peaks look to be different values. We can repeat our experiment from above, and ask Sage to print out the value of \phi at integer timesteps between t=30 and t=40. The result is shown in Figure 4. The actual period appears to be 2.0 s, since the value of \phi does not repeat exactly after 1.0 s. This is called period doubling.


Figure 7: f=1.073


Figure 8: f=1.077

In Figure 8, I’ve displayed a plot with f=1.077, and it’s immediately obvious that the oscillatory motion now has period 3.0 s. We can check this by playing the same game, shown in Figure 6.

Now we are in a position to see some unique behavior. I am going to overlay a new solution onto this one, but give the second solution a different initial value, \phi(0)=-\pi/2 instead of \phi(0)=0. The code I am adding is

# same parameters as above, but start the second solution at phi(0) = -pi/2
P2=desolve_system_rk4([-2*g*x-w0^2*sin(y)+f*w0^2*cos(w*t),x],[x,y],[0,0,-pi/2],
ivar=t,end_points=[0,15],step=0.01)
Q2=[[i,k] for i,j,k in P2]
intP2=spline(Q2)
plot(intP,0,15)+plot(intP2,0,15,linestyle=":", color="red")

The result is shown in Figure 8. Here we can see the first example of the sensitivity to initial conditions. The two solutions diverge markedly once you have a slightly different initial condition, heading towards two very different attractors. Let’s plot the difference between the two oscillators,
|\phi_1(t)-\phi_2(t)|,
but with only a very small difference in the initial conditions, \Delta\phi(0)=1\times 10^{-4}. The code follows:

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":", color="red")
plot(lambda x: abs(intP(x)-intP2(x)),0,15)

 


Figure 8: Two oscillators with f=1.073 but with different initial values, \Delta\phi(0)=-\pi/2.


Figure 9: A plot of the difference in the oscillators over time, \Delta\phi(t), for f=1.077 and a very small initial separation, \Delta \phi(0)=1\times 10^{-4}.

This is shown in Figure 9. It clearly decays to zero, but that’s hard to see so let’s plot it on a log scale, shown in Figure 10.

#plot(intP,0,15)+plot(intP2,0,15,linestyle=":", color="red")
plot_semilogy(lambda x: abs(intP(x)-intP2(x)),0,15)

Now, let’s see what happens if we do the same thing, but push the force parameter over the critical value of f=1.0829. This is displayed in Figure 11, for f=1.105. We get completely the opposite behavior: the two oscillators are driven away from each other despite their tiny initial separation. This is the essence of “Jurassic Park chaos” – a small change in the initial conditions (like a butterfly flapping its wings in Malaysia) causes a large change in the final outcome (a change in the weather pattern over California).
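To produce a plot like Figure 11, the only substantive changes to the code above are the forcing parameter and the tiny offset in the initial angle. A minimal sketch (my own arrangement of the commands already introduced, reusing the variables defined earlier):

f=1.105    # overcritical forcing (the critical value is about 1.0829)
P=desolve_system_rk4([-2*g*x-w0^2*sin(y)+f*w0^2*cos(w*t),x],[x,y],[0,0,0],ivar=t,end_points=[0,15],step=0.01)
P2=desolve_system_rk4([-2*g*x-w0^2*sin(y)+f*w0^2*cos(w*t),x],[x,y],[0,0,1e-4],ivar=t,end_points=[0,15],step=0.01)
intP=spline([[i,k] for i,j,k in P])
intP2=spline([[i,k] for i,j,k in P2])
plot_semilogy(lambda s: abs(intP(s)-intP2(s)),0,15)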


Figure 10: A log plot of two oscillators with f=1.073 but with very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}.


Figure 11: Finally, a log plot of two overcritical oscillators (f=1.105) and very small differences in their initial conditions, \Delta\phi(0)=1\times 10^{-4}

Merging Stars, Star Formation, and Planetary Nebulae

I was reading the latest issue of Sky and Telescope this week and came across an article by Monica Young about the formation of massive stars (here’s a link to the highlights; you’ll need an account to actually read it). The gist of the article is that forming massive stars is difficult – as mass accumulates and nuclear reactions begin, the radiation pressure from the young (not yet massive) star will tend to blow material away, halting the growth. This happens around 10 solar masses, so it’s a bit mysterious how we end up with stars more massive than that (and we do – although they are rare, type O stars are over 15 solar masses, and the most massive stars are over 25). The article covers a few modern approaches, most of which involve particular dynamics by which material accumulates in a different physical location than the photon flux from the new star. But it was also mentioned that some massive stars are simply the result of merging younger stars, which was the topic of my master’s thesis! Since I’ve never written about it here (and it’s only been published at the academic library), I thought I would give a quick overview of the cute idea and nice results we worked out (“we” being myself and my adviser at the time, Robin Ciardullo).

The problem we were tackling had to do with the Planetary Nebula Luminosity Function (PNLF – there is even a Wikipedia page about this now!). As medium-sized and smaller stars (under 10 solar masses or so) reach the end of their lives, they turn into really pretty objects called planetary nebulae (PNe, and here are some cool Hubble pics). Massive stars (a) evolve faster and (b) make brighter PNe than their less massive siblings, so over time fewer and fewer bright PNe should be produced by any given population of stars. Further, the luminosity of a PN is primarily due to excitation from the central white dwarf, which also dims over time. Therefore, the PNe in a single population of stars should generally be getting less luminous over time. The problem is, that is not observed, at all!

The brightest PNe in a population have the same luminosity, regardless of the age of the population.

The figure above comes from Ciardullo (2006), and demonstrates the problem – all the brightest PNe have the same absolute magnitude, regardless of the age of the stellar population (which goes old to young from top to bottom). This allows you to use PNe as a secondary method to find astronomical distances, but it also shows that there is something fundamentally incorrect with the nice picture of stellar evolution I’ve presented above. The idea explored in my thesis was that as the population aged, stellar mergers produced a ready supply of massive blue stars (called “Blue Stragglers”) which would form the brightest PNe. The advantage of a model like this is that it does not require a significant amount of detailed physics, such as the effects of stellar rotation, wind, or other micro-astrophysics. It is simply a population synthesis approach – we essentially created stellar populations, used standard stellar evolutionary models, but included a small fraction of stars (around 10%) which merged to form more massive stars.

First, let’s take a look at the “standard picture”, with no Blue Stragglers:

Simulated PNLFs with no merging stars.

The ages of the stellar populations are shown in the upper lefthand corner (1-10 Gyr). It clearly displays the effect I talked about – the brightest PNe fade over time as the population ages.

Now let’s take a look at our basic model, including 10% blue stragglers into a population of several different ages:

The PNLF single burst models with 10% blue straggler fraction.

As we expected, the brightest PNe held pretty constant over a variety of stellar population ages (1-10 Gyr, shown in the upper corner, with 1 Gyr being a bit of an outlier). The absolute magnitude ended up being a little high, and the initial shape was shallower than the observations, but it was clear that the blue stragglers were able to keep the maximum luminosity of the PNLF relatively constant over a wide range of population ages.

It’s worth noting that the two populations of blue stragglers which we are discussing here are actually disjoint. Since PNe form from stars under 10 solar masses, the usual formation scenarios have no trouble making them. It’s only for stars over 10 solar masses that the merging scenario is invoked as a creation mechanism. On the other hand, both of these merger scenarios are based on stars which form in binary systems and then merge at a later time. So although the end masses are different, the formation mechanism from a blue straggler point of view is the same. It would be interesting to see if one could reproduce the required blue straggler fraction by using the initial binary population. Using both the PNLF and massive star formation considerations, one might be able to check this over the entire mass range of the initial mass function of binaries. Not something I can see spending time on at the moment, but an interesting question which might even make a nice undergraduate project!

If you are interested in reading the whole thesis, you can check it out here. What I’ve talked about above is only half the story – there is also the “dip” found in some PNLFs (but not M31’s, for instance), which the model tried to address as well.

Cosmic Strings and Spatial Topology

Some of my recent work has focused on foliations of spacetime – which are essentially 1- or 2-dimensional parametrizations of 3- or 4-manifolds. I am mostly interested in them because my work often requires 1- and 2-dimensional embedded spaces, and you can use these initial spaces to construct entire foliations of the manifold. If you have a 2-dimensional space (a surface), you can always smooth it out by squishing all the curvature to a single point, called a conical point (you can try this – next time you get an ice cream cone, take the paper off the cone and try to flatten it; you will be able to do it everywhere except for one point, at the tip of the cone). Mathematically, a surface with a conical point looks exactly like a surface intersecting a cosmic string, so this is a long-winded way of explaining how my interest in foliations has led to an interest in cosmic strings!

The latest part of this story: using some of the interesting properties of cosmic strings, I’ve been able to use the lack of observational evidence for them to constrain the global topology of the universe. This is a pretty interesting idea because there are very few ways that we can study the overall shape of the universe (shape here means topology: does the universe look like a plane, a sphere, a donut, or what?). There are lots of ways we can study the local details of the universe (the geometry), because we can look for the gravitational effects of massive objects like stars, galaxies, black holes, etc. However, we have essentially no access to the topological structure, because gravity is actually only a local theory, not a global theory (I could write forever about this, but I’ll leave it for a later post, maybe…). We have zero theoretical understanding of the global topology, and our only observational understanding comes from studying patterns in the CMB. The trick with cosmic strings is that they actually serve to connect the local gravitational field (the geometry) to the global structure of the universe (the topology).

The game is this – take a spacetime with cosmic strings running around everywhere, and take a flat surface which intersects some of them. This surface can always be taken as flat, so the intersections are conical points. If you measure an angular coordinate around each point, you won’t get 2\pi, you’ll get something a bit smaller or a bit larger since the surfaces are twisted up around the points. It turns out that if you add up all the twists, you had better get an integer – the genus of the surface. The genus is essentially the number of holes in the surface. A sphere has g=0, a torus g=1, two tori attached to each other have g=2, and so on.
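To make the bookkeeping concrete (this formula is my own addition, using the standard Gauss–Bonnet normalization, rather than anything quoted from the post or the paper): if the surface is flat except at conical points with cone angles \theta_i, then the angle deficits must satisfy

\sum_i (2\pi-\theta_i)=2\pi\chi=2\pi(2-2g),

so the total twist, measured in units of 2\pi, is an integer fixed by the genus g.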

Now we consider what kind of observational evidence there is for cosmic strings. The short answer, none! People have been looking for them in the CMB, but so far they’ve only been able to say “if cosmic strings exist, they must be in such-and-such numbers and have energies of such-and-such.” If we use these limits, we find that to a very good approximation, if cosmic strings exist, a surface passing through them must have genus 1, and therefore be a torus (the surface of a doughnut)!

Ok big deal – but here’s where the foliations come in. For example, if we parametrize our spatial (3-dimensional) manifold with tori, the result is a 3-torus. So this actually implies that space is not a sphere, but is a solid torus (like a doughnut). The mathematics behind this statement are actually quite profound, and were worked out in the early days of foliation theory by the likes of Reeb, Thurston, and Novikov. But the idea is that such foliations of 3-manifolds are very stable, and a single closed surface greatly restricts the kinds of foliations allowed for the manifold as a whole.

The arXiv paper where I discuss this in more detail can be found here. The idea that space is not a sphere is not new, and there is actually some evidence for it in the CMB, in the form of a repeating pattern (or a preferred direction) in space. But my primary interest is in pointing out that this is an independent way of measuring the topology of the universe, since it’s based on local observations of strings in the CMB rather than overall patterns. If strings don’t actually exist, the argument can still be used to study the presence of conical singularities, but I expect the restrictions on the topology would be much less strict. Perhaps I’ll look further into that, but for the moment I’m happy with this. It’s a new way to determine information about the global topology of the universe, and it’s a great combination of pure mathematics, theoretical physics, and observational cosmology.