Bubbles Bad; Ripples Good

… Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire et vice versa … (Given an equation involving any number of fluent quantities, to find the fluxions, and vice versa.)

Category: mathematical physics

Bouncing a quantum particle back and forth

If you have not seen my previous two posts, you should read them first.

In the two previous posts, I shot particles (okay, simulated the shooting on a computer) at a single potential barrier and looked at what happens. What happens when we have more than one barrier? In the classical case the picture is easy to understand: a particle with insufficient energy to escape will be trapped in the local potential well forever, while a particle with sufficiently high energy will gain freedom and never come back. But what happens in the quantum case?

If the intuition we developed from scattering a quantum particle against a single potential barrier is correct (recall that depending on the frequency, i.e. energy, of the particle, some portion gets transmitted and some portion gets reflected), we may expect the quantum particle to bounce between the two barriers, each time losing some amplitude to tunneling.

But we also saw that the higher frequency components of the quantum particle have higher transmission amplitudes. So we may expect the high frequency components to decay more rapidly than the low frequency ones, so that the frequency of the “left over” parts continues to decrease in time. This, however, would be wrong, because we would be overlooking one simple fact: by the uncertainty principle again, very low frequency waves cannot be confined to a small physical region. So when we are faced with two potential barriers, the distance between them gives a characteristic frequency. Below this frequency (energy) it is not actually possible to fit a (half) wave between the barriers, so the low frequency waves must have significant physical extent beyond the barriers, which means that large portions of these low frequency waves will simply radiate away freely. Much above the characteristic frequency, on the other hand, the waves have large transmission coefficients and will not be confined.
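As a rough back-of-the-envelope estimate (mine, not from the original argument): if the two barriers are separated by a distance L, fitting a half wave between them requires a wavelength of at most 2L, so the smallest confined wavenumber is

\displaystyle k_{\min} \approx \frac{\pi}{L},

and since frequency and wavenumber coincide for the free wave equation used below, the characteristic frequency is of order \pi / L.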

So the net result is that we should expect for each double barrier a characteristic frequency at which the wave can remain “mostly” stuck between the two barriers, losing a little bit of amplitude at each bounce. This will look like a slowly, but exponentially, decaying standing wave. And I have some videos to show for that!

In the video we start with the same random initial data and evolve it under the linear wave equation with different potentials: the equations look like

\displaystyle - \partial^2_{tt} u + \partial^2_{xx} u - V u = 0

where V is a non-negative potential taken in the form

\displaystyle V(x) = a_1 \exp( - x^2 / b_1) - a_2 \exp( -x^2 / b_2)

which is a difference of two Gaussians. For the five waves shown, the values of a_1, b_1 are the same throughout. The coefficients a_2 (taken to be \leq a_1) and b_2 (taken to be < b_1) increase from top to bottom, resulting in progressively deeper dips and more widely separated double barriers. Qualitatively we see, as we expected,

  • The shallower and narrower the dip the faster the solution decays.
  • The shallower and narrower the dip the higher the “characteristic frequency”.

As an aside: the video shown above is generated using Python, in particular NumPy and Matplotlib; the code took significantly longer to run (20+ hours) than to write (not counting the HPDE solver I wrote before for a different project, coding and debugging this simulation took about 3 hours or less). On the other hand, this only uses one core of my quad-core machine, and leaves the computer responsive in the meantime for other things. Compare that to Auto-QCM: the last time I ran it to grade a stack of 400+ multiple choice exams it locked up all four cores of my desktop computer for almost an entire day.
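For the curious, here is a minimal sketch of how such a simulation can be set up. This is my own illustration with made-up parameter values, not the HPDE solver that actually produced the videos: a standard leapfrog discretisation of -\partial^2_{tt} u + \partial^2_{xx} u - Vu = 0 with the double-Gaussian potential above.

```python
import numpy as np

# Leapfrog sketch of  -u_tt + u_xx - V u = 0  on [-50, 50].
# All parameter values are made up for illustration.
L, N = 50.0, 2000
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
dt = 0.5 * dx                        # CFL condition: dt < dx

a1, b1 = 5.0, 1.0                    # outer Gaussian, fixed across runs
a2, b2 = 3.0, 0.5                    # the dip: a2 <= a1 and b2 < b1
V = a1 * np.exp(-x**2 / b1) - a2 * np.exp(-x**2 / b2)

rng = np.random.default_rng(0)
u_prev = rng.standard_normal(N) * np.exp(-x**2 / 10.0)  # random, localized
u = u_prev.copy()                    # zero initial velocity

def laplacian(w):
    out = np.zeros_like(w)
    out[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / dx**2
    return out                       # endpoints held at zero

for _ in range(20000):
    u_next = 2 * u - u_prev + dt**2 * (laplacian(u) - V * u)
    u_prev, u = u, u_next
```

Snapshotting u every few hundred steps and rendering the frames with Matplotlib gives the kind of video described above.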

As a further aside, this post is related somewhat to my MathOverflow question to which I have not received a satisfactory answer.


… and scattering a quantum particle

In the previous post we shot a classical particle at a potential barrier. In this post we shoot a quantum particle.

Whereas the behaviour of the classical particle is governed by Newton’s laws (where the external force providing the acceleration is given as minus the gradient of the potential), we allow our quantum particle to be governed by the Klein-Gordon equation.

  • Mathematically, the Klein-Gordon equation is a partial differential equation, whereas Newton’s laws form ordinary differential equations. A typical physical interpretation is that the state space of a quantum particle is infinite dimensional, whereas the phase space of classical physics is finite dimensional.
  • Note that physically the Klein-Gordon equation was designed to model a relativistic particle, while in the previous post we used the non-relativistic Newton’s laws. In some ways it would’ve been better to model the quantum particle using Schroedinger’s equation. I plead here, however, that (a) qualitatively there is not a big difference in terms of the simulated outcomes and (b) it is more convenient for me to use the Klein-Gordon model, as I already have a finite-difference solver for hyperbolic PDEs coded in Python on my computer.

To model a particle, we set the initial data to be a moving wave packet, such that at the initial time the solution is strongly localized and satisfies \partial_t u + \partial_x u = 0. Absent the mass and potential energy terms in the Klein-Gordon equation (so under the evolution of the free wave equation), this wave packet will stay coherent and just translate to the right as time goes along. The addition of the mass term causes some small dispersion, but the mass is chosen small so that this is not a large effect. The main change to the evolution is the potential barrier, which you can see illustrated in the simulation.
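Concretely, one way to build such initial data is a Gaussian envelope times an oscillation (a sketch of my own, with made-up parameter values):

```python
import numpy as np

x = np.linspace(-100.0, 100.0, 4000)
x0, sigma, k0 = -40.0, 4.0, 3.0   # centre, width, central frequency (made up)

# Gaussian envelope times an oscillation: a localized wave packet.
envelope = np.exp(-(x - x0)**2 / (2 * sigma**2))
u0 = envelope * np.cos(k0 * (x - x0))

# Enforce  u_t + u_x = 0  at t = 0, so the packet initially translates
# to the right under the free wave equation.
u0_t = -np.gradient(u0, x)
```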

The video shows 8 runs of the simulation with different initial data. Whereas in the classical picture the initial kinetic energy is captured by the initial speed at which the particle is moving, in the quantum/wave picture the kinetic energy is related to the central frequency of the wave packet. So each of the 8 runs has an increasing frequency offset, representing increasing kinetic energy. The simulation has two plots: the top shows the square of the solution itself, which gives a good indication of where physically the wave packet is located; the bottom shows a normalized kinetic energy density (I have to include a normalization since the kinetic energies of the first and last particles differ roughly tenfold).

One notices that in the first two runs, the kinetic energy is sufficiently small that the particle mostly bounces back to the left after hitting the potential.

For the third and fourth runs (frequency shifts \sqrt{2} and \sqrt{3} respectively) we see that while a significant portion of the particle bounces back, a noticeable portion “tunnels through” the barrier: this is caused by a combination of the quantum tunneling phenomenon and the wave packet form of the initial data.

The phenomenon of quantum tunneling manifests in that all non-zero energy waves will penetrate a finite potential barrier a little bit. But the amount of penetration decays to zero as the energy of the wave goes to zero: this is the semiclassical regime. In the semiclassical limit it is known that quantum mechanics converges toward classical mechanics, so in the low-energy limit we expect our particle to behave like a classical particle and bounce off. Conversely, as we increase the energy (frequency) of our wave packet, we expect more of the tunneling to happen.
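To put a formula to this (a textbook non-relativistic estimate, not something computed in this post): for a rectangular barrier of height V_0 and width d, the transmission probability in the tunneling regime E < V_0 behaves, for a thick barrier, like

\displaystyle T \sim e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar},

which indeed decays to zero as the energy E decreases, consistent with the classical bounce-off limit.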

Further, observe that by shaping our data into a wave packet it necessarily contains some high frequency components (due to the Heisenberg uncertainty principle); high frequency, and hence high energy, components do not “see” the potential barrier. Even in the classical picture high energy particles would fly over the potential barrier. So for wave packets there will always be some (perhaps not noticeable due to the resolution of our computation) leakage of energy through the potential barrier. The quantum effect on these high energy waves is that they back-scatter. Whereas the classical high energy particles just fly directly over the barrier, a high energy quantum particle will always leave some part of itself behind the barrier. We see this in the sixth and seventh runs of the simulation, where the particle mostly passes through the barrier, but a noticeable amount bounces off in the opposite direction.

In between, during the fifth run, where the frequency shift is 2, we see that the barrier basically splits the particle in two, sending one half flying to the right and the other half flying to the left. Classically this is the turning point between particles that go over the bump and particles that bounce back, and it would correspond to the case (hard to show numerically!) where a classical particle comes in from afar with just enough energy that it comes to a halt exactly at the top of the potential barrier!

And further increasing the energy after the seventh run, we see in the final run a situation where only a negligible amount of the particle scatters backward with almost all of it passing through the barrier unchanged. One interesting thing to note however is that just like the case of the classical particle, the wave packet appears to “slow down” a tiny bit as it goes over the potential barrier.

Why do Volvox spin?

For today’s High-Energy-Physics/General-Relativity Colloquium, we had a speaker whose research is rather far from the usual topics. Raymond Goldstein of DAMTP gave a talk on the physics of multicellular organisms, with particular focus (since the field is so broad and so new for most of the audience members) on the example of Volvox, a kind of green algae composed of spherical colonies of about 50,000 cells.

One of the very interesting things about them is that, if you look under a microscope (or even a magnifying glass! Each colony is about half a millimeter across, so you can even see them with the naked eye), they spin. (Yes, the Goldstein lab has its own YouTube channel.)

(The video also shows how their motion can be constrained by hydrodynamical bound states formed due to their individual spinning motion.)

Now, we have a pretty good idea of the very basic locomotive mechanism of these organisms. Each colony is formed with an exterior ball of “swimming” cells and some interior balls of “reproducing” cells. The swimming cells each have two flagella pointed outwards into the surrounding fluid; their beating gives rise to the motion of the whole colony. But the strange thing is that they do not swim straight: the colonies tend to travel in one direction, while spinning with their axes aligned with the direction of travel. Why? Isn’t it inefficient to expend extra energy to spin all the time? This was the central question around which the presentation today was built.

Two main results were described in the talk today. The first is a result about how the two flagella of each cell interact. It was observed (some time ago), by direct observation under a microscope, that the two flagella can exhibit three types of interaction. First is complete synchronisation: the two flagella beat in unison, like how a swimmer’s arms move when pulling the breaststroke. This is observed 85% of the time. Then there is “slippage”, where for some reason one flagellum slips out of phase from the other briefly, and recovers after a while. This happens about 10% of the time. And lastly there is a complete lack of synchronisation, where the two flagella beat with different frequencies, about 5% of the time. The original report on this surmised that this difference represents three different “types” of cells: since each observation was short in time, they didn’t observe much in terms of transitions from one type to another. What was discovered more recently is that, in fact, all three behaviours belong to the one single type of cell making up Volvox, and the transitions are stochastic!

Now, why this may be surprising is the following: each flagellum is a mechanical beater with some innate characteristic frequency at which it beats. So in an ideal, linear situation, the two independent flagella should not interact, and there cannot be reinforcement of any type. Now, one may guess that since the two flagella are swimming in water, the hydrodynamics may serve as a medium for the interaction. However, a pure hydrodynamic interaction should lead to something like sympathy, a phenomenon first observed by Huygens. Basically, Huygens put two pendulum clocks on the same wall, and set their pendulums at arbitrary phases relative to each other. After some time, however, he discovered that invariably the clocks settle down to a state where their pendulums are completely out of phase. This “tuning” is attributed to vibrations being passed along the supporting beams of the wall.

(One can do a similar experiment at home with a board, two metronomes, and two soda cans. A dramatic example is shown below.)

But the problem with the synchronisation theory is that it can only explain the completely in-phase swimming that occurs 85% of the time, but not the other 15%. The solution to this problem requires real consideration of the chaos on a molecular level. As it turns out, one of the forces we have so far neglected is the force driving the flagella. This is dependent on the biochemical processes inside the cells. By considering the biochemical noise, which contributes a stochastic forcing on the entire system, one can recover the other 15% of out-of-phase behaviour. (The noise is not thermal, as thermal noise should have much lower amplitude than required to cause the phenomenon.)

The second beautiful result described in the talk was on the spinning of the colonies and its relation to phototaxis, the attraction of the green algae to light. How are the two related? It is a quite magnificent feat of evolution. Now, in this colony of 50,000 cells, there is no central nervous system, so how do the cells coordinate their motion to swim toward the light? You cannot rely on chemical signals, or even hydrodynamical synchronisation, since the physical distance between the cells is typically larger than the cells themselves. The effects of this signalling would be too weak and too slow. It is more reasonable to expect the behaviour to be “crowd sourced” (for viewers of the Ghost in the Shell anime series, a cellular level of “stand alone complex”): each cell is programmed to behave in a certain way, and when taken as a whole, their joint behaviour gives rise to the desired response of the colony as a whole.

Well, at the level of the cell, what can they do? Each cell is equipped with a photosensing organelle. And like the classic Gary Larson cartoon, each cell is really only capable of stimulus-response. Experimentally it was confirmed that each individual cell reacts to light. When a cell that is initially “facing away” from the light source (the light-sensor is direction sensitive) turns to “see” the light, the stimulus shocks the cell into slowing down its flagella’s beating. After a very short while the cell gets used to the light, and the beating resumes in earnest. The reverse change from light to darkness, however, does not cause changes in the beating of the flagella.

And this explains the spinning of the Volvox! Imagine the colony swimming along, minding its own business, when suddenly light hits one side of the colony. The cells on the lit side slow down their flagella beating, and gradually recover as they rotate out of view of the light source. So the net effect of the spinning of the colony is that “new” cells keep being brought into view of the light, receive the shock, slow their flagella, and recover as they “retreat into the night”, only to be shocked again “as the sun rises the next day”. The flagella thus beat more fervently on the dark side of the colony than on the bright side, so, as anyone who has tried swimming one-armed would know, the colony will slowly turn toward the light source.

The best part about this process is that it is self-correcting. As the axis of rotation gets more and more aligned with the light source, more and more of the cells experience an “Alaskan summer” with the “sun” perpetually overhead. These cells are never brought back into darkness, no longer receive the periodic shock that slows their flagella, and so swim equally hard through the entire “day”, therefore no longer contributing to turning. When the spin axis is perfectly aligned with the light source, the entire “northern hemisphere” is perpetually illuminated while the “southern” is not, so until the light source changes direction, the colony will cease to turn and move straight toward the light source.

For this all to work, it requires that the spin rate of the colony be exactly matched to the rate at which the cells recover from the shock of seeing the light. And this is experimentally confirmed. (An interesting question brought up at the end is whether we can use this as a laboratory test for evolution: if we add some syrup or something to the water to make it more viscous, the spin rate will necessarily slow down. Then the original strain of Volvox will not be as effective at swimming toward the light. It would be interesting to see whether, after a few hundred generations, a mutant strain evolves with a slower recovery time from illumination.)

Dominant energy condition versus hyperbolicity

I’ve posted a new paper to arXiv over the weekend. The goal of this paper is to clarify some misconceptions in the literature about the connection between the dominant energy condition of general relativity and “hyperbolicity” and domain of dependence properties of partial differential equations. (If you don’t know what hyperbolicity is, don’t worry. To paraphrase Jacques Hadamard: What a partial differential equation is, is well known. What hyperbolicity is, will be explained.) (This paper also solves a question that has been on my Questions and Answers page for a while.)

Dominant energy condition
Take Einstein’s equation G_{\mu\nu} = 8\pi T_{\mu\nu}. The left hand side is the Einstein tensor, composed from the Ricci tensor, the curvature scalar, and the metric tensor; it captures exactly the geometry of the space-time. The right hand side is the Einstein-Hilbert stress-energy, and captures the matter content of the universe. This equation connects matter to its effect on gravity.

The dominant energy condition is an assumption often made about the tensor T_{\mu\nu}. It requires that given two future-directed, time-like vector fields X,Y, the scalar quantity T_{\mu\nu}X^\mu Y^\nu be non-negative, and in the case where X = Y, vanish only when the tensor itself vanishes identically. The usual interpretation (and in some cases, used as an a priori justification of the condition) is drawn from fluids and elasticity: T_{\mu\nu}X^\mu Y^\nu is read as the flux in the X direction of the energy measured by an observer moving with velocity Y. That the quantity is always non-negative is supposed to reflect the fact that “energy cannot flow faster than the speed of gravity”.
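A standard concrete example (textbook material, not from the paper): for a perfect fluid with energy density \rho, pressure p, and unit time-like velocity field u_\mu, the stress-energy tensor is

\displaystyle T_{\mu\nu} = (\rho + p) u_\mu u_\nu + p\, g_{\mu\nu},

and the dominant energy condition is equivalent to \rho \ge |p|: the energy density dominates the pressure, whether the latter is positive or negative.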

The assumption is a very powerful one when it comes to dealing with the geometry of space-time. Many important theorems in general relativity about the global structure of space-time can be proven under the assumption of dominant energy. Prime among these theorems are probably the Singularity Theorem of Roger Penrose, and the Positive Mass Theorem first proved by Rick Schoen and S-T Yau, and later by Ed Witten using different methods. On the other hand, the condition has relatively little to say about the matter side of the equation. The only well-known result in this direction is a classical theorem of Stephen Hawking, which states the following:

Let \Sigma be a space-like hypersurface in space-time, and let U\subset \Sigma be a region. If T_{\mu\nu} satisfies the dominant energy condition and vanishes on U, then T must also vanish on the space-time region D composed of all points p with the property that any time-like curve emanating from p must intersect U exactly once. D is called the domain of dependence of U.

Roughly speaking, this captures the causality of classical events. If energy cannot flow faster than the speed of gravity by the dominant energy condition, the edge of “pure vacuum” cannot recede faster than the speed of gravity either. However, this is about as much as the dominant energy condition can say: contrary to some suppositions, the dominant energy condition does not guarantee that causality is preserved in an absolute sense: it leaves a certain loophole. We can illustrate this with a bit of science fiction.

Hawking’s theorem can be re-interpreted as follows: a signal cannot penetrate into vacuum at a speed faster than that of gravity. But can we circumvent the theorem by forcing a medium? Imagine an infinitely rigid stick that reaches from the earth to the moon. Then if I push on one end of the stick here on earth, I should be able to poke the moon instantaneously, due to the infinite rigidity of the stick. The usual resolution to this problem is that, in real life, there does not exist an infinitely rigid stick: the stick will be somewhat elastic, and the motion of my push on one end can only propagate at the speed of sound inside the stick, which in general is slower than the speed of gravity. Many people assume that the dominant energy condition rules out the infinitely rigid stick; in the paper I show that this is not the case.

In particular, I show that relativistic models of fluids and elastic matter are perfectly happy to deal with tachyonic particles if one only imposes the dominant energy condition. Then if we were somehow able to fill a tube with tachyonic fluid, the tube could be used for super-luminal information transfer!

Hyperbolicity
In the sense of Hadamard, an evolutionary partial differential equation is said to be hyperbolic (or locally well-posed) if the associated initial value problem satisfies three conditions.

  1. Any reasonable initial condition leads to a reasonable solution of the equations. (Reasonableness is a fairly relaxed, mathematical condition; it has little to do with whether the initial conditions are reasonable physically.)
  2. The reasonable solution is unique: the same initial condition cannot lead to two different futures.
  3. The solution is stable with respect to small errors. Two sets of initial data that differ by a tiny bit will, for some period of time (whose length grows as the difference shrinks), give rise to similar solutions. This of course cannot always be true in the long run (think chaos theory), but the fact that it holds for short periods of time is what allows us to, say, predict the weather for the next 5 days: small errors in our measurements and calculations will only propagate and lead to small errors in next week’s predictions. But those errors compound exponentially upon themselves if you try to make a prediction for a month.

One may say that Hadamard’s notion of hyperbolicity is what defines hyperbolic partial differential equations as the most useful type for physics.

There are various mathematically precise ways of checking whether an equation is hyperbolic. (Almost all of these give sufficient conditions for hyperbolicity, but not necessary ones. So an equation failing the “test” imposed by one method may yet be hyperbolic, just sufficiently degenerate that the “test” used cannot discern it.) In the above paper I use the notion of regular hyperbolicity as described by Demetrios Christodoulou in his book The Action Principle and Partial Differential Equations. (I give a self-contained summary of the framework in the paper.)

It turns out that a sufficient condition for the hyperbolicity of a system of PDEs is, roughly speaking, a hierarchy of energy estimates. For linear equations, the hierarchy only needs one level. For non-linear equations, we need higher levels to control higher derivatives, which in turn, by the Sobolev embedding theorem, allow us to control the non-linear terms. As it turns out, the dominant energy condition is, roughly speaking, equivalent to the existence of the first level of this hierarchy of energy estimates. For linear equations, then, the dominant energy condition is sufficient to guarantee local well-posedness. Slightly less obvious is the fact that the same is true for semi-linear systems. The key idea is that the dominant energy condition is a condition on the form of the equation: it doesn’t really matter whether what you “plug into” the stress-energy tensor is actually a solution. For semi-linear equations, this becomes a strong condition on the coefficients of the equations: the freedom in plugging in arbitrary data strongly constrains the coefficients. And since those coefficients also play a role in the higher order energies, we get the control we desired. So in this case, the dominant energy condition is actually a lot stronger than one may naively expect.
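For the linear wave equation with potential considered earlier on this page, the first (and, for a linear equation, only needed) level of the hierarchy is the familiar conserved energy (a standard computation, assuming V \ge 0 is time-independent and the solution decays at infinity):

\displaystyle E(t) = \frac{1}{2}\int (\partial_t u)^2 + (\partial_x u)^2 + V u^2 \, dx, \qquad \frac{d}{dt}E(t) = 0,

where the conservation follows by differentiating under the integral, integrating by parts, and using the equation.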

For quasi-linear equations, however, there is a problem. Since the coefficients now depend strongly on the data you plug in, there is a chance that the condition holds only through a conspiratorial, miraculous cancellation! This can cause problems when we consider the linearised problem to compute the higher energy estimates. Another way to think about it is to separate the input for an equation into two parts. The first part is to give some data, from which we derive the coefficients. The second part is to evaluate the energy for the data set using the (derived) coefficients. The semi-linear case corresponds to the first step being trivial: regardless of what data we input, we always get the same coefficients back. In the quasi-linear case, however, the dominant energy condition is a condition on the diagonal part of this process, the case when the data you input for the first and second steps are the same. In general, however, to establish the hierarchy of energy estimates, we need to consider cases where the data input for the first and second steps are different (this is to guarantee stability, the third of Hadamard’s conditions). Think about a square matrix M. Suppose we know all the diagonal elements of M are positive. Then if we know M is a diagonal matrix, we have that M must be positive definite (this is the semi-linear case). But in general, the off-diagonal terms can be so bad that M is in fact indefinite (this is the bad quasi-linear case).
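The matrix analogy is easy to make concrete (a toy numerical illustration of my own):

```python
import numpy as np

# A symmetric matrix with positive diagonal entries that is nevertheless
# indefinite once the off-diagonal entries are large enough.
M = np.array([[1.0, 5.0],
              [5.0, 1.0]])
print(np.linalg.eigvalsh(M))  # [-4.  6.]: one negative eigenvalue
```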

In the paper I make explicit this difference between the dominant energy condition and hyperbolicity. In particular, I show that the Skyrme model always obeys the dominant energy condition, but there are cases where it fails to be hyperbolic. In hindsight, this is perhaps not too surprising, as one can treat the Skyrme model as a model of relativistic elasticity. The hyperbolicity break-down corresponds to a strongly tachyonic regime, which for fluids (another, different, sub-case of elasticity) is also well-known to be non-hyperbolic.

How to ballast your ship or why my kitchen sink is always dirty

Recently I’ve been reading a bit of fluid dynamics, and came across the classical problems of free and forced vortices. In this post I will first discuss some mathematics leading to a computation of the free surface for the two types of vortices, and finish with a discussion of their implications.

Free vortex
A free vortex is seen in nature as either the bathtub drain or the maelstrom. A physical model of this is to start with a circular tub with a drain in the centre. To establish a steady state, we refill the tub at the same rate it drains. The water is refilled near the rim of the tub, where it enters tangentially to the tub with fixed angular velocity \omega_0. The fluid is assumed to be incompressible and inviscid (a good approximation for water) and free of vorticity (the water rotates on a global level, but there is a topological obstruction at the axis, so the flow is allowed to be locally free of vorticity). We also assume that the draining rate is slow, so that the fluid motion is dominated by rotation around the drain.

Under these conditions, the fluid can be modelled by the irrotational Euler equation

\displaystyle \frac{\partial u}{\partial t} + \nabla \left( \frac{p}{\rho} + gz + \frac{1}{2} u\cdot u\right) = 0

where u is the fluid velocity field, p is the pressure, \rho is the (constant) density, g is the acceleration due to gravity. We work in a cylindrical coordinate system (r,\theta,z) where the z axis is centred with respect to the tub, whose rim is r = R.

Now, by the slow-draining assumption, u\cdot u is given by the square norm of the azimuthal component of the velocity. This we can determine by conservation of angular momentum: the particles come in at the rim with angular momentum \omega_0 R^2, so u\cdot u = \frac{\omega_0^2 R^4}{r^2}. In the steady state \partial_t u = 0, so in the end we need to have

\displaystyle \frac{1}{\rho}\frac{\partial p}{\partial z} = -g
and
\displaystyle \frac{1}{\rho}\frac{\partial p}{\partial r} = \frac{\omega_0^2R^4}{r^3}

Now, let the fluid surface be given by z = f(r). At the free surface the pressure is fixed (to be the ambient atmospheric pressure), so \frac{d}{dr} p(r,f(r)) = 0. From the chain rule this implies

\displaystyle \frac{\omega_0^2R^4}{r^3} - g f'(r) = 0

which shows that the fluid depth dips by

\displaystyle f(r) = f(R) + \frac{\omega_0^2R^2}{2g} - \frac{\omega_0^2R^4}{2g r^2}

(where f(r) drops below zero is, presumably, inside the drain hole).

Forced vortex
For the second problem, that of forced rotation, I’d like to present a slightly different method of computation. The physical model of this problem is to set a bucket of water spinning around the central axis of the bucket. Now, for an idealized perfect fluid, the lack of interaction between concentric layers means that any work done spinning the bucket will not be transferred to the fluid. However, we are interested in the steady state problem. In the steady state the fluid co-rotates with the bucket, and each concentric layer has the same angular velocity (so there is non-vanishing vorticity and the framework of the previous problem cannot be used), hence there is no shear. In the regime where there is no shear, viscosity plays no role. So after the fluid has settled down to a steady state, we can analyse it as if it were incompressible and inviscid.

For this problem in particular, we can use the action principle. In the steady state, the total kinetic energy of the fluid is given by

\displaystyle T = \int_0^R \pi \rho f(r) r^3 \omega_0^2 dr

where the factor of \pi enters from the suppressed azimuthal integral, and the computation comes from the particulate \frac{1}{2}mv^2 expression for kinetic energy. f(r) is, as before, the height of the fluid. The potential energy is gravitational

\displaystyle V = \int_0^R \pi \rho f(r)^2 g r dr

The incompressibility gives the constraint that \int f(r) r dr is a fixed constant. The action is S[f] = T - V. To compute the variation, observe that the volume constraint implies \int \delta f(r) r dr = 0, and so by the fundamental theorem of calculus, the admissible perturbations are those of the form

\displaystyle \delta f(r) = \frac{1}{r} \frac{d}{dr} h(r)
where
h(R) = h(0) = 0

Thus we can compute \delta S[f] and integrate by parts. Explicitly (a worked intermediate step, with h as above):
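\displaystyle \delta S = \int_0^R \pi\rho \left( r^2\omega_0^2 - 2 g f(r) \right) \frac{dh}{dr}\, dr = -\int_0^R \pi\rho\, \frac{d}{dr}\left( r^2\omega_0^2 - 2 g f(r) \right) h(r)\, dr,

using h(0) = h(R) = 0 in the integration by parts. Since this must vanish for every admissible h, we obtain the Euler-Lagrange equation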

\displaystyle \frac{d}{dr}\left(r^2\omega_0^2 - 2f(r) g\right) = 0

which leads to the paraboloid solution

\displaystyle f(r) = f(0) + \frac{r^2\omega_0^2}{2g}

(I have a bit of fondness for this problem as it appeared on the 2001 International Physics Olympiad’s experimental section. We were then to use the parabolic surface of the spinning bucket of gel as a focusing mirror.)
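For a side-by-side comparison of the two free-surface profiles, here is a minimal Matplotlib sketch (my own, with made-up values of g, R, \omega_0; each profile is plotted relative to its own reference height):

```python
import numpy as np
import matplotlib.pyplot as plt

g, R, omega0 = 9.8, 1.0, 2.0          # made-up values for illustration
r = np.linspace(0.05, R, 400)         # avoid r = 0 for the free vortex

free = omega0**2 * R**2 / (2 * g) - omega0**2 * R**4 / (2 * g * r**2)  # f(r) - f(R)
forced = omega0**2 * r**2 / (2 * g)                                    # f(r) - f(0)

plt.plot(r, free, label="free vortex (maelstrom)")
plt.plot(r, forced, label="forced vortex (stirred mug)")
plt.xlabel("r")
plt.ylabel("relative surface height")
plt.legend()
plt.show()
```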

How to ballast your boat
So far the discussion is fairly well-known. Now let us consider a floating body on the vortex. We’ll assume that the angular motion of the floating body is driven by the fluid’s rotation, so that at every point of its journey its angular speed is equal to that of the fluid.

We try to draw a force diagram for the object. It experiences the force of gravity and the centrifugal force at its centre of mass. The force of gravity is mg and the centrifugal force is mv^2 / r: in the case of a maelstrom, it is m\omega_0^2 R^4 / r^3, and in the case of the forced vortex it is m\omega_0^2 r.

On the other hand, it also experiences a force due to the pressure of the water, acting at its centre of buoyancy. By Archimedes’ principle, this force is equal and opposite to the sum of the gravitational force and the centrifugal force on the water displaced. We can assume that the total net force normal to the water surface is zero, and we examine the radial drift of the floating body.

If the radial position of the centres of mass and of buoyancy are equal, then the total force on the object must be zero, so the object stays with the current and rotates around the vortex. This is the case of the ship being perfectly ballasted.

If the centre of mass is shallower than the centre of buoyancy (say, for a beach ball floating on water, or any uniform, lighter-than-water body), then the radial position r_m of the centre of mass satisfies r_m \le r_b, the radial position of the centre of buoyancy. In a forced vortex, this means that the radial force of buoyancy provided by the water pressure, m\omega_0^2 r_b, is at least m\omega_0^2 r_m, the centripetal force required to keep the object on that circular trajectory, so the under-ballasted ship will drift toward the centre of a forced vortex. This should be familiar to anyone who has stirred powdered milk or made hot chocolate: when the mug or pot is stirred, the floating clumps of powder congregate toward the centre. On the other hand, in a free vortex, the radial force of buoyancy is m\omega_0^2 R^4 / r_b^3 \le m\omega_0^2 R^4 / r_m^3, the requisite centripetal force, so objects will drift away from the vortex! This explains why, after washing dishes and pulling the drain-stop, the floating food bits tend not to go down the drain but end up stuck to the walls of the kitchen sink.

We can also consider over-ballasted ships. This tends to be the norm, to ensure stability for travel at sea, especially in the face of changing barometric pressures. Now the centre of mass is deeper than the centre of buoyancy, r_m \ge r_b, and we see that the conclusions are reversed! In a forced vortex the over-ballasted ship will be turned away from the vortex, whereas in a maelstrom the over-ballasted ship will be sucked in!
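The four cases can be summarised in a few lines of code (a toy encoding of the comparisons above; the function and its values are my own):

```python
def radial_drift(vortex, r_m, r_b, omega0=1.0, R=1.0):
    """Compare the radial buoyancy force (evaluated at the centre of
    buoyancy) with the centripetal force required at the centre of mass."""
    if vortex == "forced":
        supplied, required = omega0**2 * r_b, omega0**2 * r_m
    else:  # free vortex / maelstrom
        supplied, required = omega0**2 * R**4 / r_b**3, omega0**2 * R**4 / r_m**3
    return "toward centre" if supplied > required else "away from centre"

# Under-ballasted (r_m < r_b): drawn in by a forced vortex,
# flung out by a maelstrom; over-ballasting reverses both.
print(radial_drift("forced", r_m=0.9, r_b=1.0))  # toward centre
print(radial_drift("free",   r_m=0.9, r_b=1.0))  # away from centre
```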

So the moral of the story is: if your ship is trapped in a maelstrom, throw all the ballast overboard, stand up straight holding your arms over your head, and hope for the best.

Skyrmions are narcissistic

Gary Gibbons, Claude Warnick and I just announced a new paper, in which we prove that Skyrmions are narcissistic (this colourful name is due to Gary). Basically what we have demonstrated is a version of the rule well-known to tots, that opposites attract and likes repel. (In this case Skyrmions carry parity: a Skyrmion’s reflection in a mirror becomes its anti-particle.) A more precise version of the statement is that “finite energy solutions to the Skyrme model in Minkowski space that are symmetric or anti-symmetric across a mirror plane cannot remain in static equilibria.”

Physically the intuition is simple: when something remains in static equilibrium, the total net force across any plane must vanish. This is just Newton’s second law, where the action of a net force would produce an acceleration. So to show that such static equilibria cannot exist, it suffices to show that any configuration of these kinds must have a net force across the mirror surface.

For linear theories (for which the particles obey the superposition principle), the situation is simple. The linearity essentially implies that there cannot be internal structure to the particles which can provide a counter-balance to the interactions between the particles. A simple way to look at this is to hold up one’s pinky and drop a bead of water on it. The top of your finger should be slightly curved, so part of the bead of water is sitting on a slope. But it doesn’t flow downhill! Why? This is because water has internal structure (hydrogen bonds, van der Waals forces, surface tension, etc.) that holds the small bead together. When the bead of water is small enough, the internal energy is enough to overcome gravity on a gentle slope and prevent the drop from breaking up and flowing off. Now if you take some other liquid, say rubbing alcohol, with much less surface tension, and you take the same volume of liquid and try to bead it on your pinky, you’d find it much more difficult.

In the linear theory, without the internal structure, each infinitesimal volume acts independently of the other infinitesimal volumes. So for a macroscopic configuration to remain in equilibrium, it is necessary that the potential energy be constant everywhere. And for most field theories this implies that the only finite energy solution must have zero energy, and thus the solution itself is trivial. (This is a reflection of the fact that in linear theories one typically does not expect the existence of solitons.)

In nonlinear theory, however, the fields can have internal structure. From these internal attractions comes the possibility of solitary states, solutions which are concentrated spatially. A striking example is the phenomenon of tidal bores. For small amplitude surface waves on water, the equation of motion is well approximated by the linear wave equation; hence we see the waves disperse as they propagate outwards in rings. For larger amplitude water waves in a narrow and shallow channel, however, the equation of motion is better described by the Korteweg-de Vries equation, whose nonlinearity better models the internal structure of the wave. The tidal bores observed in nature are reproducible theoretically as soliton solutions to the KdV equation.

Now, the soliton solutions to KdV are necessarily traveling waves. However, for other equations that are used to model nature, soliton solutions can be stationary or even static. Some examples are given by the focusing nonlinear Schroedinger equation, the focusing semi-linear wave equation, the Yang-Mills instantons, and, in the particular case we are considering, the Skyrmions. The Skyrmions are used in nuclear physics to model baryons. Their equation of motion also has a nonlinearity that captures the presence of internal structure. The question we are interested in then is whether two such baryons can remain in static equilibrium.

Now, in the case of gravity, two astronomical bodies cannot remain in static equilibrium: this is because gravity is a purely attractive force. But the interaction of Skyrmions, like the interaction of magnets, can be either attractive or repulsive. So one may try to look for situations where the attractive force exactly counterbalances the repulsive force. In the case of the Maxwell theory of electromagnetism, because the theory is purely linear, the only way for two bodies to have exactly counter-balancing forces is for them to be without electromagnetic charge. (Whenever there is a potential gradient the charges will flow to even out the electromagnetic potential.) In the case of a non-linear theory like the Skyrme model, it is possible for an extended body to have “positively charged” and “negatively charged” regions that are held apart by the internal structure and do not immediately cancel each other out, unlike the case for classical electromagnetism. Then it is conceivable that certain configurations can exist in which two such extended bodies have their various regions aligned just right so that the net force between them is zero.

And such configurations do exist under the name of Sphalerons.

What we prove in this paper is that, for types of arrangements unlike that of the sphalerons, we can show mathematically rigorously that the two bodies cannot be kept in equilibrium.

The trick is one about symmetries. For a scalar valued function, there are basically two symmetry types available under reflection across a plane: even or odd. For a vector valued function, however, there are more allowed symmetries. A symmetry compatible with the reflection across a plane is just any symmetry that, when performed twice, recovers the identity (what we call an action of \mathbb{Z}_2). If a function takes vector values, then besides the simple symmetries like the identity x\to x and the complete negation x\to -x, we can also have reflections across vector subspaces in the target. The sphaleron solution is exactly one such: it has a symmetry that is a nontrivial reflection in the target.
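Concretely (my notation, not the paper’s): writing R: (x,y,z) \to (-x,y,z) for the reflection across the mirror plane and \phi for the field, the symmetry types just described are

\displaystyle \phi \circ R = \phi \quad \text{(identity)}, \qquad \phi \circ R = -\phi \quad \text{(total negation)}, \qquad \phi \circ R = S \circ \phi,

where S is a nontrivial reflection of the target space; each choice squares to the identity, as required for an action of \mathbb{Z}_2.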

In our paper, we show that the two simplest symmetry types (identity and total negation) cannot lead to static equilibria. The proof essentially boils down to a statement about the internal structure of the Skyrmions: for these types of symmetries, the complicated “region-by-region” interactions that might allow two bodies to remain in equilibrium in fact completely cancel each other. So for these types of symmetries the interaction between two bodies is dictated only by the “total charge” of each of them. And thus we again have that opposites attract…

And now for the mathematics. What we exploit in the Skyrme model is a manifestation of the dominant energy condition: we consider the internal stress of the solution. In general, the stress can take arbitrary sign, as long as it averages out to zero so there is no bulk motion. But by imposing a symmetry condition, we require that on the mirror surface the solution has either a Dirichlet (negation symmetry) or Neumann (identity symmetry) boundary. The Skyrme model, along with some other Lagrangian field theories, has the property that on such boundaries the stress has a sign (positive if Dirichlet; negative if Neumann). Now, since there is no bulk motion in a static solution, the total stress across the mirror surface cannot be anything but zero. This in fact forces the solution to have both a Dirichlet and a Neumann boundary condition. Using certain properties associated to the ellipticity of the static solution (either the maximum principle or strong unique continuation), we can then conclude that the solution must vanish everywhere.

The “Hoop Conjecture” of Kip Thorne and Spherically Symmetric Space-times

Abstract. (This being a rather long post, I feel the need to write one.) In the post I first gather some miscellaneous thoughts on what the hoop conjecture is and why it is difficult to prove in general. After this motivation, I show also how the statement becomes much easier to state and prove in spherical symmetry: the entire argument collapses to an exercise in ordinary differential equations. In particular, I demonstrate a theorem that is analogous to, yet slightly different from, a recent result of Markus Khuri, using much simpler machinery.

The Hoop conjecture is a proposed criterion for when a black-hole will form under gravitational collapse. Kip Thorne, in 1972 [see Thorne, Nonspherical Gravitational Collapse: a Short Review in Magic without Magic] made the conjecture that (I paraphrase here)

Horizons form when and only when a mass M gets compacted into a region whose circumference C in EVERY direction is bounded by C \lesssim M.

This conjecture, now widely known under the name of the “Hoop conjecture”, is deliberately vague. (This seems to have been the trend in physics, especially in general relativity: conjectures are often stated in such a way that half the effort spent in proving them goes into finding the correct formulation of the statement itself.)

Newton-Cartan part 3: gravitating particles

As a simple example of a physical theory on a Galilean manifold, let us consider the physics of a collection of massive particles that do not interact except for their gravitational interaction. In other words, let us consider a collisionless kinetic theory coupled to Newtonian gravity.

Vlasov system
The Vlasov system is a transport equation describing the free flow of collisionless particles. Let (M,\nabla) be a manifold with an affine connection that represents the spacetime. We postulate Newton’s first law:

Physical assumption 1
The motion of a free particle is geodesic.

Therefore the motion of a free particle is described as follows: let \tau denote the proper time experienced by the particle, and \gamma(\tau) the world-line of the particle (its spacetime trajectory) parametrized by \tau; then we have the system of equations.

Equation 2
(\frac{d}{d\tau}\gamma)(\tau) = V\circ\gamma(\tau) \in T_{\gamma(\tau)}M \subset TM
and
\frac{d}{d\tau}(V\circ\gamma) (= \frac{d^2}{d\tau^2}\gamma) = \nabla_VV = 0


Newton-Cartan, Part 2

After writing up the previous post on Newton-Cartan theory, I came to realize that it is actually a very nice exercise for myself to dig into the geometry more. So here goes a bit more on the implications of the Galilean geometry and the Newton-Cartan theory.

Newton-Cartan Gravity

A few weeks ago I discussed Einstein-Cartan geometry, with a focus on relaxing the “torsion-free” condition on a Levi-Civita connection. In this post I will talk about the Newton-Cartan theory of gravity, which is in some sense the Newtonian limit of general relativity, and which relaxes the metric-compatibility of a Levi-Civita connection (while keeping the torsion-free condition).

Newtonian gravity and the naive formulation
(Note: the material in this section is re-hashed from Section 12.1 of Misner, Thorne, and Wheeler’s black-covered bible.)

Consider first the Newtonian theory of gravity. The space-time is \mathbb{R}^1\times\mathbb{R}^3 with Galilean symmetry, and gravitational interaction is represented by the gravitational potential \Phi(t,x). In Newtonian theory, the gravitational field is given by minus the gradient of the potential, \vec{F}_G = - \vec{\nabla} \Phi (I will put arrows over symbols to denote that they are three-dimensional vectors, and the derivative symbols should be interpreted in the sense of three-dimensional vector calculus). The force on a particle is given by the product of the gravitational field and the gravitational charge of the particle, m_G\vec{F}_G. By Newton’s second law, the force is also equal to the product of the inertial mass and the acceleration of the particle, m_I \vec{a}. Now, by the principle of equivalence (or the observation that the gravitational charge is equal to the inertial mass), we have that the gravitational field is equal to the acceleration of the particle.

Now consider a particle traveling in the gravitational field in free fall. Write its trajectory in \mathbb{R}^3 as \vec{\xi}(t) = (\xi_1(t), \xi_2(t), \xi_3(t)). Lifting to the space-time, the world line of the particle is given by (t,\vec{\xi}(t)). (For people familiar with General Relativity already: in GR the world-line is usually given as a unit-speed geodesic. Under the 3+1 split in Newtonian theory, “proper time” is not defined, so the natural parametrization is by the global/invariant time.) The velocity vector in the space-time is (1,\dot{\vec{\xi}}(t)) and the acceleration vector is (0,\ddot{\vec{\xi}}(t)). The Newtonian equation of motion is then described by

Equation 1
\displaystyle \frac{d^2}{dt^2} (t, \vec{\xi}(t)) + (0, \vec{\nabla}\Phi(t, \vec{\xi}(t))) = 0
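Equation 1 is a system of ODEs and is straightforward to integrate numerically. Here is a minimal sketch (my own illustration, with a made-up example potential \Phi = -1/|\vec{x}|, not anything from the post):

```python
import numpy as np

def grad_phi(xi):
    # Example potential Phi = -1/|x| (made up), so grad Phi = x / |x|^3.
    r = np.linalg.norm(xi)
    return xi / r**3

def free_fall(xi0, v0, dt=1e-3, steps=10000):
    """Integrate  xi'' = -grad Phi(xi)  with the velocity-Verlet scheme."""
    xi = np.asarray(xi0, dtype=float)
    v = np.asarray(v0, dtype=float)
    a = -grad_phi(xi)
    traj = [xi.copy()]
    for _ in range(steps):
        xi = xi + dt * v + 0.5 * dt**2 * a
        a_new = -grad_phi(xi)
        v = v + 0.5 * dt * (a + a_new)
        a = a_new
        traj.append(xi.copy())
    return np.array(traj)

orbit = free_fall([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # roughly a circular orbit
```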

Now, observe that if we do an affine change of variables t \to t(s) (affine meaning here that d^2t/ds^2 = 0), then the chain rule gives \frac{d}{ds} = \frac{dt}{ds}\frac{d}{dt}, and (writing, by abuse of notation, \vec{\xi}(s) = \vec{\xi}(t(s))) Equation 1 becomes

Equation 1′
\displaystyle \frac{d^2}{ds^2}\left(t(s), \vec{\xi}(s)\right) + \left(0, \vec{\nabla}\Phi(t(s),\vec{\xi}(s))\right) \left(\frac{dt}{ds}\right)^2 = 0
