Bubbles Bad; Ripples Good

… Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire et vice versa … (“Given an equation involving any number of fluent quantities, to find the fluxions, and vice versa.”)

Category: Maths

Extensions of (co)vector fields to tangent bundles

I am reading Sasaki’s original paper on the construction of the Sasaki metric (a canonical Riemannian metric on the tangent bundle of a Riemannian manifold), and the following took me way too long to understand. So I’ll write it down in case I forget in the future.

In section two of the paper, Sasaki considers “extended transformations and extended tensors”. Basically he wanted to give a way to “lift” tensor fields from a manifold to tensor fields of the same rank on its tangent bundle. He did so in the language of coordinate changes, whose geometric content is a bit hard to parse. I’ll discuss his construction in a bit. But first I’ll talk about something different.

The trivial lifts
Let M, N be smooth manifolds, and let f:M\to N be a submersion. Then we can trivially lift covariant objects on N to equivalent objects on M by the pull-back operation. To define the pull-back, we start with a covariant tensor field \vartheta \in \Gamma T^0_kN, and define f^*\vartheta \in \Gamma T^0_kM by the formula:

\displaystyle f^*\vartheta(X_1,\ldots,X_k) = \vartheta(df\circ X_1, \ldots, df\circ X_k)

where X_1, \ldots, X_k \in T_pM, and we use that df(p): T_pM \to T_{f(p)}N. Observe that for a function g: N \to \mathbb{R}, the pull-back is simply the composition f^*g = g\circ f :M\to N\to\mathbb{R}.
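
(As a concrete sanity check, here is a small symbolic computation of the pull-back in coordinates: the map f and the function g are made-up illustrative choices, and the computation also verifies the naturality property d(f^*g) = f^*(dg). Note that the covariant pull-back does not even need f to be a submersion.)

```python
import sympy as sp

# A toy check of the pull-back in coordinates, for an illustrative map
# f: R^3 -> R^2, f(x, y, z) = (x + z, y*z), and a scalar g on R^2.
x, y, z, p, q = sp.symbols('x y z p q')
g = sp.sin(p) * q                      # scalar function on N = R^2
f = sp.Matrix([x + z, y * z])          # the map f: M -> N

# Pull-back of the function: f^* g = g o f
g_pullback = g.subs({p: f[0], q: f[1]})

# Pull-back of the one-form dg: (f^* dg)_i = sum_j (d_j g o f) * d f_j / d x_i
dg = sp.Matrix([sp.diff(g, p), sp.diff(g, q)])   # components of dg on N
J = f.jacobian(sp.Matrix([x, y, z]))             # df in coordinates
dg_pullback = J.T * dg.subs({p: f[0], q: f[1]})

# Naturality: d(f^* g) = f^*(dg)
d_of_pullback = sp.Matrix([sp.diff(g_pullback, s) for s in (x, y, z)])
print(sp.simplify(d_of_pullback - dg_pullback))  # zero vector
```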

On the other hand, for contravariant tensor fields, the pull-back is not uniquely defined: using that f is a submersion, we have that TM / \ker(df) = TN, so while, given a vector field v on N, we can always find a vector field w on M such that df(w) = v, the vector field w is only unique up to the addition of a vector field lying in the kernel of df. If, however, M is Riemannian, then we can take the orthogonal decomposition of TM into the kernel and its complement, thereby getting a well-defined lift of the vector field (in other words, by exploiting the identification between the tangent and cotangent spaces).

Remarkably, the extension defined by Sasaki is not this one.

(Let me just add a remark here: given two manifolds, once one obtains a well-defined way of lifting vectors, covectors, and functions from one to the other, such that they are compatible (\vartheta^*(v^*) = [\vartheta(v)]^*), one can extend this mapping to arbitrary tensor fields.)

The extensions defined by Sasaki
As seen above, if we just rely on the canonical submersion \pi:TM\to M, we cannot generally extend vector fields. Sasaki’s construction, however, strongly exploits the fact that TM is the tangent bundle of M.

We start by looking at the vector field extension defined by equation (2.6) of the linked paper. We first observe that a vector field v on a manifold M is a section of the tangent bundle. That is, v is a map M\to TM such that the composition with the canonical projection \pi\circ v:M\to M is the identity map. This implies, using the chain rule, that the map d(\pi\circ v)= d\pi \circ dv: TM\to TM is also the identity map. Now, d\pi: T(TM) \to TM is the projection induced by the projection map \pi, which is different from the canonical projection \pi_2: T(TM) \to TM from the tangent bundle of a manifold (here TM) to the manifold itself. However, a proposition of Kobayashi (see “Theory of Connections” (1957), Proposition 1.4) shows that there exists an automorphism \alpha:T(TM) \to T(TM) such that d\pi \circ \alpha = \pi_2 and \pi_2\circ\alpha = d\pi. So v, as a differentiable mapping, induces a map \alpha\circ dv: TM \to T(TM) from the tangent bundle TM to the double tangent bundle T(TM), which when composed with the canonical projection \pi_2 is the identity. In other words, \alpha\circ dv is a vector field on TM.
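
(In induced coordinates this is easy to make concrete. If x^i are coordinates on M and (x^i, \xi^i) the induced coordinates on TM, then a point of T(TM) has coordinates (x^i, \xi^i; \dot x^i, \dot\xi^i), with d\pi(x,\xi,\dot x,\dot\xi) = (x,\dot x) and \pi_2(x,\xi,\dot x,\dot\xi) = (x,\xi). Kobayashi’s automorphism is then, presumably, what is nowadays called the canonical flip

\displaystyle \alpha(x^i, \xi^i, \dot x^i, \dot\xi^i) = (x^i, \dot x^i, \xi^i, \dot\xi^i)

which visibly swaps d\pi and \pi_2.)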

Next we look at the definition (2.7) for one-forms. Given a one-form \vartheta on M, it naturally induces a scalar function on TM: for p\in M, v\in T_pM, we regard \vartheta as the function TM\to \mathbb{R} taking the value \vartheta(p)\cdot v at (p,v). Hence its differential d\vartheta is a one-form over TM.

Now, what about scalar functions? Let \vartheta be a one-form and v a vector field on M, and consider the pairing of their extensions to TM. It is not too hard to check that the scalar field corresponding to \vartheta(v), when evaluated at (p,w)\in TM, is in fact d(\vartheta(v))|_{p,w}, the derivative of the scalar function \vartheta(v) in the direction of w at the point p. In general, the compatible lift of scalar fields g:M\to \mathbb{R} to TM is the function \tilde{g}(p,v) = dg(p)[v].
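
(Here is a quick symbolic sanity check of this compatibility on M = \mathbb{R}, where TM has global coordinates (x,w). The coordinate formulas for the two lifts below are my renditions of the constructions above, specialized to this one-dimensional case; they are not notation from the paper.)

```python
import sympy as sp

x, w = sp.symbols('x w')          # coordinates (x, w) on TM, with M = R
a = sp.Function('a')              # v = a(x) d/dx, a vector field on M
theta = sp.Function('theta')      # vartheta = theta(x) dx, a one-form on M

# Extension of v to TM (in these coordinates): a(x) d/dx + a'(x) w d/dw
vx, vw = a(x), sp.diff(a(x), x) * w

# Extension of vartheta: the differential of (x, w) |-> theta(x) w
iota = theta(x) * w
form = (sp.diff(iota, x), sp.diff(iota, w))   # components in (dx, dw)

# Pairing of the two extensions on TM
pairing = form[0] * vx + form[1] * vw

# Compatible lift of the scalar vartheta(v) = theta*a: g~(x, w) = dg(x)[w]
scalar_lift = sp.diff(theta(x) * a(x), x) * w

print(sp.simplify(pairing - scalar_lift))  # 0, as claimed
```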

Using this we can extend the construction to arbitrary tensor fields, and a simple computation yields that this construction is in fact identical, for rank-2 tensors, to the expressions given in (2.8), (2.9), and (2.10) in the paper.

The second extension
The above extension is not the only map sending vectors on M to vectors on TM. In the statement of Lemma 3 there is another construction. Given a vector field v, it induces a one-parameter family of diffeomorphisms of TM via the maps \psi_t(p,w) = (p, w+tv(p)). Its derivative \frac{d}{dt}\psi_t|_{t=0} is a vector field on TM.
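
(In the induced coordinates (x^i, w^i) on TM used above, this is presumably what is usually called the vertical lift: differentiating \psi_t in t at t=0 gives

\displaystyle \frac{d}{dt}\Big|_{t=0}\psi_t(p,w) = v^i(x)\frac{\partial}{\partial w^i},

which, unlike the first extension, has no horizontal (\partial/\partial x^i) component.)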

The construction in the statement of Lemma 4 is the trivial one mentioned at the start of this post.

Decay of waves IIIb: tails for homogeneous linear equation on curved background

Now we will actually show that the specific decay properties of the linear wave equation on Minkowski space (in particular the strong Huygens’ principle) are very strongly tied to the global geometry of that space-time. In particular, we’ll build, by hand, an example of a space-time where the geometry itself induces back-scattering, so that even linear, homogeneous waves will exhibit a tail.

For convenience, the space-time we construct will be spherically symmetric, and we will only consider spherically symmetric solutions of the wave equation on it. We will also focus on the 1+3 dimensional case.

Decay of waves IIIa: nonlinear tails in Minkowski space redux

Before we move on to the geometric case, I want to flesh out the nonlinear case mentioned at the end of the last post a bit more. Recall that it was shown that for generic nonlinear (actually semilinear; for quasilinear and worse equations we cannot use Duhamel’s principle) wave equations, if we put in compactly supported initial data, we expect the first iterate to exhibit a tail. One may ask whether this is in fact an artifact of the successive approximation scheme; whether somehow a conspiracy always transpires and all the higher-order iterates cancel out the tail coming from the first iterate. This is rather unlikely, owing to the fact that the convergence to \phi_\infty is dominated by a geometric series. But just to make double sure, here we give a nonlinear system of wave equations for which the successive approximation scheme converges after finitely many steps (in fact, after the first iterate), so that we can also explicitly compute the rate of decay of the nonlinear tail. While the decay rate is not claimed to be generic (though it is), the existence of one such example with a fixed decay rate shows that for a statement quantifying over all nonlinear wave equations, it would be impossible to demonstrate a better decay rate than the one exhibited.

Decay of waves IIb: Minkowski space, with right-hand side

In the first half of this second part of the series, we considered solutions to the linear, homogeneous wave equation on flat Minkowski space, and showed that for compactly supported initial data, we have the strong Huygens’ principle. We further remarked that this behaviour is expected to be unstable. In this post, we will illustrate this instability by looking at Equation 1 first with a fixed source F = F(t,x), and then with a nonlinearity F = F(t,x, \phi, \partial\phi).

Duhamel’s Principle

To study how one can incorporate inhomogeneous terms into a linear equation, and to get a qualitative grasp of how the source term contributes to the solution, we need to discuss the abstract method known as Duhamel’s Principle. We start by illustrating this for a very simple ordinary differential equation.

Consider the ODE satisfied by a scalar function \alpha:

Equation 13
\displaystyle \frac{d}{ds}\alpha(s) = k(s)\alpha(s) + \beta(s)

When \beta\equiv 0, we can easily solve the equation with an integrating factor:

\displaystyle \alpha(s) = \alpha(0) e^{\int_0^s k(t) dt}

Using this as a sort of ansatz, we can solve the inhomogeneous equation as follows. For convenience we denote by K(s) = \int_0^s k(t) dt the anti-derivative of k. Then multiplying Equation 13 through by e^{-K(s)}, we have that

Equation 14
\displaystyle \frac{d}{ds} \left( e^{-K(s)}\alpha(s)\right) = e^{-K(s)}\beta(s)

which we solve by integrating

Equation 15
\displaystyle \alpha(s) = e^{K(s)}\alpha(0) + e^{K(s)} \int_0^s e^{-K(t)}\beta(t) dt

If we write K(s;t) = \int_t^s k(u) du, then we can rewrite Equation 15 as given by an integral operator

Equation 15′
\displaystyle \alpha(s) = e^{K(s)}\alpha(0) + \int_0^s e^{K(s;t)}\beta(t) dt
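
(As a quick numerical sanity check of Equation 15′, here is a sketch that compares the Duhamel formula against a direct integration of Equation 13; the coefficients k(s) = \cos s and \beta(s) = s are arbitrary illustrative choices.)

```python
import numpy as np

k = np.cos                 # illustrative choice of k(s)
beta = lambda s: s         # illustrative choice of beta(s)
alpha0 = 1.0

s = np.linspace(0.0, 2.0, 2001)
ds = s[1] - s[0]

# K(s) = int_0^s k(t) dt, by the cumulative trapezoid rule
K = np.concatenate(([0.0], np.cumsum((k(s[1:]) + k(s[:-1])) / 2 * ds)))

# Duhamel: alpha(s) = e^K(s) * (alpha0 + int_0^s e^-K(t) beta(t) dt)
integrand = np.exp(-K) * beta(s)
I = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * ds)))
alpha_duhamel = np.exp(K) * (alpha0 + I)

# Direct integration of Equation 13 by RK4, for comparison
f = lambda t, y: k(t) * y + beta(t)
y = alpha0
for i in range(len(s) - 1):
    t = s[i]
    k1 = f(t, y); k2 = f(t + ds / 2, y + ds / 2 * k1)
    k3 = f(t + ds / 2, y + ds / 2 * k2); k4 = f(t + ds, y + ds * k3)
    y += ds / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(y - alpha_duhamel[-1]))  # ~0, up to discretization error
```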


Decay of waves IIa: Minkowski background, homogeneous case

Now let us get into the mathematics. The wave equations that we will consider take the form

Equation 1
-\partial_t^2 \phi + \triangle \phi = F

where \phi:\mathbb{R}^{1+n}\to\mathbb{R} is a real valued function defined on (1+n)-dimensional Minkowski space that describes our solution, and F represents a “source” term. When F vanishes identically, we say that we are looking at the linear, homogeneous wave equation. When F is itself a function of \phi and its first derivatives, we say that the equation is a semilinear wave equation.

We first start with the homogeneous, linear case.

Homogeneous wave equation in one spatial dimension

One interesting aspect of the wave equation is that it only possesses the second, multidimensional, dispersive mechanism described in my previous post. In physical parlance, the “phase velocity” and the “group velocity” of the wave equation are the same. And therefore, a solution of the wave equation, quite unlike a solution of the Schroedinger equation, will not exhibit decay when there is only one spatial dimension (mathematically this is one significant difference between relativistic and quantum mechanics). In this section we make a computation to demonstrate this, a fact that will also be useful later on when we look at higher (in particular, three) dimensions.

Use x\in\mathbb{R} for the variable representing spatial position. The wave equation can be written as

-\partial_t^2 \phi + \partial_x^2\phi = 0

Now we perform a change of variables: let u = \frac{1}{2}(t-x) and v = \frac{1}{2}(t+x) be the canonical null variables. The change of variable formula replaces

Equation 2
\displaystyle \partial_t \to \frac{\partial u}{\partial t} \partial_u + \frac{\partial v}{\partial t} \partial_v = \frac{1}{2}\partial_u + \frac{1}{2}\partial_v
\displaystyle \partial_x \to \frac{\partial u}{\partial x} \partial_u + \frac{\partial v}{\partial x} \partial_v = -\frac{1}{2}\partial_u + \frac{1}{2}\partial_v

and we get that in the (u,v) coordinate system,

Equation 3
-\partial_u \partial_v \phi = 0
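
(Equation 3 says the wave operator factors as -\partial_u\partial_v in null coordinates, so any \phi = F(u) + G(v) solves the homogeneous equation; here is a short symbolic check of the change of variables.)

```python
import sympy as sp

t, x = sp.symbols('t x')
u = (t - x) / 2            # the canonical null variables
v = (t + x) / 2
F, G = sp.Function('F'), sp.Function('G')

# By Equation 3, any phi = F(u) + G(v) should solve the homogeneous
# wave equation -d_t^2 phi + d_x^2 phi = 0.
phi = F(u) + G(v)
box_phi = -sp.diff(phi, t, 2) + sp.diff(phi, x, 2)
print(sp.simplify(box_phi))   # 0
```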


Decay of waves I: Introduction

In the next week or so, I will compose a series of posts on the heuristics for the decay of solutions of the wave equation on curved (and flat) backgrounds. (I have my fingers crossed that this does not end up aborted like my series of posts on compactness.) In this first post I will give some physical intuition of why waves decay. In the next post I will write about the case of linear and nonlinear waves on flat space-time, which will be used to motivate the construction, in post number three, of an example space-time which gives an upper bound on the best decay that can be generally expected for linear waves on non-flat backgrounds. This last argument, due to Mihalis Dafermos, shows why the heuristic known as Price’s Law is as good as one can reasonably hope for in the linear case. (In the nonlinear case, things immediately get much much worse, as we will see already in the next post.)

This first post will not be too heavily mathematical; indeed, the only real foray into mathematics will be in the appendix. The next ones, however, require some basic familiarity with partial differential equations and pseudo-Riemannian geometry.

Why do Volvox spin?

For today’s High-Energy-Physics/General-Relativity Colloquium, we had a speaker whose research is rather far from the usual topics. Raymond Goldstein of DAMTP gave a talk on the physics of multicellular organisms, with particular focus (since the field is so broad and so new for most of the audience members) on the example of Volvox, a kind of green algae composed of spherical colonies of about 50,000 cells.

One of the very interesting things about them is that, if you look under a microscope (or even a magnifying glass! Each colony is about half a millimeter across, so you can even see them with the naked eye), they spin. (Yes, the Goldstein lab has its own YouTube channel.)

(The video also shows how their motion can be constrained by hydrodynamical bound states formed due to their individual spinning motion.)

Now, we have a pretty good idea of the very basic locomotive mechanism of these organisms. Each colony is formed with an exterior ball of “swimming” cells and some interior balls of “reproducing” cells. The swimming cells each have two flagella pointed outwards into the surrounding fluid. Their beating gives rise to the motion of the whole colony. But the strange thing is that they do not swim straight: the colonies tend to travel in one direction, while spinning with their axes aligned with the direction of travel. Why? Isn’t it inefficient to expend extra energy to spin all the time? This was a central question around which the presentation today was built.

Two main results were described in the talk today. First is a result about how the two flagella of each cell interact. It was observed (some time ago), by direct observation under a microscope, that the two flagella can exhibit three types of interaction. First is complete synchronisation: the two flagella beat in unison, like how a swimmer’s arms move when pulling the breaststroke. This is observed 85% of the time. Then there is “slippage”, where for some reason one flagellum slips out of phase from the other briefly, and recovers after a while. This happens about 10% of the time. And lastly there is a complete lack of synchronisation, where the two flagella beat with different frequencies, about 5% of the time. The original report on this surmised that this difference represents three different “types” of cells: since each observation is short in time, they didn’t observe much in terms of transitions from one type to the other. What was discovered more recently is that, in fact, all three behaviours belong to the one single type of cell making up Volvox, and the transition is stochastic!

Now, why this may be surprising is the following: each flagellum is a mechanical beater and has some innate characteristic frequency at which it beats. So in an ideal, linear situation, the two independent flagella should not interact. And so there cannot be reinforcements of any type. Now, one may guess that since the two flagella are swimming in water, the hydrodynamics may serve as a medium for the interaction. However, a pure hydrodynamic interaction should lead to something like sympathy, a phenomenon first observed by Huygens. Basically, Huygens put two pendulum clocks on the same wall, and set their pendulums to be of arbitrary phase relative to each other. After some time, however, he discovered that invariably the clocks settle down to a state where their pendulums are completely out of phase. This “tuning” is attributed to vibrations being passed along the supporting beams of the wall.

(One can do a similar experiment at home with a board, two metronomes, and two soda cans. A dramatic example is shown below.)

But the problem with the synchronisation theory is that it can only explain the completely in-phase swimming occurring 85% of the time, not the other 15%. The solution to this problem requires real consideration of the chaos on a molecular level. As it turns out, one of the forces we have so far neglected is the force driving the flagella. This is dependent on the biochemical processes inside the cells. By considering the biochemical noise, which contributes a stochastic forcing on the entire system, one can recover the other 15% of out-of-phase behaviour. (The noise is not thermal, as thermal noise should have much lower amplitude than required to cause the phenomenon.)

The second beautiful result described in the talk was on the spinning of the colonies, and its relation to phototaxis, the attraction of the green algae to light. How are the two related? It is a quite magnificent feat of evolution. Now, in this colony of 50,000 cells, there is no central nervous system, so how do the cells coordinate their motion to swim toward the light? You cannot rely on chemical signals, or even hydrodynamical synchronisation, since the physical distance between the cells is typically larger than the cells themselves. The effects of this signalling would be too weak and too slow. It is more reasonable to expect the behaviour to be “crowd sourced” (for viewers of the Ghost in the Shell anime series, a cellular-level “stand alone complex”): each cell is programmed to behave in a certain way, and when taken as a whole, their joint behaviour gives rise to the desired response of the colony as a whole.

Well, at the level of the cell, what can they do? Each cell is equipped with a photosensing organelle. And like the classic Gary Larson cartoon, each cell is really only capable of stimulus-response. Experimentally it was confirmed that each individual cell reacts to light. When a cell is initially “facing away” (the light-sensor is direction sensitive) from the light source, and turns to “see” the light, the stimulus shocks the cell into slowing down the beating of its flagella. After a very short while the cell gets used to the light, and the beating resumes in earnest. The reverse change from light to darkness, however, does not cause changes in the beating of the flagella.

And this explains the spinning of the Volvox! Imagine the colony swimming along, minding its own business, when suddenly light hits one side of the colony. The cells on the lit side slow down their flagellar beating, and gradually recover as they rotate out of view of the light source. So the net effect of the spinning of the colony is that “new” cells keep being brought into view of the light, receive the shock, slow their flagella, and recover as they “retreat into the night”, only to be shocked again “as the sun rises the next day”. So the flagella beat more fervently on the dark side of the colony than on the bright side, and, as anyone who has tried swimming one-armed would know, the colony will slowly turn toward the light source.

The best part about this process is that it is self-correcting. As the axis of rotation gets more and more aligned with the light source, more and more of the cells experience an “Alaskan summer” with the “sun” perpetually overhead. These cells, never brought back into darkness, no longer receive the periodic shock that slows their flagella, and so swim equally hard through the entire “day”, and therefore no longer contribute to turning. When the spin axis is perfectly aligned with the light source, the entire “northern hemisphere” is perpetually illuminated, while the “southern” one is not, so until the light source changes direction, the colony will cease turning and move straight toward the light source.

For this all to work, it requires that the spin rate of the colony be exactly the same as the rate at which the cells recover from the shock of seeing the light. And this is experimentally confirmed. (An interesting question brought up at the end is whether we can use this as a laboratory test for evolution: if we add some syrup or something to the water to make it more viscous, the spin rate will necessarily slow down. Then the original strain of Volvox will not be as effective at swimming toward the light. It would be interesting to see whether, after a few hundred generations, a mutant strain evolves with a slower recovery time from illumination.)

Shock singularities in Burgers’ equation

It is generally well known that partial differential equations that model fluid motion can exhibit “shock waves”. In fact, the subject I will write about today is generally presented as the canonical example for such behaviour in a first course in partial differential equations (while also introducing the method of characteristics). The focus here, however, will not be so much on the formation of shocks, but on the profile of the shock boundary. This discussion tends to be omitted from introductory texts.

Solving Burgers’ equation
First we recall the inviscid Burgers’ equation, a fundamental partial differential equation in the study of fluids. The equation is written

Equation 1. Inviscid Burgers’ equation
\displaystyle \frac{\partial}{\partial t} u  + u \frac{\partial}{\partial x} u = 0

where u = u(t,x) is the “local fluid velocity” at time t and at spatial coordinate x. The solution of the equation is closely related to its derivation: notice that we can re-write the equation as

v \cdot \nabla u = (\partial_t + u \partial_x) u = 0

The question we consider is the initial value problem for the PDE: given some initial velocity configuration u_0(x), we want to find a solution u(t,x) to Burgers’ equation such that u(0,x) = u_0(x).

The traditional way of obtaining a solution is via the method of characteristics. We first observe (1) that the alternate form of the equation above means that if X(t) is a curve tangent to the vector field v = \partial_t + u\partial_x, then u(t,X(t)) is a constant function of the parameter t. (2) Plugging this back in implies that along such a curve X(t), the vector field v = \partial_t + u\partial_x = \partial_t + u_0 \partial_x is constant. (3) A curve whose tangent vector is constant is a straight line. So we have that a solution of Burgers’ equation must verify

u(t, x + u_0(x) \cdot t) = u_0(x)

And we call the family of curves given by X_x(t) = x + u_0(x) \cdot t the characteristic curves of the solution.
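
(A small numerical sketch of this picture: we follow the characteristics for a sample profile, with u_0 = -\tanh an arbitrary illustrative choice, and watch for the first crossing of neighboring characteristics. The answer agrees with the blow-up time derived just below, t^* = -1/\min \partial_x u_0.)

```python
import numpy as np

# Follow the characteristics X_x(t) = x + u0(x) t for a sample initial
# profile (u0 = -tanh is an arbitrary illustrative choice).
u0 = lambda x: -np.tanh(x)
x = np.linspace(-3.0, 3.0, 4001)
u = u0(x)

# Adjacent characteristics x_i + u_i t and x_{i+1} + u_{i+1} t first cross
# at t = -(dx)/(du), finite and positive only where u0 is decreasing.
dx, du = np.diff(x), np.diff(u)
crossing = np.where(du < 0, -dx / du, np.inf)
print(crossing.min())                    # ~1.0: first shock time

# Analytic prediction from the blow-up of W below: t* = -1/min u0'(x)
print(-1.0 / np.gradient(u, x).min())    # ~1.0
```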

To extract more qualitative information about Burgers’ equation, let us take another spatial derivative of the equation, and call the function w = \partial_x u. Then we have

\partial_t w + w^2 + u \partial_x w = 0 \implies v \cdot \nabla w + w^2 = 0

So letting X(t) be a characteristic curve, and write W(t) = w(t, X(t)), we have that along the characteristic curve

\displaystyle \frac{d}{dt}W = - W^2 \implies W(t) = \frac{1}{t+W(0)^{-1}}

So in particular, we see that if W(0) < 0, W(t) must blow up in time t \leq |W(0)|^{-1}.

[Figure: plot of a divergent flow]

So what does this mean? We’ve seen that along characteristic lines, the value of u stays constant. But we’ve also seen that along those lines, the value of its spatial derivative can blow up if the initial slope is negative. Perhaps the best thing to do is to illustrate it with two pictures. In the pictures the thick, red curve is the initial velocity distribution u_0(x), shown with the black line representing the x-axis: so when the curve is above the axis, initially the local fluid velocity is positive, and the fluid is moving to the right. The blue curves are the characteristic lines.

In the first image to the right, we see that the initial velocity distribution is such that the velocity is increasing to the right, and so w(0,x) is always positive. We see that in this situation the flow is divergent, the flow lines getting further and further apart, corresponding to the solution where w(t,x) gets smaller and smaller along a flow line. For the second image here on our left, the situation is different. The initial velocity distribution starts out increasing, then hits a maximum, dips down to a minimum, and finally increases again. In the regions where the velocity distribution is increasing, we see the same “spreading out” behaviour as before, with the flow lines getting further and further apart (especially in the upper left region). But the flow lines originating in the region where the velocity distribution is decreasing get bunched together as time goes on, eventually intersecting! This intersection is what is known as a shock.

From the picture, it becomes clear what the blow-up of W(t) means: suppose the initial velocity distribution is such that for two points x_1 < x_2, we have u_0(x_1) > u_0(x_2). Since the flow line originating from x_1 is moving faster, it will eventually catch up to the flow line originating from x_2. When the two flow lines intersect, we have a problem: if we follow the flow line from x_1, the function u must take the value u_0(x_1) at that point; but if we follow the flow line from x_2, the function must take the value u_0(x_2) there. So we cannot consistently assign a value to the function u at the points of intersection of flow lines in a way that satisfies Burgers’ equation.

Another way of thinking about this difficulty is in terms of particle dynamics. Imagine the line being a highway, and points on it being cars. The dynamics of the traffic flow described by Burgers’ equation is one in which each driver starts at one speed (which can be in reverse), and maintains that speed completely without regard for the cars in front of or behind it. If we start out with a distribution where the leading cars always drive faster than the trailing ones, then the cars will spread further apart as time goes on. But if we start out with a distribution where a car in front is driving slower than a car behind, the second car will eventually catch up and crash into the one in front. And this is the formation of the shock wave.

(Now technically, in this view, once two cars crash their flow-lines should end, and so cars that are in front of the collision and moving forward should not be affected by the collision at all. But if we imagine that instead of real cars, we are driving bumper cars, so that after a collision the car in front maintains speed at the velocity of the car that hit it, while the car in back drives at the velocity of the car it hit [so they swap speeds in an elastic collision], then we have something like the picture plotted above.)

Shock boundary
Having established that shocks can form, we move on to the main discussion of this post: the geometry of the set of shock singularities. We will consider the purely local effects of the shocks, by which we mean that we will ignore the chain reactions described in the parenthetical remark above. Therefore we will assume that at the formation of the shock, the flow-lines terminate and the particles they represent disappear. In other words, we will consider only shocks coming from nearest-neighbor collisions. In this scenario, the time of existence of a characteristic line is precisely governed by the equation on W we derived before: that is, given u_0(x), the characteristic line emanating from x = x_0 will run into the shock precisely at the time t = - \frac{1}{\partial_x u_0(x_0)}. (It will continue indefinitely into the future if the derivative is positive.)

The most well-known image of a shock formation is the image on the right, where we see the classic fan/wedge type shock. (Due to the simplicity of sketching this diagram by hand, this is probably how most people are introduced to this type of diagram, either on a homework set or in class.) What we see here is an illustration of the fact that

If for x_1 < x < x_2, we have \partial^2_{xx} u_0(x) = 0, and \partial_x u_0(x) < 0, then the shock boundary is degenerate: it consists of a single focal point.

To see this analytically: observe that because the blow-up time depends on the first derivative of the initial velocity distribution, for such a set-up the blow-up time t_0 = - (\partial_x u_0)^{-1} is the same for all the points in question. Then we see that the spatial coordinate of the blow-up will be x + u_0(x) t_0. But since u_0(x) is linear in x, we have

\displaystyle x + u_0(x) t_0 = x_1 + (x-x_1) + u_0(x_1)t_0 + \partial_xu_0 \cdot (x - x_1) t_0 = x_1 + u_0(x_1) t_0

is constant. And therefore the shock boundary is degenerate.


Next we consider the case where \partial^2_{xx} u_0 vanishes at some point x_0, but \partial^3_{xxx}u_0(x_0) \neq 0. The two pictures to the right of this paragraph illustrate the typical shock boundary behaviour. On the far right we have the slightly aphysical situation: notice that a particle coming in from the left, before it hits its own shock boundary, first crosses the shock boundary formed by the particles coming in from the right. This is the situation where the third derivative is positive, and the cusp point which corresponds to the shock boundary for x_0 opens to the future. The nearer picture is the situation where the third derivative is negative, with the cusp point opening downwards. Notice that since we are in a neighborhood of a point where the second derivative vanishes, the initial velocity distributions both look almost straight, and it is hard to distinguish from these images the sign of the third derivative. The picture on the far right is based on an arctan type initial distribution, whereas the nearer picture is based on an x^3 type initial distribution.

Let us again analyse the situation more deeply. Near the point x_0, we shall assume that \partial^3_{xxx}u_0 \sim \partial^3_{xxx}u_0(x_0) = C for some constant C. And we will assume, using Galilean transformations, that u_0(x_0) = 0 = x_0. Then letting t_0 = - (\partial_x u_0(x_0))^{-1}, we have

\displaystyle u_0(x) = \frac{C}{6} x^3 - \frac{1}{t_0} x

Thus as a function of x, the blow-up times of flow lines are given by

\displaystyle t(x) = \frac{t_0}{1 - \frac{C}{2}t_0 x^2}

Solving for their blow-up profile y = x + u_0(x) t(x) then gives (after quite a bit of algebraic manipulation)

\displaystyle \frac{ (\frac{t}{t_0} - 1)^3}{t} = \frac{9C}{8} y^2

which can easily be seen to be a cusp: \frac{dy}{dt} = 0 at y=0, t = t_0. And it is clear that the side toward which the cusp opens depends on the sign of the third derivative C.
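
(The “quite a bit of algebraic manipulation” can be delegated to a computer; here is a short symbolic check of the cusp relation.)

```python
import sympy as sp

x = sp.symbols('x')
C, t0 = sp.symbols('C t_0', positive=True)

u0 = C / 6 * x**3 - x / t0        # the cubic model profile above
t = -1 / sp.diff(u0, x)           # blow-up time of the flow line from x
y = x + u0 * t                    # blow-up position

# Verify the cusp relation (t/t0 - 1)^3 / t = (9C/8) y^2
print(sp.simplify((t / t0 - 1)**3 / t - sp.Rational(9, 8) * C * y**2))  # 0
```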

The last bit of computation we will do is for the case D = \partial^2_{xx}u_0(x_0) \neq 0. In this case we can take

\displaystyle u_0(x) = - \frac{1}{t_0}x + \frac{D}{2} x^2

as an approximation. Then the blowup times will be

\displaystyle t(x) = \frac{t_0}{1 - D t_0 x}

which leads to the blowup profile y being [Thanks to Huy for the correction.]

\displaystyle y = -\frac{1}{2Dt} \left( 1 - \frac{t}{t_0}\right)^2

and a direct computation will then lead to the conclusion that in this generic scenario, the shock boundary will be everywhere tangent to the flow-line that ends there.
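
(This last computation can also be checked symbolically: parametrizing the shock boundary as (t(x), y(x)), its slope dy/dt = (dy/dx)/(dt/dx) should agree with the slope u_0(x) of the flow-line ending there.)

```python
import sympy as sp

x = sp.symbols('x')
D, t0 = sp.symbols('D t_0', positive=True)

u0 = -x / t0 + D / 2 * x**2       # the quadratic model profile above
t = -1 / sp.diff(u0, x)           # blow-up time of the flow line from x
y = x + u0 * t                    # blow-up position along the shock boundary

# Slope of the boundary curve (t(x), y(x)) versus slope of the flow-line
slope_boundary = sp.diff(y, x) / sp.diff(t, x)
print(sp.simplify(slope_boundary - u0))   # 0: everywhere tangent
```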

Conway’s Base 13 Function

(N.b. Credit where credit’s due: I learned about this function from an answer of Robin Chapman’s on MathOverflow, and its measurability from Noah Stein.)

Conway’s base 13 function is a strange beast. It was originally crafted by John Conway as a counterexample to the converse of the intermediate value theorem, and has the property that on any open interval its image contains the entire real line. In addition, its support set also serves as an illustration of a dense, uncountable set of numbers whose Lebesgue measure is 0.

Arrow’s Impossibility Theorem

Partially prompted by Terry’s buzz, I decided to take a look at Arrow’s Impossibility Theorem. I have heard the name before, since I participated in CollegeBowl as an undergraduate, and questions about Arrow’s theorem are perennial favourites there. The theorem’s most famous interpretation is in voting theory:

Some definitions

  1. Given a set of electors E and a finite set of candidates C, a preference \pi assigns to each elector e \in E an ordering of the set C. In particular, we can write \pi_e(c_1) > \pi_e(c_2) for the statement “the elector e prefers candidate c_1 to candidate c_2“. The set of all possible preferences is denoted \Pi.
  2. A voting system v assigns to each preference \pi\in\Pi an ordering of the set C.
  3. Given a preference \pi and two candidates c_1,c_2, a bloc biased toward c_1 is defined as the subset b(\pi,c_1,c_2) := \{ e\in E | \pi_e(c_1) > \pi_e(c_2) \}
  4. The voting system is said to be
    1. unanimous if, whenever all electors prefer candidate c_1 to c_2, the voting system will return as such. In other words, “\pi_e(c_1) > \pi_e(c_2) \forall e\in E \implies v(\pi,c_1) > v(\pi,c_2)“.
    2. independent if the voting results comparing candidates c_1 and c_2 only depend on the individual preferences between them. In particular, whether v(\pi,c_1) > v(\pi,c_2) only depends on b(\pi,c_1,c_2). An independent system is said to be monotonic if, in addition, a strictly larger biased bloc will give the same voting result: if v(\pi,c_1) > v(\pi,c_2) and b(\pi,c_1,c_2) \subset b(\pi',c_1,c_2), then v(\pi',c_1) > v(\pi',c_2) necessarily.
    3. dictator-free if there isn’t one elector e_0\in E whose vote always coincides with the end-result. In other words, we define a dictator to be an elector e_0 such that v(\pi,c_1) > v(\pi,c_2) \iff \pi_{e_0}(c_1) > \pi_{e_0}(c_2) for any \pi\in \Pi, c_1,c_2\in C.
  5. A voting system is said to be fair if it is unanimous, independent and monotonic, and has no dictators.

And the theorem states

Arrow’s Impossibility Theorem
In an election consisting of a finite set of electors E with at least three candidates C, there can be no fair voting system.

As we shall see, finiteness of the set of electors and the lower-bound on the number of candidates are crucial. In the case where there are only two candidates, the simple-majority test is a fair voting system. (Finiteness is more subtle.) It is also easy to see that if we allow dictators, i.e. force the voting results to align with the preference of a particular predetermined individual, then unanimity, independence, and monotonicity are all trivially satisfied.

What’s wrong with the simple majority test with three or more candidates? The problem is that it is not, by definition, a proper voting system: it can create loops! Imagine we have three electors e_1, e_2, e_3 and three candidates c_1,c_2,c_3. The simple majority test says that v(\pi,c_1) > v(\pi,c_2) if and only if two or more of the electors prefer c_1 to c_2. But this causes a problem in the following scenario:

e_1: c_1 > c_2 > c_3
e_2: c_2 > c_3 > c_1
e_3: c_3 > c_1 > c_2

then the voting result will have v(\pi,c_1) > v(\pi,c_2), v(\pi,c_2) > v(\pi,c_3), and v(\pi,c_3) > v(\pi,c_1), a circular situation which implies that the “result” is not an ordering of the candidates! (An ordering of the set requires the comparisons to be transitive.) So the simple-majority test is, in fact, not a valid voting system.
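
(This cycle is easy to check by brute force; here is a small sketch, with elector and candidate names chosen to match the scenario above.)

```python
from itertools import combinations

# Each elector's preference is a ranked list, best first.
prefs = {
    'e1': ['c1', 'c2', 'c3'],
    'e2': ['c2', 'c3', 'c1'],
    'e3': ['c3', 'c1', 'c2'],
}

def majority_prefers(a, b):
    """True if a strict majority of electors rank candidate a above b."""
    votes = sum(1 for p in prefs.values() if p.index(a) < p.index(b))
    return votes > len(prefs) / 2

for a, b in combinations(['c1', 'c2', 'c3'], 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(winner, 'beats', loser)
# Output: c1 beats c2, c3 beats c1, c2 beats c3 -- a cycle, so the
# "result" is not a transitive ordering of the candidates.
```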

From this first example, we see already that, in the situation of more than two candidates, designing a voting system is a non-trivial problem. Making it fair, as we shall see, will be impossible.