Bubbles Bad; Ripples Good

… Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire et vice versa … (“Given an equation involving any number of flowing quantities, to find the fluxions, and vice versa.”)

Category: Requires introductory-level university maths

Bouncing a quantum particle back and forth

If you have not seen my previous two posts, you should read them first.

In the two previous posts, I shot particles (okay, simulated the shooting on a computer) at a single potential barrier and looked at what happens. What happens when we have more than one barrier? In the classical case the picture is easy to understand: a particle with insufficient energy to escape will be trapped in the local potential well for ever, while a particle with sufficiently high energy will gain freedom and never come back. But what happens in the quantum case?

From scattering a quantum particle against a single potential barrier we developed the intuition that, depending on the frequency (energy) of the particle, some portion gets transmitted and some portion gets reflected. If this intuition is indeed correct, we may expect the quantum particle to bounce between the two barriers, each time losing some amplitude due to tunneling.

But we also saw that the higher frequency components of the quantum particle have higher transmission amplitudes. So we may expect the high frequency components to decay more rapidly than the low frequency ones, and hence the frequency of the “left over” parts to continue to decay in time. This, however, would be wrong, because we would be overlooking one simple fact: by the uncertainty principle again, very low frequency waves cannot be confined to a small physical region. So when we are faced with two potential barriers, the distance between them gives a characteristic frequency. Below this frequency (energy) it is actually not possible to fit a (half) wave between the barriers, and so the low frequency waves must have significant physical extent beyond the barriers, which means that large portions of these low frequency waves will just radiate away freely. Much above the characteristic frequency, however, the waves have large transmission coefficients and will not be confined.

So the net result is that we should expect for each double barrier a characteristic frequency at which the wave can remain “mostly” stuck between the two barriers, losing a little bit of amplitude at each bounce. This will look like a slowly, but exponentially, decaying standing wave. And I have some videos to show for that!

In the video we start with the same random initial data and evolve it under the linear wave equation with different potentials: the equations look like

\displaystyle - \partial^2_{tt} u + \partial^2_{xx} u - V u = 0

where V is a non-negative potential taken in the form

\displaystyle V(x) = a_1 \exp( - x^2 / b_1) - a_2 \exp( -x^2 / b_2)

which is a difference of two Gaussians. For the five waves shown, the values of a_1, b_1 are the same throughout. The coefficients a_2 (taken to be \leq a_1) and b_2 (taken to be < b_1) increase from top to bottom, resulting in deeper and more widely separated double barriers. Qualitatively we see, as we expected,

  • The shallower and narrower the dip the faster the solution decays.
  • The shallower and narrower the dip the higher the “characteristic frequency”.

As an aside: the video shown above is generated using Python, in particular NumPy and Matplotlib; the code took significantly longer to run (20+ hours) than to write (not counting the HPDE solver I wrote before for a different project, coding and debugging this simulation took about 3 hours or less). On the other hand, this only uses one core of my quad-core machine, and leaves the computer responsive in the meantime for other things. Compare that to Auto-QCM: the last time I ran it to grade a stack of 400+ multiple choice exams it locked up all four cores of my desktop computer for almost an entire day.
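
For the curious, here is a minimal sketch of the kind of scheme involved — not the actual HPDE solver used for the videos, and with made-up parameter values — discretizing -\partial^2_{tt} u + \partial^2_{xx} u - Vu = 0 by a standard leapfrog method:

```python
import numpy as np

# Spatial grid and time step (dt < dx for CFL stability).
L, N = 50.0, 2001
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
dt = 0.5 * dx

# Double-barrier potential: difference of two Gaussians, a2 <= a1, b2 < b1.
# (Hypothetical values; the post does not list the ones used in the videos.)
a1, b1, a2, b2 = 5.0, 4.0, 4.0, 1.0
V = a1 * np.exp(-x**2 / b1) - a2 * np.exp(-x**2 / b2)

# Random localized initial data, started from rest (u_t(0, x) = 0).
rng = np.random.default_rng(0)
u_prev = np.exp(-x**2 / 10.0) * rng.standard_normal(N)
u_curr = u_prev.copy()

def leapfrog_step(u_prev, u_curr):
    """Advance u_tt = u_xx - V u by one time step."""
    u_xx = np.zeros_like(u_curr)
    u_xx[1:-1] = (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]) / dx**2
    u_next = 2.0 * u_curr - u_prev + dt**2 * (u_xx - V * u_curr)
    u_next[0] = u_next[-1] = 0.0  # crude reflecting walls, placed far away
    return u_curr, u_next

for _ in range(10000):
    u_prev, u_curr = leapfrog_step(u_prev, u_curr)
```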

As a further aside, this post is related somewhat to my MathOverflow question to which I have not received a satisfactory answer.

… and scattering a quantum particle

In the previous post we shot a classical particle at a potential barrier. In this post we shoot a quantum particle.

Whereas the behaviour of the classical particle is governed by Newton’s laws (where the external force providing the acceleration is given as minus the gradient of the potential), we allow our quantum particle to be governed by the Klein-Gordon equation.

  • Mathematically, the Klein-Gordon equation is a partial differential equation, whereas Newton’s laws form ordinary differential equations. A typical physical interpretation is that the state space of a quantum particle is infinite dimensional, whereas the phase space of classical mechanics is finite dimensional.
  • Note that physically the Klein-Gordon equation was designed to model a relativistic particle, while in the previous post we used the non-relativistic Newton’s laws. In some ways it would’ve been better to model the quantum particle using Schroedinger’s equation. I plead here however that (a) qualitatively there is not a big difference in terms of the simulated outcomes and (b) it is more convenient for me to use the Klein-Gordon model as I already have a finite-difference solver for hyperbolic PDEs coded in Python on my computer.

To model a particle, we set the initial data to be a moving wave packet, such that at the initial time the solution is strongly localized and satisfies \partial_t u + \partial_x u = 0. Absent the mass and potential energy terms in the Klein-Gordon equation (so under the evolution of the free wave equation), this wave packet will stay coherent and just translate to the right as time goes along. The addition of the mass term causes some small dispersion, but the mass is chosen small so that this is not a large effect. The main change to the evolution is the potential barrier, which you can see illustrated in the simulation.
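
As an illustration (the exact parameters used in the videos are not given in the post), initial data of this type can be prepared as follows; x0, sigma, and k below are hypothetical values for the packet centre, width, and central frequency:

```python
import numpy as np

x = np.linspace(-100.0, 100.0, 4001)
dx = x[1] - x[0]

def wave_packet(x, x0=-40.0, sigma=5.0, k=2.0):
    """Right-moving wave packet: Gaussian envelope times a carrier wave.

    x0, sigma, k are illustrative: packet centre, width, and central
    frequency (the 'kinetic energy' knob discussed in the post).
    """
    u0 = np.exp(-((x - x0) / sigma) ** 2) * np.cos(k * (x - x0))
    ut0 = -np.gradient(u0, dx)  # enforce u_t + u_x = 0 at t = 0
    return u0, ut0

u0, ut0 = wave_packet(x)
```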

The video shows 8 runs of the simulation with different initial data. Whereas in the classical picture the initial kinetic energy is captured by the initial speed at which the particle is moving, in the quantum/wave picture the kinetic energy is related to the central frequency of the wave packet. So each of the 8 runs has an increasing frequency offset that represents increasing kinetic energy. The simulation has two plots: the top shows the square of the solution itself, which gives a good indication of where the wave packet is physically located. The bottom shows a normalized kinetic energy density (I have to include a normalization since the kinetic energies of the first and last particles differ roughly tenfold).

One notices that in the first two runs, the kinetic energy is sufficiently small that the particle mostly bounces back to the left after hitting the potential.

For the third and fourth runs (frequency shifts \sqrt{2} and \sqrt{3} respectively) we see that while a significant portion of the particle bounces back, a noticeable portion “tunnels through” the barrier: this is caused by a combination of the quantum tunneling phenomenon and the wave packet form of the initial data.

The phenomenon of quantum tunneling manifests in the fact that all waves with non-zero energy will penetrate a finite potential barrier a little bit. But the amount of penetration decays to zero as the energy of the wave goes to zero: this is the semiclassical regime. In the semiclassical limit quantum mechanics is known to converge toward classical mechanics, so in the low-energy limit we expect our particle to behave like a classical particle and bounce off. Naturally, then, by increasing the energy (frequency) of our wave packet we expect more of the tunneling to happen.

Further, observe that by shaping our data into a wave packet it necessarily contains some high frequency components (due to the Heisenberg uncertainty principle); high frequency, and hence high energy, components do not “see” the potential barrier. Even in the classical picture high energy particles would fly over the potential barrier. So for wave packets there will always be some (perhaps not noticeable at the resolution of our computation) leakage of energy through the potential barrier. The quantum effect on these high energy waves is that they back-scatter. Whereas the classical high energy particles just fly directly over the barrier, a high energy quantum particle will always leave some part of itself behind the barrier. We see this in the sixth and seventh runs of the simulation, where the particle mostly passes through the barrier, but a noticeable amount bounces off in the opposite direction.

In between, during the fifth run, where the frequency shift is 2, we see that the barrier basically splits the particle in two, sending one half flying to the right and the other half flying to the left. Classically this is the turning point between particles that go over the bump and particles that bounce back; it would correspond to the case (hard to show numerically!) where a classical particle comes in from afar with just enough energy that it comes to a halt at the top of the potential barrier!

And further increasing the energy after the seventh run, we see in the final run a situation where only a negligible amount of the particle scatters backward with almost all of it passing through the barrier unchanged. One interesting thing to note however is that just like the case of the classical particle, the wave packet appears to “slow down” a tiny bit as it goes over the potential barrier.

An optimization problem: variation

Examining the theorem proven in the previous post, we are led naturally to ask whether there are higher order generalizations.

Question: Let f \in C^{k}([-1,1]) with f^{(k)} > 0. What can we say about the minimizer of C = \int_{-1}^1 |f(x) - p(x)|~\mathrm{d}x where p ranges over degree k-1 polynomials?

It is pretty easy to see that we expect p to intersect f at the maximum number of points, which is k. We label those points x_1, \ldots, x_k and call x_0 = -1 and x_{k+1}= 1. Then the cost function can be written as
\displaystyle C = \sum_{j = 0}^k (-1)^j \int_{x_j}^{x_{j+1}} f(x) - p(x; x_1, \ldots, x_k) ~\mathrm{d}x
Since we know the values of p at the points x_1, \ldots, x_k, we can write down the interpolating polynomial explicitly using the Lagrange interpolation formula:
\displaystyle p = \sum_{j = 1}^k \left( \prod_{1 \leq m \leq k, m\neq j} \frac{x - x_m}{x_j - x_m} \right) f(x_j) = \sum L_j(x; x_1, \ldots, x_k) f(x_j)

The partial derivatives are now
\displaystyle \partial_n C = \sum_{j = 0}^k (-1)^{j+1} \int_{x_j}^{x_{j+1}} \partial_n p(x; x_1, \ldots, x_k) ~\mathrm{d}x
(the boundary terms from differentiating the limits of integration vanish, since f(x_j) = p(x_j) at every interior node).
It remains to compute \partial_n p for 1 \leq n \leq k. We observe that when n \neq j
\displaystyle \partial_n L_j = - \frac{1}{x - x_n} L_j + \frac{1}{x_j - x_n} L_j
and also
\displaystyle \partial_n L_n = - \left( \sum_{1\leq m \leq k, m\neq n} \frac{1}{x_n - x_m} \right) L_n
So
\displaystyle \partial_n p =  \sum_{j \neq n} \frac{x-x_j}{(x_j - x_n)(x - x_n)} L_j f(x_j) + L_n f'(x_n) - \left( \sum_{1\leq m \leq k, m\neq n} \frac{1}{x_n - x_m} \right) L_n f(x_n)
Now, we observe that
\displaystyle \frac{x - x_j}{x - x_n} L_j =   - \left( \prod_{m \neq n,j} \frac{x_n - x_m}{x_j - x_m} \right)  L_n
so after some computation we arrive at
\displaystyle \partial_n p = L_n(x) \cdot \left[ f'(x_n) - \sum_{j \neq n} \frac{1}{x_j - x_n} \left(\left( \prod_{m \neq j,n}\frac{x_n - x_m}{x_j - x_m}\right)f(x_j) - f(x_n) \right)\right]
which we can further simplify to
\displaystyle \partial_n p = L_n(x) \cdot \left( f'(x_n) - p'(x_n)\right)
Now, since f and p cross transversely at x_n, the difference of their derivatives there is non-zero. (This harks back to our assumption that f^{(k)} > 0, which guarantees that f - p has only simple zeros.) So we are reduced, as in the case k = 2, to equations entirely independent of f.
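
This identity can be sanity-checked symbolically. Here is a small sympy sketch, spot-checking k = 3 and n = 2 with f kept abstract:

```python
import sympy as sp

x = sp.symbols("x")
k, n = 3, 2                                   # spot-check k = 3, n = 2 (1-based)
xs = list(sp.symbols(f"x1:{k + 1}"))          # nodes x1, x2, x3
f = sp.Function("f")

def L(j):
    """Lagrange basis polynomial L_j(x; x_1, ..., x_k)."""
    return sp.prod([(x - xs[m]) / (xs[j] - xs[m]) for m in range(k) if m != j])

p = sum(L(j) * f(xs[j]) for j in range(k))    # interpolating polynomial
dp = sp.diff(p, x)                            # p'(x)
lhs = sp.diff(p, xs[n - 1])                   # partial derivative in x_n
rhs = L(n - 1) * (sp.Derivative(f(xs[n - 1]), xs[n - 1]) - dp.subs(x, xs[n - 1]))
print(sp.simplify(sp.expand(lhs - rhs)))      # expected: 0
```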

More precisely, we see that the stationarity condition becomes the choice of x_1, \ldots, x_k such that, for each n,
\displaystyle \sum_{j = 0}^k (-1)^{j} \int_{x_j}^{x_{j+1}} L_n(x)  ~\mathrm{d}x = 0.
Since the L_n form a basis for the polynomials of degree at most k-1, we have that the function
\chi(x) = (-1)^j \qquad x \in (x_j, x_{j+1})
is L^2 orthogonal to every polynomial of degree at most k-1. So in particular the x_j are solutions to the following system of equations
x_0 = -1, \qquad x_{k+1} = 1
\sum_{j = 0}^k (-1)^j \left[ x_{j+1}^d - x_{j}^d \right] = 0 \qquad \forall d \in \{1, \ldots, k\}

From symmetry considerations we have that x_j = - x_{k+1 - j}. This also kills about half of the equations. For small k we have

  1. \{ 0\}
  2. \{ -1/2, 1/2\}
  3. \{ -\frac{\sqrt{2}}2, 0, \frac{\sqrt{2}}2 \}
  4. \{ (\pm 1 \pm \sqrt{5})/4 \}
  5. \{ 0, \pm\frac12, \pm \frac{\sqrt{3}}2 \}
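
These values coincide with x_j = \cos(j\pi/(k+1)), j = 1, \ldots, k — the zeros of the Chebyshev polynomial of the second kind U_k — consistent with the classical characterization of best L^1 approximation. A quick numerical sanity check of the system above (a sketch using scipy's generic root-finder, started from an uninformed guess):

```python
import numpy as np
from scipy.optimize import fsolve

def optimal_nodes(k):
    """Solve sum_j (-1)^j (x_{j+1}^d - x_j^d) = 0 for d = 1..k,
    with x_0 = -1 and x_{k+1} = 1 held fixed."""
    def equations(interior):
        pts = np.concatenate(([-1.0], np.sort(interior), [1.0]))
        return [sum((-1) ** j * (pts[j + 1] ** d - pts[j] ** d)
                    for j in range(k + 1))
                for d in range(1, k + 1)]
    guess = np.linspace(-0.5, 0.5, k)  # uninformed starting point
    return np.sort(fsolve(equations, guess))

for k in range(1, 6):
    # Compare against the Chebyshev points cos(j*pi/(k+1)), j = k..1.
    print(k, optimal_nodes(k), np.cos(np.arange(k, 0, -1) * np.pi / (k + 1)))
```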

Bessaga’s converse to the contraction mapping theorem

In preparing some lecture notes for the implicit function theorem, I took a look at Schechter’s delightfully comprehensive Handbook of Analysis and its Foundations (which you can also find on his website), and I learned something new about the Banach fixed point theorem. To quote Schechter:

… although Banach’s theorem is quite easy to prove, a longer proof cannot yield stronger results.

I will write a little bit here about a “converse” to the Banach theorem due to Bessaga, which uses a little bit of help from the Axiom of Choice.


Compactifying (p,q)-Minkowski space

In a previous post I described a method of thinking about conformal compactifications, and I mentioned in passing that, in principle, the method should also apply to arbitrary signature pseudo-Euclidean space \mathbb{R}^{p,q}. A few days ago while visiting Oxford I had a conversation with Sergiu Klainerman where this came up, and we realised that we don’t actually know what the conformal compactifications are! So let me write down here the computations in case I need to think about it again in the future.

Products and expectation values

Let us start with an instructive example (modified from one I learned from Steven Landsburg). Let us play a game:

I show you three identical looking boxes. In the first box there are 3 red marbles and 1 blue one. In the second box there are 2 red marbles and 1 blue one. In the last box there is 1 red marble and 4 blue ones. You choose one at random. What is …

  • The expected number of red marbles you will find?
  • The expected number of blue marbles you will find?
  • The expected number of marbles, regardless of colour, you will find?
  • The expected percentage of red marbles you will find?
  • The expected percentage of blue marbles you will find?

Answer below the cut…
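
(If you want to check your answers afterwards, here is a sketch of the computation in exact arithmetic. The punchline, in the spirit of the title, is that the expectation of a ratio is not the ratio of the expectations.)

```python
from fractions import Fraction as F

boxes = [(3, 1), (2, 1), (1, 4)]  # (red, blue) marbles in each box
p = F(1, 3)                       # each box is chosen uniformly at random

E_red = sum(p * r for r, b in boxes)                  # 2
E_blue = sum(p * b for r, b in boxes)                 # 2
E_total = sum(p * (r + b) for r, b in boxes)          # 4
E_frac_red = sum(p * F(r, r + b) for r, b in boxes)   # 97/180
E_frac_blue = sum(p * F(b, r + b) for r, b in boxes)  # 83/180

# Note that E_frac_red != E_red / E_total: the expectation of a ratio
# is not the ratio of the expectations.
print(E_red, E_blue, E_total, E_frac_red, E_frac_blue)
```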

Mariş’s Theorem

During a literature search (to answer my question concerning symmetries of “ground states” in a variational problem), I came across a very nice theorem due to Mihai Mariş. The theorem itself is, more than anything else, a statement about the geometry of Euclidean spaces. I will give a rather elementary write-up of (a special case of) the theorem here. (The proof presented here can equally well be applied to get the full strength of the theorem as presented in Mariş’s paper; I give just the special case for clarity of the discussion.)


Continuity of the infimum

I just realised (two weeks ago, though I have only gotten around to finishing this blog post now) that an argument used to prove a proposition in a project I am working on is wrong. After reducing the problem to its core I found that it is something quite elementary. So today’s post is of a different flavour from those of the recent past.

Question Let X,Y be topological spaces. Let f:X\times Y\to\mathbb{R} be a bounded, continuous function. Is the function g(x) = \inf_{y\in Y}f(x,y) continuous?

Intuitively, one may be tempted to say “yes”. Indeed, there are plenty of examples where the answer is in the positive. The simplest one is when we can replace the infimum with the minimum:

Example Let the space Y be a finite set with the discrete topology. Then g(x) = \min_{y\in Y} f(x,y) is continuous.
Proof left as exercise.

But in fact, the answer to the question is “No”. Here’s a counterexample:

Example Let X = Y = \mathbb{R} with the standard topology. Define

\displaystyle f(x,y) = \begin{cases} 1 & x > 0 \\ 0 & x < -e^{y} \\ 1 + x e^{-y} & x\in [-e^{y},0]  \end{cases}

which is clearly continuous. But the infimum function g(x) is roughly the Heaviside function: g(x) = 1 if x \geq 0, and g(x) = 0 if x < 0.
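
A quick numerical illustration of the jump (a sketch; the infimum over y is approximated by a minimum over a large truncated range):

```python
import numpy as np

def f(x, y):
    """The counterexample: continuous on R^2, yet inf_y f(x, .) jumps at x = 0."""
    return np.where(x > 0, 1.0,
                    np.where(x < -np.exp(y), 0.0, 1.0 + x * np.exp(-y)))

y = np.linspace(-50.0, 50.0, 200001)
for x in (-1.0, -1e-6, 0.0, 1.0):
    print(x, f(x, y).min())  # prints 0 for negative x, 1 for x >= 0
```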

So what is it about the first example that makes the argument work? What is the difference between the minimum and the infimum? A naive guess may be that in the finite case we are taking a minimum, and therefore the infimum is attained. This guess is not unreasonable: there are a lot of arguments in analysis that become much easier when the infimum can be assumed to be attained (we are then allowed to deal with a minimizer instead of a minimizing sequence). But sadly that is not (entirely) the case here: for every x_0, we can certainly find a y_0 such that f(x_0,y_0) = g(x_0). So attaining the infimum point-wise is not enough.

What we need here is compactness. In fact, we have the following

Theorem Let X,Y be topological spaces with Y compact. Then for any continuous f:X\times Y\to\mathbb{R}, the function g(x) := \inf_{y\in Y} f(x,y) is well-defined and continuous.

The proof usually proceeds in three parts. That g(x) > -\infty follows from the fact that for any fixed x\in X, f(x,\cdot):Y\to\mathbb{R} is a continuous function defined on a compact space, and hence is bounded (in fact the infimum is attained). Then, using that the sets (-\infty,a) and (b,\infty) form a subbase for the topology of \mathbb{R}, it suffices to check that g^{-1}((-\infty,a)) and g^{-1}((b,\infty)) are open.

Let \pi_X be the canonical projection \pi_X:X\times Y\to X, which we recall is continuous and open. It is easy to see that g^{-1}((-\infty,a)) = \pi_X\left( f^{-1}((-\infty,a)) \right). So continuity of f implies that this set is open. (Note that this part does not depend on compactness of Y. In fact, a minor modification of this proof shows that for any family of upper semicontinuous functions \{f_c\}_{c\in C}, the pointwise infimum \inf_{c\in C} f_c is also upper semicontinuous, a fact that is very useful in convex analysis. And indeed, the counterexample function given above is upper semicontinuous.)

It is in this last part, showing that g^{-1}((b,\infty)) is open, that compactness is crucially used. Observe that g(x) > b \implies f(x,y) > b~ \forall y. In other words, g(x) > b \implies \forall y, (x,y) \in f^{-1}((b,\infty)), an open set. This in particular implies that \forall x\in g^{-1}((b,\infty)) and \forall y\in Y there exists a “box” neighborhood U_{(x,y)}\times V_{(x,y)} contained in f^{-1}((b,\infty)). Now, using compactness of Y, a finite subcollection \{U_{(x,y_i)}\times V_{(x,y_i)}\} of all these boxes covers \{x\}\times Y. And in particular we have

\displaystyle \{x\}\times Y \subset \left(\cap_{i = 1}^k U_{(x,y_i)}\right)\times Y \subset f^{-1}((b,\infty))

and hence g^{-1}((b,\infty)) = \cup_{x\in g^{-1}((b,\infty))} \cap_{i = 1}^{k(x)} U_{(x,y_i)} is open. Q.E.D.

One question we may ask is how sharp is the requirement that Y is compact. As with most things in topology, counterexamples abound.

Example Let Y be any uncountably infinite set equipped with the co-countable topology. That is, the collection of open subsets consists precisely of the empty set and all subsets whose complement is countable. The two interesting properties of this topology are (a) Y is not compact and (b) Y is hyperconnected. (a) is easy to see: let C be some countably infinite subset of Y. For each c\in C let U_c = \{c\}\cup (Y\setminus C). This forms an open cover with no finite sub-cover. Hyperconnected spaces are, roughly speaking, spaces in which all open nonempty sets are “large”, in the sense that they mutually overlap a lot. In particular, a continuous map from a hyperconnected space to a Hausdorff space must be constant. In our case we can see this directly: suppose h:Y\to \mathbb{R} is a continuous map. Fix y_1,y_2\in Y. Let N_{1,2}\subset \mathbb{R} be open neighborhoods of h(y_{1,2}). Since h is continuous, h^{-1}(N_1)\cap h^{-1}(N_2) is open and non-empty (by the co-countable assumption). Therefore N_1\cap N_2\neq \emptyset for any pair of such neighborhoods. Since \mathbb{R} is Hausdorff, this forces h to be the constant map. This implies that for any topological space X, a continuous function f:X\times Y\to\mathbb{R} is constant along Y, and hence for any y_0\in Y, we have \inf_{y\in Y} f(x,y) =: g(x) = f(x,y_0), which is continuous.

One can try to introduce various regularity/separation assumptions on the spaces X,Y to see at what level compactness becomes a crucial requirement. As an analyst, however, I really only care about topological manifolds, in which case the second example up top can be readily used. We can slightly weaken the assumptions and still prove the following partial converse in essentially the same way.

Theorem Let X be Tychonoff, connected, and first countable, such that X contains a non-trivial open subset whose closure is not the entire space; and let Y be paracompact and Lindelöf. Then if Y is noncompact, there exists a continuous function f:X\times Y\to\mathbb{R} such that \inf_{y\in Y}f:X\to \mathbb{R} is not continuous.

Remark Connected (nontrivial) topological manifolds automatically satisfy the conditions on X and Y except for non-compactness. The conditions given are not necessary for the theorem to hold; but they more or less capture the topological properties used in the construction of the second counterexample above.

Remark If X is such that every open set’s closure is the entire space, we must have that it is hyperconnected (let C,D\subset X be closed sets with C\cup D = X and C proper. Then the open set C^c is non-empty and contained in D, so D \supset \overline{C^c} = X. Hence X cannot be written as the union of two proper closed subsets). And if it is Tychonoff, then X is either the empty set or the one-point set.

Lemma For a paracompact Lindelöf space that is noncompact, there exists a countably infinite open cover \{U_k\} and a sequence of points y_k \in U_k such that y_k \notin U_j whenever j\neq k.

Proof: By noncompactness, there exists an open cover with no finite sub-cover. By the Lindelöf property, this open cover can be assumed to be countable; we enumerate it as \{V_k\} and assume WLOG that \forall k, V_k \setminus \cup_{j =1}^{k-1} V_j \neq \emptyset. Define \{U_k\} and \{y_k\} inductively by U_k = V_k \setminus \cup_{j = 1}^{k-1} \{ y_j\}, choosing y_k \in U_k \setminus \cup_{j=1}^{k-1}U_j.

Proof of theorem: We first construct a sequence of continuous functions on X. Let G\subset X be a non-empty open set such that its closure-complement H = (\bar{G})^c is also a non-empty open set (such a G exists by assumption). By connectedness \bar{G}\cap \bar{H} \neq \emptyset, so we can pick x_0 in the intersection. Let \{x_j\}\subset H be a sequence of points converging to x_0, which exists by first countability. Using the Tychonoff property, we can get a sequence of continuous functions f_j on X such that f_j|_{\bar{G}} = 0 and f_j(x_j) = -1.

On Y, choose an open cover \{U_k\} and points \{y_k\} as in the previous Lemma. By paracompactness we have a partition of unity \{\psi_k\} subordinate to \{U_k\}, and by the conclusion of the Lemma we have \psi_k(y_k) = 1. Now we define the function

\displaystyle f(x,y) = \sum_{k} f_k(x)\psi_k(y)

which is continuous, and such that f|_{\bar{G}\times Y} = 0. But by construction g(x_k) = \inf_{y\in Y}f(x_k,y) \leq f(x_k,y_k) = f_k(x_k) = -1, which, combined with the fact that x_k \to x_0 \in \bar{G} (where g(x_0) = 0), shows the desired result. q.e.d.

Inverted time translations

Plot of the vector field K_0 and its stream function

In the study of the global properties of wave-type equations, a well-developed method is the vector field method due to Sergiu Klainerman and Demetrios Christodoulou. Maybe another day I will write a more detailed treatise on what the vector field method is and how to apply it; I won’t do it now. The method is crucial in many proofs of nonlinear stability for wave-type problems, with perhaps the most striking application being the global nonlinear stability of Minkowski space. The main idea behind the vector field method is to construct a tensor that measures the local energy content of the solution to our equations, and to exploit the properties of this tensor via vector fields. Examples of this tensor include the Einstein-Hilbert stress for electromagnetism, as well as the Bel-Robinson tensor for spin-2 (graviton) fields. To exploit the fine properties of this tensor field, one applies the divergence theorem to the tensor field contracted against suitable vector fields. For vector fields associated to the symmetries of the problem, this procedure produces conservation laws, which give control of the physical solution at a later time based on control at the present.

As it turns out, the useful symmetries of the equation, in the geometrical case, are closely related to the conformal symmetries of Minkowski space. These include the true symmetries (translations, rotations, and Lorentz boosts), as well as the conformal scaling and, what we will discuss here, the inverted time translation, which lies at the heart of decay estimates for spin-1 and spin-2 fields on Minkowski space.

The inverted time translation, often denoted K_0, is the vector field given in radial coordinates by K_0 = (t^2 + r^2)\partial_t + 2tr \partial_r. In the picture to the upper left, the vector field is plotted along with its stream function. This vector field is a conformal symmetry of Minkowski space. The name of the vector field reflects the fact that it is associated to a conformal inversion (which is also used in the conformal compactification of Minkowski space). On Minkowski space, the inversion map x^\mu \mapsto \frac{x^\mu}{\langle x,x\rangle} is a conformal isometry, and K_0 can be checked to be the vector field \partial_t conjugated by the inversion map. As such, it has a very nice property compared with the other symmetry vector fields: the time translation \partial_t and the inverted time translation K_0 are essentially (up to Lorentz boosts) the only globally causal conformal vector fields of Minkowski space, and so, under a dominant-energy type condition, they are the ones for which we have nonnegative energy controls.
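
A figure like the one above takes only a few lines to reproduce; a sketch using matplotlib's streamplot, which traces the integral curves of the field:

```python
import numpy as np
import matplotlib.pyplot as plt

# K_0 = (t^2 + r^2) d_t + 2 t r d_r, drawn in the (r, t) half-plane.
r, t = np.meshgrid(np.linspace(0.01, 3.0, 300), np.linspace(-3.0, 3.0, 300))
K_r = 2.0 * t * r    # radial component
K_t = t**2 + r**2    # time component

plt.streamplot(r, t, K_r, K_t, density=1.2, linewidth=0.7)
plt.xlabel("r")
plt.ylabel("t")
plt.title(r"Integral curves of $K_0$")
plt.show()
```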

Shock singularities in Burgers’ equation

It is generally well known that partial differential equations modelling fluid motion can exhibit “shock waves”. In fact, the subject I will write about today is generally presented as the canonical example for such behaviour in a first course in partial differential equations (while also introducing the method of characteristics). The focus here, however, will not be so much on the formation of shocks as on the profile of the shock boundary; this discussion tends to be omitted from introductory texts.

Solving Burgers’ equation
First we recall the inviscid Burgers’ equation, a fundamental partial differential equation in the study of fluids. The equation is written

Equation 1. Inviscid Burgers’ equation
\displaystyle \frac{\partial}{\partial t} u  + u \frac{\partial}{\partial x} u = 0

where u = u(t,x) is the “local fluid velocity” at time t and at spatial coordinate x. The solution of the equation is closely related to its derivation: notice that we can re-write the equation as

v \cdot \nabla u = (\partial_t + u \partial_x) u = 0

The question we consider is the initial value problem for the PDE: given some initial velocity configuration u_0(x), we want to find a solution u(t,x) to Burgers’ equation such that u(0,x) = u_0(x).

The traditional way of obtaining a solution is via the method of characteristics. We first observe that (1) the alternate form of the equation above means that if X(t) is a curve tangent to the vector field v = \partial_t + u\partial_x, then u(t,X(t)) must be a constant function of the parameter t. (2) Plugging this back in implies that along such a curve X(t), the vector field v = \partial_t + u\partial_x = \partial_t + u_0 \partial_x is constant. (3) A curve whose tangent vector is constant is a straight line. So we have that a solution of Burgers’ equation must satisfy

u(t, x + u_0(x) \cdot t) = u_0(x)

And we call the family of curves given by X_x(t) = x + u_0(x) \cdot t the characteristic curves of the solution.
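
Characteristic pictures of this sort are easy to draw for yourself; a sketch, with an illustrative decreasing (hence shock-forming) initial profile:

```python
import numpy as np
import matplotlib.pyplot as plt

u0 = lambda x: -np.tanh(x)  # illustrative initial velocity; decreasing near 0

t = np.linspace(0.0, 2.0, 100)
for x0 in np.linspace(-3.0, 3.0, 41):
    plt.plot(x0 + u0(x0) * t, t, "b", lw=0.5)  # X(t) = x0 + u0(x0) t
plt.xlabel("x")
plt.ylabel("t")
plt.show()
```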

To extract more qualitative information about Burgers’ equation, let us take another spatial derivative of the equation, and call the function w = \partial_x u. Then we have

\partial_t w + w^2 + u \partial_x w = 0 \implies v \cdot \nabla w + w^2 = 0

So letting X(t) be a characteristic curve, and writing W(t) = w(t, X(t)), we have that along the characteristic curve

\displaystyle \frac{d}{dt}W = - W^2 \implies W(t) = \frac{1}{t+W(0)^{-1}}

So in particular, we see that if W(0) < 0, then W(t) must blow up by time t = |W(0)|^{-1}.

Plot of divergent flow

So what does this mean? We’ve seen that along characteristic lines, the value of u stays constant. But we’ve also seen that along those lines, the value of its spatial derivative can blow up if the initial slope is negative. Perhaps the best thing to do is to illustrate it with two pictures. In the pictures the thick, red curve is the initial velocity distribution u_0(x), shown with the black line representing the x-axis: so when the curve is above the axis, initially the local fluid velocity is positive, and the fluid is moving to the right. The blue curves are the characteristic lines. In the first image to the right, we see that the initial velocity distribution is such that the velocity is increasing to the right, so w(0,x) is always positive. We see that in this situation the flow is divergent, the flow lines getting further and further apart, corresponding to the solution where w(t,x) gets smaller and smaller along a flow line.

For the second image here on our left, the situation is different. The initial velocity distribution starts out increasing, then hits a maximum, dips down to a minimum, and finally increases again. In the regions where the velocity distribution is increasing, we see the same “spreading out” behaviour as before, with the flow lines getting further and further apart (especially in the upper left region). But the flow lines originating in the region where the velocity distribution is decreasing get bunched together as time goes on, eventually intersecting! This intersection is what is known as a shock. From the picture, it becomes clear what the blow-up of W(t) means: suppose the initial velocity distribution is such that for two points x_1 < x_2 we have u_0(x_1) > u_0(x_2). Since the flow line originating from x_1 is moving faster, it will eventually catch up to the flow line originating from x_2. When the two flow lines intersect, we have a problem: if we follow the flow line from x_1, the function u must take the value u_0(x_1) there; but if we follow the flow line from x_2, the function must take the value u_0(x_2) there. So we cannot consistently assign a value to the function u at the points of intersection of flow-lines in a way that satisfies Burgers’ equation.

Another way of thinking about this difficulty is in terms of particle dynamics. Imagine the line being a highway, and points on it being cars. The dynamics of the traffic flow described by Burgers’ equation is one in which each driver starts at one speed (which can be in reverse), and maintains that speed completely without regard for the cars in front of or behind it. If we start out with a distribution where the leading cars always drive faster than the trailing ones, then the cars will spread further apart as time goes on. But if we start out with a distribution where a car in front is driving slower than a car behind, the second car will eventually catch up and crash into the one in front. And this is the formation of the shock wave.

(Now technically, in this view, once two cars crash their flow-lines should end, and so cars that are in front of the collision and moving forward should not be affected by the collision at all. But if we imagine that instead of real cars we are driving bumper cars, so that after a collision the car in front maintains speed at the velocity of the car that hit it, while the car in back drives at the velocity of the car it hit [so they swap speeds, as in an elastic collision], then we have something like the picture plotted above.)

Shock boundary
Having established that shocks can form, we move on to the main discussion of this post: the geometry of the set of shock singularities. We will consider the purely local effects of the shocks; by which we mean that we will ignore the chain reactions described in the parenthetical remark above. Therefore we will assume that at the formation of the shock, the flow-lines terminate and the particles they represent disappear. In other words, we will consider only shocks coming from nearest neighbor collisions. In this scenario, the time of existence of a characteristic line is precisely governed by the equation on W we derived before: that is, given u_0(x), the characteristic line emanating from x = x_0 will run into the shock precisely at the time t = - \frac{1}{\partial_x u_0(x_0)}. (It will continue indefinitely into the future if the derivative is positive.)

The most well-known image of a shock formation is the image on the right, where we see the classic fan/wedge type shock. (Due to the simplicity of sketching this diagram by hand, this is probably how most people are introduced to this type of diagram, either on a homework set or in class.) What we see here is an illustration of the fact that

If for x_1 < x < x_2, we have \partial^2_{xx} u_0(x) = 0, and \partial_x u_0(x) < 0, then the shock boundary is degenerate: it consists of a single focal point.

To see this analytically: observe that because the blow-up time depends on the first derivative of the initial velocity distribution, for such a set-up the blow-up time t_0 = - (\partial_x u_0)^{-1} is constant for the various points. Then we see that the spatial coordinate of the blow-up will be x + u_0(x) t_0. But since u_0(x) is linear in x, we have

\displaystyle x + u_0(x) t_0 = x_1 + (x-x_1) + u_0(x_1)t_0 + \partial_xu_0 \cdot (x - x_1) t_0 = x_1 + u_0(x_1) t_0

is constant. And therefore the shock boundary is degenerate.


Next we consider the case where \partial^2_{xx} u_0 vanishes at some point x_0, but \partial^3_{xxx}u_0(x_0) \neq 0. The two pictures to the right of this paragraph illustrate the typical shock boundary behaviour. On the far right we have the slightly aphysical situation: notice that for a particle coming in from the left, before it hits its own shock boundary, it first crosses the shock boundary formed by the particles coming in from the right. This is the situation where the third derivative is positive, and the cusp point which corresponds to the shock boundary for x_0 opens to the future. The nearer picture is the situation where the third derivative is negative, with the cusp point opening downwards. Notice that since we are in a neighborhood of a point where the second derivative vanishes, the initial velocity distributions both look almost straight, and it is hard to distinguish from this image the sign of the third derivative. The picture on the far right is based on an arctan type initial distribution, whereas the nearer picture is based on an x^3 type initial distribution.

Let us again analyse the situation more deeply. Near the point x_0, we shall assume that \partial^3_{xxx}u_0 \sim \partial^3_{xxx}u_0(x_0) = C for some constant C. And we will assume, using Galilean transformations, that u_0(x_0) = 0 = x_0. Then letting t_0 = - (\partial_x u_0(x_0))^{-1}, we have

\displaystyle u_0(x) = \frac{C}{6} x^3 - \frac{1}{t_0} x

Thus as a function of x, the blow-up times of flow lines are given by

\displaystyle t(x) = \frac{t_0}{1 - \frac{C}{2}t_0 x^2}

Solving for their blow-up profile y = x + u_0(x) t(x) then gives (after quite a bit of algebraic manipulation)

\displaystyle \frac{ (\frac{t}{t_0} - 1)^3}{t} = \frac{9C}{8} y^2

which can be easily seen to be a cusp: \frac{dy}{dt} = 0 at y=0, t = t_0. And it is clear that the side the cusp opens is dependent on the sign of the third derivative, C.
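
The “bit of algebraic manipulation” can be offloaded to a computer algebra system; a sympy sketch verifying the cusp relation:

```python
import sympy as sp

x = sp.symbols("x", real=True)
t0, C = sp.symbols("t_0 C", positive=True)
u0 = C / 6 * x**3 - x / t0                # cubic model of the initial data
t = t0 / (1 - C / 2 * t0 * x**2)          # blow-up time of the flow line from x
y = x + u0 * t                            # blow-up position of that flow line
print(sp.simplify((t / t0 - 1)**3 / t - sp.Rational(9, 8) * C * y**2))  # 0
```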

The last bit of computation we will do is for the generic case D = \partial^2_{xx}u_0(x_0) \neq 0. In this case we can take

\displaystyle u_0(x) = - \frac{1}{t_0}x + \frac{D}{2} x^2

as an approximation. Then the blowup times will be

\displaystyle t(x) = \frac{t_0}{1 - D t_0 x}

which leads to the blowup profile y being [Thanks to Huy for the correction.]

\displaystyle y = -\frac{1}{2Dt} \left( 1 - \frac{t}{t_0}\right)^2

and a direct computation will then lead to the conclusion that in this generic scenario, the shock boundary will be everywhere tangent to the flow-line that ends there.
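
This last computation can also be checked symbolically: tangency amounts to dy/dt agreeing with the velocity u_0(x) carried by the flow line terminating at (t, y(t)). A sympy sketch:

```python
import sympy as sp

t, t0, D = sp.symbols("t t_0 D", positive=True)
x = (t - t0) / (D * t0 * t)             # flow line blowing up at time t
y = -(1 - t / t0)**2 / (2 * D * t)      # the blow-up profile from above
u0 = -x / t0 + D / 2 * x**2             # velocity carried by that flow line
print(sp.simplify(sp.diff(y, t) - u0))  # 0: boundary tangent to the flow line
```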
