Bubbles Bad; Ripples Good

… Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire et vice versa … (Given an equation involving any number of fluent quantities, to find the fluxions, and vice versa.)


Simulating closed cosmic strings (or yet another adventure in Julia)

One of my current research interests is in the geometry of constant-mean-curvature time-like submanifolds of Minkowski space. A special case of this is the study of cosmic strings under the Nambu-Goto action. Mathematically the classical (as opposed to quantum) behavior of such strings is quite well understood, by combining the works of theoretical physicists since the 80s (especially those of Jens Hoppe and collaborators) together with recent mathematical developments (such as the work of Nguyen and Tian and that of Jerrard, Novaga, and Orlandi). To get a better handle on what is going on with these results, and especially to provide some visual aids, I’ve tried to produce some simulations of the dynamics of closed cosmic strings. (This was also an opportunity for me to practice code-writing in Julia and learn a bit about the best practices for performance and optimization in that language.)

The code

After some false starts, here is some reasonably stable code.

using LinearAlgebra   # dot() lives here in Julia 0.7+

function MC_corr3!(position::Array{Float64,2}, prev_vel::Array{Float64,2}, next_vel::Array{Float64,2}, result::Array{Float64,2}, dt::Float64)
  # We assume the data is stored in the format point(coord, number), e.g. a 3x1000 array.
  num_points = size(position,2)
  num_dims = size(position,1)

  curr_vel = zeros(num_dims)
  curr_vs = zeros(num_dims)
  curr_ps = zeros(num_dims)
  curr_pss = zeros(num_dims)

  pred_vel = zeros(num_dims)
  agreement = true

  for col = 1:num_points  #Outer loop is column
    if col == 1
      prev_col = num_points
      next_col = 2
    elseif col == num_points
      prev_col = num_points - 1
      next_col = 1
    else
      prev_col = col -1
      next_col = col + 1
    end

    for row = 1:num_dims
      curr_vel[row] = (next_vel[row,col] + prev_vel[row,col])/2
      curr_vs[row] = (next_vel[row,next_col] + prev_vel[row,next_col] - next_vel[row,prev_col] - prev_vel[row,prev_col])/4
      curr_ps[row] = (position[row,next_col] - position[row,prev_col])/2
      curr_pss[row] = position[row,next_col] + position[row,prev_col] - 2*position[row,col]
    end

    beta = (1 + dot(curr_vel,curr_vel))^(1/2)
    sigma = dot(curr_ps,curr_ps)
    psvs = dot(curr_ps,curr_vs)
    bvvs = dot(curr_vs,curr_vel) / (beta^2)
    pssps = dot(curr_pss,curr_ps)

    for row in 1:num_dims
      result[row,col] = curr_pss[row] / (sigma * beta) - curr_ps[row] * pssps / (sigma^2 * beta) - curr_vel[row] * psvs / (sigma * beta) - curr_ps[row] * bvvs / (sigma * beta)
      pred_vel[row] = prev_vel[row,col] + dt * result[row,col]
    end

    agreement = agreement && isapprox(next_vel[:,col], pred_vel, rtol=sqrt(eps(Float64)))
  end

  return agreement
end


function find_next_vel!(position::Array{Float64,2}, prev_vel::Array{Float64,2}, next_vel::Array{Float64,2}, dt::Float64; max_tries::Int64=50)
  tries = 1
  result = zero(next_vel)   # zeros of the same size and eltype (zeros(::Array) was removed in Julia 1.0)
  agreement = MC_corr3!(position,prev_vel,next_vel,result,dt)
  for j in 1:size(next_vel,2), i in 1:size(next_vel,1)
    next_vel[i,j] = prev_vel[i,j] + result[i,j]*dt
  end
  while !agreement && tries < max_tries
    agreement = MC_corr3!(position,prev_vel,next_vel,result,dt)
    for j in 1:size(next_vel,2), i in 1:size(next_vel,1)
      next_vel[i,j] = prev_vel[i,j] + result[i,j]*dt
    end
    tries +=1
  end
  return tries, agreement
end

This first file does the heavy lifting of solving the evolution equation. The scheme is a semi-implicit finite difference scheme. The function MC_corr3! takes as input the current position, the previous velocity, and the next velocity, and computes the correct current acceleration. The function find_next_vel! iterates MC_corr3! until the computed acceleration agrees (up to numerical errors) with the input previous and next velocities.

Or, in notation:
MC_corr3!: ( x[t], v[t-1], v[t+1] ) --> Delta-v[t]
and find_next_vel! iterates MC_corr3! until
Delta-v[t] == (v[t+1] - v[t-1]) / 2

The code in this file is also where the performance matters the most, and I spent quite some time experimenting with different algorithms to find one with the most reasonable speed.
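For a rough sense of where the time goes, here is one way to time the inner solver on a typical problem size (a sketch only: it assumes the BenchmarkTools package, which none of the files in this post use, and it borrows make_ellipse from the initial-data file below):

using BenchmarkTools   # assumed extra package, not part of the original code

# Typical problem size: 3000 points on the unit circle in the plane
# (make_ellipse is defined in the initial-data file below).
pos = make_ellipse(1.0, 1.0, 3000, 0)
pv, nv, res = zero(pos), zero(pos), zero(pos)

@btime MC_corr3!($pos, $pv, $nv, $res, 0.01/3000)   # $-interpolation avoids timing global lookups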

function make_ellipse(a::Float64, b::Float64, n::Int64, extra_dims::Int64=1)  # a,b are relative lengths of x and y axes
  s = range(0, 2π * (n-1)/n, length=n)   # linspace was removed in Julia 1.0
  if extra_dims == 0
    return vcat(transpose(a .* cos.(s)), transpose(b .* sin.(s)))
  elseif extra_dims > 0
    return vcat(transpose(a .* cos.(s)), transpose(b .* sin.(s)), zeros(extra_dims,n))
  else
    error("extra_dims must be non-negative")
  end
end

function perturb_data!(data::Array{Float64,2}, coeff::Vector{Float64}, num_modes::Int64) 
  # num_modes is the number of modes
  # coeff are the relative sizes of the perturbations

  numpts = size(data,2)

  for j in 2:num_modes
    rcoeff = rand(length(coeff),2)

    for pt in 1:numpts
      theta = 2j * π * pt / numpts
      for d in 1:length(coeff)
        data[d,pt] += ( (rcoeff[d,1] - 0.5) *  cos(theta) + (rcoeff[d,2] - 0.5) * sin(theta)) * coeff[d] / j^2
      end
    end
  end

  nothing
end

This file just sets up the initial data. Note that in principle the number of ambient spatial dimensions is arbitrary.
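For example (a usage sketch mirroring the main file below), a perturbed unit circle sitting in three ambient spatial dimensions:

my_data = make_ellipse(1.0, 1.0, 3000, 1)    # unit circle, plus one extra spatial dimension
perturb_data!(my_data, [0.5, 0.5, 0.5], 15)  # perturbation amplitudes for all three coordinates
size(my_data)                                # (3, 3000)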

using Plots

pyplot(size=(1920,1080), reuse=true)

function plot_data2D(filename_prefix::String, filename_offset::Int64, titlestring::String, data::Array{Float64,2}, additional_data...)  # ASCIIString was removed in Julia 0.5
  x_max = 1.5
  y_max = 1.5
  plot(transpose(data)[:,1], transpose(data)[:,2] , xlims=(-x_max,x_max), ylims=(-y_max,y_max), title=titlestring)
  
  if length(additional_data) > 0
    for i in 1:length(additional_data)
      plot!(additional_data[i][1,:], additional_data[i][2,:])   # row slices are already vectors in Julia 1.x
    end
  end
  
  png(filename_prefix * lpad(filename_offset, 5, '0') * ".png")   # dec() was removed; lpad zero-pads
  nothing
end

function plot_data3D(filename_prefix::String, filename_offset::Int64, titlestring::String, data::Array{Float64,2}, additional_data...)
  x_max = 1.5
  y_max = 1.5
  z_max = 0.9
  tdata = transpose(data)
  plot(tdata[:,1], tdata[:,2],tdata[:,3], xlims=(-x_max,x_max), ylims=(-y_max,y_max),zlims=(-z_max,z_max), title=titlestring)

  if length(additional_data) > 0
    for i in 1:length(additional_data)
      tdata = transpose(additional_data[i])
      plot!(tdata[:,1], tdata[:,2], tdata[:,3]) 
    end
  end

  png(filename_prefix * lpad(filename_offset, 5, '0') * ".png")
  nothing
end

This file provides some wrapper commands for generating the plots.

include("InitialData3.jl")
include("MeanCurvature3.jl")
include("GraphCode3.jl")

num_pts = 3000
default_timestep = 0.01 / num_pts
max_time = 3
plot_every_ts = 1500

my_data = make_ellipse(1.0,1.0,num_pts,0)
perturb_data!(my_data, [1.0,1.0], 15)
this_vel = zero(my_data)
next_vel = zero(my_data)

for t = 0:floor(Int64,max_time / default_timestep)
  num_tries, agreement = find_next_vel!(my_data, this_vel,next_vel,default_timestep)

  if !agreement
    warn("Time $(t*default_timestep): failed to converge when finding next_vel.")
    warn("Dumping information:")
    max_beta = 1.0
    max_col = 1
    for col in 1:size(my_data,2)
      beta = (1 + dot(next_vel[:,col], next_vel[:,col]))^(1/2)
      if beta > max_beta
        max_beta = beta
        max_col = col
      end
    end
    warn("   Beta attains maximum at position $max_col")
    warn("   Beta = $max_beta")
    warn("   Position = ", my_data[:,max_col])
    prevcol = max_col - 1
    nextcol = max_col + 1
    if max_col == 1
      prevcol = size(my_data,2)
    elseif max_col == size(my_data,2)
      nextcol = 1
    end
    warn("   Deltas")
    warn("    Left:  ", my_data[:,max_col] - my_data[:,prevcol])
    warn("    Right:  ", my_data[:,nextcol] - my_data[:,max_col])
    warn("   Previous velocity: ", this_vel[:,max_col])
    warn("   Putative next velocity: ", next_vel[:,max_col])
    warn("Quitting...")
    break
  end

  for col in 1:size(my_data,2)
    beta = (1 + dot(next_vel[:,col], next_vel[:,col]))^(1/2)
    for row in 1:size(my_data,1)
      my_data[row,col] += next_vel[row,col] * default_timestep / beta
      this_vel[row,col] = next_vel[row,col]
    end

    if beta > 1e7
      warn("time: ", t * default_timestep)
      warn("Almost null... beta = ", beta)
      warn("current position = ", my_data[:,col])
      warn("current Deltas")
      prevcol = col - 1
      nextcol = col + 1
      if col == 1
        prevcol = size(my_data,2)
      elseif col == size(my_data,2)
        nextcol = 1
      end
      warn(" Left: ", my_data[:,col] - my_data[:,prevcol])
      warn(" Right: ", my_data[:,nextcol] - my_data[:,col])
    end
  end

  if t % plot_every_ts == 0
    plot_data2D("3Dtest", div(t,plot_every_ts), @sprintf("elapsed: %0.4f", t*default_timestep), my_data, make_ellipse(cos(t*default_timestep), cos(t*default_timestep), 100, 0))
    @info "Frame $(div(t,plot_every_ts)): used $num_tries tries."
  end
end

And finally the main file. Mostly it just ties the other files together to produce the plots using the simulation code; there are some diagnostics included for me to keep an eye on the output.

The results

The first thing to do is to run a sanity check against explicit solutions. In rotational symmetry, the solution of the cosmic string equations can be found analytically. As you can see below, the simulation closely replicates the explicit solution in this case.
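Concretely, a circular string of unit radius collapses with radius r(t) = \cos t; this is precisely the comparison circle make_ellipse(cos(t*default_timestep), cos(t*default_timestep), 100, 0) drawn by the main file above. A one-liner to check the simulated data against it (a sketch; radius_error is a hypothetical helper, not part of the code above):

# Maximum deviation of the simulated string from the exact collapsing circle r(t) = cos(t)
radius_error(data, t) = maximum(abs.(sqrt.(vec(sum(abs2, data, dims=1))) .- cos(t)))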

The video ends when the simulation stops. The simulation stops because a singularity has formed; in this video the singularity appears as the collapse of the string to a single point.

Next we can play around with a more complicated initial configuration.

In this video the blue curve is the closed cosmic string, which starts out as a random perturbation of the circle with zero initial speed. The string contracts with acceleration determined by the Nambu-Goto action. The simulation ends when a singularity has formed. It is perhaps a bit hard to see directly where the singularity happened; the diagnostic messages, however, help in this regard. From them we know that the onset of the singularity can be seen in the final frame:

[Final frame of the simulation: 3Dtest00268.png]

The highlighted region is getting quite pointy. In fact, that is accompanied by the “corner” picking up infinite acceleration (in other words, experiencing an infinite force). The mathematical singularity corresponds to something unreasonable happening in the physics.

To make it easier to see the “speed” at which the curve is moving, the following videos show the string along with its “trail”. This first one again shows how a singularity can happen as the curve gets gradually more bent, eventually forming a corner.

This next one does a good job emphasizing the “wave” nature of the motion.

The closed cosmic strings behave like an elastic band. The string, overall, wants to contract to a point. Small undulations along the string, however, propagate like traveling waves. Both of these tendencies can be seen quite clearly in the above video. That the numerical solver can solve “past” the singular point is a happy accident: while theoretically the solutions can in fact be analytically continued past the singular points, the renormalization process involved in this continuation is numerically unstable, and we shouldn’t expect to be able to see it on the computer most of the time.

The next video also emphasizes the wave nature of the motion. In addition to the traveling waves, pay attention to the bottom left of the video. Initially the string is almost straight there. This total lack of curvature is a stationary configuration for the string, and so initially there is absolutely no acceleration of that segment of the string. The curvature from the left and right of that segment slowly intrudes on the quiescent piece until the whole thing starts moving.

The last video for this post is a simulation when the ambient space is 3 dimensional. The motion of the string, as you can see, becomes somewhat more complicated. When the ambient space is 2 dimensional a point either accelerates or decelerates based on the local (signed) curvature of the string. But when the ambient space is 3 dimensional, the curvature is now a vector and this additional degree of freedom introduces complications into the behavior. For example, when the ambient space is 2 dimensional it is known that all closed cosmic strings become singular in finite time. But in 3 dimensions there are many closed cosmic strings that vibrate in place without ever becoming singular. The video below is one that does, however, become singular. In addition to a fading trail to help visualize the speed of the curve, this plot also includes the shadows: projections of the curve onto the three coordinate planes.


Bouncing a quantum particle back and forth

If you have not seen my previous two posts, you should read them first.

In the two previous posts, I shot particles (okay, simulated the shooting on a computer) at a single potential barrier and looked at what happens. What happens when we have more than one barrier? In the classical case the picture is easy to understand: a particle with insufficient energy to escape will be trapped in the local potential well forever, while a particle with sufficiently high energy will gain freedom and never come back. But what happens in the quantum case?

Recall the intuition we developed from scattering a quantum particle against a single potential barrier: depending on the frequency (energy) of the particle, some portion gets transmitted and some portion gets reflected. If that intuition is correct, we may expect the quantum particle to bounce between the two barriers, each time losing some amplitude due to tunneling.

But we also saw that the higher frequency components of the quantum particle have higher transmission amplitudes. So we may expect the high frequency components to decay more rapidly than the low frequency ones, and hence the frequency of the “left over” parts to keep decreasing in time. This, however, would be wrong, because we would be overlooking one simple fact: by the uncertainty principle again, very low frequency waves cannot be confined to a small physical region. So when we are faced with two potential barriers, the distance between them gives a characteristic frequency. Below this frequency (energy) it is not actually possible to fit a (half) wave between the barriers, and so the low frequency waves must have significant physical extent beyond the barriers, which means that large portions of these low frequency waves will simply radiate away freely. Much above the characteristic frequency, however, the waves have large transmission coefficients and will not be confined.
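In rough terms: if the two barriers are separated by a distance L, the lowest (half) wave that fits between them has wavelength about 2L, giving a characteristic angular frequency of roughly

\displaystyle \omega_c \approx \frac{\pi}{L}

with only a window of frequencies around \omega_c both confinable and not too readily transmitted.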

So the net result is that we should expect for each double barrier a characteristic frequency at which the wave can remain “mostly” stuck between the two barriers, losing a little bit of amplitude at each bounce. This will look like a slowly, but exponentially, decaying standing wave. And I have some videos to show for that!

In the video we start with the same random initial data and evolve it under the linear wave equation with different potentials: the equations look like

\displaystyle - \partial^2_{tt} u + \partial^2_{xx} u - V u = 0

where V is a non-negative potential taken in the form

\displaystyle V(x) = a_1 \exp( - x^2 / b_1) - a_2 \exp( -x^2 / b_2)

which is a difference of two Gaussians. For the five waves shown, the values of a_1, b_1 are the same throughout. The coefficients a_2 (taken to be \leq a_1) and b_2 (taken to be < b_1) increase from top to bottom, resulting in more and more widely separated double barriers (a code sketch of this potential follows the list below). Qualitatively we see, as we expected,

  • The shallower and narrower the dip the faster the solution decays.
  • The shallower and narrower the dip the higher the “characteristic frequency”.
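For concreteness, here is a sketch of the potential (in Julia rather than the Python actually used for the video; the parameter values are made up for illustration and just need to satisfy a_2 \leq a_1 and b_2 < b_1):

# Difference of two Gaussians: a wide bump with a narrower dip carved out,
# leaving a double barrier whose separation grows with b2.
V(x; a1 = 1.0, b1 = 1.0, a2 = 0.8, b2 = 0.5) = a1 * exp(-x^2 / b1) - a2 * exp(-x^2 / b2)

xs = range(-5, 5, length = 1001)
Vs = V.(xs)    # evaluate on a grid; plotting Vs shows the two barriers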

As an aside: the video shown above is generated using Python, in particular NumPy and Matplotlib; the code took significantly longer to run (20+ hours) than to write (not counting the HPDE solver I wrote before for a different project, coding and debugging this simulation took about 3 hours or less). On the other hand, this only uses one core of my quad-core machine, and leaves the computer responsive in the meantime for other things. Compare that to Auto-QCM: the last time I ran it to grade a stack of 400+ multiple choice exams it locked up all four cores of my desktop computer for almost an entire day.

As a further aside, this post is related somewhat to my MathOverflow question to which I have not received a satisfactory answer.

… and scattering a quantum particle

In the previous post we shot a classical particle at a potential barrier. In this post we shoot a quantum particle.

Whereas the behaviour of the classical particle is governed by Newton’s laws (where the external force providing the acceleration is given as minus the gradient of the potential), we allow our quantum particle to be governed by the Klein-Gordon equation.

  • Mathematically, the Klein-Gordon equation is a partial differential equation, whereas Newton’s laws form ordinary differential equations. A typical physical interpretation is that the state space of a quantum particle is infinite dimensional, whereas the phase space of classical physics is finite dimensional.
  • Note that physically the Klein-Gordon equation was designed to model a relativistic particle, while in the previous post we used the non-relativistic Newton’s laws. In some ways it would’ve been better to model the quantum particle using Schrödinger’s equation. I plead here, however, that (a) qualitatively there is not a big difference in terms of the simulated outcomes and (b) it is more convenient for me to use the Klein-Gordon model, as I already have a finite-difference solver for hyperbolic PDEs coded in Python on my computer.

To model a particle, we set the initial data to be a moving wave packet, such that at the initial time the solution is strongly localized and satisfies \partial_t u + \partial_x u = 0. Absent the mass and potential energy terms in the Klein-Gordon equation (so under the evolution of the free wave equation), this wave packet will stay coherent and just translate to the right as time goes along. The addition of the mass term causes some small dispersion, but the mass is chosen small so that this is not a large effect. The main change to the evolution is the potential barrier, which you can see illustrated in the simulation.
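A sketch of such initial data (again in Julia, though the original simulation was in Python; the envelope width w, centre x0, and central frequency k are illustrative parameters):

# Right-moving wave packet: a Gaussian envelope times an oscillation at frequency k.
# Returning ∂ₜu(0,x) = -∂ₓu(0,x) enforces the initial condition ∂ₜu + ∂ₓu = 0.
function wave_packet(xs; x0 = -5.0, w = 0.5, k = 4.0)
  env = @. exp(-(xs - x0)^2 / (2w^2))
  u   = @. env * cos(k * (xs - x0))
  ux  = @. env * (-(xs - x0) / w^2 * cos(k * (xs - x0)) - k * sin(k * (xs - x0)))
  return u, -ux    # u(0,·) and ∂ₜu(0,·)
end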

The video shows 8 runs of the simulation with different initial data. Whereas in the classical picture the initial kinetic energy is captured by the initial speed at which the particle is moving, in the quantum/wave picture the kinetic energy is related to the central frequency of the wave packet. So each of the 8 runs has an increasing frequency offset representing increasing kinetic energy. The simulation has two plots: the top shows the square of the solution itself, which gives a good indication of where the wave packet is physically located; the bottom shows a normalized kinetic energy density (I have to include a normalization since the kinetic energies of the first and last particles differ roughly 10-fold).

One notices that in the first two runs, the kinetic energy is sufficiently small that the particle mostly bounces back to the left after hitting the potential.

For the third and fourth runs (frequency shifts \sqrt{2} and \sqrt{3} respectively) we see that while a significant portion of the particle bounces back, a noticeable portion “tunnels through” the barrier: this is caused by a combination of the quantum tunneling phenomenon and the wave packet form of the initial data.

The phenomenon of quantum tunneling manifests in that all non-zero energy waves will penetrate a finite potential barrier a little bit. But the amount of penetration decays to zero as the energy of the wave goes to zero: this is the semiclassical regime. In the semiclassical limit quantum mechanics is known to converge toward classical mechanics, and so in the low-energy limit we expect our particle to behave like a classical particle and bounce off. Naturally, then, as we increase the energy (frequency) of our wave packet, we expect more of the tunneling to happen.

Further, observe that by shaping our data into a wave packet it necessarily contains some high frequency components (due to the Heisenberg uncertainty principle); high frequency, and hence high energy, components do not “see” the potential barrier. Even in the classical picture high energy particles would fly over the potential barrier. So for wave packets there will always be some (perhaps not noticeable at the resolution of our computation) leakage of energy through the potential barrier. The quantum effect on these high energy waves is that they back-scatter. Whereas classical high energy particles just fly directly over the barrier, a high energy quantum particle will always leave some part of itself behind at the barrier. We see this in the sixth and seventh runs of the simulation, where the particle mostly passes through the barrier, but a noticeable amount bounces off in the opposite direction.

In between, during the fifth run, where the frequency shift is 2, we see the barrier essentially split the particle in two, sending one half flying to the right and the other half flying to the left. Classically this is the turning point between particles that go over the bump and particles that bounce back; it corresponds to the case (hard to show numerically!) where a classical particle comes in from afar with just enough energy that it comes to a halt exactly at the top of the potential barrier.

And further increasing the energy after the seventh run, we see in the final run a situation where only a negligible amount of the particle scatters backward with almost all of it passing through the barrier unchanged. One interesting thing to note however is that just like the case of the classical particle, the wave packet appears to “slow down” a tiny bit as it goes over the potential barrier.

Decay of Waves IV: Numerical Interlude

I offer two videos. In both videos the same colour scheme is used: we have four waves in red, green, blue, and magenta. The four represent the amplitudes of spherically symmetric free waves on four different types of spatial geometries: 1 dimensional flat space, 2 dimensional flat space, 3 dimensional flat space, and a 3 dimensional asymptotically flat manifold with “trapping” (it has closed geodesics). Can you tell which is which? (Answer below the fold.)


Bubbles!

Joachim Krieger and I posted a new pre-print on the critical nonlinear wave equation. After close to four years of the existence of this blog, I finally have a paper out that actually relates to its title! Considering that the paper itself is only ten pages long, I will just direct readers to the arXiv instead of writing more about it here.

Mariş’s Theorem

During a literature search (to answer my question concerning symmetries of “ground states” in variational problems), I came across a very nice theorem due to Mihai Mariş. The theorem itself is, more than anything else, a statement about the geometry of Euclidean spaces. I will give a rather elementary write-up of (a special case of) the theorem here. (The proof presented here can equally well be applied to get the full strength of the theorem as presented in Mariş’s paper; I give just the special case for clarity of the discussion.)


Gauge invariance, geometrically

A somewhat convoluted chain of events led me to think about the geometric description of partial differential equations. And a question I asked myself this morning was

Question
What is the meaning of gauge invariance in the jet-bundle treatment of partial differential equations?

The answer, actually, is quite simple.

Review of the geometric formulation of PDEs
We consider here abstract PDEs formulated geometrically. All objects considered will be smooth. For more about the formal framework presented here, a good reference is H. Goldschmidt, “Integrability criteria for systems of nonlinear partial differential equations”, JDG (1967) 1:269–307.

A quick review: the background manifold X is assumed (here we take a slightly more restrictive point of view) to be a connected smooth manifold. The configuration space \mathcal{C} is defined to be a fibred manifold p:\mathcal{C}\to X. By J^r\mathcal{C} we refer to the fibred manifold of r-jets of \mathcal{C}, whose projection is p^r = p \circ \pi^r_0, where for r > s we write \pi^r_s: J^r\mathcal{C}\to J^s\mathcal{C} for the canonical projection.

A field is a (smooth) section \phi \in \Gamma \mathcal{C}. A simple example that captures most of the usual cases: if we are studying mappings between manifolds \phi: X\to N, then we take \mathcal{C} = N\times X, the trivial fibre bundle. The s-jet operator naturally sends j^s: \Gamma\mathcal{C} \ni \phi \mapsto j^s\phi \in \Gamma J^s\mathcal{C}.

A partial differential equation of order r is defined to be a fibred submanifold J^r\mathcal{C} \supset R^r \to X. A field is said to solve the PDE if j^r\phi \subset R^r.

In the usual case of systems of PDEs on Euclidean space, X is taken to be \mathbb{R}^d and \mathcal{C} = \mathbb{R}^n\times X the trivial vector bundle. A system of m PDEs of order r is usually taken to be F(x,\phi, \partial\phi, \partial^2\phi, \ldots, \partial^r\phi) = 0 where

\displaystyle F: X\times \mathbb{R}^n \times \mathbb{R}^{dn} \times \mathbb{R}^{\frac{1}{2}d(d+1)n} \times \cdots \times \mathbb{R}^{{d+r-1 \choose r} n} \to \mathbb{R}^m

is some function. We note that the domain of F can be identified in this case with J^r\mathcal{C}. We can then extend F to \tilde{F}: J^r\mathcal{C} \ni c \mapsto (F(c),p^r(c)) \in \mathbb{R}^m\times X, a fibre bundle morphism.

If we assume that \tilde{F} has constant rank, then \tilde{F}^{-1}(0) is a fibred submanifold of J^r\mathcal{C}, and this is our differential equation.
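To make this concrete (a spelled-out special case of the construction above): take X = \mathbb{R}^{1+d}, \mathcal{C} = \mathbb{R}\times X, and write coordinates on J^2\mathcal{C} as (x, u, u_\mu, u_{\mu\nu}). With m^{\mu\nu} the inverse Minkowski metric, the function F = m^{\mu\nu} u_{\mu\nu} has constant rank 1, and the linear wave equation \Box\phi = 0 is the fibred submanifold

\displaystyle R^2 = \tilde{F}^{-1}(0) = \{ (x, u, u_\mu, u_{\mu\nu}) \in J^2\mathcal{C} : m^{\mu\nu} u_{\mu\nu} = 0 \}

and a field \phi solves the PDE exactly when j^2\phi \subset R^2.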

Gauge invariance
In this framework, the gauge invariance of a partial differential equation relative to a certain symmetry group can be captured by requiring that R^r be an invariant submanifold.

More precisely, we take

Definition
A symmetry/gauge group \mathcal{G} is a subgroup of \mathrm{Diff}(\mathcal{C}), with the property that for any g\in\mathcal{G}, there exists a g'\in \mathrm{Diff}(X) with p\circ g = g' \circ p.

It is important that we are looking at the diffeomorphism group of \mathcal{C}, not of J^r\mathcal{C}. In general, diffeomorphisms of J^r\mathcal{C} will not preserve holonomic sections (those of the form j^r\phi), a condition that is essential for solving PDEs. The condition that the symmetry operation “commutes with projections” ensures that g acts on sections, g:\Gamma\mathcal{C}\to\Gamma\mathcal{C}, which in particular guarantees that g extends to a diffeomorphism of J^r\mathcal{C} with itself that commutes with projections.

From this point of view, a (system of) partial differential equation(s) R^r is said to be \mathcal{G}-invariant if for every g\in\mathcal{G}, we have g(R^r) \subset R^r.

We give two examples showing that this description agrees with the classical notions.

Gauge theory. In classical gauge theories, the configuration space \mathcal{C} is a fibre bundle with structure group G, which acts on the fibres. A section of G\times X \to X induces a diffeomorphism of \mathcal{C} by fibre-wise action. In fact, the gauge symmetry is a fibre bundle morphism (it fixes the base points).

General relativity. In general relativity, the configuration space is the space of Lorentzian metrics. So the background manifold is the space-time X. And the configuration space is the open submanifold of S^2T^*X given by non-degenerate symmetric bilinear forms with signature (-+++). A diffeomorphism \Psi:X\to X induces T^*\Psi = (\Psi^{-1})^*: T^*X \to T^*X and hence a configuration space diffeomorphism that commutes with projection. It is in this sense that Einstein’s equations are diffeomorphism invariant.

Notice, of course, that this formulation does not contain the “physical” distinction between global and local gauge transformations. For example, for a linear PDE (so \mathcal{C} is a vector bundle and R^r is closed under linear operations), the trivial “global scaling” of a solution counts in this framework as a gauge symmetry, though it is generally ignored in physics.

Decay of waves IIIb: tails for homogeneous linear equation on curved background

Now we will actually show that the specific decay properties of the linear wave equation on Minkowski space (in particular the strong Huygens’ principle) are very strongly tied to the global geometry of that space-time. In particular, we’ll build, by hand, an example of a space-time where the geometry itself induces back-scattering, so that even linear, homogeneous waves exhibit a tail.

For convenience, the space-time we construct will be spherically symmetric, and we will only consider spherically symmetric solutions of the wave equation on it. We will also focus on the 1+3 dimensional case.

Decay of waves IIIa: nonlinear tails in Minkowski space redux

Before we move on to the geometric case, I want to flesh out the nonlinear case mentioned at the end of the last post a bit more. Recall that it was shown for generic nonlinear (actually semilinear; for quasilinear and worse equations we cannot use Duhamel’s principle) wave equations, if we put in compactly supported initial data, we expect the first iterate to exhibit a tail. One may ask whether this is in fact an artifact of the successive approximation scheme: perhaps a conspiracy transpires and the higher order iterates cancel out the tail coming from the first iterate. This is rather unlikely, owing to the fact that the convergence to \phi_\infty is dominated by a geometric series. But to make double sure, here we give a nonlinear system of wave equations for which the successive approximation scheme converges after finitely many steps (in fact, after the first iterate), so we can also explicitly compute the decay rate of the nonlinear tail. While this decay rate is not claimed to be generic (though it is), the existence of one such example with a fixed decay rate shows that, for a statement quantifying over all nonlinear wave equations, it would be impossible to demonstrate a better decay rate than the one exhibited.

Decay of waves IIb: Minkowski space, with right-hand side

In the first half of this second part of the series, we considered solutions to the linear, homogeneous wave equation on flat Minkowski space, and showed that for compactly supported initial data, we have strong Huygens’ principle. We further made references to the fact that this behaviour is expected to be unstable. In this post, we will further illustrate this instability by looking at Equation 1 first with a fixed source F = F(t,x), and then with a nonlinearity F = F(t,x, \phi, \partial\phi).

Duhamel’s Principle

To study how one can incorporate inhomogeneous terms into a linear equation, and to get a qualitative grasp of how the source term contributes to the solution, we need to discuss the abstract method known as Duhamel’s Principle. We start by illustrating this for a very simple ordinary differential equation.

Consider the ODE satisfied by a scalar function \alpha:

Equation 13
\displaystyle \frac{d}{ds}\alpha(s) = k(s)\alpha(s) + \beta(s)

When \beta\equiv 0, we can easily solve the equation with an integrating factor:

\displaystyle \alpha(s) = \alpha(0) e^{\int_0^s k(t) dt}

Using this as a sort of ansatz, we can solve the inhomogeneous equation as follows. For convenience we denote by K(s) = \int_0^s k(t) dt the anti-derivative of k. Then, multiplying Equation 13 through by e^{-K(s)}, we have that

Equation 14
\displaystyle \frac{d}{ds} \left( e^{-K(s)}\alpha(s)\right) = e^{-K(s)}\beta(s)

which we solve by integrating

Equation 15
\displaystyle \alpha(s) = e^{K(s)}\alpha(0) + e^{K(s)} \int_0^s e^{-K(t)}\beta(t) dt

If we write K(s;t) = \int_t^s k(u) du, then we can rewrite Equation 15 as given by an integral operator

Equation 15′
\displaystyle \alpha(s) = e^{K(s)}\alpha(0) + \int_0^s e^{K(s;t)}\beta(t) dt
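As a quick numerical sanity check of Equation 15′ (a sketch in Julia; the choices k \equiv -1, \beta = \sin, \alpha(0) = 1 are arbitrary illustrations):

# Compare a forward-Euler solution of Equation 13 against the Duhamel
# formula of Equation 15′, for k(s) = -1, β(s) = sin(s), α(0) = 1.
k(s) = -1.0
β(s) = sin(s)

function euler(s_end; N = 100_000)           # direct integration of α' = kα + β
  h, α = s_end / N, 1.0
  for i in 0:N-1
    s = i * h
    α += h * (k(s) * α + β(s))
  end
  return α
end

function duhamel(s_end; N = 100_000)         # Equation 15′ with trapezoidal quadrature
  K(s, t) = t - s                            # ∫ₜˢ k(u) du when k ≡ -1
  h  = s_end / N
  ts = range(0, s_end, length = N + 1)
  vals = [exp(K(s_end, t)) * β(t) for t in ts]
  return exp(-s_end) * 1.0 + h * (sum(vals) - (vals[1] + vals[end]) / 2)
end

isapprox(euler(2.0), duhamel(2.0); rtol = 1e-3)   # true, up to Euler's O(h) error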
