Bubbles Bad; Ripples Good

… Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire et vice versa … (Given an equation involving any number of flowing quantities, to find the fluxions, and vice versa.)

Category: Life of a mathematician

Abusing JabRef to manage snippets of TeX

I use JabRef as my reference manager. In this post, however, I will discuss how we can abuse it to do some other things.

The problem

Let’s start with a concrete example: I keep a “lab notebook”. It is where I document all the miscellaneous thoughts and computations that come up during my research. Some of those are immediately useful and are collected into papers for publication. Some are not, and I prefer to keep them for future reference. These computations range over many different subjects. Every now and then, I want to share with a collaborator or a student some subset of these notes. So I want a way to quickly search (by keywords/abstract) for relevant notes, and then compile them into one large LaTeX document.

Another concrete example: I am starting to collect a bunch of examples and exercises in analysis for use in my various classes. Again, I want to have them organized for easy search and retrieval, especially to make into exercise sheets.

The JabRef solution

The “correct” way to do this is probably with a database (or a document store), with each document tagged with a list of keywords. But that requires a bit more programming than I want to worry about at the moment.

JabRef, as it turns out, is sort of a metadata database: by defining a customized entry type you can use the BibTeX syntax as a proxy for JSON-style data. So for my lab notebook example, I define a custom type lnbentry in JabRef with

  • Required fields: year, month, day, title, file
  • Optional fields: keywords, abstract

I store each lab notebook entry as an individual TeX file, whose file system address is stored in the file field. The remaining metadata fields’ contents are self-evident.
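
For illustration, a hypothetical lnbentry record in the bib database might look like the following (every field value here is made up):

@lnbentry{lnb20160412,
  year = {2016},
  month = {4},
  day = {12},
  title = {Pointwise decay for the linear wave equation},
  file = {/home/me/labnotes/2016/decay-linear-wave.tex},
  keywords = {wave equation, decay estimates},
  abstract = {A quick computation of pointwise decay rates via the energy method.}
}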

(Technical note: in my case I actually store the metadata in the TeX file and have a script to parse the TeX files and update the bib database accordingly.)
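
A minimal sketch, in Julia, of what such a parsing step could look like (the comment-header format inside the TeX files is an assumption of this illustration, as is the helper name tex_to_lnbentry; my actual script differs):

# Sketch: scan a TeX file for metadata comments of the (assumed) form
#   % title: Pointwise decay for the linear wave equation
#   % keywords: wave equation, decay estimates
# and emit the matching lnbentry record for the bib database.
function tex_to_lnbentry(texfile)
  meta = Dict()
  for line in eachline(open(texfile))
    m = match(r"^%\s*(\w+):\s*(.*)$", line)
    if m != nothing
      meta[m.captures[1]] = strip(m.captures[2])
    end
  end
  entry = "@lnbentry{" * splitext(basename(texfile))[1] * ",\n"
  entry *= "  file = {" * texfile * "},\n"
  for (k,v) in meta
    entry *= "  " * k * " = {" * v * "},\n"
  end
  entry * "}\n"
end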

For generating output, we can use JabRef’s convenient export filter support. In the simplest case we can create a custom export layout whose main layout file contains the single line

\\input{\file}

with appropriate begin and end incantations to make the output a fully-formed TeX file. Then one can simply select the entries to be exported, click on “Export”, and generate the appropriate TeX file on the fly.

(Technical note: JabRef can also be run without a GUI. So one can use this to perform searches through the database on the command line.)
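
For instance, assuming the custom export filter above has been registered under the name lnbexport, something along these lines produces the TeX file from the command line (the exact flags vary between JabRef versions, so check the --help output):

java -jar JabRef.jar --nogui --output notes.tex,lnbexport labnotebook.bib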

Simulating closed cosmic strings (or yet another adventure in Julia)

One of my current research interests is in the geometry of constant-mean-curvature time-like submanifolds of Minkowski space. A special case of this is the study of cosmic strings under the Nambu-Goto action. Mathematically, the classical (as opposed to quantum) behavior of such strings is quite well understood, by combining the works of theoretical physicists since the 80s (especially those of Jens Hoppe and collaborators) with recent mathematical developments (such as the work of Nguyen and Tian and that of Jerrard, Novaga, and Orlandi). To get a better handle on what is going on with these results, and especially to provide some visual aids, I’ve tried to produce some simulations of the dynamics of closed cosmic strings. (This was also an opportunity for me to practice code-writing in Julia and to learn a bit about the best practices for performance and optimization in that language.)

The code

After some false starts, here is some reasonably stable code.

function MC_corr3!(position::Array{Float64,2}, prev_vel::Array{Float64,2}, next_vel::Array{Float64,2}, result::Array{Float64,2}, dt::Float64)
  # We will assume that data is stored in the format point(coord,number), so as a 3x1000 array or something. 
  num_points = size(position,2)
  num_dims = size(position,1)

  curr_vel = zeros(num_dims)
  curr_vs = zeros(num_dims)
  curr_ps = zeros(num_dims)
  curr_pss = zeros(num_dims)

  pred_vel = zeros(num_dims)
  agreement = true

  for col = 1:num_points  #Outer loop is column
    if col == 1
      prev_col = num_points
      next_col = 2
    elseif col == num_points
      prev_col = num_points - 1
      next_col = 1
    else
      prev_col = col -1
      next_col = col + 1
    end

    for row = 1:num_dims
      curr_vel[row] = (next_vel[row,col] + prev_vel[row,col])/2
      curr_vs[row] = (next_vel[row,next_col] + prev_vel[row,next_col] - next_vel[row,prev_col] - prev_vel[row,prev_col])/4
      curr_ps[row] = (position[row,next_col] - position[row,prev_col])/2
      curr_pss[row] = position[row,next_col] + position[row,prev_col] - 2*position[row,col]
    end

    beta = (1 + dot(curr_vel,curr_vel))^(1/2)
    sigma = dot(curr_ps,curr_ps)
    psvs = dot(curr_ps,curr_vs)
    bvvs = dot(curr_vs,curr_vel) / (beta^2)
    pssps = dot(curr_pss,curr_ps)

    for row in 1:num_dims
      result[row,col] = curr_pss[row] / (sigma * beta) - curr_ps[row] * pssps / (sigma^2 * beta) - curr_vel[row] * psvs / (sigma * beta) - curr_ps[row] * bvvs / (sigma * beta)
      pred_vel[row] = prev_vel[row,col] + dt * result[row,col]
    end

    agreement = agreement && isapprox(next_vel[:,col], pred_vel, rtol=sqrt(eps(Float64)))
  end

  return agreement
end


function find_next_vel!(position::Array{Float64,2}, prev_vel::Array{Float64,2}, next_vel::Array{Float64,2}, dt::Float64; max_tries::Int64=50)
  tries = 1
  result = zeros(next_vel)
  agreement = MC_corr3!(position,prev_vel,next_vel,result,dt)
  for j in 1:size(next_vel,2), i in 1:size(next_vel,1)
    next_vel[i,j] = prev_vel[i,j] + result[i,j]*dt
  end
  while !agreement && tries < max_tries
    agreement = MC_corr3!(position,prev_vel,next_vel,result,dt)
    for j in 1:size(next_vel,2), i in 1:size(next_vel,1)
      next_vel[i,j] = prev_vel[i,j] + result[i,j]*dt
    end
    tries +=1
  end
  return tries, agreement
end

This first file does the heavy lifting of solving the evolution equation. The scheme is a semi-implicit finite difference scheme. The function MC_corr3! takes as input the current position, the previous velocity, and the next velocity, and computes the corresponding current acceleration. The function find_next_vel! iterates MC_corr3! until the computed acceleration agrees (up to numerical error) with the input previous and next velocities.

Or, in notation:
MC_corr3!: ( x[t], v[t-1], v[t+1] ) --> Delta-v[t]
and find_next_vel! iterates MC_corr3! until
Delta-v[t] == (v[t+1] - v[t-1]) / 2

The code in this file is also where performance matters the most, and I spent quite some time experimenting with different algorithms to find one with the most reasonable speed.
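
To spell the iteration out: writing v^{(k)} for the k-th guess of the next velocity, each pass of find_next_vel! recomputes the acceleration from the latest guess and updates

\displaystyle v^{(k+1)} = v[t-1] + \Delta t \, \mathrm{MC\_corr3!}\big(x[t],\, v[t-1],\, v^{(k)}\big),

stopping once successive guesses agree to within the relative tolerance \sqrt{\texttt{eps}} used in the isapprox call, or after max_tries passes.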

function make_ellipse(a::Float64,b::Float64, n::Int64, extra_dims::Int64=1)  # a,b are relative lengths of x and y axes
  s = linspace(0,2π * (n-1)/n, n)
  if extra_dims == 0
    return vcat(transpose(a*cos(s)), transpose(b*sin(s)))
  elseif extra_dims > 0
    return vcat(transpose(a*cos(s)), transpose(b*sin(s)), zeros(extra_dims,n))
  else
    error("extra_dims must be non-negative")
  end
end

function perturb_data!(data::Array{Float64,2}, coeff::Vector{Float64}, num_modes::Int64) 
  # num_modes is the number of modes
  # coeff are the relative sizes of the perturbations

  numpts = size(data,2)

  for j in 2:num_modes
    rcoeff = rand(length(coeff),2)

    for pt in 1:numpts
      theta = 2j * π * pt / numpts
      for d in 1:length(coeff)
        data[d,pt] += ( (rcoeff[d,1] - 0.5) *  cos(theta) + (rcoeff[d,2] - 0.5) * sin(theta)) * coeff[d] / j^2
      end
    end
  end

  nothing
end

This file just sets up the initial data. Note that in principle the number of ambient spatial dimensions is arbitrary.
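
For instance, a perturbed ellipse sitting in 3 ambient spatial dimensions can be produced with (the parameter values are purely illustrative):

my_data = make_ellipse(1.0, 0.8, 2000, 1)    # 2000 points on an ellipse in the x-y plane, plus one extra z coordinate
perturb_data!(my_data, [0.1, 0.1, 0.2], 10)  # perturb all three coordinates with modes 2 through 10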

using Plots

pyplot(size=(1920,1080), reuse=true)

function plot_data2D(filename_prefix::ASCIIString, filename_offset::Int64, titlestring::ASCIIString, data::Array{Float64,2}, additional_data...)
  x_max = 1.5
  y_max = 1.5
  plot(transpose(data)[:,1], transpose(data)[:,2] , xlims=(-x_max,x_max), ylims=(-y_max,y_max), title=titlestring)
  
  if length(additional_data) > 0
    for i in 1:length(additional_data)
      plot!(transpose(additional_data[i][1,:]), transpose(additional_data[i][2,:]))
    end
  end
  
  png(filename_prefix*dec(filename_offset,5)*".png")
  nothing
end

function plot_data3D(filename_prefix::ASCIIString, filename_offset::Int64, titlestring::ASCIIString, data::Array{Float64,2}, additional_data...)
  x_max = 1.5
  y_max = 1.5
  z_max = 0.9
  tdata = transpose(data)
  plot(tdata[:,1], tdata[:,2],tdata[:,3], xlims=(-x_max,x_max), ylims=(-y_max,y_max),zlims=(-z_max,z_max), title=titlestring)

  if length(additional_data) > 0
    for i in 1:length(additional_data)
      tdata = transpose(additional_data[i])
      plot!(tdata[:,1], tdata[:,2], tdata[:,3]) 
    end
  end

  png(filename_prefix*dec(filename_offset,5)*".png")
  nothing
end

This file provides some wrapper commands for generating the plots.

include("InitialData3.jl")
include("MeanCurvature3.jl")
include("GraphCode3.jl")

num_pts = 3000
default_timestep = 0.01 / num_pts
max_time = 3
plot_every_ts = 1500

my_data = make_ellipse(1.0,1.0,num_pts,0)
perturb_data!(my_data, [1.0,1.0], 15)
this_vel = zeros(my_data)
next_vel = zeros(my_data)

for t = 0:floor(Int64,max_time / default_timestep)
  num_tries, agreement = find_next_vel!(my_data, this_vel,next_vel,default_timestep)

  if !agreement
    warn("Time $(t*default_timestep): failed to converge when finding next_vel.")
    warn("Dumping information:")
    max_beta = 1.0
    max_col = 1
    for col in 1:size(my_data,2)
      beta = (1 + dot(next_vel[:,col], next_vel[:,col]))^(1/2)
      if beta > max_beta
        max_beta = beta
        max_col = col
      end
    end
    warn("   Beta attains maximum at position $max_col")
    warn("   Beta = $max_beta")
    warn("   Position = ", my_data[:,max_col])
    prevcol = max_col - 1
    nextcol = max_col + 1
    if max_col == 1
      prevcol = size(my_data,2)
    elseif max_col == size(my_data,2)
      nextcol = 1
    end
    warn("   Deltas")
    warn("    Left:  ", my_data[:,max_col] - my_data[:,prevcol])
    warn("    Right:  ", my_data[:,nextcol] - my_data[:,max_col])
    warn("   Previous velocity: ", this_vel[:,max_col])
    warn("   Putative next velocity: ", next_vel[:,max_col])
    warn("Quitting...")
    break
  end

  for col in 1:size(my_data,2)
    beta = (1 + dot(next_vel[:,col], next_vel[:,col]))^(1/2)
    for row in 1:size(my_data,1)
      my_data[row,col] += next_vel[row,col] * default_timestep / beta
      this_vel[row,col] = next_vel[row,col]
    end

    if beta > 1e7
      warn("time: ", t * default_timestep)
      warn("Almost null... beta = ", beta)
      warn("current position = ", my_data[:,col])
      warn("current Deltas")
      prevcol = col - 1
      nextcol = col + 1
      if col == 1
        prevcol = size(my_data,2)
      elseif col == size(my_data,2)
        nextcol = 1
      end
      warn(" Left: ", my_data[:,col] - my_data[:,prevcol])
      warn(" Right: ", my_data[:,nextcol] - my_data[:,col])
    end
  end

  if t % plot_every_ts == 0
    plot_data2D("3Dtest", div(t,plot_every_ts), @sprintf("elapsed: %0.4f",t*default_timestep), my_data, make_ellipse(cos(t*default_timestep), cos(t*default_timestep),100,0))
    info("Frame $(div(t,plot_every_ts)): used $num_tries tries.")
  end
end

And finally the main file. Mostly it just ties the other files together to produce the plots using the simulation code; there are some diagnostics included for me to keep an eye on the output.

The results

The first thing to do is to run a sanity check against explicit solutions. Under rotational symmetry, the solution to the cosmic string equations can be found analytically. As you can see below, the simulation closely replicates the explicit solution in this case.

The video ends when the simulation stops; the simulation stops because a singularity has formed. In this video the singularity can be seen as the collapse of the string to a single point.

Next we can play around with a more complicated initial configuration.

In this video the blue curve is the closed cosmic string, which starts out as a random perturbation of the circle with zero initial speed. The string contracts with acceleration determined by the Nambu-Goto action. The simulation ends when a singularity has formed. It is perhaps a bit hard to see directly where the singularity happened. The diagnostic messages, however, help in this regard. From it we know that the onset of singularity can be seen in the final frame:

[Image: final frame of the simulation, 3Dtest00268.png]

The highlighted region is getting quite pointy. In fact, this is accompanied by the “corner” picking up infinite acceleration (in other words, experiencing an infinite force). The mathematical singularity corresponds to something unreasonable happening in the physics.

To make it easier to see the “speed” at which the curve is moving, the following videos show the string along with its “trail”. This first one again shows how a singularity can happen as the curve gets gradually more bent, eventually forming a corner.

This next one does a good job emphasizing the “wave” nature of the motion.

The closed cosmic strings behave like an elastic band. The string, overall, wants to contract to a point. Small undulations along the string, however, are propagated like traveling waves. Both of these tendencies can be seen quite clearly in the above video. That the numerical solver can solve “past” the singular point is a happy accident; while theoretically the solutions can in fact be analytically continued past the singular points, the renormalization process involved in this continuation is numerically unstable, and we shouldn’t be able to see it on the computer most of the time.

The next video also emphasizes the wave nature of the motion. In addition to the traveling waves, pay attention to the bottom left of the video. Initially the string is almost straight there. This total lack of curvature is a stationary configuration for the string, and so initially there is absolutely no acceleration of that segment of the string. The curvature from the left and right of that segment slowly intrudes on the quiescent piece until the whole thing starts moving.

The last video for this post is a simulation where the ambient space is 3 dimensional. The motion of the string, as you can see, becomes somewhat more complicated. When the ambient space is 2 dimensional, a point either accelerates or decelerates based on the local (signed) curvature of the string. But when the ambient space is 3 dimensional, the curvature is now a vector, and this additional degree of freedom introduces complications into the behavior. For example, when the ambient space is 2 dimensional it is known that all closed cosmic strings become singular in finite time. But in 3 dimensions there are many closed cosmic strings that vibrate in place without ever becoming singular. The video below, however, shows one that does become singular. In addition to a fading trail to help visualize the speed of the curve, this plot also includes the shadows: projections of the curve onto the three coordinate planes.

Adventures in Julia

Recently I have been playing around with the Julia programming language as a way to run some cheap simulations for some geometric analysis stuff that I am working on. So far the experience has been awesome.

A few random things …

Juno

Julia has a decent IDE in JunoLab, which is built on top of Atom. In terms of functionality it covers most of what I used to rely on Spyder for in Python, so it is very convenient.

Jupyter

Julia interfaces with Jupyter notebooks through the IJulia kernel. I am a fan of Jupyter (I will be using it with the MATLAB kernel for a class I am teaching this fall).
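
Setting it up is the usual package dance; a sketch (IJulia takes care of registering the kernel with Jupyter):

Pkg.add("IJulia")   # install the kernel and register it with Jupyter
using IJulia
notebook()          # launch the notebook server from within Julia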

Plots.jl

For plotting, right now one of the most convenient ways is through Plots.jl, a plotting front-end that bridges between your code and various backends which can be swapped in and out almost on the fly. The actual plotting is powered by things like matplotlib or PlotlyJS, but for the most part you can ignore the backend. This drastically simplifies the production of visualizations. (At least compared to what I remember from my previous simulations in Python.)
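
A minimal sketch of the backend swapping (this assumes both the PyPlot and PlotlyJS backend packages are installed):

using Plots

x = linspace(0, 2π, 100)

pyplot()                                 # render through matplotlib
plot(x, sin(x), title="PyPlot backend")

plotlyjs()                               # the same plot call, now rendered by PlotlyJS
plot(x, sin(x), title="PlotlyJS backend")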

Automatic Differentiation

I just learned very recently about automatic differentiation. At some cost in running time, it can very much simplify the writing of my scripts. For example, we can build a black-box root finder using Newton iteration that does not require pre-computing the Jacobian by hand:

module NewtonIteration
using ForwardDiff

export RootFind

function RootFind(f, init_guess::Vector, accuracy::Float64, cache::ForwardDiffCache; method="Newton", max_iters=100, chunk_size=0)
  ### Takes input function f(x::Vector) → y::Vector of the same dimension and an initial guess init_guess. Apply Newton iteration to find solution of f(x) = 0. Stop when accuracy is better than prescribed, or when max_iters is reached, at which point a warning is raised.
  ### Setting chunk_size=0 deactivates chunking. But for large dimensional functions, chunk_size=5 or 10 improves performance drastically. Note that chunk_size must evenly divide the dimension of the input vector.
  ### Available methods are Newton or Chord

  # First check if we are already within the accuracy bounds
  error_term = f(init_guess)
  if norm(error_term) < accuracy
    info("Initial guess accurate.")
    return init_guess
  end

  # Different solution methods
  i = 1
  current_guess = init_guess
  if method=="Chord"
    df = jacobian(f,current_guess,chunk_size=chunk_size)
    while norm(error_term) >= accuracy && i <= max_iters
      current_guess -= df \ error_term
      error_term = f(current_guess)
      i += 1
    end
  elseif method=="Newton"
    jake = jacobian(f, ForwardDiff.AllResults, chunk_size=chunk_size, cache=cache)
    df, lower_order = jake(init_guess)
    while norm(value(lower_order)) >= accuracy && i <= max_iters
      current_guess -= df \ value(lower_order)
      df, lower_order = jake(current_guess)
      i += 1
    end
    error_term = value(lower_order)
  else
    warn("Unknown method: ", method, ", returning initial guess.")
    return init_guess
  end

  # Check if converged
  if norm(error_term) >= accuracy
    warn("Did not converge, check initial guess or try increasing max_iters (currently: ", max_iters, ").")
  end
  info("Used ", i, " iterations; remaining error=", norm(error_term))
  return current_guess
end

end

This can then be wrapped in finite difference code for solving nonlinear PDEs!
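
A usage sketch against the old ForwardDiff API that the module above targets (in particular, the zero-argument ForwardDiffCache constructor is my recollection of that era’s interface; treat it as an assumption):

include("NewtonIteration.jl")
using ForwardDiff
using NewtonIteration

# Solve x^2 = 2 and y^3 = 5 starting from the guess (1,1)
f(x::Vector) = [x[1]^2 - 2.0, x[2]^3 - 5.0]
cache = ForwardDiff.ForwardDiffCache()   # assumed old-API cache constructor
root = RootFind(f, [1.0, 1.0], 1e-10, cache, method="Newton")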

LaTeX runtime for NeoVim

I’ve recently migrated to using NeoVim instead of traditional Vim. One of the nice features of NeoVim (or nvim) is that it supports asynchronous job dispatch. This makes it a bit nicer to call external previewers, for instance (otherwise the previewer may block the editor). So here is the latest LaTeX runtime code that I use, modified for NeoVim.

function Dvipreview()
	let dviviewjob = jobstart(['xdvi', '-sourceposition', line(".")." ".expand("%"),  expand("%:r") . ".dvi"])
endfunction

function PDFpreview()
	let pdfviewjob = jobstart(['evince', expand("%:r") . ".pdf"])
endfunction

au BufRead *.tex call LaTeXStartup()

function LaTeXStartup()
	set dictionary+=~/.config/nvim/custom/latextmp/labelsdictionary
	set iskeyword=@,48-57,_,:
	call SimpleTexFold()
	set completefunc=CompleteBib
	set completeopt=menuone,preview
	runtime custom/latextmp/bibdictionary
	call SetShortCuts()
endfunction

function SimpleTexFold()
	exe "normal mz"
	1
	set foldmethod=manual
	if search('\\begin{document}','nW') 
		1,/\\begin{document}/-1fold
		if search('\\section','nW')
			/\\section/1
		endif
		while search('\\section','nW')
			.,/\\section/-1fold
			/\\section/1
		endwhile
		.,$fold
	endif
	if search('\\begin{entry}','nW')
		/\\begin{entry}/1
		while search('\\begin{entry}','nW')
			.,/\\begin{entry}/-1fold
			/\\begin{entry}/1
		endwhile
		.,$fold
	endif
	exe "normal g`zzv"
endfunction

function SetShortCuts()
	" Map <F2> to save and compile
        imap <F2> ^[:w^M:! latex -src-specials % >/dev/null^M^Mi
        " Map S-<F2> to save and compile as PDF 
        " apparently <S-F2> sends the same keycode as <F12>?
        imap <F12> ^[:w^M:! pdflatex % >/dev/null^M^Mi
        " Map <F3> to Dvipreview()
        imap <F3> ^[:call Dvipreview()^M
        " Map S-<F3> to PDFpreview()
        " apparently <S-F3> = <F13>
        imap <F13> ^[:call PDFpreview()^M
        " Map <F4> to bibtex
        imap <F4> ^[:! bibtex "%:r" >/dev/null^M^Mi
        " Map <F5> to change the previous word into a latex \begin .. \end environment
        imap <F5> ^[diwi\begin{^[pi<Right>}^M^M\end{^[pi<Right>}<Up>
        " Map <F6> to 'escape the current \begin .. \end environment
        imap <F6> ^[/\\end{.*}/e^Mi<Right>
        " Map <F7> to search the labels dictionary for matching labels
        imap <F7> ^[diwi\ref{^[pi<Right>^X^K
        " Map <F8> to rebuild the labels dictionary
        imap <F8> ^[:w^M:! ~/.config/nvim/custom/latexreadlabels.sh %^M^Mi
	" Map <F9> to search using the bibs dictionary
        imap <F9> ^[diwi\cite{^[pi<Right>^X^U
        imap <S-Tab>C ^[diwi\mathcal{^[pi<Right>}
        imap <S-Tab>B ^[diwi\mathbb{^[pi<Right>}
        imap <S-Tab>F ^[diwi\mathfrak{^[pi<Right>}
        imap <S-Tab>R ^[diwi\mathrm{^[pi<Right>}
        imap <S-Tab>O ^[diwi\mathop{^[pi<Right>}
        imap <S-Tab>= ^[diWi\bar{^[pi<Right>}
        imap <S-Tab>. ^[diWi\dot{^[pi<Right>}
        imap <S-Tab>" ^[diWi\ddot{^[pi<Right>}
        imap <S-Tab>- ^[diWi\overline{^[pi<Right>}
        imap <S-Tab>^ ^[diWi\widehat{^[pi<Right>}
        imap <S-Tab>~ ^[diWi\widetilde{^[pi<Right>}
        imap <S-Tab>_ ^[diWi\underline{^[pi<Right>}

endfunction

Note that the control characters did not copy-paste correctly in the SetShortCuts() routine: the ^[ and ^M above need to be replaced by the actual control-character sequences (a literal Escape and a literal carriage return, which can be entered in insert mode with Ctrl-V). The read-labels shell script is simply

#!/bin/sh
grep '\\label{' "$1" | sed -r 's/.*\\label\{([^}]*)\}.*/\1/' > ~/.config/nvim/custom/latextmp/labelsdictionary

(I probably should observe the proper directory structure and dump the dictionary into ~/.local/share/ instead.)

So starts the svn to git migration…

For five years now I have been a happy user of svn to manage my research work, and I probably would have remained so if it weren’t for my next job favoring git instead. So in the past few weeks I have been reading up on git, and in the process discovering all sorts of things that I have been doing wrong, or at least sub-optimally. Here are some notes on what I’ve figured out so far (yay slow me!).

Each paper should be a repository

Previously I kept one single giant repository for all my research work. I’ve discovered that this is not the best idea, for multiple reasons:

  • Collaboration: one of the great things about version control systems is that they make collaboration easier to manage. But your collaborators are not a static set, and you probably don’t want them to peek at every one of your research ideas. The easiest way to share individual projects with only those who should be allowed to see and edit them is to have one repo for each paper. (I got away with what I did mostly because I failed to convince any of my collaborators to use a VCS beyond the built-in support in Dropbox.)
  • Organisation: to keep track of papers I have them stored in subdirectories, some of which are “stuff I am working on”, some “stuff that is finished from year X”, and some “stuff that is being refereed”. It is a bit silly that I have to issue svn mv commands to “graduate” a project from one subdirectory to the next. By keeping each paper in its own (git) repository, the local directory representation of the storage is immaterial. This makes much more sense to me.
  • (In)compatibility: here’s something that I changed my mind on. Previously I thought it a great idea to keep a single up-to-date bibtex file containing all the references I could ever need, and a single up-to-date version of my custom LaTeX class and style files. The advantage, of course, is that I just need to issue one svn up to get the newest versions of everything. But the disadvantage is that when upgrading my class and style files, or when updating my bibtex files, I have to maintain backward compatibility. And when I do break compatibility, I am then required to keep a copy of the old versions of the files along with the LaTeX source that uses them, which, when you think about it, completely defeats the purpose of having a single up-to-date version in one repo.

So my new workflow, instead of one giant repository, is to create a repo for each paper/project. My LaTeX class and style files will themselves be a separate git repo, which I can upgrade and develop to my heart’s desire. When I start a new paper I will simply make a copy of the current version of the files (with git archive instead of git clone, because I won’t need the previous versions, nor will I want to track the changes). This also allows me to set up my “development environment” (via .gitattributes and .gitignore) quickly, as in the sketch below.
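
The start-a-new-paper step then looks something like this (all paths and repository names here are hypothetical):

# new repository for the paper
mkdir paper-with-alice && cd paper-with-alice && git init

# drop in a snapshot of the current class/style files, with no history attached
(cd ~/repos/latex-styles && git archive HEAD) | tar -x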

Keyword substitution is not necessary

For the papers I keep in my svn repo, I have been using the svn and svn-multi packages to add time-stamp and versioning information to the PDF files. Both of these packages rely on the “keyword substitution” capability of the svn system at commit time. Naturally, when I wanted to start using git, I looked for a replacement. The obvious one is gitinfo2. One thing I don’t like is that, unlike the keyword replacements, this package does not directly modify the source LaTeX file; instead it creates (via commit and checkout hooks) a supplementary file in the .git/ directory, which it searches for and inserts when building the PDF file. This makes it a bit more of a hassle when uploading stuff to the arXiv, for example.

So I started reading up on how one can actually imitate keyword expansion using commit and checkout filters. And I went so far as to implement something for LaTeX. And then I read the discussion by the kernel devs on this issue, and Linus Torvalds’ comments left an impression on me. In short:

  • When you are working on the code in a git repository, you don’t need this tagging since you can just “ask git”.
  • Conversely, this sort of tagging is only needed when your code is ready to leave the repository (upload to arXiv or sent to non-git-using collaborators, for example).

So philosophically it is much less useful to have something that works on the working copy compared to something that works on an exported archive. And while git, by design, cannot and will not do keyword expansion on commits, it is perfectly happy to do keyword expansion when one exports the repo. Furthermore, since the export substitution can be formatted essentially arbitrarily, this moots the need for something like svn or svn-multi to parse the string generated by the RCS: we can make the string appear how we want to start with. The only hiccup is that before the substitution (i.e. when you are working in the working copy), the syntax for the export substitution is not exactly compatible with LaTeX, and requires a little mucking about with catcodes. But with that problem solved, and with the workflow now accounting for each paper as a separate repository, for arXiv uploads the easiest thing will actually be to simply issue git archive and upload the resulting tarball.
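
Concretely, the machinery is git’s export-subst attribute: a $Format:...$ placeholder (taking git log format codes) in a marked file is expanded when, and only when, the file leaves the repository via git archive. A minimal sketch, with the file name version.tex being my own choice:

# .gitattributes: mark the file for export-time keyword expansion
version.tex export-subst

% version.tex: in the working copy the placeholder stays verbatim
% (hence the catcode mucking mentioned above); git archive expands it
\newcommand\repoversion{$Format:%h of %ci$}

$ git archive -o arxiv-upload.tar.gz HEAD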

So I will be living in Michigan for the next few years…

This year I have had a little bit of success with my job search. In the end I sent in around 45 applications in total (thank goodness for MathJobs, though a few applications were dealt with differently), from which I got 5 interviews, which resulted in 2 offers, 1 rejection, and 2 “we cannot solve your two-body problem so let’s not bother going through the motions”.

Starting mid August I will be affiliated with Michigan State University in East Lansing, Michigan.

All in all, a pretty stressful 6 months (job season starts around mid September, and my decision was sent in mid March), and now a slightly less stressful few months of preparing an international move from Switzerland to the US.

I’ve noticed that it is somewhat fashionable nowadays for early career individuals who have found jobs to post their application material on their websites, to serve as samples for graduate students and new PhDs who are on the market. I think it is a pretty good idea. This was my fourth time applying for jobs. (Last year I applied also, despite there still being time left on my contract here; in the current market that is something I would recommend.) And looking back on the previous research statements I have written, the ones I wrote for my first two rounds of applications really weren’t that great: they were both too narrowly focussed and too technical. Of course, a chunk of this has to do with my maturing as a professional mathematician. But some blame must be put on the lack of “models” against which I could compare my writing. Similarly, until this year my approach to the teaching statement had been on the naive and flippant side, which again partly has to do with me not knowing any better. (And this is to say nothing of the excuse of a cover letter I wrote 6 years ago.)

So below I share my research and teaching statements from this year (no, you don’t get to see the awful stuff I wrote in the years prior). If you find them in any way helpful, don’t hesitate to let me know.

2014 Research Statement (please note that this was written around September/October of 2014; the field in which I work has already seen some very interesting developments since then, so don’t treat this in any way as an up-to-date survey!)

2014 Teaching Statement

Using Prezi

I was introduced to the use of Prezi for presentations (I know, I am slightly behind the times) through a MOOC I am taking, and so for my recent trip to Cambridge I decided to give it a try. The audience seemed to like it, and I felt that in this particular instance the use of Prezi vastly improved my talk. On the other hand, I don’t think I will do every one of my future presentations as Prezis: that presentation style is suitable for some topics, but traditional methods (be it slides or board talks) do shine for others.

Automated annotation in LaTeX using OCG

This is written mostly in response to Qiaochu’s Google+ post. The problem is: is there a way to make easy “cross reference previews”? If so, we can just write


\begin{theorem}
Assuming Assumptions \ref{ass1} and \ref{ass2}, we have blah
\end{theorem}

in our document, and on mouse-over we can see what the assumptions are. Or, as another example,


due to \eqref{eq3}, our equation \eqref{eq4} above implies
\[ E = mc^2\]

and mousing over the two generated equation references will show the equations, without us having to necessarily flip back to the page involved.

As it turns out, there is such a utility available: fancy-preview. It uses the fancytooltips package for LaTeX to generate PDF tooltips, where the tooltips are clipped from an external PDF file. The upside of using fancy-preview is that the results are very pretty. The downside is that it is extremely non-portable: while I am not certain, it seems to require the PDF viewer to have JavaScript support, and on Linux the only PDF viewer that stands a chance of working with these tooltips is the official Adobe Acrobat Reader. If you are a Linux user you’ll know why I uttered that last phrase with disgust. (Though rumour has it that both evince and okular may eventually support the type of operations required for fancy-preview, the fact is that right now these tooltips are not very useful for Linux users.)

Instead of tooltips, however, one can get a similar effect by using the Optional Content Group (OCG) feature of the PDF specification. OCG basically allows the visibility of certain elements to be changed by user interaction, and the ocg-p package implements easy access to the OCG layers. However, the ocg-p package alone is not enough: the way it detects user interaction still doesn’t work in evince (I haven’t tested other PDF viewers). Luckily, there is the ocgx package, which builds on the ocg-p package and does not use JavaScript. The code has been tested to run on Acrobat Reader, Foxit, Evince, and others, so I know at least one Linux reader is capable of using this content.

Now, OCG handling by itself only allows us to toggle the visibility of elements. We still need a way to actually present the annotations in the PDF file. Here’s where it gets tricky: the annotations should be invisible by default, and be shown when certain triggers are clicked. So the idea is to typeset the annotations as zero-size objects, offset properly so that they do not obscure the trigger button. The code I am about to show you below does this… to a certain extent. There are some bugs that I have not yet been able to iron out:

  • In the function \annotatetext, if the text starts too near a line-breaking boundary we sometimes get strange behaviours, one of which is that after every compilation pdflatex tells us to re-run since the labels may have changed.
  • I originally intended to also build height-detection code so that, if there is not enough space above the line, we can try to typeset the annotation below the trigger. However, we run into one of the main limitations of the OCG method: the order of setting the text is the order of the layers, at least with the current version of ocg-p. So the annotation text, when shown, will overwrite text that precedes it in the TeX file, but will be overwritten by text that follows; below-the-line annotations thus become illegible. (Any suggestions on how I can fix this will be welcome!)

Note also that the code below is not “production” code and has only seen limited testing, so use it at your own peril. (I should note here that to make use of the utils, it is necessary to compile the file using some version of pdfTeX.)

%%%%% Annotation utils %%%%%
% The goal is to provide a clickable
% tool-tip like interface to show cross reference information. 
% This duplicates the function of the 'fancy-preview' and
% 'fancytooltips' packages, and does not support mouse-over events,
% but has the advantage of working in evince also. 

% We need some packages
\usepackage[usenames,dvipsnames]{color}
\usepackage{zref-savepos}
\usepackage{xifthen}
\usepackage{ocgx}
\usepackage{xspace}
% the package ocgx also depends on ocg-p, I think.

% Some configuration stuff
% Default colours, see
% http://en.wikibooks.org/wiki/LaTeX/Colors
\newcommand*\annotatetextcolour{OliveGreen} % For the inline text
\newcommand*\annotateboxbordercolour{Dandelion} % For the annotate box
\newcommand*\annotateboxbkgdcolour{Goldenrod} % Ditto
\newcommand*\annotateboxtextcolour{Black} % Ditto
\newcommand*\annotateboxtextfont{\small} % Other font configuration stuff
\newcommand*\annotateheremarktext{note} % Default text for \annotatehere 
\newcommand*\annotatewithmarkmark{$\Uparrow$} % Default mark for \annotatewithmark

\makeatletter
% Dummy variables
\newcounter{@anntposmark}
\newlength\@annt@oldfboxsep

% Usage: \annotatetext{text}{annotation}
% Note: there can be some bugs when the text to be annotated starts
% near the end of a line. The other two commands seems to behave
% better, but will still need extensive testing!
\newcommand\annotatetext[2]{%
        \stepcounter{@anntposmark}%
        \zsavepos{@annt@pos\the@anntposmark}%
        \hskip\dimexpr - \zposx{@annt@pos\the@anntposmark}sp + \zposx{@anntleftmargin}sp + 3em%
        \smash{\raisebox{3ex}{\makebox[0pt][l]{\begin{ocg}{Annotation Layer \the@anntposmark}{anntlayer\the@anntposmark}{0}\fcolorbox{\annotateboxbordercolour}{\annotateboxbkgdcolour}{\parbox[b]{\textwidth-6em}{{\annotateboxtextfont\color{\annotateboxtextcolour} #2}}}\end{ocg}}}}%
        \hskip\dimexpr + \zposx{@annt@pos\the@anntposmark}sp - \zposx{@anntleftmargin}sp - 3em%
        \switchocg{anntlayer\the@anntposmark}{{\color{\annotatetextcolour}#1}}%
        \xspace}

% Usage: \annotatehere[note mark text]{annotations}
% The default mark text is set above, the mark text is type-set in a superscript box
\newcommand\annotatehere[2][\annotateheremarktext]{%
        \stepcounter{@anntposmark}%
        \zsavepos{@annt@pos\the@anntposmark}%
        \hskip\dimexpr - \zposx{@annt@pos\the@anntposmark}sp + \zposx{@anntleftmargin}sp + 3em%
        \smash{\raisebox{3.3ex}{\makebox[0pt][l]{\begin{ocg}{Annotation Layer \the@anntposmark}{anntlayer\the@anntposmark}{0}\fcolorbox{\annotateboxbordercolour}{\annotateboxbkgdcolour}{\parbox[b]{\textwidth-6em}{{\annotateboxtextfont\color{\annotateboxtextcolour}#2}}}\end{ocg}}}}%
        \hskip\dimexpr + \zposx{@annt@pos\the@anntposmark}sp - \zposx{@anntleftmargin}sp - 3em%
        \setlength\@annt@oldfboxsep\fboxsep%
        \setlength\fboxsep{1pt}%
        \switchocg{anntlayer\the@anntposmark}{\raisebox{1ex}{\fcolorbox{\annotatetextcolour}{White}{\color{\annotatetextcolour}\tiny \scshape #1}}}%
        \setlength\fboxsep\@annt@oldfboxsep%
        \xspace}

% Usage: \annotatewithmark[mark]{annotations}
% Very similar to \annotatehere, but the mark is unframed, and no conversion made to small caps
\newcommand\annotatewithmark[2][\annotatewithmarkmark]{%
        \stepcounter{@anntposmark}%
        \zsavepos{@annt@pos\the@anntposmark}%
        \hskip\dimexpr - \zposx{@annt@pos\the@anntposmark}sp + \zposx{@anntleftmargin}sp + 3em%
        \smash{\raisebox{3.3ex}{\makebox[0pt][l]{\begin{ocg}{Annotation Layer \the@anntposmark}{anntlayer\the@anntposmark}{0}\fcolorbox{\annotateboxbordercolour}{\annotateboxbkgdcolour}{\parbox[b]{\textwidth-6em}{{\annotateboxtextfont\color{\annotateboxtextcolour}#2}}}\end{ocg}}}}%
        \hskip\dimexpr + \zposx{@annt@pos\the@anntposmark}sp - \zposx{@anntleftmargin}sp - 3em%
        \setlength\@annt@oldfboxsep\fboxsep%
        \setlength\fboxsep{1pt}%
        \switchocg{anntlayer\the@anntposmark}{\raisebox{1ex}{{\color{\annotatetextcolour}\tiny #1}}}%
        \setlength\fboxsep\@annt@oldfboxsep%
        \xspace}

% Usage: \annotatelabel{label}{annotations}
% Replacement for \label, where attached to each label there is an annotation 
% text which can be recalled using \annotateref{label}. See below. 
\newcommand\annotatelabel[2]{%
        \label{#1}%
        \global\@namedef{@annt@label@#1}{#2}}
% Usage: \annotateref{label}  and  \annotateeqref{label}
% Replacement for \ref and \eqref, where we insert an \annotatewithmark with 
% the text set with \annotatelabel
\newcommand\annotateref[1]{\ref{#1}\annotatewithmark{\@nameuse{@annt@label@#1}}}
\newcommand\annotateeqref[1]{\eqref{#1}\annotatewithmark{\@nameuse{@annt@label@#1}}}

\AtBeginDocument{\zsavepos{@anntleftmargin}} %This figures out where the left margin is
\makeatother

%%%%% End Annotation utils %%%%%

Let me give an example (to compile, replace the line \input{helper_commands} by the code listing above, or save the code listing above in the file helper_commands.tex):

\documentclass{article}
\usepackage[standard]{ntheorem}

\newcommand\eqref[1]{(\ref{#1})} % Defined only because I am not using amsart

\input{helper_commands}

\begin{document}
We can \annotatetext{annotate a piece of text}{Here be an
annotation.}. We can equally well annotate a piece of mathematics:
\[ E = \annotatetext{m}{This is the mass} \times \annotatetext{c^2}{Square of
the speed of light} \]
Instead of using the text itself as a toggle, we can use a
mark.\annotatewithmark{See here!} The mark itself
can\annotatehere[click me!]{Wheee!} be descriptive.

Now we can test some cross referencing. We first write down an
equation
\begin{equation}\annotatelabel{waveq}{The nonlinear wave equation \
$\Box u = B(\partial u,\partial u)$}
- \partial_t^2 u + \triangle u = \sum_{i,j = 0}^3 B^{ij} \partial_i u
  \partial_j u
\end{equation}
with respect to which we define
\begin{definition}\annotatelabel{def:nullcon}{The null condition is
when $B^{ij}\xi_i\xi_j = 0$ for any $\xi$ satisfying $m^{ij}\xi_i\xi_j
= 0$ where $m = \mathrm{diag}(-1,1,1,1)$ is the Minkowski metric.}
We say that the null condition is satisfied for \annotateeqref{waveq} if
the term $B^{ij}$ satisfies $\sum_{i,j = 0}^3 B^{ij}\xi_i
\xi_j = 0$ for every $\xi$ satisfying $- \xi_0^2 + \xi_1^2 + \xi_2^2 +
\xi_3^2 = 0$.
\end{definition}
And perhaps a theorem
\begin{theorem}\annotatelabel{thm:nullcon}{Small data global existence
holds provided null condition (Def. \ref{def:nullcon}) is
satisfied}
Small data global existence hold for \annotateeqref{waveq} provided
that the null condition is satisfied (see Definition
\annotateref{def:nullcon}.)
\end{theorem}

Let us talk a bit more about Theorem \annotateref{thm:nullcon}.
\end{document}

As you can see, in the current version one has to specify the annotation text for each label when the label is defined. This is a slight drawback compared to the fancy-preview script, which extracts the annotation text automatically; but on the other hand, this gives slightly better configurability. One caveat: nested annotations don’t currently work, so one has to avoid \annotateref and \annotateeqref commands within an \annotatelabel (as I did in the example above).

If you want to see what the result looks like without building the LaTeX file above yourself, here are the results: Annotation Demo. (You should download it and open it in a standalone viewer, rather than the viewers built into Firefox and Chrome; I am pretty sure the Firefox viewer cannot handle OCG content yet.)

What’s wrong with tests

Find the errors!

I was tasked with grading the following exam question:

Using methods discussed in class this term, find the mean value over [-\pi,\pi] of the function f(x) = \sin(2x) \cdot \exp [1 - \cos (2x)].

The conceptual parts of the question are (based on the syllabus of the course)

  1. Connecting “mean value of a continuous function over an interval” with “integration”, an application of calculus to probability theory and statistics.
  2. Evaluating an integral by substitutions/change of variables.
  3. Familiarity with the trigonometric functions \sin, \cos and their properties (periodicity, derivative relations, etc).
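
(For the record, the intended computation, and a showcase of why the limits of integration matter: substituting u = 1 - \cos(2x), so that \mathrm{d}u = 2\sin(2x)\,\mathrm{d}x and u(-\pi) = u(\pi) = 0, gives

\displaystyle \frac{1}{2\pi}\int_{-\pi}^{\pi} \sin(2x)\, e^{1-\cos(2x)}~\mathrm{d}x = \frac{1}{4\pi}\int_{0}^{0} e^{u}~\mathrm{d}u = 0,

in agreement with the symmetry argument quoted below.)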

I was told to grade with an emphasis on the above, so I prepared a grading rubric on which the three key ideas above earned most of the points. Here’s an otherwise reasonable answer that unfortunately does not use the methods discussed in class, and so would receive (close to) zero credit (luckily no students turned in an answer like this):

The function f(x) satisfies f(x) = - f(-x), i.e. it is odd. So the average (f(x) + f(-x))/2 = 0. Since for every x\in [-\pi,\pi], we also have -x \in [-\pi,\pi], the mean value of f(x) over that interval must be zero.

Here are some responses that can earn quite a good number of points* (at least more than the above answer) based on the grading rubric; I guess this means I wasn’t imaginative enough in coming up with possible student errors. (I took the liberty of combining some of the most awful bits from different answers; the vast majority of the students’ answers are not nearly that horrible**, though only one student remembered that when changing variables one also needs to change the limits of integration.) Since most students who made any reasonable attempt on the question successfully wrote down the integral

\displaystyle \mu = \frac{1}{\pi - (-\pi)} \int_{-\pi}^\pi f(y)~\mathrm{d}y

(which is not to say no unreasonable attempts were made: just ask the poor bloke who decided that the Mean Value Theorem must play a role in this question), I will start from there. All mistakes below are intentional on my part. What amazed me most is how many students were able to get to the correct mean value…