Bubbles Bad; Ripples Good

… Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire et vice versa … ("Given an equation involving any number of fluent quantities, to find the fluxions, and vice versa.")

Category: harmonic analysis

Where does the Pi go?

A very short, sort of random thought.

It is fairly well known that experts in partial differential equations and experts in harmonic analysis prefer to define the Fourier transform differently. For harmonic analysts, the Fourier transform and its inverse are naturally written as

\displaystyle \mathcal{F}[f](\xi) = \int_{\mathbb{R}^n} f(x) \exp(-2\pi i \xi\cdot x) dx,~ \mathcal{F}^{-1}[g](x) = \int_{\mathbb{R}^n} g(\xi) \exp(2\pi i \xi \cdot x) d\xi

This definition has the advantage that the formulae for the forward and backward transforms are, up to the single minus sign, virtually identical. Furthermore, the transform is an isometry on the (Hilbert) space of square integrable functions. For people who do PDEs, the Fourier transform is often written

\displaystyle \mathcal{F}[f](\xi) = \int_{\mathbb{R}^n} f(x) \exp(- i \xi \cdot x) dx, ~\mathcal{F}^{-1}[g](x) = \frac{1}{(2\pi)^{n}}\int_{\mathbb{R}^n} g(\xi) \exp(i \xi \cdot x) d\xi

so that the Fourier transform of the derivative \nabla f is given by i\xi \mathcal{F}[f]. This convention is more convenient for pseudo-differential calculus, where you don’t want to carry the factors of 2\pi around everywhere. Sometimes, to keep the transform an isometry, the factor of 1/(2\pi)^n is split evenly, with a factor of 1/(2\pi)^{n/2} on each of the forward and backward transforms.
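To see concretely where the conventions diverge, here is a minimal sketch of the usual integration-by-parts computation for a rapidly decaying f (the notation here is mine, not taken from any particular reference). Under the PDE convention the derivative becomes multiplication by i\xi with no extra constant:

\displaystyle \mathcal{F}[\partial_{x_j} f](\xi) = \int_{\mathbb{R}^n} \partial_{x_j}f(x) \exp(-i\xi\cdot x) dx = -\int_{\mathbb{R}^n} f(x)\, \partial_{x_j}\exp(-i\xi\cdot x) dx = i\xi_j \mathcal{F}[f](\xi),

whereas the same computation with the kernel \exp(-2\pi i \xi\cdot x) of the first convention gives \mathcal{F}[\partial_{x_j} f](\xi) = 2\pi i \xi_j \mathcal{F}[f](\xi), so every derivative in a symbol drags along a factor of 2\pi.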

I’ve long taken for granted that these are two different points of view, and that depending on whether one more often uses the transforms themselves or the duality between differentiation and multiplication, one chooses the convention that minimizes the spurious factors of 2\pi floating around in the formulae. More importantly, I’ve always thought that some factors of 2\pi are a necessary evil.

And yesterday, while trying to finally learn the Atiyah-Singer index theorem from the monograph by Peter Gilkey, I came across a wonderful definition that just blew my mind. On page 3, he simply defines the notation dx, dy, d\xi, etc. to mean 1/(2\pi)^{n/2} times the standard Lebesgue measure. And problem solved.
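Spelling out what this buys you (this is my own paraphrase of the convention, not a quote from the book): writing dx and d\xi for the rescaled measures, the transform pair becomes

\displaystyle \mathcal{F}[f](\xi) = \int_{\mathbb{R}^n} f(x) \exp(-i\xi\cdot x) dx, ~ \mathcal{F}^{-1}[g](x) = \int_{\mathbb{R}^n} g(\xi) \exp(i\xi\cdot x) d\xi,

which is symmetric between the forward and backward directions up to the sign, is still an isometry on the space of square integrable functions, and still sends \nabla f to i\xi\mathcal{F}[f]: no visible factor of 2\pi anywhere, because it has been absorbed into the measures.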

A little Hilbert space problem

First let us consider the following question on a finite dimensional vector space. Let (V, \langle\cdot,\cdot\rangle) be a k-dimensional Hermitian-product space. Let (e_i)_{1\leq i \leq k} be an orthonormal basis for V. Let T:V\to V be the linear operator defined by T(e_i) = e_{i+1} when i < k, and T(e_k) = 0. Does there exist any non-trivial vector v\in V (that is, one which is not simply a unit multiple of some e_i) such that \langle v,v\rangle = 1 and \langle v, T^jv\rangle = 0 for all j \geq 1?

The answer, in this case, is no. Write v = \sum v_i e_i where the v_i are complex numbers. Let a be the smallest index such that v_a \neq 0, and let b be the largest such index. If a = b, then v = v_a e_a is a multiple of a standard basis element, and so is trivial. So assume a < b. Now, the requirement \langle v, T^{b-a}v\rangle = 0 forces v_a v_b = 0, which contradicts our assumption that a and b are the smallest and largest non-vanishing indices. This proof uses crucially that V is finite dimensional, so that a largest non-vanishing index b exists.
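For concreteness, the computation behind that last step looks like this (spelled out with the convention that the Hermitian product is conjugate-linear in its first slot; with the opposite convention the conjugates swap places). Since T^{b-a} e_i = e_{i+b-a} whenever i + b - a \leq k and T^{b-a} e_i = 0 otherwise,

\displaystyle \langle v, T^{b-a} v\rangle = \sum_{i=1}^{k-(b-a)} \overline{v_{i+b-a}}\, v_i = \overline{v_b}\, v_a,

since every other term vanishes: either v_i = 0 (when i < a) or v_{i+b-a} = 0 (when i > a, because then i + b - a > b). So the vanishing of this pairing forces v_a v_b = 0.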

Now, on to the real question

Question
Take the complex Hilbert space \ell^2(\mathbb{N}), i.e. the set of all complex sequences (a_i)_{0\leq i < \infty} satisfying \sum_{i\in\mathbb{N}} |a_i|^2 < \infty. Let e = (1,0,0,\ldots), and let T be the right shift operator: (Ta)_{i+1} = a_i and (Ta)_0 = 0. Then (T^ke)_{k\in\mathbb{N}} is an orthonormal basis of \ell^2, and we have \langle e, T^ke\rangle = \delta_0^k. Do there exist non-trivial elements v of \ell^2 (i.e. not simply unit multiples of some T^je) for which \langle v, T^kv\rangle = \delta_0^k holds for all k\in\mathbb{N}?
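Unwinding the definitions (this reformulation is mine, again with the Hermitian product conjugate-linear in its first slot), the condition asks for a unit vector orthogonal to every one of its own right translates:

\displaystyle \langle v, T^k v\rangle = \sum_{i=0}^{\infty} \overline{v_{i+k}}\, v_i = \delta_0^k \quad \text{for all } k\in\mathbb{N}.

Note that the finite-dimensional argument above breaks down here precisely because there is no longer a largest non-vanishing index to play the role of b.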

The answer is yes, by the way.