International Tables for Crystallography, Volume B: Reciprocal space. Edited by U. Shmueli.

International Tables for Crystallography (2006). Vol. B, ch. 1.3, pp. 25-49

Section 1.3.2. The mathematical theory of the Fourier transformation

G. Bricogne

MRC Laboratory of Molecular Biology, Hills Road, Cambridge CB2 2QH, England, and LURE, Bâtiment 209D, Université Paris-Sud, 91405 Orsay, France

Introduction

The Fourier transformation and the practical applications to which it gives rise occur in three different forms which, although they display a similar range of phenomena, normally require distinct formulations and different proof techniques:

  • (i) Fourier transforms, in which both function and transform depend on continuous variables;

  • (ii) Fourier series, which relate a periodic function to a discrete set of coefficients indexed by n-tuples of integers;

  • (iii) discrete Fourier transforms, which relate finite-dimensional vectors by linear operations representable by matrices.
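Form (iii) can be made concrete with a short numerical sketch (the length N, the random test vector and the comparison against numpy's FFT are illustrative choices, not part of the text): the discrete Fourier transform of a vector of length N is the linear map given by the N × N matrix with entries exp(−2πi jk/N).

```python
import numpy as np

# Form (iii): the discrete Fourier transform as a matrix acting on a vector.
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * j * k / N)      # N x N DFT matrix (unnormalized)

x = np.random.default_rng(0).standard_normal(N)
X = F @ x                                # a linear operation on a finite vector
# agrees with the library FFT, which computes the same matrix-vector product
assert np.allclose(X, np.fft.fft(x))
```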

At the same time, the most useful property of the Fourier transformation – the exchange between multiplication and convolution – is mathematically the most elusive and the one which requires the greatest caution in order to avoid writing down meaningless expressions.

It is the unique merit of Schwartz's theory of distributions (Schwartz, 1966) that it affords complete control over all the troublesome phenomena which had previously forced mathematicians to settle for a piecemeal, fragmented theory of the Fourier transformation. By its ability to handle rigorously highly `singular' objects (especially δ-functions, their derivatives, their tensor products, their products with smooth functions, their translates and lattices of these translates), distribution theory can deal with all the major properties of the Fourier transformation as particular instances of a single basic result (the exchange between multiplication and convolution), and can at the same time accommodate the three previously distinct types of Fourier theories within a unique framework. This brings great simplification to matters of central importance in crystallography, such as the relations between

  • (a) periodization, and sampling or decimation;

  • (b) Shannon interpolation, and masking by an indicator function;

  • (c) section, and projection;

  • (d) differentiation, and multiplication by a monomial;

  • (e) translation, and phase shift.

All these properties become subsumed under the same theorem.

This striking synthesis comes at a slight price, which is the relative complexity of the notion of distribution. It is first necessary to establish the notion of topological vector space and to gain sufficient control (or, at least, understanding) over convergence behaviour in certain of these spaces. The key notion of metrizability cannot be circumvented, as it underlies most of the constructs and many of the proof techniques used in distribution theory. Most of the preliminary material below builds up to the fundamental metrizability result at the end of the discussion of topological vector spaces, which is basic to the definition of a distribution and to all subsequent developments.

The reader mostly interested in applications will probably want to reach this section by starting with his or her favourite topic in Section 1.3.4, and following the backward references to the relevant properties of the Fourier transformation, then to the proof of these properties, and finally to the definitions of the objects involved. Hopefully, he or she will then feel inclined to follow the forward references and thus explore the subject from the abstract to the practical. The books by Dieudonné (1969) and Lang (1965) are particularly recommended as general references for all aspects of analysis and algebra.

Preliminary notions and notation


Throughout this text, [{\bb R}] will denote the set of real numbers, [{\bb Z}] the set of rational (signed) integers and [ {\bb N}] the set of natural (unsigned) integers. The symbol [{\bb R}^{n}] will denote the Cartesian product of n copies of [{\bb R}]: [{\bb R}^{n} = {\bb R} \times \ldots \times {\bb R} \quad (n \hbox{ times}, n \geq 1),] so that an element x of [{\bb R}^{n}] is an n-tuple of real numbers: [{\bf x} = (x_{1}, \ldots, x_{n}).] Similar meanings will be attached to [{\bb Z}^{n}] and [{\bb N}^{n}].

The symbol [{\bb C}] will denote the set of complex numbers. If [z \in {\bb C}], its modulus will be denoted by [|z|], its conjugate by [\bar{z}] (not [z^{*}]), and its real and imaginary parts by [{\scr Re}\; (z)] and [{\scr Im}\; (z)]: [{\scr Re}\; (z) = {\textstyle{1 \over 2}} (z + \bar{z}), \qquad {\scr Im}\; (z) = {1 \over 2i} (z - \bar{z}).]

If X is a finite set, then [|X|] will denote the number of its elements. If mapping f sends an element x of set X to the element [f(x)] of set Y, the notation [f: x \;\longmapsto\; f(x)] will be used; the plain arrow → will be reserved for denoting limits, as in [\lim\limits_{p \rightarrow \infty} \left(1 + {x \over p}\right)^{p} = e^{x}.]
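The displayed limit can be checked numerically; a minimal sketch (the value x = 1.5 and the sample values of p are arbitrary choices):

```python
import math

# Numerical check of lim_{p -> infinity} (1 + x/p)^p = e^x, here with x = 1.5.
x = 1.5
approximations = [(1 + x / p) ** p for p in (10, 1_000, 100_000)]
# For x > 0 the sequence increases towards e^x; the error decays like 1/p.
error = abs(approximations[-1] - math.exp(x))
```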

If X is any set and S is a subset of X, the indicator function [\chi_{S}] of S is the real-valued function on X defined by [\eqalign{\chi_{S} (x) &= 1 \quad \hbox{if } x \in S\cr &= 0 \quad \hbox{if } x \;\notin\; S.}]

Metric and topological notions in [{\bb R}^{n}]


The set [{\bb R}^{n}] can be endowed with the structure of a vector space of dimension n over [{\bb R}], and can be made into a Euclidean space by treating its standard basis as an orthonormal basis and defining the Euclidean norm: [\|{\bf x}\| = \left({\textstyle\sum\limits_{i = 1}^{n}} x_{i}^{2}\right)^{1/2}.]

By misuse of notation, x will sometimes also designate the column vector of coordinates of [{\bf x} \in {\bb R}^{n}]; if these coordinates are referred to an orthonormal basis of [{\bb R}^{n}], then [\|{\bf x}\| = ({\bf x}^{T} {\bf x})^{1/2},] where [{\bf x}^{T}] denotes the transpose of x.
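The coordinate expression of the norm can be sketched numerically (the test vector is an arbitrary choice):

```python
import numpy as np

# Euclidean norm via coordinates referred to an orthonormal basis:
# ||x|| = (x^T x)^(1/2).
x = np.array([3.0, 4.0, 12.0])
norm_via_transpose = np.sqrt(x @ x)      # (x^T x)^(1/2)
# matches the built-in Euclidean norm; 13.0 for this vector
assert np.isclose(norm_via_transpose, np.linalg.norm(x))
```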

The distance between two points x and y defined by [d({\bf x},{\bf y}) = \|{\bf x} - {\bf y}\|] allows the topological structure of [{\bb R}] to be transferred to [{\bb R}^{n}], making it a metric space. The basic notions in a metric space are those of neighbourhoods, of open and closed sets, of limit, of continuity, and of convergence (see the paragraphs on general topology below).

A subset S of [{\bb R}^{n}] is bounded if sup [\|{\bf x} - {\bf y}\| \;\lt\; \infty] as x and y run through S; it is closed if it contains the limits of all convergent sequences with elements in S. A subset K of [{\bb R}^{n}] which is both bounded and closed has the property of being compact, i.e. that whenever K has been covered by a family of open sets, a finite subfamily can be found which suffices to cover K. Compactness is a very useful topological property for the purpose of proof, since it allows one to reduce the task of examining infinitely many local situations to that of examining only finitely many of them.

Functions over [{\bb R}^{n}]


Let ϕ be a complex-valued function over [{\bb R}^{n}]. The support of ϕ, denoted Supp ϕ, is the smallest closed subset of [{\bb R}^{n}] outside which ϕ vanishes identically. If Supp ϕ is compact, ϕ is said to have compact support.

If [{\bf t} \in {\bb R}^{n}], the translate of ϕ by t, denoted [\tau_{\bf t} \varphi], is defined by [(\tau_{\bf t} \varphi) ({\bf x}) = \varphi ({\bf x} - {\bf t}).] Its support is the geometric translate of that of ϕ: [\hbox{Supp } \tau_{\bf t} \varphi = \{{\bf x} + {\bf t} | {\bf x} \in \hbox{Supp } \varphi\}.]

If A is a non-singular linear transformation in [{\bb R}^{n}], the image of ϕ by A, denoted [A^{\#} \varphi], is defined by [(A^{\#} \varphi) ({\bf x}) = \varphi [A^{-1} ({\bf x})].] Its support is the geometric image of Supp ϕ under A: [\hbox{Supp } A^{\#} \varphi = \{A ({\bf x}) | {\bf x} \in \hbox{Supp } \varphi\}.]

If S is a non-singular affine transformation in [{\bb R}^{n}] of the form [S({\bf x}) = A({\bf x}) + {\bf b}] with A linear, the image of ϕ by S is [S^{\#} \varphi = \tau_{\bf b} (A^{\#} \varphi)], i.e. [(S^{\#} \varphi) ({\bf x}) = \varphi [A^{-1} ({\bf x} - {\bf b})].] Its support is the geometric image of Supp ϕ under S: [\hbox{Supp } S^{\#} \varphi = \{S({\bf x}) | {\bf x} \in \hbox{Supp } \varphi\}.]
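These definitions can be checked numerically; a minimal sketch (the matrix A, vector b and function φ are arbitrary illustrative choices) verifies that [S^{\#} \varphi] carries the graph of ϕ along with S, i.e. [(S^{\#} \varphi)(S({\bf x})) = \varphi ({\bf x})]:

```python
import numpy as np

# Image of a function by an affine transformation S(x) = A(x) + b:
# (S# phi)(x) = phi(A^{-1}(x - b)).
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])               # non-singular linear part
b = np.array([1.0, -2.0])

def phi(x):
    return np.exp(-np.dot(x, x))         # an arbitrary smooth function

def S_sharp_phi(x):
    return phi(np.linalg.solve(A, x - b))  # apply the inverse to coordinates

x0 = np.array([0.3, -0.7])
# The graph is transported along with S: (S# phi)(S(x0)) = phi(x0).
lhs = S_sharp_phi(A @ x0 + b)
rhs = phi(x0)
assert np.isclose(lhs, rhs)
```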

It may be helpful to visualize the process of forming the image of a function by a geometric operation as consisting of applying that operation to the graph of that function, which is equivalent to applying the inverse transformation to the coordinates x. This use of the inverse later affords the `left-representation property' when the geometric operations form a group, which is of fundamental importance in the treatment of crystallographic symmetry.

Multi-index notation


When dealing with functions in n variables and their derivatives, considerable abbreviation of notation can be obtained through the use of multi-indices.

A multi-index [{\bf p} \in {\bb N}^{n}] is an n-tuple of natural integers: [{\bf p} = (p_{1}, \ldots, p_{n})]. The length of p is defined as [|{\bf p}| = {\textstyle\sum\limits_{i = 1}^{n}}\; p_{i},] and the following abbreviations will be used: [\displaylines{\quad (\hbox{i})\qquad\;\;{\bf x}^{{\bf p}} = x_{1}^{p_{1}} \ldots x_{n}^{p_{n}}\hfill\cr \quad (\hbox{ii})\;\qquad D_{i} f = {\partial f \over \partial x_{i}} = \partial_{i}\; f\hfill\cr \quad (\hbox{iii})\qquad D^{{\bf p}} f = D_{1}^{p_{1}} \ldots D_{n}^{p_{n}} f = {\partial^{|{\bf p}|} f \over \partial x_{1}^{p_{1}} \ldots \partial x_{n}^{p_{n}}}\hfill\cr \quad (\hbox{iv})\qquad {\bf q} \leq {\bf p} \hbox{ if and only if } q_{i} \leq p_{i} \hbox{ for all } i = 1, \ldots, n\hfill\cr \quad (\hbox{v})\qquad\;{\bf p} - {\bf q} = (p_{1} - q_{1}, \ldots, p_{n} - q_{n})\hfill\cr \quad (\hbox{vi})\qquad {\bf p}! = p_{1}! \times \ldots \times p_{n}!\hfill\cr \quad (\hbox{vii})\qquad\!\! \pmatrix{{\bf p}\cr {\bf q}\cr} = \pmatrix{p_{1}\cr q_{1}\cr} \times \ldots \times \pmatrix{p_{n}\cr q_{n}\cr}.\hfill}]

Leibniz's formula for the repeated differentiation of products then assumes the concise form [D^{\bf p} (fg) = \sum\limits_{{\bf q} \leq {\bf p}} \pmatrix{{\bf p}\cr {\bf q}\cr} D^{{\bf p} - {\bf q}} f D^{\bf q} g,] while the Taylor expansion of f to order m about [{\bf x} = {\bf a}] reads [f({\bf x}) = \sum\limits_{|{\bf p}| \leq m} {1 \over {\bf p}!} [D^{\bf p} f ({\bf a})] ({\bf x} - {\bf a})^{\bf p} + o (\|{\bf x} - {\bf a}\|^{m}).]
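Leibniz's formula can be verified symbolically; a sketch for n = 2 and p = (2, 1), assuming the sympy library is available (the functions f and g are arbitrary smooth choices):

```python
import sympy as sp
from math import comb

# Check of the multi-index Leibniz formula
#   D^p(fg) = sum_{q <= p} C(p, q) D^{p-q} f  D^q g
# in two variables, with p = (2, 1).
x, y = sp.symbols('x y')
f = sp.exp(x * y)
g = sp.sin(x) * sp.cos(y)
p = (2, 1)

lhs = sp.diff(f * g, x, p[0], y, p[1])
rhs = sum(
    comb(p[0], q0) * comb(p[1], q1)
    * sp.diff(f, x, p[0] - q0, y, p[1] - q1)
    * sp.diff(g, x, q0, y, q1)
    for q0 in range(p[0] + 1)
    for q1 in range(p[1] + 1)
)
difference = sp.simplify(lhs - rhs)      # identically zero
```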

In certain sections the notation [\nabla f] will be used for the gradient vector of f, and the notation [(\nabla \nabla^{T})f] for the Hessian matrix of its mixed second-order partial derivatives: [\displaylines{\nabla = \pmatrix{\displaystyle{\partial \over \partial x_{1}}\cr \vdots\cr\noalign{\vskip6pt} {\displaystyle{\partial \over \partial x_{n}}}\cr}, \quad \nabla f = \pmatrix{\displaystyle{\partial f \over \partial x_{1}}\cr \vdots\cr\noalign{\vskip6pt}  {\displaystyle{\partial f \over \partial x_{n}}}\cr},\cr (\nabla \nabla^{T}) f = \pmatrix{\displaystyle{\partial^{2} f \over \partial x_{1}^{2}} &\ldots &{\displaystyle{\partial^{2} f \over \partial x_{1} \partial x_{n}}}\cr \vdots &\ddots &\vdots\cr\noalign{\vskip6pt}  {\displaystyle{\partial^{2} f \over \partial x_{n} \partial x_{1}}} &\ldots &{\displaystyle{\partial^{2} f \over \partial x_{n}^{2}}}\cr}.}]

Integration, [L^{p}] spaces


The Riemann integral used in elementary calculus suffers from the drawback that vector spaces of Riemann-integrable functions over [{\bb R}^{n}] are not complete for the topology of convergence in the mean: a Cauchy sequence of integrable functions may converge to a non-integrable function.

To obtain the property of completeness, which is fundamental in functional analysis, it was necessary to extend the notion of integral. This was accomplished by Lebesgue [see Berberian (1962), Dieudonné (1970), or Chapter 1 of Dym & McKean (1972) and the references therein, or Chapter 9 of Sprecher (1970)], and entailed identifying functions which differed only on a subset of zero measure in [{\bb R}^{n}] (such functions are said to be equal `almost everywhere'). The vector spaces [L^{p} ({\bb R}^{n})] consisting of function classes f modulo this identification for which [\|\;f\|_{p} = \left({\textstyle\int\limits_{{\bb R}^{n}}} |\;f ({\bf x}) |^{p}\ {\rm d}^{n} {\bf x}\right)^{1/p} \;\lt\; \infty] are then complete for the topology induced by the norm [\|.\|_{p}]: the limit of every Cauchy sequence of functions in [L^{p}] is itself a function in [L^{p}] (Riesz–Fischer theorem).

The space [L^{1} ({\bb R}^{n})] consists of those function classes f such that [\|\;f \|_{1} = {\textstyle\int\limits_{{\bb R}^{n}}} |\;f ({\bf x})|\;\hbox{d}^{n} {\bf x} \;\lt\; \infty] which are called summable or absolutely integrable. The convolution product: [\eqalign{(\;f * g) ({\bf x}) &= {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf y}) g({\bf x} - {\bf y})\;\hbox{d}^{n} {\bf y}\cr &= {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x} - {\bf y}) g ({\bf y})\;\hbox{d}^{n} {\bf y} = (g * f) ({\bf x})}] is well defined; combined with the vector space structure of [L^{1}], it makes [L^{1}] into a (commutative) convolution algebra. However, this algebra has no unit element: there is no [f \in L^{1}] such that [f * g = g] for all [g \in L^{1}]; it has only approximate units, i.e. sequences [(f_{\nu })] such that [f_{\nu } * g] tends to g in the [L^{1}] topology as [\nu \rightarrow \infty]. This is one of the starting points of distribution theory.
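The absence of a convolution unit, and the behaviour of approximate units, can be sketched numerically (the grid, the function g and the Gaussian family are illustrative choices; the one-dimensional Gaussian is normalized to unit integral):

```python
import numpy as np

# L^1 has no convolution unit, only approximate units: here Gaussians of
# shrinking width. The distance from f_nu * g to g in L^1 shrinks as nu grows.
xs = np.linspace(-10.0, 10.0, 4001)
dx = xs[1] - xs[0]
g = np.exp(-xs**2)                       # a summable function

errors = []
for nu in (2.0, 8.0, 32.0):
    f_nu = nu / np.sqrt(2.0 * np.pi) * np.exp(-0.5 * nu**2 * xs**2)
    conv = np.convolve(f_nu, g, mode="same") * dx     # (f_nu * g)(x) on the grid
    errors.append(np.sum(np.abs(conv - g)) * dx)      # L^1 distance to g
# errors decreases towards 0, but no single f in L^1 achieves f * g = g exactly
```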

The space [L^{2} ({\bb R}^{n})] of square-integrable functions can be endowed with a scalar product [(\;f, g) = {\textstyle\int\limits_{{\bb R}^{n}}} \overline{f({\bf x})} g({\bf x})\;\hbox{d}^{n} {\bf x}] which makes it into a Hilbert space. The Cauchy–Schwarz inequality [|(\;f, g)| \leq [(\;f, f) (g, g)]^{1/2}] generalizes the fact that the absolute value of the cosine of an angle is less than or equal to 1.
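The scalar product and the Cauchy–Schwarz inequality can be illustrated on discretized functions (the interval, grid and functions f, g are arbitrary choices):

```python
import numpy as np

# Cauchy-Schwarz in L^2: |(f, g)| <= [(f, f)(g, g)]^(1/2),
# with (f, g) = integral of conj(f(x)) g(x) dx, approximated on a grid.
xs = np.linspace(0.0, 1.0, 1001)
dx = xs[1] - xs[0]
f = np.exp(2j * np.pi * xs)              # a complex-valued square-integrable function
g = xs * (1 - xs)

inner = np.sum(np.conj(f) * g) * dx      # (f, g)
norm_f2 = np.sum(np.abs(f)**2) * dx      # (f, f)
norm_g2 = np.sum(np.abs(g)**2) * dx      # (g, g)
# |(f, g)| is bounded by the geometric mean of the squared norms
assert abs(inner) <= np.sqrt(norm_f2 * norm_g2) + 1e-12
```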

The space [L^{\infty} ({\bb R}^{n})] is defined as the space of functions f such that [\|\;f \|_{\infty} = \lim\limits_{p \rightarrow \infty} \|\;f \|_{p} = \lim\limits_{p \rightarrow \infty} \left({\textstyle\int\limits_{{\bb R}^{n}}} |\; f({\bf x}) |^{p} \;\hbox{d}^{n} {\bf x}\right)^{1/p} \;\lt\; \infty.] The quantity [\|\;f \|_{\infty}] is called the `essential sup norm' of f, as it is the smallest positive number which [|\;f({\bf x})|] exceeds only on a subset of zero measure in [{\bb R}^{n}]. A function [f \in L^{\infty}] is called essentially bounded.

Tensor products. Fubini's theorem


Let [f \in L^{1} ({\bb R}^{m})], [g \in L^{1} ({\bb R}^{n})]. Then the function [f \otimes g: ({\bf x},{\bf y}) \;\longmapsto\; f({\bf x}) g({\bf y})] is called the tensor product of f and g, and belongs to [L^{1} ({\bb R}^{m} \times {\bb R}^{n})]. The finite linear combinations of functions of the form [f \otimes g] span a subspace of [L^{1} ({\bb R}^{m} \times {\bb R}^{n})] called the tensor product of [L^{1} ({\bb R}^{m})] and [L^{1} ({\bb R}^{n})] and denoted [L^{1} ({\bb R}^{m}) \otimes L^{1} ({\bb R}^{n})].

The integration of a general function over [{\bb R}^{m} \times {\bb R}^{n}] may be accomplished in two steps according to Fubini's theorem. Given [F \in L^{1} ({\bb R}^{m} \times {\bb R}^{n})], the functions [\eqalign{F_{1} : {\bf x} &\;\longmapsto\; {\textstyle\int\limits_{{\bb R}^{n}}} F ({\bf x},{\bf y}) \;\hbox{d}^{n} {\bf y}\cr F_{2} : {\bf y} &\;\longmapsto\; {\textstyle\int\limits_{{\bb R}^{m}}} F ({\bf x},{\bf y}) \;\hbox{d}^{m} {\bf x}}] exist for almost all [{\bf x} \in {\bb R}^{m}] and almost all [{\bf y} \in {\bb R}^{n}], respectively, are integrable, and [\textstyle\int\limits_{{\bb R}^{m} \times {\bb R}^{n}} F ({\bf x},{\bf y}) \;\hbox{d}^{m} {\bf x} \;\hbox{d}^{n} {\bf y} = {\textstyle\int\limits_{{\bb R}^{m}}} F_{1} ({\bf x}) \;\hbox{d}^{m} {\bf x} = {\textstyle\int\limits_{{\bb R}^{n}}} F_{2} ({\bf y}) \;\hbox{d}^{n} {\bf y}.] Conversely, if any one of the integrals [\displaylines{\quad (\hbox{i})\qquad {\textstyle\int\limits_{{\bb R}^{m} \times {\bb R}^{n}}} |F ({\bf x},{ \bf y})| \;\hbox{d}^{m} {\bf x} \;\hbox{d}^{n} {\bf y}\qquad \hfill\cr \quad (\hbox{ii})\qquad {\textstyle\int\limits_{{\bb R}^{m}}} \left({\textstyle\int\limits_{{\bb R}^{n}}} |F ({\bf x},{ \bf y})| \;\hbox{d}^{n} {\bf y}\right) \;\hbox{d}^{m} {\bf x}\hfill\cr \quad (\hbox{iii})\qquad {\textstyle\int\limits_{{\bb R}^{n}}} \left({\textstyle\int\limits_{{\bb R}^{m}}} |F ({\bf x},{ \bf y})| \;\hbox{d}^{m} {\bf x}\right) \;\hbox{d}^{n} {\bf y}\hfill}] is finite, then so are the other two, and the identity above holds. It is then (and only then) permissible to change the order of integrations.
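On a discretization the two iterated sums agree trivially; Fubini's theorem is the statement that the analogous interchange is legitimate for the integrals themselves once any of the three absolute integrals is finite. A sketch of the two-step computation (the integrand and the domain are arbitrary choices):

```python
import numpy as np

# Two-step integration of F(x, y) = exp(-x*y) sin(x + y) over [0,1] x [0,2].
xs = np.linspace(0.0, 1.0, 201)
ys = np.linspace(0.0, 2.0, 401)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]
X, Y = np.meshgrid(xs, ys, indexing="ij")
F = np.exp(-X * Y) * np.sin(X + Y)

F1 = F.sum(axis=1) * dy                  # F1(x) = integral of F(x, y) over y
F2 = F.sum(axis=0) * dx                  # F2(y) = integral of F(x, y) over x
I1 = F1.sum() * dx                       # then integrate F1 over x
I2 = F2.sum() * dy                       # or integrate F2 over y
# the two iterated integrals coincide, as Fubini's theorem guarantees
assert np.isclose(I1, I2)
```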

Fubini's theorem is of fundamental importance in the study of tensor products and convolutions of distributions.

Topology in function spaces


Geometric intuition, which often makes `obvious' the topological properties of the real line and of ordinary space, cannot be relied upon in the study of function spaces: the latter are infinite-dimensional, and several inequivalent notions of convergence may exist. A careful analysis of topological concepts and of their interrelationship is thus a necessary prerequisite to the study of these spaces. The reader may consult Dieudonné (1969, 1970), Friedman (1970), Trèves (1967) and Yosida (1965) for detailed expositions.

General topology


Most topological notions are first encountered in the setting of metric spaces. A metric space E is a set equipped with a distance function d from [E \times E] to the non-negative reals which satisfies: [\matrix{(\hbox{i})\hfill & d(x, y) = d(y, x)\hfill &\forall x, y \in E\hfill &\hbox{(symmetry);}\hfill\cr\cr (\hbox{ii})\hfill &d(x, y) = 0 \hfill &\hbox{iff } x = y\hfill &\hbox{(separation);}\hfill\cr\cr (\hbox{iii})\hfill & d(x, z) \leq d(x, y) + d(y, z)\hfill &\forall x, y, z \in E\hfill &\hbox{(triangular inequality).}\hfill}] By means of d, the following notions can be defined: open balls, neighbourhoods; open and closed sets, interior and closure; convergence of sequences, continuity of mappings; Cauchy sequences and completeness; compactness; connectedness. They suffice for the investigation of a great number of questions in analysis and geometry (see e.g. Dieudonné, 1969).

Many of these notions turn out to depend only on the properties of the collection [{\scr O}(E)] of open subsets of E: two distance functions leading to the same [{\scr O}(E)] lead to identical topological properties. An axiomatic reformulation of topological notions is thus possible: a topology in E is a collection [{\scr O}(E)] of subsets of E which satisfy suitable axioms and are deemed open irrespective of the way they are obtained. From the practical standpoint, however, a topology which can be obtained from a distance function (called a metrizable topology) has the very useful property that the notions of closure, limit and continuity may be defined by means of sequences. For non-metrizable topologies, these notions are much more difficult to handle, requiring the use of `filters' instead of sequences.

In some spaces E, a topology may be most naturally defined by a family of pseudo-distances [(d_{\alpha})_{\alpha \in A}], where each [d_{\alpha}] satisfies (i) and (iii) but not (ii). Such spaces are called uniformizable. If for every pair [(x, y) \in E \times E] there exists [\alpha \in A] such that [d_{\alpha} (x, y) \neq 0], then the separation property can be recovered. If furthermore a countable subfamily of the [d_{\alpha}] suffices to define the topology of E, the latter can be shown to be metrizable, so that limiting processes in E may be studied by means of sequences.

Topological vector spaces


The function spaces E of interest in Fourier analysis have an underlying vector space structure over the field [{\bb C}] of complex numbers. A topology on E is said to be compatible with a vector space structure on E if vector addition [i.e. the map [({\bf x},{ \bf y}) \;\longmapsto\; {\bf x} + {\bf y}]] and scalar multiplication [i.e. the map [(\lambda, {\bf x}) \;\longmapsto\; \lambda {\bf x}]] are both continuous; E is then called a topological vector space. Such a topology may be defined by specifying a `fundamental system S of neighbourhoods of [{\bf 0}]', which can then be translated by vector addition to construct neighbourhoods of other points [{\bf x} \neq {\bf 0}].

A norm ν on a vector space E is a non-negative real-valued function on E such that [\displaylines{\quad (\hbox{i}')\;\;\quad\nu (\lambda {\bf x}) = |\lambda | \nu ({\bf x}) \quad \hbox{for all } \lambda \in {\bb C} \hbox{ and } {\bf x} \in E\hbox{;}\hfill\cr \quad (\hbox{ii}')\;\quad\nu ({\bf x}) = 0 \quad \hbox{if and only if } {\bf x} = {\bf 0}\hbox{;}\hfill\cr \quad (\hbox{iii}')\quad \nu ({\bf x} + {\bf y}) \leq \nu ({\bf x}) + \nu ({\bf y}) \quad \hbox{for all } {\bf x},{\bf y} \in E.\hfill}] Subsets of E defined by conditions of the form [\nu ({\bf x}) \leq r] with [r \gt 0] form a fundamental system of neighbourhoods of 0. The corresponding topology makes E a normed space. This topology is metrizable, since it is equivalent to that derived from the translation-invariant distance [d({\bf x},{\bf y}) = \nu ({\bf x} - {\bf y})]. Normed spaces which are complete, i.e. in which all Cauchy sequences converge, are called Banach spaces; they constitute the natural setting for the study of differential calculus.

A semi-norm σ on a vector space E is a non-negative real-valued function on E which satisfies (i′) and (iii′) but not necessarily (ii′). Given a set Σ of semi-norms on E such that any pair (x, y) in [E \times E] is separated by at least one [\sigma \in \Sigma], let B be the set of those subsets [\Gamma_{\sigma{, \,} r}] of E defined by a condition of the form [\sigma ({\bf x}) \leq r] with [\sigma \in \Sigma] and [r \gt 0]; and let S be the set of finite intersections of elements of B. Then there exists a unique topology on E for which S is a fundamental system of neighbourhoods of 0. This topology is uniformizable since it is equivalent to that derived from the family of translation-invariant pseudo-distances [({\bf x},{ \bf y}) \;\longmapsto\; \sigma ({\bf x} - {\bf y})]. It is metrizable if and only if it can be constructed by the above procedure with Σ a countable set of semi-norms. If furthermore E is complete, E is called a Fréchet space.

If E is a topological vector space over [{\bb C}], its dual [E^{*}] is the set of all linear mappings from E to [{\bb C}] (which are also called linear forms, or linear functionals, over E). The subspace of [E^{*}] consisting of all linear forms which are continuous for the topology of E is called the topological dual of E and is denoted E′. If the topology on E is metrizable, then the continuity of a linear form [T \in E'] at [f \in E] can be ascertained by means of sequences, i.e. by checking that the sequence [[T(\;f_{j})]] of complex numbers converges to [T(\;f)] in [{\bb C}] whenever the sequence [(\;f_{j})] converges to f in E.

Elements of the theory of distributions

Origins


At the end of the 19th century, Heaviside proposed under the name of `operational calculus' a set of rules for solving a class of differential, partial differential and integral equations encountered in electrical engineering (today's `signal processing'). These rules worked remarkably well but were devoid of mathematical justification (see Whittaker, 1928). In 1926, Dirac introduced his famous δ-function [see Dirac (1958), pp. 58–61], which was found to be related to Heaviside's constructs. Other singular objects, together with procedures to handle them, had already appeared in several branches of analysis [Cauchy's `principal values'; Hadamard's `finite parts' (Hadamard, 1932, 1952); Riesz's regularization methods for certain divergent integrals (Riesz, 1938, 1949)] as well as in the theories of Fourier series and integrals (see e.g. Bochner, 1932, 1959). Their very definition often verged on violating the rigorous rules governing limiting processes in analysis, so that subsequent recourse to limiting processes could lead to erroneous results; ad hoc precautions thus had to be observed to avoid mistakes in handling these objects.

In 1945–1950, Laurent Schwartz proposed his theory of distributions (see Schwartz, 1966), which provided a unified and definitive treatment of all these questions, with a striking combination of rigour and simplicity. Schwartz's treatment of Dirac's δ-function illustrates his approach in a most direct fashion. Dirac's original definition reads: [\displaylines{\quad (\hbox{i})\;\quad\delta ({\bf x}) = 0 \hbox{ for } {\bf x} \neq {\bf 0},\hfill\cr \quad (\hbox{ii})\quad {\textstyle\int_{{\bb R}^{n}}} \delta ({\bf x}) \;\hbox{d}^{n} {\bf x} = 1.\hfill}] These two conditions are irreconcilable with Lebesgue's theory of integration: by (i), δ vanishes almost everywhere, so that its integral in (ii) must be 0, not 1.

A better definition consists in specifying that [\displaylines{\quad (\hbox{iii})\quad {\textstyle\int_{{\bb R}^{n}}} \delta ({\bf x}) \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x} = \varphi ({\bf 0})\hfill}] for any function ϕ sufficiently well behaved near [{\bf x} = {\bf 0}]. This is related to the problem of finding a unit for convolution (see the discussion of the convolution algebra [L^{1}] above). As will now be seen, this definition is still unsatisfactory. Let the sequence [(\;f_{\nu})] in [L^{1} ({\bb R}^{n})] be an approximate convolution unit, e.g. [f_{\nu} ({\bf x}) = \left({\nu^{2} \over 2\pi}\right)^{n/2} \exp (-{\textstyle{1 \over 2}} \nu^{2} \|{\bf x}\|^{2}).] Then for any well behaved function ϕ the integrals [{\textstyle\int\limits_{{\bb R}^{n}}} f_{\nu} ({\bf x}) \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x}] exist, and the sequence of their numerical values tends to [\varphi ({\bf 0})]. It is tempting to combine this with (iii) to conclude that δ is the limit of the sequence [(\;f_{\nu})] as [\nu \rightarrow \infty]. However, [\lim f_{\nu} ({\bf x}) = 0 \quad \hbox{as } \nu \rightarrow \infty] almost everywhere in [{\bb R}^{n}] and the crux of the problem is that [\eqalign{\varphi ({\bf 0}) &= \lim\limits_{\nu \rightarrow \infty} {\textstyle\int\limits_{{\bb R}^{n}}} f_{\nu} ({\bf x}) \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x} \cr &\neq {\textstyle\int\limits_{{\bb R}^{n}}} \left[\lim\limits_{\nu \rightarrow \infty} f_{\nu} ({\bf x}) \right] \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x} = 0}] because the sequence [(\;f_{\nu})] does not satisfy the hypotheses of Lebesgue's dominated convergence theorem.

Schwartz's solution to this problem is deceptively simple: the regular behaviour one is trying to capture is an attribute not of the sequence of functions [(\;f_{\nu})], but of the sequence of continuous linear functionals [T_{\nu}: \varphi \;\longmapsto\; {\textstyle\int\limits_{{\bb R}^{n}}} f_{\nu} ({\bf x}) \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x}] which has as a limit the continuous functional [T: \varphi \;\longmapsto\; \varphi ({\bf 0}).] It is the latter functional which constitutes the proper definition of δ. The previous paradoxes arose because one insisted on writing down the simple linear operation T in terms of an integral.
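The contrast between the two limits can be exhibited numerically in one dimension (the test function φ and the grid are arbitrary illustrative choices):

```python
import numpy as np

# The functionals T_nu : phi -> integral f_nu(x) phi(x) dx tend to phi(0),
# although the functions f_nu themselves tend to 0 at every x != 0.
def phi(x):
    return np.cos(x) * np.exp(-x**2 / 4.0)   # a well behaved test function

xs = np.linspace(-20.0, 20.0, 200001)
dx = xs[1] - xs[0]

values = []
for nu in (1.0, 4.0, 16.0, 64.0):
    f_nu = nu / np.sqrt(2.0 * np.pi) * np.exp(-0.5 * nu**2 * xs**2)
    values.append(np.sum(f_nu * phi(xs)) * dx)   # T_nu(phi)
# values tends to phi(0) = 1: the limit lives in the functionals T_nu,
# not in a pointwise limit of the functions f_nu.
```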

The essence of Schwartz's theory of distributions is thus that, rather than try to define and handle `generalized functions' via sequences such as [(\;f_{\nu})] [an approach adopted e.g. by Lighthill (1958) and Erdélyi (1962)], one should instead look at them as continuous linear functionals over spaces of well behaved functions.

There are many books on distribution theory and its applications. The reader may consult in particular Schwartz (1965, 1966), Gel'fand & Shilov (1964), Bremermann (1965), Trèves (1967), Challifour (1972), Friedlander (1982), and the relevant chapters of Hörmander (1963) and Yosida (1965). Schwartz (1965) is especially recommended as an introduction.

Rationale


The guiding principle which leads to requiring that the functions ϕ above (traditionally called `test functions') should be well behaved is that correspondingly `wilder' behaviour can then be accommodated in the limiting behaviour of the [f_{\nu}] while still keeping the integrals [{\textstyle\int_{{\bb R}^{n}}} f_{\nu} \varphi \;\hbox{d}^{n} {\bf x}] under control. Thus

  • (i) to minimize restrictions on the limiting behaviour of the [f_{\nu}] at infinity, the ϕ's will be chosen to have compact support;

  • (ii) to minimize restrictions on the local behaviour of the [f_{\nu}], the ϕ's will be chosen infinitely differentiable.

To ensure further the continuity of functionals such as [T_{\nu}] with respect to the test function ϕ as the [f_{\nu}] go increasingly wild, very strong control will have to be exercised in the way in which a sequence [(\varphi_{j})] of test functions will be said to converge towards a limiting ϕ: conditions will have to be imposed not only on the values of the functions [\varphi_{j}], but also on those of all their derivatives. Hence, defining a strong enough topology on the space of test functions ϕ is an essential prerequisite to the development of a satisfactory theory of distributions.

Test-function spaces


With this rationale in mind, the following function spaces will be defined for any open subset Ω of [{\bb R}^{n}] (which may be the whole of [{\bb R}^{n}]):

  • (a) [{\scr E}(\Omega)] is the space of complex-valued functions over Ω which are indefinitely differentiable;

  • (b) [{\scr D}(\Omega)] is the subspace of [{\scr E}(\Omega)] consisting of functions with (unspecified) compact support contained in Ω;

  • (c) [{\scr D}_{K} (\Omega)] is the subspace of [{\scr D}(\Omega)] consisting of functions whose (compact) support is contained within a fixed compact subset K of Ω.

When Ω is unambiguously defined by the context, we will simply write [{\scr E},{\scr D},{\scr D}_{K}].

It sometimes suffices to require the existence of continuous derivatives only up to finite order m inclusive. The corresponding spaces are then denoted [{\scr E}^{(m)},{\scr D}^{(m)},{\scr D}_{K}^{(m)}] with the convention that if [m = 0], only continuity is required.

The topologies on these spaces constitute the most important ingredients of distribution theory, and will be outlined in some detail.

Topology on [{\scr E}(\Omega)]


It is defined by the family of semi-norms [\varphi \in {\scr E}(\Omega) \;\longmapsto\; \sigma_{{\bf p}, \,  K} (\varphi) = \sup\limits_{{\bf x} \in K} |D^{{\bf p}} \varphi ({\bf x})|,] where p is a multi-index and K a compact subset of Ω. A fundamental system S of neighbourhoods of the origin in [{\scr E}(\Omega)] is given by subsets of [{\scr E}(\Omega)] of the form [V (m, \varepsilon, K) = \{\varphi \in {\scr E}(\Omega)| |{\bf p}| \leq m \Rightarrow \sigma_{{\bf p}, K} (\varphi) \;\lt\; \varepsilon\}] for all natural integers m, positive reals [\varepsilon], and compact subsets K of Ω. Since a countable family of compact subsets K suffices to cover Ω, and since restricted values of [\varepsilon] of the form [\varepsilon = 1/N] lead to the same topology, S is equivalent to a countable system of neighbourhoods and hence [{\scr E}(\Omega)] is metrizable.
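A single semi-norm [\sigma_{{\bf p}, K}] can be sketched numerically in one dimension (the function φ, the order p = 1 and the compact set K = [−2, 2] are all illustrative choices):

```python
import numpy as np

# The semi-norm sigma_{p,K}(phi) = sup_{x in K} |D^p phi(x)|, sketched for
# phi(x) = exp(-x^2), p = 1, K = [-2, 2].
xs = np.linspace(-2.0, 2.0, 100001)      # a fine grid over the compact K
dphi = -2.0 * xs * np.exp(-xs**2)        # exact first derivative D^1 phi
sigma_1_K = np.max(np.abs(dphi))         # sup over K, approximated on the grid
# the sup is attained inside K at x = 1/sqrt(2), with value sqrt(2) e^{-1/2}
assert np.isclose(sigma_1_K, np.sqrt(2.0) * np.exp(-0.5), atol=1e-6)
```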

Convergence in [{\scr E}] may thus be defined by means of sequences. A sequence [(\varphi_{\nu})] in [{\scr E}] will be said to converge to 0 if for any given [V (m, \varepsilon, K)] there exists [\nu_{0}] such that [\varphi_{\nu} \in V (m, \varepsilon, K)] whenever [\nu \gt \nu_{0}]; in other words, if the [\varphi_{\nu}] and all their derivatives [D^{\bf p} \varphi_{\nu}] converge to 0 uniformly on any given compact K in Ω.

Topology on [{\scr D}_{K} (\Omega)]

| top | pdf |

It is defined by the family of semi-norms [\varphi \in {\scr D}_{K} (\Omega) \;\longmapsto\; \sigma_{\bf p} (\varphi) = \sup\limits_{{\bf x} \in K} |D^{{\bf p}} \varphi ({\bf x})|,] where K is now fixed. The fundamental system S of neighbourhoods of the origin in [{\scr D}_{K}] is given by sets of the form [V (m, \varepsilon) = \{\varphi \in {\scr D}_{K} (\Omega)| |{\bf p}| \leq m \Rightarrow \sigma_{\bf p} (\varphi) \;\lt\; \varepsilon\}.] It is equivalent to the countable subsystem of the [V (m, 1/N)], hence [{\scr D}_{K} (\Omega)] is metrizable.

Convergence in [{\scr D}_{K}] may thus be defined by means of sequences. A sequence [(\varphi_{\nu})] in [{\scr D}_{K}] will be said to converge to 0 if for any given [V(m, \varepsilon)] there exists [\nu_{0}] such that [\varphi_{\nu} \in V(m, \varepsilon)] whenever [\nu \gt \nu_{0}]; in other words, if the [\varphi_{\nu}] and all their derivatives [D^{\bf p} \varphi_{\nu}] converge to 0 uniformly in K.

Topology on [{\scr D}(\Omega)]

It is defined by the fundamental system of neighbourhoods of the origin consisting of sets of the form [\eqalign{&V((m), (\varepsilon)) \cr &\qquad = \left\{\varphi \in {\scr D}(\Omega)| |{\bf p}| \leq m_{\nu} \Rightarrow \sup\limits_{\|{\bf x}\| \leq \nu} |D^{{\bf p}} \varphi ({\bf x})| \;\lt\; \varepsilon_{\nu} \hbox{ for all } \nu\right\},}] where (m) is an increasing sequence [(m_{\nu})] of integers tending to [+ \infty] and [(\varepsilon)] is a decreasing sequence [(\varepsilon_{\nu})] of positive reals tending to 0, as [\nu \rightarrow \infty].

This topology is not metrizable, because the sets of sequences (m) and [(\varepsilon)] are essentially uncountable. It can, however, be shown to be the inductive limit of the topologies of the subspaces [{\scr D}_{K}], in the following sense: V is a neighbourhood of the origin in [{\scr D}] if and only if its intersection with [{\scr D}_{K}] is a neighbourhood of the origin in [{\scr D}_{K}] for any given compact K in Ω.

A sequence [(\varphi_{\nu})] in [{\scr D}] will thus be said to converge to 0 in [{\scr D}] if all the [\varphi_{\nu}] belong to some [{\scr D}_{K}] (with K a compact subset of Ω independent of ν) and if [(\varphi_{\nu})] converges to 0 in [{\scr D}_{K}].

As a result, a complex-valued functional T on [{\scr D}] will be said to be continuous for the topology of [{\scr D}] if and only if, for any given compact K in Ω, its restriction to [{\scr D}_{K}] is continuous for the topology of [{\scr D}_{K}], i.e. maps convergent sequences in [{\scr D}_{K}] to convergent sequences in [{\bb C}].

This property of [{\scr D}], i.e. having a non-metrizable topology which is the inductive limit of metrizable topologies in its subspaces [{\scr D}_{K}], conditions the whole structure of distribution theory and dictates that of many of its proofs.

Topologies on [{\scr E}^{(m)}, {\scr D}_{K}^{(m)}, {\scr D}^{(m)}]

These are defined similarly, but only involve conditions on derivatives up to order m.

Definition of distributions

A distribution T on Ω is a linear form over [{\scr D}(\Omega)], i.e. a map [T: \varphi \;\longmapsto\; \langle T, \varphi \rangle] which associates linearly a complex number [\langle T, \varphi \rangle] to any [\varphi \in {\scr D}(\Omega)], and which is continuous for the topology of that space. In the terminology of Section[link], T is an element of [{\scr D}\,'(\Omega)], the topological dual of [{\scr D}(\Omega)].
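In computational terms, a distribution can be modelled naively as a linear functional acting on test functions. The sketch below (Python; representing test functions as ordinary callables is an assumption of the example, and continuity of course cannot be verified by finitely many evaluations, only linearity can be exercised) illustrates the definition with Dirac's measure:

```python
import math

def bump(x):  # smooth test function with support [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# Dirac's measure as a linear form: <delta, phi> = phi(0).
delta = lambda phi: phi(0.0)

phi1 = bump
phi2 = lambda x: bump(x - 0.5)   # a translated test function
lhs = delta(lambda x: 2.0 * phi1(x) + 3.0 * phi2(x))
rhs = 2.0 * delta(phi1) + 3.0 * delta(phi2)
print(lhs, rhs)  # equal: the functional is linear in phi
```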

Continuity over [{\scr D}] is equivalent to continuity over [{\scr D}_{K}] for all compact K contained in Ω, and hence to the condition that for any sequence [(\varphi_{\nu})] in [{\scr D}] such that

  • (i) Supp [\varphi_{\nu}] is contained in some compact K independent of ν,

  • (ii) the sequences [(|D^{\bf p} \varphi_{\nu}|)] converge uniformly to 0 on K for all multi-indices p;

the sequence of complex numbers [\langle T, \varphi_{\nu}\rangle] converges to 0 in [{\bb C}].

If the continuity of a distribution T requires (ii)[link] for [|{\bf p}| \leq m] only, T may be defined over [{\scr D}^{(m)}] and thus [T \in {\scr D}\,'^{(m)}]; T is said to be a distribution of finite order m. In particular, for [m = 0, {\scr D}^{(0)}] is the space of continuous functions with compact support, and a distribution [T \in {\scr D}\,'^{(0)}] is a (Radon) measure as used in the theory of integration. Thus measures are particular cases of distributions.

Generally speaking, the larger a space of test functions, the smaller its topological dual: [m \;\lt\; n \Rightarrow {\scr D}^{(m)} \supset {\scr D}^{(n)} \Rightarrow {\scr D}\,'^{(n)} \supset {\scr D}\,'^{(m)}.] This clearly results from the observation that if the ϕ's are allowed to be less regular, then less wildness can be accommodated in T if the continuity of the map [\varphi \;\longmapsto\; \langle T, \varphi \rangle] with respect to ϕ is to be preserved.

First examples of distributions

  • (i) The linear map [\varphi \;\longmapsto\; \langle \delta, \varphi \rangle = \varphi ({\bf 0})] is a measure (i.e. a zeroth-order distribution) called Dirac's measure or (improperly) Dirac's `δ-function'.

  • (ii) The linear map [\varphi \;\longmapsto\; \langle \delta_{({\bf a})}, \varphi \rangle = \varphi ({\bf a})] is called Dirac's measure at point [{\bf a} \in {\bb R}^{n}].

  • (iii) The linear map [\varphi\;\longmapsto\; (-1)^{|{\bf p}|} D^{\bf p} \varphi ({\bf a})] is a distribution of order [m = |{\bf p}| \gt 0], and hence is not a measure.

  • (iv) The linear map [\varphi \;\longmapsto\; {\textstyle\sum_{\nu \gt 0}} \varphi^{(\nu)} (\nu)] is a distribution of infinite order on [{\bb R}]: the order of differentiation is bounded for each ϕ (because ϕ has compact support) but is not bounded as ϕ varies.

  • (v) If [({\bf p}_{\nu})] is a sequence of multi-indices [{\bf p}_{\nu} = (p_{1\nu}, \ldots, p_{n\nu})] such that [|{\bf p}_{\nu}| \rightarrow \infty] as [\nu \rightarrow \infty], then the linear map [\varphi \;\longmapsto\; {\textstyle\sum_{\nu \gt 0}} (D^{{\bf p}_{\nu}} \varphi) ({\bf p}_{\nu})] is a distribution of infinite order on [{\bb R}^{n}].

Distributions associated to locally integrable functions

Let f be a complex-valued function over Ω such that [{\textstyle\int_{K}} | \;f({\bf x}) | \;\hbox{d}^{n} {\bf x}] exists for any given compact K in Ω; f is then called locally integrable.

The linear mapping from [{\scr D}(\Omega)] to [{\bb C}] defined by [\varphi \;\longmapsto\; {\textstyle\int\limits_{\Omega}} f({\bf x}) \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x}] may then be shown to be continuous over [{\scr D}(\Omega)]. It thus defines a distribution [T_{f} \in {\scr D}\,'(\Omega)]: [\langle T_{f}, \varphi \rangle = {\textstyle\int\limits_{\Omega}} f({\bf x}) \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x}.] As the continuity of [T_{f}] only requires that [\varphi \in {\scr D}^{(0)} (\Omega)], [T_{f}] is actually a Radon measure.
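As a sketch of how [T_{f}] acts, the pairing [\langle T_{f}, \varphi \rangle] can be approximated by quadrature over the support of ϕ (Python; the midpoint rule, the grid size and the helper names are assumptions of this example):

```python
import math

def bump(x):  # smooth test function with support [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# <T_f, phi> = integral of f*phi over the support of phi, midpoint rule.
def pair(f, phi, a=-1.0, b=1.0, n=20000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * phi(a + (k + 0.5) * h) for k in range(n)) * h

print(pair(lambda x: 1.0, bump))             # integral of the bump itself
print(pair(lambda x: x, bump))               # 0 by parity (bump is even)
print(pair(lambda x: abs(x) ** -0.5, bump))  # f unbounded at 0 yet locally integrable
```

The last line illustrates that local integrability, not boundedness, is what the definition requires of f.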

It can be shown that two locally integrable functions f and g define the same distribution, i.e. [\langle T_{f}, \varphi \rangle = \langle T_{g}, \varphi \rangle \quad \hbox{for all } \varphi \in {\scr D},] if and only if they are equal almost everywhere. The classes of locally integrable functions modulo this equivalence form a vector space denoted [L_{\rm loc}^{1} (\Omega)]; each element of [L_{\rm loc}^{1} (\Omega)] may therefore be identified with the distribution [T_{f}] defined by any one of its representatives f.

Support of a distribution

A distribution [T \in {\scr D}\,'(\Omega)] is said to vanish on an open subset ω of Ω if it vanishes on all functions in [{\scr D}(\omega)], i.e. if [\langle T, \varphi \rangle = 0] whenever [\varphi \in {\scr D}(\omega)].

The support of a distribution T, denoted Supp T, is then defined as the complement of the set-theoretic union of those open subsets ω on which T vanishes; or equivalently as the smallest closed subset of Ω outside which T vanishes.

When [T = T_{f}] for [f \in L_{\rm loc}^{1} (\Omega)], then Supp [T = \hbox{Supp } f], so that the two notions coincide. Clearly, if Supp T and Supp ϕ are disjoint subsets of Ω, then [\langle T, \varphi \rangle = 0].

It can be shown that any distribution [T \in {\scr D}\,'] with compact support may be extended from [{\scr D}] to [{\scr E}] while remaining continuous, so that [T \in {\scr E}\,']; and that conversely, if [S \in {\scr E}\,'], then its restriction T to [{\scr D}] is a distribution with compact support. Thus, the topological dual [{\scr E}\,'] of [{\scr E}] consists of those distributions in [{\scr D}\,'] which have compact support. This is intuitively clear since, if the condition of having compact support is fulfilled by T, it need no longer be required of ϕ, which may then roam through [{\scr E}] rather than [{\scr D}].

Convergence of distributions

A sequence [(T_{j})] of distributions will be said to converge in [{\scr D}\,'] to a distribution T as [j \rightarrow \infty] if, for any given [\varphi \in {\scr D}], the sequence of complex numbers [(\langle T_{j}, \varphi \rangle)] converges in [{\bb C}] to the complex number [\langle T, \varphi \rangle].

A series [{\textstyle\sum_{j=0}^{\infty}} T_{j}] of distributions will be said to converge in [{\scr D}\,'] and to have distribution S as its sum if the sequence of partial sums [S_{k} = {\textstyle\sum_{j=0}^{k}} T_{j}] converges to S.

These definitions of convergence in [{\scr D}\,'] assume that the limits T and S are known in advance, and are distributions. This raises the question of the completeness of [{\scr D}\,']: if a sequence [(T_{j})] in [{\scr D}\,'] is such that the sequence [(\langle T_{j}, \varphi \rangle)] has a limit in [{\bb C}] for all [\varphi \in {\scr D}], does the map [\varphi \;\longmapsto\; \lim_{j \rightarrow \infty} \langle T_{j}, \varphi \rangle] define a distribution [T \in {\scr D}\,']? In other words, does the limiting process preserve continuity with respect to ϕ? It is a remarkable theorem that, because of the strong topology on [{\scr D}], this is actually the case. An analogous statement holds for series. This notion of convergence does not coincide with any of the classical notions used for ordinary functions: for example, the sequence [(\varphi_{\nu})] with [\varphi_{\nu} (x) = \cos \nu x] converges to 0 in [{\scr D}\,'({\bb R})], but fails to do so by any of the standard criteria.
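This example can be checked numerically: pairing [\cos \nu x] against a fixed test function yields numbers that tend to 0 as ν grows, even though [\cos \nu x] converges to 0 in no classical sense. A minimal sketch (Python; the quadrature and helper names are assumptions of the example):

```python
import math

def bump(x):  # smooth test function with support [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# <T_f, phi> over the support of phi, midpoint rule.
def pair(f, phi, a=-1.0, b=1.0, n=40000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * phi(a + (k + 0.5) * h) for k in range(n)) * h

vals = {}
for nu in (1, 10, 100):
    vals[nu] = pair(lambda x, nu=nu: math.cos(nu * x), bump)
    print(nu, vals[nu])   # decays rapidly with nu, although sup |cos(nu x)| = 1
```

The decay reflects the increasingly complete cancellation of the oscillations against the smooth ϕ.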

An example of convergent sequences of distributions is provided by sequences which converge to δ. If [(\;f_{\nu})] is a sequence of locally summable functions on [{\bb R}^{n}] such that

  • (i) [\textstyle{\int_{\|{\bf x}\| \lt\; b}} \;f_{\nu} ({\bf x}) \;\hbox{d}^{n} {\bf x} \rightarrow 1] as [\nu \rightarrow \infty] for all [b \gt 0];

  • (ii) [{\textstyle\int_{a \leq \|{\bf x}\| \leq 1/a}} |\;f_{\nu} ({\bf x})| \;\hbox{d}^{n} {\bf x} \rightarrow 0] as [\nu \rightarrow \infty] for all [0 \;\lt\; a \;\lt\; 1];

  • (iii) there exists [d \gt 0] and [M \gt 0] such that [{\textstyle\int_{\|{\bf x}\|\lt\; d}} |\;f_{\nu} ({\bf x})| \;\hbox{d}^{n} {\bf x}\lt M] for all ν;

then the sequence [(T_{f_{\nu}})] of distributions converges to δ in [{\scr D}\,'({\bb R}^{n})].

Operations on distributions

As a general rule, the definitions are chosen so that the operations coincide with those on functions whenever a distribution is associated to a function.

Most definitions consist in transferring to a distribution T an operation which is well defined on [\varphi \in {\scr D}] by `transposing' it in the duality product [\langle T, \varphi \rangle]; this procedure will map T to a new distribution provided the original operation maps [{\scr D}] continuously into itself.

Differentiation

  • (a) Definition and elementary properties

    If T is a distribution on [{\bb R}^{n}], its partial derivative [\partial_{i} T] with respect to [x_{i}] is defined by [\langle \partial_{i} T, \varphi \rangle = - \langle T, \partial_{i} \varphi \rangle]

    for all [\varphi \in {\scr D}]. This does define a distribution, because the partial differentiations [\varphi \;\longmapsto\; \partial_{i} \varphi] are continuous for the topology of [{\scr D}].

    Suppose that [T = T_{f}] with f a locally integrable function such that [\partial_{i}\; f] exists and is almost everywhere continuous. Then integration by parts along the [x_{i}] axis gives [\eqalign{&{\textstyle\int\limits_{{\bb R}}} \partial_{i}\; f(x_{1}, \ldots, x_{i}, \ldots, x_{n}) \varphi (x_{1}, \ldots, x_{i}, \ldots, x_{n}) \;\hbox{d}x_{i} \cr &\quad = (\;f\varphi)(x_{1}, \ldots, + \infty, \ldots, x_{n}) - (\;f\varphi)(x_{1}, \ldots, - \infty, \ldots, x_{n}) \cr &\qquad - {\textstyle\int\limits_{{\bb R}}} f(x_{1}, \ldots, x_{i}, \ldots, x_{n}) \partial_{i} \varphi (x_{1}, \ldots, x_{i}, \ldots, x_{n}) \;\hbox{d}x_{i}\hbox{;}}] the integrated term vanishes, since ϕ has compact support, showing that [\partial_{i} T_{f} = T_{\partial_{i}\; f}].

    The test functions [\varphi \in {\scr D}] are infinitely differentiable. Therefore, transpositions like that used to define [\partial_{i} T] may be repeated, so that any distribution is infinitely differentiable. For instance, [\displaylines{\langle \partial_{ij}^{2} T, \varphi \rangle = - \langle \partial_{j} T, \partial_{i} \varphi \rangle = \langle T, \partial_{ij}^{2} \varphi \rangle, \cr \langle D^{\bf p} T, \varphi \rangle = (-1)^{|{\bf p}|} \langle T, D^{\bf p} \varphi \rangle, \cr \langle \Delta T, \varphi \rangle = \langle T, \Delta \varphi \rangle,}] where Δ is the Laplacian operator. The derivatives of Dirac's δ distribution are [\langle D^{\bf p} \delta, \varphi \rangle = (-1)^{|{\bf p}|} \langle \delta, D^{\bf p} \varphi \rangle = (-1)^{|{\bf p}|} D^{\bf p} \varphi ({\bf 0}).]

    It is remarkable that differentiation is a continuous operation for the topology on [{\scr D}\,']: if a sequence [(T_{j})] of distributions converges to distribution T, then the sequence [(D^{\bf p} T_{j})] of derivatives converges to [D^{\bf p} T] for any multi-index p, since as [j \rightarrow \infty] [\langle D^{\bf p} T_{j}, \varphi \rangle = (-1)^{|{\bf p}|} \langle T_{j}, D^{\bf p} \varphi \rangle \rightarrow (-1)^{|{\bf p}|} \langle T, D^{\bf p} \varphi \rangle = \langle D^{\bf p} T, \varphi \rangle.] An analogous statement holds for series: any convergent series of distributions may be differentiated termwise to all orders. This illustrates how `robust' the constructs of distribution theory are in comparison with those of ordinary function theory, where similar statements are notoriously untrue.
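The defining identity [\langle \partial_{i} T, \varphi \rangle = - \langle T, \partial_{i} \varphi \rangle] can be verified numerically for a smooth f, where it must agree with the ordinary derivative. A sketch (Python; the closed-form derivative of the bump and the quadrature are assumptions of this example):

```python
import math

def bump(x):  # smooth test function with support [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def dbump(x):  # phi'(x) = phi(x) * (-2x)/(1 - x^2)^2 on the support
    return bump(x) * (-2.0 * x) / (1.0 - x * x) ** 2 if abs(x) < 1.0 else 0.0

def pair(f, phi, a=-1.0, b=1.0, n=20000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * phi(a + (k + 0.5) * h) for k in range(n)) * h

lhs = -pair(math.sin, dbump)   # <(T_sin)', phi> = -<T_sin, phi'>
rhs = pair(math.cos, bump)     # <T_cos, phi>, since (sin)' = cos
print(lhs, rhs)                # agree to quadrature accuracy
```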

  • (b) Differentiation under the duality bracket

    Limiting processes and differentiation may also be carried out under the duality bracket [\langle ,\rangle] as under the integral sign with ordinary functions. Let the function [\varphi = \varphi ({\bf x}, \lambda)] depend on a parameter [\lambda \in \Lambda] and a vector [{\bf x} \in {\bb R}^{n}] in such a way that all functions [\varphi_{\lambda}: {\bf x} \;\longmapsto\; \varphi ({\bf x}, \lambda)] be in [{\scr D}({\bb R}^{n})] for all [\lambda \in \Lambda]. Let [T \in {\scr D}^{\prime}({\bb R}^{n})] be a distribution, let [I(\lambda) = \langle T, \varphi_{\lambda}\rangle] and let [\lambda_{0} \in \Lambda] be a given parameter value. Suppose that, as λ runs through a small enough neighbourhood of [\lambda_{0}],

    • (i) all the [\varphi_{\lambda}] have their supports in a fixed compact subset K of [{\bb R}^{n}];

    • (ii) all the derivatives [D^{\bf p} \varphi_{\lambda}] have a partial derivative with respect to λ which is continuous with respect to x and λ.

    Under these hypotheses, [I(\lambda)] is differentiable (in the usual sense) with respect to λ near [\lambda_{0}], and its derivative may be obtained by `differentiation under the [\langle ,\rangle] sign': [{\hbox{d}I \over \hbox{d}\lambda} = \langle T, \partial_{\lambda} \varphi_{\lambda}\rangle.]

  • (c) Effect of discontinuities

    When a function f or its derivatives are no longer continuous, the derivatives [D^{\bf p} T_{f}] of the associated distribution [T_{f}] may no longer coincide with the distributions associated to the functions [D^{\bf p} f].

    In dimension 1, the simplest example is Heaviside's unit step function [Y\; [Y(x) = 0 \hbox{ for } x \;\lt\; 0, Y(x) = 1 \hbox{ for } x \geq 0]]: [\langle (T_{Y})', \varphi \rangle = - \langle (T_{Y}), \varphi'\rangle = - {\textstyle\int\limits_{0}^{+ \infty}} \varphi' (x) \;\hbox{d}x = \varphi (0) = \langle \delta, \varphi \rangle.] Hence [(T_{Y})' = \delta], a result long used `heuristically' by electrical engineers [see also Dirac (1958)[link]].

    Let f be infinitely differentiable for [x \;\lt\; 0] and [x \gt 0] but have discontinuous derivatives [f^{(m)}] at [x = 0] [[\;f^{(0)}] being f itself] with jumps [\sigma_{m} = f^{(m)} (0 +) - f^{(m)} (0 -)]. Consider the functions: [\eqalign{g_{0} &= f - \sigma_{0} Y \cr g_{1} &= g'_{0} - \sigma_{1} Y \cr &\;\;\vdots \cr g_{k} &= g'_{k - 1} - \sigma_{k} Y.}] The [g_{k}] are continuous, their derivatives [g'_{k}] are continuous almost everywhere [which implies that [(T_{g_{k}})' = T_{g'_{k}}] and [g'_{k} = f^{(k + 1)}] almost everywhere]. This yields immediately: [\eqalign{(T_{f})' &= T_{f'} + \sigma_{0} \delta \cr (T_{f})'' &= T_{f''} + \sigma_{0} \delta' + \sigma_{1} \delta \cr &\;\;\vdots \cr (T_{f})^{(m)} &= T_{f^{(m)}} + \sigma_{0} \delta^{(m - 1)} + \ldots + \sigma_{m - 1} \delta.}] Thus the `distributional derivatives' [(T_{f})^{(m)}] differ from the usual functional derivatives [T_{f^{(m)}}] by singular terms associated with discontinuities.
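These results lend themselves to a numerical check. In the sketch below (Python; the quadrature and the closed-form [\varphi'] are assumptions of the example) the pairing [-\langle T_{Y}, \varphi' \rangle] reproduces [\varphi(0)], i.e. [(T_{Y})' = \delta], the case [\sigma_{0} = 1], [f' = 0] of the jump formulae:

```python
import math

def bump(x):  # smooth test function with support [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def dbump(x):  # phi'(x) = phi(x) * (-2x)/(1 - x^2)^2 on the support
    return bump(x) * (-2.0 * x) / (1.0 - x * x) ** 2 if abs(x) < 1.0 else 0.0

def pair(f, phi, a=-1.0, b=1.0, n=20000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * phi(a + (k + 0.5) * h) for k in range(n)) * h

Y = lambda x: 1.0 if x >= 0 else 0.0   # Heaviside's unit step
lhs = -pair(Y, dbump)                  # <(T_Y)', phi> = -<T_Y, phi'>
print(lhs, bump(0.0))                  # both approximately exp(-1): (T_Y)' = delta
```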

    In dimension n, let f be infinitely differentiable everywhere except on a smooth hypersurface S, across which its partial derivatives show discontinuities. Let [\sigma_{0}] and [\sigma_{\nu}] denote the discontinuities of f and its normal derivative [\partial_{\nu}\; f] across S (both [\sigma_{0}] and [\sigma_{\nu}] are functions of position on S), and let [\delta_{(S)}] and [\partial_{\nu} \delta_{(S)}] be defined by [\eqalign{\langle \delta_{(S)}, \varphi \rangle &= {\textstyle\int\limits_{S}} \varphi \;\hbox{d}^{n - 1} S \cr \langle \partial_{\nu} \delta_{(S)}, \varphi \rangle &= - {\textstyle\int\limits_{S}} \partial_{\nu} \varphi \;\hbox{d}^{n - 1} S.}] Integration by parts shows that [\partial_{i} T_{f} = T_{\partial_{i}\; f} + \sigma_{0} \cos \theta_{i} \delta_{(S)},] where [\theta_{i}] is the angle between the [x_{i}] axis and the normal to S along which the jump [\sigma_{0}] occurs, and that the Laplacian of [T_{f}] is given by [\Delta (T_{f}) = T_{\Delta f} + \sigma_{\nu} \delta_{(S)} + \partial_{\nu} [\sigma_{0} \delta_{(S)}].] The latter result is a statement of Green's theorem in terms of distributions. It will be used in Section[link] to calculate the Fourier transform of the indicator function of a molecular envelope.

Integration of distributions in dimension 1

The reverse operation from differentiation, namely calculating the `indefinite integral' of a distribution S, consists in finding a distribution T such that [T' = S].

For all [\chi \in {\scr D}] such that [\chi = \psi'] with [\psi \in {\scr D}], we must have [\langle T, \chi \rangle = - \langle S, \psi \rangle .] This condition defines T in a `hyperplane' [{\scr H}] of [{\scr D}], whose equation [\langle 1, \chi \rangle \equiv \langle 1, \psi' \rangle = 0] reflects the fact that ψ has compact support.

To specify T in the whole of [{\scr D}], it suffices to specify the value of [\langle T, \varphi_{0} \rangle] where [\varphi_{0} \in {\scr D}] is such that [\langle 1, \varphi_{0} \rangle = 1]: then any [\varphi \in {\scr D}] may be written uniquely as [\varphi = \lambda \varphi_{0} + \psi'] with [\lambda = \langle 1, \varphi \rangle, \qquad \chi = \varphi - \lambda \varphi_{0}, \qquad \psi (x) = {\textstyle\int\limits_{0}^{x}} \chi (t) \;\hbox{d}t,] and T is defined by [\langle T, \varphi \rangle = \lambda \langle T, \varphi_{0} \rangle - \langle S, \psi \rangle.] The freedom in the choice of [\varphi_{0}] means that T is defined up to an additive constant.

Multiplication of distributions by functions

The product [\alpha T] of a distribution T on [{\bb R}^{n}] by a function α over [{\bb R}^{n}] will be defined by transposition: [\langle \alpha T, \varphi \rangle = \langle T, \alpha \varphi \rangle \quad \hbox{for all } \varphi \in {\scr D}.] In order that [\alpha T] be a distribution, the mapping [\varphi \;\longmapsto\; \alpha \varphi] must send [{\scr D}({\bb R}^{n})] continuously into itself; hence the multipliers α must be infinitely differentiable. The product of two general distributions cannot be defined. The need for a careful treatment of multipliers of distributions will become clear when it is later shown (Section[link]) that the Fourier transformation turns convolutions into multiplications and vice versa.

If T is a distribution of order m, then α needs only have continuous derivatives up to order m. For instance, δ is a distribution of order zero, and [\alpha \delta = \alpha ({\bf 0}) \delta] is a distribution provided α is continuous; this relation is of fundamental importance in the theory of sampling and of the properties of the Fourier transformation related to sampling (Sections[link],[link]). More generally, [D^{{\bf p}}\delta] is a distribution of order [|{\bf p}|], and the following formula holds for all [\alpha \in {\scr D}^{(m)}] with [m = |{\bf p}|]: [\alpha (D^{{\bf p}}\delta) = {\displaystyle\sum\limits_{{\bf q} \leq {\bf p}}} (-1)^{|{\bf p}-{\bf q}|} \pmatrix{{\bf p}\cr {\bf q}\cr} (D^{{\bf p}-{\bf q}} \alpha) ({\bf 0}) D^{\bf q}\delta.]
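The relation [\alpha \delta = \alpha({\bf 0}) \delta] is immediate to exercise once multiplication is implemented by transposition. A sketch (Python; representing distributions as callables is an assumption of the example):

```python
import math

def bump(x):  # smooth test function with support [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

delta = lambda phi: phi(0.0)   # <delta, phi> = phi(0)

def mult(alpha, T):
    # <alpha T, phi> = <T, alpha phi>: multiplication by transposition.
    return lambda phi: T(lambda x: alpha(x) * phi(x))

alphaT = mult(math.cos, delta)
print(alphaT(bump), math.cos(0.0) * delta(bump))  # equal: alpha*delta = alpha(0)*delta
```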

The derivative of a product is easily shown to be [\partial_{i}(\alpha T) = (\partial_{i}\alpha) T + \alpha (\partial_{i}T)] and generally for any multi-index p [D^{\bf p}(\alpha T) = {\displaystyle\sum\limits_{{\bf q}\leq {\bf p}}} \pmatrix{{\bf p}\cr {\bf q}\cr} (D^{{\bf p}-{\bf q}} \alpha) D^{{\bf q}}T.]

Division of distributions by functions

Given a distribution S on [{\bb R}^{n}] and an infinitely differentiable multiplier function α, the division problem consists in finding a distribution T such that [\alpha T = S].

If α never vanishes, [T = S/\alpha] is the unique answer. If [n = 1], and if α has only isolated zeros of finite order, it can be reduced to a collection of cases where the multiplier is [x^{m}], for which the general solution can be shown to be of the form [T = U + {\textstyle\sum\limits_{i=0}^{m-1}} c_{i}\delta^{(i)},] where U is a particular solution of the division problem [x^{m} U = S] and the [c_{i}] are arbitrary constants.

In dimension [n \gt 1], the problem is much more difficult, but is of fundamental importance in the theory of linear partial differential equations, since the Fourier transformation turns the problem of solving these into a division problem for distributions [see Hörmander (1963)[link]].

Transformation of coordinates

Let σ be a smooth non-singular change of variables in [{\bb R}^{n}], i.e. an infinitely differentiable mapping from an open subset Ω of [{\bb R}^{n}] to Ω′ in [{\bb R}^{n}], whose Jacobian [J(\sigma) = \det \left[{\partial \sigma ({\bf x}) \over \partial {\bf x}}\right]] vanishes nowhere in Ω. By the implicit function theorem, the inverse mapping [\sigma^{-1}] from Ω′ to Ω is well defined.

If f is a locally summable function on Ω, then the function [\sigma^{\#} f] defined by [(\sigma^{\#} f)({\bf x}) = f[\sigma^{-1}({\bf x})]] is a locally summable function on Ω′, and for any [\varphi \in {\scr D}(\Omega')] we may write: [\eqalign{{\textstyle\int\limits_{\Omega'}} (\sigma^{\#} f) ({\bf x}) \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x} &= {\textstyle\int\limits_{\Omega'}} f[\sigma^{-1} ({\bf x})] \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x} \cr &= {\textstyle\int\limits_{\Omega}} f({\bf y}) \varphi [\sigma ({\bf y})]|J(\sigma)| \;\hbox{d}^{n} {\bf y} \quad \hbox{by } {\bf x} = \sigma ({\bf y}).}] In terms of the associated distributions [\langle T_{\sigma^{\#} f}, \varphi \rangle = \langle T_{f}, |J(\sigma)|(\sigma^{-1})^{\#} \varphi \rangle.]

This operation can be extended to an arbitrary distribution T by defining its image [\sigma^{\#} T] under coordinate transformation σ through [\langle \sigma^{\#} T, \varphi \rangle = \langle T, |J(\sigma)|(\sigma^{-1})^{\#} \varphi \rangle,] which is well defined provided that σ is proper, i.e. that [\sigma^{-1}(K)] is compact whenever K is compact.

For instance, if [\sigma: {\bf x} \;\longmapsto\; {\bf x} + {\bf a}] is a translation by a vector a in [{\bb R}^{n}], then [|J(\sigma)| = 1]; [\sigma^{\#}] is denoted by [\tau_{\bf a}], and the translate [\tau_{\bf a} T] of a distribution T is defined by [\langle \tau_{\bf a} T, \varphi \rangle = \langle T, \tau_{-{\bf a}} \varphi \rangle.]

Let [A: {\bf x} \;\longmapsto\; {\bf Ax}] be a linear transformation defined by a non-singular matrix A. Then [J(A) = \det {\bf A}], and [\langle A^{\#} T, \varphi \rangle = |\det {\bf A}| \langle T, (A^{-1})^{\#} \varphi \rangle.] This formula will be shown later (Sections[link],[link]) to be the basis for the definition of the reciprocal lattice.

In particular, if [{\bf A} = -{\bf I}], where I is the identity matrix, A is an inversion through a centre of symmetry at the origin, and denoting [A^{\#} \varphi] by [\breve{\varphi}] we have: [\langle \breve{T}, \varphi \rangle = \langle T, \breve{\varphi} \rangle.] T is called an even distribution if [\breve{T} = T], an odd distribution if [\breve{T} = -T].

If [{\bf A} = \lambda {\bf I}] with [\lambda \gt 0], A is called a dilation and [\langle A^{\#} T, \varphi \rangle = \lambda^{n} \langle T, (A^{-1})^{\#} \varphi \rangle.] Writing symbolically δ as [\delta ({\bf x})] and [A^{\#} \delta] as [\delta ({\bf x}/\lambda)], we have: [\delta ({\bf x}/\lambda) = \lambda^{n} \delta ({\bf x}).] If [n = 1] and f is a function with isolated simple zeros [x_{j}], then in the same symbolic notation [\delta [\;f(x)] = \sum\limits_{j} {1 \over |\;f'(x_{j})|} \delta_{(x_{j})},] where each [\lambda_{j} = 1/|\;f'(x_{j})|] is analogous to a `Lorentz factor' at zero [x_{j}].

Tensor product of distributions

The purpose of this construction is to extend Fubini's theorem to distributions. Following Section[link], we may define the tensor product [L_{\rm loc}^{1} ({\bb R}^{m}) \otimes L_{\rm loc}^{1} ({\bb R}^{n})] as the vector space of finite linear combinations of functions of the form [f \otimes g: ({\bf x},{ \bf y}) \;\longmapsto\; f({\bf x})g({\bf y}),] where [{\bf x} \in {\bb R}^{m},{\bf y} \in {\bb R}^{n}, f \in L_{\rm loc}^{1} ({\bb R}^{m})] and [g \in L_{\rm loc}^{1} ({\bb R}^{n})].

Let [S_{\bf x}] and [T_{\bf y}] denote the distributions associated to f and g, respectively, the subscripts x and y acting as mnemonics for [{\bb R}^{m}] and [{\bb R}^{n}]. It follows from Fubini's theorem (Section[link]) that [f \otimes g \in L_{\rm loc}^{1} ({\bb R}^{m} \times {\bb R}^{n})], and hence defines a distribution over [{\bb R}^{m} \times {\bb R}^{n}]; the rearrangement of integral signs gives [\langle S_{\bf x} \otimes T_{\bf y}, \varphi_{{\bf x}, \,{\bf y}} \rangle = \langle S_{\bf x}, \langle T_{\bf y}, \varphi_{{\bf x}, \,{\bf y}} \rangle\rangle = \langle T_{\bf y}, \langle S_{\bf x}, \varphi_{{\bf x}, \, {\bf y}} \rangle\rangle] for all [\varphi_{{\bf x}, \,{\bf y}} \in {\scr D}({\bb R}^{m} \times {\bb R}^{n})]. In particular, if [\varphi ({\bf x},{ \bf y}) = u({\bf x}) v({\bf y})] with [u \in {\scr D}({\bb R}^{m}),v \in {\scr D}({\bb R}^{n})], then [\langle S \otimes T, u \otimes v \rangle = \langle S, u \rangle \langle T, v \rangle.]

This construction can be extended to general distributions [S \in {\scr D}\,'({\bb R}^{m})] and [T \in {\scr D}\,'({\bb R}^{n})]. Given any test function [\varphi \in {\scr D}({\bb R}^{m} \times {\bb R}^{n})], let [\varphi_{\bf x}] denote the map [{\bf y} \;\longmapsto\; \varphi ({\bf x}, {\bf y})]; let [\varphi_{\bf y}] denote the map [{\bf x} \;\longmapsto\; \varphi ({\bf x},{\bf y})]; and define the two functions [\theta ({\bf x}) = \langle T, \varphi_{\bf x} \rangle] and [\omega ({\bf y}) = \langle S, \varphi_{\bf y} \rangle]. Then, by the lemma on differentiation under the [\langle,\rangle] sign of Section[link], [\theta \in {\scr D}({\bb R}^{m}),\omega \in {\scr D}({\bb R}^{n})], and there exists a unique distribution [S \otimes T] such that [\langle S \otimes T, \varphi \rangle = \langle S, \theta \rangle = \langle T, \omega \rangle.] [S \otimes T] is called the tensor product of S and T.

With the mnemonic introduced above, this definition reads identically to that given above for distributions associated to locally integrable functions: [\langle S_{\bf x} \otimes T_{\bf y}, \varphi_{{\bf x}, \, {\bf y}} \rangle = \langle S_{\bf x}, \langle T_{\bf y}, \varphi_{{\bf x}, \, {\bf y}} \rangle\rangle = \langle T_{\bf y}, \langle S_{\bf x}, \varphi_{{\bf x}, \, {\bf y}} \rangle\rangle.]

The tensor product of distributions is associative: [(R \otimes S) \otimes T = R \otimes (S \otimes T).] Derivatives may be calculated by [D_{\bf x}^{\bf p} D_{\bf y}^{\bf q} (S_{\bf x} \otimes T_{\bf y}) = (D_{\bf x}^{\bf p} S_{\bf x}) \otimes (D_{\bf y}^{\bf q} T_{\bf y}).] The support of a tensor product is the Cartesian product of the supports of the two factors.

Convolution of distributions

The convolution [f * g] of two functions f and g on [{\bb R}^{n}] is defined by [(\;f * g) ({\bf x}) = {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf y}) g({\bf x} - {\bf y}) \;\hbox{d}^{n}{\bf y} = {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x} - {\bf y}) g ({\bf y}) \;\hbox{d}^{n}{\bf y}] whenever the integral exists. This is the case when f and g are both in [L^{1} ({\bb R}^{n})]; then [f * g] is also in [L^{1} ({\bb R}^{n})]. Let S, T and W denote the distributions associated to f, g and [f * g,] respectively: a change of variable immediately shows that for any [\varphi \in {\scr D}({\bb R}^{n})], [\langle W, \varphi \rangle = {\textstyle\int\limits_{{\bb R}^{n} \times {\bb R}^{n}}} f({\bf x}) g({\bf y}) \varphi ({\bf x} + {\bf y}) \;\hbox{d}^{n}{\bf x} \;\hbox{d}^{n}{\bf y}.] Introducing the map σ from [{\bb R}^{n} \times {\bb R}^{n}] to [{\bb R}^{n}] defined by [\sigma ({\bf x}, {\bf y}) = {\bf x} + {\bf y}], the latter expression may be written: [\langle S_{\bf x} \otimes T_{\bf y}, \varphi \circ \sigma \rangle] (where [\circ] denotes the composition of mappings) or by a slight abuse of notation: [\langle W, \varphi \rangle = \langle S_{\bf x} \otimes T_{\bf y}, \varphi ({\bf x} + {\bf y}) \rangle.]
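For ordinary functions the definition can be exercised directly. In the sketch below (Python; taking f and g to be the indicator of [0, 1], whose self-convolution is the triangle function, is an assumption of the example):

```python
# f = g = indicator of [0, 1]; (f*g)(x) = x on [0, 1], 2 - x on [1, 2], 0 elsewhere.
ind = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0

# (f*g)(x) = integral over y of f(y) g(x - y), midpoint rule on [a, b].
def conv(f, g, x, a=-1.0, b=3.0, n=40000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(x - (a + (k + 0.5) * h)) for k in range(n)) * h

for x in (0.5, 1.0, 1.5):
    print(x, conv(ind, ind, x))   # approximately 0.5, 1.0, 0.5
```

Note that the convolution of these two discontinuous functions is already continuous, a first hint of the regularizing effect discussed below for distributions.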

A difficulty arises in extending this definition to general distributions S and T because the mapping σ is not proper: if K is compact in [{\bb R}^{n}], then [\sigma^{-1} (K)] is a cylinder with base K and generator the `second bisector' [{\bf x} + {\bf y} = {\bf 0}] in [{\bb R}^{n} \times {\bb R}^{n}]. However, [\langle S \otimes T, \varphi \circ \sigma \rangle] is defined whenever the intersection between Supp [(S \otimes T) = (\hbox{Supp } S) \times (\hbox{Supp } T)] and [\sigma^{-1} (\hbox{Supp } \varphi)] is compact.

We may therefore define the convolution [S * T] of two distributions S and T on [{\bb R}^{n}], with supports A and B respectively, by [\langle S * T, \varphi \rangle = \langle S \otimes T, \varphi \circ \sigma \rangle = \langle S_{\bf x} \otimes T_{\bf y}, \varphi ({\bf x} + {\bf y})\rangle] whenever the following support condition is fulfilled:

`the set [\{({\bf x},{\bf y})|{\bf x} \in A, {\bf y} \in B, {\bf x} + {\bf y} \in K\}] is compact in [{\bb R}^{n} \times {\bb R}^{n}] for all K compact in [{\bb R}^{n}]'.

The latter condition is met, in particular, if S or T has compact support. The support of [S * T] is easily seen to be contained in the closure of the vector sum [A + B = \{{\bf x} + {\bf y}|{\bf x} \in A, {\bf y} \in B\}.]

Convolution by a fixed distribution S is a continuous operation for the topology on [{\scr D}\,']: it maps convergent sequences [(T_{j})] to convergent sequences [(S * T_{j})]. Convolution is commutative: [S * T = T * S].

The convolution of p distributions [T_{1}, \ldots, T_{p}] with supports [A_{1}, \ldots, A_{p}] can be defined by [\langle T_{1} * \ldots * T_{p}, \varphi \rangle = \langle (T_{1})_{{\bf x}_{1}} \otimes \ldots \otimes (T_{p})_{{\bf x}_{p}}, \varphi ({\bf x}_{1} + \ldots + {\bf x}_{p})\rangle] whenever the following generalized support condition:

`the set [\{({\bf x}_{1}, \ldots, {\bf x}_{p})|{\bf x}_{1} \in A_{1}, \ldots, {\bf x}_{p} \in A_{p}, {\bf x}_{1} + \ldots + {\bf x}_{p} \in K\}] is compact in [({\bb R}^{n})^{p}] for all K compact in [{\bb R}^{n}]'

is satisfied. It is then associative. Interesting examples of associativity failure, which can be traced back to violations of the support condition, may be found in Bracewell (1986[link], pp. 436–437).

It follows from previous definitions that, for all distributions [T \in {\scr D}\,'], the following identities hold:

  • (i) [\delta * T = T]: [\delta] is the unit convolution;

  • (ii) [\delta_{({\bf a})} * T = \tau_{\bf a} T]: translation is a convolution with the corresponding translate of δ;

  • (iii) [(D^{{\bf p}} \delta) * T = D^{{\bf p}} T]: differentiation is a convolution with the corresponding derivative of δ;

  • (iv) translates or derivatives of a convolution may be obtained by translating or differentiating any one of the factors: convolution `commutes' with translation and differentiation, a property used in Section[link] to speed up least-squares model refinement for macromolecules.

The latter property is frequently used for the purpose of regularization: if T is a distribution, α an infinitely differentiable function, and at least one of the two has compact support, then [T * \alpha] is an infinitely differentiable ordinary function. Since sequences [(\alpha_{\nu})] of such functions α can be constructed which have compact support and converge to δ, it follows that any distribution T can be obtained as the limit of infinitely differentiable functions [T * \alpha_{\nu}]. In topological jargon: [{\scr D}({\bb R}^{n})] is `everywhere dense' in [{\scr D}\,'({\bb R}^{n})]. A standard function in [{\scr D}] which is often used for such proofs is defined as follows: put [\eqalign{\theta (x) &= {1 \over A} \exp \left(- {1 \over 1-x^{2}}\right)\quad \hbox{for } |x| \lt 1, \cr &= 0 \quad \hbox{for } |x| \geq 1,}] with [A = \int\limits_{-1}^{+1} \exp \left(- {1 \over 1-x^{2}}\right) \;\hbox{d}x] (so that θ is in [{\scr D}] and is normalized), and put [\eqalign{\theta_{\varepsilon} (x) &= {1 \over \varepsilon} \theta \left({x \over \varepsilon}\right)\quad \hbox{in dimension } 1,\cr \theta_{\varepsilon} ({\bf x}) &= \prod\limits_{j=1}^{n} \theta_{\varepsilon} (x_{j})\quad \hbox{in dimension } n.}]
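The standard function θ and its scaled versions [\theta_{\varepsilon}] are easy to build numerically; the sketch below (NumPy assumed; step sizes are arbitrary) checks that each [\theta_{\varepsilon}] has unit mass, and that smoothing the Heaviside step with [\theta_{\varepsilon}] gives the expected value 1/2 at the jump:

```python
import numpy as np

# The bump function θ of the text: support [−1, 1], normalized so ∫θ = 1,
# and its scaled versions θ_ε(x) = (1/ε) θ(x/ε).
def theta_raw(x):
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside]**2))
    return out

du = 1e-4
u = np.arange(-1.0, 1.0, du)
A = np.sum(theta_raw(u)) * du          # normalizing constant of the text

def theta_eps(x, eps):
    return theta_raw(x / eps) / (A * eps)

# unit mass for every ε (substitution x = εu in the integral)
for eps in (1.0, 0.5, 0.1):
    grid = eps * u
    mass = np.sum(theta_eps(grid, eps)) * eps * du
    assert abs(mass - 1.0) < 1e-6

# smoothing the Heaviside step: (H * θ_ε)(0) = ∫_{y<0} θ_ε(y) dy = 1/2
eps = 0.1
neg = eps * u[u < 0.0]
half = np.sum(theta_eps(neg, eps)) * eps * du
assert abs(half - 0.5) < 1e-3
```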

Another related result, also proved by convolution, is the structure theorem: the restriction of a distribution [T \in {\scr D}\,'({\bb R}^{n})] to a bounded open set Ω in [{\bb R}^{n}] is a derivative of finite order of a continuous function.

Properties (i)[link] to (iv)[link] are the basis of the symbolic or operational calculus (see Carslaw & Jaeger, 1948[link]; Van der Pol & Bremmer, 1955[link]; Churchill, 1958[link]; Erdélyi, 1962[link]; Moore, 1971[link]) for solving integro-differential equations with constant coefficients by turning them into convolution equations, then using factorization methods for convolution algebras (Schwartz, 1965[link]).

Fourier transforms of functions

Introduction

Given a complex-valued function f on [{\bb R}^{n}] subject to suitable regularity conditions, its Fourier transform [{\scr F}[\;f]] and Fourier cotransform [\bar{\scr F}[\;f]] are defined as follows: [\eqalign{{\scr F}[\;f] (\xi) &= {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x}) \exp (-2\pi i {\boldxi} \cdot {\bf x}) \;\hbox{d}^{n} {\bf x}\cr \bar{\scr F}[\;f] (\xi) &= {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x}) \exp (+2\pi i {\boldxi} \cdot {\bf x}) \;\hbox{d}^{n} {\bf x},}] where [{\boldxi} \cdot {\bf x} = {\textstyle\sum_{i=1}^{n}} \xi_{i}x_{i}] is the ordinary scalar product. The terminology and sign conventions given above are the standard ones in mathematics; those used in crystallography are slightly different (see Section[link]). These transforms enjoy a number of remarkable properties, whose natural settings entail different regularity assumptions on f: for instance, properties relating to convolution are best treated in [L^{1} ({\bb R}^{n})], while Parseval's theorem requires the Hilbert space structure of [L^{2} ({\bb R}^{n})]. After a brief review of these classical properties, the Fourier transformation will be examined in a space [{\scr S}({\bb R}^{n})] particularly well suited to accommodating the full range of its properties, which will later serve as a space of test functions to extend the Fourier transformation to distributions.
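As a numerical sketch of these definitions (NumPy assumed; the indicator of [−1/2, 1/2] is an arbitrary test function, not one used in the text), the Fourier integral can be approximated by a midpoint sum and compared with the familiar transform sin(πξ)/(πξ):

```python
import numpy as np

# F[f](ξ) = ∫ f(x) exp(−2πi ξx) dx for f = indicator of [−1/2, 1/2];
# the exact transform is sin(πξ)/(πξ).
dx = 1e-4
x = np.arange(-0.5 + dx / 2.0, 0.5, dx)   # midpoint grid on the support

def F_rect(xi):
    return np.sum(np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.25, 1.0, 2.5):
    exact = np.sin(np.pi * xi) / (np.pi * xi)
    assert abs(F_rect(xi) - exact) < 1e-6
```

The cotransform differs only in the sign of the exponent, per the definitions above.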

There exists an abundant literature on the `Fourier integral'. The books by Carslaw (1930)[link], Wiener (1933)[link], Titchmarsh (1948)[link], Katznelson (1968)[link], Sneddon (1951[link], 1972[link]), and Dym & McKean (1972)[link] are particularly recommended.

Fourier transforms in [L^{1}]

Linearity

Both transformations [{\scr F}] and [\bar{\scr F}] are obviously linear maps from [L^{1}] to [L^{\infty}] when these spaces are viewed as vector spaces over the field [{\bb C}] of complex numbers.

Effect of affine coordinate transformations


[{\scr F}] and [\bar{\scr F}] turn translations into phase shifts: [\eqalign{{\scr F}[\tau_{\bf a}\; f] ({\boldxi}) &= \exp (-2\pi i {\boldxi} \cdot {\bf a}) {\scr F}[\;f] ({\boldxi})\cr \bar{\scr F}[\tau_{\bf a}\; f] ({\boldxi}) &= \exp (+2\pi i {\boldxi} \cdot {\bf a}) \bar{\scr F}[\;f] ({\boldxi}).}]
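The phase-shift rule can be checked numerically for the standard Gaussian (a sketch assuming NumPy; the shift a and the grid are arbitrary choices):

```python
import numpy as np

# Check F[τ_a f](ξ) = exp(−2πi ξa) F[f](ξ) for f(x) = exp(−πx²),
# using a Riemann sum for the Fourier integral.
dx = 0.01
x = np.arange(-15.0, 15.0, dx)

def F(fv, xi):
    """Riemann-sum approximation of F[f](ξ) = ∫ f(x) exp(−2πi ξx) dx."""
    return np.sum(fv * np.exp(-2j * np.pi * xi * x)) * dx

f = np.exp(-np.pi * x**2)
a = 0.3
f_shift = np.exp(-np.pi * (x - a)**2)      # (τ_a f)(x) = f(x − a)

for xi in (0.0, 0.5, 1.2):
    lhs = F(f_shift, xi)
    rhs = np.exp(-2j * np.pi * xi * a) * F(f, xi)
    assert abs(lhs - rhs) < 1e-8
```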

Under a general linear change of variable [{\bf x} \;\longmapsto\; {\bf Ax}] with non-singular matrix A, the transform of [A^{\#} f] is [\eqalign{{\scr F}[A^{\#} f] ({\boldxi}) &= {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf A}^{-1} {\bf x}) \exp (-2\pi i {\boldxi} \cdot {\bf x}) \;\hbox{d}^{n} {\bf x}\cr &= {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf y}) \exp (-2\pi i (A^{T} {\boldxi}) \cdot {\bf y}) |\det {\bf A}| \;\hbox{d}^{n} {\bf y}\cr &\phantom{{\bb R}^{n}f({\bf y}) \exp (-2\pi i (A^{T} {\boldxi}) \cdot {\bf y}) |\det {\bf A}|} \hbox{ by } {\bf x} = {\bf Ay}\cr &= |\det {\bf A}| {\scr F}[\;f] ({\bf A}^{T} {\boldxi})}] i.e. [{\scr F}[A^{\#} f] = |\det {\bf A}| [({\bf A}^{-1})^{T}]^{\#} {\scr F}[\;f]] and similarly for [\bar{\scr F}]. The matrix [({\bf A}^{-1})^{T}] is called the contragredient of matrix A.
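A one-dimensional instance of this rule, with A reduced to a scale factor c so that [{\scr F}[A^{\#} f](\xi) = |c|\,{\scr F}[\;f](c\xi)], can be sketched numerically (NumPy assumed; the value of c is arbitrary):

```python
import numpy as np

# 1-D case of F[A# f] = |det A| F[f](Aᵀ ξ): with A = (c),
# (A# f)(x) = f(x/c).  Tested on the standard Gaussian, whose
# transform exp(−πξ²) is known exactly.
dx = 0.005
x = np.arange(-20.0, 20.0, dx)

def F(fv, xi):
    return np.sum(fv * np.exp(-2j * np.pi * xi * x)) * dx

c = 1.7
Af = np.exp(-np.pi * (x / c)**2)           # f(x/c) with f(x) = exp(−πx²)
for xi in (0.0, 0.4, 0.9):
    assert abs(F(Af, xi) - c * np.exp(-np.pi * (c * xi)**2)) < 1e-6
```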

Under an affine change of coordinates [{\bf x} \;\longmapsto\; S({\bf x}) = {\bf Ax} + {\bf b}] with non-singular matrix A, the transform of [S^{\#} f] is given by [\eqalign{{\scr F}[S^{\#} f] ({\boldxi}) &= {\scr F}[\tau_{\bf b} (A^{\#} f)] ({\boldxi})\cr &= \exp (-2\pi i {\boldxi} \cdot {\bf b}) {\scr F}[A^{\#} f] ({\boldxi})\cr &= \exp (-2\pi i {\boldxi} \cdot {\bf b}) |\det {\bf A}| {\scr F}[\;f] ({\bf A}^{T} {\boldxi})}] with a similar result for [\bar{\scr F}], replacing −i by +i.

Conjugate symmetry


The kernels of the Fourier transformations [{\scr F}] and [\bar{\scr F}] satisfy the following identities: [\exp (\pm 2\pi i {\boldxi} \cdot {\bf x}) = \exp \overline{[\pm 2\pi i {\boldxi} \cdot (-{\bf x})]} = \exp \overline{[\pm 2\pi i (-{\boldxi}) \cdot {\bf x}]}.] As a result the transformations [{\scr F}] and [\bar{\scr F}] themselves have the following `conjugate symmetry' properties [where the notation [\breve{f}({\bf x}) = f(-{\bf x})] of Section[link] will be used]: [\displaylines{{\scr F}[\;f] ({\boldxi}) = \overline{{\scr F}[\bar{\; f}] (-{\boldxi})} = \breve{\overline{{\scr F}[\bar{\; f}] ({\boldxi})}}\cr {\scr F}[\;f] ({\boldxi}) = \overline{{\scr F}[\breve{\bar{\;f}}] ({\boldxi})}.}] Therefore,

  • (i) f real [\Leftrightarrow f = \bar{f} \Leftrightarrow {\scr F}[\;f] = \breve{\overline{{\scr F}[\;f]}} \Leftrightarrow {\scr F}[\;f] ({\boldxi}) = \overline{{\scr F}[\;f] (-{\boldxi})}:{\scr F}[\;f]] is said to possess Hermitian symmetry;

  • (ii) f centrosymmetric [\Leftrightarrow f = \breve{f} \Leftrightarrow {\scr F}[\;f] = \overline{{\scr F}[\bar{\; f}]}];

  • (iii) f real centrosymmetric [\Leftrightarrow f = \bar{f} = \breve{f} \Leftrightarrow {\scr F}[\;f] = \overline{{\scr F}[\;f]} = \breve{\overline{{\scr F}[\;f]}} \Leftrightarrow {\scr F}[\;f]] real centrosymmetric.
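Property (i) can be illustrated numerically on a real but asymmetric test function (a sketch assuming NumPy; the function below is an arbitrary choice):

```python
import numpy as np

# Hermitian symmetry: for real f, F[f](−ξ) = conjugate of F[f](ξ).
# f is real but neither even nor odd, so F[f] is genuinely complex.
dx = 0.01
x = np.arange(-15.0, 15.0, dx)
f = (1.0 + 0.5 * x) * np.exp(-np.pi * x**2)

def F(fv, xi):
    return np.sum(fv * np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.3, 1.0, 2.2):
    assert abs(F(f, xi) - np.conj(F(f, -xi))) < 1e-10
```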

Conjugate symmetry is the basis of Friedel's law (Section[link]) in crystallography.

Tensor product property


Another elementary property of [{\scr F}] is its naturality with respect to tensor products. Let [u \in L^{1} ({\bb R}^{m})] and [v \in L^{1} ({\bb R}^{n})], and let [{\scr F}_{\bf x},{\scr F}_{\bf y},{\scr F}_{{\bf x}, \,{\bf y}}] denote the Fourier transformations in [L^{1} ({\bb R}^{m}),L^{1} ({\bb R}^{n})] and [L^{1} ({\bb R}^{m} \times {\bb R}^{n})], respectively. Then [{\scr F}_{{\bf x}, \, {\bf y}} [u \otimes v] = {\scr F}_{\bf x} [u] \otimes {\scr F}_{\bf y} [v].] Furthermore, if [f \in L^{1} ({\bb R}^{m} \times {\bb R}^{n})], then [{\scr F}_{\bf y} [\;f] \in L^{1} ({\bb R}^{m})] as a function of x and [{\scr F}_{\bf x} [\;f] \in L^{1} ({\bb R}^{n})] as a function of y, and [{\scr F}_{{\bf x}, \,{\bf y}} [\;f] = {\scr F}_{\bf x} [{\scr F}_{\bf y} [\;f]] = {\scr F}_{\bf y} [{\scr F}_{\bf x} [\;f]].] This is easily proved by using Fubini's theorem and the fact that [({\boldxi}, {\boldeta}) \cdot ({\bf x},{ \bf y}) = {\boldxi} \cdot {\bf x} + {\boldeta} \cdot {\bf y}], where [{\bf x}, {\boldxi} \in {\bb R}^{m},{\bf y}, {\boldeta} \in {\bb R}^{n}]. This property may be written: [{\scr F}_{{\bf x}, \, {\bf y}} = {\scr F}_{\bf x} \otimes {\scr F}_{\bf y}.]

Convolution property


If f and g are summable, their convolution [f * g] exists and is summable, and [{\scr F}[\;f * g] ({\boldxi}) = {\textstyle\int\limits_{{\bb R}^{n}}} \left[{\textstyle\int\limits_{{\bb R}^{n}}} f({\bf y}) g({\bf x} - {\bf y}) \;\hbox{d}^{n} {\bf y}\right] \exp (-2\pi i {\boldxi} \cdot {\bf x}) \;\hbox{d}^{n} {\bf x}.] With [{\bf x} = {\bf y} + {\bf z}], so that [\exp (-2\pi i{\boldxi} \cdot {\bf x}) = \exp (-2\pi i{\boldxi} \cdot {\bf y}) \exp (-2\pi i{\boldxi} \cdot {\bf z}),] and with Fubini's theorem, rearrangement of the double integral gives: [{\scr F}[\;f * g] = {\scr F}[\;f] \times {\scr F}[g]] and similarly [\bar{\scr F}[\;f * g] = \bar{\scr F}[\;f] \times \bar{\scr F}[g].] Thus the Fourier transform and cotransform turn convolution into multiplication.

Reciprocity property


In general, [{\scr F}[\;f]] and [\bar{\scr F}[\;f]] are not summable, and hence cannot be further transformed; however, as they are essentially bounded, their products with the Gaussians [G_{t} (\xi) = \exp (-2\pi^{2} \|\xi\|^{2} t)] are summable for all [t \gt 0], and it can be shown that [f = \lim\limits_{t\rightarrow 0} \bar{\scr F}[G_{t} {\scr F}[\;f]] = \lim\limits_{t \rightarrow 0} {\scr F}[G_{t} \bar{\scr F}[\;f]],] where the limit is taken in the topology of the [L^{1}] norm [\|.\|_{1}]. Thus [{\scr F}] and [\bar{\scr F}] are (in a sense) mutually inverse, which justifies the common practice of calling [\bar{\scr F}] the `inverse Fourier transformation'.

Riemann–Lebesgue lemma


If [f \in L^{1} ({\bb R}^{n})], i.e. is summable, then [{\scr F}[\;f]] and [\bar{\scr F}[\;f]] exist and are continuous and essentially bounded: [\|{\scr F}[\;f]\|_{\infty} = \|\bar{\scr F}[\;f]\|_{\infty} \leq \|\;f\|_{1}.] In fact one has the much stronger property, whose statement constitutes the Riemann–Lebesgue lemma, that [{\scr F}[\;f] ({\boldxi})] and [\bar{\scr F}[\;f] ({\boldxi})] both tend to zero as [\|{\boldxi}\| \rightarrow \infty].

Differentiation


Let us now suppose that [n = 1] and that [f \in L^{1} ({\bb R})] is differentiable with [f' \in L^{1} ({\bb R})]. Integration by parts yields [\eqalign{{\scr F}[\;f'] (\xi) &= {\textstyle\int\limits_{-\infty}^{+\infty}} f' (x) \exp (-2\pi i\xi \cdot x) \;\hbox{d}x\cr &= [\;f(x) \exp (-2\pi i\xi \cdot x)]_{-\infty}^{+\infty}\cr &\quad + 2\pi i\xi {\textstyle\int\limits_{-\infty}^{+\infty}} f(x) \exp (-2\pi i\xi \cdot x) \;\hbox{d}x.}] Since f′ is summable, f has a limit when [x \rightarrow \pm \infty], and this limit must be 0 since f is summable. Therefore [{\scr F}[\;f'] (\xi) = (2\pi i\xi) {\scr F}[\;f] (\xi)] with the bound [\|2\pi \xi {\scr F}[\;f]\|_{\infty} \leq \|\;f'\|_{1}] so that [|{\scr F}[\;f] (\xi)|] decreases faster than [1/|\xi|] as [|\xi| \rightarrow \infty].

This result can be easily extended to several dimensions and to any multi-index m: if f is summable and has continuous summable partial derivatives up to order [|{\bf m}|], then [{\scr F}[D^{{\bf m}} f] ({\boldxi}) = (2\pi i{\boldxi})^{{\bf m}} {\scr F}[\;f] ({\boldxi})] and [\|(2\pi {\boldxi})^{{\bf m}} {\scr F}[\;f]\|_{\infty} \leq \|D^{{\bf m}} f\|_{1}.]

Similar results hold for [\bar{\scr F}], with [2\pi i{\boldxi}] replaced by [-2\pi i{\boldxi}]. Thus, the more differentiable f is, with summable derivatives, the faster [{\scr F}[\;f]] and [\bar{\scr F}[\;f]] decrease at infinity.
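The one-dimensional differentiation theorem can be checked numerically for the standard Gaussian (a sketch assuming NumPy; grid parameters are arbitrary):

```python
import numpy as np

# Check F[f'](ξ) = (2πiξ) F[f](ξ) for f(x) = exp(−πx²), whose derivative
# f'(x) = −2πx exp(−πx²) is also summable.
dx = 0.01
x = np.arange(-15.0, 15.0, dx)
f = np.exp(-np.pi * x**2)
fp = -2.0 * np.pi * x * np.exp(-np.pi * x**2)

def F(fv, xi):
    return np.sum(fv * np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.0, 0.7, 1.5):
    assert abs(F(fp, xi) - 2j * np.pi * xi * F(f, xi)) < 1e-8
```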

The property of turning differentiation into multiplication by a monomial has many important applications in crystallography, for instance differential syntheses (Sections[link],[link],[link]) and moment-generating functions [Section[link](c [link])].

Decrease at infinity


Conversely, assume that f is summable on [{\bb R}^{n}] and that f decreases fast enough at infinity for [{\bf x}^{{\bf m}} f] also to be summable, for some multi-index m. Then the integral defining [{\scr F}[\;f]] may be subjected to the differential operator [D^{{\bf m}}], still yielding a convergent integral: therefore [D^{{\bf m}} {\scr F}[\;f]] exists, and [D^{{\bf m}} ({\scr F}[\;f]) ({\boldxi}) = {\scr F}[(-2\pi i{\bf x})^{{\bf m}} f] ({\boldxi})] with the bound [\|D^{{\bf m}} ({\scr F}[\;f])\|_{\infty} \leq \|(2\pi {\bf x})^{{\bf m}} f\|_{1}.]

Similar results hold for [\bar{\scr F}], with [-2\pi i {\bf x}] replaced by [2\pi i{\bf x}]. Thus, the faster f decreases at infinity, the more [{\scr F}[\;f]] and [\bar{\scr F}[\;f]] are differentiable, with bounded derivatives. This property is the converse of that described in Section[link], and their combination is fundamental in the definition of the function space [{\scr S}] in Section[link], of tempered distributions in Section[link], and in the extension of the Fourier transformation to them.

The Paley–Wiener theorem


An extreme case of the last instance occurs when f has compact support: then [{\scr F}[\;f]] and [\bar{\scr F}[\;f]] are so regular that they may be analytically continued from [{\bb R}^{n}] to [{\bb C}^{n}] where they are entire functions, i.e. have no singularities at finite distance (Paley & Wiener, 1934[link]). This is easily seen for [{\scr F}[\;f]]: giving vector [{\boldxi} \in {\bb R}^{n}] a vector [{\boldeta} \in {\bb R}^{n}] of imaginary parts leads to [\eqalign{{\scr F}[\;f] ({\boldxi} + i{\boldeta}) &= {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x}) \exp [-2\pi i ({\boldxi} + i{\boldeta}) \cdot {\bf x}] \;\hbox{d}^{n} {\bf x}\cr &= {\scr F}[\exp (2\pi {\boldeta} \cdot {\bf x})f] ({\boldxi}),}] where the latter transform always exists since [\exp (2\pi {\boldeta} \cdot {\bf x})f] is summable with respect to x for all values of η. This analytic continuation forms the basis of the saddlepoint method in probability theory [Section[link](f)[link]] and leads to the use of maximum-entropy distributions in the statistical theory of direct phase determination [Section[link](e)[link]].
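For a concrete sketch (NumPy assumed; the example is an arbitrary choice, not from the text), the indicator of [−1, 1] has compact support and its transform sin(2πζ)/(πζ) is entire; direct evaluation of the integral at complex arguments agrees with this continuation:

```python
import numpy as np

# f = indicator of [−1, 1]: F[f](ζ) = sin(2πζ)/(πζ) extends to an entire
# function of ζ = ξ + iη, and coincides with F[exp(2πηx) f](ξ) as in the
# text.  The integral is approximated by a midpoint rule on the support.
dx = 1e-4
x = np.arange(-1.0 + dx / 2.0, 1.0, dx)

def F_cont(zeta):
    """∫_{−1}^{1} exp(−2πi ζ x) dx, with ζ possibly complex."""
    return np.sum(np.exp(-2j * np.pi * zeta * x)) * dx

for zeta in (0.5 + 0.3j, -1.2 + 0.7j):
    exact = np.sin(2.0 * np.pi * zeta) / (np.pi * zeta)
    assert abs(F_cont(zeta) - exact) < 1e-5
```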

By Liouville's theorem, an entire function in [{\bb C}^{n}] cannot vanish identically on the complement of a compact subset of [{\bb R}^{n}] without vanishing everywhere: therefore [{\scr F}[\;f]] cannot have compact support if f has, and hence [{\scr D}({\bb R}^{n})] is not stable by Fourier transformation.

Fourier transforms in [L^{2}]


Let f belong to [L^{2} ({\bb R}^{n})], i.e. be such that [\|\;f\|_{2} = \left({\textstyle\int\limits_{{\bb R}^{n}}} |\;f({\bf x})|^{2} \;\hbox{d}^{n} {\bf x}\right)^{1/2} \;\lt\; \infty.]

Invariance of [L^{2}]


[{\scr F}[\;f]] and [\bar{\scr F}[\;f]] exist and are functions in [L^{2}], i.e. [{\scr F}L^{2} = L^{2}], [\bar{\scr F}L^{2} = L^{2}].

Reciprocity


[{\scr F}[\bar{\scr F}[\;f]] = f] and [\bar{\scr F}[{\scr F}[\;f]] = f], equality being taken as `almost everywhere' equality. This again leads to calling [\bar{\scr F}] the `inverse Fourier transformation' rather than the Fourier cotransformation.

Isometry


[{\scr F}] and [\bar{\scr F}] preserve the [L^{2}] norm: [\|{\scr F}[\;f]\|_{2} = \|\bar{\scr F}[\;f]\|_{2} = \|\;f\|_{2} \hbox{ (Parseval's/Plancherel's theorem)}.] This property, which may be written in terms of the inner product (,) in [L^{2}({\bb R}^{n})] as [({\scr F}[\;f], {\scr F}[g]) = (\bar{\scr F}[\;f], \bar{\scr F}[g]) = (\;f,g),] implies that [{\scr F}] and [\bar{\scr F}] are unitary transformations of [L^{2}({\bb R}^{n})] into itself, i.e. infinite-dimensional `rotations'.

Eigenspace decomposition of [L^{2}]


Some light can be shed on the geometric structure of these rotations by the following simple considerations. Note that [\eqalign{{\scr F}^{2}[\;f]({\bf x}) &= {\textstyle\int\limits_{{\bb R}^{n}}} {\scr F}[\;f]({\boldxi}) \exp (-2\pi i{\bf x}\cdot {\boldxi}) \;\hbox{d}^{n}{\boldxi}\cr &= \bar{\scr F}[{\scr F}[\;f]](-{\bf x}) = f(-{\bf x})}] so that [{\scr F}^{4}] (and similarly [\bar{\scr F}^{4}]) is the identity map. Any eigenvalue of [{\scr F}] or [\bar{\scr F}] is therefore a fourth root of unity, i.e. ±1 or [\pm i], and [L^{2}({\bb R}^{n})] splits into an orthogonal direct sum [{\bf H}_{0} \oplus {\bf H}_{1} \oplus {\bf H}_{2} \oplus {\bf H}_{3},] where [{\scr F}] (respectively [\bar{\scr F}]) acts in each subspace [{\bf H}_{k}(k = 0, 1, 2, 3)] by multiplication by [(-i)^{k}]. Orthonormal bases for these subspaces can be constructed from Hermite functions (cf. Section[link]). This method was used by Wiener (1933[link], pp. 51–71).

The convolution theorem and the isometry property


In [L^{2}], the convolution theorem (when applicable) and the Parseval/Plancherel theorem are not independent. Suppose that f, g, [f \times g] and [f * g] are all in [L^{2}] (without questioning whether these properties are independent). Then [f * g] may be written in terms of the inner product in [L^{2}] as follows: [(\;f * g)({\bf x}) = {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x} - {\bf y})g({\bf y}) \;\hbox{d}^{n}{\bf y} = {\textstyle\int\limits_{{\bb R}^{n}}} \overline{\breve{\bar{f}}({\bf y} - {\bf x})}g({\bf y}) \;\hbox{d}^{n}{\bf y},] i.e. [(\;f * g)({\bf x}) = (\tau_{\bf x}\;\breve{\bar{f}}, g).]

Invoking the isometry property, we may rewrite the right-hand side as [\eqalign{({\scr F}[\tau_{\bf x}\;\breve{\bar{f}}], {\scr F}[g]) &= (\exp (- 2\pi i{\bf x} \cdot {\boldxi}) \overline{{\scr F}[\;f]_{\boldxi}}, {\scr F}[g]_{\boldxi})\cr &= {\textstyle\int\limits_{{\bb R}^{n}}} ({\scr F}[\;f] \times {\scr F}[g])({\boldxi})\cr &\quad \times \exp (+ 2\pi i{\bf x} \cdot {\boldxi}) \;\hbox{d}^{n}{\boldxi}\cr &= \bar{\scr F}[{\scr F}[\;f] \times {\scr F}[g]]({\bf x}),}] so that the initial identity yields the convolution theorem.

To obtain the converse implication, note that [\eqalign{(\;f, g) &= {\textstyle\int\limits_{{\bb R}^{n}}} \overline{f({\bf y})}g({\bf y}) \;\hbox{d}^{n}{\bf y} = (\; \breve{\bar{f}} * g)({\bf 0})\cr &= \bar{\scr F}[{\scr F}[\;\breve{\bar{ f}}] \times {\scr F}[g]]({\bf 0})\cr &= {\textstyle\int\limits_{{\bb R}^{n}}} \overline{{\scr F}[\;f]({\boldxi})} {\scr F}[g]({\boldxi}) \;\hbox{d}^{n}{\boldxi} = ({\scr F}[\;f], {\scr F}[g]),}] where conjugate symmetry (Section[link]) has been used.

These relations have an important application in the calculation by Fourier transform methods of the derivatives used in the refinement of macromolecular structures (Section[link]).

Fourier transforms in [{\scr S}]

Definition and properties of [{\scr S}]

The duality established in Sections[link] and[link] between the local differentiability of a function and the rate of decrease at infinity of its Fourier transform prompts one to consider the space [{\scr S}({\bb R}^{n})] of functions f on [{\bb R}^{n}] which are infinitely differentiable and all of whose derivatives are rapidly decreasing, so that for all multi-indices k and p [({\bf x}^{\bf k}D^{\bf p}f)({\bf x})\rightarrow 0 \quad\hbox{as } \|{\bf x}\|\rightarrow \infty.] The product of [f \in {\scr S}] by any polynomial over [{\bb R}^{n}] is still in [{\scr S}] ([{\scr S}] is an algebra over the ring of polynomials). Furthermore, [{\scr S}] is invariant under translations and differentiation.

If [f \in {\scr S}], then its transforms [{\scr F}[\;f]] and [\bar{\scr F}[\;f]] are

  • (i) infinitely differentiable because f is rapidly decreasing;

  • (ii) rapidly decreasing because f is infinitely differentiable;

hence [{\scr F}[\;f]] and [\bar{\scr F}[\;f]] are in [{\scr S}]: [{\scr S}] is invariant under [{\scr F}] and [\bar{\scr F}].

Since [L^{1} \supset {\scr S}] and [L^{2} \supset {\scr S}], all properties of [{\scr F}] and [\bar{\scr F}] already encountered above are enjoyed by functions of [{\scr S}], with all restrictions on differentiability and/or integrability lifted. For instance, given two functions f and g in [{\scr S}], then both fg and [f * g] are in [{\scr S}] (which was not the case with [L^{1}] nor with [L^{2}]) so that the reciprocity theorem inherited from [L^{2}] [{\scr F}[\bar{\scr F}[\;f]] = f \quad\hbox{and}\quad \bar{\scr F}[{\scr F}[\;f]] = f] allows one to state the reverse of the convolution theorem first established in [L^{1}]: [\eqalign{{\scr F}[\;fg] &= {\scr F}[\;f] * {\scr F}[g]\cr \bar{\scr F}[\;fg] &= \bar{\scr F}[\;f] * \bar{\scr F}[g].}]

Gaussian functions and Hermite functions


Gaussian functions are particularly important elements of [{\scr S}]. In dimension 1, a well known contour integration (Schwartz, 1965[link], p. 184) yields [{\scr F}[\exp (- \pi x^{2})](\xi) = \bar{\scr F}[\exp (- \pi x^{2})](\xi) = \exp (- \pi \xi^{2}),] which shows that the `standard Gaussian' [\exp (- \pi x^{2})] is invariant under [{\scr F}] and [\bar{\scr F}]. By a tensor product construction, it follows that the same is true of the standard Gaussian [G({\bf x}) = \exp (- \pi \|{\bf x}\|^{2})] in dimension n: [{\scr F}[G]({\boldxi}) = \bar{\scr F}[G]({\boldxi}) = G({\boldxi}).] In other words, G is an eigenfunction of [{\scr F}] and [\bar{\scr F}] for eigenvalue 1 (Section[link]).
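This invariance is easy to confirm numerically (a sketch assuming NumPy; the Riemann-sum grid is an arbitrary choice):

```python
import numpy as np

# Direct check that the standard Gaussian exp(−πx²) is its own Fourier
# transform, i.e. an eigenfunction of F for eigenvalue 1.
dx = 0.01
x = np.arange(-15.0, 15.0, dx)
G = np.exp(-np.pi * x**2)

for xi in (0.0, 0.5, 1.0, 2.0):
    Fval = np.sum(G * np.exp(-2j * np.pi * xi * x)) * dx
    assert abs(Fval - np.exp(-np.pi * xi**2)) < 1e-10
```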

A complete system of eigenfunctions may be constructed as follows. In dimension 1, consider the family of functions [H_{m} = {D^{m}G^{2} \over G}\quad (m \geq 0),] where D denotes the differentiation operator. The first two members of the family [H_{0} = G,\qquad H_{1} = 2 DG,] are such that [{\scr F}[H_{0}] = H_{0}], as shown above, and [DG(x) = - 2\pi xG(x) = i(2\pi ix)G(x) = i{\scr F}[DG](x),] hence [{\scr F}[H_{1}] = (- i)H_{1}.] We may thus take as an induction hypothesis that [{\scr F}[H_{m}] = (-i)^{m}H_{m}.] The identity [D\left({D^{m}G^{2} \over G}\right) = {D^{m+1}G^{2} \over G} - {DG \over G} {D^{m}G^{2} \over G}] may be written [H_{m+1}(x) = (DH_{m})(x) - 2\pi xH_{m}(x),] and the two differentiation theorems give: [\eqalign{{\scr F}[DH_{m}](\xi) &= (2\pi i{\boldxi}) {\scr F}[H_{m}](\xi)\cr {\scr F}[-2\pi xH_{m}](\xi) &= - iD({\scr F}[H_{m}])(\xi).}] Combination of this with the induction hypothesis yields [\eqalign{{\scr F}[H_{m+1}](\xi) &= (-i)^{m+1}[(DH_{m})(\xi) - 2\pi \xi H_{m}(\xi)]\cr &= (-i)^{m+1} H_{m+1}(\xi),}] thus proving that [H_{m}] is an eigenfunction of [{\scr F}] for eigenvalue [(-i)^{m}] for all [m \geq 0]. The same proof holds for [\bar{\scr F}], with eigenvalue [i^{m}]. If these eigenfunctions are normalized as [{\scr H}_{m}(x) = {(-1)^{m}2^{1/4} \over \sqrt{m!}2^{m}\pi^{m/2}} H_{m}(x),] then it can be shown that the collection of Hermite functions [\{{\scr H}_{m}(x)\}_{m \geq 0}] constitutes an orthonormal basis of [L^{2}({\bb R})] such that [{\scr H}_{m}] is an eigenfunction of [{\scr F}] (respectively [\bar{\scr F}]) for eigenvalue [(-i)^{m}] (respectively [i^{m}]).
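The first few eigenfunction relations can be checked numerically. Writing out [H_{1} = -4\pi xG] and [H_{2} = (-4\pi + 16\pi^{2}x^{2})G] explicitly (these follow from [H_{m} = D^{m}G^{2}/G]), a sketch assuming NumPy gives:

```python
import numpy as np

# H_m = (D^m G²)/G with G = exp(−πx²); the text gives F[H_m] = (−i)^m H_m.
# Explicitly: H_1(t) = −4πt G(t), H_2(t) = (−4π + 16π²t²) G(t).
dx = 0.01
x = np.arange(-15.0, 15.0, dx)
G = lambda t: np.exp(-np.pi * t**2)
H1 = lambda t: -4.0 * np.pi * t * G(t)
H2 = lambda t: (-4.0 * np.pi + 16.0 * np.pi**2 * t**2) * G(t)

def F(fv, xi):
    return np.sum(fv * np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.3, 0.9):
    assert abs(F(H1(x), xi) - (-1j) * H1(xi)) < 1e-7    # eigenvalue (−i)¹
    assert abs(F(H2(x), xi) - (-1.0) * H2(xi)) < 1e-7   # eigenvalue (−i)²
```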

In dimension n, the same construction can be extended by tensor product to yield the multivariate Hermite functions [{\scr H}_{\bf m}({\bf x}) = {\scr H}_{m_{1}}(x_{1}) \times {\scr H}_{m_{2}}(x_{2}) \times \ldots \times {\scr H}_{m_{n}}(x_{n})] (where [{\bf m} \geq {\bf 0}] is a multi-index). These constitute an orthonormal basis of [L^{2}({\bb R}^{n})], with [{\scr H}_{\bf m}] an eigenfunction of [{\scr F}] (respectively [\bar{\scr F}]) for eigenvalue [(-i)^{|{\bf m}|}] (respectively [i^{|{\bf m}|}]). Thus the subspaces [{\bf H}_{k}] of Section[link] are spanned by those [{\scr H}_{\bf m}] with [|{\bf m}| \equiv k\hbox{ mod } 4\ (k = 0, 1, 2, 3)].

General multivariate Gaussians are usually encountered in the non-standard form [G_{\bf A}({\bf x}) = \exp (- {\textstyle{1 \over 2}} {\bf x}^{T} \cdot {\bf Ax}),] where A is a symmetric positive-definite matrix. Diagonalizing A as [{\bf E}\boldLambda{\bf E}^{T}] with [{\bf EE}^{T}] the identity matrix, and putting [{\bf A}^{1/2} = {\bf E}{\boldLambda}^{1/2}{\bf E}^{T}], we may write [G_{\bf A}({\bf x}) = G\left[\left({{\bf A} \over 2 \pi}\right)^{1/2} {\bf x}\right]] i.e. [G_{\bf A} = [(2\pi {\bf A}^{-1})^{1/2}]^{\#} G\hbox{;}] hence (by Section[link]) [{\scr F}[G_{\bf A}] = |\det (2\pi {\bf A}^{-1})|^{1/2} \left[\left({{\bf A} \over 2 \pi}\right)^{1/2}\right]^{\#} G,] i.e. [{\scr F}[G_{\bf A}]({\boldxi}) = |\det (2\pi {\bf A}^{-1})|^{1/2} G[(2\pi {\bf A}^{-1})^{1/2}{\boldxi}],] i.e. finally [{\scr F}[G_{\bf A}] = |\det (2\pi {\bf A}^{-1})|^{1/2} G_{4\pi^{2}{\bf A}^{-1}}.]
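A two-dimensional numerical check of this formula can be sketched as follows (NumPy assumed; the matrix A below is an arbitrary symmetric positive-definite choice):

```python
import numpy as np

# Check F[G_A] = |det(2πA⁻¹)|^{1/2} G_{4π²A⁻¹}, with
# G_A(x) = exp(−½ xᵀAx), by direct 2-D Riemann-sum integration.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])               # symmetric positive definite
d = 0.05
t = np.arange(-10.0, 10.0, d)
X, Y = np.meshgrid(t, t)
GA = np.exp(-0.5 * (A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2))

def F_GA(xi):
    phase = np.exp(-2j * np.pi * (xi[0] * X + xi[1] * Y))
    return np.sum(GA * phase) * d * d

Ainv = np.linalg.inv(A)
pref = np.sqrt(np.linalg.det(2.0 * np.pi * Ainv))
for xi in (np.array([0.0, 0.0]), np.array([0.2, -0.1])):
    rhs = pref * np.exp(-0.5 * xi @ (4.0 * np.pi**2 * Ainv) @ xi)
    assert abs(F_GA(xi) - rhs) < 1e-6
```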

This result is widely used in crystallography, e.g. to calculate form factors for anisotropic atoms (Section[link]) and to obtain transforms of derivatives of Gaussian atomic densities (Section[link]).

Heisenberg's inequality, Hardy's theorem


The result just obtained, which also holds for [\bar{\scr F}], shows that the `peakier' [G_{\bf A}], the `broader' [{\scr F}[G_{\bf A}]]. This is a general property of the Fourier transformation, expressed in dimension 1 by the Heisenberg inequality (Weyl, 1931[link]): [\eqalign{&\left({\int} x^{2}|\;f(x)|^{2} \;\hbox{d}x\right) \left({\int} \xi^{2}|{\scr F}[\;f]( \xi)|^{2} \;\hbox{d}\xi \right)\cr &\quad \geq {1 \over 16\pi^{2}} \left({\int} |\;f(x)|^{2} \;\hbox{d}x\right)^{2},}] where, by a beautiful theorem of Hardy (1933)[link], equality can only be attained for f Gaussian. Hardy's theorem is even stronger: if both f and [{\scr F}[\;f]] behave at infinity as constant multiples of G, then each of them is everywhere a constant multiple of G; if both f and [{\scr F}[\;f]] behave at infinity as constant multiples of [G \times \hbox{monomial}], then each of them is a finite linear combination of Hermite functions. Hardy's theorem is invoked in Section[link] to derive the optimal procedure for spreading atoms on a sampling grid in order to obtain the most accurate structure factors.
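The inequality can be probed numerically (a sketch assuming NumPy; the perturbed Gaussian is an arbitrary non-Gaussian example): the moment product equals the bound for the standard Gaussian and strictly exceeds it otherwise.

```python
import numpy as np

# Ratio of the two sides of the Heisenberg inequality:
# (∫x²|f|²)(∫ξ²|F[f]|²) ≥ (1/16π²)(∫|f|²)², equality iff f is Gaussian.
dx = 0.01
x = np.arange(-8.0, 8.0, dx)
xi = np.arange(-8.0, 8.0, dx)

def heisenberg_ratio(fv):
    Ff = np.array([np.sum(fv * np.exp(-2j * np.pi * s * x)) * dx for s in xi])
    lhs = (np.sum(x**2 * np.abs(fv)**2) * dx) * (np.sum(xi**2 * np.abs(Ff)**2) * dx)
    rhs = (np.sum(np.abs(fv)**2) * dx)**2 / (16.0 * np.pi**2)
    return lhs / rhs

g = np.exp(-np.pi * x**2)
assert abs(heisenberg_ratio(g) - 1.0) < 1e-4   # equality for the Gaussian
assert heisenberg_ratio((1.0 + x) * g) > 1.1   # strict inequality otherwise
```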

The search for optimal compromises between the confinement of f to a compact domain in x-space and of [{\scr F}[\;f]] to a compact domain in ξ-space leads to consideration of prolate spheroidal wavefunctions (Pollack & Slepian, 1961[link]; Landau & Pollack, 1961[link], 1962[link]).

Symmetry property


A final formal property of the Fourier transform, best established in [{\scr S}], is its symmetry: if f and g are in [{\scr S}], then by Fubini's theorem [\eqalign{\langle {\scr F}[\;f], g\rangle &= {\textstyle\int\limits_{{\bb R}^{n}}} \left({\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x}) \exp (-2\pi i{\boldxi} \cdot {\bf x}) \;\hbox{d}^{n}{\bf x}\right) g({\boldxi}) \;\hbox{d}^{n}{\boldxi}\cr &= {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x}) \left({\textstyle\int\limits_{{\bb R}^{n}}} g({\boldxi}) \exp (-2\pi i{\boldxi} \cdot {\bf x}) \;\hbox{d}^{n}{\boldxi}\right) \;\hbox{d}^{n}{\bf x}\cr &= \langle f, {\scr F}[g]\rangle.}]
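This transposition identity (note the absence of complex conjugation in the bracket) can be verified numerically (a sketch assuming NumPy; the two test functions are arbitrary members of [{\scr S}]):

```python
import numpy as np

# Check ⟨F[f], g⟩ = ⟨f, F[g]⟩, i.e. ∫ F[f](ξ) g(ξ) dξ = ∫ f(x) F[g](x) dx,
# using the same grid for x and ξ.
dx = 0.01
x = np.arange(-12.0, 12.0, dx)

def F_grid(fv):
    return np.array([np.sum(fv * np.exp(-2j * np.pi * s * x)) * dx for s in x])

f = np.exp(-np.pi * x**2)
g = x * np.exp(-2.0 * x**2)

lhs = np.sum(F_grid(f) * g) * dx
rhs = np.sum(f * F_grid(g)) * dx
assert abs(lhs - rhs) < 1e-8
```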

This possibility of `transposing' [{\scr F}] (and [\bar{\scr F}]) from the left to the right of the duality bracket will be used in Section[link] to extend the Fourier transformation to distributions.

Various writings of Fourier transforms


Other ways of writing Fourier transforms in [{\bb R}^{n}] exist besides the one used here. All have the form [{\scr F}_{h, \, \omega}[\;f]({\boldxi}) = {1 \over h^{n}} {\int\limits_{{\bb R}^{n}}} f({\bf x}) \exp (-i\omega {\boldxi} \cdot {\bf x}) \;\hbox{d}^{n}{\bf x},] where h is real positive and ω real non-zero, with the reciprocity formula written: [f({\bf x}) = {1 \over k^{n}} {\int\limits_{{\bb R}^{n}}} {\scr F}_{h, \,\omega}[\;f]({\boldxi}) \exp (+i\omega {\boldxi} \cdot {\bf x}) \;\hbox{d}^{n}{\boldxi}] with k real positive. The consistency condition between h, k and ω is [hk = {2\pi \over |\omega|}.]

The usual choices are: [\displaylines{\quad (\hbox{i})\quad\; \omega = \pm 2 \pi, h = k = 1 \quad (\hbox{as here})\hbox{;}\hfill\cr \quad (\hbox{ii})\quad \omega = \pm 1, h = 1, k = 2 \pi \quad (\hbox{in probability theory and in solid-state physics})\hbox{;}\hfill\cr \quad (\hbox{iii})\quad \omega = \pm 1, h = k = \sqrt{2 \pi} \quad (\hbox{in much of classical analysis}).\hfill}]
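The consistency condition hk = 2π/|ω| can be exercised numerically: the sketch below (NumPy assumed; grids and the Gaussian test function are arbitrary) implements the generic pair for convention (ii) and checks the round trip.

```python
import numpy as np

# Generic convention F_{h,ω}: forward transform with 1/hⁿ and kernel
# exp(−iωξx); inverse with 1/kⁿ and exp(+iωξx), where hk = 2π/|ω|.
# Convention (ii): ω = 1, h = 1, hence k = 2π.
dx = 0.01
x = np.arange(-10.0, 10.0, dx)
f = np.exp(-x**2)

omega, h = 1.0, 1.0
k = 2.0 * np.pi / (h * abs(omega))        # consistency condition

dxi = 0.01
xi = np.arange(-30.0, 30.0, dxi)
Ff = np.array([np.sum(f * np.exp(-1j * omega * s * x)) * dx / h for s in xi])

for x0 in (0.0, 1.3):
    back = np.sum(Ff * np.exp(1j * omega * xi * x0)) * dxi / k
    assert abs(back - np.exp(-x0**2)) < 1e-4
```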

It should be noted that conventions (ii) and (iii) introduce numerical factors of 2π in convolution and Parseval formulae, while (ii) breaks the symmetry between [{\scr F}] and [\bar{\scr F}].

Tables of Fourier transforms


The books by Campbell & Foster (1948)[link], Erdélyi (1954)[link], and Magnus et al. (1966)[link] contain extensive tables listing pairs of functions and their Fourier transforms. Bracewell (1986)[link] lists those pairs particularly relevant to electrical engineering applications.

Fourier transforms of tempered distributions

Introduction

It was found in Section[link] that the usual space of test functions [{\scr D}] is not invariant under [{\scr F}] and [\bar{\scr F}]. By contrast, the space [{\scr S}] of infinitely differentiable rapidly decreasing functions is invariant under [{\scr F}] and [\bar{\scr F}], and furthermore transposition formulae such as [\langle {\scr F}[\;f], g\rangle = \langle \;f, {\scr F}[g]\rangle] hold for all [f, g \in {\scr S}]. It is precisely this type of transposition which was used successfully in Sections[link] and[link] to define the derivatives of distributions and their products with smooth functions.

This suggests using [{\scr S}] instead of [{\scr D}] as a space of test functions ϕ, and defining the Fourier transform [{\scr F}[T]] of a distribution T by [\langle {\scr F}[T], \varphi \rangle = \langle T, {\scr F}[\varphi] \rangle] whenever T is capable of being extended from [{\scr D}] to [{\scr S}] while remaining continuous. It is this latter proviso which will be subsumed under the adjective `tempered'. As was the case with the construction of [{\scr D}\,'], it is the definition of a sufficiently strong topology (i.e. notion of convergence) in [{\scr S}] which will play a key role in transferring to the elements of its topological dual [{\scr S}\,'] (called tempered distributions) all the properties of the Fourier transformation.

Besides the general references to distribution theory mentioned in Section[link], the reader may consult the books by Zemanian (1965[link], 1968[link]). Lavoine (1963)[link] contains tables of Fourier transforms of distributions.

[{\scr S}] as a test-function space


A notion of convergence has to be introduced in [{\scr S}({\bb R}^{n})] in order to be able to define and test the continuity of linear functionals on it.

A sequence [(\varphi_{j})] of functions in [{\scr S}] will be said to converge to 0 if, for any given multi-indices k and p, the sequence [({\bf x}^{{\bf k}}D^{{\bf p}} \varphi_{j})] tends to 0 uniformly on [{\bb R}^{n}].

It can be shown that [{\scr D}({\bb R}^{n})] is dense in [{\scr S}({\bb R}^{n})]. Translation is continuous for this topology. For any linear differential operator [P(D) = {\textstyle\sum_{\bf p}} a_{\bf p} D^{{\bf p}}] and any polynomial [Q({\bf x})] over [{\bb R}^{n}], [(\varphi_{j}) \rightarrow 0] implies [[Q({\bf x}) \times P(D)\varphi_{j}] \rightarrow 0] in the topology of [{\scr S}]. Therefore, differentiation and multiplication by polynomials are continuous for the topology on [{\scr S}].

The Fourier transformations [{\scr F}] and [\bar{\scr F}] are also continuous for the topology of [{\scr S}]. Indeed, let [(\varphi_{j})] converge to 0 for the topology on [{\scr S}]. Then, by Section[link], [\|(2\pi \boldxi)^{{\bf m}} D^{{\bf p}} ({\scr F}[\varphi_{j}])\|_{\infty} \leq \| D^{{\bf m}} [(2\pi {\bf x})^{{\bf p}} \varphi_{j}]\|_{1}.] The right-hand side tends to 0 as [j \rightarrow \infty] by definition of convergence in [{\scr S}], hence [\|\boldxi\|^{{\bf m}} D^{{\bf p}} ({\scr F}[\varphi_{j}]) \rightarrow 0] uniformly, so that [({\scr F}[\varphi_{j}]) \rightarrow 0] in [{\scr S}] as [j \rightarrow \infty]. The same proof applies to [\bar{\scr F}].

Definition and examples of tempered distributions


A distribution [T \in {\scr D}\,'({\bb R}^{n})] is said to be tempered if it can be extended into a continuous linear functional on [{\scr S}].

If [{\scr S}\,'({\bb R}^{n})] is the topological dual of [{\scr S}({\bb R}^{n})], and if [S \in {\scr S}^{\prime}({\bb R}^{n})], then its restriction to [{\scr D}] is a tempered distribution; conversely, if [T \in {\scr D}\,'] is tempered, then its extension to [{\scr S}] is unique (because [{\scr D}] is dense in [{\scr S}]), hence it defines an element S of [{\scr S}\,']. We may therefore identify [{\scr S}\,'] and the space of tempered distributions.

A distribution with compact support is tempered, i.e. [{\scr S}\,' \supset {\scr E}\,']. By transposition of the corresponding properties of [{\scr S}], it is readily established that the derivative, translate or product by a polynomial of a tempered distribution is still a tempered distribution.

These inclusion relations may be summarized as follows: since [{\scr S}] contains [{\scr D}] but is contained in [{\scr E}], the reverse inclusions hold for the topological duals, and hence [{\scr S}\,'] contains [{\scr E}\,'] but is contained in [{\scr D}\,'].

A locally summable function f on [{\bb R}^{n}] will be said to be of polynomial growth if [|\;f({\bf x})|] can be majorized by a polynomial in [\|{\bf x}\|] as [\|{\bf x}\| \rightarrow \infty]. It is easily shown that such a function f defines a tempered distribution [T_{f}] via [\langle T_{f}, \varphi \rangle = {\textstyle\int\limits_{{\bb R}^{n}}} f({\bf x}) \varphi ({\bf x}) \;\hbox{d}^{n} {\bf x}.] In particular, polynomials over [{\bb R}^{n}] define tempered distributions, and so do functions in [{\scr S}]. The latter remark, together with the transposition identity (Section[link]), invites the extension of [{\scr F}] and [\bar{\scr F}] from [{\scr S}] to [{\scr S}\,'].

Fourier transforms of tempered distributions


The Fourier transform [{\scr F}[T]] and cotransform [\bar{\scr F}[T]] of a tempered distribution T are defined by [\eqalign{\langle {\scr F}[T], \varphi \rangle &= \langle T, {\scr F}[\varphi]\rangle \cr \langle \bar{\scr F}[T], \varphi \rangle &= \langle T, \bar{\scr F}[\varphi]\rangle}] for all test functions [\varphi \in {\scr S}]. Both [{\scr F}[T]] and [\bar{\scr F}[T]] are themselves tempered distributions, since the maps [\varphi \;\longmapsto\; {\scr F}[\varphi]] and [\varphi \;\longmapsto\; \bar{\scr F}[\varphi]] are both linear and continuous for the topology of [{\scr S}]. In the same way that x and ξ have been used consistently as arguments for ϕ and [{\scr F}[\varphi]], respectively, the notation [T_{\bf x}] and [{\scr F}[T]_{\boldxi}] will be used to indicate which variables are involved.

When T is a distribution with compact support, its Fourier transform may be written [{\scr F}[T_{\bf x}]_{\boldxi} = \langle T_{\bf x}, \exp (- 2\pi i \boldxi \cdot {\bf x})\rangle] since the function [{\bf x} \;\longmapsto\; \exp (- 2\pi i {\boldxi} \cdot {\bf x})] is in [{\scr E}] while [T_{\bf x} \in {\scr E}\,']. It can be shown, as in Section[link], to be analytically continuable into an entire function over [{\bb C}^{n}].

Transposition of basic properties


The duality between differentiation and multiplication by a monomial extends from [{\scr S}] to [{\scr S}\,'] by transposition: [\eqalign{{\scr F}[D_{\bf x}^{{\bf p}} T_{\bf x}]_{\boldxi} &= (2\pi i \boldxi)^{{\bf p}} {\scr F}[T_{\bf x}]_{\boldxi} \cr D_{\boldxi}^{{\bf p}} ({\scr F}[T_{\bf x}]_{\boldxi}) &= {\scr F}[(- 2\pi i {\bf x})^{{\bf p}} T_{\bf x}]_{\boldxi}.}] Analogous formulae hold for [\bar{\scr F}], with i replaced by −i.
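As a discrete numerical sketch of this duality (the grid, the Gaussian test function and the use of the DFT as a stand-in for [{\scr F}] are assumptions of this illustration, not part of the theory above), the DFT of the sampled derivative agrees with the monomial [2\pi i \boldxi] times the DFT of the function:

```python
import numpy as np

# Discrete sketch of F[Df](xi) = (2 pi i xi) F[f](xi), using the FFT of a
# well sampled Gaussian as a stand-in for the continuous transform.
# Grid size and the choice of test function are assumptions.
N, L = 256, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx           # grid covering [-10, 10)
xi = np.fft.fftfreq(N, d=dx)               # conjugate variable

f = np.exp(-np.pi * x**2)                  # standard Gaussian
df = -2.0 * np.pi * x * f                  # its exact derivative

lhs = np.fft.fft(df)                       # transform of the derivative
rhs = 2j * np.pi * xi * np.fft.fft(f)      # monomial times the transform
assert np.allclose(lhs, rhs, atol=1e-8)
```

The agreement is at machine precision here because the Gaussian is so well sampled that aliasing is negligible.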

The formulae expressing the duality between translation and phase shift, e.g. [\eqalign{{\scr F}[\tau_{\bf a} T_{\bf x}]_{\boldxi} &= \exp (-2\pi i{\bf a} \cdot {\boldxi}) {\scr F}[T_{\bf x}]_{\boldxi} \cr \tau_{\boldalpha} ({\scr F}[T_{\bf x}]_{\boldxi}) &= {\scr F}[\exp (2\pi i{\boldalpha} \cdot {\bf x}) T_{\bf x}]_{\boldxi}\hbox{;}}] between a linear change of variable and its contragredient, e.g. [{\scr F}[A^{\#} T] = |\hbox{det } {\bf A}| [({\bf A}^{-1})^{T}]^{\#} {\scr F}[T]\hbox{;}] are obtained similarly by transposition from the corresponding identities in [{\scr S}]. They give a transposition formula for an affine change of variables [{\bf x} \;\longmapsto\; S({\bf x}) = {\bf Ax} + {\bf b}] with non-singular matrix A: [\eqalign{{\scr F}[S^{\#} T] &= \exp (-2\pi i{\boldxi} \cdot {\bf b}) {\scr F}[A^{\#} T] \cr &= \exp (-2\pi i{\boldxi} \cdot {\bf b}) |\hbox{det } {\bf A}| [({\bf A}^{-1})^{T}]^{\#} {\scr F}[T],}] with a similar result for [\bar{\scr F}], replacing −i by +i.
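The translation/phase-shift duality has an exact discrete counterpart, the circular shift theorem for the DFT; the following sketch (grid spacing and shift size are assumptions) verifies it:

```python
import numpy as np

# Discrete sketch of the translation/phase-shift duality: a circular shift
# of the samples multiplies the DFT by exp(-2 pi i a xi).  The circular
# shift theorem makes this exact; grid and shift size are assumptions.
N, dx = 128, 0.1
x = np.arange(N) * dx
f = np.exp(-np.pi * (x - 6.4)**2)          # a bump well inside the grid

s = 7                                      # shift by s samples, i.e. a = s * dx
a = s * dx
xi = np.fft.fftfreq(N, d=dx)

lhs = np.fft.fft(np.roll(f, s))            # transform of the translate
rhs = np.exp(-2j * np.pi * a * xi) * np.fft.fft(f)
assert np.allclose(lhs, rhs)
```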

Conjugate symmetry is obtained similarly: [{\scr F}[\bar{T}] = \breve{\overline{{\scr F}[T]}}, {\scr F}[\breve{\bar{T}}] = \overline{{\scr F}[T]},] with the same identities for [\bar{\scr F}].

The tensor product property also transposes to tempered distributions: if [U \in {\scr S}\,'({\bb R}^{m}), V \in {\scr S}\,'({\bb R}^{n})], [\eqalign{{\scr F}[U_{\bf x} \otimes V_{\bf y}] &= {\scr F}[U]_{\boldxi} \otimes {\scr F}[V]_{\boldeta} \cr \bar{\scr F}[U_{\bf x} \otimes V_{\bf y}] &= \bar{\scr F}[U]_{\boldxi} \otimes \bar{\scr F}[V]_{\boldeta}.}]

Transforms of δ-functions


Since δ has compact support, [{\scr F}[\delta_{\bf x}]_{\boldxi} = \langle \delta_{\bf x}, \exp (-2\pi i{\boldxi} \cdot {\bf x})\rangle = 1_{\boldxi},\quad i.e.\ {\scr F}[\delta] = 1.] It is instructive to show that conversely [{\scr F}[1] = \delta] without invoking the reciprocity theorem. Since [\partial_{j} 1 = 0] for all [j = 1, \ldots, n], it follows from Section[link] that [{\scr F}[1] = c\delta]; the constant c can be determined by using the invariance of the standard Gaussian G established in Section[link]: [\langle {\scr F}[1]_{\bf x}, G_{\bf x}\rangle = \langle 1_{\boldxi}, G_{\boldxi}\rangle = 1\hbox{;}] hence [c = 1]. Thus, [{\scr F}[1] = \delta].
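The invariance of the standard Gaussian used above to fix [c = 1] is easy to confirm by direct quadrature (the integration grid and the test frequencies below are assumptions of this numerical sketch):

```python
import numpy as np

# Quadrature check that the standard Gaussian G(x) = exp(-pi x^2) is its
# own Fourier transform -- the invariance used above to determine c = 1.
# The integration grid and the test frequencies are assumptions.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
G = np.exp(-np.pi * x**2)

for xi in (0.0, 0.5, 1.0, 2.0):
    FG = np.sum(G * np.exp(-2j * np.pi * xi * x)) * dx   # approximates F[G](xi)
    assert abs(FG - np.exp(-np.pi * xi**2)) < 1e-8
```

The trapezoid-like sum converges extremely fast here because the integrand is analytic and rapidly decreasing.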

The basic properties above then read (using multi-indices to denote differentiation): [\eqalign{{\scr F}[\delta_{\bf x}^{({\bf m})}]_{\boldxi} = (2\pi i{\boldxi})^{{\bf m}}, \quad &{\scr F}[{\bf x}^{{\bf m}}]_{\boldxi} = (-2\pi i)^{-|{\bf m}|} \delta_{\boldxi}^{({\bf m})}\hbox{;} \cr {\scr F}[\delta_{\bf a}]_{\boldxi} = \exp (-2\pi i{\bf a} \cdot {\boldxi}), \quad &{\scr F}[\exp (2\pi i{\boldalpha} \cdot {\bf x})]_{\boldxi} = \delta_{\boldalpha},}] with analogous relations for [\bar{\scr F}], i becoming −i. Thus derivatives of δ are mapped to monomials (and vice versa), while translates of δ are mapped to `phase factors' (and vice versa).

Reciprocity theorem


The previous results now allow a self-contained and rigorous proof of the reciprocity theorem between [{\scr F}] and [\bar{\scr F}] to be given, whereas in traditional settings (i.e. in [L^{1}] and [L^{2}]) the implicit handling of δ through a limiting process is always the sticking point.

Reciprocity is first established in [{\scr S}] as follows: [\eqalign{\bar{\scr F}[{\scr F}[\varphi]] ({\bf x}) &= {\textstyle\int\limits_{{\bb R}^{n}}} {\scr F}[\varphi] ({\boldxi}) \exp (2\pi i{\boldxi} \cdot {\bf x})\ {\rm d}^{n} {\boldxi} \cr &= {\textstyle\int\limits_{{\bb R}^{n}}} {\scr F}[\tau_{-{\bf x}} \varphi] ({\boldxi})\ {\rm d}^{n} {\boldxi} \cr &= \langle 1, {\scr F}[\tau_{-{\bf x}} \varphi]\rangle \cr &= \langle {\scr F}[1], \tau_{-{\bf x}} \varphi\rangle \cr &= \langle \tau_{\bf x} \delta, \varphi\rangle \cr &= \varphi ({\bf x})}] and similarly [{\scr F}[\bar{\scr F}[\varphi]] ({\bf x}) = \varphi ({\bf x}).]

The reciprocity theorem is then proved in [{\scr S}\,'] by transposition: [\bar{\scr F}[{\scr F}[T]] = {\scr F}[\bar{\scr F}[T]] = T \quad\hbox{for all } T \in {\scr S}\,'.] Thus the Fourier cotransformation [\bar{\scr F}] in [{\scr S}\,'] may legitimately be called the `inverse Fourier transformation'.

The method of Section[link] may then be used to show that [{\scr F}] and [\bar{\scr F}] both have period 4 in [{\scr S}\,'].

Multiplication and convolution


Multiplier functions [\alpha ({\bf x})] for tempered distributions must be infinitely differentiable, as for ordinary distributions; furthermore, they must grow sufficiently slowly as [\|x\| \rightarrow \infty] to ensure that [\alpha \varphi \in {\scr S}] for all [\varphi \in {\scr S}] and that the map [\varphi \;\longmapsto\; \alpha \varphi] is continuous for the topology of [{\scr S}]. This leads to choosing for multipliers the subspace [{\scr O}_{M}] consisting of functions [\alpha \in {\scr E}] of polynomial growth. It can be shown that if f is in [{\scr O}_{M}], then the associated distribution [T_{f}] is in [{\scr S}\,'] (i.e. is a tempered distribution); and that conversely if T is in [{\scr S}\,', \mu * T] is in [{\scr O}_{M}] for all [\mu \in {\scr D}].

Corresponding restrictions must be imposed to define the space [{\scr O}'_{C}] of those distributions T whose convolution [S * T] with a tempered distribution S is still a tempered distribution: T must be such that, for all [\varphi \in {\scr S}, \theta ({\bf x}) = \langle T_{\bf y}, \varphi ({\bf x} + {\bf y})\rangle] is in [{\scr S}]; and such that the map [\varphi \;\longmapsto\; \theta] be continuous for the topology of [{\scr S}]. This implies that T is `rapidly decreasing'. It can be shown that if f is in [{\scr S}], then the associated distribution [T_{f}] is in [{\scr O}'_{C}]; and that conversely if T is in [{\scr O}'_{C}, \mu * T] is in [{\scr S}] for all [\mu \in {\scr D}].

The two spaces [{\scr O}_{M}] and [{\scr O}'_{C}] are mapped into each other by the Fourier transformation [\eqalign{{\scr F}({\scr O}_{M}) &= \bar{\scr F}({\scr O}_{M}) = {\scr O}'_{C} \cr {\scr F}({\scr O}'_{C}) &= \bar{\scr F}({\scr O}'_{C}) = {\scr O}_{M}}] and the convolution theorem takes the form [\eqalign{{\scr F}[\alpha S] &= {\scr F}[\alpha] * {\scr F}[S] \quad\; S \in {\scr S}\,', \alpha \in {\scr O}_{M},{\scr F}[\alpha] \in {\scr O}'_{C}\hbox{;}\cr {\scr F}[S * T] &= {\scr F}[S] \times {\scr F}[T] \quad S \in {\scr S}\,', T \in {\scr O}'_{C},{\scr F}[T] \in {\scr O}_{M}.}] The same identities hold for [\bar{\scr F}]. Taken together with the reciprocity theorem, these show that [{\scr F}] and [\bar{\scr F}] establish mutually inverse isomorphisms between [{\scr O}_{M}] and [{\scr O}'_{C}], and exchange multiplication for convolution in [{\scr S}\,'].
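The convolution theorem has an exact discrete counterpart for periodic sequences, which the following sketch verifies (the random test data are an assumption of this illustration):

```python
import numpy as np

# Discrete counterpart of F[S * T] = F[S] x F[T]: for periodic sequences
# the DFT of a circular convolution equals the product of the DFTs.
# The random test data are an assumption of this sketch.
rng = np.random.default_rng(0)
N = 64
s = rng.standard_normal(N)
t = rng.standard_normal(N)

# circular convolution (s * t)_k = sum_n s_n t_{(k - n) mod N}
conv = np.array([sum(s[n] * t[(k - n) % N] for n in range(N)) for k in range(N)])

lhs = np.fft.fft(conv)
rhs = np.fft.fft(s) * np.fft.fft(t)
assert np.allclose(lhs, rhs)
```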

It may be noticed that most of the basic properties of [{\scr F}] and [\bar{\scr F}] may be deduced from this theorem and from the properties of δ. Differentiation operators [D^{\bf m}] and translation operators [\tau_{\bf a}] are convolutions with [D^{\bf m}\delta] and [\tau_{\bf a} \delta]; they are turned, respectively, into multiplication by monomials [(\pm 2\pi i{\boldxi})^{{\bf m}}] (the transforms of [D^{{\bf m}}\delta]) or by phase factors [\exp(\pm 2 \pi i{\bf a} \cdot {\boldxi})] (the transforms of [\tau_{\bf a}\delta]).

Another consequence of the convolution theorem is the duality established by the Fourier transformation between sections and projections of a function and its transform. For instance, in [{\bb R}^{3}], the projection of [f(x, y, z)] on the x, y plane along the z axis may be written [(\delta_{x} \otimes \delta_{y} \otimes 1_{z}) * f\hbox{;}] its Fourier transform is then [(1_{\xi} \otimes 1_{\eta} \otimes \delta_{\zeta}) \times {\scr F}[\;f],] which is the section of [{\scr F}[\;f]] by the plane [\zeta = 0], orthogonal to the z axis used for projection. There are numerous applications of this property in crystallography (Section[link]) and in fibre diffraction (Section[link]).
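The projection/section duality holds exactly for finite arrays under the DFT, as the following sketch shows (the random test array is an assumption of this illustration):

```python
import numpy as np

# Discrete form of the projection/section duality: summing a 2D array
# along one axis (projection) and taking the 1D DFT yields the zero-
# frequency section of the full 2D DFT.  The test array is an assumption.
rng = np.random.default_rng(1)
f = rng.standard_normal((32, 48))

projection = f.sum(axis=1)                 # project along the second axis
lhs = np.fft.fft(projection)               # transform of the projection
rhs = np.fft.fft2(f)[:, 0]                 # section eta = 0 of the 2D transform
assert np.allclose(lhs, rhs)
```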

[L^{2}] aspects, Sobolev spaces

The special properties of [{\scr F}] in the space of square-integrable functions [L^{2}({\bb R}^{n})], such as Parseval's identity, can be accommodated within distribution theory: if [u \in L^{2}({\bb R}^{n})], then [T_{u}] is a tempered distribution in [{\scr S}\,'] (the map [u \;\longmapsto\; T_{u}] being continuous) and it can be shown that [S = {\scr F}[T_{u}]] is of the form [S_{v}], where [v = {\scr F}[u]] is the Fourier transform of u in [L^{2}({\bb R}^{n})]. By Plancherel's theorem, [\|u\|_{2} = \|v\|_{2}].

This embedding of [L^{2}] into [{\scr S}\,'] can be used to derive the convolution theorem for [L^{2}]. If u and v are in [L^{2}({\bb R}^{n})], then [u * v] can be shown to be a bounded continuous function; thus [u * v] is not in [L^{2}], but it is in [{\scr S}\,'], so that its Fourier transform is a distribution, and [{\scr F}[u * v] = {\scr F}[u] \times {\scr F}[v].]

Spaces of tempered distributions related to [L^{2}({\bb R}^{n})] can be defined as follows. For any real s, define the Sobolev space [H_{s}({\bb R}^{n})] to consist of all tempered distributions [S \in {\scr S}\,'({\bb R}^{n})] such that [(1 + |\boldxi|^{2})^{s/2} {\scr F}[S]_{\boldxi} \in L^{2}({\bb R}^{n}).]

These spaces play a fundamental role in the theory of partial differential equations, and in the mathematical theory of tomographic reconstruction – a subject not unrelated to the crystallographic phase problem (Natterer, 1986[link]).

Periodic distributions and Fourier series

Terminology


Let [{\bb Z}^{n}] be the subset of [{\bb R}^{n}] consisting of those points with (signed) integer coordinates; it is an n-dimensional lattice, i.e. a free Abelian group on n generators. A particularly simple set of n generators is given by the standard basis of [{\bb R}^{n}], and hence [{\bb Z}^{n}] will be called the standard lattice in [{\bb R}^{n}]. Any other `non-standard' n-dimensional lattice Λ in [{\bb R}^{n}] is the image of this standard lattice by a general linear transformation.

If we identify any two points in [{\bb R}^{n}] whose coordinates are congruent modulo [{\bb Z}^{n}], i.e. differ by a vector in [{\bb Z}^{n}], we obtain the standard n-torus [{\bb R}^{n}/{\bb Z}^{n}]. The latter may be viewed as [({\bb R}/{\bb Z})^{n}], i.e. as the Cartesian product of n circles. The same identification may be carried out modulo a non-standard lattice Λ, yielding a non-standard n-torus [{\bb R}^{n}/\Lambda]. The correspondence to crystallographic terminology is that `standard' coordinates over the standard 3-torus [{\bb R}^{3}/{\bb Z}^{3}] are called `fractional' coordinates over the unit cell; while Cartesian coordinates, e.g. in ångströms, constitute a set of non-standard coordinates.
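The relation between fractional (`standard') and Cartesian (`non-standard') coordinates can be sketched numerically as follows; the triclinic cell below is a hypothetical example, not taken from the text:

```python
import numpy as np

# Fractional ('standard') vs Cartesian ('non-standard') coordinates for a
# hypothetical cell: with a period matrix A whose columns are the cell
# vectors, x_cart = A @ x_frac, and points congruent modulo the lattice
# share the same fractional parts.  The cell values are assumptions.
A = np.array([[5.0, 1.0, 0.3],
              [0.0, 7.0, 0.8],
              [0.0, 0.0, 9.0]])            # columns = a1, a2, a3 (angstroms)

x_frac = np.array([0.25, 0.5, 0.75])
x_cart = A @ x_frac                        # to Cartesian coordinates
assert np.allclose(np.linalg.solve(A, x_cart), x_frac)

# shifting by a lattice vector leaves the fractional parts unchanged
y_cart = x_cart + A @ np.array([1.0, -2.0, 3.0])
y_frac = np.linalg.solve(A, y_cart)
assert np.allclose(y_frac % 1.0, x_frac % 1.0)
```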

Finally, we will denote by I the unit cube [[0, 1]^{n}] and by [C_{\varepsilon}] the subset [C_{\varepsilon} = \{{\bf x} \in {\bb R}^{n} \mid |x_{j}| \;\lt\; \varepsilon \hbox{ for all } j = 1, \ldots, n\}.]

[{\bb Z}^{n}]-periodic distributions in [{\bb R}^{n}]


A distribution [T \in {\scr D}\,' ({\bb R}^{n})] is called periodic with period lattice [{\bb Z}^{n}] (or [{\bb Z}^{n}]-periodic) if [\tau_{\bf m} T = T] for all [{\bf m} \in {\bb Z}^{n}] (in crystallography the period lattice is the direct lattice).

Given a distribution with compact support [T^{0} \in {\scr E}\,' ({\bb R}^{n})], then [T = {\textstyle\sum_{{\bf m} \in {\bb Z}^{n}}} \tau_{\bf m} T^{0}] is a [{\bb Z}^{n}]-periodic distribution. Note that we may write [T = r * T^{0}], where [r = {\textstyle\sum_{{\bf m} \in {\bb Z}^{n}}} \delta_{({\bf m})}] consists of Dirac δ's at all nodes of the period lattice [{\bb Z}^{n}].

Conversely, any [{\bb Z}^{n}]-periodic distribution T may be written as [r * T^{0}] for some [T^{0} \in {\scr E}\,']. To retrieve such a `motif' [T^{0}] from T, a function ψ will be constructed in such a way that [\psi \in {\scr D}] (hence has compact support) and [r * \psi = 1]; then [T^{0} = \psi T]. Indicator functions (Section[link]) such as [\chi_{1}] or [\chi_{C_{1/2}}] cannot be used directly, since they are discontinuous; but regularized versions of them may be constructed by convolution (see Section[link]) as [\psi_{0} = \chi_{C_{\varepsilon}} * \theta_{\eta}], with ε and η such that [\psi_{0} ({\bf x}) = 1] on [C_{1/2}] and [\psi_{0}({\bf x}) = 0] outside [C_{3/4}]. Then the function [\psi = {\psi_{0} \over {\textstyle\sum_{{\bf m} \in {\bb Z}^{n}}} \tau_{\bf m} \psi_{0}}] has the desired property. The sum in the denominator contains at most [2^{n}] non-zero terms at any given point x and acts as a smoothly varying `multiplicity correction'.
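A one-dimensional numerical sketch of this construction (the grid resolution and the particular bump function are assumptions) builds [\psi_{0}] by convolving an indicator with a mollifier and checks that the normalized ψ has integer translates summing to 1:

```python
import numpy as np

# 1D numerical sketch of the construction above: psi0 is the indicator of
# C_{5/8} convolved with a mollifier of radius 1/8 (so psi0 = 1 on
# [-1/2, 1/2] and 0 outside [-3/4, 3/4]); dividing by the sum of its
# integer translates gives psi with sum_m psi(x - m) = 1.  The grid
# resolution and the particular bump function are assumptions.
dx = 1.0 / 512
x = np.arange(-2, 2, dx)                   # exactly four unit periods

chi = (np.abs(x) <= 5 / 8).astype(float)   # indicator of C_{5/8}
t = (np.arange(129) - 64) * dx             # kernel grid on [-1/8, 1/8]
theta = np.where(np.abs(8 * t) < 1,
                 np.exp(-1.0 / np.maximum(1.0 - (8 * t)**2, 1e-12)), 0.0)
theta /= theta.sum() * dx                  # mollifier of unit mass
psi0 = np.convolve(chi, theta, mode='same') * dx

# integer translation = cyclic shift by 512 samples on this 4-period grid
denom = sum(np.roll(psi0, 512 * m) for m in range(4))
psi = psi0 / denom                         # denom >= 1, so this is safe

assert np.allclose(sum(np.roll(psi, 512 * m) for m in range(4)), 1.0)
assert np.allclose(psi[np.abs(x) <= 0.2], 1.0, atol=1e-6)
```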

Identification with distributions over [{\bb R}^{n}/{\bb Z}^{n}]

Throughout this section, `periodic' will mean `[{\bb Z}^{n}]-periodic'.

Let [s \in {\bb R}], and let [s] denote the largest integer [\leq s]. For [x = (x_{1}, \ldots, x_{n}) \in {\bb R}^{n}], let [\tilde{{\bf x}}] be the unique vector [(\tilde{x}_{1}, \ldots, \tilde{x}_{n})] with [\tilde{x}_{j} = x_{j} - [x_{j}]]. If [{\bf x},{\bf y} \in {\bb R}^{n}], then [\tilde{{\bf x}} = \tilde{{\bf y}}] if and only if [{\bf x} - {\bf y} \in {\bb Z}^{n}]. The image of the map [{\bf x} \;\longmapsto\; \tilde{{\bf x}}] is thus [{\bb R}^{n}] modulo [{\bb Z}^{n}], or [{\bb R}^{n}/{\bb Z}^{n}].

If f is a periodic function over [{\bb R}^{n}], then [\tilde{{\bf x}} = \tilde{{\bf y}}] implies [f({\bf x}) = f({\bf y})]; we may thus define a function [\tilde{f}] over [{\bb R}^{n}/{\bb Z}^{n}] by putting [\tilde{f}(\tilde{{\bf x}}) = f({\bf x})] for any [{\bf x} \in {\bb R}^{n}] such that [{\bf x} - \tilde{{\bf x}} \in {\bb Z}^{n}]. Conversely, if [\tilde{f}] is a function over [{\bb R}^{n}/{\bb Z}^{n}], then we may define a function f over [{\bb R}^{n}] by putting [f({\bf x}) = \tilde{f}(\tilde{{\bf x}})], and f will be periodic. Periodic functions over [{\bb R}^{n}] may thus be identified with functions over [{\bb R}^{n}/{\bb Z}^{n}], and this identification preserves the notions of convergence, local summability and differentiability.

Given [\varphi^{0} \in {\scr D}({\bb R}^{n})], we may define [\varphi ({\bf x}) = {\textstyle\sum\limits_{{\bf m} \in {\bb Z}^{n}}} (\tau_{\bf m} \varphi^{0}) ({\bf x})] since the sum only contains finitely many non-zero terms; ϕ is periodic, and [\tilde{\varphi} \in {\scr D}({\bb R}^{n}/{\bb Z}^{n})]. Conversely, if [\tilde{\varphi} \in {\scr D}({\bb R}^{n}/{\bb Z}^{n})] we may define [\varphi \in {\scr E}({\bb R}^{n})] periodic by [\varphi ({\bf x}) = \tilde{\varphi} (\tilde{{\bf x}})], and [\varphi^{0} \in {\scr D}({\bb R}^{n})] by putting [\varphi^{0} = \psi \varphi] with ψ constructed as above.

By transposition, a distribution [\tilde{T} \in {\scr D}\,'({\bb R}^{n}/{\bb Z}^{n})] defines a unique periodic distribution [T \in {\scr D}\,'({\bb R}^{n})] by [\langle T, \varphi^{0} \rangle = \langle \tilde{T}, \tilde{\varphi} \rangle]; conversely, [T \in {\scr D}\,'({\bb R}^{n})] periodic defines uniquely [\tilde{T} \in {\scr D}\,'({\bb R}^{n}/{\bb Z}^{n})] by [\langle \tilde{T}, \tilde{\varphi}\rangle = \langle T, \varphi^{0}\rangle].

We may therefore identify [{\bb Z}^{n}]-periodic distributions over [{\bb R}^{n}] with distributions over [{\bb R}^{n}/{\bb Z}^{n}]. We will, however, use mostly the former presentation, as it is more closely related to the crystallographer's perception of periodicity (see Section[link]).

Fourier transforms of periodic distributions


The content of this section is perhaps the central result in the relation between Fourier theory and crystallography (Section[link]).

Let [T = r * T^{0}] with r defined as in Section[link]. Then [r \in {\scr S}\,'], [T^{0} \in {\scr E}\,'] hence [T^{0} \in {\scr O}'_{C}], so that [T \in {\scr S}\,']: [{\bb Z}^{n}]-periodic distributions are tempered, hence have a Fourier transform. The convolution theorem (Section[link]) is applicable, giving: [{\scr F}[T] = {\scr F}[r] \times {\scr F}[T^{0}]] and similarly for [\bar{\scr F}].

Since [{\scr F}[\delta_{({\bf m})}] (\boldxi) = \exp (-2 \pi i {\boldxi} \cdot {\bf m})], formally [{\scr F}[r]_{\boldxi} = {\textstyle\sum\limits_{{\bf m} \in {\bb Z}^{n}}} \exp (-2 \pi i \boldxi \cdot {\bf m}) = Q,] say.

It is readily shown that Q is tempered and periodic, so that [Q = {\textstyle\sum_{{\boldmu} \in {\bb Z}^{n}}} \tau_{{\boldmu}} (\psi Q)], while the periodicity of r implies that [[\exp (-2 \pi i \xi_{j}) - 1] \psi Q = 0, \quad j = 1, \ldots, n.] Since the first factors have single isolated zeros at [\xi_{j} = 0] in [C_{3/4}], [\psi Q = c\delta] (see Section[link]) and hence by periodicity [Q = cr]; convoluting with [\chi_{C_{1}}] shows that [c = 1]. Thus we have the fundamental result: [{\scr F}[r] = r] so that [{\scr F}[T] = r \times {\scr F}[T^{0}]\hbox{;}] i.e., according to Section[link], [{\scr F}[T]_{\boldxi} = {\textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}}} {\scr F}[T^{0}] ({\boldmu}) \times \delta_{({\boldmu})}.]

The right-hand side is a weighted lattice distribution, whose nodes [{\boldmu} \in {\bb Z}^{n}] are weighted by the sample values [{\scr F}[T^{0}] ({\boldmu})] of the transform of the motif [T^{0}] at those nodes. Since [T^{0} \in {\scr E}\,'], the latter values may be written [{\scr F}[T^{0}]({\boldmu}) = \langle T_{\bf x}^{0}, \exp (-2 \pi i {\boldmu} \cdot {\bf x})\rangle.] By the structure theorem for distributions with compact support (Section[link]), [T^{0}] is a derivative of finite order of a continuous function; therefore, from Section[link] and Section[link], [{\scr F}[T^{0}]({\boldmu})] grows at most polynomially as [\|{\boldmu}\| \rightarrow \infty] (see also Section[link] about this property). Conversely, let [W = {\textstyle\sum_{{\boldmu} \in {\bb Z}^{n}}} w_{{\boldmu}} \delta_{({\boldmu})}] be a weighted lattice distribution such that the weights [w_{\boldmu}] grow at most polynomially as [\|{\boldmu}\| \rightarrow \infty]. Then W is a tempered distribution, whose Fourier cotransform [T_{\bf x} = {\textstyle\sum_{{\boldmu} \in {\bb Z}^{n}}} w_{\boldmu} \exp (+2 \pi i {\boldmu} \cdot {\bf x})] is periodic. If T is now written as [r * T^{0}] for some [T^{0} \in {\scr E}\,'], then by the reciprocity theorem [w_{\boldmu} = {\scr F}[T^{0}]({\boldmu}) = \langle T_{\bf x}^{0}, \exp (-2 \pi i {\boldmu} \cdot {\bf x})\rangle.] Although the choice of [T^{0}] is not unique, and need not yield back the same motif as may have been used to build T initially, different choices of [T^{0}] will lead to the same coefficients [w_{\boldmu}] because of the periodicity of [\exp (-2 \pi i {\boldmu} \cdot {\bf x})].
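The correspondence between a smooth periodic function and its coefficients [w_{\boldmu}] can be illustrated numerically; in the one-dimensional sketch below (test function and truncation order are assumptions), the coefficients are obtained by quadrature over one period and the truncated series resynthesizes the function:

```python
import numpy as np

# Numerical sketch of the analysis/synthesis pair: Fourier coefficients of
# a smooth 1-periodic function are computed by quadrature over one period
# (analysis), then a truncated series is resynthesized (synthesis).
# The test function and truncation order are assumptions.
N = 256
x = np.arange(N) / N                       # one period in fractional coordinates
f = np.exp(np.cos(2 * np.pi * x))          # smooth periodic test function

w = np.fft.fft(f) / N                      # w_mu ~ integral of f(x) exp(-2 pi i mu x)
mu = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies mu

keep = np.flatnonzero(np.abs(mu) <= 20)    # truncate to |mu| <= 20
synth = np.real(sum(w[k] * np.exp(2j * np.pi * mu[k] * x) for k in keep))
assert np.allclose(synth, f)
```

The rapid decay of the coefficients of this smooth function is what makes the low-order truncation already accurate.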

The Fourier transformation thus establishes a duality between periodic distributions and weighted lattice distributions. The pair of relations [\displaylines{\quad (\hbox{i})\hfill w_{\boldmu} = \langle T_{\bf x}^{0}, \exp (-2 \pi i {\boldmu} \cdot {\bf x})\rangle \quad\hfill\cr \quad(\hbox{ii})\hfill T_{\bf x} = {\textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}}} w_{\boldmu} \exp (+2 \pi i {\boldmu} \cdot {\bf x}) \hfill}] are referred to as the Fourier analysis and the Fourier synthesis of T, respectively (there is a discrepancy between this terminology and the crystallographic one, see Section[link]). In other words, any periodic distribution [T \in {\scr S}\,'] may be represented by a Fourier series (ii), whose coefficients are calculated by (i). The convergence of (ii) towards T in [{\scr S}\,'] will be investigated later (Section[link]).

The case of non-standard period lattices


Let Λ denote the non-standard lattice consisting of all vectors of the form [{\textstyle\sum_{j=1}^{n}} m_{j} {\bf a}_{j}], where the [m_{j}] are rational integers and [{\bf a}_{1}, \ldots, {\bf a}_{n}] are n linearly independent vectors in [{\bb R}^{n}]. Let R be the corresponding lattice distribution: [R = {\textstyle\sum_{{\bf x} \in \Lambda}} \delta_{({\bf x})}].

Let A be the non-singular [n \times n] matrix whose successive columns are the coordinates of vectors [{\bf a}_{1}, \ldots, {\bf a}_{n}] in the standard basis of [{\bb R}^{n}]; A will be called the period matrix of Λ, and the mapping [{\bf x} \;\longmapsto\; {\bf Ax}] will be denoted by A. According to Section[link] we have [\langle R, \varphi \rangle = {\textstyle\sum\limits_{{\bf m} \in {\bb Z}^{n}}} \varphi ({\bf Am}) = \langle r, (A^{-1})^{\#} \varphi \rangle = |\det {\bf A}|^{-1} \langle A^{\#} r, \varphi \rangle] for any [\varphi \in {\scr S}], and hence [R = |\det {\bf A}|^{-1} A^{\#} r]. By Fourier transformation, according to Section[link], [{\scr F}[R] = |\det {\bf A}|^{-1} {\scr F}[A^{\#} r] = [({\bf A}^{-1})^{T}]^{\#} {\scr F}[r] = [({\bf A}^{-1})^{T}]^{\#} r,] which we write: [{\scr F}[R] = |\det {\bf A}|^{-1} R^{*}] with [R^{*} = |\det {\bf A}| [({\bf A}^{-1})^{T}]^{\#} r.]

[R^{*}] is a lattice distribution: [R^{*} = {\textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}}} \delta_{[({\bf A}^{-1})^{T} {\boldmu}]} = {\textstyle\sum\limits_{{\boldxi} \in \Lambda^{*}}} \delta_{({\boldxi})}] associated with the reciprocal lattice [\Lambda^{*}] whose basis vectors [{\bf a}_{1}^{*}, \ldots, {\bf a}_{n}^{*}] are the columns of [({\bf A}^{-1})^{T}]. Since the latter matrix is equal to the adjoint matrix (i.e. the matrix of co-factors) of A divided by det A, the components of the reciprocal basis vectors can be written down explicitly (see Section[link] for the crystallographic case [n = 3]).
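The reciprocal basis can be computed directly as the columns of [({\bf A}^{-1})^{T}]; the following sketch (the cell is a hypothetical assumption) checks the defining property [{\bf a}_{i} \cdot {\bf a}_{j}^{*} = \delta_{ij}]:

```python
import numpy as np

# Reciprocal basis from the period matrix: the columns of (A^{-1})^T are
# a1*, ..., an*, and a_i . a_j* = delta_ij.  The cell below is an
# assumption of this sketch.
A = np.array([[6.0, 0.0, 1.0],
              [0.0, 8.0, 2.0],
              [0.0, 0.0, 10.0]])           # columns = a1, a2, a3
Astar = np.linalg.inv(A).T                 # columns = a1*, a2*, a3*

assert np.allclose(A.T @ Astar, np.eye(3)) # a_i . a_j* = delta_ij

# a reciprocal-lattice node has integer dot products with the cell vectors
xi = Astar @ np.array([2.0, -1.0, 3.0])    # node with indices (2, -1, 3)
assert np.allclose(A.T @ xi, [2.0, -1.0, 3.0])
```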

A distribution T will be called Λ-periodic if [\tau_{\boldxi} T = T] for all [{\boldxi} \in \Lambda]; as previously, T may be written [R * T^{0}] for some motif distribution [T^{0}] with compact support. By Fourier transformation, [\eqalignno{{\scr F}[T] &= |\det {\bf A}|^{-1} R^{*} \cdot {\scr F}[T^{0}]\cr &= |\det {\bf A}|^{-1} {\textstyle\sum\limits_{{\boldxi} \in \Lambda^{*}}} {\scr F}[T^{0}] ({\boldxi}) \delta_{({\boldxi})}\cr &= |\det {\bf A}|^{-1} {\textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}}} {\scr F}[T^{0}] [{({\bf A}^{-1})^{T}}{\boldmu}] \delta_{{[({\bf A}^{-1})^{T}} {\boldmu}]}}] so that [{\scr F}[T]] is a weighted reciprocal-lattice distribution, the weight attached to node [{\boldxi} \in \Lambda^{*}] being [|\det {\bf A}|^{-1}] times the value [{\scr F}[T^{0}](\boldxi)] of the Fourier transform of the motif [T^{0}].

This result may be further simplified if T and its motif [T^{0}] are referred to the standard period lattice [{\bb Z}^{n}] by defining t and [t^{0}] so that [T = A^{\#} t], [T^{0} = A^{\#} t^{0}], [t = r * t^{0}]. Then [{\scr F}[T^{0}] ({\boldxi}) = |\det {\bf A}| {\scr F}[t^{0}] ({\bf A}^{T} {\boldxi}),] hence [{\scr F}[T^{0}] [{({\bf A}^{-1})^{T}}{\boldmu}] = |\det {\bf A}| {\scr F}[t^{0}] ({\boldmu}),] so that [{\scr F}[T] = {\textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}}} {\scr F}[t^{0}] ({\boldmu}) \delta_{[({\bf A}^{-1})^{T} {\boldmu}]}] in non-standard coordinates, while [{\scr F}[t] = {\textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}}} {\scr F}[t^{0}] ({\boldmu}) \delta_{({\boldmu})}] in standard coordinates.

The reciprocity theorem may then be written: [\displaylines{\quad (\hbox{iii}) \hfill W_{\boldxi} = |\det {\bf A}|^{-1} \langle T_{\bf x}^{0}, \exp (-2 \pi i {\boldxi} \cdot {\bf x})\rangle, \quad {\boldxi} \in \Lambda^{*} \hfill\cr \quad (\hbox{iv}) \hfill T_{\bf x} = {\textstyle\sum\limits_{{\boldxi} \in \Lambda^{*}}} W_{\boldxi} \exp (+2 \pi i {\boldxi} \cdot {\bf x})\qquad\qquad\qquad\quad\hfill}] in non-standard coordinates, or equivalently: [\displaylines{\quad (\hbox{v}) \hfill w_{\boldmu} = \langle t_{\bf x}^{0}, \exp (-2 \pi i {\boldmu} \cdot {\bf x})\rangle, \quad {\boldmu} \in {\bb Z}^{n} \hfill\cr \quad (\hbox{vi}) \hfill t_{\bf x} = {\textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}}} w_{\boldmu} \exp (+2 \pi i {\boldmu} \cdot {\bf x}) \quad\qquad\hfill}] in standard coordinates. It gives an n-dimensional Fourier series representation for any periodic distribution over [{\bb R}^{n}]. The convergence of such series in [{\scr S}\,' ({\bb R}^{n})] will be examined in Section[link].

Duality between periodization and sampling


Let [T^{0}] be a distribution with compact support (the `motif'). Its Fourier transform [\bar{\scr F}[T^{0}]] is analytic (Section[link]) and may thus be used as a multiplier.

We may rephrase the preceding results as follows:

  • (i) if [T^{0}] is `periodized by R' to give [R * T^{0}], then [\bar{\scr F}[T^{0}]] is `sampled by [R^{*}]' to give [|\det {\bf A}|^{-1} R^{*} \cdot \bar{\scr F}[T^{0}]];

  • (ii) if [\bar{\scr F}[T^{0}]] is `sampled by [R^{*}]' to give [R^{*} \cdot \bar{\scr F}[T^{0}]], then [T^{0}] is `periodized by R' to give [|\det {\bf A}| R * T^{0}].

Thus the Fourier transformation establishes a duality between the periodization of a distribution by a period lattice Λ and the sampling of its transform at the nodes of lattice [\Lambda^{*}] reciprocal to Λ. This is a particular instance of the convolution theorem of Section[link].
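The duality holds exactly in the discrete setting, where it underlies aliasing: zeroing all but every k-th DFT coefficient of a sequence periodizes the sequence in direct space. The sketch below (signal length and sampling step are assumptions) verifies this:

```python
import numpy as np

# Discrete counterpart of the periodization/sampling duality: keeping only
# every 8th DFT coefficient (sampling the transform on a sublattice)
# periodizes the signal by the corresponding translations.  Signal length
# and sampling step are assumptions of this sketch.
rng = np.random.default_rng(2)
N, step = 64, 8
f = rng.standard_normal(N)

F = np.fft.fft(f)
F_sampled = np.where(np.arange(N) % step == 0, F, 0.0)

lhs = np.fft.ifft(F_sampled)               # back to direct space
rhs = sum(np.roll(f, m * (N // step)) for m in range(step)) / step
assert np.allclose(lhs, rhs)
```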

At this point it is traditional to break the symmetry between [{\scr F}] and [\bar{\scr F}] which distribution theory has enabled us to preserve even in the presence of periodicity, and to perform two distinct identifications:

  • (i) a Λ-periodic distribution T will be handled as a distribution [\tilde{T}] on [{\bb R}^{n} / \Lambda], as was done in Section[link];

  • (ii) a weighted lattice distribution [W = {\textstyle\sum_{{\boldmu} \in {\bb Z}^{n}}} W_{\boldmu} \delta_{[({\bf A}^{-1})^{T} {\boldmu}]}] will be identified with the collection [\{W_{\boldmu}|{\boldmu} \in {\bb Z}^{n}\}] of its n-tuply indexed coefficients.

The Poisson summation formula


Let [\varphi \in {\scr S}], so that [{\scr F}[\varphi] \in {\scr S}]. Let R be the lattice distribution associated to lattice Λ, with period matrix A, and let [R^{*}] be associated to the reciprocal lattice [\Lambda^{*}]. Then we may write: [\eqalignno{\langle R, \varphi \rangle &= \langle R, \bar{\scr F}[{\scr F}[\varphi]]\rangle\cr &= \langle \bar{\scr F}[R], {\scr F}[\varphi]\rangle\cr &= |\det {\bf A}|^{-1} \langle R^{*}, {\scr F}[\varphi]\rangle}] i.e. [{\textstyle\sum\limits_{{\bf x} \in \Lambda}} \varphi ({\bf x}) = |\det {\bf A}|^{-1} {\textstyle\sum\limits_{{\boldxi} \in \Lambda^{*}}} {\scr F}[\varphi] ({\boldxi}).]

This identity, which also holds for [\bar{\scr F}], is called the Poisson summation formula. Its usefulness follows from the fact that the speeds of decrease at infinity of ϕ and [{\scr F}[\varphi]] are inversely related (Section[link]), so that if one of the series (say, the left-hand side) is slowly convergent, the other (say, the right-hand side) will be rapidly convergent. This procedure has been used by Ewald (1921)[link] [see also Bertaut (1952)[link], Born & Huang (1954)[link]] to evaluate lattice sums (Madelung constants) involved in the calculation of the internal electrostatic energy of crystals (see Chapter 3.4[link] in this volume on convergence acceleration techniques for crystallographic lattice sums).
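A one-dimensional numerical check of the formula for a scaled Gaussian (the scale values and the truncation are assumptions of this sketch) also illustrates the complementary convergence rates of the two sides:

```python
import numpy as np

# Numerical check of the Poisson summation formula on Z for the scaled
# Gaussian phi(x) = exp(-pi (x/s)^2), whose transform is s exp(-pi (s xi)^2).
# The two sides converge at complementary rates as s varies; the scale
# values and the truncation at |m| <= 50 are assumptions.
m = np.arange(-50, 51)
for s in (0.3, 1.0, 3.0):
    direct = np.sum(np.exp(-np.pi * (m / s)**2))      # sum over the lattice
    dual = s * np.sum(np.exp(-np.pi * (s * m)**2))    # sum over the reciprocal lattice
    assert abs(direct - dual) < 1e-12
```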

When ϕ is a multivariate Gaussian [\varphi ({\bf x}) = G_{\bf B} ({\bf x}) = \exp (-\textstyle{{1 \over 2}} {\bf x}^{T} {\bf Bx}),] then [{\scr F}[\varphi] (\boldxi) = |\det (2 \pi {\bf B}^{-1})|^{1/2} G_{4 \pi^{2} {\bf B}^{-1}} (\boldxi),] and Poisson's summation formula for a lattice with period matrix A reads: [\eqalignno{{\textstyle\sum\limits_{{\bf m} \in {\bb Z}^{n}}} G_{\bf B} ({\bf Am}) &= |\det {\bf A}|^{-1}| \det (2 \pi {\bf B}^{-1})|^{1/2}\cr &\quad \times \textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}} G_{4 \pi^{2}{\bf B}^{-1}} [({\bf A}^{-1})^{T} {\boldmu}]}] or equivalently [{\textstyle\sum\limits_{{\bf m} \in {\bb Z}^{n}}} G_{\bf C} ({\bf m}) = |\det (2 \pi {\bf C}^{-1})|^{1/2} {\textstyle\sum\limits_{{\boldmu} \in {\bb Z}^{n}}} G_{4 \pi^{2}{\bf C}^{-1}} ({\boldmu})] with [{\bf C} = {\bf A}^{T} {\bf BA}.]

Convolution of Fourier series

| top | pdf |

Let [S = R * S^{0}] and [T = R * T^{0}] be two Λ-periodic distributions, the motifs [S^{0}] and [T^{0}] having compact support. The convolution [S * T] does not exist, because S and T do not satisfy the support condition (Section[link]). However, the three distributions R, [S^{0}] and [T^{0}] do satisfy the generalized support condition, so that their convolution is defined; then, by associativity and commutativity: [R * S^{0} * T^{0} = S * T^{0} = S^{0} * T.]

By Fourier transformation and by the convolution theorem: [\eqalignno{R^{*} \times {\scr F}[S^{0} * T^{0}] &= (R^{*} \times {\scr F}[S^{0}]) \times {\scr F}[T^{0}]\cr &= {\scr F}[T^{0}] \times (R^{*} \times {\scr F}[S^{0}]).}] Let [\{U_{\boldxi}\}_{{\boldxi} \in \Lambda^{*}}], [\{V_{\boldxi}\}_{{\boldxi} \in \Lambda^{*}}] and [\{W_{\boldxi}\}_{{\boldxi} \in \Lambda^{*}}] be the sets of Fourier coefficients associated to S, T and [S * T^{0} (= S^{0} * T)], respectively. Identifying the coefficients of [\delta_{\boldxi}] for [{\boldxi} \in \Lambda^{*}] yields the forward version of the convolution theorem for Fourier series: [W_{\boldxi} = |\det {\bf A}| U_{\boldxi} V_{\boldxi}.]
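The forward version can be checked numerically for the unit period lattice [\Lambda = {\bb Z}] (so [|\det {\bf A}| = 1]) using sampled trigonometric polynomials; the grid size N and the test functions f and g below are arbitrary choices, not from the text:

```python
import numpy as np

N = 64
x = np.arange(N) / N
f = 1 + 2 * np.cos(2 * np.pi * x) + np.sin(4 * np.pi * x)   # trig polynomial, degree 2
g = 3 + np.cos(6 * np.pi * x)                               # trig polynomial, degree 3

cf = np.fft.fft(f) / N      # Fourier coefficients of f (exact: f is band-limited, N large)
cg = np.fft.fft(g) / N
# periodic convolution (f*g)(x) = integral of f(y) g(x-y) dy over one period, via the DFT
fg = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)) / N
c_fg = np.fft.fft(fg) / N   # coefficients of f*g: equal to the products cf * cg
```

With period 1 the factor [|\det {\bf A}|] is unity, so the coefficients of the convolution are exactly the products of the coefficients.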

The backward version of the theorem requires that T be infinitely differentiable. The distribution [S \times T] is then well defined and its Fourier coefficients [\{Q_{\boldxi}\}_{\boldxi \in \Lambda^{*}}] are given by [Q_{\boldxi} = {\textstyle\sum\limits_{{\boldeta} \in \Lambda^{*}}} U_{\boldeta} V_{{\boldxi} - {\boldeta}}.]

Toeplitz forms, Szegö's theorem

Toeplitz forms were first investigated by Toeplitz (1907[link], 1910[link], 1911a[link]). They occur in connection with the `trigonometric moment problem' (Shohat & Tamarkin, 1943[link]; Akhiezer, 1965[link]) and probability theory (Grenander, 1952[link]) and play an important role in several direct approaches to the crystallographic phase problem [see Sections[link],[link](e)][link]. Many aspects of their theory and applications are presented in the book by Grenander & Szegö (1958)[link].

Toeplitz forms

Let [f \in L^{1} ({\bb R} / {\bb Z})] be real-valued, so that its Fourier coefficients satisfy the relations [c_{-m} (\;f) = \overline{c_{m} (\;f)}]. The Hermitian form in [n + 1] complex variables [T_{n} [\;f] ({\bf u}) = {\textstyle\sum\limits_{\mu = 0}^{n}}\; {\textstyle\sum\limits_{\nu = 0}^{n}} \;\overline{u_{\mu}} c_{\mu - \nu}u_{\nu}] is called the nth Toeplitz form associated to f. It is a straightforward consequence of the convolution theorem and of Parseval's identity that [T_{n} [\;f]] may be written: [T_{n} [\;f] ({\bf u}) = {\textstyle\int\limits_{0}^{1}} \left|{\textstyle\sum\limits_{\nu = 0}^{n}} {u}_{\nu} \exp (2 \pi i\nu x)\right|^{2} f (x) \;\hbox{d}x.]

The Toeplitz–Carathéodory–Herglotz theorem

It was shown independently by Toeplitz (1911b)[link], Carathéodory (1911)[link] and Herglotz (1911)[link] that a function [f \in L^{1}] is almost everywhere non-negative if and only if the Toeplitz forms [T_{n} [\;f]] associated to f are positive semidefinite for all values of n.
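The criterion can be illustrated numerically. In the sketch below, `toeplitz_form` is a hypothetical helper that builds [T_{n}[\;f]] from a dictionary of Fourier coefficients; the two test functions are assumed examples, not from the text:

```python
import numpy as np

def toeplitz_form(c, n):
    """(n+1) x (n+1) matrix of the Toeplitz form T_n[f], with entries c_{mu-nu};
    c maps the integer m to the Fourier coefficient c_m(f)."""
    idx = np.arange(n + 1)
    return np.array([[c.get(int(mu - nu), 0.0) for nu in idx] for mu in idx])

# f(x) = 1 + cos(2 pi x) >= 0:  c_0 = 1, c_{+/-1} = 1/2  -> positive semidefinite forms
eig_pos = np.linalg.eigvalsh(toeplitz_form({0: 1.0, 1: 0.5, -1: 0.5}, 8))
# f(x) = cos(2 pi x) changes sign:  c_0 = 0, c_{+/-1} = 1/2  -> indefinite forms
eig_neg = np.linalg.eigvalsh(toeplitz_form({0: 0.0, 1: 0.5, -1: 0.5}, 8))
```

The non-negative f yields no negative eigenvalues, while the sign-changing f does, in accordance with the theorem.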

This is equivalent to the infinite system of determinantal inequalities [D_{n} = \det \pmatrix{c_{0} &c_{-1} &\cdot &\cdot &c_{-n}\cr c_{1} &c_{0} &c_{-1} &\cdot &\cdot\cr \cdot &c_{1} &\cdot &\cdot &\cdot\cr \cdot &\cdot &\cdot &\cdot &c_{-1}\cr c_{n} &\cdot &\cdot &c_{1} &c_{0}\cr} \geq 0 \quad \hbox{for all } n.] The [D_{n}] are called Toeplitz determinants. Their application to the crystallographic phase problem is described in Section[link].

Asymptotic distribution of eigenvalues of Toeplitz forms

The eigenvalues of the Hermitian form [T_{n} [\;f]] are defined as the [n + 1] real roots of the characteristic equation [\det \{T_{n} [\;f - \lambda]\} = 0]. They will be denoted by [\lambda_{1}^{(n)}, \lambda_{2}^{(n)}, \ldots, \lambda_{n + 1}^{(n)}.]

It is easily shown that if [m \leq f(x) \leq M] for all x, then [m \leq \lambda_{\nu}^{(n)} \leq M] for all n and all [\nu = 1, \ldots, n + 1]. As [n \rightarrow \infty] these bounds, and the distribution of the [\lambda^{(n)}] within these bounds, can be made more precise by introducing two new notions.

  • (i) Essential bounds: define ess inf f as the largest m such that [f(x) \geq m] except for values of x forming a set of measure 0; and define ess sup f similarly.

  • (ii) Equal distribution. For each n, consider two sets of [n + 1] real numbers: [a_{1}^{(n)}, a_{2}^{(n)}, \ldots, a_{n + 1}^{(n)}, \quad\hbox{and}\quad b_{1}^{(n)}, b_{2}^{(n)}, \ldots, b_{n + 1}^{(n)}.] Assume that for each [\nu] and each n, [|a_{\nu}^{(n)}| \;\lt\; K] and [|b_{\nu}^{(n)}| \;\lt\; K] with K independent of [\nu] and n. The sets [\{a_{\nu}^{(n)}\}] and [\{b_{\nu}^{(n)}\}] are said to be equally distributed in [[-K, +K]] if, for any continuous function F over [[-K, +K]], [\lim\limits_{n \rightarrow \infty} {1 \over n + 1} \sum\limits_{\nu = 1}^{n + 1} [F (a_{\nu}^{(n)}) - F (b_{\nu}^{(n)})] = 0.]

We may now state an important theorem of Szegö (1915[link], 1920[link]). Let [f \in L^{1}], and put [m = \hbox{ess inf}\; f], [M = \hbox{ess sup}\;f]. If m and M are finite, then for any continuous function [F(\lambda)] defined in the interval [m, M] we have [\lim\limits_{n \rightarrow \infty} {1 \over n + 1} \sum\limits_{\nu = 1}^{n + 1} F (\lambda_{\nu}^{(n)}) = \int\limits_{0}^{1} F[\;f(x)] \;\hbox{d}x.] In other words, the eigenvalues [\lambda_{\nu}^{(n)}] of the [T_{n}] and the values [f[\nu/(n + 2)]] of f on a regular subdivision of ]0, 1[ are equally distributed.
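Szegö's theorem can be observed numerically. For the assumed test function [f(x) = 2 + \cos 2 \pi x] (so [m = 1], [M = 3]) the Toeplitz matrix is tridiagonal; taking [F(\lambda) = \lambda^{2}], the eigenvalue average should approach [\int_{0}^{1} f(x)^{2} \;\hbox{d}x = 4.5]:

```python
import numpy as np

n = 500
# f(x) = 2 + cos(2 pi x):  c_0 = 2, c_{+/-1} = 1/2;  ess inf f = 1, ess sup f = 3
T = 2 * np.eye(n + 1) + 0.5 * (np.eye(n + 1, k=1) + np.eye(n + 1, k=-1))
lam = np.linalg.eigvalsh(T)

mean_F = np.mean(lam**2)   # (1/(n+1)) * sum of F(lambda_nu) with F(lambda) = lambda^2
exact = 4.5                # integral of f(x)^2 over [0, 1]
```

The extreme eigenvalues also approach m = 1 and M = 3, anticipating consequence (i) below.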

Further investigations into the spectra of Toeplitz matrices may be found in papers by Hartman & Wintner (1950[link], 1954[link]), Kac et al. (1953)[link], Widom (1965)[link], and in the notes by Hirschman & Hughes (1977)[link].

Consequences of Szegö's theorem

  • (i) If the λ's are ordered in ascending order, then [\lim\limits_{n \rightarrow \infty} \lambda_{1}^{(n)} = m = \hbox{ess inf}\; f, \quad \lim\limits_{n \rightarrow \infty} \lambda_{n + 1}^{(n)} = M = \hbox{ess sup}\; f.] Thus, when [f \geq 0], the condition number [\lambda_{n + 1}^{(n)} / \lambda_{1}^{(n)}] of [T_{n}[\;f]] tends towards the `essential dynamic range' [M/m] of f.

  • (ii) Let [F(\lambda) = \lambda^{s}] where s is a positive integer. Then [\lim\limits_{n \rightarrow \infty} {1 \over n + 1} \sum\limits_{\nu = 1}^{n + 1}\; [\lambda_{\nu}^{(n)}]^{s} = \int\limits_{0}^{1} [\;f(x)]^{s} \;\hbox{d}x.]

  • (iii) Let [m \gt 0], so that [\lambda_{\nu}^{(n)} \gt 0], and let [D_{n}(\;f) = \det T_{n}(\;f)]. Then [D_{n}(\;f) = \textstyle\prod\limits_{\nu = 1}^{n + 1} \lambda_{\nu}^{(n)},] hence [\log D_{n}(\;f) = {\textstyle\sum\limits_{\nu = 1}^{n + 1}} \log \lambda_{\nu}^{(n)}.]

    Putting [F(\lambda) = \log \lambda], it follows that [\lim\limits_{n \rightarrow \infty} [D_{n} (\;f)]^{1/(n + 1)} = \exp \left\{{\textstyle\int\limits_{0}^{1}} \log f(x) \;\hbox{d}x\right\}.]
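This limit can be checked on the same assumed example [f(x) = 2 + \cos 2 \pi x], for which the classical integral [\textstyle\int_{0}^{1} \log (2 + \cos 2 \pi x) \;\hbox{d}x = \log [(2 + \sqrt{3})/2]] is known in closed form (stated here without proof):

```python
import numpy as np

n = 400
# Toeplitz matrix of f(x) = 2 + cos(2 pi x): tridiagonal, c_0 = 2, c_{+/-1} = 1/2
T = 2 * np.eye(n + 1) + 0.5 * (np.eye(n + 1, k=1) + np.eye(n + 1, k=-1))
sign, logdet = np.linalg.slogdet(T)     # log D_n(f), computed stably
geo_mean = np.exp(logdet / (n + 1))     # D_n(f)^(1/(n+1))
limit = (2 + np.sqrt(3)) / 2            # exp( integral of log f(x) dx )
```

The geometric mean of the eigenvalues approaches the limit at a rate of order 1/n.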

Further terms in this limit were obtained by Szegö (1952)[link] and interpreted in probabilistic terms by Kac (1954)[link].

Convergence of Fourier series

The investigation of the convergence of Fourier series and of more general trigonometric series has been the subject of intense study for over 150 years [see e.g. Zygmund (1976)[link]]. It has been a constant source of new mathematical ideas and theories, being directly responsible for the birth of such fields as set theory, topology and functional analysis.

This section will briefly survey those aspects of the classical results in dimension 1 which are relevant to the practical use of Fourier series in crystallography. The books by Zygmund (1959)[link], Tolstov (1962)[link] and Katznelson (1968)[link] are standard references in the field, and Dym & McKean (1972)[link] is recommended as a stimulant.

Classical [L^{1}] theory

The space [L^{1} ({\bb R} / {\bb Z})] consists of (equivalence classes of) complex-valued functions f on the circle which are summable, i.e. for which [\|\;f \|_{1} \equiv {\textstyle\int\limits_{0}^{1}}\; | \;f(x) | \;\hbox{d}x \;\lt\; + \infty.] It is a convolution algebra: if f and g are in [L^{1}], then [f * g] is in [L^{1}].

The mth Fourier coefficient [c_{m} (\;f)] of f, [c_{m} (\;f) = {\textstyle\int\limits_{0}^{1}}\; f(x) \exp (-2 \pi imx) \;\hbox{d}x] is bounded: [|c_{m} (\;f)| \leq \|\;f \|_{1}], and by the Riemann–Lebesgue lemma [c_{m} (\;f) \rightarrow 0] as [m \rightarrow \infty]. By the convolution theorem, [c_{m} (\;f * g) = c_{m} (\;f) c_{m} (g)].

The pth partial sum [S_{p}(\;f)] of the Fourier series of f, [S_{p}(\;f) (x) = {\textstyle\sum\limits_{|m|\leq p}} c_{m} (\;f) \exp (2 \pi imx),] may be written, by virtue of the convolution theorem, as [S_{p}(\;f) = D_{p} * f], where [D_{p} (x) = {\sum\limits_{|m|\leq p}} \exp (2 \pi imx) = {\sin [(2p + 1) \pi x] \over \sin \pi x}] is the Dirichlet kernel. Because [D_{p}] comprises numerous slowly decaying oscillations, both positive and negative, [S_{p}(\;f)] may not converge towards f in a strong sense as [p \rightarrow \infty]. Indeed, spectacular pathologies are known to exist where the partial sums, examined pointwise, diverge everywhere (Zygmund, 1959[link], Chapter VIII). When f is piecewise continuous, but presents isolated jumps, convergence near these jumps is marred by the Gibbs phenomenon: [S_{p}(\;f)] always `overshoots the mark' by about 9%, the area under the spurious peak tending to 0 as [p \rightarrow \infty] but not its height [see Larmor (1934)[link] for the history of this phenomenon].
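The Gibbs overshoot is easy to reproduce numerically on an assumed 0/1 square wave (equal to 1 on (0, 1/2) and 0 on (1/2, 1)); the limiting overshoot is [\hbox{Si} (\pi)/\pi - 1/2 \approx 0.0895], i.e. about 9% of the unit jump:

```python
import numpy as np

def S_p(p, x):
    """Dirichlet partial sum of the square wave equal to 1 on (0, 1/2), 0 on (1/2, 1)."""
    s = np.full_like(x, 0.5)
    for k in range(1, p + 1, 2):                 # only odd harmonics are present
        s += (2.0 / (np.pi * k)) * np.sin(2 * np.pi * k * x)
    return s

x = np.linspace(0.0, 0.5, 20001)
overshoots = [S_p(p, x).max() - 1.0 for p in (25, 101, 401)]
```

Raising p narrows the spurious peak but does not reduce its height, exactly as described above.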

By contrast, the arithmetic mean of the partial sums, also called the pth Cesàro sum, [C_{p}(\;f) = {1 \over p + 1} [S_{0}(\;f) + \ldots + S_{p}(\;f)],] converges to f in the sense of the [L^{1}] norm: [\|C_{p}(\;f) - f\|_{1} \rightarrow 0] as [p \rightarrow \infty]. If furthermore f is continuous, then the convergence is uniform, i.e. the error is bounded everywhere by a quantity which goes to 0 as [p \rightarrow \infty]. It may be shown that [C_{p} (\;f) = F_{p} * f,] where [\eqalign{F_{p} (x) &= {\sum\limits_{|m| \leq p}} \left(1 - {|m| \over p + 1}\right) \exp (2 \pi imx) \cr &= {1 \over p + 1} \left[{\sin (p + 1) \pi x \over \sin \pi x}\right]^{2}}] is the Fejér kernel. [F_{p}] has over [D_{p}] the advantage of being everywhere positive, so that the Cesàro sums [C_{p} (\;f)] of a positive function f are always positive.
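The positivity claim can be checked on the same kind of assumed 0/1 square wave: since [0 \leq f \leq 1] and [F_{p} \geq 0] with unit integral, the Cesàro sums must stay within [0, 1], with no Gibbs overshoot:

```python
import numpy as np

def C_p(p, x):
    """Cesaro (Fejer) sum of the 0/1 square wave: Fourier coefficients are
    damped by the triangular weights 1 - |m|/(p + 1)."""
    s = np.full_like(x, 0.5)
    for k in range(1, p + 1, 2):                 # odd harmonics only
        s += (1 - k / (p + 1)) * (2.0 / (np.pi * k)) * np.sin(2 * np.pi * k * x)
    return s

x = np.linspace(0.0, 1.0, 4001)
vals = C_p(101, x)       # bounded between 0 and 1, unlike the Dirichlet partial sums
```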

The de la Vallée Poussin kernel [V_{p} (x) = 2 F_{2p + 1} (x) - F_{p} (x)] has a trapezoidal distribution of coefficients and is such that [c_{m} (V_{p}) = 1] if [|m| \leq p + 1]; therefore [V_{p} * f] is a trigonometric polynomial with the same Fourier coefficients as f over that range of values of m.

The Poisson kernel[\eqalign{P_{r} (x) &= 1 + 2 {\sum\limits_{m = 1}^{\infty}} r^{m} \cos 2 \pi mx \cr &= {1 - r^{2} \over 1 - 2r \cos 2 \pi x + r^{2}}}] with [0 \leq r \;\lt\; 1] gives rise to an Abel summation procedure [Tolstov (1962[link], p. 162); Whittaker & Watson (1927[link], p. 57)] since [(P_{r} * f) (x) = {\textstyle\sum\limits_{m \in {\bb Z}}} c_{m} (\;f) r^{|m|} \exp (2 \pi imx).] Compared with the other kernels, [P_{r}] has the disadvantage of not being a trigonometric polynomial; however, [P_{r}] is the real part of the Cauchy kernel (Cartan, 1961[link]; Ahlfors, 1966[link]): [P_{r} (x) = {\scr Re}\left[{1 + r \exp (2 \pi ix) \over 1 - r \exp (2 \pi ix)}\right]] and hence provides a link between trigonometric series and analytic functions of a complex variable.

Other methods of summation involve forming a moving average of f by convolution with other sequences of functions [\alpha_{p} ({\bf x})] besides [D_{p}] or [F_{p}] which `tend towards δ' as [p \rightarrow \infty]. The convolution is performed by multiplying the Fourier coefficients of f by those of [\alpha_{p}], so that one forms the quantities [S'_{p} (\;f) (x) = {\textstyle\sum\limits_{|m| \leq p}} c_{m} (\alpha_{p}) c_{m} (\;f) \exp (2 \pi imx).] For instance the `sigma factors' of Lanczos (Lanczos, 1966[link], p. 65), defined by [\sigma_{m} = {\sin [m \pi / p] \over m \pi /p},] lead to a summation procedure whose behaviour is intermediate between those using the Dirichlet and the Fejér kernels; it corresponds to forming a moving average of f by convolution with [\alpha_{p} = p\chi_{[-1/(2p), \, 1/(2p)]} * D_{p},] which is itself the convolution of a `rectangular pulse' of width [1/p] and of the Dirichlet kernel of order p.

A review of the summation problem in crystallography is given in Section[link].

Classical [L^{2}] theory

The space [L^{2}({\bb R}/{\bb Z})] of (equivalence classes of) square-integrable complex-valued functions f on the circle is contained in [L^{1}({\bb R}/{\bb Z})], since by the Cauchy–Schwarz inequality [\eqalign{\|\;f \|_{1}^{2} &= \left({\textstyle\int\limits_{0}^{1}} |\;f (x)| \times 1 \;\hbox{d}x\right)^{2} \cr &\leq \left({\textstyle\int\limits_{0}^{1}} |\;f (x)|^{2} \;\hbox{d}x\right) \left({\textstyle\int\limits_{0}^{1}} {1}^{2} \;\hbox{d}x\right) = \|\;f \|_{2}^{2} \;\lt\; + \infty.}] Thus all the results derived for [L^{1}] hold for [L^{2}], a great simplification over the situation in [{\bb R}] or [{\bb R}^{n}] where neither [L^{1}] nor [L^{2}] was contained in the other.

However, more can be proved in [L^{2}], because [L^{2}] is a Hilbert space (Section[link]) for the inner product [(\;f, g) = {\textstyle\int\limits_{0}^{1}}\; \overline{f (x)} g (x) \;\hbox{d}x,] and because the family of functions [\{\exp (2 \pi imx)\}_{m \in {\bb Z}}] constitutes an orthonormal Hilbert basis for [L^{2}].

The sequence of Fourier coefficients [c_{m} (\;f)] of [f \in L^{2}] belongs to the space [\ell^{2}({\bb Z})] of square-summable sequences: [{\textstyle\sum\limits_{m \in {\bb Z}}} |c_{m} (\;f)|^{2} \;\lt\; \infty.] Conversely, every element [c = (c_{m})] of [\ell^{2}] is the sequence of Fourier coefficients of a unique function in [L^{2}]. The inner product [(c, d) = {\textstyle\sum\limits_{m \in {\bb Z}}} \overline{c_{m}} d_{m}] makes [\ell^{2}] into a Hilbert space, and the map from [L^{2}] to [\ell^{2}] established by the Fourier transformation is an isometry (Parseval/Plancherel): [\|\;f \|_{L^{2}} = \| c (\;f) \|_{{\ell}^{2}}] or equivalently: [(\;f, g) = (c (\;f), c (g)).] This is a useful property in applications, since (f, g) may be calculated either from f and g themselves, or from their Fourier coefficients [c(\;f)] and [c(g)] (see Section[link] for crystallographic applications).
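The Parseval/Plancherel identity can be verified with the DFT on a sampled trigonometric polynomial (the particular f below is an arbitrary assumed example):

```python
import numpy as np

N = 256
x = np.arange(N) / N
f = 0.5 + 2 * np.exp(2j * np.pi * x) + (1 - 1j) * np.exp(-6j * np.pi * x)  # trig polynomial
c = np.fft.fft(f) / N                  # its Fourier coefficients (exact: f is band-limited)

norm_f_sq = np.mean(np.abs(f)**2)      # integral of |f|^2 over one period (exact here)
norm_c_sq = np.sum(np.abs(c)**2)       # squared l2 norm of the coefficient sequence
```

Both quantities equal [|0.5|^{2} + |2|^{2} + |1 - i|^{2} = 6.25], computed either from f itself or from its coefficients.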

By virtue of the orthogonality of the basis [\{\exp (2 \pi imx)\}_{m \in {\bb Z}}], the partial sum [S_{p} (\;f)] is the best mean-square fit to f in the linear subspace of [L^{2}] spanned by [\{\exp (2 \pi imx)\}_{|m| \leq p}], and hence (Bessel's inequality) [{\textstyle\sum\limits_{|m| \leq p}} |c_{m} (\;f)|^{2} = \|\;f \|_{2}^{2} - {\textstyle\sum\limits_{|m| \gt p}} |c_{m} (\;f)|^{2} \leq \|\;f \|_{2}^{2}.]

The viewpoint of distribution theory

The use of distributions enlarges considerably the range of behaviour which can be accommodated in a Fourier series, even in the case of general dimension n where classical theories meet with even more difficulties than in dimension 1.

Let [\{w_{m}\}_{m \in {\bb Z}}] be a sequence of complex numbers with [|w_{m}|] growing at most polynomially as [|m| \rightarrow \infty], say [|w_{m}| \leq C |m|^{K}]. Then the sequence [\{w_{m} / (2 \pi im)^{K + 2}\}_{m \in {\bb Z}}] is in [\ell^{2}] and even defines a continuous function [f \in L^{2}({\bb R}/{\bb Z})] and an associated tempered distribution [T_{f} \in {\scr D}\,'({\bb R}/{\bb Z})]. Differentiation of [T_{f}] [(K + 2)] times then yields a tempered distribution whose Fourier transform leads to the original sequence of coefficients. Conversely, by the structure theorem for distributions with compact support (Section[link]), the motif [T^{0}] of a [{\bb Z}]-periodic distribution is a derivative of finite order of a continuous function; hence its Fourier coefficients will grow at most polynomially with [|m|] as [|m| \rightarrow \infty].

Thus distribution theory allows the manipulation of Fourier series whose coefficients exhibit polynomial growth as their order goes to infinity, while those derived from functions had to tend to 0 by virtue of the Riemann–Lebesgue lemma. The distribution-theoretic approach to Fourier series holds even in the case of general dimension n, where classical theories meet with even more difficulties (see Ash, 1976[link]) than in dimension 1.

The discrete Fourier transformation

Shannon's sampling theorem and interpolation formula

Let [\varphi \in {\scr E} ({\bb R}^{n})] be such that [\Phi = {\scr F}[\varphi]] has compact support K. Let ϕ be sampled at the nodes of a lattice [\Lambda^{*}], yielding the lattice distribution [R^{*} \times \varphi]. The Fourier transform of this sampled version of ϕ is [{\scr F}[R^{*} \times \varphi] = | \det {\bf A}| (R * \Phi),] which is essentially Φ periodized by period lattice [\Lambda = (\Lambda^{*})^{*}], with period matrix A.

Let us assume that Λ is such that the translates of K by different period vectors of Λ are disjoint. Then we may recover Φ from [R * \Phi] by masking the contents of a `unit cell' [{\scr V}] of Λ (i.e. a fundamental domain for the action of Λ in [{\bb R}^{n}]) whose boundary does not meet K. If [\chi _{\scr V}] is the indicator function of [{\scr V}], then [\Phi = \chi_{\scr V}\times (R * \Phi).] Transforming both sides by [\bar{\scr F}] yields [\varphi = \bar{\scr F}\left[\chi_{\scr V}\times {1 \over |\det {\bf A}|} {\scr F}[R^{*} \times \varphi]\right],] i.e. [\varphi = \left({1 \over V} \bar{\scr F}[\chi_{\scr V}]\right) * (R^{*} \times \varphi)] since [|\det {\bf A}|] is the volume V of [{\scr V}].

This interpolation formula is traditionally credited to Shannon (1949)[link], although it was discovered much earlier by Whittaker (1915)[link]. It shows that ϕ may be recovered from its sample values on [\Lambda^{*}] (i.e. from [R^{*} \times \varphi]) provided [\Lambda^{*}] is sufficiently fine that no overlap (or `aliasing') occurs in the periodization of Φ by the dual lattice Λ. The interpolation kernel is the transform of the normalized indicator function of a unit cell of Λ containing the support K of Φ.
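A one-dimensional sketch, under assumed choices: [\varphi (x) = \hbox{sinc}^{2} (x)], whose transform is supported in [[-1, 1]], sampled at spacing T = 0.4 (the criterion here requires T < 1/2); truncating the infinite interpolation sum leaves only a small controlled error:

```python
import numpy as np

T = 0.4                                  # sampling interval; Shannon criterion: T < 1/2 here
n = np.arange(-2000, 2001)
samples = np.sinc(n * T)**2              # phi(x) = sinc(x)^2 has transform supported in [-1, 1]

def shannon(x):
    """Whittaker-Shannon interpolation: phi(x) = sum_n phi(nT) * sinc((x - nT)/T)."""
    return np.sum(samples * np.sinc((x - n * T) / T))

err = abs(shannon(0.137) - np.sinc(0.137)**2)   # only truncation error remains
```

Because 1/T = 2.5 exceeds the width 2 of Supp Φ, the periodized copies of the spectrum do not overlap and the reconstruction is exact up to truncation of the sum.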

If K is contained in a sphere of radius [1/\Delta] and if Λ and [\Lambda^{*}] are rectangular, the length of each basis vector of Λ must be greater than [2/\Delta], and thus the sampling interval must be smaller than [\Delta /2]. This requirement constitutes the Shannon sampling criterion.

Duality between subdivision and decimation of period lattices

Geometric description of sublattices

Let [\Lambda_{\bf A}] be a period lattice in [{\bb R}^{n}] with matrix A, and let [\Lambda_{\bf A}^{*}] be the lattice reciprocal to [\Lambda_{\bf A}], with period matrix [(A^{-1})^{T}]. Let [\Lambda_{\bf B}, {\bf B}, \Lambda_{\bf B}^{*}] be defined similarly, and let us suppose that [\Lambda_{\bf A}] is a sublattice of [\Lambda_{\bf B}], i.e. that [\Lambda_{\bf B} \supset \Lambda_{\bf A}] as a set.

The relation between [\Lambda_{\bf A}] and [\Lambda_{\bf B}] may be described in two different fashions: (i) multiplicatively, and (ii) additively.

  • (i) We may write [{\bf A} = {\bf BN}] for some non-singular matrix N with integer entries. N may be viewed as the period matrix of the coarser lattice [\Lambda_{\bf A}] with respect to the period basis of the finer lattice [\Lambda_{\bf B}]. It will be more convenient to write [{\bf A} = {\bf DB}], where [{\bf D} = {\bf BNB}^{-1}] is a rational matrix (with integer determinant since [\det {\bf D} = \det {\bf N}]) in terms of which the two lattices are related by [\Lambda_{\bf A} = {\bf D} \Lambda_{\bf B}.]

  • (ii) Call two vectors in [\Lambda_{\bf B}] congruent modulo [\Lambda_{\bf A}] if their difference lies in [\Lambda_{\bf A}]. Denote the set of congruence classes (or `cosets') by [\Lambda_{\bf B} / \Lambda_{\bf A}], and the number of these classes by [[\Lambda_{\bf B} : \Lambda_{\bf A}]]. The `coset decomposition' [\Lambda_{\bf B} = \bigcup_{{\boldell} \in \Lambda_{\bf B} / \Lambda_{\bf A}} ({\boldell} + \Lambda_{\bf A})] represents [\Lambda_{\bf B}] as the disjoint union of [[\Lambda_{\bf B} : \Lambda_{\bf A}]] translates of [\Lambda_{\bf A}]. [\Lambda_{\bf B} / \Lambda_{\bf A}] is a finite lattice with [[\Lambda_{\bf B} : \Lambda_{\bf A}]] elements, called the residual lattice of [\Lambda_{\bf B}] modulo [\Lambda_{\bf A}].
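The coset decomposition can be checked by brute force in two dimensions; the sublattice matrix N below is an arbitrary assumed example, and residues are computed by the lattice `Euclidean division' described next:

```python
import numpy as np
from itertools import product

N = np.array([[2, 1],
              [0, 3]])                   # Lambda_A = N Z^2 as a sublattice of Lambda_B = Z^2
Ninv = np.linalg.inv(N)

def residue(m):
    """Representative of m modulo N Z^2, obtained by lattice 'Euclidean division'."""
    m = np.asarray(m, dtype=float)
    q = np.floor(Ninv @ m + 1e-9)        # the 'quotient' lattice vector
    return tuple((m - N @ q).astype(int))

cosets = {residue(m) for m in product(range(-6, 7), repeat=2)}
index = round(abs(np.linalg.det(N)))     # [Lambda_B : Lambda_A] = |det N| = 6
```

Enumerating lattice points in a window large enough to hit every class produces exactly |det N| distinct residues.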

    The two descriptions are connected by the relation [[\Lambda_{\bf B} : \Lambda_{\bf A}] = \det {\bf D} = \det {\bf N}], which follows from a volume calculation. We may also combine (i)[link] and (ii)[link] into

  • [\displaylines{\quad({\rm iii})\hfill \Lambda_{\bf B} = \bigcup_{{\boldell} \in \Lambda_{\bf B} / \Lambda_{\bf A}} ({\boldell} + {\bf D} \Lambda_{\bf B})\hfill}] which may be viewed as the n-dimensional equivalent of the Euclidean algorithm for integer division: [\boldell] is the `remainder' of the division by [\Lambda_{\bf A}] of a vector in [\Lambda_{\bf B}], the quotient being the matrix D.

Sublattice relations for reciprocal lattices

Let us now consider the two reciprocal lattices [\Lambda_{\bf A}^{*}] and [\Lambda_{\bf B}^{*}]. Their period matrices [({\bf A}^{-1})^{T}] and [({\bf B}^{-1})^{T}] are related by: [({\bf B}^{-1})^{T} = ({\bf A}^{-1})^{T} {\bf N}^{T}], where [{\bf N}^{T}] is an integer matrix; or equivalently by [({\bf B}^{-1})^{T} = {\bf D}^{T} ({\bf A}^{-1})^{T}]. This shows that the roles are reversed in that [\Lambda_{\bf B}^{*}] is a sublattice of [\Lambda_{\bf A}^{*}], which we may write:

  • [\displaylines{\quad({\rm i})^*\hfill \Lambda_{\bf B}^{*} = {\bf D}^{T} \Lambda_{\bf A}^{*}\hfill}]

  • [\displaylines{\quad({\rm ii})^*\hfill\Lambda_{\bf A}^{*} = \bigcup_{{\boldell}^{*} \in \Lambda_{\bf A}^{*} / \Lambda_{\bf B}^{*}} ({\boldell}^{*} + \Lambda_{\bf B}^{*}).\hfill}] The residual lattice [\Lambda_{\bf A}^{*} / \Lambda_{\bf B}^{*}] is finite, with [[\Lambda_{\bf A}^{*}: \Lambda_{\bf B}^{*}] = \det {\bf D} = \det {\bf N} = [\Lambda_{\bf B}: \Lambda_{\bf A}]], and we may again combine [(\hbox{i})^{*}] [link] and [(\hbox{ii})^{*}] [link] into

  • [\displaylines{\quad({\rm iii})^*\hfill\Lambda_{\bf A}^{*} = \bigcup_{{\boldell}^{*} \in \Lambda_{\bf A}^{*} / \Lambda_{\bf B}^{*}} ({\boldell}^{*} + {\bf D}^{T} \Lambda_{\bf A}^{*}).\hfill}]

Relation between lattice distributions

The above relations between lattices may be rewritten in terms of the corresponding lattice distributions as follows: [\displaylines{\quad (\hbox{i}) \hfill R_{\bf A} = {1 \over |\det {\bf D}|} {\bf D}^{\#} R_{\bf B} \;\hfill\cr \quad (\hbox{ii}) \hfill R_{\bf B} = T_{{\bf B} / {\bf A}} * R_{\bf A}\qquad \hfill\cr \quad (\hbox{i})^{*} \hfill \;\;R_{\bf B}^{*} = {1 \over |\det {\bf D}|} ({\bf D}^{T})^{\#} R_{\bf A}^{*} \hfill\cr \quad (\hbox{ii})^{*} \hfill R_{\bf A}^{*} =T_{{\bf A} / {\bf B}}^{*} * R_{\bf B}^{*} \qquad\;\;\hfill}] where [T_{{\bf B} / {\bf A}} = {\textstyle\sum\limits_{{\boldell} \in \Lambda_{\bf B} / \Lambda_{\bf A}}} \delta_{({\boldell})}] and [T_{{\bf A}/{\bf B}}^{*} = {\textstyle\sum\limits_{{\boldell}^{*} \in \Lambda_{\bf A}^{*} / \Lambda_{\bf B}^{*}}} \delta_{({\boldell}^{*})}] are (finite) residual-lattice distributions. We may incorporate the factor [1/|\det {\bf D}|] in (i) and [(\hbox{i})^{*}] into these distributions and define [S_{{\bf B}/{\bf A}} = {1 \over |\det {\bf D}|} T_{{\bf B}/{\bf A}},\quad S_{{\bf A}/{\bf B}}^{*} = {1 \over |\det {\bf D}|} T_{{\bf A}/{\bf B}}^{*}.]

Since [|\det {\bf D}| = [\Lambda_{\bf B}: \Lambda_{\bf A}] = [\Lambda_{\bf A}^{*}: \Lambda_{\bf B}^{*}]], convolution with [S_{{\bf B}/{\bf A}}] and [S_{{\bf A}/{\bf B}}^{*}] has the effect of averaging the translates of a distribution under the elements (or `cosets') of the residual lattices [\Lambda_{\bf B}/\Lambda_{\bf A}] and [\Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}], respectively. This process will be called `coset averaging'. Eliminating [R_{\bf A}] and [R_{\bf B}] between (i) and (ii), and [R_{\bf A}^{*}] and [R_{\bf B}^{*}] between [(\hbox{i})^{*}] and [(\hbox{ii})^{*}], we may write: [\displaylines{\quad (\hbox{i}')\hfill \! R_{\bf A} = {\bf D}^{\#} (S_{{\bf B}/{\bf A}} * R_{\bf A})\;\;\;\hfill\cr \quad (\hbox{ii}')\hfill \! R_{\bf B} = S_{{\bf B}/{\bf A}} * ({\bf D}^{\#} R_{\bf B})\;\;\;\;\hfill\cr \quad (\hbox{i}')^{*}\hfill R_{\bf B}^{*} = ({\bf D}^{T})^{\#} (S_{{\bf A}/{\bf B}}^{*} * R_{\bf B}^{*}) \hfill\cr \quad (\hbox{ii}')^{*}\hfill R_{\bf A}^{*} = S_{{\bf A}/{\bf B}}^{*} * [({\bf D}^{T})^{\#} R_{\bf A}^{*}]. \;\hfill}] These identities show that period subdivision by convolution with [S_{{\bf B}/{\bf A}}] (respectively [S_{{\bf A}/{\bf B}}^{*}]) on the one hand, and period decimation by `dilation' by [{\bf D}^{\#}] on the other hand, are mutually inverse operations on [R_{\bf A}] and [R_{\bf B}] (respectively [R_{\bf A}^{*}] and [R_{\bf B}^{*}]).

Relation between Fourier transforms

Finally, let us consider the relations between the Fourier transforms of these lattice distributions. Recalling the basic relation of Section[link], [\eqalign{{\scr F}[R_{\bf A}] &= {1 \over |\det {\bf A}|} R_{\bf A}^{*}\cr &= {1 \over |\det {\bf DB}|} T_{{\bf A}/{\bf B}}^{*} * R_{\bf B}^{*} \quad \quad \quad \quad \quad \quad \hbox{by (ii)}^{*}\cr &= \left({1 \over |\det {\bf D}|} T_{{\bf A}/{\bf B}}^{*}\right) * \left({1 \over |\det {\bf B}|} R_{\bf B}^{*}\right)}] i.e. [\displaylines{\quad (\hbox{iv})\hfill {\scr F}[R_{\bf A}] = S_{{\bf A}/{\bf B}}^{*} * {\scr F}[R_{\bf B}]\hfill}] and similarly: [\displaylines{\quad (\hbox{v})\hfill {\scr F}[R_{\bf B}^{*}] = S_{{\bf B}/{\bf A}} * {\scr F}[R_{\bf A}^{*}].\hfill}]

Thus [R_{\bf A}] (respectively [R_{\bf B}^{*}]), a decimated version of [R_{\bf B}] (respectively [R_{\bf A}^{*}]), is transformed by [{\scr F}] into a subdivided version of [{\scr F}[R_{\bf B}]] (respectively [{\scr F}[R_{\bf A}^{*}]]).

The converse is also true: [\eqalign{{\scr F}[R_{\bf B}] &= {1 \over |\det {\bf B}|} R_{\bf B}^{*}\cr &= {1 \over |\det {\bf B}|} {1 \over |\det {\bf D}|} ({\bf D}^{T})^{\#} R_{\bf A}^{*}\quad \quad \quad \quad \hbox{by (i)}^{*}\cr &= ({\bf D}^{T})^{\#} \left({1 \over |\det {\bf A}|} R_{\bf A}^{*}\right)}] i.e. [\displaylines{\quad (\hbox{iv}')\hfill {\scr F}[R_{\bf B}] = ({\bf D}^{T})^{\#} {\scr F}[R_{\bf A}]\hfill}] and similarly [\displaylines{\quad (\hbox{v}')\hfill {\scr F}[R_{\bf A}^{*}] = {\bf D}^{\#} {\scr F}[R_{\bf B}^{*}].\hfill}]

Thus [R_{\bf B}] (respectively [R_{\bf A}^{*}]), a subdivided version of [R_{\bf A}] (respectively [R_{\bf B}^{*}]) is transformed by [{\scr F}] into a decimated version of [{\scr F}[R_{\bf A}]] (respectively [{\scr F}[R_{\bf B}^{*}]]). Therefore, the Fourier transform exchanges subdivision and decimation of period lattices for lattice distributions.

Further insight into this phenomenon is provided by applying [\bar{\scr F}] to both sides of (iv) and (v) and invoking the convolution theorem: [\displaylines{\quad (\hbox{iv}'')\hfill \!\! R_{\bf A} = \bar{\scr F}[S_{{\bf A}/{\bf B}}^{*}] \times R_{\bf B} \;\hfill\cr \quad (\hbox{v}'')\hfill R_{\bf B}^{*} = \bar{\scr F}[S_{{\bf B}/{\bf A}}] \times R_{\bf A}^{*}. \hfill}] These identities show that multiplication by the transform of the period-subdividing distribution [S_{{\bf A}/{\bf B}}^{*}] (respectively [S_{{\bf B}/{\bf A}}]) has the effect of decimating [R_{\bf B}] to [R_{\bf A}] (respectively [R_{\bf A}^{*}] to [R_{\bf B}^{*}]). They clearly imply that, if [\boldell \in \Lambda_{\bf B}/\Lambda_{\bf A}] and [\boldell^{*} \in \Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}], then [\eqalign{\bar{\scr F}[S_{{\bf A}/{\bf B}}^{*}] ({\boldell}) &= 1 \hbox{ if } {\boldell} = {\bf 0} \;\;\quad (i.e. \hbox{ if } {\boldell} \hbox{ belongs}\cr &{\hbox to 66pt{}}\hbox{to the class of } \Lambda_{\bf A}),\cr &= 0 \hbox{ if } {\boldell} \neq {\bf 0}\hbox{;}\cr \bar{\scr F}[S_{{\bf B}/{\bf A}}] ({\boldell}^{*}) &= 1 \hbox{ if } {\boldell}^{*} = {\bf 0} \quad (i.e. \hbox{ if } {\boldell}^{*} \hbox{ belongs}\cr &{\hbox to 60pt{}} \hbox{ to the class of } \Lambda_{\bf B}^{*}),\cr &= 0 \hbox{ if } {\boldell}^{*} \neq {\bf 0}.}] Therefore, the duality between subdivision and decimation may be viewed as another aspect of that between convolution and multiplication.
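A discrete one-dimensional illustration with finite sequences: decimating an assumed sequence [x \in {\bb C}^{12}] by D = 2 corresponds, on the transform side, to coset-averaging (`folding') its DFT, under NumPy's FFT conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(12) + 1j * rng.standard_normal(12)
X = np.fft.fft(x)

y = x[::2]                        # decimation: restrict to the sublattice 2Z / 12Z
Y = np.fft.fft(y)
folded = 0.5 * (X[:6] + X[6:])    # subdivision side: average over the two cosets
```

The DFT of the decimated sequence equals the coset average of the original DFT, mirroring relation (iv).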

There is clearly a strong analogy between the sampling/periodization duality of Section[link] and the decimation/subdivision duality, which is viewed most naturally in terms of subgroup relationships: both sampling and decimation involve restricting a function to a discrete additive subgroup of the domain over which it is initially given.

Sublattice relations in terms of periodic distributions

The usual presentation of this duality is not in terms of lattice distributions, but of periodic distributions obtained by convolving them with a motif.

Given [T^{0} \in {\scr E}\,' ({\bb R}^{n})], let us form [R_{\bf A} * T^{0}], then decimate its transform [(1/|\det {\bf A}|) R_{\bf A}^{*} \times \bar{\scr F}[T^{0}]] by keeping only its values at the points of the coarser lattice [\Lambda_{\bf B}^{*} = {\bf D}^{T} \Lambda_{\bf A}^{*}]; as a result, [R_{\bf A}^{*}] is replaced by [(1/|\det {\bf D}|) R_{\bf B}^{*}], and the reverse transform then yields [\displaylines{\hfill{1 \over |\det {\bf D}|} R_{\bf B} * T^{0} = S_{{\bf B}/{\bf A}} * (R_{\bf A} * T^{0})\hfill \hbox{by (ii)},}] which is the coset-averaged version of the original [R_{\bf A} * T^{0}]. The converse situation is analogous to that of Shannon's sampling theorem. Let a function [\varphi \in {\scr E}({\bb R}^{n})] whose transform [\Phi = {\scr F}[\varphi]] has compact support be sampled as [R_{\bf B} \times \varphi] at the nodes of [\Lambda_{\bf B}]. Then [{\scr F}[R_{\bf B} \times \varphi] = {1 \over |\det {\bf B}|} (R_{\bf B}^{*} * \Phi)] is periodic with period lattice [\Lambda_{\bf B}^{*}]. If the sampling lattice [\Lambda_{\bf B}] is decimated to [\Lambda_{\bf A} = {\bf D} \Lambda_{\bf B}], the transform becomes [\eqalign{{\hbox to 48pt{}}{\scr F}[R_{\bf A} \times \varphi] &= {1 \over |\det {\bf A}|} (R_{\bf A}^{*} * \Phi)\cr &= S_{{\bf A}/{\bf B}}^{*} * \left[{1 \over |\det {\bf B}|} (R_{\bf B}^{*} * \Phi)\right]{\hbox to 30pt{}}\hbox{by (ii)}^{*},}] hence becomes periodized more finely by averaging over the cosets of [\Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}]. With this finer periodization, the various copies of Supp Φ may start to overlap (a phenomenon called `aliasing'), indicating that decimation has produced too coarse a sampling of ϕ.

Discretization of the Fourier transformation

Let [\varphi^{0} \in {\scr E}({\bb R}^{n})] be such that [\Phi^{0} = {\scr F}[\varphi^{0}]] has compact support ([\varphi^{0}] is said to be band-limited). Then [\varphi = R_{\bf A} * \varphi^{0}] is [\Lambda_{\bf A}]-periodic, and [\Phi = {\scr F}[\varphi] = (1/|\det {\bf A}|) R_{\bf A}^{*} \times \Phi^{0}] is such that only a finite number of points [\lambda_{\bf A}^{*}] of [\Lambda_{\bf A}^{*}] have a non-zero Fourier coefficient [\Phi^{0} (\lambda_{\bf A}^{*})] attached to them. We may therefore find a decimation [\Lambda_{\bf B}^{*} = {\bf D}^{T} \Lambda_{\bf A}^{*}] of [\Lambda_{\bf A}^{*}] such that the distinct translates of Supp [\Phi^{0}] by vectors of [\Lambda_{\bf B}^{*}] do not intersect.

The distribution Φ can be uniquely recovered from [R_{\bf B}^{*} * \Phi] by the procedure of Section[link], and we may write: [\eqalign{R_{\bf B}^{*} * \Phi &= {1 \over |\det {\bf A}|} R_{\bf B}^{*} * (R_{\bf A}^{*} \times \Phi^{0})\cr &= {1 \over |\det {\bf A}|} R_{\bf A}^{*} \times (R_{\bf B}^{*} * \Phi^{0})\cr &= {1 \over |\det {\bf A}|} R_{\bf B}^{*} * [T_{{\bf A}/{\bf B}}^{*} \times (R_{\bf B}^{*} * \Phi^{0})]\hbox{;}}] these rearrangements being legitimate because [\Phi^{0}] and [T_{{\bf A}/{\bf B}}^{*}] have compact supports which are intersection-free under the action of [\Lambda_{\bf B}^{*}]. By virtue of its [\Lambda_{\bf B}^{*}]-periodicity, this distribution is entirely characterized by its `motif' [\tilde{\Phi}] with respect to [\Lambda_{\bf B}^{*}]: [\tilde{\Phi} = {1 \over |\det {\bf A}|} T_{{\bf A}/{\bf B}}^{*} \times (R_{\bf B}^{*} * \Phi^{0}).]

Similarly, ϕ may be uniquely recovered by Shannon interpolation from the distribution sampling its values at the nodes of [\Lambda_{\bf B} = {\bf D}^{-1} \Lambda_{\bf A}] ([\Lambda_{\bf B}] is a subdivision of [\Lambda_{\bf A}]). By virtue of its [\Lambda_{\bf A}]-periodicity, this distribution is completely characterized by its motif: [\tilde{\varphi} = T_{{\bf B}/{\bf A}} \times \varphi = T_{{\bf B}/{\bf A}} \times (R_{\bf A} * \varphi^{0}).]

Let [{\boldell} \in \Lambda_{\bf B}/\Lambda_{\bf A}] and [{\boldell}^{*} \in \Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}], and define the two sets of coefficients [\!\!\matrix{(1)& \tilde{\varphi} ({\boldell}) \hfill&= \varphi ({\boldell} + \boldlambda_{\bf A})\hfill&\hbox{for any } \boldlambda_{\bf A} \in \Lambda_{\bf A}\hfill&\cr &&&(\hbox{all choices of } \boldlambda_{\bf A} \hbox{ give the same } \tilde{\varphi}),\hfill&\cr (2)&\tilde{\Phi} ({\boldell}^{*}) \hfill&= \Phi^{0} ({\boldell}^{*} + \boldlambda_{\bf B}^{*})\hfill &\hbox{for the unique } \boldlambda_{\bf B}^{*} \hbox{ (if it exists)}\hfill&\cr &&&\hbox{such that } {\boldell}^{*} + \boldlambda_{\bf B}^{*} \in \hbox{Supp } \Phi^{0},\hfill&\cr &&= 0\hfill&\hbox{if no such } \boldlambda_{\bf B}^{*} \hbox{ exists}.\hfill}] Define the two distributions [\omega = {\textstyle\sum\limits_{{\boldell} \in \Lambda_{\bf B}/\Lambda_{\bf A}}} \tilde{\varphi} ({\boldell}) \delta_{({\boldell})}] and [\Omega = {\textstyle\sum\limits_{{\boldell}^{*} \in \Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}}} \tilde{\Phi} ({\boldell}^{*}) \delta_{({\boldell}^{*})}.] The relation between ω and Ω has two equivalent forms: [\displaylines{\quad (\hbox{i})\hfill \quad R_{\bf A} * \omega = {\scr F}[R_{\bf B}^{*} * \Omega] \hfill\cr \quad (\hbox{ii})\hfill \bar{\scr F}[R_{\bf A} * \omega] = R_{\bf B}^{*} * \Omega.\quad\;\;\;\hfill}]

By (i), [R_{\bf A} * \omega = |\det {\bf B}| R_{\bf B} \times {\scr F}[\Omega]]. Both sides are weighted lattice distributions concentrated at the nodes of [\Lambda_{\bf B}], and equating the weights at [\boldlambda_{\bf B} = \boldell + \boldlambda_{\bf A}] gives [\tilde{\varphi} ({\boldell}) = {1 \over |\det {\bf D}|} {\sum\limits_{{\boldell}^{*} \in \Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}}} \tilde{\Phi} ({\boldell}^{*}) \exp [-2\pi i {\boldell}^{*} \cdot ({\boldell} + \boldlambda_{\bf A})].] Since [\boldell^{*} \in \Lambda_{\bf A}^{*}], [\boldell^{*} \cdot \boldlambda_{\bf A}] is an integer, hence [\tilde{\varphi} ({\boldell}) = {1 \over |\det {\bf D}|} {\sum\limits_{{\boldell}^{*} \in \Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}}} \tilde{\Phi} ({\boldell}^{*}) \exp (-2\pi i {\boldell}^{*} \cdot {\boldell}).]

By (ii), we have [{1 \over |\det {\bf A}|} R_{\bf B}^{*} * [T_{{\bf A}/{\bf B}}^{*} \times (R_{\bf B}^{*} * \Phi^{0})] = {1 \over |\det {\bf A}|} \bar{\scr F}[R_{\bf A} * \omega].] Both sides are weighted lattice distributions concentrated at the nodes of [\Lambda_{\bf B}^{*}], and equating the weights at [{\boldlambda}_{\bf A}^{*} = \boldell^{*} + {\boldlambda}_{\bf B}^{*}] gives [\tilde{\Phi} ({\boldell}^{*}) = {\textstyle\sum\limits_{{\boldell} \in \Lambda_{\bf B}/\Lambda_{\bf A}}} \tilde{\varphi} ({\boldell}) \exp [+2\pi i {\boldell} \cdot ({\boldell}^{*} + {\boldlambda}_{\bf B}^{*})].] Since [\boldell \in \Lambda_{\bf B}], [\boldell \cdot {\boldlambda}^{*}_{\bf B}] is an integer, hence [\tilde{\Phi} ({\boldell}^{*}) = {\textstyle\sum\limits_{{\boldell} \in \Lambda_{\bf B}/\Lambda_{\bf A}}} \tilde{\varphi} ({\boldell}) \exp (+2\pi i {\boldell} \cdot {\boldell}^{*}).]

Now the decimation/subdivision relations between [\Lambda_{\bf A}] and [\Lambda_{\bf B}] may be written: [{\bf A} = {\bf DB} = {\bf BN},] so that [\eqalign{{\boldell} &= {\bf B}{\bf \scr k}\qquad\qquad\hbox{for } {\bf \scr k}\in {\bb Z}^{n}\cr {\boldell}^{*} &= ({\bf A}^{-1})^{T} {\scr k}^{*}\quad \hbox{ for } {\bf \scr k}^{*} \in {\bb Z}^{n}}] with [({\bf A}^{-1})^{T} = ({\bf B}^{-1})^{T} ({\bf N}^{-1})^{T}], hence finally [{\boldell}^{*} \cdot {\boldell} = {\boldell} \cdot {\boldell}^{*} = {\scr k}^{*} \cdot ({\bf N}^{-1} {\bf \scr k}).]
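The index identity above can be checked exactly with rational arithmetic. In the sketch below the particular matrices are illustrative assumptions: B is taken to be the identity (so A = BN = N) and N is a small non-diagonal integer matrix.

```python
from fractions import Fraction as Fr

# Exact rational check of the identity l* . l = k* . (N^{-1} k).
# Assumed example: B = I (so A = BN = N) and N = [[2, 1], [0, 3]], det N = 6.
N = [[Fr(2), Fr(1)], [Fr(0), Fr(3)]]
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
Ninv = [[ N[1][1] / det, -N[0][1] / det],
        [-N[1][0] / det,  N[0][0] / det]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

checked = 0
for k in [(1, 0), (2, 5), (-1, 3)]:
    for ks in [(1, 1), (0, 4), (3, -2)]:
        # With B = I: l = k, and l* = (A^{-1})^T k* = (N^{-1})^T k*.
        l = (Fr(k[0]), Fr(k[1]))
        ls = (Ninv[0][0] * ks[0] + Ninv[1][0] * ks[1],
              Ninv[0][1] * ks[0] + Ninv[1][1] * ks[1])
        Ninv_k = (dot(Ninv[0], l), dot(Ninv[1], l))   # N^{-1} k
        assert dot(ls, l) == dot((Fr(ks[0]), Fr(ks[1])), Ninv_k)
        checked += 1
assert checked == 9
```

The equality holds identically because l*·l = k*ᵀ(A⁻¹B)k = k*·(N⁻¹k), independently of the particular k and k* chosen.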

Denoting [\tilde{\varphi} ({\bf B{\scr k}})] by [\psi ({\scr k})] and [\tilde{\Phi}[({\bf A}^{-1})^{T} {\scr k}^{*}]] by [\Psi ({\scr k}^{*})], the relation between ω and Ω may be written in the equivalent form [\displaylines{(\hbox{i})\quad\hfill \psi ({\bf \scr k}) = {1 \over |\det {\bf N}|} {\sum\limits_{{\bf \scr k}^{*} \in {\bb Z}^{n}/{\bf N}^{T}{\bb Z}^{n}}} \Psi ({\bf \scr k}^{*}) \exp [-2 \pi i {\bf \scr k}^{*} \cdot ({\bf N}^{-1} {\bf \scr k})] \hfill\cr (\hbox{ii})\hfill \Psi ({\bf \scr k}^{*}) = {\sum\limits_{{\scr k}\in {\bb Z}^{n}/{\bf N}{\bb Z}^{n}}} \psi ({\bf \scr k}) \exp [+2 \pi i {\bf \scr k}^{*} \cdot ({\bf N}^{-1} {\bf \scr k})], \quad\;\qquad\hfill}] where the summations are now over finite residual lattices in standard form.
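The finite summations (i) and (ii) can be implemented directly. The sketch below assumes, for simplicity of enumerating the residual lattice, that N is diagonal, N = diag(2, 3); the sample values are likewise assumptions for the example.

```python
import cmath
from itertools import product

# Minimal sketch of the DFT pair (i)/(ii) for an assumed diagonal
# N = diag(2, 3): Z^n/NZ^n is then the product Z/2Z x Z/3Z, and
# k* . (N^{-1} k) = sum_j k*_j k_j / nu_j.
NU = (2, 3)
DET = NU[0] * NU[1]                                   # |det N| = 6
GRID = list(product(range(NU[0]), range(NU[1])))      # coset representatives

def phase(ks, k):
    return sum(ks[j] * k[j] / NU[j] for j in range(2))

def F_N(Psi):      # (i): psi(k) = (1/|det N|) sum_{k*} Psi(k*) e^{-2 pi i k*.(N^-1 k)}
    return {k: sum(Psi[ks] * cmath.exp(-2j * cmath.pi * phase(ks, k))
                   for ks in GRID) / DET
            for k in GRID}

def Fbar_N(psi):   # (ii): Psi(k*) = sum_k psi(k) e^{+2 pi i k*.(N^-1 k)}
    return {ks: sum(psi[k] * cmath.exp(+2j * cmath.pi * phase(ks, k))
                    for k in GRID)
            for ks in GRID}

# The two maps are mutually inverse: a round trip restores the data.
psi = {k: complex(k[0] + 2 * k[1] + 1) for k in GRID}
back = F_N(Fbar_N(psi))
assert all(abs(back[k] - psi[k]) < 1e-9 for k in GRID)
```

The round trip succeeds because the exponential sums over a full set of coset representatives reduce to |det N| times a Kronecker delta.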

Equations (i) and (ii) describe two mutually inverse linear transformations [{\scr F}({\bf N})] and [\bar{\scr F}({\bf N})] between two vector spaces [W_{\bf N}] and [W_{\bf N}^{*}] of dimension [|\det {\bf N}|]. [{\scr F}({\bf N})] [respectively [\bar{\scr F}({\bf N})]] is the discrete Fourier (respectively inverse Fourier) transform associated to matrix N.

The vector spaces [W_{\bf N}] and [W_{\bf N}^{*}] may be viewed from two different standpoints:

  • (1) as vector spaces of weighted residual-lattice distributions, of the form [\alpha ({\bf x}) T_{{\bf B}/{\bf A}}] and [\beta ({\bf x}) T_{{\bf A}/{\bf B}}^{*}]; the canonical basis of [W_{\bf N}] (respectively [W_{\bf N}^{*}]) then consists of the [\delta_{({\scr k})}] for [{\scr k}\in {\bb Z}^{n}/{\bf N}{\bb Z}^{n}] [respectively [\delta_{({\scr k}^{*})}] for [{\scr k}^{*} \in {\bb Z}^{n}/{\bf N}^{T} {\bb Z}^{n}]];

  • (2) as vector spaces of weight vectors for the [|\det {\bf N}|\ \delta]-functions involved in the expression for [T_{{\bf B}/{\bf A}}] (respectively [T_{{\bf A}/{\bf B}}^{*}]); the canonical basis of [W_{\bf N}] (respectively [W_{\bf N}^{*}]) consists of weight vectors [{\bf u}_{{\scr k}}] (respectively [{\bf v}_{{\scr k}^{*}}]) giving weight 1 to element [{\scr k}] (respectively [{\scr k}^{*}]) and 0 to the others.

These two spaces are said to be `isomorphic' (a relation denoted ≅), the isomorphism being given by the one-to-one correspondence: [\eqalign{\omega &= {\textstyle\sum\limits_{{\bf \scr k}}} \psi ({\bf \scr k}) \delta_{({\bf \scr k})} \qquad \leftrightarrow \quad \psi = {\textstyle\sum\limits_{{\bf \scr k}}} \psi ({\scr k}) {\bf u}_{{\bf \scr k}}\cr \Omega &= {\textstyle\sum\limits_{{\bf \scr k}^{*}}} \Psi ({\bf \scr k}^{*}) \delta_{({\bf \scr k}^{*})} \quad\; \leftrightarrow \quad \Psi = {\textstyle\sum\limits_{{\bf \scr k}^{*}}} \Psi ({\bf \scr k}^{*}) {\bf v}_{{\bf \scr k}^{*}}.}]

The second viewpoint will be adopted, as it involves only linear algebra. However, it is most helpful to keep the first one in mind and to think of the data or results of a discrete Fourier transform as representing (through their sets of unique weights) two periodic lattice distributions related by the full, distribution-theoretic Fourier transform.

We therefore view [W_{\bf N}] (respectively [W_{\bf N}^{*}]) as the vector space of complex-valued functions over the finite residual lattice [\Lambda_{\bf B}/\Lambda_{\bf A}] (respectively [\Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}]) and write: [\eqalign{W_{\bf N} &\cong L(\Lambda_{\bf B}/\Lambda_{\bf A}) \cong L({\bb Z}^{n}/{\bf N}{\bb Z}^{n}) \cr W_{\bf N}^{*} &\cong L(\Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}) \cong L({\bb Z}^{n}/{\bf N}^{T} {\bb Z}^{n})}] since a vector such as ψ is in fact the function [{\scr k} \;\longmapsto\; \psi ({\scr k})].

The two spaces [W_{\bf N}] and [W_{\bf N}^{*}] may be equipped with the following Hermitian inner products: [\eqalign{(\varphi, \psi)_{W} &= {\textstyle\sum\limits_{{\bf \scr k}}} \overline{\varphi ({\bf \scr k})} \psi ({\bf \scr k}) \cr (\Phi, \Psi)_{W^{*}} &= {\textstyle\sum\limits_{{\bf \scr k}^{*}}} \overline{\Phi ({\bf \scr k}^{*})} \Psi ({\bf \scr k}^{*}),}] which makes each of them into a Hilbert space. The canonical bases [\{{\bf u}_{{\scr k}} | {\scr k}\in {\bb Z}^{n}/{\bf N} {\bb Z}^{n}\}] and [\{{\bf v}_{{\scr k}^{*}} | {\scr k}^{*} \in {\bb Z}^{n}/{\bf N}^{T} {\bb Z}^{n}\}] of [W_{\bf N}] and [W_{\bf N}^{*}] are orthonormal for their respective products.

Matrix representation of the discrete Fourier transform (DFT)


By virtue of definitions (i) and (ii), [\eqalign{{\scr F}({\bf N}) {\bf v}_{{\bf \scr k}^{*}} &= {1 \over |\det {\bf N}|} {\sum\limits_{{\bf \scr k}}} \exp [-2 \pi i {\bf \scr k}^{*} \cdot ({\bf N}^{-1} {\bf \scr k})] {\bf u}_{{\bf \scr k}} \cr \bar{\scr F}({\bf N}) {\bf u}_{{\bf \scr k}} &= {\sum\limits_{{\bf \scr k}^{*}}} \exp [+2 \pi i {\bf \scr k}^{*} \cdot ({\bf N}^{-1} {\bf \scr k})] {\bf v}_{{\bf \scr k}^{*}}}] so that [{\scr F}({\bf N})] and [\bar{\scr F}({\bf N})] may be represented, in the canonical bases of [W_{\bf N}] and [W_{\bf N}^{*}], by the following matrices: [\eqalign{[{\scr F}({\bf N})]_{{\bf {\bf \scr k}{\bf \scr k}}^{*}} &= {1 \over |\det {\bf N}|} \exp [-2 \pi i {\bf \scr k}^{*} \cdot ({\bf N}^{-1} {\bf \scr k})] \cr [\bar{\scr F}({\bf N})]_{{\bf \scr k}^{*} {\bf \scr k}} &= \exp [+2 \pi i {\bf \scr k}^{*} \cdot ({\bf N}^{-1} {\bf \scr k})].}]
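These matrices are easy to tabulate explicitly. The sketch below takes the one-dimensional case n = 1, N = (ν) with ν = 4 (an assumed example size), and checks that the two matrices are mutually inverse.

```python
import cmath

# Canonical-basis matrices in one dimension (assumed example nu = 4):
# [F(N)]_{k,k*}    = (1/nu) exp(-2 pi i k* k / nu)
# [Fbar(N)]_{k*,k} =        exp(+2 pi i k* k / nu)
nu = 4
F    = [[cmath.exp(-2j * cmath.pi * ks * k / nu) / nu for ks in range(nu)]
        for k in range(nu)]
Fbar = [[cmath.exp(+2j * cmath.pi * ks * k / nu) for k in range(nu)]
        for ks in range(nu)]

# F(N) Fbar(N) should be the identity matrix on W_N.
prod = [[sum(F[k][ks] * Fbar[ks][kp] for ks in range(nu)) for kp in range(nu)]
        for k in range(nu)]
assert all(abs(prod[k][kp] - (1 if k == kp else 0)) < 1e-9
           for k in range(nu) for kp in range(nu))
```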

When N is symmetric, [{\bb Z}^{n}/{\bf N} {\bb Z}^{n}] and [{\bb Z}^{n}/{\bf N}^{T} {\bb Z}^{n}] may be identified in a natural manner, and the above matrices are symmetric.

When N is diagonal, say [{\bf N} = \hbox{diag} (\nu_{1}, \nu_{2}, \ldots, \nu_{n})], then the tensor product structure of the full multidimensional Fourier transform (Section[link]) [{\scr F}_{\bf x} = {\scr F}_{x_{1}} \otimes {\scr F}_{x_{2}} \otimes \ldots \otimes {\scr F}_{x_{n}}] gives rise to a tensor product structure for the DFT matrices. The tensor product of matrices is defined as follows: [{\bf A} \otimes {\bf B} = \pmatrix{a_{11} {\bf B} &\ldots &a_{1n} {\bf B}\cr \vdots & &\vdots\cr a_{n1} {\bf B} &\ldots &a_{nn} {\bf B}\cr}.] Let the index vectors [{\scr k}] and [{\scr k}^{*}] be ordered in the same way as the elements in a Fortran array, e.g. for [{\scr k}] with [{\scr k}_{1}] increasing fastest, [{\scr k}_{2}] next fastest, [\ldots, {\scr k}_{n}] slowest; then [{\scr F}({\bf N}) = {\scr F}(\nu_{1}) \otimes {\scr F}(\nu_{2}) \otimes \ldots \otimes {\scr F}(\nu_{n}),] where [[{\scr F}(\nu_{j})]_{{\scr k}_{j}, \, {\scr k}_{j}^{*}} = {1 \over \nu_{j}} \exp \left(-2 \pi i {{\scr k}_{j}^{*} {\scr k}_{j} \over \nu_{j}}\right),] and [\bar{\scr F}({\bf N}) = \bar{\scr F}(\nu_{1}) \otimes \bar{\scr F}(\nu_{2}) \otimes \ldots \otimes \bar{\scr F}(\nu_{n}),] where [[\bar{\scr F}(\nu_{j})]_{{\scr k}_{j}^{*}, \, {\scr k}_{j}} = \exp \left(+2 \pi i {{\scr k}_{j}^{*} {\scr k}_{j} \over \nu_{j}}\right).]

Properties of the discrete Fourier transform


The DFT inherits most of the properties of the Fourier transforms, but with certain numerical factors (`Jacobians') due to the transition from continuous to discrete measure.

  • (1) Linearity is obvious.

  • (2) Shift property. If [(\tau_{{\bf {\scr a}}} \psi) ({\scr k}) = \psi ({\scr k} - {\bf {\scr a}})] and [(\tau_{{\bf {\scr a}}^{*}} \Psi) ({\scr k}^{*}) =] [\Psi ({\scr k}^{*} - {\bf {\scr a}}^{*})], where subtraction takes place by modular vector arithmetic in [{\bb Z}^{n}/{\bf N} {\bb Z}^{n}] and [{\bb Z}^{n}/{\bf N}^{T}{\bb Z}^{n}], respectively, then the following identities hold: [\eqalign{\bar{\scr F}({\bf N}) [\tau_{{\bf \scr a}} \psi] ({\bf \scr k}^{*}) &= \exp [+ 2 \pi i{\bf \scr k}^{*} \cdot ({\bf N}^{-1} {\bf \scr a})] \bar{\scr F}({\bf N})[\psi]({\bf \scr k}^{*}) \cr {\scr F}({\bf N})[\tau_{{\bf \scr a}^{*}} \Psi]({\bf \scr k}) &= \exp [- 2 \pi i{\bf \scr a}^{*} \cdot ({\bf N}^{-1} {\bf \scr k})] {\scr F}({\bf N})[\Psi]({\bf \scr k}).}]
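The shift property can be checked numerically in one dimension. In the sketch below, ν = 5, the shift a = 2 and the sample values are all assumptions for the example.

```python
import cmath

# One-dimensional check of the shift property: shifting psi by a
# multiplies Fbar(N)[psi](k*) by exp(+2 pi i k* a / nu).
# nu, a and the data are assumed example values.
nu, a = 5, 2
psi = [complex(j * j - 3 * j + 1) for j in range(nu)]
tau_psi = [psi[(k - a) % nu] for k in range(nu)]      # modular shift

def Fbar(v):
    return [sum(v[k] * cmath.exp(2j * cmath.pi * ks * k / nu) for k in range(nu))
            for ks in range(nu)]

lhs = Fbar(tau_psi)
rhs = [cmath.exp(2j * cmath.pi * ks * a / nu) * x
       for ks, x in enumerate(Fbar(psi))]
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```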

  • (3) Differentiation identities. Let vectors ψ and Ψ be constructed from [\varphi^{0} \in {\scr E}({\bb R}^{n})] as in Section[link], hence be related by the DFT. If [D^{{\bf p}} \boldpsi] designates the vector of sample values of [D_{\bf x}^{{\bf p}} \varphi^{0}] at the points of [\Lambda_{\bf B}/\Lambda_{\bf A}], and [D^{{\bf p}} \boldPsi] the vector of values of [D_{\boldxi}^{{\bf p}} \Phi^{0}] at points of [\Lambda_{\bf A}^{*}/\Lambda_{\bf B}^{*}], then for all multi-indices [{\bf p} = (p_{1}, p_{2}, \ldots, p_{n})] [\eqalign{(D^{{\bf p}} \boldpsi) ({\bf \scr k}) &= {\scr F}({\bf N}) [(+ 2 \pi i{\bf \scr k}^{*})^{{\bf p}} \boldPsi] ({\bf \scr k}) \cr (D^{{\bf p}} \boldPsi) ({\bf \scr k}^{*}) &= \bar{\scr F}({\bf N}) [(- 2 \pi i{\bf \scr k})^{{\bf p}} \boldpsi] ({\bf \scr k}^{*})}] or equivalently [\eqalign{\bar{\scr F}({\bf N}) [D^{{\bf p}} \boldpsi] ({\bf \scr k}^{*}) &= (+ 2 \pi i{\bf \scr k}^{*})^{{\bf p}} \boldPsi ({\bf \scr k}^{*}) \cr {\scr F}({\bf N}) [D^{{\bf p}} \boldPsi] ({\bf \scr k}) &= (- 2 \pi i{\bf \scr k})^{{\bf p}} \boldpsi ({\bf \scr k}).}]

  • (4) Convolution property. Let [\boldvarphi \in W_{\bf N}] and [\boldPhi \in W_{\bf N}^{*}] (respectively ψ and Ψ) be related by the DFT, and define [\eqalign{(\boldvarphi * \boldpsi) ({\bf \scr k}) &= \textstyle\sum\limits_{{\bf \scr k}' \in {\bb Z}^{n}/{\bf N} {\bb Z}^{n}} \boldvarphi ({\bf \scr k}') \boldpsi ({\bf \scr k} - {\bf \scr k}') \cr (\boldPhi * \boldPsi) ({\bf \scr k}^{*}) &= \textstyle\sum\limits_{{\bf \scr k}^{*'} \in {\bb Z}^{n}/{\bf N}^{T} {\bb Z}^{n}} \boldPhi ({\bf \scr k}^{*'}) {\boldPsi} ({\bf \scr k}^{*} - {\bf \scr k}^{*'}).}] Then [\eqalign{{\scr F}({\bf N}) [\boldPhi * \boldPsi] ({\bf \scr k}) &= |\det {\bf N}| \boldvarphi ({\bf \scr k}) \boldpsi ({\bf \scr k}) \cr \bar{\scr F}({\bf N}) [\boldvarphi * \boldpsi] ({\bf \scr k}^{*}) &= \boldPhi ({\bf \scr k}^{*}) \boldPsi ({\bf \scr k}^{*})}] and [\eqalign{\bar{\scr F}({\bf N}) [\boldvarphi \times \boldpsi] ({\bf \scr k}^{*}) &= {1 \over |\det {\bf N}|} (\boldPhi * \boldPsi) ({\bf \scr k}^{*}) \cr {\scr F}({\bf N}) [\boldPhi \times \boldPsi] ({\bf \scr k}) &= (\boldvarphi * \boldpsi) ({\bf \scr k}).}] Since addition on [{\bb Z}^{n}/{\bf N}{\bb Z}^{n}] and [{\bb Z}^{n}/{\bf N}^{T} {\bb Z}^{n}] is modular, this type of convolution is called cyclic convolution.
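The exchange between cyclic convolution and pointwise multiplication can be verified numerically in one dimension; ν = 6 and the two data vectors below are assumptions for the example.

```python
import cmath

# Check of the cyclic convolution theorem in one dimension: the
# transform (ii) of a cyclic convolution is the pointwise product
# of the transforms. nu and the data are assumed example values.
nu = 6
phi = [complex(k + 1) for k in range(nu)]
psi = [complex((-1) ** k) for k in range(nu)]

def conv(u, v):        # cyclic convolution: indices add modulo nu
    return [sum(u[kp] * v[(k - kp) % nu] for kp in range(nu)) for k in range(nu)]

def Fbar(v):
    return [sum(v[k] * cmath.exp(2j * cmath.pi * ks * k / nu) for k in range(nu))
            for ks in range(nu)]

lhs = Fbar(conv(phi, psi))
Phi, Psi = Fbar(phi), Fbar(psi)
assert all(abs(lhs[ks] - Phi[ks] * Psi[ks]) < 1e-7 for ks in range(nu))
```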

  • (5) Parseval/Plancherel property. If ϕ, ψ, Φ, Ψ are as above, then [\eqalign{({\scr F}({\bf N}) [\boldPhi], {\scr F}({\bf N}) [\boldPsi])_{W} &= {1 \over |\det {\bf N}|} (\boldPhi, \boldPsi)_{W^{*}} \cr (\bar{\scr F}({\bf N}) [\boldvarphi], \bar{\scr F}({\bf N}) [\boldpsi])_{W^{*}} &= |\det {\bf N}| (\boldvarphi, \boldpsi)_{W}.}]
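With the normalizations of (i) and (ii), the inner-product scalings can be checked directly from the definitions; ν = 4 and the data vectors below are assumptions for the example.

```python
import cmath

# Numerical check of the Parseval/Plancherel scalings in one dimension:
# with the normalizations of (i) and (ii), F(N) contracts inner products
# by 1/|det N| while Fbar(N) dilates them by |det N|.
# nu and the data are assumed example values.
nu = 4
phi = [complex(1, k) for k in range(nu)]
psi = [complex(k * k, -1) for k in range(nu)]

def F(v):        # (i), carries the 1/|det N| factor
    return [sum(v[ks] * cmath.exp(-2j * cmath.pi * ks * k / nu)
                for ks in range(nu)) / nu for k in range(nu)]

def Fbar(v):     # (ii), no normalizing factor
    return [sum(v[k] * cmath.exp(+2j * cmath.pi * ks * k / nu)
                for k in range(nu)) for ks in range(nu)]

def inner(u, v):
    return sum(x.conjugate() * y for x, y in zip(u, v))

assert abs(inner(F(phi), F(psi)) - inner(phi, psi) / nu) < 1e-9
assert abs(inner(Fbar(phi), Fbar(psi)) - nu * inner(phi, psi)) < 1e-9
```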

  • (6) Period 4. When N is symmetric, so that the ranges of indices [{\scr k}] and [{\scr k}^{*}] can be identified, it makes sense to speak of powers of [{\scr F}({\bf N})] and [\bar{\scr F}({\bf N})]. Then the `standardized' matrices [(1/|\det {\bf N}|^{1/2}){\scr F}({\bf N})] and [(1/|\det {\bf N}|^{1/2}) \bar{\scr F}({\bf N})] are unitary matrices whose fourth power is the identity matrix (Section[link]); their eigenvalues are therefore [\pm 1] and [\pm i].
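The period-4 property is also easy to verify numerically; the sketch below checks that the fourth power of the standardized matrix is the identity in one dimension, for an assumed example size ν = 5.

```python
import cmath

# The standardized matrix U = (1/|det N|^{1/2}) Fbar(N) is unitary with
# U^4 = Id; one-dimensional check for an assumed example size nu = 5.
nu = 5
U = [[cmath.exp(2j * cmath.pi * j * k / nu) / cmath.sqrt(nu)
      for k in range(nu)] for j in range(nu)]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(nu)) for j in range(nu)]
            for i in range(nu)]

U2 = matmul(U, U)      # U^2 is the inversion permutation k -> -k mod nu
U4 = matmul(U2, U2)
assert all(abs(U4[i][j] - (1 if i == j else 0)) < 1e-9
           for i in range(nu) for j in range(nu))
```

Since U⁴ = Id, the eigenvalues of U are fourth roots of unity, as stated in the text.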


Ahlfors, L. V. (1966). Complex analysis. New York: McGraw-Hill.
Akhiezer, N. I. (1965). The classical moment problem. Edinburgh and London: Oliver & Boyd.
Ash, J. M. (1976). Multiple trigonometric series. In Studies in harmonic analysis, edited by J. M. Ash, pp. 76–96. MAA studies in mathematics, Vol. 13. The Mathematical Association of America.
Berberian, S. K. (1962). Measure and integration. New York: Macmillan. [Reprinted by Chelsea, New York, 1965.]
Bertaut, E. F. (1952). L'énergie électrostatique de réseaux ioniques. J. Phys. Radium, 13, 499–505.
Bochner, S. (1932). Vorlesungen über Fouriersche Integrale. Leipzig: Akademische Verlagsgesellschaft.
Bochner, S. (1959). Lectures on Fourier integrals. Translated from Bochner (1932) by M. Tenenbaum & H. Pollard. Princeton University Press.
Born, M. & Huang, K. (1954). Dynamical theory of crystal lattices. Oxford University Press.
Bracewell, R. N. (1986). The Fourier transform and its applications, 2nd ed., revised. New York: McGraw-Hill.
Bremermann, H. (1965). Distributions, complex variables, and Fourier transforms. Reading: Addison-Wesley.
Campbell, G. A. & Foster, R. M. (1948). Fourier integrals for practical applications. Princeton: Van Nostrand.
Carathéodory, C. (1911). Über den Variabilitätsbereich der Fourierschen Konstanten von positiven harmonischen Functionen. Rend. Circ. Mat. Palermo, 32, 193–217.
Carslaw, H. S. (1930). An introduction to the theory of Fourier's series and integrals. London: Macmillan. [Reprinted by Dover Publications, New York, 1950.]
Carslaw, H. S. & Jaeger, J. C. (1948). Operational methods in applied mathematics. Oxford University Press.
Cartan, H. (1961). Théorie des fonctions analytiques. Paris: Hermann.
Challifour, J. L. (1972). Generalized functions and Fourier analysis. Reading: Benjamin.
Churchill, R. V. (1958). Operational mathematics, 2nd ed. New York: McGraw-Hill.
Dieudonné, J. (1969). Foundations of modern analysis. New York and London: Academic Press.
Dieudonné, J. (1970). Treatise on analysis, Vol. II. New York and London: Academic Press.
Dirac, P. A. M. (1958). The principles of quantum mechanics, 4th ed. Oxford: Clarendon Press.
Dym, H. & McKean, H. P. (1972). Fourier series and integrals. New York and London: Academic Press.
Erdélyi, A. (1954). Tables of integral transforms, Vol. I. New York: McGraw-Hill.
Erdélyi, A. (1962). Operational calculus and generalized functions. New York: Holt, Rinehart & Winston.
Ewald, P. P. (1921). Die Berechnung optischer und electrostatischer Gitterpotentiale. Ann. Phys. Leipzig, 64, 253–287.
Friedlander, F. G. (1982). Introduction to the theory of distributions. Cambridge University Press.
Friedman, A. (1970). Foundations of modern analysis. New York: Holt, Rinehart & Winston. [Reprinted by Dover, New York, 1982.]
Gel'fand, I. M. & Shilov, G. E. (1964). Generalized functions, Vol. I. New York and London: Academic Press.
Grenander, U. (1952). On Toeplitz forms and stationary processes. Ark. Math. 1, 555–571.
Grenander, U. & Szegö, G. (1958). Toeplitz forms and their applications. Berkeley: University of California Press.
Hadamard, J. (1932). Le problème de Cauchy et les équations aux dérivées partielles linéaires hyperboliques. Paris: Hermann.
Hadamard, J. (1952). Lectures on Cauchy's problem in linear partial differential equations. New York: Dover Publications.
Hardy, G. H. (1933). A theorem concerning Fourier transforms. J. London Math. Soc. 8, 227–231.
Hartman, P. & Wintner, A. (1950). On the spectra of Toeplitz's matrices. Am. J. Math. 72, 359–366.
Hartman, P. & Wintner, A. (1954). The spectra of Toeplitz's matrices. Am. J. Math. 76, 867–882.
Herglotz, G. (1911). Über Potenzreihen mit positiven, reellen Teil im Einheitskreis. Ber. Sächs. Ges. Wiss. Leipzig, 63, 501–511.
Hirschman, I. I. Jr & Hughes, D. E. (1977). Extreme eigenvalues of Toeplitz operators. Lecture notes in mathematics, Vol. 618. Berlin: Springer-Verlag.
Hörmander, L. (1963). Linear partial differential operators. Berlin: Springer-Verlag.
Kac, M. (1954). Toeplitz matrices, translation kernels, and a related problem in probability theory. Duke Math. J. 21, 501–509.
Kac, M., Murdock, W. L. & Szegö, G. (1953). On the eigenvalues of certain Hermitian forms. J. Rat. Mech. Anal. 2, 767–800.
Katznelson, Y. (1968). An introduction to harmonic analysis. New York: John Wiley.
Lanczos, C. (1966). Discourse on Fourier series. Edinburgh: Oliver & Boyd.
Landau, H. J. & Pollack, H. O. (1961). Prolate spheroidal wave functions, Fourier analysis and uncertainty (2). Bell Syst. Tech. J. 40, 65–84.
Landau, H. J. & Pollack, H. O. (1962). Prolate spheroidal wave functions, Fourier analysis and uncertainty (3): the dimension of the space of essentially time- and band-limited signals. Bell Syst. Tech. J. 41, 1295–1336.
Lang, S. (1965). Algebra. Reading, MA: Addison-Wesley.
Larmor, J. (1934). The Fourier discontinuities: a chapter in historical integral calculus. Philos. Mag. 17, 668–678.
Lavoine, J. (1963). Transformation de Fourier des pseudo-fonctions, avec tables de nouvelles transformées. Paris: Editions du CNRS.
Lighthill, M. J. (1958). Introduction to Fourier analysis and generalized functions. Cambridge University Press.
Magnus, W., Oberhettinger, F. & Soni, R. P. (1966). Formulas and theorems for the special functions of mathematical physics. Berlin: Springer-Verlag.
Moore, D. H. (1971). Heaviside operational calculus. An elementary foundation. New York: American Elsevier.
Natterer, F. (1986). The mathematics of computerized tomography. New York: John Wiley.
Paley, R. E. A. C. & Wiener, N. (1934). Fourier transforms in the complex domain. Providence, RI: American Mathematical Society.
Pollack, H. O. & Slepian, D. (1961). Prolate spheroidal wave functions, Fourier analysis and uncertainty (1). Bell Syst. Tech. J. 40, 43–64.
Riesz, M. (1938). L'intégrale de Riemann–Liouville et le problème de Cauchy pour l'équation des ondes. Bull. Soc. Math. Fr. 66, 153–170.
Riesz, M. (1949). L'intégrale de Riemann–Liouville et le problème de Cauchy. Acta Math. 81, 1–223.
Schwartz, L. (1965). Mathematics for the physical sciences. Paris: Hermann, and Reading: Addison-Wesley.
Schwartz, L. (1966). Théorie des distributions. Paris: Hermann.
Shannon, C. E. (1949). Communication in the presence of noise. Proc. Inst. Radio Eng. NY, 37, 10–21.
Shohat, J. A. & Tamarkin, J. D. (1943). The problem of moments. Mathematical surveys, No. 1. New York: American Mathematical Society.
Sneddon, I. N. (1951). Fourier transforms. New York: McGraw-Hill.
Sneddon, I. N. (1972). The use of integral transforms. New York: McGraw-Hill.
Sprecher, D. A. (1970). Elements of real analysis. New York: Academic Press. [Reprinted by Dover Publications, New York, 1987.]
Szegö, G. (1915). Ein Grenzwertsatz uber die Toeplitzschen Determinanten einer reellen positiven Funktion. Math. Ann. 76, 490–503.
Szegö, G. (1920). Beitrage zur Theorie der Toeplitzchen Formen (Erste Mitteilung). Math. Z. 6, 167–202.
Szegö, G. (1952). On certain Hermitian forms associated with the Fourier series of a positive function. Comm. Sém. Mat., Univ. Lund (Suppl. dedicated to Marcel Riesz), pp. 228–238.
Titchmarsh, E. C. (1948). Introduction to the theory of Fourier integrals. Oxford: Clarendon Press.
Toeplitz, O. (1907). Zur Theorie der quadratischen Formen von unendlichvielen Variablen. Nachr. der Kgl. Ges. Wiss. Göttingen, Math. Phys. Kl. pp. 489–506.
Toeplitz, O. (1910). Zur Transformation der Scharen bilinearer Formen von unendlichvielen Veränderlichen. Nachr. der Kgl. Ges. Wiss. Göttingen, Math. Phys. Kl. pp. 110–115.
Toeplitz, O. (1911a). Zur Theorie der quadratischen und bilinearen Formen von unendlichvielen Veränderlichen. I. Teil: Theorie der L-formen. Math. Ann. 70, 351–376.
Toeplitz, O. (1911b). Über die Fouriersche Entwicklung positiver Funktionen. Rend. Circ. Mat. Palermo, 32, 191–192.
Tolstov, G. P. (1962). Fourier series. Englewood Cliffs, NJ: Prentice-Hall.
Trèves, F. (1967). Topological vector spaces, distributions, and kernels. New York and London: Academic Press.
Van der Pol, B. & Bremmer, H. (1955). Operational calculus, 2nd ed. Cambridge University Press.
Weyl, H. (1931). The theory of groups and quantum mechanics. New York: Dutton. [Reprinted by Dover Publications, New York, 1950.]
Whittaker, E. T. (1915). On the functions which are represented by the expansions of the interpolation-theory. Proc. R. Soc. (Edinburgh), 35, 181–194.
Whittaker, E. T. (1928). Oliver Heaviside. Bull. Calcutta Math. Soc. 20, 199–220. [Reprinted in Moore (1971).]
Whittaker, E. T. & Watson, G. N. (1927). A course of modern analysis, 4th ed. Cambridge University Press.
Widom, H. (1965). Toeplitz matrices. In Studies in real and complex analysis, edited by I. I. Hirschmann Jr, pp. 179–209. MAA studies in mathematics, Vol. 3. Englewood Cliffs: Prentice-Hall.
Wiener, N. (1933). The Fourier integral and certain of its applications. Cambridge University Press. [Reprinted by Dover Publications, New York, 1959.]
Yosida, K. (1965). Functional analysis. Berlin: Springer-Verlag.
Zemanian, A. H. (1965). Distribution theory and transform analysis. New York: McGraw-Hill.
Zemanian, A. H. (1968). Generalised integral transformations. New York: Interscience.
Zygmund, A. (1959). Trigonometric series, Vols. 1 and 2. Cambridge University Press.
Zygmund, A. (1976). Notes on the history of Fourier series. In Studies in harmonic analysis, edited by J. M. Ash, pp. 1–19. MAA studies in mathematics, Vol. 13. The Mathematical Association of America.
