International Tables for Crystallography Volume B: Reciprocal space. Edited by U. Shmueli. © International Union of Crystallography 2006
International Tables for Crystallography (2006). Vol. B, ch. 1.3, pp. 25–98.
doi: 10.1107/97809553602060000551
Chapter 1.3. Fourier transforms in crystallography: theory, algorithms and applications
G. Bricogne^{a}
^{a}MRC Laboratory of Molecular Biology, Hills Road, Cambridge CB2 2QH, England, and LURE, Bâtiment 209D, Université Paris-Sud, 91405 Orsay, France

In the first part of this chapter, the mathematical theory of the Fourier transformation is cast in the language of Schwartz's theory of distributions, allowing Fourier transforms, Fourier series and discrete Fourier transforms to be treated together. Next the numerical computation of the discrete Fourier transform is discussed. One-dimensional algorithms are examined first, including the Cooley–Tukey algorithm, the Good (or prime factor) algorithm, the Rader algorithm and the Winograd algorithms. Multidimensional algorithms are then covered. The last part of the chapter surveys the crystallographic applications of Fourier transforms.
Since the publication of Volume II of International Tables, most aspects of the theory, computation and applications of Fourier transforms have undergone considerable development, often to the point of being hardly recognizable.
The mathematical analysis of the Fourier transformation has been extensively reformulated within the framework of distribution theory, following Schwartz's work in the early 1950s.
The computation of Fourier transforms has been revolutionized by the advent of digital computers and of the Cooley–Tukey algorithm, and progress has been made at an ever-accelerating pace in the design of new types of algorithms and in optimizing their interplay with machine architecture.
These advances have transformed both theory and practice in several fields which rely heavily on Fourier methods; much of electrical engineering, for instance, has become digital signal processing.
By contrast, crystallography has remained relatively unaffected by these developments. From the conceptual point of view, old-fashioned Fourier series are still adequate for the quantitative description of X-ray diffraction, as this rarely entails consideration of molecular transforms between reciprocal-lattice points. From the practical point of view, three-dimensional Fourier transforms have mostly been used as a tool for visualizing electron-density maps, so that only moderate urgency was given to trying to achieve ultimate efficiency in these relatively infrequent calculations.
Recent advances in phasing and refinement methods, however, have placed renewed emphasis on concepts and techniques long used in digital signal processing, e.g. flexible sampling, Shannon interpolation, linear filtering, and interchange between convolution and multiplication. These methods are iterative in nature, and thus generate a strong incentive to design new crystallographic Fourier transform algorithms making the fullest possible use of all available symmetry to save both storage and computation.
As a result, need has arisen for a modern and coherent account of Fourier transform methods in crystallography which would provide:
The rapid pace of progress in these fields implies that such an account would be struck by quasi-immediate obsolescence if it were written solely for the purpose of compiling a catalogue of results and formulae `customized' for crystallographic use. Instead, the emphasis has been placed on a mode of presentation in which most results and formulae are derived rather than listed. This does entail a substantial mathematical overhead, but has the advantage of preserving in its `native' form the context within which these results are obtained. It is this context, rather than any particular set of results, which constitutes the most fertile source of new ideas and new applications, and which as such has any hope at all of remaining useful in the long run.
These conditions have led to the following choices:
In keeping with this philosophy, the theory is presented first, followed by the crystallographic applications. There are `forward references' from mathematical results to the applications which later invoke them (thus giving `reallife' examples rather than artificial ones), and `backward references' as usual. In this way, the internal logic of the mathematical developments – the surest guide to future innovations – can be preserved, whereas the alternative solution of relegating these to appendices tends on the contrary to obscure that logic by subordinating it to that of the applications.
It is hoped that this attempt at an overall presentation of the main features of Fourier transforms and of their ubiquitous role in crystallography will be found useful by scientists both within and outside the field.
The Fourier transformation and the practical applications to which it gives rise occur in three different forms which, although they display a similar range of phenomena, normally require distinct formulations and different proof techniques:
At the same time, the most useful property of the Fourier transformation – the exchange between multiplication and convolution – is mathematically the most elusive and the one which requires the greatest caution in order to avoid writing down meaningless expressions.
It is the unique merit of Schwartz's theory of distributions (Schwartz, 1966) that it affords complete control over all the troublesome phenomena which had previously forced mathematicians to settle for a piecemeal, fragmented theory of the Fourier transformation. By its ability to handle rigorously highly `singular' objects (especially δ-functions, their derivatives, their tensor products, their products with smooth functions, their translates and lattices of these translates), distribution theory can deal with all the major properties of the Fourier transformation as particular instances of a single basic result (the exchange between multiplication and convolution), and can at the same time accommodate the three previously distinct types of Fourier theories within a unique framework. This brings great simplification to matters of central importance in crystallography, such as the relations between
All these properties become subsumed under the same theorem.
This striking synthesis comes at a slight price, which is the relative complexity of the notion of distribution. It is first necessary to establish the notion of topological vector space and to gain sufficient control (or, at least, understanding) over convergence behaviour in certain of these spaces. The key notion of metrizability cannot be circumvented, as it underlies most of the constructs and many of the proof techniques used in distribution theory. Most of Section 1.3.2.2 builds up to the fundamental result at the end of Section 1.3.2.2.6.2, which is basic to the definition of a distribution in Section 1.3.2.3.4 and to all subsequent developments.
The reader mostly interested in applications will probably want to reach this section by starting with his or her favourite topic in Section 1.3.4, and following the backward references to the relevant properties of the Fourier transformation, then to the proof of these properties, and finally to the definitions of the objects involved. Hopefully, he or she will then feel inclined to follow the forward references and thus explore the subject from the abstract to the practical. The books by Dieudonné (1969) and Lang (1965) are particularly recommended as general references for all aspects of analysis and algebra.
Throughout this text, ℝ will denote the set of real numbers, ℤ the set of rational (signed) integers and ℕ the set of natural (unsigned) integers. The symbol ℝⁿ will denote the Cartesian product of n copies of ℝ: ℝⁿ = ℝ × … × ℝ (n times, n ≥ 1), so that an element x of ℝⁿ is an n-tuple of real numbers: x = (x₁, …, xₙ). Similar meanings will be attached to ℤⁿ and ℕⁿ.
The symbol ℂ will denote the set of complex numbers. If z ∈ ℂ, its modulus will be denoted by |z|, its conjugate by z̄ (not z*), and its real and imaginary parts by Re z and Im z: |z|² = z z̄, Re z = (z + z̄)/2, Im z = (z − z̄)/(2i).
If X is a finite set, then |X| will denote the number of its elements. If mapping f sends an element x of set X to the element f(x) of set Y, the notation f : x ↦ f(x) will be used; the plain arrow → will be reserved for denoting limits, as in lim_{ρ→∞} (1 + x/ρ)^ρ = eˣ.
If X is any set and S is a subset of X, the indicator function χ_S of S is the real-valued function on X defined by χ_S(x) = 1 if x ∈ S, and χ_S(x) = 0 if x ∉ S.
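As a small illustrative sketch (the helper name `indicator` is ours, not the text's), the indicator function is directly realizable in code:

```python
# The indicator function chi_S of a subset S of X:
# chi_S(x) = 1 if x in S, 0 otherwise.
def indicator(S):
    return lambda x: 1 if x in S else 0

chi = indicator({1, 2, 3})
values = [chi(x) for x in [0, 1, 2, 3, 4]]    # [0, 1, 1, 1, 0]
```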
The set ℝⁿ can be endowed with the structure of a vector space of dimension n over ℝ, and can be made into a Euclidean space by treating its standard basis as an orthonormal basis and defining the Euclidean norm ‖x‖ = (Σᵢ₌₁ⁿ xᵢ²)^{1/2}.
By misuse of notation, x will sometimes also designate the column vector of coordinates of x ∈ ℝⁿ; if these coordinates are referred to an orthonormal basis of ℝⁿ, then ‖x‖ = (xᵀx)^{1/2}, where xᵀ denotes the transpose of x.
The distance between two points x and y defined by d(x, y) = ‖x − y‖ allows the topological structure of ℝ to be transferred to ℝⁿ, making it a metric space. The basic notions in a metric space are those of neighbourhoods, of open and closed sets, of limit, of continuity, and of convergence (see Section 1.3.2.2.6.1).
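The norm and the induced metric can be checked numerically; a minimal sketch (function names `norm` and `dist` are ours), including a test of the triangle inequality:

```python
import numpy as np

# Euclidean norm ||x|| = (x^T x)^(1/2) and the induced metric d(x, y) = ||x - y||.
def norm(x):
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(x @ x))

def dist(x, y):
    return norm(np.asarray(x, float) - np.asarray(y, float))

x, y, z = [1.0, 2.0], [4.0, 6.0], [0.0, 0.0]
d_xy = dist(x, y)                                  # a 3-4-5 triangle: d = 5
triangle_ok = dist(x, y) <= dist(x, z) + dist(z, y)
```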
A subset S of ℝⁿ is bounded if sup ‖x − y‖ < ∞ as x and y run through S; it is closed if it contains the limits of all convergent sequences with elements in S. A subset K of ℝⁿ which is both bounded and closed has the property of being compact, i.e. that whenever K has been covered by a family of open sets, a finite subfamily can be found which suffices to cover K. Compactness is a very useful topological property for the purpose of proof, since it allows one to reduce the task of examining infinitely many local situations to that of examining only finitely many of them.
Let ϕ be a complex-valued function over ℝⁿ. The support of ϕ, denoted Supp ϕ, is the smallest closed subset of ℝⁿ outside which ϕ vanishes identically. If Supp ϕ is compact, ϕ is said to have compact support.
If t ∈ ℝⁿ, the translate of ϕ by t, denoted τ_t ϕ, is defined by (τ_t ϕ)(x) = ϕ(x − t). Its support is the geometric translate of that of ϕ: Supp τ_t ϕ = {x + t | x ∈ Supp ϕ}.
If A is a non-singular linear transformation in ℝⁿ, the image of ϕ by A, denoted A^#ϕ, is defined by (A^#ϕ)(x) = ϕ[A⁻¹x]. Its support is the geometric image of Supp ϕ under A: Supp A^#ϕ = {Ax | x ∈ Supp ϕ}.
If S is a non-singular affine transformation in ℝⁿ of the form S(x) = A(x) + b with A linear, the image of ϕ by S is S^#ϕ = (τ_b A^#)ϕ, i.e. (S^#ϕ)(x) = ϕ[A⁻¹(x − b)]. Its support is the geometric image of Supp ϕ under S: Supp S^#ϕ = S(Supp ϕ).
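A numerical sketch of the affine image (the helper `affine_image` and the bump `phi` are our illustrative choices): applying S to the graph moves the peak of ϕ from 0 to S(0) = b.

```python
import numpy as np

# Image of a function under an affine map S: x -> A x + b, computed through the
# inverse transformation of the coordinates: (S# phi)(x) = phi(A^{-1}(x - b)).
def affine_image(phi, A, b):
    Ainv = np.linalg.inv(np.asarray(A, float))
    b = np.asarray(b, float)
    return lambda x: phi(Ainv @ (np.asarray(x, float) - b))

phi = lambda x: float(np.exp(-(x @ x)))   # smooth bump peaked at the origin
A = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, -1.0])
g = affine_image(phi, A, b)
# The peak of phi at 0 is carried to S(0) = b: the graph moves, not the axes.
peak_moved = abs(g(b) - phi(np.zeros(2))) < 1e-12
```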
It may be helpful to visualize the process of forming the image of a function by a geometric operation as consisting of applying that operation to the graph of that function, which is equivalent to applying the inverse transformation to the coordinates x. This use of the inverse later affords the `left-representation property' [see Section 1.3.4.2.2.2(e)] when the geometric operations form a group, which is of fundamental importance in the treatment of crystallographic symmetry (Sections 1.3.4.2.2.4, 1.3.4.2.2.5).
When dealing with functions in n variables and their derivatives, considerable abbreviation of notation can be obtained through the use of multi-indices.
A multi-index p is an n-tuple of natural integers: p = (p₁, …, pₙ). The length of p is defined as |p| = Σᵢ pᵢ, and the following abbreviations will be used: (i) x^p = x₁^{p₁} … xₙ^{pₙ}; (ii) Dᵢf = ∂f/∂xᵢ; (iii) D^p f = D₁^{p₁} … Dₙ^{pₙ} f = ∂^{|p|}f/∂x₁^{p₁} … ∂xₙ^{pₙ}; (iv) q ≤ p if and only if qᵢ ≤ pᵢ for all i = 1, …, n; (v) p ± q = (p₁ ± q₁, …, pₙ ± qₙ); (vi) p! = p₁! × … × pₙ!; (vii) C(p, q) = p!/[q!(p − q)!] for q ≤ p.
Leibniz's formula for the repeated differentiation of products then assumes the concise form D^p(fg) = Σ_{q≤p} C(p, q) D^q f D^{p−q} g, while the Taylor expansion of f to order m about x = a reads f(x) = Σ_{|p|≤m} (1/p!) D^p f(a) (x − a)^p + o(‖x − a‖^m).
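Leibniz's formula can be verified exactly for polynomials; the following sketch (all helper names are ours) uses integer coefficient lists, so the check involves no rounding, and also illustrates the multi-index conventions |p| and p!:

```python
from math import comb, factorial

# Exact check of Leibniz's formula D^p(fg) = sum_{q<=p} C(p,q) D^q f D^{p-q} g
# in one variable, with polynomials as integer coefficient lists [a0, a1, ...].
def pmul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def pdiff(f, k=1):
    for _ in range(k):
        f = [i * c for i, c in enumerate(f)][1:] or [0]
    return f

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def strip(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

f = [1, 2, 0, 3]       # 1 + 2x + 3x^3
g = [5, -1, 4]         # 5 - x + 4x^2
p = 3
lhs = pdiff(pmul(f, g), p)
rhs = [0]
for q in range(p + 1):
    rhs = padd(rhs, [comb(p, q) * c for c in pmul(pdiff(f, q), pdiff(g, p - q))])
leibniz_ok = strip(lhs) == strip(rhs)

# Multi-index conventions in R^n: |p| = sum p_i and p! = prod p_i!
pvec = (2, 0, 1)
length = sum(pvec)                # |p| = 3
pfact = 1
for pi in pvec:
    pfact *= factorial(pi)       # p! = 2! * 0! * 1! = 2
```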
In certain sections the notation ∇f will be used for the gradient vector of f, and the notation (∇∇ᵀ)f for the Hessian matrix of its mixed second-order partial derivatives: ∇f = (∂f/∂x₁, …, ∂f/∂xₙ)ᵀ, [(∇∇ᵀ)f]ᵢⱼ = ∂²f/∂xᵢ∂xⱼ.
The Riemann integral used in elementary calculus suffers from the drawback that vector spaces of Riemann-integrable functions over ℝⁿ are not complete for the topology of convergence in the mean: a Cauchy sequence of integrable functions may converge to a non-integrable function.
To obtain the property of completeness, which is fundamental in functional analysis, it was necessary to extend the notion of integral. This was accomplished by Lebesgue [see Berberian (1962), Dieudonné (1970), or Chapter 1 of Dym & McKean (1972) and the references therein, or Chapter 9 of Sprecher (1970)], and entailed identifying functions which differed only on a subset of zero measure in ℝⁿ (such functions are said to be equal `almost everywhere'). The vector spaces L^p(ℝⁿ) consisting of function classes f modulo this identification for which ‖f‖_p = (∫_{ℝⁿ} |f(x)|^p dⁿx)^{1/p} < ∞ are then complete for the topology induced by the norm ‖·‖_p: the limit of every Cauchy sequence of functions in L^p(ℝⁿ) is itself a function in L^p(ℝⁿ) (Riesz–Fischer theorem).
The space L¹(ℝⁿ) consists of those function classes f such that ‖f‖₁ = ∫_{ℝⁿ} |f(x)| dⁿx < ∞, which are called summable or absolutely integrable. The convolution product (f * g)(x) = ∫_{ℝⁿ} f(x − y)g(y) dⁿy = ∫_{ℝⁿ} f(y)g(x − y) dⁿy = (g * f)(x) is well defined; combined with the vector space structure of L¹(ℝⁿ), it makes L¹(ℝⁿ) into a (commutative) convolution algebra. However, this algebra has no unit element: there is no f ∈ L¹(ℝⁿ) such that f * g = g for all g ∈ L¹(ℝⁿ); it has only approximate units, i.e. sequences (f_ν) such that f_ν * g tends to g in the L¹ topology as ν → ∞. This is one of the starting points of distribution theory.
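The behaviour of approximate units can be seen numerically; in the sketch below (our construction, with Riemann sums standing in for the Lebesgue integrals) narrow unit-mass Gaussians f_ν make ‖f_ν * g − g‖₁ shrink as ν grows:

```python
import numpy as np

# Approximate units of the convolution algebra L^1: narrow unit-mass Gaussians
# f_nu satisfy ||f_nu * g - g||_1 -> 0 as nu -> infinity.
dx = 0.01
xs = np.arange(-10, 10, dx)
g = np.exp(-np.abs(xs))                       # a function in L^1

def gaussian(nu):
    f = np.exp(-0.5 * nu * xs**2)
    return f / (f.sum() * dx)                 # normalize to unit integral

def l1_error(nu):
    conv = np.convolve(gaussian(nu), g, mode='same') * dx   # f_nu * g on the grid
    return float(np.sum(np.abs(conv - g)) * dx)

errors = [l1_error(nu) for nu in (1.0, 10.0, 100.0)]
decreasing = errors[0] > errors[1] > errors[2]
```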
The space L²(ℝⁿ) of square-integrable functions can be endowed with a scalar product (f, g) = ∫_{ℝⁿ} f̄(x)g(x) dⁿx which makes it into a Hilbert space. The Cauchy–Schwarz inequality |(f, g)| ≤ [(f, f)(g, g)]^{1/2} generalizes the fact that the absolute value of the cosine of an angle is less than or equal to 1.
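A quick numerical check of the Cauchy–Schwarz inequality on a grid (our discretization; the choice of f and g is arbitrary):

```python
import numpy as np

# Discretized scalar product (f, g) = integral of conj(f) g dx, and a check of
# Cauchy-Schwarz: |(f, g)| <= (f, f)^{1/2} (g, g)^{1/2}.
dx = 0.001
xs = np.arange(-5, 5, dx)

def inner(f, g):
    return np.sum(np.conj(f) * g) * dx

f = np.exp(-xs**2) * np.exp(1j * xs)          # square-integrable test functions
g = 1.0 / (1.0 + xs**2) + 0j
lhs = abs(inner(f, g))
rhs = float(np.sqrt(inner(f, f).real * inner(g, g).real))
cauchy_schwarz_ok = lhs <= rhs
```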
The space L∞(ℝⁿ) is defined as the space of functions f such that ‖f‖_∞ = ess sup_x |f(x)| < ∞. The quantity ‖f‖_∞ is called the `essential sup norm' of f, as it is the smallest positive number which |f(x)| exceeds only on a subset of zero measure in ℝⁿ. A function f ∈ L∞(ℝⁿ) is called essentially bounded.
Let f ∈ L¹(ℝᵐ), g ∈ L¹(ℝⁿ). Then the function f ⊗ g : (x, y) ↦ f(x)g(y) is called the tensor product of f and g, and belongs to L¹(ℝᵐ × ℝⁿ). The finite linear combinations of functions of the form f ⊗ g span a subspace of L¹(ℝᵐ × ℝⁿ) called the tensor product of L¹(ℝᵐ) and L¹(ℝⁿ) and denoted L¹(ℝᵐ) ⊗ L¹(ℝⁿ).
The integration of a general function over ℝᵐ × ℝⁿ may be accomplished in two steps according to Fubini's theorem. Given F ∈ L¹(ℝᵐ × ℝⁿ), the functions x ↦ ∫_{ℝⁿ} F(x, y) dⁿy and y ↦ ∫_{ℝᵐ} F(x, y) dᵐx exist for almost all x ∈ ℝᵐ and almost all y ∈ ℝⁿ, respectively, are integrable, and ∫_{ℝᵐ×ℝⁿ} F(x, y) dᵐx dⁿy = ∫_{ℝᵐ} [∫_{ℝⁿ} F(x, y) dⁿy] dᵐx = ∫_{ℝⁿ} [∫_{ℝᵐ} F(x, y) dᵐx] dⁿy. Conversely, if any one of the three integrals, with F replaced by |F|, is finite, then so are the other two, and the identity above holds. It is then (and only then) permissible to change the order of integrations.
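The interchange of integration order can be illustrated on a truncated grid (our discretization, with sums standing in for the integrals):

```python
import numpy as np

# Fubini, discretized: for an absolutely integrable F(x, y), summing over y
# first and then x agrees with the opposite order.
dx = dy = 0.05
xs = np.arange(-5, 5, dx)
ys = np.arange(-5, 5, dy)
X, Y = np.meshgrid(xs, ys, indexing='ij')
F = np.exp(-(X**2 + Y**2)) * np.cos(X * Y)       # summable on the plane

order_xy = np.sum(np.sum(F, axis=1) * dy * dx)   # integrate over y, then x
order_yx = np.sum(np.sum(F, axis=0) * dx * dy)   # integrate over x, then y
agree = abs(order_xy - order_yx) < 1e-10
```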
Fubini's theorem is of fundamental importance in the study of tensor products and convolutions of distributions.
Geometric intuition, which often makes `obvious' the topological properties of the real line and of ordinary space, cannot be relied upon in the study of function spaces: the latter are infinite-dimensional, and several inequivalent notions of convergence may exist. A careful analysis of topological concepts and of their interrelationship is thus a necessary prerequisite to the study of these spaces. The reader may consult Dieudonné (1969, 1970), Friedman (1970), Trèves (1967) and Yosida (1965) for detailed expositions.
Most topological notions are first encountered in the setting of metric spaces. A metric space E is a set equipped with a distance function d from E × E to the non-negative reals which satisfies: (i) the triangle inequality d(x, z) ≤ d(x, y) + d(y, z); (ii) the separation property d(x, y) = 0 if and only if x = y; (iii) the symmetry d(x, y) = d(y, x). By means of d, the following notions can be defined: open balls, neighbourhoods; open and closed sets, interior and closure; convergence of sequences, continuity of mappings; Cauchy sequences and completeness; compactness; connectedness. They suffice for the investigation of a great number of questions in analysis and geometry (see e.g. Dieudonné, 1969).
Many of these notions turn out to depend only on the properties of the collection 𝒪 of open subsets of E: two distance functions leading to the same collection 𝒪 lead to identical topological properties. An axiomatic reformulation of topological notions is thus possible: a topology in E is a collection of subsets of E which satisfy suitable axioms and are deemed open irrespective of the way they are obtained. From the practical standpoint, however, a topology which can be obtained from a distance function (called a metrizable topology) has the very useful property that the notions of closure, limit and continuity may be defined by means of sequences. For non-metrizable topologies, these notions are much more difficult to handle, requiring the use of `filters' instead of sequences.
In some spaces E, a topology may be most naturally defined by a family (d_α)_{α∈A} of pseudo-distances, where each d_α satisfies (i) and (iii) but not (ii). Such spaces are called uniformizable. If for every pair (x, y) of distinct points of E there exists α ∈ A such that d_α(x, y) ≠ 0, then the separation property can be recovered. If furthermore a countable subfamily of the d_α suffices to define the topology of E, the latter can be shown to be metrizable, so that limiting processes in E may be studied by means of sequences.
The function spaces E of interest in Fourier analysis have an underlying vector space structure over the field ℂ of complex numbers. A topology on E is said to be compatible with a vector space structure on E if vector addition [i.e. the map (x, y) ↦ x + y] and scalar multiplication [i.e. the map (λ, x) ↦ λx] are both continuous; E is then called a topological vector space. Such a topology may be defined by specifying a `fundamental system S of neighbourhoods of 0', which can then be translated by vector addition to construct neighbourhoods of other points x ≠ 0.
A norm ν on a vector space E is a non-negative real-valued function on E such that (i′) ν(λx) = |λ|ν(x) for all λ ∈ ℂ and x ∈ E; (ii′) ν(x) = 0 if and only if x = 0; (iii′) ν(x + y) ≤ ν(x) + ν(y) for all x, y ∈ E. Subsets of E defined by conditions of the form ν(x) ≤ r with r > 0 form a fundamental system of neighbourhoods of 0. The corresponding topology makes E a normed space. This topology is metrizable, since it is equivalent to that derived from the translation-invariant distance d(x, y) = ν(x − y). Normed spaces which are complete, i.e. in which all Cauchy sequences converge, are called Banach spaces; they constitute the natural setting for the study of differential calculus.
A semi-norm σ on a vector space E is a positive real-valued function on E which satisfies (i′) and (iii′) but not (ii′). Given a set Σ of semi-norms on E such that any pair (x, y) in E × E is separated by at least one σ ∈ Σ, let B be the set of those subsets of E defined by a condition of the form σ(x) ≤ r with σ ∈ Σ and r > 0; and let S be the set of finite intersections of elements of B. Then there exists a unique topology on E for which S is a fundamental system of neighbourhoods of 0. This topology is uniformizable since it is equivalent to that derived from the family of translation-invariant pseudo-distances (x, y) ↦ σ(x − y). It is metrizable if and only if it can be constructed by the above procedure with Σ a countable set of semi-norms. If furthermore E is complete, E is called a Fréchet space.
If E is a topological vector space over ℂ, its dual E* is the set of all linear mappings from E to ℂ (which are also called linear forms, or linear functionals, over E). The subspace of E* consisting of all linear forms which are continuous for the topology of E is called the topological dual of E and is denoted E′. If the topology on E is metrizable, then the continuity of a linear form T at f ∈ E can be ascertained by means of sequences, i.e. by checking that the sequence of complex numbers ⟨T, f_ν⟩ converges to ⟨T, f⟩ in ℂ whenever the sequence (f_ν) converges to f in E.
At the end of the 19th century, Heaviside proposed under the name of `operational calculus' a set of rules for solving a class of differential, partial differential and integral equations encountered in electrical engineering (today's `signal processing'). These rules worked remarkably well but were devoid of mathematical justification (see Whittaker, 1928). In 1926, Dirac introduced his famous δ-function [see Dirac (1958), pp. 58–61], which was found to be related to Heaviside's constructs. Other singular objects, together with procedures to handle them, had already appeared in several branches of analysis [Cauchy's `principal values'; Hadamard's `finite parts' (Hadamard, 1932, 1952); Riesz's regularization methods for certain divergent integrals (Riesz, 1938, 1949)] as well as in the theories of Fourier series and integrals (see e.g. Bochner, 1932, 1959). Their very definition often verged on violating the rigorous rules governing limiting processes in analysis, so that subsequent recourse to limiting processes could lead to erroneous results; ad hoc precautions thus had to be observed to avoid mistakes in handling these objects.
In 1945–1950, Laurent Schwartz proposed his theory of distributions (see Schwartz, 1966), which provided a unified and definitive treatment of all these questions, with a striking combination of rigour and simplicity. Schwartz's treatment of Dirac's δ-function illustrates his approach in a most direct fashion. Dirac's original definition reads: (i) δ(x) = 0 for x ≠ 0; (ii) ∫_{ℝⁿ} δ(x) dⁿx = 1. These two conditions are irreconcilable with Lebesgue's theory of integration: by (i), δ vanishes almost everywhere, so that its integral in (ii) must be 0, not 1.
A better definition consists in specifying that (iii) ∫_{ℝⁿ} δ(x)ϕ(x) dⁿx = ϕ(0) for any function ϕ sufficiently well behaved near x = 0. This is related to the problem of finding a unit for convolution (Section 1.3.2.2.4). As will now be seen, this definition is still unsatisfactory. Let the sequence (f_ν) in L¹(ℝⁿ) be an approximate convolution unit, e.g. f_ν(x) = (ν/2π)^{n/2} exp(−½ν‖x‖²). Then for any well behaved function ϕ the integrals ∫_{ℝⁿ} f_ν(x)ϕ(x) dⁿx exist, and the sequence of their numerical values tends to ϕ(0). It is tempting to combine this with (iii) to conclude that δ is the limit of the sequence (f_ν) as ν → ∞. However, lim_ν f_ν(x) = 0 almost everywhere in ℝⁿ and the crux of the problem is that lim_{ν→∞} ∫_{ℝⁿ} f_ν(x)ϕ(x) dⁿx = ϕ(0) ≠ ∫_{ℝⁿ} [lim_{ν→∞} f_ν(x)]ϕ(x) dⁿx = 0, because the sequence (f_ν) does not satisfy the hypotheses of Lebesgue's dominated convergence theorem.
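Both halves of this paradox can be observed numerically; a sketch in one dimension (our discretization, with Riemann sums standing in for the integrals): the pairings ∫ f_ν ϕ dx approach ϕ(0), while the functions f_ν themselves collapse to 0 at every fixed x ≠ 0.

```python
import numpy as np

# One-dimensional Gaussian approximate unit f_nu(x) = (nu/2*pi)^{1/2} exp(-nu x^2/2).
dx = 0.0005
xs = np.arange(-10, 10, dx)
phi = np.cos(xs) * np.exp(-xs**2 / 50)        # a well behaved phi with phi(0) = 1

def pairing(nu):
    f = np.sqrt(nu / (2 * np.pi)) * np.exp(-0.5 * nu * xs**2)
    return float(np.sum(f * phi) * dx)        # the integral of f_nu * phi

vals = [pairing(nu) for nu in (10.0, 1000.0, 100000.0)]
tends_to_phi0 = abs(vals[-1] - 1.0) < 1e-3
# ... yet at a fixed x0 != 0 the functions themselves tend to 0:
x0 = 0.5
pointwise_zero = np.sqrt(1e5 / (2 * np.pi)) * np.exp(-0.5 * 1e5 * x0**2) < 1e-100
```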
Schwartz's solution to this problem is deceptively simple: the regular behaviour one is trying to capture is an attribute not of the sequence of functions (f_ν), but of the sequence of continuous linear functionals (T_ν) defined by T_ν : ϕ ↦ ∫_{ℝⁿ} f_ν(x)ϕ(x) dⁿx, which has as a limit the continuous functional T : ϕ ↦ ϕ(0). It is the latter functional which constitutes the proper definition of δ. The previous paradoxes arose because one insisted on writing down the simple linear operation T in terms of an integral.
The essence of Schwartz's theory of distributions is thus that, rather than try to define and handle `generalized functions' via sequences such as (f_ν) [an approach adopted e.g. by Lighthill (1958) and Erdélyi (1962)], one should instead look at them as continuous linear functionals over spaces of well behaved functions.
There are many books on distribution theory and its applications. The reader may consult in particular Schwartz (1965, 1966), Gel'fand & Shilov (1964), Bremermann (1965), Trèves (1967), Challifour (1972), Friedlander (1982), and the relevant chapters of Hörmander (1963) and Yosida (1965). Schwartz (1965) is especially recommended as an introduction.
The guiding principle which leads to requiring that the functions ϕ above (traditionally called `test functions') should be well behaved is that correspondingly `wilder' behaviour can then be accommodated in the limiting behaviour of the f_ν while still keeping the integrals ∫_{ℝⁿ} f_ν ϕ dⁿx under control. Thus
To ensure further the continuity of functionals such as T_ν with respect to the test function ϕ as the f_ν go increasingly wild, very strong control will have to be exercised in the way in which a sequence (ϕ_j) of test functions will be said to converge towards a limiting ϕ: conditions will have to be imposed not only on the values of the functions ϕ_j, but also on those of all their derivatives. Hence, defining a strong enough topology on the space of test functions ϕ is an essential prerequisite to the development of a satisfactory theory of distributions.
With this rationale in mind, the following function spaces will be defined for any open subset Ω of ℝⁿ (which may be the whole of ℝⁿ): ℰ(Ω) is the space of complex-valued functions over Ω which are indefinitely differentiable; 𝒟(Ω) is the subspace of ℰ(Ω) consisting of those functions which have compact support contained in Ω; and, for a given compact subset K of Ω, 𝒟_K(Ω) is the subspace of 𝒟(Ω) consisting of those functions whose support is contained in K.
When Ω is unambiguously defined by the context, we will simply write ℰ, 𝒟, 𝒟_K.
It sometimes suffices to require the existence of continuous derivatives only up to finite order m inclusive. The corresponding spaces are then denoted ℰ^{(m)}, 𝒟^{(m)}, 𝒟_K^{(m)}, with the convention that if m = 0, only continuity is required.
The topologies on these spaces constitute the most important ingredients of distribution theory, and will be outlined in some detail.
It is defined by the family of semi-norms ϕ ↦ σ_{p,K}(ϕ) = sup_{x∈K} |D^p ϕ(x)|, where p is a multi-index and K a compact subset of Ω. A fundamental system S of neighbourhoods of the origin in ℰ(Ω) is given by subsets of ℰ(Ω) of the form V(m, ε, K) = {ϕ ∈ ℰ(Ω) | |p| ≤ m ⇒ σ_{p,K}(ϕ) < ε} for all natural integers m, positive reals ε, and compact subsets K of Ω. Since a countable family of compact subsets K suffices to cover Ω, and since restricted values of ε of the form ε = 1/N lead to the same topology, S is equivalent to a countable system of neighbourhoods and hence ℰ(Ω) is metrizable.
Convergence in ℰ(Ω) may thus be defined by means of sequences. A sequence (ϕ_ν) in ℰ(Ω) will be said to converge to 0 if for any given V(m, ε, K) there exists ν₀ such that ϕ_ν ∈ V(m, ε, K) whenever ν > ν₀; in other words, if the ϕ_ν and all their derivatives D^p ϕ_ν converge to 0 uniformly on any given compact K in Ω.
It is defined by the family of semi-norms ϕ ↦ σ_p(ϕ) = sup_{x∈K} |D^p ϕ(x)|, where K is now fixed. The fundamental system S of neighbourhoods of the origin in 𝒟_K(Ω) is given by sets of the form V(m, ε) = {ϕ ∈ 𝒟_K(Ω) | |p| ≤ m ⇒ σ_p(ϕ) < ε}. It is equivalent to the countable subsystem of the V(m, 1/N), hence 𝒟_K(Ω) is metrizable.
Convergence in 𝒟_K(Ω) may thus be defined by means of sequences. A sequence (ϕ_ν) in 𝒟_K(Ω) will be said to converge to 0 if for any given V(m, ε) there exists ν₀ such that ϕ_ν ∈ V(m, ε) whenever ν > ν₀; in other words, if the ϕ_ν and all their derivatives D^p ϕ_ν converge to 0 uniformly in K.
It is defined by the fundamental system of neighbourhoods of the origin consisting of sets of the form V[(m), (ε)] = {ϕ ∈ 𝒟(Ω) | |p| ≤ m_ν ⇒ σ_{p,K_ν}(ϕ) < ε_ν for all ν}, where (K_ν) is a sequence of compact subsets covering Ω, (m) = (m_ν) is an increasing sequence of integers tending to +∞ and (ε) = (ε_ν) is a decreasing sequence of positive reals tending to 0, as ν → ∞.
This topology is not metrizable, because the sets of sequences (m) and (ε) are essentially uncountable. It can, however, be shown to be the inductive limit of the topologies of the subspaces 𝒟_K(Ω), in the following sense: V is a neighbourhood of the origin in 𝒟(Ω) if and only if its intersection with 𝒟_K(Ω) is a neighbourhood of the origin in 𝒟_K(Ω) for any given compact K in Ω.
A sequence (ϕ_ν) in 𝒟(Ω) will thus be said to converge to 0 in 𝒟(Ω) if all the ϕ_ν belong to some 𝒟_K(Ω) (with K a compact subset of Ω independent of ν) and if (ϕ_ν) converges to 0 in 𝒟_K(Ω).
As a result, a complex-valued functional T on 𝒟(Ω) will be said to be continuous for the topology of 𝒟(Ω) if and only if, for any given compact K in Ω, its restriction to 𝒟_K(Ω) is continuous for the topology of 𝒟_K(Ω), i.e. maps convergent sequences in 𝒟_K(Ω) to convergent sequences in ℂ.
This property of 𝒟(Ω), i.e. having a non-metrizable topology which is the inductive limit of metrizable topologies in its subspaces 𝒟_K(Ω), conditions the whole structure of distribution theory and dictates that of many of its proofs.
A distribution T on Ω is a linear form over 𝒟(Ω), i.e. a map T : ϕ ↦ ⟨T, ϕ⟩ which associates linearly a complex number ⟨T, ϕ⟩ to any ϕ ∈ 𝒟(Ω), and which is continuous for the topology of that space. In the terminology of Section 1.3.2.2.6.2, T is an element of 𝒟′(Ω), the topological dual of 𝒟(Ω).
Continuity over 𝒟(Ω) is equivalent to continuity over 𝒟_K(Ω) for all compact K contained in Ω, and hence to the condition that for any sequence (ϕ_ν) in 𝒟(Ω) such that (i) Supp ϕ_ν is contained in some compact K independent of ν, and (ii) the sequences (D^p ϕ_ν) converge uniformly to 0 on K for all multi-indices p, the sequence of complex numbers ⟨T, ϕ_ν⟩ converges to 0 in ℂ.
If the continuity of a distribution T requires (ii) for |p| ≤ m only, T may be defined over 𝒟^{(m)}(Ω) and thus T ∈ 𝒟′^{(m)}(Ω); T is said to be a distribution of finite order m. In particular, for m = 0, 𝒟^{(0)}(Ω) is the space of continuous functions with compact support, and a distribution T ∈ 𝒟′^{(0)}(Ω) is a (Radon) measure as used in the theory of integration. Thus measures are particular cases of distributions.
Generally speaking, the larger a space of test functions, the smaller its topological dual: m < m′ ⇒ 𝒟′^{(m)}(Ω) ⊂ 𝒟′^{(m′)}(Ω) ⊂ 𝒟′(Ω). This clearly results from the observation that if the ϕ's are allowed to be less regular, then less wildness can be accommodated in T if the continuity of the map ϕ ↦ ⟨T, ϕ⟩ with respect to ϕ is to be preserved.
Let f be a complex-valued function over Ω such that ∫_K |f(x)| dⁿx exists for any given compact K in Ω; f is then called locally integrable.
The linear mapping from 𝒟(Ω) to ℂ defined by ϕ ↦ ∫_Ω f(x)ϕ(x) dⁿx may then be shown to be continuous over 𝒟(Ω). It thus defines a distribution T_f: ⟨T_f, ϕ⟩ = ∫_Ω f(x)ϕ(x) dⁿx. As the continuity of T_f only requires that ϕ ∈ 𝒟^{(0)}(Ω), T_f is actually a Radon measure.
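Local integrability is all that is needed for the pairing to be finite, even when f itself is unbounded; a numerical sketch (our discretization, using the midpoint rule to stay clear of the singularity):

```python
import numpy as np

# A locally integrable but unbounded f, here f(x) = |x|^{-1/2}, still yields
# finite pairings <T_f, phi> = integral of f*phi.  Midpoint rule on (0, 1),
# doubled for the even integrand; with phi = 1 the exact value is
# 2 * integral_0^1 x^{-1/2} dx = 4.
dx = 1e-6
xs = np.arange(dx / 2, 1.0, dx)               # midpoints, avoiding x = 0

def pair(phi):
    f = 1.0 / np.sqrt(xs)
    return float(2.0 * np.sum(f * phi(xs)) * dx)

phi = lambda x: np.ones_like(x)               # phi = 1 on the region of interest
val = pair(phi)
finite = np.isfinite(val) and abs(val - 4.0) < 1e-2
```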
It can be shown that two locally integrable functions f and g define the same distribution, i.e. ⟨T_f, ϕ⟩ = ⟨T_g, ϕ⟩ for all ϕ ∈ 𝒟(Ω), if and only if they are equal almost everywhere. The classes of locally integrable functions modulo this equivalence form a vector space denoted L¹_loc(Ω); each element of L¹_loc(Ω) may therefore be identified with the distribution T_f defined by any one of its representatives f.
A distribution T ∈ 𝒟′(Ω) is said to vanish on an open subset ω of Ω if it vanishes on all functions in 𝒟(ω), i.e. if ⟨T, ϕ⟩ = 0 whenever ϕ ∈ 𝒟(ω).
The support of a distribution T, denoted Supp T, is then defined as the complement of the settheoretic union of those open subsets ω on which T vanishes; or equivalently as the smallest closed subset of Ω outside which T vanishes.
When T = T_f for f ∈ L¹_loc(Ω), then Supp T_f = Supp f, so that the two notions coincide. Clearly, if Supp T and Supp ϕ are disjoint subsets of Ω, then ⟨T, ϕ⟩ = 0.
It can be shown that any distribution T ∈ 𝒟′ with compact support may be extended from 𝒟 to ℰ while remaining continuous, so that T ∈ ℰ′; and that conversely, if T ∈ ℰ′, then its restriction to 𝒟 is a distribution with compact support. Thus, the topological dual ℰ′ of ℰ consists of those distributions in 𝒟′ which have compact support. This is intuitively clear since, if the condition of having compact support is fulfilled by T, it need no longer be required of ϕ, which may then roam through ℰ rather than 𝒟.
A sequence (T_j) of distributions will be said to converge in 𝒟′ to a distribution T as j → ∞ if, for any given ϕ ∈ 𝒟, the sequence of complex numbers (⟨T_j, ϕ⟩) converges in ℂ to the complex number ⟨T, ϕ⟩.
A series Σ_j T_j of distributions will be said to converge in 𝒟′ and to have distribution S as its sum if the sequence of partial sums S_k = Σ_{j=0}^{k} T_j converges to S.
These definitions of convergence in 𝒟′ assume that the limits T and S are known in advance, and are distributions. This raises the question of the completeness of 𝒟′: if a sequence (T_j) in 𝒟′ is such that the sequence (⟨T_j, ϕ⟩) has a limit in ℂ for all ϕ ∈ 𝒟, does the map ϕ ↦ lim_{j→∞} ⟨T_j, ϕ⟩ define a distribution T ∈ 𝒟′? In other words, does the limiting process preserve continuity with respect to ϕ? It is a remarkable theorem that, because of the strong topology on 𝒟, this is actually the case. An analogous statement holds for series. This notion of convergence does not coincide with any of the classical notions used for ordinary functions: for example, the sequence (f_ν) with f_ν(x) = cos νx converges to 0 in 𝒟′(ℝ), but fails to do so by any of the standard criteria.
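This kind of weak convergence without classical convergence can be seen numerically with the rapidly oscillating functions f_ν(x) = cos νx (our choice of example and discretization): |f_ν| = 1 everywhere, yet the pairings with a smooth, rapidly decaying ϕ cancel out.

```python
import numpy as np

# f_nu(x) = cos(nu x) does not converge to 0 pointwise or in any L^p norm,
# but the pairings with a fixed smooth phi tend to 0 by cancellation.
dx = 1e-4
xs = np.arange(-5, 5, dx)
phi = np.exp(-xs**2)

pairings = [abs(float(np.sum(np.cos(nu * xs) * phi) * dx))
            for nu in (1.0, 10.0, 100.0)]
```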
An example of convergent sequences of distributions is provided by sequences which converge to δ. If (f_ν) is a sequence of locally summable functions on ℝⁿ such that (i) ∫_{‖x‖<b} f_ν(x) dⁿx → 1 as ν → ∞ for all b > 0; (ii) ∫_{a≤‖x‖≤1/a} |f_ν(x)| dⁿx → 0 as ν → ∞ for all 0 < a < 1; and (iii) there exist d > 0 and M > 0 such that ∫_{‖x‖<d} |f_ν(x)| dⁿx < M for all ν;
then the sequence (T_{f_ν}) of distributions converges to δ in 𝒟′(ℝⁿ).
As a general rule, the definitions are chosen so that the operations coincide with those on functions whenever a distribution is associated to a function.
Most definitions consist in transferring to a distribution T an operation which is well defined on ϕ ∈ 𝒟 by `transposing' it in the duality product ⟨T, ϕ⟩; this procedure will map T to a new distribution provided the original operation maps 𝒟 continuously into itself.
The reverse operation from differentiation, namely calculating the `indefinite integral' of a distribution S, consists in finding a distribution T such that T′ = S.
For all χ ∈ 𝒟 such that χ = ψ′ with ψ ∈ 𝒟, we must have ⟨T, χ⟩ = −⟨S, ψ⟩. This condition defines T in a `hyperplane' of 𝒟, whose equation ⟨1, χ⟩ = ∫ χ(x) dx = 0 reflects the fact that ψ has compact support.
To specify T in the whole of 𝒟, it suffices to specify the value of ⟨T, ϕ₀⟩, where ϕ₀ ∈ 𝒟 is such that ⟨1, ϕ₀⟩ = 1: then any ϕ ∈ 𝒟 may be written uniquely as ϕ = λϕ₀ + ψ′, with λ = ⟨1, ϕ⟩, and T is defined by ⟨T, ϕ⟩ = λ⟨T, ϕ₀⟩ − ⟨S, ψ⟩. The freedom in the choice of ϕ₀ means that T is defined up to an additive constant.
The product αT of a distribution T on Ω by a function α over Ω will be defined by transposition: ⟨αT, ϕ⟩ = ⟨T, αϕ⟩. In order that αT be a distribution, the mapping ϕ ↦ αϕ must send 𝒟(Ω) continuously into itself; hence the multipliers α must be infinitely differentiable. The product of two general distributions cannot be defined. The need for a careful treatment of multipliers of distributions will become clear when it is later shown (Section 1.3.2.5.8) that the Fourier transformation turns convolutions into multiplications and vice versa.
If T is a distribution of order m, then α need only have continuous derivatives up to order m. For instance, δ is a distribution of order zero, and αδ = α(0)δ is a distribution provided α is continuous; this relation is of fundamental importance in the theory of sampling and of the properties of the Fourier transformation related to sampling (Sections 1.3.2.6.4, 1.3.2.6.6). More generally, D^pδ is a distribution of order |p|, and the following formula holds for all α with derivatives up to order |p|: α(D^pδ) = Σ_{q≤p} (−1)^{|p−q|} C(p, q) (D^{p−q}α)(0) D^qδ.
The derivative of a product is easily shown to be Dᵢ(αT) = (Dᵢα)T + α(DᵢT), and generally for any multi-index p, D^p(αT) = Σ_{q≤p} C(p, q) (D^qα)(D^{p−q}T).
Given a distribution S on ℝⁿ and an infinitely differentiable multiplier function α, the division problem consists in finding a distribution T such that αT = S.
If α never vanishes, is the unique answer. If , and if α has only isolated zeros of finite order, it can be reduced to a collection of cases where the multiplier is , for which the general solution can be shown to be of the form where U is a particular solution of the division problem and the are arbitrary constants.
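For the one-dimensional multiplier x^m mentioned above, the solution set can be written out explicitly (a standard result of distribution theory, sketched here in the chapter's notation):

```latex
x^{m}\,T = S \quad\Longrightarrow\quad T = U + \sum_{j=0}^{m-1} c_{j}\,\delta^{(j)},
```

since \(\langle x^{m}\delta^{(j)}, \varphi\rangle = (-1)^{j}(x^{m}\varphi)^{(j)}(0) = 0\) for \(j < m\), the derivatives \(\delta^{(j)}\) with \(j < m\) span the kernel of multiplication by \(x^{m}\).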
In dimension , the problem is much more difficult, but is of fundamental importance in the theory of linear partial differential equations, since the Fourier transformation turns the problem of solving these into a division problem for distributions [see Hörmander (1963)].
Let σ be a smooth nonsingular change of variables in , i.e. an infinitely differentiable mapping from an open subset Ω of to Ω′ in , whose Jacobian vanishes nowhere in Ω. By the implicit function theorem, the inverse mapping from Ω′ to Ω is well defined.
If f is a locally summable function on Ω, then the function defined by is a locally summable function on Ω′, and for any we may write: In terms of the associated distributions
This operation can be extended to an arbitrary distribution T by defining its image under coordinate transformation σ through which is well defined provided that σ is proper, i.e. that is compact whenever K is compact.
For instance, if is a translation by a vector a in , then ; is denoted by , and the translate of a distribution T is defined by
Let be a linear transformation defined by a nonsingular matrix A. Then , and This formula will be shown later (Sections 1.3.2.6.5, 1.3.4.2.1.1) to be the basis for the definition of the reciprocal lattice.
In particular, if , where I is the identity matrix, A is an inversion through a centre of symmetry at the origin, and denoting by we have: T is called an even distribution if , an odd distribution if .
If with , A is called a dilation and Writing symbolically δ as and as , we have: If and f is a function with isolated simple zeros , then in the same symbolic notation where each is analogous to a `Lorentz factor' at zero .
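In the symbolic notation just introduced, these two relations may be sketched as follows (the second assumes f has only isolated simple zeros \(x_j\) with \(f'(x_j) \neq 0\)):

```latex
\delta(a\,x) = \frac{1}{|a|}\,\delta(x),
\qquad
\delta\bigl(f(x)\bigr) = \sum_{j} \frac{1}{|f'(x_j)|}\,\delta(x - x_j),
```

the factors \(1/|f'(x_j)|\) being the `Lorentz factors' referred to above.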
The purpose of this construction is to extend Fubini's theorem to distributions. Following Section 1.3.2.2.5, we may define the tensor product as the vector space of finite linear combinations of functions of the form where and .
Let and denote the distributions associated to f and g, respectively, the subscripts x and y acting as mnemonics for and . It follows from Fubini's theorem (Section 1.3.2.2.5) that , and hence defines a distribution over ; the rearrangement of integral signs gives for all . In particular, if with , then
This construction can be extended to general distributions and . Given any test function , let denote the map ; let denote the map ; and define the two functions and . Then, by the lemma on differentiation under the sign of Section 1.3.2.3.9.1, , and there exists a unique distribution such that is called the tensor product of S and T.
With the mnemonic introduced above, this definition reads identically to that given above for distributions associated to locally integrable functions:
The tensor product of distributions is associative: Derivatives may be calculated by The support of a tensor product is the Cartesian product of the supports of the two factors.
The convolution of two functions f and g on is defined by whenever the integral exists. This is the case when f and g are both in ; then is also in . Let S, T and W denote the distributions associated to f, g and respectively: a change of variable immediately shows that for any , Introducing the map σ from to defined by , the latter expression may be written: (where denotes the composition of mappings) or by a slight abuse of notation:
A difficulty arises in extending this definition to general distributions S and T because the mapping σ is not proper: if K is compact in , then is a cylinder with base K and generator the `second bisector' in . However, is defined whenever the intersection between Supp and is compact.
We may therefore define the convolution of two distributions S and T on by whenever the following support condition is fulfilled:
`the set is compact in for all K compact in '.
The latter condition is met, in particular, if S or T has compact support. The support of is easily seen to be contained in the closure of the vector sum
Convolution by a fixed distribution S is a continuous operation for the topology on : it maps convergent sequences to convergent sequences . Convolution is commutative: .
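These algebraic properties have a straightforward discrete analogue which can be checked numerically; the following sketch (an illustration added here, not part of the original formalism) uses NumPy's linear convolution, with the unit impulse playing the role of δ:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(16)
g = rng.standard_normal(16)

# Commutativity: f * g = g * f (discrete analogue via full linear convolution).
fg = np.convolve(f, g)
gf = np.convolve(g, f)

# The unit impulse (discrete delta) acts as the identity for convolution.
delta = np.zeros(16)
delta[0] = 1.0
f_delta = np.convolve(f, delta)[:16]
```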
The convolution of p distributions with supports can be defined by whenever the following generalized support condition:
`the set is compact in for all K compact in '
is satisfied. It is then associative. Interesting examples of associativity failure, which can be traced back to violations of the support condition, may be found in Bracewell (1986, pp. 436–437).
It follows from previous definitions that, for all distributions , the following identities hold:

The latter property is frequently used for the purpose of regularization: if T is a distribution, α an infinitely differentiable function, and at least one of the two has compact support, then is an infinitely differentiable ordinary function. Since sequences of such functions α can be constructed which have compact support and converge to δ, it follows that any distribution T can be obtained as the limit of infinitely differentiable functions . In topological jargon: is `everywhere dense' in . A standard function in which is often used for such proofs is defined as follows: put with (so that θ is in and is normalized), and put
Another related result, also proved by convolution, is the structure theorem: the restriction of a distribution to a bounded open set Ω in is a derivative of finite order of a continuous function.
Properties (i) to (iv) are the basis of the symbolic or operational calculus (see Carslaw & Jaeger, 1948; Van der Pol & Bremmer, 1955; Churchill, 1958; Erdélyi, 1962; Moore, 1971) for solving integrodifferential equations with constant coefficients by turning them into convolution equations, then using factorization methods for convolution algebras (Schwartz, 1965).
Given a complex-valued function f on ℝⁿ subject to suitable regularity conditions, its Fourier transform F[f] and Fourier cotransform F̄[f] are defined as follows: F[f](ξ) = ∫ f(x) exp(−2πiξ·x) dⁿx and F̄[f](ξ) = ∫ f(x) exp(+2πiξ·x) dⁿx, where ξ·x = Σᵢ ξᵢxᵢ is the ordinary scalar product. The terminology and sign conventions given above are the standard ones in mathematics; those used in crystallography are slightly different (see Section 1.3.4.2.1.1). These transforms enjoy a number of remarkable properties, whose natural settings entail different regularity assumptions on f: for instance, properties relating to convolution are best treated in L¹(ℝⁿ), while Parseval's theorem requires the Hilbert space structure of L²(ℝⁿ). After a brief review of these classical properties, the Fourier transformation will be examined in a space particularly well suited to accommodating the full range of its properties, which will later serve as a space of test functions to extend the Fourier transformation to distributions.
There exists an abundant literature on the `Fourier integral'. The books by Carslaw (1930), Wiener (1933), Titchmarsh (1948), Katznelson (1968), Sneddon (1951, 1972), and Dym & McKean (1972) are particularly recommended.
Both transformations and are obviously linear maps from to when these spaces are viewed as vector spaces over the field of complex numbers.
and turn translations into phase shifts:
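For finite sequences, the corresponding discrete property — a circular translation of the data multiplies the DFT by a phase factor — can be verified with NumPy (which uses the exp(−2πikn/N) kernel); this is an added numerical sketch:

```python
import numpy as np

N, s = 32, 5
rng = np.random.default_rng(1)
x = rng.standard_normal(N)

X = np.fft.fft(x)
# Circularly translate x by s samples: y[n] = x[(n - s) mod N].
y = np.roll(x, s)
Y = np.fft.fft(y)

# Translation by s produces the phase shift exp(-2 pi i k s / N).
k = np.arange(N)
phase = np.exp(-2j * np.pi * k * s / N)
```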
Under a general linear change of variable with nonsingular matrix A, the transform of is i.e. and similarly for . The matrix is called the contragredient of matrix A.
Under an affine change of coordinates with nonsingular matrix A, the transform of is given by with a similar result for , replacing −i by +i.
The kernels of the Fourier transformations and satisfy the following identities: As a result the transformations and themselves have the following `conjugate symmetry' properties [where the notation of Section 1.3.2.2.2 will be used]: Therefore,
Conjugate symmetry is the basis of Friedel's law (Section 1.3.4.2.1.4) in crystallography.
Another elementary property of is its naturality with respect to tensor products. Let and , and let denote the Fourier transformations in and , respectively. Then Furthermore, if , then as a function of x and as a function of y, and This is easily proved by using Fubini's theorem and the fact that , where . This property may be written:
If f and g are summable, their convolution exists and is summable, and With , so that and with Fubini's theorem, rearrangement of the double integral gives: and similarly Thus the Fourier transform and cotransform turn convolution into multiplication.
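The discrete counterpart — the DFT turns circular convolution into pointwise multiplication — can be sketched numerically as follows (an added illustration, assuming NumPy's unnormalized DFT convention):

```python
import numpy as np

N = 64
rng = np.random.default_rng(2)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution computed directly from its definition ...
h_direct = np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                     for n in range(N)])

# ... and via the transform: pointwise multiplication of the DFTs.
h_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
```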
In general, and are not summable, and hence cannot be further transformed; however, as they are essentially bounded, their products with the Gaussians are summable for all , and it can be shown that where the limit is taken in the topology of the norm . Thus and are (in a sense) mutually inverse, which justifies the common practice of calling the `inverse Fourier transformation'.
If , i.e. is summable, then and exist and are continuous and essentially bounded: In fact one has the much stronger property, whose statement constitutes the Riemann–Lebesgue lemma, that and both tend to zero as .
Let us now suppose that and that is differentiable with . Integration by parts yields Since f′ is summable, f has a limit when , and this limit must be 0 since f is summable. Therefore with the bound so that decreases faster than .
This result can be easily extended to several dimensions and to any multiindex m: if f is summable and has continuous summable partial derivatives up to order , then and
Similar results hold for , with replaced by . Thus, the more differentiable f is, with summable derivatives, the faster and decrease at infinity.
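This duality is the basis of `spectral differentiation': a smooth periodic function may be differentiated by multiplying its transform by 2πiξ. A numerical sketch (added here) for f(x) = sin x sampled over one period [0, 2π):

```python
import numpy as np

N = 128
x = 2 * np.pi * np.arange(N) / N
f = np.sin(x)

# Differentiation becomes multiplication of the transform by 2*pi*i*xi.
F = np.fft.fft(f)
xi = np.fft.fftfreq(N, d=2 * np.pi / N)   # frequencies in cycles per unit length
df = np.fft.ifft(2j * np.pi * xi * F).real  # approximates cos(x)
```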
The property of turning differentiation into multiplication by a monomial has many important applications in crystallography, for instance differential syntheses (Sections 1.3.4.2.1.9, 1.3.4.4.7.2, 1.3.4.4.7.5) and momentgenerating functions [Section 1.3.4.5.2.1(c )].
Conversely, assume that f is summable on and that f decreases fast enough at infinity for also to be summable, for some multiindex m. Then the integral defining may be subjected to the differential operator , still yielding a convergent integral: therefore exists, and with the bound
Similar results hold for , with replaced by . Thus, the faster f decreases at infinity, the more and are differentiable, with bounded derivatives. This property is the converse of that described in Section 1.3.2.4.2.8, and their combination is fundamental in the definition of the function space in Section 1.3.2.4.4.1, of tempered distributions in Section 1.3.2.5, and in the extension of the Fourier transformation to them.
An extreme case of the last instance occurs when f has compact support: then and are so regular that they may be analytically continued from to where they are entire functions, i.e. have no singularities at finite distance (Paley & Wiener, 1934). This is easily seen for : giving vector a vector of imaginary parts leads to where the latter transform always exists since is summable with respect to x for all values of η. This analytic continuation forms the basis of the saddlepoint method in probability theory [Section 1.3.4.5.2.1(f)] and leads to the use of maximumentropy distributions in the statistical theory of direct phase determination [Section 1.3.4.5.2.2(e)].
By Liouville's theorem, an entire function in cannot vanish identically on the complement of a compact subset of without vanishing everywhere: therefore cannot have compact support if f has, and hence is not stable by Fourier transformation.
Let f belong to , i.e. be such that
and , equality being taken as `almost everywhere' equality. This again leads to calling the `inverse Fourier transformation' rather than the Fourier cotransformation.
and preserve the norm: This property, which may be written in terms of the inner product (,) in as implies that and are unitary transformations of into itself, i.e. infinitedimensional `rotations'.
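The discrete form of Parseval's identity, with the factor 1/N reflecting NumPy's unnormalized DFT convention, can be checked directly (an added numerical illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
X = np.fft.fft(x)

# Norm preservation: sum |x|^2 = (1/N) sum |X|^2 for the unnormalized DFT.
lhs = np.sum(np.abs(x) ** 2)
rhs = np.sum(np.abs(X) ** 2) / len(x)
```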
Some light can be shed on the geometric structure of these rotations by the following simple considerations. Note that so that (and similarly ) is the identity map. Any eigenvalue of or is therefore a fourth root of unity, i.e. ±1 or , and splits into an orthogonal direct sum where (respectively ) acts in each subspace by multiplication by . Orthonormal bases for these subspaces can be constructed from Hermite functions (cf. Section 1.3.2.4.4.2) This method was used by Wiener (1933, pp. 51–71).
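The fourth-power property can be observed on the DFT as well: four applications of the unitary transform x → fft(x)/√N give the identity, so with NumPy's unnormalized convention a factor N² appears instead (illustration added here):

```python
import numpy as np

N = 16
rng = np.random.default_rng(4)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

fft = np.fft.fft
# Four successive unnormalized transforms: each pair contributes a factor N
# together with an inversion through the origin, so the result is N**2 * x.
x4 = fft(fft(fft(fft(x))))
```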
In , the convolution theorem (when applicable) and the Parseval/Plancherel theorem are not independent. Suppose that f, g, and are all in (without questioning whether these properties are independent). Then may be written in terms of the inner product in as follows: i.e.
Invoking the isometry property, we may rewrite the righthand side as so that the initial identity yields the convolution theorem.
To obtain the converse implication, note that where conjugate symmetry (Section 1.3.2.4.2.2) has been used.
These relations have an important application in the calculation by Fourier transform methods of the derivatives used in the refinement of macromolecular structures (Section 1.3.4.4.7).
The duality established in Sections 1.3.2.4.2.8 and 1.3.2.4.2.9 between the local differentiability of a function and the rate of decrease at infinity of its Fourier transform prompts one to consider the space of functions f on which are infinitely differentiable and all of whose derivatives are rapidly decreasing, so that for all multiindices k and p The product of by any polynomial over is still in ( is an algebra over the ring of polynomials). Furthermore, is invariant under translations and differentiation.
If , then its transforms and are
hence and are in : is invariant under and .
Since and , all properties of and already encountered above are enjoyed by functions of , with all restrictions on differentiability and/or integrability lifted. For instance, given two functions f and g in , then both f g and are in (which was not the case with nor with ) so that the reciprocity theorem inherited from allows one to state the reverse of the convolution theorem first established in :
Gaussian functions are particularly important elements of . In dimension 1, a well known contour integration (Schwartz, 1965, p. 184) yields which shows that the `standard Gaussian' is invariant under and . By a tensor product construction, it follows that the same is true of the standard Gaussian in dimension n: In other words, G is an eigenfunction of and for eigenvalue 1 (Section 1.3.2.4.3.4).
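The invariance of the standard Gaussian can be checked by direct quadrature of the defining integral, using the exp(−2πixξ) kernel (a numerical sketch added here; the grid extent and spacing are arbitrary choices):

```python
import numpy as np

# Riemann-sum approximation of F[G](xi) = integral of exp(-pi x^2) exp(-2 pi i x xi) dx.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
G = np.exp(-np.pi * x ** 2)

xi = np.array([0.0, 0.5, 1.0, 1.5])
FG = np.array([np.sum(G * np.exp(-2j * np.pi * x * s)) * dx for s in xi])
# FG should reproduce exp(-pi xi^2): G is its own transform.
```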
A complete system of eigenfunctions may be constructed as follows. In dimension 1, consider the family of functions where D denotes the differentiation operator. The first two members of the family are such that , as shown above, and hence We may thus take as an induction hypothesis that The identity may be written and the two differentiation theorems give: Combination of this with the induction hypothesis yields thus proving that is an eigenfunction of for eigenvalue for all . The same proof holds for , with eigenvalue . If these eigenfunctions are normalized as then it can be shown that the collection of Hermite functions constitutes an orthonormal basis of such that is an eigenfunction of (respectively ) for eigenvalue (respectively ).
In dimension n, the same construction can be extended by tensor product to yield the multivariate Hermite functions (where is a multiindex). These constitute an orthonormal basis of , with an eigenfunction of (respectively ) for eigenvalue (respectively ). Thus the subspaces of Section 1.3.2.4.3.4 are spanned by those with .
General multivariate Gaussians are usually encountered in the nonstandard form where A is a symmetric positivedefinite matrix. Diagonalizing A as with the identity matrix, and putting , we may write i.e. hence (by Section 1.3.2.4.2.3) i.e. i.e. finally
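In the convention of this chapter (kernel exp(−2πiξ·x)), the final result referred to above takes the form (sketched here; A symmetric positive definite):

```latex
G_{A}(\mathbf{x}) = \exp\!\left(-\pi\,\mathbf{x}^{T} A\,\mathbf{x}\right)
\quad\Longrightarrow\quad
\mathcal{F}[G_{A}](\boldsymbol{\xi}) = (\det A)^{-1/2}\,
\exp\!\left(-\pi\,\boldsymbol{\xi}^{T} A^{-1}\,\boldsymbol{\xi}\right).
```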
This result is widely used in crystallography, e.g. to calculate form factors for anisotropic atoms (Section 1.3.4.2.2.6) and to obtain transforms of derivatives of Gaussian atomic densities (Section 1.3.4.4.7.10).
The result just obtained, which also holds for F̄, shows that the `peakier' the Gaussian, the `broader' its transform. This is a general property of the Fourier transformation, expressed in dimension 1 by the Heisenberg inequality (Weyl, 1931): (∫ x²|f(x)|² dx)(∫ ξ²|F[f](ξ)|² dξ) ≥ (1/16π²)(∫ |f(x)|² dx)², where, by a beautiful theorem of Hardy (1933), equality can only be attained for f Gaussian. Hardy's theorem is even stronger: if both f and behave at infinity as constant multiples of G, then each of them is everywhere a constant multiple of G; if both f and behave at infinity as constant multiples of , then each of them is a finite linear combination of Hermite functions. Hardy's theorem is invoked in Section 1.3.4.4.5 to derive the optimal procedure for spreading atoms on a sampling grid in order to obtain the most accurate structure factors.
The search for optimal compromises between the confinement of f to a compact domain in xspace and of to a compact domain in ξspace leads to consideration of prolate spheroidal wavefunctions (Pollack & Slepian, 1961; Landau & Pollack, 1961, 1962).
A final formal property of the Fourier transform, best established in , is its symmetry: if f and g are in , then by Fubini's theorem
This possibility of `transposing' (and ) from the left to the right of the duality bracket will be used in Section 1.3.2.5.4 to extend the Fourier transformation to distributions.
Other ways of writing Fourier transforms in exist besides the one used here. All have the form where h is real positive and ω real nonzero, with the reciprocity formula written: with k real positive. The consistency condition between h, k and ω is
It should be noted that conventions (ii) and (iii) introduce numerical factors of 2π in convolution and Parseval formulae, while (ii) breaks the symmetry between and .
It was found in Section 1.3.2.4.2 that the usual space of test functions is not invariant under and . By contrast, the space of infinitely differentiable rapidly decreasing functions is invariant under and , and furthermore transposition formulae such as hold for all . It is precisely this type of transposition which was used successfully in Sections 1.3.2.3.9.1 and 1.3.2.3.9.3 to define the derivatives of distributions and their products with smooth functions.
This suggests using instead of as a space of test functions ϕ, and defining the Fourier transform of a distribution T by whenever T is capable of being extended from to while remaining continuous. It is this latter proviso which will be subsumed under the adjective `tempered'. As was the case with the construction of , it is the definition of a sufficiently strong topology (i.e. notion of convergence) in which will play a key role in transferring to the elements of its topological dual (called tempered distributions) all the properties of the Fourier transformation.
Besides the general references to distribution theory mentioned in Section 1.3.2.3.1 the reader may consult the books by Zemanian (1965, 1968). Lavoine (1963) contains tables of Fourier transforms of distributions.
A notion of convergence has to be introduced in in order to be able to define and test the continuity of linear functionals on it.
A sequence of functions in will be said to converge to 0 if, for any given multiindices k and p, the sequence tends to 0 uniformly on .
It can be shown that is dense in . Translation is continuous for this topology. For any linear differential operator and any polynomial over , implies in the topology of . Therefore, differentiation and multiplication by polynomials are continuous for the topology on .
The Fourier transformations and are also continuous for the topology of . Indeed, let converge to 0 for the topology on . Then, by Section 1.3.2.4.2, The righthand side tends to 0 as by definition of convergence in , hence uniformly, so that in as . The same proof applies to .
A distribution is said to be tempered if it can be extended into a continuous linear functional on .
If is the topological dual of , and if , then its restriction to is a tempered distribution; conversely, if is tempered, then its extension to is unique (because is dense in ), hence it defines an element S of . We may therefore identify and the space of tempered distributions.
A distribution with compact support is tempered, i.e. . By transposition of the corresponding properties of , it is readily established that the derivative, translate or product by a polynomial of a tempered distribution is still a tempered distribution.
These inclusion relations may be summarized as follows: since contains but is contained in , the reverse inclusions hold for the topological duals, and hence contains but is contained in .
A locally summable function f on will be said to be of polynomial growth if can be majorized by a polynomial in as . It is easily shown that such a function f defines a tempered distribution via In particular, polynomials over define tempered distributions, and so do functions in . The latter remark, together with the transposition identity (Section 1.3.2.4.4), invites the extension of and from to .
The Fourier transform and cotransform of a tempered distribution T are defined by for all test functions . Both and are themselves tempered distributions, since the maps and are both linear and continuous for the topology of . In the same way that x and ξ have been used consistently as arguments for ϕ and , respectively, the notation and will be used to indicate which variables are involved.
When T is a distribution with compact support, its Fourier transform may be written since the function is in while . It can be shown, as in Section 1.3.2.4.2, to be analytically continuable into an entire function over .
The duality between differentiation and multiplication by a monomial extends from to by transposition: Analogous formulae hold for , with i replaced by −i.
The formulae expressing the duality between translation and phase shift, e.g. between a linear change of variable and its contragredient, e.g. are obtained similarly by transposition from the corresponding identities in . They give a transposition formula for an affine change of variables with nonsingular matrix A: with a similar result for , replacing −i by +i.
Conjugate symmetry is obtained similarly: with the same identities for .
The tensor product property also transposes to tempered distributions: if ,
Since δ has compact support, It is instructive to show that conversely without invoking the reciprocity theorem. Since for all , it follows from Section 1.3.2.3.9.4 that ; the constant c can be determined by using the invariance of the standard Gaussian G established in Section 1.3.2.4.3: hence . Thus, .
The basic properties above then read (using multiindices to denote differentiation): with analogous relations for , i becoming −i. Thus derivatives of δ are mapped to monomials (and vice versa), while translates of δ are mapped to `phase factors' (and vice versa).
The previous results now allow a selfcontained and rigorous proof of the reciprocity theorem between and to be given, whereas in traditional settings (i.e. in and ) the implicit handling of δ through a limiting process is always the sticking point.
Reciprocity is first established in as follows: and similarly
The reciprocity theorem is then proved in by transposition: Thus the Fourier cotransformation in may legitimately be called the `inverse Fourier transformation'.
The method of Section 1.3.2.4.3 may then be used to show that and both have period 4 in .
Multiplier functions for tempered distributions must be infinitely differentiable, as for ordinary distributions; furthermore, they must grow sufficiently slowly as to ensure that for all and that the map is continuous for the topology of . This leads to choosing for multipliers the subspace consisting of functions of polynomial growth. It can be shown that if f is in , then the associated distribution is in (i.e. is a tempered distribution); and that conversely if T is in is in for all .
Corresponding restrictions must be imposed to define the space of those distributions T whose convolution with a tempered distribution S is still a tempered distribution: T must be such that, for all is in ; and such that the map be continuous for the topology of . This implies that S is `rapidly decreasing'. It can be shown that if f is in , then the associated distribution is in ; and that conversely if T is in is in for all .
The two spaces and are mapped into each other by the Fourier transformation and the convolution theorem takes the form The same identities hold for . Taken together with the reciprocity theorem, these show that and establish mutually inverse isomorphisms between and , and exchange multiplication for convolution in .
It may be noticed that most of the basic properties of and may be deduced from this theorem and from the properties of δ. Differentiation operators and translation operators are convolutions with and ; they are turned, respectively, into multiplication by monomials (the transforms of ) or by phase factors (the transforms of ).
Another consequence of the convolution theorem is the duality established by the Fourier transformation between sections and projections of a function and its transform. For instance, in , the projection of on the x, y plane along the z axis may be written its Fourier transform is then which is the section of by the plane , orthogonal to the z axis used for projection. There are numerous applications of this property in crystallography (Section 1.3.4.2.1.8) and in fibre diffraction (Section 1.3.4.5.1.3).
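The projection–section duality has an immediate discrete analogue: summing an array along one axis and transforming the result gives the zero-frequency section of the full transform. A NumPy sketch (added illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.standard_normal((32, 32))

# Project along the first axis (the role of the z axis above) ...
proj = img.sum(axis=0)
F_proj = np.fft.fft(proj)

# ... and compare with the central section of the full 2D transform.
section = np.fft.fft2(img)[0, :]
```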
The special properties of in the space of squareintegrable functions , such as Parseval's identity, can be accommodated within distribution theory: if , then is a tempered distribution in (the map being continuous) and it can be shown that is of the form , where is the Fourier transform of u in . By Plancherel's theorem, .
This embedding of into can be used to derive the convolution theorem for . If u and v are in , then can be shown to be a bounded continuous function; thus is not in , but it is in , so that its Fourier transform is a distribution, and
Spaces of tempered distributions related to can be defined as follows. For any real s, define the Sobolev space to consist of all tempered distributions such that
These spaces play a fundamental role in the theory of partial differential equations, and in the mathematical theory of tomographic reconstruction – a subject not unrelated to the crystallographic phase problem (Natterer, 1986).
Let be the subset of consisting of those points with (signed) integer coordinates; it is an ndimensional lattice, i.e. a free Abelian group on n generators. A particularly simple set of n generators is given by the standard basis of , and hence will be called the standard lattice in . Any other `nonstandard' ndimensional lattice Λ in is the image of this standard lattice by a general linear transformation.
If we identify any two points in whose coordinates are congruent modulo , i.e. differ by a vector in , we obtain the standard ntorus . The latter may be viewed as , i.e. as the Cartesian product of n circles. The same identification may be carried out modulo a nonstandard lattice Λ, yielding a nonstandard ntorus . The correspondence to crystallographic terminology is that `standard' coordinates over the standard 3torus are called `fractional' coordinates over the unit cell; while Cartesian coordinates, e.g. in ångströms, constitute a set of nonstandard coordinates.
Finally, we will denote by I the unit cube and by the subset
A distribution is called periodic with period lattice (or periodic) if for all (in crystallography the period lattice is the direct lattice).
Given a distribution with compact support , then is a periodic distribution. Note that we may write , where consists of Dirac δ's at all nodes of the period lattice .
Conversely, any periodic distribution T may be written as for some . To retrieve such a `motif' from T, a function ψ will be constructed in such a way that (hence has compact support) and ; then . Indicator functions (Section 1.3.2.2) such as or cannot be used directly, since they are discontinuous; but regularized versions of them may be constructed by convolution (see Section 1.3.2.3.9.7) as , with and η such that on and outside . Then the function has the desired property. The sum in the denominator contains at most 2^n nonzero terms at any given point x and acts as a smoothly varying `multiplicity correction'.
Throughout this section, `periodic' will mean `periodic'.
Let , and let [s] denote the largest integer ≤ s. For , let be the unique vector with . If , then if and only if . The image of the map is thus modulo , or .
If f is a periodic function over , then implies ; we may thus define a function over by putting for any such that . Conversely, if is a function over , then we may define a function f over by putting , and f will be periodic. Periodic functions over may thus be identified with functions over , and this identification preserves the notions of convergence, local summability and differentiability.
Given , we may define since the sum only contains finitely many nonzero terms; ϕ is periodic, and . Conversely, if we may define periodic by , and by putting with ψ constructed as above.
By transposition, a distribution defines a unique periodic distribution by ; conversely, periodic defines uniquely by .
We may therefore identify periodic distributions over with distributions over . We will, however, use mostly the former presentation, as it is more closely related to the crystallographer's perception of periodicity (see Section 1.3.4.1).
The content of this section is perhaps the central result in the relation between Fourier theory and crystallography (Section 1.3.4.2.1.1).
Let with r defined as in Section 1.3.2.6.2. Then , hence , so that : periodic distributions are tempered, hence have a Fourier transform. The convolution theorem (Section 1.3.2.5.8) is applicable, giving: and similarly for .
It is readily shown that Q is tempered and periodic, so that , while the periodicity of r implies that Since the first factors have single isolated zeros at in , (see Section 1.3.2.3.9.4) and hence by periodicity ; convoluting with shows that . Thus we have the fundamental result: so that i.e., according to Section 1.3.2.3.9.3,
The righthand side is a weighted lattice distribution, whose nodes are weighted by the sample values of the transform of the motif at those nodes. Since , the latter values may be written By the structure theorem for distributions with compact support (Section 1.3.2.3.9.7), is a derivative of finite order of a continuous function; therefore, from Section 1.3.2.4.2.8 and Section 1.3.2.5.8, grows at most polynomially as (see also Section 1.3.2.6.10.3 about this property). Conversely, let be a weighted lattice distribution such that the weights grow at most polynomially as . Then W is a tempered distribution, whose Fourier cotransform is periodic. If T is now written as for some , then by the reciprocity theorem Although the choice of is not unique, and need not yield back the same motif as may have been used to build T initially, different choices of will lead to the same coefficients because of the periodicity of .
The Fourier transformation thus establishes a duality between periodic distributions and weighted lattice distributions. The pair of relations are referred to as the Fourier analysis and the Fourier synthesis of T, respectively (there is a discrepancy between this terminology and the crystallographic one, see Section 1.3.4.2.1.1). In other words, any periodic distribution may be represented by a Fourier series (ii), whose coefficients are calculated by (i). The convergence of (ii) towards T in will be investigated later (Section 1.3.2.6.10).
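The analysis/synthesis pair can be illustrated numerically for a trigonometric polynomial, whose Fourier coefficients are recovered exactly by the DFT (an added sketch; the sample function is an arbitrary choice):

```python
import numpy as np

N = 64
x = np.arange(N) / N                      # sample points in one period [0, 1)
f = 1.0 + np.cos(2 * np.pi * x) + 0.5 * np.sin(4 * np.pi * x)

# Fourier analysis: c_k = integral over one period of f(x) exp(-2 pi i k x) dx,
# computed exactly here by the DFT since f is band-limited below N/2.
c = np.fft.fft(f) / N

# Fourier synthesis: the series with these coefficients reproduces the samples.
f_back = np.fft.ifft(c * N).real
```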
Let Λ denote the nonstandard lattice consisting of all vectors of the form , where the are rational integers and are n linearly independent vectors in . Let R be the corresponding lattice distribution: .
Let A be the nonsingular matrix whose successive columns are the coordinates of vectors in the standard basis of ; A will be called the period matrix of Λ, and the mapping will be denoted by A. According to Section 1.3.2.3.9.5 we have for any , and hence . By Fourier transformation, according to Section 1.3.2.5.5, which we write: with
is a lattice distribution: associated with the reciprocal lattice whose basis vectors are the columns of . Since the latter matrix is equal to the adjoint matrix (i.e. the matrix of cofactors) of A divided by det A, the components of the reciprocal basis vectors can be written down explicitly (see Section 1.3.4.2.1.1 for the crystallographic case ).
A distribution T will be called Λ-periodic if for all ; as previously, T may be written for some motif distribution with compact support. By Fourier transformation, so that is a weighted reciprocal-lattice distribution, the weight attached to node being times the value of the Fourier transform of the motif .
This result may be further simplified if T and its motif are referred to the standard period lattice by defining t and so that , , . Then hence so that in nonstandard coordinates, while in standard coordinates.
The reciprocity theorem may then be written: in nonstandard coordinates, or equivalently: in standard coordinates. It gives an n-dimensional Fourier series representation for any periodic distribution over . The convergence of such series in will be examined in Section 1.3.2.6.10.
Let be a distribution with compact support (the `motif'). Its Fourier transform is analytic (Section 1.3.2.5.4) and may thus be used as a multiplier.
We may rephrase the preceding results as follows:
Thus the Fourier transformation establishes a duality between the periodization of a distribution by a period lattice Λ and the sampling of its transform at the nodes of lattice reciprocal to Λ. This is a particular instance of the convolution theorem of Section 1.3.2.5.8.
At this point it is traditional to break the symmetry between and which distribution theory has enabled us to preserve even in the presence of periodicity, and to perform two distinct identifications:
Let , so that . Let R be the lattice distribution associated to lattice Λ, with period matrix A, and let be associated to the reciprocal lattice . Then we may write: i.e.
This identity, which also holds for , is called the Poisson summation formula. Its usefulness follows from the fact that the speeds of decrease at infinity of ϕ and are inversely related (Section 1.3.2.4.4.3), so that if one of the series (say, the left-hand side) is slowly convergent, the other (say, the right-hand side) will be rapidly convergent. This procedure has been used by Ewald (1921) [see also Bertaut (1952), Born & Huang (1954)] to evaluate lattice sums (Madelung constants) involved in the calculation of the internal electrostatic energy of crystals (see Chapter 3.4 in this volume on convergence acceleration techniques for crystallographic lattice sums).
When ϕ is a multivariate Gaussian then and Poisson's summation formula for a lattice with period matrix A reads: or equivalently with
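The one-dimensional Gaussian case can be checked numerically. The sketch below (a pure-Python illustration, not part of the original text) takes φ(x) = exp(−πax²), whose transform under the e(−xξ) convention is a^(−1/2) exp(−πξ²/a); the parameter values and truncation length are arbitrary choices.

```python
import math

def gaussian(x, a=1.0):
    # phi(x) = exp(-pi * a * x^2)
    return math.exp(-math.pi * a * x * x)

def gaussian_hat(xi, a=1.0):
    # its Fourier transform: a^(-1/2) * exp(-pi * xi^2 / a)
    return math.exp(-math.pi * xi * xi / a) / math.sqrt(a)

def direct_sum(a, terms=50):
    # left-hand side of Poisson summation: phi summed over the integer lattice
    return sum(gaussian(n, a) for n in range(-terms, terms + 1))

def dual_sum(a, terms=50):
    # right-hand side: the transform summed over the reciprocal lattice
    return sum(gaussian_hat(m, a) for m in range(-terms, terms + 1))

# For a "wide" Gaussian (small a) the direct sum converges slowly while the
# dual sum needs only a few terms, and vice versa -- the inverse relation
# between the decay rates of phi and its transform.
for a in (0.1, 1.0, 10.0):
    assert abs(direct_sum(a) - dual_sum(a)) < 1e-9
```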
Let and be two Λ-periodic distributions, the motifs and having compact support. The convolution does not exist, because S and T do not satisfy the support condition (Section 1.3.2.3.9.7). However, the three distributions R, and do satisfy the generalized support condition, so that their convolution is defined; then, by associativity and commutativity:
By Fourier transformation and by the convolution theorem: Let , and be the sets of Fourier coefficients associated to S, T and , respectively. Identifying the coefficients of for yields the forward version of the convolution theorem for Fourier series:
The backward version of the theorem requires that T be infinitely differentiable. The distribution is then well defined and its Fourier coefficients are given by
Toeplitz forms were first investigated by Toeplitz (1907, 1910, 1911a). They occur in connection with the `trigonometric moment problem' (Shohat & Tamarkin, 1943; Akhiezer, 1965) and probability theory (Grenander, 1952) and play an important role in several direct approaches to the crystallographic phase problem [see Sections 1.3.4.2.1.10, 1.3.4.5.2.2(e)]. Many aspects of their theory and applications are presented in the book by Grenander & Szegö (1958).
Let be real-valued, so that its Fourier coefficients satisfy the relations . The Hermitian form in complex variables is called the nth Toeplitz form associated to f. It is a straightforward consequence of the convolution theorem and of Parseval's identity that may be written:
It was shown independently by Toeplitz (1911b), Carathéodory (1911) and Herglotz (1911) that a function is almost everywhere nonnegative if and only if the Toeplitz forms associated to f are positive semidefinite for all values of n.
This is equivalent to the infinite system of determinantal inequalities The are called Toeplitz determinants. Their application to the crystallographic phase problem is described in Section 1.3.4.2.1.10.
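The Toeplitz–Carathéodory–Herglotz criterion can be illustrated numerically. The sketch below (an illustration in Python, with f(x) = 2 + 2 cos 2πx as an arbitrary nonnegative test function) builds the Toeplitz matrices from numerically computed Fourier coefficients and checks that the leading determinants are nonnegative; the determinant routine is a plain complex Gaussian elimination.

```python
import cmath
import math

def fourier_coeff(f, m, samples=4096):
    # approximate c_m = integral_0^1 f(x) e(-m x) dx by a Riemann sum
    # (exact here, since f is a trigonometric polynomial of low degree)
    return sum(f(j / samples) * cmath.exp(-2j * math.pi * m * j / samples)
               for j in range(samples)) / samples

def toeplitz_det(c, n):
    # determinant of the (n+1) x (n+1) Toeplitz matrix T_n = (c_{mu-nu}),
    # via complex Gaussian elimination with partial pivoting
    a = [[c[mu - nu] for nu in range(n + 1)] for mu in range(n + 1)]
    det = 1.0 + 0j
    for i in range(n + 1):
        p = max(range(i, n + 1), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0j
        if p != i:
            a[i], a[p] = a[p], a[i]
            det = -det
        det *= a[i][i]
        for r in range(i + 1, n + 1):
            factor = a[r][i] / a[i][i]
            for col in range(i, n + 1):
                a[r][col] -= factor * a[i][col]
    return det

f = lambda x: 2 + 2 * math.cos(2 * math.pi * x)   # f >= 0 on the circle
c = {m: fourier_coeff(f, m) for m in range(-6, 7)}
# the Toeplitz determinants of a nonnegative f must all be >= 0; for this f
# the matrices are tridiagonal (2 on the diagonal, 1 off it), with D_n = n + 2
dets = [toeplitz_det(c, n).real for n in range(5)]
assert all(d > -1e-6 for d in dets)
```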
The eigenvalues of the Hermitian form are defined as the real roots of the characteristic equation . They will be denoted by
It is easily shown that if for all x, then for all n and all . As n → ∞ these bounds, and the distribution of the within these bounds, can be made more precise by introducing two new notions.

We may now state an important theorem of Szegö (1915, 1920). Let , and put , . If m and M are finite, then for any continuous function defined in the interval [m, M] we have In other words, the eigenvalues of the and the values of f on a regular subdivision of ]0, 1[ are equally distributed.
Further investigations into the spectra of Toeplitz matrices may be found in papers by Hartman & Wintner (1950, 1954), Kac et al. (1953), Widom (1965), and in the notes by Hirschman & Hughes (1977).
The investigation of the convergence of Fourier series and of more general trigonometric series has been the subject of intense study for over 150 years [see e.g. Zygmund (1976)]. It has been a constant source of new mathematical ideas and theories, being directly responsible for the birth of such fields as set theory, topology and functional analysis.
This section will briefly survey those aspects of the classical results in dimension 1 which are relevant to the practical use of Fourier series in crystallography. The books by Zygmund (1959), Tolstov (1962) and Katznelson (1968) are standard references in the field, and Dym & McKean (1972) is recommended as a stimulant.
The space consists of (equivalence classes of) complex-valued functions f on the circle which are summable, i.e. for which It is a convolution algebra: if f and g are in , then is in .
The mth Fourier coefficient of f, is bounded: , and by the Riemann–Lebesgue lemma as . By the convolution theorem, .
The pth partial sum of the Fourier series of f, may be written, by virtue of the convolution theorem, as , where is the Dirichlet kernel. Because comprises numerous slowly decaying oscillations, both positive and negative, may not converge towards f in a strong sense as . Indeed, spectacular pathologies are known to exist where the partial sums, examined pointwise, diverge everywhere (Zygmund, 1959, Chapter VIII). When f is piecewise continuous, but presents isolated jumps, convergence near these jumps is marred by the Gibbs phenomenon: always `overshoots the mark' by about 9%, the area under the spurious peak tending to 0 as but not its height [see Larmor (1934) for the history of this phenomenon].
By contrast, the arithmetic mean of the partial sums, also called the pth Cesàro sum, converges to f in the sense of the norm: as . If furthermore f is continuous, then the convergence is uniform, i.e. the error is bounded everywhere by a quantity which goes to 0 as . It may be shown that where is the Fejér kernel. has over the advantage of being everywhere positive, so that the Cesàro sums of a positive function f are always positive.
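The contrasting behaviour of the Dirichlet and Fejér summation procedures can be seen numerically. The sketch below (illustrative Python, using the familiar square wave of amplitude ±1 with its jump at x = 0 as a test function) evaluates partial sums and Cesàro sums near the jump: the former overshoot (Gibbs phenomenon), the latter stay within the range of f.

```python
import math

def partial_sum(x, p):
    # Dirichlet partial sum S_p of the square wave's Fourier series:
    # f(x) ~ (4/pi) * sum over odd m of sin(2 pi m x)/m
    return (4 / math.pi) * sum(math.sin(2 * math.pi * m * x) / m
                               for m in range(1, p + 1, 2))

def cesaro_sum(x, p):
    # p-th Cesaro sum: arithmetic mean of S_0, ..., S_p (Fejer summation)
    return sum(partial_sum(x, q) for q in range(p + 1)) / (p + 1)

p = 101
xs = [j / 2000 for j in range(1, 200)]   # points approaching the jump at 0
gibbs_max = max(partial_sum(x, p) for x in xs)
fejer_max = max(cesaro_sum(x, p) for x in xs)
# The Dirichlet sums overshoot the mark near the jump, tending towards
# (2/pi) Si(pi) ~ 1.179, while the positivity of the Fejer kernel keeps
# the Cesaro sums within [-1, 1].
assert gibbs_max > 1.15
assert fejer_max <= 1.0 + 1e-9
```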
The de la Vallée Poussin kernel has a trapezoidal distribution of coefficients and is such that if ; therefore is a trigonometric polynomial with the same Fourier coefficients as f over that range of values of m.
The Poisson kernel with gives rise to an Abel summation procedure [Tolstov (1962, p. 162); Whittaker & Watson (1927, p. 57)] since Compared with the other kernels, has the disadvantage of not being a trigonometric polynomial; however, is the real part of the Cauchy kernel (Cartan, 1961; Ahlfors, 1966): and hence provides a link between trigonometric series and analytic functions of a complex variable.
Other methods of summation involve forming a moving average of f by convolution with other sequences of functions besides of which `tend towards δ' as . The convolution is performed by multiplying the Fourier coefficients of f by those of , so that one forms the quantities For instance the `sigma factors' of Lanczos (Lanczos, 1966, p. 65), defined by lead to a summation procedure whose behaviour is intermediate between those using the Dirichlet and the Fejér kernels; it corresponds to forming a moving average of f by convolution with which is itself the convolution of a `rectangular pulse' of width and of the Dirichlet kernel of order p.
A review of the summation problem in crystallography is given in Section 1.3.4.2.1.3.
The space of (equivalence classes of) square-integrable complex-valued functions f on the circle is contained in , since by the Cauchy–Schwarz inequality Thus all the results derived for hold for , a great simplification over the situation in or where neither nor was contained in the other.
However, more can be proved in , because is a Hilbert space (Section 1.3.2.2.4) for the inner product and because the family of functions constitutes an orthonormal Hilbert basis for .
The sequence of Fourier coefficients of belongs to the space of square-summable sequences: Conversely, every element of is the sequence of Fourier coefficients of a unique function in . The inner product makes into a Hilbert space, and the map from to established by the Fourier transformation is an isometry (Parseval/Plancherel): or equivalently: This is a useful property in applications, since (f, g) may be calculated either from f and g themselves, or from their Fourier coefficients and (see Section 1.3.4.4.6 for crystallographic applications).
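The Parseval/Plancherel isometry can be checked directly on trigonometric polynomials. In the Python sketch below (an illustration; the two polynomials are arbitrary choices), the inner product (f, g) computed on a grid agrees with the sum of products of Fourier coefficients.

```python
import cmath
import math

def inner(f, g, samples=2048):
    # (f, g) = integral_0^1 f(x) conj(g(x)) dx on a uniform grid (exact
    # for the low-degree trigonometric polynomials used below)
    return sum(f(j / samples) * g(j / samples).conjugate()
               for j in range(samples)) / samples

def coeff(f, m, samples=2048):
    # m-th Fourier coefficient: c_m(f) = (f, e_m) with e_m(x) = e(m x)
    return inner(f, lambda x: cmath.exp(2j * math.pi * m * x), samples)

# arbitrary trigonometric polynomials (illustrative choices)
f = lambda x: 3 + 2 * cmath.exp(2j * math.pi * x) - 1j * cmath.exp(-4j * math.pi * x)
g = lambda x: 1 + cmath.exp(-2j * math.pi * x)

lhs = inner(f, g)
rhs = sum(coeff(f, m) * coeff(g, m).conjugate() for m in range(-8, 9))
assert abs(lhs - rhs) < 1e-9                   # (f, g) from coefficients
assert abs(inner(f, f).real - 14.0) < 1e-9     # ||f||^2 = 9 + 4 + 1
```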
By virtue of the orthogonality of the basis , the partial sum is the best mean-square fit to f in the linear subspace of spanned by , and hence (Bessel's inequality)
The use of distributions enlarges considerably the range of behaviour which can be accommodated in a Fourier series, even in the case of general dimension n where classical theories meet with even more difficulties than in dimension 1.
Let be a sequence of complex numbers with growing at most polynomially as , say . Then the sequence is in and even defines a continuous function and an associated tempered distribution . Differentiation of times then yields a tempered distribution whose Fourier transform leads to the original sequence of coefficients. Conversely, by the structure theorem for distributions with compact support (Section 1.3.2.3.9.7), the motif of a periodic distribution is a derivative of finite order of a continuous function; hence its Fourier coefficients will grow at most polynomially with as .
Thus distribution theory allows the manipulation of Fourier series whose coefficients exhibit polynomial growth as their order goes to infinity, while those derived from functions had to tend to 0 by virtue of the Riemann–Lebesgue lemma. The distribution-theoretic approach to Fourier series holds even in the case of general dimension n, where classical theories meet with even more difficulties (see Ash, 1976) than in dimension 1.
Let be such that has compact support K. Let ϕ be sampled at the nodes of a lattice , yielding the lattice distribution . The Fourier transform of this sampled version of ϕ is which is essentially Φ periodized by period lattice , with period matrix A.
Let us assume that Λ is such that the translates of K by different period vectors of Λ are disjoint. Then we may recover Φ from by masking the contents of a `unit cell' of Λ (i.e. a fundamental domain for the action of Λ in ) whose boundary does not meet K. If is the indicator function of , then Transforming both sides by yields i.e. since is the volume V of .
This interpolation formula is traditionally credited to Shannon (1949), although it was discovered much earlier by Whittaker (1915). It shows that ϕ may be recovered from its sample values on (i.e. from ) provided is sufficiently fine that no overlap (or `aliasing') occurs in the periodization of Φ by the dual lattice Λ. The interpolation kernel is the transform of the normalized indicator function of a unit cell of Λ containing the support K of Φ.
If K is contained in a sphere of radius and if Λ and are rectangular, the length of each basis vector of Λ must be greater than , and thus the sampling interval must be smaller than . This requirement constitutes the Shannon sampling criterion.
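Shannon interpolation in dimension 1 can be sketched numerically. In the Python illustration below, the test function sinc²(x) is band-limited with band radius R = 1 (its transform is the triangle function supported in [−1, 1]), so a sampling interval of 0.4, finer than the critical 1/(2R) = 0.5, permits exact reconstruction; the truncation length of the interpolation series is an arbitrary choice.

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def phi(x):
    # band-limited test function: the transform of sinc^2 is the triangle
    # function supported in [-1, 1], so the band radius is R = 1
    return sinc(x) ** 2

def shannon_interpolate(x, delta, n_terms=2000):
    # reconstruct phi(x) from its samples phi(n*delta); valid when
    # delta < 1/(2R), the Shannon sampling criterion
    return sum(phi(n * delta) * sinc((x - n * delta) / delta)
               for n in range(-n_terms, n_terms + 1))

delta = 0.4          # finer than the critical interval 1/(2R) = 0.5
for x in (0.13, 0.77, 1.9):
    assert abs(shannon_interpolate(x, delta) - phi(x)) < 1e-5
```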
Let be a period lattice in with matrix A, and let be the lattice reciprocal to , with period matrix . Let be defined similarly, and let us suppose that is a sublattice of , i.e. that as a set.
The relation between and may be described in two different fashions: (i) multiplicatively, and (ii) additively.

Let us now consider the two reciprocal lattices and . Their period matrices and are related by: , where is an integer matrix; or equivalently by . This shows that the roles are reversed in that is a sublattice of , which we may write:
The above relations between lattices may be rewritten in terms of the corresponding lattice distributions as follows: where and are (finite) residuallattice distributions. We may incorporate the factor in (i) and into these distributions and define
Since , convolution with and has the effect of averaging the translates of a distribution under the elements (or `cosets') of the residual lattices and , respectively. This process will be called `coset averaging'. Eliminating and between (i) and (ii), and and between and , we may write: These identities show that period subdivision by convolution with (respectively ) on the one hand, and period decimation by `dilation' by on the other hand, are mutually inverse operations on and (respectively and ).
Finally, let us consider the relations between the Fourier transforms of these lattice distributions. Recalling the basic relation of Section 1.3.2.6.5, i.e. and similarly:
Thus (respectively ), a decimated version of (respectively ), is transformed by into a subdivided version of (respectively ).
The converse is also true: i.e. and similarly
Thus (respectively ), a subdivided version of (respectively ) is transformed by into a decimated version of (respectively ). Therefore, the Fourier transform exchanges subdivision and decimation of period lattices for lattice distributions.
Further insight into this phenomenon is provided by applying to both sides of (iv) and (v) and invoking the convolution theorem: These identities show that multiplication by the transform of the period-subdividing distribution (respectively ) has the effect of decimating to (respectively to ). They clearly imply that, if and , then Therefore, the duality between subdivision and decimation may be viewed as another aspect of that between convolution and multiplication.
There is clearly a strong analogy between the sampling/periodization duality of Section 1.3.2.6.6 and the decimation/subdivision duality, which is viewed most naturally in terms of subgroup relationships: both sampling and decimation involve restricting a function to a discrete additive subgroup of the domain over which it is initially given.
The usual presentation of this duality is not in terms of lattice distributions, but of periodic distributions obtained by convolving them with a motif.
Given , let us form , then decimate its transform by keeping only its values at the points of the coarser lattice ; as a result, is replaced by , and the reverse transform then yields which is the coset-averaged version of the original . The converse situation is analogous to that of Shannon's sampling theorem. Let a function whose transform has compact support be sampled as at the nodes of . Then is periodic with period lattice . If the sampling lattice is decimated to , the inverse transform becomes hence becomes periodized more finely by averaging over the cosets of . With this finer periodization, the various copies of Supp Φ may start to overlap (a phenomenon called `aliasing'), indicating that decimation has produced too coarse a sampling of ϕ.
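Aliasing in dimension 1 can be demonstrated in a few lines. The Python sketch below (illustrative values throughout) shows that two pure frequencies differing by the sampling rate N give identical sample sequences on the lattice (1/N)Z: after periodization by N their spectra coincide, so the samples cannot distinguish them.

```python
import math

# two pure frequencies that differ by the sampling rate N give identical
# samples on the lattice (1/N)Z: the copies of their spectra coincide
# after periodization by N
N, m = 8, 3
fine = lambda x: math.sin(2 * math.pi * m * x)
alias = lambda x: math.sin(2 * math.pi * (m + N) * x)
diff = max(abs(fine(k / N) - alias(k / N)) for k in range(N))
assert diff < 1e-9
```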
Let be such that has compact support ( is said to be band-limited). Then is periodic, and is such that only a finite number of points of have a nonzero Fourier coefficient attached to them. We may therefore find a decimation of such that the distinct translates of Supp by vectors of do not intersect.
The distribution Φ can be uniquely recovered from by the procedure of Section 1.3.2.7.1, and we may write: these rearrangements being legitimate because and have compact supports which are intersectionfree under the action of . By virtue of its periodicity, this distribution is entirely characterized by its `motif' with respect to :
Similarly, ϕ may be uniquely recovered by Shannon interpolation from the distribution sampling its values at the nodes of is a subdivision of ). By virtue of its periodicity, this distribution is completely characterized by its motif:
Let and , and define the two sets of coefficients Define the two distributions and The relation between ω and Ω has two equivalent forms:
By (i), . Both sides are weighted lattice distributions concentrated at the nodes of , and equating the weights at gives Since , is an integer, hence
By (ii), we have Both sides are weighted lattice distributions concentrated at the nodes of , and equating the weights at gives Since , is an integer, hence
Now the decimation/subdivision relations between and may be written: so that with , hence finally
Denoting by and by , the relation between ω and Ω may be written in the equivalent form where the summations are now over finite residual lattices in standard form.
Equations (i) and (ii) describe two mutually inverse linear transformations and between two vector spaces and of dimension . [respectively ] is the discrete Fourier (respectively inverse Fourier) transform associated to matrix N.
The vector spaces and may be viewed from two different standpoints:

These two spaces are said to be `isomorphic' (a relation denoted ≅), the isomorphism being given by the one-to-one correspondence:
The second viewpoint will be adopted, as it involves only linear algebra. However, it is most helpful to keep the first one in mind and to think of the data or results of a discrete Fourier transform as representing (through their sets of unique weights) two periodic lattice distributions related by the full, distribution-theoretic Fourier transform.
We therefore view (respectively ) as the vector space of complexvalued functions over the finite residual lattice (respectively ) and write: since a vector such as ψ is in fact the function .
The two spaces and may be equipped with the following Hermitian inner products: which makes each of them into a Hilbert space. The canonical bases and and and are orthonormal for their respective product.
By virtue of definitions (i) and (ii), so that and may be represented, in the canonical bases of and , by the following matrices:
When N is symmetric, and may be identified in a natural manner, and the above matrices are symmetric.
When N is diagonal, say , then the tensor product structure of the full multidimensional Fourier transform (Section 1.3.2.4.2.4) gives rise to a tensor product structure for the DFT matrices. The tensor product of matrices is defined as follows: Let the index vectors and be ordered in the same way as the elements in a Fortran array, e.g. for with increasing fastest, next fastest, slowest; then where and where
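The tensor product structure means a multidimensional DFT may be evaluated as successive one-dimensional DFTs along each index. The Python sketch below (an illustration; the 3 × 4 array is arbitrary) verifies that row transforms followed by column transforms reproduce the direct double-sum definition.

```python
import cmath

def dft1(v):
    # naive one-dimensional DFT (quadratic cost, for illustration only)
    N = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * k * h / N)
                for k in range(N)) for h in range(N)]

def dft2_direct(a):
    # direct double sum over the residual lattice (N1 x N2 points)
    N1, N2 = len(a), len(a[0])
    return [[sum(a[k1][k2] *
                 cmath.exp(-2j * cmath.pi * (k1 * h1 / N1 + k2 * h2 / N2))
                 for k1 in range(N1) for k2 in range(N2))
             for h2 in range(N2)] for h1 in range(N1)]

def dft2_tensor(a):
    # tensor-product evaluation: 1D DFTs along the second index,
    # then 1D DFTs along the first index
    N1, N2 = len(a), len(a[0])
    rows = [dft1(row) for row in a]
    cols = [dft1([rows[k1][h2] for k1 in range(N1)]) for h2 in range(N2)]
    return [[cols[h2][h1] for h2 in range(N2)] for h1 in range(N1)]

a = [[1, 2, 3, 4], [0, 1, 0, -1], [2, 0, 1, 1]]   # a 3 x 4 array
A = dft2_direct(a)
B = dft2_tensor(a)
err = max(abs(A[i][j] - B[i][j]) for i in range(3) for j in range(4))
assert err < 1e-9
```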
The DFT inherits most of the properties of the Fourier transforms, but with certain numerical factors (`Jacobians') due to the transition from continuous to discrete measure.

The Fourier transformation's most remarkable property is undoubtedly that of turning convolution into multiplication. As distribution theory has shown, other valuable properties – such as the shift property, the conversion of differentiation into multiplication by monomials, and the duality between periodicity and sampling – are special instances of the convolution theorem.
This property is exploited in many areas of applied mathematics and engineering (Campbell & Foster, 1948; Sneddon, 1951; Champeney, 1973; Bracewell, 1986). For example, the passing of a signal through a linear filter, which results in its being convolved with the response of the filter to a δ-function `impulse', may be modelled as a multiplication of the signal's transform by the transform of the impulse response (also called transfer function). Similarly, the solution of systems of partial differential equations may be turned by Fourier transformation into a division problem for distributions. In both cases, the formulations obtained after Fourier transformation are considerably simpler than the initial ones, and lend themselves to constructive solution techniques.
Whenever the functions to which the Fourier transform is applied are band-limited, or can be well approximated by band-limited functions, the discrete Fourier transform (DFT) provides a means of constructing explicit numerical solutions to the problems at hand. A great variety of investigations in physics, engineering and applied mathematics thus lead to DFT calculations, to such a degree that, at the time of writing, about 50% of all supercomputer CPU time is alleged to be spent calculating DFTs.
The straightforward use of the defining formulae for the DFT leads to calculations of size for N sample points, which become unfeasible for any but the smallest problems. Much ingenuity has therefore been exerted on the design and implementation of faster algorithms for calculating the DFT (McClellan & Rader, 1979; Nussbaumer, 1981; Blahut, 1985; Brigham, 1988). The most famous is that of Cooley & Tukey (1965) which heralded the age of digital signal processing. However, it had been preceded by the prime factor algorithm of Good (1958, 1960), which has lately been the basis of many new developments. Recent historical research (Goldstine, 1977, pp. 249–253; Heideman et al., 1984) has shown that Gauss essentially knew the Cooley–Tukey algorithm as early as 1805 (before Fourier's 1807 work on harmonic analysis!); while it has long been clear that Dirichlet knew of the basis of the prime factor algorithm and used it extensively in his theory of multiplicative characters [see e.g. Chapter I of Ayoub (1963), and Chapters 6 and 8 of Apostol (1976)]. Thus the computation of the DFT, far from being a purely technical and rather narrow piece of specialized numerical analysis, turns out to have very rich connections with such central areas of pure mathematics as number theory (algebraic and analytic), the representation theory of certain Lie groups and coding theory – to list only a few. The interested reader may consult Auslander & Tolimieri (1979); Auslander, Feig & Winograd (1982, 1984); Auslander & Tolimieri (1985); Tolimieri (1985).
One-dimensional algorithms are examined first. The Sande mixed-radix version of the Cooley–Tukey algorithm only calls upon the additive structure of congruence classes of integers. The prime factor algorithm of Good begins to exploit some of their multiplicative structure, and the use of relatively prime factors leads to a stronger factorization than that of Sande. Fuller use of the multiplicative structure, via the group of units, leads to the Rader algorithm; and the factorization of short convolutions then yields the Winograd algorithms.
Multidimensional algorithms are at first built as tensor products of onedimensional elements. The problem of factoring the DFT in several dimensions simultaneously is then examined. The section ends with a survey of attempts at formalizing the interplay between algorithm structure and computer architecture for the purpose of automating the design of optimal DFT code.
It was originally intended to incorporate into this section a survey of all the basic notions and results of abstract algebra which are called upon in the course of these developments, but time limitations have made this impossible. This material, however, is adequately covered by the first chapter of Tolimieri et al. (1989) in a form tailored for the same purposes. Similarly, the inclusion of numerous detailed examples of the algorithms described here has had to be postponed to a later edition, but an abundant supply of such examples may be found in the signal processing literature, for instance in the books by McClellan & Rader (1979), Blahut (1985), and Tolimieri et al. (1989).
Throughout this section we will denote by the expression , . The mapping has the following properties: Thus e defines an isomorphism between the additive group (the reals modulo the integers) and the multiplicative group of complex numbers of modulus 1. It follows that the mapping , where and N is a positive integer, defines an isomorphism between the one-dimensional residual lattice and the multiplicative group of Nth roots of unity.
The DFT on N points then relates vectors X and in W and through the linear transformations:
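The defining formulae can be sketched directly in Python. In the illustration below, the kernel sign and the placement of the 1/N normalization are one common convention and need not coincide with the chapter's exact definitions; the quadratic cost of this naive evaluation is what the algorithms of the following sections are designed to avoid.

```python
import cmath

def dft(x, sign=-1):
    # naive N-point DFT; one common convention: forward kernel e(-k*h/N)
    N = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * k * h / N)
                for k in range(N)) for h in range(N)]

def idft(X):
    # inverse: opposite kernel sign and a 1/N normalization ("Jacobian")
    N = len(X)
    return [v / N for v in dft(X, sign=+1)]

x = [1, 2 + 1j, -0.5, 3j, 4, 0]
# the forward and inverse transforms are mutually inverse
assert max(abs(a - b) for a, b in zip(idft(dft(x)), x)) < 1e-12
```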
The presentation of Gentleman & Sande (1966) will be followed first [see also Cochran et al. (1967)]. It will then be reinterpreted in geometric terms which will prepare the way for the treatment of multidimensional transforms in Section 1.3.3.3.
Suppose that the number of sample points N is composite, say . We may write k to the base and to the base as follows: The defining relation for may then be written: The argument of may be expanded as and the last summand, being an integer, may be dropped: This computation may be decomposed into five stages, as follows:
If the intermediate transforms in stages (ii) and (iv) are performed in place, i.e. with the results overwriting the data, then at stage (v) the result will be found at address . This phenomenon is called scrambling by `digit reversal', and stage (v) is accordingly known as unscrambling.
The initial N-point transform has thus been performed as transforms on points, followed by transforms on points, thereby reducing the arithmetic cost from to . The phase shifts applied at stage (iii) are traditionally called `twiddle factors', and the transposition between and can be performed by the fast recursive technique of Eklundh (1972). Clearly, this procedure can be applied recursively if and are themselves composite, leading to an overall arithmetic cost of order N log N if N has no large prime factors.
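The five stages can be sketched as follows. The Python illustration below performs one mixed-radix step for N = N1·N2; the index splitting k = N2·k1 + k2, h = h1 + N1·h2 is one common convention and need not match the chapter's exact notation, and the short transforms are evaluated naively rather than recursively.

```python
import cmath

def dft(x):
    # naive DFT, used here for the short transforms of each stage
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * k * h / N)
                for k in range(N)) for h in range(N)]

def cooley_tukey_step(x, N1, N2):
    # one mixed-radix Cooley-Tukey step, assuming len(x) == N1 * N2
    N = N1 * N2
    assert len(x) == N
    # (i) map the 1D data onto an N2 x N1 array: y[k2][k1] = x[N2*k1 + k2]
    y = [[x[N2 * k1 + k2] for k1 in range(N1)] for k2 in range(N2)]
    # (ii) N2 transforms of length N1
    z = [dft(row) for row in y]
    # (iii) twiddle factors e(-k2*h1/N)
    for k2 in range(N2):
        for h1 in range(N1):
            z[k2][h1] *= cmath.exp(-2j * cmath.pi * k2 * h1 / N)
    # (iv) N1 transforms of length N2, along the other index
    w = [dft([z[k2][h1] for k2 in range(N2)]) for h1 in range(N1)]
    # (v) unscramble by digit reversal: X[h1 + N1*h2] = w[h1][h2]
    X = [0j] * N
    for h1 in range(N1):
        for h2 in range(N2):
            X[h1 + N1 * h2] = w[h1][h2]
    return X

x = [complex(k * k - 3, k) for k in range(12)]
ref = dft(x)
out = cooley_tukey_step(x, 3, 4)
assert max(abs(a - b) for a, b in zip(out, ref)) < 1e-9
```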
The Cooley–Tukey factorization may also be derived from a geometric rather than arithmetic argument. The decomposition is associated to a geometric partition of the residual lattice into copies of , each translated by and `blown up' by a factor . This partition in turn induces a (direct sum) decomposition of X as where
According to (i), is related to by decimation by and offset by . By Section 1.3.2.7.2, is related to by periodization by and phase shift by , so that the periodization by being reflected by the fact that does not depend on . Writing and expanding shows that the phase shift contains both the twiddle factor and the kernel of . The Cooley–Tukey algorithm is thus naturally associated to the coset decomposition of a lattice modulo a sublattice (Section 1.3.2.7.2).
It is readily seen that essentially the same factorization can be obtained for , up to the complex conjugation of the twiddle factors. The normalizing constant arises from the normalizing constants and in and , respectively.
Factors of 2 are particularly simple to deal with and give rise to a characteristic computational structure called a `butterfly loop'. If , then two options exist:

By repeated factoring of the number N of sample points, the calculation of and can be reduced to a succession of stages, the smallest of which operate on single prime factors of N. The reader is referred to Gentleman & Sande (1966) for a particularly lucid analysis of the programming considerations which help implement this factorization efficiently; see also Singleton (1969). Powers of two are often grouped together into factors of 4 or 8, which are advantageous in that they require fewer complex multiplications than the repeated use of factors of 2. In this approach, large prime factors P are detrimental, since they require a full size computation according to the defining formula.
The set of congruence classes of integers modulo an integer N [see e.g. Apostol (1976), Chapter 5] inherits from not only the additive structure used in deriving the Cooley–Tukey factorization, but also a multiplicative structure in which the product of two congruence classes mod N is uniquely defined as the class of the ordinary product (in ) of representatives of each class. The multiplication can be distributed over addition in the usual way, endowing with the structure of a commutative ring.
If N is composite, the ring has zero divisors. For example, let , let mod N, and let mod N: then mod N. In the general case, a product of nonzero elements will be zero whenever these elements collect together all the factors of N. These circumstances give rise to a fundamental theorem in the theory of commutative rings, the Chinese Remainder Theorem (CRT), which will now be stated and proved [see Apostol (1976), Chapter 5; Schroeder (1986), Chapter 16].
Let be factored into a product of pairwise coprime integers, so that g.c.d. for . Then the system of congruence equations has a unique solution mod N. In other words, each is associated in a one-to-one fashion to the d-tuple of its residue classes in .
The proof of the CRT goes as follows. Let Since g.c.d. there exist integers and such that then the integer is the solution. Indeed, because all terms with contain as a factor; and by the defining relation for .
It may be noted that so that the are mutually orthogonal idempotents in the ring , with properties formally similar to those of mutually orthogonal projectors onto subspaces in linear algebra. The analogy is exact, since by virtue of the CRT the ring may be considered as the direct product via the two mutually inverse mappings:
The mapping defined by (ii) is sometimes called the `CRT reconstruction' of from the .
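The constructive proof of the CRT translates directly into code. The Python sketch below (an illustration; the moduli and test value are arbitrary) follows the proof — form the complementary products, invert each modulo its factor, and combine — and also checks that the resulting quantities behave as mutually orthogonal idempotents.

```python
def crt_reconstruct(residues, moduli):
    # solve l = l_j (mod q_j) for pairwise coprime q_j, following the
    # constructive proof: Q_j = N/q_j, find n_j with n_j*Q_j = 1 (mod q_j)
    N = 1
    for q in moduli:
        N *= q
    l = 0
    for lj, qj in zip(residues, moduli):
        Qj = N // qj
        nj = pow(Qj, -1, qj)        # inverse of Q_j modulo q_j
        l += lj * nj * Qj           # e_j = n_j * Q_j are the idempotents
    return l % N

moduli = [7, 9, 32]                 # pairwise coprime: N = 2016
k = 1234
residues = [k % q for q in moduli]
assert crt_reconstruct(residues, moduli) == k

# the e_j are mutually orthogonal idempotents: e_j^2 = e_j (mod N)
N = 7 * 9 * 32
e = [(pow(N // q, -1, q) * (N // q)) % N for q in moduli]
for ej in e:
    assert (ej * ej - ej) % N == 0
```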
These two mappings have the property of sending sums to sums and products to products, i.e: (the last proof requires using the properties of the idempotents ). This may be described formally by stating that the CRT establishes a ring isomorphism:
The CRT will now be used to factor the N-point DFT into a tensor product of d transforms, the jth of length .
Let the indices k and be subjected to the following mappings:
Then Cross terms with vanish since they contain all the factors of N, hence Dividing by N, which may be written as for each j, yields and hence Therefore, by the multiplicative property of ,
Let be described by a one-dimensional array indexed by k. The index mapping (i) turns X into an element of described by a d-dimensional array ; the latter may be transformed by into a new array . Finally, the one-dimensional array of results will be obtained by reconstructing according to (ii).
The prime factor algorithm, like the Cooley–Tukey algorithm, reindexes a 1D transform to turn it into d separate transforms, but the use of coprime factors and CRT index mapping leads to the further gain that no twiddle factors need to be applied between the successive transforms (see Good, 1971). This gain makes up for the added complexity of the CRT index mapping.
The natural factorization of N for the prime factor algorithm is thus its factorization into prime powers: the N-point DFT is then the tensor product of separate transforms (one for each prime-power factor p^ν) whose results can be reassembled without twiddle factors. The factors within each prime-power transform must then be dealt with by another algorithm (e.g. Cooley–Tukey, which does require twiddle factors). Thus, the DFT on a prime number of points remains undecomposable.
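The absence of twiddle factors can be checked on a small case. The sketch below (function names are illustrative) computes a 12-point DFT as a 3 × 4 two-dimensional transform: the input index is re-mapped as n = (N_2 n_1 + N_1 n_2) mod N and the output index by the CRT, after which the two stages of short DFTs need no intermediate twiddle multiplications:

```python
import cmath

def dft(x):
    """Reference N-point DFT by direct summation."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def pfa(x, N1, N2):
    """Good's prime factor algorithm for N = N1*N2 with gcd(N1, N2) = 1:
    CRT re-indexing turns the 1D DFT into an N1 x N2 2D DFT with no
    twiddle factors between the two stages."""
    N = N1 * N2
    t1 = pow(N2, -1, N1)        # N2^{-1} mod N1
    t2 = pow(N1, -1, N2)        # N1^{-1} mod N2
    # Stage 1: N1-point DFTs along the n1 axis of the re-indexed data.
    y = [[0j] * N2 for _ in range(N1)]
    for k1 in range(N1):
        for n2 in range(N2):
            y[k1][n2] = sum(x[(N2 * n1 + N1 * n2) % N]
                            * cmath.exp(-2j * cmath.pi * k1 * n1 / N1)
                            for n1 in range(N1))
    # Stage 2: N2-point DFTs along the n2 axis -- no twiddle factors.
    X = [0j] * N
    for k1 in range(N1):
        for k2 in range(N2):
            k = (k1 * t1 * N2 + k2 * t2 * N1) % N   # CRT output mapping
            X[k] = sum(y[k1][n2] * cmath.exp(-2j * cmath.pi * k2 * n2 / N2)
                       for n2 in range(N2))
    return X

x = [complex(n * n % 7, n) for n in range(12)]
assert max(abs(a - b) for a, b in zip(dft(x), pfa(x, 3, 4))) < 1e-9
```

The cross terms in the exponent vanish mod N precisely because the two factors are coprime, which is the algebraic content of the remark above.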
The previous two algorithms essentially reduce the calculation of the DFT on N points for N composite to the calculation of smaller DFTs on prime numbers of points, the latter remaining irreducible. However, Rader (1968) showed that the p-point DFT for p an odd prime can itself be factored by invoking some extra arithmetic structure present in Z/pZ.
The ring Z/pZ has the property that its nonzero elements, called units, form a multiplicative group. In particular, all units r have a unique multiplicative inverse in Z/pZ, i.e. a unit r′ such that rr′ ≡ 1 mod p. This endows Z/pZ with the structure of a finite field.
Furthermore, this group is cyclic, i.e. consists of the successive powers of a generator g called a primitive root mod p (such a g may not be unique, but it always exists). For instance, for p = 7 the group of units is generated by g = 3, whose successive powers mod 7 are: g^0 = 1, g^1 = 3, g^2 = 2, g^3 = 6, g^4 = 4, g^5 = 5 [see Apostol (1976), Chapter 10].
The basis of Rader's algorithm is to bring to light a hidden regularity in the matrix by permuting the basis vectors and of as follows: where g is a primitive root mod p.
With respect to these new bases, the matrix representing the transform will have the following elements: Thus the `core' of the matrix, of size (p − 1) × (p − 1), formed by the elements with two nonzero indices, has a so-called skew-circulant structure because each element depends only on the difference of its two indices. Simplification may now occur because multiplication by a skew-circulant matrix is closely related to a cyclic convolution. Introducing the notation we may write the relation in the permuted bases as a cyclic convolution, where Z is defined from the permuted data.
Thus the result may be obtained by cyclic convolution of C and Z, which may for instance be calculated by DFT, componentwise multiplication and inverse DFT, where × denotes the componentwise multiplication of vectors. Since p is odd, p − 1 is always divisible by 2 and may even be highly composite. In that case, factoring the convolution by means of the Cooley–Tukey or Good methods leads to an algorithm of complexity O(p log p) rather than O(p^2) for N = p. An added bonus is that the elements of the transformed C can be shown to be either purely real or purely imaginary, which halves the number of real multiplications involved.
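Rader's reduction may be sketched as follows for p = 7 with the primitive root g = 3 of the example above (function names are illustrative; the length-(p − 1) cyclic convolution is evaluated directly here, though an FFT could be substituted as the text describes):

```python
import cmath

def dft(x):
    """Reference N-point DFT by direct summation."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def rader(x, p, g):
    """Rader's algorithm for a p-point DFT, p an odd prime, g a primitive
    root mod p: the nonzero-index part becomes a cyclic convolution of
    length p - 1."""
    M = p - 1
    ginv = pow(g, -1, p)
    # a[m] = x[g^{-m} mod p],  b[m] = exp(-2 pi i g^m / p)
    a = [x[pow(ginv, m, p)] for m in range(M)]
    b = [cmath.exp(-2j * cmath.pi * pow(g, m, p) / p) for m in range(M)]
    # Cyclic convolution: (a * b)[q] = sum_m a[m] b[(q - m) mod M]
    conv = [sum(a[m] * b[(q - m) % M] for m in range(M)) for q in range(M)]
    X = [0j] * p
    X[0] = sum(x)                        # X[0] is simply the sum of the data
    for q in range(M):
        X[pow(g, q, p)] = x[0] + conv[q]  # k = g^q runs over 1 .. p-1
    return X

x = [complex(n, -n % 3) for n in range(7)]
assert max(abs(a - b) for a, b in zip(dft(x), rader(x, 7, 3))) < 1e-9
```

The substitutions k = g^q and n = g^{−m} turn the exponent kn into g^{q−m}, which is what makes the inner sum a convolution in q − m.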
This idea was extended by Winograd (1976, 1978) to the treatment of prime powers N = p^ν, using the cyclic structure of the multiplicative group of units of Z/p^ν Z. The latter consists of all those elements of Z/p^ν Z which are not divisible by p, and thus has p^{ν−1}(p − 1) elements. It is cyclic, and there exist primitive roots g modulo p^ν whose successive powers exhaust all the units. The elements divisible by p, which are divisors of zero, have to be treated separately just as 0 had to be treated separately for N = p.
When k ≡ 0 (mod p), the results are p-decimated, hence can be obtained via the p^{ν−1}-point DFT of the periodized data Y.
When n ≡ 0 (mod p), we may write the transform as a sum of two contributions, one from the indices not divisible by p and one from those divisible by p. By a converse of the previous calculation, the latter arises from p-decimated data Z, hence is the periodization of the p^{ν−1}-point DFT of these data (the periodicity follows implicitly from the fact that the transform on the right-hand side is independent of the residue dropped in the decimation).
Finally, the contribution from all indices not divisible by p may be calculated by reindexing by the powers of a primitive root g modulo p^ν, then carrying out the multiplication by the skew-circulant matrix core as a convolution.
Thus the DFT on p^ν points may be reduced to two DFTs on p^{ν−1} points (dealing, respectively, with p-decimated results and p-decimated data) and a convolution of length p^{ν−1}(p − 1). The latter may be `diagonalized' into a multiplication by purely real or purely imaginary numbers (by the argument used for N = p) by two DFTs, whose factoring in turn leads to DFTs of smaller sizes. This method, applied recursively, allows the complete decomposition of the DFT on p^ν points into arbitrarily small DFTs.
When p = 2, the same method can be applied, except for a slight modification in the calculation of the core contribution. There is no primitive root modulo 2^ν for ν ≥ 3: the group of units is the direct product of two cyclic groups, the first (of order 2) generated by −1, the second (of order 2^{ν−2}) generated by 3 or 5. One then uses a representation indexed by these two generators, and the reindexed core matrix gives rise to a two-dimensional convolution. The latter may be carried out by means of two 2D DFTs on 2 × 2^{ν−2} points.
The cyclic convolutions generated by Rader's multiplicative reindexing may be evaluated more economically than through DFTs if they are re-examined within a new algebraic setting, namely the theory of congruence classes of polynomials [see, for instance, Blahut (1985), Chapter 2; Schroeder (1986), Chapter 24].
The set, denoted K[X], of polynomials in one variable with coefficients in a given field K has many of the formal properties of the set Z of rational integers: it is a ring with no zero divisors and has a Euclidean algorithm on which a theory of divisibility can be built.
Given a polynomial P(X), then for every W(X) there exist unique polynomials Q(X) and R(X) such that W(X) = P(X)Q(X) + R(X) with the degree of R less than the degree of P; R(X) is called the residue of W(X) modulo P(X). Two polynomials W_1(X) and W_2(X) having the same residue modulo P(X) are said to be congruent modulo P(X), which is denoted by W_1 ≡ W_2 mod P.
If W(X) ≡ 0 mod P(X), W(X) is said to be divisible by P(X). If W(X) only has divisors of degree zero in K[X], it is said to be irreducible over K (this notion depends on K). Irreducible polynomials play in K[X] a role analogous to that of prime numbers in Z, and any polynomial over K has an essentially unique factorization as a product of irreducible polynomials.
There exists a Chinese remainder theorem (CRT) for polynomials. Let P(X) = P_1(X)P_2(X) ... P_d(X) be factored into a product of pairwise coprime polynomials [i.e. P_i(X) and P_j(X) have no common factor for i ≠ j]. Then the system of congruence equations W(X) ≡ W_j(X) mod P_j(X), j = 1, ..., d, has a unique solution W(X) modulo P(X). This solution may be constructed by a procedure similar to that used for integers. Let Q_j(X) = P(X)/P_j(X) be the product of all the factors other than P_j. Then P_j and Q_j are coprime, and the Euclidean algorithm may be used to obtain polynomials p_j(X) and q_j(X) such that p_j(X)P_j(X) + q_j(X)Q_j(X) = 1. With S_j(X) = q_j(X)Q_j(X), the polynomial W(X) = sum_j S_j(X)W_j(X) mod P(X) is easily shown to be the desired solution.
As with integers, it can be shown that the one-to-one correspondence between W(X) and the d-tuple of its residues sends sums to sums and products to products, i.e. establishes a ring isomorphism between K[X] mod P and the direct product of the K[X] mod P_j.
These results will now be applied to the efficient calculation of cyclic convolutions. Let U and V be two vectors of length N, and let W be obtained by cyclic convolution of U and V. The very simple but crucial result is that this cyclic convolution may be carried out by polynomial multiplication modulo z^N − 1: if U(z), V(z) and W(z) are the polynomials with the components of U, V and W as coefficients, then the convolution relation is equivalent to W(z) ≡ U(z)V(z) mod (z^N − 1). Now the polynomial z^N − 1 can be factored over the field of rational numbers into irreducible factors called cyclotomic polynomials: if d is the number of divisors of N, including 1 and N, then z^N − 1 is the product of d cyclotomic polynomials P_i(z), which are well known (Nussbaumer, 1981; Schroeder, 1986, Chapter 22). We may now invoke the CRT, and exploit the ring isomorphism it establishes to simplify the calculation of W(z) from U(z) and V(z) as follows: (i) compute the residues of U(z) and V(z) modulo each P_i(z); (ii) multiply these residues pairwise modulo the corresponding P_i(z); (iii) reconstruct W(z) from the d products by the CRT reconstruction formula.
When N is not too large, i.e. for `short cyclic convolutions', the are very simple, with coefficients 0 or ±1, so that (i) only involves a small number of additions. Furthermore, special techniques have been developed to multiply general polynomials modulo cyclotomic polynomials, thus helping keep the number of multiplications in (ii) and (iii) to a minimum. As a result, cyclic convolutions can be calculated rapidly when N is sufficiently composite.
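The crucial identity underlying this scheme — cyclic convolution equals polynomial multiplication modulo z^N − 1 — can be verified directly; the sketch below (function names are illustrative) reduces the product polynomial by folding the coefficient of z^{N+j} back onto that of z^j:

```python
def cyclic_conv(u, v):
    """Cyclic convolution: w[k] = sum_m u[m] v[(k - m) mod N]."""
    N = len(u)
    return [sum(u[m] * v[(k - m) % N] for m in range(N)) for k in range(N)]

def polymul_mod_zN_minus_1(u, v):
    """Multiply the polynomials U(z), V(z) (coefficient lists of length N),
    then reduce modulo z^N - 1: since z^N = 1, the coefficient of z^{N+j}
    folds back onto that of z^j."""
    N = len(u)
    prod = [0] * (2 * N - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            prod[i + j] += ui * vj
    return [prod[k] + (prod[k + N] if k + N < len(prod) else 0)
            for k in range(N)]

u = [1, 2, 3, 4, 5, 6]
v = [2, -1, 0, 3, 1, -2]
assert cyclic_conv(u, v) == polymul_mod_zN_minus_1(u, v)
```

The CRT-based scheme of steps (i)–(iii) replaces the single reduction modulo z^N − 1 by reductions modulo the individual cyclotomic factors, which is where the savings arise.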
It will be recalled that Rader's multiplicative indexing often gives rise to cyclic convolutions of length p − 1 for p an odd prime. Since p − 1 is highly composite for all p ≤ 50 other than p = 23 and p = 47, these cyclic convolutions can be performed more efficiently by the above procedure than by DFT.
These combined algorithms are due to Winograd (1977, 1978, 1980), and are known collectively as `Winograd small FFT algorithms'. Winograd also showed that they can be thought of as bringing the DFT matrix F to the `normal form' F = CBA, where B is a diagonal matrix of multiplications and A and C are matrices of preadditions and postadditions, respectively.
The elements on the diagonal of B can be shown to be either real or pure imaginary, by the same argument as in Section 1.3.3.2.3.1. Matrices A and C may be rectangular rather than square, so that intermediate results may require extra storage space.
From an algorithmic point of view, the distinction between onedimensional (1D) and multidimensional DFTs is somewhat blurred by the fact that some factoring techniques turn a 1D transform into a multidimensional one. The distinction made here, however, is a practical one and is based on the dimensionality of the indexing sets for data and results. This section will therefore be concerned with the problem of factoring the DFT when the indexing sets for the input data and output results are multidimensional.
The DFT was defined in Section 1.3.2.7.4 in an ndimensional setting and it was shown that when the decimation matrix N is diagonal, say , then has a tensor product structure: This may be rewritten as follows: where the I's are identity matrices and × denotes ordinary matrix multiplication. The matrix within each bracket represents a onedimensional DFT along one of the n dimensions, the other dimensions being left untransformed. As these matrices commute, the order in which the successive 1D DFTs are performed is immaterial.
This is the most straightforward method for building an ndimensional algorithm from existing 1D algorithms. It is known in crystallography under the name of `Beevers–Lipson factorization' (Section 1.3.4.3.1), and in signal processing as the `row–column method'.
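The row–column method can be sketched as follows for a 3 × 4 transform (function names are illustrative): 1D DFTs are applied along one dimension, the array is transposed, and 1D DFTs are applied along the other; the result agrees with the direct double sum, and the two passes could equally well be interchanged since the corresponding matrices commute:

```python
import cmath

def dft1(x):
    """1D N-point DFT by direct summation."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft2_direct(x):
    """2D DFT by the direct double sum, for reference."""
    N1, N2 = len(x), len(x[0])
    return [[sum(x[n1][n2]
                 * cmath.exp(-2j * cmath.pi * (k1 * n1 / N1 + k2 * n2 / N2))
                 for n1 in range(N1) for n2 in range(N2))
             for k2 in range(N2)] for k1 in range(N1)]

def dft2_row_column(x):
    """Row-column method: 1D DFTs along rows, transpose, 1D DFTs along
    columns, transpose back -- mirroring the tensor product structure."""
    rows = [dft1(row) for row in x]           # transform the second dimension
    cols = list(map(list, zip(*rows)))        # transpose
    done = [dft1(col) for col in cols]        # transform the first dimension
    return list(map(list, zip(*done)))        # transpose back

x = [[complex(i + j, i - j) for j in range(4)] for i in range(3)]
a, b = dft2_direct(x), dft2_row_column(x)
err = max(abs(a[i][j] - b[i][j]) for i in range(3) for j in range(4))
assert err < 1e-9
```

Each 1D pass may of course itself be computed by any of the fast algorithms of Section 1.3.3.2.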
Substantial reductions in the arithmetic cost, as well as gains in flexibility, can be obtained if the factoring of the DFT is carried out in several dimensions simultaneously. The presentation given here is a generalization of that of Mersereau & Speake (1981), using the abstract setting established independently by Auslander, Tolimieri & Winograd (1982).
Let us return to the general ndimensional setting of Section 1.3.2.7.4, where the DFT was defined for an arbitrary decimation matrix N by the formulae (where denotes ): with
Let us now assume that this decimation can be factored into d successive decimations, i.e. that and hence Then the coset decomposition formulae corresponding to these successive decimations (Section 1.3.2.7.1) can be combined as follows: with . Therefore, any may be written uniquely as Similarly: so that any may be written uniquely as with . These decompositions are the vector analogues of the multiradix number representation systems used in the Cooley–Tukey factorization.
We may then write the definition of with factors as The argument of e(–) may be expanded as The first summand may be recognized as a twiddle factor, the second and third as the kernels of and , respectively, while the fourth is an integer which may be dropped. We are thus led to a `vectorradix' version of the Cooley–Tukey algorithm, in which the successive decimations may be introduced in all n dimensions simultaneously by general integer matrices. The computation may be decomposed into five stages analogous to those of the onedimensional algorithm of Section 1.3.3.2.1:
The initial point transform can thus be performed as transforms on points, followed by transforms on points. This process can be applied successively to all d factors. The same decomposition applies to , up to the complex conjugation of twiddle factors, the normalization factor being obtained as the product of the factors in the successive partial transforms .
The geometric interpretation of this factorization in terms of partial transforms on translates of sublattices applies in full to this ndimensional setting; in particular, the twiddle factors are seen to be related to the residual translations which place the sublattices in register within the big lattice. If the intermediate transforms are performed in place, then the quantity will eventually be found at location so that the final results will have to be unscrambled by a process which may be called `coset reversal', the vector equivalent of digit reversal.
Factoring by 2 in all n dimensions simultaneously, i.e. taking , leads to `ndimensional butterflies'. Decimation in time corresponds to the choice , so that is an ndimensional parity class; the calculation then proceeds by Decimation in frequency corresponds to the choice , , so that labels `octant' blocks of shape M; the calculation then proceeds through the following steps: i.e. the parity classes of results, corresponding to the different , are obtained separately. When the dimension n is 2 and the decimating matrix is diagonal, this analysis reduces to the `vector radix FFT' algorithms proposed by Rivard (1977) and Harris et al. (1977). These lead to substantial reductions in the number M of multiplications compared to the row–column method: M is reduced to by simultaneous factoring, and to by simultaneous factoring.
The use of a nondiagonal decimating matrix may bring savings in computing time if the spectrum of the bandlimited function under study is of such a shape as to pack more compactly in a nonrectangular than in a rectangular lattice (Mersereau, 1979). If, for instance, the support K of the spectrum Φ is contained in a sphere, then a decimation matrix producing a close packing of these spheres will yield an aliasingfree DFT algorithm with fewer sample points than the standard algorithm using a rectangular lattice.
Suppose that the decimation matrix N is diagonal and let each diagonal element be written in terms of its prime factors: where m is the total number of distinct prime factors present in the .
The CRT may be used to turn each 1D transform along dimension i into a multidimensional transform with a separate `pseudodimension' for each distinct prime factor of ; the number , of these pseudodimensions is equal to the cardinality of the set: The full ndimensional transform thus becomes μdimensional, with .
We may now permute the μ pseudodimensions so as to bring into contiguous position those corresponding to the same prime factor ; the m resulting groups of pseudodimensions are said to define `pprimary' blocks. The initial transform is now written as a tensor product of m pprimary transforms, where transform j is on points [by convention, dimension i is not transformed if ]. These pprimary transforms may be computed, for instance, by multidimensional Cooley–Tukey factorization (Section 1.3.3.3.1), which is faster than the straightforward row–column method. The final results may then be obtained by reversing all the permutations used.
The extra gain with respect to the multidimensional Cooley–Tukey method is that there are no twiddle factors between pprimary pieces corresponding to different primes p.
The case where N is not diagonal has been examined by Guessoum & Mersereau (1986).
Suppose that the CRT has been used as above to map an ndimensional DFT to a μdimensional DFT. For each [κ runs over those pairs (i, j) such that ], the Rader/Winograd procedure may be applied to put the matrix of the κth 1D DFT in the CBA normal form of a Winograd small FFT. The full DFT matrix may then be written, up to permutation of data and results, as
A well known property of the tensor product of matrices allows this to be rewritten as and thus to form a matrix in which the combined preaddition, multiplication and postaddition matrices have been precomputed. This procedure, called nesting, can be shown to afford a reduction of the arithmetic operation count compared to the row–column method (Morris, 1978).
Clearly, the nesting rearrangement need not be applied to all μ dimensions, but can be restricted to any desired subset of them.
Nussbaumer's approach views the DFT as the evaluation of certain polynomials constructed from the data (as in Section 1.3.3.2.4). For instance, the 1D N-point DFT may be written where the polynomial Q is defined by
Let us consider (Nussbaumer & Quandalle, 1979) a 2D transform of size : By introduction of the polynomials this may be rewritten:
Let us now suppose that is coprime to N. Then has a unique inverse modulo N (denoted by ), so that multiplication by simply permutes the elements of and hence for any function f over . We may thus write: where Since only the value of polynomial at is involved in the result, the computation of may be carried out modulo the unique cyclotomic polynomial such that . Thus, if we define: we may write: or equivalently
For N an odd prime p, all nonzero values of are coprime with p so that the point DFT may be calculated as follows:
Step (1) is a set of p `polynomial transforms' involving no multiplications; step (2) consists of p DFTs on p points each; step (3) is a permutation; and step (4) is a p-point DFT. Thus the 2D DFT on p × p points, which takes 2p p-point DFTs by the row–column method, involves only (p + 1) p-point DFTs; the other DFTs have been replaced by polynomial transforms involving only additions.
This procedure can be extended to n dimensions, and reduces the number of 1D p-point DFTs from np^{n−1} for the row–column method to (p^n − 1)/(p − 1), at the cost of introducing extra additions in the polynomial transforms.
A similar algorithm has been formulated by Auslander et al. (1983) in terms of Galois theory.
The mathematical analysis of the structure of DFT computations has brought to light a broad variety of possibilities for reducing or reshaping their arithmetic complexity. All of them are `analytic' in that they break down large transforms into a succession of smaller ones.
These results may now be considered from the converse `synthetic' viewpoint as providing a list of procedures for assembling them:

The simplest DFTs may then be assembled into a global algorithm in many different ways. The diagrams in Fig. 1.3.3.1 illustrate a few of the options available to compute a 400-point DFT. They may differ greatly in their arithmetic operation counts.
To obtain a truly useful measure of the computational complexity of a DFT algorithm, its arithmetic operation count must be tempered by computer architecture considerations. Three main types of tradeoffs must be borne in mind:

Many of the mathematical developments above took place in the context of singleprocessor serial computers, where f.p. addition is substantially cheaper than f.p. multiplication but where integer address arithmetic has to compete with f.p. arithmetic for processor cycles. As a result, the alternatives to the Cooley–Tukey algorithm hardly ever led to particularly favourable tradeoffs, thus creating the impression that there was little to gain by switching to more exotic algorithms.
The advent of new machine architectures with vector and/or parallel processing features has greatly altered this picture (Pease, 1968; Korn & Lambiotte, 1979; Fornberg, 1981; Swarztrauber, 1984):

Another major consideration is that of data flow [see e.g. Nawab & McClellan (1979)]. Serial machines have only a few registers and few paths connecting them, and allow little or no overlap between computation and data movement. New architectures, on the other hand, comprise banks of vector registers (or `cache memory') besides the usual internal registers, and dedicated ALUs can service data transfers between several of them simultaneously and concurrently with computation.
In this new context, the devices described in Sections 1.3.3.2 and 1.3.3.3 for altering the balance between the various types of arithmetic operations, and reshaping the data flow during the computation, are invaluable. The field of machinedependent DFT algorithm design is thriving on them [see e.g. Temperton (1983a,b,c, 1985); Agarwal & Cooley (1986, 1987)].
In order to explore systematically all possible algorithms for carrying out a given DFT computation, and to pick the one best suited to a given machine, attempts have been made to develop:

Task (i) can be accomplished by systematic use of a tensor product notation to represent the various stages into which the DFT can be factored (reindexing, small transforms on subsets of indices, twiddle factors, digitreversal permutations).
Task (ii) may for instance use the Winograd CBA normal form for each small transform, then apply the rules governing the rearrangement of tensor product and ordinary product × operations on matrices. The matching of these rearrangements to the architecture of a vector and/or parallel computer can be formalized algebraically [see e.g. Chapter 2 of Tolimieri et al. (1989)].
Task (iii) is a complex search which requires techniques such as dynamic programming (Bellman, 1958).
Johnson & Burrus (1983) have proposed and tested such a scheme to identify the optimal tradeoffs between prime factor nesting and Winograd nesting of small Winograd transforms. In step (ii), they further decomposed the preaddition matrix A and postaddition matrix C into several factors, so that the number of design options available becomes very large: the N-point DFT when N has four factors can be calculated in over 10^{12} distinct ways.
This large family of nested algorithms contains the prime factor algorithm and the Winograd algorithms as particular cases, but usually achieves greater efficiency than either by reducing the f.p. multiplication count while keeping the number of f.p. additions small.
There is little doubt that this systematic approach will be extended so as to incorporate all available methods of restructuring the DFT.
The central role of the Fourier transformation in Xray crystallography is a consequence of the kinematic approximation used in the description of the scattering of Xrays by a distribution of electrons (Bragg, 1915; Duane, 1925; Havighurst, 1925a,b; Zachariasen, 1945; James, 1948a, Chapters 1 and 2; Lipson & Cochran, 1953, Chapter 1; Bragg, 1975).
Let be the density of electrons in a sample of matter contained in a finite region V which is being illuminated by a parallel monochromatic Xray beam with wavevector . Then the farfield amplitude scattered in a direction corresponding to wavevector is proportional to
In certain model calculations, the `sample' may contain not only volume charges, but also point, line and surface charges. These singularities may be accommodated by letting ρ be a distribution, and writing F is still a well behaved function (analytic, by Section 1.3.2.4.2.10) because ρ has been assumed to have compact support.
If the sample is assumed to be an infinite crystal, so that ρ is now a periodic distribution, the customary limiting process by which it is shown that F becomes a discrete series of peaks at reciprocallattice points (see e.g. von Laue, 1936; Ewald, 1940; James, 1948a p. 9; Lipson & Taylor, 1958, pp. 14–27; Ewald, 1962, pp. 82–101; Warren, 1969, pp. 27–30) is already subsumed under the treatment of Section 1.3.2.6.
Let ρ be the distribution of electrons in a crystal. Then, by definition of a crystal, ρ is Λperiodic for some period lattice Λ (Section 1.3.2.6.5) so that there exists a motif distribution with compact support such that where . The lattice Λ is usually taken to be the finest for which the above representation holds.
Let Λ have a basis over the integers, these basis vectors being expressed in terms of a standard orthonormal basis as Then the matrix is the period matrix of Λ (Section 1.3.2.6.5) with respect to the unit lattice with basis , and the volume V of the unit cell is given by .
By Fourier transformation where is the lattice distribution associated to the reciprocal lattice . The basis vectors have coordinates in given by the columns of , whose expression in terms of the cofactors of A (see Section 1.3.2.6.5) gives the familiar formulae involving the cross product of vectors for . The Hdistribution F of scattered amplitudes may be written and is thus a weighted reciprocallattice distribution, the weight attached to each node being the value at H of the transform of the motif . Taken in conjunction with the assumption that the scattering is elastic, i.e. that H only changes the direction but not the magnitude of the incident wavevector , this result yields the usual forms (Laue or Bragg) of the diffraction conditions: , and simultaneously H lies on the Ewald sphere.
By the reciprocity theorem, can be recovered if F is known for all as follows [Section 1.3.2.6.5, e.g. (iv)]:
These relations may be rewritten in terms of standard, or `fractional crystallographic', coordinates by putting so that a unit cell of the crystal corresponds to , and that . Defining and by so that we have These formulae are valid for an arbitrary motif distribution , provided the convergence of the Fourier series for is considered from the viewpoint of distribution theory (Section 1.3.2.6.10.3).
The experienced crystallographer may notice the absence of the familiar factor 1/V from the expression for ρ just given. This is because we use the (mathematically) natural unit for ρ, the electron per unit cell, which matches the dimensionless nature of the crystallographic coordinates x and of the associated volume element. The traditional factor 1/V was the result of the somewhat inconsistent use of x as an argument but of a Cartesian volume element to obtain ρ in electrons per unit volume (e.g. e Å^{−3}). A fortunate consequence of the present convention is that nuisance factors of V or 1/V, which used to abound in convolution or scalar product formulae, are now absent.
It should be noted at this point that the crystallographic terminology regarding and differs from the standard mathematical terminology introduced in Section 1.3.2.4.1 and applied to periodic distributions in Section 1.3.2.6.4: F is the inverse Fourier transform of ρ rather than its Fourier transform, and the calculation of ρ is called a Fourier synthesis in crystallography even though it is mathematically a Fourier analysis. The origin of this discrepancy may be traced to the fact that the mathematical theory of the Fourier transformation originated with the study of temporal periodicity, while crystallography deals with spatial periodicity; since the expression for the phase factor of a plane wave is , the difference in sign between the contributions from time versus spatial displacements makes this conflict unavoidable.
In many cases, the motif is a sum of translates of atomic electron-density distributions. Assume there are n distinct chemical types of atoms, with identical isotropic atoms of type j described by an electron distribution about their centre of mass. According to quantum mechanics each is a smooth rapidly decreasing function of x, hence and (ignoring the effect of thermal agitation) which may be written (Section 1.3.2.5.8) By Fourier transformation: Defining the form factor of atom j as a function of h to be we have If the real- and reciprocal-space coordinates are in Å and Å^{−1}, and if is the spherically symmetric electron-density function for atom type j, then
More complex expansions are used for electron-density studies (see Chapter 1.2 in this volume). Anisotropic Gaussian atoms may be dealt with through the formulae given in Section 1.3.2.4.4.2.
The convergence of the Fourier series for ρ is usually examined from the classical point of view (Section 1.3.2.6.10). The summation of multiple Fourier series meets with considerable difficulties, because there is no natural order in Z^3 to play the role of the natural order in Z (Ash, 1976). In crystallography, however, the structure factors are often obtained within spheres of increasing resolution (decreasing Δ). Therefore, successive estimates of ρ are most naturally calculated as the corresponding partial sums (Section 1.3.2.6.10.1). This may be written as a convolution of ρ with a `spherical Dirichlet kernel', which exhibits numerous negative ripples around its central peak. Thus the `series termination errors' incurred by using the partial sum instead of ρ consist of negative ripples around each atom, and may lead to a Gibbs-like phenomenon (Section 1.3.2.6.10.1) near a molecular boundary.
As in one dimension, Cesàro sums (arithmetic means of partial sums) have better convergence properties, as they lead to a convolution by a `spherical Fejér kernel' which is everywhere positive. Thus Cesàro summation will always produce positive approximations to a positive electron density. Other positive summation kernels were investigated by Pepinsky (1952) and by Waser & Schomaker (1953).
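The contrast between Dirichlet and Fejér summation can be illustrated in one dimension with a point `atom' at the origin, whose Fourier coefficients are all equal to 1 (a sketch; the function names are illustrative). The truncated partial sums reproduce the Dirichlet kernel with its negative ripples, while their arithmetic means reproduce the everywhere-nonnegative Fejér kernel:

```python
import math

def dirichlet_partial(x, M):
    """Partial sum of the Fourier series with all coefficients equal to 1
    truncated at |h| <= M: the Dirichlet kernel (real-valued)."""
    return 1 + 2 * sum(math.cos(2 * math.pi * h * x) for h in range(1, M + 1))

def cesaro_mean(x, M):
    """Arithmetic mean of the partial sums S_0 ... S_M: the Fejer kernel,
    which is nonnegative everywhere."""
    return sum(dirichlet_partial(x, m) for m in range(M + 1)) / (M + 1)

xs = [i / 400 for i in range(400)]          # one period, sampled
min_dirichlet = min(dirichlet_partial(x, 8) for x in xs)
min_fejer = min(cesaro_mean(x, 8) for x in xs)
assert min_dirichlet < -0.5    # series termination ripples go negative
assert min_fejer > -1e-9       # Cesaro (Fejer) summation stays nonnegative
```

The same mechanism, with spherical truncation in three dimensions, produces the negative ripples around atoms described above.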
If the wavelength λ of the incident X-rays is far from any absorption edge of the atoms in the crystal, there is a constant phase shift in the scattering, and the electron density may be considered to be real-valued. Then F(−h) = conj[F(h)]. Thus if F(h) = |F(h)| exp(iφ(h)), then |F(−h)| = |F(h)| and φ(−h) = −φ(h). This is Friedel's law (Friedel, 1913). The set of Fourier coefficients is said to have Hermitian symmetry.
If λ is close to some absorption edge(s), the proximity to resonance induces an extra phase shift, whose effect may be represented by letting take on complex values. Let and correspondingly, by termwise Fourier transformation
Since and are both real, and are both Hermitian symmetric, hence while Thus , so that Friedel's law is violated. The components and , which do obey Friedel's law, may be expressed as:
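Both Friedel's law and its violation by anomalous scattering can be checked numerically on a toy one-dimensional structure of point atoms (a sketch; coordinates and scattering factors are invented for illustration, and the structure-factor convention F(h) = Σ_j f_j exp(2πi h x_j) follows the sign convention noted in Section 1.3.4.2.1.3):

```python
import cmath

def structure_factor(h, atoms):
    """F(h) = sum_j f_j exp(2 pi i h x_j) for point atoms at fractional
    coordinates x_j with (possibly complex) scattering factors f_j."""
    return sum(f * cmath.exp(2j * cmath.pi * h * x) for x, f in atoms)

# Real scattering factors: Friedel's law F(-h) = conj[F(h)] holds.
real_atoms = [(0.1, 6.0), (0.37, 8.0), (0.62, 26.0)]
for h in range(1, 5):
    F = structure_factor(h, real_atoms)
    Fm = structure_factor(-h, real_atoms)
    assert abs(Fm - F.conjugate()) < 1e-9

# A complex (anomalous) scatterer breaks the symmetry: |F(h)| != |F(-h)|.
anom_atoms = [(0.1, 6.0), (0.37, 8.0), (0.62, 26.0 + 3.0j)]
F = structure_factor(1, anom_atoms)
Fm = structure_factor(-1, anom_atoms)
assert abs(abs(F) - abs(Fm)) > 1e-3
```

The intensity difference between the two members of a Friedel pair is the experimental signature of anomalous scattering.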
By Section 1.3.2.4.3.3 and Section 1.3.2.6.10.2, Usually is real and positive, hence , but the identity remains valid even when is made complexvalued by the presence of anomalous scatterers.
If is the collection of structure factors belonging to another electron density with the same period lattice as ρ, then Thus, norms and inner products may be evaluated either from structure factors or from `maps'.
Let and be two electron densities referred to crystallographic coordinates, with structure factors and , so that
The distribution is well defined, since the generalized support condition (Section 1.3.2.3.9.7) is satisfied. The forward version of the convolution theorem implies that if then
If either or is infinitely differentiable, then the distribution exists, and if we analyse it as then the backward version of the convolution theorem reads:
The cross correlation between and is the periodic distribution defined by: If and are locally integrable, Let The combined use of the shift property and of the forward convolution theorem then gives immediately: hence the Fourier series representation of : Clearly, , as shown by the fact that permuting F and G changes into its complex conjugate.
The autocorrelation of is defined as and is called the Patterson function of . If consists of point atoms, i.e. then contains information about interatomic vectors. It has the Fourier series representation and is therefore calculable from the diffraction intensities alone. It was first proposed by Patterson (1934, 1935a,b) as an extension to crystals of the radially averaged correlation function used by Warren & Gingrich (1934) in the study of powders.
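The fact that the Patterson function is computable from intensities alone, and coincides with the autocorrelation of the density, can be checked on a small grid (a sketch; function names and the 16-point example are illustrative, with the crystallographic sign convention of Section 1.3.4.2.1.3):

```python
import cmath

def dft_coeffs(rho):
    """Structure factors F(h) = (1/N) sum_x rho(x) exp(2 pi i h x / N)
    on an N-point grid (crystallographic sign convention)."""
    N = len(rho)
    return [sum(rho[n] * cmath.exp(2j * cmath.pi * h * n / N)
                for n in range(N)) / N for h in range(N)]

def patterson_from_intensities(F):
    """P(u) = sum_h |F(h)|^2 exp(-2 pi i h u / N): needs intensities only."""
    N = len(F)
    return [sum(abs(F[h]) ** 2 * cmath.exp(-2j * cmath.pi * h * u / N)
                for h in range(N)).real for u in range(N)]

def autocorrelation(rho):
    """P(u) = (1/N) sum_x rho(x) rho(x + u) for a real periodic density."""
    N = len(rho)
    return [sum(rho[n] * rho[(n + u) % N] for n in range(N)) / N
            for u in range(N)]

# Two 'point atoms' on a 16-point grid: the Patterson peaks at u = 0 and
# at the interatomic vector (here +/-5).
rho = [0.0] * 16
rho[2], rho[7] = 6.0, 8.0
P1 = patterson_from_intensities(dft_coeffs(rho))
P2 = autocorrelation(rho)
assert max(abs(a - b) for a, b in zip(P1, P2)) < 1e-9
```

The agreement of the two computations is just the convolution theorem applied to ρ with itself; the phases of the F(h) never enter.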
Shannon's sampling and interpolation theorem (Section 1.3.2.7.1) takes two different forms, according to whether the property of finite bandwidth is assumed in real space or in reciprocal space.

It was shown at the end of Section 1.3.2.5.8 that the convolution theorem establishes, under appropriate assumptions, a duality between sectioning a smooth function (viewed as a multiplication by a δ-function in the sectioning coordinate) and projecting its transform (viewed as a convolution with the constant function 1 along the projection coordinate). This duality follows from the fact that δ and 1 map to each other under the Fourier transformation (Section 1.3.2.5.6), and from the tensor product property (Section 1.3.2.5.5).
In the case of periodic distributions, projection and section must be performed with respect to directions or subspaces which are integral with respect to the period lattice if the result is to be periodic; furthermore, projections must be performed only on the contents of one repeating unit along the direction of projection, or else the result would diverge. The same relations then hold between principal central sections and projections of the electron density and the dual principal central projections and sections of the weighted reciprocal lattice, e.g. etc.
When the sections are principal but not central, it suffices to use the shift property of Section 1.3.2.5.5. When the sections or projections are not principal, they can be made principal by changing to new primitive bases B and for Λ and , respectively, the transition matrices P and to these new bases being related by in order to preserve duality. This change of basis must be such that one of these matrices (say, P) should have a given integer vector u as its first column, u being related to the line or plane defining the section or projection of interest.
The problem of constructing a matrix P given u received an erroneous solution in Volume II of International Tables (Patterson, 1959), which was subsequently corrected in 1962. Unfortunately, the solution proposed there is complicated and does not suggest a general approach to the problem. It therefore seems worthwhile to record here an effective procedure which solves this problem in any dimension n (Watson, 1970).
Let be a primitive integral vector, i.e. g.c.d. . Then an integral matrix P with det having u as its first column can be constructed by induction as follows. For the result is trivial. For it can be solved by means of the Euclidean algorithm, which yields such that , so that we may take . Note that, if is a solution, then is another solution for any . For , write with so that both and are primitive. By the inductive hypothesis there is an integral matrix V with as its first column, and an integral matrix Z with z as its first column, with and .
Now combine V and Z into a single integral matrix P by suitable embedding: the first column of P is u and its determinant is 1, QED.
The incremental step from dimension to dimension n is the construction of matrix V, for which there exist infinitely many solutions labelled by an integer . Therefore, the collection of matrices P which solve the problem is labelled by arbitrary integers . This freedom can be used to adjust the shape of the basis B.
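The construction can be sketched in code. The implementation below is not Watson's recursion itself but an equivalent procedure based on accumulated extended-Euclid steps (each step multiplies P on the right by the inverse of an elementary 2 × 2 unimodular block, preserving the invariant P v = u); the function names are hypothetical.

```python
from math import gcd

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), g >= 0."""
    if b == 0:
        return (a, 1, 0) if a >= 0 else (-a, -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def complete_to_unimodular(u):
    """Integer matrix P with det(P) == +1 and the primitive vector u
    (g.c.d. of entries equal to 1) as its first column."""
    n = len(u)
    v = list(u)
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for i in range(1, n):
        a, b = v[0], v[i]
        if b == 0:
            continue
        g, x, y = ext_gcd(a, b)  # x*a + y*b == g
        for r in range(n):       # P <- P @ (elementary block)^(-1)
            p0, pi = P[r][0], P[r][i]
            P[r][0] = p0 * (a // g) + pi * (b // g)
            P[r][i] = -p0 * y + pi * x
        v[0], v[i] = g, 0
    assert v[0] == 1, "u must be primitive"
    return P

def det(M):
    """Laplace expansion; adequate for the small matrices used here."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

P = complete_to_unimodular([2, 6, 9])
assert [row[0] for row in P] == [2, 6, 9]
assert det(P) == 1
```

The remaining columns are one particular choice among the infinitely many discussed above; different orderings of the elimination steps realize part of that freedom.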
Once P has been chosen, the calculation of general sections and projections is transformed into that of principal sections and projections by the changes of coordinates: and an appeal to the tensor product property.
Booth (1945a) made use of the convolution theorem to form the Fourier coefficients of `bounded projections', which provided a compromise between 2D and 3D Fourier syntheses. Suppose it is desired to compute the projection on the (x, y) plane of the electron density lying between the planes z = z₁ and z = z₂; this slab of density is the product of ρ with the indicator function of the interval [z₁, z₂], so that by the convolution theorem the (h, k) coefficient of the bounded projection is a convolution along l of the F(hkl) with the transform of that indicator function.
Another particular instance of the convolution theorem is the duality between differentiation and multiplication by a monomial (Sections 1.3.2.4.2.8, 1.3.2.5.8).
In the present context, this result may be written in Cartesian coordinates, and in crystallographic coordinates.
A particular case of the first formula is that for the Laplacian ∇²ρ of ρ, whose transform is −4π² times the squared length of the reciprocal-space vector times the transform of ρ.
The second formula has been used with or 2 to compute `differential syntheses' and refine the location of maxima (or other stationary points) in electrondensity maps. Indeed, the values at x of the gradient vector and Hessian matrix are readily obtained as and a step of Newton iteration towards the nearest stationary point of will proceed by
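The gradient/Hessian evaluation and the Newton step can be sketched in one dimension (a hypothetical two-term Fourier synthesis, not an example from the text): derivatives of ρ are obtained by multiplying the coefficients by powers of 2πih, and the iteration refines the position of a maximum.

```python
import numpy as np

# Hypothetical real 1D synthesis rho(x) = 2 * sum_h F_h cos(2*pi*h*x),
# with phases zero so that the maximum sits exactly at the origin.
h = np.array([1, 2])
F = np.array([1.0, 0.4])

def rho(x):
    return 2 * np.sum(F * np.cos(2 * np.pi * h * x))

def drho(x):   # coefficients multiplied by 2*pi*i*h  ->  derivative
    return -4 * np.pi * np.sum(F * h * np.sin(2 * np.pi * h * x))

def d2rho(x):  # coefficients multiplied by (2*pi*i*h)^2  ->  Hessian
    return -8 * np.pi ** 2 * np.sum(F * h ** 2 * np.cos(2 * np.pi * h * x))

# Newton iteration toward the nearest stationary point of rho.
x = 0.05
for _ in range(20):
    x -= drho(x) / d2rho(x)

assert abs(drho(x)) < 1e-9   # converged to a stationary point
assert abs(x) < 1e-9         # namely the maximum at the origin
```

In three dimensions the scalar second derivative becomes the Hessian matrix and the division becomes the solution of a 3 × 3 linear system, exactly as in the differential-synthesis refinements cited above.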
The modern use of Fourier transforms to speed up the computation of derivatives for model refinement will be described in Section 1.3.4.4.7.
The converse property is also useful: it relates the derivatives of the continuous transform of ρ⁰ to the moments of ρ⁰. For first- and second-order derivatives, this identity gives the well known relation between the Hessian matrix of the transform at the origin of reciprocal space and the inertia tensor of the motif ρ⁰. This is a particular case of the moment-generating properties of the Fourier transformation, which will be further developed in Section 1.3.4.5.2.
The classical results presented in Section 1.3.2.6.9 can be readily generalized to the case of triple Fourier series; no new concept is needed, only an obvious extension of the notation.
Let ρ be real-valued, so that Friedel's law holds and F(−h) is the complex conjugate of F(h). Let H = {h₁ = 0, h₂, …, h_n} be a finite set of indices comprising the origin. Then the Hermitian form in n complex variables T_H[ρ](u) = Σ_{j,k} F(h_j − h_k) u_j conj(u_k) is called the Toeplitz form of order H associated to ρ. By the convolution theorem and Parseval's identity, T_H[ρ](u) = ∫ ρ(x) |Σ_j u_j exp(2πi h_j · x)|² d³x. If ρ is almost everywhere non-negative, then for all H the forms T_H[ρ] are positive semi-definite and therefore all Toeplitz determinants D_H[ρ] = det [F(h_j − h_k)] are non-negative.
The Toeplitz–Carathéodory–Herglotz theorem given in Section 1.3.2.6.9.2 states that the converse is true: if all the Toeplitz determinants are non-negative, then ρ is almost everywhere non-negative. This result is known in the crystallographic literature through the papers of Karle & Hauptman (1950), MacGillavry (1950), and Goedkoop (1950), following previous work by Harker & Kasper (1948) and Gillis (1948a,b).
Szegö's study of the asymptotic distribution of the eigenvalues of Toeplitz forms as their order tends to infinity remains valid. Some precautions are needed, however, to define the notion of a sequence of finite subsets of indices tending to infinity: it suffices that they should consist essentially of the reciprocal-lattice points h contained within a domain of the form kΩ (k-fold dilation of Ω), where Ω is a convex domain in reciprocal space containing the origin (Widom, 1960). Under these circumstances, the eigenvalues of the Toeplitz forms become equidistributed with the sample values of the underlying function on a grid satisfying the Shannon sampling criterion for the data in kΩ (cf. Section 1.3.2.6.9.3).
A particular consequence of this equidistribution is that the geometric means of the eigenvalues and of the sample values are asymptotically equal, and hence, as in Section 1.3.2.6.9.4, the Toeplitz determinant of order n grows like the nth power of the geometric mean of the underlying function. Complementary terms giving a better comparison of the two sides were obtained by Widom (1960, 1975) and Linnik (1975).
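Both properties can be illustrated numerically in a 1D analogue (density, grid size and tolerances hypothetical): the Toeplitz matrices built from the Fourier coefficients of a non-negative density are positive semi-definite, and the nth root of their determinants approaches the geometric mean of the density.

```python
import numpy as np

# Hypothetical non-negative 1D "density" on a unit cell, sampled at N points.
N = 256
x = np.arange(N) / N
rho = 2.0 + np.cos(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)  # rho >= 1.25

F = np.fft.fft(rho) / N          # Fourier coefficients F_h of rho

def toeplitz_form(n):
    """Toeplitz (Karle-Hauptman-type) matrix with entries F_(j-k)."""
    return np.array([[F[(j - k) % N] for k in range(n)] for j in range(n)])

# Non-negativity of rho implies every Toeplitz form is positive semi-definite.
for n in (2, 4, 8):
    assert np.all(np.linalg.eigvalsh(toeplitz_form(n)) >= -1e-10)

# Szego: det(T_n)**(1/n) tends to the geometric mean of rho as n grows.
geo_mean = np.exp(np.mean(np.log(rho)))
d = np.linalg.det(toeplitz_form(40)).real
assert abs(d ** (1.0 / 40) - geo_mean) < 0.05
```

The quantity exp(mean of log ρ) appearing here is precisely the `Burg entropy' functional encountered in the phasing methods cited below.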
This formula played an important role in the solution of the 2D Ising model by Onsager (1944) (see Montroll et al., 1963). It is also encountered in phasing methods involving the `Burg entropy' (Britten & Collins, 1982; Narayan & Nityananda, 1982; Bricogne, 1982, 1984, 1988).
The description of a crystal given so far has dealt only with its invariance under the action of the (discrete Abelian) group of translations by vectors of its period lattice Λ.
Let the crystal now be embedded in Euclidean 3-space, so that it may be acted upon by the group M(3) of rigid (i.e. distance-preserving) motions of that space. The group M(3) contains a normal subgroup T(3) of translations, and the quotient group may be identified with the 3-dimensional orthogonal group O(3). The period lattice Λ of a crystal is a discrete uniform subgroup of T(3).
The possible invariance properties of a crystal under the action of M(3) are captured by the following definition: a crystallographic group is a subgroup Γ of M(3) if its translation subgroup Γ ∩ T(3) consists of the translations by the vectors of a period lattice Λ, and if its point group G = Γ/(Γ ∩ T(3)) is finite.
The two properties are not independent: by a theorem of Bieberbach (1911), they follow from the assumption that Γ is a discrete subgroup of the group of rigid motions which operates without accumulation point and with a compact fundamental domain (see Auslander, 1965). These two assumptions imply that G acts on Λ through an integral representation, and this observation leads to a complete enumeration of all distinct Γ's. The mathematical theory of these groups is still an active research topic (see, for instance, Farkas, 1981), and has applications to Riemannian geometry (Wolf, 1967).
This classification of crystallographic groups is described elsewhere in these Tables (Wondratschek, 2005), but it will be surveyed briefly in Section 1.3.4.2.2.3 for the purpose of establishing further terminology and notation, after recalling basic notions and results concerning groups and group actions in Section 1.3.4.2.2.2.
The books by Hall (1959) and Scott (1964) are recommended as reference works on group theory.

Let Γ be a crystallographic group, Λ the normal subgroup of its lattice translations, and G the finite factor group Γ/Λ. Then G acts on Λ by conjugation [Section 1.3.4.2.2.2(d)] and this action, being a mapping of a lattice into itself, is representable by matrices with integer entries.
The classification of crystallographic groups proceeds from this observation in the following three steps:

Step 1 leads to a list of finite groups, each associated with the crystal system to which it later gives rise, together with the extensions of these groups by a centre of inversion. In this list ⋉ denotes a semi-direct product [Section 1.3.4.2.2.2(d)], α denotes the relevant automorphism, and S₃ (the group of permutations on three letters) operates by permuting the three copies of the factor to which it is applied (using the subgroup A₃ of cyclic permutations gives the tetrahedral subsystem).
Step 2 leads to a list of 73 equivalence classes called arithmetic classes of representations g ↦ M_g, where M_g is a 3 × 3 integer matrix and g ↦ M_g is a homomorphism. This enumeration is more familiar if equivalence is relaxed so as to allow conjugation by rational matrices with determinant ±1: this leads to the 32 crystal classes. The difference between an arithmetic class and its rational class resides in the choice of a lattice mode. Arithmetic classes always refer to a primitive lattice, but may use inequivalent integral representations for a given geometric symmetry element; crystallographers prefer to change over to a non-primitive lattice, if necessary, in order to preserve the same integral representation for a given geometric symmetry element. The matrices P and P⁻¹ describing the changes of basis between primitive and centred lattices are listed in Table 5.1.3.1 and illustrated in Figs. 5.1.3.2 to 5.1.3.8, pp. 80–85, of Volume A of International Tables (Arnold, 2005).
Step 3 gives rise to a system of congruences for the systems of nonprimitive translations which may be associated to the matrices of a given arithmetic class, namely: first derived by Frobenius (1911). If equivalence under the action of is taken into account, 219 classes are found. If equivalence is defined with respect to the action of the subgroup of consisting only of transformations with determinant +1, then 230 classes called spacegroup types are obtained. In particular, associating to each of the 73 arithmetic classes a trivial set of nonprimitive translations yields the 73 symmorphic space groups. This third step may also be treated as an abstract problem concerning group extensions, using cohomological methods [Ascher & Janner (1965); see Janssen (1973) for a summary]; the connection with Frobenius's approach, as generalized by Zassenhaus (1948), is examined in Ascher & Janner (1968).
The finiteness of the number of space-group types in dimension 3 was shown by Bieberbach (1912) to hold in arbitrary dimension. The reader interested in N-dimensional space-group theory for N > 3 may consult Brown (1969), Brown et al. (1978), Schwarzenberger (1980), and Engel (1986). The standard reference for integral representation theory is Curtis & Reiner (1962).
All three-dimensional space groups G have the property of being solvable, i.e. there exists a chain of subgroups G = G₁ ⊃ G₂ ⊃ … ⊃ G_r = {e}, where each G_{i+1} is a normal subgroup of G_i and the factor group G_i/G_{i+1} is a cyclic group of some order m_i. This property may be established by inspection, or deduced from a famous theorem of Burnside [see Burnside (1911), pp. 322–323] according to which any group G whose order contains only two distinct prime factors p and q is solvable; in the case at hand p = 2 and q = 3, since the order of a crystallographic point group divides 48. The whole classification of 3D space groups can be performed swiftly by a judicious use of the solvability property (L. Auslander, personal communication).
Solvability facilitates the indexing of elements of G in terms of generators and relations (Coxeter & Moser, 1972; Magnus et al., 1976) for the purpose of calculation. By definition of solvability, elements g₁, …, g_{r−1} may be chosen in such a way that each cyclic factor group G_i/G_{i+1} is generated by the coset g_iG_{i+1}. The set {g₁, …, g_{r−1}} is then a system of generators for G such that the defining relations [see Brown et al. (1978), pp. 26–27] have a particularly simple form. Each element g of G may then be obtained uniquely as an `ordered word' in the g_i, with exponents a_i satisfying 0 ≤ a_i < m_i, using the algorithm of Jürgensen (1970). Such generating sets and defining relations are tabulated in Brown et al. (1978, pp. 61–76). An alternative list is given in Janssen (1973, Table 4.3, pp. 121–123, and Appendix D, pp. 262–271).
The action of a crystallographic group Γ may be written in terms of standard coordinates in as with
An important characteristic of the representation θ is its reducibility, i.e. whether or not it has invariant subspaces other than {0} and the whole of 3-space. For triclinic, monoclinic and orthorhombic space groups, θ is reducible to a direct sum of three one-dimensional representations; for trigonal, tetragonal and hexagonal groups, it is reducible to a direct sum of two representations, of dimension 2 and 1, respectively; while for tetrahedral and cubic groups, it is irreducible.
By Schur's lemma (see e.g. Ledermann, 1987), any matrix which commutes with all the matrices for must be a scalar multiple of the identity in each invariant subspace.
In the reducible cases, the reductions involve changes of basis which will be rational, not integral, for those arithmetic classes corresponding to non-primitive lattices. Thus the simplification of having a maximally reduced representation has as its counterpart the use of non-primitive lattices.
The notions of orbit, isotropy subgroup and fundamental domain (or asymmetric unit) for the action of G on are inherited directly from the general setting of Section 1.3.4.2.2.2. Points x for which are called special positions, and the various types of isotropy subgroups which may be encountered in crystallographic groups have been labelled by means of Wyckoff symbols. The representation operators in have the form: The operators associated to the purely rotational part of each transformation will also be used. Note the relation:
Let a crystal structure be described by the list of the atoms in its unit cell, indexed by k ∈ K. Let the electron-density distribution about the centre of mass of atom k be described by ρ_k with respect to the standard coordinates x. Then the motif ρ⁰ may be written as the sum of the translates of the ρ_k to the atomic positions x_k, and the crystal electron density ρ is the periodization of ρ⁰ by the period lattice Λ.
Suppose that is invariant under Γ. If and are in the same orbit, say , then Therefore if is a special position and thus , then This identity implies that (the special position condition), and that i.e. that must be invariant by the pure rotational part of . Trueblood (1956) investigated the consequences of this invariance on the thermal vibration tensor of an atom in a special position (see Section 1.3.4.2.2.6 below).
Let J be a subset of K such that contains exactly one atom from each orbit. An orbit decomposition yields an expression for in terms of symmetryunique atoms: or equivalently If the atoms are assumed to be Gaussian, write where is the total number of electrons, and where the matrix combines the Gaussian spread of the electrons in atom j at rest with the covariance matrix of the random positional fluctuations of atom j caused by thermal agitation.
In crystallographic coordinates:
If atom k is in a special position x_k, then the matrix U_k describing its Gaussian spread must satisfy the identity R_gU_kR_gᵀ = U_k for all g in the isotropy subgroup of x_k. This condition may also be written in Cartesian coordinates as a conjugation invariance of the Cartesian tensor under the orthogonal matrices representing the rotational parts of those g. This is a condensed form of the symmetry properties derived by Trueblood (1956).
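The invariance condition can be imposed in practice by averaging the tensor over the isotropy subgroup, which projects it onto the symmetry-allowed subspace. The sketch below uses a hypothetical special position on a two-fold axis along z (Cartesian matrices); the function name is invented.

```python
import numpy as np

# Hypothetical isotropy subgroup: identity and a two-fold rotation about z.
G_x = [np.eye(3),
       np.diag([-1.0, -1.0, 1.0])]

def symmetrize_U(U):
    """Average R @ U @ R.T over the isotropy subgroup: the result satisfies
    R @ Us @ R.T == Us for every R in the subgroup (Trueblood's condition)."""
    return sum(R @ U @ R.T for R in G_x) / len(G_x)

U = np.array([[1.0, 0.2, 0.3],
              [0.2, 2.0, 0.4],
              [0.3, 0.4, 3.0]])
Us = symmetrize_U(U)

# The two-fold axis forces U13 = U23 = 0, leaving U11, U22, U33, U12 free.
assert np.isclose(Us[0, 2], 0.0) and np.isclose(Us[1, 2], 0.0)
for R in G_x:
    assert np.allclose(R @ Us @ R.T, Us)
```

For higher-symmetry sites the same averaging yields the familiar stronger constraints (e.g. equality of diagonal components on a three- or four-fold axis).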
An elementary discussion of this topic may be found in Chapter 1.4 of this volume.
Having established that the symmetry of a crystal may be most conveniently stated and handled via the left representation of G given by its action on electrondensity distributions, it is natural to transpose this action by the identity of Section 1.3.2.5.5: for any tempered distribution T, i.e. whenever the transforms are functions.
Putting , a periodic distribution, this relation defines a left action of G on given by which is conjugate to the action in the sense that The identity expressing the Ginvariance of is then equivalent to the identity between its structure factors, i.e. (Waser, 1955a)
If G is made to act on reciprocal space via this conjugate action, the usual notions of orbit, isotropy subgroup (denoted G_h) and fundamental domain may be attached to it. The above relation then shows that the spectrum is entirely known if it is specified on a fundamental domain containing one reciprocal-lattice point from each orbit of this action.
A reflection h is called special if its isotropy subgroup G_h is not reduced to the identity. Then for any g ∈ G_h the rotational part of g leaves h fixed, and hence F(h) = exp(2πi h · t_g) F(h), implying that F(h) = 0 unless h · t_g ≡ 0 (mod 1). Special reflections h for which h · t_g is not an integer for some g ∈ G_h are thus systematically absent. This phenomenon is an instance of the duality between periodization and decimation of Section 1.3.2.7.2: if the non-primitive translation t_g has a component of a qth of a period along the direction concerned, the projection of ρ on the direction of h has a correspondingly shorter period, hence its transform (which is the portion of F supported by the central line through h) will be decimated, giving rise to the above condition.
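This standard absence test (h fixed by the rotational part of some operator while h · t is non-integral) is easily mechanized. The sketch below uses a minimal hypothetical operator list for P2₁ (screw axis along b), for which the 0k0 reflections with k odd are absent.

```python
import numpy as np

# Space group P2_1 (hypothetical minimal operator list): pairs (R, t) acting
# as x -> R x + t in fractional coordinates; 2_1 screw axis along b.
ops = [
    (np.eye(3, dtype=int), np.array([0.0, 0.0, 0.0])),
    (np.diag([-1, 1, -1]), np.array([0.0, 0.5, 0.0])),
]

def is_absent(h):
    """True if some operator fixes h (row vector times R equals h) while
    h . t is not an integer: the systematic-absence condition."""
    h = np.asarray(h)
    for R, t in ops:
        if np.array_equal(h @ R, h) and not np.isclose(float(h @ t) % 1.0, 0.0):
            return True
    return False

assert is_absent([0, 1, 0])       # 0k0, k odd: absent
assert not is_absent([0, 2, 0])   # 0k0, k even: allowed
assert not is_absent([1, 1, 0])   # general reflection: allowed
```

With the full operator list of any space group the same loop reproduces the tabulated reflection conditions.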
A reflection h is called centric if the orbit Gh contains −h. Then the rotational part of some coset γ in G/G_h sends h to −h, so that F(h) and F(−h) are related through the operation γ. In the absence of dispersion, Friedel's law then gives rise to the phase restriction θ(h) ≡ πh · t_γ (mod π). The value of the restricted phase is independent of the choice of coset representative γ. Indeed, if γ′ is another choice, then γ′ = γg with g ∈ G_h, and by the Frobenius congruences t_{γ′} ≡ t_γ + R_γt_g (mod Λ), so that h · t_{γ′} ≡ h · t_γ − h · t_g (mod 1). Since g ∈ G_h, and if h is not a systematic absence, h · t_g ≡ 0 (mod 1); thus the restricted phase is unchanged modulo π.
The treatment of centred lattices may be viewed as another instance of the duality between periodization and decimation (Section 1.3.2.7.2): the periodization of the electron density by the nonprimitive lattice translations has as its counterpart in reciprocal space the decimation of the transform by the `reflection conditions' describing the allowed reflections, the decimation and periodization matrices being each other's contragredient.
The reader may consult the papers by Bienenstock & Ewald (1962) and Wells (1965) for earlier approaches to this material.
Structure factors may be calculated from a list of symmetry-unique atoms by Fourier transformation of the orbit decomposition formula for the motif given in Section 1.3.4.2.2.4: F(h) is obtained as a sum, over the symmetry-unique atoms j and over the space-group operations g, of the (suitably rotated) atomic transform of atom j multiplied by the phase factor associated with the symmetry-equivalent position S_g(x_j).
In the case of Gaussian atoms, the atomic transforms are or equivalently
Two common forms of equivalent temperature factors (incorporating both atomic form and thermal motion) are
In the first case, the temperature factor does not depend on g and may be factored out of the summation over the symmetry operations; in the second case, however, no such simplification can occur. These formulae, or special cases of them, were derived by Rollett & Davies (1955), Waser (1955b), and Trueblood (1956).
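A minimal sketch of the orbit-decomposed structure-factor sum (hypothetical operators and atoms, point-like scatterers with constant form factors in general positions, so that no rotation of the atomic transform and no multiplicity correction is needed):

```python
import numpy as np

# Hypothetical space-group operators (P2_1 as an example), fractional coords.
ops = [
    (np.eye(3), np.array([0.0, 0.0, 0.0])),
    (np.diag([-1.0, 1.0, -1.0]), np.array([0.0, 0.5, 0.0])),
]

# Symmetry-unique atoms in general positions: (form factor, position).
atoms = [(6.0, np.array([0.10, 0.20, 0.30])),
         (8.0, np.array([0.40, 0.15, 0.25]))]

def F(h):
    """F(h) = sum_j f_j sum_g exp(2*pi*i h . (R_g x_j + t_g))."""
    h = np.asarray(h, dtype=float)
    return sum(f * sum(np.exp(2j * np.pi * h @ (R @ x + t)) for R, t in ops)
               for f, x in atoms)

# Cross-check against a direct sum over the symmetry-expanded atom list.
expanded = [(f, R @ x + t) for f, x in atoms for R, t in ops]
F_direct = sum(f * np.exp(2j * np.pi * np.dot([1, 2, 3], x)) for f, x in expanded)
assert np.isclose(F([1, 2, 3]), F_direct)

# Friedel's law for real scatterers: F(-h) = conj(F(h)).
assert np.isclose(F([-1, -2, -3]), np.conj(F([1, 2, 3])))
```

Anisotropic temperature factors would add the g-dependent Gaussian factor discussed above inside the inner sum, which is precisely why that case does not simplify.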
The computation of structure factors by applying the discrete Fourier transform to a set of electrondensity values calculated on a grid will be examined in Section 1.3.4.4.5.
A formula for the Fourier synthesis of electrondensity maps from symmetryunique structure factors is readily obtained by orbit decomposition: where L is a subset of such that contains exactly one point of each orbit for the action of G on . The physical electron density per cubic ångström is then with V in Å^{3}.
In the absence of anomalous scatterers in the crystal and of a centre of inversion −I in Γ, the spectrum has an extra symmetry, namely the Hermitian symmetry expressing Friedel's law (Section 1.3.4.2.1.4). The action of a centre of inversion may be added to that of Γ to obtain further simplification in the above formula: under this extra action, an orbit with is either mapped into itself or into the disjoint orbit ; the terms corresponding to and may then be grouped within the common orbit in the first case, and between the two orbits in the second case.

The general statement of Parseval's theorem given in Section 1.3.4.2.1.5 may be rewritten in terms of symmetryunique structure factors and electron densities by means of orbit decomposition.
In reciprocal space, for each l, the summands corresponding to the various are equal, so that the lefthand side is equal to
In real space, the triple integral may be rewritten as (where D is the asymmetric unit) if and are smooth densities, since the set of special positions has measure zero. If, however, the integral is approximated as a sum over a Ginvariant grid defined by decimation matrix N, special positions on this grid must be taken into account: where the discrete asymmetric unit D contains exactly one point in each orbit of G in .
The standard convolution theorems derived in the absence of symmetry are readily seen to follow from simple properties of functions (denoted simply e in formulae which are valid for both signs), namely: These relations imply that the families of functions both generate an algebra of functions, i.e. a vector space endowed with an internal multiplication, since (i) and (ii) show how to `linearize products'.
Friedel's law (when applicable) on the one hand, and the Fourier relation between intensities and the Patterson function on the other hand, both follow from the property
When crystallographic symmetry is present, the convolution theorems remain valid in their original form if written out in terms of `expanded' data, but acquire a different form when rewritten in terms of symmetryunique data only. This rewriting is made possible by the extra relation (Section 1.3.4.2.2.5) or equivalently
The kernels of symmetrized Fourier transforms are not the functions e but rather the symmetrized sums for which the linearization formulae are readily obtained using (i), (ii) and (iv) as where the choice of sign in ± must be the same throughout each formula.
Formulae defining the `structurefactor algebra' associated to G were derived by Bertaut (1955c, 1956b,c, 1959a,b) and Bertaut & Waser (1957) in another context.
The forward convolution theorem (in discrete form) then follows. Let then with
The backward convolution theorem is derived similarly. Let then with Both formulae are simply orbit decompositions of their symmetryfree counterparts.
Consider two model electron densities and with the same period lattice and the same space group G. Write their motifs in terms of atomic electron densities (Section 1.3.4.2.2.4) as where and label the symmetryunique atoms placed at positions and , respectively.
To calculate the correlation between and we need the following preliminary formulae, which are easily established: if and f is an arbitrary function on , then hence and
The cross correlation between motifs is therefore which contains a peak of shape at the interatomic vector for each , , , .
The crosscorrelation between the original electron densities is then obtained by further periodizing by .
Note that these expressions are valid for any choice of `atomic' density functions and , which may be taken as molecular fragments if desired (see Section 1.3.4.4.8).
If G contains elements g such that has an eigenspace with eigenvalue 1 and an invariant complementary subspace , while has a nonzero component in , then the Patterson function will contain Harker peaks (Harker, 1936) of the form [where represent the action of g in ] in the translate of by .
In 1929, W. L. Bragg demonstrated the practical usefulness of the Fourier transform relation between electron density and structure factors by determining the structure of diopside from three principal projections calculated numerically by 2D Fourier summation (Bragg, 1929). It was immediately realized that the systematic use of this powerful method, and of its extension to three dimensions, would entail considerable amounts of numerical computation which had to be organized efficiently. As no other branch of applied science had yet needed this type of computation, crystallographers had to invent their own techniques.
The first step was taken by Beevers & Lipson (1934) who pointed out that a 2D summation could be factored into successive 1D summations. This is essentially the tensor product property of the Fourier transform (Sections 1.3.2.4.2.4, 1.3.3.3.1), although its aspect is rendered somewhat complicated by the use of sines and cosines instead of complex exponentials. Computation is economized to the extent that the cost of an N × N transform grows with N as N³ rather than N⁴. Generalization to 3D is immediate, reducing computation size from N⁶ to 3N⁴ for an N × N × N transform. The complication introduced by using expressions in terms of sines and cosines is turned to advantage when symmetry is present, as certain families of terms are systematically absent or are simply related to each other; multiplicity corrections must, however, be introduced. The necessary information was tabulated for each space group by Lonsdale (1936), and was later incorporated into Volume I of International Tables.
The second step was taken by Beevers & Lipson (1936) and Lipson & Beevers (1936) in the form of the invention of the `Beevers–Lipson strips', a practical device which was to assist a whole generation of crystallographers in the numerical computation of crystallographic Fourier sums. The strips comprise a set of `cosine strips' tabulating the functions A cos(2πhm/60) and a set of `sine strips' tabulating the functions B sin(2πhm/60) for the 16 arguments m = 0, 1, …, 15. Function values are rounded to the nearest integer, and those for other arguments m may be obtained by using the symmetry properties of the sine and cosine functions. A Fourier summation is then performed by selecting the n cosine strips and the n sine strips labelled with the amplitudes and indices of the terms, placing them in register, and adding the tabulated values columnwise. The number 60 was chosen as the l.c.m. of 12 (itself the l.c.m. of the orders of all possible non-primitive translations) and of 10 (for decimal convenience). The limited accuracy imposed by the two-digit tabulation was later improved by Robertson's sorting board (Robertson, 1936a,b) or by the use of separate strips for each decimal digit of the amplitude (Booth, 1948b), which allowed three-digit tabulation while keeping the set of strips within manageable size. Cochran (1948a) found that, for most structures under study at the time, the numerical inaccuracies of the method were less than the level of error in the experimental data. The sampling rate was subsequently increased from 60 to 120 (Beevers, 1952) to cope with larger unit cells.
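The underlying factorization is easy to exhibit in the modern complex-exponential form (a hypothetical coefficient array, not the historical sine/cosine strips): a 2D summation is replaced by two successive 1D stages, reducing the operation count from order N⁴ to order N³.

```python
import numpy as np

N = 12
rng = np.random.default_rng(1)
F = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # coefficients F_hk

x = np.arange(N) / N
e = np.exp(2j * np.pi * np.outer(np.arange(N), x))  # e[h, m] = exp(2*pi*i*h*m/N)

# Naive double summation: N^2 grid points x N^2 terms, ~N^4 operations.
naive = np.einsum('hk,hm,kn->mn', F, e, e)

# Beevers-Lipson factorization: two successive 1D stages, ~2 N^3 operations.
T = np.einsum('hk,kn->hn', F, e)          # partial sums over k, for each (h, y)
factored = np.einsum('hn,hm->mn', T, e)   # then sums over h, for each (x, y)

assert np.allclose(naive, factored)
```

The historical method expressed exactly the same regrouping through trigonometric identities, with the strips supplying the tabulated 1D sums.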
Further gains in speed and accuracy were sought through the construction of specialpurpose mechanical, electromechanical, electronic or optical devices. Two striking examples are the mechanical computer RUFUS built by Robertson (1954, 1955, 1961) on the principle of previous strip methods (see also Robertson, 1932) and the electronic analogue computer XRAC built by Pepinsky, capable of realtime calculation and display of 2D and 3D Fourier syntheses (Pepinsky, 1947; Pepinsky & Sayre, 1948; Pepinsky et al., 1961; see also Suryan, 1957). The optical methods of Lipson & Taylor (1951, 1958) also deserve mention. Many other ingenious devices were invented, whose descriptions may be found in Booth (1948b), Niggli (1961), and Lipson & Cochran (1968).
Later, commercial punchedcard machines were programmed to carry out Fourier summations or structurefactor calculations (Shaffer et al., 1946a,b; Cox et al., 1947, 1949; Cox & Jeffrey, 1949; Donohue & Schomaker, 1949; Grems & Kasper, 1949; Hodgson et al., 1949; Greenhalgh & Jeffrey, 1950; Kitz & Marchington, 1953).
The modern era of digital electronic computation of Fourier series was initiated by the work of Bennett & Kendrew (1952), Mayer & Trueblood (1953), Ahmed & Cruickshank (1953b), Sparks et al. (1956) and Fowweather (1955). Their Fourier-synthesis programs used Beevers–Lipson factorization, the program by Sparks et al. being the first 3D Fourier program usable for all space groups (although these were treated as P1 or P1̄ by data expansion). Ahmed & Barnes (1958) then proposed a general programming technique to allow full use of symmetry elements (orthorhombic or lower) in the 3D Beevers–Lipson factorization process, including multiplicity corrections. Their method was later adopted by Shoemaker & Sly (1961), and by crystallographic program writers at large.
The discovery of the FFT algorithm by Cooley & Tukey in 1965, which instantly transformed electrical engineering and several other disciplines, paradoxically failed to have an immediate impact on crystallographic computing. A plausible explanation is that the calculation of large 3D Fourier maps was a relatively infrequent task which was not thought to constitute a bottleneck, as crystallographers had learned to settle most structural questions by means of cheaper 2D sections or projections. It is significant in this respect that the first use of the FFT in crystallography by Barrett & Zwick (1971) should have occurred as part of an iterative scheme for improving protein phases by density modification in real space, which required a much greater number of Fourier transformations than any previous method. Independently, Bondot (1971) had attracted attention to the merits of the FFT algorithm.
The FFT program used by Barrett & Zwick had been written for signalprocessing applications. It was restricted to sampling rates of the form , and was not designed to take advantage of crystallographic symmetry at any stage of the calculation; Bantz & Zwick (1974) later improved this situation somewhat.
It was the work of Ten Eyck (1973) and Immirzi (1973, 1976) which led to the general adoption of the FFT in crystallographic computing. Immirzi treated all space groups as P1 by data expansion. Ten Eyck based his program on a versatile multiradix FFT routine (Gentleman & Sande, 1966) coupled with a flexible indexing scheme for dealing efficiently with multidimensional transforms. He also addressed the problems of incorporating symmetry elements of order 2 into the factorization of 1D transforms, and of transposing intermediate results by other symmetry elements. He was thus able to show that in a large number of space groups (including the 74 space groups having orthorhombic or lower symmetry) it is possible to calculate only the unique results from the unique data within the logic of the FFT algorithm. Ten Eyck wrote and circulated a package of programs for computing Fourier maps and reanalysing them into structure factors in some simple space groups (P1, P1̄, P2, P2/m, P2_{1}, P222, P2_{1}2_{1}2_{1}, Pmmm). This package was later augmented by a handful of new spacegroupspecific programs contributed by other crystallographers (P2_{1}2_{1}2, I222, P3_{1}21, P4_{1}2_{1}2). The writing of such programs is an undertaking of substantial complexity, which has deterred all but the bravest: the usual practice is now to expand data for a highsymmetry space group to the largest subgroup for which a specific FFT program exists in the package, rather than attempt to write a new program. Attempts have been made to introduce more modern approaches to the calculation of crystallographic Fourier transforms (Auslander, Feig & Winograd, 1982; Auslander & Shenefelt, 1987; Auslander et al., 1988) but have not gone beyond the stage of preliminary studies.
The task of fully exploiting the FFT algorithm in crystallographic computations is therefore still unfinished, and it is the purpose of this section to provide a systematic treatment such as that (say) of Ahmed & Barnes (1958) for the Beevers–Lipson algorithm.
Ten Eyck's approach, based on the reducibility of certain space groups, is extended by the derivation of a universal transposition formula for intermediate results. It is then shown that space groups which are not completely reducible may nevertheless be treated by threedimensional Cooley–Tukey factorization in such a way that their symmetry may be fully exploited, whatever the shape of their asymmetric unit. Finally, new factorization methods with builtin symmetries are presented. The unifying concept throughout this presentation is that of `group action' on indexing sets, and of `orbit exchange' when this action has a composite structure; it affords new ways of rationalizing the use of symmetry, or of improving computational speed, or both.
A finite set of reflections can be periodized without aliasing by the translations of a suitable sublattice of the reciprocal lattice ; the converse operation in real space is the sampling of ρ at points X of a grid of the form (Section 1.3.2.7.3). In standard coordinates, is periodized by , and is sampled at points .
In the absence of symmetry, the unique data are
They are connected by the ordinary DFT relations: or and or
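As an illustrative sketch of these symmetry-free DFT relations (grid size and density hypothetical; numpy's sign and normalization conventions need not match those of the text):

```python
import numpy as np

# Hypothetical 3D density sampled on an 8 x 8 x 8 grid (one unit cell).
rng = np.random.default_rng(2)
rho = rng.random((8, 8, 8))

# Forward DFT: structure factors on the corresponding finite reflection set.
F = np.fft.fftn(rho)

# Backward DFT recovers the samples exactly (no aliasing, by construction).
assert np.allclose(np.fft.ifftn(F), rho)

# Friedel-type symmetry for a real density: F(-h) = conj(F(h)).
h = (1, 2, 3)
minus_h = tuple(-i % 8 for i in h)
assert np.isclose(F[minus_h], np.conj(F[h]))
```

The crystallographic problem treated below is precisely to carry out this pair of transforms using only the symmetry-unique subsets of rho and F.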
In the presence of symmetry, the unique data are
– or in real space (by abuse of notation, D will denote an asymmetric unit for x or for m indifferently);
– in reciprocal space.
The previous summations may then be subjected to orbital decomposition, to yield the following `crystallographic DFT' (CDFT) defining relations: with the obvious alternatives in terms of . Our problem is to evaluate the CDFT for a given space group as efficiently as possible, in spite of the fact that the group action has spoilt the simple tensorproduct structure of the ordinary threedimensional DFT (Section 1.3.3.3.1).
Two procedures are available to carry out the 3D summations involved as a succession of smaller summations:

Clearly, a symmetry expansion to the largest fully reducible subgroup of the space group will give maximal decomposability, but will require computing more than the unique results from more than the unique data. Economy will follow from factoring the transforms in the subspaces within which the space group acts irreducibly.
For irreducible subspaces of dimension 1, the group action is readily incorporated into the factorization of the transform, as first shown by Ten Eyck (1973).
For irreducible subspaces of dimension 2 or 3, the ease of incorporation of symmetry into the factorization depends on the type of factorization method used. The multidimensional Cooley–Tukey method (Section 1.3.3.3.1) is rather complicated; the multidimensional Good method (Section 1.3.3.3.2.2) is somewhat simpler; and the Rader/Winograd factorization admits a generalization, based on the arithmetic of certain rings of algebraic integers, which accommodates 2D crystallographic symmetries in a most powerful and pleasing fashion.
At each stage of the calculation, it is necessary to keep track of the definition of the asymmetric unit and of the symmetry properties of the numbers being manipulated. This requirement applies not only to the initial data and to the final results, where these are familiar, but also to all the intermediate quantities produced by partial transforms (on subsets of factors, or subsets of dimensions, or both), where they are less familiar. Here, the general formalism of transposition (or `orbit exchange') described in Section 1.3.4.2.2.2 plays a central role.
Suppose that the space-group action is reducible, i.e. that for each the rotation matrices are block-diagonal; by Schur's lemma, the decimation matrix must then be block-diagonal in the same way if it is to commute with all of them.
Putting and , we may define and write (direct sum) as a shorthand for
We may also define the representation operators and acting on functions of and , respectively (as in Section 1.3.4.2.2.4), and the operators and acting on functions of and , respectively (as in Section 1.3.4.2.2.5). Then we may write and in the sense that g acts on by and on by
Thus equipped we may now derive concisely a general identity describing the symmetry properties of intermediate quantities of the form which arise through partial transformation of F on or of on . The action of on these quantities will be
and hence the symmetry properties of T are expressed by the identity Applying this relation not to T but to gives i.e.
If the unique were initially indexed by (see Section 1.3.4.2.2.2), this formula allows the reindexing of the intermediate results from the initial form to the final form on which the second transform (on ) may now be performed, giving the final results indexed by which is an asymmetric unit. An analogous interpretation holds if one is going from to F.
The above formula solves the general problem of transposing from one invariant subspace to another, and is the main device for decomposing the CDFT. Particular instances of this formula were derived and used by Ten Eyck (1973); it is useful for orthorhombic groups, and for dihedral groups containing screw axes with g.c.d. . For comparison with later uses of orbit exchange, it should be noted that the type of intermediate results just dealt with is obtained after transforming on all factors in one summand.
A central piece of information for driving such a decomposition is the definition of the full asymmetric unit in terms of the asymmetric units in the invariant subspaces. As indicated at the end of Section 1.3.4.2.2.2, this is straightforward when G acts without fixed points, but becomes more involved if fixed points do exist. To this day, no systematic `calculus of asymmetric units' exists which can automatically generate a complete description of the asymmetric unit of an arbitrary space group in a form suitable for directing the orbit exchange process, although Shenefelt (1988) has outlined a procedure for dealing with space group P622 and its subgroups. The asymmetric unit definitions given in Volume A of International Tables are incomplete in this respect, in that they do not specify the possible residual symmetries which may exist on the boundaries of the domains.
Methods for factoring the DFT in the absence of symmetry were examined in Sections 1.3.3.2 and 1.3.3.3. They are based on the observation that the finite sets which index both data and results are endowed with certain algebraic structures (e.g. are Abelian groups, or rings), and that subsets of indices may be found which are not merely subsets but substructures (e.g. subgroups or subrings). Summation over these substructures leads to partial transforms, and the way in which substructures fit into the global structure indicates how to reassemble the partial results into the final results. As a rule, the richer the algebraic structure which is identified in the indexing set, the more powerful the factoring method.
The ability of a given factoring method to accommodate crystallographic symmetry will thus be determined by the extent to which the crystallographic group action respects (or fails to respect) the partitioning of the index set into the substructures pertaining to that method. This remark justifies trying to gain an overall view of the algebraic structures involved, and of the possibilities of a crystallographic group acting `naturally' on them.
The index sets and are finite Abelian groups under componentwise addition. If an iterated addition is viewed as an action of an integer scalar via then an Abelian group becomes a module over the ring (or, for short, a module), a module being analogous to a vector space but with scalars drawn from a ring rather than a field. The left actions of a crystallographic group G by can be combined with this action as follows: This provides a left action, on the indexing sets, of the set of symbolic linear combinations of elements of G with integral coefficients. If addition and multiplication are defined in by and with then is a ring, and the action defined above makes the indexing sets into modules. The ring is called the integral group ring of G (Curtis & Reiner, 1962, p. 44).
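A small computational sketch may help make the module structure concrete. The example below (hypothetical, not from the text) takes G to be the fourfold rotation group generated by R = [[0, −1], [1, 0]] acting on the index set (Z/NZ)²; an element Σ a_g g of the group ring is stored as a dictionary mapping powers of R to integer coefficients, and acts on an index m by Σ a_g (R^g m) mod N.

```python
import numpy as np

N = 8
R = np.array([[0, -1], [1, 0]])   # fourfold rotation, R^4 = I

def ring_mul(a, b):
    # multiplication in Z[G] for cyclic G of order 4:
    # (ab)_g = sum over g1*g2 = g of a_g1 * b_g2
    c = {}
    for g1, x in a.items():
        for g2, y in b.items():
            g = (g1 + g2) % 4
            c[g] = c.get(g, 0) + x * y
    return c

def act(a, m):
    # module action of a in Z[G] on an index vector m in (Z/NZ)^2
    total = np.zeros(2, dtype=int)
    for g, x in a.items():
        total += x * (np.linalg.matrix_power(R, g) @ m)
    return total % N

a = {0: 2, 1: -1}          # 2*e - R
b = {2: 3, 3: 1}           # 3*R^2 + R^3
m = np.array([3, 5])

# the module axiom (ab).m == a.(b.m) holds because reduction mod N
# commutes with the integer-matrix action
assert np.array_equal(act(ring_mul(a, b), m), act(a, act(b, m)))
```

The assertion verifies precisely the compatibility of the two actions that makes the indexing sets into Z[G]-modules.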
From the algebraic standpoint, therefore, the interaction between symmetry and factorization can be expected to be favourable whenever the indexing sets of partial transforms are submodules of the main modules.
Suppose, as in Section 1.3.3.3.2.1, that the decimation matrix N may be factored as . Then any grid point index in real space may be written with and determined by These relations establish a one-to-one correspondence between and the Cartesian product of and , and hence as a set. However, as an Abelian group, since in general because there can be a `carry' from the addition of the first components into the second components; therefore, as a module, which shows that the incorporation of symmetry into the Cooley–Tukey algorithm is not a trivial matter.
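The carry obstruction is easy to exhibit numerically. In this one-dimensional sketch (scalar decimation, N = N1·N2, names hypothetical), writing m = m1 + N1·m2 identifies Z/NZ with the *set* Z/N1Z × Z/N2Z, but addition of two indices is not componentwise:

```python
# Cooley-Tukey index splitting: m = m1 + N1*m2, 0 <= m1 < N1, 0 <= m2 < N2
N1, N2 = 4, 4
N = N1 * N2

def split(m):
    return m % N1, (m // N1) % N2

m, mp = 3, 1
s = split((m + mp) % N)                  # split of the true sum: (0, 1)
t = ((split(m)[0] + split(mp)[0]) % N1,  # componentwise sum: (0, 0)
     (split(m)[1] + split(mp)[1]) % N2)
assert s == (0, 1) and t == (0, 0)       # they differ: a carry occurred
```

The addition 3 + 1 overflows the first component and carries into the second, which is exactly why the bijection of sets fails to be an isomorphism of groups (and hence of modules).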
Let act on I through and suppose that N `integerizes' all the non-primitive translations so that we may write with and determined as above. Suppose further that N, and commute with for all , i.e. (by Schur's lemma, Section 1.3.4.2.2.4) that these matrices are integer multiples of the identity in each G-invariant subspace. The action of g on leads to which we may decompose as with and
Introducing the notation the two components of may be written with
The term is the geometric equivalent of a carry or borrow: it arises because , calculated as a vector in , may be outside the unit cell , and may need to be brought back into it by a `large' translation with a non-zero component in the space; equivalently, the action of g may need to be applied around different permissible origins for different values of , so as to map the unit cell into itself without any recourse to lattice translations. [Readers familiar with the cohomology of groups (see e.g. Hall, 1959; MacLane, 1963) will recognize as the cocycle of the extension of G-modules described by the exact sequence .]
Thus G acts on I in a rather complicated fashion: although does define a left action in alone, no action can be defined in alone because depends on . However, because , and are left actions, it follows that satisfies the identity for all g, in G and all in . In particular, for all , and
This action will now be used to achieve optimal use of symmetry in the multidimensional Cooley–Tukey algorithm of Section 1.3.3.3.2.1. Let us form an array Y according to for all but only for the unique under the action of G in . Except in special cases which will be examined later, these vectors contain essentially an asymmetric unit of electron-density data, up to some redundancies on boundaries. We may then compute the partial transform on : Using the symmetry of in the form yields by the procedure of Section 1.3.3.3.2 the transposition formula
By means of this identity we can transpose intermediate results initially indexed by so as to have them indexed by We may then apply twiddle factors to get and carry out the second transform The final results are indexed by which yield essentially an asymmetric unit of structure factors after unscrambling by:
The transposition formula above applies to intermediate results when going backwards from F to , provided these results are considered after the twiddle-factor stage. A transposition formula applicable before that stage can be obtained by characterizing the action of G on h (including the effects of periodization by ) in a manner similar to that used for m.
Let with We may then write with Here and are defined by and
Let us then form an array according to for all but only for the unique under the action of G in , and transform on to obtain Putting and using the symmetry of F in the form where yields by a straightforward rearrangement
This formula allows the transposition of intermediate results Z from an indexing by to an indexing by We may then apply the twiddle factors to obtain and carry out the second transform on The results, indexed by yield essentially an asymmetric unit of electron densities by the rearrangement
The equivalence of the two transposition formulae up to the intervening twiddle factors is readily established, using the relation which is itself a straightforward consequence of the identity
To complete the characterization of the effect of symmetry on the Cooley–Tukey factorization, and of the economy of computation it allows, it remains to consider the possibility that some values of may be invariant under some transformations under the action .
Suppose that has a nontrivial isotropy subgroup , and let . Then each subarray defined by satisfies the identity so that the data for the transform on have residual symmetry properties. In this case the identity satisfied by simplifies to which shows that the mapping satisfies the Frobenius congruences (Section 1.3.4.2.2.3). Thus the internal symmetry of subarray with respect to the action of G on is given by acting on via
The transform on need only be performed for one out of distinct arrays (results for the others being obtainable by the transposition formula), and this transform is symmetric. In other words, the following cases occur:
The symmetry properties of the transform may themselves be exploited in a similar way if can be factored as a product of smaller decimation matrices; otherwise, an appropriate symmetrized DFT routine may be provided, using for instance the idea of `multiplexing/demultiplexing' (Section 1.3.4.3.5). We thus have a recursive descent procedure, in which the deeper stages of the recursion deal with transforms on fewer points, or of lower symmetry (usually both).
The same analysis applies to the transforms on the subarrays , and leads to a similar descent procedure.
In conclusion, crystallographic symmetry can be fully exploited to reduce the amount of computation to the minimum required to obtain the unique results from the unique data. No such analysis was previously available in cases where the asymmetric units in real and reciprocal space are not parallelepipeds. An example of this procedure will be given in Section 1.3.4.3.6.5.
This procedure was described in Section 1.3.3.3.2.2. The main difference from the Cooley–Tukey factorization is that if , where the different factors are pairwise coprime, then the Chinese remainder theorem reindexing makes isomorphic to a direct sum, where each p-primary piece is endowed with an induced module structure by letting G operate in the usual way but with the corresponding modular arithmetic. The situation is thus more favourable than with the Cooley–Tukey method, since there is no interference between the factors (no `carry'). In the terminology of Section 1.3.4.2.2.2, G acts diagonally on this direct sum, and results of a partial transform may be transposed by orbit exchange as in Section 1.3.4.3.4.1 but without the extra terms μ or η. The analysis of the symmetry properties of partial transforms also carries over, again without the extra terms. Further simplification occurs for all p-primary pieces with p other than 2 or 3, since all non-primitive translations (including those associated with lattice centring) disappear modulo p.
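The absence of a carry can be checked directly. In this sketch (with hypothetical factors 3 and 5), the Chinese remainder reindexing m → (m mod N1, m mod N2) is a ring isomorphism, so both addition and multiplication — and hence any symmetry action by integer matrices — go componentwise:

```python
# CRT reindexing for coprime factors: a ring isomorphism, no carry
N1, N2 = 3, 5
N = N1 * N2

def crt(m):
    return m % N1, m % N2

for m in range(N):
    for mp in range(N):
        # addition is componentwise ...
        assert crt((m + mp) % N) == ((m + mp) % N1, (m + mp) % N2)
        # ... and so is multiplication (e.g. action of a unit of Z/NZ)
        assert crt((m * mp) % N) == ((m * mp) % N1, (m * mp) % N2)
```

Contrast this with the Cooley–Tukey splitting, where the corresponding map is a bijection of sets but not a homomorphism of groups, so that carries (and the associated twiddle factors) appear.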
Thus the cost of the CRT reindexing is compensated by the computational savings due to the absence of twiddle factors and of other phase shifts associated with nonprimitive translations and with geometric `carries'.
Within each p-primary piece, however, higher powers of p may need to be split up by a Cooley–Tukey factorization, or carried out directly by a suitably adapted Winograd algorithm.
As was the case in the absence of symmetry, the two previous classes of algorithms can only factor the global transform into partial transforms on prime numbers of points, but cannot break the latter down any further. Rader's idea of using the action of the group of units to obtain further factorization of a p-primary transform has been used in `scalar' form by Auslander & Shenefelt (1987), Shenefelt (1988), and Auslander et al. (1988). It will be shown here that it can be adapted to the crystallographic case so as to take advantage also of the possible existence of n-fold cyclic symmetry elements in a two-dimensional transform (Bricogne & Tolimieri, 1990). This adaptation entails the use of certain rings of algebraic integers rather than ordinary integers, whose connection with the handling of cyclic symmetry will now be examined.
Let G be the group associated with a threefold axis of symmetry: with . In a standard trigonal basis, G has matrix representation in real space, in reciprocal space. Note that and that so that and are conjugate in the group of unimodular integer matrices. The group ring is commutative, and has the structure of the polynomial ring with the single relation corresponding to the minimal polynomial of . In the terminology of Section 1.3.3.2.4, the ring structure of is obtained from that of by carrying out polynomial addition and multiplication modulo , then replacing X by any generator of G. This type of construction forms the very basis of algebraic number theory [see Artin (1944, Section IIc) for an illustration of this viewpoint], and as just defined is isomorphic to the ring of algebraic integers of the form under the identification . Addition in this ring is defined componentwise, while multiplication is defined by
In the case of a fourfold axis, with , and is obtained from by carrying out polynomial arithmetic modulo . This identifies with the ring of Gaussian integers of the form , in which addition takes place componentwise while multiplication is defined by
In the case of a sixfold axis, with , and is isomorphic to under the mapping since .
Thus in all cases where is an irreducible quadratic polynomial with integer coefficients.
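Arithmetic in these quadratic rings can be sketched uniformly as polynomial arithmetic modulo P(X). The example below (hypothetical helper names) multiplies elements a + bX in Z[X]/(P(X)) for the three minimal polynomials arising from 3-, 4- and 6-fold axes, and checks the result against evaluation at a complex root of P:

```python
import cmath

def mul(u, v, c1, c0):
    # multiply (a + bX)(c + dX) in Z[X]/(X^2 + c1*X + c0),
    # reducing X^2 = -c1*X - c0
    a, b = u
    c, d = v
    return a * c - c0 * b * d, a * d + b * c - c1 * b * d

cases = {
    'threefold': (1, 1),    # P(X) = X^2 + X + 1, root exp(2*pi*i/3)
    'fourfold':  (0, 1),    # P(X) = X^2 + 1,     root i (Gaussian integers)
    'sixfold':   (-1, 1),   # P(X) = X^2 - X + 1, root exp(pi*i/3)
}
for name, (c1, c0) in cases.items():
    zeta = (-c1 + cmath.sqrt(c1 * c1 - 4 * c0)) / 2   # complex root of P
    u, v = (2, -3), (1, 5)
    a, b = mul(u, v, c1, c0)
    # the abstract product agrees with complex multiplication at the root
    assert abs((a + b * zeta) - (u[0] + u[1] * zeta) * (v[0] + v[1] * zeta)) < 1e-9
```

Specializing (c1, c0) = (0, 1) recovers the Gaussian-integer product (ac − bd, ad + bc), and (1, 1) the componentwise rule quoted for the threefold case.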
The actions of G on lattices in real and reciprocal space (Sections 1.3.4.2.2.4, 1.3.4.2.2.5) extend naturally to actions of on in which an element of acts via in real space, and via in reciprocal space. These two actions are related by conjugation, since and the following identity (which is fundamental in the sequel) holds:
Let us now consider the calculation of a two-dimensional DFT with n-fold cyclic symmetry for an odd prime . Denote by . Both the data and the results of the DFT are indexed by : hence the action of on these indices is in fact an action of , the latter being obtained from by carrying out all integer arithmetic in modulo p. The algebraic structure of combines the symmetry-carrying ring structure of with the finite field structure of used in Section 1.3.3.2.3.1, and holds the key to a symmetry-adapted factorization of the DFT at hand.
The structure of depends on whether remains irreducible when considered as a polynomial over . Thus two cases arise:
These two cases require different developments.
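The dichotomy is decided by looking for roots of the quadratic modulo p. For the threefold case P(X) = X² + X + 1 splits exactly when GF(p) contains primitive cube roots of unity, i.e. when p ≡ 1 (mod 3); the brute-force sketch below (hypothetical helper name) checks this for a few small primes:

```python
def roots_mod_p(p):
    # roots of X^2 + X + 1 over GF(p), by exhaustive search
    return [x for x in range(p) if (x * x + x + 1) % p == 0]

assert roots_mod_p(7) == [2, 4]   # 7 = 1 mod 3: P splits into linear factors
assert roots_mod_p(5) == []       # 5 = 2 mod 3: P irreducible, giving GF(25)

# the splitting criterion p = 1 (mod 3) for a few odd primes
for p in (5, 7, 11, 13, 17, 19):
    assert bool(roots_mod_p(p)) == (p % 3 == 1)
```

In the split case the quotient ring is a direct sum of two copies of GF(p); in the inert case it is the field GF(p²), and the two situations lead to the different developments described next.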

Most crystallographic Fourier syntheses are real-valued and originate from Hermitian-symmetric collections of Fourier coefficients. Hermitian symmetry is closely related to the action of a centre of inversion in reciprocal space, and thus interacts strongly with all other genuinely crystallographic symmetry elements of order 2. All these symmetry properties are best treated by factoring by 2 and reducing the computation of the initial transform to that of a collection of smaller transforms with less symmetry or none at all.
The computation of a DFT with Hermitian-symmetric or real-valued data can be carried out at a cost of half that of an ordinary transform, essentially by `multiplexing' pairs of special partial transforms into general complex transforms, and then `demultiplexing' the results on the basis of their symmetry properties. The treatment given below is for general dimension n; a subset of cases for was treated by Ten Eyck (1973).
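The multiplexing/demultiplexing idea can be sketched in one dimension with numpy (a minimal illustration, not the text's general n-dimensional treatment): two real sequences are packed into one complex sequence, a single complex FFT is performed, and the two Hermitian-symmetric transforms are separated using their symmetry properties.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x, y = rng.standard_normal(N), rng.standard_normal(N)

# multiplex: one complex FFT instead of two real ones
Z = np.fft.fft(x + 1j * y)

# demultiplex using Hermitian symmetry: conj(Z(-h)) = FFT(x) - i*FFT(y)
Zrev = np.conj(Z[(-np.arange(N)) % N])
X = (Z + Zrev) / 2        # Hermitian-symmetric part  -> transform of x
Y = (Z - Zrev) / (2j)     # the complementary part    -> transform of y

assert np.allclose(X, np.fft.fft(x))
assert np.allclose(Y, np.fft.fft(y))
```

The two real transforms are thus obtained for essentially the price of one complex transform plus O(N) separation work, which is the factor-of-two saving claimed above.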

A vector is said to be Hermitian-antisymmetric if Its transform then satisfies i.e. is purely imaginary.
If X is Hermitian-antisymmetric, then is Hermitian-symmetric, with real-valued. The treatment of Section 1.3.4.3.5.1 may therefore be adapted, with trivial factors of i or , or used as such in conjunction with changes of variable by multiplication by .
The matrix is its own contragredient, and hence (Section 1.3.2.4.2.2) the transform of a symmetric (respectively antisymmetric) function is symmetric (respectively antisymmetric). In this case the group acts in both real and reciprocal space as . If with both factors diagonal, then acts by i.e.
The symmetry or antisymmetry properties of X may be written with for symmetry and for antisymmetry.
The computation will be summarized as with the same indexing as that used for structure-factor calculation. In both cases it will be shown that a transform with and M diagonal can be computed using only partial transforms instead of .

Conjugate symmetry (Section 1.3.2.4.2.3) implies that if the data X are real and symmetric [i.e. and ], then so are the results . Thus if contains a centre of symmetry, F is real symmetric. There is no distinction (other than notation) between structure-factor and electron-density calculation; the algorithms will be described in terms of the former. It will be shown that if , a real symmetric transform can be computed with only partial transforms instead of .
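A one-dimensional numerical check of the stated property (a sketch with hypothetical random data): real symmetric input yields a real symmetric transform, so the DFT reduces to a cosine sum and the same routine serves both directions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
x = rng.standard_normal(N)
x = (x + x[(-np.arange(N)) % N]) / 2     # symmetrize: x(-m) = x(m), x real

F = np.fft.fft(x)
assert np.allclose(F.imag, 0)                     # transform is real ...
assert np.allclose(F, F[(-np.arange(N)) % N])     # ... and symmetric
```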
