International
Tables for
Crystallography
Volume B
Reciprocal space
Edited by U. Shmueli

International Tables for Crystallography (2010). Vol. B, ch. 1.3, pp. 62-106

## Section 1.3.4. Crystallographic applications of Fourier transforms

G. Bricognea

aGlobal Phasing Ltd, Sheraton House, Suites 14–16, Castle Park, Cambridge CB3 0AX, England, and LURE, Bâtiment 209D, Université Paris-Sud, 91405 Orsay, France

### 1.3.4. Crystallographic applications of Fourier transforms

#### 1.3.4.1. Introduction

The central role of the Fourier transformation in X-ray crystallography is a consequence of the kinematic approximation used in the description of the scattering of X-rays by a distribution of electrons (Bragg, 1915; Duane, 1925; Havighurst, 1925a,b; Zachariasen, 1945; James, 1948a, Chapters 1 and 2; Lipson & Cochran, 1953, Chapter 1; Bragg, 1975).

Let ρ(X) be the density of electrons in a sample of matter contained in a finite region V which is being illuminated by a parallel monochromatic X-ray beam with wavevector K₀. Then the far-field amplitude scattered in a direction corresponding to wavevector K = K₀ + H is proportional to

F(H) = ∫_V ρ(X) exp(2πiH·X) d³X.
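As a purely numerical illustration (not part of the text), the scattering integral can be approximated by a sum over point scatterers; the function name, coordinates and weights below are invented for the sketch.

```python
import numpy as np

def scattered_amplitude(h, positions, weights):
    """Discrete stand-in for F(H) = integral of rho(X) exp(2*pi*i H.X) d3X:
    kinematic amplitude of a set of point scatterers with given weights."""
    phases = 2j * np.pi * (np.asarray(positions, dtype=float) @ np.asarray(h, dtype=float))
    return np.sum(np.asarray(weights) * np.exp(phases))

# Two equal point charges related by inversion through the origin
# scatter with a purely real amplitude.
pos = [[0.1, 0.2, 0.3], [-0.1, -0.2, -0.3]]
F = scattered_amplitude([1, 0, 0], pos, [1.0, 1.0])
```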

In certain model calculations, the 'sample' may contain not only volume charges, but also point, line and surface charges. These singularities may be accommodated by letting ρ be a distribution, and writing F(H) = ⟨ρ_X, exp(2πiH·X)⟩. F is still a well-behaved function (analytic, by Section 1.3.2.4.2.10) because ρ has been assumed to have compact support.

If the sample is assumed to be an infinite crystal, so that ρ is now a periodic distribution, the customary limiting process by which it is shown that F becomes a discrete series of peaks at reciprocal-lattice points (see e.g. von Laue, 1936; Ewald, 1940; James, 1948a, p. 9; Lipson & Taylor, 1958, pp. 14–27; Ewald, 1962, pp. 82–101; Warren, 1969, pp. 27–30) is already subsumed under the treatment of Section 1.3.2.6.

#### 1.3.4.2.1.1. Period lattice, reciprocal lattice and structure factors

Let ρ be the distribution of electrons in a crystal. Then, by definition of a crystal, ρ is Λ-periodic for some period lattice Λ (Section 1.3.2.6.5) so that there exists a motif distribution ρ⁰ with compact support such that ρ = r ∗ ρ⁰, where r = Σ_{λ∈Λ} δ_(λ) is the lattice distribution associated to Λ. The lattice Λ is usually taken to be the finest for which the above representation holds.

Let Λ have a basis (a₁, a₂, a₃) over the integers, these basis vectors being expressed in terms of a standard orthonormal basis (e₁, e₂, e₃) as a_k = Σ_j a_{jk} e_j. Then the matrix A = (a_{jk}) is the period matrix of Λ (Section 1.3.2.6.5) with respect to the unit lattice with basis (e₁, e₂, e₃), and the volume V of the unit cell is given by V = |det A|.

By Fourier transformation

𝔉[ρ] = (1/V) Σ_{H∈Λ*} 𝔉[ρ⁰](H) δ_(H),

a relation in which r* = Σ_{H∈Λ*} δ_(H) is the lattice distribution associated to the reciprocal lattice Λ*. The basis vectors (a₁*, a₂*, a₃*) have coordinates in (e₁, e₂, e₃) given by the columns of (A⁻¹)ᵀ, whose expression in terms of the cofactors of A (see Section 1.3.2.6.5) gives the familiar formulae involving the cross product of vectors for a₁*, a₂*, a₃*. The H-distribution F of scattered amplitudes may be written

F = Σ_{H∈Λ*} F(H) δ_(H)

and is thus a weighted reciprocal-lattice distribution, the weight F(H) attached to each node being the value at H of the transform 𝔉[ρ⁰] of the motif ρ⁰. Taken in conjunction with the assumption that the scattering is elastic, i.e. that H only changes the direction but not the magnitude of the incident wavevector K₀, this result yields the usual forms (Laue or Bragg) of the diffraction conditions: H ∈ Λ*, and simultaneously H lies on the Ewald sphere.

By the reciprocity theorem, ρ can be recovered if F is known for all H ∈ Λ* as follows [Section 1.3.2.6.5, e.g. (iv)]:

ρ(X) = (1/V) Σ_{H∈Λ*} F(H) exp(−2πiH·X).

These relations may be rewritten in terms of standard, or 'fractional crystallographic', coordinates by putting X = Ax and H = (A⁻¹)ᵀh, so that a unit cell of the crystal corresponds to x ∈ [0, 1]³, and that h ∈ ℤ³. Defining ρ̄ and F̄ by ρ̄(x) = Vρ(Ax) and F̄(h) = F[(A⁻¹)ᵀh], so that H·X = h·x, we have

F̄(h) = ∫_{[0,1]³} ρ̄(x) exp(2πih·x) d³x,

ρ̄(x) = Σ_{h∈ℤ³} F̄(h) exp(−2πih·x).

These formulae are valid for an arbitrary motif distribution ρ⁰, provided the convergence of the Fourier series for ρ̄ is considered from the viewpoint of distribution theory (Section 1.3.2.6.10.3).
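The Fourier-series pair above can be exercised numerically in one dimension; the atom position and truncation limit below are invented for the sketch.

```python
import numpy as np

def structure_factors(xfrac, hmax):
    """F(h) = sum_j exp(2*pi*i*h*x_j): structure factors of unit point
    atoms at fractional coordinates xfrac in a 1-D cell."""
    h = np.arange(-hmax, hmax + 1)
    return h, np.exp(2j * np.pi * np.outer(h, xfrac)).sum(axis=1)

def fourier_synthesis(h, F, x):
    """rho(x) = sum_h F(h) exp(-2*pi*i*h*x): density in electrons per cell."""
    return np.real(np.exp(-2j * np.pi * np.outer(x, h)) @ F)

h, F = structure_factors([0.25], hmax=30)
x = np.linspace(0.0, 1.0, 200, endpoint=False)
rho = fourier_synthesis(h, F, x)
# The synthesized density peaks at the atom position x = 0.25,
# and its mean over the cell equals F(0) = 1 electron per cell.
```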

The experienced crystallographer may notice the absence of the familiar factor 1/V from the expression for ρ just given. This is because we use the (mathematically) natural unit for the density, the electron per unit cell, which matches the dimensionless nature of the crystallographic coordinates x and of the associated volume element d³x. The traditional factor 1/V was the result of the somewhat inconsistent use of x as an argument but of d³X as a volume element to obtain ρ in electrons per unit volume (e.g. Å³). A fortunate consequence of the present convention is that nuisance factors of V or 1/V, which used to abound in convolution or scalar product formulae, are now absent.

It should be noted at this point that the crystallographic terminology regarding ρ and F differs from the standard mathematical terminology introduced in Section 1.3.2.4.1 and applied to periodic distributions in Section 1.3.2.6.4: F is the inverse Fourier transform of ρ rather than its Fourier transform, and the calculation of ρ is called a Fourier synthesis in crystallography even though it is mathematically a Fourier analysis. The origin of this discrepancy may be traced to the fact that the mathematical theory of the Fourier transformation originated with the study of temporal periodicity, while crystallography deals with spatial periodicity; since the expression for the phase factor of a plane wave is exp[2πi(νt − K·X)], the difference in sign between the contributions from time versus spatial displacements makes this conflict unavoidable.

#### 1.3.4.2.1.2. Structure factors in terms of form factors

In many cases, the motif ρ⁰ is a sum of translates of atomic electron-density distributions. Assume there are n distinct chemical types of atoms, with N_j identical isotropic atoms of type j described by an electron distribution ρ_j(X) about their centre of mass. According to quantum mechanics each ρ_j is a smooth rapidly decreasing function of X, i.e. ρ_j ∈ 𝒮, hence ρ⁰ ∈ 𝒮, and (ignoring the effect of thermal agitation)

ρ⁰(X) = Σ_{j=1}^{n} Σ_{k=1}^{N_j} ρ_j(X − X_{jk}),

which may be written (Section 1.3.2.5.8)

ρ⁰ = Σ_{j=1}^{n} ρ_j ∗ [Σ_{k=1}^{N_j} δ_(X_{jk})].

By Fourier transformation:

F(H) = Σ_{j=1}^{n} 𝔉[ρ_j](H) × [Σ_{k=1}^{N_j} exp(2πiH·X_{jk})].

Defining the form factor f_j of atom j as a function of H to be f_j(H) = 𝔉[ρ_j](H), we have

F(H) = Σ_{j=1}^{n} f_j(H) Σ_{k=1}^{N_j} exp(2πiH·X_{jk}).

If X = |X| and H = |H| are the real- and reciprocal-space radial coordinates in Å and Å⁻¹, and if ρ_j(r) is the spherically symmetric electron-density function for atom type j, then

f_j(H) = 4π ∫₀^∞ ρ_j(r) [sin(2πHr)/(2πHr)] r² dr.
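The radial form-factor integral can be checked against a case with a closed-form answer; the Gaussian test density below (sharpness p, charge Z) is an invented toy atom, not a tabulated scattering factor.

```python
import numpy as np

def form_factor(rho_r, s, rmax=10.0, n=4000):
    """f(s) = 4*pi * integral_0^inf r^2 rho(r) sin(2*pi*s*r)/(2*pi*s*r) dr,
    evaluated by trapezoidal quadrature on [0, rmax]."""
    r = np.linspace(1e-9, rmax, n)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(2 s r) = sin(2 pi s r)/(2 pi s r)
    integrand = 4.0 * np.pi * r**2 * rho_r(r) * np.sinc(2.0 * s * r)
    dr = r[1] - r[0]
    return float(((integrand[0] + integrand[-1]) / 2 + integrand[1:-1].sum()) * dr)

# Gaussian test atom: rho(r) = Z (p/pi)^(3/2) exp(-p r^2) carries Z electrons
# and has the analytic form factor f(s) = Z exp(-pi^2 s^2 / p).
Z, p = 6.0, 1.0
rho = lambda r: Z * (p / np.pi) ** 1.5 * np.exp(-p * r**2)
f_num = form_factor(rho, 0.5)
f_exact = Z * np.exp(-np.pi**2 * 0.25 / p)
```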

More complex expansions are used for electron-density studies (see Chapter 1.2 in this volume). Anisotropic Gaussian atoms may be dealt with through the formulae given in Section 1.3.2.4.4.2.

#### 1.3.4.2.1.3. Fourier series for the electron density and its summation

The convergence of the Fourier series for ρ is usually examined from the classical point of view (Section 1.3.2.6.10). The summation of multiple Fourier series meets with considerable difficulties, because there is no natural order in ℤ³ to play the role of the natural order in ℤ (Ash, 1976). In crystallography, however, the structure factors F(h) are often obtained within spheres ‖H‖ ≤ Δ⁻¹ for increasing resolution (decreasing Δ). Therefore, successive estimates of ρ are most naturally calculated as the corresponding partial sums (Section 1.3.2.6.10.1):

S_Δ(ρ)(x) = Σ_{‖(A⁻¹)ᵀh‖ ≤ Δ⁻¹} F(h) exp(−2πih·x).

This may be written S_Δ(ρ) = D_Δ ∗ ρ, where D_Δ is the 'spherical Dirichlet kernel'. D_Δ exhibits numerous negative ripples around its central peak. Thus the 'series-termination errors' incurred by using S_Δ(ρ) instead of ρ consist of negative ripples around each atom, and may lead to a Gibbs-like phenomenon (Section 1.3.2.6.10.1) near a molecular boundary.

As in one dimension, Cesàro sums (arithmetic means of partial sums) have better convergence properties, as they lead to a convolution by a 'spherical Fejér kernel' which is everywhere positive. Thus Cesàro summation will always produce positive approximations to a positive electron density. Other positive summation kernels were investigated by Pepinsky (1952) and by Waser & Schomaker (1953).
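The contrast between the two kernels is easy to see in one dimension, where the partial sums for a point atom at the origin are exactly the Dirichlet kernel and their Cesàro means the Fejér kernel (a toy sketch, not the spherical case):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000, endpoint=False)
N = 10

def partial_sum(n):
    """S_n(x) = sum_{|h|<=n} exp(-2*pi*i*h*x): all coefficients equal 1
    for a unit point atom at x = 0."""
    h = np.arange(-n, n + 1)
    return np.real(np.exp(-2j * np.pi * np.outer(x, h)).sum(axis=1))

dirichlet = partial_sum(N)                                        # has negative ripples
fejer = np.mean([partial_sum(n) for n in range(N + 1)], axis=0)   # everywhere >= 0
```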

#### 1.3.4.2.1.4. Friedel's law, anomalous scatterers

If the wavelength λ of the incident X-rays is far from any absorption edge of the atoms in the crystal, there is a constant phase shift in the scattering, and the electron density may be considered to be real-valued. Then

F(−h) = conj[F(h)].

Thus if

F(h) = |F(h)| exp[iφ(h)],

then

|F(−h)| = |F(h)| and φ(−h) = −φ(h).

This is Friedel's law (Friedel, 1913). The set {F(h)} of Fourier coefficients is said to have Hermitian symmetry.

If λ is close to some absorption edge(s), the proximity to resonance induces an extra phase shift, whose effect may be represented by letting the electron density take on complex values. Let

ρ = ρ_R + iρ_I,

and correspondingly, by termwise Fourier transformation,

F(h) = F_R(h) + iF_I(h).

Since ρ_R and ρ_I are both real, F_R and F_I are both Hermitian symmetric, hence

F(−h) = conj[F_R(h)] + i conj[F_I(h)],

while

conj[F(h)] = conj[F_R(h)] − i conj[F_I(h)].

Thus F(−h) ≠ conj[F(h)], so that Friedel's law is violated. The components F_R and F_I, which do obey Friedel's law, may be expressed as:

F_R(h) = ½{F(h) + conj[F(−h)]},

F_I(h) = (1/2i){F(h) − conj[F(−h)]}.
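Friedel's law and its violation can be verified on a toy one-dimensional structure; the coordinates and the anomalous corrections f′, f″ below are invented numbers.

```python
import numpy as np

def F(h, xs, fs):
    """F(h) = sum_j f_j exp(2*pi*i*h*x_j): structure factor of point atoms
    with scattering factors fs in a 1-D cell."""
    return np.sum(np.asarray(fs, dtype=complex) * np.exp(2j * np.pi * h * np.asarray(xs)))

xs = [0.13, 0.47]
f_real = [6.0, 8.0]                # real factors: Friedel's law holds
f_anom = [6.0, 8.0 + 1.5 + 2.0j]   # f = f0 + f' + i f'': anomalous scatterer
# |F(h)| = |F(-h)| in the first case, but not in the second.
```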

#### 1.3.4.2.1.5. Parseval's identity and other theorems

By Section 1.3.2.4.3.3 and Section 1.3.2.6.10.2,

Σ_{h∈ℤ³} |F(h)|² = ∫_{[0,1]³} |ρ(x)|² d³x.

Usually ρ is real and positive, hence |ρ(x)| = ρ(x), but the identity remains valid even when ρ is made complex-valued by the presence of anomalous scatterers.

If {G(h)} is the collection of structure factors belonging to another electron density σ with the same period lattice as ρ, then

Σ_{h∈ℤ³} F(h) conj[G(h)] = ∫_{[0,1]³} ρ(x) conj[σ(x)] d³x.

Thus, norms and inner products may be evaluated either from structure factors or from 'maps'.
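With the sign convention of this chapter, the discrete analogue of the inner-product identity can be checked directly with NumPy's FFT (the random "maps" below are stand-ins for sampled densities):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
rho = rng.normal(size=N)     # samples of one density on a 1-D cell grid
sigma = rng.normal(size=N)   # a second density with the same lattice

# F(h) = (1/N) sum_k rho_k exp(+2*pi*i*h*k/N): with the crystallographic
# sign convention this is exactly numpy's inverse FFT.
F = np.fft.ifft(rho)
G = np.fft.ifft(sigma)

# Parseval: sum_h F(h) conj(G(h)) = (1/N) sum_k rho_k sigma_k, so the
# inner product can be evaluated from structure factors or from maps.
lhs = np.sum(F * np.conj(G))
rhs = np.mean(rho * sigma)
```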

#### 1.3.4.2.1.6. Convolution, correlation and Patterson function

Let ρ and σ be two electron densities referred to crystallographic coordinates, with structure factors {F(h)} and {G(h)}, so that

ρ(x) = Σ_h F(h) exp(−2πih·x),

σ(x) = Σ_h G(h) exp(−2πih·x).

The distribution ω = ρ ∗ σ⁰, where σ⁰ is the motif of σ, is well defined, since the generalized support condition (Section 1.3.2.3.9.7) is satisfied. The forward version of the convolution theorem implies that if

ω(x) = Σ_h W(h) exp(−2πih·x),

then

W(h) = F(h)G(h).

If either ρ or σ is infinitely differentiable, then the product distribution ρσ exists, and if we analyse it as

ρ(x)σ(x) = Σ_h Y(h) exp(−2πih·x),

then the backward version of the convolution theorem reads:

Y(h) = Σ_{k∈ℤ³} F(k)G(h − k) = (F ∗ G)(h).

The cross correlation κ between ρ and σ is the ℤ³-periodic distribution defined by κ = ρ̆ ∗ σ⁰, where ρ̆(x) = conj[ρ(−x)]. If ρ and σ are locally integrable,

κ(t) = ∫_{[0,1]³} conj[ρ(x)] σ(x + t) d³x.

The combined use of the shift property and of the forward convolution theorem then gives immediately the Fourier series representation of κ:

κ(t) = Σ_{h∈ℤ³} conj[F(h)] G(h) exp(−2πih·t).

Clearly, the correlation taken in the reverse order satisfies κ_{σ,ρ}(t) = conj[κ_{ρ,σ}(−t)], as shown by the fact that permuting F and G changes κ(t) into conj[κ(−t)].

The auto-correlation of ρ is defined as the cross correlation of ρ with itself, and is called the Patterson function P of ρ. If ρ consists of point atoms, i.e.

ρ(x) = Σ_{j=1}^{N} Z_j δ_(x_j),

then

P(u) = Σ_j Σ_k Z_j Z_k δ_(x_k − x_j)

contains information about interatomic vectors. It has the Fourier series representation

P(u) = Σ_{h∈ℤ³} |F(h)|² exp(−2πih·u),

and is therefore calculable from the diffraction intensities alone. It was first proposed by Patterson (1934, 1935a,b) as an extension to crystals of the radially averaged correlation function used by Warren & Gingrich (1934) in the study of powders.
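The phase-free character of the Patterson series is easy to demonstrate in one dimension; the two atom positions below are invented, and the peaks appear at the interatomic vector ±0.25 and at the origin.

```python
import numpy as np

xs = [0.10, 0.35]                 # two unit point atoms (toy coordinates)
h = np.arange(-40, 41)
F = np.exp(2j * np.pi * np.outer(h, xs)).sum(axis=1)

N = 240
u = np.arange(N) / N
# Patterson function P(u) = sum_h |F(h)|^2 exp(-2*pi*i*h*u): built from
# the intensities alone, peaks at u = 0 and at interatomic vectors.
P = np.real(np.exp(-2j * np.pi * np.outer(u, h)) @ (np.abs(F) ** 2))
```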

#### 1.3.4.2.1.7. Sampling theorems, continuous transforms, interpolation

Shannon's sampling and interpolation theorem (Section 1.3.2.7.1) takes two different forms, according to whether the property of finite bandwidth is assumed in real space or in reciprocal space.

 (1) The most usual setting is in reciprocal space (see Sayre, 1952c). Only a finite number of diffraction intensities can be recorded and phased, and for physical reasons the cutoff criterion is the resolution Δ. Electron-density maps are thus calculated as partial sums S_Δ(ρ) (Section 1.3.4.2.1.3), obtained by summing over the reflections with ‖H‖ ≤ Δ⁻¹ in Cartesian coordinates. S_Δ(ρ) is band-limited, the support of its spectrum being contained in the solid sphere B_Δ defined by ‖ξ‖ ≤ Δ⁻¹. Let χ_Δ be the indicator function of B_Δ. The transform of the normalized version of χ_Δ is a spherical analogue of the sinc function (see below, Section 1.3.4.4.3.5). By Shannon's theorem, it suffices to calculate S_Δ(ρ) on an integral subdivision Γ of the period lattice Λ such that the sampling criterion is satisfied (i.e. that the translates of B_Δ by vectors of the lattice reciprocal to Γ do not overlap). Values of S_Δ(ρ) may then be calculated at an arbitrary point X by the interpolation formula, as a sum of its sample values on Γ weighted by translates of the transform of the normalized χ_Δ.

 (2) The reverse situation occurs whenever the support of the motif ρ⁰ does not fill the whole unit cell, i.e. whenever there exists a region M (the 'molecular envelope'), strictly smaller than the unit cell, such that the translates of M by vectors of Λ do not overlap and that

ρ⁰ = χ_M · ρ.

It then follows that F(h) = 𝔉[ρ⁰](h), the sample at h of the continuous transform of the motif. Defining the 'interference function' G as the normalized indicator function of M according to

G(ξ) = 𝔉[χ_M](ξ)/𝔉[χ_M](0),

we may invoke Shannon's theorem to calculate the value 𝔉[ρ⁰](ξ) at an arbitrary point ξ of reciprocal space from its sample values F(h) at points of the reciprocal lattice as

𝔉[ρ⁰](ξ) = vol(M) Σ_{h∈ℤ³} F(h) G(ξ − h).

This aspect of Shannon's theorem constitutes the mathematical basis of phasing methods based on geometric redundancies created by solvent regions and/or noncrystallographic symmetries (Bricogne, 1974). The connection between Shannon's theorem and the phase problem was first noticed by Sayre (1952b). He pointed out that the Patterson function of ρ, written as P = r ∗ (ρ̆⁰ ∗ ρ⁰), may be viewed as consisting of a motif P⁰ = ρ̆⁰ ∗ ρ⁰ (containing all the internal interatomic vectors) which is periodized by convolution with r.
As the translates of this Patterson motif by vectors of Λ do overlap, the sample values of the intensities |F(h)|² at nodes of the reciprocal lattice do not provide enough data to interpolate intensities at arbitrary points of reciprocal space. Thus the loss of phase is intimately related to the impossibility of intensity interpolation, implying in return that any indication of intensity values attached to nonintegral points of the reciprocal lattice is a potential source of phase information.
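Case (2) can be sketched in one dimension: a motif confined to an envelope M = [0, 0.4] of the unit cell lets its continuous transform be interpolated from the reciprocal-lattice samples via the transform of the indicator of M. The Gaussian 'atoms' (sharpness p) and their positions are invented for the sketch, and the small residual error comes from the slight leakage of the Gaussians outside M.

```python
import numpy as np

p = 500.0
atoms = np.array([0.10, 0.30])            # motif inside M = [0, 0.4]

def F_cont(xi):
    """Continuous transform of the motif (unit-weight Gaussian atoms)."""
    return np.sum(np.exp(2j * np.pi * xi * atoms)) * np.exp(-np.pi**2 * xi**2 / p)

def chi_hat(eta, a=0.4):
    """Transform of the indicator function of M = [0, a]."""
    return a * np.exp(1j * np.pi * eta * a) * np.sinc(eta * a)

h = np.arange(-40, 41)
Fh = np.array([F_cont(k) for k in h])      # samples at reciprocal-lattice points
xi = 0.5                                   # a non-integral point
F_interp = np.sum(Fh * chi_hat(xi - h))    # Shannon-type interpolation
```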

#### 1.3.4.2.1.8. Sections and projections

It was shown at the end of Section 1.3.2.5.8 that the convolution theorem establishes, under appropriate assumptions, a duality between sectioning a smooth function (viewed as multiplication by a δ-function in the sectioning coordinate) and projecting its transform (viewed as a convolution with the constant function 1 in the projection coordinate). This duality follows from the fact that Fourier transformation maps δ to 1 and 1 to δ (Section 1.3.2.5.6), and from the tensor product property (Section 1.3.2.5.5).

In the case of periodic distributions, projection and section must be performed with respect to directions or subspaces which are integral with respect to the period lattice if the result is to be periodic; furthermore, projections must be performed only on the contents of one repeating unit along the direction of projection, or else the result would diverge. The same relations then hold between principal central sections and projections of the electron density and the dual principal central projections and sections of the weighted reciprocal lattice, e.g.

∫₀¹ ρ(x₁, x₂, x₃) dx₃ ↔ {F(h₁, h₂, 0)},

ρ(x₁, x₂, 0) ↔ {Σ_{h₃} F(h₁, h₂, h₃)},

etc.

When the sections are principal but not central, it suffices to use the shift property of Section 1.3.2.5.5. When the sections or projections are not principal, they can be made principal by changing to new primitive bases B and B* for Λ and Λ*, respectively, the transition matrices P and P* to these new bases being related by P* = (P⁻¹)ᵀ in order to preserve duality. This change of basis must be such that one of these matrices (say, P) should have a given integer vector u as its first column, u being related to the line or plane defining the section or projection of interest.

The problem of constructing a matrix P given u received an erroneous solution in Volume II of International Tables (Patterson, 1959), which was subsequently corrected in 1962. Unfortunately, the solution proposed there is complicated and does not suggest a general approach to the problem. It therefore seems worthwhile to record here an effective procedure which solves this problem in any dimension n (Watson, 1970).

Let

u = (u₁, u₂, …, u_n)

be a primitive integral vector, i.e. g.c.d.(u₁, …, u_n) = 1. Then an integral matrix P with det P = 1 having u as its first column can be constructed by induction on n as follows. For n = 1 the result is trivial. For n = 2 it can be solved by means of the Euclidean algorithm, which yields z₁, z₂ such that u₁z₂ − u₂z₁ = 1, so that we may take as P the matrix with columns u and z = (z₁, z₂). Note that, if z is a solution, then z + mu is another solution for any m ∈ ℤ. For n ≥ 3, write u = (u₁, dz₂, …, dz_n) with d = g.c.d.(u₂, …, u_n), so that both

(u₁, d) ∈ ℤ²  and  z = (z₂, …, z_n) ∈ ℤⁿ⁻¹

are primitive. By the inductive hypothesis there is an integral 2 × 2 matrix V with (u₁, d) as its first column, and an integral (n − 1) × (n − 1) matrix Z with z as its first column, with det V = 1 and det Z = 1.

Now put P equal to the product of the block matrix with diagonal blocks 1 and Z by the block matrix with diagonal blocks V and I_{n−2}, i.e. let P have first column (u₁, dz₂, …, dz_n), second column (v₁₂, v₂₂z₂, …, v₂₂z_n), and as remaining columns the last n − 2 columns of Z, each prefixed by a zero. The first column of P is u, and its determinant is det Z × det V = 1, QED.
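An equivalent iterative variant of this construction (not the text's exact induction) reduces u to e₁ by 2 × 2 unimodular transforms built from the Euclidean algorithm, accumulating their inverses into P; all names below are invented for the sketch.

```python
def ext_gcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def completion(u):
    """Integer matrix P with det P = 1 whose first column is the primitive
    vector u (non-negative entries assumed in this sketch)."""
    n, v = len(u), list(u)
    P = [[int(i == j) for j in range(n)] for i in range(n)]
    for i in range(1, n):
        a, b = v[0], v[i]
        g, x, y = ext_gcd(a, b)
        # T = [[x, y], [-b//g, a//g]] has det 1 and maps (a, b) to (g, 0);
        # apply T to v, and T^-1 = [[a//g, -y], [b//g, x]] to the columns of P,
        # preserving the invariant u = P v.
        for row in P:
            r0, ri = row[0], row[i]
            row[0] = r0 * (a // g) + ri * (b // g)
            row[i] = -r0 * y + ri * x
        v[0], v[i] = g, 0
    assert v[0] == 1, "u must be primitive (g.c.d. of entries 1)"
    return P

P = completion([6, 10, 15])   # first column (6, 10, 15), determinant 1
```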

The incremental step from dimension n − 1 to dimension n is the construction of matrix V, for which there exist infinitely many solutions labelled by an integer m_{n−1} ∈ ℤ. Therefore, the collection of matrices P which solve the problem is labelled by n − 1 arbitrary integers (m₁, m₂, …, m_{n−1}). This freedom can be used to adjust the shape of the basis B.

Once P has been chosen, the calculation of general sections and projections is transformed into that of principal sections and projections by the changes of coordinates associated with P in real space and with P* in reciprocal space, and an appeal to the tensor product property.

Booth (1945a) made use of the convolution theorem to form the Fourier coefficients of 'bounded projections', which provided a compromise between 2D and 3D Fourier syntheses. Suppose it is desired to compute the projection on the (x, y) plane of the electron density lying between the planes z = z₁ and z = z₂, which may be written as

B(x, y) = ∫_{z₁}^{z₂} ρ(x, y, z) dz.

Termwise integration of the Fourier series for ρ then gives for coefficient (h, k):

Σ_l F(h, k, l) [exp(−2πilz₁) − exp(−2πilz₂)]/(2πil),

the term l = 0 being (z₂ − z₁)F(h, k, 0).

#### 1.3.4.2.1.9. Differential syntheses

Another particular instance of the convolution theorem is the duality between differentiation and multiplication by a monomial (Sections 1.3.2.4.2.8, 1.3.2.5.8).

In the present context, this result may be written

𝔉[∂ρ/∂X_j](ξ) = −2πiξ_j 𝔉[ρ](ξ)

in Cartesian coordinates, and

∂ρ/∂x_j (x) = Σ_h (−2πih_j) F(h) exp(−2πih·x)

in crystallographic coordinates.

A particular case of the first formula is

𝔉[∇²ρ](ξ) = −4π²‖ξ‖² 𝔉[ρ](ξ),

where

∇²ρ = Σ_j ∂²ρ/∂X_j²

is the Laplacian of ρ.

The second formula has been used with first- or second-order monomials to compute 'differential syntheses' and refine the location of maxima (or other stationary points) in electron-density maps. Indeed, the values at x of the gradient vector ∇ρ and Hessian matrix (∇∇ᵀρ) are readily obtained as

(∇ρ)(x) = Σ_h (−2πih) F(h) exp(−2πih·x),

(∇∇ᵀρ)(x) = Σ_h (−4π² hhᵀ) F(h) exp(−2πih·x),

and a step of Newton iteration towards the nearest stationary point of ρ will proceed by

x ↦ x − [(∇∇ᵀρ)(x)]⁻¹ (∇ρ)(x).
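The peak-refinement step can be sketched in one dimension; the 'atom' position, the Gaussian damping of the coefficients and the starting estimate are all invented for the sketch.

```python
import numpy as np

h = np.arange(-20, 21)
x0 = 0.37                                 # true peak position (toy model)
Fh = np.exp(2j * np.pi * h * x0) * np.exp(-0.05 * h**2)

def deriv(x, m):
    """m-th derivative of rho(x) = sum_h F(h) exp(-2*pi*i*h*x),
    evaluated by termwise differentiation of the Fourier series."""
    return np.real(np.sum((-2j * np.pi * h) ** m * Fh * np.exp(-2j * np.pi * h * x)))

x = 0.34                                  # starting estimate of the maximum
for _ in range(6):
    x -= deriv(x, 1) / deriv(x, 2)        # Newton step x -> x - rho'/rho''
```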

The modern use of Fourier transforms to speed up the computation of derivatives for model refinement will be described in Section 1.3.4.4.7.

The converse property is also useful: it relates the derivatives of the continuous transform 𝔉[ρ⁰] to the moments of ρ⁰:

∂^{m₁+m₂+m₃} 𝔉[ρ⁰] / ∂ξ₁^{m₁} ∂ξ₂^{m₂} ∂ξ₃^{m₃} (ξ) = 𝔉[(2πiX₁)^{m₁} (2πiX₂)^{m₂} (2πiX₃)^{m₃} ρ⁰](ξ).

For m₁ + m₂ + m₃ = 2 and ξ = 0, this identity gives the well known relation between the Hessian matrix of the transform 𝔉[ρ⁰] at the origin of reciprocal space and the inertia tensor of the motif ρ⁰. This is a particular case of the moment-generating properties of 𝔉, which will be further developed in Section 1.3.4.5.2.

#### 1.3.4.2.1.10. Toeplitz forms, determinantal inequalities and Szegö's theorem

The classical results presented in Section 1.3.2.6.9 can be readily generalized to the case of triple Fourier series; no new concept is needed, only an obvious extension of the notation.

Let ρ be real-valued, so that Friedel's law holds and F(−h) = conj[F(h)]. Let H = {h₀ = 0, h₁, …, h_n} be a finite set of indices comprising the origin. Then the Hermitian form in n + 1 complex variables

T_H[u] = Σ_{j,k} F(h_j − h_k) u_j conj(u_k)

is called the Toeplitz form of order H associated to ρ. By the convolution theorem and Parseval's identity,

T_H[u] = ∫_{[0,1]³} ρ(x) |Σ_j u_j exp(2πih_j·x)|² d³x.

If ρ is almost everywhere non-negative, then for all H the forms T_H are positive semi-definite and therefore all Toeplitz determinants D_H are non-negative, where

D_H = det {[F(h_j − h_k)]}.

The Toeplitz–Carathéodory–Herglotz theorem given in Section 1.3.2.6.9.2 states that the converse is true: if D_H ≥ 0 for all finite H, then ρ is almost everywhere non-negative. This result is known in the crystallographic literature through the papers of Karle & Hauptman (1950), MacGillavry (1950), and Goedkoop (1950), following previous work by Harker & Kasper (1948) and Gillis (1948a,b).

Szegö's study of the asymptotic distribution of the eigenvalues of Toeplitz forms as their order tends to infinity remains valid. Some precautions are needed, however, to define the notion of a sequence of finite subsets H_k of indices tending to infinity: it suffices that the H_k should consist essentially of the reciprocal-lattice points h contained within a domain of the form kΩ (k-fold dilation of Ω) where Ω is a convex domain in ℝ³ containing the origin (Widom, 1960). Under these circumstances, the eigenvalues of the Toeplitz forms T_{H_k} become equidistributed with the sample values of ρ on a grid satisfying the Shannon sampling criterion for the data in H_k (cf. Section 1.3.2.6.9.3).

A particular consequence of this equidistribution is that the geometric means of the eigenvalues and of the sample values of ρ are equal, and hence as in Section 1.3.2.6.9.4

lim_{k→∞} (D_{H_k})^{1/n(H_k)} = exp {∫_{[0,1]³} log ρ(x) d³x},

where n(H_k) denotes the number of reflections in H_k. Complementary terms giving a better comparison of the two sides were obtained by Widom (1960, 1975) and Linnik (1975).

This formula played an important role in the solution of the 2D Ising model by Onsager (1944) (see Montroll et al., 1963). It is also encountered in phasing methods involving the 'Burg entropy' (Britten & Collins, 1982; Narayan & Nityananda, 1982; Bricogne, 1982, 1984, 1988).

#### 1.3.4.2.2.1. Crystallographic groups

The description of a crystal given so far has dealt only with its invariance under the action of the (discrete Abelian) group of translations by vectors of its period lattice Λ.

Let the crystal now be embedded in Euclidean 3-space, so that it may be acted upon by the group M(3) of rigid (i.e. distance-preserving) motions of that space. The group M(3) contains a normal subgroup T(3) of translations, and the quotient group M(3)/T(3) may be identified with the 3-dimensional orthogonal group O(3). The period lattice Λ of a crystal is a discrete uniform subgroup of T(3).

The possible invariance properties of a crystal under the action of M(3) are captured by the following definition: a crystallographic group is a subgroup Γ of M(3) if

 (i) Γ ∩ T(3) = Λ, a period lattice and a normal subgroup of Γ; (ii) the factor group G = Γ/Λ is finite.

The two properties are not independent: by a theorem of Bieberbach (1911), they follow from the assumption that Γ is a discrete subgroup of M(3) which operates without accumulation point and with a compact fundamental domain (see Auslander, 1965). These two assumptions imply that G acts on Λ through an integral representation, and this observation leads to a complete enumeration of all distinct Γ's. The mathematical theory of these groups is still an active research topic (see, for instance, Farkas, 1981), and has applications to Riemannian geometry (Wolf, 1967).

This classification of crystallographic groups is described elsewhere in these Tables (Wondratschek, 2005), but it will be surveyed briefly in Section 1.3.4.2.2.3 for the purpose of establishing further terminology and notation, after recalling basic notions and results concerning groups and group actions in Section 1.3.4.2.2.2.

#### 1.3.4.2.2.2. Groups and group actions

The books by Hall (1959) and Scott (1964) are recommended as reference works on group theory.

• (a) Left and right actions

Let G be a group with identity element e, and let X be a set. An action of G on X is a mapping from G × X to X with the property that, if gx denotes the image of (g, x), then

 (i) (g₁g₂)x = g₁(g₂x) for all g₁, g₂ ∈ G and all x ∈ X; (ii) ex = x for all x ∈ X.

An element g of G thus induces a mapping T_g of X into itself defined by T_g(x) = gx, with the 'representation property':

 (iii) T_{g₁g₂} = T_{g₁}T_{g₂}.

Since G is a group, every g has an inverse g⁻¹; hence every mapping T_g has an inverse T_{g⁻¹}, so that each T_g is a permutation of X.

Strictly speaking, what has just been defined is a left action. A right action of G on X is defined similarly as a mapping (g, x) ↦ xg such that

 (i′) x(g₁g₂) = (xg₁)g₂ for all g₁, g₂ ∈ G and all x ∈ X; (ii′) xe = x for all x ∈ X.

The mapping T′_g defined by T′_g(x) = xg then has the 'right-representation' property:

 (iii′) T′_{g₁g₂} = T′_{g₂}T′_{g₁}.

The essential difference between left and right actions is of course not whether the elements of G are written on the left or right of those of X: it lies in the difference between (iii) and (iii′). In a left action the product g₁g₂ operates on x by g₂ operating first, then g₁ operating on the result; in a right action, g₁ operates first, then g₂. This distinction will be of importance in Sections 1.3.4.2.2.4 and 1.3.4.2.2.5. In the sequel, we will use left actions unless otherwise stated.

• (b) Orbits and isotropy subgroups

Let x be a fixed element of X. Two fundamental entities are associated to x:

 (1) the subset of G consisting of all g such that gx = x is a subgroup of G, called the isotropy subgroup of x and denoted G_x; (2) the subset of X consisting of all elements gx with g running through G is called the orbit of x under G and is denoted Gx.

Through these definitions, the action of G on X can be related to the internal structure of G, as follows. Let G/G_x denote the collection of distinct left cosets of G_x in G, i.e. of distinct subsets of G of the form gG_x. Let |G|, |G_x|, |G/G_x| and |Gx| denote the numbers of elements in the corresponding sets. The number of distinct cosets of G_x in G is also denoted [G : G_x] and is called the index of G_x in G; by Lagrange's theorem

[G : G_x] = |G|/|G_x|.

Now if g₁ and g₂ are in the same coset of G_x, then g₂ = g₁g′ with g′ ∈ G_x, and hence g₁x = g₂x; the converse is obviously true. Therefore, the mapping from cosets to orbit elements

gG_x ↦ gx

establishes a one-to-one correspondence between the distinct left cosets of G_x in G and the elements of the orbit of x under G. It follows that the number of distinct elements in the orbit of x is equal to the index of G_x in G:

|Gx| = [G : G_x] = |G|/|G_x|,

and that the elements of the orbit of x may be listed without repetition in the form

Gx = {γx | γ running through a set of representatives of the cosets in G/G_x}.
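The orbit–coset correspondence can be checked directly on a small example; the choice of S₃ acting on three points is an invented toy case.

```python
from itertools import permutations

# G = S3, all permutations of X = {0, 1, 2}, acting by g.x = g[x].
X = (0, 1, 2)
G = list(permutations(X))

x = 0
orbit = {g[x] for g in G}
stabilizer = [g for g in G if g[x] == x]       # isotropy subgroup G_x
# left cosets g G_x, each stored as a frozenset of permutations g.s
cosets = {frozenset(tuple(g[s[i]] for i in X) for s in stabilizer) for g in G}
```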

Similar definitions may be given for a right action of G on X. The set of distinct right cosets of G_x in G, denoted G_x\G, is then in one-to-one correspondence with the distinct elements in the orbit xG of x.

• (c) Fundamental domain and orbit decomposition

The group properties of G imply that two orbits under G are either disjoint or equal. The set X may thus be written as the disjoint union

X = ⋃_{i∈I} Gx_i,

where the x_i are elements of distinct orbits and I is an indexing set labelling them. The subset D = {x_i}_{i∈I} is said to constitute a fundamental domain (mathematical terminology) or an asymmetric unit (crystallographic terminology) for the action of G on X: it contains one representative of each distinct orbit. Clearly, other fundamental domains may be obtained by choosing different representatives for these orbits.

If X is finite and if f is an arbitrary complex-valued function over X, the 'integral' of f over X may be written as a sum of integrals over the distinct orbits, yielding the orbit decomposition formula:

Σ_{x∈X} f(x) = Σ_{i∈I} Σ_{x′∈Gx_i} f(x′) = Σ_{i∈I} (1/|G_{x_i}|) Σ_{g∈G} f(gx_i).

In particular, taking f(x) = 1 for all x and denoting by |X| the number of elements of X:

|X| = Σ_{i∈I} |Gx_i| = Σ_{i∈I} [G : G_{x_i}] = Σ_{i∈I} |G|/|G_{x_i}|.

• (d) Conjugation, normal subgroups, semi-direct products

A group G acts on itself by conjugation, i.e. by associating to g ∈ G the mapping C_g defined by

C_g(h) = ghg⁻¹.

Indeed, C_g(hk) = C_g(h)C_g(k) and [C_g(h)]⁻¹ = C_g(h⁻¹). In particular, C_g operates on the set of subgroups of G, two subgroups H and K being called conjugate if H = C_g(K) for some g ∈ G; for example, it is easily checked that C_g(G_x) = G_{gx}. The orbits under this action are the conjugacy classes of subgroups of G, and the isotropy subgroup of H under this action is called the normalizer of H in G.

If {H} is a one-element orbit, H is called a self-conjugate or normal subgroup of G; the cosets of H in G then form a group G/H called the factor group of G by H.

Let G and H be two groups, and suppose that G acts on H by automorphisms of H, i.e. in such a way that each g ∈ G defines an automorphism h ↦ g(h) of H:

g(h₁h₂) = g(h₁)g(h₂),  g(e_H) = e_H,  (g₁g₂)(h) = g₁[g₂(h)].

Then the symbols [g, h] with g ∈ G, h ∈ H form a group K under the product rule:

[g₁, h₁][g₂, h₂] = [g₁g₂, h₁g₁(h₂)]

{associativity checks; [e_G, e_H] is the identity; [g, h] has inverse [g⁻¹, g⁻¹(h⁻¹)]}. The group K is called the semi-direct product of H by G, denoted K = H ⋊ G.

The elements [g, e_H] form a subgroup of K isomorphic to G, the elements [e_G, h] form a normal subgroup of K isomorphic to H, and the action of G on H may be represented as an action by conjugation in the sense that

[g, e_H][e_G, h][g, e_H]⁻¹ = [e_G, g(h)].

A familiar example of semi-direct product is provided by the group of Euclidean motions M(3) (Section 1.3.4.2.2.1). An element S of M(3) may be written S = [R, t] with R ∈ O(3), the orthogonal group, and t ∈ T(3), the translation group, and the product law

[R₁, t₁][R₂, t₂] = [R₁R₂, t₁ + R₁t₂]

shows that M(3) = T(3) ⋊ O(3) with O(3) acting on T(3) by rotating the translation vectors.
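The product law above can be verified numerically by checking that composing two motions and then acting on a point agrees with acting twice; the rotation and translations below are invented for the sketch.

```python
import numpy as np

def compose(S1, S2):
    """(R1, t1)(R2, t2) = (R1 R2, t1 + R1 t2): the product law of the
    semi-direct product of motions x -> R x + t."""
    R1, t1 = S1
    R2, t2 = S2
    return (R1 @ R2, t1 + R1 @ t2)

def apply(S, x):
    """Action of the motion S = (R, t) on a point x."""
    R, t = S
    return R @ x + t

Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
S1 = (Rz, np.array([1., 0., 0.]))
S2 = (Rz.T, np.array([0., 2., 0.]))
x = np.array([1., 2., 3.])
# compose(S1, S2) acting on x equals S1 acting on (S2 acting on x)
```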

• (e) Associated actions in function spaces

For every left action T_g of G in X, there is an associated left action T_g^# of G on the space L(X) of complex-valued functions over X, defined by 'change of variable' (Section 1.3.2.3.9.5):

[T_g^# f](x) = f(g⁻¹x).

Indeed for any g₁, g₂ in G,

{T_{g₁}^# [T_{g₂}^# f]}(x) = [T_{g₂}^# f](g₁⁻¹x) = f(g₂⁻¹g₁⁻¹x);

since g₂⁻¹g₁⁻¹ = (g₁g₂)⁻¹, it follows that

T_{g₁g₂}^# = T_{g₁}^# T_{g₂}^#.

It is clear that the change of variable must involve the action of g⁻¹ (not g) if T^# is to define a left action; using g instead would yield a right action.

The linear representation operators T_g^# on L(X) provide the most natural instrument for stating and exploiting symmetry properties which a function may possess with respect to the action of G. Thus a function f will be called G-invariant if f(gx) = f(x) for all g ∈ G and all x ∈ X. The value f(x) then depends on x only through its orbit Gx, and f is uniquely defined once it is specified on a fundamental domain D = {x_i}_{i∈I}; its integral over X is then a weighted sum of its values in D:

Σ_{x∈X} f(x) = Σ_{i∈I} [G : G_{x_i}] f(x_i).

The G-invariance of f may be written:

T_g^# f = f  for all g ∈ G.

Thus f is invariant under each T_g^#, which obviously implies that f is invariant under the linear operator in L(X)

A_G = (1/|G|) Σ_{g∈G} T_g^#,

which averages an arbitrary function by the action of G. Conversely, if A_G f = f, then

T_{g′}^# f = T_{g′}^# (A_G f) = (T_{g′}^# A_G) f = A_G f = f  for all g′ ∈ G,

so that the two statements of the G-invariance of f are equivalent. The identity

T_{g′}^# A_G = A_G  for all g′ ∈ G

is easily proved by observing that the map g ↦ g′g (g′ being any element of G) is a one-to-one map from G into itself, so that

Σ_{g∈G} T_{g′g}^# = Σ_{g∈G} T_g^#

as these sums differ only by the order of the terms. The same identity implies that A_G is a projector:

A_G² = A_G,

and hence that its eigenvalues are either 0 or 1. In summary, we may say that the invariance of f under G is equivalent to f being an eigenfunction of the associated projector A_G for eigenvalue 1.
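The projector property of group averaging can be demonstrated on a minimal example; the two-element group acting on a sampled function by index reversal is an invented stand-in for a symmetry operator.

```python
import numpy as np

# Two-element group G = {e, g}, with g acting on a sampled function
# by index reversal (an involution, so g*g = e).
def S(f, g):
    return f if g == 'e' else f[::-1]

def average(f):
    """A_G f = (1/|G|) sum_g S_g f: the G-averaging operator."""
    return sum(S(f, g) for g in ('e', 'g')) / 2

f = np.random.default_rng(1).normal(size=8)
f_inv = average(f)
# f_inv is G-invariant, and averaging an already averaged function
# returns it unchanged (A_G is a projector).
```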

• (f) Orbit exchange

One final result about group actions which will be used repeatedly later is concerned with the case when X has the structure of a Cartesian product:

X = X₁ × X₂ × … × X_n,

and when G acts diagonally on X, i.e. acts on each X_j separately:

g(x₁, x₂, …, x_n) = (gx₁, gx₂, …, gx_n).

Then complete sets (but not usually minimal sets) of representatives of the distinct orbits for the action of G in X may be obtained in the form

D_k = X₁ × … × X_{k−1} × D^{(k)} × X_{k+1} × … × X_n

for each k = 1, 2, …, n, i.e. by taking a fundamental domain D^{(k)} in X_k and all the elements in X_j with j ≠ k. The action of G on each D_k does indeed generate the whole of X: given an arbitrary element y = (y₁, y₂, …, y_n) of X, there is an x_k ∈ D^{(k)} and a coset of G_{x_k} in G such that y_k = γx_k for any representative γ of that coset; then

y = γ(γ⁻¹y₁, …, γ⁻¹y_{k−1}, x_k, γ⁻¹y_{k+1}, …, γ⁻¹y_n),

which is of the form y = γd with d ∈ D_k.

The various D_k are related in a simple manner by 'transposition' or 'orbit exchange' (the latter name is due to J. W. Cooley). For instance, D_j may be obtained from D_k as follows: for each y_j ∈ X_j there exist g(y_j) ∈ G and x_j ∈ D^{(j)} such that y_j = g(y_j)x_j; therefore each representative in D_k may be carried by the corresponding g(y_j)⁻¹ into a representative lying in D_j, since the fundamental domain of X_k is thus expanded to the whole of X_k, while X_j is reduced to its fundamental domain. In other words: orbits are simultaneously collapsed in the jth factor and expanded in the kth.

When G operates without fixed points in each X_k (i.e. G_{x_k} = {e} for all x_k ∈ X_k), then each D_k is a fundamental domain for the action of G in X. The existence of fixed points in some or all of the X_k complicates the situation in that for each k and each x_k such that G_{x_k} ≠ {e} the action of G_{x_k} on the other factors must be examined. Shenefelt (1988) has made a systematic study of orbit exchange for space group P622 and its subgroups.

Orbit exchange will be encountered, in a great diversity of forms, as the basic mechanism by which intermediate results may be rearranged between the successive stages of the computation of crystallographic Fourier transforms (Section 1.3.4.3).

#### 1.3.4.2.2.3. Classification of crystallographic groups

Let Γ be a crystallographic group, Λ the normal subgroup of its lattice translations, and G the finite factor group . Then G acts on Λ by conjugation [Section 1.3.4.2.2.2(d)] and this action, being a mapping of a lattice into itself, is representable by matrices with integer entries.

The classification of crystallographic groups proceeds from this observation in the following three steps:

 Step 1: find all possible finite abstract groups G which can be represented by integer matrices. Step 2: for each such G find all its inequivalent representations by integer matrices, equivalence being defined by a change of primitive lattice basis (i.e. conjugation by an integer matrix with determinant ±1). Step 3: for each G and each equivalence class of integral representations of G, find all inequivalent extensions of the action of G from Λ to T(3), equivalence being defined by an affine coordinate change [i.e. conjugation by an element of the affine group A(3)].

Step 1 leads to the following groups, listed in association with the crystal system to which they later give rise, together with the extension of these groups by a centre of inversion. In this list ⋊ denotes a semi-direct product [Section 1.3.4.2.2.2(d)], α denotes the automorphism g ↦ g⁻¹, and S₃ (the group of permutations on three letters) operates by permuting the three copies of the cyclic factor (using the subgroup A₃ of cyclic permutations gives the tetrahedral subsystem).

Step 2 leads to a list of 73 equivalence classes called arithmetic classes of representations g ↦ M_g, where M_g is a 3 × 3 integer matrix, with M_e = I₃ and M_{g₁g₂} = M_{g₁}M_{g₂}. This enumeration is more familiar if equivalence is relaxed so as to allow conjugation by rational 3 × 3 matrices with determinant ±1: this leads to the 32 crystal classes. The difference between an arithmetic class and its rational class resides in the choice of a lattice mode (P, A/B/C, I, F or R). Arithmetic classes always refer to a primitive lattice, but may use inequivalent integral representations for a given geometric symmetry element; while crystallographers prefer to change over to a nonprimitive lattice, if necessary, in order to preserve the same integral representation for a given geometric symmetry element. The matrices P and P⁻¹ describing the changes of basis between primitive and centred lattices are listed in Table 5.1.3.1 and illustrated in Figs. 5.1.3.2 to 5.1.3.8, pp. 80–85, of Volume A of International Tables (Arnold, 2005).

Step 3 gives rise to a system of congruences for the systems of nonprimitive translations which may be associated to the matrices of a given arithmetic class; these congruences were first derived by Frobenius (1911). If equivalence under the action of is taken into account, 219 classes are found. If equivalence is defined with respect to the action of the subgroup of consisting only of transformations with determinant +1, then 230 classes called space-group types are obtained. In particular, associating to each of the 73 arithmetic classes a trivial set of nonprimitive translations yields the 73 symmorphic space groups. This third step may also be treated as an abstract problem concerning group extensions, using cohomological methods [Ascher & Janner (1965); see Janssen (1973) for a summary]; the connection with Frobenius's approach, as generalized by Zassenhaus (1948), is examined in Ascher & Janner (1968).

The finiteness of the number of space-group types in dimension 3 was shown by Bieberbach (1912) to hold in arbitrary dimension. The reader interested in N-dimensional space-group theory may consult Brown (1969), Brown et al. (1978), Schwarzenberger (1980) and Engel (1986). The standard reference for integral representation theory is Curtis & Reiner (1962).

All three-dimensional space groups G have the property of being solvable, i.e. there exists a chain of subgroups in which each subgroup is normal in the next and each factor group is cyclic of some finite order. This property may be established by inspection, or deduced from a famous theorem of Burnside [see Burnside (1911), pp. 322–323] according to which any group G of order p^a q^b, with p and q distinct primes, is solvable; in the case at hand p = 2 and q = 3. The whole classification of 3D space groups can be performed swiftly by a judicious use of the solvability property (L. Auslander, personal communication).

Solvability facilitates the indexing of elements of G in terms of generators and relations (Coxeter & Moser, 1972; Magnus et al., 1976) for the purpose of calculation. By definition of solvability, generators may be chosen in such a way that each cyclic factor group is generated by the coset of one of them. Such a set is then a system of generators for G for which the defining relations [see Brown et al. (1978), pp. 26–27] take a particularly simple form. Each element g of G may then be obtained uniquely as an 'ordered word' in these generators, using the algorithm of Jürgensen (1970). Such generating sets and defining relations are tabulated in Brown et al. (1978, pp. 61–76). An alternative list is given in Janssen (1973, Table 4.3, pp. 121–123, and Appendix D, pp. 262–271).
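As a small illustration of such 'ordered word' indexing (a sketch using the symmetric group S3, a solvable group of order 6; the generators r and s are a hypothetical choice, not the tabulated generators of Brown et al.):

```python
from itertools import product

# S3 realized as permutations of {0, 1, 2}: r is a 3-cycle, s a transposition.
def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

e = (0, 1, 2)
r = (1, 2, 0)                      # generator of the cyclic factor of order 3
s = (1, 0, 2)                      # generator of the cyclic factor of order 2

def power(p, n):
    out = e
    for _ in range(n):
        out = compose(out, p)
    return out

# Every element occurs exactly once as an ordered word s^a2 r^a1,
# with 0 <= a1 < 3 and 0 <= a2 < 2.
words = {compose(power(s, a2), power(r, a1))
         for a1, a2 in product(range(3), range(2))}
print(len(words))                  # 6 distinct elements, each reached once
```

The uniqueness of the ordered-word form is what makes such indexing usable as a loop structure in symmetry-aware calculations.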

#### 1.3.4.2.2.4. Crystallographic group action in real space

| top | pdf |

The action of a crystallographic group Γ may be written in terms of standard coordinates x as an affine transformation combining a rotational (matrix) part and a translational part, taken modulo lattice translations.

An important characteristic of the representation is its reducibility, i.e. whether or not it has invariant subspaces other than {0} and the whole space. For triclinic, monoclinic and orthorhombic space groups, θ is reducible to a direct sum of three one-dimensional representations; for trigonal, tetragonal and hexagonal groups, it is reducible to a direct sum of two representations, of dimension 2 and 1, respectively; while for tetrahedral and cubic groups, it is irreducible.

By Schur's lemma (see e.g. Ledermann, 1987), any matrix which commutes with all the matrices of the representation must be a scalar multiple of the identity in each invariant subspace.

In the reducible cases, the reductions involve changes of basis which will be rational, not integral, for those arithmetic classes corresponding to nonprimitive lattices. Thus the simplification of having a maximally reduced representation has as its counterpart the use of nonprimitive lattices.

The notions of orbit, isotropy subgroup and fundamental domain (or asymmetric unit) for the action of G are inherited directly from the general setting of Section 1.3.4.2.2.2. Points x with nontrivial isotropy subgroup are called special positions, and the various types of isotropy subgroups which may be encountered in crystallographic groups have been labelled by means of Wyckoff symbols. The representation operators act on functions by transformation of their argument; the operators associated to the purely rotational part of each transformation will also be used, and the two families are related in a simple way.

Let a crystal structure be described by the list of the atoms in its unit cell. Let the electron-density distribution about the centre of mass of atom k be described with respect to the standard coordinates x. Then the motif may be written as a sum of translates of these atomic densities, and the crystal electron density is the periodization of the motif by the period lattice.

Suppose that the crystal density is invariant under Γ. If two atomic positions are in the same orbit, then their atomic densities are related by the corresponding group operation. Therefore if an atom occupies a special position, the resulting identity implies the special position condition, and implies that its atomic density must be invariant under the pure rotational part of each element of its isotropy subgroup. Trueblood (1956) investigated the consequences of this invariance on the thermal vibration tensor of an atom in a special position (see Section 1.3.4.2.2.6 below).

Let J be a subset of K containing exactly one atom from each orbit. An orbit decomposition then yields an expression for the motif in terms of symmetry-unique atoms. If the atoms are assumed to be Gaussian, each atomic density may be written as a normalized Gaussian whose integral is the total number of electrons, and whose matrix combines the Gaussian spread of the electrons in atom j at rest with the covariance matrix of the random positional fluctuations of atom j caused by thermal agitation.

In crystallographic coordinates:

If atom k is in a special position, then its matrix must satisfy an invariance identity for all g in the isotropy subgroup of that position. This condition may also be written in Cartesian coordinates. It is a condensed form of the symmetry properties derived by Trueblood (1956).
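A numerical sketch of Trueblood's constraint (hypothetical tensor values, Cartesian axes, with a twofold rotation about z as the isotropy operation): the thermal tensor U must satisfy R U Rᵀ = U, which for this axis forces the U13 and U23 components to vanish.

```python
import numpy as np

# Twofold axis along z, in Cartesian coordinates.
R = np.diag([-1, -1, 1])

# A tensor obeying the constraint: U13 = U23 = 0 (values are hypothetical).
U = np.array([[0.020, 0.005, 0.000],
              [0.005, 0.030, 0.000],
              [0.000, 0.000, 0.010]])
assert np.allclose(R @ U @ R.T, U)          # invariant, as required

# Violating the constraint with a nonzero U13 breaks the invariance.
U_bad = U.copy()
U_bad[0, 2] = U_bad[2, 0] = 0.004
assert not np.allclose(R @ U_bad @ R.T, U_bad)
```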

#### 1.3.4.2.2.5. Crystallographic group action in reciprocal space

| top | pdf |

An elementary discussion of this topic may be found in Chapter 1.4 of this volume.

Having established that the symmetry of a crystal may be most conveniently stated and handled via the left representation of G given by its action on electron-density distributions, it is natural to transpose this action by the identity of Section 1.3.2.5.5, valid for any tempered distribution T and, in particular, whenever the transforms are functions.

Putting T equal to the periodic crystal density, this relation defines a left action of G on the structure factors which is conjugate to the action in real space. The identity expressing the G-invariance of the density is then equivalent to an identity between its structure factors (Waser, 1955a).

If G is made to act on reciprocal-lattice indices via the transposed rotations, the usual notions of orbit, isotropy subgroup and fundamental domain may be attached to this action. The above relation then shows that the spectrum is entirely known if it is specified on a fundamental domain containing one reciprocal-lattice point from each orbit of this action.

A reflection h is called special if its isotropy subgroup under this action is nontrivial. For any g in that isotropy subgroup the symmetry relation multiplies F(h) by a phase factor, implying that F(h) = 0 unless that phase factor is unity. Special reflections h for which the phase factor differs from unity for some g are thus systematically absent. This phenomenon is an instance of the duality between periodization and decimation of Section 1.3.2.7.2: the projection of the density on the direction of h is periodic with a fraction of the lattice period, hence its transform (which is the portion of F supported by the central line through h) will be decimated, giving rise to the above condition.
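A concrete check of the absence condition, sketched for a 2₁ screw axis along b (the nontrivial operation of P2₁, written in fractional coordinates): reflections h with Rᵀh = h are special, and must vanish whenever h·t is non-integral.

```python
import numpy as np

R = np.diag([-1, 1, -1])              # rotation part of the 2_1 operation
t = np.array([0.0, 0.5, 0.0])         # screw translation of 1/2 along b

def is_absent(h):
    """True if h is special for this operation and h.t is non-integral."""
    h = np.asarray(h)
    if not np.array_equal(R.T @ h, h):
        return False                   # h not special: no constraint
    return not np.isclose((h @ t) % 1.0, 0.0)

assert is_absent((0, 1, 0))            # 0k0, k odd: systematically absent
assert not is_absent((0, 2, 0))        # 0k0, k even: allowed
assert not is_absent((1, 0, 0))        # not special for this operation
```

This reproduces the familiar 0k0, k odd extinction rule of P2₁.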

A reflection h is called centric if the orbit of h contains −h. Then −h is the image of h under some coset γ, so that a relation must hold between F(h) and the complex conjugate of F(h) multiplied by a phase factor. In the absence of dispersion, Friedel's law then gives rise to a phase restriction. The value of the restricted phase is independent of the choice of coset representative γ: if another representative is chosen, the two phase values differ, by the Frobenius congruences, by a quantity which is trivial whenever h is not a systematic absence.
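In the centrosymmetric case every reflection is centric, and with the origin at the centre of inversion the restricted phase is 0 or π, i.e. F(h) is real; a minimal point-atom sketch with hypothetical coordinates:

```python
import numpy as np

# Two atoms plus their inversion images through the origin (unit f).
atoms = [np.array([0.11, 0.27, 0.36]), np.array([0.42, 0.08, 0.19])]

def F(h):
    h = np.asarray(h, float)
    return sum(np.exp(2j * np.pi * h @ x) + np.exp(-2j * np.pi * h @ x)
               for x in atoms)         # each pair sums to 2 cos(2*pi h.x)

val = F((3, 1, 2))
assert np.isclose(val.imag, 0.0)       # phase restricted to 0 or pi
```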

The treatment of centred lattices may be viewed as another instance of the duality between periodization and decimation (Section 1.3.2.7.2): the periodization of the electron density by the nonprimitive lattice translations has as its counterpart in reciprocal space the decimation of the transform by the 'reflection conditions' describing the allowed reflections, the decimation and periodization matrices being each other's contragredient.

The reader may consult the papers by Bienenstock & Ewald (1962) and Wells (1965) for earlier approaches to this material.

#### 1.3.4.2.2.6. Structure-factor calculation

| top | pdf |

Structure factors may be calculated from a list of symmetry-unique atoms by Fourier transformation of the orbit decomposition formula for the motif given in Section 1.3.4.2.2.4, the summation over the group being applied to the atomic transforms.

In the case of Gaussian atoms, the atomic transforms are again Gaussians (the Fourier transform of a Gaussian being a Gaussian), and may be written in either of two equivalent forms.

Two common forms of equivalent temperature factors (incorporating both atomic form and thermal motion) are

(i) isotropic B, giving an attenuation factor of the form exp[−B(sin θ/λ)^2]; (ii) anisotropic β's, giving a factor of the form exp(−h^Tβh) in the reflection indices.

In the first case the temperature factor does not depend on g, and may therefore be taken outside the summation over the symmetry operations. In the second case, however, no such simplification can occur. These formulae, or special cases of them, were derived by Rollett & Davies (1955), Waser (1955b) and Trueblood (1956).
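A minimal structure-factor summation in this spirit (point atoms, temperature factors omitted, hypothetical coordinates and scattering factors, the two operations of P2₁), which also verifies Friedel's law for real scattering factors:

```python
import numpy as np

# Symmetry operations (R_g, t_g) of P2_1 in fractional coordinates.
ops = [(np.eye(3), np.zeros(3)),
       (np.diag([-1, 1, -1]), np.array([0.0, 0.5, 0.0]))]

# Symmetry-unique atoms: (scattering factor f_j, fractional position x_j).
atoms = [(6.0, np.array([0.10, 0.20, 0.30])),
         (8.0, np.array([0.40, 0.15, 0.25]))]

def F(h):
    h = np.asarray(h, float)
    return sum(fj * np.exp(2j * np.pi * h @ (Rg @ xj + tg))
               for Rg, tg in ops for fj, xj in atoms)

# Friedel's law for real f_j: F(-h) is the complex conjugate of F(h).
assert np.isclose(F((-2, -3, -1)), np.conj(F((2, 3, 1))))
```

A full implementation would multiply each term by the appropriate isotropic or anisotropic temperature factor as described above.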

The computation of structure factors by applying the discrete Fourier transform to a set of electron-density values calculated on a grid will be examined in Section 1.3.4.4.5.

#### 1.3.4.2.2.7. Electron-density calculations

| top | pdf |

A formula for the Fourier synthesis of electron-density maps from symmetry-unique structure factors is readily obtained by orbit decomposition, where L is a subset of reflection indices containing exactly one point of each orbit for the action of G in reciprocal space. The physical electron density per cubic ångström is then obtained by including the factor 1/V, with V in Å3.

In the absence of anomalous scatterers in the crystal and of a centre of inversion −I in Γ, the spectrum has an extra symmetry, namely the Hermitian symmetry expressing Friedel's law (Section 1.3.4.2.1.4). The action of a centre of inversion may be added to that of Γ to obtain further simplification in the above formula: under this extra action, an orbit is either mapped into itself or into a disjoint orbit; the terms corresponding to h and −h may then be grouped within the common orbit in the first case, and between the two orbits in the second case.

Case 1: the orbit is centric. The cosets may be partitioned into two disjoint classes by picking one coset in each of the two-coset orbits of the action of −I. The reduced orbit defined by one such class contains exactly once the Friedel-unique half of the full orbit. Grouping the summands for h and −h then yields a real-valued summand. Case 2: the orbit is acentric. The two orbits are then disjoint, and the summands corresponding to h and −h may be grouped together into a single real-valued summand. In order to reindex the collection of all summands, the set L may be split into the Friedel-unique centric reflections and the acentric ones, retaining a unique element of each Friedel pair among the latter.
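The grouping of a Friedel pair into a single real-valued summand can be checked numerically (arbitrary hypothetical acentric structure factor and value of h·x, with the synthesis convention that each term is F(h) exp(−2πi h·x)):

```python
import numpy as np

Fh = 3.0 * np.exp(1j * 0.7)            # F(h) = |F| exp(i*phi), hypothetical
hx = 0.23                              # value of h.x at some grid point

# Friedel mate: F(-h) = conj(F(h)); sum the pair of synthesis terms.
pair = Fh * np.exp(-2j * np.pi * hx) + np.conj(Fh) * np.exp(2j * np.pi * hx)

# Single real-valued summand: 2|F(h)| cos(2*pi h.x - phi).
grouped = 2 * abs(Fh) * np.cos(2 * np.pi * hx - np.angle(Fh))
assert np.isclose(pair.imag, 0.0) and np.isclose(pair.real, grouped)
```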

#### 1.3.4.2.2.8. Parseval's theorem with crystallographic symmetry

| top | pdf |

The general statement of Parseval's theorem given in Section 1.3.4.2.1.5 may be rewritten in terms of symmetry-unique structure factors and electron densities by means of orbit decomposition.

In reciprocal space, for each l the summands corresponding to the various symmetry-related reflections are equal, so that the left-hand side reduces to a sum over unique reflections weighted by their orbit multiplicities.

In real space, the triple integral may be rewritten as an integral over the asymmetric unit D if the densities are smooth, since the set of special positions has measure zero. If, however, the integral is approximated as a sum over a G-invariant grid defined by decimation matrix N, special positions on this grid must be taken into account, the discrete asymmetric unit D containing exactly one point in each orbit of G on the grid.

#### 1.3.4.2.2.9. Convolution theorems with crystallographic symmetry

| top | pdf |

The standard convolution theorems derived in the absence of symmetry are readily seen to follow from simple properties of the exponential functions (denoted simply e in formulae which are valid for both signs). These properties imply that the families of such functions generate an algebra of functions, i.e. a vector space endowed with an internal multiplication, since properties (i) and (ii) show how to 'linearize products'.

Friedel's law (when applicable) on the one hand, and the Fourier relation between intensities and the Patterson function on the other hand, both follow from the property

When crystallographic symmetry is present, the convolution theorems remain valid in their original form if written out in terms of 'expanded' data, but acquire a different form when rewritten in terms of symmetry-unique data only. This rewriting is made possible by the extra relation of Section 1.3.4.2.2.5.

The kernels of symmetrized Fourier transforms are not the functions e but rather the symmetrized sums over the group, for which linearization formulae are readily obtained using (i), (ii) and (iv); the choice of sign in ± must be the same throughout each formula.

Formulae defining the 'structure-factor algebra' associated to G were derived by Bertaut (1955c, 1956b,c, 1959a,b) and Bertaut & Waser (1957) in another context.

The forward convolution theorem (in discrete form) then follows.

The backward convolution theorem is derived similarly. Both formulae are simply orbit decompositions of their symmetry-free counterparts.

#### 1.3.4.2.2.10. Correlation and Patterson functions

| top | pdf |

Consider two model electron densities with the same period lattice and the same space group G. Write their motifs in terms of atomic electron densities (Section 1.3.4.2.2.4), the indices labelling the symmetry-unique atoms placed at their respective positions.

To calculate the correlation between the two densities we need some preliminary formulae, which are easily established for an arbitrary function f on real space.

The cross correlation between motifs therefore contains a peak of the corresponding shape at each interatomic vector, one for every pair of symmetry-unique atoms and every pair of symmetry operations.

The cross-correlation between the original electron densities is then obtained by further periodizing by the period lattice.
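For the autocorrelation case (the Patterson function) this structure of peaks at interatomic vectors is easy to exhibit in one dimension: inverse-transforming the squared moduli of the structure factors of two hypothetical point atoms gives peaks at the origin and at ±u.

```python
import numpy as np

N = 64
rho = np.zeros(N)
rho[[10, 25]] = 1.0                    # two unit point atoms on an N-grid

F = np.fft.fft(rho)
P = np.fft.ifft(np.abs(F) ** 2).real   # Patterson = cyclic autocorrelation

assert np.isclose(P[0], 2.0)           # origin peak: one per atom
assert np.isclose(P[15], 1.0)          # interatomic vector u = 25 - 10
assert np.isclose(P[N - 15], 1.0)      # and its centrosymmetric mate -u
```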

Note that these expressions are valid for any choice of atomic' density functions and , which may be taken as molecular fragments if desired (see Section 1.3.4.4.8).

If G contains elements g whose rotational part has an eigenspace with eigenvalue 1 and an invariant complementary subspace, while the translational part has a nonzero component in the eigenspace, then the Patterson function will contain Harker peaks (Harker, 1936) [whose shapes involve the action of g in the complementary subspace] in the translate of that subspace by the eigenspace component of the translation.

| top | pdf |

#### 1.3.4.3.1. Historical introduction

| top | pdf |

In 1929, W. L. Bragg demonstrated the practical usefulness of the Fourier transform relation between electron density and structure factors by determining the structure of diopside from three principal projections calculated numerically by 2D Fourier summation (Bragg, 1929). It was immediately realized that the systematic use of this powerful method, and of its extension to three dimensions, would entail considerable amounts of numerical computation which had to be organized efficiently. As no other branch of applied science had yet needed this type of computation, crystallographers had to invent their own techniques.

The first step was taken by Beevers & Lipson (1934) who pointed out that a 2D summation could be factored into successive 1D summations. This is essentially the tensor product property of the Fourier transform (Sections 1.3.2.4.2.4, 1.3.3.3.1), although its aspect is rendered somewhat complicated by the use of sines and cosines instead of complex exponentials. Computation is economized to the extent that the cost of an N × N transform grows with N as 2N^3 rather than N^4. Generalization to 3D is immediate, reducing computation size from N^6 to 3N^4 for an N × N × N transform. The complication introduced by using expressions in terms of sines and cosines is turned to advantage when symmetry is present, as certain families of terms are systematically absent or are simply related to each other; multiplicity corrections must, however, be introduced. The necessary information was tabulated for each space group by Lonsdale (1936), and was later incorporated into Volume I of International Tables.
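In complex-exponential form this factorization is the row–column evaluation of a 2D transform; a sketch using NumPy's 1D FFT as the inner summation engine:

```python
import numpy as np

# 2D Fourier sum factored into 1D sums: transform columns, then rows.
coef = np.random.default_rng(0).normal(size=(8, 8)) + 0j
step1 = np.fft.fft(coef, axis=0)       # 8 one-dimensional sums over h
rho = np.fft.fft(step1, axis=1)        # then 8 one-dimensional sums over k

assert np.allclose(rho, np.fft.fft2(coef))   # agrees with the direct 2D DFT
```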

The second step was taken by Beevers & Lipson (1936) and Lipson & Beevers (1936) in the form of the invention of the 'Beevers–Lipson strips', a practical device which was to assist a whole generation of crystallographers in the numerical computation of crystallographic Fourier sums. The strips comprise a set of 'cosine strips' tabulating the functions A cos(2πhm/60) and a set of 'sine strips' tabulating the functions A sin(2πhm/60) at 16 of the arguments m. Function values are rounded to the nearest integer, and those for other arguments m may be obtained by using the symmetry properties of the sine and cosine functions. A Fourier summation is then performed by selecting the n cosine strips and the n sine strips corresponding to the amplitudes and indices present, placing them in register, and adding the tabulated values columnwise. The number 60 was chosen as the l.c.m. of 12 (itself the l.c.m. of the orders of all possible nonprimitive translations) and of 10 (for decimal convenience). The limited accuracy imposed by the two-digit tabulation was later improved by Robertson's sorting board (Robertson, 1936a,b) or by the use of separate strips for each decimal digit of the amplitude (Booth, 1948b), which allowed three-digit tabulation while keeping the set of strips within manageable size. Cochran (1948a) found that, for most structures under study at the time, the numerical inaccuracies of the method were less than the level of error in the experimental data. The sampling rate was subsequently increased from 60 to 120 (Beevers, 1952) to cope with larger unit cells.
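The strip procedure is easy to simulate (hypothetical amplitudes A and indices h; values rounded to the nearest integer, as on the physical strips, and added columnwise):

```python
import numpy as np

def cos_strip(A, h):
    """A 'cosine strip': round(A cos(2*pi*h*m/60)) at arguments m = 0..15."""
    m = np.arange(16)
    return np.rint(A * np.cos(2 * np.pi * h * m / 60)).astype(int)

terms = [(50, 1), (30, 2), (-20, 5)]                 # (A_h, h) pairs
summed = sum(cos_strip(A, h) for A, h in terms)      # columnwise addition
exact = sum(A * np.cos(2 * np.pi * h * np.arange(16) / 60) for A, h in terms)

# Rounding each strip costs at most 0.5 per term.
assert np.max(np.abs(summed - exact)) <= 0.5 * len(terms)
```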

Further gains in speed and accuracy were sought through the construction of special-purpose mechanical, electro-mechanical, electronic or optical devices. Two striking examples are the mechanical computer RUFUS built by Robertson (1954, 1955, 1961) on the principle of previous strip methods (see also Robertson, 1932) and the electronic analogue computer X-RAC built by Pepinsky, capable of real-time calculation and display of 2D and 3D Fourier syntheses (Pepinsky, 1947; Pepinsky & Sayre, 1948; Pepinsky et al., 1961; see also Suryan, 1957). The optical methods of Lipson & Taylor (1951, 1958) also deserve mention. Many other ingenious devices were invented, whose descriptions may be found in Booth (1948b), Niggli (1961) and Lipson & Cochran (1968).

Later, commercial punched-card machines were programmed to carry out Fourier summations or structure-factor calculations (Shaffer et al., 1946a,b; Cox et al., 1947, 1949; Cox & Jeffrey, 1949; Donohue & Schomaker, 1949; Grems & Kasper, 1949; Hodgson et al., 1949; Greenhalgh & Jeffrey, 1950; Kitz & Marchington, 1953).

The modern era of digital electronic computation of Fourier series was initiated by the work of Bennett & Kendrew (1952), Mayer & Trueblood (1953), Ahmed & Cruickshank (1953b), Sparks et al. (1956) and Fowweather (1955). Their Fourier-synthesis programs used Beevers–Lipson factorization, the program by Sparks et al. being the first 3D Fourier program useable for all space groups (although these were treated as P1 or by data expansion). Ahmed & Barnes (1958) then proposed a general programming technique to allow full use of symmetry elements (orthorhombic or lower) in the 3D Beevers–Lipson factorization process, including multiplicity corrections. Their method was later adopted by Shoemaker & Sly (1961), and by crystallographic program writers at large.

The discovery of the FFT algorithm by Cooley & Tukey in 1965, which instantly transformed electrical engineering and several other disciplines, paradoxically failed to have an immediate impact on crystallographic computing. A plausible explanation is that the calculation of large 3D Fourier maps was a relatively infrequent task which was not thought to constitute a bottleneck, as crystallographers had learned to settle most structural questions by means of cheaper 2D sections or projections. It is significant in this respect that the first use of the FFT in crystallography by Barrett & Zwick (1971) should have occurred as part of an iterative scheme for improving protein phases by density modification in real space, which required a much greater number of Fourier transformations than any previous method. Independently, Bondot (1971) had attracted attention to the merits of the FFT algorithm.

The FFT program used by Barrett & Zwick had been written for signal-processing applications. It was restricted to sampling rates of the form , and was not designed to take advantage of crystallographic symmetry at any stage of the calculation; Bantz & Zwick (1974) later improved this situation somewhat.

It was the work of Ten Eyck (1973) and Immirzi (1973, 1976) which led to the general adoption of the FFT in crystallographic computing. Immirzi treated all space groups as P1 by data expansion. Ten Eyck based his program on a versatile multi-radix FFT routine (Gentleman & Sande, 1966) coupled with a flexible indexing scheme for dealing efficiently with multidimensional transforms. He also addressed the problems of incorporating symmetry elements of order 2 into the factorization of 1D transforms, and of transposing intermediate results by other symmetry elements. He was thus able to show that in a large number of space groups (including the 74 space groups having orthorhombic or lower symmetry) it is possible to calculate only the unique results from the unique data within the logic of the FFT algorithm. Ten Eyck wrote and circulated a package of programs for computing Fourier maps and re-analysing them into structure factors in some simple space groups (P1, P1, P2, P2/m, P21, P222, P212121, Pmmm). This package was later augmented by a handful of new space-group-specific programs contributed by other crystallographers (P21212, I222, P3121, P41212). The writing of such programs is an undertaking of substantial complexity, which has deterred all but the bravest: the usual practice is now to expand data for a high-symmetry space group to the largest subgroup for which a specific FFT program exists in the package, rather than attempt to write a new program. Attempts have been made to introduce more modern approaches to the calculation of crystallographic Fourier transforms (Auslander, Feig & Winograd, 1982; Auslander & Shenefelt, 1987; Auslander et al., 1988) but have not gone beyond the stage of preliminary studies.

The task of fully exploiting the FFT algorithm in crystallographic computations is therefore still unfinished, and it is the purpose of this section to provide a systematic treatment such as that (say) of Ahmed & Barnes (1958) for the Beevers–Lipson algorithm.

Ten Eyck's approach, based on the reducibility of certain space groups, is extended by the derivation of a universal transposition formula for intermediate results. It is then shown that space groups which are not completely reducible may nevertheless be treated by three-dimensional Cooley–Tukey factorization in such a way that their symmetry may be fully exploited, whatever the shape of their asymmetric unit. Finally, new factorization methods with built-in symmetries are presented. The unifying concept throughout this presentation is that of 'group action' on indexing sets, and of 'orbit exchange' when this action has a composite structure; it affords new ways of rationalizing the use of symmetry, or of improving computational speed, or both.

#### 1.3.4.3.2. Defining relations and symmetry considerations

| top | pdf |

A finite set of reflections can be periodized without aliasing by the translations of a suitable sublattice of the reciprocal lattice; the converse operation in real space is the sampling of ρ at points X of a grid defined by the corresponding decimation matrix N (Section 1.3.2.7.3). In standard coordinates, the spectrum is periodized by the sublattice associated with the transpose of N, and ρ is sampled at the grid points obtained by applying the inverse of N to integer vectors.
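The duality can be seen numerically in one dimension (hypothetical sparse spectrum on a 32-point fine grid): sampling ρ on a coarse 8-point grid periodizes F, aliasing each coefficient with those of the coset h + 8ℤ.

```python
import numpy as np

F = np.zeros(32, complex)
F[[1, 3, 30]] = [1 + 2j, -1j, 2.0]     # hypothetical sparse spectrum

rho_fine = 32 * np.fft.ifft(F)         # rho on a 32-point grid
rho_coarse = rho_fine[::4]             # sample rho at only 8 points

aliased = np.fft.fft(rho_coarse) / 8   # DFT of the sampled density
# Each coefficient is the sum of F over the coset h + 8Z.
assert np.allclose(aliased, F.reshape(4, 8).sum(axis=0))
```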

In the absence of symmetry, the unique data are

 – the indexed by in reciprocal space; – the indexed by ; or equivalently the indexed by , where .

They are connected by the ordinary DFT relations in both directions.

In the presence of symmetry, the unique data are

or in real space (by abuse of notation, D will denote an asymmetric unit for x or for m indifferently);

in reciprocal space.

The previous summations may then be subjected to orbit decomposition, to yield the 'crystallographic DFT' (CDFT) defining relations, with the obvious alternatives in terms of the sampled density. Our problem is to evaluate the CDFT for a given space group as efficiently as possible, in spite of the fact that the group action has spoilt the simple tensor-product structure of the ordinary three-dimensional DFT (Section 1.3.3.3.1).

Two procedures are available to carry out the 3D summations involved as a succession of smaller summations:

 (1) decomposition into successive transforms of fewer dimensions but on the same number of points along these dimensions. This possibility depends on the reducibility of the space group, as defined in Section 1.3.4.2.2.4, and simply invokes the tensor product property of the DFT; (2) factorization of the transform into transforms of the same number of dimensions as the original one, but on fewer points along each dimension. This possibility depends on the arithmetic factorability of the decimation matrix N, as described in Section 1.3.3.3.2.

Clearly, a symmetry expansion to the largest fully reducible subgroup of the space group will give maximal decomposability, but will require computing more than the unique results from more than the unique data. Economy will follow from factoring the transforms in the subspaces within which the space group acts irreducibly.

For irreducible subspaces of dimension 1, the group action is readily incorporated into the factorization of the transform, as first shown by Ten Eyck (1973).

For irreducible subspaces of dimension 2 or 3, the ease of incorporation of symmetry into the factorization depends on the type of factorization method used. The multidimensional Cooley–Tukey method (Section 1.3.3.3.1) is rather complicated; the multidimensional Good method (Section 1.3.3.3.2.2) is somewhat simpler; and the Rader/Winograd factorization admits a generalization, based on the arithmetic of certain rings of algebraic integers, which accommodates 2D crystallographic symmetries in a most powerful and pleasing fashion.

At each stage of the calculation, it is necessary to keep track of the definition of the asymmetric unit and of the symmetry properties of the numbers being manipulated. This requirement applies not only to the initial data and to the final results, where these are familiar; but also to all the intermediate quantities produced by partial transforms (on subsets of factors, or subsets of dimensions, or both), where they are less familiar. Here, the general formalism of transposition (or 'orbit exchange') described in Section 1.3.4.2.2.2 plays a central role.

#### 1.3.4.3.3. Interaction between symmetry and decomposition

| top | pdf |

Suppose that the space-group action is reducible; by Schur's lemma, the decimation matrix must then be block-diagonal with respect to the invariant subspaces if it is to commute with all the rotation matrices.

Putting the corresponding blocks of N in evidence, we may define grids and reciprocal sublattices within each invariant subspace, and write the full grid as a direct sum of these components.

We may also define the representation operators acting on functions of the two sets of real-space coordinates (as in Section 1.3.4.2.2.4), and the corresponding operators in reciprocal space (as in Section 1.3.4.2.2.5). The full operators then factor accordingly, in the sense that g acts separately on each summand.

Thus equipped we may now derive concisely a general identity describing the symmetry properties of intermediate quantities which arise through partial transformation of F on one summand or of ρ on the other. The action of g on these quantities will be

(i) through the operators acting on the untransformed set of arguments, (ii) through the operators acting on the transformed set of arguments,

and hence the symmetry properties of T are expressed by an identity relating its values at symmetry-related arguments. Applying this relation not to T but to the partially transformed data gives the corresponding identity for the intermediate results.

If the unique data were initially indexed as in Section 1.3.4.2.2.2, this formula allows the reindexing of the intermediate results from their initial form to a final form on which the second transform may now be performed, giving the final results indexed over an asymmetric unit. An analogous interpretation holds if one is going from ρ to F.

The above formula solves the general problem of transposing from one invariant subspace to another, and is the main device for decomposing the CDFT. Particular instances of this formula were derived and used by Ten Eyck (1973); it is useful for orthorhombic groups, and for dihedral groups containing screw axes satisfying a g.c.d. condition. For comparison with later uses of orbit exchange, it should be noted that the type of intermediate results just dealt with is obtained after transforming on all factors in one summand.

A central piece of information for driving such a decomposition is the definition of the full asymmetric unit in terms of the asymmetric units in the invariant subspaces. As indicated at the end of Section 1.3.4.2.2.2, this is straightforward when G acts without fixed points, but becomes more involved if fixed points do exist. To this day, no systematic 'calculus of asymmetric units' exists which can automatically generate a complete description of the asymmetric unit of an arbitrary space group in a form suitable for directing the orbit exchange process, although Shenefelt (1988) has outlined a procedure for dealing with space group P622 and its subgroups. The asymmetric unit definitions given in Volume A of International Tables are incomplete in this respect, in that they do not specify the possible residual symmetries which may exist on the boundaries of the domains.

#### 1.3.4.3.4. Interaction between symmetry and factorization

| top | pdf |

Methods for factoring the DFT in the absence of symmetry were examined in Sections 1.3.3.2 and 1.3.3.3. They are based on the observation that the finite sets which index both data and results are endowed with certain algebraic structures (e.g. are Abelian groups, or rings), and that subsets of indices may be found which are not merely subsets but substructures (e.g. subgroups or subrings). Summation over these substructures leads to partial transforms, and the way in which substructures fit into the global structure indicates how to reassemble the partial results into the final results. As a rule, the richer the algebraic structure which is identified in the indexing set, the more powerful the factoring method.

The ability of a given factoring method to accommodate crystallographic symmetry will thus be determined by the extent to which the crystallographic group action respects (or fails to respect) the partitioning of the index set into the substructures pertaining to that method. This remark justifies trying to gain an overall view of the algebraic structures involved, and of the possibilities of a crystallographic group acting 'naturally' on them.

The index sets are finite Abelian groups under component-wise addition. If an iterated addition is viewed as an action of an integer scalar, then an Abelian group becomes a module over the ring ℤ (or, for short, a ℤ-module), a module being analogous to a vector space but with scalars drawn from a ring rather than a field. The left actions of a crystallographic group G on these sets can be combined with this scalar action, providing a left action, on the indexing sets, of the set ℤG of symbolic linear combinations of elements of G with integral coefficients. If addition and multiplication are defined in ℤG in the natural way, multiplication being inherited from the group law, then ℤG is a ring, and the action defined above makes the indexing sets into ℤG-modules. The ring ℤG is called the integral group ring of G (Curtis & Reiner, 1962, p. 44).

From the algebraic standpoint, therefore, the interaction between symmetry and factorization can be expected to be favourable whenever the indexing sets of partial transforms are ℤG-submodules of the main ℤG-modules.

#### 1.3.4.3.4.1. Multidimensional Cooley–Tukey factorization


Suppose, as in Section 1.3.3.3.2.1, that the decimation matrix N may be factored as $\mathbf{N} = \mathbf{N}_{1} \mathbf{N}_{2}$. Then any grid point index $\mathbf{m}$ in real space may be written
$$\mathbf{m} = \mathbf{m}_{1} + \mathbf{N}_{1} \mathbf{m}_{2}$$
with $\mathbf{m}_{1}$ and $\mathbf{m}_{2}$ determined by
$$\mathbf{m}_{1} = \mathbf{m} \bmod \mathbf{N}_{1}, \qquad \mathbf{m}_{2} = \mathbf{N}_{1}^{-1}(\mathbf{m} - \mathbf{m}_{1}) \bmod \mathbf{N}_{2}.$$
These relations establish a one-to-one correspondence between the index set and the Cartesian product of the index sets associated with $\mathbf{N}_{1}$ and $\mathbf{N}_{2}$, and hence an isomorphism as a set. However this is not an isomorphism of Abelian groups, since in general $(\mathbf{m} + \mathbf{m}')_{2} \neq \mathbf{m}_{2} + \mathbf{m}'_{2}$ because there can be a 'carry' from the addition of the first components into the second components; therefore it is not an isomorphism of $\mathbb{Z}[G]$-modules either, which shows that the incorporation of symmetry into the Cooley–Tukey algorithm is not a trivial matter.
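The 'carry' obstruction can be made concrete in a small numerical sketch (the diagonal factors and sizes below are our own illustrative choices, not taken from the text): the splitting m = m1 + N1 m2 is a bijection of index sets, but componentwise addition of the pieces fails.

```python
import numpy as np

# Sketch of the index splitting m = m1 + N1*m2 for a factored decimation
# matrix N = N1*N2 (diagonal matrices chosen arbitrarily for illustration).
N1d = np.array([2, 2, 2])          # diagonal of N1
N2d = np.array([2, 2, 2])          # diagonal of N2
Nd = N1d * N2d                     # diagonal of N = N1*N2

def split(m):
    """Write m = m1 + N1*m2 with m1 taken mod N1 and m2 mod N2."""
    m = np.asarray(m) % Nd
    m1 = m % N1d
    m2 = ((m - m1) // N1d) % N2d
    return m1, m2

# The map m -> (m1, m2) is a bijection of index sets ...
seen = {tuple(np.concatenate(split([i, j, k])))
        for i in range(4) for j in range(4) for k in range(4)}
assert len(seen) == 4 ** 3

# ... but not an isomorphism of Abelian groups: adding indices can produce
# a 'carry' from the first component into the second.
a = np.array([1, 0, 0])
a1, a2 = split(a)
s1, s2 = split(a + a)
assert np.array_equal(s1, (a1 + a1) % N1d)      # first components agree here
assert not np.array_equal(s2, (a2 + a2) % N2d)  # but a carry appears in m2
```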

Let act on I throughand suppose that N 'integerizes' all the nonprimitive translations so that we may writewith and determined as above. Suppose further that N, and commute with for all , i.e. (by Schur's lemma, Section 1.3.4.2.2.4) that these matrices are integer multiples of the identity in each G-invariant subspace. The action of g on leads towhich we may decompose aswithand

Introducing the notationthe two components of may be writtenwith

The term is the geometric equivalent of a carry or borrow: it arises because , calculated as a vector in , may be outside the unit cell , and may need to be brought back into it by a 'large' translation with a nonzero component in the space; equivalently, the action of g may need to be applied around different permissible origins for different values of , so as to map the unit cell into itself without any recourse to lattice translations. [Readers familiar with the cohomology of groups (see e.g. Hall, 1959; MacLane, 1963) will recognize as the cocycle of the extension of G-modules described by the exact sequence .]

Thus G acts on I in a rather complicated fashion: although does define a left action in alone, no action can be defined in alone because depends on . However, because , and are left actions, it follows that satisfies the identityfor all g, g′ in G and all in . In particular, for all , and

This action will now be used to achieve optimal use of symmetry in the multidimensional Cooley–Tukey algorithm of Section 1.3.3.3.2.1. Let us form an array Y according tofor all but only for the unique under the action of G in . Except in special cases which will be examined later, these vectors contain essentially an asymmetric unit of electron-density data, up to some redundancies on boundaries. We may then compute the partial transform on :Using the symmetry of in the form yields by the procedure of Section 1.3.3.3.2 the transposition formula

By means of this identity we can transpose intermediate results initially indexed byso as to have them indexed byWe may then apply twiddle factors to getand carry out the second transformThe final results are indexed bywhich yield essentially an asymmetric unit of structure factors after unscrambling by:

The transposition formula above applies to intermediate results when going backwards from F to , provided these results are considered after the twiddle-factor stage. A transposition formula applicable before that stage can be obtained by characterizing the action of G on h (including the effects of periodization by ) in a manner similar to that used for m.

LetwithWe may then writewithHere and are defined byand

Let us then form an array according tofor all but only for the unique under the action of G in , and transform on to obtainPutting and using the symmetry of F in the formwhereyields by a straightforward rearrangement

This formula allows the transposition of intermediate results Z from an indexing byto an indexing byWe may then apply the twiddle factors to obtainand carry out the second transform on The results, indexed byyield essentially an asymmetric unit of electron densities by the rearrangement

The equivalence of the two transposition formulae up to the intervening twiddle factors is readily established, using the relationwhich is itself a straightforward consequence of the identity

To complete the characterization of the effect of symmetry on the Cooley–Tukey factorization, and of the economy of computation it allows, it remains to consider the possibility that some values of may be invariant under some transformations under the action .

Suppose that has a nontrivial isotropy subgroup , and let . Then each subarray defined bysatisfies the identityso that the data for the transform on have residual symmetry properties. In this case the identity satisfied by simplifies towhich shows that the mapping satisfies the Frobenius congruences (Section 1.3.4.2.2.3). Thus the internal symmetry of subarray with respect to the action of G on is given by acting on via

The transform on need only be performed for one out of distinct arrays (results for the others being obtainable by the transposition formula), and this transform is -symmetric. In other words, the following cases occur:

The symmetry properties of the -transform may themselves be exploited in a similar way if can be factored as a product of smaller decimation matrices; otherwise, an appropriate symmetrized DFT routine may be provided, using for instance the idea of 'multiplexing/demultiplexing' (Section 1.3.4.3.5). We thus have a recursive descent procedure, in which the deeper stages of the recursion deal with transforms on fewer points, or of lower symmetry (usually both).

The same analysis applies to the -transforms on the subarrays , and leads to a similar descent procedure.

In conclusion, crystallographic symmetry can be fully exploited to reduce the amount of computation to the minimum required to obtain the unique results from the unique data. Until now, no such analysis was available in cases where the asymmetric units in real and reciprocal space are not parallelepipeds. An example of this procedure will be given in Section 1.3.4.3.6.5.

#### 1.3.4.3.4.2. Multidimensional Good factorization


This procedure was described in Section 1.3.3.3.2.2. The main difference from the Cooley–Tukey factorization is that if , where the different factors are pairwise coprime, then the Chinese remainder theorem reindexing makes isomorphic to a direct sum, where each p-primary piece is endowed with an induced -module structure by letting G operate in the usual way but with the corresponding modular arithmetic. The situation is thus more favourable than with the Cooley–Tukey method, since there is no interference between the factors (no 'carry'). In the terminology of Section 1.3.4.2.2.2, G acts diagonally on this direct sum, and results of a partial transform may be transposed by orbit exchange as in Section 1.3.4.3.4.1 but without the extra terms μ or η. The analysis of the symmetry properties of partial transforms also carries over, again without the extra terms. Further simplification occurs for all p-primary pieces with p other than 2 or 3, since all nonprimitive translations (including those associated with lattice centring) disappear modulo p.
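The absence of twiddle factors under the CRT reindexing can be checked numerically. The sketch below (sizes chosen arbitrarily for illustration) maps a 15-point DFT onto a 3 × 5 two-dimensional DFT using the 'Ruritanian' input map and the CRT output map, with no twiddle-factor stage in between:

```python
import numpy as np

# Good (prime-factor) reindexing for N = N1*N2 with N1, N2 coprime.
N1, N2 = 3, 5
N = N1 * N2
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Input map ('Ruritanian'): n = (N2*n1 + N1*n2) mod N
x2 = np.array([[x[(N2 * n1 + N1 * n2) % N] for n2 in range(N2)]
               for n1 in range(N1)])

F2 = np.fft.fft2(x2)           # independent transforms on the two factors

# Output map by the Chinese remainder theorem: k = k1 mod N1, k = k2 mod N2
c1 = N2 * pow(N2, -1, N1)      # CRT idempotents
c2 = N1 * pow(N1, -1, N2)
F = np.empty(N, dtype=complex)
for k1 in range(N1):
    for k2 in range(N2):
        F[(k1 * c1 + k2 * c2) % N] = F2[k1, k2]

assert np.allclose(F, np.fft.fft(x))   # agrees with the direct 15-point DFT
```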

Thus the cost of the CRT reindexing is compensated by the computational savings due to the absence of twiddle factors and of other phase shifts associated with nonprimitive translations and with geometric 'carries'.

Within each p-primary piece, however, higher powers of p may need to be split up by a Cooley–Tukey factorization, or carried out directly by a suitably adapted Winograd algorithm.

#### 1.3.4.3.4.3. Crystallographic extension of the Rader/Winograd factorization

As was the case in the absence of symmetry, the two previous classes of algorithms can only factor the global transform into partial transforms on prime numbers of points, but cannot break the latter down any further. Rader's idea of using the action of the group of units to obtain further factorization of a p-primary transform has been used in 'scalar' form by Auslander & Shenefelt (1987), Shenefelt (1988) and Auslander et al. (1988). It will be shown here that it can be adapted to the crystallographic case so as to take advantage also of the possible existence of n-fold cyclic symmetry elements in a two-dimensional transform (Bricogne & Tolimieri, 1990). This adaptation entails the use of certain rings of algebraic integers rather than ordinary integers, whose connection with the handling of cyclic symmetry will now be examined.

Let G be the group associated with a threefold axis of symmetry: with . In a standard trigonal basis, G has matrix representationin real space,in reciprocal space. Note thatand thatso that and are conjugate in the group of unimodular integer matrices. The group ring is commutative, and has the structure of the polynomial ring with the single relation corresponding to the minimal polynomial of . In the terminology of Section 1.3.3.2.4, the ring structure of is obtained from that of by carrying out polynomial addition and multiplication modulo , then replacing X by any generator of G. This type of construction forms the very basis of algebraic number theory [see Artin (1944, Section IIc) for an illustration of this viewpoint], and as just defined is isomorphic to the ring of algebraic integers of the form under the identification . Addition in this ring is defined component-wise, while multiplication is defined by

In the case of a fourfold axis, with , and is obtained from by carrying out polynomial arithmetic modulo . This identifies with the ring of Gaussian integers of the form , in which addition takes place component-wise while multiplication is defined by

In the case of a sixfold axis, with , and is isomorphic to under the mapping since .

Thus $\mathbb{Z}[G] \cong \mathbb{Z}[X]/(P(X))$ in all cases, where $P(X)$ is an irreducible quadratic polynomial with integer coefficients.
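Multiplication in such quadratic rings can be sketched with one routine: writing the minimal polynomial as X² = tX + u (a parameterization of our own, for illustration) covers the threefold case (X² + X + 1), the fourfold case (X² + 1, the Gaussian integers) and the sixfold case (X² − X + 1) uniformly.

```python
# Multiplication in Z[X]/(P(X)) for an irreducible quadratic
# P(X) = X^2 - t*X - u, so that X^2 is replaced by t*X + u:
#   threefold: P = X^2 + X + 1  -> (t, u) = (-1, -1)
#   fourfold:  P = X^2 + 1      -> (t, u) = (0, -1)   (Gaussian integers)
#   sixfold:   P = X^2 - X + 1  -> (t, u) = (1, -1)

def mul(z, w, t, u):
    """(a + b*X)(c + d*X) reduced modulo X^2 = t*X + u."""
    a, b = z
    c, d = w
    # expand: a*c + (a*d + b*c)*X + b*d*X^2, then substitute X^2 = t*X + u
    return (a * c + u * b * d, a * d + b * c + t * b * d)

# Gaussian integers: (1 + 2i)(3 + 4i) = -5 + 10i
assert mul((1, 2), (3, 4), 0, -1) == (-5, 10)

# Threefold case, omega^2 = -omega - 1: (1 + omega)^2 = omega
assert mul((1, 1), (1, 1), -1, -1) == (0, 1)
```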

The actions of G on lattices in real and reciprocal space (Sections 1.3.4.2.2.4, 1.3.4.2.2.5) extend naturally to actions of on in which an element of acts viain real space, and viain reciprocal space. These two actions are related by conjugation, sinceand the following identity (which is fundamental in the sequel) holds:

Let us now consider the calculation of a two-dimensional DFT with n-fold cyclic symmetry for an odd prime . Denote by . Both the data and the results of the DFT are indexed by : hence the action of on these indices is in fact an action of , the latter being obtained from by carrying out all integer arithmetic in modulo p. The algebraic structure of combines the symmetry-carrying ring structure of with the finite field structure of used in Section 1.3.3.2.3.1, and holds the key to a symmetry-adapted factorization of the DFT at hand.

The structure of depends on whether remains irreducible when considered as a polynomial over . Thus two cases arise:

(1) remains irreducible mod p, i.e. there is no nth root of unity in ;

(2) factors as , i.e. there are nth roots of unity in .

These two cases require different developments.

Case 1. is a finite field with elements. There is essentially (i.e. up to isomorphism) only one such field, denoted , and its group of units is a cyclic group with elements. If γ is a generator of this group of units, the input data with may be reordered asby the real-space action of γ; while the results with may be reordered asby the reciprocal-space action of γ, where and are arbitrary nonzero indices. The core of the DFT matrix, defined bywill then have a skew-circulant structure (Section 1.3.3.2.3.1) sincedepends only on . Multiplication by may then be turned into a cyclic convolution of length , which may be factored by two DFTs (Section 1.3.3.2.3.1) or by Winograd's techniques (Section 1.3.3.2.4). The latter factorization is always favourable, as it is easily shown that is divisible by 24 for any odd prime . This procedure is applicable even if no symmetry is present in the data.

Assume now that cyclic symmetry of order 3, 4 or 6 is present. Since n divides 24, and hence divides , the generator g of this symmetry is representable as for a suitable generator γ of the group of units. The reordered data will then be -periodic rather than simply -periodic; hence the reindexed results will be n-decimated (Section 1.3.2.7.2), and the nonzero results can be calculated by applying the DFT to the unique input data. In this way, the n-fold symmetry can be used in full to calculate the core contributions from the unique data to the unique results by a DFT of length .

It is a simple matter to incorporate nonprimitive translations into this scheme. For example, when going from structure factors to electron densities, reordered data items separated by are not equal but differ by a phase shift proportional to their index mod p, whose effect is simply to shift the origin of the n-decimated transformed sequence. The same economy of computation can therefore be achieved as in the purely cyclic case.
Dihedral symmetry elements, which map g to (Section 1.3.4.2.2.3), induce extra one-dimensional symmetries of order 2 in the reordered data which can also be fully exploited to reduce computation.

Case 2. If , it can be shown that the two roots u and v are always distinct. Then, by the Chinese remainder theorem (CRT) for polynomials (Section 1.3.3.2.4) we have a ring isomorphismdefined by sending a polynomial from the left-hand-side ring to its two residue classes modulo and , respectively. Since the latter are simply the constants and , the CRT reindexing has the particularly simple formor equivalentlyThe CRT reconstruction formula similarly simplifies toThe use of the CRT therefore amounts to the simultaneous diagonalization (by M) of all the matrices representing the elements of in the basis (1, X).

A first consequence of this diagonalization is that the internal structure of becomes clearly visible. Indeed, is mapped isomorphically to a direct product of two copies of , in which arithmetic is carried out component-wise between eigenvalues α and β. Thus ifthenTaking in particularwe have , so that contains zero divisors; therefore is not a field. On the other hand, if with and , then α and β belong to the group of units (Section 1.3.3.2.3.1) and hence have inverses and ; it follows that z is a unit in , with inverse . Therefore, consists of four distinct pieces:

A second consequence of this diagonalization is that the actions of on indices m and h can themselves be brought to diagonal form by basis changes:Thus the sets of indices μ and η can be split into four pieces as itself, according as these indices have none, one or two of their coordinates in . These pieces will be labelled by the same symbols – 0, , and U – as those of . The scalar product may be written in terms of η and μ asand an elementary calculation shows that the matrix is diagonal by virtue of the relationTherefore, if and or vice versa. We are now in a position to rearrange the DFT matrix .
Clearly, the structure of is more complex than in case 1, as there are three types of 'core' matrices:(Submatrices of type and have all their elements equal to 1 by the previous remark.) Let γ be a generator of . We may reorder the elements in , and U – and hence the data and results indexed by these elements – according to powers of γ. This requires one exponent in each of and , and two exponents in U. For instance, in the h-index space:and similarly for the μ index. Since the diagonal matrix Δ commutes with all the matrices representing the action of γ, this rearrangement will induce skew-circulant structures in all the core matrices. The corresponding cyclic convolutions may be carried out by Rader's method, i.e. by diagonalizing them by means of two ()-point one-dimensional DFTs in the pieces and of two -point two-dimensional DFTs in the piece (the and pieces involve extra section and projection operations).

In the absence of symmetry, no computational saving is achieved, since the same reordering could have been applied to the initial indexing, without the CRT reindexing.

In the presence of n-fold cyclic symmetry, however, the rearranged lends itself to an n-fold reduction in size. The basic fact is that whenever case 2 occurs, is divisible by n (i.e. is divisible by 6 when or 6, and by 4 when ), say . If g is a generator of the cyclic symmetry, the generator γ of may be chosen in such a way that . The action of g is then to increment the j index in and by q, and the index in U by (q, q). Since the data items whose indices are related in this way have identical values, the DFTs used to diagonalize the Rader cyclic convolutions will operate on periodized data, hence yield decimated results; and the nonzero results will be obtained from the unique data by DFTs n times smaller than their counterparts in the absence of symmetry.
A more thorough analysis is needed to obtain a Winograd factorization into the normal form CBA in the presence of symmetry (see Bricogne & Tolimieri, 1990). Nonprimitive translations and dihedral symmetry may also be accommodated within this framework, as in case 1. This reindexing by means of algebraic integers yields larger orbits, hence more efficient algorithms, than that of Auslander et al. (1988), which only uses ordinary integers acting by scalar dilation.
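The essence of Rader's reordering can be seen in the scalar one-dimensional case (a sketch of our own; the crystallographic version above works over GF(p²) rather than GF(p)): indexing the nonzero inputs by powers of a generator g of the group of units, and the nonzero outputs by powers of g⁻¹, turns the core of the DFT into a cyclic convolution of length p − 1.

```python
import numpy as np

p = 7  # an odd prime, chosen for illustration

def generator(p):
    """Return a generator of the cyclic group of units (Z/pZ)*."""
    for g in range(2, p):
        if len({pow(g, k, p) for k in range(p - 1)}) == p - 1:
            return g

g = generator(p)
ginv = pow(g, p - 2, p)                  # inverse of g mod p
w = np.exp(-2j * np.pi / p)
x = np.arange(p, dtype=complex)          # arbitrary data

# Reorder nonzero input indices as powers of g, and build the kernel
# sequence indexed by powers of g^-1; the core becomes a cyclic convolution.
a = np.array([x[pow(g, j, p)] for j in range(p - 1)])
b = np.array([w ** pow(ginv, l, p) for l in range(p - 1)])

conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))   # cyclic convolution a * b

F = np.fft.fft(x)
for k in range(p - 1):
    # F(g^{-k}) = x(0) + (a * b)(k)
    assert np.isclose(conv[k] + x[0], F[pow(ginv, k, p)])
```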

#### 1.3.4.3.5. Treatment of conjugate and parity-related symmetry properties


Most crystallographic Fourier syntheses are real-valued and originate from Hermitian-symmetric collections of Fourier coefficients. Hermitian symmetry is closely related to the action of a centre of inversion in reciprocal space, and thus interacts strongly with all other genuinely crystallographic symmetry elements of order 2. All these symmetry properties are best treated by factoring by 2 and reducing the computation of the initial transform to that of a collection of smaller transforms with less symmetry or none at all.

#### 1.3.4.3.5.1. Hermitian-symmetric or real-valued transforms


The computation of a DFT with Hermitian-symmetric or real-valued data can be carried out at a cost of half that of an ordinary transform, essentially by 'multiplexing' pairs of special partial transforms into general complex transforms, and then 'demultiplexing' the results on the basis of their symmetry properties. The treatment given below is for general dimension n; a subset of cases for was treated by Ten Eyck (1973).
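The basic multiplexing step can be sketched numerically (sizes are an arbitrary illustrative choice): two real vectors are packed into one complex vector, transformed once, and the two Hermitian-symmetric transforms are recovered by demultiplexing.

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
idx = (-np.arange(n)) % n          # index map h -> -h mod n

x1 = rng.standard_normal(n)        # two real-valued data vectors
x2 = rng.standard_normal(n)

z = x1 + 1j * x2                   # 'multiplex' the pair
Fz = np.fft.fft(z)
Fz_rev = np.conj(Fz[idx])          # conj(F(-h))

# 'demultiplex' using the Hermitian symmetry of the component transforms:
F1 = 0.5 * (Fz + Fz_rev)
F2 = -0.5j * (Fz - Fz_rev)

assert np.allclose(F1, np.fft.fft(x1))
assert np.allclose(F2, np.fft.fft(x2))
```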

• (a) Underlying group action

Hermitian symmetry is not a geometric symmetry, but it is defined in terms of the action in reciprocal space of the point group consisting of the two elements e and −e, where e acts as I (the identity matrix) and −e acts as −I.

This group action on with will now be characterized by the calculation of the cocycle (Section 1.3.4.3.4.1) under the assumption that and are both diagonal. For this purpose it is convenient to associate to any integer vectorin the vector whose jth component is

Let , and hence . ThenhenceTherefore −e acts by

Hermitian symmetry is traditionally dealt with by factoring by 2, i.e. by assuming . If , then each is invariant under G, so that each partial vector (Section 1.3.4.3.4.1) inherits the symmetry internally, with a 'modulation' by . The 'multiplexing–demultiplexing' technique provides an efficient treatment of this singular case.

• (b) Calculation of structure factors

The computation may be summarized as follows:where is the initial decimation given by , TW is the transposition and twiddle-factor stage, and is the final unscrambling by coset reversal given by .

(i) Decimation in time

The decimated vectors are real and hence have Hermitian transforms . The values of may be grouped into pairs and the vectors corresponding to each pair may be multiplexed into a general complex vectorThe transform can then be resolved into the separate transforms and by using the Hermitian symmetry of the latter, which yields the demultiplexing formulaeThe number of partial transforms is thus reduced from to . Once this separation has been achieved, the remaining steps need only be carried out for a unique half of the values of .

(ii) Decimation in frequency

Since we have and mod . The vectors of decimated and scrambled results then obey the symmetry relationswhich can be used to halve the number of necessary to compute them, as follows. Having formed the vectors given bywe may group the values of into pairs and for each pair form the multiplexed vector:After calculating the transforms , the individual transforms and can be separated by using for each pair the demultiplexing formulaewhich can be solved recursively. If all pairs are chosen so that they differ only in the jth coordinate , the recursion is along and can be initiated by introducing the (real) values of and at and , accumulated e.g. while forming Z for that pair. Only points with going from 0 to need be resolved, and they contain the unique half of the Hermitian-symmetric transform F.

• (c) Calculation of electron densities

The computation may be summarized as follows:where is the decimation with coset reversal given by , TW is the transposition and twiddle-factor stage, and is the recovery in natural order given by .

(i) Decimation in time

The last transformation has a real-valued matrix, and the final result is real-valued. It follows that the vectors of intermediate results after the twiddle-factor stage are real-valued, hence lend themselves to multiplexing along the real and imaginary components of half as many general complex vectors. Let the initial vectors be multiplexed into vectors[one for each pair ], each of which yields by F(M) a vectorThe real-valuedness of the may be used to recover the separate result vectors for and . For this purpose, introduce the abbreviated notationThen we may writeor, equivalently, for each ,Therefore and may be retrieved from Z by the 'demultiplexing' formula:which is valid at all points where , i.e. whereDemultiplexing fails whenIf the pairs are chosen so that their members differ only in one coordinate (the jth, say), then the exceptional points are at and the missing transform values are easily obtained e.g. by accumulation while forming . The final stage of the calculation is then

(ii) Decimation in frequency

The last transformation F(M) gives the real-valued results , therefore the vectors after the twiddle-factor stage each have Hermitian symmetry. A first consequence is that the intermediate vectors need only be computed for the unique half of the values of , the other half being related by the Hermitian symmetry of . A second consequence is that the vectors may be condensed into general complex vectors[one for each pair ] to which a general complex F(M) may be applied to yieldwith and real-valued. The final results can therefore be retrieved by the particularly simple demultiplexing formulae:

#### 1.3.4.3.5.2. Hermitian-antisymmetric or pure imaginary transforms


A vector $X$ is said to be Hermitian-antisymmetric if
$$X(-\mathbf{m}) = -\overline{X(\mathbf{m})}.$$
Its transform $X^{*}$ then satisfies
$$\overline{X^{*}(\mathbf{h})} = -X^{*}(\mathbf{h}),$$
i.e. is purely imaginary.

If X is Hermitian-antisymmetric, then $iX$ is Hermitian-symmetric, with real-valued transform. The treatment of Section 1.3.4.3.5.1 may therefore be adapted, with trivial factors of i or , or used as such in conjunction with changes of variable by multiplication by .
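The purely imaginary character of the transform can be checked numerically (an illustrative sketch with an arbitrarily chosen size):

```python
import numpy as np

# A Hermitian-antisymmetric vector, X(-m) = -conj(X(m)), has a purely
# imaginary transform.
n = 8
rng = np.random.default_rng(3)
idx = (-np.arange(n)) % n                  # index map m -> -m mod n

a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = a - np.conj(a[idx])                    # enforce Hermitian antisymmetry

assert np.allclose(x, -np.conj(x[idx]))    # the defining relation holds
F = np.fft.fft(x)
assert np.allclose(F.real, 0)              # transform is purely imaginary
```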

#### 1.3.4.3.5.3. Complex symmetric and antisymmetric transforms


The matrix is its own contragredient, and hence (Section 1.3.2.4.2.2) the transform of a symmetric (respectively antisymmetric) function is symmetric (respectively antisymmetric). In this case the group acts in both real and reciprocal space as . If with both factors diagonal, then acts byi.e.

The symmetry or antisymmetry properties of X may be writtenwith for symmetry and for antisymmetry.

The computation will be summarized aswith the same indexing as that used for structure-factor calculation. In both cases it will be shown that a transform with and M diagonal can be computed using only partial transforms instead of .

(i) Decimation in time

Since we have and mod , so that the symmetry relations for each parity class of data reador equivalentlyTransforming by , this relation becomesEach parity class thus obeys a different symmetry relation, so that we may multiplex them in pairs by forming for each pair the vectorPuttingwe then have the demultiplexing relations for each :which can be solved recursively. Transform values at the exceptional points where demultiplexing fails (i.e. where ) can be accumulated while forming Y. Only the unique half of the values of need to be considered at the demultiplexing stage and at the subsequent TW and F(2I) stages.

(ii) Decimation in frequency

The vectors of final results for each parity class obey the symmetry relationswhich are different for each . The vectors of intermediate results after the twiddle-factor stage may then be multiplexed in pairs as After transforming by , the results may be demultiplexed by using the relationswhich can be solved recursively as in Section 1.3.4.3.5.1(b)(ii).

#### 1.3.4.3.5.4. Real symmetric transforms


Conjugate symmetry (Section 1.3.2.4.2.3) implies that if the data X are real and symmetric [i.e. and ], then so are the results . Thus if contains a centre of symmetry, F is real symmetric. There is no distinction (other than notation) between structure-factor and electron-density calculation; the algorithms will be described in terms of the former. It will be shown that if , a real symmetric transform can be computed with only partial transforms instead of .
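The statement that real symmetric data have a real symmetric transform can be verified directly (sizes are an arbitrary illustrative choice):

```python
import numpy as np

# X real with X(-m) = X(m) implies F real with F(-h) = F(h).
n = 8
rng = np.random.default_rng(2)
idx = (-np.arange(n)) % n

x = rng.standard_normal(n)
x = x + x[idx]                 # symmetrize: x(-m) = x(m)

F = np.fft.fft(x)
F_rev = F[idx]

assert np.allclose(F.imag, 0)  # real-valued transform
assert np.allclose(F, F_rev)   # symmetric transform
```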

(i) Decimation in time

Since we have and . The decimated vectors are not only real, but have an internal symmetry expressed byThis symmetry, however, is different for each so that we may multiplex two such vectors and into a general real vectorfor each of the pairs . The Hermitian-symmetric transform vectorscan then be evaluated by the methods of Section 1.3.4.3.5.1(b) at the cost of only general complex . The demultiplexing relations by which the separate vectors and may be recovered are most simply obtained by observing that the vectors Z after the twiddle-factor stage are real-valued since F(2I) has a real matrix. Thus, as in Section 1.3.4.3.5.1(c)(i),where and are real vectors and where the multipliers and are the inverse twiddle factors. Therefore,and hence the demultiplexing relation for each :The values of and at those points where can be evaluated directly while forming Y. This demultiplexing and the final stage of the calculation, namelyneed only be carried out for the unique half of the range of .

(ii) Decimation in frequency

Similarly, the vectors of decimated and scrambled results are real and obey internal symmetrieswhich are different for each . For each of the pairs the multiplexed vectoris a Hermitian-symmetric vector without internal symmetry, and the real vectorsmay be evaluated at the cost of only general complex by the methods of Section 1.3.4.3.5.1(c). The individual transforms and may then be retrieved via the demultiplexing relationswhich can be solved recursively as described in Section 1.3.4.3.5.1(b)(ii). This yields the unique half of the real symmetric results F.

#### 1.3.4.3.5.5. Real antisymmetric transforms


If X is real antisymmetric, then its transform is purely imaginary and antisymmetric. The double-multiplexing techniques used for real symmetric transforms may therefore be adapted with only minor changes involving signs and factors of i.

#### 1.3.4.3.5.6. Generalized multiplexing


So far the multiplexing technique has been applied to pairs of vectors with similar types of parity-related and/or conjugate symmetry properties, in particular the same value of .

It can be generalized so as to accommodate mixtures of vectors with different symmetry characteristics. For example if is Hermitian-symmetric and is Hermitian-antisymmetric, so that is real-valued while has purely imaginary values, the multiplexing process should obviously form (instead of if both had the same type of symmetry), and demultiplexing consists in separating
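This mixed case can be sketched numerically (sizes illustrative): a Hermitian-symmetric vector (real transform) and a Hermitian-antisymmetric vector (purely imaginary transform) are multiplexed by simple addition, and the two transforms are demultiplexed by separating real and imaginary parts.

```python
import numpy as np

n = 8
rng = np.random.default_rng(4)
idx = (-np.arange(n)) % n

a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x1 = a + np.conj(a[idx])       # Hermitian-symmetric: real transform
x2 = a - np.conj(a[idx])       # Hermitian-antisymmetric: imaginary transform

F = np.fft.fft(x1 + x2)        # one transform for the multiplexed pair
F1, F2 = F.real, 1j * F.imag   # the two transforms lie along linearly
                               # independent directions in the complex plane

assert np.allclose(F1, np.fft.fft(x1))
assert np.allclose(F2, np.fft.fft(x2))
```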

The general multiplexing formula for pairs of vectors may therefore be writtenwhere ω is a phase factor (e.g. 1 or i) chosen in such a way that all nonexceptional components of and (or and ) be embedded in the complex plane along linearly independent directions, thus making multiplexing possible.

It is possible to develop a more general form of multiplexing/demultiplexing for more than two vectors, which can be used to deal with symmetry elements of order 3, 4 or 6. It is based on the theory of group characters (Ledermann, 1987).

#### 1.3.4.3.6. Global crystallographic algorithms


All the necessary ingredients are now available for calculating the CDFT for any given space group.

#### 1.3.4.3.6.1. Triclinic groups


Space group P1 is dealt with by the methods of Section 1.3.4.3.5.1, and its centrosymmetric counterpart by those of Section 1.3.4.3.5.4.

#### 1.3.4.3.6.2. Monoclinic groups


A general monoclinic transformation is of the form with a diagonal matrix whose entries are +1 or −1, and a vector whose entries are 0 or ½. We may thus decompose both real and reciprocal space into a direct sum of a subspace where acts as the identity, and a subspace where acts as minus the identity, with . All usual entities may be correspondingly written as direct sums, for instance:

We will use factoring by 2, with decimation in frequency when computing structure factors, and decimation in time when computing electron densities; this corresponds to with , . The nonprimitive translation vector then belongs to , and thusThe symmetry relations obeyed by and F are as follows: for electron densitiesor, after factoring by 2,while for structure factorswith its Friedel counterpartor, after factoring by 2,with Friedel counterpart
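For the simplest monoclinic case, a dyad along b with zero nonprimitive translation (an illustrative choice; screw dyads add a phase factor), the symmetry relation between structure factors can be checked numerically:

```python
import numpy as np

# If rho(-x, y, -z) = rho(x, y, z), then F(-h, k, -l) = F(h, k, l).
n = 4
rng = np.random.default_rng(5)
ix = (-np.arange(n)) % n       # index map u -> -u mod n
iy = np.arange(n)

rho = rng.standard_normal((n, n, n))
rho = rho + rho[np.ix_(ix, iy, ix)]        # impose the dyad symmetry

F = np.fft.fftn(rho)
assert np.allclose(F, F[np.ix_(ix, iy, ix)])
```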

When calculating electron densities, two methods may be used.

(i) Transform on first.

The partial vectors defined by obey symmetry relations of the formwith independent of . This is the same relation as for the same parity class of data for a (complex or real) symmetric or antisymmetric transform. The same techniques can be used to decrease the number of by multiplexing pairs of such vectors and demultiplexing their transforms. Partial vectors with different values of may be mixed in this way (Section 1.3.4.3.5.6). Once is completed, its results have Hermitian symmetry with respect to , and the methods of Section 1.3.4.3.5.1 may be used to obtain the unique electron densities.

(ii) Transform on first.

The partial vectors defined by obey symmetry relations of the formwith independent of . This is the same relation as for the same parity class of data for a Hermitian-symmetric or antisymmetric transform. The same techniques may be used to decrease the number of . This generalizes the procedure described by Ten Eyck (1973) for treating dyad axes, i.e. for the case , and (simple dyad) or (screw dyad). Once is completed, its results have Hermitian symmetry properties with respect to which can be used to obtain the unique electron densities.

Structure factors may be computed by applying the reverse procedures in the reverse order.

#### 1.3.4.3.6.3. Orthorhombic groups


Almost all orthorhombic space groups are generated by two monoclinic transformations and of the type described in Section 1.3.4.3.6.2, with the addition of a centre of inversion for centrosymmetric groups. The only exceptions are Fdd2 and Fddd, which contain diamond glides, in which some nonprimitive translations are 'square roots' not of primitive lattice translations, but of centring translations. The generic case will be examined first.

To calculate electron densities, the unique octant of data may first be transformed on (respectively ) as in Section 1.3.4.3.6.2 using the symmetry pertaining to generator . These intermediate results may then be expanded by generator by the formula of Section 1.3.4.3.3 prior to the final transform on (respectively ). To calculate structure factors, the reverse operations are applied in the reverse order.

The two exceptional groups Fdd2 and Fddd only require a small modification. The F-centring causes the systematic absence of parity classes with mixed parities, leaving only (000) and (111). For the former, the phase factors in the symmetry relations of Section 1.3.4.3.6.2 become powers of (−1) so that one is back to the generic case. For the latter, these phase factors are odd powers of i which it is a simple matter to incorporate into a modified multiplexing/demultiplexing procedure.

#### 1.3.4.3.6.4. Trigonal, tetragonal and hexagonal groups

| top | pdf |

All the symmetries in this class of groups can be handled by the generalized Rader/Winograd algorithms of Section 1.3.4.3.4.3, but no implementation of these is yet available.

In groups containing axes of the form with g.c.d. along the c direction, the following procedure may be used (Ten Eyck, 1973):

 (i) to calculate electron densities, the unique structure factors indexed by are transformed on l; the results are rearranged by the transposition formula of Section 1.3.4.3.3 so as to be indexed by and are finally transformed on (h, k) to produce an asymmetric unit. For a dihedral group, the extra twofold axis may be used in the transposition to produce a unique th of z. (ii) to calculate structure factors, the unique densities in th of z [or th for a dihedral group] are first transformed on x and y, then transposed by the formula of Section 1.3.4.3.3 to reindex the intermediate results by ; the last transform on z is then carried out.

#### 1.3.4.3.6.5. Cubic groups

| top | pdf |

These are usually treated as their orthorhombic or tetragonal subgroups, as the body-diagonal threefold axis cannot be handled by ordinary methods of decomposition.

The three-dimensional factorization technique of Section 1.3.4.3.4.1 allows a complete treatment of cubic symmetry. Factoring by 2 along all three dimensions gives four types (i.e. orbits) of parity classes. Orbit exchange using the threefold axis thus allows one to reduce the number of partial transforms from 8 to 4 (one per orbit). Factoring by 3 leads to a reduction from 27 to 11 (in this case, further reduction to 9 can be gained by multiplexing the three diagonal classes with residual threefold symmetry into a single class; see Section 1.3.4.3.5.6). More generally, factoring by q leads to a reduction from to . Each of the remaining transforms then has a symmetry induced from the orthorhombic or tetragonal subgroup, which can be treated as above.

No implementation of this procedure is yet available.
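The counts quoted above can be verified by enumerating the orbits of the cyclic coordinate permutation (the threefold axis) acting on the q³ parity classes; a small sketch (the function name is illustrative):

```python
from itertools import product

def parity_orbits(q):
    """Orbits of the threefold (cyclic coordinate) permutation
    (h, k, l) -> (l, h, k) acting on the q**3 parity classes mod q."""
    seen, orbits = set(), []
    for c in product(range(q), repeat=3):
        if c in seen:
            continue
        orbit = {c, (c[2], c[0], c[1]), (c[1], c[2], c[0])}
        seen |= orbit
        orbits.append(orbit)
    return orbits

# factoring by 2: 8 classes fall into 4 orbits; by 3: 27 into 11
assert len(parity_orbits(2)) == 4
assert len(parity_orbits(3)) == 11
# the diagonal classes (j, j, j) are fixed points of the permutation;
# for q = 3, multiplexing the three of them into one class gives 9
diag = [o for o in parity_orbits(3) if len(o) == 1]
assert len(diag) == 3
```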

#### 1.3.4.3.6.6. Treatment of centred lattices

| top | pdf |

Lattice centring is an instance of the duality between periodization and decimation: the extra translational periodicity of ρ induces a decimation of described by the 'reflection conditions' on h. As was pointed out in Section 1.3.4.2.2.3, nonprimitive lattices are introduced in order to retain the same matrix representation for a given geometric symmetry operation in all the arithmetic classes in which it occurs. From the computational point of view, therefore, the main advantage in using centred lattices is that it maximizes decomposability (Section 1.3.4.2.2.4); reindexing to a primitive lattice would for instance often destroy the diagonal character of the matrix representing a dyad.

In the usual procedure involving three successive one-dimensional transforms, the loss of efficiency caused by the duplication of densities or the systematic vanishing of certain classes of structure factors may be avoided by using a multiplexing/demultiplexing technique (Ten Eyck, 1973):

 (i) for base-centred or body-centred lattices, two successive planes of structure factors may be overlaid into a single plane; after transformation, the results belonging to each plane may be separated by parity considerations; (ii) for face-centred lattices the same method applies, using four successive planes (the third and fourth with an origin translation); (iii) for rhombohedral lattices in hexagonal coordinates, three successive planes may be overlaid, and the results may be separated by linear combinations involving cube roots of unity.

The three-dimensional factorization technique of Section 1.3.4.3.4.1 is particularly well suited to the treatment of centred lattices: if the decimation matrix of N contains as a factor a matrix which 'integerizes' all the nonprimitive lattice vectors, then centring is reflected by the systematic vanishing of certain classes of vectors of decimated data or results, which can simply be omitted from the calculation. An alternative possibility is to reindex on a primitive lattice and use different representative matrices for the symmetry operations: the loss of decomposability is of little consequence in this three-dimensional scheme, although it substantially complicates the definition of the cocycles and .

#### 1.3.4.3.6.7. Programming considerations

| top | pdf |

The preceding sections have been devoted to showing how the raw computational efficiency of a crystallographic Fourier transform algorithm can be maximized. This section will briefly discuss another characteristic (besides speed) which a crystallographic Fourier transform program may be required to possess if it is to be useful in various applications: a convenient and versatile mode of presentation of input data or output results.

The standard crystallographic FFT programs (Ten Eyck, 1973, 1985) are rather rigid in this respect, and use rather rudimentary data structures (lists of structure-factor values, and two-dimensional arrays containing successive sections of electron-density maps). It is frequently the case that considerable reformatting of these data or results must be carried out before they can be used in other computations; for instance, maps have to be converted from 2D sections to 3D 'bricks' before they can be inspected on a computer graphics display.

The explicitly three-dimensional approach to the factorization of the DFT and the use of symmetry offers the possibility of richer and more versatile data structures. For instance, the use of 'decimation in frequency' in real space and of 'decimation in time' in reciprocal space leads to data structures in which real-space coordinates are handled by blocks (thus preserving, at least locally, the three-dimensional topological connectivity of the maps) while reciprocal-space indices are handled by parity classes or their generalizations for factors other than 2 (thus making the treatment of centred lattices extremely easy). This global three-dimensional indexing also makes it possible to carry symmetry and multiplicity characteristics for each subvector of intermediate results for the purpose of automating the use of the orbit exchange mechanism.

Brünger (1989) has described the use of a similar three-dimensional factoring technique in the context of structure-factor calculations for the refinement of macromolecular structures.

#### 1.3.4.4.1. Introduction

| top | pdf |

Fourier transform (FT) calculations play an indispensable role in crystallography, because the Fourier transformation is inherent in the diffraction phenomenon itself.

Besides this obligatory use, the FT has numerous other applications, motivated more often by its mathematical properties than by direct physical reasoning (although the latter can be supplied after the fact). Typically, many crystallographic computations turn out to be convolutions in disguise, which can be speeded up by orders of magnitude through a judicious use of the FT. Several recent advances in crystallographic computation have been based on this kind of observation.

#### 1.3.4.4.2. Fourier synthesis of electron-density maps

| top | pdf |

Bragg (1929) was the first to use this type of calculation to assist structure determination. Progress in computing techniques since that time was reviewed in Section 1.3.4.3.1.

The usefulness of the maps thus obtained can be adversely affected by three main factors:

 (i) limited resolution; (ii) errors in the data; (iii) computational errors.

Limited resolution causes 'series-termination errors' first investigated by Bragg & West (1930), who used an optical analogy with the numerical aperture of a microscope. James (1948b) gave a quantitative description of this phenomenon as a convolution with the 'spherical Dirichlet kernel' (Section 1.3.4.2.1.3), which reflects the truncation of the Fourier spectrum by multiplication with the indicator function of the limiting resolution sphere. Bragg & West (1930) suggested that the resulting ripples might be diminished by applying an artificial temperature factor to the data, which performs a further convolution with a Gaussian point-spread function. When the electron-density map is to be used for model refinement, van Reijen (1942) suggested using Fourier coefficients calculated from the model when no observation is available, as a means of combating series-termination effects.

Errors in the data introduce errors in the electron-density maps, with the same mean-square value by virtue of Parseval's theorem. Special positions accrue larger errors (Cruickshank & Rollett, 1953; Cruickshank, 1965a). To minimize the mean-square electron-density error due to large phase uncertainties, Blow & Crick (1959) introduced the 'best Fourier' which uses centroid Fourier coefficients; the associated error level in the electron-density map was evaluated by Blow & Crick (1959) and Dickerson et al. (1961a,b).

Computational errors used to be a serious concern when Beevers–Lipson strips were used, and Cochran (1948a) carried out a critical evaluation of the accuracy limitations imposed by strip methods. Nowadays, the FFT algorithm implemented on digital computers with a word size of at least 32 bits gives results accurate to six decimal places or better in most applications (see Gentleman & Sande, 1966).

#### 1.3.4.4.3. Fourier analysis of modified electron-density maps

| top | pdf |

Various approaches to the phase problem are based on certain modifications of the electron-density map, followed by Fourier analysis of the modified map and extraction of phase information from the resulting Fourier coefficients.

#### 1.3.4.4.3.1. Squaring

| top | pdf |

Sayre (1952a) derived his 'squaring method' equation for structures consisting of equal, resolved and spherically symmetric atoms by observing that squaring such an electron density is equivalent merely to sharpening each atom into its square. Thus, where is the ratio between the form factor common to all the atoms and the form factor for the squared version of that atom.

Most of the central results of direct methods, such as the tangent formula, are an immediate consequence of Sayre's equation. Phase refinement for a macromolecule by enforcement of the squaring method equation was demonstrated by Sayre (1972, 1974).

#### 1.3.4.4.3.2. Other nonlinear operations

| top | pdf |

A category of phase improvement procedures known as 'density modification' is based on the pointwise application of various quadratic or cubic 'filters' to electron-density maps after removal of negative regions (Hoppe & Gassmann, 1968; Hoppe et al., 1970; Barrett & Zwick, 1971; Gassmann & Zechmeister, 1972; Collins, 1975; Collins et al., 1976; Gassmann, 1976). These operations are claimed to be equivalent to reciprocal-space phase-refinement techniques such as those based on the tangent formula. Indeed the replacement of by , where P is a polynomial, yields and hence gives rise to the convolution-like families of terms encountered in direct methods. This equivalence, however, has been shown to be rather superficial (Bricogne, 1982), because the 'uncertainty principle' embodied in Heisenberg's inequality (Section 1.3.2.4.4.3) imposes severe limitations on the effectiveness of any procedure which operates pointwise in both real and reciprocal space.

In applying such methods, sampling considerations must be given close attention. If the spectrum of ρ extends to resolution Δ and if the pointwise nonlinear filter involves a polynomial P of degree n, then P(ρ) should be sampled at intervals sufficiently fine to accommodate the full bandwidth of its spectrum, which is n times that of ρ.
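The bandwidth growth underlying this sampling rule is easy to verify numerically: squaring a band-limited map doubles its band limit, and a cubic filter triples it. A one-dimensional sketch (the test signal is chosen arbitrarily):

```python
import numpy as np

N = 64
x = np.arange(N) / N
# band-limited 'map': Fourier content only up to |h| = 5
rho = 1.0 + np.cos(2 * np.pi * 3 * x) + 0.5 * np.cos(2 * np.pi * 5 * x)

def bandwidth(f, tol=1e-9):
    """Largest |h| carrying a non-negligible Fourier coefficient."""
    F = np.fft.fft(f) / len(f)
    freqs = np.fft.fftfreq(len(f), d=1.0 / len(f)).astype(int)
    return max(abs(k) for k, c in zip(freqs, F) if abs(c) > tol)

assert bandwidth(rho) == 5
assert bandwidth(rho**2) == 10   # squaring doubles the band limit
assert bandwidth(rho**3) == 15   # a cubic filter triples it
```

Sampling P(ρ) on the grid adequate for ρ alone would therefore alias the new high-frequency terms back into the working band.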

#### 1.3.4.4.3.3. Solvent flattening

| top | pdf |

Crystals of proteins and nucleic acids contain large amounts of mother liquor, often in excess of 50% of the unit-cell volume, occupying connected channels. The well ordered electron density corresponding to the macromolecule thus occupies only a periodic subregion of the crystal. This implies a convolution identity between structure factors (Main & Woolfson, 1963), which is a form of the Shannon interpolation formula (Sections 1.3.2.7.1, 1.3.4.2.1.7; Bricogne, 1974; Colman, 1974).

It is often possible to obtain an approximate 'molecular envelope' from a poor electron-density map, either interactively by computer graphics (Bricogne, 1976) or automatically by calculating a moving average of the electron density within a small sphere S. The latter procedure can be implemented in real space (Wang, 1985). However, as it is a convolution of the map with the indicator function of S, it can be speeded up considerably (Leslie, 1987) by computing the moving average in reciprocal space, as the inverse Fourier transform of the product of the transforms of the two factors.

This remark is identical in substance to Booth's method of computation of 'bounded projections' (Booth, 1945a) described in Section 1.3.4.2.1.8, except that the summation is kept three-dimensional.

The iterative use of the estimated envelope for the purpose of phase improvement (Wang, 1985) is a submethod of the previously developed method of molecular averaging, which is described below. Sampling rules for the Fourier analysis of envelope-truncated maps will be given there.
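The reciprocal-space shortcut for the moving average can be sketched on a toy periodic map (grid size and sphere radius are arbitrary illustrative choices; NumPy FFTs stand in for the crystallographic ones):

```python
import numpy as np

def moving_average_fft(rho, radius):
    """Moving average of a periodic map within a sphere of given
    radius (in grid units), computed as a convolution via FFTs."""
    n = rho.shape[0]
    g = np.indices(rho.shape)
    d = np.minimum(g, n - g)                  # minimum-image distances
    mask = (d[0]**2 + d[1]**2 + d[2]**2) <= radius**2
    chi = mask.astype(float)
    chi /= chi.sum()                          # normalize: average, not sum
    return np.real(np.fft.ifftn(np.fft.fftn(rho) * np.fft.fftn(chi)))

rng = np.random.default_rng(1)
rho = rng.standard_normal((16, 16, 16))
avg = moving_average_fft(rho, radius=3)
# averaging preserves the mean of the map
assert np.isclose(avg.mean(), rho.mean())
```

One forward transform of the sphere indicator, a pointwise product and one inverse transform replace an explicit sum over all grid points inside every sphere.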

#### 1.3.4.4.3.4. Molecular averaging by noncrystallographic symmetries

| top | pdf |

Macromolecules and macromolecular assemblies frequently crystallize with several identical subunits in the asymmetric unit, or in several crystal forms containing the same molecule in different arrangements. Rossmann & Blow (1963) recognized that intensity data collected from such structures are redundant (Sayre, 1952b) and that their redundancy could be a source of phase information.

The phase constraints implied by the consistency of geometrically redundant intensities were first derived by Rossmann & Blow (1963), and were generalized by Main & Rossmann (1966). Crowther (1967, 1969) reformulated them as linear eigenvalue equations between structure factors, for which he proposed an iterative matrix solution method. Although useful in practice (Jack, 1973), this reciprocal-space approach required computations of size N² for N reflections, so that N could not exceed a few thousands.

The theory was then reformulated in real space (Bricogne, 1974), showing that the most costly step in Crowther's procedure could be carried out much more economically by averaging the electron densities of all crystallographically independent subunits, then rebuilding the crystal(s) from this averaged subunit, flattening the density in the solvent region(s) by resetting it to its average value. This operation is a projection [by virtue of Section 1.3.4.2.2.2(d)]. The overall complexity was thus reduced from N² to N log N. The design and implementation of a general-purpose program package for averaging, reconstructing and solvent-flattening electron-density maps (Bricogne, 1976) led rapidly to the first high-resolution determinations of virus structures (Bloomer et al., 1978; Harrison et al., 1978), with .

The considerable gain in speed is a consequence of the fact that the masking operations used to retrieve the various copies of the common subunit are carried out by simple pointwise multiplication by an indicator function in real space, whereas they involve a convolution with in reciprocal space.

The averaging by noncrystallographic symmetries of an electron-density map calculated by FFT – hence sampled on a grid which is an integral subdivision of the period lattice – necessarily entails the interpolation of densities at nonintegral points of that grid. The effect of interpolation on the structure factors recalculated from an averaged map was examined by Bricogne (1976). This study showed that, if linear interpolation is used, the initial map should be calculated on a fine grid, of size Δ/5 or Δ/6 at resolution Δ (instead of the previously used value of Δ/3). The analysis about to be given applies to all interpolation schemes which consist in a convolution of the sampled density with a fixed interpolation kernel function K.

Let be a -periodic function. Let K be the interpolation kernel in 'normalized' form, i.e. such that and scaled so as to interpolate between sample values given on a unit grid ; in the case of linear interpolation, K is the 'trilinear wedge'. Let be sampled on a grid , and let denote the function interpolated from this sampled version of . Then: where , so that

The transform of thus consists of

 (i) a 'main band' corresponding to , which consists of the true transform attenuated by multiplication by the central region of ; in the case of linear interpolation, for example, (ii) a series of 'ghost bands' corresponding to , which consist of translates of multiplied by the tail regions of .

Thus is not band-limited even if is. Supposing, however, that is band-limited and that grid satisfies the Shannon sampling criterion, we see that there will be no overlap between the different bands: may therefore be recovered from the main band by compensating its attenuation, which is approximately a temperature-factor correction.

For numerical work, however, must be resampled onto another grid , which causes its transform to become periodized. This now causes the main band to become contaminated by the ghost bands of the translates of .

Aliasing errors may be minimized by increasing the sampling rate in grid well beyond the Shannon minimum, which rapidly reduces the r.m.s. content of the ghost bands.

The sampling rate in grid needs only exceed the Shannon minimum to the extent required to accommodate the increase in bandwidth due to convolution with , which is the reciprocal-space counterpart of envelope truncation (or solvent flattening) in real space.
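For linear interpolation in one dimension, the main-band attenuation is the square of a sinc function (the transform of the triangle kernel). The following sketch checks this numerically; the grid sizes and test frequency are arbitrary illustrative choices:

```python
import numpy as np

Ncoarse, Nfine, h = 16, 128, 3     # coarse grid, resampling grid, test frequency
xc = np.arange(Ncoarse) / Ncoarse
rho = np.cos(2 * np.pi * h * xc)   # band-limited density sampled on the coarse grid

# linear interpolation onto the fine grid, with periodic continuation
xf = np.arange(Nfine) / Nfine
xp = np.append(xc, 1.0)            # wrap point for periodicity
fp = np.append(rho, rho[0])
rho_fine = np.interp(xf, xp, fp)

F = np.fft.fft(rho_fine) / Nfine
amplitude = 2 * abs(F[h])          # recovered amplitude of the cosine term
# predicted main-band attenuation: sinc squared at h / Ncoarse
predicted = np.sinc(h / Ncoarse) ** 2
assert abs(amplitude - predicted) < 0.01
```

The small residual discrepancy comes precisely from the ghost bands aliased back by the fine-grid resampling, and shrinks as the fine grid is made denser.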

#### 1.3.4.4.3.5. Molecular-envelope transforms via Green's theorem

| top | pdf |

Green's theorem stated in terms of distributions (Section 1.3.2.3.9.1) is particularly well suited to the calculation of the Fourier transforms of indicator functions. Let f be the indicator function of U and let S be the boundary of U (assumed to be a smooth surface). The jump in the value of f across S along the outer normal vector is −1, the jump in the normal derivative of f across S is 0, and the Laplacian of f as a function is (almost everywhere) 0, so that the distributional Laplacian of f is concentrated on S. Green's theorem then reads:

The function satisfies the identity . Therefore, in Cartesian coordinates, the transform reduces to a surface integral over S, where n is the outer normal to S. This formula was used by von Laue (1936) for a different purpose, namely to calculate the transforms of crystal shapes (see also Ewald, 1940). If the surface S is given by a triangulation, the surface integral becomes a sum over all faces, since n is constant on each face. If U is a solid sphere with radius R, an integration by parts gives the transform immediately.
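For the solid sphere, under the convention F(ξ) = ∫_U exp(−2πi ξ·x) d³x, the integration by parts yields the closed form (sin u − u cos u)/(2π²s³) with u = 2πRs and s = |ξ|. A numerical check against direct radial quadrature (a sketch; the function names are illustrative):

```python
import numpy as np

def sphere_transform(R, s):
    """Closed-form Fourier transform of the indicator of a solid
    sphere of radius R at reciprocal radius s = |xi|, u = 2*pi*R*s."""
    u = 2 * np.pi * R * s
    return (np.sin(u) - u * np.cos(u)) / (2 * np.pi**2 * s**3)

def sphere_transform_quadrature(R, s, n=50001):
    """Direct radial quadrature of 4 pi r^2 sinc(2 r s) over [0, R]."""
    r = np.linspace(0.0, R, n)
    y = 4 * np.pi * r**2 * np.sinc(2 * r * s)   # np.sinc(x) = sin(pi x)/(pi x)
    return ((y[:-1] + y[1:]) / 2 * (r[1] - r[0])).sum()

R = 1.5
for s in (0.2, 0.7, 1.3):
    assert abs(sphere_transform(R, s) - sphere_transform_quadrature(R, s)) < 1e-5
# as s -> 0 the transform tends to the volume of the sphere
assert abs(sphere_transform(R, 1e-3) - 4 * np.pi * R**3 / 3) < 1e-3
```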

#### 1.3.4.4.4. Structure factors from model atomic parameters

| top | pdf |

An atomic model of a crystal structure consists of a list of symmetry-unique atoms described by their positions, their thermal agitation and their chemical identity (which can be used as a pointer to form-factor tables). Form factors are usually parameterized as sums of Gaussians, and thermal agitation by a Gaussian temperature factor or tensor. The formulae given in Section 1.3.4.2.2.6 for Gaussian atoms are therefore adequate for most purposes. High-resolution electron-density studies use more involved parameterizations.

Early calculations were carried out by means of Bragg–Lipson charts (Bragg & Lipson, 1936) which gave a graphical representation of the symmetrized trigonometric sums Ξ of Section 1.3.4.2.2.9. The approximation of form factors by Gaussians goes back to the work of Vand et al. (1957) and Forsyth & Wells (1959). Agarwal (1978) gave simplified expansions suitable for medium-resolution modelling of macromolecular structures.

This method of calculating structure factors is expensive because each atom sends contributions of essentially equal magnitude to all structure factors in a resolution shell. The calculation is therefore of size for N atoms and reflections. Since N and are roughly proportional at a given resolution, this method is very costly for large structures.

Two distinct programming strategies are available (Rollett, 1965) according to whether the fast loop is on all atoms for each reflection, or on all reflections for each atom. The former method was favoured in the early times when computers were unreliable. The latter was shown by Burnett & Nordman (1974) to be more amenable to efficient programming, as no multiplication is required in calculating the arguments of the sine/cosine terms: these can be accumulated by integer addition, and used as subscripts in referencing a trigonometric function table.

#### 1.3.4.4.5. Structure factors via model electron-density maps

| top | pdf |

Robertson (1936b) recognized the similarity between the calculation of structure factors by Fourier summation and the calculation of Fourier syntheses, the main difference being of course that atomic coordinates do not usually lie exactly on a grid obtained by integer subdivision of the crystal lattice. He proposed to address this difficulty by the use of his sorting board, which could extend the scale of subdivision and thus avoid phase errors. In this way the calculation of structure factors became amenable to Beevers–Lipson strip methods, with considerable gain of speed.

Later, Beevers & Lipson (1952) proposed that trigonometric functions attached to atomic positions falling between the grid points on which Beevers–Lipson strips were based should be obtained by linear interpolation from the values found on the strips for the closest grid points. This amounts (Section 1.3.4.4.3.4) to using atoms in the shape of a trilinear wedge, whose form factor was indicated in Section 1.3.4.4.3.4 and gives rise to aliasing effects (see below) not considered by Beevers & Lipson.

The correct formulation of this idea came with the work of Sayre (1951), who showed that structure factors could be calculated by Fourier analysis of a sampled electron-density map previously generated on a subdivision of the crystal lattice Λ. When generating such a map, care must be taken to distribute onto the sample grid not only the electron densities of all the atoms in the asymmetric motif, but also those of their images under space-group symmetries and lattice translations. Considerable savings in computation occur, especially for large structures, because atoms are localized: each atom sends contributions to only a few grid points in real space, rather than to all reciprocal-lattice points. The generation of the sampled electron-density map is still of complexity for N atoms and reflections, but the proportionality constant is smaller than that in Section 1.3.4.4.4 by orders of magnitude; the extra cost of Fourier analysis, proportional to , is negligible.
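Sayre's scheme can be illustrated in a one-dimensional toy model with Gaussian atoms; the grid size, atomic width and positions below are illustrative choices, not values from the text:

```python
import numpy as np

# 1D analogue: generate a sampled Gaussian-atom density, Fourier-analyse
# it, and compare with direct summation F(h) = sum_j f(h) exp(2 pi i h x_j).
M, sigma = 64, 0.05
positions = np.array([0.30, 0.70])

x = np.arange(M) / M
rho = np.zeros(M)
for xj in positions:
    for n in (-1, 0, 1):                       # neighbouring lattice images
        rho += np.exp(-((x - xj - n) ** 2) / (2 * sigma**2))
rho /= sigma * np.sqrt(2 * np.pi)              # unit 'atomic number' per atom

# crystallographic convention F(h) = integral rho exp(+2 pi i h x) dx
F_fft = np.conj(np.fft.fft(rho)) / M

h = np.arange(1, 6)
f = np.exp(-2 * np.pi**2 * sigma**2 * h**2)    # Gaussian form factor
F_direct = np.array([(f[i] * np.exp(2j * np.pi * k * positions)).sum()
                     for i, k in enumerate(h)])
assert np.allclose(F_fft[h], F_direct, atol=1e-8)
```

With these Gaussians the form factor has decayed to nothing well before the grid Nyquist frequency, so aliasing is negligible; the next paragraphs describe what happens when that condition fails.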

The idea of approximating a Fourier transform by a discrete transform on sampled values had already been used by Whittaker (1948), who tested it on the first three odd Hermite functions and did not consider the problem of aliasing errors. By contrast, Sayre gave a lucid analysis of the sampling problems associated with this technique. If the periodic sampled map is written in the form of a weighted lattice distribution (as in Section 1.3.2.7.3) as, then its discrete Fourier transform yields, so that each correct value is corrupted by its aliases for .

To cure this aliasing problem, Sayre used 'hypothetical atoms' with form factors equal to those of standard atoms within the resolution range of interest, but set to zero outside that range. This amounts to using atomic densities with built-in series-termination errors, which has the detrimental effect of introducing slowly decaying ripples around the atom which require incrementing sample densities at many more grid points per atom.

Sayre considered another cure in the form of an artificial temperature factor B (Bragg & West, 1930) applied to all atoms. This spreads each atom over more grid points in real space but speeds up the decay of its transform in reciprocal space, thus allowing the use of a coarser sampling grid in real space. He discounted it as spoiling the agreement with observed data, but Ten Eyck (1977) pointed out that this agreement could be restored by applying the negative of the artificial temperature factor to the results. This idea cannot be carried to extremes: if B is chosen too large, the atoms will be so spread out in real space that each occupies a sizeable fraction of the unit cell, and the advantage of atom localization will be lost; furthermore, the form factors will fall off so rapidly that round-off error amplification will occur when the results are sharpened back. Clearly, there exists an optimal combination of B and sampling rate yielding the most economical computation for a given accuracy at a given resolution, and a formula will now be given to calculate it.

Let us make the simplifying assumption that all atoms are roughly equal and that their common form factor can be represented by an equivalent temperature factor B_eq. Let Δ be the resolution to which structure factors are wanted. The Shannon sampling interval is Δ/2. Let σ be the oversampling rate, so that the actual sampling interval in the map is Δ/(2σ): then consecutive copies of the transform are separated by a distance 2σ/Δ in reciprocal space. Let the artificial temperature factor B be added, and let B_eff = B_eq + B. The worst aliasing occurs at the outer resolution limit 1/Δ, where the 'signal' due to an atom is proportional to exp[−B_eff/(4Δ²)], while the 'noise' due to the closest alias is proportional to exp[−B_eff(2σ − 1)²/(4Δ²)]. Thus the signal-to-noise ratio, or quality factor, Q is exp[B_eff σ(σ − 1)/Δ²].

If a certain value of Q is desired (e.g. Q = 100 for 1% accuracy), then the equation ln Q = B_eff σ(σ − 1)/Δ² defines B in terms of Δ, σ and Q.
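Assuming the Gaussian model just described, with B_eq the equivalent atomic temperature factor, the relation ln Q = (B_eq + B)σ(σ − 1)/Δ² can be inverted for B. A sketch (the function name and the illustrative numbers are my own):

```python
import numpy as np

def artificial_B(delta, sigma, Q, B_eq):
    """Artificial temperature factor B needed to reach quality factor Q
    at resolution delta with oversampling rate sigma, assuming the
    Gaussian model  ln Q = (B_eq + B) * sigma * (sigma - 1) / delta**2."""
    B_eff = delta**2 * np.log(Q) / (sigma * (sigma - 1.0))
    return B_eff - B_eq

# 1% accuracy (Q = 100) at 3 A resolution, 1.5x oversampling,
# equivalent atomic temperature factor 20 A^2 (illustrative numbers)
B = artificial_B(delta=3.0, sigma=1.5, Q=100.0, B_eq=20.0)
assert abs(B - 35.262) < 0.01

# verify directly: 'signal' and closest-alias 'noise' at 1/delta
delta, sigma = 3.0, 1.5
B_eff = B + 20.0
signal = np.exp(-B_eff / (4 * delta**2))
noise = np.exp(-B_eff * (2 * sigma - 1) ** 2 / (4 * delta**2))
assert abs(signal / noise - 100.0) < 1e-6
```

The second check confirms that, under this model, the computed B makes the worst-case alias exactly Q times weaker than the signal.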

The overall cost of the structure-factor calculation from N atoms is then

 (i) for density generation, (ii) for Fourier analysis,

where and are constants depending on the speed of the computer used. This overall cost may be minimized with respect to σ for given and Q, determining the optimal B (and hence ) in passing by the above relation.

Sayre (1951) did observe that applying an artificial temperature factor in real space would not create series-termination ripples: the resulting atoms would have a smaller effective radius than his hypothetical atoms, so that step (i) would be faster. This optimality of Gaussian smearing is ultimately a consequence of Hardy's theorem (Section 1.3.2.4.4.3).

#### 1.3.4.4.6. Derivatives for variational phasing techniques

| top | pdf |

Some methods of phase determination rely on maximizing a certain global criterion involving the electron density, of the form , under constraint of agreement with the observed structure-factor amplitudes, typically measured by a residual C. Several recently proposed methods use for various measures of entropy defined by taking or (Bricogne, 1982; Britten & Collins, 1982; Narayan & Nityananda, 1982; Bryan et al., 1983; Wilkins et al., 1983; Bricogne, 1984; Navaza, 1985; Livesey & Skilling, 1985). Sayre's use of the squaring method to improve protein phases (Sayre, 1974) also belongs to this category, and is amenable to the same computational strategies (Sayre, 1980).

These methods differ from the density-modification procedures of Section 1.3.4.4.3.2 in that they seek an optimal solution by moving electron densities (or structure factors) jointly rather than pointwise, i.e. by moving along suitably chosen search directions [or ].

For computational purposes, these search directions may be handled either as column vectors of sample values on a grid in real space, or as column vectors of Fourier coefficients in reciprocal space. These column vectors are the coordinates of the same vector in an abstract vector space of dimension over , but referred to two different bases which are related by the DFT and its inverse (Section 1.3.2.7.3).

The problem of finding the optimum of S for a given value of C amounts to achieving collinearity between the gradients and of S and of C in , the scalar ratio between them being a Lagrange multiplier. In order to move towards such a solution from a trial position, the dependence of and on position in must be represented. This involves the Hessian matrices H(S) and H(C), whose size precludes their use in the whole of . Restricting the search to a smaller search subspace of dimension n spanned by , we may build local quadratic models of S and C (Bryan & Skilling, 1980; Burch et al., 1983) with respect to n coordinates X in that subspace. The coefficients of these quadratic models are given by scalar products which, by virtue of Parseval's theorem, may be evaluated either in real space or in reciprocal space (Bricogne, 1984). In doing so, special positions and reflections must be taken into account, as in Section 1.3.4.2.2.8. Scalar products involving S are best evaluated by real-space grid summation, because H(S) is diagonal in this representation; those involving C are best calculated by reciprocal-space summation, because H(C) is at worst block-diagonal in this representation. Using these Hessian matrices in the wrong space would lead to prohibitively expensive convolutions instead of scalar (or at worst matrix) multiplications.

#### 1.3.4.4.7. Derivatives for model refinement

| top | pdf |

Since the origins of X-ray crystal structure analysis, the calculation of crystallographic Fourier series has been closely associated with the process of refinement. Fourier coefficients with phases were obtained for all or part of the measured reflections in the basis of some trial model for all or part of the structure, and Fourier syntheses were then used to complete and improve this initial model. This approach is clearly described in the classic paper by Bragg & West (1929), and was put into practice in the determination of the structures of topaz (Alston & West, 1929) and diopside (Warren & Bragg, 1929). Later, more systematic methods of arriving at a trial model were provided by the Patterson synthesis (Patterson, 1934, 1935a,b; Harker, 1936) and by isomorphous replacement (Robertson, 1935, 1936c). The role of Fourier syntheses, however, remained essentially unchanged [see Robertson (1937) for a review] until more systematic methods of structure refinement were introduced in the 1940s. A particularly good account of the processes of structure completion and refinement may be found in Chapters 15 and 16 of Stout & Jensen (1968).

It is beyond the scope of this section to review the vast topic of refinement methods: rather, it will give an account of those aspects of their development which have sought improved power by exploiting properties of the Fourier transformation. It is of more than historical interest that some recent advances in the crystallographic refinement of macromolecular structures had been anticipated by Cochran and Cruickshank in the early 1950s.

#### 1.3.4.4.7.1. The method of least squares

| top | pdf |

Hughes (1941) was the first to use the already well established multivariate least-squares method (Whittaker & Robinson, 1944) to refine initial estimates of the parameters describing a model structure. The method gained general acceptance through the programming efforts of Friedlander et al. (1955), Sparks et al. (1956), Busing & Levy (1961) and others.

The Fourier relations between and F (Section 1.3.4.2.2.6) are used to derive the 'observational equations' connecting the structure parameters to the observations comprising the amplitudes and their experimental variances for a set of unique reflections.

The normal equations giving the corrections δu to the parameters are then set up, with A the normal matrix; the elements of A are obtained by writing out the derivatives of the calculated amplitudes with respect to the parameters.

In the simple case of atoms with real-valued form factors and isotropic thermal agitation in space group P1, the structure factor is a sum of atomic contributions, each weighted by a fractional occupancy.

Positional derivatives with respect to the atomic coordinates then yield the corresponding subvector of the right-hand side of the normal equations.

The setting up and solution of the normal equations lends itself well to computer programming and has the advantage of providing a thorough analysis of the accuracy of its results (Cruickshank, 1965b, 1970; Rollett, 1970). It is, however, an expensive task, of complexity , which is unaffordable for macromolecules.
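As a toy illustration of these observational and normal equations, consider a one-parameter model of my own construction (not from the text): one movable atom refined by Gauss–Newton iteration against error-free amplitudes.

```python
import numpy as np

# Refine the coordinate x of one atom in a two-atom P1 'structure'
# with unit form factors, F(h) = 1 + exp(2 pi i h x), against |F_obs|.
h = np.arange(1, 5)
x_true, x = 0.27, 0.25

def amplitudes(x):
    """Calculated amplitudes |F(h)| and their derivatives d|F|/dx."""
    F = 1.0 + np.exp(2j * np.pi * h * x)
    dF = 2j * np.pi * h * np.exp(2j * np.pi * h * x)
    A = np.abs(F)
    # d|F|/dx = Re(conj(F) dF/dx)/|F|, guarded where |F| vanishes
    dA = np.real(np.conj(F) * dF) / np.where(A > 1e-12, A, 1.0)
    return A, dA

A_obs, _ = amplitudes(x_true)
for _ in range(10):                 # Gauss-Newton iterations
    A, J = amplitudes(x)
    r = A_obs - A                   # residuals of the observational equations
    x += (J @ r) / (J @ J)          # 1x1 normal equation: (J'J) dx = J'r
assert abs(x - x_true) < 1e-8
```

Even this miniature example shows the structure of the method: a Jacobian of amplitudes with respect to parameters, residuals, and a normal-matrix solve per iteration.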

#### 1.3.4.4.7.2. Booth's differential Fourier syntheses

| top | pdf |

It was the use of Fourier syntheses in the completion of trial structures which provided the incentive to find methods for computing 2D and 3D syntheses efficiently, and led to the Beevers–Lipson strips. The limited accuracy of the latter caused the estimated positions of atoms (identified as peaks in the maps) to be somewhat in error. Methods were therefore sought to improve the accuracy with which the coordinates of the electron-density maxima could be determined. The naive method of peak-shape analysis from densities recalculated on a grid using high-accuracy trigonometric tables entailed 27 summations per atom.

Booth (1946a) suggested examining the rapidly varying derivatives of the electron density rather than its slowly varying values. If
$$\rho(\mathbf{x})=\sum_{\mathbf h}F_{\mathbf h}\exp(-2\pi i\,\mathbf{h}\cdot\mathbf{x}),$$
then the gradient vector of ρ at $\mathbf{x}^{0}$ can be calculated by means of three Fourier summations from the vector of Fourier coefficients
$$-2\pi i\,\mathbf{h}\,F_{\mathbf h}.$$
Similarly, the Hessian matrix of ρ at $\mathbf{x}^{0}$ can be calculated by six Fourier summations from the unique elements of the symmetric matrix of Fourier coefficients:
$$-4\pi^{2}\,\mathbf{h}\mathbf{h}^{T}F_{\mathbf h}.$$

The scalar maps giving the components of the gradient and Hessian matrix of ρ will be called differential syntheses of 1st order and 2nd order respectively. If $\mathbf{x}^{0}$ is approximately but not exactly a maximum of ρ, then the Newton–Raphson estimate of the true maximum $\mathbf{x}^{*}$ is given by:
$$\mathbf{x}^{*}=\mathbf{x}^{0}-\bigl[\mathbf{H}\rho(\mathbf{x}^{0})\bigr]^{-1}\,\boldsymbol{\nabla}\rho(\mathbf{x}^{0}).$$
This calculation requires only nine accurate Fourier summations (instead of 27), and this number is further reduced to four if the peak is assumed to be spherically symmetrical.
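Booth's scheme can be sketched numerically in one dimension (a toy model with invented coefficients, not the text's data): the 1st- and 2nd-order differential syntheses are evaluated at a trial point by Fourier summation, and Newton–Raphson steps sharpen the estimated peak position.

```python
import numpy as np

H = 12
h = np.arange(-H, H + 1)
x_true = 0.3                                # position of the density maximum
F = np.exp(-0.01 * h**2) * np.exp(2j * np.pi * h * x_true)   # invented F_h

def synth(coefs, x):
    # Fourier summation sum_h c_h exp(-2*pi*i*h*x) (real part)
    return np.real((coefs * np.exp(-2j * np.pi * h * x)).sum())

x = 0.285                                   # rough estimate read off a coarse map
for _ in range(5):
    grad = synth(-2j * np.pi * h * F, x)       # 1st-order differential synthesis
    curv = synth(-4 * np.pi**2 * h**2 * F, x)  # 2nd-order differential synthesis
    x -= grad / curv                           # Newton-Raphson update
```

A handful of iterations recovers the peak position to high accuracy, even though the starting estimate is displaced by a sizeable fraction of the series resolution.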

The resulting positions are affected by series-termination errors in the differential syntheses. Booth (1945c, 1946c) proposed a 'back-shift correction' to eliminate them, and extended this treatment to the acentric case (Booth, 1946b). He cautioned against the use of an artificial temperature factor to fight series-termination errors (Brill et al., 1939), as this could be shown to introduce coordinate errors by causing overlap between atoms (Booth, 1946c, 1947a,b).

Cruickshank was able to derive estimates for the standard uncertainties of the atomic coordinates obtained in this way (Cox & Cruickshank, 1948; Cruickshank, 1949a,b) and to show that they agreed with those provided by the least-squares method.

The calculation of differential Fourier syntheses was incorporated into the crystallographic programs of Ahmed & Cruickshank (1953b) and of Sparks et al. (1956).

#### 1.3.4.4.7.3. Booth's method of steepest descents

| top | pdf |

Having defined the now universally adopted R factors (Booth, 1945b) as criteria of agreement between observed and calculated amplitudes or intensities, Booth proposed that R should be minimized with respect to the set of atomic coordinates by descending along the gradient of R in parameter space (Booth, 1947c,d). This 'steepest descents' procedure was compared with Patterson methods by Cochran (1948d).

When calculating the necessary derivatives, Booth (1948a, 1949) used the formulae given above in connection with least squares. This method was implemented by Qurashi (1949) and by Vand (1948, 1951) with parameter-rescaling modifications which made it very close to the least-squares method (Cruickshank, 1950; Qurashi & Vand, 1953; Qurashi, 1953).

#### 1.3.4.4.7.4. Cochran's Fourier method

| top | pdf |

Cochran (1948b,c, 1951a) undertook to exploit an algebraic similarity between the right-hand side of the normal equations in the least-squares method on the one hand, and the expression for the coefficients used in Booth's differential syntheses on the other hand (see also Booth, 1948a). In doing so he initiated a remarkable sequence of formal and computational developments which are still actively pursued today.

Let $\rho_{\rm c}$ be the electron-density map corresponding to the current atomic model, with structure factors $|F^{\rm calc}_{\mathbf h}|\exp(i\varphi^{\rm calc}_{\mathbf h})$; and let $\rho_{\rm o}$ be the map calculated from observed moduli and calculated phases, i.e. with coefficients $\{|F_{\mathbf h}|^{\rm obs}\exp(i\varphi^{\rm calc}_{\mathbf h})\}$. If there are enough data for $\rho_{\rm c}$ to have a resolved peak at each model atomic position $\mathbf{x}_{j}$, then
$$(\boldsymbol{\nabla}\rho_{\rm c})(\mathbf{x}_{j})=\mathbf{0}\quad\hbox{for each }j;$$
while if the calculated phases are good enough, $\rho_{\rm o}$ will also have peaks at each $\mathbf{x}_{j}$:
$$(\boldsymbol{\nabla}\rho_{\rm o})(\mathbf{x}_{j})=\mathbf{0}\quad\hbox{for each }j.$$
It follows that
$$\sum_{\mathbf h}(-2\pi i\,\mathbf{h})\bigl[|F_{\mathbf h}|^{\rm obs}-|F^{\rm calc}_{\mathbf h}|\bigr]\exp(i\varphi^{\rm calc}_{\mathbf h})\exp(-2\pi i\,\mathbf{h}\cdot\mathbf{x}_{j})=\mathbf{0},$$
where the summation is over all reflections in the unique set or related to it by space-group and Friedel symmetry (overlooking multiplicity factors!). This relation is less sensitive to series-termination errors than either of the previous two, since the spectrum of $\rho_{\rm o}$ could have been extrapolated beyond the data by using that of $\rho_{\rm c}$ [as in van Reijen (1942)] without changing its right-hand side.

Cochran then used the identity
$${\partial|F^{\rm calc}_{\mathbf h}|\over\partial\mathbf{x}_{j}}={\rm Re}\left[\exp(-i\varphi^{\rm calc}_{\mathbf h})\,{\partial F^{\rm calc}_{\mathbf h}\over\partial\mathbf{x}_{j}}\right]$$
in the form
$${\partial F^{\rm calc}_{\mathbf h}\over\partial\mathbf{x}_{j}}=2\pi i\,\mathbf{h}\,g_{j}(\mathbf{h})\exp(2\pi i\,\mathbf{h}\cdot\mathbf{x}_{j})$$
to rewrite the previous relation as
$$\sum_{\mathbf h}{1\over g_{j}(\mathbf{h})}\,\bigl(|F_{\mathbf h}|^{\rm obs}-|F^{\rm calc}_{\mathbf h}|\bigr)\,{\partial|F^{\rm calc}_{\mathbf h}|\over\partial\mathbf{x}_{j}}=\mathbf{0}$$
(the operation Re on the first line being neutral because of Friedel symmetry). This is equivalent to the vanishing of the $\mathbf{x}_{j}$ subvector of the right-hand side of the normal equations associated to a least-squares refinement in which the weights would be
$$w_{\mathbf h}={1\over g_{j}(\mathbf{h})}.$$
Cochran concluded that, for equal-atom structures with $g_{j}(\mathbf{h})=g(\mathbf{h})$ for all j, the positions obtained by Booth's method applied to the difference map are such that they minimize the residual
$$\sum_{\mathbf h}{1\over g(\mathbf{h})}\,\bigl(|F_{\mathbf h}|^{\rm obs}-|F^{\rm calc}_{\mathbf h}|\bigr)^{2}$$
with respect to the atomic positions. If it is desired to minimize the residual of the ordinary least-squares method, then the differential synthesis method should be applied to the weighted difference map
$$\sum_{\mathbf h} w_{\mathbf h}\,g(\mathbf{h})\,\bigl(|F_{\mathbf h}|^{\rm obs}-|F^{\rm calc}_{\mathbf h}|\bigr)\exp(i\varphi^{\rm calc}_{\mathbf h})\exp(-2\pi i\,\mathbf{h}\cdot\mathbf{x}).$$
He went on to show (Cochran, 1951b) that the refinement of temperature factors could also be carried out by inspecting appropriate derivatives of the weighted difference map.
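Cochran's conclusion can be checked numerically. In the toy 1D model below (invented data, point atoms with constant factors, unit weights; in this setting the least-squares gradient reduces to a multiple of the difference-map gradient at the atom), the finite-difference derivative of the residual with respect to an atomic coordinate is compared with the gradient of the difference map, which is the quantity Booth's differential method consults:

```python
import numpy as np

x_true = np.array([0.15, 0.52, 0.80])       # invented 'true' structure
x_model = np.array([0.16, 0.50, 0.81])      # slightly wrong trial model
f = np.array([5.0, 9.0, 6.0])
h = np.arange(-20.0, 21.0)
w = np.ones_like(h)

def F(x):
    return (f[None, :] * np.exp(2j * np.pi * h[:, None] * x[None, :])).sum(axis=1)

def residual(x):
    return float((w * (np.abs(F(x_true)) - np.abs(F(x)))**2).sum())

Fc = F(x_model)
# difference-map coefficients  w * (|Fo| - |Fc|) * exp(i * phi_calc)
D_h = w * (np.abs(F(x_true)) - np.abs(Fc)) * Fc / np.abs(Fc)

def D_prime(x):
    # gradient of the difference map D(x) = sum_h D_h exp(-2*pi*i*h*x)
    return float(np.real((D_h * (-2j * np.pi * h) * np.exp(-2j * np.pi * h * x)).sum()))

j, eps = 1, 1e-6
xp, xm = x_model.copy(), x_model.copy()
xp[j] += eps
xm[j] -= eps
grad_fd = (residual(xp) - residual(xm)) / (2 * eps)   # finite-difference dR/dx_j
grad_map = -2.0 * f[j] * D_prime(x_model[j])          # difference-map prediction
```

The two numbers agree to finite-difference accuracy, so stationary points of the difference-map gradient at the atoms coincide with stationary points of the residual for this equal-weight toy case.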

This Fourier method was used by Freer et al. (1976) in conjunction with a stereochemical regularization procedure to refine protein structures.

#### 1.3.4.4.7.5. Cruickshank's modified Fourier method

| top | pdf |

Cruickshank consolidated and extended Cochran's derivations in a series of classic papers (Cruickshank, 1949b, 1950, 1952, 1956). He was able to show that all the coefficients involved in the right-hand side and normal matrix of the least-squares method could be calculated by means of suitable differential Fourier syntheses even when the atoms overlap. This remarkable achievement lay essentially dormant until its independent rediscovery by Agarwal in 1978 (Section 1.3.4.4.7.6).

To ensure rigorous equivalence between the summations over $\mathbf{h}$ (in the expressions of least-squares right-hand side and normal matrix elements) and genuine Fourier summations, multiplicity-corrected weights were introduced, with $G\mathbf{h}$ denoting the orbit of $\mathbf{h}$ and $G_{\mathbf h}$ its isotropy subgroup (Section 1.3.4.2.2.5). Similarly, derivatives with respect to parameters of symmetry-unique atoms were expressed, via the chain rule, as sums over the orbits of these atoms.

Let p be the label of a parameter $u_{p}$ belonging to atoms with label j. Then Cruickshank showed that the pth element of the right-hand side of the normal equations can be obtained as the value at $\mathbf{x}_{j}$ of a differential synthesis of the form
$$\sum_{\mathbf h}P_{p}(h,k,l)\,g_{j}(\mathbf{h})\,w_{\mathbf h}\,\bigl(|F_{\mathbf h}|^{\rm obs}-|F^{\rm calc}_{\mathbf h}|\bigr)\exp(i\varphi^{\rm calc}_{\mathbf h})\exp(-2\pi i\,\mathbf{h}\cdot\mathbf{x})$$
with $P_{p}$ a polynomial in (h, k, l) depending on the type of parameter p. The correspondence between parameter type and the associated polynomial extends Booth's original range of differential syntheses, and is recapitulated in the following table. Unlike Cochran's original heuristic argument, this result does not depend on the atoms being resolved.

Cruickshank (1952) also considered the elements of the normal matrix, of the form
$$\sum_{\mathbf h} w_{\mathbf h}\,{\partial|F^{\rm calc}_{\mathbf h}|\over\partial u_{p}}\,{\partial|F^{\rm calc}_{\mathbf h}|\over\partial u_{q}}$$
associated with positional parameters. The $3\times3$ block for parameters $\mathbf{x}_{j}$ and $\mathbf{x}_{k}$ may be written out using
$${\partial|F^{\rm calc}_{\mathbf h}|\over\partial\mathbf{x}_{j}}={\rm Re}\left[\exp(-i\varphi^{\rm calc}_{\mathbf h})\,2\pi i\,\mathbf{h}\,g_{j}(\mathbf{h})\exp(2\pi i\,\mathbf{h}\cdot\mathbf{x}_{j})\right],$$
which, using the identity
$${\rm Re}(z_{1})\,{\rm Re}(z_{2})={\textstyle{1\over2}}\,{\rm Re}\bigl[\overline{z_{1}}\,z_{2}+z_{1}z_{2}\bigr],$$
becomes a sum of two terms, one depending only on the interatomic vector $\mathbf{x}_{j}-\mathbf{x}_{k}$ and the other on $\mathbf{x}_{j}+\mathbf{x}_{k}$ (Friedel's symmetry makes the operation Re redundant on the last line). Cruickshank argued that the first term would give a good approximation to the diagonal blocks of the normal matrix and to those off-diagonal blocks for which $\mathbf{x}_{j}$ and $\mathbf{x}_{k}$ are close. On this basis he was able to justify the 'n-shift rule' of Shoemaker et al. (1950). Cruickshank gave this derivation in a general space group, but using a very terse notation which somewhat obscures it. Using the symmetrized trigonometric structure-factor kernel of Section 1.3.4.2.2.9 and its multiplication formula, the above expression is seen to involve the values of a Fourier synthesis at points of the form $\mathbf{x}_{j}\pm\mathbf{x}_{k}$.

Cruickshank (1956) showed that this analysis could also be applied to the refinement of temperature factors.

These two results made it possible to obtain all coefficients involved in the normal equations by looking up the values of certain differential Fourier syntheses at $\mathbf{x}_{j}$ or at $\mathbf{x}_{j}\pm\mathbf{x}_{k}$. At the time this did not confer any superiority over the standard form of the least-squares procedure, because the accurate computation of Fourier syntheses was an expensive operation. The modified Fourier method was used by Truter (1954) and by Ahmed & Cruickshank (1953a), and was incorporated into the program system described by Cruickshank et al. (1961). A more recent comparison with the least-squares method was made by Dietrich (1972).

There persisted, however, some confusion about the nature of the relationship between Fourier and least-squares methods, caused by the extra factors $g_{j}(\mathbf{h})$ which make it necessary to compute a differential synthesis for each type of atom. This led Cruickshank to conclude that 'in spite of their remarkable similarities the least-squares and modified-Fourier methods are fundamentally distinct'.

#### 1.3.4.4.7.6. Agarwal's FFT implementation of the Fourier method

| top | pdf |

Agarwal (1978) rederived and completed Cruickshank's results at a time when the availability of the FFT algorithm made the Fourier method of calculating the coefficients of the normal equations much more economical than the standard method, especially for macromolecules.

As obtained by Cruickshank, the modified Fourier method required a full 3D Fourier synthesis

 – for each type of parameter p, since this determines [via the polynomial $P_{p}$] the type of differential synthesis to be computed;
 – for each type of atom j, since the coefficients of the differential synthesis must be multiplied by $g_{j}(\mathbf{h})$.

Agarwal disposed of the latter dependence by pointing out that the multiplication involved is equivalent to a real-space convolution between the differential synthesis and $\rho_{j}$, the standard electron density for atom type j (Section 1.3.4.2.1.2) smeared by the isotropic thermal agitation of that atom. Since $\rho_{j}$ is localized, this convolution involves only a small number of grid points. The requirement of a distinct differential synthesis for each parameter type, however, continued to hold, and created some difficulties at the FFT level because the symmetries of differential syntheses are more complex than ordinary space-group symmetries. Jack & Levitt (1978) sought to avoid the calculation of difference syntheses by using instead finite differences calculated from ordinary Fourier or difference Fourier maps.

In spite of its complication, this return to the Fourier implementation of the least-squares method led to spectacular increases in speed (Isaacs & Agarwal, 1978; Agarwal, 1980; Baker & Dodson, 1980) and quickly gained general acceptance (Dodson, 1981; Isaacs, 1982a,b, 1984).
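The equivalence that Agarwal exploited (multiplication of synthesis coefficients by an atomic transform versus real-space convolution with the localized atom density) is easy to demonstrate on a grid. In this 1D sketch (an assumed Gaussian atom and random map, not the text's data), the convolution is carried out explicitly with only the few grid points where the atom density is non-negligible:

```python
import numpy as np

N = 256
rng = np.random.default_rng(1)
D = rng.standard_normal(N)                  # a real 'difference synthesis' on a grid
h = np.fft.fftfreq(N) * N                   # signed integer frequencies
g_h = np.exp(-0.002 * h**2)                 # Gaussian atomic transform g_j(h)

# reciprocal-space route: multiply the coefficients by g_j(h), then synthesize
smeared_recip = np.fft.ifft(np.fft.fft(D) * g_h).real

# real-space route: convolve D with the localized atom density rho_j,
# using only the 2K+1 grid points where rho_j is non-negligible
rho_j = np.fft.ifft(g_h).real               # atom density on the same grid
K = 12
smeared_real = np.zeros(N)
for s in range(-K, K + 1):
    smeared_real += rho_j[s % N] * np.roll(D, s)
```

Truncating the kernel to a narrow window changes the result only at the level of the neglected Gaussian tail, which is the economy Agarwal's real-space formulation buys.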

#### 1.3.4.4.7.7. Lifchitz's reformulation

| top | pdf |

Lifchitz [see Agarwal et al. (1981), Agarwal (1981)] proposed that the idea of treating certain multipliers in Cruickshank's modified differential Fourier syntheses by means of a convolution in real space should be applied not only to $g_{j}(\mathbf{h})$, but also to the polynomials $P_{p}$ which determine the type of differential synthesis being calculated. This leads to convoluting $\partial\rho_{j}/\partial u_{p}$ with the same ordinary weighted difference Fourier synthesis, rather than $\rho_{j}$ with the differential synthesis of type p. In this way, a single Fourier synthesis, with ordinary (scalar) symmetry properties, needs to be computed; the parameter type and atom type both intervene through the function $\partial\rho_{j}/\partial u_{p}$ with which it is convoluted. This approach has been used as the basis of an efficient general-purpose least-squares refinement program for macromolecular structures (Tronrud et al., 1987).

This rearrangement amounts to using the fact (Section 1.3.2.3.9.7) that convolution commutes with differentiation. Let
$$D(\mathbf{x})=\sum_{\mathbf h} w_{\mathbf h}\,\bigl(|F_{\mathbf h}|^{\rm obs}-|F^{\rm calc}_{\mathbf h}|\bigr)\exp(i\varphi^{\rm calc}_{\mathbf h})\exp(-2\pi i\,\mathbf{h}\cdot\mathbf{x})$$
be the inverse-variance weighted difference map, and let us assume that parameter $u_{p}$ belongs to atom j. Then the Agarwal form for the pth component of the right-hand side of the normal equations is
$$\left({\partial D\over\partial u_{p}}*\rho_{j}\right)(\mathbf{x}_{j}),$$
while the Lifchitz form is
$$\left(D*{\partial\rho_{j}\over\partial u_{p}}\right)(\mathbf{x}_{j}).$$

#### 1.3.4.4.7.8. A simplified derivation

| top | pdf |

A very simple derivation of the previous results will now be given, which suggests the possibility of many generalizations.

The weighted difference map D has coefficients $D_{\mathbf h}$ which are (up to a constant factor) the gradients of the global residual R with respect to each $F^{\rm calc}_{\mathbf h}$:
$$D_{\mathbf h}=w_{\mathbf h}\,\bigl(|F_{\mathbf h}|^{\rm obs}-|F^{\rm calc}_{\mathbf h}|\bigr)\exp(i\varphi^{\rm calc}_{\mathbf h}).$$
By the chain rule, a variation of each $F^{\rm calc}_{\mathbf h}$ by $\delta F^{\rm calc}_{\mathbf h}$ will result in a variation of R by δR with
$$\delta R=-2\sum_{\mathbf h}{\rm Re}\bigl[\overline{D_{\mathbf h}}\,\delta F^{\rm calc}_{\mathbf h}\bigr].$$
The operation Re is superfluous because of Friedel symmetry, so that δR may be simply written in terms of the Hermitian scalar product in $\ell^{2}(\mathbb{Z}^{3})$:
$$\delta R=-2\,(\mathbf{D},\delta\mathbf{F}^{\rm calc}).$$
If $D(\mathbf{x})$ is the real-space synthesis with coefficients $D_{\mathbf h}$ and $\delta\rho_{\rm c}$ the corresponding variation of the model density, we have also by Parseval's theorem
$$\delta R=-2\,(D,\delta\rho_{\rm c}),$$
which states that $-2D$ is the functional derivative of R with respect to $\rho_{\rm c}$.

The right-hand side of the normal equations has
$$b_{p}=\left(D,{\partial\rho_{\rm c}\over\partial u_{p}}\right)$$
for its pth element. If $u_{p}$ belongs to atom j, then only the contribution of that atom to $\rho_{\rm c}$ depends on $u_{p}$:
$${\partial\rho_{\rm c}\over\partial u_{p}}={\partial\over\partial u_{p}}\bigl[\rho_{j}*\delta_{(\mathbf{x}_{j})}\bigr];$$
hence
$$b_{p}=\left(D,{\partial\over\partial u_{p}}\bigl[\rho_{j}*\delta_{(\mathbf{x}_{j})}\bigr]\right).$$
By the identity of Section 1.3.2.4.3.5, this is identical to Lifchitz's expression $(D*\partial\rho_{j}/\partial u_{p})(\mathbf{x}_{j})$. The present derivation in terms of scalar products [see Brünger (1989) for another presentation of it] is conceptually simpler, since it invokes only the chain rule [other uses of which have been reviewed by Lunin (1985)] and Parseval's theorem; economy of computation is obviously related to the good localization of $\partial\rho_{j}/\partial u_{p}$ compared to the differential syntheses. Convolutions, whose meaning is less clear, are no longer involved; they were a legacy of having first gone over to reciprocal space via differential syntheses in the 1940s.
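The Parseval step in this derivation, evaluating the same inner product either as a sum over reflections or as a grid sum in real space, can be checked directly with a discrete transform (toy 1D data):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
d_map = rng.standard_normal(N)              # the gradient (difference) map D
g_map = rng.standard_normal(N)              # d rho_c / d u_p sampled on the grid

D_h = np.fft.fft(d_map)
G_h = np.fft.fft(g_map)
recip_sum = np.vdot(D_h, G_h).real / N      # (1/N) sum_h conj(D_h) * G_h
space_sum = float(d_map @ g_map)            # sum_x D(x) * (d rho_c/d u_p)(x)
```

The discrete Parseval identity guarantees the two sums agree, so the gradient may be assembled in whichever space is cheaper; the real-space sum wins when the second factor is localized.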

Cast in this form, the calculation of derivatives by FFT methods appears as a particular instance of the procedure described in connection with variational techniques (Section 1.3.4.4.6) to calculate the coefficients of local quadratic models in a search subspace; this is far from surprising since varying the electron density through a variation of the parameters of an atomic model is a particular case of the 'free' variations considered by the variational approach. The latter procedure would accommodate in a very natural fashion the joint consideration of an energetic (Jack & Levitt, 1978; Brünger et al., 1987; Brünger, 1988; Brünger et al., 1989; Kuriyan et al., 1989) or stereochemical (Konnert, 1976; Sussman et al., 1977; Konnert & Hendrickson, 1980; Hendrickson & Konnert, 1980; Tronrud et al., 1987) restraint function (which would play the role of S) and of the crystallographic residual (which would be C). It would even have over the latter the superiority of affording a genuine second-order approximation, albeit only in a subspace, hence the ability of detecting negative curvature and the resulting bifurcation behaviour (Bricogne, 1984). Current methods are unable to do this because they use only first-order models, and this is known to degrade severely the overall efficiency of the refinement process.

#### 1.3.4.4.7.9. Discussion of macromolecular refinement techniques

| top | pdf |

The impossibility of carrying out a full-matrix least-squares refinement of a macromolecular crystal structure, caused by excessive computational cost and by the paucity of observations, led Diamond (1971) to propose a real-space refinement method in which stereochemical knowledge was used to keep the number of free parameters to a minimum. Refinement took place by a least-squares fit between the 'observed' electron-density map and a model density consisting of Gaussian atoms. This procedure, coupled to iterative recalculation of the phases, led to the first highly refined protein structures obtained without using full-matrix least squares (Huber et al., 1974; Bode & Schwager, 1975; Deisenhofer & Steigemann, 1975; Takano, 1977a,b).

Real-space refinement takes advantage of the localization of atoms (each parameter interacts only with the density near the atom to which it belongs) and gives the most immediate description of stereochemical constraints. A disadvantage is that fitting the 'observed' electron density amounts to treating the phases of the structure factors as observed quantities, and to ignoring the experimental error estimates on their moduli. The method is also much more vulnerable to series-termination errors and accidentally missing data than the least-squares method. These objections led to the progressive disuse of Diamond's method, and to a switch towards reciprocal-space least squares following Agarwal's work.

The connection established above between the Cruickshank–Agarwal modified Fourier method and the simple use of the chain rule affords a partial refutation of both the premises of Diamond's method and the objections made against it:

 (i) it shows that refinement can be performed through localized computations in real space without having to treat the phases as observed quantities;
 (ii) at the same time, it shows that measurement errors on the moduli can be fully utilized in real space, via the Fourier synthesis of the functional derivative $\partial R/\partial\rho_{\rm c}$ or by means of the coefficients of a quadratic model of R in a search subspace.

#### 1.3.4.4.7.10. Sampling considerations

| top | pdf |

The calculation of the inner products $(D,\partial\rho_{\rm c}/\partial u_{p})$ from a sampled gradient map D requires even more caution than that of structure factors via electron-density maps described in Section 1.3.4.4.5, because the functions $\partial\rho_{j}/\partial u_{p}$ have transforms which extend even further in reciprocal space than the $\rho_{j}$ themselves. Analytically, if the $\rho_{j}$ are Gaussians, the $\partial\rho_{j}/\partial u_{p}$ are finite sums of multivariate Hermite functions (Section 1.3.2.4.4.2) and hence the same is true of their transforms. The difference map D must therefore be finely sampled and the relation between error and sampling rate may be investigated as in Section 1.3.4.4.5. An examination of the sampling rates commonly used (e.g. one third of the resolution) shows that they are insufficient. Tronrud et al. (1987) propose to relax this requirement by applying an artificial temperature factor to $\rho_{j}$ (cf. Section 1.3.4.4.5) and the negative of that temperature factor to D, a procedure of questionable validity because the latter 'sharpening' operation is ill defined [the function $\exp(+B\|\boldsymbol{\xi}\|^{2})$ does not define a tempered distribution, so the associativity properties of convolution may be lost]. A more robust procedure would be to compute the scalar product by means of a more sophisticated numerical quadrature formula than a mere grid sum.
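The sampling requirement can be illustrated with band-limited functions: the grid sum of a product is exact only when the grid rate exceeds the sum of the two band limits, since coarser grids alias the product's high frequencies onto zero. A deterministic 1D example follows (cosine series with unit coefficients, an assumption chosen so that the aliased answer is predictable by hand):

```python
import numpy as np

def band_limited(B, x):
    # real function with unit Fourier coefficients for h = 1..B (plus conjugates):
    # f(x) = sum_{h=1..B} 2 cos(2*pi*h*x), band limit B
    val = np.zeros_like(x)
    for h in range(1, B + 1):
        val += 2.0 * np.cos(2 * np.pi * h * x)
    return val

def grid_inner(N, B1, B2):
    # grid approximation (1/N) sum_n f(x_n) g(x_n) to the inner product
    x = np.arange(N) / N
    return float(band_limited(B1, x) @ band_limited(B2, x)) / N

B1, B2 = 8, 12
exact = grid_inner(64, B1, B2)    # N = 64 > B1 + B2 = 20: alias-free
fine = grid_inner(21, B1, B2)     # N = 21 > 20: still exact
coarse = grid_inner(16, B1, B2)   # N = 16 <= 20: aliased
```

Here the product of band limits 8 and 12 has bandwidth 20, so 21 grid points already give the exact value 16, while 16 grid points fold the orders ±16 onto zero and return 26 instead.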

#### 1.3.4.4.8. Miscellaneous correlation functions

| top | pdf |

Certain correlation functions can be useful to detect the presence of multiple copies of the same molecule (known or unknown) in the asymmetric unit of a crystal of unknown structure.

Suppose that a crystal contains one or several copies of a molecule in its asymmetric unit. If μ is the electron density of that molecule in some reference position and orientation, then
$$\rho^{0}=\sum_{j}T_{j}\,\mu,$$
where $T_{j}:\mathbf{x}\mapsto\mathbf{R}_{j}\mathbf{x}+\mathbf{t}_{j}$ describes the placement of the jth copy of the molecule with respect to the reference copy. It is assumed that each such copy is in a general position, so that there is no isotropy subgroup.

The methods of Section 1.3.4.2.2.9 (with the space-group operations replaced by the placements $T_{j}$, and the asymmetric-unit contents by μ) lead to the following expression for the auto-correlation of $\rho^{0}$:
$$\breve{\rho}^{0}*\rho^{0}=\sum_{j_{1}}\sum_{j_{2}}\bigl[\breve{(T_{j_{1}}\mu)}*(T_{j_{2}}\mu)\bigr].$$

If μ is unknown, consider the subfamily σ of terms with $j_{1}=j_{2}=j$:
$$\sigma=\sum_{j}\mathbf{R}_{j}^{\#}(\breve{\mu}*\mu).$$
The scalar product $(\sigma,\mathbf{R}^{\#}\sigma)$, in which R is a variable rotation, will have a peak whenever
$$\mathbf{R}=\mathbf{R}_{j_{1}}^{-1}\mathbf{R}_{j_{2}}$$
since two copies of the 'self-Patterson' $\breve{\mu}*\mu$ of the molecule will be brought into coincidence. If the interference from terms in the Patterson other than those present in σ is not too serious, the 'self-rotation function' (Rossmann & Blow, 1962; Crowther, 1972) will show the same peaks, from which the rotations $\{\mathbf{R}_{j}\}$ may be determined, either individually or jointly if for instance they form a group.

If μ is known, then its self-Patterson $\breve{\mu}*\mu$ may be calculated, and the $\mathbf{R}_{j}$ may be found by examining the 'cross-rotation function' $[\breve{\mu}*\mu,\ \mathbf{R}^{\#}(\breve{\rho}^{0}*\rho^{0})]$ which will have peaks at $\mathbf{R}=\mathbf{R}_{j}$, $j=1,\ldots,J$. Once the $\mathbf{R}_{j}$ are known, then the various copies $\mathbf{R}_{j}^{\#}\mu$ of μ may be Fourier-analysed into structure factors $M_{j}(\mathbf{h})$. The cross terms with $j_{1}\neq j_{2}$ in the Patterson then contain 'motifs'
$$\breve{(\mathbf{R}_{j_{1}}^{\#}\mu)}*(\mathbf{R}_{j_{2}}^{\#}\mu),$$
with Fourier coefficients
$$\overline{M_{j_{1}}(\mathbf{h})}\,M_{j_{2}}(\mathbf{h}),$$
translated by $\mathbf{t}_{j_{2}}-\mathbf{t}_{j_{1}}$. Therefore the 'translation functions' (Crowther & Blow, 1967)
$$\mathscr{T}_{j_{1}j_{2}}(\mathbf{t})=\sum_{\mathbf{h}}|F_{\mathbf{h}}|^{2}\,\overline{M_{j_{1}}(\mathbf{h})}\,M_{j_{2}}(\mathbf{h})\exp(-2\pi i\,\mathbf{h}\cdot\mathbf{t})$$
will have peaks at $\mathbf{t}=\mathbf{t}_{j_{2}}-\mathbf{t}_{j_{1}}$ corresponding to the detection of these motifs.
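The principle of the translation function, a Fourier synthesis whose coefficients are products of conjugated molecular transforms and which peaks at the relative shift between two copies of a motif, can be sketched in one dimension (toy Gaussian motif, assumed grid conventions):

```python
import numpy as np

N = 200
x = np.arange(N)
motif = np.exp(-0.1 * (x - 40.0)**2)        # 'molecule' density in a reference position
shift_true = 57
copy2 = np.roll(motif, shift_true)          # second copy, translated

F1 = np.fft.fft(motif)
F2 = np.fft.fft(copy2)
# translation function: synthesis with coefficients conj(F1_h) * F2_h
tfun = np.fft.ifft(np.conj(F1) * F2).real
t_found = int(np.argmax(tfun))
```

The cross-correlation is computed with two forward transforms and one inverse transform, which is what makes translation searches affordable on fine grids.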

| top | pdf |

#### 1.3.4.5.1. Helical diffraction

| top | pdf |

The theory of diffraction by helical structures (Cochran et al., 1952; Klug et al., 1958) has played an important part in the study of polypeptides, of nucleic acids and of tobacco mosaic virus.

#### 1.3.4.5.1.1. Circular harmonic expansions in polar coordinates

| top | pdf |

Let $f=f(x,y)$ be a reasonably regular function in two-dimensional real space. Going over to polar coordinates
$$x=r\cos\varphi,\qquad y=r\sin\varphi,$$
and writing, by slight misuse of notation, $f(r,\varphi)$ for $f(r\cos\varphi,r\sin\varphi)$ we may use the periodicity of f with respect to φ to expand it as a Fourier series (Byerly, 1893):
$$f(r,\varphi)=\sum_{n\in\mathbb{Z}}f_{n}(r)\exp(in\varphi)$$
with
$$f_{n}(r)={1\over2\pi}\int_{0}^{2\pi}f(r,\varphi)\exp(-in\varphi)\,{\rm d}\varphi.$$

Similarly, in reciprocal space, if $F=F(\xi,\eta)$ and if
$$\xi=R\cos\Phi,\qquad \eta=R\sin\Phi,$$
then
$$F(R,\Phi)=\sum_{n\in\mathbb{Z}}i^{n}F_{n}(R)\exp(in\Phi)$$
with
$$F_{n}(R)={i^{-n}\over2\pi}\int_{0}^{2\pi}F(R,\Phi)\exp(-in\Phi)\,{\rm d}\Phi,$$
where the phase factor $i^{n}$ has been introduced for convenience in the forthcoming step.

#### 1.3.4.5.1.2. The Fourier transform in polar coordinates

| top | pdf |

The Fourier transform relation between f and F may then be written in terms of $f_{n}$'s and $F_{n}$'s. Observing that $\mathbf{x}\cdot\boldsymbol{\xi}=rR\cos(\varphi-\Phi)$, and that (Watson, 1944)
$$\exp(iX\cos\theta)=\sum_{n\in\mathbb{Z}}i^{n}J_{n}(X)\exp(in\theta),$$
we obtain:
$$F(R,\Phi)=\sum_{n\in\mathbb{Z}}i^{n}\left[2\pi\int_{0}^{\infty}f_{n}(r)J_{n}(2\pi rR)\,r\,{\rm d}r\right]\exp(in\Phi);$$
hence, by the uniqueness of the Fourier expansion of F:
$$F_{n}(R)=2\pi\int_{0}^{\infty}f_{n}(r)J_{n}(2\pi rR)\,r\,{\rm d}r.$$
The inverse Fourier relationship leads to
$$f_{n}(r)=2\pi\int_{0}^{\infty}F_{n}(R)J_{n}(2\pi rR)\,R\,{\rm d}R.$$
The integral transform involved in the previous two equations is called the Hankel transform (see e.g. Titchmarsh, 1922; Sneddon, 1972) of order n.
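The Hankel-transform relation can be verified numerically for the circularly symmetric Gaussian $\exp(-\pi r^{2})$, which is its own 2D Fourier transform, so the order-0 transform should reproduce $\exp(-\pi R^{2})$. The sketch below (simple trapezoidal quadrature, an integral representation for $J_{0}$; accuracy limited by the grids chosen) is an illustration, not a production routine:

```python
import numpy as np

def trapezoid(y, x):
    # composite trapezoidal rule (avoids version issues with np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def J0(z_vals):
    # J_0(z) = (1/pi) * integral_0^pi cos(z sin(theta)) d(theta)
    theta = np.linspace(0.0, np.pi, 5001)
    out = np.empty_like(z_vals)
    for i, z in enumerate(z_vals):
        out[i] = trapezoid(np.cos(z * np.sin(theta)), theta) / np.pi
    return out

r = np.linspace(0.0, 6.0, 3001)
f0 = np.exp(-np.pi * r**2)                  # n = 0 radial profile of exp(-pi r^2)

def hankel0(R):
    # F_0(R) = 2*pi * integral_0^inf f_0(r) J_0(2*pi*r*R) r dr
    return 2.0 * np.pi * trapezoid(f0 * J0(2.0 * np.pi * r * R) * r, r)

F_num = hankel0(0.7)
F_exact = float(np.exp(-np.pi * 0.7**2))    # known self-transform of the Gaussian
```

The quadrature reproduces the analytic value to a few parts in ten thousand on these grids.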

#### 1.3.4.5.1.3. The transform of an axially periodic fibre

| top | pdf |

Let ρ be the electron-density distribution in a fibre, which is assumed to have translational periodicity with period 1 along z, and to have compact support with respect to the (x, y) coordinates. Thus ρ may be written
$$\rho(x,y,z)=\sum_{k\in\mathbb{Z}}\rho^{0}(x,y,z-k),$$
where $\rho^{0}$ is the motif.

By the tensor product property, the inverse Fourier transform F of ρ may be written
$$F(\xi,\eta,\zeta)=\sum_{l\in\mathbb{Z}}F(\xi,\eta,l)\,\delta_{(\zeta-l)}$$
and hence consists of 'layers' labelled by l, with
$$F(\xi,\eta,l)=\int\rho^{0}(x,y,z)\exp[2\pi i(\xi x+\eta y+lz)]\,{\rm d}x\,{\rm d}y\,{\rm d}z.$$

Changing to polar coordinates in the (x, y) and $(\xi,\eta)$ planes decomposes the calculation of F from ρ into the following steps:
$$g_{nl}(r)={1\over2\pi}\int_{0}^{2\pi}\int_{0}^{1}\rho(r,\varphi,z)\exp[-i(n\varphi+2\pi lz)]\,{\rm d}\varphi\,{\rm d}z,$$
$$G_{nl}(R)=2\pi\int_{0}^{\infty}g_{nl}(r)J_{n}(2\pi rR)\,r\,{\rm d}r,$$
$$F(R,\Phi,l)=\sum_{n\in\mathbb{Z}}i^{n}G_{nl}(R)\exp(in\Phi),$$
and the calculation of ρ from F into:
$$G_{nl}(R)={i^{-n}\over2\pi}\int_{0}^{2\pi}F(R,\Phi,l)\exp(-in\Phi)\,{\rm d}\Phi,$$
$$g_{nl}(r)=2\pi\int_{0}^{\infty}G_{nl}(R)J_{n}(2\pi rR)\,R\,{\rm d}R,$$
$$\rho(r,\varphi,z)=\sum_{n\in\mathbb{Z}}\sum_{l\in\mathbb{Z}}g_{nl}(r)\exp[i(n\varphi+2\pi lz)].$$

These formulae are seen to involve a 2D Fourier series with respect to the two periodic coordinates ϕ and z, and Hankel transforms along the radial coordinates. The two periodicities in ϕ and z are independent, so that all combinations of indices (n, l) occur in the Fourier summations.

#### 1.3.4.5.1.4. Helical symmetry and associated selection rules

| top | pdf |

Helical symmetry involves a 'clutching' between the two (hitherto independent) periodicities in φ (period 2π) and z (period 1) which causes a subdivision of the period lattice and hence a decimation (governed by 'selection rules') of the Fourier coefficients.

Let i and j be the basis vectors along $\varphi/2\pi$ and z. The integer lattice with basis (i, j) is a period lattice for the $(\varphi/2\pi,z)$ dependence of the electron density ρ of an axially periodic fibre considered in Section 1.3.4.5.1.3.

Suppose the fibre now has helical symmetry, with u copies of the same molecule in t turns, where g.c.d.$(u,t)=1$. Using the Euclidean algorithm, write $u=\lambda t+\mu$ with λ and μ positive integers and $\mu<t$. The period lattice for the $(\varphi/2\pi,z)$ dependence of ρ may be defined in terms of the new basis vectors:

 – I, joining subunit 0 to subunit 1 in the same turn;
 – J, joining subunit 0 to subunit λ after wrapping around.

In terms of the original basis,
$$\mathbf{I}={t\over u}\,\mathbf{i}+{1\over u}\,\mathbf{j},\qquad\mathbf{J}=-{\mu\over u}\,\mathbf{i}+{\lambda\over u}\,\mathbf{j}.$$
If α and β are coordinates along I and J, respectively, the coordinate transformations between $(\alpha,\beta)$ and $(\varphi/2\pi,z)$ are given by this basis change; by Fourier transformation, the transformations between indices are given by the contragredients of those between coordinates. Requiring the transformed indices to be integers then yields
$$l\equiv tn\ (\hbox{mod }u),$$
or alternatively
$$l=tn+um,\qquad m\in\mathbb{Z},$$
which are two equivalent expressions of the selection rules describing the decimation of the transform. These rules imply that only certain orders n contribute to a given layer l.

The 2D Fourier analysis may now be performed by analysing a single subunit referred to coordinates α and β, and then reindexing via the selection rules to get only the allowed $g_{nl}$'s. This is u times faster than analysing u subunits with respect to the $(\varphi/2\pi,z)$ coordinates.
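The selection rule $l=tn+um$ can be turned into a small enumeration utility (a sketch; the helper name and the example helices are assumptions chosen for illustration):

```python
def allowed_orders(l, u, t, n_max=30):
    # orders n satisfying the selection rule l = t*n + u*m for some integer m,
    # i.e. t*n congruent to l (mod u)
    return [n for n in range(-n_max, n_max + 1) if (t * n - l) % u == 0]

# a 3_1-type helix: u = 3 subunits in t = 1 turn
orders_l0 = allowed_orders(0, 3, 1)         # equator: n = 0, +/-3, +/-6, ...
orders_l1 = allowed_orders(1, 3, 1)         # first layer line: n = 1 mod 3

# a helix with u = 49 subunits in t = 3 turns: very sparse low orders
orders_49_3 = allowed_orders(0, 49, 3, n_max=60)
```

For large u with g.c.d.$(u,t)=1$, only one order in every u contributes to a given layer line, which is what makes helical diffraction patterns so sparse at low order.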

#### 1.3.4.5.2. Application to probability theory and direct methods

| top | pdf |

The Fourier transformation plays a central role in the branch of probability theory concerned with the limiting behaviour of sums of large numbers of independent and identically distributed random variables or random vectors. This privileged role is a consequence of the convolution theorem and of the 'moment-generating' properties which follow from the exchange between differentiation and multiplication by monomials. When the limit theorems are applied to the calculation of joint probability distributions of structure factors, which are themselves closely related to the Fourier transformation, a remarkable phenomenon occurs, which leads to the saddlepoint approximation and to the maximum-entropy method.

#### 1.3.4.5.2.1. Analytical methods of probability theory

| top | pdf |

The material in this section is not intended as an introduction to probability theory [for which the reader is referred to Cramér (1946), Petrov (1975) or Bhattacharya & Rao (1976)], but only as an illustration of the role played by the Fourier transformation in certain specific areas which are used in formulating and implementing direct methods of phase determination.

 (a) Convolution of probability densities. The addition of independent random variables or vectors leads to the convolution of their probability distributions: if $\mathbf{X}_{1}$ and $\mathbf{X}_{2}$ are two n-dimensional random vectors independently distributed with probability densities $P_{1}$ and $P_{2}$, respectively, then their sum $\mathbf{X}=\mathbf{X}_{1}+\mathbf{X}_{2}$ has probability density $P$ given by
$$P(\mathbf{X})=\int_{\mathbb{R}^{n}}P_{1}(\mathbf{X}_{1})\,P_{2}(\mathbf{X}-\mathbf{X}_{1})\,{\rm d}^{n}\mathbf{X}_{1},$$
i.e.
$$P=P_{1}*P_{2}.$$
This result can be extended to the case where $P_{1}$ and $P_{2}$ are singular measures (distributions of order zero, Section 1.3.2.3.4) and do not have a density with respect to the Lebesgue measure in $\mathbb{R}^{n}$.

 (b) Characteristic functions. This convolution can be turned into a simple multiplication by considering the Fourier transforms (called the characteristic functions) of $P_{1}$, $P_{2}$ and $P$, defined with a slightly different normalization in that there is no factor of 2π in the exponent (see Section 1.3.2.4.5), e.g.
$$C(\mathbf{t})=\int_{\mathbb{R}^{n}}P(\mathbf{X})\exp(i\mathbf{t}\cdot\mathbf{X})\,{\rm d}^{n}\mathbf{X}.$$
Then by the convolution theorem
$$C(\mathbf{t})=C_{1}(\mathbf{t})\,C_{2}(\mathbf{t}),$$
so that $P(\mathbf{X})$ may be evaluated by Fourier inversion of its characteristic function as
$$P(\mathbf{X})={1\over(2\pi)^{n}}\int_{\mathbb{R}^{n}}C_{1}(\mathbf{t})\,C_{2}(\mathbf{t})\exp(-i\mathbf{t}\cdot\mathbf{X})\,{\rm d}^{n}\mathbf{t}$$
(see Section 1.3.2.4.5 for the normalization factors). It follows from the differentiation theorem that the partial derivatives of the characteristic function at $\mathbf{t}=\mathbf{0}$ are related to the moments of a distribution P by the identities
$$\mu_{r_{1}r_{2}\ldots r_{n}}\equiv\int P(\mathbf{X})\,X_{1}^{r_{1}}\ldots X_{n}^{r_{n}}\,{\rm d}^{n}\mathbf{X}=i^{-(r_{1}+\ldots+r_{n})}\left.{\partial^{\,r_{1}+\ldots+r_{n}}C\over\partial t_{1}^{r_{1}}\ldots\partial t_{n}^{r_{n}}}\right|_{\mathbf{t}=\mathbf{0}}$$
for any n-tuple of non-negative integers $(r_{1},\ldots,r_{n})$.

 (c) Moment-generating functions. The above relation can be freed from powers of i by defining (at least formally) the moment-generating function:
$$M(\mathbf{t})=\int_{\mathbb{R}^{n}}P(\mathbf{X})\exp(\mathbf{t}\cdot\mathbf{X})\,{\rm d}^{n}\mathbf{X},$$
which is related to C by $C(\mathbf{t})=M(i\mathbf{t})$ so that the inversion formula reads
$$P(\mathbf{X})={1\over(2\pi)^{n}}\int_{\mathbb{R}^{n}}M(i\mathbf{t})\exp(-i\mathbf{t}\cdot\mathbf{X})\,{\rm d}^{n}\mathbf{t}.$$
The moment-generating function is well defined, in particular, for any probability distribution with compact support, in which case it may be continued analytically from a function over $\mathbb{R}^{n}$ into an entire function of n complex variables by virtue of the Paley–Wiener theorem (Section 1.3.2.4.2.10). Its moment-generating properties are summed up in the following relations:
$$\mu_{r_{1}\ldots r_{n}}=\left.{\partial^{\,r_{1}+\ldots+r_{n}}M\over\partial t_{1}^{r_{1}}\ldots\partial t_{n}^{r_{n}}}\right|_{\mathbf{t}=\mathbf{0}}.$$

 (d) Cumulant-generating functions. The multiplication of moment-generating functions may be further simplified into the addition of their logarithms:
$$\log M=\log M_{1}+\log M_{2},$$
or equivalently of the coefficients of their Taylor series at $\mathbf{t}=\mathbf{0}$, viz:
$$\kappa_{r_{1}\ldots r_{n}}=\left.{\partial^{\,r_{1}+\ldots+r_{n}}(\log M)\over\partial t_{1}^{r_{1}}\ldots\partial t_{n}^{r_{n}}}\right|_{\mathbf{t}=\mathbf{0}}.$$
These coefficients are called cumulants, since they add when the independent random vectors to which they belong are added, and log M is called the cumulant-generating function. The inversion formula for P then reads
$$P(\mathbf{X})={1\over(2\pi)^{n}}\int_{\mathbb{R}^{n}}\exp[\log M(i\mathbf{t})-i\mathbf{t}\cdot\mathbf{X}]\,{\rm d}^{n}\mathbf{t}.$$

 (e) Asymptotic expansions and limit theorems. Consider an n-dimensional random vector X of the form
$$\mathbf{X}=\mathbf{X}_{1}+\mathbf{X}_{2}+\ldots+\mathbf{X}_{N},$$
where the N summands are independent n-dimensional random vectors identically distributed with probability density P. Then the distribution $\mathscr{P}$ of X may be written in closed form as a Fourier transform:
$$\mathscr{P}(\mathbf{X})={1\over(2\pi)^{n}}\int_{\mathbb{R}^{n}}M^{N}(i\mathbf{t})\exp(-i\mathbf{t}\cdot\mathbf{X})\,{\rm d}^{n}\mathbf{t}={1\over(2\pi)^{n}}\int_{\mathbb{R}^{n}}\exp[N\log M(i\mathbf{t})-i\mathbf{t}\cdot\mathbf{X}]\,{\rm d}^{n}\mathbf{t},$$
where
$$M(\mathbf{t})=\int_{\mathbb{R}^{n}}P(\mathbf{Y})\exp(\mathbf{t}\cdot\mathbf{Y})\,{\rm d}^{n}\mathbf{Y}$$
is the moment-generating function common to all the summands. This is an exact expression for $\mathscr{P}$, which may be exploited analytically or numerically in certain favourable cases. Supposing for instance that P has compact support, then its characteristic function $M(i\mathbf{t})$ can be sampled finely enough to accommodate the bandwidth of the support of $\mathscr{P}=P^{*N}$ (this sampling rate clearly depends on N) so that the above expression for $\mathscr{P}$ can be used for its numerical evaluation as the discrete Fourier transform of $M^{N}(i\mathbf{t})$. This exact method is practical only for small values of the dimension n.

In all other cases some form of approximation must be used in the Fourier inversion of $M^{N}(i\mathbf{t})$. For this purpose it is customary (Cramér, 1946) to expand the cumulant-generating function around $\mathbf{t}=\mathbf{0}$ with respect to the carrying variables t:
$$\log M(i\mathbf{t})=\sum_{|\mathbf{r}|\geq1}{\kappa_{\mathbf{r}}\over\mathbf{r}!}(i\mathbf{t})^{\mathbf{r}},$$
where $\mathbf{r}=(r_{1},\ldots,r_{n})$ is a multi-index (Section 1.3.2.2.3). The first-order terms may be eliminated by recentring $\mathscr{P}$ around its vector of first-order cumulants
$$\langle\mathbf{X}\rangle=\sum_{j=1}^{N}\langle\mathbf{X}_{j}\rangle,$$
where $\langle\cdot\rangle$ denotes the mathematical expectation of a random vector. The second-order terms may be grouped separately from the terms of third or higher order to give
$$M^{N}(i\mathbf{t})=\exp\bigl(-{\textstyle{N\over2}}\mathbf{t}^{T}\mathbf{Q}\mathbf{t}\bigr)\exp\left[N\sum_{|\mathbf{r}|\geq3}{\kappa_{\mathbf{r}}\over\mathbf{r}!}(i\mathbf{t})^{\mathbf{r}}\right],$$
where $\mathbf{Q}=\nabla\nabla^{T}(\log M)\bigr|_{\mathbf{t}=\mathbf{0}}$ is the covariance matrix of the multivariate distribution P. Expanding the second exponential gives rise to a series of terms of the form
$$\exp\bigl(-{\textstyle{N\over2}}\mathbf{t}^{T}\mathbf{Q}\mathbf{t}\bigr)\times\hbox{monomial in }t_{1},\ldots,t_{n},$$
each of which may now be subjected to a Fourier transformation to yield a Hermite function of t (Section 1.3.2.4.4.2) with coefficients involving the cumulants κ of P. Taking the transformed terms in natural order gives an asymptotic expansion of $\mathscr{P}$ for large N called the Gram–Charlier series of $\mathscr{P}$, while grouping the terms according to increasing powers of $1/\sqrt{N}$ gives another asymptotic expansion called the Edgeworth series of $\mathscr{P}$. Both expansions comprise a leading Gaussian term which embodies the central-limit theorem:
$$\mathscr{P}(\mathbf{E})\approx{1\over\sqrt{\det(2\pi\mathbf{Q})}}\exp\bigl(-{\textstyle{1\over2}}\mathbf{E}^{T}\mathbf{Q}^{-1}\mathbf{E}\bigr),\qquad\hbox{where }\mathbf{E}={\mathbf{X}-\langle\mathbf{X}\rangle\over\sqrt{N}}.$$

 (f) The saddlepoint approximation. A limitation of the Edgeworth series is that it gives an accurate estimate of $\mathscr{P}(\mathbf{X})$ only in the vicinity of $\mathbf{X}=\langle\mathbf{X}\rangle$, i.e. for small values of E. These convergence difficulties are easily understood: one is substituting a local approximation to log M (viz a Taylor-series expansion valid near $\mathbf{t}=\mathbf{0}$) into an integral, whereas integration is a global process which consults values of log M far from $\mathbf{t}=\mathbf{0}$. It is possible, however, to let the point t where log M is expanded as a Taylor series depend on the particular value $\mathbf{X}^{*}$ of X for which an accurate evaluation of $\mathscr{P}(\mathbf{X})$ is desired. This is the essence of the saddlepoint method (Fowler, 1936; Khinchin, 1949; Daniels, 1954; de Bruijn, 1970; Bleistein & Handelsman, 1986), which uses an analytical continuation of M from a function over $\mathbb{R}^{n}$ to a function over $\mathbb{C}^{n}$ (see Section 1.3.2.4.2.10). Putting then $\mathbf{t}=\boldsymbol{\tau}+i\mathbf{s}$, the $\mathbb{C}^{n}$ version of Cauchy's theorem (Hörmander, 1973) gives rise to the identity
$$\mathscr{P}(\mathbf{X}^{*})={\exp(-\boldsymbol{\tau}\cdot\mathbf{X}^{*})\over(2\pi)^{n}}\int_{\mathbb{R}^{n}}\exp[N\log M(\boldsymbol{\tau}+i\mathbf{s})-i\mathbf{s}\cdot\mathbf{X}^{*}]\,{\rm d}^{n}\mathbf{s}$$
for any $\boldsymbol{\tau}\in\mathbb{R}^{n}$. By a convexity argument involving the positive-definiteness of covariance matrix Q, there is a unique value of τ such that
$$\nabla_{\mathbf{t}}(\log M)\bigr|_{\mathbf{t}=\boldsymbol{\tau}}={\mathbf{X}^{*}\over N}.$$
At the saddlepoint $\mathbf{t}^{*}=\boldsymbol{\tau}$, the modulus of the integrand above is a maximum and its phase is stationary with respect to the integration variable s: as N tends to infinity, all contributions to the integral cancel because of rapid oscillation, except those coming from the immediate vicinity of $\mathbf{t}^{*}$ where there is no oscillation. A Taylor expansion of $\log M^{N}$ to second order with respect to s at $\boldsymbol{\tau}$ then gives
$$\log M^{N}(\boldsymbol{\tau}+i\mathbf{s})\approx\log M^{N}(\boldsymbol{\tau})+iN\mathbf{s}\cdot[\nabla\log M(\boldsymbol{\tau})]-{\textstyle{N\over2}}\mathbf{s}^{T}\mathbf{Q}(\boldsymbol{\tau})\mathbf{s}$$
and hence
$$\mathscr{P}(\mathbf{X}^{*})\approx\exp[N\log M(\boldsymbol{\tau})-\boldsymbol{\tau}\cdot\mathbf{X}^{*}]\,{1\over(2\pi)^{n}}\int_{\mathbb{R}^{n}}\exp\bigl(-{\textstyle{N\over2}}\mathbf{s}^{T}\mathbf{Q}\mathbf{s}\bigr)\,{\rm d}^{n}\mathbf{s}.$$
The last integral is elementary and gives the 'saddlepoint approximation':
$$\mathscr{P}^{\rm SP}(\mathbf{X}^{*})={\exp(\mathscr{S})\over\sqrt{\det(2\pi N\mathbf{Q})}},$$
where
$$\mathscr{S}=N\log M(\boldsymbol{\tau})-\boldsymbol{\tau}\cdot\mathbf{X}^{*}$$
and where
$$\mathbf{Q}=\nabla\nabla^{T}(\log M)\bigr|_{\mathbf{t}=\boldsymbol{\tau}}.$$
This approximation scheme amounts to using the 'conjugate distribution' (Khinchin, 1949)
$$P_{\boldsymbol{\tau}}(\mathbf{X}_{j})=P(\mathbf{X}_{j})\,{\exp(\boldsymbol{\tau}\cdot\mathbf{X}_{j})\over M(\boldsymbol{\tau})}$$
instead of the original distribution $P(\mathbf{X}_{j})$ for the common distribution of all N random vectors $\mathbf{X}_{j}$. The exponential modulation results from the analytic continuation of the characteristic (or moment-generating) function into $\mathbb{C}^{n}$, as in Section 1.3.2.4.2.10. The saddlepoint approximation $\mathscr{P}^{\rm SP}$ is only the leading term of an asymptotic expansion (called the saddlepoint expansion) for $\mathscr{P}$, which is actually the Edgeworth expansion associated with $P_{\boldsymbol{\tau}}^{*N}$.
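Items (b) and (e) can be illustrated numerically: for a lattice-valued toy variable (an invented three-point distribution), the characteristic function of a sum of N i.i.d. copies is the Nth power of the single-summand characteristic function, and the density recovered by Fourier inversion approaches the Gaussian leading term of the Edgeworth series:

```python
import numpy as np

N_sum = 64                                  # number of i.i.d. summands
L = 1024                                    # transform length (sum's support <= 128)
p = np.zeros(L)
p[0], p[1], p[2] = 0.2, 0.5, 0.3            # toy lattice distribution P(X = 0, 1, 2)

C = np.fft.fft(p)                           # sampled characteristic function of P
p_sum = np.fft.ifft(C**N_sum).real          # exact density of X_1 + ... + X_N

# Gaussian (central-limit) leading term built from the cumulants of one summand
m1 = 0.0 * 0.2 + 1.0 * 0.5 + 2.0 * 0.3      # mean
m2 = (0.0 - m1)**2 * 0.2 + (1.0 - m1)**2 * 0.5 + (2.0 - m1)**2 * 0.3  # variance
mean, var = N_sum * m1, N_sum * m2
k = np.arange(L)
gauss = np.exp(-(k - mean)**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
max_dev = float(np.max(np.abs(p_sum - gauss)))
```

Because the support of the 64-fold sum fits inside the transform length, the DFT inversion is exact, and the residual deviation from the Gaussian term is of the size predicted by the first Edgeworth correction.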

#### 1.3.4.5.2.2. The statistical theory of phase determination

| top | pdf |

The methods of probability theory just surveyed were applied to various problems formally similar to the crystallographic phase problem [e.g. the problem of the 'random walk' of Pearson (1905)] by Rayleigh (1880, 1899, 1905, 1918, 1919) and Kluyver (1906). They became the basis of the statistical theory of communication with the classic papers of Rice (1944, 1945).

The Gram–Charlier and Edgeworth series were introduced into crystallography by Bertaut (1955a,b,c, 1956a) and by Klug (1958), respectively, who showed them to constitute the mathematical basis of numerous formulae derived by Hauptman & Karle (1953). The saddlepoint approximation was introduced by Bricogne (1984) and was shown to be related to variational methods involving the maximization of certain entropy criteria. This connection exhibits most of the properties of the Fourier transform at play simultaneously, and will now be described as a final illustration.

• (a) Definitions and conventions

Let H be a set of unique non-origin reflections h for a crystal with lattice Λ and space group G. Let H contain acentric and centric reflections. Structure-factor values attached to all reflections in H will comprise real numbers. For h acentric, and will be the real and imaginary parts of the complex structure factor; for h centric, will be the real coordinate of the (possibly complex) structure factor measured along a real axis rotated by one of the two angles , π apart, to which the phase is restricted modulo (Section 1.3.4.2.2.5). These n real coordinates will be arranged as a column vector containing the acentric then the centric data, i.e. in the order

• (b) Vectors of trigonometric structure-factor expressions

Let denote the vector of trigonometric structure-factor expressions associated with , where D denotes the asymmetric unit. These are defined as follows: where

According to the convention above, the coordinates of in will be arranged in a column vector as follows:

• (c) Distributions of random atoms and moment-generating functions

Let position x in D now become a random vector with probability density . Then becomes itself a random vector in , whose distribution is the image of distribution through the mapping just defined. The locus of in is a compact algebraic manifold (the multidimensional analogue of a Lissajous curve), so that p is a singular measure (a distribution of order 0, Section 1.3.2.3.4, concentrated on that manifold) with compact support. The average with respect to p of any function Ω over which is infinitely differentiable in a neighbourhood of may be calculated as an average with respect to m over D by the `induction formula':

In particular, one can calculate the moment-generating function M for distribution p as, and hence calculate the moments μ (respectively cumulants κ) of p by differentiation of M (respectively log M) at : The structure-factor algebra for group G (Section 1.3.4.2.2.9) then allows one to express products of 's as linear combinations of other 's, and hence to express all moments and cumulants of distribution as linear combinations of real and imaginary parts of Fourier coefficients of the prior distribution of atoms . This plays a key role in the use of nonuniform distributions of atoms.

• (d) The joint probability distribution of structure factors

In the random-atom model of an equal-atom structure, N atoms are placed randomly, independently of each other, in the asymmetric unit D of the crystal with probability density . For point atoms of unit weight, the vector F of structure-factor values for reflections may be written, where the N copies of random vector ξ are independent and have the same distribution .
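A minimal numerical illustration of this random-atom model (one-dimensional, with assumed values of N, the reflection index h and the trial count; none are from the text): place N unit point atoms uniformly at random in [0, 1) and accumulate the structure factor F(h) = Σ exp(2πi h x). For a uniform prior the real and imaginary parts should have zero mean and variance N/2, the familiar Wilson-statistics result.

```python
import cmath
import random

random.seed(0)
N, h, trials = 64, 3, 4000          # atoms per structure, reflection index, samples

re_vals = []
for _ in range(trials):
    # One random equal-atom structure: N independent uniform positions
    F = sum(cmath.exp(2j * cmath.pi * h * random.random()) for _ in range(N))
    re_vals.append(F.real)

mean_re = sum(re_vals) / trials
var_re = sum(v * v for v in re_vals) / trials
print(mean_re, var_re)              # roughly 0 and N/2 = 32
```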

The joint probability distribution is then [Section 1.3.4.5.2.1(e)]

For low dimensionality n it is possible to carry out the Fourier transformation numerically after discretization, provided is sampled sufficiently finely that no aliasing results from taking its Nth power (Barakat, 1974). This exact approach can also accommodate heterogeneity, and was used first in the field of intensity statistics (Shmueli et al., 1984, 1985; Shmueli & Weiss, 1987, 1988), then in the study of the and relations in triclinic space groups (Shmueli & Weiss, 1985, 1986). Some of these applications are described in Chapter 2.1 of this volume. This method could be extended to the construction of any joint probability distribution (j.p.d.) in any space group by using the generic expression for the moment-generating function (m.g.f.) derived by Bricogne (1984). It is, however, limited to small values of n by the necessity of carrying out n-dimensional FFTs on large arrays of sample values.
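The discretize-and-FFT route can be sketched in one dimension for the classical random-walk sum A = Σ cos θ with independent uniform phases, a stand-in for one structure-factor component (N and all grid sizes below are illustrative assumptions). One samples the single-term characteristic function on a grid, raises it to the Nth power, and inverts by FFT, choosing the grid fine enough that the Nth power is not aliased:

```python
import numpy as np

N = 16              # number of random cosine terms (illustrative)
L = 2.0 * N         # the sum lies in [-N, N], so a window of width 2N suffices
M = 1024            # number of sample points for the density and for phi

# Frequency grid conjugate to an A grid of spacing L/M (FFT ordering)
t = 2.0 * np.pi * np.fft.fftfreq(M, d=L / M)

# Characteristic function of a single term cos(theta), theta ~ U(0, 2*pi),
# evaluated by averaging exp(i*t*cos(theta)) over a fine theta grid
theta = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
phi = np.exp(1j * np.outer(t, np.cos(theta))).mean(axis=1)

# N independent terms multiply characteristic functions: phi**N.
# The (-1)**j phase recentres the inverted density on [-L/2, L/2).
j = np.rint(np.fft.fftfreq(M) * M).astype(int)
p = np.fft.fft(phi ** N * (-1.0) ** j).real / L
A = -L / 2.0 + (L / M) * np.arange(M)

dA = L / M
print("normalization:", p.sum() * dA)        # should be ~1
print("variance:", (p * A ** 2).sum() * dA)  # should be ~N/2
```

The two printed checks are cheap aliasing diagnostics: they degrade first if N is increased without also enlarging M or L.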

The asymptotic expansions of Gram–Charlier and Edgeworth have good convergence properties only if lies in the vicinity of for all . Previous work on the j.p.d. of structure factors has used for a uniform distribution, so that ; as a result, the corresponding expansions are accurate only if all moduli are small, in which case the j.p.d. contains little phase information.

The saddlepoint method [Section 1.3.4.5.2.1(f)] constitutes the method of choice for evaluating the joint probability of structure factors when some of the moduli in are large. As shown previously, this approximation amounts to using the `conjugate distribution' instead of the original distribution for the distribution of random vector ξ. This conjugate distribution is induced from the modified distribution of atoms where, by the induction formula, may be written as, and where τ is the unique solution of the saddlepoint equation: The desired approximation is then where and where
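In the simplest non-trivial setting — a single one-dimensional reflection, atoms a priori uniform on [0, 1), trigonometric expression ξ(x) = cos 2πx — the moment-generating function is the modified Bessel function I0(t), the conjugate distribution of atoms is the von Mises density exp(τ cos 2πx)/I0(τ), and the saddlepoint equation reduces to N I1(τ)/I0(τ) = F*. The sketch below (all numerical values are assumed for illustration) solves it by bisection:

```python
import math

def bessel_i(n, t, K=512):
    """Modified Bessel function I_n(t) from its integral representation
    I_n(t) = (1/pi) * integral_0^pi exp(t*cos u)*cos(n*u) du (midpoint rule)."""
    return sum(math.exp(t * math.cos(u)) * math.cos(n * u)
               for u in ((k + 0.5) * math.pi / K for k in range(K))) / K

def solve_saddlepoint(F, N, hi=50.0, iters=100):
    """Bisection for tau in N*I1(tau)/I0(tau) = F, with 0 <= F < N.
    The left-hand side grows monotonically from 0 towards N."""
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if N * bessel_i(1, mid) / bessel_i(0, mid) < F:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N, F = 100, 60.0     # illustrative: N atoms, observed amplitude F* = 60
tau = solve_saddlepoint(F, N)
# Conjugate (maximum-entropy) density of atoms: q(x) = exp(tau*cos(2*pi*x)) / I0(tau)
print("tau =", tau, " residual check:", N * bessel_i(1, tau) / bessel_i(0, tau) - F)
```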

Finally, the elements of the Hessian matrix are just the trigonometric second-order cumulants of distribution p, and hence can be calculated via structure-factor algebra from the Fourier coefficients of . All the quantities involved in the expression for are therefore effectively computable from the initial data and .

• (e) Maximum-entropy distributions of atoms

One of the main results in Bricogne (1984) is that the modified distribution in (SP1) is the unique distribution which has maximum entropy relative to , where, under the constraint that be the centroid vector of the corresponding conjugate distribution . The traditional notation of maximum-entropy (ME) theory (Jaynes, 1957, 1968, 1983) is in this case (Bricogne, 1984), so that Z is identical to the m.g.f. M, and the coordinates of the saddlepoint are the Lagrange multipliers λ for the constraints .
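The variational origin of the exponential form can be sketched in that traditional notation (the Lagrangian below is the standard ME derivation; the symbols m, q, λ0, λ, Z and the per-atom centroid constraint F*/N are assumed for illustration rather than quoted from the text): maximizing the relative entropy subject to normalization and the centroid constraints gives

```latex
\mathcal{L}[q] \;=\; -\int_D q(\mathbf{x})\,\log\frac{q(\mathbf{x})}{m(\mathbf{x})}\,\mathrm{d}^3\mathbf{x}
\;+\;\lambda_0\Bigl(\int_D q\,\mathrm{d}^3\mathbf{x}-1\Bigr)
\;+\;\boldsymbol{\lambda}\cdot\Bigl(\int_D q\,\boldsymbol{\xi}\,\mathrm{d}^3\mathbf{x}-\mathbf{F}^{*}/N\Bigr),
\qquad
\frac{\delta\mathcal{L}}{\delta q}=0
\;\Longrightarrow\;
q(\mathbf{x})=\frac{m(\mathbf{x})\,\exp\bigl[\boldsymbol{\lambda}\cdot\boldsymbol{\xi}(\mathbf{x})\bigr]}{Z(\boldsymbol{\lambda})},
\quad
Z(\boldsymbol{\lambda})=\int_D m(\mathbf{x})\,\exp\bigl[\boldsymbol{\lambda}\cdot\boldsymbol{\xi}(\mathbf{x})\bigr]\,\mathrm{d}^3\mathbf{x},
```

so that the partition function Z coincides with the moment-generating function and the multipliers λ with the saddlepoint coordinates.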

Jaynes's ME theory also gives an estimate for : where is the total entropy and is the counterpart to under the equivalence just established.

is identical to , but lacks the denominator. The latter, which is the normalization factor of a multivariate Gaussian with covariance matrix , may easily be seen to arise through Szegö's theorem (Sections 1.3.2.6.9.4, 1.3.4.2.1.10) from the extra logarithmic term in Stirling's formula (see, for instance, Reif, 1965) beyond the first two terms which serve to define entropy, since. The relative effect of this extra normalization factor depends on the ratio
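The size of that extra logarithmic term is easy to check numerically; the snippet below (with an illustrative value of n) compares the two-term, entropy-defining truncation of Stirling's formula with the version carrying the extra ½ log 2πn correction.

```python
import math

n = 50
exact = math.lgamma(n + 1)                       # log n!, computed exactly
two_terms = n * math.log(n) - n                  # n log n - n: the entropy-defining part
correction = 0.5 * math.log(2.0 * math.pi * n)   # the extra logarithmic term
print(exact - two_terms, correction)             # the two agree to O(1/n)
```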

The above relation between entropy maximization and the saddlepoint approximation is the basis of a Bayesian statistical approach to the phase problem (Bricogne, 1988) where the assumptions under which joint distributions of structure factors are sought incorporate many new ingredients (such as molecular boundaries, isomorphous substitutions, known fragments, noncrystallographic symmetries, multiple crystal forms) besides trial phase choices for basis reflections. The ME criterion intervenes in the construction of under these assumptions, and the distribution is a very useful computational intermediate in obtaining the approximate joint probability and the associated conditional distributions and likelihood functions.

• (f) Role of the Fourier transformation

The formal developments presented above make use of the following properties of the Fourier transformation:

(i) the convolution theorem, which turns the convolution of probability distributions into the multiplication of their characteristic functions;

(ii) the differentiation property, which confers moment-generating properties on characteristic functions;

(iii) the reciprocity theorem, which allows the retrieval of a probability distribution from its characteristic or moment-generating function;

(iv) the Paley–Wiener theorem, which allows the analytic continuation of characteristic functions associated with probability distributions with compact support, and thus gives rise to conjugate families of distributions;

(v) Bertaut's structure-factor algebra (a discrete symmetrized version of the convolution theorem), which allows the calculation of all necessary moments and cumulants when the dimension n is small;

(vi) Szegö's theorem, which provides an asymptotic approximation of the normalization factor when n is large.

This multi-faceted application seems an appropriate point at which to end this description of the Fourier transformation and of its use in crystallography.

### References

Agarwal, R. C. (1978). A new least-squares refinement technique based on the fast Fourier transform algorithm. Acta Cryst. A34, 791–809.
Agarwal, R. C. (1980). The refinement of crystal structure by fast Fourier least-squares. In Computing in Crystallography, edited by R. Diamond, S. Ramaseshan & K. Venkatesan, pp. 18.01–18.13. Bangalore: The Indian Academy of Science.
Agarwal, R. C. (1981). New results on fast Fourier least-squares refinement technique. In Refinement of Protein Structures, compiled by P. A. Machin, J. W. Campbell & M. Elder (ref. DL/SCI/R16), pp. 24–28. Warrington: SERC Daresbury Laboratory.
Agarwal, R. C., Lifchitz, A. & Dodson, E. J. (1981). Appendix (pp. 36–38) to Dodson (1981).
Ahmed, F. R. & Barnes, W. H. (1958). Generalized programmes for crystallographic computations. Acta Cryst. 11, 669–671.
Ahmed, F. R. & Cruickshank, D. W. J. (1953a). A refinement of the crystal structure analysis of oxalic acid dihydrate. Acta Cryst. 6, 385–392.
Ahmed, F. R. & Cruickshank, D. W. J. (1953b). Crystallographic calculations on the Manchester University electronic digital computer (Mark II). Acta Cryst. 6, 765–769.
Alston, N. A. & West, J. (1929). The structure of topaz, [Al(F,OH)]2SiO4. Z. Kristallogr. 69, 149–167.
Arnold, H. (2005). Transformations in crystallography. In International Tables for Crystallography, Vol. A. Space-Group Symmetry, edited by Th. Hahn, Chapter 5.1. Heidelberg: Springer.
Artin, E. (1944). Galois Theory. Notre Dame University Press.
Ascher, E. & Janner, A. (1965). Algebraic aspects of crystallography. I. Space groups as extensions. Helv. Phys. Acta, 38, 551–572.
Ascher, E. & Janner, A. (1968). Algebraic aspects of crystallography. II. Non-primitive translations in space groups. Commun. Math. Phys. 11, 138–167.
Ash, J. M. (1976). Multiple trigonometric series. In Studies in Harmonic Analysis, edited by J. M. Ash, pp. 76–96. MAA studies in mathematics, Vol. 13. The Mathematical Association of America.
Auslander, L. (1965). An account of the theory of crystallographic groups. Proc. Am. Math. Soc. 16, 1230–1236.
Auslander, L., Feig, E. & Winograd, S. (1982). New algorithms for evaluating the 3-dimensional discrete Fourier transform. In Computational Crystallography, edited by D. Sayre, pp. 451–461. New York: Oxford University Press.
Auslander, L., Johnson, R. W. & Vulis, M. (1988). Evaluating finite Fourier transforms that respect group symmetries. Acta Cryst. A44, 467–478.
Auslander, L. & Shenefelt, M. (1987). Fourier transforms that respect crystallographic symmetries. IBM J. Res. Dev. 31, 213–223.
Baker, E. N. & Dodson, E. J. (1980). Crystallographic refinement of the structure of actinidin at 1.7 Å resolution by fast Fourier least-squares methods. Acta Cryst. A36, 559–572.
Bantz, D. & Zwick, M. (1974). The use of symmetry with the fast Fourier algorithm. Acta Cryst. A30, 257–260.
Barakat, R. (1974). First-order statistics of combined random sinusoidal waves with applications to laser speckle patterns. Opt. Acta, 21, 903–921.
Barrett, A. N. & Zwick, M. (1971). A method for the extension and refinement of crystallographic protein phases utilizing the fast Fourier transform. Acta Cryst. A27, 6–11.
Beevers, C. A. (1952). Fourier strips at a 3° interval. Acta Cryst. 5, 670–673.
Beevers, C. A. & Lipson, H. (1934). A rapid method for the summation of a two-dimensional Fourier series. Philos. Mag. 17, 855–859.
Beevers, C. A. & Lipson, H. (1936). A numerical method for two-dimensional Fourier synthesis. Nature (London), 137, 825–826.
Beevers, C. A. & Lipson, H. (1952). The use of Fourier strips for calculating structure factors. Acta Cryst. 5, 673–675.
Bennett, J. M. & Kendrew, J. C. (1952). The computation of Fourier syntheses with a digital electronic calculating machine. Acta Cryst. 5, 109–116.
Bertaut, E. F. (1955a). La méthode statistique en cristallographie. I. Acta Cryst. 8, 537–543.
Bertaut, E. F. (1955b). La méthode statistique en cristallographie. II. Quelques applications. Acta Cryst. 8, 544–548.
Bertaut, E. F. (1955c). Fonction de répartition: application à l'approche directe des structures. Acta Cryst. 8, 823–832.
Bertaut, E. F. (1956a). Les groupes de translation non primitifs et la méthode statistique. Acta Cryst. 9, 322.
Bertaut, E. F. (1956b). Tables de linéarisation des produits et puissances des facteurs de structure. Acta Cryst. 9, 322–323.
Bertaut, E. F. (1956c). Algèbre des facteurs de structure. Acta Cryst. 9, 769–770.
Bertaut, E. F. (1959a). IV. Algèbre des facteurs de structure. Acta Cryst. 12, 541–549.
Bertaut, E. F. (1959b). V. Algèbre des facteurs de structure. Acta Cryst. 12, 570–574.
Bertaut, E. F. & Waser, J. (1957). Structure factor algebra. II. Acta Cryst. 10, 606–607.
Bhattacharya, R. N. & Rao, R. R. (1976). Normal Approximation and Asymptotic Expansions. New York: John Wiley.
Bieberbach, L. (1911). Über die Bewegungsgruppen der Euklidischen Raume I. Math. Ann. 70, 297–336.
Bieberbach, L. (1912). Über die Bewegungsgruppen der Euklidischen Raume II. Math. Ann. 72, 400–412.
Bienenstock, A. & Ewald, P. P. (1962). Symmetry of Fourier space. Acta Cryst. 15, 1253–1261.
Bleistein, N. & Handelsman, R. A. (1986). Asymptotic Expansions of Integrals. New York: Dover Publications.
Bloomer, A. C., Champness, J. N., Bricogne, G., Staden, R. & Klug, A. (1978). Protein disk of tobacco mosaic virus at 2.8 Ångström resolution showing the interactions within and between subunits. Nature (London), 276, 362–368.
Blow, D. M. & Crick, F. H. C. (1959). The treatment of errors in the isomorphous replacement method. Acta Cryst. 12, 794–802.
Bode, W. & Schwager, P. (1975). The refined crystal structure of bovine β-trypsin at 1.8 Å resolution. II. Crystallographic refinement, calcium-binding site, benzamidine binding site and active site at pH 7.0. J. Mol. Biol. 98, 693–717.
Bondot, P. (1971). Application de la transformée de Fourier performante aux calculs cristallographiques. Acta Cryst. A27, 492–494.
Booth, A. D. (1945a). Two new modifications of the Fourier method of X-ray structure analysis. Trans. Faraday Soc. 41, 434–438.
Booth, A. D. (1945b). An expression for following the process of refinement in X-ray structure analysis using Fourier series. Philos. Mag. 36, 609–615.
Booth, A. D. (1945c). Accuracy of atomic co-ordinates derived from Fourier synthesis. Nature (London), 156, 51–52.
Booth, A. D. (1946a). A differential Fourier method for refining atomic parameters in crystal structure analysis. Trans. Faraday Soc. 42, 444–448.
Booth, A. D. (1946b). The simultaneous differential refinement of co-ordinates and phase angles in X-ray Fourier synthesis. Trans. Faraday Soc. 42, 617–619.
Booth, A. D. (1946c). The accuracy of atomic co-ordinates derived from Fourier series in X-ray structure analysis. Proc. R. Soc. London Ser. A, 188, 77–92.
Booth, A. D. (1947a). The accuracy of atomic co-ordinates derived from Fourier series in X-ray structure analysis. III. Proc. R. Soc. London Ser. A, 190, 482–489.
Booth, A. D. (1947b). The accuracy of atomic co-ordinates derived from Fourier series in X-ray structure analysis. IV. The two-dimensional projection of oxalic acid. Proc. R. Soc. London Ser. A, 190, 490–496.
Booth, A. D. (1947c). A new refinement technique for X-ray structure analysis. J. Chem. Phys. 15, 415–416.
Booth, A. D. (1947d). Application of the method of steepest descents to X-ray structure analysis. Nature (London), 160, 196.
Booth, A. D. (1948a). A new Fourier refinement technique. Nature (London), 161, 765–766.
Booth, A. D. (1948b). Fourier Technique in X-ray Organic Structure Analysis. Cambridge University Press.
Booth, A. D. (1949). The refinement of atomic parameters by the technique known in X-ray crystallography as `the method of steepest descents'. Proc. R. Soc. London Ser. A, 197, 336–355.
Bragg, W. H. (1915). X-rays and crystal structure. (Bakerian Lecture.) Philos. Trans. R. Soc. London Ser. A, 215, 253–274.
Bragg, W. L. (1929). Determination of parameters in crystal structures by means of Fourier series. Proc. R. Soc. London Ser. A, 123, 537–559.
Bragg, W. L. (1975). The Development of X-ray Analysis, edited by D. C. Phillips & H. Lipson. London: Bell.
Bragg, W. L. & Lipson, H. (1936). The employment of contoured graphs of structure-factor in crystal analysis. Z. Kristallogr. 95, 323–337.
Bragg, W. L. & West, J. (1929). A technique for the X-ray examination of crystal structures with many parameters. Z. Kristallogr. 69, 118–148.
Bragg, W. L. & West, J. (1930). A note on the representation of crystal structure by Fourier series. Philos. Mag. 10, 823–841.
Bricogne, G. (1974). Geometric sources of redundancy in intensity data and their use for phase determination. Acta Cryst. A30, 395–405.
Bricogne, G. (1976). Methods and programs for direct-space exploitation of geometric redundancies. Acta Cryst. A32, 832–847.
Bricogne, G. (1982). Generalised density modification methods. In Computational Crystallography, edited by D. Sayre, pp. 258–264. New York: Oxford University Press.
Bricogne, G. (1984). Maximum entropy and the foundations of direct methods. Acta Cryst. A40, 410–445.
Bricogne, G. (1988). A Bayesian statistical theory of the phase problem. I. A multichannel maximum entropy formalism for constructing generalised joint probability distributions of structure factors. Acta Cryst. A44, 517–545.
Bricogne, G. & Tolimieri, R. (1990). Two-dimensional FFT algorithms on data admitting 90°-rotational symmetry. In Signal Processing Theory, edited by L. Auslander, T. Kailath & S. Mitter, pp. 25–35. New York: Springer-Verlag.
Brill, R., Grimm, H., Hermann, C. & Peters, C. (1939). Anwendung der röntgenographischen Fourieranalyse auf Fragen der chemischen Bindung. Ann. Phys. (Leipzig), 34, 393–445.
Britten, P. L. & Collins, D. M. (1982). Information theory as a basis for the maximum determinant. Acta Cryst. A38, 129–132.
Brown, H. (1969). An algorithm for the determination of space groups. Math. Comput. 23, 499–514.
Brown, H., Bülow, R., Neubüser, J., Wondratschek, H. & Zassenhaus, H. (1978). Crystallographic Groups of Four-Dimensional Space. New York: John Wiley.
Bruijn, N. G. de (1970). Asymptotic Methods in Analysis, 3rd ed. Amsterdam: North-Holland.
Bryan, R. K., Bansal, M., Folkhard, W., Nave, C. & Marvin, D. A. (1983). Maximum-entropy calculation of the electron density at 4 Å resolution of Pf1 filamentous bacteriophage. Proc. Natl Acad. Sci. USA, 80, 4728–4731.
Bryan, R. K. & Skilling, J. (1980). Deconvolution by maximum entropy, as illustrated by application to the jet of M87. Mon. Not. R. Astron. Soc. 191, 69–79.
Brünger, A. T. (1988). Crystallographic refinement by simulated annealing. In Crystallographic Computing 4: Techniques and New Technologies, edited by N. W. Isaacs & M. R. Taylor, pp. 126–140. New York: Oxford University Press.
Brünger, A. T. (1989). A memory-efficient fast Fourier transformation algorithm for crystallographic refinement on supercomputers. Acta Cryst. A45, 42–50.
Brünger, A. T., Karplus, M. & Petsko, G. A. (1989). Crystallographic refinement by simulated annealing: application to crambin. Acta Cryst. A45, 50–61.
Brünger, A. T., Kuriyan, J. & Karplus, M. (1987). Crystallographic R factor refinement by molecular dynamics. Science, 235, 458–460.
Burch, S. F., Gull, S. F. & Skilling, J. (1983). Image restoration by a powerful maximum entropy method. Comput. Vision Graphics Image Process. 23, 113–128.
Burnett, R. M. & Nordman, C. E. (1974). Optimization of the calculation of structure factors for large molecules. J. Appl. Cryst. 7, 625–627.
Burnside, W. (1911). Theory of Groups of Finite Order, 2nd ed. Cambridge University Press.
Busing, W. R. & Levy, H. A. (1961). Least squares refinement programs for the IBM 704. In Computing Methods and the Phase Problem in X-ray Crystal Analysis, edited by R. Pepinsky, J. M. Robertson & J. C. Speakman, pp. 146–149. Oxford: Pergamon Press.
Byerly, W. E. (1893). An Elementary Treatise on Fourier's Series and Spherical, Cylindrical and Ellipsoidal Harmonics. Boston: Ginn & Co. [Reprinted by Dover Publications, New York, 1959.]
Cochran, W. (1948a). A critical examination of the Beevers–Lipson method of Fourier series summation. Acta Cryst. 1, 54–56.
Cochran, W. (1948b). The Fourier method of crystal structure analysis. Nature (London), 161, 765.
Cochran, W. (1948c). The Fourier method of crystal-structure analysis. Acta Cryst. 1, 138–142.
Cochran, W. (1948d). X-ray analysis and the method of steepest descents. Acta Cryst. 1, 273.
Cochran, W. (1951a). Some properties of the -synthesis. Acta Cryst. 4, 408–411.
Cochran, W. (1951b). The structures of pyrimidines and purines. V. The electron distribution in adenine hydrochloride. Acta Cryst. 4, 81–92.
Cochran, W., Crick, F. H. C. & Vand, V. (1952). The structure of synthetic polypeptides. I. The transform of atoms on a helix. Acta Cryst. 5, 581–586.
Collins, D. M. (1975). Efficiency in Fourier phase refinement for protein crystal structures. Acta Cryst. A31, 388–389.
Collins, D. M., Brice, M. D., Lacour, T. F. M. & Legg, M. J. (1976). Fourier phase refinement and extension by modification of electron-density maps. In Crystallographic Computing Techniques, edited by F. R. Ahmed, pp. 330–335. Copenhagen: Munksgaard.
Colman, P. M. (1974). Non-crystallographic symmetry and the sampling theorem. Z. Kristallogr. 140, 344–349.
Cox, E. G. & Cruickshank, D. W. J. (1948). The accuracy of electron-density maps in X-ray structure analysis. Acta Cryst. 1, 92–93.
Cox, E. G., Gross, L. & Jeffrey, G. A. (1947). A Hollerith punched-card method for the evaluation of electron density in crystal structure analysis. Proc. Leeds Philos. Soc. 5, 1–13.
Cox, E. G., Gross, L. & Jeffrey, G. A. (1949). A Hollerith technique for computing three-dimensional differential Fourier syntheses in X-ray crystal structure analysis. Acta Cryst. 2, 351–355.
Cox, E. G. & Jeffrey, G. A. (1949). The use of Hollerith computing equipment in crystal structure analysis. Acta Cryst. 2, 341–343.
Coxeter, H. S. M. & Moser, W. O. J. (1972). Generators and Relations for Discrete Groups, 3rd ed. Berlin: Springer-Verlag.
Cramér, H. (1946). Mathematical Methods of Statistics. Princeton University Press.
Crowther, R. A. (1967). A linear analysis of the non-crystallographic symmetry problem. Acta Cryst. 22, 758–764.
Crowther, R. A. (1969). The use of non-crystallographic symmetry for phase determination. Acta Cryst. B25, 2571–2580.
Crowther, R. A. (1972). The fast rotation function. In The Molecular Replacement Method, edited by M. G. Rossmann, pp. 173–178. New York: Gordon & Breach.
Crowther, R. A. & Blow, D. M. (1967). A method of positioning a known molecule in an unknown crystal structure. Acta Cryst. 23, 544–548.
Cruickshank, D. W. J. (1949a). The accuracy of electron-density maps in X-ray analysis with special reference to dibenzyl. Acta Cryst. 2, 65–82.
Cruickshank, D. W. J. (1949b). The accuracy of atomic co-ordinates derived by least-squares or Fourier methods. Acta Cryst. 2, 154–157.
Cruickshank, D. W. J. (1950). The convergence of the least-squares and Fourier refinement methods. Acta Cryst. 3, 10–13.
Cruickshank, D. W. J. (1952). On the relations between Fourier and least-squares methods of structure determination. Acta Cryst. 5, 511–518.
Cruickshank, D. W. J. (1956). The determination of the anisotropic thermal motion of atoms in crystals. Acta Cryst. 9, 747–753.
Cruickshank, D. W. J. (1965a). Errors in Fourier series. In Computing Methods in Crystallography, edited by J. S. Rollett, pp. 107–111. Oxford: Pergamon Press.
Cruickshank, D. W. J. (1965b). Errors in least-squares methods. In Computing Methods in Crystallography, edited by J. S. Rollett, pp. 112–116. Oxford: Pergamon Press.
Cruickshank, D. W. J. (1970). Least-squares refinement of atomic parameters. In Crystallographic Computing, edited by F. R. Ahmed, pp. 187–197. Copenhagen: Munksgaard.
Cruickshank, D. W. J., Pilling, D. E., Bujosa, A., Lovell, F. M. & Truter, M. R. (1961). Crystallographic calculations on the Ferranti Pegasus and mark I computers. In Computing Methods and the Phase Problem in X-Ray Crystal Analysis, edited by R. Pepinsky, J. M. Robertson & J. C. Speakman, pp. 32–78. Oxford: Pergamon Press.
Cruickshank, D. W. J. & Rollett, J. S. (1953). Electron-density errors at special positions. Acta Cryst. 6, 705–707.
Curtis, C. W. & Reiner, I. (1962). Representation Theory of Finite Groups and Associative Algebras. New York: Wiley–Interscience.
Daniels, H. E. (1954). Saddlepoint approximation in statistics. Ann. Math. Stat. 25, 631–650.
Deisenhofer, J. & Steigemann, W. (1975). Crystallographic refinement of the structure of bovine pancreatic trypsin inhibitor at 1.5 Å resolution. Acta Cryst. B31, 238–250.
Diamond, R. (1971). A real-space refinement procedure for proteins. Acta Cryst. A27, 436–452.
Dickerson, R. E., Kendrew, J. C. & Strandberg, B. E. (1961a). The phase problem and isomorphous replacement methods in protein structures. In Computing Methods and the Phase Problem in X-Ray Crystal Analysis, edited by R. Pepinsky, J. M. Robertson & J. C. Speakman, pp. 236–251. Oxford: Pergamon Press.
Dickerson, R. E., Kendrew, J. C. & Strandberg, B. E. (1961b). The crystal structure of myoglobin: phase determination to a resolution of 2 Å by the method of isomorphous replacement. Acta Cryst. 14, 1188–1195.
Dietrich, H. (1972). A reconsideration of Fourier methods for the refinement of crystal structures. Acta Cryst. B28, 2807–2814.
Dodson, E. J. (1981). Block diagonal least squares refinement using fast Fourier techniques. In Refinement of Protein Structures, compiled by P. A. Machin, J. W. Campbell & M. Elder (ref. DL/SCI/R16), pp. 29–39. Warrington: SERC Daresbury Laboratory.
Donohue, J. & Schomaker, V. (1949). The use of punched cards in molecular structure determinations. III. Structure factor calculations of X-ray crystallography. Acta Cryst. 2, 344–347.
Duane, W. (1925). The calculation of the X-ray diffracting power at points in a crystal. Proc. Natl Acad. Sci. USA, 11, 489–493.
Engel, P. (1986). Geometric Crystallography. Dordrecht: Kluwer Academic Publishers.
Ewald, P. P. (1940). X-ray diffraction by finite and imperfect crystal lattices. Proc. Phys. Soc. London, 52, 167–174.
Ewald, P. P. (1962). Fifty Years of X-ray Diffraction. Dordrecht: Kluwer Academic Publishers.
Farkas, D. R. (1981). Crystallographic groups and their mathematics. Rocky Mountain J. Math. 11, 511–551.
Forsyth, J. B. & Wells, M. (1959). On an analytical approximation to the atomic scattering factor. Acta Cryst. 12, 412–415.
Fowler, R. H. (1936). Statistical Mechanics, 2nd ed. Cambridge University Press.
Fowweather, F. (1955). The use of general programmes for crystallographic calculations on the Manchester University electronic digital computer (Mark II). Acta Cryst. 8, 633–637.
Freer, S. T., Alden, R. A., Levens, S. A. & Kraut, J. (1976). Refinement of five protein structures by constrained Fo − Fc Fourier methods. In Crystallographic Computing Techniques, edited by F. R. Ahmed, pp. 317–321. Copenhagen: Munksgaard.
Friedel, G. (1913). Sur les symétries cristallines que peut révéler la diffraction des rayons Röntgen. C. R. Acad. Sci. Paris, 157, 1533–1536.
Friedlander, P. H., Love, W. & Sayre, D. (1955). Least-squares refinement at high speed. Acta Cryst. 8, 732.
Frobenius, G. (1911). Über die unzerlegbaren diskreten Bewegungsgruppen. Sitzungsber. Preuss. Akad. Wiss. Berlin, 29, 654–665.
Gassmann, J. (1976). Improvement and extension of approximate phase sets in structure determination. In Crystallographic Computing Techniques, edited by F. R. Ahmed, pp. 144–154. Copenhagen: Munksgaard.
Gassmann, J. & Zechmeister, K. (1972). Limits of phase expansion in direct methods. Acta Cryst. A28, 270–280.
Gentleman, W. M. & Sande, G. (1966). Fast Fourier transforms – for fun and profit. In AFIPS Proc. 1966 Fall Joint Computer Conference, pp. 563–578. Washington: Spartan Books.
Gillis, J. (1948a). Structure factor relations and phase determination. Acta Cryst. 1, 76–80.
Gillis, J. (1948b). The application of the Harker–Kasper method of phase determination. Acta Cryst. 1, 174–179.
Goedkoop, J. A. (1950). Remarks on the theory of phase-limiting inequalities and equalities. Acta Cryst. 3, 374–378.
Greenhalgh, D. M. S. & Jeffrey, G. A. (1950). A new punched card method of Fourier synthesis. Acta Cryst. 3, 311–312.
Grems, M. D. & Kasper, J. S. (1949). An improved punched-card method for crystal structure-factor calculations. Acta Cryst. 2, 347–351.
Hall, M. (1959). The Theory of Groups. New York: Macmillan.
Harker, D. (1936). The application of the three-dimensional Patterson method and the crystal structures of proustite, Ag3AsS3, and pyrargyrite, Ag3SbS3. J. Chem. Phys. 4, 381–390.
Harker, D. & Kasper, J. S. (1948). Phases of Fourier coefficients directly from crystal diffraction data. Acta Cryst. 1, 70–75.
Harrison, S. C., Olson, A. J., Schutt, C. E., Winkler, F. K. & Bricogne, G. (1978). Tomato bushy stunt virus at 2.9 Ångström resolution. Nature (London), 276, 368–373.
Hauptman, H. & Karle, J. (1953). Solution of the Phase Problem. I. The Centrosymmetric Crystal. ACA Monograph No. 3. Pittsburgh: Polycrystal Book Service.
Havighurst, R. J. (1925a). The distribution of diffracting power in sodium chloride. Proc. Natl Acad. Sci. USA, 11, 502–507.
Havighurst, R. J. (1925b). The distribution of diffracting power in certain crystals. Proc. Natl Acad. Sci. USA, 11, 507–512.
Hendrickson, W. A. & Konnert, J. H. (1980). Incorporation of stereo­chemical information into crystallographic refinement. In Computing in Crystallography, edited by R. Diamond, S. Ramaseshan & K. Venkatesan, pp. 13.01–13.26. Bangalore: The Indian Academy of Science.
Hodgson, M. L., Clews, C. J. B. & Cochran, W. (1949). A punched card modification of the Beevers–Lipson method of Fourier synthesis. Acta Cryst. 2, 113–116.
Hoppe, W. & Gassmann, J. (1968). Phase correction, a new method to solve partially known structures. Acta Cryst. B24, 97–107.
Hoppe, W., Gassmann, J. & Zechmeister, K. (1970). Some automatic procedures for the solution of crystal structures with direct methods and phase correction. In Crystallographic Computing, edited by F. R. Ahmed, S. R. Hall & C. P. Huber, pp. 26–36. Copenhagen: Munksgaard.
Huber, R., Kulka, D., Bode, W., Schwager, P., Bartels, K., Deisenhofer, J. & Steigemann, W. (1974). Structure of the complex formed by bovine trypsin and bovine pancreatic trypsin inhibitor. II. Crystallographic refinement at 1.9 Å resolution. J. Mol. Biol. 89, 73–101.
Hughes, E. W. (1941). The crystal structure of melamine. J. Am. Chem. Soc. 63, 1737–1752.
Hörmander, L. (1973). An Introduction to Complex Analysis in Several Variables, 2nd ed. Amsterdam: North-Holland.
Immirzi, A. (1973). A general Fourier program for X-ray crystal-structure analysis which utilizes the Cooley–Tukey algorithm. J. Appl. Cryst. 6, 246–249.
Immirzi, A. (1976). Fast Fourier transform in crystallography. In Crystallographic Computing Techniques, edited by F. R. Ahmed, pp. 399–412. Copenhagen: Munksgaard.
Isaacs, N. W. (1982a). The refinement of macromolecules. In Computational Crystallography, edited by D. Sayre, pp. 381–397. New York: Oxford University Press.
Isaacs, N. W. (1982b). Refinement techniques: use of the FFT. In Computational Crystallography, edited by D. Sayre, pp. 398–408. New York: Oxford University Press.
Isaacs, N. W. (1984). Refinement using the fast-Fourier transform least squares algorithm. In Methods and Applications in Crystallographic Computing, edited by S. R. Hall & T. Ashida, pp. 193–205. New York: Oxford University Press.
Isaacs, N. W. & Agarwal, R. C. (1978). Experience with fast Fourier least squares in the refinement of the crystal structure of rhombohedral 2-zinc insulin at 1.5 Å resolution. Acta Cryst. A34, 782–791.
Jack, A. (1973). Direct determination of X-ray phases for tobacco mosaic virus protein using non-crystallographic symmetry. Acta Cryst. A29, 545–554.
Jack, A. & Levitt, M. (1978). Refinement of large structures by simultaneous minimization of energy and R factor. Acta Cryst. A34, 931–935.
James, R. W. (1948a). The Optical Principles of the Diffraction of X-rays. London: Bell.
James, R. W. (1948b). False detail in three-dimensional Fourier representations of crystal structures. Acta Cryst. 1, 132–134.
Janssen, T. (1973). Crystallographic Groups. Amsterdam: North-Holland.
Jaynes, E. T. (1957). Information theory and statistical mechanics. Phys. Rev. 106, 620–630.
Jaynes, E. T. (1968). Prior probabilities. IEEE Trans. Syst. Sci. Cybern. SSC-4, 227–241.
Jaynes, E. T. (1983). Papers on Probability, Statistics and Statistical Physics. Dordrecht: Kluwer Academic Publishers.
Jürgensen, H. (1970). Calculation with the elements of a finite group given by generators and defining relations. In Computational Problems in Abstract Algebra, edited by J. Leech, pp. 47–57. Oxford: Pergamon Press.
Karle, J. & Hauptman, H. (1950). The phases and magnitudes of the structure factors. Acta Cryst. 3, 181–187.
Khinchin, A. I. (1949). Mathematical Foundations of Statistical Mechanics. New York: Dover Publications.
Kitz, N. & Marchington, B. (1953). A method of Fourier synthesis using a standard Hollerith senior rolling total tabulator. Acta Cryst. 6, 325–326.
Klug, A. (1958). Joint probability distributions of structure factors and the phase problem. Acta Cryst. 11, 515–543.
Klug, A., Crick, F. H. C. & Wyckoff, H. W. (1958). Diffraction by helical structures. Acta Cryst. 11, 199–213.
Kluyver, J. C. (1906). A local probability problem. K. Ned. Akad. Wet. Proc. 8, 341–350.
Konnert, J. H. (1976). A restrained-parameter structure-factor least-squares refinement procedure for large asymmetric units. Acta Cryst. A32, 614–617.
Konnert, J. H. & Hendrickson, W. A. (1980). A restrained-parameter thermal-factor refinement procedure. Acta Cryst. A36, 344–350.
Kuriyan, J., Brünger, A. T., Karplus, M. & Hendrickson, W. A. (1989). X-ray refinement of protein structures by simulated annealing: test of the method on myohemerythrin. Acta Cryst. A45, 396–409.
Laue, M. von (1936). Die äussere Form der Kristalle in ihrem Einfluss auf die Interferenzerscheinungen an Raumgittern. Ann. Phys. (Leipzig), 26, 55–68.
Ledermann, W. (1987). Introduction to Group Characters, 2nd ed. Cambridge University Press.
Leslie, A. G. W. (1987). A reciprocal-space method for calculating a molecular envelope using the algorithm of B. C. Wang. Acta Cryst. A43, 134–136.
Linnik, I. Ju. (1975). A multidimensional analogue of a limit theorem of G. Szegö. Math. USSR Izv. 9, 1323–1332.
Lipson, H. & Beevers, C. A. (1936). An improved numerical method of two-dimensional Fourier synthesis for crystals. Proc. Phys. Soc. London, 48, 772–780.
Lipson, H. & Cochran, W. (1953). The Determination of Crystal Structures. London: Bell.
Lipson, H. & Cochran, W. (1968). The Determination of Crystal Structures. Revised and enlarged edition. London: G. Bell & Sons.
Lipson, H. & Taylor, C. A. (1951). Optical methods in X-ray analysis. II. Fourier transforms and crystal-structure determination. Acta Cryst. 4, 458–462.
Lipson, H. & Taylor, C. A. (1958). Fourier Transforms and X-ray Diffraction. London: Bell.
Livesey, A. K. & Skilling, J. (1985). Maximum entropy theory. Acta Cryst. A41, 113–122.
Lonsdale, K. (1936). Simplified Structure Factor and Electron Density Formulae for the 230 Space Groups of Mathematical Crystallography. London: Bell.
Lunin, V. Yu. (1985). Use of the fast differentiation algorithm for phase refinement in protein crystallography. Acta Cryst. A41, 551–556.
MacGillavry, C. H. (1950). On the derivation of Harker–Kasper inequalities. Acta Cryst. 3, 214–217.
MacLane, S. (1963). Homology. Berlin: Springer-Verlag.
Magnus, W., Karrass, A. & Solitar, D. (1976). Combinatorial Group Theory: Presentations of Groups in Terms of Generators and Relations, 2nd revised ed. New York: Dover Publications.
Main, P. & Rossmann, M. G. (1966). Relationships among structure factors due to identical molecules in different crystallographic environments. Acta Cryst. 21, 67–72.
Main, P. & Woolfson, M. M. (1963). Direct determination of phases by the use of linear equations between structure factors. Acta Cryst. 16, 1046–1051.
Mayer, S. W. & Trueblood, K. N. (1953). Three-dimensional Fourier summations on a high-speed digital computer. Acta Cryst. 6, 427.
Montroll, E. W., Potts, R. B. & Ward, J. C. (1963). Correlations and spontaneous magnetization of the two-dimensional Ising model. J. Math. Phys. 4, 308–322.
Narayan, R. & Nityananda, R. (1982). The maximum determinant method and the maximum entropy method. Acta Cryst. A38, 122–128.
Navaza, J. (1985). On the maximum-entropy estimate of the electron density function. Acta Cryst. A41, 232–244.
Niggli, A. (1961). Small-scale computers in X-ray crystallography. In Computing Methods and the Phase Problem, edited by R. Pepinsky, J. M. Robertson & J. C. Speakman, pp. 12–20. Oxford: Pergamon Press.
Onsager, L. (1944). Crystal statistics. I. Two-dimensional model with an order–disorder transition. Phys. Rev. 65, 117–149.
Patterson, A. L. (1934). A Fourier series method for the determination of the components of interatomic distances in crystals. Phys. Rev. 46, 372–376.
Patterson, A. L. (1935a). A direct method for the determination of the components of interatomic distances in crystals. Z. Kristallogr. 90, 517–542.
Patterson, A. L. (1935b). Tabulated data for the seventeen plane groups. Z. Kristallogr. 90, 543–554.
Patterson, A. L. (1959). In International Tables for X-ray Crystallography, Vol. II, pp. 9–10. Erratum, January 1962. Birmingham: Kynoch Press.
Pearson, K. (1905). The problem of the random walk. Nature (London), 72, 294, 342.
Pepinsky, R. (1947). An electronic computer for X-ray crystal structure analyses. J. Appl. Phys. 18, 601–604.
Pepinsky, R. (1952). The use of positive kernels in Fourier syntheses of crystal structures. In Computing Methods and the Phase Problem in X-ray Crystal Analysis, edited by R. Pepinsky, pp. 319–338. State College: Penn. State University.
Pepinsky, R., van den Hende, J. & Vand, V. (1961). X-RAC and digital computing methods. In Computing Methods and the Phase Problem, edited by R. Pepinsky, J. M. Robertson & J. C. Speakman, pp. 154–160. Oxford: Pergamon Press.
Pepinsky, R. & Sayre, D. (1948). Quantitative electron-density contour delineation in the electronic Fourier synthesizer for crystal structure analysis. Nature (London), 162, 22–23.
Petrov, V. V. (1975). Sums of Independent Random Variables. Berlin: Springer-Verlag.
Qurashi, M. M. (1949). Optimal conditions for convergence of steepest descents as applied to structure determination. Acta Cryst. 2, 404–409.
Qurashi, M. M. (1953). An analysis of the efficiency of convergence of different methods of structure determination. I. The methods of least squares and steepest descents: centrosymmetric case. Acta Cryst. 6, 577–588.
Qurashi, M. M. & Vand, V. (1953). Weighting of the least-squares and steepest-descents methods in the initial stages of the crystal-structure determination. Acta Cryst. 6, 341–349.
Rayleigh (J. W. Strutt), Lord (1880). On the resultant of a large number of vibrations of the same pitch and arbitrary phase. Philos. Mag. 10, 73–78.
Rayleigh (J. W. Strutt), Lord (1899). On James Bernoulli's theorem in probabilities. Philos. Mag. 47, 246–251.
Rayleigh (J. W. Strutt), Lord (1905). The problem of the random walk. Nature (London), 72, 318.
Rayleigh (J. W. Strutt), Lord (1918). On the light emitted from a random distribution of luminous sources. Philos. Mag. 36, 429–449.
Rayleigh (J. W. Strutt), Lord (1919). On the problem of random flights in one, two or three dimensions. Philos. Mag. 37, 321–347.
Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, Appendix A.6. New York: McGraw-Hill.
Reijen, L. L. van (1942). Diffraction effects in Fourier syntheses and their elimination in X-ray structure investigations. Physica, 9, 461–480.
Rice, S. O. (1944, 1945). Mathematical analysis of random noise. Bell Syst. Tech. J. 23, 283–332 (parts I and II); 24, 46–156 (parts III and IV). [Reprinted in Selected Papers on Noise and Stochastic Processes (1954), edited by N. Wax, pp. 133–294. New York: Dover Publications.]
Robertson, J. M. (1932). A simple harmonic continuous calculating machine. Philos. Mag. 13, 413–419.
Robertson, J. M. (1935). An X-ray study of the structure of phthalocyanines. Part I. The metal-free, nickel, copper, and platinum compounds. J. Chem. Soc. pp. 615–621.
Robertson, J. M. (1936a). Numerical and mechanical methods in double Fourier synthesis. Philos. Mag. 21, 176–187.
Robertson, J. M. (1936b). Calculation of structure factors and summation of Fourier series in crystal analysis: non-centrosymmetrical projections. Nature (London), 138, 683–684.
Robertson, J. M. (1936c). An X-ray study of the phthalocyanines. Part II. Quantitative structure determination of the metal-free compound. J. Chem. Soc. pp. 1195–1209.
Robertson, J. M. (1937). X-ray analysis and application of Fourier series methods to molecular structures. Phys. Soc. Rep. Prog. Phys. 4, 332–367.
Robertson, J. M. (1954). A fast digital computer for Fourier operations. Acta Cryst. 7, 817–822.
Robertson, J. M. (1955). Some properties of Fourier strips with applications to the digital computer. Acta Cryst. 8, 286–288.
Robertson, J. M. (1961). A digital mechanical computer for Fourier operations. In Computing Methods and the Phase Problem, edited by R. Pepinsky, J. M. Robertson & J. C. Speakman, pp. 21–24. Oxford: Pergamon Press.
Rollett, J. S. (1965). Structure factor routines. In Computing Methods in Crystallography, edited by J. S. Rollett, pp. 38–46. Oxford: Pergamon Press.
Rollett, J. S. (1970). Least-squares procedures in crystal structure analysis. In Crystallographic Computing, edited by F. R. Ahmed, pp. 167–181. Copenhagen: Munksgaard.
Rollett, J. S. & Davies, D. R. (1955). The calculation of structure factors for centrosymmetric monoclinic systems with anisotropic atomic vibration. Acta Cryst. 8, 125–128.
Rossmann, M. G. & Blow, D. M. (1962). The detection of sub-units within the crystallographic asymmetric unit. Acta Cryst. 15, 24–31.
Rossmann, M. G. & Blow, D. M. (1963). Determination of phases by the conditions of non-crystallographic symmetry. Acta Cryst. 16, 39–45.
Sayre, D. (1951). The calculation of structure factors by Fourier summation. Acta Cryst. 4, 362–367.
Sayre, D. (1952a). The squaring method: a new method for phase determination. Acta Cryst. 5, 60–65.
Sayre, D. (1952b). Some implications of a theorem due to Shannon. Acta Cryst. 5, 843.
Sayre, D. (1952c). The Fourier transform in X-ray crystal analysis. In Computing Methods and the Phase Problem in X-ray Crystal Analysis, edited by R. Pepinsky, pp. 361–390. State College: Penn. State University.
Sayre, D. (1972). On least-squares refinement of the phases of crystallographic structure factors. Acta Cryst. A28, 210–212.
Sayre, D. (1974). Least-squares phase refinement. II. High-resolution phasing of a small protein. Acta Cryst. A30, 180–184.
Sayre, D. (1980). Phase extension and refinement using convolutional and related equation systems. In Theory and Practice of Direct Methods in Crystallography, edited by M. F. C. Ladd & R. A. Palmer, pp. 271–286. New York, London: Plenum.
Schwarzenberger, R. L. E. (1980). N-Dimensional Crystallography. Research Notes in Mathematics, Vol. 41. London: Pitman.
Scott, W. R. (1964). Group Theory. Englewood Cliffs: Prentice-Hall. [Reprinted by Dover, New York, 1987.]
Shaffer, P. A. Jr, Schomaker, V. & Pauling, L. (1946a). The use of punched cards in molecular structure determinations. I. Crystal structure calculations. J. Chem. Phys. 14, 648–658.
Shaffer, P. A. Jr, Schomaker, V. & Pauling, L. (1946b). The use of punched cards in molecular structure determinations. II. Electron diffraction calculations. J. Chem. Phys. 14, 659–664.
Shenefelt, M. (1988). Group Invariant Finite Fourier Transforms. PhD thesis, Graduate Centre of the City University of New York.
Shmueli, U. & Weiss, G. H. (1985). Exact joint probability distribution for centrosymmetric structure factors. Derivation and application to the Σ1 relationship in the space group P1̄. Acta Cryst. A41, 401–408.
Shmueli, U. & Weiss, G. H. (1986). Exact joint distribution of E_h, E_k and E_{h+k}, and the probability for the positive sign of the triple product in the space group P1̄. Acta Cryst. A42, 240–246.
Shmueli, U. & Weiss, G. H. (1987). Exact random-walk models in crystallographic statistics. III. Distributions of |E| for space groups of low symmetry. Acta Cryst. A43, 93–98.
Shmueli, U. & Weiss, G. H. (1988). Exact random-walk models in crystallographic statistics. IV. P.d.f.'s of |E| allowing for atoms in special positions. Acta Cryst. A44, 413–417.
Shmueli, U., Weiss, G. H. & Kiefer, J. E. (1985). Exact random-walk models in crystallographic statistics. II. The bicentric distribution for the space group P1̄. Acta Cryst. A41, 55–59.
Shmueli, U., Weiss, G. H., Kiefer, J. E. & Wilson, A. J. C. (1984). Exact random-walk models in crystallographic statistics. I. Space groups P1̄ and P1. Acta Cryst. A40, 651–660.
Shoemaker, D. P., Donohue, J., Schomaker, V. & Corey, R. B. (1950). The crystal structure of LS-threonine. J. Am. Chem. Soc. 72, 2328–2349.
Shoemaker, D. P. & Sly, W. G. (1961). Computer programming strategy for crystallographic Fourier synthesis: program MIFR1. Acta Cryst. 14, 552.
Sneddon, I. N. (1972). The Use of Integral Transforms. New York: McGraw-Hill.
Sparks, R. A., Prosen, R. J., Kruse, F. H. & Trueblood, K. N. (1956). Crystallographic calculations on the high-speed digital computer SWAC. Acta Cryst. 9, 350–358.
Stout, G. H. & Jensen, L. H. (1968). X-ray Structure Determination. A Practical Guide. New York: Macmillan.
Suryan, G. (1957). An analogue computer for double Fourier series summation for X-ray crystal-structure analysis. Acta Cryst. 10, 82–84.
Sussman, J. L., Holbrook, S. R., Church, G. M. & Kim, S.-H. (1977). A structure-factor least-squares refinement procedure for macromolecular structures using constrained and restrained parameters. Acta Cryst. A33, 800–804.
Takano, T. (1977a). Structure of myoglobin refined at 2.0 Å resolution. I. Crystallographic refinement of metmyoglobin from sperm whale. J. Mol. Biol. 110, 537–568.
Takano, T. (1977b). Structure of myoglobin refined at 2.0 Å resolution. II. Structure of deoxymyoglobin from sperm whale. J. Mol. Biol. 110, 569–584.
Ten Eyck, L. F. (1973). Crystallographic fast Fourier transforms. Acta Cryst. A29, 183–191.
Ten Eyck, L. F. (1977). Efficient structure factor calculation for large molecules by fast Fourier transform. Acta Cryst. A33, 486–492.
Ten Eyck, L. F. (1985). Fast Fourier transform calculation of electron density maps. In Diffraction Methods for Biological Macromolecules (Methods in Enzymology, Vol. 115), edited by H. Wyckoff, C. H. W. Hirs & S. N. Timasheff, pp. 324–337. New York: Academic Press.
Titchmarsh, E. C. (1922). Hankel transforms. Proc. Camb. Philos. Soc. 21, 463–473.
Tronrud, D. E., Ten Eyck, L. F. & Matthews, B. W. (1987). An efficient general-purpose least-squares refinement program for macromolecular structures. Acta Cryst. A43, 489–501.
Trueblood, K. N. (1956). Symmetry transformations of general anisotropic temperature factors. Acta Cryst. 9, 359–361.
Truter, M. R. (1954). Refinement of a non-centrosymmetrical structure: sodium nitrite. Acta Cryst. 7, 73–77.
Vand, V. (1948). Method of steepest descents: improved formula for X-ray analysis. Nature (London), 161, 600–601.
Vand, V. (1951). A simplified method of steepest descents. Acta Cryst. 4, 285–286.
Vand, V., Eiland, P. F. & Pepinsky, R. (1957). Analytical representation of atomic scattering factors. Acta Cryst. 10, 303–306.
Wang, B. C. (1985). Resolution of phase ambiguity in macromolecular crystallography. In Diffraction Methods for Biological Macromolecules (Methods in Enzymology, Vol. 115), edited by H. Wyckoff, C. H. W. Hirs & S. N. Timasheff, pp. 90–112. New York: Academic Press.
Warren, B. E. & Bragg, W. L. (1929). The structure of diopside, CaMg(SiO3)2. Z. Kristallogr. 69, 168–193.
Warren, B. E. & Gingrich, N. S. (1934). Fourier integral analysis of X-ray powder patterns. Phys. Rev. 46, 368–372.
Waser, J. (1955a). Symmetry relations between structure factors. Acta Cryst. 8, 595.
Waser, J. (1955b). The anisotropic temperature factor in triclinic coordinates. Acta Cryst. 8, 731.
Waser, J. & Schomaker, V. (1953). The Fourier inversion of diffraction data. Rev. Mod. Phys. 25, 671–690.
Watson, G. L. (1970). Integral Quadratic Forms. Cambridge University Press.
Watson, G. N. (1944). A Treatise on the Theory of Bessel Functions, 2nd ed. Cambridge University Press.
Wells, M. (1965). Computational aspects of space-group symmetry. Acta Cryst. 19, 173–179.
Whittaker, E. J. W. (1948). The evaluation of Fourier transforms by a Fourier synthesis method. Acta Cryst. 1, 165–167.
Whittaker, E. T. & Robinson, G. (1944). The Calculus of Observations. London: Blackie.
Widom, H. (1960). A theorem on translation kernels in n dimensions. Trans. Am. Math. Soc. 94, 170–180.
Widom, H. (1975). Asymptotic inversion of convolution operators. Publ. Math. IHES, 44, 191–240.
Wilkins, S. W., Varghese, J. N. & Lehmann, M. S. (1983). Statistical geometry. I. A self-consistent approach to the crystallographic inversion problem based on information theory. Acta Cryst. A39, 47–60.
Wolf, J. A. (1967). Spaces of Constant Curvature. New York: McGraw-Hill.
Wondratschek, H. (2005). Introduction to space-group symmetry. In International Tables for Crystallography, Vol. A. Space-Group Symmetry, edited by Th. Hahn, Part 8. Heidelberg: Springer.
Zachariasen, W. H. (1945). Theory of X-ray Diffraction in Crystals. New York: John Wiley.
Zassenhaus, H. (1948). Über einen Algorithmus zur Bestimmung der Raumgruppen. Comment. Math. Helv. 21, 117–141.