International
Tables for
Crystallography
Volume B
Reciprocal space
Edited by U. Shmueli

International Tables for Crystallography (2010). Vol. B, ch. 1.1, pp. 2-9
https://doi.org/10.1107/97809553602060000758

Chapter 1.1. Reciprocal space in crystallography

U. Shmueli*

School of Chemistry, Tel Aviv University, 69 978 Tel Aviv, Israel
Correspondence e-mail: ushmueli@post.tau.ac.il

After a brief introduction in Section 1.1.1[link], Section 1.1.2[link] of this chapter presents a formal definition of dual (reciprocal) bases and a brief overview of applications of the reciprocal lattice basis to lattice geometry, diffraction conditions and Fourier synthesis of functions with the periodicity of the crystal. In Section 1.1.3[link], the fundamental relationships between direct and reciprocal bases are derived and summarized. Section 1.1.4[link] introduces the basics of tensor notation and the representation of the above relationships in this notation, which is particularly well adapted to analytical considerations as well as to computer programming. Several examples from various areas of crystallographic computing follow this introduction and a detailed derivation of the finite rotation operator is presented in this context. This is followed by a section on transformation of basis vectors (Section 1.1.5[link]), of importance in many areas of crystallography. The chapter is concluded with brief mentions of analytical aspects of the concept of reciprocal space in crystallography (Section 1.1.6[link]). The purpose of this chapter is to introduce the reader to or remind them of the fundamentals of concepts which are employed throughout this volume.

1.1.1. Introduction

The purpose of this chapter is to provide an introduction to several aspects of reciprocal space, which are of general importance in crystallography and which appear in the various chapters and sections to follow. We first summarize the basic definitions and briefly inspect some fundamental aspects of crystallography, while recalling that they can be usefully and simply discussed in terms of the concept of the reciprocal lattice. This introductory section is followed by a summary of the basic relationships between the direct and associated reciprocal lattices. We then introduce the elements of tensor-algebraic formulation of such dual relationships, with emphasis on those that are important in many applications of reciprocal space to crystallographic algorithms. We proceed with a section that demonstrates the role of mutually reciprocal bases in transformations of coordinates and conclude with a brief outline of some important analytical aspects of reciprocal space, most of which are further developed in other parts of this volume.

1.1.2. Reciprocal lattice in crystallography

The notion of mutually reciprocal triads of vectors dates back to the introduction of vector calculus by J. Willard Gibbs in the 1880s (e.g. Wilson, 1901[link]). This concept appeared to be useful in the early interpretations of diffraction from single crystals (Ewald, 1913[link]; Laue, 1914[link]) and its first detailed exposition and the recognition of its importance in crystallography can be found in Ewald's (1921[link]) article. The following free translation of Ewald's (1921[link]) introduction, presented in a somewhat different notation, may serve the purpose of this section:

To the set of [{\bf a}_{i}], there corresponds in the vector calculus a set of `reciprocal vectors' [{\bf b}_{i}], which are defined (by Gibbs) by the following properties:[{\bf a}_{i}\cdot {\bf b}_{k} = 0\quad (\hbox{for } i\neq k) \eqno(1.1.2.1)]and[{\bf a}_{i}\cdot {\bf b}_{i} = 1, \eqno(1.1.2.2)]where i and k may each equal 1, 2 or 3. The first equation, (1.1.2.1)[link], says that each vector [{\bf b}_{k}] is perpendicular to two vectors [{\bf a}_{i}], as follows from the vanishing scalar products. Equation (1.1.2.2)[link] provides the norm of the vector [{\bf b}_{i}]: the length of this vector must be chosen such that the projection of [{\bf b}_{i}] on the direction of [{\bf a}_{i}] has the length [1/a_{i}], where [a_{i}] is the magnitude of the vector [{\bf a}_{i}]….

The consequences of equations (1.1.2.1)[link] and (1.1.2.2)[link] were elaborated by Ewald (1921[link]) and are very well documented in the subsequent literature, crystallographic as well as other.

As is well known, the reciprocal lattice occupies a rather prominent position in crystallography and there are nearly as many accounts of its importance as there are crystallographic texts. It is not intended to review its applications, in any detail, in the present section; this is done in the remaining chapters and sections of the present volume. It seems desirable, however, to mention by way of an introduction some fundamental geometrical, physical and mathematical aspects of crystallography, and try to give a unified demonstration of the usefulness of mutually reciprocal bases as an interpretive tool.

Let us assume that the coordinates of all the (direct) lattice points are integers. This can only be true for P-type lattices. Consider the equation of a lattice plane in the direct lattice. It can be shown (e.g. Buerger, 1941[link]; also Shmueli, 2007[link]) that this equation is given by[hx + ky + lz = n, \eqno(1.1.2.3)]where h, k and l, known as Miller indices of the (hkl) lattice plane, are (under the above assumption) relatively prime integers (i.e. do not have a common factor other than [+1] or [-1]). In this equation, x, y and z are the coordinates of any point lying in the plane and are expressed as fractions of the magnitudes of the basis vectors a, b and c of the direct lattice, and n is an integer denoting the serial number of the lattice plane within the family of parallel and equidistant [(hkl)] planes. The interplanar spacing is denoted by [d_{hkl}], the value [n = 0] corresponding to the [(hkl)] plane passing through the origin.

Let [{\bf r} = x{\bf a} + y{\bf b} + z{\bf c}] and [{\bf r}_{\rm L} = u{\bf a} + v{\bf b} + w{\bf c}], where u, v, w are any integers, denote the position vectors of the point xyz and a lattice point uvw lying in the plane (1.1.2.3)[link], respectively, and assume that r and [{\bf r}_{\rm L}] are different vectors. If the plane normal is denoted by N, where N is proportional to the vector product of two in-plane lattice vectors, the vector form of the equation of the lattice plane becomes[{\bf N}\cdot ({\bf r} - {\bf r}_{\rm L}) = 0 \quad \hbox{or}\quad {\bf N}\cdot {\bf r} = {\bf N}\cdot {\bf r}_{\rm L}. \eqno(1.1.2.4)]For equations (1.1.2.3)[link] and (1.1.2.4)[link] to be identical, the plane normal N must satisfy the requirement that [{\bf N}\cdot {\bf r}_{\rm L} = n], where n is an (unrestricted) integer.

While the Miller indices of lattice planes in P-type lattices must be relatively prime, if the direct lattice is based on a non-primitive unit cell (any centring type) the Miller indices of some lattice planes are no longer relatively prime (e.g. Nespolo, 2015[link]).

Let us now consider the basic diffraction relations (e.g. Lipson & Cochran, 1966[link]). Suppose a parallel beam of monochromatic radiation, of wavelength [\lambda], falls on a lattice of identical point scatterers. If it is assumed that the scattering is elastic, i.e. there is no change of the wavelength during this process, the wavevectors of the incident and scattered radiation have the same magnitude, which can conveniently be taken as [1/\lambda]. A consideration of path and phase differences between the waves outgoing from two point scatterers separated by the lattice vector [{\bf r}_{\rm L}] (defined as above) shows that the condition for their maximum constructive interference is given by[({\bf s} - {\bf s}_{0})\cdot {\bf r}_{\rm L} = n, \eqno(1.1.2.5)]where [{\bf s}_{0}] and s are the wavevectors of the incident and scattered beams, respectively, and n is an arbitrary integer.

Since [{\bf r}_{\rm L} = u{\bf a} + v{\bf b} + w{\bf c}], where u, v and w are unrestricted integers, equation (1.1.2.5)[link] is equivalent to the equations of Laue:[{\bf h}\cdot {\bf a} = h,\quad {\bf h}\cdot {\bf b} = k,\quad {\bf h}\cdot {\bf c} = l, \eqno(1.1.2.6)]where [{\bf h} = {\bf s} - {\bf s}_{0}] is the diffraction vector, and h, k and l are integers corresponding to orders of diffraction from the three-dimensional lattice (Lipson & Cochran, 1966[link]). The diffraction vector thus has to satisfy a condition that is analogous to that imposed on the normal to a lattice plane.

The next relevant aspect to be commented on is the Fourier expansion of a function having the periodicity of the crystal lattice. Such functions are e.g. the electron density, the density of nuclear matter and the electrostatic potential in the crystal, which are the operative definitions of crystal structure in X-ray, neutron and electron-diffraction methods of crystal structure determination. A Fourier expansion of such a periodic function may be thought of as a superposition of waves (e.g. Buerger, 1959[link]), with wavevectors related to the interplanar spacings [d_{hkl}], in the crystal lattice. Denoting the wavevector of a Fourier wave by g (a function of hkl), the phase of the Fourier wave at the point r in the crystal is given by [2\pi {\bf g}\cdot {\bf r}], and the triple Fourier series corresponding to the expansion of the periodic function, say G(r), can be written as [G({\bf r}) = {\textstyle\sum\limits_{\bf g}} C({\bf g}) \exp (-2 \pi i{\bf g}\cdot {\bf r}), \eqno(1.1.2.7)]where C(g) are the amplitudes of the Fourier waves, or Fourier coefficients, which are related to the experimental data. Numerous examples of such expansions appear throughout this volume.

The permissible wavevectors in the above expansion are restricted by the periodicity of the function G(r). Since, by definition, [G({\bf r}) = G({\bf r} + {\bf r}_{\rm L})], where [{\bf r}_{\rm L}] is a direct-lattice vector, the right-hand side of (1.1.2.7)[link] must remain unchanged when r is replaced by [{\bf r} + {\bf r}_{\rm L}]. This, however, can be true only if the scalar product [{\bf g}\cdot {\bf r}_{\rm L}] is an integer.
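This restriction is easy to check numerically. The sketch below (not from the chapter; the wavevectors and complex coefficients are arbitrary illustrative values) builds a sum of the form (1.1.2.7) with integer components of g, referred to fractional coordinates, and verifies that it is unchanged under an integer lattice translation:

```python
import numpy as np

# Minimal check: a Fourier sum with integer wavevector components
# g = (h, k, l) is automatically periodic under any direct-lattice
# translation r_L with integer components.
rng = np.random.default_rng(0)
g_vectors = [(1, 0, 0), (0, 2, 0), (1, 1, -1), (-2, 0, 3)]
coeffs = rng.normal(size=len(g_vectors)) + 1j * rng.normal(size=len(g_vectors))

def G(r):
    """Evaluate the Fourier sum (1.1.2.7) at fractional coordinates r."""
    return sum(C * np.exp(-2j * np.pi * np.dot(g, r))
               for g, C in zip(g_vectors, coeffs))

r = np.array([0.13, 0.27, 0.55])
r_L = np.array([1, -2, 3])           # any integer lattice translation
assert np.isclose(G(r), G(r + r_L))  # periodicity: G(r) = G(r + r_L)
```

If any component of g were non-integral, the corresponding wave would acquire a phase under the translation and the equality would fail.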

Each of the above three aspects of crystallography may lead, independently, to a useful introduction of the reciprocal vectors, and there are many examples of this in the literature. It is interesting, however, to consider the representation of the equation [{\bf v}\cdot {\bf r}_{\rm L} = n, \eqno(1.1.2.8)]which is common to all three, in its most convenient form. Obviously, the vector v which stands for the plane normal, the diffraction vector, and the wavevector in a Fourier expansion, may still be referred to any permissible basis and so may [{\bf r}_{\rm L}], by an appropriate transformation.

Let [{\bf v} = U{\bf A} + V{\bf B} + W{\bf C}], where A, B and C are linearly independent vectors. Equation (1.1.2.8)[link] can then be written as [(U{\bf A} + V{\bf B} + W{\bf C})\cdot (u{\bf a} + v{\bf b} + w{\bf c}) = n, \eqno(1.1.2.9)]or, in matrix notation, [(UVW) \pmatrix{{\bf A}\cr {\bf B}\cr {\bf C}\cr}\cdot ({\bf abc}) \pmatrix{u\cr v\cr w\cr} = n, \eqno(1.1.2.10)]or [(UVW) \left(\matrix{{\bf A}\cdot {\bf a} &{\bf A}\cdot {\bf b} &{\bf A}\cdot {\bf c}\cr {\bf B}\cdot {\bf a} &{\bf B}\cdot {\bf b} &{\bf B}\cdot {\bf c}\cr {\bf C}\cdot {\bf a} &{\bf C}\cdot {\bf b} &{\bf C}\cdot {\bf c}\cr}\right) \pmatrix{u\cr v\cr w\cr} = n. \eqno(1.1.2.11)]The simplest representation of equation (1.1.2.8)[link] results when the matrix of scalar products in (1.1.2.11)[link] reduces to a unit matrix. This can be achieved (i) by choosing the basis vectors [{\bf ABC}] to be orthonormal to the basis vectors [{\bf abc}], while requiring that the components of [{\bf r}_{\rm L}] be integers, or (ii) by requiring that the bases [{\bf ABC}] and [{\bf abc}] coincide with the same orthonormal basis, i.e. expressing both v and [{\bf r}_{\rm L}], in (1.1.2.8)[link], in the same Cartesian system. If we choose the first alternative, it is seen that:

  • (1) The components of the vector v, and hence those of N, h and g, are of necessity integers, since u, v and w are already integral. The components of v include the Miller indices in the case of the lattice plane, coincide with the orders of diffraction from a three-dimensional lattice of scatterers, and correspond to the summation indices in the triple Fourier series (1.1.2.7)[link].

  • (2) The basis vectors A, B and C are reciprocal to a, b and c, as can be seen by comparing the scalar products in (1.1.2.11)[link] with those in (1.1.2.1)[link] and (1.1.2.2)[link]. In fact, the bases [{\bf ABC}] and [{\bf abc}] are mutually reciprocal. Since there are no restrictions on the integers U, V and W, the vector v belongs to a lattice which, on account of its basis, is called the reciprocal lattice.

It follows that, at least in the present case, algebraic simplicity goes together with ease of interpretation, which certainly accounts for much of the importance of the reciprocal lattice in crystallography. The second alternative of reducing the matrix in (1.1.2.11)[link] to a unit matrix, a transformation of (1.1.2.8)[link] to a Cartesian system, leads to non-integral components of the vectors, which makes any interpretation of v or [{\bf r}_{\rm L}] much less transparent. However, transformations to Cartesian systems are often very useful in crystallographic computing and will be discussed below (see also Chapters 2.3[link] and 3.3[link] in this volume).

We shall, in what follows, abandon all the temporary notation used above and write the reciprocal-lattice vector as [{\bf h} = h{\bf a}^{*} + k{\bf b}^{*} + l{\bf c}^{*} \eqno(1.1.2.12)]or [{\bf h} = h_{1}{\bf a}^{1} + h_{2}{\bf a}^{2} + h_{3}{\bf a}^{3} = {\textstyle\sum\limits_{i = 1}^{3}} h_{i}{\bf a}^{i}, \eqno(1.1.2.13)]and denote the direct-lattice vectors by [{\bf r}_{\rm L} = u{\bf a} + v{\bf b} + w{\bf c}], as above, or by [{\bf r}_{\rm L} = u^{1}{\bf a}_{1} + u^{2}{\bf a}_{2} + u^{3}{\bf a}_{3} = {\textstyle\sum\limits_{i = 1}^{3}} u^{i}{\bf a}_{i}. \eqno(1.1.2.14)]The representations (1.1.2.13)[link] and (1.1.2.14)[link] are used in the tensor-algebraic formulation of the relationships between mutually reciprocal bases (see Section 1.1.4[link] below).

1.1.3. Fundamental relationships

We now present a brief derivation and a summary of the most important relationships between the direct and the reciprocal bases. The usual conventions of vector algebra are observed and the results are presented in the conventional crystallographic notation. Equations (1.1.2.1)[link] and (1.1.2.2)[link] now become [{\bf a}\cdot {\bf b}^{*} = {\bf a}\cdot {\bf c}^{*} = {\bf b}\cdot {\bf a}^{*} = {\bf b}\cdot {\bf c}^{*} = {\bf c}\cdot {\bf a}^{*} = {\bf c}\cdot {\bf b}^{*} = 0 \eqno(1.1.3.1)]and [{\bf a}\cdot {\bf a}^{*} = {\bf b}\cdot {\bf b}^{*} = {\bf c}\cdot {\bf c}^{*} = 1, \eqno(1.1.3.2)]respectively, and the relationships are obtained as follows.

1.1.3.1. Basis vectors

It is seen from (1.1.3.1)[link] that [{\bf a}^{*}] must be proportional to the vector product of b and c, [{\bf a}^{*} = K ({\bf b} \times {\bf c}),]and, since [{\bf a}\cdot {\bf a}^{*} = 1], the proportionality constant K equals [1/[{\bf a}\cdot ({\bf b} \times {\bf c})]]. The mixed product [{\bf a}\cdot ({\bf b} \times {\bf c})] can be interpreted as the positive volume of the unit cell in the direct lattice only if a, b and c form a right-handed set. If the above condition is fulfilled, we obtain [ {\bf a}^{*} = {{\bf b} \times {\bf c}\over {V}},\quad {\bf b}^{*} = {{\bf c} \times {\bf a}\over {V}},\quad {\bf c}^{*} = {{\bf a} \times {\bf b}\over {V}} \eqno(1.1.3.3)]and analogously [ {\bf a} = {{\bf b^{*} \times c^{*}}\over {V}^{*}},\quad {\bf b} = {{\bf c^{*} \times a^{*}}\over {V}^{*}},\quad {\bf c} = {{\bf a^{*} \times b^{*}}\over {V}^{*}}, \eqno(1.1.3.4)]where V and [ {V}^{*}] are the volumes of the unit cells in the associated direct and reciprocal lattices, respectively. Use has been made of the fact that the mixed product, say [{\bf a}\cdot ({\bf b} \times {\bf c})], remains unchanged under cyclic rearrangement of the vectors that appear in it.
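Equations (1.1.3.3) and (1.1.3.4), together with the duality conditions, can be verified in a few lines of numpy. This is a sketch only; the Cartesian components of the direct basis below are arbitrary illustrative values for a right-handed triclinic cell:

```python
import numpy as np

# Reciprocal basis vectors from cross products, equation (1.1.3.3).
a = np.array([5.0, 0.0, 0.0])
b = np.array([1.0, 6.0, 0.0])
c = np.array([0.5, 1.0, 7.0])

V = np.dot(a, np.cross(b, c))        # unit-cell volume a.(b x c), here > 0
a_star = np.cross(b, c) / V
b_star = np.cross(c, a) / V
c_star = np.cross(a, b) / V

# Duality conditions (1.1.3.1) and (1.1.3.2):
assert np.isclose(np.dot(a, a_star), 1.0)
assert np.isclose(np.dot(a, b_star), 0.0)
assert np.isclose(np.dot(b, c_star), 0.0)

# Reciprocal cell volume satisfies V* = 1/V (Section 1.1.3.2).
V_star = np.dot(a_star, np.cross(b_star, c_star))
assert np.isclose(V * V_star, 1.0)
```

The same construction works for any right-handed basis; for a left-handed one the mixed product, and hence V, changes sign, as noted above.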

1.1.3.2. Volumes

The reciprocal relationship of V and [ {V}^{*}] follows readily. We have from equations (1.1.3.2)[link], (1.1.3.3)[link] and (1.1.3.4)[link] [ {\bf c}\cdot {\bf c^{*}} = {({\bf a} \times {\bf b})\cdot ({\bf a}^{*} \times {\bf b}^{*})\over {VV}^{*}} = 1.]If we make use of the vector identity [({\bf A} \times {\bf B})\cdot ({\bf C} \times {\bf D}) = ({\bf A}\cdot {\bf C}) ({\bf B}\cdot {\bf D}) - ({\bf A}\cdot {\bf D}) ({\bf B}\cdot {\bf C}), \eqno(1.1.3.5)]and equations (1.1.3.1)[link] and (1.1.3.2)[link], it is seen that [ {V}^{*} = 1/{V}].

1.1.3.3. Angular relationships

The relationships of the angles [\alpha, \beta, \gamma] between the pairs of vectors (b, c), (c, a) and (a, b), respectively, and the angles [\alpha^{*}, \beta^{*}, \gamma^{*}] between the corresponding pairs of reciprocal basis vectors, can be obtained by simple vector algebra. For example, we have from (1.1.3.3)[link]:

(i) [{\bf b}^{*}\cdot {\bf c}^{*} = b^{*} c^{*} \cos \alpha^{*}], with [b^{*} = {ca \sin \beta\over V}\quad \hbox{and}\quad c^{*} = {ab \sin \gamma\over V},]and (ii) [{\bf b}^{*}\cdot {\bf c}^{*} = {({\bf c} \times {\bf a})\cdot ({\bf a} \times {\bf b})\over {V}^{2}}.]If we make use of the identity (1.1.3.5)[link], and compare the two expressions for [{\bf b}^{*}\cdot {\bf c}^{*}], we readily obtain [\cos \alpha^{*} = {\cos \beta \cos \gamma - \cos \alpha\over \sin \beta \sin \gamma}. \eqno(1.1.3.6)]Similarly, [\cos \beta^{*} = {\cos \gamma \cos \alpha - \cos \beta\over \sin \gamma \sin \alpha} \eqno(1.1.3.7)]and [\cos \gamma^{*} = {\cos \alpha \cos \beta - \cos \gamma\over \sin \alpha \sin \beta}. \eqno(1.1.3.8)]The expressions for the cosines of the direct angles in terms of those of the reciprocal ones are analogous to (1.1.3.6)[link]–(1.1.3.8)[link]. For example, [\cos \alpha = {\cos \beta^{*} \cos\gamma^{*} - \cos \alpha^{*}\over \sin \beta^{*} \sin \gamma^{*}}.]
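The relations (1.1.3.6)–(1.1.3.8) and their inverses have the same functional form, which makes them easy to check numerically. A minimal sketch (the direct angles are arbitrary illustrative values for a valid triclinic cell):

```python
import numpy as np

# Reciprocal cell angles from the direct ones, equations (1.1.3.6)-(1.1.3.8).
alpha, beta, gamma = np.radians([95.0, 105.0, 110.0])

def reciprocal_angle(x, y, z):
    """Angle dual to x, given the cyclic companions y and z."""
    return np.arccos((np.cos(y) * np.cos(z) - np.cos(x)) /
                     (np.sin(y) * np.sin(z)))

alpha_star = reciprocal_angle(alpha, beta, gamma)
beta_star  = reciprocal_angle(beta, gamma, alpha)
gamma_star = reciprocal_angle(gamma, alpha, beta)

# The inverse relations have the same form, so applying the formula to the
# reciprocal angles must recover the direct ones.
assert np.isclose(reciprocal_angle(alpha_star, beta_star, gamma_star), alpha)
```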

1.1.3.4. Matrices of metric tensors

Various computational and algebraic aspects of mutually reciprocal bases are most conveniently expressed in terms of the metric tensors of these bases. The tensors will be treated in some detail in the next section[link], and only the definitions of their matrices are given and interpreted below.

Consider the length of the vector [{\bf r} = x{\bf a} + y{\bf b} + z{\bf c}]. This is given by [|{\bf r}| = [(x{\bf a} + y{\bf b} + z{\bf c})\cdot (x{\bf a} + y{\bf b} + z{\bf c})]^{1/2} \eqno(1.1.3.9)]and can be written in matrix form as [ |{\bf r}| = [{\bi x}^{T} {\bi Gx}]^{1/2}, \eqno(1.1.3.10)]where [ {\bi x} = \pmatrix{x\cr y\cr z\cr},\quad {\bi x}^{T} = (xyz)]and [ \eqalignno{ {\bi G} &= \pmatrix{{\bf a\cdot a} &{\bf a\cdot b} &{\bf a\cdot c}\cr {\bf b\cdot a} &{\bf b\cdot b} &{\bf b\cdot c}\cr {\bf c\cdot a} &{\bf c\cdot b} &{\bf c\cdot c}\cr} &(1.1.3.11)\cr &= \pmatrix{a^{2} &ab \cos \gamma &ac \cos \beta\cr ba \cos \gamma &b^{2} &bc \cos \alpha\cr ca \cos \beta &cb \cos \alpha &c^{2}\cr}. &(1.1.3.12)}%(1.1.3.12)]This is the matrix of the metric tensor of the direct basis, or briefly the direct metric. The corresponding reciprocal metric is given by [\eqalignno{ {\bi G^{*}} &= \pmatrix{{\bf a^{*}\cdot a^{*}} &{\bf a^{*}\cdot b^{*}} &{\bf a^{*}\cdot c^{*}}\cr {\bf b^{*}\cdot a^{*}} &{\bf b^{*}\cdot b^{*}} &{\bf b^{*}\cdot c^{*}}\cr {\bf c^{*}\cdot a^{*}} &{\bf c^{*}\cdot b^{*}} &{\bf c^{*}\cdot c^{*}}\cr} &(1.1.3.13)\cr &= \pmatrix{a^{*2} &a^{*}b^{*} \cos \gamma^{*} &a^{*}c^{*} \cos \beta^{*}\cr b^{*}a^{*} \cos \gamma^{*} &b^{*2} &b^{*}c^{*} \cos \alpha^{*}\cr c^{*}a^{*} \cos \beta^{*} &c^{*}b^{*} \cos \alpha^{*} &c^{*2}\cr}. &(1.1.3.14)}%(1.1.3.14)]The matrices G and [ {\bi G}^{*}] are of fundamental importance in crystallographic computations and transformations of basis vectors and coordinates from direct to reciprocal space and vice versa. Examples of applications are presented in Part 3[link] of this volume and in the remaining sections of this chapter.

It can be shown (e.g. Buerger, 1941[link]) that the determinants of G and [ {\bi G}^{*}] equal the squared volumes of the direct and reciprocal unit cells, respectively. Thus, [ \hbox{det } ({\bi G}) = [{\bf a}\cdot ({\bf b} \times {\bf c})]^{2} = V^{2} \eqno(1.1.3.15)]and [ \hbox{det } ({\bi G^{*}}) = [{\bf a}^{*}\cdot ({\bf b}^{*} \times {\bf c}^{*})]^{2} = V^{*2}, \eqno(1.1.3.16)]and a direct expansion of the determinants, from (1.1.3.12)[link] and (1.1.3.14)[link], leads to [\eqalignno{ V &= abc (1 - \cos^{2} \alpha - \cos^{2} \beta - \cos^{2} \gamma \cr &\quad + 2 \cos \alpha \cos \beta \cos \gamma)^{1/2} &(1.1.3.17)}]and [\eqalignno{ V^{*} &= a^{*}b^{*}c^{*} (1 - \cos^{2} \alpha^{*} - \cos^{2} \beta^{*} - \cos^{2} \gamma^{*} \cr &\quad + 2 \cos \alpha^{*} \cos \beta^{*} \cos \gamma^{*})^{1/2}. &(1.1.3.18)}]The following algorithm has been found useful in computational applications of the above relationships to calculations in reciprocal space (e.g. data reduction) and in direct space (e.g. crystal geometry).

  • (1) Input the direct unit-cell parameters and construct the matrix of the metric tensor [cf. equation (1.1.3.12)[link]].

  • (2) Compute the determinant of the matrix G and find the inverse matrix, [ {\bi G}^{-1}]; this inverse matrix is just [ {\bi G}^{*}], the matrix of the metric tensor of the reciprocal basis (see also Section 1.1.4[link] below).

  • (3) Use the elements of [ {\bi G}^{*}], and equation (1.1.3.14)[link], to obtain the parameters of the reciprocal unit cell.

The direct and reciprocal sets of unit-cell parameters, as well as the corresponding metric tensors, are now available for further calculations.
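The three-step algorithm above may be sketched in numpy as follows; the triclinic cell parameters are arbitrary illustrative values (lengths in Å, angles in degrees):

```python
import numpy as np

a, b, c = 6.0, 7.0, 8.0
alpha, beta, gamma = np.radians([80.0, 95.0, 100.0])

# Step (1): matrix of the direct metric tensor, equation (1.1.3.12).
G = np.array([
    [a * a,                 a * b * np.cos(gamma), a * c * np.cos(beta)],
    [b * a * np.cos(gamma), b * b,                 b * c * np.cos(alpha)],
    [c * a * np.cos(beta),  c * b * np.cos(alpha), c * c],
])

# Step (2): det(G) = V**2 [equation (1.1.3.15)], and the inverse is G*.
V = np.sqrt(np.linalg.det(G))
G_star = np.linalg.inv(G)

# Step (3): reciprocal cell parameters from G*, via equation (1.1.3.14).
a_star, b_star, c_star = np.sqrt(np.diag(G_star))
alpha_star = np.degrees(np.arccos(G_star[1, 2] / (b_star * c_star)))

# Consistency checks: a* = bc sin(alpha)/V and V* = 1/V.
assert np.isclose(a_star, b * c * np.sin(alpha) / V)
V_star = np.sqrt(np.linalg.det(G_star))
assert np.isclose(V * V_star, 1.0)
```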

Explicit relations between direct- and reciprocal-lattice parameters, valid for the various crystal systems, are given in most textbooks on crystallography [see also Chapters 1.1[link] and 1.2[link] of Volume C (Koch, 2004[link])].

1.1.4. Tensor-algebraic formulation

The present section summarizes the tensor-algebraic properties of mutually reciprocal sets of basis vectors, which are of importance in the various aspects of crystallography. This is not intended to be a systematic treatment of tensor algebra; for more thorough expositions of the subject the reader is referred to relevant crystallographic texts (e.g. Patterson, 1967[link]; Sands, 1982[link]), and other texts in the physical and mathematical literature that deal with tensor algebra and analysis.

Let us first recall that symbolic vector and matrix notations, in which basis vectors and coordinates do not appear explicitly, are often helpful in qualitative considerations. If, however, an expression has to be evaluated, the various quantities appearing in it must be presented in component form. One of the best ways to achieve a concise presentation of geometrical expressions in component form, while retaining much of their `transparent' symbolic character, is their tensor-algebraic formulation.

1.1.4.1. Conventions

We shall adhere to the following conventions:

  • (i) Notation for direct and reciprocal basis vectors: [\displaylines{ {\bf a} = {\bf a}_{1}, {\bf b} = {\bf a}_{2}, {\bf c} = {\bf a}_{3}\cr {\bf a^{*}} = {\bf a}^{1}, {\bf b^{*}} = {\bf a}^{2}, {\bf c^{*}} = {\bf a}^{3}.}]Subscripted quantities are associated in tensor algebra with covariant, and superscripted with contravariant transformation properties. Thus the basis vectors of the direct lattice are represented as covariant quantities and those of the reciprocal lattice as contravariant ones.

  • (ii) Summation convention: if an index appears twice in an expression, once as subscript and once as superscript, a summation over this index is thereby implied and the summation sign is omitted. For example, [{\textstyle\sum\limits_{i} \sum\limits_{j}} x^{i} T_{ij} x\hskip 2pt^{j} \hbox{ will be written } x^{i} T_{ij} x\hskip 2pt^{j}]since both i and j conform to the convention. Such repeating indices are often called dummy indices. The implied summation over repeating indices is also often used even when the indices are at the same level and the coordinate system is Cartesian; there is no distinction between contravariant and covariant quantities in Cartesian frames of reference (see Chapter 3.3[link] ).

  • (iii) Components (coordinates) of vectors referred to the covariant basis are written as contravariant quantities, and vice versa. For example, [\eqalign{ {\bf r} &= x{\bf a} + y{\bf b} + z{\bf c} = x^{1} {\bf a}_{1} + x^{2} {\bf a}_{2} + x^{3} {\bf a}_{3} = x^{i} {\bf a}_{i}\cr {\bf h} &= h{\bf a^{*}} + k{\bf b^{*}} + l{\bf c^{*}} = h_{1} {{\bf a}^{1}} + h_{2} {{\bf a}^{2}} + h_{3} {{\bf a}^{3}} = h_{i} {\bf a}^{i}.}]
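For readers who compute with these conventions, the implied summation over repeating indices corresponds directly to numpy's einsum notation; a minimal sketch with arbitrary illustrative values:

```python
import numpy as np

# The expression x^i T_ij x^j implies a double summation over i and j.
x = np.array([1.0, 2.0, 3.0])
T = np.arange(9.0).reshape(3, 3)

# Explicit double sum versus the implied (einsum) form:
explicit = sum(x[i] * T[i, j] * x[j] for i in range(3) for j in range(3))
implied = np.einsum('i,ij,j', x, T, x)   # x^i T_ij x^j
assert np.isclose(explicit, implied)
```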

1.1.4.2. Transformations

A familiar concept but a fundamental one in tensor algebra is the transformation of coordinates. For example, suppose that an atomic position vector is referred to two unit-cell settings as follows: [{\bf r} = x^{k} {\bf a}_{k} \eqno(1.1.4.1)]and [{\bf r} = x'^{k} {\bf a}'_{k}. \eqno(1.1.4.2)]Let us multiply both sides of (1.1.4.1)[link] and (1.1.4.2)[link], on the right, by the vectors [{\bf a}^{m}], m = 1, 2, or 3, i.e. by the reciprocal vectors to the basis [{\bf a}_{1} {\bf a}_{2} {\bf a}_{3}]. We obtain from (1.1.4.1)[link] [x^{k} {\bf a}_{k} \cdot {\bf a}^{m} = x^{k} \delta_{k}^{m} = x^{m},]where [\delta_{k}^{m}] is the Kronecker symbol which equals 1 when [k = m] and equals zero if [k \neq m], and by comparison with (1.1.4.2)[link] we have [x^{m} = x'^{k} T_{k}^{m}, \eqno(1.1.4.3)]where [T_{k}^{m} = {\bf a}'_{k} \cdot {\bf a}^{m}] is an element of the required transformation matrix. Of course, the same transformation could have been written as [x^{m} = T_{k}^{m} x'^{k}, \eqno(1.1.4.4)]where [T_{k}^{m} = {\bf a}^{m} \cdot {\bf a}'_{k}].
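The transformation (1.1.4.3) can be illustrated numerically. In the sketch below, the basis vectors are given by their Cartesian components (as rows), and the change of basis P and all coordinate values are arbitrary illustrative choices:

```python
import numpy as np

# Two unit-cell settings for the same lattice: rows of A are a_1, a_2, a_3.
A = np.array([[5.0, 0.0, 0.0],
              [1.0, 6.0, 0.0],
              [0.5, 1.0, 7.0]])
P = np.array([[1, 0, 0],             # integer change of basis
              [1, 1, 0],
              [0, 0, 1]])
A_prime = P @ A                      # rows are the primed vectors a'_k

# Reciprocal vectors a^m to the unprimed basis (rows), so a_k . a^m = delta.
A_recip = np.linalg.inv(A).T

# Element (k, m) is T^m_k = a'_k . a^m.
T = A_prime @ A_recip.T

x_prime = np.array([0.1, 0.2, 0.3])  # coordinates in the primed setting
x = x_prime @ T                      # x^m = x'^k T^m_k, equation (1.1.4.3)

# Both coordinate sets describe the same position vector r:
assert np.allclose(x_prime @ A_prime, x @ A)
```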

A tensor is a quantity that transforms as the product of coordinates, and the rank of a tensor is the number of transformations involved (Patterson, 1967[link]; Sands, 1982[link]). For example, the product of two coordinates, as in the above example, transforms from the a′ basis to the a basis as [x^{m} x^{n} = T_{p}^{m} T_{q}^{n} x'^{p} x'^{q}{\rm ;} \eqno(1.1.4.5)]the same transformation law applies to the components of a contravariant tensor of rank two whose components are referred to the primed basis and are to be transformed to the unprimed one: [Q^{mn} = T_{p}^{m} T_{q}^{n} Q'^{pq}. \eqno(1.1.4.6)]

1.1.4.3. Scalar products

The expression for the scalar product of two vectors, say u and v, depends on the bases to which the vectors are referred. If we admit only the covariant and contravariant bases defined above, we have four possible types of expression: [\eqalignno{ (\rm I)\ &\,{\bf u} = u^{i} {\bf a}_{i}, {\bf v} = v^{i} {\bf a}_{i}\cr &\,{\bf u} \cdot {\bf v} = u^{i} v^{\,j} ({\bf a}_{i} \cdot {\bf a}_{j}) \equiv u^{i} v\hskip 2pt^{j} g_{ij}, &(1.1.4.7)\cr (\rm II) \ &\,{\bf u} = {\bf u}_{i} {\bf a}^{i}, {\bf v} = v_{i} {\bf a}^{i}\cr &\,{\bf u} \cdot {\bf v} = u_{i} v_{j} ({\bf a}^{i} \cdot {\bf a}^{\,j}) \equiv u_{i} v_{j} g^{ij}, &(1.1.4.8)\cr (\rm III)\ &\,{\bf u} = u^{i} {\bf a}_{i}, {\bf v} = v_{i} {\bf a}^{i}\cr &\,{\bf u} \cdot {\bf v} = u^{i} v_{j} ({\bf a}_{i} \cdot {\bf a}^{\,j}) \equiv u^{i} v_{j} \delta_{i}^{\,j} = u^{i} v_{i}, &(1.1.4.9)\cr (\rm IV) \ &\,{\bf u} = u_{i} {\bf a}^{i}, {\bf v} = v^{i} {\bf a}_{i}\cr &\,{\bf u} \cdot {\bf v} = u_{i} v^{\,j} ({\bf a}^{i} \cdot {\bf a}_{j}) \equiv u_{i} v^{\,j} \delta_{j}^{i} = u_{i} v^{i}. &(1.1.4.10)}%(1.1.4.10)]

  • (i) The sets of scalar products [g_{ij} = {\bf a}_{i} \cdot {\bf a}_{j}] (1.1.4.7)[link] and [g^{ij} = {\bf a}^{i} \cdot {\bf a}\hskip 2pt^{j}] (1.1.4.8)[link] are known as the metric tensors of the covariant (direct) and contravariant (reciprocal) bases, respectively; the corresponding matrices are presented in conventional notation in equations (1.1.3.11)[link] and (1.1.3.13)[link]. Numerous applications of these tensors to the computation of distances and angles in crystals are given in Chapter 3.1[link] .

  • (ii) Equations (1.1.4.7)[link] to (1.1.4.10)[link] furnish the relationships between the covariant and contravariant components of the same vector. Thus, comparing (1.1.4.7)[link] and (1.1.4.9)[link], we have [v_{i} = v^{\,j} g_{ij}. \eqno(1.1.4.11)]Similarly, using (1.1.4.8)[link] and (1.1.4.10)[link] we obtain the inverse relationship [v^{i} = v_{j} g^{ij}. \eqno(1.1.4.12)]The corresponding relationships between covariant and contravariant bases can now be obtained if we refer a vector, say v, to each of the bases [{\bf v} = v^{i}{\bf a}_{i} = v_{k}{\bf a}^{k},]and make use of (1.1.4.11)[link] and (1.1.4.12)[link]. Thus, e.g., [v^{i}{\bf a}_{i} = (v_{k}g^{ik}){\bf a}_{i} = v_{k}{\bf a}^{k}.]Hence [{\bf a}^{k} = g^{ik}{\bf a}_{i} \eqno(1.1.4.13)]and, similarly, [{\bf a}_{k} = g_{ik}{\bf a}^{i}. \eqno(1.1.4.14)]

  • (iii) The tensors [g_{ij}] and [g^{ij}] are symmetric, by definition.

  • (iv) It follows from (1.1.4.11)[link] and (1.1.4.12)[link] or (1.1.4.13)[link] and (1.1.4.14)[link] that the matrices of the direct and reciprocal metric tensors are mutually inverse, i.e. [\pmatrix{g_{11} &g_{12} &g_{13}\cr g_{21} &g_{22} &g_{23}\cr g_{31} &g_{32} &g_{33}\cr}^{-1} = \pmatrix{g^{11} &g^{12} &g^{13}\cr g^{21} &g^{22} &g^{23}\cr g^{31} &g^{32} &g^{33}\cr}, \eqno(1.1.4.15)]and their determinants are mutually reciprocal.
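Points (ii) and (iv) above can be summarized in a short numerical sketch; the direct basis (Cartesian components as rows) is an arbitrary illustrative choice:

```python
import numpy as np

# Lowering and raising indices with the mutually inverse metric tensors,
# equations (1.1.4.11), (1.1.4.12) and (1.1.4.15).
A = np.array([[5.0, 0.0, 0.0],
              [1.0, 6.0, 0.0],
              [0.5, 1.0, 7.0]])
g = A @ A.T                    # g_ij = a_i . a_j (direct metric)
g_star = np.linalg.inv(g)      # g^ij (reciprocal metric)

v_contra = np.array([0.25, -0.5, 0.75])      # contravariant components v^i
v_co = g @ v_contra                          # v_i = v^j g_ij
assert np.allclose(g_star @ v_co, v_contra)  # v^i = v_j g^ij round trip
```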

1.1.4.4. Examples

There are numerous applications of tensor notation in crystallographic calculations, and many of them appear in the various chapters of this volume. We shall therefore present only a few examples.

  • (i) The (squared) magnitude of the diffraction vector [{\bf h} = h_{i}{\bf a}^{i}] is given by [|h|^{2} = {4 \sin^{2} \theta\over \lambda^{2}} = h_{i}h_{j}g^{ij}. \eqno(1.1.4.16)]This concise relationship is a starting point in a derivation of unit-cell parameters from experimental data.
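Equation (1.1.4.16) can be checked against the familiar interplanar-spacing formula; a minimal sketch, using an orthorhombic cell (arbitrary illustrative parameters) so the result is easy to verify by hand:

```python
import numpy as np

# |h|^2 = h_i h_j g^ij, equation (1.1.4.16), equals 1/d_hkl^2.
a, b, c = 6.0, 7.0, 8.0
G = np.diag([a * a, b * b, c * c])   # orthorhombic direct metric
G_star = np.linalg.inv(G)            # reciprocal metric g^ij

h = np.array([1, 2, 3])
h_sq = h @ G_star @ h                # h_i h_j g^ij = 1/d^2
d = 1.0 / np.sqrt(h_sq)

# Orthorhombic check: 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2.
assert np.isclose(h_sq, (1 / 36) + (4 / 49) + (9 / 64))
```

For a triclinic cell one simply supplies the full matrix (1.1.3.12) for G; the two lines computing h_sq and d are unchanged.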

  • (ii) The structure factor, including explicitly anisotropic displacement tensors, can be written in symbolic matrix notation as [ F({\bi h}) = {\textstyle\sum\limits_{j=1}^{N}}\, f_{(j)} \exp (-{\bi h}^{T}\boldbeta_{(\,j)} {\bi h}) \exp (2\pi i {\bi h}^{T} {\bi r}_{(\,j)}), \eqno(1.1.4.17)]where [\boldbeta_{(\,j)}] is the matrix of the anisotropic displacement tensor of the jth atom. In tensor notation, with the quantities referred to their natural bases, the structure factor can be written as [F(h_{1}h_{2}h_{3}) = {\textstyle\sum\limits_{j=1}^{N}}\, f_{(\,j)} \exp (-h_{i}h_{k}\beta_{(\,j)}^{ik}) \exp (2 \pi ih_{i}x_{(\,j)}^{i}), \eqno(1.1.4.18)]and similarly concise expressions can be written for the derivatives of the structure factor with respect to the positional and displacement parameters. The summation convention applies only to indices denoting components of vectors and tensors; the atom subscript j in (1.1.4.18)[link] clearly does not qualify, and to indicate this it has been surrounded by parentheses.
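The tensor form (1.1.4.18) transcribes almost literally into code. In the sketch below all numerical inputs (scattering factors, fractional coordinates and β components for a two-atom example) are arbitrary illustrative values:

```python
import numpy as np

# Structure factor with anisotropic displacement tensors, eq. (1.1.4.18).
h = np.array([1, 2, 0])                       # h_1 h_2 h_3

f = np.array([6.0, 8.0])                      # f_(j) for two atoms
x = np.array([[0.10, 0.20, 0.30],             # x^i_(j), fractional
              [0.60, 0.40, 0.80]])
beta = np.array([                             # beta^ik_(j), symmetric
    [[0.010, 0.001, 0.000],
     [0.001, 0.012, 0.002],
     [0.000, 0.002, 0.008]],
    [[0.015, 0.000, 0.001],
     [0.000, 0.009, 0.000],
     [0.001, 0.000, 0.011]],
])

# F = sum_j f_j exp(-h_i h_k beta^ik) exp(2 pi i h_i x^i)
F = sum(f[j] * np.exp(-h @ beta[j] @ h) * np.exp(2j * np.pi * h @ x[j])
        for j in range(len(f)))
```

The atom index j is an ordinary loop variable, mirroring the remark above that it does not participate in the summation convention.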

  • (iii) Geometrical calculations, such as those described in the chapters of Part 3[link] , may be carried out in any convenient basis but there are often some definite advantages to computations that are referred to the natural, non-Cartesian bases (see Chapter 3.1[link] ). Usually, the output positional parameters from structure refinement are available as contravariant components of the atomic position vectors. If we transform them by (1.1.4.11)[link] to their covariant form, and store these covariant components of the atomic position vectors, the computation of scalar products using equations (1.1.4.9)[link] or (1.1.4.10)[link] is almost as efficient as it would be if the coordinates were referred to a Cartesian system. For example, the right-hand side of the vector identity (1.1.3.5)[link], which is employed in the computation of dihedral angles, can be written as [(A_{i}C^{i})(B_{j}D^{j}) - (A_{k}D^{k})(B_{l}C^{l}).]This is a typical application of reciprocal space to ordinary direct-space computations.

  • (iv) We wish to derive a tensor formulation of the vector product, along similar lines to those of Chapter 3.1[link] . As with the scalar product, there are several such formulations and we choose that which has both vectors, say u and v, and the resulting product, [{\bf u} \times {\bf v}], referred to a covariant basis. We have [\eqalignno{ {\bf u} \times {\bf v} &= u^{i}{\bf a}_{i} \times v^{\,j}{\bf a}_{j} \cr &= u^{i}v^{\,j}({\bf a}_{i} \times {\bf a}_{j}). &(1.1.4.19)}]If we make use of the relationships (1.1.3.3)[link] between the direct and reciprocal basis vectors, it can be verified that [ {\bf a}_{i} \times {\bf a}_{j} = {V}e_{kij}{\bf a}^{k}, \eqno(1.1.4.20)]where V is the volume of the unit cell and the antisymmetric tensor [e_{kij}] equals [+1, -1], or 0 according as [kij] is an even permutation of 123, an odd permutation of 123 or any two of the indices [kij] have the same value, respectively. We thus have [ \eqalignno{ {\bf u} \times {\bf v} &= {V}e_{kij}u^{i}v\hskip 2pt^{j}{\bf a}^{k} \cr &= Vg^{lk}e_{kij}u^{i}v\hskip 2pt^{j}{\bf a}_{l}, &(1.1.4.21)}]since by (1.1.4.13)[link], [{\bf a}^{k} = g^{lk}{\bf a}_{l}].
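Equation (1.1.4.21) can be checked numerically against the ordinary Cartesian cross product. In the Python/NumPy sketch below, the monoclinic basis (rows of A, expressed in an auxiliary Cartesian frame) is hypothetical.

```python
import numpy as np

# Levi-Civita symbol e_{kij}
e = np.zeros((3, 3, 3))
for k, i, j in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[k, i, j] = 1.0
    e[k, j, i] = -1.0

# Hypothetical right-handed monoclinic cell: rows of A are the direct
# basis vectors a_i in an auxiliary Cartesian frame.
A = np.array([[5.0, 0.0, 0.0],
              [0.0, 6.0, 0.0],
              [-1.0, 0.0, 7.0]])
g = A @ A.T                  # direct metric tensor g_ij = a_i . a_j
g_up = np.linalg.inv(g)      # reciprocal metric tensor g^{ij}
V = np.linalg.det(A)         # unit-cell volume

u = np.array([0.2, 0.5, -0.1])   # contravariant components u^i
v = np.array([0.3, -0.2, 0.4])   # contravariant components v^i

# (u x v)^l = V g^{lk} e_{kij} u^i v^j, equation (1.1.4.21)
w = V * np.einsum('lk,kij,i,j->l', g_up, e, u, v)

# Compare with the ordinary Cartesian cross product of the same two vectors
assert np.allclose(w @ A, np.cross(u @ A, v @ A))
```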

  • (v) The rotation operator. The general formulation of an expression for the rotation operator is of interest in crystal structure determination by Patterson techniques (see Chapter 2.3[link] ) and in molecular modelling (see Chapter 3.3[link] ), and another well-known crystallographic application of this device is the derivation of the translation, libration and screw-motion tensors by the method of Schomaker & Trueblood (1968[link]), discussed in Part 8 of Volume C[link] (IT C, 2004[link]) and in Chapter 1.2[link] of this volume. An elementary derivation of this operator therefore seems worthwhile.

    Suppose we wish to rotate the vector r, about an axis coinciding with the unit vector k, through the angle [\theta] and in the positive sense, i.e. an observer looking in the direction of [+{\bf k}] will see r rotating in the clockwise sense. The vectors r, k and the rotated (target) vector r′ are referred to an origin on the axis of rotation (see Fig. 1.1.4.1[link]). Our purpose is to express r′ in terms of r, k and [\theta] by a general vector formula, and represent the components of the rotated vectors in coordinate systems that might be of interest.

    [Figure 1.1.4.1]

    Figure 1.1.4.1. Derivation of the general expression for the rotation operator. The figure illustrates schematically the decompositions and other simple geometrical considerations required for the derivation outlined in equations (1.1.4.22)[link]–(1.1.4.28)[link].

    Let us decompose the vector r and the (target) vector r′ into their components which are parallel [(\|)] and perpendicular [(\perp)] to the axis of rotation: [{\bf r} = {\bf r}_{\|} + {\bf r}_{\perp} \eqno(1.1.4.22)]and [{\bf r}' = {\bf r}'_{\|} + {\bf r}'_{\perp}. \eqno(1.1.4.23)]It can be seen from Fig. 1.1.4.1[link] that the parallel components of r and r′ are [{\bf r}_{\|} = {\bf r}'_{\|} = {\bf k}({\bf k} \cdot {\bf r}) \eqno(1.1.4.24)]and thus [{\bf r}_{\perp} = {\bf r} - {\bf k}({\bf k} \cdot {\bf r}). \eqno(1.1.4.25)]Only a suitable expression for [{\bf r}'_{\perp}] is missing. We can find this by decomposing [{\bf r}'_{\perp}] into its components (i) parallel to [{\bf r}_{\perp}] and (ii) parallel to [{\bf k} \times {\bf r}_{\perp}]. We have, as in (1.1.4.24)[link], [{\bf r}'_{\perp} = {{\bf r}_{\perp}\over |{\bf r}_{\perp}|} \left({{\bf r}_{\perp}\over |{\bf r}_{\perp}|} \cdot {\bf r}'_{\perp}\right) + {{\bf k} \times {\bf r}_{\perp}\over |{\bf k} \times {\bf r}_{\perp}|} \left({{\bf k} \times {\bf r}_{\perp}\over |{\bf k} \times {\bf r}_{\perp}|} \cdot {\bf r}'_{\perp}\right). \eqno(1.1.4.26)]We observe, using Fig. 1.1.4.1[link], that [|{\bf r}'_{\perp}| = |{\bf r}_{\perp}| = |{\bf k} \times {\bf r}_{\perp}|]and [{\bf k} \times {\bf r}_{\perp} = {\bf k} \times {\bf r},]and, further, [{\bf r}'_{\perp} \cdot {\bf r}_{\perp} = |{\bf r}_{\perp}|^{2} \cos \theta]and [ {\bf r}'_{\perp} \cdot ({\bf k} \times {\bf r}_{\perp}) = {\bf k} \cdot ({\bf r}'_{\perp} \times {\bf r}_{\perp}) = |{\bf r}_{\perp}|^{2} \sin \theta,]since the unit vector k is perpendicular to the plane containing the vectors [{\bf r}_{\perp}] and [{\bf r}'_{\perp}]. 
Equation (1.1.4.26)[link] now reduces to [{\bf r}'_{\perp} = {\bf r}_{\perp} \cos \theta + ({\bf k} \times {\bf r}) \sin \theta \eqno(1.1.4.27)]and equations (1.1.4.23)[link], (1.1.4.25)[link] and (1.1.4.27)[link] lead to the required result [{\bf r}' = {\bf k}({\bf k} \cdot {\bf r})(1 - \cos \theta) + {\bf r} \cos \theta + ({\bf k} \times {\bf r}) \sin \theta.\eqno(1.1.4.28)]The above general expression can be written as a linear transformation by referring the vectors to an appropriate basis or bases. We choose here [{\bf r} = x\hskip 2pt^{j}{\bf a}_{j}], [{\bf r}' = x'^{i}{\bf a}_{i}] and assume that the components of k are available in the direct and reciprocal bases.

    If we make use of equations (1.1.4.9)[link] and (1.1.4.21)[link], (1.1.4.28)[link] can be written as [x'^{i} = k^{i}(k_{\,j}x^{\,j})(1 - \cos \theta) + \delta_{j}^{i}x^{\,j} \cos \theta + Vg^{im} e_{mpj}k^{p}x^{\,j} \sin \theta,\eqno (1.1.4.29)]or briefly [x'^{i} = R_{j}^{i}x^{\,j}, \eqno(1.1.4.30)]where [R_{j}^{i} = k^{i}k_{j}(1 - \cos \theta) + \delta_{j}^{i} \cos \theta + Vg^{im} e_{mpj}k^{p} \sin \theta \eqno(1.1.4.31)]is a matrix element of the rotation operator R which carries the vector r into the vector r′. Of course, the representation (1.1.4.31)[link] of R depends on our choice of reference bases.
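The representation (1.1.4.31) can be verified numerically by rotating a vector expressed in a non-Cartesian basis and comparing the result with a direct Cartesian application of (1.1.4.28). In the Python/NumPy sketch below, the basis A, the rotation axis and the angle are all hypothetical.

```python
import numpy as np

# Hypothetical direct basis: rows of A are a_i in an auxiliary Cartesian frame.
A = np.array([[5.0, 0.0, 0.0],
              [1.0, 6.0, 0.0],
              [-1.0, 0.5, 7.0]])
g = A @ A.T
g_up = np.linalg.inv(g)
V = np.linalg.det(A)

e = np.zeros((3, 3, 3))                    # Levi-Civita symbol
for i, j, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, l], e[i, l, j] = 1.0, -1.0

theta = 0.4
k_cart = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # unit rotation axis
k_up = np.linalg.solve(A.T, k_cart)        # contravariant components k^i
k_dn = g @ k_up                            # covariant components k_i

# R^i_j = k^i k_j (1 - cos t) + delta^i_j cos t + V g^{im} e_{mpj} k^p sin t
R = (np.outer(k_up, k_dn) * (1 - np.cos(theta))
     + np.eye(3) * np.cos(theta)
     + V * np.sin(theta) * np.einsum('im,mpj,p->ij', g_up, e, k_up))

x = np.array([0.2, -0.1, 0.3])             # contravariant components of r
x_rot = R @ x                              # x'^i = R^i_j x^j, (1.1.4.30)

# Cartesian reference: equation (1.1.4.28) applied directly
r = x @ A
r_ref = (k_cart * (k_cart @ r) * (1 - np.cos(theta)) + r * np.cos(theta)
         + np.cross(k_cart, r) * np.sin(theta))
assert np.allclose(x_rot @ A, r_ref)
```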

    If all the vectors are referred to a Cartesian basis, that is three orthogonal unit vectors, the direct and reciprocal metric tensors reduce to a unit tensor, there is no difference between covariant and contravariant quantities, and equation (1.1.4.31)[link] reduces to [R_{ij} = k_{i}k_{j}(1 - \cos \theta) + \delta_{ij} \cos \theta + e_{ipj}k_{p} \sin \theta, \eqno(1.1.4.32)]where all the indices have been taken as subscripts, but the summation convention is still observed. The relative simplicity of (1.1.4.32)[link], as compared to (1.1.4.31)[link], often justifies the transformation of all the vector quantities to a Cartesian basis. This is certainly the case for any extensive calculation in which covariances of the structural parameters are not considered.
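A minimal implementation of (1.1.4.32), with all quantities referred to a Cartesian basis, may read as follows (Python with NumPy; the helper name `rotation_matrix` is ours).

```python
import numpy as np

def rotation_matrix(k, theta):
    """Cartesian rotation operator of equation (1.1.4.32):
    R_ij = k_i k_j (1 - cos t) + delta_ij cos t + e_ipj k_p sin t."""
    k = np.asarray(k, dtype=float)
    k = k / np.linalg.norm(k)             # the axis must be a unit vector
    K = np.array([[0.0, -k[2], k[1]],     # K_ij = e_ipj k_p, i.e. K r = k x r
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return (np.outer(k, k) * (1 - np.cos(theta))
            + np.eye(3) * np.cos(theta)
            + K * np.sin(theta))
```

The result is orthogonal with unit determinant, as any proper rotation must be.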

1.1.5. Transformations


1.1.5.1. Transformations of coordinates


It happens rather frequently that a vector referred to a given basis has to be re-expressed in terms of another basis, and it is then required to find the relationship between the components (coordinates) of the vector in the two bases. Such situations have already been indicated in the previous section. The purpose of the present section is to give a general method of finding such relationships (transformations), and discuss some simplifications brought about by the use of mutually reciprocal and Cartesian bases. We do not assume anything about the bases, in the general treatment, and hence the tensor formulation of Section 1.1.4[link] is not appropriate at this stage.

Let [{\bf r} = {\textstyle\sum\limits_{j=1}^{3}} u_{j}(1){\bf c}_{j}(1) \eqno(1.1.5.1)]and [{\bf r} = {\textstyle\sum\limits_{j=1}^{3}} u_{j}(2){\bf c}_{j}(2) \eqno(1.1.5.2)]be the given and required representations of the vector r, respectively. Upon the formation of scalar products of equations (1.1.5.1)[link] and (1.1.5.2)[link] with the vectors of the second basis, and employing again the summation convention, we obtain [u_{k}(1)[{\bf c}_{k}(1) \cdot {\bf c}_{l}(2)] = u_{k}(2)[{\bf c}_{k}(2) \cdot {\bf c}_{l}(2)],\quad l = 1, 2, 3 \eqno(1.1.5.3)]or [u_{k}(1)G_{kl}(12) = u_{k}(2)G_{kl}(22),\quad l = 1, 2, 3, \eqno(1.1.5.4)]where [G_{kl}(12) = {\bf c}_{k}(1) \cdot {\bf c}_{l}(2)] and [G_{kl}(22) = {\bf c}_{k}(2) \cdot {\bf c}_{l}(2)]. Similarly, if we choose the basis vectors [{\bf c}_{l}(1)], l = 1, 2, 3, as the multipliers of (1.1.5.1)[link] and (1.1.5.2)[link], we obtain [u_{k}(1)G_{kl}(11) = u_{k}(2)G_{kl}(21),\quad l = 1, 2, 3, \eqno(1.1.5.5)]where [G_{kl}(11) = {\bf c}_{k}(1) \cdot {\bf c}_{l}(1)] and [G_{kl}(21) = {\bf c}_{k}(2) \cdot {\bf c}_{l}(1)]. Rewriting (1.1.5.4)[link] and (1.1.5.5)[link] in symbolic matrix notation, we have [ {\bi u}^{T}(1) {\bi G}(12) = {\bi u}^{T}(2) {\bi G}(22), \eqno(1.1.5.6)]leading to [ {\bi u}^{T}(1) = {\bi u}^{T}(2)\{{\bi G}(22)[{\bi G}(12)]^{-1}\}]and [ {\bi u}^{T}(2) = {\bi u}^{T}(1)\{{\bi G}(12)[{\bi G}(22)]^{-1}\}, \eqno(1.1.5.7)]and [ {\bi u}^{T}(1){\bi G}(11) = {\bi u}^{T}(2){\bi G}(21), \eqno(1.1.5.8)]leading to [ {\bi u}^{T}(1) = {\bi u}^{T}(2)\{{\bi G}(21)[{\bi G}(11)]^{-1}\}]and [ {\bi u}^{T}(2) = {\bi u}^{T}(1)\{{\bi G}(11)[{\bi G}(21)]^{-1}\}. \eqno(1.1.5.9)]

Equations (1.1.5.7)[link] and (1.1.5.9)[link] are symbolic general expressions for the transformation of the coordinates of r from one representation to the other.

In the general case, therefore, we require the matrices of scalar products of the basis vectors, G(12) and G(22) or G(11) and G(21), depending on whether the basis [{\bf c}_{k}(2)] or [{\bf c}_{k}(1)], k = 1, 2, 3, was chosen for the scalar multiplication of equations (1.1.5.1)[link] and (1.1.5.2)[link]. Note, however, the following simplifications.

  • (i) If the bases [{\bf c}_{k}(1)] and [{\bf c}_{k}(2)] are mutually reciprocal, each of the matrices of mixed scalar products, G(12) and G(21), reduces to a unit matrix. In this important special case, the transformation is effected by the matrices of the metric tensors of the bases in question. This can be readily seen from equations (1.1.5.7)[link] and (1.1.5.9)[link], which then reduce to the relationships between the covariant and contravariant components of the same vector [see equations (1.1.4.11)[link] and (1.1.4.12)[link] above].

  • (ii) If one of the bases, say [{\bf c}_{k}(2)], is Cartesian, its metric tensor is by definition a unit tensor, and the transformations in (1.1.5.7)[link] reduce to [ {\bi u}^{T}(1) = {\bi u}^{T}(2)[{\bi G}(12)]^{-1}]and [ {\bi u}^{T}(2) = {\bi u}^{T}(1){\bi G}(12). \eqno(1.1.5.10)]The transformation matrix is now the mixed matrix of the scalar products, whether or not the basis [{\bf c}_{k}(1)], k = 1, 2, 3, is also Cartesian. If, however, both bases are Cartesian, the transformation can also be interpreted as a rigid rotation of the coordinate axes (see Chapter 3.3[link] ).
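The general transformation (1.1.5.7) is easily exercised numerically. In the Python/NumPy sketch below the two bases are random, hence hypothetical; the assertion checks that the transformed components still represent the same vector r.

```python
import numpy as np

rng = np.random.default_rng(0)
C1 = rng.normal(size=(3, 3))      # rows are c_k(1) in an auxiliary Cartesian frame
C2 = rng.normal(size=(3, 3))      # rows are c_k(2)

r = np.array([1.0, -2.0, 0.5])    # the vector itself, in Cartesian coordinates
u1 = np.linalg.solve(C1.T, r)     # components in basis 1: r = sum_k u1_k c_k(1)

G12 = C1 @ C2.T                   # G_kl(12) = c_k(1) . c_l(2)
G22 = C2 @ C2.T                   # G_kl(22) = c_k(2) . c_l(2)

# Equation (1.1.5.7): u^T(2) = u^T(1) {G(12) [G(22)]^{-1}}
u2 = u1 @ G12 @ np.linalg.inv(G22)
assert np.allclose(u2 @ C2, r)    # the same vector, re-expressed in basis 2
```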

It should be noted that the above transformations do not involve any shift of the origin. Transformations involving such shifts, notably the symmetry transformations of the space group, are treated rather extensively in Volume A of International Tables for Crystallography (2005[link]) [see e.g. Part 5[link] there (Arnold, 2005[link])].

1.1.5.2. Example


This example deals with the construction of a Cartesian system in a crystal with given basis vectors of its direct lattice. We shall also require that the Cartesian system bear a clear relationship to at least one direction in each of the direct and reciprocal lattices of the crystal; this may be useful in interpreting a physical property which has been measured along a given lattice vector or which is associated with a given lattice plane. For consistency of notation, the Cartesian components will be denoted as contravariant.

The appropriate version of equations (1.1.5.1)[link] and (1.1.5.2)[link] is now [{\bf r} = x^{i}{\bf a}_{i} \eqno(1.1.5.11)]and [{\bf r} = X^{k}{\bf e}_{k}, \eqno(1.1.5.12)]where the Cartesian basis vectors are: [{\bf e}_{1} = {\bf r}_{\rm L}/|{\bf r}_{\rm L}|], [{\bf e}_{2} = {\bf r}^{*}/|{\bf r}^{*}|] and [{\bf e}_{3} = {\bf e}_{1} \times {\bf e}_{2}], and the vectors [{\bf r}_{\rm L}] and [{\bf r}^{*}] are given by [{\bf r}_{\rm L} = u^{i}{\bf a}_{i} \hbox{ and } {\bf r}^{*} = h_{k}{\bf a}^{k},]where [u^{i}] and [h_{k}], i, k = 1, 2, 3, are arbitrary integers. The vectors [{\bf r}_{\rm L}] and [{\bf r}^{*}] must of course be chosen to be mutually perpendicular, [{\bf r}_{\rm L} \cdot {\bf r}^{*} = u^{i}h_{i} = 0]. The [X^{1}(X)] axis of the Cartesian system thus coincides with a direct-lattice vector, and the [X^{2}(Y)] axis is parallel to a vector in the reciprocal lattice.

Since the basis in (1.1.5.12)[link] is a Cartesian one, the required transformations are given by equations (1.1.5.10)[link] as [x^{i} = X^{k}(T^{-1})_{k}^{i} \hbox{ and } X^{i} = x^{k}T_{k}^{i}, \eqno(1.1.5.13)]where [T_{k}^{i} = {\bf a}_{k} \cdot {\bf e}_{i}], k, i = 1, 2, 3, form the matrix of the scalar products. If we make use of the relationships between covariant and contravariant basis vectors, and the tensor formulation of the vector product, given in Section 1.1.4[link] above (see also Chapter 3.1[link] ), we obtain [\eqalignno{ T_{k}^{1} &= {1\over |{\bf r}_{\rm L}|} g_{ki}u^{i}\cr T_{k}^{2} &= {1\over |{\bf r}^{*}|} h_{k} &(1.1.5.14)\cr T_{k}^{3} &= {V\over |{\bf r}_{\rm L}||{\bf r}^{*}|} e_{kip}u^{i}g^{pl}h_{l}.}]
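Equation (1.1.5.14) can be checked against the direct definition [T_{k}^{i} = {\bf a}_{k} \cdot {\bf e}_{i}]. The Python/NumPy sketch below uses a hypothetical monoclinic cell with the lattice vectors defined by u = (1, 0, 0) and h = (0, 1, 0); the final assertion verifies the consistency relation [{\bi T}{\bi T}^{T} = {\bi g}], which follows from the orthonormality of the Cartesian basis.

```python
import numpy as np

# Hypothetical monoclinic cell: rows of A are the direct basis vectors a_k
# in an auxiliary Cartesian frame.
A = np.array([[5.0, 0.0, 0.0],
              [0.0, 6.0, 0.0],
              [-1.5, 0.0, 7.0]])
g = A @ A.T                       # direct metric tensor
g_up = np.linalg.inv(g)           # reciprocal metric tensor
V = np.linalg.det(A)
B = np.linalg.inv(A).T            # rows are the reciprocal basis vectors a^k

u = np.array([1, 0, 0])           # r_L = u^i a_i   (a direct-lattice vector)
h = np.array([0, 1, 0])           # r* = h_k a^k    (a reciprocal-lattice vector)
assert u @ h == 0                 # r_L . r* = u^i h_i must vanish

rL, rstar = u @ A, h @ B
e1 = rL / np.linalg.norm(rL)
e2 = rstar / np.linalg.norm(rstar)
E = np.array([e1, e2, np.cross(e1, e2)])

T = A @ E.T                       # direct definition T_k^i = a_k . e_i

# The same matrix from equation (1.1.5.14)
e = np.zeros((3, 3, 3))           # Levi-Civita symbol e_{kip}
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k], e[i, k, j] = 1.0, -1.0
T14 = np.empty((3, 3))
T14[:, 0] = g @ u / np.linalg.norm(rL)
T14[:, 1] = h / np.linalg.norm(rstar)
T14[:, 2] = V / (np.linalg.norm(rL) * np.linalg.norm(rstar)) * np.einsum(
    'kip,i,pl,l->k', e, u, g_up, h)

assert np.allclose(T, T14)
assert np.allclose(T @ T.T, g)    # consistency: a_k . a_l = g_kl
```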

Note that the other convenient choice, [{\bf e}_{1}\propto {\bf r}^{*}] and [{\bf e}_{2}\propto {\bf r}_{\rm L}], interchanges the first two columns of the matrix T in (1.1.5.14)[link] and leads to a change of the signs of the elements in the third column. This can be done by writing [e_{kpi}] instead of [e_{kip}], while leaving the rest of [T_{k}^{3}] unchanged.

1.1.6. Some analytical aspects of the reciprocal space


1.1.6.1. Continuous Fourier transform


Of great interest in crystallographic analyses are Fourier transforms and these are closely associated with the dual bases examined in this chapter. Thus, e.g., the inverse Fourier transform of the electron-density function of the crystal [F({\bf h}) = \textstyle\int\limits_{\rm cell} \rho({\bf r}) \exp (2 \pi i{\bf h} \cdot {\bf r})\, \hbox{d}^{3}{\bf r}, \eqno(1.1.6.1)]where [\rho({\bf r})] is the electron-density function at the point r and the integration extends over the volume of a unit cell, is the fundamental model of the contribution of the distribution of crystalline matter to the intensity of the scattered radiation. For the conventional Bragg scattering, the function given by (1.1.6.1)[link], and known as the structure factor, may assume nonzero values only if h can be represented as a reciprocal-lattice vector. Chapter 1.2[link] is devoted to a discussion of the structure factor of the Bragg reflection, while Chapters 4.1[link] , 4.2[link] and 4.3[link] discuss circumstances under which the scattering need not be confined to the points of the reciprocal lattice only, and may be represented by reciprocal-space vectors with non-integral components.

1.1.6.2. Discrete Fourier transform


The electron density [\rho({\bf r})] in (1.1.6.1)[link] is one of the most common examples of a function which has the periodicity of the crystal. Thus, for an ideal (infinite) crystal the electron density [\rho({\bf r})] can be written as [\rho({\bf r}) = \rho({\bf r} + u{\bf a} + v{\bf b} + w{\bf c}), \eqno(1.1.6.2)]and, as such, it can be represented by a three-dimensional Fourier series of the form [\rho({\bf r}) = {\textstyle\sum\limits_{\bf g}} C({\bf g}) \exp (-2 \pi i{\bf g} \cdot {\bf r}), \eqno(1.1.6.3)]where the periodicity requirement (1.1.6.2)[link] enables one to represent all the g vectors in (1.1.6.3)[link] as vectors in the reciprocal lattice (see also Section 1.1.2[link] above). If we insert the series (1.1.6.3)[link] in the integrand of (1.1.6.1)[link], interchange the order of summation and integration and make use of the fact that an integral of a periodic function taken over the entire period must vanish unless the integrand is a constant, equation (1.1.6.3)[link] reduces to the conventional form [\rho({\bf r}) = {1\over V} {\sum\limits_{\bf h}} F({\bf h}) \exp (-2 \pi i{\bf h} \cdot {\bf r}), \eqno(1.1.6.4)]where V is the volume of the unit cell in the direct lattice and the summation ranges over all the reciprocal lattice.
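The transform pair (1.1.6.1)/(1.1.6.4) is conveniently illustrated by its one-dimensional discrete analogue, for which the forward and inverse sums are exactly the FFT pair. In the Python/NumPy sketch below the model density is hypothetical, and the sign and normalization conventions of `numpy.fft` happen to match those of the equations above (with V = 1).

```python
import numpy as np

# One-dimensional analogue of the pair (1.1.6.1)/(1.1.6.4), sampled on an
# N-point grid over one unit cell of length 1 (so V = 1).
N = 64
x = np.arange(N) / N
rho = np.exp(-100 * (x - 0.3)**2) + 0.5 * np.exp(-100 * (x - 0.7)**2)

# F(h) = integral rho(x) exp(+2 pi i h x) dx, approximated on the grid:
# ifft supplies both the 1/N weight and the + sign of (1.1.6.1).
F = np.fft.ifft(rho)

# rho(x) = (1/V) sum_h F(h) exp(-2 pi i h x), equation (1.1.6.4):
# fft supplies the - sign of the synthesis sum.
rho_back = np.fft.fft(F)
assert np.allclose(rho_back.real, rho)
```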

Fourier transforms, discrete as well as continuous, are among the most important mathematical tools of crystallography. The discussion of their mathematical principles, the modern algorithms for their computation and their numerous applications in crystallography form the subject matter of Chapter 1.3[link] . Many more examples of applications of Fourier methods in crystallography are scattered throughout this volume and the crystallographic literature in general.

1.1.6.3. Bloch's theorem


It is in order to mention briefly the important role of reciprocal space and the reciprocal lattice in the field of the theory of solids. At the basis of these applications is the periodicity of the crystal structure and the effect it has on the dynamics (cf. Chapter 4.1[link] ) and electronic structure of the crystal. One of the earliest, and still most important, theorems of solid-state physics is due to Bloch (1928[link]) and deals with the representation of the wavefunction of an electron which moves in a periodic potential. Bloch's theorem states that:

The eigenstates [\psi] of the one-electron Hamiltonian [ {\scr h} = (-\hbar^{2}/2m) \nabla^{2} + U({\bf r})], where U(r) is the crystal potential and [U({\bf r} + {\bf r}_{\rm L}) = U({\bf r})] for all [{\bf r}_{\rm L}] in the Bravais lattice, can be chosen to have the form of a plane wave times a function with the periodicity of the Bravais lattice.

Thus [\psi({\bf r}) = \exp (i{\bf k} \cdot {\bf r})u({\bf r}), \eqno(1.1.6.5)]where [u({\bf r} + {\bf r}_{\rm L}) = u({\bf r}) \eqno(1.1.6.6)]and k is the wavevector. The proof of Bloch's theorem can be found in most modern texts on solid-state physics (e.g. Ashcroft & Mermin, 1975[link]). If we combine (1.1.6.5)[link] with (1.1.6.6)[link], an alternative form of the Bloch theorem results: [\psi({\bf r} + {\bf r}_{\rm L}) = \exp (i{\bf k} \cdot {\bf r}_{\rm L}) \psi ({\bf r}). \eqno(1.1.6.7)]In the important case where the wavefunction [\psi] is itself periodic, i.e. [\psi({\bf r} + {\bf r}_{\rm L}) = \psi({\bf r}),]we must have [\exp (i{\bf k} \cdot {\bf r}_{\rm L}) = 1]. Of course, this can be so only if the wavevector k equals [2\pi] times a vector in the reciprocal lattice. It is also seen from equation (1.1.6.7)[link] that the wavevector appearing in the phase factor can be reduced to a unit cell in the reciprocal lattice (the basis vectors of which contain the [2\pi] factor), or to the equivalent polyhedron known as the Brillouin zone (e.g. Ziman, 1969[link]). This periodicity in reciprocal space is of prime importance in the theory of solids. Some Brillouin zones are discussed in detail in Chapter 1.5.[link]
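The Bloch form (1.1.6.5)–(1.1.6.7) is readily verified numerically in one dimension; the lattice constant, wavevector and periodic function u(x) in the Python/NumPy sketch below are hypothetical.

```python
import numpy as np

# One-dimensional check of equations (1.1.6.5)-(1.1.6.7).
a = 2.5                               # hypothetical lattice constant
k = 0.7                               # hypothetical wavevector
u = lambda x: 1.0 + 0.3 * np.cos(2 * np.pi * x / a)   # u(x + a) = u(x)
psi = lambda x: np.exp(1j * k * x) * u(x)             # psi = exp(ikx) u(x)

x = np.linspace(0.0, a, 50)
# Equation (1.1.6.7): psi(x + a) = exp(i k a) psi(x)
assert np.allclose(psi(x + a), np.exp(1j * k * a) * psi(x))
```

The assertion holds for any k; only when exp(ika) = 1, i.e. when k is 2π times a reciprocal-lattice vector, is ψ itself periodic, as stated in the text.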

Acknowledgements

I wish to thank Professor D. W. J. Cruickshank for bringing to my attention the contribution of M. von Laue (Laue, 1914[link]), who was the first to introduce general reciprocal bases to crystallography.

References

Arnold, H. (2005). Transformations in crystallography. In International Tables for Crystallography, Vol. A, Space-Group Symmetry, edited by Th. Hahn, Part 5. Heidelberg: Springer.
Ashcroft, N. W. & Mermin, N. D. (1975). Solid State Physics. Philadelphia: Saunders College.
Bloch, F. (1928). Über die Quantenmechanik der Elektronen in Kristallgittern. Z. Phys. 52, 555–600.
Buerger, M. J. (1941). X-ray Crystallography. New York: John Wiley.
Buerger, M. J. (1959). Crystal Structure Analysis. New York: John Wiley.
Ewald, P. P. (1913). Zur Theorie der Interferenzen der Röntgenstrahlen in Kristallen. Phys. Z. 14, 465–472.
Ewald, P. P. (1921). Das reziproke Gitter in der Strukturtheorie. Z. Kristallogr. 56, 129–156.
International Tables for Crystallography (2005). Vol. A, Space-Group Symmetry, edited by Th. Hahn. Heidelberg: Springer.
International Tables for Crystallography (2004). Vol. C, Mathematical, Physical and Chemical Tables, edited by E. Prince. Dordrecht: Kluwer Academic Publishers.
Koch, E. (2004). In International Tables for Crystallography, Vol. C, Mathematical, Physical and Chemical Tables, edited by E. Prince, Chapters 1.1 and 1.2. Dordrecht: Kluwer Academic Publishers.
Laue, M. (1914). Die Interferenzerscheinungen an Röntgenstrahlen, hervorgerufen durch das Raumgitter der Kristalle. Jahrb. Radioakt. Elektron. 11, 308–345.
Lipson, H. & Cochran, W. (1966). The Determination of Crystal Structures. London: Bell.
Nespolo, M. (2015). The ash heap of crystallography: restoring forgotten basic knowledge. J. Appl. Cryst. 48, 1290–1298.
Patterson, A. L. (1967). In International Tables for X-ray Crystallography, Vol. II, Mathematical Tables, edited by J. S. Kasper & K. Lonsdale, pp. 5–83. Birmingham: Kynoch Press.
Sands, D. E. (1982). Vectors and Tensors in Crystallography. New York: Addison-Wesley.
Schomaker, V. & Trueblood, K. N. (1968). On the rigid-body motion of molecules in crystals. Acta Cryst. B24, 63–76.
Shmueli, U. (2007). Theories and Techniques of Crystal Structure Determination, Section 1.2. Oxford University Press.
Wilson, E. B. (1901). Vector Analysis. New Haven: Yale University Press.
Ziman, J. M. (1969). Principles of the Theory of Solids. Cambridge University Press.