International Tables for Crystallography (2018). Vol. H, Powder Diffraction, ch. 3.1, pp. 224-251.
Edited by C. J. Gilmore, J. A. Kaduk and H. Schenk.

Chapter 3.1. The optics and alignment of the divergent-beam laboratory X-ray powder diffractometer and its calibration using NIST standard reference materials

J. P. Cline,a* M. H. Mendenhall,a D. Black,a D. Windovera and A. Heninsa

aNational Institute of Standards and Technology, Gaithersburg, Maryland, USA

The laboratory X-ray powder diffractometer is one of the primary analytical tools in materials science. It is applicable to nearly any crystalline material, and with advanced data-analysis methods, it can provide a wealth of information concerning sample character. Data from these machines, however, are beset by a complex aberration function that can be addressed through calibration with the use of NIST standard reference materials (SRMs). Laboratory diffractometers can be set up in a range of optical geometries; considered herein are those of Bragg–Brentano divergent-beam configuration using both incident- and diffracted-beam monochromators. We review the origin of the various aberrations affecting instruments of this geometry and the methods developed at NIST to align these machines in a first-principles context. Data-analysis methods are considered as being in two distinct categories: those that use empirical methods to parameterize the nature of the data for subsequent analysis, and those that use model functions to link the observation directly to a specific aspect of the experiment. We consider a multifaceted approach to instrument calibration using both the empirical and model-based data-analysis methods. The particular benefits of the fundamental-parameters approach are reviewed.

3.1.1. Introduction


The laboratory X-ray powder diffractometer has several virtues that have made it a principal characterization device for providing critical data for a range of technical disciplines involving crystalline materials. The specimen is typically composed of small crystallites (5–30 µm), a form suitable for a wide variety of materials. A continuous set of reflections can be collected with a single scan in θ–2θ angle space. Not only can timely qualitative analyses be carried out, but with the more advanced data-analysis methods a wealth of quantitative information may be extracted. Modern commercial instruments may include features such as focusing mirror optics and the ability to change quickly between various experimental configurations. In this chapter, we discuss results from a NIST-built diffractometer with features specific to the collection of data that complement the NIST effort in standard reference materials (SRMs) for powder diffraction. While this machine can be configured with focusing optics, here we consider only those configurations that use a divergent beam in Bragg–Brentano, para-focusing geometry.

A principal advantage of the divergent-beam X-ray powder diffractometer is that a relatively large number of crystallites are illuminated, providing a strong diffraction signal from a representative portion of the sample. However, the para-focusing optics of laboratory diffractometers produce patterns that display profiles of a very complex shape. The observed 2θ position of maximum diffraction intensity does not necessarily reflect the true spacing of the lattice planes (hkl). While advanced data-analysis methods can be used to model the various aberrations and account for the observed profile shape and position, there are a number of instrumental effects for which there is not enough information for reliable, a priori modelling of the performance of the instrument. The task may be further compounded when instruments are set up incorrectly, because the resultant additional errors are convolved with the already complex set of aberrations. The results are therefore often confounding, as the origin of the difficulty is hard to discern. The preferred method for avoiding these situations is the use of SRMs to calibrate the instrument performance. We will describe the various methods with which NIST SRMs may be used to determine sources of measurement error, as well as the procedures that can be used to properly calibrate the laboratory X-ray powder diffraction (XRPD) instrument.

The software discussed throughout this manuscript will include commercial as well as public-domain programs, some of which were used for the certification of NIST SRMs. In addition to the NIST disclaimer concerning the use of commercially available resources,1 we emphasize that some of the software presented here was also developed to a certain extent through longstanding collaborative relationships between the first author and the respective developers of the codes. The codes that will be discussed include: GSAS (Larson & Von Dreele, 2004[link]), the PANalytical software HighScore Plus (Degen et al., 2014[link]), the Bruker codes TOPAS (version 4.2) (Bruker AXS, 2014[link]) and DIFFRAC.EVA (version 3), and the Rigaku code PDXL 2 (version 2.2) (Rigaku, 2014[link]). The fundamental-parameters approach (FPA; Cheary & Coelho, 1992[link]) for modelling X-ray powder diffraction line profiles, as implemented in TOPAS, has been used since the late 1990s for the certification of NIST SRMs. To examine the efficacy of the FPA models, as well as their implementation in TOPAS, we have developed a Python-based code, the NIST Fundamental Parameters Approach Python Code (FPAPC), that replicates the FPA method in the computation of X-ray powder diffraction line profiles (Mendenhall et al., 2015[link]). This FPA capability is to be incorporated into GSASII (Toby & Von Dreele, 2013[link]).

3.1.2. The instrument profile function


The instrument profile function (IPF) describes the profile shape and displacement as a function of 2θ that is the intrinsic instrumental response imparted to any data collected with that specific instrument. It is a function of the radiation used, the instrument geometry and configuration, slit sizes etc.2 The basic optical layout of a divergent-beam X-ray powder diffractometer of Bragg–Brentano, para-focusing geometry using a tube anode in a line-source configuration is illustrated in Fig. 3.1.1[link]. This figure shows the various optical components in the plane of diffraction, or equatorial plane. The dimensions of the optical components shown in Fig. 3.1.1[link] and the dimensions of the goniometer itself determine the resolution of the diffractometer. The divergent nature of the X-ray beam will increase the number of crystallites giving rise to the diffraction signal; the incident-beam slit defines an angular range within which crystallites will be oriented such that their diffraction is registered. One of the manifestations of this geometry is that knowledge of both the diffraction angle and specimen position are critical for the correct interpretation of the data. The goniometer radius is the distance between the rotation axes and the X-ray source (R1), or the distance between the rotation axes and receiving slit (R2), as shown in Fig. 3.1.1[link]; these two distances must be equal. The specimen surface is presumed to be on the rotation axes; however, this condition is rarely realized and it is common to have to consider a specimen-displacement error.
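The sensitivity to specimen position noted above can be made concrete with the standard small-angle expression for the displacement error, Δ(2θ) ≈ −2z cos θ/R. A minimal sketch (the function name and sign convention are ours, not from the chapter):

```python
import math

def displacement_shift_deg(z_mm, R_mm, two_theta_deg):
    """Apparent 2-theta shift (degrees) from a specimen surface displaced
    z_mm from the goniometer rotation axes, for goniometer radius R_mm.
    Standard small-angle result: delta(2theta) = -2 z cos(theta) / R (radians)."""
    theta = math.radians(two_theta_deg / 2.0)
    return math.degrees(-2.0 * z_mm * math.cos(theta) / R_mm)
```

For example, a 15 µm displacement on an R = 217.5 mm machine shifts a line at 2θ = 30° by roughly 0.008°, illustrating why micrometre-level positioning matters; the cos θ factor drives the shift to zero as 2θ approaches 180°.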

Figure 3.1.1. A schematic diagram illustrating the operation and optical components of a Bragg–Brentano X-ray powder diffractometer.

Goniometer assemblies themselves can be set up in several configurations. Invariably, two rotation stages are utilized. Fig. 3.1.1[link] illustrates a machine of θ/2θ geometry: the tube is stationary while one stage rotates the specimen through angle θ, sometimes referred to as the angle Ω, while a second stage rotates the detector through angle 2θ. Another popular configuration is θ/θ geometry, where the specimen remains stationary and both the tube and detector rotate through angle θ. However, the diffraction optics themselves do not vary with regard to how the goniometer is set up.

The detector illustrated in Fig. 3.1.1[link] simply reads any photons arriving at its entrance window as the diffracted signal is analysed by the receiving slit. Such detectors, which often use a scintillation crystal, are typically referred to as point detectors. A diffracted-beam post-sample monochromator is often added to the beam path after the receiving slit to filter out any fluorescence from the sample. The crystal optic of these monochromators typically consists of pyrolytic graphite with a high level of mosaicity that is bent to a radius in rough correspondence to that of the goniometer. This imposes a relatively broad energy bandpass of approximately 200 eV (with 8 keV Cu Kα radiation) in width on the diffracted beam. This window is centred so as to straddle that of the energy of the source radiation being used, thereby filtering fluorescent and other spurious radiation from the detector while transmitting the primary features of the emission spectrum, presumably without distortion.

Within the last decade, however, the popularity of this geometry has fallen markedly, as the use of the post-sample monochromator/point-detector assembly has been largely displaced by the use of a position-sensitive detector (PSD). This geometry is illustrated in Fig. 3.1.2[link]. A line detector replaces the point detector, and offers the ability to discriminate with respect to the position of arriving X-rays within the entrance window of the PSD. A multichannel analyser is typically used to map the arriving photons from the PSD window into 2θ space. Depending on the size of the PSD entrance window, increases in the counting rate by two orders of magnitude relative to a point detector can be easily achieved. Furthermore, this is accomplished by including the signal from additional crystallites, mitigating any problems with particle-counting statistics (Fig. 3.1.2[link]). A drawback to the PSD is that the increased intensity is achieved with the inclusion of signals that are not within the Bragg–Brentano focusing regimen (compare Figs. 3.1.1[link] and 3.1.2[link]), leading to a broadening of the line profiles. The level of broadening is proportional to the size of the PSD entrance window and inversely proportional to 2θ angle. The move to PSDs has been further augmented by the development of solid-state, silicon strip detectors that offer the advantages of a PSD without the maintenance issues of the early gas-flow proportional PSDs. Fluorescence can be problematic with a PSD; however, the problem can be countered with the use of filters. More recent developments in electronics have improved the ability of these PSDs to discriminate with respect to energy. We discuss only this newer class of solid-state linear PSDs in this chapter.

Figure 3.1.2. A schematic diagram illustrating the operation and optical components of a Bragg–Brentano X-ray diffractometer equipped with a position-sensitive detector. Only the rays striking the centre line of the PSD, outlined in black, are in accordance with Bragg–Brentano focusing.

A monochromator can also be used to condition the incident beam so that it will consist exclusively of Kα1 radiation. Monochromators of this nature are inserted into the beam path prior to the beam's arrival at the incident-beam slit shown in Fig. 3.1.1[link]. These devices typically use a Ge(111) crystal as the optic; Ge monochromators have a much smaller energy bandpass than graphite monochromators. They are, therefore, much more complex and difficult to align. Here we discuss an incident-beam monochromator (IBM) using a Johansson focusing optic (Johansson, 1933[link]), as shown in Fig. 3.1.3[link]. When incorporating an IBM assembly into a powder diffractometer using reflection geometry, the focal line of the optic must be positioned on the goniometer radius as per the line source of the tube anode in a conventional setup, shown in the right-hand side of Fig. 3.1.3[link]. In this way, a Johansson optic provides a monochromatic X-ray source, passing some portion of the Kα1 emission spectrum, while preserving the divergent-beam Bragg–Brentano geometry as shown in Figs. 3.1.1[link] and 3.1.2[link]. The use of an IBM reduces the number of spectral contributions to the observed line shape and results in an IPF that is more readily modelled with conventional profile fitting. Furthermore, equipping such a machine with a PSD affords all of its advantages, while the elimination of the Bremsstrahlung by the IBM reduces the impact of fluorescence that can otherwise be problematic with a PSD.

Figure 3.1.3. A schematic diagram illustrating the geometry of a Johansson incident-beam monochromator.

Throughout this manuscript we use the terms `width' and `length' when referring to the optics. Width expresses extent in the equatorial plane. Length is used to denote a physical dimension parallel to the rotation axes of the goniometer as defined in Fig. 3.1.1[link]. The designation of the axial divergence angle, as well as the specifications concerning Soller slits, will be considered in terms of the double angle, both for incoming and outgoing X-rays. This is in contrast to the generally accepted single-angle definition shown in Klug & Alexander (1974[link]); hence the axial-divergence angles reported throughout this chapter are twice those that are often encountered elsewhere.

The observed line shape in powder diffraction consists of a convolution of contributions from the instrument optics (referred to as the geometric profile), the emission spectrum and the specimen, as shown diagrammatically for divergent-beam XRPD in Fig. 3.1.4[link]. The specimen contribution is often the dominant one in a given experiment; however, we do not consider it to any great extent in this discussion. The factors comprising the geometric profile are listed in Table 3.1.1[link]. Technically, neither of the last two items (specimen transparency and displacement) are components of the geometric profile of the instrument. They are functions of the specimen and the manner in which it was mounted. However, it is not possible to use a whole-pattern data-analysis method without considering these two factors; as they play a critical role in the modelling of the observed profile positions and shapes they are included in this discussion. The convolution of the components of the geometric profile and emission spectrum forms the IPF. As will be discussed, both of these contributions are complex in nature, leading to the well known difficulty in modelling the IPF from Bragg–Brentano equipment. This complexity, and the relatively limited q-space (momentum space) range accessible with laboratory equipment, tends to drive the structure solution and refinement community, with their expertise in the development of data-analysis procedures, towards the use of synchrotron and neutron sources. A significant number of the models and analytical functions discussed here were developed for, and are better suited to, powder-diffraction equipment using such nonconventional sources.

Table 3.1.1. Aberrations comprising the geometric component of the IPF

| Aberration | Controlling parameters | Impact |
| X-ray source width (wx) | Angle subtended by source: wx/R | Symmetric broadening |
| Receiving-slit width or PSD strip width (wr) | Angle subtended by slit/strip: wr/R | Symmetric broadening |
| Flat specimen error/equatorial divergence | Angle of divergence slit: α | Asymmetric broadening to low 2θ, with decreasing 2θ |
| PSD defocusing | PSD window width; angle of divergence slit: α | Symmetric broadening with 1/tan θ |
| Axial divergence | (see cases below) | Below ∼100°: asymmetric broadening to low 2θ, with decreasing 2θ; above ∼100°: asymmetric broadening to high 2θ, with increasing 2θ |
|  Case 1: no Soller slits | Axial lengths of the X-ray source (Lx), sample (Ls) and receiving slit (Lr) relative to goniometer radius (R) | |
|  Case 2: Soller slits define divergence angle | Acceptance angles ΔI and ΔD of the incident- and diffracted-beam Soller slits | |
| Specimen transparency | Penetration factor relative to diffractometer radius: 1/μR | Asymmetric broadening to low 2θ, with sin 2θ |
| Specimen displacement (z height) | Displacement of specimen surface from goniometer rotation axes | Displacement of profiles with cos θ |
Figure 3.1.4. Diagrammatic representations of convolutions leading to the observed XRPD profile.

We now consider the geometric profile with an examination of the aberrations listed in Table 3.1.1[link]. Figs. 3.1.5[link]–3.1.10 illustrate simulations of the aberration function associated with the factors listed in Table 3.1.1[link]. The first two of these, the source and receiving-slit width or silicon strip width with a PSD, simply cause symmetric broadening, constant with 2θ angle, and are typically described with `impulse' or `top-hat' functions. The flat specimen error is due to defocusing in the equatorial plane. One can see from Fig. 3.1.1[link] that for any beam that is not on the centre line of the goniometer, R1 is not equal to R2. The magnitude of the effect is directly proportional to the divergence-slit size as shown in Fig. 3.1.5[link]. Its functional dependence on 2θ angle, i.e. 1/tan θ, is illustrated in Fig. 3.1.6[link]. The flat specimen error leads to asymmetric profile broadening on the low-angle side, accentuated at decreasing values of 2θ. The functional dependence of this aberration on 2θ, shown in Fig. 3.1.6[link], is for a fixed slit; the use of a variable-divergence incident-beam slit to obtain a constant area of illumination reduces this dependence on the 2θ angle.
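As a rough check of this behaviour, the classical small-angle expression for the flat-specimen centroid shift, Δ(2θ) ≈ −α²/(6 tan θ), can be evaluated directly. A sketch (this is the textbook approximation, not the full FPA aberration function):

```python
import math

def flat_specimen_centroid_shift_deg(alpha_deg, two_theta_deg):
    """Approximate centroid shift (degrees 2-theta) of the flat specimen
    aberration for a divergence slit of full angle alpha_deg, using the
    classical small-angle expression delta(2theta) ~ -alpha^2 / (6 tan(theta))."""
    alpha = math.radians(alpha_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return math.degrees(-alpha * alpha / (6.0 * math.tan(theta)))
```

The quadratic dependence on slit angle and the 1/tan θ growth toward low angle mirror the trends of Figs. 3.1.5 and 3.1.6: a 1° slit shifts a 2θ = 20° centroid by roughly −0.017°, but has an effect an order of magnitude smaller at 2θ = 120°.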

Figure 3.1.5. The flat specimen error aberration profile as a function of incident-slit size (R = 217.5 mm).

Figure 3.1.6. The flat specimen error aberration profiles for a 1° incident slit as a function of 2θ (R = 217.5 mm).

The broadening imparted to diffraction line profiles from the early gas-flow proportional PSDs was due to defocusing originating from both the equatorial width of the PSD window and parallax within the gas-filled counting chamber. Early models for these effects (Cheary & Coelho, 1994[link]) included two parameters: one for the window width and a second for the parallax. The modern silicon strip PSDs do not need this second term as there is effectively no parallax effect. The aberration profile imparted to the data from a modern PSD (Mendenhall et al., 2015[link]) is illustrated in Fig. 3.1.7[link] as a function of window width. The profiles are symmetric about the centre line, exhibiting both increasing intensity and breadth as the window width is increased. The profile consists of two components: a central peak with a width independent of 2θ, which is due to the pixel strip width of the detector, and wings which are due to the defocusing. The breadths of the wings shown in Fig. 3.1.7[link] vary in proportion to the incident slit size and as 1/tan θ, and therefore are largely unobservable at high 2θ angles.
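The two components described above can be put on a common angular scale. A hedged sketch (the wing term is an illustrative scaling only, following the proportionalities stated in the text, and is not the full aberration model of Mendenhall et al., 2015):

```python
import math

def psd_components_deg(strip_width_mm, R_mm, window_mm, alpha_deg, two_theta_deg):
    """Rough angular scales (degrees 2-theta) of the two PSD aberration
    components: a central peak set by the silicon strip width (independent
    of 2-theta), and defocusing wings proportional to the divergence-slit
    angle and to 1/tan(theta), cut off by the PSD window extent."""
    theta = math.radians(two_theta_deg / 2.0)
    central = math.degrees(strip_width_mm / R_mm)   # 2theta-independent core
    wings = alpha_deg / math.tan(theta)             # illustrative defocusing scale
    window_limit = math.degrees(window_mm / R_mm)   # geometric cut-off of the window
    return central, min(wings, window_limit)
```

On this scaling a 75 µm strip at R = 217.5 mm gives a constant core of about 0.02°, while the wings shrink rapidly with angle, consistent with the statement that defocusing is largely unobservable at high 2θ.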

Figure 3.1.7. The PSD defocusing error aberration profiles for a silicon strip PSD as a function of window width (R = 217.5 mm, incident slit = 1° and strip width = 75 µm).

Cheary & Coelho (1998a[link],b[link]) have modelled axial divergence effects in the context of two geometric cases. Case 1 is the situation in which the axial divergence is limited solely by the width of the beam path as determined by the length of the tube filament, the receiving slit and the size of the sample. The aberration function in which these parameters are 12 mm, 15 mm and 15 mm, respectively, is illustrated in Fig. 3.1.8[link]; the extent of broadening is nearly 1° in 2θ at a 2θ angle of 15°. The other plots of Fig. 3.1.8[link] refer to a `case 2' situation where axial divergence is limited by the inclusion of Soller slits in the incident- and diffracted-beam paths. One also has to consider the impact of including a graphite post-monochromator. This would increase the path length of the diffracted beam by 10 to 15 cm, reducing axial divergence effects substantially and effectively functioning as a Soller slit. Cheary & Cline (1995[link]) determined that the inclusion of a Soller slit with a post-monochromator did result in a slight improvement in resolution; however, this was at the cost of a threefold reduction in intensity. We do not use Soller slits in the diffracted beam when using a post-monochromator. The 5° primary and secondary Soller slit aberration profile of Fig. 3.1.8[link] corresponds to an instrument with a primary Soller slit and a graphite post-monochromator. The profiles shown for the two 2.3° Soller-slit configurations actually constitute a fairly high level of collimation given the double-angle definition of the specifications. Fig. 3.1.9[link] shows the functional dependence of the aberration profile for two 2.3° Soller slits on 2θ. Below approximately 100°, the effect increases with decreasing 2θ. Approximate symmetry is observed at 100°, while asymmetry to high angle increases thereafter. The aberration profile associated with specimen transparency to the X-ray beam is illustrated in Fig. 3.1.10[link]. The figure shows the impact at 90° 2θ, where the effect is at its maximum. The observed profile is broadened asymmetrically to low 2θ; the effect drops off in a largely symmetric manner with 2θ on either side of 90°.
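The transparency displacement of Table 3.1.1 follows the standard thick-specimen expression, Δ(2θ) = −sin 2θ/(2μR). A sketch evaluating it for attenuation values comparable to those of Fig. 3.1.10 (the function name is ours):

```python
import math

def transparency_shift_deg(mu_cm, R_mm, two_theta_deg):
    """Centroid displacement (degrees 2-theta) from specimen transparency
    for an effectively thick specimen: delta(2theta) = -sin(2theta)/(2 mu R),
    maximal at 2theta = 90 deg. mu_cm is the linear attenuation coefficient
    in cm^-1; R_mm is the goniometer radius in mm."""
    mu_per_mm = mu_cm / 10.0
    return math.degrees(-math.sin(math.radians(two_theta_deg)) / (2.0 * mu_per_mm * R_mm))
```

At 90° 2θ and R = 217.5 mm, a low-absorption specimen with μ = 50 cm⁻¹ is displaced by roughly −0.026°, whereas a highly absorbing one (μ = 800 cm⁻¹) shows an effect more than an order of magnitude smaller, in line with the relative breadths of Fig. 3.1.10.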

Figure 3.1.8. Axial divergence aberration profiles shown for several levels of axial divergence. Case 1 (of Table 3.1.1[link]) is computed for a source length of 12 mm and a sample and receiving-slit length of 15 mm. The remaining three simulations are of case 2, where Soller slits limit the axial divergence (R = 217.5 mm).

Figure 3.1.9. Axial divergence aberration profiles for primary and secondary Soller slits of 2.3° as a function of 2θ (R = 217.5 mm).

Figure 3.1.10. Linear attenuation aberration profiles that would roughly correspond to SRMs 676a (50 cm−1), 640e and 1976b (100 cm−1), and 660c (800 cm−1) at 90° 2θ, where the transparency effect is at a maximum (R = 217.5 mm).

The wavelength profile or emission spectrum with its characterization on an absolute energy scale provides the traceability of the diffraction measurement to the International System of Units (SI) (BIPM, 2006[link]). The currently accepted characterization of the emission spectrum of Cu Kα radiation is provided by Hölzer et al. (1997[link]) and is shown in Fig. 3.1.11[link]. The spectrum is modelled with four Lorentzian profile shape functions (PSFs): two large ones for the primary Kα1 and Kα2 profiles, and two smaller ones displaced slightly to lower energy to account for the asymmetry in the observed line shape. The data shown in Fig. 3.1.11[link] are in energy space and are transformed into 2θ space with the dispersion relation. This is obtained by differentiating Bragg's law to obtain dθ/dλ. The dominant term in the result is tan θ, which leads to the well known `stretching' of the wavelength distribution with respect to 2θ. Maskil & Deutsch (1988[link]) characterized a series of satellite lines in the Cu Kα spectrum with an energy centred around 8080 eV and an intensity relative to the Kα1 line of 6 × 10−3. These are sometimes referred to as the Kα3 lines, and are typically modelled with a single Lorentzian within the FPA. The `tube tails' as reported by Bergmann et al. (2000[link]) are a contribution that is strictly an artifact of how X-rays are produced in the vast majority of laboratory diffractometers. With the operation of an X-ray tube, off-axis electrons are also accelerated into the anode and produce X-rays that originate from positions other than the desired line source. They are not within the expected trajectory of para-focusing X-ray optics and produce tails on either side of a line profile as illustrated, along with the Kα3 lines, in Fig. 3.1.12[link]. Lastly, the energy bandpass of the pyrolytic graphite crystals used in post-monochromators is not a top-hat (or square-wave) function. 
Thus, the inclusion of a post-monochromator influences the observed emission spectrum.
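The dispersion relation can be made explicit with a short sketch: Bragg's law maps each emission energy to a 2θ position, and d(2θ) = 2 tan θ · Δλ/λ gives the tan θ `stretch'. The Si (111) spacing and the rounded Hölzer line energies used below are illustrative values:

```python
import math

HC_KEV_A = 12.3984198  # keV * Angstrom, photon energy-wavelength conversion

def two_theta_deg(energy_kev, d_A):
    """Bragg 2-theta (degrees) for a photon of given energy and d-spacing."""
    lam = HC_KEV_A / energy_kev
    return 2.0 * math.degrees(math.asin(lam / (2.0 * d_A)))

def stretch_deg(energy_kev, d_A, delta_e_ev):
    """Angular interval (degrees 2-theta) that an energy interval delta_e_ev
    maps to, via the dispersion relation d(2theta) = 2 tan(theta) dlambda/lambda."""
    lam = HC_KEV_A / energy_kev
    theta = math.asin(lam / (2.0 * d_A))
    return math.degrees(2.0 * math.tan(theta) * (delta_e_ev / 1000.0) / energy_kev)
```

For the Si 111 reflection (d ≈ 3.1356 Å), the ~20 eV separation of Cu Kα1 (≈8047.8 eV) and Kα2 (≈8028.0 eV) maps to a splitting of about 0.07° 2θ; at smaller d-spacings (higher 2θ) the same energy interval maps to a progressively larger angular interval, which is the `stretching' described above.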

Figure 3.1.11. The emission spectrum of Cu Kα radiation as provided by Hölzer et al. (1997[link]), represented by four Lorentzian profiles: two primary ones and a pair of smaller ones to account for the observed asymmetry. The satellite lines, often referred to as the Kα3 lines, are not displayed.

Figure 3.1.12. Illustration of the Kα3 lines and tube-tails contributions to an observed profile on a log scale, shown with two fits: the fundamental-parameters approach, which includes these features, and the split pseudo-Voigt PSF, which does not.

A Johansson IBM dramatically reduces the complexity of the IPF by largely removing the Kα2, Kα3 and tube-tails contributions to the observed profile shape. The vast majority of the Bremsstrahlung is also removed. Furthermore, the inclusion of the IBM increases the path length of the incident beam by 25 to 30 cm. This substantially reduces the contribution of axial divergence to the observed profile shape. The crystals used are almost exclusively germanium (the 111 reflection), and are ground and bent to the Johansson focusing geometry, as shown in Fig. 3.1.3[link]. They can be symmetric, with the source-to-crystal distance a and the crystal-to-focal point distance b being equal, in which case they will exhibit a bandpass of the order of 8 eV. They will slice a central portion out of the Kα1 line, clipping the tails, to transmit perhaps 70% of the original width of the Cu Kα1 emission spectrum. This yields a symmetric profile shape of relatively high resolution, or reduced profile breadth (other parameters being equal). The crystals can also be asymmetric, with the distance a being ∼60% of the distance b. These optics will exhibit a bandpass of the order of 15 eV, in which case they transmit most of the Kα1 line for a higher intensity, but with a lower resolution. The optic discussed here is of the latter geometry, as shown in Fig. 3.1.3[link].
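The quoted bandpass figures can be rationalized with the differential Bragg law, ΔE/E = cot θB · Δθ. A sketch, assuming an ideal Ge 111 optic (d ≈ 3.266 Å) and treating the effective angular acceptance as a free illustrative parameter:

```python
import math

HC_EV_A = 12398.42  # eV * Angstrom

def bandpass_ev(d_A, wavelength_A, delta_theta_deg):
    """Energy bandpass (eV) passed by a crystal optic with effective angular
    acceptance delta_theta_deg, from the differential Bragg law
    dE/E = cot(theta_B) * dtheta. Illustrative; assumes an ideal crystal."""
    e = HC_EV_A / wavelength_A
    theta_b = math.asin(wavelength_A / (2.0 * d_A))
    return e * math.radians(delta_theta_deg) / math.tan(theta_b)
```

For Cu Kα1 (λ ≈ 1.5406 Å) an acceptance of ~0.014° reproduces the ~8 eV bandpass of the symmetric optic; roughly doubling the acceptance gives the ~15 eV figure of the asymmetric geometry, since the relation is linear in Δθ.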

A potential drawback to the use of an IBM concerns the nature of the Kα1 emission spectrum it transmits, which may preclude the use of data-analysis methods that are based upon an accurate mathematical description of an incident spectrum. At best, a `perfect' focusing crystal will impose an uncharacterized, though somewhat Gaussian, energy filter on the beam it diffracts. However, in certain optics the required bend radius of Johansson geometry is realized by clamping the crystal onto a curved form. The clamping restraint exists only at the edges of the optic, not in the central, active area where it is illuminated by the X-ray beam. The crystal itself however, can minimize internal stress by remaining flat; in this case an anticlastic curvature of the optic results. A `saddle' distortion across the surface of the diffracting region of the crystal results in a complex asymmetric Kα1 spectrum that defies accurate mathematical description. Johansson optics, however, can be bent by cementing the crystals into a pre-form, yielding an optic of superior perfection in curvature. Fig. 3.1.13[link] shows data collected from such an optic using an Si single crystal, 333 reflection, as an analyser. Parallel-beam conditions were approximated in this experiment with the use of very fine 0.05 mm incident and receiving slits. The observed symmetric emission profile of Fig. 3.1.13[link](a) can be modelled with a combination of several Gaussians. However, a Johansson optic will scatter 1–2% of high-energy radiation to a higher 2θ angle than the Kα1 focal line of the optic. This unwanted scatter is dominated by, but not exclusive to, the Kα2 spectrum. Louër (1992)[link] indicated that it can be largely blocked with a knife edge aligned to just `contact' the high-angle side of the optic's focal line. Alternatively, the NIST method is to use a slit aligned to straddle the focal line. 
Proper alignment of this anti-scatter slit is critical to achieving a good level of performance with the absence of `Kα2' scatter, as illustrated in Fig. 3.1.13[link](b). As will be demonstrated, with use of any Johansson optic the elimination of the Kα2 line is of substantial benefit in fitting the observed peaks with analytical profile-shape functions.

Figure 3.1.13. Illustration of the effect of the Johansson optic on the Cu Kα emission spectrum. (a) Data collected for the Si 333 single-crystal reflection on a linear scale. (b) Analogous data from the Johansson optic alone on a log scale. Both data sets were collected with 0.05 mm incident and receiving slits. The near absence of the Kα2 scatter displayed in (b) can only be realized with the use of a properly aligned anti-scatter slit located at the focal line of the optic.

3.1.3. Instrument alignment


Modern instruments embody the drive towards interchangeable pre-aligned or self-aligning optics, which, in turn, has led to several approaches to obtaining proper alignment with minimum effort on the part of the user. We will not review these approaches, but instead we describe here the methods used at NIST, which could be used to check the alignment of newer equipment. With the use of calibration methods that simply characterize the performance (which includes the errors) of the machine in an empirical manner and apply corrections, the quality of the instrument alignment may be surprisingly uncritical for a number of basic applications such as lattice-parameter refinement. However, with the use of the more advanced methods for characterization of the IPF that are based on the use of model functions, the proper alignment of the machine is critical. The models invariably include refineable parameter(s) that characterize the extent to which the given aberration affects the data; the correction is applied, and the results are therefore `correct'. However, if the instrument is not aligned properly, the analysis attempts to model the errors due to misalignment as if they were an expected aberration. The corrections applied are therefore incorrect in degree and nature and an erroneous result is obtained.

The conditions for proper alignment of a Bragg–Brentano diffractometer (see Fig. 3.1.14[link]) are:

  • (1) the goniometer radius, defined by the source-to-rotation-axes distance, R1, equals that defined by the rotation-axes-to-receiving-slit distance, R2 (to ±0.25 mm);

    Figure 3.1.14. Diagrammatic explanation of the conditions necessary to realize a properly aligned X-ray powder diffractometer.

  • (2) the X-ray line source, sample and receiving slit are centred in the equatorial plane of diffraction (to ±0.25 mm);

  • (3) the goniometer rotation axes are co-axial and parallel (to ±5 µm and <10 arc seconds);

  • (4) the X-ray line source, specimen surface, detector slit and goniometer rotation axes are co-planar, in the `zero' plane, at the zero angle of θ and 2θ (to ±5 µm and ±0.001°); and

  • (5) the incident beam is centred on both the equatorial and `zero' planes (to ±0.05°).

The first three conditions are established with the X-rays off, while conditions (4) and (5) are achieved with the beam present, as it is actively used in the alignment procedure. Neither incident- nor diffracted-beam monochromators are considered; they are simply added on to the Bragg–Brentano arrangement and have no effect on the issues outlined here. Also, in order to execute this procedure, a sample stage that can be rotated by 180° in θ is required. However, this does not need to be the sample stage used for data collection. Before any concerted effort to achieve proper alignment, it is advisable to check the mechanical integrity of the equipment. Firmly but gently grasp a given component of the diffractometer, such as the tube shield, receiving-slit assembly or sample stage, and try to move it in a manner inconsistent with its proper mounting and function. The number of defects, loose bolts etc., that can be found this way, even with quite familiar equipment, can be surprising.
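The five conditions can be collected into a programmatic checklist. A minimal sketch in which the field names and measured values are hypothetical, with the tolerances taken from the list above:

```python
def check_alignment(meas):
    """Return (condition, ok) pairs for a dictionary of measured alignment
    values: radii and centring in mm, axis offsets in um/arcsec, zero-plane
    figures in um/deg, and beam centring in deg. Tolerances follow the text."""
    return [
        ("(1) |R1 - R2| <= 0.25 mm",           abs(meas["R1_mm"] - meas["R2_mm"]) <= 0.25),
        ("(2) equatorial centring <= 0.25 mm", abs(meas["centring_mm"]) <= 0.25),
        ("(3) axes co-axial and parallel",     meas["axis_offset_um"] <= 5 and meas["axis_tilt_arcsec"] < 10),
        ("(4) zero plane",                     meas["zero_plane_um"] <= 5 and meas["zero_angle_deg"] <= 0.001),
        ("(5) beam centring <= 0.05 deg",      abs(meas["beam_centring_deg"]) <= 0.05),
    ]
```

Such a checklist is only bookkeeping, of course; the measurements themselves come from the ruler, straightedge and beam-based procedures described in this section.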

Let us briefly review the development of diffraction equipment and the subsequent impact on alignment procedures. The goniometer assemblies used for powder diffractometers utilize a worm/ring gear to achieve rotation of the θ and 2θ axes, with a stepper or servo motor actuating the worm gear to provide ∼0.002° resolution. `Home' switches, a coarse one on the ring gear and a fine one on the worm shaft, allow the software to locate the reference angle(s) of the goniometer assembly with a repeatability equal to the stepper-motor resolution. With the first generation of these automated goniometers, the zero angles were fixed relative to the home positions. With such a design the invariant reference was the receiving slit, and the operator adjusted the height of the tube shield and the angle of the θ stage to realize alignment condition (4). Second-generation machines offered the ability to set the zero angles relative to the home positions (or those of optical encoders) via software, in which case the exact angular position of either the X-ray tube focal line or of the receiving slit in θ–2θ space is arbitrary. The operator simply determines the positions where the θ and 2θ angles are zero, and then sets them there. There is no technical reason why the older designs cannot be aligned to the accuracy of newer ones. In practice, however, with older equipment the patience of the operator tends to become exhausted, and a less accurate alignment is accepted. An important consideration in evaluating modern equipment is that it is often the incident optic, not the X-ray source (focal line), that is used as the reference. Which situation applies can be readily discerned by inspection of the hardware: if the incident optic is anchored to the instrument chassis, then it is the reference; if it is attached to the tube shield, then the source establishes the reference. The NIST equipment has the latter design.

Condition (1) is that the goniometer radius, defined by the source-to-rotation-axis distance, R1, equals that defined by the rotation-axis-to-receiving-slit distance, R2. This condition is required for proper focusing and is generally realized with the use of rulers to achieve a maximum permissible error of R ± 0.25 mm for a nominal R = 200 mm diffractometer. Condition (2) concerns the centring of the components in the plane of diffraction or equatorial plane. This condition is assured with the use of straightedges and rulers and, again for a line focus with an 8 to 12 mm source length, the maximum permissible error for deviations along the equatorial plane is ±0.25 mm. One can also consider the takeoff angle at this time; this is the angle between the surface of the X-ray tube anode and the equatorial centre line of the diffractometer incident-beam path. As this angle decreases the resolution is improved at the expense of signal intensity, and vice versa, as a consequence of the variation in the size of the source that the specimen `sees'. However, with modern fine-focus tubes, this is not a major effect. Qualitative experiments at NIST indicate that the exact angle is not critical; a 6° takeoff angle is reasonable.

The third issue concerns the concentricity of the θ and 2θ rotation axes of the goniometer assembly; this is a matter of underappreciated concern. It is not, however, one over which the end user has a great deal of control. Measurement of axis concentricity requires the construction of some fairly complex and stiff structures capable of measuring displacements of the order of 1 to 2 µm and rotations of seconds of arc. The objective is to measure both the offset between the two axes and the angle between them. Concentricity errors affect XRPD data in a manner analogous to that of sample displacement; hence a 5 µm concentricity error is of concern. Worse yet is the possibility that some degree of precession occurs between the two axes with the operation of the goniometer. In this case, the performance of the machine will be difficult to describe using established models.
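The analogy with specimen displacement can be made quantitative. A minimal sketch (Python) of the standard displacement aberration, Δ2θ ≈ −2s cos θ / R, with a nominal 200 mm goniometer radius, shows that a 5 µm error shifts peaks by a few thousandths of a degree:

```python
import math

def displacement_shift_deg(s_um, theta_deg, R_mm=200.0):
    """Approximate 2-theta shift (degrees) caused by a displacement s of the
    specimen surface (or, analogously, an axis-concentricity error) from the
    goniometer axis, using the standard aberration
    delta(2theta) = -2 * s * cos(theta) / R (radians)."""
    s_mm = s_um * 1e-3                     # micrometres -> mm
    theta = math.radians(theta_deg)
    return math.degrees(-2.0 * s_mm * math.cos(theta) / R_mm)

# A 5 um concentricity error on a nominal R = 200 mm goniometer:
for two_theta in (20, 60, 120):
    print(two_theta, round(displacement_shift_deg(5, two_theta / 2), 5))
```

The shift at low angle is roughly three times the ±0.001° zero-angle tolerance cited below, which is why a 5 µm concentricity error matters.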

Subsequent experiments are performed with the X-rays present in order to achieve conditions (4) and (5). The criteria for proper alignment are universal, but there is a range of experimental approaches by which they can be realized. The specific approach may well be based on the age and make of the equipment as well as the inclinations of the operator. The essence of the experimental design remains constant, however: the operator uses optics mounted in the sample position that will either pass or block the X-ray beam in such a way as to tell the operator if and when the desired alignment condition has been realized. One approach is to use a knife edge mounted as shown in Fig. 3.1.15[link]; a 2θ scan is performed using a point detector with a narrow receiving slit. When the intensity reaches 50% of the maximum, the X-ray source (focal line), the rotation axes of the goniometer and the 2θ (zero) angle are coplanar. However, the problematic presumption here is that the sample stage is aligned so exactly that the rotation axes of the goniometer assembly bisect the specimen surface, and therefore the knife edge, to within a few micrometres. This is equivalent to the z height being zero. The verification of this level of accuracy in stage alignment would be exceedingly difficult via direct measurements on the sample stage itself. While many would be inclined to trust the instrument manufacturer to have correctly aligned the stage, at NIST we use an alternative approach.

[Figure 3.1.15]

Figure 3.1.15 | top | pdf |

Diagrammatic view illustrating the use of a knife edge to determine the 2θ zero angle.

A straightforward means of addressing this problem is to use a stage that can be inverted, and perform the 2θ zero angle experiment in both orientations. 2θ scans of a knife edge in the normal and inverted positions can be compared to determine the true 2θ zero angle, independent of any z-height issue associated with the stage. It is often useful to draw a diagram of the results in order to avoid confusion; half the difference between the two measured zero angles yields the true one. With this information, the final alignment involves adjusting the specimen z height in the desired stage, which need not be invertible, until what is known to be the true 2θ zero angle is realized. The knife edge can also be used to centre the beam on the rotation axes, as per condition (5). Determination of the θ stage zero angle can be performed using a precision ground flat. An alternative optic to the knife edge is a rectangular `tunnel' through which the X-ray beam passes. The entrance window of said tunnel may measure 20 to 40 µm in height and 10 mm in width, while the tunnel itself is 5 cm long. It is mounted in the beam path as illustrated in Fig. 3.1.16[link], with the 20 to 40 µm dimension defining the width of the beam and the 10 mm dimension describing the beam's length. Optics like this can be made of metal but are often made of glass. This optic will pass an X-ray beam only if it is parallel to the direction of the tunnel and can be used to determine both θ and 2θ zero angles. These are the optics used at NIST, via an experimental approach that will be discussed below.
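The bookkeeping of the inversion experiment can be sketched in a few lines (the readings are hypothetical). Assuming the inversion exactly reverses the sign of the z-height contribution, the true zero is the mean of the two apparent zeros and half their difference is the stage bias:

```python
def true_zero(z_normal, z_inverted):
    """Return (true zero angle, z-height bias) from apparent 2-theta zero
    angles measured with the stage in the normal and inverted orientations.
    Inverting the stage flips the sign of the shift caused by a z-height
    error, so the true zero is the mean of the two readings and half their
    difference is the bias attributable to the stage."""
    return 0.5 * (z_normal + z_inverted), 0.5 * (z_normal - z_inverted)

# Hypothetical readings (degrees 2-theta):
zero, bias = true_zero(0.012, -0.004)
print(zero, bias)
```

This is the same arithmetic one would otherwise carry out on the diagram recommended in the text.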

[Figure 3.1.16]

Figure 3.1.16 | top | pdf |

Diagrammatic view of the glass tunnel for determination of θ and 2θ zero angles.

If a diffractometer is being commissioned for the first time, or if major components have been replaced, it is appropriate to use fluorescent screens to achieve a rough alignment and to ensure that the incident beam does indeed cross the goniometer rotation axes and enter the detector; otherwise one may waste time looking for the beam. It is critical that these experiments are performed with the tube at operating power and that the equipment is at thermal equilibrium. Thermal effects will cause the anode to expand and contract, which will typically cause the position of the source to change. This is particularly critical when using optics to prepare the incident beam, as the performance of the optics can change markedly with movement of the source.

The objective of the first experiment using X-rays is to achieve parallelism between the line source of the tube anode, or focal line of the Johansson optic, and the receiving slit. A 5 µm platinum pinhole, which was originally manufactured as an aperture for transmission electron microscopy, is mounted in the sample position and used to image the focal line of the source onto the receiving slit (Fig. 3.1.17[link]). This experiment is the one exception to the operating-power rule, as otherwise Bremsstrahlung will penetrate the platinum foil of the pinhole and produce confounding results. Success can be realized with settings of 20 kV and 10 mA; these reduced power settings are not thought to affect the angle between the tube anode and receiving slit (which is the issue addressed in this experiment). The incident slit is opened to the point at which the line source itself is imaged, not the incident slit. The Soller slits, and the post-monochromator if there is one, must also be removed to allow for the axial divergence that is needed for the success of this experiment. The pinhole images the line source onto the receiving slit; as the angle between the two decreases, progressively larger lengths of the receiving slit are illuminated during a 2θ scan. The tilt of the X-ray tube shield is varied and sequential 2θ scans are collected. As parallelism is approached, the profiles will exhibit a progressive increase in the maximum intensity value, with corresponding decreases in breadth. Conclusive results are shown in Fig. 3.1.18[link]. It should be noted that this is a very difficult experiment to perform because the beam is essentially open and scatter is abundant. Shielding must be installed such that the detector can see only the signal that passes through the pinhole. The pinhole itself should also be shielded to minimize the area of (relatively transparent) platinum exposed to the direct beam.

[Figure 3.1.17]

Figure 3.1.17 | top | pdf |

Design of experiments using a pinhole optic to align the X-ray source with the receiving slit.

[Figure 3.1.18]

Figure 3.1.18 | top | pdf |

Successful results from the pinhole experiment showing variation in profile shape with successive adjustment of tube tilt; the central peak of highest intensity indicates the state of parallelism between the source and the receiving slit.

We now proceed to determine the θ and 2θ zero angles using the glass-tunnel optic. Initial experiments should be performed without a post-monochromator, as its presence tends to complicate finding the beam. However, it should be installed as experiments progress, as it will lead to an increase in resolution; it may alter the wavelength distribution slightly and its mass will change the torque moment on the 2θ axis. The latter two factors may alter the apparent 2θ zero by several hundredths of a degree. It is best to use a minimum slit size for the incident beam that will fully illuminate the entrance to the tunnel optic to avoid undue levels of scatter. The receiving slit should be the smallest size available, 0.05 mm in our case. The first experiment will determine a first approximation of the zero angle of θ. The tunnel optic is used, with a θ scan being performed with an open detector. Once an approximate zero angle of θ is determined, the receiving slit is installed and a 2θ scan is performed with θ at its zero point. Thus, we now have a qualitative idea of both zero angles. Then an experiment is performed as shown in Fig. 3.1.19[link]; sequential 2θ scans are performed as θ is stepped through its zero point by very small steps (0.004° in the case of our experiment). The tunnel scatters radiation from its upper and lower surfaces when it is not parallel to the central portion of the beam, resulting in a lobe on each side of the direct beam in Fig. 3.1.19[link]. When θ is at the desired zero angle, the direct beam is transmitted with minimum intensity in the lobes.

[Figure 3.1.19]

Figure 3.1.19 | top | pdf |

Results from 2θ scans at successive θ angles using the glass tunnel to determine the θ and 2θ zero angles.

Once the zero positions of the θ and 2θ angles are determined, the stage is inverted and this set of experiments is repeated. It is desirable to drive the stage by 180°; however, remounting the stage in an inverted position is acceptable if the mounting structure centres the stage to within a few micrometres. Again, it is often useful to draw a diagram of the results from these two zero-angle determinations to ensure that the data are interpreted correctly, as shown in Fig. 3.1.20[link]. In this example, the sample height is displaced in the positive z direction, otherwise the positions of orientation 1 and 180° from orientation 1 would be reversed. The operator should verify that fully self-consistent results are obtained with respect to the four zero angles shown in Fig. 3.1.20[link]. Because the beam is divergent, the difference between the two θ zero angles will not be precisely 180°, as shown in Fig. 3.1.20[link]. Again, half the difference between the two measured 2θ zero angles yields the true one, with respect only to the locations of the X-ray source and the goniometer rotation axes. Using the data of Fig. 3.1.20[link] and the goniometer radius, the z-height error on the stage in question could be computed and an adjustment made; this should be followed by repeating the two zero-angle measurements and checking for self-consistency to provide additional confidence in the alignment.
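The z-height error implied by the split between the two zero-angle readings can be estimated from the goniometer radius. A minimal sketch, assuming simple small-angle geometry in which the angular shift is approximately z/R radians (the exact geometric factor depends on the optic used):

```python
import math

def z_height_error_mm(zero_normal_deg, zero_inverted_deg, R_mm=200.0):
    """Estimate the stage z-height error from the apparent 2-theta zero
    angles measured in the normal and inverted orientations. Half the
    difference between the readings is the angular shift caused by the z
    offset; a simple small-angle model takes that shift as z / R radians.
    The exact factor depends on the optic (knife edge vs tunnel)."""
    half_diff_deg = 0.5 * (zero_normal_deg - zero_inverted_deg)
    return R_mm * math.tan(math.radians(half_diff_deg))

# A 0.003 deg split between the readings corresponds to roughly 5 um:
print(round(z_height_error_mm(0.0015, -0.0015) * 1000, 1), 'um')
```

Note the consistency with the stated tolerances: a ±0.001° angular uncertainty and a ±5 µm positional one are the same requirement viewed two ways.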

[Figure 3.1.20]

Figure 3.1.20 | top | pdf |

Diagram of hypothetical results from two zero-angle measurements (Fig. 3.1.19[link]) with the sample stage in the normal and inverted positions to determine the true 2θ zero angle of the goniometer assembly in the absence of a z-height error from sample-stage misalignment.

The final task is to mount the stage to be used in subsequent data collection and adjust its sample height until the known true 2θ zero angle is obtained. The final experiment is a θ–2θ scan of the tunnel optic to yield data of the kind shown in Fig. 3.1.21[link]. The symmetry of the lobes on each side of the peak from the direct beam is indicative of the correct θ zero angle setting. This final high-resolution experiment is an excellent indicator of the state of the alignment of the instrument. These experiments, when used in conjunction with profile fitting, can yield measurements of the zero angles with an uncertainty for θ and 2θ of ±0.001°. Given the high certainty with which the zero angles are determined, they would then not be refined in subsequent data analyses. The alignment of the incident-beam slit, condition (5), is accomplished with a scan of the direct beam. If the machine is equipped with a variable-divergence incident-beam slit, it is important to evaluate it at several settings because changes in the centre line of the beam may occur as the divergence angle is altered. Use of an excessively narrow receiving slit should be avoided for scans of the direct beam, since the thickness of the metal blades used for the slit itself may be larger than the width of the slit, leading to a directional selectivity as the scan is performed.

[Figure 3.1.21]

Figure 3.1.21 | top | pdf |

Final results from a θ–2θ scan using the glass tunnel, indicating the correct determination of θ and 2θ zero angles.

The alignment presented here was carried out using a scintillation detector; however, much of it could be performed using a PSD in `picture-taking' mode. In any case, the count rates have to be monitored to ensure that they are within the linear range of the detector (5000 to 10 000 counts per second), otherwise anomalous results are obtained. Attenuating foils that are flat and in good condition can be used to reduce the intensity. It should also be stressed that if the observations made during the experiments do not meet expectations, something is wrong and the desired outcome, i.e. the correct alignment, will not be realized. Drawing a diagram of the X-ray beam path can be very useful for discovering the cause of apparently unexplainable observations. Also, throughout these experiments it is appropriate for the operator to try various additional settings to ensure that the machine is operating as expected. Anomalous observations can almost always be explained in a quantitative manner with appropriate investigation. Patience is required.
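The linearity limit can be made quantitative with a dead-time correction. A minimal sketch using the common non-paralyzable detector model (the 1 µs dead time is an illustrative value, not a property of any particular detector):

```python
def deadtime_correct(observed_cps, tau_s=1e-6):
    """Non-paralyzable dead-time model: true = observed / (1 - observed*tau).
    tau_s = 1 microsecond is purely illustrative; the real dead time must
    come from the detector specifications or a calibration measurement."""
    loss = 1.0 - observed_cps * tau_s
    if loss <= 0.0:
        raise ValueError("observed rate outside the correctable range")
    return observed_cps / loss

# Near the top of the quoted linear range, losses are already about 1%:
print(round(deadtime_correct(10_000)))  # -> 10101
```

This illustrates why rates above roughly 10 000 counts per second are attenuated with foils rather than corrected after the fact during alignment work.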

In the past, achieving acceptable performance with a Johansson optic was considered so problematic that these optics were under-used, despite the improvements in data quality they provided. Modern instrumentation can provide their advantages with dramatically reduced effort. The NIST Johansson IBM, however, was derived from an older design that was originally supplied with a Siemens D500, circa 1987. It uses a Huber 611 monochromator housing that provides five degrees of freedom in the positioning of the optic: the a distance, the takeoff angle, crystal 2θ, tilt and azimuth. For the aforementioned reasons, we installed a modern Johansson optic manufactured by Crismatec (now part of Saint Gobain). There are two stages to the procedure for aligning the machine equipped with the IBM: first, the crystal optic itself is aligned with the line source of the tube anode, and then the tube shield/IBM assembly is aligned with the goniometer. The second stage is analogous to the instrument alignment described above, so here we will discuss only the first stage (although not exhaustively).

The alignment of the Johansson optic to the X-ray source is done largely with the X-rays present. The crystal tilt and azimuth are set by using a fluorescent screen or camera to observe the diffraction images from the optic as it is rotated through its diffraction angle. Fig. 3.1.22[link], which is reproduced from the instructions supplied by Siemens, shows how the images form and move, informing the operator of necessary adjustments. Initially, a set of hex-drive extensions was used to drive the optic remotely through its 2θ angle. The source was operated at full power while the movement of the image was observed through a lead-impregnated window. Later, a motor drive was installed onto the 2θ actuator of the 611 housing. In the end, the incident-beam intensity realized from the optic is dependent upon the operator's ability to discern the subtleties in the image movement (Fig. 3.1.22[link]). Blocking the axially divergent signals from the optic with a high-resolution 0.05° Soller slit dramatically improves the sensitivity of this observation to the setting of the tilt and azimuth angles. The inclusion of the Soller slit, however, will reduce the intensity markedly. A complete darkening of the room, including blocking of the shutter lights, as well as allowing time for pupil dilation, can be helpful. However, the use of an X-ray imager or a PSD in picture-taking mode improves the quality of the alignment by allowing for a more accurate interpretation of the observations.

[Figure 3.1.22]

Figure 3.1.22 | top | pdf |

Figures found within the instructions for a Siemens D500 incident-beam monochromator in a Huber 611 monochromator housing, illustrating image formation and movement for correct and incorrect settings of tilt and azimuth angles (reproduced with verbal permission from Huber).

The goal is the formation of an image in the centre of the beam path that splits symmetrically out to the edges with increasing crystal 2θ angle (Fig. 3.1.22[link]). The directions supplied by Siemens and Huber allude to the fine adjustment [see Huber (2014[link]) for movies] of the tilt and azimuth by examining the structure of the diffracted beam at the optic's focal point. A fluorescent screen located at the focal point and set at a 6° angle to the beam path is used to image the beam structure. With the use of the Soller slit for coarse alignment of tilt and azimuth, the desired final image for the fine-adjustment mode was, indeed, obtained. But it was not possible, even with a deliberate mis-setting of tilt and azimuth angles, to use the defective images at the focal point as a source of feedback for correcting the settings because they were too diffuse.

The Johansson optic is supplied with a and b distances that correspond to the angle of asymmetry in the diffraction planes and the bend radius. The instructions indicate that an incorrect setting in a will cause the optic's diffraction image to move up or down in the plane of diffraction with variation of the crystal 2θ angle. Again, a lack of sensitivity prevents the use of this effect as a feedback loop to set a. Alternative experiments for the optimization of the distance a of the optic were time consuming and not conclusive, so we decided to accept the supplied value for a. As before, we set the takeoff angle at 6°. A critical and quite difficult problem is the alignment of the slit located between the X-ray tube and the crystal optic (not shown in Fig. 3.1.3[link]). This slit centres the beam onto the active area of the optic; misalignment leads to unwanted scatter from the optic's edges. It is aligned with the X-ray beam present, yielding an image of the shadow cast by the optic itself on one side, and one edge of the slit on the other. The optic is rotated in 2θ so that its surface is parallel to the X-ray beam, i.e. shadowing is minimized. The shadow from the second edge of the slit is obscured by the optic. Geometric considerations are used in conjunction with knowledge of the radius of curvature of the optic to obtain the correct location for the slit. A drawing is highly useful in this instance. After the installation of this slit, it is appropriate to re-check the tilt and azimuth settings, as the alignment of the optic is nearly complete.

The setting of the crystal 2θ is performed by evaluation of the direct beam, either with scans using a scintillation detector or by taking pictures with a PSD. With increasing crystal 2θ, the beam diffracted by the optic will build in the centre forming a broad profile; then the intensity on either side of the initial profile will rise, leading to the desired box form; and then intensity at the centre of the box will fall, followed lastly by the intensity at either side of the centre. This is consistent with Fig. 3.1.22[link]. The process will repeat at half the Kα1 intensity for the Kα2 line. (Avoid tuning to the wrong line.) The crystal 2θ setting should be checked at regular intervals with a scan of the direct beam; this is the only setting on the IBM that has been observed to drift with time.

The final step in alignment of the IBM is the installation of the anti-scatter slit located at the focal line of the optic (Fig. 3.1.3[link]). This is performed after the IBM assembly is aligned to the goniometer. Optimal performance of the anti-scatter slit can be expected only if it is located precisely at the focal line, which is the narrowest region through which the maximum X-ray flux is transmitted. Therefore, the NIST alignment procedure includes an experiment using a narrow slit positioned by an xy translator to evaluate the relative flux of the beam in the vicinity of the focal line. The y direction is parallel to the b direction (Fig. 3.1.3[link]). A 0.05 mm slit is translated across the beam in the x direction, while intensity readings are recorded from an open detector. This process is repeated for a sequence of y distances. A plot of the recorded intensity versus x at a sequence of y settings will yield a set of profiles that broaden on either side of the true value of b; the narrowest, highest-intensity profile indicates the location of the focal line. Thus, the experiment determines both the true b distance and the location in the x direction of the focal line. Once b is known, translational adjustment of the IBM assembly may be required to locate the focal line precisely on the goniometer radius. The experiment also effectively measures the size of the focal line; in our case this was 0.15 mm. A slit of this dimension was fabricated, and the xy translator was replaced with a standard slit retainer positioned at the desired location. The results are shown in Fig. 3.1.13[link].
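The search for the narrowest profile lends itself to a short numerical sketch (Python with NumPy; the Gaussian profiles and their widths are synthetic, chosen only to illustrate the selection logic, not measured data):

```python
import numpy as np

def locate_focal_line(x, profiles):
    """Given intensity profiles I(x) recorded at a sequence of y settings
    (rows of `profiles`), return the index of the y setting at which the
    beam is narrowest (the focal line) and the x position of its peak.
    Widths are estimated as the span of points above half maximum."""
    def fwhm(I):
        half = I.max() / 2.0
        above = np.where(I >= half)[0]
        return x[above[-1]] - x[above[0]]
    widths = [fwhm(p) for p in profiles]
    best = int(np.argmin(widths))
    return best, x[np.argmax(profiles[best])]

# Synthetic example: Gaussian beams that are narrowest at the middle y step.
x = np.linspace(-1.0, 1.0, 401)                # mm across the beam
sigmas = [0.30, 0.15, 0.06, 0.18, 0.35]        # mm, hypothetical widths
profiles = [np.exp(-x**2 / (2 * s**2)) / s for s in sigmas]
best_y, peak_x = locate_focal_line(x, profiles)
print(best_y, round(peak_x, 3))   # -> 2 0.0
```

The narrowest profile is also the most intense because the same flux is concentrated into a smaller width, which is why the two criteria in the text coincide.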

3.1.4. SRMs, instrumentation and data-collection procedures

| top | pdf |

NIST maintains a suite of SRMs suitable for calibration of powder-diffraction equipment and measurements (NIST, 2015a[link],b[link],c[link],d[link]). These SRMs can be divided into various categories based on the characteristic they are best at calibrating for: line position, line shape, instrument response or quantitative analysis, although some degree of overlap exists. The powder SRMs are certified in batches, typically consisting of several kilograms of feedstock, that are homogenized, riffled and bottled prior to the certification. A representative sample of the bottle population, typically consisting of ten bottles, undergoes certification measurements. The specific size of each lot is based on expected sales rates, mass of material per unit and an anticipated re-certification interval of 5 to 7 years. When the stocks of a given certification are exhausted, a new batch of the SRM is certified and a letter is appended to the code to indicate the new certification. Hence SRM 640e (2015) is the sixth certification of SRM 640, originally certified in 1973. The microstructural character of the SRM artifact and/or the certification procedure itself are expected to change (improve) with each renewal.

To understand the role of an SRM in the calibration of XRPD measurements and equipment, it is helpful to discuss briefly the documentation accompanying an SRM [see also Taylor & Kuyatt (1994[link]), GUM (JCGM, 2008a[link]) and VIM (JCGM, 2008b[link])]. NIST SRMs are known internationally as certified reference materials. Accompanying an SRM is a certificate of analysis (CoA), which contains both certified and non-certified quantity values and their stated uncertainties. Certified quantity values are determined by NIST to have metrological traceability to a measurement unit – often a direct linkage to the SI. Non-certified values (those lacking the word certified, as presented within a NIST CoA) are defined by NIST as best estimates of the true value provided by NIST where all known or suspected sources of bias have not been fully investigated. Both certified and non-certified quantity values are stated with an accompanying combined expanded (k = 2) uncertainty. Expanded uncertainty is defined as the combined standard uncertainty values for a given certified value multiplied by a coverage factor, k, of 2 and represents a 95% confidence interval for a given value. The combined standard uncertainties are determined by applying standard procedures for the propagation of uncertainty. The distinguishing characteristic of a NIST-certified quantity value is that all known instrumental measurement uncertainties have been considered, including the uncertainties from the metrological traceability chain. NIST defines uncertainties in two contexts: type A and type B. Type A are the random uncertainties determined by statistical methods, for example the standard deviation of a set of measurements. Type B uncertainties are systematic in nature and their extent is usually based on scientific judgment using all relevant information available on possible biases of the experiment. 
Assessing the technical origin and magnitude of these type-B uncertainties is a dominant part of the NIST X-ray metrology program.
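The arithmetic behind an expanded uncertainty follows directly from the definitions above: independent standard-uncertainty components are combined in quadrature and multiplied by the coverage factor. A minimal sketch (the component values are hypothetical):

```python
import math

def expanded_uncertainty(type_a, type_b, k=2.0):
    """Combine standard-uncertainty components in quadrature (the usual
    propagation for independent terms) and apply coverage factor k.
    k = 2 corresponds to approximately a 95% confidence interval."""
    u_c = math.sqrt(sum(u * u for u in list(type_a) + list(type_b)))
    return k * u_c

# Hypothetical components, in the same units as the certified value:
U = expanded_uncertainty(type_a=[0.00002], type_b=[0.00003, 0.00001])
print(f"{U:.6f}")
```

Note that the largest component dominates the quadrature sum, which is why characterizing the type-B terms carefully matters so much.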

XRPD SRM-certified quantity values are used primarily for calibration of XRPD measurement systems. The calibration data collected on test instruments also contain the two types of errors: random and systematic. It is the systematic measurement errors, or so-called instrument bias, that can be corrected with a calibration. Calibration is a multi-step process. First, certified quantity values are related to test instrument data. This is done by computing, from these values, what would constitute an `ideal' data set from the `measurement method' to be calibrated. The `method' in this case would include the test instrument, its configuration settings and the data-analysis method to be used in subsequent measurements. Then a data set from the SRM is collected and analysed under the conditions of the method. Lastly, a calibration curve is generated by comparing the `ideal' data set to the measured one. This would establish a correction to the instrument data and yield a calibrated measurement result. For XRPD, this correction has classically taken the form of a calibration function shifting the apparent 2θ indications. There is also the possibility that comparing the `ideal' instrument response with the observed one indicates a mechanical, optical or electrical malfunction of the instrument. This, of course, requires further investigation and repair, rather than simply applying a calibration curve.
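The classical calibration just described can be sketched numerically (Python with NumPy). The peak positions below are synthetic, not certified values; the example builds a polynomial correction from the difference between `observed' and `ideal' positions:

```python
import numpy as np

def calibration_curve(two_theta_obs, two_theta_ideal, degree=2):
    """Classical calibration: fit a low-order polynomial to the difference
    between observed peak positions and the 'ideal' positions computed from
    certified lattice parameters. The returned function gives the correction
    to subtract from subsequent 2-theta readings."""
    obs = np.asarray(two_theta_obs, dtype=float)
    delta = obs - np.asarray(two_theta_ideal, dtype=float)
    return np.poly1d(np.polyfit(obs, delta, degree))

# Hypothetical peak positions (degrees 2-theta) with a synthetic
# zero-offset plus scale error imposed on the 'ideal' values:
ideal = np.array([21.36, 30.38, 37.44, 43.51, 48.96])
obs = ideal + 0.01 + 0.0002 * ideal
correction = calibration_curve(obs, ideal)
corrected = obs - correction(obs)
print(np.allclose(corrected, ideal, atol=1e-4))  # -> True
```

A real calibration would of course use certified SRM line positions and many more reflections, and an anomalously shaped curve would signal the instrument malfunction mentioned above rather than a correctable bias.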

The generation of a calibration curve as just described can be thought of as a `classical' calibration, and is applicable when the data-analysis procedure(s) use empirical methods to parameterize the observations. More recent, advanced methods such as the FPA use model functions that relate the form of the data directly to the characteristics of the diffraction experiment. The parameters of the model describing the experiment are refined in a least-squares context in order to minimize the difference between a computed pattern and the observed one. With the use of methods that use model functions, the calibration takes on a different form, as the collection and analysis of data can be thought of as replacing the aforementioned multi-step process. The calibration is completed by comparing the results of the refinement with certified quantity values from an appropriately chosen SRM and the known physical-parameter values that describe the optical configuration of the test instrument.

Random measurement error, describing the variation of data for a large set of measurements, can be estimated by repeating measurements over an extended period and computing the variance in the data. Furthermore, over time, one could recalibrate the system and look at the variance of the systematic bias for a given instrument, i.e. the rate of drift in the instrument. One would also have to investigate the sensitivity of both the random error and the variance in the systematic bias to environmental variables such as ambient temperature, power fluctuations etc. This systematic error variance, combined with the previously determined random-error variance and the certified value and its uncertainty, provides an instrumental measurement uncertainty that can be applied to all measurements from a given instrument. Such an in-field study, however, would take years to complete. Instead, the instrumental measurement uncertainties for a given commercial XRPD measurement system are typically provided by the manufacturer, with the stated caveat that periodic calibrations should be performed via factory specifications. The instrumental measurement uncertainties determined through such a study are invariably much larger than those of the NIST-certified quantity values, as they contain both the instrument measurement errors (systematic and random) combined with certified quantity value uncertainties.

NIST maintains a suite of more than a dozen SRMs for powder diffraction. However, one often encounters discussions of non-institutionally-certified standards such as `NAC' (Na2Ca3Al2F14), annealed yttrium oxide and silver behenate. Our discussions here principally concern SRMs 640e (silicon), 660b (NIST, 2010[link]) (lanthanum hexaboride), 1976b (2012) (a sintered alumina disc) and 676a (NIST, 2008[link]) (alumina). SRM 660b has since been renewed as SRM 660c (2014). Most of the work presented here was performed using SRM 660b; however, SRM 660c could be used in any of these applications with identical results. SRMs certified to address the calibration of line position, such as SRMs 640e, 660c and 1976b, are certified in an SI-traceable manner with respect to lattice parameter. SRM 1976b is also certified with respect to 14 relative intensity values throughout the full 2θ range accessible with Cu Kα radiation. As such, it is used to verify the correct operation of a diffractometer with respect to diffraction intensity as a function of 2θ angle, i.e. instrument sensitivity (Jenkins, 1992[link]) or instrument response. SRM 676a is a quantitative-analysis SRM certified with respect to phase purity (Cline et al., 2011[link]). While SRM 676a is certified for use as a quantitative-analysis SRM, it is also certified with respect to lattice parameters.

Starting with the certification of SRM 640c in 2000, the 640x SRMs have been prepared in a way that minimizes sample-induced line broadening. These powders consist of single-crystal particles that were annealed after comminution in accordance with the method described by van Berkum et al. (1995[link]). Their crystallite-size distributions (as determined by laser scattering) have a maximum probable size of approximately 4 µm, with 10% of the population above 8 µm and 10% below 2.5 µm (with trace quantities below 1 µm). With Cu Kα radiation, silicon has a relatively low linear attenuation of 148 cm−1. SRMs 660x consist of lanthanum hexaboride, which was prepared to display a minimal level of both size and microstrain broadening. Since the release of SRM 660a, high-resolution diffraction using synchrotron radiation has been required to detect the residual microstructural broadening. However, the use of lanthanum hexaboride by the neutron-diffraction community is problematic, as the 10B isotope, which constitutes approximately 20% of natural boron, has an extremely high neutron absorption cross section. Lanthanum hexaboride made from natural boron is therefore essentially opaque to neutrons, rendering it unsuitable for neutron experiments. This problem was addressed with SRMs 660b and 660c by means of a dedicated processing run using a boron carbide precursor enriched with the 11B isotope to a nominal 99% concentration. As such, SRMs 660b and 660c are suitable for neutron experiments; they display a minuscule reduction in microstrain broadening relative to 660a. SRMs 660b and 660c were prepared at the same time using identical procedures and equipment, but in different lots. Lanthanum hexaboride has a relatively high linear attenuation of 1125 cm−1 with Cu Kα radiation.
This linear attenuation virtually eliminates the contribution of specimen transparency to the observed data; as such it offers a more accurate assessment of the IPF for a machine of Bragg–Brentano geometry than is available from other SRMs in the suite. The powders of the SRM 660x series consist of aggregates, with the crystallite size being approximately 1 µm and the aggregate size distribution being centred at approximately 8 µm for SRM 660a and 10 µm for 660b and 660c. SRM 676a consists of a fine-grained, equi-axial, high-phase-purity α-alumina powder that does not display the effects of preferred orientation. It consists of approximately 1.5 µm-diameter aggregates with a broad crystallite-size distribution centred at 75 nm. Therefore, the diffraction lines from SRM 676a display a considerable degree of Lorentzian size broadening, with a 1/cos θ dependence.

SRM 1976b consists of a sintered alumina disc; this format eliminates the variable of sample-loading procedure from the diffraction data collected from this SRM. The alumina powder precursor for SRMs 1976, 1976a and 1976b consists of a `tabular' alumina that has been calcined to a high temperature, approximately 1773 K. This calcination results in a phase-pure α-alumina powder with a plate-like crystal morphology, approximately 10 µm in diameter by 2 to 3 µm in thickness, leading to the texture displayed by these SRMs. The feedstock for SRMs 1976, 1976a and 1976b was manufactured with a common processing procedure: the compacts were liquid-phase sintered using a 3 to 5% anorthite glass matrix; hot forging was used to achieve a compact of approximately 97% of theoretical density. A unique outcome of the hot-forging operation used to manufacture these pieces was the axi-symmetric texture imparted to the microstructure. This axi-symmetric nature permits mounting of the sample in any orientation about the surface normal. Furthermore, as the sintered compacts cool, the viscosity of anorthite steadily increases until it solidifies at approximately 1073 K. This permits intergranular movement during cooling down to that temperature and reduces the level of microstrain that would otherwise build between the grains due to the anisotropic thermal expansion behaviour of alumina. However, despite this relaxation mechanism, SRM 1976x still displays a discernible level of Gaussian microstrain broadening. SRMs 1976a and 1976b were manufactured in a single custom production run, and display a much more uniform level of texture than SRM 1976. This fact is reflected in the considerably smaller uncertainty bounds on the certified relative intensity values of SRMs 1976a and 1976b compared to the original SRM 1976.

Mounting of powder specimens for analysis using Bragg–Brentano geometry is a non-trivial process that typically requires 20 to 30 min. The objective is to achieve a maximum in packing density of the powder with a smooth, flat surface. A 5 µm displacement error in the position of the sample surface will have a noticeable impact on the data collected. Side-drifted mounts allow for realization of a flat surface with relative ease, though maximizing the density of the compact can be challenging. Top-mounted specimens can be compacted using a glass plate or bar that allows the operator to see the sample surface through the glass and, in real time, determine the success or failure in obtaining the desired outcome. Some powders, such as that of SRM 640e, `flow' in the mount with the oscillation of the glass plate across the sample surface. Others, such as SRM 676a, do not flow at all, but can be `chopped' into the holder and compacted with a single compression. Several attempts may be necessary to realize a high-quality mount. A low-wetting-angle, low-viscosity silicone-based liquid resin, such as those marketed as vacuum-leak sealants for high-vacuum operations, can be used to infiltrate the compact once it is mounted; this results in a stable sample that will survive some degree of rough handling.

The diffractometer discussed in this work is a NIST-built instrument with a conventional optical layout, although it has several features that are atypical of equipment of this nature. It was designed and built to produce measurement data of the highest quality. This outcome is not only consistent with the certification of SRMs, but is also requisite to critical evaluation of modern data-analysis methods (another goal of this work), as discussed below. The essence of the instrument is a superior goniometer assembly that is both stiff and accurate in angle measurement, in conjunction with standard but thoroughly evaluated optics. The tube shield and incident-beam optics are mounted on a removable platform that is located via conical pins that constitute a semi-kinematic mount. This feature allows rapid interchange between various optical geometries. Fig. 3.1.23[link] shows the instrument set up in conventional geometry with a post-monochromator and point detector, while Fig. 3.1.24[link] shows the setup with a Johansson IBM and a PSD. Data from these two configurations are discussed below.

Figure 3.1.23. The X-ray powder diffractometer designed and fabricated at NIST, in conventional divergent-beam format.

Figure 3.1.24. The NIST-built powder diffractometer configured with the Johansson incident-beam monochromator and a position-sensitive detector.

The goniometer assembly, which is of θ–2θ geometry, uses a pair of Huber 420 rotation stages mounted concentrically with the rotation axes horizontal. The stage that provides the θ motion faces forward while the 2θ stage faces rearward; they are both mounted on a common aluminium monolith, visible in Figs. 3.1.23[link] and 3.1.24[link], which forms the basis of the chassis for the instrument. Both stages incorporate Heidenhain 800 series optical encoders mounted so as to measure the angle of the ring gear. With 4096-fold interpolation provided by IK220 electronics, an angle measurement to within ±0.00028° (1 arc second) was realized for both axes. The stages are driven by five-phase stepper motors that incorporate gear reducers of 10:1 for the θ stage and 5:1 for the 2θ stage, yielding step sizes of 0.0002° and 0.0004°, respectively. The manufacturer's specifications for the Huber 420 rotation stage claim an eccentricity of less than 3 µm and a wobble of less than 0.0008° (3 arc seconds). The construction of the goniometer assembly necessitated the development of a specialized jig to align the two 420 rotation stages with regard to both the concentricity (eccentricity) and parallelism (wobble) of their rotation axes. The result was that the overall eccentricity and wobble of the assembly met the specifications cited for the individual stages. The flexing of the detector arm, attached to the rearward-facing 2θ stage, was minimized by fabricating a honeycombed aluminium structure, 7.6 cm deep, which maximized stiffness while minimizing weight. Furthermore, the entire detector-arm assembly, including the various detectors, was balanced on three axes to minimize off-axis stress on the 2θ rotation stage (Black et al., 2011[link]). Thus, the goniometer assembly is exceedingly stiff and offers high-accuracy measurement and control of both the θ and 2θ angles.

The optics, graphite post-monochromator, sample spinner, X-ray generator and tube shield of the machine were originally components of a Siemens D5000 diffractometer, circa 1992. As previously discussed, the parts for the IBM configuration were obtained primarily from a Siemens D500, circa 1987. Both configurations include a variable-divergence incident-beam slit from a D5000. The PSD used in this work was a Bruker LynxEye XE. The cable attached to the sample spinner (as seen in Figs. 3.1.23[link] and 3.1.24[link]) is a flexible drive for the spinner itself; the remote location of the drive motor (not shown) isolates the sample and machinery from the thermal influence of the motor. The machine was positioned on an optical table within a temperature-controlled (±0.1 K) space. The temperature of the water used for cooling the X-ray tube and generator was regulated to within ±0.01 K. Operation of the machine was provided through control software written in LabVIEW. Data were recorded in true xy format using the angular measurement data from the optical encoders.

In conventional configuration, the 2.2 kW copper tube of long fine-focus geometry was operated at a power of 1.8 kW. This tube gives a source size of nominally 12 × 0.04 mm, while the goniometer radius is 217.5 mm. The variable-divergence slit was set to ∼0.9° for the collection of the data discussed here. This results in a beam width, or footprint at the lowest θ angle, on the sample of about 20 mm, conservatively smaller than the sample size of 25 mm. A Soller slit with a divergence of 4.4° defined the axial divergence of the incident beam. A 2 mm anti-scatter slit was placed approximately 113 mm in front of the 0.2 mm (0.05°) receiving slit. The total path length of the scattered radiation (the goniometer radius plus the traverse through the post-monochromator) was approximately 330 mm. This setup reflects what is thought to be a medium-resolution diffractometer that would be suitable for a fairly broad range of applications and is therefore a reasonable starting point for a study of instrument calibration. With the IBM, the 1.5 kW copper tube of fine-focus geometry was operated at a power of 1.2 kW. This tube had a source size of nominally 8 × 0.04 mm. The variable-divergence incident slit was also set to 0.9° with a 0.2 mm (0.05°) receiving slit. The receiving optics were fitted with a 4.4° Soller slit. The total beam-path length was about 480 mm.
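The quoted beam footprint can be checked with the usual small-angle estimate, footprint ≈ Rα/sin θ, where α is the divergence angle and R the goniometer radius. This formula is a textbook approximation rather than anything given in this chapter; the default radius is the 217.5 mm quoted above:

```python
import math

def footprint_mm(divergence_deg, theta_deg, radius_mm=217.5):
    # Small-angle approximation of the illuminated length on a flat
    # sample: footprint ~ R * alpha / sin(theta).  The default radius
    # is the NIST goniometer radius quoted in the text; the formula
    # itself is a standard textbook estimate.
    alpha = math.radians(divergence_deg)
    return radius_mm * alpha / math.sin(math.radians(theta_deg))

# At the lowest angle scanned for SRM 660b (2-theta = 20.3 deg),
# a 0.9 deg divergence slit illuminates roughly 19-20 mm:
length = footprint_mm(0.9, 20.3 / 2)
```

This is consistent with the statement that the ∼0.9° slit gives a footprint of about 20 mm at the lowest θ angle, conservatively inside the 25 mm sample.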

With the scintillation detector, data were collected using two methods, both of which encompassed the full 2θ range available with these instruments and for which the SRMs show Bragg reflections. The first involved data collection in peak regions only, as illustrated in Table 3.1.2[link] for SRM 660b. The run-time parameters listed in Table 3.1.2[link] reflect the fact that the data-collection efficiency can be optimized by collecting data in several regions, as both the intensity and breadth vary systematically with respect to 2θ. This was the manner in which data were collected for the certification measurements of SRMs 660c, 640e and 1976b. The second involved a simple continuous scan of fixed step width and count time. It is generally accepted that a step width should be chosen so as to collect a minimum of five data points above the full-width-at-half-maximum (FWHM) to obtain data of sufficient quality for a Rietveld analysis (Rietveld, 1967[link], 1969[link]; McCusker et al., 1999[link]). This does not, however, constitute any sort of threshold; collecting data with a finer step width can, with proper data analysis, result in a superior characterization of the IPF. However, one must consider the angular range of acceptance of the receiving slit that is chosen. For a slit of 0.05° a step width of 0.005° would add only 10% `new' information, so selecting this step width would not be worth the extra data-collection time. We did, however, collect some data sets we refer to as `ultra-high-quality' data; the step widths for these were half those shown in Table 3.1.2[link] and the count times were approximately three times higher than those in Table 3.1.2[link]. For the reported instrument and configuration, the run-time parameters of Table 3.1.2[link] result in a minimum of 8 to 10 points above the FWHM. Count times were selected to obtain a uniform number of counts for each profile.
It should be noted that it is probably not worth spending time collecting quality data from the 222 line of LaB6, as it is of low intensity and relatively close to other lines of higher intensity; however, this is not the case with the 400 line. Selection of the run-time parameters can be an iterative process; the total width of each profile scan was set to include at least 0.3° 2θ of apparent background on either side of the profile. Except for the data for SRM 676a, the continuous scans discussed were collected with a step width of 0.008° 2θ and a count time of 4 s to result in a scan time of roughly 24 h. The scans of 676a were collected with 0.01° 2θ step width and 5 s count time.
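The five-points-above-the-FWHM rule of thumb quoted above translates into a simple upper bound on step width. The helper below is ours, for illustration only:

```python
def max_step_width(fwhm_deg, points_above_fwhm=5):
    # Largest step width (degrees 2-theta) that still places at least
    # `points_above_fwhm` data points above the FWHM, per the rule of
    # thumb cited in the text.  Illustrative helper, not from the chapter.
    return fwhm_deg / points_above_fwhm

# A profile with a 0.05 deg FWHM needs steps of 0.01 deg or finer:
step = max_step_width(0.05)
```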

Table 3.1.2. Run-time parameters used for collection of the data used for certification of SRM 660b

The `overhead time' associated with the operation of the goniometer is included.

hkl    Start angle (°)    End angle (°)    Step width (°)    Count time (s)    Total peak time (min)
100    20.3               22.2             0.01              2                 6.3
110    29.1               31.4             0.01              1                 3.8
111    36.4               38.4             0.01              3                 10.0
200    42.7               44.4             0.01              5                 14.2
210    48                 50               0.008             2                 8.3
211    53.2               54.896           0.008             5                 17.7
220    62.5               64.204           0.008             11                39.0
300    66.7               68.596           0.008             4                 15.8
310    70.9               72.7             0.008             6                 22.5
311    75                 76.904           0.008             9                 35.7
222    79.3               80.804           0.008             47                147.3
320    83                 84.904           0.008             15                59.5
321    86.9               88.9             0.008             8                 33.3
400    95                 96.704           0.008             42                149.1
410    98.6               100.8            0.008             9                 41.3
330    102.7              104.9            0.008             12                55.0
331    106.9              108.9            0.01              27                90.0
420    111.1              113.1            0.01              20                66.7
421    115.3              117.6            0.01              10                38.3
332    119.9              122.1            0.01              19                69.7
422    129.6              131.796          0.012             32                97.6
500    134.9              137.396          0.012             27                93.6
510    140.5              144              0.014             7                 29.2
511    147.5              150.908          0.016             15                53.2

Total time = 20.0 hours

The PSD used on the NIST diffractometer was a one-dimensional silicon strip detector operated in picture-taking mode for all data collection. It has an active window length of 14.4 mm that is divided into 192 strips for a resolution of 75 µm. With a goniometer radius of 217.5 mm this constitutes an active angular range of 3.80° with 0.020° per strip. Slits that would limit the angular range of the PSD window were not used; with each step the counts from all 192 channels were recorded. The PSD was stepped at 0.005° 2θ, for 25% new information per strip; however, to reduce the data-collection time a second coarse step was also included. Therefore, the data-collection algorithm includes the selection of three parameters: a fine step of 0.005°, the number of fine steps between coarse steps (4), and the size of a coarse step (typically 0.1° or 0.2° 2θ). This approach allows for the collection of high-resolution data without stepping through the entire pattern at the high-resolution setting. Data were collected with four fine steps per detector pixel and a coarse step of 0.1° 2θ. They were processed to generate xy data for subsequent analysis. The operator can select the portion of the 192 channels, centred in the detector window, to be included in the generation of the xy file. The PSD was fitted with a 1.5° Soller slit for collection of the data presented here.
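Our reading of the three-parameter fine/coarse stepping scheme can be sketched as follows. This is a simplified stand-in with hypothetical names; the actual NIST control code is written in LabVIEW:

```python
def psd_positions(start, stop, fine=0.005, n_fine=4, coarse=0.1):
    # Detector-arm 2-theta positions for the fine/coarse stepping
    # scheme described in the text: n_fine fine steps are taken at
    # each coarse position.  Sketch with hypothetical names, not the
    # actual NIST control code.  Assumes the maximum is a full scan.
    positions = []
    x = start
    while x <= stop + 1e-9:  # tolerance for floating-point accumulation
        for j in range(n_fine):
            positions.append(round(x + j * fine, 6))
        x += coarse
    return positions
```

Each arm position contributes the counts from all 192 strips; merging the per-strip angular offsets with these positions yields the oversampled xy pattern for subsequent analysis.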

3.1.5. Data-analysis methods


Data-analysis procedures can range from the entirely non-physical, using arbitrary analytical functions that have been observed to yield reasonable fits to the observation, to those that exclusively use model functions, derived to specifically represent the effect of some physical aspect of the experiment. The non-physical methods serve to parameterize the performance of the instrument in a descriptive manner. The origins of two of the most common measures of instrument performance are illustrated in Fig. 3.1.25[link]. The first is the difference between the apparent position, in 2θ, of the profile maximum and the position of the Bragg reflection computed from the certified lattice parameter. These data are plotted versus 2θ to yield a Δ(2θ) curve; a typical example is shown in Fig. 3.1.26[link]. An illustration of the half-width-at-half-maximum (HWHM), which is defined as the width of either the right or left half of the profile at one half the value of maximum intensity after background subtraction, is also shown in Fig. 3.1.25[link]. These values can be summed to yield the FWHM, and plotted versus 2θ to yield an indication of the profile breadth as it varies with 2θ (Fig. 3.1.27[link]). In addition, the left and right HWHM values of Fig. 3.1.28[link] gauge the variation of profile asymmetry with 2θ; additional parameters of interest, such as the degree of Lorentzian and Gaussian contribution to profile shape, can be plotted versus 2θ to describe the instrument and evaluate its performance.

Figure 3.1.25. Diagrammatic representation of a powder-diffraction line profile, illustrating the metrics Δ(2θ) and half-width-at-half-maximum (HWHM). The full-width-at-half-maximum (FWHM) = left HWHM + right HWHM.

Figure 3.1.26. Δ(2θ) curve using SRM 660b illustrating the peak-position shifts as a function of 2θ. The peak positions were determined via a second-derivative algorithm, and Δ(2θ) values (SRM − test) were fitted with a third-order polynomial. Simulated data are from FPAPC and were analysed via the second-derivative algorithm and polynomial fits as per the experimental data.

Figure 3.1.27. Simulated and actual FWHM data from SRM 660b using the two Voigt PSFs with (`with Caglioti') and without constraints.

Figure 3.1.28. Left and right HWHM data from SRM 660b using the split pseudo-Voigt PSF fitted with uniform weighting.

The least computationally intensive methods for the analysis of XRPD data, which have been available since the onset of automated powder diffraction, are based on first- or second-derivative algorithms. These methods report peak positions as the 2θ value at which a local maximum in diffraction intensity is detected in the raw data. Typical software provides `tuning' parameters so that the operation of these algorithms can be optimized for the noise level, step width and profile width of the raw data. These methods are highly mature and offer a quick and reliable means of analysing data in a manner suitable for qualitative analysis and lattice-parameter refinement. However, they only give information about the position of the top of the peak. Calibration of the diffractometer via this method is useful only for subsequent analyses that also use such peak-location methods.
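As a minimal stand-in for the derivative-based peak-location algorithms described above, a parabola fitted through the three points around the discrete maximum yields a sub-step estimate of the peak position. Real implementations add smoothing and the tuning parameters mentioned in the text; this sketch assumes a clean, interior maximum:

```python
import numpy as np

def peak_position(two_theta, intensity):
    # Estimate the 2-theta of maximum intensity by fitting a parabola
    # through the three points around the discrete maximum.  Minimal
    # stand-in for second-derivative peak location; assumes a uniform
    # step and a maximum away from the scan edges.
    i = int(np.argmax(intensity))
    y0, y1, y2 = intensity[i - 1], intensity[i], intensity[i + 1]
    step = two_theta[1] - two_theta[0]
    # Vertex of the parabola through the three top points:
    shift = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return two_theta[i] + shift * step
```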

Profile fitting with an analytical profile-shape function offers the potential for greater accuracy, because the entire profile is used in the analysis. As with the derivative-based methods, profile fitting also reports the observed 2θ position of maximum intensity, in addition to parameters describing profile shape and breadth. The discussion of the IPF in Section 3.1.1[link], as well as a quick look at Figs. 3.1.26[link]–3.1.28[link], shows the complexity in the line profile shape from a Bragg–Brentano instrument. The profiles are symmetric only in a limited region of 2θ; in other regions, the degree and direction of profile asymmetry also vary as a function of 2θ. To a first approximation, the optics of an instrument contribute to the Gaussian nature of the profiles; this Gaussian nature will be constant with respect to 2θ. The Lorentzian contribution is primarily from the emission spectrum; given the dominance of angular-dispersion effects at high angle, one can expect to see an increase in the Lorentzian character of the profiles with increasing 2θ. While it can be argued that it is physically valid to model specific contributions to the IPF with Gaussian and Lorentzian PSFs, neither of these two analytical functions alone can be expected to fit the complexities of the IPF and yield useful results. Combinations of these two functions, however, using shape parameters that vary as a function of 2θ, have given credible results for fitting of data from the Bragg–Brentano diffractometer and have been widely incorporated into Rietveld structure-refinement software. The Voigt function is a convolution of a Gaussian with a Lorentzian, while the pseudo-Voigt is the sum of the two. The parameters that are refined consist of an FWHM and a shape parameter that indicates the ratio of the Gaussian to Lorentzian character.
The Voigt, being a true convolution, is the more desirable PSF as it is more physically realistic; the pseudo-Voigt tends to be favoured as it is less computationally intensive and the differences between the two PSFs have been demonstrated to be minimal (Hastings et al., 1984[link]), although there is not universal agreement about this.
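The pseudo-Voigt described above, the weighted sum of a Gaussian and a Lorentzian of equal FWHM, can be written down directly. The unit peak height and the parameter names are our choices:

```python
import numpy as np

def pseudo_voigt(x, x0, fwhm, eta):
    # Pseudo-Voigt: eta * Lorentzian + (1 - eta) * Gaussian, both with
    # the same FWHM, normalized to unit peak height.  The sum form
    # described in the text; parameter names are ours.
    hwhm = 0.5 * fwhm
    lor = 1.0 / (1.0 + ((x - x0) / hwhm) ** 2)
    gau = np.exp(-np.log(2.0) * ((x - x0) / hwhm) ** 2)
    return eta * lor + (1.0 - eta) * gau
```

By construction the profile passes through half its peak height at x0 ± FWHM/2 for any mixing parameter η.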

Refining the profile shapes independently invariably leads to errors when analysing patterns with peak overlap, as correlations occur between shape parameters of neighbouring profiles. This problem can be addressed by constraining the shape parameters to follow some functional form with respect to 2θ. Caglioti et al. (1958[link]) developed such a function specifically for constant-wavelength neutron powder diffractometers; it has been incorporated in many Rietveld codes for use with XRPD data. It constrains the FWHM of the Gaussian contribution to the Voigt or pseudo-Voigt PSF:[{\rm FWHM}^2 = U\tan^2\theta + V\tan\theta + W,\eqno(3.1.1)]where the refineable parameters are U, V and W. The term U can be seen to correspond with microstrain broadening from the sample, and broadening due to the angular-dispersion component of the IPF. In GSAS an additional term, GP, in 1/cos θ, is included to account for Gaussian size broadening. The Lorentzian FWHM in GSAS can vary as[{\rm FWHM} = {{LX} \over {\cos\theta }} + {LY}\tan\theta,\eqno(3.1.2)]where LX and LY are the refineable parameters. Here LX varies with size broadening while LY is the Lorentzian microstrain and angular-dispersion term. Given that the emission spectrum is described with Lorentzian profiles, we would expect the LY term to model the effects of angular dispersion. Within the code HighScore Plus, the Lorentzian contribution is allowed to vary as[{\rm FWHM} = \gamma_1+ \gamma_2 (2\theta) + \gamma_3 (2\theta)^2,\eqno(3.1.3)]where γ1, γ2, and γ3 are the refineable parameters. Alternatives to the Caglioti function have been proposed that are arguably more appropriate for describing the FWHM data from a Bragg–Brentano instrument (Louër & Langford, 1988[link]; Cheary & Cline, 1995[link]). However, they have not yet been incorporated into many computer codes.
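Equations (3.1.1) and (3.1.2) are straightforward to evaluate as functions of 2θ; the helper names below are ours and any parameter values used are purely illustrative:

```python
import math

def gaussian_fwhm(two_theta_deg, U, V, W):
    # Caglioti relation, equation (3.1.1):
    # FWHM^2 = U tan^2(theta) + V tan(theta) + W, with theta = 2theta/2.
    t = math.tan(math.radians(two_theta_deg / 2.0))
    return math.sqrt(U * t * t + V * t + W)

def lorentzian_fwhm(two_theta_deg, LX, LY):
    # GSAS-style Lorentzian width, equation (3.1.2):
    # FWHM = LX / cos(theta) + LY tan(theta).
    th = math.radians(two_theta_deg / 2.0)
    return LX / math.cos(th) + LY * math.tan(th)
```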

The asymmetry in the observed profiles can be fitted with the use of a split profile, where the two sides of the PSF are refined with independent shape and HWHM parameters. This approach will improve the quality of the fit to the observations; however, it is empirical in nature. The more physically valid approach is the use of models to account for the origins of profile asymmetry. The Finger et al. (1994[link]) model for axial divergence has been widely implemented in various Rietveld codes. It is formulated to model the axial-divergence effects of a synchrotron powder diffraction experiment where the incident beam is essentially parallel. The two refineable parameters, S/L and H/L, refer to the ratios of sample and receiving-slit length, relative to the goniometer radius; they define the level of axial divergence in the diffracted beam. This model is not in precise correspondence with the optics of a Bragg–Brentano diffractometer where both the incident and diffracted beams exhibit divergence in the axial direction. It does, however, give quality fits to such data. The use of such a model, as opposed to the sole use of a symmetric or split PSF, will yield peak positions and/or lattice parameters that are `corrected' for the effects of the aberration in question. Therefore, results from the use of model(s) cannot be directly compared with empirical methods that simply characterize the form of the observation. In the case of the Bragg–Brentano experiment, the correction that the Finger model applies is not rigorously correct. However, the impact of axial divergence, regardless of the details of diffractometer optics, is universal; as such the use of the Finger model results in a more accurate assessment of `true' peak position and, therefore, lattice parameters.

A third PSF that is in common use is the Pearson VII, or split Pearson VII, that was proposed by Hall et al. (1977[link]) for fitting X-ray line profiles. No a priori physical justification exists for the use of this PSF. The refineable parameters are the FWHM, or HWHM, and an exponent, m. The exponent can range from 1, approximating a Lorentzian PSF, to infinity, where the function tends to a Gaussian. Owing to the lack of a clear physical justification for use of this PSF, it is not often used in Rietveld analysis software.
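The Pearson VII profile takes a compact closed form; the normalization to unit peak height below is our choice, and the expression is the standard one rather than a formula quoted in this chapter:

```python
def pearson_vii(x, x0, fwhm, m):
    # Pearson VII profile with unit peak height:
    # [1 + 4(2^(1/m) - 1)((x - x0)/FWHM)^2]^(-m).
    # m = 1 gives a Lorentzian; m -> infinity tends to a Gaussian.
    u = (x - x0) / fwhm
    return (1.0 + 4.0 * (2.0 ** (1.0 / m) - 1.0) * u * u) ** (-m)
```

The scaling factor 2^(1/m) − 1 ensures that the profile falls to half its height at x0 ± FWHM/2 for every exponent m.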

Convolution-based profile fitting, as shown in Fig. 3.1.4[link], was proposed by Klug and Alexander in 1954 (see Klug & Alexander, 1974[link]) and much of the formalism of the aberration functions shown in Table 3.1.1[link] was developed by Wilson (1963[link]). However, limitations in computing capability largely prevented the realization of the full fundamental-parameters approach method until 1992, with the work of Cheary & Coelho[link]. This was made available to the community through the public-domain programs Xfit, and later KoalaRiet (Cheary & Coelho, 1996[link]) and more recently via TOPAS. Other FPA programs are available, most notably BGMN (Bergman et al., 1998[link]); more recently, PDXL 2 has had some FPA models incorporated. Within the FPA there are no PSFs other than the Lorentzians used to describe the emission spectrum, the shapes of which are not typically refined. All other aspects of the observation are characterized with the use of model functions that yield parameters descriptive of the experiment. Plausibility of the analysis is determined through evaluation of these parameters with respect to known or expected values. Direct comparison of the results from an FPA to those from methods using analytical PSFs is difficult because of the fundamental difference in the output from the techniques; for example, FWHM values are not obtained directly from the FPA method. However, the NIST program FPAPC can be used to determine FWHM values numerically.

The FPA models of TOPAS, BGMN and PDXL 2 were developed specifically for the analysis of data from a laboratory diffractometer of Bragg–Brentano geometry. Analyses using this method would be expected to result in the lowest possible residual error terms that characterize the difference between calculation and observation. As has been discussed, the various aberrations affecting the diffraction line shape are such that the observed profile maxima do not necessarily correspond to the d-spacing of the diffracting plane (hkl), except perhaps in a limited region of 2θ, emphasizing the need for physically valid modelling of the observed line shape to realize a credible value for the lattice parameter. At NIST, we are particularly interested in the capabilities of the FPA method, as one of the primary interests of the NIST X-ray metrology program is obtaining the correct values for lattice parameters. Furthermore, experience has demonstrated that the refined parameters obtained through the use of FPA models can be used in a `feedback loop' to isolate problems and anomalies with the equipment.

The instrument response, i.e. the diffracted intensity as a function of 2θ, is measured by Rietveld analysis using models for intensity-sensitive parameters such as crystal-structure parameters and Lorentz–polarization factors. The extraction of plausible crystal-structure parameters from standards via a Rietveld analysis serves as an effective and independently verifiable means of calibrating instrument performance. Considering these refined values provides an effective way to detect defects that vary smoothly over the full range of 2θ. However, errors that are only observable within limited regions of 2θ may be difficult to detect with a whole-pattern method; these should be investigated with second-derivative or profile-fitting methods. SRM 676a (alumina) is well suited to assessing instrument response because it is non-orienting and of high purity. Alumina is of lower symmetry than either silicon or lanthanum hexaboride; it has a considerable number of diffraction lines and has well established structure parameters. A Rietveld analysis of SRM 660c, however, yields the IPF in terms of code-specific profile shape terms and verifies that peak-position-specific aspects of the equipment and analysis are working correctly.

The instrument response may be evaluated with the more conventional data-analysis methods with use of SRM 1976b. Measurements of peak intensities are obtained from the test instrument, typically by profile fitting, and compared with the certified values. However, the use of SRM 1976b with diffraction equipment with different optical configurations may require the application of a bias to the certified values to render them appropriate for the machine to be qualified. This bias is needed to account for differences in the polarization effects from the presence, absence and character of crystal monochromators. The polarization factor for a diffractometer that is not equipped with a monochromator is (Guinier, 1994[link])[{1 + \cos^2 2\theta \over 2}.\eqno(3.1.4)]The polarization factor for a diffractometer equipped with only an incident-beam monochromator is (Azároff, 1955[link])[{{1 + \cos^2 2\theta_m\cos^2 2\theta} \over {1 + \cos^2 2\theta_m}},\eqno(3.1.5)]where 2θm is the 2θ angle of diffraction for the monochromator crystal. The polarization factor for a diffractometer equipped with only a diffracted-beam post-monochromator is (Yao & Jinno, 1982[link])[{{1 + \cos^2 2\theta_m \cos^2 2\theta} \over 2},\eqno(3.1.6)]where 2θm is the 2θ angle of the monochromator crystal. Equations (3.1.5)[link] and (3.1.6)[link] are appropriate when the crystal has an ideal mosaic structure, i.e. the diffracting domains are uniformly small and, therefore, the crystal is diffracting in the kinematic limit. This is in contrast to a `perfect' crystal, which would diffract in accordance with dynamical scattering theory. Note that equations (3.1.5)[link] and (3.1.6)[link] both have the cos2 2θm multiplier operating on the cos2 2θ term. Since this multiplier is less than unity, the intensity change on machines equipped with a monochromator exhibits a weaker angular dependence.
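Equations (3.1.4)–(3.1.6) can be evaluated directly to estimate the intensity bias between optical configurations. The default monochromator angles below are the Ge 111 and graphite 0002 settings quoted in the text; the `intensity_bias` helper is hypothetical:

```python
import math

def _c2(angle_deg):
    # cos^2 of an angle given in degrees.
    return math.cos(math.radians(angle_deg)) ** 2

def pol_none(two_theta):
    # Equation (3.1.4): no monochromator.
    return (1.0 + _c2(two_theta)) / 2.0

def pol_ibm(two_theta, two_theta_m=27.3):
    # Equation (3.1.5): incident-beam monochromator (default 2theta_m
    # is the Ge 111 setting quoted in the text).
    cm = _c2(two_theta_m)
    return (1.0 + cm * _c2(two_theta)) / (1.0 + cm)

def pol_post(two_theta, two_theta_m=26.6):
    # Equation (3.1.6): graphite diffracted-beam post-monochromator.
    return (1.0 + _c2(two_theta_m) * _c2(two_theta)) / 2.0

def intensity_bias(two_theta, pol_test, pol_cert=pol_ibm):
    # Hypothetical helper: ratio used to rescale an intensity measured
    # under the certification polarization to a test configuration.
    return pol_test(two_theta) / pol_cert(two_theta)
```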

The certification data for SRM 1976b were collected with the NIST machine equipped with the Johansson IBM and a scintillation detector. The simplified IPF of this machine is advantageous for the accurate fitting of the profiles and, therefore, intensity measurement. The validity of the `ideal mosaic' assumption embodied in equation (3.1.5)[link] was evaluated using this diffractometer; the validity of equation (3.1.6)[link] was evaluated with data from the machine configured with the post-monochromator. With respect to equation (3.1.5)[link], for a Ge crystal (111) reflection, 2θm was set to 27.3°; with regard to equation (3.1.6)[link], for a pyrolytic graphite crystal (0002) basal-plane reflection, 2θm was set to 26.6°. Rietveld analyses of data from SRMs 660b, 1976b and 676a included a refinement of the polarization factor, modelled according to equations (3.1.5)[link] and (3.1.6)[link] in TOPAS, and yielded fits of high quality, indicating that these models were appropriate for these crystals and configurations. Equations (3.1.4)[link], (3.1.5)[link] and (3.1.6)[link] were used to bias the certified values to correspond to those of alternative configurations. These values are included in the SRM 1976b CoA as ancillary data.

3.1.6. Instrument calibration

The calibration procedure has traditionally involved the comparison of measurements from a reference (an SRM) with those of the test instrument. However, the exact form of this comparison depends upon the data-analysis procedure to be used. A classical calibration, permitting qualitative analyses and lattice-parameter refinement, can be readily performed as per Fig. 3.1.26[link]. These data are fitted with a polynomial that describes the 2θ error correction that is then applied to subsequent unknown samples. Furthermore, with this calibration method, the actual form of the curve of Fig. 3.1.26[link] is largely irrelevant. As the data-analysis methods become more advanced, physical models are chosen to replace analytical PSFs. The calibration is then based upon the observation that the machine performance does indeed correspond to the models used, and that acceptable values for refined parameters describing the experiment are obtained from an analysis of data from an SRM. A systematic approach to instrument calibration with a full evaluation of the data, including those obtained from the empirical methods shown in Figs. 3.1.26[link] and 3.1.27[link], results in the ability to use the advanced methods in a rational manner and obtain results in which one can have confidence. The advanced methods, while more complex to use and requiring a much more extensive instrument calibration process, reward the user with a sample characterization of greater breadth and reduced measurement uncertainty.

Consider the Δ(2θ) curve illustrated in Fig. 3.1.26[link]. The y-axis values are the differences between the peak positions computed from the certified lattice parameter of SRM 660b and those of each observed profile determined via a second-derivative-based peak-location algorithm. Therefore, each of the Δ(2θ) data points plotted on Fig. 3.1.26[link] was determined independently. It is immediately apparent that the data follow a smooth, monotonic curve with no substantive outliers. Discontinuities or non-monotonicity would typically indicate mechanical difficulties with the equipment, such as loose components or problems with the goniometer assembly. Evaluation of independently determined data such as these is critical to verifying that there are no `high-frequency' difficulties with the equipment that would otherwise be hidden or smoothed out with the use of methods that apply models or constraints across the entire 2θ range, such as a Rietveld analysis. The Δ(2θ) values were fitted with a third-order polynomial that is also illustrated in Fig. 3.1.26[link]. Consideration of the deviation values between the observations and the third-order fit indicates a random or `top hat' distribution with a maximum excursion of ±0.0025° 2θ; this provides further evidence that the machine is operating properly.
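A classical calibration of this kind reduces to a low-order polynomial fit of Δ(2θ) against 2θ, which is then applied to peak positions from unknowns. A minimal sketch follows; the peak positions and Δ values are hypothetical stand-ins for the SRM 660b data, not the certified values.

```python
import numpy as np

# Hypothetical (2theta_obs, delta) pairs; delta = certified - observed
two_theta = np.array([21.36, 30.38, 37.44, 48.96, 64.22, 88.05, 107.3, 131.1])
delta = np.array([-0.008, -0.006, -0.005, -0.003, -0.001, 0.0015, 0.004, 0.008])

# Third-order polynomial fit to the Delta(2theta) calibration curve
coef = np.polyfit(two_theta, delta, deg=3)
correction = np.poly1d(coef)

# Apply the correction to peak positions measured on an unknown
unknown_peaks = np.array([25.6, 52.1, 99.8])
corrected = unknown_peaks + correction(unknown_peaks)
```

The residuals of such a fit should show the random, `top hat' distribution described above; structure in the residuals would point to mechanical problems.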

FPAPC was used to generate simulated data, which were then analysed using the same second-derivative algorithm as was applied to the raw data. The aforementioned optical setup of the NIST instrument was used in the as-configured simulation (see the caption for Fig. 3.1.26[link]), while the high-resolution and low-resolution data were simulated with a 50% increase or decrease of the incident and Soller slit angles. For the `high-resolution' and `low-resolution' data, third-order polynomial fits to the Δ(2θ) values are displayed in Fig. 3.1.26[link]; for the `as-configured' data, the Δ values themselves are indicated. The correspondence between the simulation and observation indicates that trends in the data can be readily explained in the context of the aberration functions discussed in Section 3.1.2[link] and that such a machine can generate data for successful analysis with the FPA method, i.e. the metrological loop is closed. At low 2θ the profiles are displaced to low angle by the effects of the flat specimen error and axial divergence. The Δ(2θ) curve crosses the zero point at approximately 100° 2θ where the profiles are largely symmetric; the slight asymmetry to low angle caused by the flat specimen error is somewhat offset by asymmetry of the emission spectrum at high angles. At higher 2θ the profiles are displaced to high angle by the combined effects of axial divergence and the asymmetry of the emission spectrum. As illustrated by the simulations, the experimental curve of Fig. 3.1.26[link] would become steeper at lower resolution and flatten out at higher resolution. Given the uniformity of the data and overall plausibility of this Δ(2θ) curve, the third-order polynomial fit is used as a reference against which the merits of other techniques can be judged.

It should also be noted that the data and method shown in Fig. 3.1.26[link] constitute the `low-hanging fruit' of powder diffraction. Data analogous to those of Fig. 3.1.26[link] can be used to correct peak positions of unknowns via either the internal- or external-standard method using a polynomial fit. The external-standard method, however, cannot account for specimen displacement or sample-transparency effects; these require use of the internal-standard method, which is the same procedure but applied to a standard admixed with the unknown. Either of these methods will correct for instrumental aberrations regardless of their form; the nature of the curve of Fig. 3.1.26[link] need only be continuous to permit modelling with a low-order polynomial. Studies performed in conjunction with the International Centre for Diffraction Data (ICDD) demonstrate that the use of the internal-standard method routinely yields results that are accurate to parts in 10⁴ (Edmonds et al., 1986[link]). Fawcett et al. (2004[link]) demonstrated the direct relationship between the use of standards, with the vast majority of analyses being performed via the internal- or external-standard methods, and the number of high-quality starred patterns in the ICDD database. Thus, the community's collective ability to perform the most routine of XRPD analyses, qualitative analysis, has been greatly enhanced over the past 30 or so years by these most basic methods and the use of SRMs.

The Δ(2θ) and FWHM calibration curves shown in Figs. 3.1.27[link]–3.1.31[link] were determined via profile fitting, using several PSFs, of the same raw data from SRM 660b used to generate Fig. 3.1.26[link]. In general, results from the three commercial codes were in close correspondence. When used with a split PSF, the Caglioti function was applied independently to the left and right FWHM values. A five- to seven-term Chebyshev polynomial was used for modelling the background in these refinements. The goodness of fit (GoF) (which is the square root of reduced χ2) residual error term of the refinements ranged from 1.6 to 1.9, with the unconstrained refinements yielding slightly better fits to the data. Fig. 3.1.32[link] illustrates the fit quality of typical results using the split pseudo-Voigt PSF. However, as will be demonstrated, the more plausible parameters, particularly in the context of the FWHM values, were often obtained with the more constrained refinements.

Figure 3.1.29. Comparison of Δ(2θ) curves determined with profile fitting of SRM 660b data without the use of any constraints, as a function of 2θ.

Figure 3.1.30. Δ(2θ) curves from SRM 660b determined with profile fitting using the Caglioti function and the unconstrained split pseudo-Voigt PSF with uniform weighting.

Figure 3.1.31. FWHM data from SRM 660b using various split PSFs fitted without constraints.

Figure 3.1.32. Fits of the split pseudo-Voigt PSF to the low-angle 100, mid-angle 310 and high-angle 510 lines from SRM 660b illustrating the erroneous peak position and FWHM value reported for the 100 and 510 lines, respectively.

The results from the fitting of the Voigt PSF provide a reference for consideration of the Δ(2θ) data of Fig. 3.1.29[link]. The use of any of the symmetric PSFs considered here, with or without the Caglioti constraint, resulted in curves virtually identical to the one displayed in Fig. 3.1.29[link] for the Voigt PSF. Not surprisingly, the symmetric PSF performs quite well in the mid-angle region where the profiles are symmetric but will report an erroneous position in the direction of the asymmetry, when it is present. However, the opposite effect was observed with the use of any of the split PSFs, as can be seen in Figs. 3.1.29[link] and 3.1.32[link]. When two HWHM values are refined, the larger HWHM value will shift the reported peak position in the direction of the smaller one. This effect can be readily observed in the fit quality of the low-angle 100 reflection displayed in Fig. 3.1.32[link]. The split PSFs yield results that reflect an overly asymmetric profile; thus the reported peak positions are displaced to high angle at 2θ angles below 100°, and to low angle at 2θ angles above 100°. Curiously, this effect was markedly reduced in one of the commercial computer codes (not shown) and was the sole difference observed between them when the models were equivalent. It is apparent that subtleties in implementation of an ostensibly identical PSF and minimization algorithm (the Marquardt algorithm) can result in dramatic differences in results. Careful examination of the fit quality is required to assess the reliability of profile-fitting results. The data of Fig. 3.1.29[link] indicate that errors in peak position of up to 0.015° 2θ are plausible with profile fitting of these data with these PSFs. In contrast to its use with symmetric PSFs, the Caglioti function will improve results when using split PSFs (Fig. 3.1.30[link]).
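The peak-shift behaviour of split PSFs is easier to see with an explicit functional form. Below is a minimal sketch of a split pseudo-Voigt with independent half-widths and mixing parameters on the two sides of the maximum; this is our own parameterization, and commercial codes differ in detail.

```python
import numpy as np

def pseudo_voigt(x, x0, fwhm, eta):
    """Symmetric pseudo-Voigt: eta * Lorentzian + (1 - eta) * Gaussian."""
    g = np.exp(-4.0 * np.log(2.0) * ((x - x0) / fwhm) ** 2)
    l = 1.0 / (1.0 + 4.0 * ((x - x0) / fwhm) ** 2)
    return eta * l + (1.0 - eta) * g

def split_pseudo_voigt(x, x0, fwhm_lo, fwhm_hi, eta_lo, eta_hi):
    """Split pseudo-Voigt: independent widths and shapes on each side of
    the maximum (the two refined HWHMs are fwhm_lo/2 and fwhm_hi/2)."""
    x = np.asarray(x, dtype=float)
    lo = x < x0
    y = np.empty_like(x)
    y[lo] = pseudo_voigt(x[lo], x0, fwhm_lo, eta_lo)
    y[~lo] = pseudo_voigt(x[~lo], x0, fwhm_hi, eta_hi)
    return y
```

When fwhm_lo and fwhm_hi are refined independently against an asymmetric observation, the side with the larger HWHM can pull the reported x0 towards the smaller one, as described above.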

Consideration of the issues related to profile fitting shown in Fig. 3.1.32[link] led to the conjecture that fitting the data with a uniform weighting as opposed to Poisson statistical weighting might result in more accurate determination of the peak position and FWHM parameters. (In the vast majority of circumstances this approach would never be used, because the integrated intensity is a critical metric.) This was tried, and resulted in considerable success. Fig. 3.1.30[link] displays data from the use of split pseudo-Voigt that are in very good agreement with second-derivative values.
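In least-squares terms, the choice between Poisson and uniform weighting amounts to the sigma argument supplied to the fitter. The sketch below fits a deliberately symmetric Gaussian model to a hypothetical asymmetric profile under both weightings; all numbers are illustrative, not data from the NIST instrument.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, x0, w):
    return a * np.exp(-4.0 * np.log(2.0) * ((x - x0) / w) ** 2)

# Hypothetical asymmetric profile: broad low-angle flank, sharp high-angle flank
x = np.linspace(19.0, 21.0, 401)
y = np.where(x < 20.0, gauss(x, 1e4, 20.0, 0.30), gauss(x, 1e4, 20.0, 0.15))

# Poisson (statistical) weighting: sigma = sqrt(counts)
p_stat, _ = curve_fit(gauss, x, y, p0=(1e4, 20.0, 0.2), sigma=np.sqrt(y + 1.0))

# Uniform weighting: every point counts equally (default sigma)
p_unif, _ = curve_fit(gauss, x, y, p0=(1e4, 20.0, 0.2))
```

Comparing p_stat[1] and p_unif[1] shows how the weighting scheme alone changes the reported peak position for the same data, since the weights change how strongly the tails influence the fit.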

Experimental and simulated values of the FWHM are displayed in Figs. 3.1.27[link] and 3.1.31[link]. Data from the profile refinements performed without the use of the Caglioti function, displayed in Figs. 3.1.27[link] and 3.1.31[link], yield independently determined measures of the FWHM. Again, the lack of scatter and the continuity of these FWHM values are consistent with proper operation of the instrument, i.e. an absence of `high-frequency' problems. The basic trends are also consistent with the instrument optics: at low 2θ the observed increase in FWHM is due to both the flat specimen and axial divergence aberrations, while at high 2θ angular dispersion dominates and a substantial increase in FWHM with tan θ is apparent. The FPA simulations were performed using the settings for high and low resolution. The FWHM values were determined numerically from the simulated patterns; no PSF was used. As shown with the simulated data, the degree of upturn at low 2θ increases with a decrease in instrument resolution and vice versa. Angular-dispersion effects, however, are less dependent on the instrument configuration; FWHM values tend towards convergence at high 2θ (Fig. 3.1.27[link]).
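Determining the FWHM numerically from a simulated pattern, without any PSF, can be done by interpolating the half-maximum crossings of the sampled profile. A minimal sketch, assuming a single well sampled peak:

```python
import numpy as np

def numerical_fwhm(x, y):
    """FWHM measured directly from a sampled profile by linear
    interpolation of the half-maximum crossings (no PSF assumed)."""
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    i, j = above[0], above[-1]
    # interpolate each flank to the exact half-maximum crossing
    x_left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    x_right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return x_right - x_left
```

This is the sense in which the simulated FWHM curves of Figs. 3.1.27 and 3.1.31 are model-free.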

As seen in Fig. 3.1.27[link], above 40° 2θ the Voigt and split-Voigt PSFs give similar values for the FWHM and a fairly accurate representation of instrument performance. It was observed that with regard to the correlation between FWHM values for split versus symmetric PSFs, the other PSFs behaved in an analogous manner to the Voigt (not shown): above 40° 2θ the values reported for the FWHM from split versus symmetric PSFs are nearly identical. From Fig. 3.1.31[link], the split Pearson VII PSF underestimates the FWHM throughout the mid-angle region; this error was duplicated with the use of the symmetric Pearson VII PSF (not shown). When fitted with uniform weighting, however, these FWHM data from the Pearson VII PSF fell quite precisely (not shown) on the simulated curve. Below 40° 2θ, a split PSF will provide results that overestimate true FWHM values, as shown in Figs. 3.1.27[link] and 3.1.31[link]. The cause for this is analogous to that discussed for the Δ(2θ) values, and can be readily observed in the fit quality displayed in Fig. 3.1.32[link] for the low-angle 100 reflection. In accounting for the asymmetry to low angle, the FWHM of the observed profile is substantially overestimated by the calculated one. With all PSFs, the high-angle FWHM values are observed to be overestimated, as shown in Figs. 3.1.27[link] and 3.1.31[link]; the problem is exacerbated with the use of the Caglioti function. Inspection of the fit quality of the high-angle 510 line shown in Fig. 3.1.32[link] indicates that there are two contributions to this effect: one is that the PSF cannot model the shape of the high side of the profile; the other is that the height of the profile is underestimated. These two effects, particularly the inability of the PSF to correctly model the height of the profile, were observed with all of the other PSFs considered here.

The use of the pseudo-Voigt PSF with the Caglioti function results in a reasonable fit to the FWHM values of the observation; however, the breadth of the high-angle lines is overestimated. The U, V and W terms of the Caglioti function vary in a specific manner to account for various physical effects (e.g. see Fig. 3.1.27[link]): the U term, in tan θ, accounts for angular dispersion; the W term describes the `floor' and the V term accounts for the reduction of the FWHM values in the mid-2θ region. Therefore, the U and W terms should refine to positive values, while the V term should tend to a negative value; negative values for V were, indeed, obtained in these analyses. V should be constrained to negative values or set to zero, as positive values for V are non-physical. With an instrument configured for high resolution, however, values of V = 0 are entirely reasonable as the trend towards an upturn in FWHM at low 2θ angle will be suppressed.
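The sign expectations described above can be checked directly against the Caglioti relation, FWHM² = U tan²θ + V tan θ + W. The sketch below encodes it; the U, V and W values used in the example are arbitrary illustrative numbers in squared degrees.

```python
import math

def caglioti_fwhm(two_theta_deg, U, V, W):
    """Caglioti relation: FWHM^2 = U tan^2(theta) + V tan(theta) + W,
    with theta = two_theta / 2; returns FWHM in degrees when U, V, W
    are in squared degrees."""
    t = math.tan(math.radians(two_theta_deg / 2.0))
    fwhm_sq = U * t * t + V * t + W
    if fwhm_sq <= 0.0:
        raise ValueError("non-physical U, V, W at this angle")
    return math.sqrt(fwhm_sq)

# Plausible signs: U > 0 (angular dispersion), W > 0 (instrumental floor),
# V < 0 (mid-range reduction of the FWHM)
fwhm_mid = caglioti_fwhm(90.0, U=0.004, V=-0.003, W=0.003)
```

With V < 0 the curve dips in the mid-2θ region and rises steeply at high angle through the U tan²θ term, reproducing the behaviour of Fig. 3.1.27.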

To some extent, the difficulties in determining profile positions through the use of these PSFs can be ascribed to the Cu Kα1/Kα2 doublet as it is stretched by angular dispersion. The pattern can be thought of as divided into three regions, each of which will confound fitting procedures in a different manner: the low-2θ range, where profiles can be considered as a peak with a shoulder, the mid-2θ range (perhaps 40 to 110° 2θ), where the profiles can be considered as a doublet, and the high-angle region where they are two distinct peaks. This `three-region' consideration is compounded by the direction and severity of the asymmetry in these profiles. The data shown in Fig. 3.1.27[link] largely correspond to the problematic effects of angular dispersion in the context of these three 2θ regions. These effects are particularly apparent, as shown in Fig. 3.1.31[link], with the use of the Pearson VII function: over-estimation of FWHM values occurs at low 2θ, under-estimation occurs in the mid-2θ region, and credible values are obtained at high angle. The use of the Caglioti function is effective in addressing the more extreme excursions from plausible FWHM values. Fig. 3.1.28[link] shows the left and right HWHM values for SRM 660b using the split pseudo-Voigt PSF refined with uniform weighting. For reasons discussed in Section 3.1.2[link], the degree, direction and point of crossover in the profile asymmetry indicated in Fig. 3.1.28[link] are in correspondence with expectation and the previously discussed results from these data from SRM 660b.
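The stretching of the Kα1/Kα2 doublet follows from differentiating Bragg's law: Δ(2θ) = 2 tan θ (Δλ/λ). The sketch below evaluates this for the three regions discussed; the Cu Kα wavelengths are nominal values.

```python
import math

# Nominal Cu K-alpha wavelengths (angstroms)
WL_KA1, WL_KA2 = 1.540598, 1.544426

def doublet_separation(two_theta_deg):
    """Angular separation (degrees) of the K-alpha1/K-alpha2 doublet from
    differentiating Bragg's law: d(2theta) = 2 tan(theta) dlambda/lambda."""
    theta = math.radians(two_theta_deg / 2.0)
    dl_over_l = (WL_KA2 - WL_KA1) / WL_KA1
    return math.degrees(2.0 * math.tan(theta) * dl_over_l)

# Three-region picture: shoulder at low angle, doublet at mid angle,
# two resolved peaks at high angle
for tt in (25.0, 80.0, 140.0):
    print(tt, round(doublet_separation(tt), 3))
```

At 25° 2θ the separation is of the order of 0.06°, comparable to the profile width, while at 140° it approaches 0.8° and the two components are clearly resolved.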

To consider the impact of instrument resolution on the use of analytical PSFs for the determination of FWHM values, the simulated high-resolution and low-resolution data were analysed via profile fitting. Fig. 3.1.33[link] shows the results from the use of the split Pearson VII and spilt pseudo-Voigt PSFs. The data of Fig. 3.1.33[link] indicate an effect that is dependent on the PSF used. The performance of the split Pearson VII PSF is observed to improve with instrument resolution; FWHM values from the narrower profiles are observed to correspond with expectation in the low- and mid-angle regions, while substantial deviation is noted with the broader profiles. This is counter to expectation, as broader profiles are generally easier to fit than narrow ones. The performance of the split pseudo-Voigt PSF is observed to degrade marginally with either an increase or decrease in instrument resolution. Curiously, the breadths of the profiles in the high-resolution data are overestimated, while those in low-resolution data are largely underestimated. Both PSFs do quite poorly in fitting the high-angle data from the high-resolution setting. These observations emphasize the need to scrutinize the results with an examination of the fit quality, as per Fig. 3.1.32[link].

Figure 3.1.33. FWHM data from fits of the split pseudo-Voigt and split Pearson VII PSFs to simulated low- and high-resolution data.

When the IPF is simplified with the use of a Johansson IBM, analytical PSFs can provide an excellent fit to the observations. Fig. 3.1.34[link] shows the fit quality of the split Pearson VII PSF to (high-quality) peak-scan data. The split Pearson VII PSF consistently provides a better fit to IBM data than either the split Voigt or split pseudo-Voigt PSFs. Note that the asymmetry exhibited by the profiles follows the same trends as were outlined previously, but to a much reduced extent because of the extended incident-beam path length and the resulting reduction in the effects of axial divergence. Fig. 3.1.35[link] shows the Δ(2θ) calibration curves that were obtained as per the procedures outlined for Fig. 3.1.29[link]. Indeed, the trends that are followed, and the reasons why, are largely analogous to those of Fig. 3.1.29[link], but to a much reduced extent because of the reduced profile asymmetry. Use of symmetric PSFs yields reported peak positions that are shifted in the direction of the asymmetry, while use of split PSFs yields positions shifted in the opposite direction owing to the fitted profiles displaying excessive levels of asymmetry. One notes the complete failure of the split pseudo-Voigt, split Voigt (not shown) and, to a lesser extent, the split Pearson VII PSFs at high angle. However, the more accurate peak positions are obtained from the more intense reflections, indicating that higher-quality data may improve the results. Improvements in FWHM determination with the use of an IBM are illustrated in Fig. 3.1.36[link], where it can be seen that the pseudo-Voigt and Pearson VII yield values for the FWHM that differ in a systematic manner, though to a lesser extent than with the conventional data. The virtues of the peak-scan data are illustrated by the continuity of the FWHM values of Fig. 3.1.36[link] relative to the discontinuities observed in the corresponding data from the conventional scans that were fitted with the pseudo-Voigt PSF. The results from the use of the Caglioti function in Fig. 3.1.36[link] illustrate that otherwise noisy FWHM data are effectively smoothed out, but a significant bias at high angle is indicated.

Figure 3.1.34. Fits of a split Pearson VII PSF to data from SRM 660b collected using a Johansson IBM.

Figure 3.1.35. Δ(2θ) curves from the NIST machine configured with a Johansson IBM, illustrating a comparison of results from second-derivative and various profile-fitting methods. Data are from SRM 660b.

Figure 3.1.36. FWHM data from SRM 660b collected using the NIST machine configured with a Johansson IBM, illustrating a comparison of results from various profile-fitting and data-collection methods.

FWHM values from the machine equipped with the IBM and PSD are shown in Fig. 3.1.37[link], again with data from SRM 660b. These values were obtained from fits of the split Pearson VII PSF using uniform weighting. The resolution improvement from the use of the PSD is due to the 75 µm strip width, as opposed to the 200 µm receiving slit used with the scintillation detector. This is analogous to a reduction in the width of the top-hat function used to model the impact of the receiving slit or silicon strip width as discussed in Section 3.1.2[link]. The impact is greatest at low 2θ angles where the other contributions to the overall breadth are small. With increasing 2θ angle, the contribution of a top-hat function to overall breadth is reduced because it is being convoluted with profiles influenced by ever-increasing spectral dispersion. The improvement in resolution with the reduction in the width of the PSD window is apparent, and is in accordance with expectations as per Fig. 3.1.7[link] of Section 3.1.2[link]. Also, because of the 1/tan θ dependence of this broadening effect, the impact of the window size nearly vanishes above 100° 2θ.
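The receiving-slit or strip contribution can be estimated as the angle it subtends at the goniometer radius. A minimal sketch, assuming a hypothetical 217.5 mm radius (the actual radius of any given machine must be substituted):

```python
import math

def slit_angular_width_deg(width_mm, radius_mm):
    """Width (degrees 2theta) of the top-hat function contributed by a
    receiving slit or detector strip of the given width at the given
    goniometer radius."""
    return math.degrees(width_mm / radius_mm)

# Hypothetical 217.5 mm radius: 200 um receiving slit versus 75 um strip
w_slit = slit_angular_width_deg(0.200, 217.5)   # ~0.053 degrees
w_strip = slit_angular_width_deg(0.075, 217.5)  # ~0.020 degrees
```

The narrower strip contributes a top-hat less than half the width of the slit, which matters most at low 2θ where the other broadening contributions are small.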

Figure 3.1.37. FWHM data from SRM 660b collected using the NIST machine configured with a Johansson IBM and PSD, illustrating the contribution to defocusing at low angles with increasing window width.

Fig. 3.1.38[link] shows FWHM data obtained for SRMs 640e, 1976b and 660c using the split Pearson VII PSF, fitted using uniform weighting on data collected with the IBM and PSD with a 4 mm window. The 660c data set, which exhibits the lowest FWHM values, will be discussed first. The FPA analysis performed in the certifications of SRM 660b and 660c included a Lorentzian FWHM with a 1/cos θ dependence to account for size-induced broadening; a domain size of approximately 0.7 to 0.8 µm was indicated. There is a high level of uncertainty in these values, as they are reflective of an exceedingly small degree of broadening, the detection of which is near the resolution limit of the equipment. The term varying as tan θ, interpreted as microstrain, refined to zero. These values are found in the CoA for the SRMs. The linear attenuation coefficient for a compact of LaB6, with an intrinsic linear attenuation of 1125 cm−1 and a particle-packing factor of 60 to 70%, would be approximately 800 cm−1. Therefore, the contribution to the observed FWHM from specimen transparency with SRM 660c is negligible, as illustrated in Fig. 3.1.10[link]. Likewise, the FPA analysis performed for the certification of SRM 640e included size and microstrain terms; a smaller crystallite size of 0.6 µm was obtained with a very slight amount of microstrain broadening. However, the linear attenuation coefficient for silicon is 148 cm−1; for a powder compact it would be approximately 100 cm−1. The transparency of this specimen would lead to significant broadening. (See Fig. 3.1.10[link] for the effect of an attenuation of 100 cm−1.) Therefore, these three effects, in combination, would be expected to lead to a small degree of broadening throughout the 2θ range for SRM 640e, but with a substantial effect in the mid-angle region because of the sin 2θ dependence of the transparency aberration. 
Lastly, SRM 1976b is a sintered compact of near theoretical density; therefore, considering the linear attenuation coefficient for alumina, 126 cm−1, a value for the actual SRM 1976b specimen of somewhat less than this is expected. An FPA analysis of SRM 1976b indicates a domain size of 1 µm, but with a significant degree of Gaussian microstrain broadening; this is evident in the observed increase in FWHM with 2θ angle shown in Fig. 3.1.38[link]. We conclude that the FWHM data from all three SRMs shown in Fig. 3.1.38[link] are in correspondence with expectations and can be used to select which SRM is best suited for a given application. We do not, however, recommend using an SRM other than SRM 660x for a microstructure analysis. It should be added that fitting the profiles of SRM 1976b is complicated by the fact that many of them overlap; this leads to the oscillations in the FWHM values shown in Fig. 3.1.38[link] for this SRM. The origins of this difficulty were discussed in Section 3.1.5[link] and can be addressed with the use of the Caglioti function.
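The attenuation reasoning above can be made concrete with a short sketch. The effective attenuation of a compact is the intrinsic value scaled by the packing fraction; the transparency displacement is estimated here with the standard thick-specimen expression Δ(2θ) = −sin 2θ/(2µR), which is our illustrative choice rather than the TOPAS or GSAS model, and the 217.5 mm goniometer radius is a hypothetical value.

```python
import math

def effective_mu(mu_intrinsic_cm, packing_fraction):
    """Linear attenuation (cm^-1) of a powder compact: intrinsic
    solid-state attenuation scaled by the particle-packing fraction."""
    return mu_intrinsic_cm * packing_fraction

def transparency_shift_deg(two_theta_deg, mu_cm, radius_mm):
    """Thick-specimen transparency displacement, -sin(2theta)/(2 mu R)
    in radians, converted to degrees; note the sin(2theta) dependence
    peaking near 90 degrees 2theta."""
    mu_per_mm = mu_cm / 10.0
    shift_rad = -math.sin(math.radians(two_theta_deg)) / (2.0 * mu_per_mm * radius_mm)
    return math.degrees(shift_rad)

# LaB6 compact (~1125 cm^-1 solid, ~65% packing) versus a Si compact (~100 cm^-1)
mu_lab6 = effective_mu(1125.0, 0.65)  # ~730 cm^-1: transparency negligible
mu_si = 100.0                         # transparency effects significant
```

Evaluating the shift at 90° 2θ for the two attenuation values shows the LaB6 effect to be roughly an order of magnitude smaller than that of silicon, consistent with the discussion above.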

Figure 3.1.38. FWHM data from SRMs 640e, 1976b and 660c collected with the IBM and PSD (4 mm window) and fitted using the split Pearson VII PSF with uniform weighting.

With the use of model-based methods for calibration and subsequent data analysis, it is appropriate to consider a strategy for the refinement of the available parameters. The successful refinement will yield the right answer and, with the use of models that make sound physical sense with respect to the experimental design, a good fit to the observation. The refinement strategies for both FPA and Rietveld analyses can be based on a consideration of which terms are specific to the IPF and the manner in which they can be determined. Several parameters can be measured explicitly from experiments other than the diffraction experiment under examination. Examples of these `well determined' parameters include the goniometer zero angles and the incident- and receiving-slit sizes. Conversely, indeterminate metrics that can only be determined through the diffraction experiment itself include the impact of the post-monochromator on the Cu Kα1/Kα2 ratio and the degree of axial divergence. Indeterminate parameters specific to the IPF are only refined using high-quality data from standards and are fixed for subsequent analyses of unknowns. This approach tends to result in stable and robust refinements. Parameters can, therefore, be considered as falling into three groups: those that are specific to any given sample and are always refined, ones that are specific to the IPF and are refined using only high-quality data from standards, and lastly the highly determined parameters that are refined only as a basic test of the model.

To consider the Thompson, Cox & Hastings (1987[link]) (TCH) formalism of the pseudo-Voigt PSF with the Finger model for asymmetry, which is common to many Rietveld codes, a Rietveld analysis of SRM 660b was performed using GSAS (using the type-3 PSF) and TOPAS (using the PV_TCHZ peak type). The TCH formalism allows for the direct refinement of the Gaussian and Lorentzian FWHM values. The Caglioti function was used; Lorentzian terms were constrained as per equation (3.1.2)[link]. The S/L and H/L terms are highly correlated; S/L was refined, while H/L was adjusted manually so that the two terms were nearly equal. Additional parameters that were refined included the lattice parameters, sample displacement and transparency terms, Chebyshev polynomial terms (typically 5 to 7) to represent the background, scale factors, the type-0 Lorentz–polarization term (GSAS), the Cu Kα1/Kα2 ratio, and structural parameters. With this strategy, the sample shift and transparency aberration functions, in conjunction with the Finger asymmetry model, were used to model the data of Fig. 3.1.26[link]. Given that the Finger model is not entirely appropriate for divergent-beam laboratory data, the sample shift and transparency terms may refine to non-physical values. They will, however, correctly indicate relative values for sample z height and transparency. The model for specimen transparency in TOPAS is the asymmetric function illustrated in Fig. 3.1.10[link], while the model in GSAS consists of a profile displacement in sin 2θ. The TCH/Finger formalism of TOPAS reproduced the certified lattice parameter and resulted in a GoF of 1.5, whereas the GoF value realized with GSAS was 1.85. Fig. 3.1.39[link] displays the fit quality of the 100, 310 and 510 reflections obtained with TOPAS. The fit to the asymmetry of the 100 reflection is reasonable, with a 0.007° shift in position. The fit to the 510 reflection is not dissimilar to that shown in Fig. 3.1.32[link], indicating that the Caglioti function is working analogously to the manner previously discussed. The improvement in fit with the TOPAS implementation was most notable around the 70 to 90° 2θ region, where the transparency effects are at a maximum. These results validate the TCH/Finger formalism and constitute a valid calibration for this equipment and data-analysis method; the utility of the aberration function for specimen transparency as documented by Cheary & Coelho (1992[link]) is demonstrated.
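The TCH formalism combines the Gaussian and Lorentzian FWHM components through a quintic interpolation. A minimal sketch following the Thompson, Cox & Hastings (1987) parameterization:

```python
def tch_fwhm_eta(gamma_g, gamma_l):
    """Thompson-Cox-Hastings pseudo-Voigt: total FWHM and mixing
    parameter eta from the Gaussian (gamma_g) and Lorentzian (gamma_l)
    FWHM components, both in the same angular units."""
    gg, gl = gamma_g, gamma_l
    # quintic interpolation for the total FWHM
    gamma = (gg**5 + 2.69269*gg**4*gl + 2.42843*gg**3*gl**2
             + 4.47163*gg**2*gl**3 + 0.07842*gg*gl**4 + gl**5) ** 0.2
    # mixing parameter from the Lorentzian fraction q = gamma_l / gamma
    q = gl / gamma
    eta = 1.36603*q - 0.47719*q**2 + 0.11116*q**3
    return gamma, eta
```

The limits behave as expected: a purely Gaussian component gives eta = 0, and a purely Lorentzian one gives eta = 1, with the total FWHM equal to the single non-zero component in either case.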

Figure 3.1.39. Fits of three SRM 660b lines obtained with a Rietveld analysis using the Thompson, Cox and Hastings formalism of the pseudo-Voigt PSF and the Finger model for asymmetry. TOPAS was used for the analysis.

Differentiating between the profile-shape terms that are specific to the IPF and those refined to consider the microstructure of unknowns yields a stable refinement strategy when using the TCH/Finger formalism. The profile parameters GU, GV, GW, LX, LY, S/L and H/L as determined from SRM 660b constitute the IPF and are fixed, or used as floors, in subsequent refinements (Cline, 2000[link]). The IPF for the NIST machine was described with only the GW, LX and LY parameters. In subsequent analyses only the GP, GU, LX and LY terms were refined to represent Gaussian size and microstrain and Lorentzian size and microstrain broadening, respectively, and thus yield microstructural information from the sample. Parameters that tend to values less than the IPF were fixed at IPF values. The Finger asymmetry parameters determined from the standard need not be refined with unknowns; it has, however, been observed that doing so will neither substantially improve the quality of the fit, nor will it result in instability. Additional parameters that are always refined with unknowns include: scale factors, lattice parameters, specimen displacement and transparency terms, background terms, and structural parameters.

While an analysis of SRM 660x permits the calibration of the instrument with respect to profile shape and position, it is also desirable to evaluate parameters related to the diffraction intensity. However, the analysis of data from high-symmetry materials such as silicon and lanthanum hexaboride may result in some degree of instability with the refinement of the intensity-specific parameters, perhaps because of the relatively small number of lines. Use of SRM 676a addresses this difficulty (Fig. 3.1.40[link]). With this analysis, the Lorentz–polarization factor refined to a credible value and structure parameters were within the bounds of those obtained from the high-q-range experiments performed in the certification of SRM 676a (Cline et al., 2011[link]).

[Figure 3.1.40]

Figure 3.1.40

Fits of SRM 676a obtained from a Rietveld analysis using GSAS with the Thompson, Cox and Hastings formalism of the pseudo-Voigt PSF and the Finger model for asymmetry.

We begin the discussion of the FPA method for instrument calibration by listing the IPF-specific parameters that would be refined in a basic calibration via the analysis of an SRM. The parameters to be refined for the emission spectrum include the positions and intensities of the Kα2 profile, the satellite components and the tube tails. When addressing the Kα2 profile, the relative positions and intensity ratios of the Kα21 and Kα22 Lorentzian profiles were constrained so as to preserve the overall shape as characterized by Hölzer et al. (1997[link]). For the geometric profile, a single Soller-slit angle was refined to characterize the degree of axial divergence, using the case-2 axial-divergence model applied to both the incident and diffracted beams. Other parameters of the geometric profile were fixed at known values. Additional parameters included a Lorentzian size-broadening term, background terms, and profile intensities and positions. A Gaussian microstrain term was included for analyses of SRM 1976b. Fig. 3.1.41[link] shows the quality of the fits obtained from an FPA analysis of SRM 660b. These fits represent a substantial improvement over those using any of the analytical PSFs (Figs. 3.1.32[link] and 3.1.39[link]). Furthermore, the GoF residual error term for an FPA profile analysis of a continuous scan of SRM 660b was 1.08, while the corresponding terms from analyses of the same data using the split pseudo-Voigt and split Pearson VII PSFs were 1.65 and 1.43, respectively (all three analyses used TOPAS). The FPA method can account for subtleties in the observed X-ray line profiles that analytical PSFs could never be expected to fit. In subsequent analyses of unknowns, it is not imprudent to fix parameters associated with the IPF; refining them, however, is typically not problematic with the FPA.
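The constraint described for the Kα2 doublet can be sketched as follows: the spectrum is modelled as a sum of Lorentzians in which the internal splitting and intensity ratio of the Kα21/Kα22 pair are frozen, and only a common position and overall scale are treated as free. All energies, breadths and ratios below are illustrative placeholders, not the values of Hölzer et al.

```python
import math

def lorentz(x, x0, w, i0):
    """Lorentzian of FWHM w centred at x0 with peak amplitude i0."""
    return i0 * (w / 2) ** 2 / ((x - x0) ** 2 + (w / 2) ** 2)

# Illustrative four-Lorentzian Cu Kalpha model (energies in eV); breadths
# and intensity ratios are placeholders, not Hoelzer's published values.
KA11 = (8047.8, 2.3, 1.00)
KA12 = (8045.4, 3.3, 0.50)
# Kalpha2 doublet: internal splitting and ratio held fixed relative to the
# doublet centroid; only the common shift and scale would refine.
KA21_REL = (0.0, 2.7, 1.00)
KA22_REL = (-2.0, 3.6, 0.65)

def spectrum(e, ka2_shift=8027.9, ka2_scale=0.51):
    """Emission spectrum with a shape-preserving constrained Kalpha2 pair."""
    s = lorentz(e, *KA11) + lorentz(e, *KA12)
    for rel, w, r in (KA21_REL, KA22_REL):
        s += ka2_scale * lorentz(e, ka2_shift + rel, w, r)
    return s
```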

[Figure 3.1.41]

Figure 3.1.41

Fit quality realized with a fundamental-parameters-approach analysis of SRM 660b peak-scan data using TOPAS.

There were indications that the breadths of the profiles of the Cu Kα emission spectrum as characterized by Hölzer et al. (1997[link]) were in excess of those of our observations. This was investigated using the ultra-high-quality data. The FWHM ratios of the two pairs of Lorentzian profiles, the Kα11 versus the Kα12 and the Kα21 versus the Kα22, were constrained to those reported by Hölzer et al. (1997[link]). The positions and intensities of the Kα2 doublet were also refined, again with constraints applied to preserve the shape as per Hölzer et al. (1997[link]). These refinements indicated that the breadths given by Hölzer et al. (1997[link]) were significantly in excess of those that gave the best fit to the data. After an extensive investigation, this observation was confirmed to originate with the performance of the post-monochromator. Several graphite monochromator crystals were investigated using a beam diffracted from an Si single crystal (333 reflection) mounted in the specimen position. The graphite crystals that were manufactured within the last 15 years all gave identical results: after an alignment procedure to optimize the intensity of the Kα1 line, they do clip the breadths of the profiles of the emission spectrum by approximately 20%. They also alter the position of diffraction lines by perhaps 0.01° in 2θ; therefore, the goniometer zero angles must be determined with the monochromator installed. We therefore used a reduced-breadth Hölzer emission spectrum in our FPA analysis. Note that these breadths vary with tan θ because of angular dispersion, as does microstrain; therefore, only a microstrain-free specimen can be used for an analysis of the impact of a monochromator on the emission spectrum. We found that both SRMs 660c and 640e were suitable for this analysis.
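The tan θ scaling of the emission-spectrum breadths follows from differentiating Bragg's law: d(2θ) = 2 tan θ · Δλ/λ, with |Δλ/λ| = |ΔE/E|. A short sketch (using a nominal Cu Kα1 energy) converts a breadth in energy to the 2θ breadth it produces at a given angle:

```python
import math

E_CU_KA1 = 8047.8   # eV, nominal Cu Kalpha1 energy

def breadth_2theta(delta_e_ev, two_theta_deg, e0=E_CU_KA1):
    """2theta breadth (degrees) produced by an emission-line energy
    breadth (eV) via angular dispersion: d(2theta) = 2 tan(theta) dE/E."""
    theta = math.radians(two_theta_deg / 2)
    return math.degrees(2 * math.tan(theta) * delta_e_ev / e0)
```

For a fixed energy breadth, the resulting 2θ breadth is an order of magnitude larger at back-reflection angles than at low angles, which is why a microstrain-free specimen (sharing the same tan θ dependence) is needed to isolate the monochromator's effect.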

The refinement strategy for the case-2 Soller-slit angle was also investigated with the ultra-high-quality data. Technically, the axial divergence of the incident beam, with the inclusion of the Soller slit, is less than that of the diffracted beam, which is limited by its extended beam-path length through the monochromator. Several strategies were investigated, some of which may have represented a more accurate physical model than a single divergence value applied to both beams, but none resulted in any improvement in fit quality. Lastly, it was observed that the width of the divergence slit, particularly with the use of the IBM, refined to values in excess of the known width. This observation is discussed further below.

With the certification of SRMs 640e, 660c and 1878b (respirable quartz, 2014), global refinements were set up to allow the simultaneous analysis of the 20 high-quality data sets collected for the certification of each SRM. With this approach, the analyses could be carried out in the context of highly favourable Poisson counting statistics, permitting a robust analysis of FPA models that would otherwise be problematic because of parameter correlation. Data were collected from two samples from each bottle. With SRMs 640e and 660c, the machine was configured as in this study, with the post-monochromator; for 1878b, it was configured likewise with the IBM and scintillation detector. For SRM 660c the data were collected in accordance with the run-time parameters of Table 3.1.2[link], and in an analogous manner for SRM 640e. For SRM 1878b, the data were collected on mixtures of 50% SRM 1878b and 50% SRM 676a in continuous, 24 h scans. Concurrent with the effort to certify SRMs 640e and 660c, the agreement between the results from FPAPC and TOPAS was established, indicating that both codes operated in accordance with published FPA models (Mendenhall et al., 2015[link]). Initially with FPAPC and later with TOPAS, the data from these three SRMs were analysed using the global refinement strategy.

The global refinements were used to investigate possible difficulties with the FPA models. First, they were used to determine more robust values for the breadths of the emission spectrum as influenced by the post-monochromator. The issue concerning the refined value for the incident slit size was then revisited. Values of 25% in excess of the known size were observed in refinements of IBM data from several materials using TOPAS. While these refinements were quite robust, corresponding analysis of ultra-high-quality post-monochromator data sets resulted in a slow increase in the slit value with little change in residual error terms, indicating a shallow χ2 minimization surface. With the global analysis of the SRM 660c, 640e and 1878b data, however, the incident slit value refined robustly to values 15 to 25% in excess of the known value. The reduced correlations between models with the global refinements led to this improved ability to reach the minimum in error space for both data types. An investigation into the sensitivity of the lattice-parameter value and GoF to the incident-slit size was consistent with the shallow χ2 minimization surface; changes in lattice parameters were less than 2 fm and only small changes in GoF were noted. The lowest-angle lines used in our analyses were at 18°; given the 1/tan θ dependence of the incident-slit correction, lower-angle lines are required for robust use of this model for refinement of incident-slit size. A second observation of concern was the low values for sample attenuation refined from data for SRM 660x. As previously stated, a reasonable value for a compact of LaB6 would be 800 cm−1, yet the fits were giving values in the 400 cm−1 range. Again, a sensitivity study indicated little dependence of either the lattice parameter or the GoF on the attenuation values when they are this large.
In contrast, sensitivity studies on SRM 640x (silicon, in the 80 to 100 cm−1 range) indicated a high level of response to changes in attenuation values. Again, in the range where the model is active, the results correspond with expectations; where the model has little impact, refined parameter values may deviate from the true values without degrading the refinement as a whole. We are continuing to investigate the issue of the non-physical values obtained for the refined divergence-slit width.

The Δ(2θ) data shown in Fig. 3.1.42[link] illustrate results from an FPA analysis of the 20 data sets collected for the certification of SRMs 660c and 640e. The Δ(2θ) values were generated using the certified lattice parameters of SRMs 660c and 640e to compute `SRM' or reference peak positions, and the unconstrained profile positions from the FPA analysis were used as the `test' data. The analyses were performed using TOPAS with the divergence-slit width fixed at the known value. The data in Fig. 3.1.42[link] clearly reflect the efficacy of the FPA method. The certification data for these SRMs were collected on the machine set up as for Fig. 3.1.26[link]; the trends of the peak position for these data are identical to those of Fig. 3.1.26[link]. Yet the FPA has corrected the profile positions to a degree indiscernible from the `true' positions in the 40 to 120° 2θ region. The trends observed otherwise in these data are consistent with prior observations discussed at length above, albeit in 2θ regions limited to below 40° and above 120° and to a vastly reduced level. These deviations are consistent with shortcomings in the model, although the deviations are so small that it may be difficult to work out their origin. The unequivocal technical justification for use of the FPA in SRM certification is also apparent in Fig. 3.1.42[link]; when properly used, the method is capable of reporting the `true' d-spacing for profiles located in the 40 to 120° 2θ region.
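The construction of such a Δ(2θ) curve can be sketched as follows: `SRM' reference positions are computed from the lattice parameter via Bragg's law and subtracted from the fitted `test' positions. The wavelength and LaB6 lattice parameter below are nominal values for illustration, not the certified figures:

```python
import math

WL = 1.540593      # Cu Kalpha1 wavelength in angstroms (nominal)
A_LAB6 = 4.15683   # LaB6 lattice parameter in angstroms (illustrative)

def ref_two_theta(h, k, l, a=A_LAB6, wl=WL):
    """'SRM' reference position (degrees 2theta) for a cubic reflection,
    computed from the lattice parameter via Bragg's law."""
    d = a / math.sqrt(h * h + k * k + l * l)
    return 2 * math.degrees(math.asin(wl / (2 * d)))

def delta_two_theta(observed, hkl):
    """Delta(2theta): fitted 'test' position minus computed 'SRM' position."""
    return observed - ref_two_theta(*hkl)
```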

[Figure 3.1.42]

Figure 3.1.42

Δ(2θ) data from the 20 data sets collected for the certification of SRMs 660c and 640e, determined via FPA analyses using TOPAS.

Using SRM 1976b for calibration of the instrument response entails determining the integrated intensity of 14 profiles from the test instrument and comparing them with certified values. However, the test instrument in this case was the NIST instrument equipped with the graphite post-monochromator. Therefore, the relative intensity values used for comparison were those biased to account for the effects of polarization; they were obtained from Table 4 of the SRM 1976b CoA. Fig. 3.1.43[link] shows the results from various data-analysis techniques performed on a common raw data set from the test instrument. With the noteworthy exception of the split Pearson VII PSF, all methods gave an acceptable result. It can be seen that when intensity measurement is the issue, the use of unconstrained PSFs is more effective than the analyses described earlier, which were intended to determine the profile position or FWHM. With the use of GSAS, the pattern was fitted with a Rietveld analysis using a sixth-order spherical harmonic to model the texture. The reported relative intensity data are computed from the observed structure factors using the GSAS utility REFLIST. This approach is identical to that used for the certification of SRM 1976b, except that the certification data were collected on the NIST instrument set up with the IBM.
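The polarization bias mentioned above arises because a crystal monochromator modifies the Lorentz–polarization factor (the cited works of Azároff (1955) and Yao & Jinno (1982) treat this in detail). A sketch of the widely used ideally-mosaic-crystal form, with a nominal graphite (002) setting angle for Cu Kα:

```python
import math

def lp_factor(two_theta_deg, two_theta_mono_deg=26.6):
    """Lorentz-polarization factor for a diffractometer with a crystal
    monochromator in the beam path (ideally-mosaic-crystal form).
    K = cos^2(2theta_mono); 26.6 degrees is the nominal graphite (002)
    diffraction angle for Cu Kalpha."""
    tt = math.radians(two_theta_deg)
    th = tt / 2
    K = math.cos(math.radians(two_theta_mono_deg)) ** 2
    pol = (1 + K * math.cos(tt) ** 2) / (1 + K)   # polarization term
    lor = 1 / (math.sin(th) ** 2 * math.cos(th))  # Lorentz term
    return lor * pol
```

Setting the monochromator angle to zero recovers the unpolarized-beam form, which is why intensities certified for one optical configuration must be re-biased before comparison with another.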

[Figure 3.1.43]

Figure 3.1.43

Qualification of a machine using SRM 1976b. The data were analysed using several PSFs.

The structure common to all the data sets of Fig. 3.1.43[link] is as yet unexplained. With any of these methods, modelling the background is of critical concern; the intensity scale of the fitted pattern must be expanded to allow inspection of the background fit alone. The weak amorphous peak at approximately 25° 2θ, which is associated with the anorthite glass matrix phase, complicates the matter. Certain refinement programs allow the insertion of a broad peak to account for this; alternatively, an 11- to 13-term Chebyshev polynomial can be used. Keeping the number of these terms to a minimum helps prevent the background function from interfering with the modelling of the profiles. Lastly, the use of Kβ filters in conjunction with a PSD can be problematic for the calibration of instrument response using SRM 1976b. Such filters typically impart an absorption edge in the background on the low-energy side of the profiles. With a high-count-rate PSD this effect can be quite pronounced, causing difficulties in fitting the background and, therefore, erroneous determination of the profile intensity.
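A minimal sketch of such a background fit, assuming the peak regions have been masked out so that only background points are fitted; NumPy's Chebyshev utilities stand in here for whatever a given refinement program uses internally:

```python
import numpy as np

def fit_background(two_theta, intensity, n_terms=12, mask=None):
    """Fit an n_terms-term Chebyshev polynomial to the background of a
    pattern.  'mask' selects background-only points (peak regions
    excluded); keeping n_terms near 11-13, as in the text, limits
    interference with the profile models."""
    x = np.asarray(two_theta, dtype=float)
    y = np.asarray(intensity, dtype=float)
    if mask is None:
        mask = np.ones_like(x, dtype=bool)
    # Map 2theta onto [-1, 1] for numerical conditioning of the fit.
    xn = 2 * (x - x.min()) / np.ptp(x) - 1
    # chebfit of degree n_terms - 1 yields n_terms coefficients.
    coef = np.polynomial.chebyshev.chebfit(xn[mask], y[mask], n_terms - 1)
    return np.polynomial.chebyshev.chebval(xn, coef)
```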

3.1.7. Conclusions


In this chapter, we reviewed the theoretical background behind the well known complexity of X-ray powder diffraction line profiles. A divergent-beam laboratory X-ray diffractometer with a conventional layout was used to rigorously examine the full range of procedures that have been developed for the analysis of the instrument profile function. The machine featured superlative accuracy in angle measurement, and attention was paid to the precision and stability of the optical components and sample positioning. The instrument was aligned in accordance with first-principles methods and was shown to exhibit an optical performance that conformed with the expectations of established theories for powder-diffraction optics.

Data-analysis methods can be divided into two categories that require fundamentally different approaches to instrument calibration. Empirical profile-analysis methods, either based on second-derivative algorithms or profile fitting using analytical profile-shape functions, seek to characterize the instrument performance in terms of shape and position parameters that are used in subsequent analysis for determining the character of the specimen. These methods, however, provide no information about the origins of the peak shift or profile shape that they describe. Model-based methods seek to link the observation directly to the character of the entire experiment. The calibration procedure for the first category can be regarded as a `classical' calibration, where a correction curve is developed through the use of an SRM and applied to subsequent unknowns. With model-based methods, it is the user's responsibility to calibrate the instrument in a manner that ensures that the models being used correspond correctly to the experiment. This is best accomplished through the analysis of results from empirical methods, particularly Δ(2θ) curves, as well as the analysis of data from an SRM followed by a critical examination of the refined parameters.

Second-derivative-based algorithms for determining peak locations are able to provide the 2θ positions (the positions of the maxima in the observed profile intensity) to within ±0.0025° 2θ. Profile fitting using analytical profile-shape functions to determine the peak position was shown to be problematic; errors of up to 0.015° 2θ were noted. The use of uniform weighting in the refinements resulted in improved accuracy in the reported peak positions and FWHM values. Using a Johansson incident-beam monochromator led to high-quality fits of diffraction data using analytical profile shape functions. The Caglioti function can be used to improve the reliability of FWHM values.
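As an illustration of the principle (not the algorithm of any particular program), a second-derivative peak locator can be sketched as follows; the smoothing window is an illustrative tuning parameter, and real implementations use polynomial smoothing and noise thresholds:

```python
import numpy as np

def peak_positions(two_theta, counts, smooth_pts=7):
    """Locate profile maxima via minima of the second derivative of the
    smoothed pattern -- a schematic of the second-derivative approach."""
    x = np.asarray(two_theta, dtype=float)
    y = np.asarray(counts, dtype=float)
    kernel = np.ones(smooth_pts) / smooth_pts      # boxcar smoothing
    ys = np.convolve(y, kernel, mode="same")
    d2 = np.gradient(np.gradient(ys, x), x)        # numerical 2nd derivative
    peaks = []
    for i in range(1, len(d2) - 1):
        # A peak maximum appears as a negative local minimum of d2.
        if d2[i] < d2[i - 1] and d2[i] < d2[i + 1] and d2[i] < 0:
            # Parabolic interpolation of the d2 minimum for sub-step precision.
            denom = d2[i - 1] - 2.0 * d2[i] + d2[i + 1]
            shift = 0.5 * (d2[i - 1] - d2[i + 1]) / denom if denom else 0.0
            peaks.append(x[i] + shift * (x[1] - x[0]))
    return peaks
```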

The fundamental-parameters approach was found to be effective in modelling the performance of the Bragg–Brentano divergent-beam X-ray diffractometer. The form of the Δ(2θ) curve, determined via a second-derivative algorithm, can be explained quantitatively through an examination of FPA models. Furthermore, FPA simulations of diffraction data, computed from the instrument configuration using both commercial and NIST FPA codes, and analysed using the same second-derivative algorithm, reproduced the Δ(2θ) results from the experimental data. This self-consistency verified the correct operation of both the instrument and the FPA models. Using the FPA for modelling the diffraction profiles provided the best fits to the observations and the most accurate results for the `true' reported peak positions. The TCH/Finger models for profile shape yielded credible results for refinement of lattice parameters via the Rietveld method.

This chapter is based on an article published in the Journal of Research of the National Institute of Standards and Technology (Cline et al., 2015[link]).


Azároff, L. V. (1955). Polarization correction for crystal-monochromatized X-radiation. Acta Cryst. 8, 701–704.
Bergmann, J., Friedel, P. & Kleeberg, R. (1998). BGMN – a new fundamental parameters based Rietveld program for laboratory X-ray sources, its use in quantitative analysis and structure investigations. IUCr Commission on Powder Diffraction Newsletter, 20, 5–8.
Bergmann, J., Kleeberg, R., Haase, A. & Breidenstein, B. (2000). Advanced fundamental parameters model for improved profile analysis. In Proceedings of the Fifth European Conference on Residual Stresses, edited by A. J. Böttger, R. Delhez & E. J. Mittemeijer, Mater. Sci. Forum, 347–349, 303–308. Zürich-Uetikon, Switzerland: Trans Tech Publications.
Berkum, J. G. M. van, Sprong, G. J. M., de Keijser, T. H., Delhez, R. & Sonneveld, E. J. (1995). The optimum standard specimen for X-ray diffraction line-profile analysis. Powder Diffr. 10, 129–139.
BIPM (2006). The International System of Units (SI), 8th ed. Sèvres: Bureau International des Poids et Mesures.
Black, D. R., Windover, D., Henins, A., Filliben, J. & Cline, J. P. (2011). Certification of standard reference material 660b. Powder Diffr. 26 (Special Issue 02), 155–158.
Bruker AXS (2014). TOPAS Software.
Caglioti, G., Paoletti, A. & Ricci, F. (1958). Choice of collimators for a crystal spectrometer for neutron diffraction. Nucl. Instrum. 3, 223–228.
Cheary, R. W. & Cline, J. P. (1995). An analysis of the effect of different instrumental conditions on the shapes of X-ray line profiles. Adv. X-ray Anal. 38, 75–82.
Cheary, R. W. & Coelho, A. (1992). A fundamental parameters approach to X-ray line-profile fitting. J. Appl. Cryst. 25, 109–121.
Cheary, R. W. & Coelho, A. (1994). Synthesizing and fitting linear position-sensitive detector step-scanned line profiles. J. Appl. Cryst. 27, 673–681.
Cheary, R. W. & Coelho, A. A. (1996). Programs XFIT and FOURYA. CCP14 Powder Diffraction Library, Engineering and Physical Sciences Research Council, Daresbury Laboratory, UK.
Cheary, R. W. & Coelho, A. A. (1998a). Axial divergence in a conventional X-ray powder diffractometer. I. Theoretical foundations. J. Appl. Cryst. 31, 851–861.
Cheary, R. W. & Coelho, A. A. (1998b). Axial divergence in a conventional X-ray powder diffractometer. II. Realization and evaluation in a fundamental-parameter profile fitting procedure. J. Appl. Cryst. 31, 862–868.
Cline, J. P. (2000). Use of NIST standard reference materials for characterization of instrument performance. In Industrial Applications of X-ray Diffraction, edited by F. H. Chung & D. K. Smith, pp. 903–917. New York: Marcel Dekker, Inc.
Cline, J. P., Mendenhall, M. H., Black, D., Windover, D. & Henins, A. (2015). The optics and alignment of the divergent beam laboratory X-ray powder diffractometer and its calibration using NIST standard reference materials. J. Res. NIST, 120, 173–222.
Cline, J. P., Von Dreele, R. B., Winburn, R., Stephens, P. W. & Filliben, J. J. (2011). Addressing the amorphous content issue in quantitative phase analysis: the certification of NIST standard reference material 676a. Acta Cryst. A67, 357–367.
Degen, T., Sadki, M., Bron, E., König, U. & Nénert, G. (2014). Powder Diffr. 29, S13–S18.
Edmonds, J., Brown, A., Fischer, G., Foris, C., Goehner, R., Hubbard, C., Evans, E., Jenkins, R., Schreiner, W. N. & Visser, J. (1986). JCPDS – International Centre for Diffraction Data Task Group on Cell Parameter Refinement. Powder Diffr. 1, 66–76.
Fawcett, T. G., Kabbekodu, S. N., Faber, J., Needham, F. & McClune, F. (2004). Evaluating experimental methods and techniques in X-ray diffraction using 280,000 data sets in the Powder Diffraction File. Powder Diffr. 19, 20–25.
Finger, L. W., Cox, D. E. & Jephcoat, A. P. (1994). A correction for powder diffraction peak asymmetry due to axial divergence. J. Appl. Cryst. 27, 892–900.
Guinier, A. (1994). X-ray Diffraction in Crystals, Imperfect Crystals, and Amorphous Bodies. N. Chelmsford, USA: Courier Dover Publications.
Hall, M. M., Veeraraghavan, V. G., Rubin, H. & Winchell, P. G. (1977). The approximation of symmetric X-ray peaks by Pearson type VII distributions. J. Appl. Cryst. 10, 66–68.
Hastings, J. B., Thomlinson, W. & Cox, D. E. (1984). Synchrotron X-ray powder diffraction. J. Appl. Cryst. 17, 85–95.
Hölzer, G., Fritsch, M., Deutsch, M., Härtwig, J. & Förster, E. (1997). Kα1,2 and Kβ1,3 X-ray emission lines of the 3d transition metals. Phys. Rev. A, 56, 4554–4568.
Huber (2014). Monochromator 611 manual.
JCGM (2008a). Uncertainty of Measurement – Part 3: Guide to the expression of uncertainty in measurement (JCGM 100:2008, GUM: 1995). Tech. Rep. Joint Committee for Guides in Metrology.
JCGM (2008b). International vocabulary of metrology – basic and general concepts and associated terms (VIM). Tech. Rep. Joint Committee for Guides in Metrology.
Jenkins, R. (1992). Round robin on powder diffractometer sensitivity. ICDD Workshop at Accuracy in Powder Diffraction II, May 26–29, NIST, Gaithersburg, USA.
Johansson, T. (1933). Über ein neuartiges, genau fokussierendes Röntgenspektrometer. Z. Phys. 82, 507–528.
Klug, H. P. & Alexander, L. E. (1974). X-ray Diffraction Procedures, 2nd ed. New York: John Wiley & Sons.
Larson, A. C. & Von Dreele, R. B. (2004). General Structure Analysis System (GSAS). Tech. Rep. Los Alamos National Laboratory, New Mexico, USA.
Louër, D. (1992). Personal communication.
Louër, D. & Langford, J. I. (1988). Peak shape and resolution in conventional diffractometry with monochromatic X-rays. J. Appl. Cryst. 21, 430–437.
McCusker, L. B., Von Dreele, R. B., Cox, D. E., Louër, D. & Scardi, P. (1999). Rietveld refinement guidelines. J. Appl. Cryst. 32, 36–50.
Maskil, N. & Deutsch, M. (1988). X-ray Kα satellites of copper. Phys. Rev. A, 38, 3467–3472.
Mendenhall, M. H., Mullen, K. & Cline, J. P. (2015). An implementation of the fundamental parameters approach for analysis of X-ray powder diffraction line profiles. J. Res. NIST, 120, 223–251.
NIST (2008). Standard Reference Material 676a: Alumina Internal Standard for Quantitative Analysis by X-ray Powder Diffraction. SRM certificate. NIST, U. S. Department of Commerce, Gaithersburg, MD, USA.
NIST (2010). Standard Reference Material 660b: Lanthanum Hexaboride – Powder Line Position and Line Shape Standard for Powder Diffraction. SRM certificate. NIST, U. S. Department of Commerce, Gaithersburg, MD, USA.
NIST (2015a). 209.1 – X-ray Diffraction (powder and solid forms). SRM catalog. NIST, U. S. Department of Commerce, Gaithersburg, MD, USA.
NIST (2015b). Standard Reference Material 1976b: Instrument Response Standard for X-ray Powder Diffraction. SRM certificate. NIST, U. S. Department of Commerce, Gaithersburg, MD, USA.
NIST (2015c). Standard Reference Material 640e: Line Position and Line Shape Standard for Powder Diffraction (Silicon Powder). SRM certificate. NIST, U. S. Department of Commerce, Gaithersburg, MD, USA.
NIST (2015d). Standard Reference Material 660c: Line Position and Line Shape Standard for Powder Diffraction (Lanthanum Hexaboride Powder). SRM certificate. NIST, U. S. Department of Commerce, Gaithersburg, MD, USA.
Rietveld, H. M. (1967). Line profiles of neutron powder-diffraction peaks for structure refinement. Acta Cryst. 22, 151–152.
Rietveld, H. M. (1969). A profile refinement method for nuclear and magnetic structures. J. Appl. Cryst. 2, 65–71.
Rigaku (2014). PDXL 2, Rigaku powder diffraction data analysis software version 2.2. Rigaku Corporation, Tokyo, Japan.
Taylor, B. & Kuyatt, C. (1994). TN1297: Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results. Tech. Rep. NIST. Washington, DC: U. S. Government Printing Office.
Thompson, P., Cox, D. E. & Hastings, J. B. (1987). Rietveld refinement of Debye–Scherrer synchrotron X-ray data from Al2O3. J. Appl. Cryst. 20, 79–83.
Toby, B. H. & Von Dreele, R. B. (2013). GSAS-II: the genesis of a modern open-source all purpose crystallography software package. J. Appl. Cryst. 46, 544–549.
Wilson, A. J. C. (1963). Mathematical Theory of X-ray Powder Diffractometry. New York: Gordon & Breach.
Yao, T. & Jinno, H. (1982). Polarization factor for the X-ray powder diffraction method with a single-crystal monochromator. Acta Cryst. A38, 287–288.
