International Tables for Crystallography Volume H: Powder diffraction. Edited by C. J. Gilmore, J. A. Kaduk and H. Schenk. © International Union of Crystallography 2018
International Tables for Crystallography (2018). Vol. H, ch. 3.8, pp. 327–329.
Section 3.8.3. Cluster analysis

Department of Chemistry, University of Glasgow, University Avenue, Glasgow, G12 8QQ, UK
Cluster analysis uses d (or s, or δ) to partition the patterns into groups based on the similarity of their diffraction profiles. Associated with cluster analysis are a number of important ancillary techniques, all of which will be discussed here. A flowchart of these methods is shown in Fig. 3.8.4.
Using d and s, agglomerative, hierarchical cluster analysis is now carried out, in which the patterns are put into clusters as defined by their distances from each other. [Gordon (1981, 1999) and Everitt et al. (2001) provide excellent and detailed introductions to the subject. Note that the two editions of Gordon's monograph are quite distinct and complementary.] The method begins with a situation in which each pattern is considered to be in a separate cluster. It then searches for the two patterns with the shortest distance between them, and joins them into a single cluster. This continues in a stepwise fashion until all the patterns form a single cluster. When two clusters (C_{i} and C_{j}) are merged, there is the problem of defining the distance between the newly formed cluster and any other cluster C_{k}. There are a number of different ways of doing this, and each one gives rise to a different clustering of the patterns, although often the difference can be quite small. A general algorithm has been proposed by Lance & Williams (1967), and is summarized in a simplified form by Gordon (1981). The distance from the new cluster formed by merging C_{i} and C_{j} to any other cluster C_{k} is given by

d(C_{k}, C_{i} ∪ C_{j}) = α_{i}d(C_{k}, C_{i}) + α_{j}d(C_{k}, C_{j}) + βd(C_{i}, C_{j}) + γ|d(C_{k}, C_{i}) − d(C_{k}, C_{j})|.

There are many possible clustering methods. Table 3.8.1 defines six commonly used clustering methods in terms of the parameters α, β and γ. All these methods can be used with powder data; in general, the group-average-link or single-link formalism is the most effective, although differences between the methods are often slight.
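The stepwise amalgamation and the Lance–Williams update can be sketched as follows. This is a minimal, naive O(n³) illustration, not the published implementation; the single-link coefficients (α_{i} = α_{j} = ½, β = 0, γ = −½) are one of the six parameter choices of Table 3.8.1, and the function names are invented for this sketch.

```python
import numpy as np

def lance_williams_merge(d, i, j, alpha_i=0.5, alpha_j=0.5, beta=0.0, gamma=-0.5):
    """Distances from the cluster formed by merging clusters i and j to every
    other cluster, via the Lance-Williams recurrence.  The default
    coefficients (alpha_i = alpha_j = 1/2, beta = 0, gamma = -1/2) reproduce
    the single-link method, since a/2 + b/2 - |a - b|/2 = min(a, b)."""
    return (alpha_i * d[i] + alpha_j * d[j] + beta * d[i, j]
            + gamma * np.abs(d[i] - d[j]))

def agglomerate(d):
    """Naive agglomerative clustering on an (n x n) distance matrix.
    Returns the merges as (i, j, height) tuples, in amalgamation order."""
    d = d.astype(float).copy()
    active = list(range(d.shape[0]))
    merges = []
    while len(active) > 1:
        # find the closest pair among the currently active clusters
        h, i, j = min((d[i, j], i, j) for i in active for j in active if i < j)
        merges.append((i, j, h))
        # cluster j is absorbed into i; update i's distances to all others
        d[i, :] = d[:, i] = lance_williams_merge(d, i, j)
        active.remove(j)
    return merges
```

Three points on a line at 0, 1 and 3 illustrate the behaviour: patterns 0 and 1 merge first at height 1, and the single-link distance from {0, 1} to pattern 2 is min(3, 2) = 2.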

The results of cluster analysis are usually displayed as a dendrogram, a typical example of which is shown in Fig. 3.8.6(a), where a set of 13 powder patterns is analysed using the centroid method. Each pattern begins at the bottom of the plot as a separate cluster, and these amalgamate in a stepwise fashion, linked by horizontal tie bars. The height of a tie bar represents a similarity measure as measured by the relevant distance. As an indication of the differences that can be expected between the various algorithms used for dendrogram generation, Fig. 3.8.6(e) shows the same data analysed using the single-link method: the resulting clustering is slightly different, the similarity measures are larger and, in consequence, the tie bars are higher on the graph. [For further examples see Barr et al. (2004b,c) and Barr, Dong, Gilmore & Faber (2004).]
An estimate of the number of clusters present in the data set is needed. In terms of the dendrogram, this is equivalent to `cutting the dendrogram', i.e. placing a horizontal line across it such that all the clusters defined by tie bars above this line remain independent and unlinked. The estimation of the number of clusters is an unsolved problem in classification methods. It is easy to see why: the problem depends on how similar the patterns need to be in order to be classed as the same, and how much variability is allowed within a cluster. We use two approaches: (a) eigenvalue analysis of the matrices ρ and A, and (b) methods based on cluster analysis.
Eigenvalue analysis is a well established technique: the eigenvalues of the relevant matrix are sorted in descending order, and the number of eigenvalues needed to account for a fixed percentage (typically 95%) of the data variability is selected. This is shown graphically via a scree plot, an example of which is shown in Fig. 3.8.2.
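The counting rule can be sketched as follows; the function name and the use of a symmetric eigensolver are illustrative choices, not the published implementation.

```python
import numpy as np

def count_components(matrix, threshold=0.95):
    """Number of eigenvalues of a symmetric matrix needed to account for a
    fixed fraction (typically 95%) of the total data variability: sort the
    eigenvalues in descending order and find where their cumulative sum
    first reaches the threshold."""
    evals = np.sort(np.linalg.eigvalsh(matrix))[::-1]
    evals = np.clip(evals, 0.0, None)        # guard against tiny negative values
    frac = np.cumsum(evals) / evals.sum()
    return int(np.searchsorted(frac, threshold) + 1)
```

For a matrix with eigenvalues 4, 3, 2 and 1, for example, the cumulative fractions are 0.4, 0.7, 0.9 and 1.0, so the 95% rule selects four components.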
We carry out eigenvalue analysis on the following:
The most detailed study of cluster counting is that of Milligan & Cooper (1985), which is summarized by Gordon (1999). From this we have selected three tests that seem to operate effectively with powder data:
The results of tests (4)–(6) depend on the clustering method being used. To reduce the bias towards a given dendrogram method, these tests are carried out with four different clustering methods: the single-link, the group-average, the sum-of-squares and the complete-link methods. Thus there are 12 semi-independent estimates of the number of clusters from clustering methods, and three from eigenanalysis, making 15 in all.
A composite algorithm is used to combine these estimates. The maximum and minimum values of the number of clusters (c_{max} and c_{min}, respectively) given by the eigenanalysis results [(1)–(3) above] define the primary search range; tests (4)–(6) are then used within this range to find local maxima or minima as appropriate. Any outliers among the resulting indicators are removed, a weighted mean of the remainder is taken, and this is used as the final estimate of the number of clusters. Confidence limits for c are defined by the maximum and minimum cluster-number estimates remaining after outlier removal.
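The pooling step of this composite algorithm can be sketched as follows. This is only an illustration of the outlier-removal and weighted-mean idea: the text does not specify the outlier criterion or the weights, so the two-standard-deviation cut and the uniform default weights below are assumptions, and the search over the c_{min}–c_{max} range is taken as already done.

```python
import numpy as np

def combine_cluster_estimates(eigen_estimates, test_estimates, weights=None):
    """Combine eigenanalysis and clustering-test estimates of the cluster
    count: pool them, drop outliers (assumed here to be values more than two
    standard deviations from the mean), and return the weighted mean of the
    survivors together with lower and upper confidence limits."""
    est = np.array(list(eigen_estimates) + list(test_estimates), dtype=float)
    w = np.ones_like(est) if weights is None else np.asarray(weights, float)
    mean, sd = est.mean(), est.std()
    keep = np.abs(est - mean) <= 2.0 * sd if sd > 0 else np.ones_like(est, bool)
    est, w = est[keep], w[keep]
    c = int(round(np.average(est, weights=w)))
    return c, int(est.min()), int(est.max())   # estimate, c_min, c_max
```

With three eigenanalysis estimates and twelve test estimates, a single stray value (say 7 among 4s, 5s and 6s) is discarded and the final estimate with its confidence limits is returned.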
A typical set of results for the PXRD data from 23 powder patterns for doxazosin (an antihypertensive drug) in which five polymorphs are present, as well as two mixtures of polymorphs, is shown in Fig. 3.8.2(a) and (b) (see also Table 3.8.2). The scree plot arising from the eigenanalysis of the correlation matrix indicates that 95% of the variability can be accounted for by five components, and this is shown in Fig. 3.8.2(a). Eigenvalues from the other matrices indicate that four clusters are appropriate. A search for local optima in the CH, γ and C tests is then initiated in the range of 2–8 possible clusters. Four different clustering methods are tried, and the results indicate a range of 4–7 clusters. There are no outliers, and a final weighted mean value of 5 is obtained. As Fig. 3.8.2(b) shows, the optimum points for the C and γ tests are often quite weakly defined (Barr et al., 2004b).

This is, in its essentials, the particle-in-a-box problem. Each powder pattern is represented as a single sphere, and these spheres are placed in a cubic box of unit dimensions such that the positions of the spheres reproduce as closely as possible the distance matrix, d, generated from correlating the patterns. The spheres have an arbitrary orientation in the box.
To do this, the (n × n) distance matrix d is used in conjunction with metric multidimensional scaling (MMDS) to define a set of p underlying dimensions that yield a Euclidean distance matrix, d^{calc}, whose elements are equivalent to or closely approximate the elements of d.
The method works as follows (Cox & Cox, 2000; Gower, 1966; Gower & Dijksterhuis, 2004).
The matrix d has zero diagonal elements, and so is not positive semidefinite. A positive semidefinite matrix, A(n × n), can be constructed, however, by computing

A = −(1/2)(I_{n} − n^{−1}i_{n}i_{n}^{T})D(I_{n} − n^{−1}i_{n}i_{n}^{T}),

where I_{n} is an (n × n) identity matrix, i_{n} is an (n × 1) vector of unities and D is defined in equation (3.8.8). The matrix (I_{n} − n^{−1}i_{n}i_{n}^{T}) is called a centring matrix, since A has been derived from D by centring the rows and columns.
The eigenvectors v_{1}, v_{2}, …, v_{n} of A and the corresponding eigenvalues λ_{1}, λ_{2}, …, λ_{n} are then obtained. A total of p eigenvalues of A are positive and the remaining (n − p) are zero. For the p nonzero eigenvalues a set of coordinates can be defined via the matrix X(n × p),

X = VΛ^{1/2},

where V = (v_{1}, v_{2}, …, v_{p}) and Λ is the (p × p) diagonal matrix of the nonzero eigenvalues.
If p = 3, then we are working in three dimensions, and the X(n × 3) matrix can be used to plot each pattern as a single point in a 3D graph. This assumes that the dimensionality of the problem can be reduced in this way while still retaining the essential features of the data. As a check, a distance matrix d^{calc} can be calculated from X(n × 3) and correlated with the observed matrix d using both the Pearson and Spearman correlation coefficients. In general the MMDS method works well, and correlation coefficients greater than 0.95 are common. For large data sets this can reduce to ∼0.7, which is still sufficiently high to suggest the viability of the procedure. Parallel coordinates based on the MMDS analysis can also be used, and this is discussed in Sections 3.8.4.2.1 and 3.8.4.2.2.
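The centring, eigendecomposition and correlation check described above can be sketched as follows. This is a minimal classical-MDS illustration under the stated assumptions: the function names are invented, D is taken to hold the squared distances, and only the Pearson coefficient is computed (the text also uses the Spearman coefficient).

```python
import numpy as np

def mmds(d, p=3):
    """Metric multidimensional scaling: embed an (n x n) distance matrix d
    into p dimensions.  A = -1/2 H D H, where D holds the squared distances
    and H = I - (1/n) i i^T is the centring matrix; the coordinates are
    X = V Lambda^(1/2) built from the p largest eigenpairs of A."""
    n = d.shape[0]
    D = d ** 2
    H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    A = -0.5 * H @ D @ H
    evals, evecs = np.linalg.eigh(A)             # ascending eigenvalues
    idx = np.argsort(evals)[::-1][:p]            # keep the p largest
    lam = np.clip(evals[idx], 0.0, None)         # clip numerical negatives
    return evecs[:, idx] * np.sqrt(lam)

def embedding_quality(d, X):
    """Pearson correlation between the observed distances d and the
    distances d_calc reproduced from the embedding X."""
    dcalc = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(d.shape[0], k=1)
    return np.corrcoef(d[iu], dcalc[iu])[0, 1]
```

When the input distances are genuinely Euclidean in three dimensions, the embedding reproduces them essentially exactly and the correlation coefficient is close to 1, consistent with the values above 0.95 quoted in the text.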
There are occasions on which the underlying dimensionality of the data is 1 or 2, and in these circumstances the data project naturally onto a line or a plane without any problems.
An example of an MMDS plot is shown in Fig. 3.8.6(b), which is linked to the dendrogram in Fig. 3.8.6(a).
It is also possible to carry out principalcomponent analysis (PCA) on the correlation matrix. The eigenvalues of the correlation matrix can be used to estimate the number of clusters present via a scree plot, as shown in Fig. 3.8.2(a), and the eigenvectors can be used to generate a score plot, which is an X(n × 3) matrix and can be used as a visualization tool in exactly the same way as the MMDS method to indicate which patterns belong to which class. Score plots traditionally use two components with the data thus projected on to a plane; we use 3D plots in which three components are represented. In general, we find that the MMDS representation of the data is nearly always superior to the PCA analysis for powder and spectroscopic data.
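A score plot of the kind described can be sketched from the eigendecomposition of the correlation matrix; the function name and the scaling of the eigenvectors by the square roots of the eigenvalues are illustrative choices, not the published implementation.

```python
import numpy as np

def pca_scores(R, p=3):
    """Score matrix X(n x p) from an (n x n) correlation matrix R: project
    onto the eigenvectors of the p largest eigenvalues, scaled by the square
    roots of those eigenvalues.  With p = 3 each pattern becomes one point
    in a 3D score plot."""
    evals, evecs = np.linalg.eigh(R)
    idx = np.argsort(evals)[::-1][:p]            # p largest eigenvalues
    return evecs[:, idx] * np.sqrt(np.clip(evals[idx], 0.0, None))
```

Each row of the returned matrix is one pattern's coordinates, used for visualization in the same way as the MMDS coordinates.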
It is possible to use the MMDS plot (or, alternatively, PCA score plots) to assist in the choice of clustering method, since the two methods operate semiindependently. The philosophy here is to choose a technique that results in the tightest, most isolated clusters as follows:
Similar techniques can be used to identify the most representative sample in a cluster. This is defined as the sample that has the minimum mean distance from every other sample in the cluster, i.e. for cluster J containing m patterns, the most representative sample, i, is defined as that which gives

min_{i} [1/(m − 1)] Σ_{j∈J, j≠i} d_{ij}.

The most representative sample is useful in visualization and can, with care, be used to create a database of known phases (Barr et al., 2004b).
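This selection rule amounts to a few lines of code; the sketch below assumes a full distance matrix and a list of member indices for the cluster (the function name is invented).

```python
import numpy as np

def most_representative(d, members):
    """Index of the pattern in a cluster with the minimum mean distance
    to every other member of that cluster."""
    members = list(members)
    sub = d[np.ix_(members, members)]                 # within-cluster distances
    mean_dist = sub.sum(axis=1) / (len(members) - 1)  # exclude self (d_ii = 0)
    return members[int(np.argmin(mean_dist))]
```

For three patterns at positions 0, 1 and 2 on a line, the middle one has the smallest mean distance and is returned as the representative.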
Amorphous samples are an inevitable consequence of high-throughput experiments, and need to be handled correctly if they are not to lead to erroneous indications of clustering. To identify amorphous samples, the total background for each pattern is estimated and its intensity integrated; the integrated intensity of the non-background signal is then calculated. If the ratio of the non-background to the background intensity falls below a preset limit (usually 5%, but this may vary with the type of samples under study) the sample is treated as amorphous. The distance matrix is then modified so that each amorphous sample is given a distance and dissimilarity of 1.0 from every other sample, and a correlation coefficient of zero. This automatically excludes the samples from the clustering until the last amalgamation steps, and also limits their effect on the estimation of the number of clusters (Barr et al., 2004b). Of course, the question of amorphous samples is not a binary (yes/no) one: there are usually varying degrees of amorphous content, which further complicates matters.
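The two steps, flagging amorphous samples and pushing them to the edge of the clustering, can be sketched as follows. The precise ratio used for the test is not fully specified in the text, so the signal-to-background ratio below is an assumption, as are the function names.

```python
import numpy as np

def flag_amorphous(background, signal, limit=0.05):
    """Treat a pattern as amorphous when its integrated non-background
    (signal) intensity is below a preset fraction of the integrated
    background intensity.  The 5% default follows the text; the exact
    definition of the ratio is an assumption here."""
    return signal.sum() / background.sum() < limit

def exclude_amorphous(d, amorphous_idx):
    """Set each amorphous sample at distance/dissimilarity 1.0 from every
    other sample, so it only joins the clustering at the last amalgamation
    steps."""
    d = d.copy()
    for i in amorphous_idx:
        d[i, :] = d[:, i] = 1.0
        d[i, i] = 0.0                  # a sample is at zero distance from itself
    return d
```

A corresponding modification would set the affected correlation coefficients to zero, as the text describes.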
References
Barr, G., Dong, W. & Gilmore, C. J. (2004a). High-throughput powder diffraction. II. Applications of clustering methods and multivariate data analysis. J. Appl. Cryst. 37, 243–252.
Barr, G., Dong, W. & Gilmore, C. J. (2004b). High-throughput powder diffraction. IV. Cluster validation using silhouettes and fuzzy clustering. J. Appl. Cryst. 37, 874–882.
Barr, G., Dong, W. & Gilmore, C. J. (2009). PolySNAP3: a computer program for analysing and visualizing high-throughput data from diffraction and spectroscopic sources. J. Appl. Cryst. 42, 965–974.
Barr, G., Dong, W., Gilmore, C. & Faber, J. (2004). High-throughput powder diffraction. III. The application of full-profile pattern matching and multivariate statistical analysis to round-robin-type data sets. J. Appl. Cryst. 37, 635–642.
Bruker (2018). DIFFRAC.EVA: software to evaluate X-ray diffraction data. Version 4.3. https://www.bruker.com/eva.
Caliński, T. & Harabasz, J. (1974). A dendrite method for cluster analysis. Commun. Stat. 3, 1–27.
Cox, T. F. & Cox, M. A. A. (2000). Multidimensional Scaling, 2nd ed. Boca Raton: Chapman & Hall/CRC.
Everitt, B. S., Landau, S. & Leese, M. (2001). Cluster Analysis, 4th ed. London: Arnold.
Goodman, L. A. & Kruskal, W. H. (1954). Measures of association for cross-classifications. J. Am. Stat. Assoc. 49, 732–764.
Gordon, A. D. (1981). Classification, 1st ed., pp. 46–49. London: Chapman and Hall.
Gordon, A. D. (1999). Classification, 2nd ed. Boca Raton: Chapman and Hall/CRC.
Gower, J. C. (1966). Some distance properties of latent root and vector methods used in multivariate analysis. Biometrika, 53, 325–328.
Gower, J. C. & Dijksterhuis, G. B. (2004). Procrustes Problems. Oxford University Press.
Lance, G. N. & Williams, W. T. (1967). A general theory of classificatory sorting strategies. 1. Hierarchical systems. Comput. J. 9, 373–380.
Milligan, G. W. & Cooper, M. C. (1985). An examination of procedures for determining the number of clusters in a data set. Psychometrika, 50, 159–179.