## Abstract

The purpose of this paper is to provide an overview of how a self-consistent set of ‘best values’ of the fundamental physical constants for use worldwide by all of science and technology is obtained from all of the relevant data available at a given point in time. The basis of the discussion is the 2002 Committee on Data for Science and Technology (CODATA) least-squares adjustment of the values of the constants, the most recent such study available, which was carried out under the auspices of the CODATA Task Group on Fundamental Constants. A detailed description of the 2002 CODATA adjustment, which took into account all relevant data available by 31 December 2002, plus selected data that became available by the Fall of 2003, may be found in the January 2005 issue of the *Reviews of Modern Physics*. Although the latter publication includes the full set of CODATA recommended values of the fundamental constants resulting from the 2002 adjustment, the set is also available electronically at http://physics.nist.gov/constants.

## 1. Introduction

Turn to the back of a physics or chemistry textbook, or browse a standard physics or chemistry reference work such as the well-known *CRC Handbook of Chemistry and Physics*, and you will inevitably find a list of recommended values of the fundamental physical constants—familiar constants such as the speed of light in vacuum *c*, the Newtonian constant of gravitation *G*, the Planck constant *h*, and the elementary charge *e*, and, depending on the book or reference work, perhaps less familiar constants such as the Josephson and von Klitzing constants *K*_{J} and *R*_{K}, respectively, the electron magnetic moment anomaly *a*_{e}, the proton *g*-factor *g*_{p}, and the first radiation constant for spectral radiance *c*_{1L}. It is the goal of this paper to describe in a concise way where these values come from. The discussion is based on the 2002 Committee on Data for Science and Technology (CODATA) least-squares adjustment of the values of the constants (Mohr & Taylor, 2005), the most recent such study available, which was carried out by the author in collaboration with Peter J. Mohr, also of NIST, under the auspices of the CODATA Task Group on Fundamental Constants.

For reasons of simplicity and brevity, however, the present paper includes only six references and a limited amount of detail; the 2005 Mohr & Taylor article cited above, which is 107 pages in length and contains hundreds of references, should be consulted for information not specifically included here, especially the references for any experimentally measured or theoretically calculated data given in the text. Although the cited article gives all of the 2002 CODATA recommended values of the constants, they are also available electronically at http://physics.nist.gov/constants. The 145-page article by Mohr & Taylor (2000) describing the 1998 CODATA adjustment may also be consulted for additional information. For convenience, in the remainder of this paper, these two articles are referred to as ‘CODATA-02’ and ‘CODATA-98’, respectively.

CODATA itself was established in 1966 as an interdisciplinary committee of the International Council of Science (ICSU, formerly the International Council of Scientific Unions), with the goal of improving the quality, reliability, processing, management, and accessibility of data of importance to science and technology. Three years later, in 1969, CODATA established its Task Group on Fundamental Constants (hereafter simply referred to as the ‘CODATA Task Group’) with the express purpose of periodically providing the scientific and technological communities with a self-consistent set of internationally recommended values of the basic constants and conversion factors of physics and chemistry that reflects all available information.

As will be shown, and perhaps it will come as somewhat of a surprise, the vast majority of the values of the fundamental constants are not determined by direct measurement, but are calculated from a subset of constants, which themselves are derived from measurements of a rather diverse group of quantities—such as frequencies and frequency ratios—with the considerable aid of theory. The relevant theoretical expressions, some of which are discussed below, are to a very large extent based on the most precise theory of modern physics—quantum electrodynamics or QED. Thus comparisons of values of fundamental constants obtained from experiments performed in different laboratories, and from experiments using different methods—sometimes from different fields of physics—can provide critical tests not only of our understanding of cutting-edge metrology, but also of our understanding of the fundamental theories of physics themselves. Such studies provide a second motivation for carrying out and publishing from time to time a broad and in-depth review of all of the information available that bears on the determination of a self-consistent set of recommended values of the fundamental constants, the first being that of providing an internationally accepted set of values for worldwide use throughout all of science and technology. Underlying both is the further benefit to the user communities of having in one place a summary of a vast amount of rather diverse information–information that often requires an extensive search of the literature to find, as well as direct communication with the researchers who performed the measurements or calculations under review in order to clarify important points.

The individual generally credited with pioneering the careful investigation of the values of the fundamental constants and the compilation of a self-consistent set of best, or recommended, values from all of the relevant data available at a given ‘epoch’ is R. T. Birge, who published his first study in 1929, more than 75 years ago. Over the subsequent decades, others have also carried out such work, most notably, until the mid-1960s, Cohen and DuMond and Bearden and Thomsen. Building on their measurement of the Josephson constant *K*_{J} using the ac Josephson effect (JE) in superconductors, predicted in 1962 by B. D. Josephson, Taylor *et al*. (1969) published a major review at the end of the 1960s. Such efforts have come to be commonly called ‘least-squares adjustments of the values of the constants’ (or an appropriate contraction of these words), because they employ the method of least squares. Since the review of Taylor *et al*. (1969), such least-squares adjustments have been carried out under the auspices of the CODATA Task Group—the first by Cohen & Taylor (1973), called the 1973 CODATA adjustment, the second by Cohen & Taylor (1987), called the 1986 CODATA adjustment, and the third and fourth by Mohr & Taylor (2000, 2005), called the 1998 and 2002 CODATA adjustments.

It may be noticed that, for the four CODATA adjustments, the 4 year interval between the third and fourth is much shorter than the approximately 13 year intervals between the first and second and between the second and third. Because new experimental and theoretical results that influence our knowledge of the values of the constants appear nearly continuously, because the worldwide web allows new sets of recommended values of the constants to be rapidly and widely distributed, and because the web has engendered new modes of work and thought—its users expect to find the latest information available electronically only a mouse-click away—the CODATA Task Group decided at the time of the 1998 adjustment to take advantage of the high degree of computerization that had been incorporated in that effort and to provide a new set of recommended values every 4 years. The 2002 set is the first from the new schedule.

## 2. The 2002 CODATA adjustment

### (a) Some preliminaries

We briefly recall a few key points related to (i) the evaluation of uncertainty, (ii) the expression of numerical values, (iii) the definitions of some important quantities, (iv) the Josephson and quantum Hall effects, and (v) the conventional electrical units based on these effects.

The *Guide to the expression of uncertainty in measurement* (ISO 1993) is followed in evaluating the uncertainty of measured or calculated values of quantities. The *Guide* is a *de facto* international standard published by the International Organization for Standardization (ISO) in collaboration with a number of other international organizations; it has been adopted by researchers worldwide, and its basic approach to uncertainty evaluation has been used in the field of precision measurement and fundamental constants for many years. In brief, the *standard uncertainty* *u*(*y*) (or simply *u*) of a result *y* represents the estimated standard deviation (square root of the estimated variance) of *y*. If *y* is obtained from *N* other quantities *x*_{i} on which it depends via some function *f*, *y*=*f*(*x*_{1}, *x*_{2}, …, *x*_{N}), then *u*(*y*) is obtained by combining the individual standard uncertainty components *u*(*x*_{i}), and covariances *u*(*x*_{i}, *x*_{j}) where appropriate, using the standard law of propagation of uncertainty (commonly called the ‘root-sum-of-squares’ method—see Appendix E of CODATA-98 (Mohr & Taylor 2000)). This procedure is followed whatever the source of the components *u*(*x*_{i}) may happen to be; that is, all components, whether arising from random effects or from systematic effects, are treated in exactly the same way. The *relative standard uncertainty* of *y*, *u*_{r}(*y*) (or simply *u*_{r}), is defined by *u*_{r}(*y*)=*u*(*y*)/|*y*|, *y*≠0 (and similarly for *u*_{r}(*x*_{i})).

A result is usually written in the form *y*=1234.567 89(12) U [9.7×10^{−8}], where U represents a unit symbol and the number in parentheses is the numerical value of *u*(*y*) referred to the last digits of the quoted value. The number in square brackets is *u*_{r}(*y*). (Minor variations of this form are also used, for example, 1234.5(6.7), where 6.7 is the standard uncertainty of the figures 4.5; and 1234.567 8901(23), to avoid the confusion that would arise from having a single figure followed by a two-figure uncertainty in parentheses.)

In the international system of units (SI), which is the unit system employed in all CODATA adjustments, the speed of light in vacuum *c*, by way of the definition of the metre, is given by *c*=299 792 458 m s^{−1} exactly; the magnetic constant *μ*_{0}, by way of the definition of the ampere, is given by *μ*_{0}=4*π*×10^{−7} N A^{−2} exactly; and the molar mass of the carbon-12 atom *M*(^{12}C), by way of the definition of the mole, is given by *M*(^{12}C)=12×10^{−3} kg mol^{−1} exactly. The unified atomic mass unit u (also called the dalton, Da) is defined according to 1 u=*m*_{u}=*m*(^{12}C)/12, where *m*_{u} is the atomic mass constant and, in general, *m*(*X*) is the mass of entity *X*. (Note, however, that the masses of some fundamental particles such as the electron e, proton p, neutron n, and muon μ are usually written in the form *m*_{X}.) The relative atomic mass of *X*, *A*_{r}(*X*), a dimensionless quantity, is defined by *A*_{r}(*X*)=*m*(*X*)/*m*_{u}, which implies that *A*_{r}(^{12}C)=12 exactly. Use is also sometimes made of the molar mass constant *M*_{u}=10^{−3} kg mol^{−1} exactly, so that *M*(^{12}C)=12*M*_{u} and, in general, *M*(*X*)=*A*_{r}(*X*)*M*_{u}. The Avogadro constant *N*_{A} is defined by *N*_{A}=*M*_{u}/*m*_{u}, or equivalently, *N*_{A}=*M*(*X*)/*m*(*X*). The unit u is thus related to *N*_{A} by 1 u=*m*_{u}=*m*(^{12}C)/12=*M*_{u}/*N*_{A}.

When a Josephson device is cooled below the transition temperatures of the superconductors of which it is composed and is irradiated with microwave radiation of frequency *f*, its current-versus-voltage curve displays current steps at precisely quantized Josephson voltages *U*_{J}. The voltage of the *n*th step, *n* an integer, is related to *f* by *U*_{J}(*n*)=*nf*/*K*_{J}, where, as mentioned above (§1), *K*_{J} is the Josephson constant. The importance of the JE to the fundamental constants lies in the fact that a rich body of experimental as well as theoretical evidence supports the fundamental relation *K*_{J}=2*e*/*h*.

Like the JE, the quantum Hall effect (QHE) is a solid-state physics phenomenon that manifests itself at cryogenic temperatures. For a fixed current *I* through a QHE device of usual Hall-bar geometry—for example, a GaAs–Al_{x}Ga_{1−x}As heterostructure—there are regions in the curve of Hall voltage *U*_{H} versus applied magnetic flux density *B* (of order 10–15 T) where *U*_{H} remains constant as *B* is varied. These regions of constant *U*_{H} are called quantized Hall resistance plateaus, because in the limit of zero dissipation in the direction of current flow, the Hall resistance of the *i*th plateau *R*_{H}(*i*)=*U*_{H}(*i*)/*I* is quantized: *R*_{H}(*i*)=*R*_{K}/*i*, where *i* is an integer and, as indicated above (§1), *R*_{K} is the von Klitzing constant, named after Klaus von Klitzing, who discovered the QHE in 1980. (We are concerned here only with the integer QHE.) In analogy with the JE, the importance of the QHE to the fundamental constants lies in the fact that a rich body of experimental as well as theoretical evidence supports the fundamental relation *R*_{K}=*h*/*e*^{2}=*μ*_{0}*c*/2*α*, where *α* is the familiar fine-structure constant.

On 1 January 1990, the International Committee for Weights and Measures (CIPM), in order to establish worldwide uniformity in the measurement of voltage, resistance, and other electrical quantities, adopted new, practical representations of the volt V and ohm Ω based on the JE and QHE, respectively, together with conventional (i.e. adopted) values of *K*_{J} and *R*_{K}. Initially requested by the General Conference on Weights and Measures, these assigned, exact values are *K*_{J−90}=483 597.9 GHz V^{−1} and *R*_{K−90}=25 812.807 Ω. In the 2002 and 1998 CODATA adjustments, the CIPM's adoption of these conventional values is interpreted as establishing conventional, practical units of voltage and resistance *V*_{90} and *Ω*_{90}, defined according to *V*_{90}=(*K*_{J−90}/*K*_{J}) V and *Ω*_{90}=(*R*_{K}/*R*_{K−90}) Ω, and hence as establishing conventional, practical units for other electrical quantities such as current and power: *A*_{90}=*V*_{90}/*Ω*_{90} and *W*_{90}=*A*_{90}*V*_{90}. Consequently, as appropriate, in the 2002 and 1998 adjustments, the values of measured quantities dependent upon electrical units are expressed as the numerical value that would be obtained from a measurement carried out in terms of conventional electrical units, times the SI unit for the quantity. Such quantities are indicated by means of a subscript ‘90’. For example, one has *F*_{90}={*F*}_{90} A s mol^{−1}, where *F* is the normal Faraday constant as defined in the system of quantities on which the SI is based, with SI unit A s mol^{−1}, and {*F*}_{90} is the numerical value of *F* obtained assuming it is measured in the unit *A*_{90} s mol^{−1}.
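The ‘root-sum-of-squares’ propagation described above can be made concrete with a small numerical sketch. Everything below is invented for illustration (the function *f*=*x*_{1}*x*_{2} and the input values are not data from the adjustment); the fragment combines two standard uncertainties and one covariance exactly as the *Guide* prescribes.

```python
import math

# Invented illustration of the law of propagation of uncertainty:
# y = f(x1, x2) = x1 * x2, with standard uncertainties u1, u2 and
# covariance u12 = r * u1 * u2 (r is the correlation coefficient).
x1, u1 = 10.0, 0.02
x2, u2 = 3.0, 0.01
r = 0.5
u12 = r * u1 * u2

# Sensitivity coefficients (partial derivatives of f with respect
# to x1 and x2, evaluated at the input values)
c1 = x2   # df/dx1
c2 = x1   # df/dx2

y = x1 * x2
# u(y)^2 = c1^2 u1^2 + c2^2 u2^2 + 2 c1 c2 u(x1, x2)
u_y = math.sqrt(c1**2 * u1**2 + c2**2 * u2**2 + 2.0 * c1 * c2 * u12)
u_r = u_y / abs(y)   # relative standard uncertainty u_r(y)
```

Setting r = 0 recovers the familiar uncorrelated root-sum-of-squares; a negative r reduces *u*(*y*) below what the two components alone would suggest.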

### (b) Why least squares?

In general, a value of a particular constant can be obtained from different experiments, and relationships of varying degrees of complexity can exist among groups of fundamental constants. Some of the simpler examples can be seen above (§2*a*), including *K*_{J}=2*e*/*h* and *R*_{K}=*h*/*e*^{2}=*μ*_{0}*c*/2*α*. Some others are

*α*^{2}=2*R*_{∞}*h*/*m*_{e}*c*, (2.1a)

*a*_{e}=*C*_{2}(*α*/*π*)+*C*_{4}(*α*/*π*)^{2}+*C*_{6}(*α*/*π*)^{3}+⋯+*δa*_{e}, (2.1b)

where *R*_{∞}=*α*^{2}*m*_{e}*c*/2*h* is the Rydberg constant. Equation (2.1*b*) is the theoretical expression for the magnetic moment anomaly *a*_{e}, which is related to the *g*-factor of the electron *g*_{e} and the magnetic moment of the electron *μ*_{e} in units of the Bohr magneton *μ*_{B}=*eh*/4*π**m*_{e} via the equations *g*_{e}=−2(1+*a*_{e}) and *a*_{e}=(|*μ*_{e}|/*μ*_{B})−1. In equation (2.1*b*), the *C*_{i} are numerical coefficients calculated from QED and *δa*_{e} accounts for comparatively small effects that are not purely QED. Relationships such as those given in equations (2.1*a*) and (2.1*b*) also imply that a particular constant may be obtained either by direct measurement or indirectly by appropriately combining other directly measured quantities. If the direct and indirect values have comparable uncertainties, both must be considered in order to arrive at a ‘best value’ for that constant. However, each of the various routes that can be followed to a particular constant, both direct and indirect, provides a somewhat different numerical value. The best way to handle this situation is by the well-known method of least squares.
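As a numerical illustration of such relationships, the relation *R*_{∞}=*α*^{2}*m*_{e}*c*/2*h* can be checked directly with the 2002 recommended values, rounded here to the digits shown (a consistency sketch, not part of the adjustment machinery):

```python
# Consistency check of R_inf = alpha^2 * m_e * c / (2h) using 2002
# CODATA recommended values rounded to the digits shown.
alpha = 7.297352568e-3   # fine-structure constant
m_e   = 9.1093826e-31    # electron mass, kg
c     = 299792458.0      # speed of light in vacuum, m/s (exact)
h     = 6.6260693e-34    # Planck constant, J s

R_inf = alpha**2 * m_e * c / (2.0 * h)   # Rydberg constant, m^-1
# Reproduces the 2002 recommended value, about 10 973 731.57 m^-1,
# to within the rounding of the inputs.
```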

As applied to the fundamental constants, the least-squares technique furnishes a well-defined procedure for calculating from all of the available data best ‘compromise’ values of the subset of constants taken as the variables or ‘unknowns’ of the adjustment. It automatically takes into account all possible routes to a particular variable, and yields a single value for each by weighting the different routes according to their uncertainties. The weights themselves are obtained from the *a priori* standard uncertainties *u* assigned to the individual measurements or calculated values that constitute the data being considered. Simply put, it enables all of the information relevant to the constants to be taken into account in a consistent way, including correlations among the data.
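In the simplest case, namely several independent measurements of a single quantity, the least-squares ‘compromise’ value reduces to the familiar weighted mean with weights 1/*u*^{2}. A minimal sketch (the measurement values and uncertainties are invented):

```python
import math

# Invented example: three independent measurements of one quantity,
# each with an a priori standard uncertainty u.
values = [100.3, 100.1, 100.7]
u = [0.2, 0.1, 0.4]

# Least-squares weights w = 1/u^2: smaller uncertainty, larger weight.
w = [1.0 / ui**2 for ui in u]

# Weighted mean and its standard uncertainty.
y = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
u_y = 1.0 / math.sqrt(sum(w))
```

The most precise measurement (u = 0.1) dominates, carrying 100/131.25 ≈ 76% of the total weight.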

### (c) Basic approach

The basic steps necessary to carry out an adjustment of the values of the constants via least squares are conceptually straightforward and may be summarized as follows:

1. Identify and critically review all possible *input data*, both experimental and theoretical, with special attention to uncertainty evaluation. The latter is of crucial importance, because the uncertainty assigned to a datum determines its level of agreement with other values of the same quantity as well as its weight in a least-squares calculation. Ensuring that the uncertainties of the input data have been evaluated on the same basis, namely, that the uncertainty assigned each datum is a standard uncertainty, makes possible the ready identification of discrepancies among different measurements and calculations. This identification, in turn, can stimulate new experimental and theoretical work aimed at resolving the inconsistencies.

2. Express the initial *N* input data in terms of a subset of *M* quantities called *adjusted constants*, which are the variables or ‘unknowns’ of the least-squares calculation, by means of a set of relations called *observational equations*. The latter are the theoretical relations between the measured, as well as calculated, input data and the adjusted constants.

3. Investigate the compatibility of the input data and adjust *a priori* assigned uncertainties and/or eliminate data as deemed appropriate.

4. Investigate the extent to which each datum contributes to the determination of the values of the adjusted constants and omit inconsequential data as deemed appropriate.

5. Obtain ‘best values’ in the least-squares sense of the *M* adjusted constants by solving the observational equations using the input data finally selected.

6. Calculate all other constants of interest from the ‘best values’ of the *M* adjusted constants, taking into account their uncertainties and correlations, that is, their covariances. (It should be noted that the covariance of a quantity with itself is its variance, and its standard uncertainty is the square root of its variance.)
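The steps above can be sketched in miniature. The fragment below is an invented two-variable adjustment (its design matrix, data, and uncertainties are illustrative, not taken from the 2002 adjustment): each row of the matrix A encodes one linear observational equation relating an input datum to the adjusted constants, the weight matrix is built from the a priori uncertainties, and solving the weighted normal equations yields both the best values and their covariance matrix.

```python
import numpy as np

# Invented miniature adjustment with M = 2 adjusted constants (z1, z2)
# and N = 4 input data q, each with standard uncertainty u.
# Row i of A encodes the observational equation q_i = A[i] . z.
A = np.array([[1.0,  0.0],   # q1 measures z1 directly
              [0.0,  1.0],   # q2 measures z2 directly
              [1.0,  1.0],   # q3 measures z1 + z2
              [1.0, -1.0]])  # q4 measures z1 - z2
q = np.array([10.02, 4.99, 15.03, 5.02])
u = np.array([0.02, 0.02, 0.03, 0.03])

W = np.diag(1.0 / u**2)               # weights (no correlations here)
G = A.T @ W @ A                       # normal-equations matrix
z = np.linalg.solve(G, A.T @ W @ q)   # best values of z1, z2
cov = np.linalg.inv(G)                # covariance matrix of z1, z2
```

Correlated input data would simply replace the diagonal W by the inverse of the full covariance matrix of the data, which is how an adjustment accommodates correlation coefficients among its inputs.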

### (d) Guiding principles

A number of principles guide these steps:

- For the 2002 CODATA adjustment, the input data had to be available by 31 December 2002, except that the deadline was extended to the Fall of 2003 in a few critical cases.

- Normative physics is assumed to be valid unless otherwise indicated by the data, for example, special relativity, QED, and the standard model of particle physics, including combined charge conjugation, parity inversion, and time-reversal (*CPT*) invariance.

- The validity of the fundamental Josephson and QHE relations, *K*_{J}=2*e*/*h* and *R*_{K}=*h*/*e*^{2}, is assumed, although as discussed below (§2*i*), tests of this assumption were performed as part of the examination of the consistency of the data.

- As noted above (§2*a*), all uncertainties are evaluated by the method currently accepted internationally.

- Each input datum considered for initial inclusion in the adjustment has to have a sufficiently small standard uncertainty *u* so that its weight in the adjustment *w*=1/*u*^{2} is non-trivial compared to the weight of other directly measured values of the same quantity. This leads to the ‘factor of five’ rule: if the uncertainty *u*_{1} of measured value 1 of a given quantity is greater than about five times the uncertainty *u*_{2} of measured value 2 of the same quantity, then result 1 is not considered for inclusion. Such a rule reflects the fact that experiments with uncertainties that differ by a factor of five or more are qualitatively different.

- The latest result from a given laboratory for a particular quantity, which usually has the smallest uncertainty, is normally viewed as superseding an earlier result from the same laboratory for the same quantity, because the two results are not independent.

- All input data are treated on an equal footing—data are not classified as ‘auxiliary constants’ (data with uncertainties considered small enough to be neglected) or ‘stochastic input data’ (data with significantly larger uncertainties), as was usually the case prior to the 1998 CODATA adjustment. This allows all components of uncertainty and all correlations among the input data to be properly taken into account. It also eliminates the somewhat arbitrary division of the data into two categories and hence the possible shift in category of a particular datum from one adjustment to the next.
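The ‘factor of five’ rule follows directly from the quadratic dependence of the weights on the uncertainties. A tiny sketch (the function name and numbers are invented for illustration):

```python
def considered_for_inclusion(u_candidate, u_smallest, factor=5.0):
    """'Factor of five' rule: a measured value is considered for
    initial inclusion only if its standard uncertainty is within
    about a factor of five of the smallest uncertainty among the
    directly measured values of the same quantity."""
    return u_candidate <= factor * u_smallest

# With u1 = 5 * u2, the weight ratio w1/w2 = (u2/u1)**2 = 1/25,
# i.e. value 1 carries only 4 % of the weight of value 2.
w_ratio = (1.0 / 5.0) ** 2
```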

It is also important to recognize that the underlying philosophy of most adjustments of the values of the constants since the pioneering work of Birge over 75 years ago has been to provide the user community with values of the constants having the smallest possible uncertainties consistent with the information available at the time. The motivation for this approach is that it gives the most critical users of the values of the constants the best possible tools for their work based on the current state of knowledge. The downside is that the information available may include an error or oversight. Nevertheless, the CODATA Task Group rejects the idea of making uncertainties sufficiently large that any future change in the recommended value of a constant will likely be less than its uncertainty—simply put, the Task Group shuns the use of a ‘safety factor’ as employed in some data compilations.

### (e) Data

The *N* input data initially considered for inclusion in the 2002 CODATA adjustment consist of 112 separate items, with 170 associated and distinct correlation coefficients that vary from −0.375 to 0.991. These 112 items, which are treated in the least-squares calculations as a single group, are catalogued in the following paragraphs, where for convenience the data are divided into two main categories or groups: the principal data that contribute to the determination of the 2002 recommended value of the Rydberg constant *R*_{∞} and of the bound-state root-mean-square (r.m.s.) charge radii of the proton and the deuteron, *R*_{p} and *R*_{d} (50 items); and the principal data that contribute to the determination of the 2002 recommended values of the ‘other’ constants (62 items, not counting the eight measurements of the Newtonian constant of gravitation *G—*see §2*k*). The actual numerical values together with detailed descriptions of how they were obtained—either by experiment or by calculation from theory—may be found in CODATA-02 and CODATA-98 (Mohr & Taylor 2005, 2000). It should be emphasized, however, that these data represent an enormous amount of dedication and hard work by many researchers extending over long periods of time, in some cases approaching 40 years. Reading the original papers is the only way one can gain a true understanding of the difficulty of determining experimentally the value of a quantity such as the von Klitzing constant with *u*_{r}≈1×10^{−8} (one part in 100 million), or of calculating from theory a fractional contribution of ≈1×10^{−8} to an expression such as *a*_{e}(theor).

#### (i) Data: group I—Rydberg constant data

Twenty-three frequencies *ν* corresponding to transitions between two different energy levels of the hydrogen atom ^{1}H (nine transition frequencies) and of the deuterium atom ^{2}H (five transition frequencies), differences between different transition frequencies (six transition-frequency differences for ^{1}H and two for ^{2}H), and the difference in frequency between the 1S_{1/2}−2S_{1/2} transition in ^{1}H and the same transition in ^{2}H (one such difference). An example of the 23 frequencies is the 1S_{1/2}−2S_{1/2} transition frequency in ^{1}H obtained using frequency-comb technology, which in fact is the most accurate of the 23:

*ν*_{H}(1S_{1/2}−2S_{1/2})=2 466 061 413 187.103(46) kHz [1.9×10^{−14}]. (2.2)

One value of *R*_{p} and one value of *R*_{d}, both obtained from scattering experiments: *R*_{p}=0.895(18) fm [2.0×10^{−2}], *R*_{d}=2.130(10) fm [4.7×10^{−3}].

Twenty-five additive corrections *δ*_{i} that arise as follows: in order to properly take into account the uncertainty of the theoretical expression used to relate any one of the 23 frequencies to the adjusted constants (any one of which may in fact involve up to four distinct theoretical expressions for particular hydrogen or deuterium energy levels), an additive correction *δ*_{i} is introduced for each distinct expression. The *δ*s are then included among the variables or adjusted constants of the least-squares adjustment, and their estimated values are taken as input data. The best *a priori* estimate of each *δ*_{i} is taken to be zero, but with a standard uncertainty *u* equal to that of the corresponding theoretical expression. This approach enables the uncertainty of each distinct theoretical expression, and the correlations among the different expressions, to be taken into account in a rigorous way. This is of critical importance, because some of the expressions are highly correlated. For example, the correlation coefficient *r* of the expressions for the 1S_{1/2} and 2S_{1/2} energy levels is 0.979. Indeed, of the 147 correlation coefficients involving the Rydberg-constant data, 74 are due to correlations among the theoretical expressions. The actual values of *δ*_{i} for the 1S_{1/2} and 2S_{1/2} energy-level expressions are *δ*_{H}(1S_{1/2})=0.0(1.7) kHz [5.3×10^{−13}] and *δ*_{H}(2S_{1/2})=0.00(21) kHz [2.6×10^{−13}], where the relative standard uncertainties are relative to the frequency equivalent of the binding energy of the respective levels.
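The way such correlations enter the calculation can be illustrated with the two numbers quoted above. From the standard uncertainties of *δ*_{H}(1S_{1/2}) and *δ*_{H}(2S_{1/2}) and their correlation coefficient, the corresponding 2×2 covariance block is built as *u*(*x*_{i}, *x*_{j})=*r*·*u*(*x*_{i})·*u*(*x*_{j}); a sketch in Python (units of kHz):

```python
import numpy as np

# Covariance block for the additive corrections to the 1S_1/2 and
# 2S_1/2 energy-level expressions, from the uncertainties and
# correlation coefficient quoted in the text (units of kHz).
u_1S = 1.7    # standard uncertainty of delta_H(1S_1/2)
u_2S = 0.21   # standard uncertainty of delta_H(2S_1/2)
r = 0.979     # correlation coefficient of the two expressions

cov = np.array([[u_1S**2,         r * u_1S * u_2S],
                [r * u_1S * u_2S, u_2S**2        ]])

# Any valid covariance matrix is symmetric and, for |r| < 1,
# positive definite; its eigenvalues must all be positive.
eigenvalues = np.linalg.eigvalsh(cov)
```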

#### (ii) Data: group II—‘other’ constants data

Eight relative atomic masses of various atoms and particles, such as *A*_{r}(^{1}H), *A*_{r}(^{16}O), *A*_{r}(e), and *A*_{r}(p), for example, *A*_{r}(e)=0.000 548 579 9111(12) [2.1×10^{−9}] obtained from measurements using a Penning trap;

One value of the electron magnetic moment anomaly *a*_{e}=0.001 159 652 1883(42) [3.7×10^{−9}], also obtained from measurements in a Penning trap, and one frequency ratio related to the determination of the muon magnetic moment anomaly *a*_{μ};

One spin-flip to cyclotron frequency ratio for trapped hydrogen-like, or hydrogenic, atoms of ^{12}C (five electrons removed), *f*_{s}(^{12}C^{5+})/*f*_{c}(^{12}C^{5+})=4376.210 4989(23) [5.2×10^{−10}], which is related to the determination of either the *g*-factor of the electron bound in ^{12}C^{5+} or *A*_{r}(e), and one similar ratio for an electron bound in a hydrogenic atom of ^{16}O (seven electrons removed);

Five magnetic moment ratios involving the electron e, proton p, neutron n, deuteron d, and helion h (nucleus of ^{3}He atom), for example, *μ*_{n}/μ′_{p}=−0.684 996 94(16) [2.4×10^{−7}] obtained from Ramsey separated-oscillatory-field magnetic resonance measurements on protons and neutrons in the same applied magnetic flux density (here and throughout this paper, the prime indicates protons in a spherical sample of pure H_{2}O at 25 °C);

Two values of the muonium (μ^{+}e^{−} atom) ground-state hyperfine-splitting frequency Δ*ν*_{Mu} and two values of a related Zeeman transition frequency that provides information about the electron–muon mass ratio *m*_{e}/*m*_{μ}, for example, Δ*ν*_{Mu}=4 463 302 765(53) Hz [1.2×10^{−8}] obtained from Zeeman transition-frequency measurements in muonium;

Five additive corrections *δ*_{i} related to the theoretical expressions for *a*_{e}, *a*_{μ}, Δ*ν*_{Mu}, and the *g*-factor of the electron bound in hydrogenic ^{12}C and in hydrogenic ^{16}O, for example, *δ*_{e}=0.0(1.1)×10^{−12} [9.9×10^{−10}, relative to *a*_{e}] due mainly to numerical uncertainties of certain QED calculations;

Six values of the gyromagnetic ratios *γ* of the proton p and helion h determined by both the low-magnetic-field and high-magnetic-field nuclear magnetic resonance methods in conventional electrical units based on *K*_{J−90} and *R*_{K−90};

Two values of *K*_{J}, five of *R*_{K}, two of *K*_{J}^{2}*R*_{K}=4/*h* obtained using a moving-coil watt balance, and one of the Faraday constant *F*, the latter determined in conventional electrical units based on *K*_{J−90} and *R*_{K−90} (see the last few lines of §2*k*); for example, *R*_{K}=25 812.808 31(62) Ω [2.4×10^{−8}] obtained using a calculable capacitor;

Two values of the ratio of *h* to the mass of an atomic particle, in one case the mass of the cesium-133 atom and in the other case the mass of the neutron times the {220} lattice spacing *d*_{220} of a particular sample of silicon (all measurements involving silicon are carried out on highly pure, nearly crystallographically perfect single crystals and, where appropriate, the results are converted to the same reference temperature and pressure), for example, *h*/[*m*_{n}*d*_{220}(W04)]=2060.267 004(84) m s^{−1} [4.1×10^{−8}], where W04 indicates a silicon crystal from a single-crystal boule designated ‘WASO 04’;

One value of the ratio of the wavelength *λ* of the 2.2 MeV capture γ-ray emitted in the reaction n+p→d+γ to the *d*_{220} lattice spacing of a particular single crystal of silicon, nine measured fractional differences between the *d*_{220} lattice spacing of a number of silicon single-crystal samples, one difference between *d*_{220}(W04) and *d*_{220} of ideal silicon obtained by correcting *d*_{220}(W04) for its impurity content, four ratios of the wavelengths of three different X-ray radiations to the *d*_{220} lattice spacing of two different silicon crystals, one value of *d*_{220} in metres for a particular silicon sample as obtained using a combined X-ray and optical interferometer or XROI, and one value of the molar volume of ideal, naturally occurring silicon *V*_{m}(Si) (that is, the volume of one mole of silicon atoms in a perfect crystal of naturally occurring silicon) obtained from measurements of density and isotopic composition: *V*_{m}(Si)=12.058 825 7(36)×10^{−6} m^{3} mol^{−1} [3.0×10^{−7}]; and

Two values of the molar gas constant *R* obtained from measurements of the speed of sound in argon, for example, *R*=8.314 471(15) J mol^{−1} K^{−1} [1.8×10^{−6}].

The 50 Rydberg-constant input data have relative standard uncertainties *u*_{r} that range from 2.0×10^{−2} for *R*_{p} to 1.9×10^{−14} for the 1S_{1/2}−2S_{1/2} transition frequency in ^{1}H. The 62 ‘other’ data (70 if one includes the eight measured values of the Newtonian constant of gravitation *G* discussed in §2*k* below) have *u*_{r} that range from 1.0×10^{−4} for one particular measurement of *G* to 1.0×10^{−11} for *A*_{r}(^{16}O). Again, for a concise description of how these 112 input data (120 if *G* is included) were obtained, see CODATA-02 and CODATA-98 (Mohr & Taylor 2005, 2000).

### (f) Adjusted constants

As discussed in §2*c*, a least-squares adjustment is carried out by first expressing the input data in terms of a set of quantities called adjusted constants. The latter are the variables or ‘unknowns’ of the adjustment, and the resulting equations, which are the theoretical relations between the measured and calculated input data and the adjusted constants, are called observational equations. The set of adjusted constants is to some extent arbitrary—the two principal requirements that it must meet are that all of the input data must be expressible in terms of the adjusted constants, and that no adjusted constant can be eliminated by expressing it in terms of other adjusted constants. In the 2002 CODATA adjustment, 61 quantities are taken as adjusted constants: 28 primarily for the multivariate analysis of the Rydberg-constant data and 33 primarily for the multivariate analysis of the ‘other’ data. The 28 are *R*_{∞}, *R*_{p}, *R*_{d}, and the 25 *δ*s related to the theoretical expressions of various energy levels in hydrogen and deuterium; the 33 are eight relative atomic masses such as *A*_{r}(e), *A*_{r}(p), and *A*_{r}(n); *α*, *h*, and *R*; five magnetic moment ratios such as that of the electron to that of the proton *μ*_{e}/*μ*_{p}; the mass ratio *m*_{e}/*m*_{μ}; five *δ*s such as *δ*_{e}, *δ*_{μ}, and *δ*_{Mu}, which are the additive corrections to *a*_{e}(theor), *a*_{μ}(theor), and Δ*ν*_{Mu}(theor), respectively; three specialized units in the field of X-rays of historic interest; and eight *d*_{220} lattice spacings of different silicon crystals, including that of a crystal of ideal silicon. All of the adjusted constants may be found in tables XVIII and XX of CODATA-02 (Mohr & Taylor 2005).

### (g) Observational equations

Forty-nine different types of observational equation were required for the 50 Rydberg-constant data, because two of the 23 frequencies were for the same hydrogen transition. Fifty-one different types of observational equation were required for the 62 ‘other’ constant data, because there were two measurements each of Δ*ν*_{Mu}, of the gyromagnetic ratio of the proton obtained by the low-field method, of the same quantity obtained by the high-field method, of the gyromagnetic ratio of the helion obtained by the low-field method, of *K*_{J}, of *K*_{J}^{2}*R*_{K}, and of *R*, and five measurements of *R*_{K}.

All 100 of the different types of observational equation used to analyse the 112 input data initially considered in the 2002 adjustment may be found in tables XIX and XXI of CODATA-02 (Mohr & Taylor 2005). However, the following examples, which are the observational equations for the experimentally measured values of *ν*_{H}(1S_{1/2}−2S_{1/2}), Δ*ν*_{Mu}, *K*_{J}, *R*_{K}, *K*_{J}^{2}*R*_{K}, and *F*_{90}, should provide a sense of what they are about (for simplicity, the function *f*_{H} for the hydrogen 1S_{1/2}−2S_{1/2} transition frequency and the function *f*_{Mu} for the ground-state hyperfine splitting frequency in muonium are not explicitly given):

*ν*_{H}(1S_{1/2}−2S_{1/2}) ≐ *f*_{H}(*R*_{∞}, *α*, *A*_{r}(e), *A*_{r}(p), *R*_{p}, *δ*_{H}(1S_{1/2}), *δ*_{H}(2S_{1/2})), (2.3*a*)

Δ*ν*_{Mu} ≐ *f*_{Mu}(*R*_{∞}, *α*, *m*_{e}/*m*_{μ}, *δ*_{μ}, *δ*_{Mu}), (2.3*b*)

*K*_{J} ≐ (8*α*/*μ*_{0}*ch*)^{1/2}, *R*_{K} ≐ *μ*_{0}*c*/2*α*, (2.3*c*)

*K*_{J}^{2}*R*_{K} ≐ 4/*h*, *F*_{90} ≐ *cM*_{u}*A*_{r}(e)*α*^{2}/(*K*_{J−90}*R*_{K−90}*R*_{∞}*h*). (2.3*d*)

In each of these observational equations, the quantities on the left-hand side of the ≐ sign are the measured values of the indicated quantities. The symbol ≐ is used rather than the ordinary equals sign to indicate that the two sides of the equations are equal in principle but not numerically, because the set of observational equations is overdetermined. The right-hand side contains only quantities with values that are exactly known in SI units, such as *μ*_{0}, *c*, and *M*_{u}, and adjusted constants: *R*_{∞}, *α*, *A*_{r}(e), *A*_{r}(p), *R*_{p}, *δ*_{H}(1S_{1/2}), *δ*_{H}(2S_{1/2}), *m*_{e}/*m*_{μ}, *δ*_{μ}, *δ*_{Mu}, and *h* in these six examples. Although the last four observational equations, those in equations (2.3*c*) and (2.3*d*), are among the simplest, some are even simpler. For example, the observational equation for *δ*_{e}, and for that matter for all of the other *δ*s, is simply of the form *δ*_{e}≐*δ*_{e}, because *δ*_{e} cannot be expressed in terms of any other adjusted constant. The observational equation for *R* is of the same form for the same reason: *R*≐*R*.
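To make the role of the ≐ sign and the overdetermined system concrete, here is a minimal toy adjustment in pure Python. The quantities, design matrix, and numerical values are invented for illustration and are not CODATA data: three observational equations in two adjusted constants, solved by weighted least squares.

```python
import math

# Toy observational equations (invented, for illustration only):
#   q1 =. z1        q2 =. z2        q3 =. z1 + z2
# Input data q_i with a priori uncertainties u_i.
q = [1.00, 2.00, 3.02]
u = [0.01, 0.01, 0.02]
A = [[1, 0], [0, 1], [1, 1]]   # design matrix of the linear system
w = [1.0 / ui**2 for ui in u]  # weights 1/u_i^2

# Normal equations (A^T W A) z = A^T W q, solved by Cramer's rule (2x2).
a11 = sum(wi * row[0] * row[0] for wi, row in zip(w, A))
a12 = sum(wi * row[0] * row[1] for wi, row in zip(w, A))
a22 = sum(wi * row[1] * row[1] for wi, row in zip(w, A))
b1 = sum(wi * row[0] * qi for wi, row, qi in zip(w, A, q))
b2 = sum(wi * row[1] * qi for wi, row, qi in zip(w, A, q))
det = a11 * a22 - a12 * a12
z1 = (a22 * b1 - a12 * b2) / det  # adjusted constant 1
z2 = (a11 * b2 - a12 * b1) / det  # adjusted constant 2

# The adjusted constants do not reproduce the data exactly:
predictions = [z1, z2, z1 + z2]
residuals = [(qi - pi) / ui for qi, pi, ui in zip(q, predictions, u)]
```

Because the system is overdetermined, the adjusted constants reproduce none of the three input data exactly; the normalized residuals quantify the remaining disagreement, which is precisely why ≐ rather than = is used.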

### (h) Analysis of data

The focus of the data analysis is to identify discrepancies among the data and to understand the degree to which a particular datum would contribute to the determination of the 2002 set of recommended values. It proceeds in three stages. First, directly measured values of the same quantity, that is, data of the same type, are compared, usually by calculating their weighted mean, which is in fact a single-variable (as opposed to multivariate) least-squares adjustment. An example of this case is the comparison of the five measured values of the von Klitzing constant. (When there are only two values of the same quantity, simply calculating the difference Δ between the values and comparing it to its uncertainty *u*_{diff} is an equivalent procedure.)
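A minimal sketch of this first stage, using invented toy numbers rather than actual input data: the weighted mean of several measurements of one quantity, its internal uncertainty, and the equivalent two-value check Δ/*u*_{diff}.

```python
import math

def weighted_mean(values, uncertainties):
    """Weighted mean of measurements of one quantity.

    Weights are 1/u_i^2; the returned uncertainty is the
    'internal' uncertainty 1/sqrt(sum of weights).
    """
    weights = [1.0 / u**2 for u in uncertainties]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, 1.0 / math.sqrt(total)

# Toy values standing in for, e.g., five measurements of R_K
# (invented numbers, not the actual input data).
values = [1.000, 1.002, 0.998, 1.001, 1.003]
uncs = [0.002, 0.002, 0.002, 0.002, 0.004]
mean, u_mean = weighted_mean(values, uncs)

# Equivalent two-value check: compare the difference to its uncertainty.
delta = values[1] - values[0]
u_diff = math.hypot(uncs[0], uncs[1])
ratio = delta / u_diff  # |ratio| much larger than ~2 would signal a discrepancy
```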

Second, directly measured values of different quantities, that is, data of different types, are compared through a third quantity that can be inferred from the values of the directly measured quantities. The most important of these inferred values are the fine-structure constant *α* and the Planck constant *h*. Indeed, in the 2002 adjustment, some 15 different items of input data could be compared through their inferred values of *α* and eight through their inferred values of *h*.

Finally, a multivariate analysis of the data is carried out using the method of least squares for correlated input data, which was developed in about the first third of the 20th century. (The basic method of least squares has its origins in the work of the famous mathematicians Legendre, Gauss, and Laplace in the first quarter of the nineteenth century.) In carrying out either a single variable or multivariate adjustment, which often involves sequentially deleting various input data in order to investigate the impact these data might have on the weighted mean or on the values of the adjusted constants, several statistical tools are used to decide if the data are compatible and if a datum contributes in a meaningful way to the determination of the mean or the adjusted constants. These include

- the well-known statistic *chi-square*, symbol *χ*^{2}, and the closely associated *Birge ratio* *R*_{B}=(*χ*^{2}/*ν*)^{1/2}, where *ν*=*N*−*M* is the *degrees of freedom* of the adjustment, with *N* the number of input data and *M* the number of adjusted constants;

- the *normalized residual* *r*_{i} of a given input datum *q*_{i}, which is the ratio of the difference between the value of the input datum *q*_{i} and the best estimated value *q̂*_{i} of that datum resulting from the adjustment, to the *a priori* uncertainty *u*(*q*_{i}) assigned the datum (note that *q̂*_{i} is obtained by evaluating the right-hand side of the observational equation for *q*_{i} with the adjusted constants that result from a given adjustment, and that for uncorrelated input data, *χ*^{2} is equal to the sum of the squares of the normalized residuals); and

- the *self-sensitivity coefficient* *S*_{c} of a particular datum *q*_{i}, which is normally between 0 and 1 and is a measure of the influence of datum *q*_{i} on the best estimated value of the corresponding quantity.

A value of *R*_{B} much larger than about 1 usually indicates some inconsistency among the data, while a normalized residual much larger than about 2 for a particular datum usually indicates that the datum is inconsistent with the other input data. A value of *S*_{c} of only about 0.01 (1%) for a particular datum indicates that the datum plays a rather inconsequential role in determining the best value of the corresponding quantity and can be omitted with little effect, which recalls the ‘factor-of-five’ rule discussed in point (v) of §2*d*.
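For the single-variable (weighted-mean) case these three tools take a particularly simple form: the best estimate is the weighted mean, and the self-sensitivity of datum *i* reduces to its fractional weight. The sketch below uses invented toy numbers, not actual input data; the last datum is deliberately given a large uncertainty so that its *S*_{c} falls below 0.01.

```python
import math

# Toy measurements of one quantity (invented numbers): the last datum
# has a much larger uncertainty and hence a tiny weight.
values = [1.000, 1.002, 0.998, 1.001, 0.950]
uncs = [0.001, 0.001, 0.001, 0.001, 0.050]

weights = [1.0 / u**2 for u in uncs]
total = sum(weights)
mean = sum(w * x for w, x in zip(weights, values)) / total

# Normalized residuals and chi-square (uncorrelated data).
residuals = [(x - mean) / u for x, u in zip(values, uncs)]
chi2 = sum(r**2 for r in residuals)

# Birge ratio: N = 5 data, M = 1 adjusted constant, nu = N - M.
nu = len(values) - 1
birge = math.sqrt(chi2 / nu)

# Self-sensitivity: for a weighted mean, S_c of datum i is its
# fractional weight w_i / sum(w).
sensitivities = [w / total for w in weights]
```

Here the Birge ratio exceeds 1 and one residual exceeds 2 in magnitude, flagging an inconsistency among the first four data, while the last datum has *S*_{c} of about 10^{−4}, far below 0.01, and could be omitted with little effect.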

### (i) Final data selection

Based on the analyses described in §2*h*, two significant inconsistencies among the data were identified, as were a number of data whose values of *S*_{c} were below 0.01. These findings lead to the final least-squares adjustment on which the 2002 CODATA set of recommended values is based, which may be summarized as follows. From the initially considered set of 112 potential input data, seven were deleted: one because of its significant disagreement with the other data and its extremely small influence on the best estimated value of the corresponding quantity, and six solely for the latter reason. The first of these data is a measurement of the proton gyromagnetic ratio by the low-field method; the remaining six are two measurements of the helion gyromagnetic ratio by the low-field method and four measurements of the von Klitzing constant. In addition, the *a priori* uncertainties of five input data were multiplied by the factor 2.325 in order to reduce their inconsistency to an acceptable level. These five data, all of which contributed to the determination of the Planck constant *h*, are the two measurements of the Josephson constant *K*_{J}, the two moving-coil watt-balance measurements of the product *K*_{J}^{2}*R*_{K}=4/*h*, and the one value of the molar volume of silicon *V*_{m}(Si) (the factor 2.325 reduces the absolute value of the normalized residual |*r*_{i}| of this result for *V*_{m}(Si) from 3.18 to 1.50).

It is worth noting that this disagreement involving the measurements of *K*_{J}, *K*_{J}^{2}*R*_{K}, and *V*_{m}(Si) led Mohr and Taylor to consider in Appendix F of CODATA-02 (Mohr & Taylor 2005) whether relaxing the assumptions *K*_{J}=2*e*/*h* and *R*_{K}=*h*/*e*^{2} would reduce or possibly even eliminate the inconsistency, even though these fundamental relations are strongly supported by both experiment and theory. To this end, various adjustments were carried out in which *K*_{J} and/or *R*_{K} were treated simply as phenomenological constants unrelated to *e* and *h*. However, no statistically significant evidence was found to indicate that the fundamental relations characterizing the Josephson and quantum Hall effects are not valid.

### (j) Final adjustment

The *N*=105 final input data were expressed in terms of *M*=61 adjusted constants, corresponding to *ν*=*N*−*M*=44 degrees of freedom. For this final adjustment, *χ*^{2}=31.2, *R*_{B}=0.84, and the probability *p* that a value of *χ*^{2} with *ν*=44 degrees of freedom would exceed the observed value is the quite acceptable *p*=0.93. Each input datum included in the final adjustment had a value of *S*_{c}>0.01, or was part of the data set of an experiment that provided a datum with such a value of *S*_{c}. The final input data with the four largest |*r*_{i}| had values of *r*_{i} of 2.20, −1.50, 1.43, and 1.39; three other input data had values of *r*_{i} of 1.11, −1.11, and 1.05; and all other values of |*r*_{i}| were less than 1.
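These summary statistics can be checked directly. For an even number of degrees of freedom, the *χ*^{2} survival probability has the closed form P(*χ*^{2} > *x*) = e^{−*x*/2} Σ_{*k*=0}^{*ν*/2−1} (*x*/2)^{*k*}/*k*!, so no special library is needed (a sketch for checking the quoted numbers, not the adjustment code itself):

```python
import math

def chi2_sf_even(x, nu):
    """P(chi-square > x) for an even number of degrees of freedom nu.

    Uses the closed form exp(-x/2) * sum_{k < nu/2} (x/2)^k / k!,
    which is exact when nu is even.
    """
    assert nu % 2 == 0 and nu > 0
    half = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, nu // 2):
        term *= half / k
        total += term
    return math.exp(-half) * total

chi2, nu = 31.2, 44
birge = math.sqrt(chi2 / nu)   # ~0.84, as quoted
p = chi2_sf_even(chi2, nu)     # ~0.93, as quoted
```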

### (k) Calculation of other constants

The direct output of the final adjustment is best estimated values, in the least-squares sense, of the 61 adjusted constants, together with their covariances. All of the 300-plus 2002 CODATA recommended values and their uncertainties are obtained from these and, as appropriate, (i) those constants that have defined values, such as *c*, *μ*_{0}, and *M*_{u}; (ii) the value of the Newtonian constant of gravitation *G* resulting from the weighted mean of eight independent results, but taking into account the historical difficulty of measuring *G* in assigning its uncertainty (because *G* has no known relationship with any other constant, and because none of the eight measurements of *G* is significantly correlated with any of the other input data, the eight values were, for convenience, treated independently of the main adjustment); and (iii) the values of the tau mass *m*_{τ}, the Fermi coupling constant *G*_{F}, and sin^{2} *θ*_{W}, where *θ*_{W} is the weak mixing angle, all based on the 2002 biennial *Review of Particle Physics* of the Particle Data Group.

Of course, a number of the 300-plus CODATA 2002 recommended values are the adjusted constants themselves, for example, *α*, *h*, *R*_{∞}, *A*_{r}(e), *R*, and the mass ratio *m*_{e}/*m*_{μ}, and thus are a direct consequence of the final adjustment. However, most must be calculated from the adjusted constants using the relations that exist among the constants, such as the first three relations in equation (2.1*a*) and the two in equation (2.3*c*). Three others are

*m*_{e} = 2*R*_{∞}*h*/*cα*^{2}, *N*_{A} = *cM*_{u}*A*_{r}(e)*α*^{2}/2*R*_{∞}*h*, *k* = 2*RR*_{∞}*h*/*cM*_{u}*A*_{r}(e)*α*^{2}, (2.4)

where *k* is the Boltzmann constant. In all of these equations, the quantities on the right-hand side are either adjusted constants or exactly defined constants.

Although evaluating such expressions is a matter of simple substitution, calculating the uncertainty of the resulting constant requires some care, because the covariances of the adjusted constants entering the equations must be properly taken into account in order to obtain the correct uncertainty. (A detailed discussion of how such calculations are carried out may be found in Appendix E of CODATA-98 (Mohr & Taylor 2000), while further details on how the values of various constants are obtained from the adjusted constants may be found in section V.B of CODATA-98.)
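A sketch of such a propagation, using the 2002 relative standard uncertainties *u*_{r}(*α*) ≈ 3.3×10^{−9} and *u*_{r}(*h*) ≈ 1.7×10^{−7}; the correlation coefficient below is a placeholder set to zero for illustration, not the actual value from the adjustment. For *e* = (2*αh*/*μ*_{0}*c*)^{1/2}, the relative sensitivity coefficient to both *α* and *h* is 1/2, and the covariance term must be carried along:

```python
import math

# Relative standard uncertainties of two adjusted constants (2002 values
# for alpha and h); the correlation coefficient is a placeholder only.
ur_alpha = 3.3e-9
ur_h = 1.7e-7
r_alpha_h = 0.0  # placeholder; the real value comes from the adjustment

# e = (2 * alpha * h / (mu0 * c))**0.5, so in relative terms the
# sensitivity (de/e)/(dx/x) is 1/2 for both alpha and h.
c_alpha, c_h = 0.5, 0.5

# Law of propagation of uncertainty, including the covariance term.
ur_e = math.sqrt(
    (c_alpha * ur_alpha) ** 2
    + (c_h * ur_h) ** 2
    + 2 * c_alpha * c_h * r_alpha_h * ur_alpha * ur_h
)
```

With the correlation set to zero this gives *u*_{r}(*e*) of about 8.5×10^{−8}, dominated almost entirely by *u*_{r}(*h*); a nonzero correlation between *α* and *h* would shift the result, which is why neglecting the covariances yields an incorrect uncertainty.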

### (l) Comparison of last three CODATA adjustments

We compare in tables 1 and 2 the recommended values of a representative group of constants resulting from the last three CODATA adjustments in order to see how our knowledge of the constants has evolved over this approximately 16-year period. In fact, the three principal characteristics of the progression of such knowledge that these tables demonstrate are not atypical of any similar period during the last 75 years, that is, since the seminal 1929 publication of Birge.

The first of these characteristics is how far our knowledge of the values of the constants, as measured by their uncertainties, has advanced during this comparatively short period. Such advances are due, of course, to continual improvements, sometimes revolutionary but often simply evolutionary, in both experiment and theory: the former aided by the discovery of new phenomena such as the Josephson and quantum Hall effects and by the development of new and better instrumentation such as the laser, the Penning trap, and the desktop computer; the latter by new calculational techniques as well as very high-speed computer workstations.

Second, mistakes are made: changes in the recommended values of the constants from one adjustment to the next larger than one would expect from their uncertainties occur all too frequently. On the other hand, it could be argued that such changes are inevitable when one provides recommended values with uncertainties based only on current knowledge—a policy whose purpose is to provide users with the sharpest knife possible for doing their work. Although incorporating some sort of ‘safety factor’ in the evaluation of uncertainties would reduce the occurrence of such changes, doing so would likely restrict the usefulness of the recommended values—it would diminish their role in identifying discrepancies among different results, which quite often fosters new experimental and theoretical work aimed at understanding and eliminating them.

Third, although new data usually lead to smaller uncertainties for the constants from one adjustment to the next, the comparison of the 2002 recommended values with their 1998 counterparts in table 1 shows quite clearly that this is not always the case. As discussed above (§2*i*), the value of the molar volume of silicon *V*_{m}(Si) that became available for the 2002 CODATA adjustment turned out to be inconsistent with four previously available and quite consistent measurements, two of *K*_{J} and two of *K*_{J}^{2}*R*_{K}, and led to the decision to multiply the *a priori* assigned uncertainties of all five data by the factor 2.325 in the final adjustment on which the 2002 CODATA recommended values are based. The end result of this decision is an increase in the *u*_{r} of the 2002 CODATA recommended value of *h*, and in the *u*_{r} of those constants that depend strongly on *h*, by a factor of about 2.2 compared with their 1998 values. Nevertheless, one can argue that because the larger uncertainties to which the new data gave rise are presumably closer to the truth, our knowledge of the values of the constants has actually advanced, even though the uncertainties of some constants have increased.

A not dissimilar case is the 1986, 1998, and 2002 recommended values of the Newtonian constant of gravitation *G*. The uncertainty of the 1998 value is about 12 times larger than that of 1986, because of the availability in the 1998 adjustment of a credible result for *G* obtained in an apparently careful experiment carried out at a major national metrology institute or NMI, but which was in gross disagreement with all other values. Because of this credible but discrepant result, the CODATA Task Group decided to retain the 1986 recommended value as the 1998 recommended value, but with a significantly increased uncertainty to reflect the discrepant value's existence. But, between 1998 and 2002, additional new results for *G* became available, and three researchers at the NMI, two of whom were involved in the original work that led to the questionable value of *G*, carried out experimental investigations of several critical aspects of the experiment that led them to conclude that the original result and the uncertainty assigned to it could no longer be considered correct. It was then possible, based on this new information, to include in the 2002 CODATA set of recommended values a new value of *G* with a reduced uncertainty.

### (m) Problems

The issue of most concern in the 2002 CODATA adjustment is the inconsistency of the measured value of the molar volume of silicon with other, equally credible results, as discussed in the penultimate paragraph of the previous section. Such disagreement immediately calls to mind an axiom of the fundamental constants field, namely, that the best way to establish confidence in a measurement or calculation is to have it repeated in another laboratory, preferably by a dissimilar method. (The different results should have comparable uncertainties.) Although it does not ensure that an unsuspected error in a result will be brought to light, history shows that it is an excellent way of discovering an error if one exists.

Unfortunately, such redundancy is all too rare among the data available at any given point in time that are relevant at the level of uncertainty of interest in the fundamental constants field at that time. This is because that level is always at the frontier of current knowledge, and hence the measurements and calculations required to push back that frontier are invariably complex, time consuming, and expensive. For example, in the 2002 adjustment, the fine-structure constant *α*, Planck constant *h*, and molar gas constant *R*, each of which is an adjusted constant, play a major role in the determination of the recommended values of many other constants, yet the adjusted value of each is still to a considerable extent determined by a pair of input data or a single input datum.

## 3. Conclusion

Clearly, the answer to all of the problems discussed in the last paragraph, and hence to the broader and more fundamental question of how our knowledge of the values of the constants can be advanced, is the same: more and better data! Somewhat regrettably, this answer is much easier to give than to implement. Nevertheless, the importance of the fundamental constants to (i) our understanding of the physical world as based on the theories that we develop to describe it, (ii) the development of an invariant system of measurement units that can be realized by anyone at any time and at any place with the requisite uncertainty, and (iii) the advancement of the state of the art in many fields of metrology, ensures that work in the fundamental constants field will continue unabated, even if not at the pace some might like.

With the above knowledge-advancement question in mind, we close by recalling that the values of the constants, including their uncertainties, depend upon the units in terms of which they are measured. For example, the value of the electron mass in the unit kilogram is *m*_{e}=9.109 3826(16)×10^{−31} kg [1.7×10^{−7}], while the value in the unified atomic mass unit u=*m*(^{12}C)/12 is *m*_{e}=5.485 799 0945(24)×10^{−4} u [4.4×10^{−10}]. Thus, in this case, changing the unit reduces the uncertainty by a factor approaching 400!
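The factor quoted here follows directly from the two quoted values (a quick arithmetic check, using only the numbers above):

```python
# Relative standard uncertainties of the electron mass quoted above,
# in the kilogram and in the unified atomic mass unit.
me_kg, u_kg = 9.1093826e-31, 0.0000016e-31
me_u, u_u = 5.4857990945e-4, 0.0000000024e-4

ur_kg = u_kg / me_kg   # ~1.7e-7, as quoted
ur_u = u_u / me_u      # ~4.4e-10, as quoted
ratio = ur_kg / ur_u   # a factor approaching 400
```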

It has been recently proposed by Mills *et al*. (2005) that the kilogram should be redefined in the very near future in such a way as to fix the value of either the Planck constant *h* or Avogadro constant *N*_{A}, in analogy with the 1983 redefinition of the metre that fixes the value of the speed of light in vacuum to have the exact value *c*=299 792 458 m s^{−1}. Either new kilogram definition (that is, one that fixes *h* or one that fixes *N*_{A}) would in fact lead to significant reductions in the uncertainties of many other constants. Further, one could also consider redefining the kelvin in such a way as to fix the value of the Boltzmann constant *k*, and the ampere in such a way as to fix the value of the elementary charge *e*. Together, these three SI base-unit redefinitions would lead to truly revolutionary reductions in the uncertainties of a broad spectrum of constants. Thus, perhaps somewhat surprisingly, one of the simplest ways our knowledge of the values of the constants can be advanced is by redefining our units of measurement!

## Tables

## Footnotes

One contribution of 14 to a Discussion Meeting ‘The fundamental constants of physics, precision measurements and the base units of the SI’.

- © 2005 The Royal Society