## Abstract

Over the past 40 years, a number of discoveries in quantum physics have completely transformed our vision of fundamental metrology. This revolution starts with the frequency stabilization of lasers using saturation spectroscopy and the redefinition of the metre by fixing the velocity of light *c*. Today, the trend is to redefine all SI base units from fundamental constants and we discuss strategies to achieve this goal. We first consider a kinematical frame, in which fundamental constants with a dimension, such as the speed of light *c*, the Planck constant *h*, the Boltzmann constant *k*_{B} or the electron mass *m*_{e} can be used to connect and redefine base units. The various interaction forces of nature are then introduced in a dynamical frame, where they are completely characterized by dimensionless coupling constants such as the fine structure constant *α* or its gravitational analogue *α*_{G}. This point is discussed by rewriting the Maxwell and Dirac equations with new force fields and these coupling constants. We describe and stress the importance of various quantum effects leading to the advent of this new quantum metrology. In the second part of the paper, we present the status of the seven base units and the prospects of their possible redefinitions from fundamental constants in an experimental perspective. The two parts can be read independently and they point to the same conclusions concerning the redefinition of base units. The concept of rest mass is directly related to the Compton frequency of a body, which is precisely what is measured by the watt balance. The conversion factor between mass and frequency is the Planck constant, which could therefore be fixed in a realistic and consistent new definition of the kilogram based on its Compton frequency. We also discuss how the Boltzmann constant could be determined more accurately and fixed to replace the present definition of the kelvin.

## 1. Introduction: questions about the future of the international system of units

The problem faced by the international system of units (SI) today originates in the revolution brought about by an ensemble of recent discoveries and new technologies based on quantum physics, with the result that there is now an increasing gap between the SI and modern physics, at the level of concepts as well as of measurement techniques. The seven base units are all more or less challenged:

the metre has already been redefined from the second and the speed of light;

the kilogram could be redefined at relatively short notice from the Planck constant, thanks to the watt balance, which uses the equivalence between the electrical watt and the mechanical watt;

the electrical units are practically independent of the SI, through the adoption of conventional values for the Josephson and von Klitzing constants;

the kelvin is defined using the triple point of water, whereas fixing the Boltzmann constant would provide a much more satisfactory definition;

the candela is, in essence, a derived unit of radiant flux;

the mole is defined through a pure number, the Avogadro number, which should be determined more accurately to allow an alternative redefinition of the mass unit, in which it would be fixed;

the second, in the long run, could be better defined from an optical clock, for instance one using atomic hydrogen, which might make it possible to link it to the Rydberg constant and perhaps, one day, to the mass of the electron.

Such a situation naturally results from a significant increase of scientific knowledge over a few decades, whereas the international metrological institutions necessarily need more time to react. Hence, today, the consistency of the system of base units should be restored through a thorough re-examination of fundamental metrology.

A system of units said to be natural, based on fixed fundamental constants, does exist. Long regarded as the preserve of theoretical physicists and considered unrealistic in its applications, this system now seems to be coming back into favour thanks to new technologies: laser measurements of length, the Josephson effect (JE), the quantum Hall effect (QHE), cold atoms, atomic clocks…. Of course, fixing the value of these constants to unity, as theoreticians often do, is out of the question, but it is possible to fix them at nominal values consistent with the previous definitions.

So, today, we can consider the proposition of an overlay to the SI system, linking it more or less completely to the fundamental constants, which would allow our present understanding of the physical world and of the fundamental interactions to be better taken into consideration, and interfering as little as possible with the current system in use (Taylor & Mohr 2001; Kose *et al*. 2003; Moldover & Ripple 2003; Tuninsky 2003; Bordé 2004*a*; Bordé & Kovalevsky 2004).

Furthermore, the system of units cannot be reformed without the ability to put each definition into practice. The new definitions will therefore have to rest on technologies allowing such a ‘mise en pratique’. Moreover, these new technologies generally possess a less sophisticated version which is, or soon will be, part of everyday life. Let us quote, for example, length measurement by laser interferometry, space-time coordinates by GPS (Global Positioning System), electrical measurements by the Josephson and quantum Hall effects, the ‘electrical’ kilogram….

First, we will return to the new theoretical frame imposed on fundamental metrology by modern physics. Then, unit by unit, we will examine the different choices consistent with this theoretical context and with the instrumental constraints imposed by the requirement of fully developed and credible technologies. The reader uninterested in theoretical issues is welcome to go directly to this second part. Such an examination will allow us to specify the competing strategies for the future of the base units system. Finally, it is clear that any system based on natural constants ought to take into account the potential variability of these constants (Bize 2004).

## 2. Towards a new conceptual framework for the base units system: a theoretical overview of possible paths and plans

This framework is naturally the one imposed by the two great physical theories of the twentieth century: relativity and quantum mechanics. These two major theories themselves have given birth to quantum field theory, which incorporates all their essential aspects and adds those associated with quantum statistics. The quantum theory of fields allows a unified treatment of fundamental interactions, especially, of electroweak and strong interactions within the standard model. General Relativity is a classical theory, hence gravitation remains apart and is reintegrated into the quantum world only in the recent theories of strings. We do not wish to go that far and we will keep to quantum electrodynamics and to the classical gravitation field. Such a framework is sufficient to build a modern metrology, taking into account an emerging quantum metrology. Of course, quantum physics has been operating for a long time at the atomic level, for example in atomic clocks, but now it also fills the gap between this atomic world and the macroscopic world, thanks to the phenomena of quantum interferences whether concerning photons, electrons, Cooper pairs or more recently atoms in atom interferometers (Bordé 1989, 2002).

The main point is to distinguish between a ‘kinematical’ framework associated with fundamental constants having a dimension, such as *c*, *ℏ*, *k*_{B} and a ‘dynamical’ framework where the interactions are described by coupling constants without dimension. The former framework relies on the statistical relativistic quantum mechanics of free particles, and the latter on the quantum field theory of interactions.

Two possible goals can be pursued:

redefine each unit in terms of a fundamental constant with the same dimension, e.g. mass in terms of the mass of an elementary particle;

reduce the number of independent units by fixing a fundamental constant having the proper dimension for this reduction, e.g. fixing *c* to connect space and time units or *ℏ* to connect mass and time units.

The existence of fundamental constants with a dimension is often an indication that we are referring to the same thing with two different names. We recognize this identity as our understanding of the world gets deeper. We should then apply an economy principle (Occam's razor) to our unit system to take this into account and to display this connection.

When abandoning a unit for the sake of another, the first condition (C1) is thus to recognize an equivalence between the quantities measured with these units (e.g. the equivalence between heat and mechanical energy, or between mass and energy), or a symmetry of nature that connects these quantities through a symmetry operation (for example, a rotation transforming the space coordinates into one another, or a Lorentz transformation mixing the space and time coordinates).

A second condition (C2) is that a realistic and mature technology of measurement is to be found. For example, notwithstanding the equivalence between mass and energy, in practice the kilogram standard will not be defined by an energy of annihilation, but on the other hand, thanks to the watt balance, it can be tied to its Compton frequency by measurements of time and frequency.

A third condition (C3) is connected to the confidence felt for the understanding and the modelling of the phenomenon used to create the link between quantities. For instance, the exact measurement of distances by optical interferometry is never questioned because we believe that we know everything, and in any case, that we know how to calculate everything concerning the propagation of light. That is the reason why redefining the metre ultimately took place with few problems. On the other hand, measuring differences of potential by the JE or electrical resistances by the QHE still needs support, because despite a 10^{−9} confirmation of their reproducibility (Tsai *et al*. 1983; Niemeyer *et al*. 1986; Jain *et al*. 1987; Kautz & Lloyd 1987; Jeckelmann & Jeanneret 2001; Piquemal *et al*. 2004), and a good understanding of the universal topological character behind these phenomena, some people still feel uncertain as to whether all possible small parasitical effects have been dealt with. Whether a physical phenomenon can properly be used to measure a quantity is directly related to our knowledge of the whole underlying physics. In order to switch to a new definition, this psychological barrier must be overcome, and we must have complete faith in our total understanding of the essentials of the phenomenon. Therefore, through a number of experiments as varied as possible, we must make sure that the measurement results are consistent up to a certain level of accuracy, which will be that of the mise en pratique, and we must convince ourselves that no effect has been neglected at that level. If all of these conditions are fulfilled, the measured constant that linked the units for the two quantities will be fixed, e.g. the mechanical equivalent of the calorie or the speed of light. We shall examine how these three criteria (C1, C2, C3) apply to the main structure constants of the SI.

### (a) Kinematical framework

#### (i) Relativity has already allowed the value of *c* to be fixed and the standard of length to be redefined

(C1) As mentioned before, condition 1 is satisfied here thanks to the existence of a symmetry in space-time between space and time. The theory of relativity is the conceptual framework in which space and time coordinates are naturally connected by Lorentz transformations. Relativity uses clocks and rods to define these coordinates. The rods of relativity are totally based on the propagation properties of light waves, either in the form of light pulses or of continuous beams.

(C2) Condition 2 is the existence of mature technologies to implement this symmetry. Are the tools there?

It was possible to redefine the length unit from the time unit, because modern optics allowed not only the measurement of the speed of light generated by superstable lasers with a relative uncertainty lower than that of the best length measurements, but also because today the same techniques allow the new definition of the metre to be realized in an easy and daily way. Interferometry is the technique that allows us to go from a nanometric length, the wavelength linked to an atomic transition, to a macroscopic length at the metre level, and superstable laser sources are now believed to be fully reproducible for length measurements (see below).

(C3) There is a theoretical background universally accepted to describe the propagation of light in real interferometers.

The four-potential *A*_{μ} satisfies the wave equation

$$\Box A_{\mu}=0.\tag{2.1}$$

This equation has been solved to yield the modal content of real laser beams and diffraction corrections have been studied in great detail by generations of opticians. The propagation of light pulses is also very well described and understood in laser and microwave links. It is true that ultra-short pulses can generate nonlinear effects (even the vacuum becomes nonlinear at some power level), but these effects can be neglected under usual conditions.

In the case of light waves, the wave equation involves only one fundamental constant *c*. This is because, in the absence of interaction with matter (if one ignores discrete momentum exchanges with the mirrors and the Casimir force), one does not need to quantize the Maxwell field and there is no need to introduce photons as quanta of this field. If we deal with massive particles, however, the field-particle connection is unavoidable and a new fundamental constant appears: the Planck constant *h* (or *ℏ*=*h*/2*π*).

#### (ii) Quantum mechanics allows the value of *h* to be fixed and the mass standard to be defined

(C1) Quantum mechanics tells us that there is an equivalence between an action and the phase of a wave. Essentially an action is the product of a mass and a proper time (once *c* is fixed).

In fact, we know that quantum mechanics always links the mass *M* with the Planck constant in the form *M*/*ℏ* (in the Schrödinger equation as well as in the relativistic equations of Klein–Gordon and Dirac). For massive particles, the energy–momentum relation (see figure 1) is

$$E^{2}=p^{2}c^{2}+M^{2}c^{4},\tag{2.2}$$

or, in manifestly covariant form,

$$p^{\mu}p_{\mu}=M^{2}c^{2},\tag{2.3}$$

and, with the usual rules of correspondence in quantum mechanics, which connect the particle properties to the field derivatives,

$$E\rightarrow\mathrm{i}\hbar\frac{\partial}{\partial t},\tag{2.4}$$

$$\boldsymbol{p}\rightarrow-\mathrm{i}\hbar\boldsymbol{\nabla},\tag{2.5}$$

one obtains the corresponding equation for the matter-wave field *φ*, which is the Klein–Gordon equation

$$\left(\Box+\frac{M^{2}c^{2}}{\hbar^{2}}\right)\varphi=0,\tag{2.6}$$

where □≡*∂*^{μ}*∂*_{μ} is the d'Alembertian. In a more axiomatic approach, one starts with a Lagrangian for the matter-wave field, from which both the wave equation and the particle properties can be derived by a canonical quantization procedure.

The mass shell hyperboloid of figure 1 thus gives the dispersion relation for the matter-wave, relating the de Broglie wavevector to the de Broglie frequency. The de Broglie frequency for zero momentum (body at rest) is equal to the Compton frequency *Mc*^{2}/*h* of the particle. It is a Lorentz scalar. Thus, masses are equivalent to frequencies.
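
The mass-frequency equivalence above is easy to make concrete numerically. The sketch below computes the Compton frequency *ν*_{C}=*Mc*^{2}/*h* for the electron and for a caesium-133 atom; the constant values are current CODATA/SI ones, which postdate this paper.

```python
# Compton frequency nu_C = M c^2 / h: the frequency equivalent of a rest mass.
c = 299792458.0           # speed of light, m/s (exact)
h = 6.62607015e-34        # Planck constant, J s (exact in the 2019 SI)
m_e = 9.1093837015e-31    # electron mass, kg (CODATA)
u = 1.66053906660e-27     # atomic mass constant, kg (CODATA)
m_Cs = 132.905451961 * u  # mass of a caesium-133 atom, kg

nu_C_electron = m_e * c**2 / h   # ~1.236e20 Hz
nu_C_Cs = m_Cs * c**2 / h        # ~3.0e25 Hz
```

The enormous size of these frequencies explains why they are accessed indirectly, through recoil shifts or ratios such as *ℏ*/*M*, rather than counted directly.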

In the case of a composite complex body, such as an atomic species or a macroscopic object, there is a rich spectrum of internal energies and hence masses. This spectrum is the spectrum of eigenvalues of the internal Hamiltonian *H*_{0} and we can write the Schrödinger equation in the rest frame of the body as

$$\mathrm{i}\hbar\frac{\partial\varphi}{\partial\tau}=H_{0}\varphi,\tag{2.7}$$

where *τ* is the proper time. Rest mass and proper time thus appear as conjugate variables and are both Lorentz invariants. So that, in the general case, the Klein–Gordon equation (2.6) can be written

$$\left(\Box-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial\tau^{2}}\right)\varphi=0,\tag{2.8}$$

in which *c* is again the only fundamental constant. Equation (2.8) is indeed the wave equation in a space with four spatial dimensions *x*, *y*, *z*, *cτ*. We recover equation (2.6) in the case of a monomassic state, for which the field oscillates at a single internal frequency *Mc*^{2}/*h*.

To measure proper time and build a clock, we need at least two different masses and figure 2 illustrates the simple case of a two-level atom that can absorb single photons. Since both energy and momentum need to be conserved in this process, we see that one photon absorption from an atom at rest in the lower state will lead to a recoil energy that adds up to the Bohr energy. The Compton frequency of the atom appears directly in this correction and this is the basis for its measurement. Also, if there is a thermodynamic distribution of initial momenta, this will result in a corresponding distribution of absorbed frequencies (Doppler broadening).
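
The size of the recoil correction mentioned above can be sketched numerically. For an atom at rest absorbing a photon of frequency *ν*_{0}, the recoil shift is *ν*_{0}^{2}/2*ν*_{C}, where *ν*_{C} is the atom's Compton frequency; the caesium D2 wavelength below is an approximate illustrative value.

```python
# Photon recoil shift for an atom initially at rest:
# recoil kinetic energy = (h nu0)^2 / (2 M c^2), i.e. a frequency shift
# delta_rec = nu0^2 / (2 nu_C) with nu_C = M c^2 / h the Compton frequency.
c = 299792458.0
h = 6.62607015e-34
u = 1.66053906660e-27
M = 132.905451961 * u        # caesium-133 mass, kg
lam = 852.347e-9             # approximate Cs D2 wavelength, m
nu0 = c / lam                # optical frequency, ~3.52e14 Hz
nu_C = M * c**2 / h          # Compton frequency of the atom
delta_rec = nu0**2 / (2 * nu_C)   # recoil shift, ~2 kHz
```

The kilohertz-scale shift on a ~352 THz transition shows why the Compton frequency appears directly, and measurably, in this correction.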

(C2) Can we measure *ℏ*/*M*?

Today we know how to measure *ℏ*/*M*_{at} for an atom of mass *M*_{at} by atom interferometry with an uncertainty of ∼10^{−8}, thanks to a measurement of frequency and thus of time (Bordé 1989, 2002; Wicht *et al*. 2002*a*,*b*). Unfortunately, going to macroscopic masses through direct or indirect counting (measurement and realization of the Avogadro number) meets significant problems at the 10^{−7} level. Fortunately, there is another way to reach macroscopic masses: through the watt balance, which in fact measures the ratio *ℏ*/*M*_{K} for the mass *M*_{K} of the kilogram standard, also by time and frequency measurements, thanks to quantum electrical metrology. The final formula for the watt balance gives the Compton frequency of the kilogram in the form

$$\nu_{C}=\frac{M_{\mathrm{K}}c^{2}}{h}=\frac{f_{1}f_{2}}{4}\,\frac{\omega T^{2}}{\varphi}\,\frac{c}{v},\tag{2.9}$$

where *f*_{1} and *f*_{2} are Josephson frequencies, *g* is the acceleration due to gravity, determined by a phase measurement *φ* in an atom or optical interferometer in which the laser circular frequency is *ω* and the measurement time is *T*, and *v*/*c* is the Doppler factor involved in the measurement of the velocity *v*.
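
Whatever the exact combination of measured quantities, the output of the watt balance can be read as the Compton frequency of the kilogram, *ν*_{C}=*M*c^{2}/*h*. A one-line numerical check, using the exact values of *c* and *h* adopted in the 2019 SI (which postdate this paper):

```python
# Compton frequency of a one-kilogram body: nu_C = M c^2 / h.
c = 299792458.0       # m/s (exact)
h = 6.62607015e-34    # J s (exact in the 2019 SI)
M = 1.0               # kg
nu_C_kg = M * c**2 / h    # ~1.356e50 Hz
```

Fixing *h* thus makes the kilogram a frequency standard in disguise, exactly as fixing *c* made the metre one.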

(C3) Is there an adequate modelling and a full, universally accepted theoretical understanding of all these measurement techniques?

The theory of matter-wave interferometers still requires full validation with various atomic species and beam splitters but does not involve *a priori* unknown physics or complicated solid-state physics. The situation of the Avogadro number determination appears to be much more uncertain in this respect.

As for the watt balance, the confidence in the reliability of its measurement of *ℏ*/*M*_{K} is directly connected to our confidence in the expressions of the Josephson and von Klitzing constants in terms of true fundamental constants.

#### (iii) Statistical mechanics allows the value of the Boltzmann constant *k*_{B} to be fixed and the scale of temperature to be defined

(C1) Our first condition rests in this case on the equivalence between temperature and energy.

Statistical mechanics allows us to go from probabilities *W* to entropy *S* thanks to a third fundamental constant with a dimension, the Boltzmann constant *k*_{B},

$$S=k_{B}\ln W.\tag{2.10}$$

At present the scale of temperature is defined by an artefact, the triple point of water, admittedly natural, but, nevertheless, far from being a fundamental constant.

By analogy with the case of the Planck constant, it appears natural to propose fixing the Boltzmann constant *k*_{B}. There is indeed a deep analogy between the two *S* of physics, i.e. action and entropy. The corresponding variables conjugate to energy are time and reciprocal temperature, with the two associated fundamental constants: *h*, the quantum of action, and *k*_{B}, the quantum of information (Brillouin 1959). The two concepts appear unified as a phase and an amplitude in the density operator, which satisfies both the Liouville–von Neumann equation and the Bloch equation with the same Hamiltonian *H*_{0}. The combination of proper time and temperature as a complex time *τ*−i*ℏ*/*k*_{B}*T* is even found as a privileged variable of the thermal Green function (Bordé 2002). To put it another way, the Planck constant should not be fixed without also fixing the Boltzmann constant, for fear of some inconsistency. In fact, it is the combination *k*_{B}/*ℏ*, rather than *k*_{B}, that appears naturally.
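
As an illustration of the pairing of *ℏ* and *k*_{B}, one can evaluate the 'thermal time' *ℏ*/*k*_{B}*T* that appears in the complex time *τ*−i*ℏ*/*k*_{B}*T*; the values below are the current CODATA/SI ones, which postdate this paper.

```python
# Thermal time hbar / (kB * T): the imaginary-time scale set by temperature.
hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0                # room temperature, K
t_thermal = hbar / (kB * T)   # ~2.5e-14 s, i.e. ~25 fs
```

At room temperature this time is tens of femtoseconds, which is why thermal decoherence is so fast for macroscopic superpositions.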

(C2) We will see that several methods to measure the Boltzmann constant *k*_{B} are presently under study, which hopefully will carry a small enough uncertainty to lead to a new definition of the kelvin that could compete with the present definition.

(C3) Classical statistics is not always the only mechanism at work in the various methods to measure *k*_{B}. Quantum statistical effects can bring collective or many-body effects depending on whether we deal with bosons or fermions. Various other interactions (e.g. collisions) can come into play and spoil the result. Fortunately, a change of isotopic species and extrapolation to zero pressure can always be performed. Concerning the Doppler width method (see below) many other checks can easily be thought of and give full confidence in a very simple formula. In any case, it is hard to see why a new definition of thermodynamic temperature based upon a fixed value for the Boltzmann constant *k*_{B} could not be universally accepted.
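
The Doppler-width method mentioned above can be sketched numerically: one measures the Gaussian Doppler width of an absorption line at a known temperature and inverts the standard formula for *k*_{B}. The molecule, line frequency and temperature below are illustrative assumptions, not the actual experimental parameters.

```python
import math

# Doppler-width thermometry (sketch): the FWHM of a Doppler-broadened line is
#   dnu = nu0 * sqrt(8 ln2 * kB * T / (M c^2)),
# so measuring dnu, nu0, T and M determines kB.
c = 299792458.0
kB_true = 1.380649e-23
u = 1.66053906660e-27
M = 17.0 * u          # ammonia NH3 (illustrative choice of absorber)
nu0 = 2.8953e13       # ~10 um line frequency, Hz (illustrative value)
T = 273.15            # K

# forward model: the width such an experiment would observe
dnu = nu0 * math.sqrt(8 * math.log(2) * kB_true * T / (M * c**2))

# inversion, as the experiment would perform it
kB_rec = (dnu / nu0)**2 * M * c**2 / (8 * math.log(2) * T)
```

The round trip recovers *k*_{B} exactly here; in a real experiment the accuracy is set by the line-shape model, collisions and pressure extrapolation discussed above.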

#### (iv) Mass spectrum quantization of elementary particles and the unit of time

Once this kinematical framework is identified and used to redefine the length, mass and temperature units from *c*, *ℏ*, *k*_{B}, there still remains a free choice for the unit of time. Is there a natural unit of time? In the hypothesis of *c* and *ℏ* being fixed, it has to be a reference mass (or a difference in masses), as the conjugate variable of proper time, which is able to fix its scale. The mass spectrum of elementary particles could offer a natural time-scale, and we think first of the electron, which has no known internal structure. The Compton frequency of the electron would then be the choice, but at present the only accurate access to this quantity is by means of the Rydberg constant and the fine structure constant. This is not so far from the time defined by atomic clocks, as we shall discuss later.

### (b) Dynamical frame: introduction of forces

The ampere is specific to the electromagnetic force, which is one of the four fundamental interactions, the only one privileged by the present SI system. The four interactions should be considered all together in the light of the present state of unification of the fundamental forces.

(C1) The standard model of physics treats these fundamental interactions as bosonic gauge fields coupled to matter with a set of dimensionless fundamental constants. In the case of electromagnetism, the corresponding field theory is quantum electrodynamics and the coupling constant is the fine structure constant:

$$\alpha=\frac{e^{2}}{4\pi\epsilon_{0}\hbar c},\tag{2.11}$$

with *ϵ*_{0}*μ*_{0}*c*^{2}=1.

At the atomic level, all QED calculations are performed with this single constant. This means that, at the macroscopic level also, the whole of electromagnetism should be described by means of this constant alone, without the help of any additional base unit, or any other fundamental constant with a dimension such as the electron charge.
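
The definition of the fine structure constant can be checked numerically with current CODATA values (which postdate this paper):

```python
import math

# Fine structure constant alpha = e^2 / (4 pi eps0 hbar c).
e = 1.602176634e-19       # elementary charge, C (exact in the 2019 SI)
h = 6.62607015e-34        # Planck constant, J s (exact)
hbar = h / (2 * math.pi)
c = 299792458.0           # m/s (exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m (measured)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
inv_alpha = 1 / alpha     # ~137.036
```

With *e*, *h* and *c* fixed, all the uncertainty of *α* migrates into *ϵ*_{0}, which illustrates the point made in the text: only the dimensionless combination has physical content.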

This point can be discussed in more concrete terms by means of the Dirac and Maxwell equations. Let us start with the Dirac equation

$$\mathrm{i}\hbar\gamma^{\mu}D_{\mu}\psi=Mc\,\psi,\tag{2.12}$$

in which all the fundamental constants of interest to us can be found and where *D*_{μ} is the covariant derivative given for electromagnetism by

$$\mathrm{i}\hbar D_{\mu}=\mathrm{i}\hbar\partial_{\mu}+qA_{\mu},\tag{2.13}$$

that is to say, the canonical momentum is the sum of the mechanical four-momentum *p*_{μ}=i*ℏ∂*_{μ} and of the interaction term *qA*_{μ} (*q*=−*e* for the electron).

In order to realize the desired program, the Dirac equation should be rewritten without the electric charge appearing explicitly as a separate fundamental constant. This is possible only if the electron charge is absorbed into the four-potential in the form of an interaction four-momentum *Π*_{μ}=*eA*_{μ}. The charge *q*=*Qe* becomes an integer *Q*, and the ‘dimensioned electron charge’ *e* (in coulombs) is systematically included in the four-potential and thus also in the electric and magnetic fields (*eE* and *ecB*), which become dimensionally homogeneous to mechanical forces.

The same path can be followed for Maxwell's equations:

$$\partial_{\lambda}F^{\lambda\mu}=\mu_{0}j^{\mu},\tag{2.16}$$

where *F*^{λμ} is the Faraday tensor

$$F^{\lambda\mu}=\partial^{\lambda}A^{\mu}-\partial^{\mu}A^{\lambda}.\tag{2.17}$$

They can be rewritten as

$$\partial_{\lambda}\left(eF^{\lambda\mu}\right)=4\pi\alpha\,\frac{\hbar}{c}\,J^{\mu},\qquad J^{\mu}\equiv j^{\mu}/e,\tag{2.18}$$

where all quantities are now dimensionally homogeneous to mechanical quantities.

(The same equation would have been obtained had we used the CGS Gaussian system.)

(C2) The quantities that now appear in the previous equations are precisely those that are measured by modern quantum electrical metrology: variations of electric fields measured as frequencies by the JE, variations of magnetic fields measured in units of flux quanta by SQUIDs, currents determined by single-electron counting, and the fine structure constant, which can be obtained from the ratio of the vacuum impedance *Z*_{0} to the von Klitzing resistance *R*_{K}.
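
The relation between *α*, the vacuum impedance and the von Klitzing resistance is easy to verify numerically. Note that, precisely, *α*=*Z*_{0}/2*R*_{K}, i.e. the ratio quoted in the text up to a factor of 2; values are current CODATA ones.

```python
# alpha from the impedance ratio: alpha = Z0 / (2 R_K),
# with Z0 = mu0 * c and R_K = h / e^2.
h = 6.62607015e-34
e = 1.602176634e-19
c = 299792458.0
mu0 = 1.25663706212e-6   # vacuum permeability, measured (no longer exact)

Z0 = mu0 * c             # vacuum impedance, ~376.73 ohm
RK = h / e**2            # von Klitzing resistance, ~25812.807 ohm
alpha = Z0 / (2 * RK)    # note the factor 2
```

This is why quantum Hall metrology gives direct access to *α*: both impedances are known, one from electromagnetism, one from the quantum Hall effect.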

(C3) Quantum electrodynamics being unquestionable at the atomic scale, the only problem met when going from the microscopic to the macroscopic world is to validate the Josephson and quantum Hall effects as to their ability to involve the true fundamental constants. This may require effective constants *K*_{J} and *R*_{K} to be kept for some time. The constants *e* of the electron charge and *μ*_{0} (and hence *ϵ*_{0}) are not separately genuine fundamental constants, since their role is only to link the mechanical units to the electrical ones. Reducing the fundamental interactions to mechanical forces in fact amounts to defining equivalences, as mentioned above. We could thus simply abandon electrical units for mechanical units, just as the calorie was abandoned for the joule.

As for gravitation, it is a special case, and it is known that the corresponding field theory is not renormalizable. In fact, at present, it is only in the weak field limit that the theory of gravitation can be simply incorporated into the previous scheme. A role similar to *qA*_{μ} is then played by *h*_{μν}*p*^{ν}/2, where *h*_{μν} is the small deviation of the space-time metric *g*_{μν} with respect to the Minkowski metric *η*_{μν} and where the role of the charge is played by *p*^{ν} (Bordé *et al*. 2000):

$$\mathrm{i}\hbar D_{\mu}=\mathrm{i}\hbar\partial_{\mu}+\tfrac{1}{2}h_{\mu\nu}p^{\nu},\tag{2.19}$$

apart from the effects connected with the spin.

So a ‘gravitational’ fine structure constant without dimension can be defined:

$$\alpha_{G}=\frac{G(E/c^{2})^{2}}{\hbar c}=\frac{Gm_{e}^{2}}{\hbar c},\tag{2.21}$$

in which the reference energy *E* is chosen to be equal to the electron mass energy *m*_{e}*c*^{2}.

Another way of introducing this analogy is to compare the Coulomb law for the electric potential energy and the law of gravitation for the gravito-electric potential energy,

$$V(r)=\frac{q_{1}q_{2}}{4\pi\epsilon_{0}r}=Q_{1}Q_{2}\,\frac{\alpha\hbar c}{r},\tag{2.22}$$

$$U(r)=-\frac{Gm_{1}m_{2}}{r}=-\frac{m_{1}m_{2}}{m_{e}^{2}}\,\frac{\alpha_{G}\hbar c}{r},\tag{2.23}$$

which stress the analogy between *α* and *α*_{G}.

Now the choice of *m*_{e} rather than the proton mass *m*_{p}, or any other particle mass, more or less elementary, can be questioned. In fact, it is known that the gravitational constant *G* combined with the Planck constant allows us to define a time, the Planck time

$$t_{P}=\sqrt{\frac{\hbar G}{c^{5}}},\tag{2.24}$$

which can be compared with the time given by atomic clocks, well represented by the inverse Rydberg frequency

$$t_{\mathrm{at}}=\frac{1}{cR_{\infty}},\tag{2.25}$$

to yield the dimensionless quantity

$$\frac{t_{P}}{t_{\mathrm{at}}}=cR_{\infty}\sqrt{\frac{\hbar G}{c^{5}}}=\frac{\alpha^{2}}{4\pi}\sqrt{\alpha_{G}},\tag{2.26}$$

which suggests the previous choice, though not a unique one.
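
These orders of magnitude can be checked numerically. The sketch below uses current CODATA values (which postdate this paper) to evaluate *α*_{G}, the Planck time, the atomic time-scale 1/*cR*_{∞}, and the dimensionless combination (*α*^{2}/4π)√*α*_{G} that relates them.

```python
import math

# Gravitational coupling alpha_G = G m_e^2 / (hbar c), Planck time and the
# atomic time-scale set by the Rydberg constant.
hbar = 1.054571817e-34
c = 299792458.0
G = 6.67430e-11               # CODATA 2018 value, m^3 kg^-1 s^-2
m_e = 9.1093837015e-31
R_inf = 1.0973731568e7        # Rydberg constant, m^-1
alpha = 7.2973525693e-3       # fine structure constant

alpha_G = G * m_e**2 / (hbar * c)   # ~1.75e-45
t_P = math.sqrt(hbar * G / c**5)    # Planck time, ~5.39e-44 s
t_at = 1 / (c * R_inf)              # atomic time-scale, ~3.0e-16 s
ratio = t_P / t_at                  # dimensionless, ~1.8e-28
combo = (alpha**2 / (4 * math.pi)) * math.sqrt(alpha_G)
```

The agreement of `ratio` and `combo` simply expresses *R*_{∞}=*α*^{2}*m*_{e}*c*/2*h*; the tiny value of *α*_{G} shows how weak gravity is at the electron mass scale.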

To conclude the above debate, no new unit is theoretically required for this dynamical frame; every quantity related to the various forces of nature can be formulated from the mechanical units and coupling constants without dimension.

## 3. Present status of the seven base units and prospects of their redefinition from fundamental constants: an experimental perspective

The direct link between the definition of a base unit, its mise en pratique and a major scientific discovery is well illustrated by the case of the metre and its redefinition following the technological progress of laser sources. It is the archetypal example of a process that could serve as a model for redefining the other units.

### (a) The metre, the SI length unit and the speed of light

The metre is the first base unit for which a new definition was formulated from a fundamental constant, imposed by the progress of physics in the second half of the twentieth century. Length measurement by interferometry and the definition of the metre from a wavelength standard (formerly given by the krypton lamp) moved decisively in that direction with the advent of lasers in 1960.

It was especially when sub-Doppler spectroscopic methods appeared, particularly saturated absorption spectroscopy in 1969 (Barger & Hall 1969; Bordé 1970), that lasers became sources of stable and reproducible optical frequencies. The other revolution was the metal-insulator-metal (MIM) diode technique, which could be used for their direct frequency measurement from the caesium microwave frequency reference. From then on, the speed of light could be measured with a small enough uncertainty (Evenson *et al*. 1974 and references therein) and the 17th Conférence Générale des Poids et Mesures (CGPM) fixed its value in 1983, thus linking the metre to the second. This implies a procedure for putting the definition into practice with lasers frequency-locked to recommended atomic and molecular resonances (Quinn 1999, 2003 and references therein).
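
With *c* fixed, the mise en pratique turns a recommended frequency directly into a wavelength, *λ*=*c*/*f*. A sketch using the iodine-stabilized He-Ne line near 633 nm; the frequency quoted is approximately the CIPM recommended value.

```python
# Realizing the metre from a recommended optical frequency: lambda = c / f.
c = 299792458.0            # m/s, exact by definition of the metre
f = 473.612353604e12       # I2-stabilized He-Ne frequency, Hz (approximate)
lam = c / f                # wavelength, m
lam_nm = lam * 1e9         # ~632.99 nm
```

No length artefact enters anywhere: the wavelength is entirely derived from a frequency measurement and the fixed value of *c*.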

In the future, cold-atom optical clocks (see below) should gradually displace lasers locked to saturation peaks of iodine or other molecules as the means of delivering a controlled wavelength. Locally, the metre may also be obtained by frequency control of lasers through time transfer from primary clocks on earth or in space, or through fibre-optic or satellite transmission of femtosecond combs. Laser and microwave links for time transfer are now well developed and their performance matches that of the best clocks.

### (b) The second, SI unit of time, the Rydberg constant and the electron mass

The measurement of time is at the leading edge of metrology: the accuracy of atomic clocks gains a factor of 10 every 10 years (their uncertainty is better than 10^{−15} today). Thanks to this very high level of accuracy, it pulls up all the other measurements, most of which come down to a measurement of time or frequency. It has its roots in the most advanced atomic physics (cold atoms) but also has daily applications, especially to navigation (GPS). Finally, the fascination exerted by the notion of time and its philosophical implications are also a strong motivation for this race. The teams of the BNM–SYRTE at the Paris Observatory and of the Kastler Brossel laboratory have pioneered atomic fountains using cold atoms to make clocks (Bize 2004). They have started to construct a space clock with cold atoms (PHARAO) within the framework of an ESA (European Space Agency)/CNES (Centre National d'Etudes Spatiales) project on the International Space Station: ACES (Atomic Clock Ensemble in Space). The importance of Australian cryogenic resonators for the short-term stability of future clocks must also be mentioned.

Among the recent revolutions, let us mention cold-atom optical clocks (Sterr *et al*. 2004) which, associated with the frequency combs generated by femtosecond lasers (Reichert *et al*. 2000), will allow better and faster counting and will quite probably take the place of microwave clocks in the future. This tendency to build clocks of higher and higher frequency will probably continue beyond the optical spectrum, and one day or another the gap to the spectral region of Mössbauer resonances will be filled by coherent stable sources of measurable frequencies.

There is still fierce competition between neutral atoms (in free flight or confined in an optical lattice so as to benefit from the Lamb-Dicke effect) and trapped ions (Gill 2002). In the end, what part will space play in the intercomparison of clocks and the distribution of time? The accuracy of clocks on earth will necessarily be limited in the future by the imperfect knowledge of the earth's gravitational potential, at the level of 10^{−17}. So, an orbital clock will have to be available as a reference.

The possible future redefinition of the second is an open contest. Will there be for the second, as for the metre, a universal definition accompanied by a mise en pratique and secondary realizations? Just as in the case of the metre, this raises the problem of the possible variation of the fundamental constants, which would differentially affect the various transitions.

Rubidium has advantages over caesium because of its collisional properties, and the hyperfine transition of rubidium has just been recommended by the CCTF (Consultative Committee for Time and Frequency) as a secondary representation of the time unit. As for hydrogen, many metrology physicists find it most appealing. We saw above that it is really tempting to recommend hydrogen to define a primary clock, for instance the 1S–2S transition, which has been the subject of spectacular intercomparisons (at 10^{−14}) with the cold caesium fountain (Niering *et al*. 2000), or even an appropriate combination of optical frequencies allowing better isolation of the Rydberg constant from various corrections (realized today at nearly 8×10^{−12}, thanks to the work of the groups of F. Biraben in Paris and T. Haensch in Garching; Huber *et al*. 1999; de Beauvoir *et al*. 2000). The calculations of the hydrogen spectrum should therefore be pushed as far as possible, while remembering the gap (several orders of magnitude) that will still separate theory from experiment for a long time. Finally, between the Rydberg constant *R*_{∞}=*α*^{2}*m*_{e}*c*/2*h* and the electron mass *m*_{e}, we find the fine structure constant, known today only to ∼10^{−8}. As can be seen, there is still a long way to go before the time unit is connected to a fundamental constant, but we should be aware of the implicit link that already exists between the definition of the time unit and these fundamental constants.
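This implicit link is easy to make concrete: the relation *R*_{∞}=*α*^{2}*m*_{e}*c*/2*h* can be evaluated numerically. The sketch below uses CODATA 2018 values (which postdate this paper and are assumed here purely for illustration; the exact values of *c* and *h* reflect the later SI redefinition):

```python
# Evaluate R_inf = alpha^2 * m_e * c / (2 h) numerically.
alpha = 7.2973525693e-3   # fine structure constant (dimensionless, CODATA 2018)
m_e = 9.1093837015e-31    # electron mass, kg (CODATA 2018)
c = 299792458.0           # speed of light, m/s (exact)
h = 6.62607015e-34        # Planck constant, J s (exact since 2019)

R_inf = alpha**2 * m_e * c / (2 * h)   # Rydberg constant, m^-1
print(f"R_inf = {R_inf:.7e} m^-1")     # ~1.0973732e7 m^-1
```

Any improvement in *α* or *m*_{e} thus propagates directly into the chain linking the Rydberg constant, and hence hydrogen frequencies, to the other constants.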

### (c) The kilogram, SI mass unit and the Planck constant

Today everyone accepts the idea that the kilogram mass standard, invariable by definition, has in fact drifted by several tens of micrograms (i.e. some 10^{−8} in relative value) since it was made, and that every effort must be made to replace it in its defining role (CGPM recommendation; Gläser 2003*a*). Several possibilities have been explored (Schwitz *et al*. 2004). The two most serious consist in a measurement of *h*/*M* by frequency or time measurements. In the first one, based on atom interferometry, *M* is an atomic mass *M*_{at} (Bordé 1989, 2002; Wicht *et al*. 2002*a*,*b*). The link must then be made with the macroscopic scale: the realization of an object whose number of atoms is known and whose mass can be compared with that of the kilogram standard (this amounts to a determination of the Avogadro number). For the time being, this second step is the more difficult and is limited to 10^{−6}–10^{−7} (Becker *et al*. 2003; Schwitz *et al*. 2004). Moreover, its major drawback is that it does not lead to an easy and universal realization of the new definition (Taylor & Mohr 1999).

The second way, which today looks the more promising, is that of the 'electric' kilogram, for which *M* is directly the mass of the kilogram standard (Eichenberger *et al*. 2003). The electric kilogram was born with the watt balance, suggested by Kibble (1975), which in one step (cryogenic Bureau International des Poids et Mesures (BIPM) version) or two steps accomplishes the direct comparison between a mechanical watt, realized by the motion of a mass in a gravitational field (the earth's field) or an inertial field (space version), and an electrical watt, realized through the combination of the Josephson and QHEs. In the case of the earth's gravity field of acceleration *g*, the power balance is written as

$$Mgv = I_1 U_2, \qquad (3.1)$$

and the electrical power *I*_{1}*U*_{2} is given (up to integers corresponding to Josephson steps or QHR plateaux, which we shall omit) by

$$I_1 U_2 = \frac{h}{4} f_1 f_2, \qquad (3.2)$$

where *f*_{1} and *f*_{2} are Josephson frequencies. The acceleration *g* is measured as a dimensionless phase *φ* by optical or atom interferometry, using lasers of circular frequency *ω* and time delay *T*,

$$\varphi = \frac{\omega}{c}\, g T^2. \qquad (3.3)$$

The velocity *v* is measured as a laser beam Doppler shift *v*/*c* by a heterodyne interferometric measurement. Combining these relations leads to the formula given earlier for the Compton frequency of the kilogram in the form

$$\nu_C = \frac{Mc^2}{h} = \frac{f_1 f_2}{4\,(\varphi/\omega T^2)(v/c)}. \qquad (3.4)$$

This method, operated in the US and in England for more than 20 years, has already shown that it can reach a level of uncertainty matching that of the present kilogram, i.e. some 10^{−8}. Two new realizations are being carried out and evaluated, one in Switzerland and a more recent one in France. A third, very attractive and ambitious cryogenic project is under study at BIPM. More programs will probably follow. In all likelihood, in a few years this effort will lead to the possibility of following the evolution of the present kilogram, and later on to a redefinition of the kilogram by its Compton frequency as measured by the watt balance:6

The kilogram is the mass of a body whose Compton frequency is 1.356 392..×10^{50} Hz exactly.
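The number in this proposed definition can be checked directly from *ν*_{C}=*Mc*^{2}/*h*; a minimal sketch (the exact SI values of *c* and *h* used below postdate this paper and are assumed for illustration):

```python
# Compton frequency nu_C = M c^2 / h of the kilogram, the quantity
# the watt balance effectively measures.
c = 299792458.0     # speed of light, m/s (exact)
h = 6.62607015e-34  # Planck constant, J s (exact since 2019)

M = 1.0                         # kg
nu_C = M * c**2 / h             # Hz
print(f"nu_C = {nu_C:.7e} Hz")  # ~1.3563925e50 Hz, matching the definition above
```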

As we have seen in the previous part, this probable evolution is also to be hoped for at a conceptual and theoretical level. Without any doubt, it must be strongly encouraged. It is equivalent to fixing the Planck constant to a nominal value, which has a number of other advantages (Taylor & Mohr 1999; Mills *et al*. in press), and one should certainly consider doing it as soon as possible. Moreover, there is every chance that future balances will be simplified versions of the watt balance.

### (d) The kelvin, the SI temperature unit and the Boltzmann constant

The measurement of thermodynamic temperature calls for an absolute instrument that is very difficult to put into practice; that is the reason why the international temperature scale ITS-90 uses a set of fixed points together with interpolation instruments and equations. Very low and very high temperatures are particularly active fields of research.

Here again, the present definition of the unit, based on the triple point of water (Renaot *et al*. 2000), could evolve if the Boltzmann constant could be determined with sufficient accuracy. Several methods are being attempted (electrical noise power, measurement of a Doppler width, black-body radiometry) and they could lead to new technologies in absolute thermometry.

The relative uncertainty of the recommended value of the Boltzmann constant given by CODATA 1998 is 1.7×10^{−6} (Mohr & Taylor 1999). We must stress the fact that this value does not result from a direct measurement of *k*_{B}. It comes from the relation *k*_{B}=*R*/*N*_{A} in which the recommended values for *R*=8.314 472(15) J mol^{−1} K^{−1} and *N*_{A}=6.022 141 99(47)×10^{23} mol^{−1} are used. Yet, direct measurements of *k*_{B} do exist. They are mainly of two types.
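The indirect route quoted above is easy to reproduce numerically; a short sketch using the CODATA 1998 values given in the text:

```python
# Indirect determination of k_B via k_B = R / N_A (CODATA 1998 values).
R = 8.314472          # molar gas constant, J mol^-1 K^-1
N_A = 6.02214199e23   # Avogadro constant, mol^-1

k_B = R / N_A
print(f"k_B = {k_B:.7e} J/K")  # ~1.3806503e-23 J/K
```

The 1.7×10^{−6} uncertainty of *k*_{B} then simply follows from the uncertainties of *R* and *N*_{A}, dominated by that of *R*.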

One is based on measuring the mean square voltage 〈*U*^{2}〉 of the Johnson noise across the terminals of a resistor *R*_{S} in thermal equilibrium at temperature *T*, measured in a bandwidth Δ*f* (Storm 1986; White *et al*. 1996). One has 〈*U*^{2}〉=4*k*_{B}*TR*_{S}Δ*f*. It must be noted that this expression, based on Nyquist's theorem, is correct to better than 10^{−6} if the bandwidth is less than 1 MHz and if *T*<25 K. Experimentally, this method faces the difficult problem of precisely estimating the measurement bandwidth, which limits the relative uncertainty of this type of experiment to a few times 10^{−4}.
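To fix orders of magnitude, the Nyquist formula can be evaluated for a typical configuration; the resistance, temperature and bandwidth below are illustrative assumptions, not values from any of the cited experiments:

```python
# RMS Johnson noise voltage from Nyquist's formula <U^2> = 4 k_B T R_S df.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value, for illustration)
T = 273.16          # temperature, K (triple point of water)
R_S = 1.0e4         # resistance, ohm (assumed)
df = 1.0e4          # measurement bandwidth, Hz (assumed, well below 1 MHz)

U2 = 4 * k_B * T * R_S * df   # mean square voltage, V^2
U_rms = math.sqrt(U2)
print(f"U_rms = {U_rms:.3e} V")  # of order a microvolt
```

The smallness of this signal, together with the need to know Δ*f* precisely, explains the 10^{−4}-level limitation of the method.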

The other rests on the virial expansion of the Clausius–Mossotti equation and on the determination of one of its coefficients (Pendrill 1996). It uses the measurement of the relative change of capacitance of a ^{4}He-filled capacitor between 4.2 and 27 K as a function of pressure. Moreover, the determination of *k*_{B} from such a measurement requires knowing the static electric dipole polarizability of ^{4}He (Luther *et al*. 1996; Pendrill 1996; and more recently Lach *et al*. 2004). At present, the best value deduced from this method is reported with an uncertainty of 1.9×10^{−5}, but is only marginally compatible with the recommended value. In an article commenting on the CODATA values, Mohr & Taylor (1999) consider this uncertainty to be underestimated. For this reason, no direct measurement of the Boltzmann constant enters into the value recommended by CODATA.

So, new methods of direct determination must be found. A very promising method is due to Quinn & Martin (1996) and is based on the determination of the Stefan–Boltzmann constant from a measurement of the total energy radiated by a black body at the temperature of the triple point of water, using a cryogenic bolometer. Another method, more promising still, reduces the determination of the Boltzmann constant to a frequency measurement; it was proposed by the author (Bordé 2002). It consists of measuring as accurately as possible the Doppler profile of an absorption line in an atomic or molecular gas at thermodynamic equilibrium. At low pressures the lineshape is dominated by Doppler broadening (collisional, transit-time and saturation broadening and Dicke narrowing are well-known phenomena in this low-pressure regime; they are well understood and can, in principle, be extrapolated to zero values of pressure and laser power). In the pure Doppler limit, the lineshape is a Gaussian, whose width is *ku*, where *k* is the light wave number and *u* the most probable velocity in the gas (see figures 2 and 3). From this very accurate frequency measurement, one can infer a value for *k*_{B}/*ℏ* from a measurement of *ℏ*/*M*_{at} by atom interferometry. The relative uncertainty on atomic and molecular masses can be reduced to 10^{−9}–10^{−10} thanks to measurements of the ratios of masses of atomic and molecular ions in Penning traps (DiFilippo *et al*. 1994; Bradley *et al*. 1999; Carlberg *et al*. 1999). Very encouraging preliminary results have been obtained in our Villetaneuse laboratory with ammonia as the molecular species. The *E*-line of methane at 3.39 μm could be another very good candidate in the future.
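The inversion behind this Doppler-width method can be sketched numerically. In the pure Doppler limit the fractional 1/e half-width is *u*/*c* with *u*=(2*k*_{B}*T*/*M*)^{1/2}, so that *k*_{B}=*Mc*^{2}(Δ*ν*/*ν*_{0})^{2}/2*T*. The line frequency and the round-trip below are illustrative assumptions (an ammonia line near 10 μm, at the triple point of water), not data from the Villetaneuse experiment:

```python
# Round-trip sketch of the Doppler-width route to k_B:
# forward-compute the Doppler width from an assumed k_B, then invert it.
import math

c = 299792458.0                   # m/s
T = 273.16                        # K
M = 17.03 * 1.66053906660e-27     # kg, ammonia molecular mass
nu0 = 2.9e13                      # Hz, assumed NH3 line frequency (~10 um)

k_B_true = 1.380649e-23
u = math.sqrt(2 * k_B_true * T / M)   # most probable speed, m/s (~516 m/s)
delta_nu = nu0 * u / c                # 1/e half-width of the Gaussian, Hz

# An experiment measures delta_nu and nu0 and infers k_B:
k_B_inferred = M * c**2 * (delta_nu / nu0)**2 / (2 * T)
print(f"u = {u:.1f} m/s, delta_nu = {delta_nu/1e6:.2f} MHz")
```

The whole determination thus rests on a ratio of frequencies plus the molecular mass, which is where the Penning-trap mass ratios enter.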

### (e) The ampere, SI electrical intensity unit and the fine structure constant

The electrical units already underwent two quantum revolutions at the end of the last century, with the JE and the QHE. They are experiencing a third one with single-electron tunnelling, which should very soon allow the closing of the metrological triangle (checking the consistency of the quantum realizations by applying Ohm's law) at a level around 10^{−7} (Piquemal *et al*. 2004). It is generally considered that closing this triangle at a level around 10^{−8} will bring enough confidence to move to a redefinition of the electrical units from the fundamental constants. In the meantime, these units are linked to the SI only by conventional values of the constants *K*_{J} and *R*_{K}. In actual practice, the reproducibility of the JE and QHE reaches such a level (10^{−9}) that electrical measurements now use these effects without any connection to the definition of the ampere. Closing the metrological triangle so precisely will possibly allow these constants to be corrected, making them consistent, and the theories connecting them to the fundamental constants of physics to be checked further.

Ohm's law will then only be an equality between frequencies: a difference of potential will be expressed as a Josephson frequency,7 a current as a number of electrons per second, and electric resistances divided by the von Klitzing resistance will be dimensionless.
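A minimal numerical sketch of this frequency-only form of Ohm's law, using the standard JE, SET and QHE expressions *U*=*n*_{J}*f*_{J}*h*/2*e*, *I*=*ef*_{SET} and *R*=*h*/*ie*^{2}; the step indices and drive frequency below are assumed for illustration:

```python
# Quantum metrological triangle: Ohm's law U = R I reduces to a relation
# between frequencies alone, n_J * i * f_J = 2 * f_SET; h and e cancel.
h = 6.62607015e-34   # Planck constant, J s
e = 1.602176634e-19  # elementary charge, C

n_J, i = 1, 2               # Josephson step and QHR plateau indices (assumed)
f_J = 75.0e9                # Hz, assumed Josephson drive frequency
f_SET = n_J * i * f_J / 2   # Hz, SET pumping frequency required by Ohm's law

U = n_J * f_J * h / (2 * e)   # volts (Josephson voltage)
I = e * f_SET                 # amperes (SET current)
R = h / (i * e**2)            # ohms (QHE plateau)
print(abs(U - R * I) / U)     # Ohm's law closes with no h or e left over
```

Any discrepancy found experimentally in this closure would therefore point to a failure of one of the three quantum relations, not to the values of *h* or *e*.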

So electric metrology is in the middle of a revolution. In the future it will be in a key position for the whole of metrology (see the electric kilogram).

We have also stressed the key role of the fine structure constant and the importance of its determination. The Thompson–Lampard calculable capacitor together with the QHE offers a first method, presently limited to a few parts in 10^{8} (Trapon *et al*. 2003 and references therein). Atom interferometry seems to be a more promising method for going further (see figure 8).

### (f) The mole, the SI unit of quantity of matter and Avogadro number

The mole is a quantity of microscopic objects, defined as a conventional number of similar entities (as a rule material ones, but the concept is sometimes extended to non-material entities such as photons; Becker 2001). This pure number8 is taken as equal to the Avogadro number, defined from the present definition of the kilogram divided by the mass of a carbon atom (^{12}C), which allows us to go from the atomic scale to the macroscopic scale. It is convenient for establishing the balance in any transmutation reaction between elementary entities, but it is definitely not essential in a system of base units, because it is redundant with the existing macroscopic mass unit, the kilogram, which many users prefer anyway.

There is an international program to determine the Avogadro number from the knowledge of a silicon sphere under all its 'angles' (physical characteristics of dimension, mass, unit-cell volume, isotopic composition, surface state, etc.; Becker 2003; Becker *et al*. 2003). This program has already met and overcome many problems and, one day, it could succeed in a determination of the Avogadro number with an accuracy consistent with a redefinition of the kilogram. By fixing the Avogadro number, the kilogram would then be defined from the mass of an elementary particle, preferably that of the electron. This program, whose chances of success are not to be disregarded, spurs on several advanced technologies, mostly bearing on the knowledge of the properties of a silicon sphere. Unfortunately, today there is a significant difference (larger than 10^{−6}) between this value and the value derived from the watt balance and the formula in the footnote. Other methods of realizing a macroscopic mass by counting individual particles (for example gold ions) are less advanced but, nevertheless, very interesting (Gläser 2003*b*).

### (g) The candela, the SI light intensity unit and…a number of photons

The candela is sometimes qualified as a physiological unit and, except for a more or less universal and famous sensitivity curve of the human eye, *V*(*λ*), it boils down to an energy flux. Its survival among the base units is not easily understood. Nevertheless, photometry and radiometry are important fields of metrology for industrial use and in the environmental field. In this domain, cryogenic radiometers and trap detectors are two new technologies permitting absolute radiometric measurements. But, above all, Quinn and Martin's reference black body mentioned before (Quinn & Martin 1996; Martin & Haycocks 1998) constitutes a primary realization of a radiometric source, provided, however, that we give ourselves the Boltzmann constant, which also intervenes directly here. Lastly, nonlinear optics, and particularly the parametric three-photon processes, provide an absolute calibration of the quantum efficiency of detectors through correlated photon counting and a direct application of the Manley–Rowe relations

$$\frac{\Delta P_1}{\omega_1} = \frac{\Delta P_2}{\omega_2} = -\frac{\Delta P_3}{\omega_3}. \qquad (3.5)$$

These apply very generally to parametric processes, but they can also be interpreted in this case as a balance between numbers of light quanta.
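The photon-balance reading of the Manley–Rowe relations can be sketched for parametric down-conversion *ω*_{3}→*ω*_{1}+*ω*_{2}: each pump photon destroyed creates exactly one signal and one idler photon, so the photon fluxes *P*/*ℏω* in the two generated beams are equal. The frequencies and converted power below are illustrative assumptions:

```python
# Manley-Rowe photon balance for parametric down-conversion
# omega_3 -> omega_1 + omega_2: equal photon fluxes in signal and idler.
hbar = 1.054571817e-34  # reduced Planck constant, J s

omega_3 = 2.4e15             # rad/s, pump (assumed)
omega_1 = 1.5e15             # rad/s, signal (assumed)
omega_2 = omega_3 - omega_1  # rad/s, idler (energy conservation)

dP3 = 1.0e-9                        # W, pump power converted (assumed)
N_rate = dP3 / (hbar * omega_3)     # pump photons destroyed per second
P1 = N_rate * hbar * omega_1        # signal power created, W
P2 = N_rate * hbar * omega_2        # idler power created, W
print(f"{N_rate:.3e} photon pairs per second")
```

Detecting the idler beam thus heralds the signal photons one by one, which is what allows an absolute (source-free) calibration of detector quantum efficiency.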

## 4. Conclusion

This ends our survey of the seven base units and of their connection with the fundamental constants. Fundamental metrology is nowadays in a process of complete transformation and a strong tendency to relate the base units to fundamental constants is becoming apparent. The debate is open as to the way newly relevant and appropriate definitions should be formulated. There is an obvious competition between two schemes to define the mass unit. In the first one, the Planck constant is fixed and the watt balance allows an easy measurement of masses. In the second one, the Avogadro number is determined and fixed, and the mass unit is defined from an elementary mass such as the electron mass; but in this case, the practical realization of a macroscopic mass must go through the realization of a macroscopic object (thus an artefact) whose number of microscopic entities is known. The time unit, which is implicitly connected to the Rydberg constant, becomes, in the first case, implicitly linked to the electron mass, and in the second to the Planck constant. The future will tell us which of these choices has a chance of being retained. One would certainly prefer to reduce the number of independent units and to define masses in terms of frequencies, which means redefining the kilogram through its Compton frequency $\nu_C = Mc^2/h$. This is done either in one step, through the result of the watt balance, or in two steps, through the determination of the Avogadro number $\mathcal{N}_A$ and the measurement of the Compton frequency of atoms *ν*_{u}=*m*_{u}*c*^{2}/*h* or of electrons *ν*_{e},

$$\nu_C = 1000\,\mathcal{N}_A \nu_u = \mathcal{N}_B \nu_e, \qquad (4.1)$$

with

$$\mathcal{N}_B = 1000\,\mathcal{N}_A \frac{m_u}{m_e}. \qquad (4.2)$$

Since we cannot fix *ν*_{e} today without redefining the time unit, only the product $\mathcal{N}_B \nu_e$ can be fixed, and then $\mathcal{N}_A$ is necessarily linked to the measured value of *ν*_{e} and cannot be fixed in this option. We may dream of a future world in which *ν*_{e} will define the unit of time and $\mathcal{N}_B$ the unit of mass, and thus in which a base unit will no longer be necessary. This challenge is open to physicists.
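The consistency of the two-step route can be checked numerically: the Compton frequency of the kilogram should equal 1000 times the Avogadro number multiplied by the Compton frequency of the unified atomic mass unit, *ν*_{u}=*m*_{u}*c*^{2}/*h* (the factor 1000 converting the 12 g of the mole definition to the kilogram). A sketch using CODATA 2018 values, which postdate this paper and are assumed here only for illustration:

```python
# Check nu_C(kg) = 1000 * N_A * nu_u with nu_u = m_u c^2 / h.
c = 299792458.0           # m/s (exact)
h = 6.62607015e-34        # J s (exact since 2019)
m_u = 1.66053906660e-27   # unified atomic mass unit, kg (CODATA 2018)
N_A = 6.02214076e23       # Avogadro number, taken dimensionless here

nu_u = m_u * c**2 / h               # Compton frequency of m_u, ~2.25e23 Hz
nu_C_kg = 1000 * N_A * nu_u         # Compton frequency of the kilogram
print(f"nu_C(kg) = {nu_C_kg:.7e} Hz")  # ~1.356392e50 Hz
```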

It is clear that quantum physics has become an essential aspect of modern metrology, since the interferometry of light and matter waves is now bridging the gap between the microscopic and macroscopic worlds. Phases introduced in the wave function by electromagnetic and gravito-inertial fields can be measured very accurately by matter-wave interferometry using the Aharonov–Bohm, Aharonov–Casher, Sagnac and COW effects, among others. Recently, the measurement of *G* has been demonstrated in the group of M. Kasevich, using an atom gradiometer, and an accurate experiment is in preparation in Florence (MAGIA; Stuhler *et al*. 2003). Sources of coherent matter waves (BECs, atom lasers…) are being developed. Interesting analogies and combinations between electrical and gravito-inertial effects in matter-wave interferometers have been suggested and are under study. For example, the analogue of a Josephson junction is obtained for atoms with Raman pulses, and with two of these in opposite directions one can compensate the acceleration of gravity and measure the gravito-electric field by a frequency measurement. Another example, suggested quite some time ago, is to combine the magnetic and gravito-magnetic field effects in a SQUID to detect the Lense–Thirring effect (DeWitt 1966). We are likely to witness an explosion of these ideas and their application to metrology.

Fundamental metrology is thus now definitely a quantum metrology in which modern sub-Doppler spectroscopy, quantum electrical devices and atom interferometry play a key role and have become essential tools for the determination of fundamental constants (we have seen the examples of *c*, *h*/*M*, *R*_{∞}, *α*, *k*_{B}, *G*) and for the redefinition of base units.

## Acknowledgments

The subject-matter of this paper is currently being studied by a working group of the Académie des Sciences, and this manuscript has thus greatly benefited from many discussions and reflections within this group. The chairman of the group was Hubert Curien until our latest meeting in January 2005, shortly after which we had the deep sorrow of losing him. Other members of this group are: Thibault Damour, Daniel Estève, Pierre Fayet, Bernard Guinot, Marc Himbert, Yves Jeannin, Jean Kovalevsky, Pierre Perrier, Terry Quinn, Christophe Salomon, Claudine Thomas and Gabriele Veneziano. The author is also very grateful to Mrs Marie-Claire Céreuil for her help in translating a former version of this paper and in the preparation of the final manuscript.

## Footnotes

One contribution of 14 to a Discussion Meeting ‘The fundamental constants of physics, precision measurements and the base units of the SI’.

↵In this respect we should carefully distinguish two different meanings of time: on the one hand, time and position mix as coordinates and this refers to the concept of time coordinate for an event in space-time, which is only one component of a four-vector; on the other hand, time is the evolution parameter of a composite system and this refers to the proper time of this system and it is a Lorentz scalar (see below).

↵Planck time comes to mind first, but there is no known way to use it for any practical clock and we shall see when we consider interactions that it is natural to introduce it in a dimensionless constant.

↵More generally, one could also introduce a polarization tensor *M*^{λμ}: (2.14). The other Maxwell equations are simply obtained from the dual tensor as (2.15).

↵One can also extend equation (2.8) by introducing the d'Alembertian in curved space-time, as in recent theories of atomic clocks and atom interferometry (Bordé 2004*b*). For an overview of metrology and general relativity see Guinot (1997, 2004).

↵The gravitation potentials *h*_{μν} satisfy Einstein's linearized field equations for the metric, written in the harmonic gauge as (2.20), where *T*_{μν} is the energy–momentum tensor of the source. So that, in the Dirac equation, the interaction term can be written as a dimensionless gravitational charge *p*_{α}/*m*_{e}*c* multiplied by a field having the dimension of a linear momentum.

↵One should avoid the introduction of massless photons in this definition to make the connection between energy and frequency. We have sufficiently emphasized that there is a direct connection between mass and frequency in quantum mechanics. Furthermore, mass and the Compton frequency are relativistically invariant quantities (Lorentz scalars), unlike an energy or a de Broglie frequency (or wavelength). A definition based on the de Broglie wavelength has been proposed by Wignall (1992), without any connection to the watt balance.

↵The other choice would be to fix the value of the electron charge *e* to a nominal value, so as to turn the volt into a unit derived from the joule (and hence from the second, once *h* is fixed). This requires letting *μ*_{0} be determined by the value of *α* fixed by nature. This situation would be the opposite of the present one, in which *μ*_{0} is fixed and *e* has to be measured.

↵The Avogadro number 𝒩_{A} is defined here as the number of atoms, isolated, at rest and in their ground state, contained in 0.012 kg of carbon 12. It is therefore, up to a numerical factor 0.012, the dimensionless ratio of the mass of the kilogram standard to the mass of the carbon atom. The Avogadro constant *N*_{A} usually stands for the same number per mole and is expressed in mol^{−1}. This number and this constant are just a way of expressing the mass of the carbon atom or its twelfth, which is the unified atomic mass unit *m*_{u}. One should note that, if the kilogram is characterized by its Compton frequency, the Avogadro number is related to the Rydberg constant, and thus that, if this frequency is fixed, the Avogadro number will be implicitly linked to the atomic time.

© 2005 The Royal Society