## Abstract

Measuring time is a continuous activity, an international and restless enterprise hidden in time laboratories spread all over the planet. The Bureau International des Poids et Mesures is charged with coordinating activities for international timekeeping and it makes use of the world's capacity to produce a remarkably stable and accurate reference time-scale. Commercial atomic clocks beating the second in national laboratories can reach a stability of one part in 10^{14} over a 5 day averaging time, driving the search for the best-performing methods of remote clock comparison. The unit of the international time-scale is the second of the International System of Units, realized with an uncertainty of the order of 10^{−15} by caesium fountains. Physicists in a few time laboratories are making efforts to gain one order of magnitude in the uncertainty of the realization of the second, and more refined techniques of time and frequency transfer are in development to accompany this progress. Femtosecond comb technology will most probably contribute in the near future to an improved definition of the second through the incorporation of optical clocks. We describe the evolution of time measurement, the current state of the art and future challenges.

## 1. Introduction

‘What time is it?’ is undoubtedly the most recurring question of all epochs. Man is dependent on time and this dependency has existed in the development of all human activities since the birth of mankind. So, it was necessary to measure the flow of time. We stumble here over a concept that associates time with water as both of them flow. But, whereas we can stop a stream of water and we can pour and store a volume of water in a bottle, we cannot impede the course of time and we cannot lock up time and keep it in store for another occasion. The need for time emerged with the need for dating events. Seasons, seed, harvest, births, battles, catastrophes, positions of the celestial objects were events that regulated the evolution of civilizations in ancient times. Far from weakening this dependency, our contemporaries are making it stronger and pushing it to astonishing limits of precision; protocols for time transfer by computer networks, air traffic control, stock exchange, space navigation, satellite systems, mobile phones and security systems (and unlimited future applications) make us ‘time-dependent’ today.

The necessity of measuring time pushed the evolution of technology and, as technology improved, more demanding applications arose. The Ariadne's thread of this process is human intellect, which came to understand that the notion of an absolute physical world is a contradiction and established general relativity as the model of the real world.

Astronomy was the first exact science dealing with time; trying to find the mechanisms to measure the uniform flow of time had been, until the last half of the twentieth century, the challenge of astrometry. The celestial motions were considered natural phenomena providing the immutable frequency standards needed for the measurement of time. Terrestrial clocks were ‘timekeepers’, whose frequency was compared and contrasted with that coming from the heavens until the moment when they became more stable than the celestial motions themselves. The assertion that an atomic transition could be the basis of time measurement, and the development of the first atomic caesium clock in 1955 by Essen & Parry (1957), put timekeeping in the hands of metrology. However, this transition was not without difficulty: when time was derived from a device in a laboratory, rather than from a natural phenomenon, it was accused of being unreliable. Metrology proved that the accuracy of the realization of the second could increase to the point of requiring the development of more precise methods of comparison; a relativistic treatment of time comparison is now compulsory at the accuracies at which we work. The modelling of a primary frequency standard is made in the framework of general relativity. The most demanding applications today require atomic timekeeping.

## 2. Before the atomic era

The first time-scales were based on the repeatability of celestial motions. The underlying phenomenon responds to the requisite of being natural and the concept of constant frequency remained valid until measurement devices improved, thus demonstrating the instability of natural celestial frequencies.

This was the case for the Earth's rotation, reflected by the diurnal motion of the celestial bodies. *Apparent solar time* was defined as ‘the hour angle of the Sun’; its irregularities were already understood in the time of Ptolemy, and the laws of planetary motion later explained them: the orbit of the Earth around the Sun is elliptical and is tilted with respect to the plane defined by the Earth's rotation. A uniform time, named *mean solar time*, was then derived by correcting for the effects of the ellipticity of the Earth's orbit and of its tilt. This concept was associated with the local meridian. The nineteenth century brought the necessity of unifying time (mainly for national railway timetables!), at least on a national level. Some countries adopted as the ‘official’ hour that of a chosen meridian. The next step, in 1884, was the adoption of a *prime meridian* and a *universal time*, the mean solar time corresponding to the selected prime meridian. The globe was divided into 24 time zones of one hour, the first centred on the Greenwich meridian, which had been adopted as the prime meridian. This was the first step towards international time coordination, with countries adopting legal times that corresponded to local time zones.

The establishment of time-scales up to then had been in the domain of astronomy, but it was not until 1948 that the International Astronomical Union (IAU) formally recommended the use of universal time (UT). A distinction must be made between Greenwich mean time (GMT) and UT. While GMT starts at noon (the hour angle is zero at the moment of the upper transit of a celestial object), UT starts at midnight, as does civil time. GMT is no longer officially used as a reference for civil time; it is, strictly speaking, a form of UT. However, GMT is so embedded that it is still incorrectly used to mean UT decades after it was removed from the conventions.

A determination of universal time required astronomical observations to measure *sidereal time*. Sidereal time is the hour angle of the equinox measured from the meridian (it has the denomination ‘Greenwich sidereal time’ when it refers to the Greenwich meridian). The instants of stars' meridian transits were detected with transit instruments and corrections to the reference clock were derived from them.

The periodic phenomenon used as the basis of the time-scale was the rotation of the Earth, from which the mean solar day was derived. The unit of rotational time was defined as a fraction of the mean solar day, that is, the second of mean solar time was 1/86 400 of its duration. Clocks had the role of timekeepers and they were not at all associated with the phenomenon underlying the time-scale. Clocks provided, through extrapolation, real time UT and they produced time signals for time dissemination.

The discovery of polar motion at the end of the nineteenth century, and the evidence of irregularities in the Earth's rotation discovered some time later, led to the organization of worldwide coordinated observations. In a certain manner, this was the beginning of international cooperation that makes current international timekeeping a success: in the 1950s the precision in the determination of time was a few milliseconds and the delay to obtain definitive results was about 1 year; in the 2000s we determine time with a few nanoseconds precision and a delay of one month.

The improvement of clocks (pendulums, quartz clocks) contributed to the demonstration of irregularities in the Earth's rotation. The astronomers thought they had found the solution to the problem of defining a uniform time-scale with the adoption of *ephemeris time* (TE), a form of dynamical time, generally defined as the argument in the dynamical equations of the bodies of the Solar System and specifically as the geometrical mean longitude of the Sun in Newcomb's theory. The practical realization of TE was through observing the Moon, chosen for its fast motion, relative to reference stars; the dynamical theories of its motion available at that time allowed a precise prediction of its ephemeris. The second of TE, as adopted by the 10th Conférence Générale des Poids et Mesures in 1954, was defined as a fraction of a specific tropical year (1900). The duration of the ephemeris second approximately equals that of a second of mean solar time averaged over a century; it is shorter by 1.4×10^{−8} s than the mean solar second averaged over the year 1960. In fact, the century over which it was averaged was mostly the nineteenth century, hence the present need for occasional leap seconds. Until 1960, the second was defined from the duration of the mean solar day; between 1960 and 1967 it was defined from the duration of a specific tropical year.

Many reasons converged to make TE a time-scale without applications in civil timekeeping, used by astronomers only: TE was available in the form of a correction to UT, whose definitive values were published with a delay of about 1 year; its precision of reading was lower than that of UT by a factor of about 100; and it did not follow solar time. Consequently, UT remains the time-scale for practical use.

An effort to establish world coordination was made in 1912, with the creation of the Bureau International de l'Heure (BIH) at the Paris Observatory. The mission of the BIH was to construct a reference time-scale, a form of universal time, based on astronomical observations and time signals provided by the astronomers, and to give the difference between this ‘reference time’ and the emission times of time signals. These differences were of the order of tenths of a second in the 1940s. It must be said that no such coordination ever existed for the determination of TE.

A form of universal time called UT1 is derived by eliminating the effects of polar motion; it is proportional to the rotation angle of the Earth in space. UT1 is affected by a secular deceleration and decade fluctuations. If seasonal fluctuations are eliminated from UT1, we obtain UT2.

## 3. The atomic time era

Experiments with atomic clocks already existed at the moment of the definition of TE; that period can be considered a ‘transition time’ between the astronomical and the physical time-scales. In 1955, Essen & Parry developed the first operational caesium frequency standard at the National Physical Laboratory (NPL, UK), characterized by an accuracy of the order of 10^{−10}, the highest at that moment. In 1958 the United States Naval Observatory (USNO) determined the frequency of the caesium transition in terms of the ephemeris second as the result of a programme of worldwide observations with the Markowitz Moon camera (Markowitz *et al*. 1958).

TE was essentially based on lunar motion. It relied on the dynamical theory of the Moon and on observations; any improvement of the theory would imply a redefinition of the scale. The ephemeris second had been defined as a fraction of the tropical year 1900; it had thus been defined in retrospect and was not likely to be reproduced, a clear drawback. In 1967, the 13th Conférence Générale des Poids et Mesures adopted a new definition of the second (*Metrologia* 1968), now called the SI second, using the value of the caesium frequency in terms of the ephemeris second determined by Markowitz & Hall.

Laboratory caesium standards were constructed in a number of other laboratories (National Bureau of Standards, USA; Laboratoire Suisse des Recherches Horlogères) and commercial caesium standards became available a short time later. These allowed independent atomic time-scales (TA(*k*)) and led to two challenges: comparing them at the level of performance of the standards, and averaging them to produce a mean atomic scale more uniform and reliable than the individual ones.

The acceptance of international atomic time (TAI) was not straightforward. It is an integrated time-scale and, as opposed to astronomical scales, its uncertainties do not decrease with the improvement of observations; they accumulate, so the total error grows with time. At some point, therefore, the error of an atomic time-scale exceeds that of the reading of astronomical time. As an example, the error accumulated on TAI since 1977 may be about 20 μs; the uncertainty on UT1 is now 10 μs. However, we also have to consider the accuracy of the theory of motion of celestial bodies and the nature of the uncertainties. In the example of UT1, the uncertainties of TAI, which have long-term variations over decades, do not matter because the long-term irregularities of UT1 have no precise theoretical modelling. Another objection to TAI was its artificial character; the frequency used to define the unit was not derived from a spontaneous phenomenon, it was generated by a device.

However, in spite of many objections, atomic time was slowly accepted. The adoption of the BIH's atomic time-scale was recommended by the IAU in 1967, by the International Union of Radio Science (URSI 1969) and by the International Radio Consultative Committee of the International Telecommunications Union (CCIR 1970). The 14th CGPM officially adopted TAI in 1971. By 1968, clock comparisons had much evolved: the LORAN-C navigation system, clock transportation, frequencies broadcast by VLF and clock synchronization by television allowed time transfer with uncertainties of the order of 1–10 μs.

The scale named TA(BIH) was based on three independent local atomic scales maintained in a few laboratories in Europe and the USA, but it came to be based directly on the contributions of individual clocks when it became TAI in 1971. The new strategy established at the BIH for the computation of TAI should be considered the first step in international atomic timekeeping. The BIH time-scale relied on a non-negligible number of atomic clocks in time laboratories; these time laboratories provided access to the scale, and the time community enlarged and started a period of fruitful cooperation that continues today. The algorithm Algos (Guinot & Thomas 1988; Audoin & Guinot 2001), based on differences of clock readings, was developed and set the basis for the calculation of TAI still in use today.

TAI is a continuous reference time-scale, is used for scientific applications and has never been disseminated directly; Coordinated Universal Time (UTC), approximating UT1, continues to rule the world because it is needed in real time for many specific applications including astronomical navigation, geodesy, telescope settings, space navigation, satellite tracking and so on. UTC has been defined to provide a time-scale that approximates UT1 to better than one second, a tolerance coming from the needs of astronomical navigation at that moment. Therefore, UTC has differed from TAI since 1972 by an integer number of seconds, changed when necessary by insertion of a *leap second* to maintain |UT1−UTC|<0.9 s.

Although this system works well, it seems that there are no applications today that require the approximation to UT1 provided by UTC. Access to UT1 can be satisfied by other means: in the case of ephemerides for astronomical navigation and civil applications, where a precision of the order of one second is required, a 2 year prediction would allow the preparation and distribution of ephemerides 1 year in advance; for more exacting users, this dissemination is already satisfied in the form of [UT1−UTC] or [UT1−TAI] by the International Earth Rotation and Reference Systems Service (IERS). Improvements in the dissemination of these quantities are to be expected, most probably through the satellite navigation systems. Leap seconds become increasingly cumbersome, introducing an ambiguity in dating events when they occur. This leads to the proliferation of continuous time-scales parallel to TAI, offset by an integer number of seconds, which puts the unification of time at risk. With the progress of communications, other means to provide UT1 in real time can be conceived, and the future of UTC (i.e. whether or not we need to continue to insert leap seconds) is being discussed in international forums (Nelson *et al*. 2001; Arias & Guinot in press).
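The bookkeeping implied by this tolerance can be sketched in a few lines. This is a minimal illustration with hypothetical function names and made-up numbers; in reality the decision to insert a leap second is taken by the IERS from measured and predicted values of UT1−UTC.

```python
# Illustrative sketch of the UTC rules described above (hypothetical names).
# UTC differs from TAI by an integer number of seconds; a leap second is
# inserted when the predicted |UT1 - UTC| approaches the 0.9 s tolerance.

def needs_leap_second(ut1_minus_utc_predicted: float, tolerance: float = 0.9) -> bool:
    """True if the predicted UT1-UTC excursion calls for a leap second."""
    return abs(ut1_minus_utc_predicted) >= tolerance

def utc_from_tai(tai_seconds: float, leap_count: int) -> float:
    """UTC = TAI minus the accumulated integer offset (leap_count seconds)."""
    return tai_seconds - leap_count

# Made-up example: a predicted UT1-UTC of -0.92 s would trigger a leap second.
due = needs_leap_second(-0.92)
```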

TAI and UTC have numerous applications in time synchronization at all levels of precision; from the minute needed by the general public to the nanoseconds required by the most demanding applications. TAI is the basis of time-scales used in dynamics and modelling the motions of artificial and natural celestial bodies, with applications in the exploration of the universe, tests of theories, geodesy, geophysics and studies of the environment. Relativistic effects are important in all these applications.

### (a) Proper and coordinate time

Until the arrival of general relativity, a time-scale based on Newtonian dynamics was considered equivalent to an atomic time-scale based on the reproducibility of a local phenomenon. In the framework of general relativity, however, a distinction must be made between *proper time* and *coordinate time*. Local time can only be measured by a clock; it is the time in the vicinity of the observer. A clock defines its own time-scale; this scale is proper time. Coordinate time is the time defined for a region of space with a system of space-time coordinates that has been chosen arbitrarily. Time metrology provides practical representations for (i) the unit of proper time used by an observer in local experiments and (ii) the different coordinate times requiring models in spaces that exceed the confines of the laboratory. The SI second, as defined in 1967, should be considered as the unit of proper time, provided that corrections for the motion of the atoms with respect to the device measuring their frequency are applied. TAI is a weighted average of clock readings (proper time) after transformation into a conventionally defined coordinate time called terrestrial time (TT). The definition of TT ensures that the rate of TAI is very close to the rate of a caesium clock fixed on the rotating geoid. TT is a coordinate time in a geocentric reference system. One also needs a coordinate time in a system centred at the barycentre of the Solar System. These coordinate times are mathematically linked to the SI second as a unit of proper time, and their realizations are based on TAI. These systems have been described by the IAU (1991) as the geocentric and barycentric coordinate systems, respectively.

The relativistic treatment of time measurements is today essential in time metrology and its precise applications. Nevertheless, for the sake of simplicity, we shall use the language of classical mechanics in the following.

## 4. The properties required of a time-scale

It has been said that the phenomenon giving origin to a time-scale should be reproducible, with a frequency that is, ideally, constant. Since this is never exactly the case, we must be able to identify the causes of its variation and eliminate, or at least minimize, them. The realizations of the SI second differ from the ideal duration set in its definition; we should be capable of reducing these differences in the process of constructing a time-scale. One solution is to average. We can average in time, but a process that keeps memory of the frequency is then essential, since the measurements are not simultaneous. Or we can average over the realizations of the second in different laboratories; in this case there are two sources of uncertainty: the individual experiments that converge to a realization, and the algorithm used to construct the scale.

TAI is an integrated time-scale; it is built by accumulation of seconds, which means that uncertainties are also accumulated. This was one of the criticisms raised in different forums against adopting atomic time to replace TE. The main point was the increasing long-term discrepancy, which must be taken into account in some applications. On the other hand, accuracy put the votes in favour of TAI. To construct an integrated (atomic) time-scale the following elements are necessary: (i) the phenomenon, (ii) the definition of the unit, (iii) realizations of the unit, that is, frequency standards, called clocks if they operate continuously, and (iv) an algorithm of calculation adapted to the required characteristics of the scale. A time-scale is characterized by its reliability, frequency stability, frequency accuracy and accessibility.

The *reliability* of a time-scale is closely linked with the reliability of the clocks whose measurements are used for its construction; at the same time, redundancy is also required. In the case of the international reference time-scale, a large number of clocks is required; this number is today about 300, most of them high-performance commercial caesium atomic standards and active auto-tuned hydrogen masers.

The *frequency stability* of a time-scale is the capacity to maintain a fixed ratio between its unitary scale interval and its theoretical counterpart. A means of estimating the frequency stability of a time-scale is by calculating the Allan variance, which is the two-sample variance designed for the statistical analysis of time-series and depends on the sampling interval.
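As an illustration of the two-sample statistic, the following is a minimal, non-overlapping Allan deviation estimator in Python; the function name and the synthetic data are illustrative only (production analyses use overlapping estimators and dedicated tools).

```python
# Minimal sketch of the two-sample (Allan) variance for fractional frequency
# data. The samples y[k] are assumed to be fractional frequency values already
# averaged over the sampling interval tau, as in the definition quoted above.

def allan_deviation(y, tau):
    """Non-overlapping Allan deviation of adjacent fractional frequency
    averages y at averaging time tau (tau returned unchanged, for labelling).

        sigma_y^2(tau) = (1/2) < (y[k+1] - y[k])^2 >
    """
    diffs = [(y[k + 1] - y[k]) ** 2 for k in range(len(y) - 1)]
    avar = sum(diffs) / (2 * (len(y) - 1))
    return tau, avar ** 0.5

# Synthetic white-frequency-noise data at the level of a commercial clock.
samples = [1e-14, -2e-14, 1.5e-14, -0.5e-14, 0.8e-14, -1.2e-14]
tau, adev = allan_deviation(samples, tau=5 * 86400)  # 5 day sampling interval
```

The dependence on the sampling interval mentioned in the text appears when the estimator is re-evaluated on data averaged over longer and longer intervals τ.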

The *frequency accuracy* of a time-scale is the aptitude of its unitary scale interval to reproduce its theoretical counterpart. After the calculation of a time-scale on the basis of an algorithm conferring the required frequency stability, frequency accuracy is improved by comparing the frequency (rate) of the time-scale with that of primary frequency standards (PFS) and by applying, if necessary, frequency (rate) corrections.

The *accessibility* of a worldwide time-scale is its capacity to provide a means of dating events for everyone. It depends on the precision that is required. We consider here only the ultimate precision which, as we shall see, requires a delay of a few tens of days in order to reach the long-term frequency stability required for a reference time-scale. Besides, the process needs to be designed in such a way that the measurement noise is eliminated or at least minimized; this requires a minimum number of data sampling intervals.

We will see in the following sections how these conditions are fulfilled for TAI.

## 5. Realizations of the SI second

### (a) Primary frequency standards

The best realizations of the SI second are those of the PFS, developed and operated in a few time laboratories. Table 1 presents the list of such standards that have contributed to TAI in the past 3 years, together with their type B uncertainties, as declared by the laboratories and published in the monthly *Circular T* of the Bureau International des Poids et Mesures (BIPM). Today the definition of the SI second is realized by the best PFS with an accuracy of order 10^{−15}.

Results from PFS are compared with TAI and published in *Circular T*. For this, we distinguish uncertainties originating in the laboratory operating the standard (those reported in an evaluation) from uncertainties in the link between the clock in the laboratory and TAI (these are evaluated at the BIPM). The uncertainty in the link with TAI is estimated from its components: that coming from the time transfer technique and, where applicable, that coming from the link between the clock providing access to TAI and the time transfer device (Petit 2000).

The reports of evaluation of PFS provide the average fractional frequency difference between the standard and a clock participating in the calculation of TAI over an interval of the standard's operation. A complete characterization of the uncertainty is provided, with a reference to a peer-reviewed publication for the type B uncertainty. The type A uncertainty represents the instability of the primary standard and can be increased by the ‘dead time’, that is, the periods within an interval of the standard's evaluation, for which no comparison data are available. During these periods, the frequency of the primary standard relative to the reference clock is interpolated and its uncertainty depends on how well the frequency stability of the clock is known. TAI is calculated every 5 days, for the so-called *standard dates* (modified Julian dates ending in 4 and 9, at 0 h UTC). If the interval of frequency comparison between the primary standard and the clock is between two standard dates, no interpolation is necessary, and the instability of the clock does not affect the result of the comparison. If this is not the case, the interval of measurement needs to be transferred to a ‘standard TAI interval’ and the uncertainty of the link between the primary standard and the clock in the laboratory is taken into account.
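The rule for the standard dates quoted above is simple enough to state in code. This is a small sketch with illustrative function names, not any BIPM software: MJDs ending in 4 or 9 are exactly the integers congruent to 4 modulo 5.

```python
# Sketch of the "standard dates" rule: TAI is computed every 5 days, at
# modified Julian dates (MJD) whose last digit is 4 or 9, at 0 h UTC.

def is_standard_date(mjd: int) -> bool:
    """True if an integer MJD is a TAI standard date (ends in 4 or 9)."""
    return mjd % 5 == 4  # ...4, ...9 are exactly the residue-4 class mod 5

def next_standard_date(mjd: int) -> int:
    """Smallest standard date greater than or equal to mjd."""
    while not is_standard_date(mjd):
        mjd += 1
    return mjd
```

A PFS evaluation interval whose end-points fall on such dates needs no interpolation against the reference clock; otherwise the measurement must be carried to the nearest standard interval, as described in the text.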

### (b) Commercial clocks

Commercial caesium clocks installed in time laboratories realize the atomic second with a relative frequency accuracy in the range of 10^{−12} to 10^{−13}, depending on the model. They are characterized by a long-term frequency stability of the order of 10^{−14}. In addition to caesium standards, active hydrogen masers are installed in many laboratories; they are highly stable in frequency, but over shorter averaging intervals (sometimes better than 1×10^{−15} over 1 day). Both types of clocks are used as the time reference in laboratories; hydrogen masers are used as the reference for frequency comparisons of PFS. Both caesium and hydrogen maser standards contribute data for TAI and they add to the reliability and frequency stability of the scale. They do not contribute to the realization of the unit, which relies on the primary standards.

## 6. The algorithm of calculation of TAI

The reference time-scale TAI has been calculated at the BIPM since 1988, continuing the activity that had been the responsibility of the BIH. The algorithm of calculation, Algos, was developed at the BIH in the 1970s and fixed the principles of the construction of TAI. The improvements in clock technology, clock comparison methods and the physical and mathematical models applied acted upon the algorithm without changing its general principles and it remains the basis of the calculation of TAI at the BIPM.

The calculation is made in a two-step process. In the first step, the algorithm calculates an average scale making use of clock and clock comparison data. This scale has optimized frequency stability for a given averaging time, but its frequency is not constrained to be accurate. It is called the *free atomic time-scale* (*Echelle Atomique Libre*, EAL). In a second step, frequency corrections based on the data of the PFS are applied to EAL; the result is TAI. UTC is then obtained from TAI by applying the integer number of seconds accumulated through leap seconds.

The primary data used in the calculation of TAI are time and frequency differences at dates that have been chosen depending on the performance of the methods of comparison. In each participating laboratory (*k*), the approximation to UTC denoted UTC(*k*) serves as the reference for local clock differences and frequencies. Comparisons between laboratories *j* and *k* have the form [*UTC*(*j*)−*UTC*(*k*)]. The dissemination of a global time-scale (*T*) takes the form of the time-series [*T*−*UTC*(*k*)] at selected dates.

### (a) Clock weighting and frequency prediction

Fifty-six time laboratories in national metrology institutes and astronomical and geodetic observatories participated in the calculation of TAI at the BIPM in January 2005. They contribute data each month from about 300 commercial clocks.

The algorithm treats, in deferred time, clock differences of the form [*UTC*(*j*)−*UTC*(*k*)] over a 30 day period, with measurements every 5 days at the standard dates. To improve the frequency stability of EAL, a weighting procedure is applied to the clocks. The weight of a clock is considered constant during the 30 day period of computation and continuity with the previous period is assured by clock frequency prediction, a procedure that renders the scale insensitive to changes in the set of participating clocks (on average, 10% of the total number of clocks do not participate in a given month of calculation). The frequency prediction model is random walk frequency modulation, which corresponds to the predominant noise of commercial caesium clocks for averaging times of around 30 days; all clocks in TAI are treated with this same frequency prediction model. However, a revision now appears necessary to take into account the increasing number of participating hydrogen masers, for which the predominant frequency behaviour is a linear drift (about 20% of the total number).

To avoid the possibility that very stable clocks indefinitely increase their weights and come to dominate the scale, a maximum relative weight is fixed for every period of calculation. The maximum relative weight is fixed as a function of the number of participating clocks (*N*) as *ω*_{max}=*A*/*N* (*A* is a constant equal to 2.5 at present), allowing a clock to reach the maximum weight when its variance computed from 12 consecutive 30 day samples is, at most, 5.8×10^{−15} (Azoubib 2001). The algorithm is able to detect abnormal clock behaviour and disregard a clock if necessary; this is done in an iterative process that starts with the weights obtained in the previous month and serves as an indicator of the clock's behaviour in the month of computation.

The present frequency stability of TAI is obtained by processing clock and clock comparison data at 5 day intervals over a monthly analysis, with a delay of publication of about 15 days after the last data report date. In the very long-term, over a decade, the frequency stability is maintained by PFS and is limited by the accuracy at the level of 10^{−15}, assuming that the present performance of PFS remain constant. The medium-term frequency stability of EAL, expressed in terms of an Allan deviation, is estimated to be 0.4×10^{−15} for averaging times of 20–40 days (Petit in press). The frequency fluctuations of clocks that serve to characterize their weights are evaluated with respect to EAL.

### (b) The frequency accuracy of TAI

The frequency accuracy of TAI is improved by using the frequency measurements of the PFS maintained in the time metrology laboratories. According to the directives of the Consultative Committee for Time and Frequency (CCTF), a report of a primary frequency standard sent to the BIPM for maintaining the TAI frequency accuracy should include the measurement of the frequency of the standard relative to that of a clock participating in TAI and a complete characterization of its uncertainty as published in a peer-reviewed journal. Six caesium fountains and two optically pumped caesium beam standards have contributed over the last 2 years to TAI, more or less regularly, with measurements over 10–30 day intervals. Two magnetically deflected caesium beam standards of the PTB (CS1 and CS2) are operated in a continuous manner and permanently contribute to both the frequency accuracy of TAI and the frequency stability of EAL as a clock (table 1). In 2005, the definition of the SI second is realized, at best, by the PFS with an accuracy of order 10^{−15}.

The frequency accuracy of TAI is characterized by the fractional deviation (*d*) of the unitary scale interval of TAI from the duration of the SI second on the rotating geoid. Based on the frequency measurements of the PFS reported to the BIPM during a 12 month period (the last being the month of calculation), the fractional deviation *d* is evaluated together with its uncertainty. A filter is applied to the individual measurements that takes into account the correlation terms of successive measurements reported for the same standard and a model of the frequency instability of TAI (Azoubib *et al*. 1977).

In order to keep the unitary scale interval of TAI as close as possible to its definition, a process called *frequency steering* is used. It consists of applying a correction to the frequency of EAL when *d* exceeds a tolerance value. These frequency corrections should be smaller than the frequency fluctuations of the time-scale in order to preserve its long-term frequency stability. Over the period 1998–2004, frequency steering corrections of ±1×10^{−15} were applied, when necessary, for intervals of at least two months when *d* exceeded 2.5 times its uncertainty. This frequency steering compensates for a trend of decreasing frequency observed in commercial caesium clocks, which persists today and remains unexplained. The values of *d* showed that the unitary scale interval of TAI had nevertheless deviated significantly from its definition and that the steering procedure needed revision. A different strategy was therefore adopted in July 2004: a frequency correction of variable magnitude, up to 0.7×10^{−15}, is now applied for intervals of at least one month if the value of *d* reaches 2.5 times its uncertainty.
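The post-July-2004 steering rule can be sketched in a few lines. This is an illustrative reading of the rule as stated in the text, not the BIPM's published algorithm; the function name and sign convention are assumptions:

```python
def steering_correction(d, u_d, max_step=0.7e-15):
    """Monthly frequency-steering decision for EAL (illustrative sketch).

    d        : fractional deviation of the TAI scale interval from the SI second
    u_d      : standard uncertainty of d
    max_step : largest correction applied in one step (0.7e-15, post-July-2004)

    Returns a correction of opposite sign to d, or 0.0 when d is not
    significant at the 2.5-sigma level.
    """
    if abs(d) <= 2.5 * u_d:
        return 0.0
    # steer back towards zero, but never by more than max_step at a time
    return -min(abs(d), max_step) * (1 if d > 0 else -1)

print(steering_correction(1.2e-15, 0.2e-15))  # -7e-16 (capped at max_step)
print(steering_correction(0.3e-15, 0.2e-15))  # 0.0 (not significant)
```

Capping the step size embodies the requirement that corrections stay smaller than the time-scale's own frequency fluctuations, so that long-term stability is preserved.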

### (c) Clock comparison in TAI

The calculation of a time-scale on the basis of the readings of clocks located in different laboratories requires methods to compare remote clocks. A prime requisite is that the methods of remote time transfer do not degrade the frequency stability of the clocks. In the past this was a major limitation in the construction of a time-scale from comparisons of commercial clocks. The uncertainty of clock comparison today lies between a few tens of nanoseconds and one nanosecond for the best links, sufficient for comparing the best atomic standards over integration times of a few days. This assertion is strictly valid for frequency comparisons, where only the type A uncertainty affects the process. In the case of time comparisons, the type B uncertainty coming from the calibration should also be taken into account. At present, calibration contributes an uncertainty that surpasses the statistical component and that can reach a few tens of nanoseconds for non-calibrated equipment (see §6*c*(iii)). It follows that repeated equipment calibrations are indispensable for clock comparison.

Clock comparisons for TAI are organized following a network of international time links that have been established by the BIPM. It is a star-like scheme with short-base-line links from laboratories to a pivot laboratory within a continent and long base-lines providing the links between the pivot points. It should be noted that participating laboratories provide time transfer data in the form of a comparison of their UTC(*k*) with respect to another time-scale ([*T*−*UTC*(*k*)]) or another local realization of UTC ([*UTC*(*j*)−*UTC*(*k*)]).

#### (i) Use of *GPS* for time transfer

The use of GPS satellites in time comparisons, introduced in the 1980s, was a major improvement in the construction and dissemination of time-scales. The method consists of using the signal broadcast by GPS satellites, which contains timing and positioning information. It is a one-way method, the signal being emitted by a satellite and received by specific equipment installed in a laboratory. For this purpose, GPS receivers have been developed and commercialized specifically for time transfer. The common-view method proposed by Allan & Weiss (1980) consists of the reception of the same emitted signal by two or more receivers. It is still in use for clock comparison because it eliminates the instability of the satellite clocks. The Russian global navigation satellite system, GLONASS, is not used for time comparison in TAI on a routine basis, since its satellite constellation is not yet complete and stable. Nevertheless, studies conducted at the BIPM and in other laboratories have shown (Azoubib & Lewandowski 2000) that the system is potentially useful for accurate time transfer.
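The cancellation of the satellite clock error in the common-view method can be shown in a few lines. All names and numbers below are illustrative:

```python
def common_view(clock_a_minus_gps, clock_b_minus_gps):
    """Common-view difference between two laboratory clocks.

    Each argument is the measured offset [laboratory clock - GPS satellite
    time] taken at the same epoch from the same satellite. Differencing the
    two observations cancels the satellite clock error entirely, along with
    most errors common to both signal paths.
    """
    return clock_a_minus_gps - clock_b_minus_gps

# Example: both laboratories track the same satellite, whose clock is 50 ns off.
sat_error = 50e-9
clock_a, clock_b = 120e-9, 80e-9     # true offsets from an ideal time-scale
obs_a = clock_a - sat_error          # what laboratory A actually measures
obs_b = clock_b - sat_error          # what laboratory B actually measures
print(common_view(obs_a, obs_b))     # ≈ 40 ns: the satellite error has cancelled
```

Errors that are *not* common to the two paths, such as the differing ionospheric delays on long base-lines, survive the differencing and must be corrected separately, as discussed below.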

The CCTF established a common format and standard formulae and parameters to facilitate data exchange for time dissemination and transfer. Modern GNSS (Global Navigation Satellite Systems) receivers installed in the national time laboratories that contribute to the calculation of TAI provide time transfer data in an automated way according to these directives (Lewandowski *et al*. 1997). GNSS transfer data are provided in the form of the difference between a clock in a laboratory (generally the one realizing UTC(*k*)) and GPS time.

As a result of new hardware and improvements in data processing and modelling, the uncertainty of clock synchronization via GPS fell from a few hundred nanoseconds at the beginning of the 1980s to 1 ns today. Old single-channel, single-frequency C/A-code receivers are being replaced in time laboratories by multi-channel receivers, which allow the simultaneous observation of all satellites above the horizon. Ionospheric delay introduces one of the most significant errors in GPS time comparison, particularly in the case of clocks compared over long base-lines. Dual-frequency receivers installed in some of the participating laboratories allow the removal of the delay introduced by the ionosphere, thus improving the accuracy of time transfer. GPS observations with single-frequency receivers used in regular TAI calculations are corrected for ionospheric delays by making use of ionospheric maps produced by the International GNSS Service (IGS; Wolf & Petit 1999). All GPS links in TAI are corrected for satellite positions using IGS post-processed precise satellite ephemerides.

A pilot project carried out jointly by the BIPM and the IGS between 1998 and 2002 focused on the feasibility of making accurate time and frequency comparisons using GPS phase and code measurements (Ray 1999). As a result of studies undertaken during this project, precise-code data from geodetic-type, multi-channel, dual-frequency GPS receivers have been introduced since June 2004 into the calculation of TAI (Defraigne & Petit 2003). These links, denominated GPS P3 links, provide ionosphere-free data and allow clock comparisons with nanosecond uncertainty or better. Studies of the combination of phase and code measurements indicate that the precision of time comparisons could be improved by a factor of two with respect to code measurements alone.
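The ionosphere-free ("P3") combination exploits the fact that the ionospheric group delay scales as the inverse square of the carrier frequency, so a suitable linear combination of the two pseudoranges removes the first-order term. A minimal sketch with illustrative numbers (the GPS L1/L2 carrier frequencies are standard values; the function name is an assumption):

```python
# GPS L1 and L2 carrier frequencies (Hz)
F1 = 1575.42e6
F2 = 1227.60e6

def ionosphere_free(p1, p2, f1=F1, f2=F2):
    """Ionosphere-free ('P3') combination of dual-frequency pseudoranges.

    Because the ionospheric group delay scales as 1/f**2, the combination
    (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2) removes the first-order term.
    """
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Example: a 20 m delay-free pseudorange, plus an ionospheric delay of
# 5 m on L1; the same delay appears scaled by (f1/f2)**2 on L2.
rho = 20.0
iono_l1 = 5.0
p1 = rho + iono_l1
p2 = rho + iono_l1 * (F1 / F2) ** 2
print(ionosphere_free(p1, p2))   # ≈ 20.0: the ionospheric delay is removed
```

The price of the combination is an amplified measurement noise, which is why combining the precise code with the much less noisy carrier phase promises the further factor-of-two gain mentioned above.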

#### (ii) Two-way satellite time and frequency transfer

After about 25 years of experimentation, the method of two-way satellite time and frequency transfer (TWSTFT) started to be used extensively in TAI at the beginning of the twenty-first century. The TWSTFT technique uses a geostationary telecommunication satellite to compare clocks located at two receiving-emitting stations. Two-way observations are scheduled between pairs of laboratories so that their clocks are compared simultaneously at both ends of the base-line. The clocks are compared directly through the transponder of the satellite. The two-way method has the advantage over the one-way method of eliminating or reducing some sources of systematic error, such as ionospheric and tropospheric delays and the uncertainty in the positions of the satellite and ground stations. The differences between the two clocks at the two stations are computed directly. The first TWSTFT link was introduced into TAI in 1999 (Azoubib & Lewandowski 1999). Since then, the number of laboratories operating two-way equipment has increased, allowing links within and between North America, Europe and the Asia–Pacific region. Until mid-2004, 5 min measurement sessions were made 3 days per week, preventing the technique from reaching its highest potential performance, which is sub-nanosecond uncertainty. With the installation of automated stations in most laboratories, some of the TWSTFT link observations in TAI are now made at daily or even sub-daily intervals, bringing the uncertainty of comparison below one nanosecond.
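The first-order cancellation of path delays in the two-way method can be sketched as follows. The measurement model is deliberately simplified (equal up- and down-link delays, no Sagnac, equipment or transponder terms), so this is an illustration of the principle rather than an operational TWSTFT reduction:

```python
def twstft_clock_difference(meas_a, meas_b):
    """Two-way time transfer: difference of clocks A and B (simplified model).

    meas_a : counter reading at station A = (clock_A - clock_B) + delay(B->A)
    meas_b : counter reading at station B = (clock_B - clock_A) + delay(A->B)

    Half the difference of the two simultaneous readings cancels the path
    delay to first order, since each signal crosses (nearly) the same path.
    """
    return 0.5 * (meas_a - meas_b)

# Example: clocks differ by 30 ns; one-way path via a geostationary
# satellite takes about 0.25 s, identical in both directions here.
dt = 30e-9
path = 0.25
meas_a = dt + path                               # A receives B's signal
meas_b = -dt + path                              # B receives A's signal
print(twstft_clock_difference(meas_a, meas_b))   # ≈ 30 ns
```

In practice the residual asymmetries of the two paths, not the 0.25 s delay itself, set the sub-nanosecond limit of the technique.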

#### (iii) Calibration of time transfer equipment and link uncertainties

Repeated calibration of a laboratory's time transfer equipment is fundamental to the frequency stability of TAI and its dissemination. Campaigns of differential calibration of GPS time equipment are organized by the BIPM, which compensate for internal delays in laboratories by comparing their equipment with travelling GPS equipment. Successive campaigns with BIPM travelling receivers have been carried out since 2001, with the result that more than 50% of the GPS equipment used in TAI has been calibrated (Petit *et al*. 2001; Lewandowski & Moussay 2002, 2003*a*,*b*; Lewandowski & Tisserand 2004*a*,*b*,*c*). The situation for the TWSTFT links is rather different; the laboratories organize, with the support of the BIPM, calibrations of the TWSTFT equipment (Matsakis 2003; Cordara *et al*. 2004; Lewandowski *et al*. 2004; Piester *et al*. in press). In the case of links between non-calibrated TW stations, a calibration using the corresponding GPS backup link is introduced to include them in the calculation of TAI.

Estimates of type A and type B uncertainties are made by the BIPM for all time links in TAI (Azoubib & Lewandowski 2003). Some links have been selected (see table 2) to show examples of their values, indicated as *u*_{A} and *u*_{B}, respectively. The statistical uncertainty *u*_{A} is evaluated by taking into account the level of phase noise in the raw data; *u*_{B} is the uncertainty of the calibration.
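Following the usual rules for combining independent uncertainty components, the overall link uncertainty is the quadrature sum of the two. A one-line sketch with illustrative values (the 0.5 ns and 1 ns figures are examples, not values from table 2):

```python
import math

def link_uncertainty(u_a, u_b):
    """Combined standard uncertainty of a time link: statistical (type A)
    and calibration (type B) components, assumed independent, summed in
    quadrature."""
    return math.hypot(u_a, u_b)

# e.g. a link with u_A = 0.5 ns and u_B = 1 ns
print(link_uncertainty(0.5e-9, 1.0e-9))   # ≈ 1.12e-09 s
```

The example makes the point of the surrounding text concrete: once *u*_{A} drops to the nanosecond level, the calibration term *u*_{B} dominates the time-comparison budget.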

For two decades, GPS C/A-code observations provided the only tool for clock comparison in TAI, rendering any test of their performance against other methods impossible. The present situation is quite different; the introduction of the TWSTFT technique has made it possible to compare the results of clock comparisons traditionally obtained with the GPS common-view technique against those coming from an independent technique, and has made the system more reliable. For the links where the two techniques are available, both GPS and TWSTFT links are computed; the better is used in the calculation of TAI, the other kept as a backup. The GPS P3 links have further increased the reliability of the system of time links, providing a method of assessing the performance of the TWSTFT technique. Comparison of results obtained on the same base-lines with the different techniques shows equivalent performances (1 ns or less) for geodetic-type dual-frequency GPS receivers and for TWSTFT equipment when the two-way sessions have daily regularity.

At the moment, 80% of the links in TAI are obtained using GPS equipment (65% with GPS time receivers; 15% with geodetic-type GPS receivers) and about 14% of the links are provided by TWSTFT observations.

## 7. Dissemination and access to the international time-scales

TAI and UTC are not time-scales represented by clocks in real time. They are calculated in a deferred-time process so that they benefit from high frequency accuracy and long-term frequency stability. TAI is the uniform time-scale used as the primary reference for scientific applications. UTC is derived from TAI by the application of an integer number of seconds; it is the time-scale used for civil timekeeping and is realized locally in national laboratories.

TAI and UTC are disseminated every month by the BIPM *Circular T* (available on the BIPM Web site: www.bipm.org). Access to UTC is provided in the form of the differences [*UTC*−*UTC*(*k*)] and their uncertainties (to the tenth of a nanosecond; Lewandowski *et al*. 2005). These values make the local approximations UTC(*k*) traceable to UTC. Adding the integer number of seconds [TAI−UTC] to UTC leads to TAI; this difference remains at 32 s, at least until the end of 2005. The values of the fractional deviation *d* of the unitary scale interval of TAI from the SI second are published in *Circular T*, giving access to the second as realized by the PFS. *Circular T* thus provides wide access to the best realization of the second through the estimation of *d* on the basis of the individual PFS measurements reported over a year.
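The relation between UTC and TAI is a pure integer-second offset, so the conversion is trivial arithmetic; a minimal sketch (the 32 s value is the offset in force in 2005, which changes only when a leap second is inserted into UTC):

```python
def tai_from_utc(utc_seconds, tai_minus_utc=32):
    """Convert a UTC epoch (in seconds) to TAI by adding the integer
    offset [TAI - UTC]; 32 s is the value valid in 2005."""
    return utc_seconds + tai_minus_utc

print(tai_from_utc(0))      # 32
print(tai_from_utc(100.5))  # 132.5
```

Real implementations must consult a table of leap-second dates, since the offset is a step function of the epoch rather than a constant.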

GPS satellites disseminate the system time-scale, denominated GPS time, as well as UTC as realized by the USNO. GPS time results from a clock combination; it is steered to UTC(USNO) (modulo 1 s), from which it cannot differ by more than one microsecond. UTC(USNO) represents UTC at the level of a few nanoseconds, and its dissemination via GPS is the widest-reaching real-time approximation of TAI and UTC. Similarly, GLONASS disseminates a Russian realization of UTC, UTC(SU). Similar features will be adopted for Galileo, the future European satellite positioning system.

## 8. Future improvements

A major concern for time metrology is the comparison of standards at their highest level of performance. Caesium fountains have stabilities between 10^{−13} and 10^{−14} for averaging intervals of 1 s; the GPS common-view method is inefficient for comparing them, since its averaging intervals are far from those at which the fountains perform best. Carrier-phase and code measurements from geodetic-type GPS receivers have shown themselves to be well adapted to the comparison of such highly stable standards; work is progressing in this direction to take into account, in the phase measurements, the corrections that could affect the potential precision of the method. Clock comparison by TWSTFT reaches its full potential when observations are scheduled at sub-daily intervals; comparisons of caesium fountains using this technique have provided excellent results. The work of national time laboratories is aimed at improving the stability and accuracy of the PFS; in the near future we shall have to face the necessity of further improving the methods for precise time and frequency transfer.

In 2001 the CCTF recommended that, in view of the possibility of directly comparing microwave and optical frequencies, frequency measurements of other atomic transitions relative to that of caesium 133 be considered as secondary representations of the second. Various laboratories are working on this and have started operating optical frequency standards. Experts representing the Consultative Committee for Length (CCL) and the CCTF have agreed that the SI value of the unperturbed frequency of a quantum transition suitable as a secondary representation of the second (i) must have an uncertainty that is evaluated and documented so as to meet the requirements adopted for primary frequency standards used in TAI and (ii) that this uncertainty should be no larger than about a factor of 10 greater than that of the primary standards of the day that serve as the best realizations of the second. Based on these requirements, it has been recommended that the unperturbed ground-state hyperfine quantum transition of ^{87}Rb may be used as a secondary representation of the second, with a frequency of *f*_{Rb}=6 834 682 610.904 324 Hz and an estimated relative standard uncertainty (1*σ*) of 3×10^{−15}. The Bureau National de Métrologie (BNM, France) has developed a dual Cs–Rb fountain for which results have already been published (Sortais *et al*. 2000; Bize *et al*. 2004). We are on the threshold of discussing a new realization of the SI second.

## Footnotes

One contribution of 14 to a Discussion Meeting ‘The fundamental constants of physics, precision measurements and the base units of the SI’.

*Circular T*, a monthly publication of the BIPM.

© 2005 The Royal Society