## Abstract

Conventional definitions of ‘near fields’ set bounds that describe where near fields may be found. These definitions tell us nothing about what near fields are, why they exist or how they work. In 1893, Heaviside derived the electromagnetic energy velocity for plane waves. Subsequent work demonstrated that although energy moves in synchronicity with radiated electromagnetic fields at the speed of light, in reactive fields the energy velocity slows down, converging to zero in the case of static fields. Combining Heaviside's energy velocity relation with the field Lagrangian yields a simple parametrization for the reactivity of electromagnetic fields that provides profound insights into the behaviour of electromagnetic systems. Fields guide energy. As waves interfere, they guide energy along paths that may be substantially different from the trajectories of the waves themselves. The results of this paper not only resolve the long-standing paradox of runaway acceleration from radiation reaction, but also make clear that pilot wave theory is the natural and logical consequence of the need for quantum mechanics to correspond to the macroscopic results of the classical electromagnetic theory.

This article is part of the theme issue ‘Celebrating 125 years of Oliver Heaviside's ‘Electromagnetic Theory’’.

## 1. Introduction

Ask a half dozen electromagnetic practitioners to define ‘near field’ and you will likely obtain a dozen answers [1]. For an electrically small dipole source, electrical engineer Harold Wheeler (1903–1996) identified that the reactive field terms dominate within a radius *r* = *λ*/2π; the radiation terms dominate beyond. Wheeler dubbed this the ‘radiansphere’ [2,3]. Electromagnetic compatibility engineers take five times this distance (*r* = 5*λ*/2π) as the limit at which the field impedance converges to 376.7 Ω, the impedance of free space. For larger sources, such as an antenna of aperture size ‘*D*’, the radiated fields approximate far-field plane waves at *r* = *D*^{2}/*λ* to an eighth of a wavelength precision and at *r* = 2*D*^{2}/*λ* to a sixteenth of a wavelength precision. Traditional definitions of ‘near fields’ tell where to find them, not what they are or how they work.
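As a quick illustration, these conventional boundary criteria are easy to evaluate numerically. The sketch below uses illustrative antenna parameters (a 0.1 m wavelength and a 1 m aperture) that are not taken from the text:

```python
import math

def near_field_boundaries(wavelength, aperture):
    """Return the conventional near-field boundary radii in metres."""
    return {
        "radiansphere": wavelength / (2 * math.pi),   # Wheeler: reactive terms dominate inside
        "emc_limit": 5 * wavelength / (2 * math.pi),  # field impedance ~376.7 ohm beyond
        "fraunhofer": 2 * aperture**2 / wavelength,   # plane-wave approximation to lambda/16
    }

# Illustrative example: a 1 m aperture antenna operating at a 0.1 m wavelength
bounds = near_field_boundaries(wavelength=0.1, aperture=1.0)
print(bounds)  # radiansphere ~0.016 m, emc_limit ~0.080 m, fraunhofer 20 m
```

Note how strongly the far-field boundary depends on aperture: for electrically large antennas, 2*D*^{2}/*λ* dwarfs the radiansphere criterion.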

In 1884–1885, John Henry Poynting (1852–1914) [4] and Oliver Heaviside (1850–1925) [5] independently discovered the relation governing how electromagnetic energy moves from place to place. The ‘Poynting vector’, or directed power flow per unit area is given by **S** = **E** × **H**, the cross product of the electric (**E**) and magnetic (**H**) fields. Just a few years later, Heinrich Hertz (1857–1894) caught electromagnetic energy in the act of moving from place to place in a series of simple and striking experiments [6]. Hertz demonstrated that electromagnetic waves propagate at the speed of light, proving the suspected link between electromagnetics and optics.

Heaviside derived a ‘wave velocity’ under the assumption that the energy propagates along with the wave [7]. His wave velocity or energy velocity, **v**_{u}, is the ratio of the Poynting vector to the energy density, *u*:

**v**_{u} = **S**/*u* = (**E** × **H**)/(1/2 *ε*_{0} |**E**|^{2} + 1/2 *μ*_{0} |**H**|^{2}),    (1.1)

where *ε*_{0} = 8.85 × 10^{−12} F m^{−1} is the permittivity and *μ*_{0} = 4π × 10^{−7} H m^{−1} is the permeability of free space. Kirk McDonald provides an excellent discussion of electromagnetic momentum and velocity in a DC circuit [8].

Hertz's experiments dealt with standing waves in which the fields propagate back and forth at the speed of light, but the average energy velocity is zero. The fact that electromagnetic energy velocity may, in general, be less than *c* appears to have first been explicitly noted by Harry Bateman (1882–1946) in 1915 [9]. Recently, Gerald Kaiser re-examined this question, interpreting energy velocity as a local time-dependent characteristic of electromagnetic fields [10].

Clearly, the ratio of electric to magnetic energy relates to energy velocity. The Poynting vector goes to zero if either field goes to zero. A concentration of electric energy with no magnetic energy will be static, as will a concentration of magnetic energy with no electric energy. One form of energy must transform, at least in part, to the other in order for energy to move. Only when both forms of energy coexist at the same location can there be a progression or motion of energy. This observation, due to Oliver Lodge (1851–1940), aids in understanding electromagnetic energy velocity [11]. The remarkable properties of the electric to magnetic field intensity ratio were not appreciated at the time, however.

Only in the 1930s did Sergei Schelkunoff (1897–1992) identify the significance of the ‘field impedance’ in applied electromagnetics [12]. Schelkunoff extended Heaviside's impedance concept from AC electronics to electromagnetic fields. The impedance of the fields, *Z*, is the ratio of the electric to magnetic field intensity, *Z* = |**E**|/|**H**|. Schelkunoff's ground-breaking work identified the distinction between the characteristic impedance of a medium—the natural ratio assumed by propagating fields—and the actual, instantaneous impedance, which may be radically different under interference between multiple waves.

The final ingredient is the difference between the electric and magnetic energy densities at a particular place and time, the Lagrangian, *L* = 1/2 *ε*_{0} |**E**|^{2} − 1/2 *μ*_{0} |**H**|^{2}. Heaviside was sceptical of the Lagrangian or Least Action approach [13]:
> Now at Cambridge, or somewhere else, there is a golden or brazen idol called the Principle of Least Action. Its locality is kept secret, but numerous copies have been made and distributed amongst the mathematical tutors and lecturers at Cambridge, who make the young men fall down and worship the idol.

James Clerk Maxwell (1831–1879) also expressed concern that the abstract nature of Lagrangian formalism severed the link to the physical system under consideration [14]:
> …Lagrange and most of his followers, to whom we are indebted for these methods, have in general confined themselves to a demonstration of them, and, in order to devote their attention to the symbols before them, they have endeavoured to banish all ideas except those of pure quantity, so as not only to dispense with diagrams, but even to get rid of the ideas of velocity, momentum, and energy, after they have been once for all supplanted by symbols in the original equations.

This paper offers a definition for ‘near’ or ‘reactive’ fields by building upon Heaviside's definition of energy velocity using impedance and employing the electromagnetic Lagrangian.

## 2. Theory

An isolated electromagnetic wave in free space exhibits equal amounts of electric and magnetic energy. Thus, the Lagrangian is zero (*L* = 1/2 *ε*_{0} |**E**|^{2} − 1/2 *μ*_{0} |**H**|^{2} = 0). Equivalently, the ratio of electric to magnetic field intensity is given by the impedance of free space, *Z*_{0} = √(*μ*_{0}/*ε*_{0}) ≈ 376.7 Ω. Under these circumstances, the wave and its energy propagate in synchronicity at the speed of light. If we disrupt the balance of electric to magnetic energy, some of the energy slows down.

This phenomenon is implicit in Heaviside's theory of distortionless propagation along a transmission line. With a balance of inductance (*L*) and capacitance (*C*), a wave propagates along a transmission line without distortion. In Heaviside's theory, distortionless propagation requires that the impedance of the transmission line be

*Z* = √(*L*/*C*) = √(*R*/*G*),    (2.1)

where *R* is the resistance and *G* is the conductance, and all quantities are taken per unit length of the transmission line [15]. Alas, the number of physical quantities exceeds the number of symbols available to express them, so the reader will have to distinguish between Lagrangian and inductance, and between resistance and radial distance, by context.
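A brief numerical check may help here. The sketch below picks illustrative per-unit-length line constants (not values from the text) satisfying Heaviside's distortionless condition *R*/*L* = *G*/*C*, and confirms that the lossless impedance √(*L*/*C*) then equals √(*R*/*G*):

```python
import math

# Illustrative per-unit-length line constants, chosen so that R/L = G/C
# (Heaviside's distortionless condition). Not values from the text.
L_per_m = 250e-9   # inductance, H/m
C_per_m = 100e-12  # capacitance, F/m
R_per_m = 0.5      # resistance, ohm/m
G_per_m = R_per_m * C_per_m / L_per_m  # conductance forced to the distortionless value

Z_lc = math.sqrt(L_per_m / C_per_m)  # impedance from the reactive constants
Z_rg = math.sqrt(R_per_m / G_per_m)  # impedance from the loss constants

# Under the distortionless condition the two expressions agree.
print(Z_lc, Z_rg)  # both 50.0 ohm
```

Any deviation from *R*/*L* = *G*/*C* splits these two values apart, and the propagating wave distorts.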

If an imbalance exists between electric and magnetic energy, some of the energy slows down and the wave becomes distorted. William Suddards Franklin (1863–1930) explained the distinction between ‘pure’ and ‘impure’ waves in 1909 [16]. Franklin noted that a pure wave propagates without distortion. When waves superimpose or interfere, the resulting ‘impure’ wave exhibits distortion—a change in shape from the original waves due to their interaction.

First, consider the field impedance, normalized to the impedance of free space so as to yield a dimensionless quantity
*z* = *Z*/*Z*_{0} = (|**E**|/|**H**|)√(*ε*_{0}/*μ*_{0}),    (2.2)

where *Z*_{0} = √(*μ*_{0}/*ε*_{0}) ≈ 376.7 Ω is the impedance of free space.
This normalized impedance comes in handy for analysing energy velocity. When *z* = 1, the electric and magnetic energy are in balance, and the energy propagates at the speed of light (*c* = 2.9979 × 10^{8} m s^{−1}). We may normalize the energy velocity with respect to the speed of light to obtain a similarly dimensionless vector [17]:
**v** = **v**_{u}/*c* = (2*z*/(1 + *z*^{2})) **ŝ**,    (2.3)

where **ŝ** is a unit vector in the direction of the Poynting vector.

This result is similar to the transmission coefficient for a discontinuity in the impedance of a medium and describes how electromagnetic energy slows down when the electric and magnetic fields are no longer balanced, i.e. when z ≠ 1.
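A minimal numerical sketch of this slowing, assuming the one-dimensional form *v* = 2*z*/(1 + *z*^{2}) for the normalized energy velocity:

```python
def energy_velocity(z):
    """Normalized energy velocity for a normalized field impedance z,
    following the one-dimensional form v = 2z / (1 + z^2)."""
    return 2 * z / (1 + z**2)

print(energy_velocity(1.0))   # balanced fields (z = 1): energy moves at c
print(energy_velocity(0.0))   # no electric field: momentarily static, velocity 0
print(energy_velocity(10.0))  # electric energy dominates: energy slows markedly
```

The velocity peaks at exactly *c* when *z* = 1 and falls off symmetrically (in *z* and 1/*z*) as either field dominates.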

Second, consider the Lagrangian, as normalized by the Hamiltonian (*H*, or equivalently the total energy) [18]:
ℓ = *L*/*H* = (1/2 *ε*_{0} |**E**|^{2} − 1/2 *μ*_{0} |**H**|^{2})/(1/2 *ε*_{0} |**E**|^{2} + 1/2 *μ*_{0} |**H**|^{2}) = (*z*^{2} − 1)/(*z*^{2} + 1).    (2.4)
Schelkunoff encouraged us to simplify electromagnetic analysis by considering one-dimensional propagation, as with transmission lines or plane waves [12]. This allows us to capture much of the essential physics of electromagnetic phenomena while avoiding the complexity of a full three-dimensional treatment. We may drop the vector character of the normalized energy velocity and identify the circular relationship between the normalized Lagrangian and the normalized energy velocity:
ℓ^{2} + *v*^{2} = 1.    (2.5)
The normalized Lagrangian allows for a simple characterization of near or reactive fields. In the limit ℓ → 0, we have a pure, far-field electromagnetic wave. In the limit ℓ → +1, the fields are electrostatic, and in the limit ℓ → −1, the fields are magnetostatic. Figure 1 shows what has been dubbed the Great Electromagnetic Circle of normalized Lagrangian versus energy velocity [19]. Figure 1 intuitively relates the energy velocity to the normalized Lagrangian, allowing a precise parametrization of intermediate ‘near-field’ states. No longer must we characterize near fields by proximity to a source. Using the normalized Lagrangian, we can parametrize near fields along the continuum from electrostatic fields, through radiation fields, to magnetostatic fields and through reverse propagating radiation fields, back again to electrostatic fields. This result makes clear the close connexion between the reactive fields found near a radiating source, and the reactive fields found when one electromagnetic wave interferes with another.
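The circular relationship can be checked numerically. The sketch below assumes the one-dimensional forms ℓ = (*z*^{2} − 1)/(*z*^{2} + 1) and *v* = 2*z*/(1 + *z*^{2}), and verifies that every value of the normalized impedance lands on the unit circle:

```python
def normalized_lagrangian(z):
    """Normalized Lagrangian l = L/H, assuming l = (z^2 - 1)/(z^2 + 1)."""
    return (z**2 - 1) / (z**2 + 1)

def energy_velocity(z):
    """Normalized energy velocity, assuming v = 2z / (1 + z^2)."""
    return 2 * z / (1 + z**2)

# Every normalized impedance maps onto the unit 'Great Electromagnetic Circle'.
for z in [0.01, 0.1, 0.5, 1.0, 2.0, 10.0, 100.0]:
    l, v = normalized_lagrangian(z), energy_velocity(z)
    assert abs(l**2 + v**2 - 1.0) < 1e-12

print("every impedance lies on the circle l^2 + v^2 = 1")
```

In the limits: *z* → ∞ gives ℓ → +1 (electrostatic), *z* → 0 gives ℓ → −1 (magnetostatic), and *z* = 1 gives ℓ = 0, *v* = 1 (pure radiation), matching the parametrization described above.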

For simplicity, consider two waves of equal, unit amplitude interfering with each other as in figure 2. We define interference with respect to the electric field. In a destructive interference, the electric field cancels out, leaving only a magnetic field. In a constructive interference, the electric field doubles and the magnetic field cancels out. Energy is proportional to the square of the field intensity. In the constructive interference case, the initial unit electric energy on each side combines to yield four times the electric energy; in the destructive interference case, the initial unit electric energy on each side combines to cancel out entirely. This apparently counterintuitive result follows because conservation applies to the total energy, not the electric or magnetic energy individually. The excess electric energy in constructive interference arises because the magnetic field and associated energy are zero. All the magnetic energy in the original balance has become electric energy. The ‘missing’ electric energy in destructive interference has become magnetic energy due to the concomitant interference of the magnetic field component [20].
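The energy bookkeeping above can be made concrete with a short sketch in normalized units, where each unit-amplitude wave carries half a unit of electric and half a unit of magnetic energy:

```python
# Unit-amplitude counter-propagating waves in normalized units.
E1, H1 = 1.0, 1.0    # forward wave
E2, H2 = 1.0, -1.0   # reverse wave (magnetic field reversed by the direction of travel)

# Constructive interference of the electric field:
E, H = E1 + E2, H1 + H2   # electric field doubles, magnetic field cancels
u_electric = 0.5 * E**2   # four times one wave's electric energy
u_magnetic = 0.5 * H**2   # zero
u_total = u_electric + u_magnetic

# Total energy equals the sum carried by the two original waves:
u_waves = (0.5 * E1**2 + 0.5 * H1**2) + (0.5 * E2**2 + 0.5 * H2**2)
print(u_total, u_waves)  # 2.0 2.0 -- total energy is conserved

# Destructive interference: E cancels, H doubles, same total energy.
assert 0.5 * (E1 - E2)**2 + 0.5 * (H1 - H2)**2 == u_waves
```

The ‘extra’ electric energy in the constructive case is exactly the magnetic energy that vanished, and vice versa in the destructive case.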

Total energy is conserved. When we diminish one component of the electromagnetic field, we increase the other to maintain a fixed amount of total energy. Further, when either component of the field goes to zero, there is no power flow and the remaining field is momentarily static. In an isolated wave, the energy moves at a constant velocity, the speed of light, in spatially distributed impulses. For a sine wave or harmonic wave, the impulses occur every half wavelength. When two waves interfere, however, the energy actually stops and changes direction. The following section provides a variety of examples.

## 3. Applications

The previous section demonstrated how we may combine Heaviside's energy velocity and the field Lagrangian to yield a simple, elegant framework within which to understand near or reactive fields and energy flow. As electromagnetic waves interact and interfere, the energy they convey slows down and may even come to a rest. Counter-propagating waves may exchange energy with each other, temporarily storing energy in the resulting quasi-static fields. There is a fundamental correspondence between the near fields present around a small electromagnetic source and the reactive fields that form as two waves interfere in free space. This section examines that similarity by considering three examples: the fields and energy flow around an exponentially decaying dipole, around an accelerating charge, and around two interfering waves.

### (a) The exponentially decaying dipole

The usual analysis of a small or ‘Hertzian’ dipole with moment *p*_{0} assumes harmonic oscillation. Each successive oscillation of the dipole results in an outward propagating radiation wave that strips off some of the energy present in the near fields and conveys that energy away in radiation. The process—first identified by Hertz—is complicated, and has been described by the author elsewhere [21]. This section examines a simpler case of radiation, that of an exponentially decaying dipole, to illustrate some of the fundamental physics underlying electromagnetic waves and energy.

An exponential decay time dependence of the form *T*(*t*) = exp[−*t*/*τ*] arises in the discharge of a small capacitor. The time constant, *τ* = *RC*, depends on the product of the associated resistance (*R*) and capacitance (*C*). The fascinating details of the fields and energy flow were originally noted by Mandel [22] and examined elsewhere in more detail [23]. Analytically simple cases like these serve as good test cases for validating finite difference time domain algorithms. Assuming the dipole moment aligns with the *z*-axis, the fields are
**E** = (*p*_{0} e^{−(*t*−*r*/*c*)/*τ*}/(4π*ε*_{0})) [2 cos *θ* (1/*r*^{3} − 1/(*cτr*^{2})) **r̂** + sin *θ* (1/*r*^{3} − 1/(*cτr*^{2}) + 1/(*c*^{2}*τ*^{2}*r*)) **θ̂**]    (3.1)

and

**H** = (*p*_{0} e^{−(*t*−*r*/*c*)/*τ*}/(4π)) sin *θ* (1/(*cτ*^{2}*r*) − 1/(*τr*^{2})) **φ̂**,    (3.2)
where *r* is the radial distance from the dipole in spherical coordinates. Unlike the harmonic dipole case, all field terms in an exponential dipole decay share the same time dependence. Thus, the impedance, the normalized energy velocity, and the Lagrangian are constant with respect to time; they depend only upon radial distance and angle. The radial impedance for an exponentially decaying electric dipole evaluated along the equatorial (*θ* = π/2) plane is
*Z*_{r} = *Z*_{0} (1 − *r*/(*cτ*) + (*r*/(*cτ*))^{2}) / ((*r*/(*cτ*))(*r*/(*cτ*) − 1)).    (3.3)

Figure 3*a* plots the magnitude of the field impedance in units of free space impedance (*Z*_{s}) for an electric dipole as a function of range in units of *cτ*. The impedance of the corresponding exponentially decaying magnetic dipole is the inverse of the electric case, and is offered for comparison. The electric dipole impedance exhibits a pole at *r* = *cτ* because the magnetic field goes to zero. Inside this spherical shell, the impedance is negative, corresponding to the inward flow of energy. Outside this shell, the impedance is positive, corresponding to the outward flow of radiation energy. Figure 3*b* presents a qualitative diagram of this process. Figure 3*c* plots the normalized Lagrangian and energy velocity as a function of the radial distance. Figure 3*d* shows the trajectory of the process on the Great Electromagnetic Circle of normalized Lagrangian versus energy velocity.

Within the spherical region of radius *r* = *cτ*, the radiation fields from an exponentially decaying dipole ripple outward through the inflowing energy absorbed by the dipole at the origin. Only after the radiation fields propagate through the *r* = *cτ* boundary do they associate themselves with the energy originally stored in the electrostatic fields around the dipole but outside the boundary. Inside the boundary, inductive field components dominate the radiation fields and the net energy flow is inward, despite the outward propagating radiation fields. Outside the boundary, the radiation fields dominate and energy propagates outward. Fields guide energy, but the net energy flows with the dominant fields. One must consider the superposition of all fields to understand the energy flow, not look at a field component in isolation.
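As a rough numerical illustration, suppose the equatorial impedance takes the form *Z*(*ρ*) = *Z*_{0}(1 − *ρ* + *ρ*^{2})/(*ρ*(*ρ* − 1)) with *ρ* = *r*/(*cτ*). This is an assumed reconstruction consistent with the pole at *r* = *cτ* and the sign change described above, not a formula quoted verbatim from the paper:

```python
Z0 = 376.7  # impedance of free space, ohm

def dipole_impedance(rho):
    """Equatorial field impedance of an exponentially decaying electric
    dipole versus rho = r/(c*tau). An assumed form, reconstructed to be
    consistent with the pole at rho = 1 and the sign behaviour in the text."""
    return Z0 * (1 - rho + rho**2) / (rho * (rho - 1))

print(dipole_impedance(0.5))  # negative: energy flows inward inside r = c*tau
print(dipole_impedance(2.0))  # positive: radiated energy flows outward
```

The sign flip across *ρ* = 1 is the spherical shell where the magnetic field vanishes and the net energy flow reverses.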

### (b) Accelerating charge: is radiation ‘kinky’?

Heaviside also turned his attention to understanding how fields and energy radiate from an accelerating charge (*q*), pioneering what might be termed the ‘kinked field’ model of radiation [24]. He imagined a charge suddenly jerked into motion at the speed of light. Defining the axis of a globe as the direction of the displacement, the resulting field lines lie along the longitudes and propagate outward at the speed of light, as in figure 4*a*. Heaviside's approach—as later refined and extended by Richard Feynman (1918–1988)—can be applied to yield the electric field of a non-relativistically accelerated charge in a co-moving reference frame where velocity goes to zero (*v* → 0) [25]:
**E** = (*q*/(4π*ε*_{0})) (**r̂**/*r*^{2} − **a**_{⊥}(*t* − *r*/*c*)/(*c*^{2}*r*)),    (3.4)

where **a**_{⊥} denotes the component of the retarded acceleration transverse to the line of sight.
Assume the acceleration is due to an applied external field:
**a** = *q***E**_{0}/*m*.    (3.5)

Note that the magnitude of the force on the charge of mass *m* is *F* = *ma* = *qE*_{0}. If we assume the accelerating charge is an electron, then the tangential fields go to zero on a spherical shell whose radius is the classical electron radius
*r*_{e} = *q*^{2}/(4π*ε*_{0}*mc*^{2}) ≈ 2.82 × 10^{−15} m.    (3.6)
The corresponding magnetic field due to the accelerating charge is
**H** = (*qa* sin *θ*/(4π*cr*)) **φ̂**.    (3.7)
The radial impedance becomes
*Z*_{r} = *Z*_{0}(1 − *r*/*r*_{e}).    (3.8)
Certain qualitative features of the energy flow, depicted in figure 4*b*, become obvious upon plotting the normalized Lagrangian and energy velocities as in figure 4*c,d*.

In the limit as *r* approaches zero, radial energy flow is directed outward. At the classical electron radius, *r*_{e}, the transverse fields are magnetostatic and there is no radial energy flow. At twice the classical electron radius, the radial energy flows inward at the speed of light. The energy flow remains inward yet converges to zero as the electrostatic applied field becomes dominant at ‘great’ distances, in this case beyond about a thousand classical electron radii. Interestingly, an accelerating charge does not emit energy—it absorbs energy. The source of the energy radiated by an accelerating charge is the fringing fields at the boundary of the applied field. As the applied field decreases, the radiation field from the accelerating charge overwhelms the fringing field, causing the inward radial energy flow to reverse direction. In the fringing region, the Great Circle of the Lagrangian versus energy velocity closes: the normalized Lagrangian goes to zero and the normalized energy velocity converges to the speed of light.
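These features can be traced with a short sketch, assuming the radial impedance varies as *Z* = *Z*_{0}(1 − *r*/*r*_{e}). This is an illustrative reconstruction consistent with the behaviour just described, not a formula quoted from the text:

```python
def energy_velocity(z):
    """Normalized energy velocity from the normalized impedance,
    using the one-dimensional form v = 2z / (1 + z^2)."""
    return 2 * z / (1 + z**2)

def z_accel(r_over_re):
    """Assumed normalized radial impedance around an accelerating electron,
    z = 1 - r/r_e: a reconstruction consistent with the features in the text."""
    return 1.0 - r_over_re

print(energy_velocity(z_accel(0.01)))  # ~ +1: outward flow very near the charge
print(energy_velocity(z_accel(1.0)))   # 0: magnetostatic shell at r_e
print(energy_velocity(z_accel(2.0)))   # -1: inward flow at c at 2 r_e
print(energy_velocity(z_accel(1000)))  # ~ 0: applied electrostatic field dominates
```

Positive velocities here denote outward radial flow and negative velocities inward flow, mirroring the sign convention of the impedance.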

The problem of how radiation energy reacts against accelerating charges has long defied the imagination of physicists [26]. Conventional theory appears to require paradoxical results like a non-physical ‘runaway’ or exponentially increasing acceleration. The present analysis suggests that the proper question is not ‘how’ radiation reacts against accelerating charges, but ‘whether’ it does. Radiated energy reacts against the applied field as it pushes off in the fringing region—not against the accelerating charge itself. The fringing region may be arbitrarily far away from the accelerating charge. There appears to be no reason to expect an action–reaction force balance between accelerating charges and their radiation, except in the overall context including the applied field and its fringing region.

The traditional view espoused by Heaviside that radiation is a transverse component or kink in an otherwise radial field leads to confusion because it presents only a partial picture. Here again, the lesson is that energy flow depends upon the big picture—the broader context including all the relevant fields present in a given context. The radiation fields from an accelerating charge do not convey energy from the accelerating charge to the far field. Rather they slightly decrease the dominant inward flow of energy from the applied field, and only collect energy from the fringing field region and convey it to the far field.

### (c) Interacting waves

Consider two identical harmonic electromagnetic waves, one propagating forward and the other in the reverse direction along the *z*-axis. We may plot representative space–time trajectories for the waves and for their energy flow in the space–time energy flow diagram of figure 5*a*. The diagram is scaled in units of quarter wavelength for distance (*z*) and units of quarter period (*T*) for time (*t*), so the space–time trajectories of the waves, propagating at the speed of light, have unit slope. Shading shows the energy density as the two waves interfere constructively and destructively. The energy bounces back and forth between electrostatic nodes (constructive interference) and magnetostatic nodes (destructive interference), spaced a quarter wavelength apart in distance and a quarter period apart in time. Gerald Kaiser appears to have been the first to examine the physics of this situation in detail [10].

We can also plot this behaviour in the Lagrangian-velocity space of the Great Circle of figure 5*b*. The Forward wave propagates through a region of electrostatic energy (a), conveys forward propagating electromagnetic energy (b), propagates through a region of magnetostatic energy (c), conveys forward propagating electromagnetic energy (d) and propagates through another region of electrostatic energy (a) as the Forward wave repeats the cycle. Similarly, the Reverse wave propagates through a region of electrostatic energy (a), conveys reverse propagating electromagnetic energy (b), propagates through a region of magnetostatic energy (c), conveys reverse propagating electromagnetic energy (d), and propagates through another region of electrostatic energy (a) as the Reverse wave repeats the cycle.

From the energy's space–time perspective, a trajectory may begin in an electrostatic node (1), move in the reverse direction under the influence of the reverse propagating wave (2), change direction in a magnetostatic node (3), move in the forward direction under the influence of the forward propagating wave (4) and change direction again in an electrostatic node (1) repeating the cycle. More general cases of interactions between waves have been examined elsewhere [27].

Note how decoupled the wave motion is from the energy flow. Half the time, the waves convey energy in their respective directions of propagation, unopposed by the counter-propagating wave. The other half of the time, the counter-propagating wave opposes the energy flow, resulting in an electrostatic node in the case of constructive interference, or a magnetostatic node in the case of destructive interference. The waves propagate through each other at the speed of light, but every half wavelength or so, an identifiably different lump of energy is associated with a particular segment of the wave.
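A short sketch makes the node structure explicit. In normalized units (wavelength and period equal to 2π, wave speed 1), the superposed fields give a Poynting flux proportional to sin 2*z* sin 2*t*, vanishing at every quarter wavelength and quarter period:

```python
import math

def fields(z, t):
    """Superposed unit forward and reverse waves in normalized units
    (wavelength = period = 2*pi, wave speed = 1)."""
    E = math.cos(z - t) + math.cos(z + t)  # electric fields add: 2 cos(z) cos(t)
    H = math.cos(z - t) - math.cos(z + t)  # magnetic fields subtract: 2 sin(z) sin(t)
    return E, H

def poynting(z, t):
    """One-dimensional stand-in for S = E x H."""
    E, H = fields(z, t)
    return E * H

print(poynting(0.0, 1.0))                  # ~0: electrostatic node (H vanishes)
print(poynting(math.pi / 2, 1.0))          # ~0: magnetostatic node (E vanishes)
print(poynting(math.pi / 4, math.pi / 4))  # maximal flow between the nodes
```

Between the nodes, the flux alternates in sign, so the energy sloshes back and forth a quarter wavelength at a time, consistent with the space–time picture of figure 5*a*.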

## 4. Conclusion

This paper applies ideas pioneered by Oliver Heaviside—energy flow and energy velocity—along with an idea Heaviside derided and dismissed—the electromagnetic Lagrangian—to offer a novel way of looking at and understanding near or reactive fields. There is a ‘Great Electromagnetic Circle’ in a Lagrangian-energy velocity space that allows for an intuitive characterization of electromagnetic phenomena from electrostatic fields, through radiation or propagating fields, to magnetostatic fields, and back again. A deep kinship exists between the ‘near’ fields close to an electromagnetic source and the reactive fields present when two waves interfere with each other in free space. Fields and waves propagate through each other and in so doing they exchange energy. Fields and waves serve to guide the flow of energy along space–time trajectories that may be entirely different from the trajectory of any particular wave. These ideas are entirely consistent with and follow from the ideas of Maxwell and Heaviside, and offer a different perspective in considering practical problems from applied electromagnetics.

Conventional thinking assumes that energy remains tightly coupled to fields—that a particular electromagnetic wave and its energy are linked together as a wave propagates from place to place. To the contrary, the lesson of this paper is that fields and energy are only loosely coupled. As fields and waves interact, they exchange energy. Electromagnetic waves propagate at the speed of light—energy propagates at the speed of light only in a pure and isolated wave. In the presence of other electromagnetic fields or waves, the energy propagates at speeds less than that of light—even slows to a stop and changes direction in many near-field and interference situations.

This paper examines well-established principles in electromagnetic physics and offers a novel reinterpretation of their meaning with the potential to make clear how electromagnetics works. The concepts of this paper may resolve the long-standing paradox of radiation reaction, for instance. There is one final important consideration worthy of examination: the implications of these electromagnetic energy flow concepts for our understanding of quantum mechanics.

Paul A. M. Dirac (1902–1984) was one of the founding fathers of quantum mechanics (as Richard Feynman quipped, physicists may read Schiff, but they quote Dirac). In his classic quantum mechanics text, Dirac argued that photons do not interfere with each other [28]:
> Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the intensity of this beam is connected with the probable number of photons in it, we should have half the total number of photons going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another, and other times they would have to produce four photons. This would contradict the law of conservation of energy. The new theory, which connects the wave function with probabilities for one photon, gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs.

We know on the macroscopic level that waves interfere with each other, giving rise to regions in which the propagating electromagnetic energy transforms to electrostatic energy, becomes magnetostatic energy, or shifts elsewhere on the continuum, somewhere in between. The conventional wisdom, then, as stated by Dirac, is that although this energy is quantized, there can be no interactions or interference between the photons associated with the forward propagating wave and the photons associated with the reverse propagating wave. We get something—the macroscopically observed shifts in the energy balance—from nothing: the supposed non-interaction of the photons themselves. To quote Newton's comments on action-at-a-distance, this is ‘so great an absurdity that I believe no man, who has in philosophical matters a competent faculty of thinking, can ever fall into it’ [29]. Arguing from the quantum perspective, Roy J. Glauber calls Dirac's statement ‘a highly simplistic remark which has sown confusion among physicists ever since’ [30].

In the conventional wisdom, fields and energy appear tightly coupled. This paper demonstrates they are not. Fields exchange energy all the time. Electromagnetic waves and fields guide energy. Fields go one way, other fields go other ways, and the energy they convey goes in a completely different trajectory under the influence of the interacting fields. Waves and fields serve to guide electromagnetic energy along trajectories governed by the interactions and interference of the relevant fields. How would this classical theory naturally segue into understanding reality at the quantum level?

In the classical electromagnetic picture, an electromagnetic wave is incident on the two slits. Each slit becomes a source for secondary waves that interfere with each other, generating an interference pattern. The flow of energy is not governed by the fields or wave from either slit, but rather by the interference pattern generated by the superposition of the two wave sources. A continuous flow of electromagnetic energy propagates along the streamlines set up by the interference pattern guided by the interaction of the fields. Now instead of a continuous flow of energy, imagine that the energy is quantized and rarefied to the point that only a single photon traverses the system at a time. By chance or happenstance, the photon will follow a particular energy flow streamline set up by the superposition of the fields. As in the classical case, no particular slit or particular wave source dictates the trajectory of the photon—only the interference resulting from the superposition of the fields.

Physicists confuse themselves speculating which slit the photon went through, assuming it had to go through both slits in order to generate the interference pattern and had to be in both places at the same time. Feynman argued the phenomenon of two-slit interference is ‘impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics' [31]. He noted that all the mystery of quantum reality is wrapped up in this simple experiment. The mystery vanishes upon understanding the simple classical picture of this paper.

Fields behave like waves. They generate interference patterns between the two slits, throughout the region, ‘non-locally’, even when there is only enough energy for one photon to be moving through the slits. Energy behaves like particles, following a particular trajectory under the influence of the interacting waves. Each does its own thing. The quantum weirdness and confusion are gone. There is an interpretation of quantum mechanics that adopts a qualitatively similar perspective: the pilot wave theory of Louis de Broglie (1892–1987) and David Bohm (1917–1992). As de Broglie observed, ‘[a] photon coming from one laser or the other and arriving in the interference zone is guided – and this seems to us to be physically certain – by the superposition of the waves emitted by the two lasers…’ [32]. The results of this paper make clear that pilot wave theory is the natural and logical consequence of the need for quantum mechanics to correspond to the macroscopic results of the classical electromagnetic theory.

Heaviside's and Poynting's work on energy flow led to a revolution in the understanding of how electric circuits convey energy. Rather than a continuous flow of energy through wires like water through a pipe, Heaviside and Poynting taught that wires sculpt adjacent fields which in turn convey energy through the space around the wires depositing it in the surface of the wire. The charge carriers themselves move slowly with a relatively short mean free path, while the fields propagate unimpeded at the speed of light along the wire.

An analogous worldview emerges from the present work. There is no need for photons to couple exclusively with any particular electromagnetic wave. Rather, they engage in a complex dance, pausing to form a momentary electrostatic or magnetostatic pirouette, before shifting direction and waltzing off with a different wave. In most realistic, complicated electromagnetic environments, photons have relatively short mean free paths and a drift velocity notably less than the speed of light.

In 1889, Heaviside observed, ‘There are many more things in Maxwell which are not yet discovered’ [33]. The same may be said of Heaviside's own work. When we apply Heaviside's insights to contemporary problems, we ‘make patent what was merely latent’ in the work and thought of the great man upon whose shoulders we stand.

## Data accessibility

This article has no additional data.

## Competing interests

The author declares that he has no competing interests.

## Funding

The author received no funding for the present work.

## Acknowledgements

The author dedicates this paper to the memory of his doctoral advisor, E.C.G. ‘George’ Sudarshan (1931–2018). In addition, the author would like to thank Kirk McDonald, Travis Norsen, Tasos Papaioannou, Greg Merchán, Jack Gardner and the peer reviewers for useful suggestions and feedback.

## Footnotes

One contribution of 13 to a theme issue ‘Celebrating 125 years of Oliver Heaviside's ‘Electromagnetic Theory’’.

- Accepted July 17, 2018.

- © 2018 The Author(s)

Published by the Royal Society. All rights reserved.