## Abstract

We study the unique role played in quantum mechanics by non-events or ‘counterfactuals’. Our earlier analysis of ‘quantum oblivion’ has revealed some subtle stages in the measurement process, which may end up in self-cancellation. To these findings, we now add two insights derived by two time-symmetric interpretations of quantum mechanics. (i) Like all quantum interactions, the non-event is formed by the conjunction of forward-plus-backward-evolving wave functions. (ii) Then, it is another feature of such dual evolutions, namely the involvement of negative masses and energies, that enables Nature to make some events ‘unhappen’ while leaving causal traces.

## 1. Introduction

So many, varied and sharp are the differences between quantum and classical physics; so persistent the conflicts between the quantum formalism and macroscopic reality; and so unsatisfactory have all proposed resolutions remained so far that a unifying hypothesis naturally emerges: Can all quantum oddities share a subtler common origin? Should that be the case, some novel ‘method in the madness’ is very likely to emerge, offering hints towards the long sought-for theory that better accounts for these and other oddities.

This is the aim of this paper. We submit that

(i) A promising candidate for a 'master quantum oddity' is the unique status of the *non-event*, i.e. a 'counterfactual'. This is an event which *could have happened* but *did not*. Quantum non-events are unique in that they can exert significant causal effects just by virtue of this 'could have'.

(ii) Earlier [1,2], studying an apparent violation of momentum conservation in a very basic quantum non-event, we showed that, within very brief intervals, momenta have been exchanged, yet their recurrence ended up in a net zero result. We proposed the term 'quantum oblivion' (QO) for these self-cancelling interactions.

(iii) Another piece of the puzzle is offered by some time-symmetric interpretations of quantum mechanics (QM), according to which the quantum interaction is formed by *two* time evolutions, going back and forth between past and future.

(iv) Significantly, these time-symmetric models, for independent reasons, have also invoked *negative values* for otherwise positive parameters, such as negative mass and energy.

(v) The resulting account emerging from all the above is very intuitive. Both positive and negative values of quantities such as mass and energy are exchanged between the past and future interactions of the quantum system with macroscopic systems. These give rise to the familiar quantum non-local and non-temporal phenomena, which, despite apparent violations of spatial and temporal limits, obey conservation laws and the non-signalling principle.

(vi) QO thus turns out to be an event which, in some deeper sense, has occurred and then ‘unoccurred’.

(vii) This has implications for several foundational issues in classical physics, bearing in mind that *quantum non-events far outnumber events in our Universe*.

This general model has been named ‘quantum hesitation’ [2], honouring de Broglie's quantum-mechanical rephrasing of Bergson's definition of time as Nature's hesitation between possible outcomes.

## 2. The non-event: the odd potency of quantum counterfactuals

A simple ordinary quantum event, namely one that has occurred, suffices to elucidate the importance of its opposite. Consider an excited atom floating in empty space, far from any other matter. At *t*_{1} it emits a photon, which at a much later time *t*_{2} is absorbed by a macroscopic system located far away. That system, by irreversibly amplifying the absorption's effects, constitutes a detector. Assume further that the system can also record the absorption's precise time. Thus measured, the photon's emission is finalized.

Apparently, the detector has recorded a single event, namely the photon's absorption. By QM, however, there have been numerous events prior to *t*_{2}: *every instance in which the detector remained silent is also a measurement—hence a quantum event*. Logically, the non-event is subject to the counterfactual statement ‘it *could* have happened’, quantum indeterminism being the only reason why, just by chance, it did not. What makes this counterfactual intriguing, far beyond logic, is the non-event's causal efficacy: *another physical event can occur just by virtue of this ‘could have’.*

Consider first the time–energy uncertainty ruling the photon's emission by the atom:

$$\Delta E\,\Delta t \geq \frac{\hbar}{2}, \qquad (2.1)$$

making the atom-plus-field state superposed:

$$|\Psi(t)\rangle = \alpha(t)\,|e\rangle|0\rangle + \beta(t)\,|g\rangle|1\rangle, \qquad (2.2)$$

where, as usual, *e* and *g* stand for the atom's excited and ground states, respectively, and |0〉, |1〉 denote the absence and presence of the emitted photon. Therefore, the distant detector, wherever positioned, is constantly *measuring* the atom, even when the measurement ends with no click (which is the more *frequent* occurrence), making the atom undergo a subtle 'collapse' back to its excited state:

$$\alpha(t)\,|e\rangle|0\rangle + \beta(t)\,|g\rangle|1\rangle \;\longrightarrow\; |e\rangle|0\rangle. \qquad (2.3)$$

This shift from superposition to a definite state repeats itself periodically.^{1} Eventually, the cycle comes to an end by detection:

$$\alpha(t)\,|e\rangle|0\rangle + \beta(t)\,|g\rangle|1\rangle \;\longrightarrow\; |g\rangle|1\rangle \;\longrightarrow\; |g\rangle\,|\text{CLICK}\rangle. \qquad (2.4)$$

It is the quantum non-event's causal efficacy, sometimes equalling that of a positive event, for which interaction-free measurement (IFM) [3] provides a lucid example: a detector's otherwise-impossible click sometimes does occur as a result of another detector's no-click. But then, as the Universe contains *countless* potential detectors for every excited atom located anywhere, the actual number of non-events turns out to be astronomically huge. A plethora of even more intriguing phenomena can therefore be revealed by studying quantum non-events, in both experiment and theory.
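The causal efficacy of the no-click can be made quantitative with the Elitzur–Vaidman interferometer version of IFM [3]. The following sketch is ours, not the paper's formalism; the balanced beam-splitter matrix and path labels are the standard textbook conventions. It shows how blocking one arm lets the otherwise-forbidden 'dark' port click:

```python
import numpy as np

# Balanced beam splitter acting on (upper, lower) path amplitudes.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

psi_in = np.array([1, 0], dtype=complex)  # photon enters the upper port

# Empty interferometer: two beam splitters interfere the two paths,
# so the upper output port is "dark" (destructive interference).
out_empty = BS @ BS @ psi_in
p_dark_empty = abs(out_empty[0]) ** 2      # -> 0

# An opaque object now blocks the lower path between the beam splitters.
mid = BS @ psi_in
p_absorbed = abs(mid[1]) ** 2              # photon taken by the object: 1/2
survivor = np.array([mid[0], 0])           # the unabsorbed amplitude continues
out_blocked = BS @ survivor
p_dark_blocked = abs(out_blocked[0]) ** 2  # dark port now clicks: 1/4
```

A dark-port click thus reveals the object's presence although no photon touched it: the measurement is performed by the non-event on the blocked path.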

### (a) The information conveyed by the intervals between Feynman's clicks

Richard Feynman's genius as a physicist was matched by his inspirational teaching. He famously argued that the double-slit experiment ‘has in it the heart of quantum mechanics. In reality, it contains the *only* mystery’ [4]. His lecture on this topic employs a stream of single electrons undergoing interference, with two sensitive detectors, each placed next to one of the slits, to indicate which path the electron has taken. His vivid description, best quoted verbatim, is very suitable for our study of non-events:
The first thing we notice with our electron experiment is that we hear sharp ‘clicks’ from the detector (that is, from the loudspeaker). And all ‘clicks’ are the same. There are no ‘half-clicks’.

We would also notice that the ‘clicks’ come very erratically. Something like: click ….. click-click … click …….. click …. click-click …… click …, etc., just as you have, no doubt, heard a Geiger counter operating. If we count the clicks which arrive in a sufficiently long time—say for many minutes—and then count again for another equal period, we find that the two numbers are very nearly the same. So we can speak of the average rate at which the clicks are heard (so-and-so-many clicks per minute on the average). [4, pp. 1–5]

These clicks are, by nature, indeterministic. But what about the intermediate *non-clicks*? This is actually how IFM [3] was conceived: omit one of the two detectors and, lo and behold, interference vanishes just as before, the remaining detector's silent non-click being as informative as a click.

Non-events, then, not only far outnumber the events in our Universe; they also contain a significant amount of quantum information and play a major causal part in every physical process.
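The statistical regularity Feynman describes, erratic clicks with a stable long-run rate, is that of a memoryless point process. The minimal simulation below is illustrative only; the rate and window length are arbitrary choices of ours. It shows that two equal long counting windows yield nearly the same count, so the silent intervals between clicks carry the same rate information as the clicks themselves:

```python
import random

random.seed(1)
RATE = 30.0    # mean clicks per minute (arbitrary)
MINUTES = 10   # "a sufficiently long time"

def clicks_in(minutes, rate):
    """Count the clicks of a memoryless (Poisson) process in one window."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)  # exponential silence between clicks
        if t > minutes:
            return n
        n += 1

first = clicks_in(MINUTES, RATE)
second = clicks_in(MINUTES, RATE)
# The individual silences are erratic, yet both counts fluctuate narrowly
# around RATE * MINUTES = 300 (spread ~ sqrt(300) ~ 17), so the average
# rate -- and with it the mean silent interval 1/RATE -- is well defined.
```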

### (b) The simplest non-event offering a new insight

Einstein's advice ‘Make things as simple as possible, but not simpler’ may offer further help. The following experiment provides the simplest quantum interaction that can shed new light on the nature of non-events.

Consider an electron and a positron, both with spin state $(|{\uparrow}_x\rangle + |{\downarrow}_x\rangle)/\sqrt{2}$, split by two different Stern–Gerlach magnets positioned at (*t*_{0},*x*_{e−},*y*_{0}) and (*t*_{0},*x*_{e+},*y*_{0}), respectively, to enable the asymmetric interaction in figure 1. The magnets split the particles' paths according to their spins in the *x*-direction:

$$|\psi\rangle_{e^-} \longrightarrow \tfrac{1}{\sqrt{2}}\,\big(|1'\rangle + |1''\rangle\big), \qquad |\psi\rangle_{e^+} \longrightarrow \tfrac{1}{\sqrt{2}}\,\big(|2'\rangle + |2''\rangle\big). \qquad (2.5)$$

Let care be taken to ensure that, should the particles turn out to reside in the intersecting paths, they would mutually annihilate.

Let us follow the time evolution of these two wave functions plus two nearby detectors |READY〉_{1}, |READY〉_{2}, set to measure the photons emitted upon pair annihilation, which would change their states to |CLICK〉_{1} or |CLICK〉_{2}.

Initially, the total wave function is the separable state:
$$|\Psi(t_0)\rangle = \tfrac{1}{2}\,\big(|1'\rangle + |1''\rangle\big)\big(|2'\rangle + |2''\rangle\big)\,|\text{READY}\rangle_1\,|\text{READY}\rangle_2. \qquad (2.6)$$

The particles, depending on their positions at *t*_{1} or *t*_{2}, may or may not annihilate, consequently releasing (or not) a pair of photons, which would in turn trigger (or not) one of the detectors.

At *t*_{0}≤*t*<*t*_{1}, then, the superposition is still as in equation (2.6). But at *t*_{1}<*t*<*t*_{2}, either a photon pair is emitted (figure 1*b*), indicating that the system ended up in |1′′〉|2′〉, *or not*, thereby leaving

$$|\Psi\rangle = \tfrac{1}{\sqrt{3}}\,\big[\,|1'\rangle\big(|2'\rangle + |2''\rangle\big) + |1''\rangle|2''\rangle\,\big]\,|\text{READY}\rangle_1\,|\text{READY}\rangle_2, \qquad (2.7)$$

which is a superposition of an interesting type: one component of it is a superposition in itself.

Similarly, at *t*>*t*_{2}, if a photon pair is emitted (figure 1*c*), we know that the particles ended up in paths 1′ and 2′. Otherwise, however (figure 1*d*), we find the product state:

$$|\Psi\rangle = \tfrac{1}{\sqrt{2}}\,\big(|1'\rangle + |1''\rangle\big)\,|2''\rangle\,|\text{READY}\rangle_1\,|\text{READY}\rangle_2, \qquad (2.8)$$

which is peculiar. The positron is observably affected: its momentum has changed, because if we time-reverse its splitting, it may fail to return to its source. Not so with the electron: it remains superposed, hence its time-reversibility remains intact (figure 2).

This is QO [1], where one party of the interaction ‘remembers’ it through momentum change, whereas the other remains unaffected, in apparent defiance of momentum conservation.
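The two no-click stages can be traced numerically. In this sketch (ours, for illustration) the four path amplitudes are stored in a 2×2 array; the assignment of which paths cross at *t*_{1} and *t*_{2} is our assumption, chosen so that conditioning on two silent detectors reproduces the product-state outcome described above:

```python
import numpy as np

# Path amplitudes as a 2x2 array: rows = electron paths (1', 1''),
# columns = positron paths (2', 2''). Initial separable state.
psi = np.ones((2, 2)) / 2.0

def no_click(psi, e_path, p_path):
    """Condition on NO annihilation at a path crossing: project out that
    component and renormalize."""
    psi = psi.copy()
    psi[e_path, p_path] = 0.0
    return psi / np.linalg.norm(psi)

# t1: electron path 1'' crosses positron path 2' (assumed labelling);
# no photons are seen.
psi = no_click(psi, e_path=1, p_path=0)  # nested superposition

# t2: electron path 1' crosses positron path 2'; again no photons.
psi = no_click(psi, e_path=0, p_path=0)  # product state

# Quantum oblivion: the positron has collapsed onto path 2'' (its momentum
# records the interaction), while the electron stays evenly superposed.
positron_on_2pp = np.linalg.norm(psi[:, 1]) ** 2    # -> 1.0
electron_balance = abs(psi[0, 1]) - abs(psi[1, 1])  # -> 0.0
```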

### (c) The critical interval: the non-event was not that silent after all

It is obviously the intermediate time interval *t*_{1}<*t*<*t*_{2} that conceals the momentum conservation in QO. The details, however, are not trivial.

The two particles, during this interval, become partly entangled in both positions and momenta. Should one be found on the left path, the other will be on the right. The momentum entanglement is even more interesting: suppose that, within the interval, we reunite the two halves of each particle's wave function through the original beam splitter, to see whether they return to their source. *Either one* of the particles may fail to do that, in which case the other must *not*. This entanglement is identical to that of the electron–positron pair in Hardy's experiment [5]:

$$|\Psi\rangle = \tfrac{1}{\sqrt{3}}\,\big(|1'\rangle|2'\rangle + |1'\rangle|2''\rangle + |1''\rangle|2''\rangle\big). \qquad (2.9)$$

The state underlying this entanglement is a *higher order superposition*, in that it is composed of a superposed plus a non-superposed state. Equally interesting is this state's sequel, which is *un*entanglement. The difference from the familiar '*dis*entanglement' is fundamental: disentanglement is the natural consequence of entanglement, e.g. the Einstein–Podolsky–Rosen (EPR) correlations following spin measurements. Unentanglement, in contrast, creates the deceptive impression that entanglement *never took place*. Rephrased in more familiar terms, oblivion is the process whereby the wave function undergoes momentary decoherence followed by '*re*coherence'. Notice that, in contrast to the familiar decoherence induced by the macroscopic environment, which is usually believed to be irreversible, here decoherence temporarily ensues from the interaction's mere *potential* to become macroscopic. Nature, so to speak, takes back its assertion.

### (d) Oblivion's ubiquity: every detector's pointer must be superposed in the conjugate variable

We now submit our main argument. Rather than a curious interaction, *Oblivion is part and parcel of every routine quantum measurement*. Its elucidation can therefore shed new light on the nature of measurement, further enabling some novel varieties of it.

Recall first the trivial preparation required by quantum measurement, e.g. in the common position measurement.^{2} The detector's pointer, positioned at a specific location, reveals the particle's presence in that precise place by receiving *momentum* from it. This naturally obliges the pointer to have precise (preferably 0) momentum. In return, however, the pointer's position must be highly *uncertain*.

Let this trade-off be studied with the aid of our first experiment (figure 1), with one modification. In the original version, the experiment's two possible interactions are annihilations, which are mutually exclusive events. So let us replace annihilation by mere collision (figure 3): two superposed atoms *A*1 and *A*2 undergo splitting and interactions exactly like the electron and positron in figure 1. These atoms, unable to annihilate, can only collide, which can now happen on *both* possible occasions, at *t*_{1} and *t*_{2}, namely at the two locations where *A*2 can turn out to reside. Instead of photon detectors, therefore, we now detect the atoms themselves.

This version of the experiment has greater sensitivity. Suppose that the atom detector on path 2′′ (positioned closer to the beam splitter; figure 3*b*) remains silent: we thereby know that the two atoms have collided, yet we remain oblivious about the collision's location. What we thus measure is the ordinary momentum exchange: both atoms' momenta have been reversed along the horizontal axis.

Here, oblivion is negligible, affecting only the two atoms' positions, which undergo only slight change. Yet, because *A*2 has vanished from the macroscopically distant location on 2′′, the final outcome is a position measurement of *A*2.

Conversely, we may fail to detect *A*1 and *A*2 in paths 1′′′, 1′′′′ and 2′′′, 2′′′′, to which they would have been diverted in case of collision. We are now certain that atom *A*2 resides on path 2′′, while *A*1 has returned to its initial superposition over 1′ and 1′′ (figure 3*c*). In other words, *A*1 undergoes momentum oblivion, which again amounts to an ordinary position measurement of *A*2, but this time the measurement is *interaction free*—a point studied in greater detail below.

To summarize: we have studied an asymmetric interaction between two atoms, where two halves of *A*1's wave function interact with one half of *A*2, under great time and position resolution. Two momentum exchanges can occur between the two atoms:

(i) *A*2 turns out to have collided with *A*1. This amounts to *A*2 undergoing position measurement. The price exacted by the uncertainty principle is a minor position oblivion of both *A*1 and *A*2.

(ii) *A*2 has *not* collided with *A*1. This again amounts to *A*2 undergoing position measurement—'collapsing' to the remote 2′′ path. Here, however, a macroscopic momentum oblivion plagues *A*1, making the measurement interaction free.

Both (i) and (ii) occur under an unusually high space and time resolution, enabling a novel study of the critical interval. During this interval, entanglement between the two atoms has ensued, as they have assumed new possible locations:

$$|\Psi\rangle = \tfrac{1}{2}\,\big(|1'\rangle|2''\rangle + |1''\rangle|2''\rangle + |1'''\rangle|2'''\rangle + |1''''\rangle|2''''\rangle\big), \qquad (2.10)$$

which remained undistinguished until the macroscopic detection that finalized the interaction and sealed the oblivion.

The generalization is natural. During *every* quantum measurement, the detector's pointer interacts with a particle in the same manner as atoms *A*1 and *A*2 in the above setting: both the measured particle and the measuring pointer are superposed, but their wave functions interact asymmetrically: a *part* of the particle's wave function interacts with the *whole* of the pointer's wave function. To make the analogy more realistic, recall that in reality the pointer's superposition is continuous, smeared over an array of locations much wider than the two discrete locations in the former case. Momentum measurement thus becomes much more precise. This transition from discrete to continuous superposition also opens the door for several interesting interventions [1,2].
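The pointer trade-off can be illustrated with the standard von Neumann measurement model (our sketch, not the authors' formalism): the coupling shifts the pointer's reading variable by an amount correlated with the particle's location, and the measurement is strong or oblivion-prone depending on whether that shift exceeds the pointer's spread:

```python
import numpy as np

# von Neumann measurement sketch: the coupling shifts the pointer's
# reading variable by one unit when the particle occupies the measured
# location; all numbers here are illustrative.
x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]

def pointer(width, shift=0.0):
    """Normalized Gaussian pointer wave function centred at `shift`."""
    g = np.exp(-(x - shift) ** 2 / (4 * width ** 2))
    return g / np.sqrt((g ** 2).sum() * dx)

def residual_overlap(width, shift=1.0):
    """Overlap of the two pointer states correlated with the particle's
    two locations. ~0: which-location information recorded (strong
    measurement); ~1: the particle's superposition survives (oblivion)."""
    return float((pointer(width) * pointer(width, shift)).sum() * dx)

strong = residual_overlap(width=0.05)  # sharp pointer: overlap ~ 0
weak = residual_overlap(width=5.0)     # widely smeared pointer: overlap ~ 1
```

A pointer smeared over many possible readings thus barely decoheres the particle, in line with the continuous-superposition pointer described above.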

## 3. The clue of Escher's self-drawing hands: use time to bypass spatial restrictions

The above account of QO is a purely orthodox one, based on standard quantum theory alone. It thus says nothing about the ‘decision’ of which of all possible particle–pointer interactions will eventually be materialized, annihilating all its competitors. Such an explanation lies, of course, outside of quantum formalism, which is the very reason for the conception of several interpretations of QM. In our case, it turns out that two of the most radical interpretations offer the best clues for a deeper account of quantum measurement in general and oblivion in particular.

Taking an example from M. C. Escher may help us to elucidate our proposal. This artist earned his reputation through depictions of impossible causal loops, such as a pair of hands drawing one another, or a waterfall whose water is enclosed in an eternal cycle. These pictures do not challenge our physical intuition because we know that the artist proceeded through normal causality to create the paradoxical causal chain. In other words, *the apparently consecutive stages that we see in the picture were drawn in a different order in time*.

Is it possible that the causal anomalies manifested by quantum uncertainty and non-locality owe their existence to a somewhat similar creation? This is the spirit of the time-symmetric interpretations considered below.

### (a) The transactional interpretation: each quantum interaction is affected by the past as well as the future

The following is a brief introduction to Cramer's transactional interpretation of QM [6,7]. Any quantum event that involves the exchange of conserved quantities (energy, momentum, angular momentum, etc.), and that can be represented by a matrix element, is considered to have been formed in three stages:

1. An 'offer wave' (the usual retarded wave function *ψ*(*x*) or a ket |*ψ*〉) originates from the 'source' (the object supplying the quantities transferred) and spreads through space–time until it encounters the 'absorber' (the object receiving the conserved quantities).

2. The absorber responds by producing an advanced 'confirmation wave' (the complex conjugate wave function *ψ**(*x*) or a bra 〈*ψ*|) which travels in the reverse-time direction back to the source, giving rise to the density *ψ***ψ*.

3. The source chooses between the possible transactions {*i*} based on the strengths of the *ψ**_{i}*ψ*_{i} echoes it receives, and reinforces the selected transaction repeatedly until the conserved quantities are transferred and the potential quantum event becomes real.
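Stage 3 amounts to selecting one transaction with probability proportional to the echo strength *ψ**_{i}*ψ*_{i}, i.e. the Born rule. A minimal sketch (the offer-wave amplitudes are hypothetical numbers of ours):

```python
import numpy as np

# Offer-wave amplitudes psi_i reaching three potential absorbers
# (hypothetical values chosen for illustration).
offers = np.array([0.6, 0.8j, 0.0])

# Each absorber echoes back a confirmation wave; the echo strength is
# psi_i* psi_i, and the source selects a transaction in proportion to it.
echoes = (offers.conj() * offers).real
probs = echoes / echoes.sum()   # -> [0.36, 0.64, 0.0]: the Born rule
# An absorber with zero echo is never selected: its transaction remains
# a non-event, although its offer wave took part in the process.
```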

It is in accounting for quantum non-locality that the transactional interpretation offers its best reward. It takes advantage of the trivial fact that entangled systems, though spacelike-separated, have been *locally* connected in the past. The quantum transaction, along the resulting space–time zigzag, thus makes the effect natural, as in Cramer's own illustration for the EPR experiment (figure 4). An interesting variant of the transactional interpretation (TI) is Kastner's possibilist transactional interpretation [8], which deals with several consistency and relativistic issues left open in the original version. Indeed, Cramer's own revised version [7] also seems to respond to these challenges.

### (b) The two-state-vector formalism: the combined causal effect of past and future states can yield odd values

Despite several important differences, the time-symmetric idea of TI appears also in Aharonov's two-state-vector formalism (TSVF). Its basic assumption was proposed by Aharonov *et al.* [9]: every quantum system is determined by two wave functions. One (also known as the pre-selected wave function) evolves forward in time while the other (post-selected) evolves backward. The forward- and backward-evolving wave functions, |*ψ*〉 and 〈*ϕ*|, respectively, define the so-called two-state vector 〈*ϕ*| |*ψ*〉. Both are equally important for describing the quantum system in between the pre- and post-selection via the weak value of any operator *A*, defined by

$$A_w = \frac{\langle\phi|A|\psi\rangle}{\langle\phi|\psi\rangle}. \qquad (3.1)$$

Here a logical catch ensues: 'the state *between* two measurements' cannot, by the very nature of quantum measurement, be revealed by measurement. Weak measurement [10,11] was conceived in order to bypass this obstacle, as well as to test other TSVF predictions. This has led to numerous intriguing works, both theoretical and experimental. In what follows we demonstrate its value for the understanding of non-events.
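Equation (3.1) is directly computable. The sketch below is our example, not the paper's; the nearly orthogonal pre- and post-selection is a standard choice for weak-value amplification. It evaluates the weak value of σ_z for a spin-1/2 particle and shows it escaping the eigenvalue range [−1, 1]:

```python
import numpy as np

def weak_value(phi, A, psi):
    """A_w = <phi|A|psi> / <phi|psi> for pre-selection |psi>,
    post-selection <phi|."""
    return (phi.conj() @ A @ psi) / (phi.conj() @ psi)

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # pre-selected along +x
eps = 0.1                                # small post-selection angle
phi = np.array([np.cos(np.pi / 4 + eps), -np.sin(np.pi / 4 + eps)])

w = weak_value(phi, sigma_z, psi)
# w = cos(eps) / (-sin(eps)) ~ -10: nearly orthogonal pre- and
# post-selection amplifies the weak value far outside the eigenvalue
# range [-1, 1] of sigma_z.
```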

### (c) Why time-symmetric causality?

Our reasons for opting for these two time-symmetric interpretations have been explained in several earlier works. Although their novel pictures of quantum reality are still incomplete, at times even raising new questions, they offer very elegant explanations for well-known quantum paradoxes, as well as some novel ones, presented also in earlier works of ours [12].

Such is the present case. Our analysis of QO has revealed a subtle quantum interaction that rigorously applies for quantum measurement in general, with a particular insight into the intriguing causal efficacy of IFM: *the interaction between the quantum and the macroscopic pointer turns out to consist of a feeble occurrence of the event, followed by its ‘unoccurrence’.* It is therefore only natural to ask whether these antagonistic stages occur during the above back-and-forth exchanges between future and past.

## 4. The complementary clue: erasure must be added for making the deception perfect

But it was not only for its elegance that we have found time-symmetric quantum causality to integrate naturally with our findings. Both TI and TSVF, for their own reasons, had to invoke unique values such as negative energy and negative momentum as parts of the interactions across time. This brings us back to the visual analogy we employed earlier: when Escher was drawing his masterpieces, he was not only going back and forth in terms of the paradoxical causality he was presenting; without doubt, *he also used a great deal of erasure in the process*. With the appropriate caution needed when following analogies, let us follow this one and ask whether Nature also refines her quantum feats this way.

### (a) Negative energy exchange in transactional interpretation

Cramer [6] drew his initial inspiration from the Wheeler–Feynman (W-F) 'absorber theory' of electromagnetism. Wheeler and Feynman invoked retarded and advanced waves going between the source and the absorber, such that, due to destructive interference, all the intermediate anomalies of advanced waves eventually disappear. Cramer found it natural to follow this reasoning into the quantum realm:
The emitter can be considered to produce an ‘offer’ wave F1 which travels to the absorber. The absorber then returns a ‘confirmation’ wave to the emitter and the transaction is completed with a ‘handshake’ across space-time. To an observer who had not viewed the process in the pseudo-time sequence employed in the above discussion, there is no radiation before T1 or after T2 but a wave travelling from emitter to absorber. This wave can be reinterpreted as a purely retarded wave because its advanced component G2, a negative energy wave travelling backwards in time from absorber to emitter, can be reinterpreted as a positive energy wave travelling forward in time from emitter to absorber, in one-to-one correspondence with the usual description.

Thus the W-F time symmetric description of electrodynamic processes is completely equivalent in all observables to the conventional electrodynamic description. Time-symmetric electrodynamics, in both its classical and quantum mechanical forms, leads to predictions identical with those of conventional electrodynamics. [6, p. 661]

### (b) Weak quantum values can be odd, even negative

Next comes what may be TSVF's most intriguing prediction. Consider the values straightforwardly predicted for mass and momentum in the following experiment. The three boxes paradox [13] is a by-now familiar surprise yielded by TSVF. A particle is prepared with equal probability to reside in one out of three boxes:
$$|\psi\rangle = \tfrac{1}{\sqrt{3}}\,\big(|1\rangle + |2\rangle + |3\rangle\big). \qquad (4.1)$$

Later, it is post-selected in the state

$$|\phi\rangle = \tfrac{1}{\sqrt{3}}\,\big(|1\rangle + |2\rangle - |3\rangle\big). \qquad (4.2)$$

What is the particle's state between these two strong measurements? By definition, *another measurement of this kind is unsuitable for answering this question*, as it would reveal the state *upon* the intermediate measurement. It is *weak* measurement, again, that comes to help. TSVF predicts the following weak values of the projection operators *Π*_{i}≡|*i*〉〈*i*| for *i*=1,2,3:

$$(\Pi_1)_w = 1, \qquad (\Pi_2)_w = 1, \qquad (\Pi_3)_w = -1. \qquad (4.3)$$

Therefore, the total number of particles is 1, as it should be, but in this case 1 is *a sum of 2 ordinary particles plus 1 odd*. That is, the first two boxes contain a particle with certainty, while the third box contains a particle with an apparent 'negative probability'. Can this odd mathematical value be given physical meaning? If we choose to trust the formalism,^{3} we can assign the negative value to mass. In this case, the last equation denotes a *negative weak value for the particle's very existence in the third box*. In order to fully grasp the paradoxical nature of this term, let us consider it within the context of its standard counterparts:
IF '*P*=1' means 'the particle certainly resides within this box',
and '*P*=0' means 'the particle has never resided within this box',
THEN '*P*=−1' means 'the particle certainly *un*resides within this box'.

As absurd as the latter expression may sound, we find it mathematically simple and physically significant (i.e. weak interactions with the third box would flip their sign under this specific pre- and post-selection), the alternative being to dismiss its ontological meaning by relying on forward-evolving interference and retrodiction.^{4} The choice to trust the mathematics has led Aharonov *et al*. [15] to assign a negative sign to every interaction involving the third box, as long as it is weak enough. Obviously, this cannot be done for the particle's charge, as charge plays no role in this case. The remaining choice is therefore mass. The simplest way to test this prediction is through the particle's momentum: a collision with another particle must give the latter a 'pull' rather than a 'push', even though their initial velocities were opposite. Moreover, since weak values are additive, if one wishes to know, for instance, the net interaction with the second and third boxes, one will find the 1−1=0 net interaction which motivated the title of this paper. A preliminary empirical test of this prediction has already yielded promising results [16].
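The three-boxes weak values, and the additivity behind the '1−1=0' net interaction, follow directly from the weak-value definition. A short check, with the pre- and post-selected states as given above:

```python
import numpy as np

pre = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)    # pre-selected state
post = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)  # post-selected state

def proj(i):
    """Projector onto box i (i = 0, 1, 2)."""
    P = np.zeros((3, 3))
    P[i, i] = 1.0
    return P

wv = [float(post @ proj(i) @ pre / (post @ pre)) for i in range(3)]
# wv == [1.0, 1.0, -1.0]: boxes 1 and 2 each hold the particle "with
# certainty", box 3 holds it with weak value -1, yet the total is still 1.

net_23 = wv[1] + wv[2]  # additivity: the 1 - 1 = 0 net interaction
total = sum(wv)         # -> 1.0, as probability conservation demands
```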

In passing, it is worth comparing this step of the TSVF with Dirac's choice to trust the mathematics upon encountering the negative-energy solutions of his famous four-spinor equation. That choice later led to the prediction of the positron. This may prove to be the case with the present choice as well.

### (c) Weak, odd and negative: but not necessarily rare

To the extent that the above weak values are indeed part of quantum reality rather than a mere mathematical curiosity, one cannot help wondering whether their existence is limited to the probabilistic fringe that allows them, namely very unique pre- and post-selections. If we do not observe such values under 'ordinary' quantum interactions, it may be because they do not exist there, but there is also another possibility which merits consideration: *perhaps even a normal quantum value is the sum of several odd quantum values*.

We are well aware of the extent of speculation in which we are now indulging towards the end of this work. Not only do we favour the highly unorthodox account of time-symmetric quantum causality, but we are also willing to ascribe to it exchanges of negative momentum and mass, to account for the fact that, despite its spatial and temporal oddities, the quantum interaction does obey conservation laws.

## 5. From information to ontology

How, then, can non-events play a part in the quantum process? The challenge's acuity is reflected in the various radical moves it has elicited from the rival interpretations of QM. Among these, two famous schools have resorted to the following opposite extremes:

(i) *Abandonment of ontology: Copenhagen*. Since counterfactuals are facts of our knowledge, just like actual facts, let us define QM as dealing only with knowledge, information, etc., rather than with objective reality.

(ii) *Excess ontology: Many Worlds*. The counterfactual does occur, but in a different world, split from ours at the instant of measurement.

Our option is *moderate ontology*. It does not seek shelter in strict positivism, which can seldom be falsified yet is equally barren. It rather indulges in a modicum of speculation, with its inherent risk of being proven wrong. Our inspiration comes from Wigner's seminal essay 'The unreasonable effectiveness of mathematics in the natural sciences' [17]. Following a long line of thinkers ever since Plato, Wigner mused that 'The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve'. What, after all, is so unique about science? It is the fact that mere *information*, especially mathematical, can yield, with appropriate manipulation of its symbols, *additional information*, which may strike us as odd or even wrong. Yet, if we take it seriously, we may find that Nature indeed possesses what this new information conveys!

Our moderate ontology concerns the nature of time. It is from the present physical and mathematical attributes of time that we seek to derive a new property. Our first derivation (which is, let it be stressed again, a rigorous one) is QO [1], whereby quantum events happen and sometimes also unhappen. This ontology offers, paradoxically, a *demystifying* account of the quantum world [2]. We now dare to speculate further into the nature of time and endorse the idea of Becoming [18,19]; namely, the genuine *emergence of events together with their surrounding space–time* [18,20]. It is within this Becoming, our speculation goes, that happening and unhappening can most naturally occur.

Becoming, however, is notoriously hard to reconcile with general relativity, which seems to portray a block universe where all events—past, present and future—have the same degree of existence, just like events in different locations. It is therefore significant that even ardent block universe adherents, such as Stuckey *et al.* [21], resort to their own moderate ontology. They invoke ‘space–timesource elements’ which are presented as more fundamental than space–time. This, we believe, is not so remote from Becoming, especially its recent moderate variants in the works of Cramer [6,7] and Kastner [8]. Something that is as yet unknown about time may be waiting to be revealed.

## 6. Summary

How, in conclusion, can an event which has never occurred but just *could* have exert observable effects? Our analysis of QO has revealed some elusive intermediate stages in the information yielded by measurement, which can sometimes end up in nearly perfect self-cancellation, still leaving some traces in neighbouring systems. We then found it natural to follow the transactional interpretation and the two-state-vector formalism of QM in search of a more comprehensive account of this process. Allowing ourselves one more unorthodox hypothesis, we suggest that Becoming may be the fundamental ingredient of physical reality which makes such phenomena possible. Several insights emerging from these works, and moreover some experimental predictions of the TSVF, converged into a picture in which both advanced and retarded actions form the quantum event, where even physical values known so far to be only positive, such as energy and mass, can also change their sign.

As for whether the alluring analogies that have guided us prove genuine, the burden of proof remains with us, to be addressed in works to follow.

## Authors' contributions

Both authors contributed substantially to the ideas, contents and presentation of this work. Both authors gave their final approval for publication.

## Competing interests

We do not have any competing interests with regard to this manuscript.

## Funding

E.C. acknowledges support from ERC-AD NLST.

## Acknowledgements

We are grateful to Yakir Aharonov, William Mark Stuckey and Ruth Elinor Kastner for inspiring discussions and helpful comments.

## Footnotes

One contribution of 14 to a theme issue ‘Quantum foundations: information approach’.

↵1 This is analogous to a photon's wave function undergoing position measurement upon passing a slit and immediately spreading again by diffraction, until the final detection of the photon.

↵2 It is also worth noting that position measurement is part of nearly all standard quantum measurements, e.g. spin measurement necessitating two detectors placed within the Stern–Gerlach magnet.

↵3 Admittedly, there is no consensus on this point. We, for example, support the more physical view on weak values representing real properties of single particles in pre-/post-selected ensembles, while Kastner [14], for instance, suggests that weak values represent mere amplitudes and should be applied to statistical ensembles rather than to single particles.

↵4 This perfectly reasonable approach was recently suggested to us via private communication from W. M. Stuckey (2015).

- Accepted February 9, 2016.

- © 2016 The Author(s)