The simplest form of sensor-motor control is a reflex. A reflex can be interpreted as part of a closed-loop control paradigm that measures a sensor input and generates a motor reaction as soon as the sensor signal deviates from its desired (resting) state. This is a typical case of feedback control. Reflex reactions, however, are tardy: they always occur only after the (for example, unpleasant) reflex-eliciting sensor event. This poses a problem for an organism, which can be avoided only if the corresponding motor reaction is generated earlier. The goal of this study is to design a closed-loop control situation in which temporal-sequence learning replaces the tardy reflex reaction with a proactive anticipatory action. We achieve this by employing a second, earlier-occurring and causally coupled sensor event. An appropriate motor reaction to this early event prevents the triggering of the original, primary reflex. Such causally coupled sensor events are common for animals, for example when smell predicts taste or when heat radiation precedes pain. We show that achieving anticipatory control is a fundamentally different goal from modelling a classical conditioning paradigm, which is an open-loop condition. To this end, we use a novel learning rule for temporal-sequence learning, called isotropic-sequence-order (ISO) learning, which correlates the primary sensor signal associated with the reflex with a predictive, earlier-occurring sensor input. In this way the system learns the relation between the primary reflex and the earlier sensor input and creates an earlier-occurring motor reaction. As a consequence of learning, the primary reflex is no longer triggered and thus permanently remains in its desired resting state.
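The principle behind such temporal-sequence learning can be illustrated with a small numerical sketch. The following is a hypothetical, minimal discrete-time version of a differential Hebbian (ISO-style) update, in which the weight of the predictive input changes in proportion to its own trace multiplied by the temporal derivative of the motor output; leaky traces stand in for the band-pass filters of the full rule, and all names and parameters (`MU`, `W0`, `LAG`, `DECAY`) are purely illustrative:

```python
# Hypothetical minimal sketch of an ISO-style differential Hebbian update.
# x1 (predictive input, e.g. a range finder) fires LAG steps before
# x0 (reflex-eliciting input, e.g. a touch sensor).
MU, W0, LAG, DECAY = 0.05, 1.0, 5, 0.9

w1 = 0.0                                      # plastic weight of the predictive input
for trial in range(100):                      # repeated paired presentations
    u0 = u1 = v_prev = 0.0
    for t in range(60):
        x1 = 1.0 if t == 10 else 0.0          # early predictive event
        x0 = 1.0 if t == 10 + LAG else 0.0    # later reflex-eliciting event
        u1 = DECAY * u1 + x1                  # leaky traces stand in for the
        u0 = DECAY * u0 + x0                  # band-pass filters of ISO learning
        v = W0 * u0 + w1 * u1                 # motor output
        w1 += MU * u1 * (v - v_prev)          # weight change ~ input trace x dv/dt
        v_prev = v

print(w1 > 0)  # the weight has grown: the early input now drives the output
```

Because the trace of the early input is still active when the reflex raises the output, the correlation is systematically positive, so the predictive weight grows across trials; in the closed-loop setting described above, the resulting earlier motor reaction then prevents the reflex input from occurring at all.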
In a robot application, we demonstrate that ISO learning successfully solves the classical obstacle-avoidance task by learning to correlate a built-in reflex behaviour (retraction after touching) with earlier-arising signals from range finders (before touching). Finally, we show that avoidance and attraction tasks can be combined in the same agent.