Programming robots and other autonomous systems to interact with the world in real time is bringing into sharp focus general questions about representation, inference and understanding. These artificial agents use digital computation to interpret the data gleaned from sensors and to produce decisions and actions that guide their future behaviour. In a physical system, however, finite computational resources unavoidably impose the need to approximate and to make selective use of the available information in order to reach prompt deductions. Recent research has led to widespread adoption of the methodology of Bayesian inference, which provides a principled framework for understanding this process of modelling as informed, explicitly acknowledged approximation. The performance of modern systems has improved greatly over the heuristic methods of the early days of artificial intelligence. We discuss the general problem of real-time inference and computation, and draw on examples from recent research in computer vision and robotics: specifically, visual tracking and simultaneous localization and mapping.
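The recursive Bayesian inference alluded to above can be illustrated with a minimal sketch (not drawn from this paper): a discrete Bayes filter localizing a robot on a one-dimensional grid of cells, where each time step costs a fixed, finite amount of computation. The grid size, motion-slip probability, landmark map and sensor error rates below are all hypothetical values chosen for illustration.

```python
import numpy as np

N = 10                        # number of grid cells (hypothetical world size)
belief = np.ones(N) / N       # uniform prior over robot position

def predict(belief, move=1, p_correct=0.8):
    """Motion update: the robot commands a shift of `move` cells,
    but with probability 1 - p_correct it slips and stays put."""
    shifted = np.roll(belief, move)
    return p_correct * shifted + (1 - p_correct) * belief

def update(belief, z, world, p_hit=0.9, p_miss=0.1):
    """Measurement update: reweight the belief by the likelihood of the
    (noisy) landmark observation z, then renormalize."""
    likelihood = np.where(world == z, p_hit, p_miss)
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Hypothetical map: 1 marks a cell containing a landmark, 0 an empty cell.
world = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])

for z in [1, 0, 0, 1]:        # a short sequence of sensor readings
    belief = predict(belief)  # approximate forward prediction
    belief = update(belief, z, world)

print(belief.argmax())        # most probable cell after four steps
```

Each iteration performs work proportional only to the grid size, never to the history of observations, which is the sense in which the posterior is an informed, bounded-cost approximation suitable for real-time operation.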