By organizing object recognition as indexing into a look-up table of model object features, it can be made efficient in terms of time and memory complexity. However, for general three-dimensional (3D) objects it is not possible to compute non-trivial, object-specific descriptors from a single view that are invariant to arbitrary choices of viewpoint. This raises the question of how to organize recognition of 3D objects from single views effectively by making various compromises between invariance and efficiency of representation. For general shapes we can derive shape constraints, invariant to viewpoint and other camera parameters, that relate 3D and image structure. These relations can be used to verify the presence of a specific 3D object in an image, but they do not allow the computation of view-invariant indexes. To obtain an indexing system for recognition, there are essentially two options: if we want complete view invariance, we have to restrict the class of objects; alternatively, if we want methods that work for general, unconstrained object types, we have to restrict the range of viewpoints over which invariant descriptors can be computed. We will see how this naturally leads to the introduction of incidence and order structure, respectively, as a basis for shape description. The hierarchy of geometric structure descriptions (projective/affine, order, and incidence) can all be described in a unified way in terms of properties of bracket expressions of image coordinates in arbitrary frames.
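To make the final claim concrete, the following minimal sketch (an illustration, not the paper's implementation) shows how a single bracket expression, the determinant of three points in homogeneous image coordinates, encodes both kinds of structure: the bracket vanishes exactly when the points are incident on a common line, and its sign gives the order (orientation) of the triple. The point coordinates are made up for illustration.

```python
import numpy as np

def bracket(p, q, r):
    """Bracket [p q r]: determinant of three points given in
    homogeneous image coordinates (x, y, 1).
    - Zero  -> incidence: the three points are collinear.
    - Sign  -> order: orientation of the ordered triple,
      invariant under orientation-preserving transformations.
    """
    return float(np.linalg.det(np.array([p, q, r], dtype=float)))

# Example points (hypothetical, chosen to show both cases)
a = (0.0, 0.0, 1.0)
b = (1.0, 0.0, 1.0)
c = (2.0, 0.0, 1.0)   # on the line through a and b
d = (1.0, 1.0, 1.0)   # off that line

print(bracket(a, b, c))      # 0.0 -> incidence: a, b, c are collinear
print(bracket(a, b, d) > 0)  # True -> order: d lies to the left of a->b
```

Full projective invariants are then built as suitable ratios of such brackets, so that the frame-dependent factors cancel; incidence and order correspond to the weaker properties of a bracket being zero or having a definite sign.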