Natural language generation systems must achieve fluency goals as well as fidelity goals. Fluency makes systems more usable by, for instance, producing language that is easier for people to process or that engenders a positive evaluation of the system. Using very simple examples, we have explored one way to achieve specific fluency goals. These goals are stated as norms on ‘macroscopic’ properties of the text as a whole, rather than on individual words or sentences. Such properties are hard to accommodate within a conventional architecture. One solution is a two-component architecture that permits independent variation of the components, either or both of which can be stochastic.
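As a minimal sketch of the kind of two-component architecture described above, the following assumes a "generate then select" design: a stochastic first component proposes alternative realisations of the same content, and a second component scores each whole candidate text against a macroscopic norm (here, a hypothetical target mean sentence length) and selects the best match. All function names, phrasings, and the particular norm are illustrative assumptions, not the paper's implementation.

```python
import random

def generate_candidates(rng, n=5):
    # Component 1 (stochastic): realise the same two facts with varied phrasing.
    # The openers/closers are invented example strings.
    openers = ["The system replied.", "The system produced a reply."]
    closers = ["The user was satisfied.",
               "The user reported being satisfied with the result."]
    return [f"{rng.choice(openers)} {rng.choice(closers)}" for _ in range(n)]

def mean_sentence_length(text):
    # A macroscopic property of the text as a whole, not of any one word.
    sentences = [s for s in text.split(".") if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

def select(candidates, target_length):
    # Component 2: pick the candidate whose macroscopic property
    # best satisfies the stated norm.
    return min(candidates,
               key=lambda t: abs(mean_sentence_length(t) - target_length))

rng = random.Random(0)
candidates = generate_candidates(rng)
best = select(candidates, target_length=4.0)
print(best)
```

Because the two components are independent, either can be varied on its own: the generator could be replaced by any stochastic proposer, and the selector by any scorer of whole-text properties.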