Many language generation tasks require the production of text conditioned on
both structured and unstructured inputs. We present a novel neural network
architecture that generates an output sequence conditioned on an arbitrary
number of input functions. Crucially, our approach allows both the choice of
conditioning context and the granularity of generation,