Today's probabilistic language generators often fail to produce coherent
and fluent text, even though the underlying models perform well under
standard metrics, e.g., perplexity. This discrepancy has
puzzled the language generation community for the last few years.