We study knowledge-grounded dialogue generation with pre-trained language
models. Rather than pursuing a new state of the art on benchmarks, we seek to
understand whether the knowledge stored in the parameters of pre-trained models
is already sufficient to ground open-domain dialogues, and thus all