How can we generate summaries in different styles without requiring corpora in
the target styles or training separate models? We present two novel methods
that can be deployed during summary decoding on any pre-trained
Transformer-based summarization model. (1) Decoder state adjustment in