Pre-trained text-to-text transformers achieve impressive performance across a wide range of NLP tasks, and they naturally support zero-shot learning (ZSL) by using the task description as a prompt in the input. However, this approach has potential limitations, as it learns from input-out