TL;DR: By explicitly providing task-specific dialog policies, this work proposes the Schema Attention Model (SAM) and an improved schema representation to address zero-shot transfer learning. On the STAR corpus, SAM achieves a significant improvement in the zero-shot setting, raising F1 by 22 points.
Abstract
Developing mechanisms that flexibly adapt dialog systems to unseen tasks and
domains is a major challenge in dialog research. Neural models implicitly
memorize task-specific dialog policies from the training data.