Nov, 2022
Self-Training with Purpose Preserving Augmentation Improves Few-shot Generative Dialogue State Tracking
Jihyun Lee, Chaebin Lee, Yunsu Kim, Gary Geunbae Lee
TL;DR
Proposes a new self-training framework for few-shot generative dialogue state tracking that iteratively improves the model through pseudo-labeling and purpose-preserving augmentation, boosting performance on MultiWOZ 2.1 and improving recall on slots with unseen values.
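To make the iterative procedure described in the TL;DR concrete, here is a minimal sketch of a generic self-training loop with pseudo-labeling and purpose-preserving augmentation. All function names (`train`, `pseudo_label`, `augment`, `self_train`) are hypothetical stand-ins, and the placeholder bodies do not reproduce the paper's actual models or augmentation method.

```python
# Hedged sketch: a generic self-training loop with pseudo-labeling and
# augmentation. All names here are hypothetical illustrations, not the
# paper's actual implementation.

def train(model, examples):
    # Placeholder fine-tuning step: just record which examples were used.
    model["seen"].extend(examples)
    return model

def pseudo_label(model, unlabeled):
    # Placeholder: assign each unlabeled dialogue a predicted state.
    return [(utt, f"state_for_{utt}") for utt in unlabeled]

def augment(example):
    # Purpose-preserving augmentation would vary the surface form of the
    # utterance while keeping the dialogue state (the "purpose") unchanged;
    # here we only simulate that with a marker suffix.
    utterance, state = example
    return (utterance + " (augmented)", state)

def self_train(labeled, unlabeled, rounds=2):
    # Seed the model on the small labeled set, then iterate:
    # pseudo-label, augment, and retrain on the combined data.
    model = {"seen": []}
    model = train(model, labeled)
    for _ in range(rounds):
        pseudo = pseudo_label(model, unlabeled)      # label with current model
        augmented = [augment(ex) for ex in pseudo]   # keep state, vary wording
        model = train(model, labeled + pseudo + augmented)
    return model
```

The key design point the paper's title emphasizes is that augmentation must preserve the dialogue state label, so the pseudo-labeled state stays valid for the augmented utterance.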
Abstract
In dialogue state tracking (DST), labeling the dataset involves considerable human labor. We propose a new self-training framework for few-shot generative dialogue state tracking.