BriefGPT.xyz
Nov, 2022
FPT: Improving Prompt Tuning Efficiency via Progressive Training
Yufei Huang, Yujia Qin, Huadong Wang, Yichun Yin, Maosong Sun, et al.
TL;DR
This paper proposes a technique called Fast Prompt Tuning (FPT), which improves the training efficiency of prompt tuning (PT) by first learning soft prompts on partial PLMs and then transferring them to the full PLM. Applying this technique saves over 30% of training compute while maintaining performance.
Abstract
Recently, prompt tuning (PT) has gained increasing attention as a parameter-efficient way of tuning pre-trained language models (PLMs). Despite extensively reducing the number of tunable parameters and achieving …