Jan, 2022
In Defense of the Unitary Scalarization for Deep Multi-Task Learning
Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. Pawan Kumar
TL;DR
We show that unitary scalarization, combined with standard regularization and stabilization techniques, matches or exceeds the performance of complex multi-task optimizers in popular supervised and reinforcement learning settings; we argue this result calls for a critical reassessment of recent multi-task learning research.
Abstract
Recent multi-task learning research argues against unitary scalarization, where training simply minimizes the sum of the task losses. Several ad-hoc multi-task …
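As a minimal sketch of the unitary scalarization the abstract refers to, the multi-task objective is simply the unweighted sum of the per-task losses (the task count and loss values below are hypothetical, for illustration only):

```python
# Unitary scalarization: combine per-task losses into one scalar
# objective by summing them, with no per-task weights or ad-hoc
# gradient manipulation.

def unitary_scalarization(task_losses):
    """Return the single training objective: the sum of task losses."""
    return sum(task_losses)

# Three hypothetical tasks whose losses have already been reduced
# to scalars for the current batch.
losses = [0.50, 1.25, 0.25]
total = unitary_scalarization(losses)
print(total)  # 2.0
```

In a deep learning framework, backpropagating through this summed scalar gives each shared parameter the sum of its per-task gradients, which is exactly the behavior the specialized multi-task optimizers mentioned in the abstract attempt to modify.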