BriefGPT.xyz
Jun, 2024
On the Utility of Domain-Adjacent Fine-Tuned Model Ensembles for Few-shot Problems
Md Ibrahim Ibne Alam, Parikshit Ram, Soham Dan, Horst Samulowitz, Koushik Kar
TL;DR
DAFT-E, a framework that leverages ensembles of domain-adjacent fine-tuned models for zero-shot and few-shot learning, achieves accuracy close to that of the single best model on zero-shot problems. On few-shot problems its performance improves further, outperforming any single domain-adjacent model while requiring far less domain-specific data for fine-tuning.
Abstract
Large language models (LLMs) have been observed to perform well on a wide range of downstream tasks when fine-tuned on domain-specific data. However, such data may not be readily available in many applications, m…