ACL, Jun 2024

Generalization-Enhanced Code Vulnerability Detection via Multi-Task Instruction Fine-Tuning

TL;DR: Vulnerability detection based on Code Pre-trained Models (CodePTMs) struggles to generalize because such models typically learn a superficial mapping from source code to labels, which leads to poor performance in real-world scenarios. To address this, VulLLM integrates multi-task learning with Large Language Models (LLMs) to mine deep-seated vulnerability features, surpassing seven state-of-the-art models in effectiveness, generalization, and robustness.
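
To make the multi-task idea concrete, here is a minimal Python sketch of how a labeled function might be expanded into instruction pairs for a detection task plus auxiliary tasks (localization and interpretation are assumed here as examples). The record fields, task names, and prompt templates are illustrative assumptions, not the paper's exact schema.

```python
import json

# Hypothetical record: a function plus vulnerability metadata.
# Field names are illustrative, not VulLLM's exact schema.
sample = {
    "code": "char buf[8];\nstrcpy(buf, user_input);",
    "label": 1,                 # 1 = vulnerable, 0 = safe
    "vuln_lines": [2],          # lines implicated in the flaw
    "explanation": "strcpy copies unbounded user input into a fixed-size buffer.",
}

# One prompt template and answer extractor per task. Fine-tuning on
# all tasks jointly pushes the model toward vulnerability semantics
# rather than a superficial code-to-label mapping.
TEMPLATES = {
    "detection": (
        "Is the following function vulnerable? Answer yes or no.\n{code}",
        lambda s: "yes" if s["label"] else "no",
    ),
    "localization": (
        "Identify the line numbers that make this function vulnerable.\n{code}",
        lambda s: ", ".join(map(str, s["vuln_lines"])),
    ),
    "interpretation": (
        "Explain why this function is vulnerable.\n{code}",
        lambda s: s["explanation"],
    ),
}

def build_instruction_records(samples):
    """Expand each labeled function into one instruction pair per task."""
    records = []
    for s in samples:
        for task, (template, answer_fn) in TEMPLATES.items():
            # Auxiliary tasks only make sense for vulnerable code.
            if task != "detection" and not s["label"]:
                continue
            records.append({
                "task": task,
                "instruction": template.format(code=s["code"]),
                "output": answer_fn(s),
            })
    return records

if __name__ == "__main__":
    for record in build_instruction_records([sample]):
        print(json.dumps(record, indent=2))
```

The resulting instruction/output pairs could then be fed to any standard supervised instruction-tuning pipeline; the point of the sketch is only the data shape, in which each code sample contributes several complementary training signals.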