Oct, 2020
Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen, Yangfeng Ji
TL;DR
This paper proposes a variational word mask (VMASK) method that automatically learns task-specific important words and filters out irrelevant information in order to improve the interpretability of model predictions. Evaluations on seven benchmark text classification datasets demonstrate the effectiveness of VMASK in improving both prediction accuracy and interpretability.
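As a rough illustration of the masking idea described above (not the authors' implementation), the sketch below down-weights word embeddings with per-word relaxed Bernoulli masks sampled via a Gumbel-softmax, and adds a KL penalty toward an uninformative Bernoulli(0.5) prior so the model is pushed to keep only task-relevant words. All function names, shapes, and the specific relaxation are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_mask(logits, tau=0.5):
    """Sample a relaxed binary mask per word via the Gumbel-softmax trick.
    logits: (seq_len, 2) unnormalized log-probs for (drop word, keep word)."""
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, logits.shape)))  # Gumbel noise
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    y = y / y.sum(axis=-1, keepdims=True)                       # softmax
    return y[:, 1]  # soft keep-weight in (0, 1) for each word

def kl_to_uniform(p, eps=1e-9):
    """KL(Bernoulli(p) || Bernoulli(0.5)) summed over words: the
    information-bottleneck-style penalty that makes masks selective."""
    p = np.clip(p, eps, 1 - eps)
    return np.sum(p * np.log(p / 0.5) + (1 - p) * np.log((1 - p) / 0.5))

# Toy example: 4 words, embedding dimension 3 (values are placeholders).
emb = rng.normal(size=(4, 3))       # word embeddings from the encoder
logits = rng.normal(size=(4, 2))    # per-word mask logits (learned in practice)
mask = gumbel_softmax_mask(logits)  # soft keep-probabilities
masked_emb = emb * mask[:, None]    # irrelevant words get down-weighted

keep_prob = 1.0 / (1.0 + np.exp(logits[:, 0] - logits[:, 1]))  # sigmoid
penalty = kl_to_uniform(keep_prob)
print(masked_emb.shape, mask.shape, penalty >= 0.0)
```

In training, the classifier loss and the KL penalty would be optimized jointly, and the learned keep-probabilities double as word-importance scores for explanation.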
Abstract
To build an interpretable neural text classifier, most of the prior work has focused on designing inherently interpretable models or finding faithful explanations. A new line of work on improving model interpretability…