October 2017
InterpNET: Neural Introspection for Interpretable Deep Learning
Shane Barratt
TL;DR
This paper proposes a new framework for interpretable neural networks that generates natural-language explanations of classifications, aiming to bridge the gap between human reasoning and the reasoning of deep neural networks. The model achieves a METEOR score of 37.9 on the CUB bird classification-and-explanation dataset, the current state of the art.
Abstract
Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification …
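
To make the idea concrete, below is a minimal sketch of one plausible realization of a classifier that also emits a natural-language explanation of its decision: an encoder produces an internal representation that feeds both a class predictor and a recurrent decoder generating explanation words. All module choices, layer sizes, the GRU decoder, and the joint loss are illustrative assumptions, not the paper's actual InterpNET architecture.

```python
import torch
import torch.nn as nn

class ExplainableClassifier(nn.Module):
    """Hypothetical sketch: classify an input and generate a textual explanation."""

    def __init__(self, in_dim=512, hidden_dim=256, num_classes=200,
                 vocab_size=1000, embed_dim=128):
        super().__init__()
        # Classification branch.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden_dim, num_classes)
        # Explanation branch: a recurrent decoder conditioned on the
        # encoder's internal representation emits per-step word logits.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.word_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, features, explanation_tokens):
        # features: (batch, in_dim) pre-extracted image features (assumed given).
        # explanation_tokens: (batch, seq_len) word ids used for teacher forcing.
        h = self.encoder(features)                     # internal representation
        class_logits = self.classifier(h)              # classification output
        emb = self.embed(explanation_tokens)           # (batch, seq_len, embed_dim)
        out, _ = self.decoder(emb, h.unsqueeze(0))     # condition decoder on h
        word_logits = self.word_head(out)              # (batch, seq_len, vocab_size)
        return class_logits, word_logits


# Usage sketch: joint loss over classification and explanation generation.
model = ExplainableClassifier()
feats = torch.randn(4, 512)
tokens = torch.randint(0, 1000, (4, 12))   # toy explanation word ids
labels = torch.randint(0, 200, (4,))       # toy class labels
inp, tgt = tokens[:, :-1], tokens[:, 1:]   # predict the next word at each step
class_logits, word_logits = model(feats, inp)
loss = (nn.functional.cross_entropy(class_logits, labels)
        + nn.functional.cross_entropy(word_logits.reshape(-1, 1000), tgt.reshape(-1)))
```

Training the two heads jointly like this is one common way to couple a prediction with a generated explanation; whether the explanation is conditioned on a single layer or on several internal activations is a design choice the paper itself addresses.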