May, 2017
Subregular Complexity and Deep Learning
Enes Avcu, Chihiro Shibata, Jeffrey Heinz
TL;DR
Through experiments on two kinds of recurrent neural networks, this paper argues that grammatical inference over regular languages, using positive and negative examples, is a reliable tool for probing whether deep neural networks can represent and learn long-term dependencies in sequential data. It further reports that, across the same experiments, simple recurrent networks performed well on the hardest task, and that the LSTM networks' performance was overall worse than that of the simple recurrent networks.
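The formal languages involved sit in the subregular hierarchy. As a hypothetical illustration (not the paper's actual datasets), a Strictly 2-Local language can be defined by a set of forbidden adjacent symbol pairs, and the labeled positive/negative strings a classification RNN would train on can be enumerated directly:

```python
import itertools

def in_sl2_language(s, forbidden=("aa",)):
    """Membership test for a Strictly 2-Local language: a string belongs
    iff it contains none of the forbidden 2-factors (adjacent pairs)."""
    return not any(f in s for f in forbidden)

def labeled_samples(alphabet="ab", max_len=6):
    """Enumerate all strings up to max_len over the alphabet, each paired
    with a positive/negative label, as a toy training set."""
    data = []
    for n in range(1, max_len + 1):
        for tup in itertools.product(alphabet, repeat=n):
            s = "".join(tup)
            data.append((s, in_sl2_language(s)))
    return data

if __name__ == "__main__":
    data = labeled_samples()
    positives = [s for s, label in data if label]
    print(len(data), len(positives))
```

Learning such a language only requires tracking the previous symbol; long-distance dependencies, as in the paper's harder languages, require state that persists across arbitrarily long spans.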
Abstract
This paper presents experiments illustrating how formal language theory can shed light on deep learning. We train naive Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) on six formal languages drawn →