TL;DR: This paper studies finite-precision RNNs and proves that the LSTM and the Elman-RNN with ReLU activation are strictly more powerful than the RNN with a squashing activation and the GRU, because they can implement counting behavior; experiments confirm that the LSTM does learn to use this counting mechanism effectively.
Abstract
While recurrent neural networks (RNNs) are famously known to be Turing complete, this relies on infinite precision in the states and unbounded computation time. We consider the case of RNNs with finite precision whose computation time is linear in the input length. Under these limitations, we show that different RNN variants have different computational power. In particular, we show that the LSTM and the Elman-RNN with ReLU activation are strictly stronger than the RNN with a squashing activation and the GRU. This is achieved because LSTMs and ReLU-RNNs can easily implement counting behavior. We show empirically that the LSTM does indeed learn to effectively use the counting mechanism.
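To make the counting argument concrete, here is a minimal sketch (my own illustration, not code from the paper) of how a single LSTM unit can act as a counter: with the forget and input gates saturated to 1, the cell update c_t = f·c_{t-1} + i·g_t reduces to c_t = c_{t-1} + g_t, so a candidate g of roughly +1 on symbol `a` and -1 on symbol `b` makes the cell state track #a − #b, as needed for languages like a^n b^n. The specific weight magnitudes (±20) are hand-picked assumptions chosen only to saturate the nonlinearities.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_counter_step(c_prev, x):
    """One step of a hand-constructed single-unit LSTM counter.

    x is a one-hot vector [is_a, is_b]. Large constant biases (±20,
    an illustrative assumption) saturate the gates and the tanh.
    """
    f = sigmoid(20.0)                       # forget gate ≈ 1: keep the running count
    i = sigmoid(20.0)                       # input gate ≈ 1: always add the candidate
    g = np.tanh(20.0 * x[0] - 20.0 * x[1])  # candidate ≈ +1 on 'a', ≈ -1 on 'b'
    return f * c_prev + i * g               # c_t ≈ c_{t-1} + (#a increment) - (#b increment)

def count(string):
    c = 0.0
    for ch in string:
        x = np.array([1.0, 0.0]) if ch == "a" else np.array([0.0, 1.0])
        c = lstm_counter_step(c, x)
    return c

print(round(count("aaabbb")))  # 0 -> balanced, consistent with a^n b^n
print(round(count("aaab")))    # 2 -> two unmatched a's
```

The same construction fails for a squashing-activation RNN or a GRU: their state updates interpolate between bounded values rather than accumulate, which is the intuition behind the paper's separation result.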