Recurrent neural networks (RNNs) hold immense potential for computation due to their Turing completeness and sequential processing capabilities, yet existing methods for training them face efficiency challenges.
Backpropagation (BP) is the dominant method for training deep neural networks. Node perturbation (NP) instead proposes learning by injecting noise and measuring the induced change in loss; when aligned with directional derivatives and combined with a mechanism that decorrelates layer-wise inputs, NP yields competitive performance.
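To make the NP principle concrete, the following is a minimal sketch of node perturbation on a single linear layer: noise is injected at the node outputs, the induced loss change is measured, and their correlation serves as a gradient estimate. The toy regression task and all names (`W`, `sigma`, `lr`) are illustrative assumptions, not the paper's exact setup, and the decorrelation mechanism is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out, n_samples = 8, 4, 128
W_true = rng.normal(size=(n_out, n_in))        # hypothetical target mapping
X = rng.normal(size=(n_samples, n_in))
Y = X @ W_true.T                               # regression targets

W = rng.normal(scale=0.1, size=(n_out, n_in))  # trainable weights
sigma, lr = 1e-2, 0.1                          # noise scale, step size

for step in range(2000):
    z_clean = X @ W.T                          # clean forward pass
    xi = rng.normal(scale=sigma, size=z_clean.shape)
    z_noisy = z_clean + xi                     # noise injected at the nodes

    # Per-sample loss change induced by the perturbation.
    dL = np.mean((z_noisy - Y) ** 2, axis=1) - np.mean((z_clean - Y) ** 2, axis=1)

    # NP update: correlating the loss change with the injected noise,
    # (dL / sigma^2) * xi * x^T, approximates the true gradient in expectation.
    grad_est = (dL[:, None] * xi).T @ X / (sigma**2 * n_samples)
    W -= lr * grad_est

print("final MSE:", np.mean((X @ W.T - Y) ** 2))
```

Because `E[(dL / sigma**2) * xi]` recovers the loss derivative with respect to the node outputs, the outer product with the layer input yields an unbiased (if high-variance) gradient estimate, which is why the paper's alignment with directional derivatives and input decorrelation matter for competitive performance.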