Temporal-Difference (TD) learning is a general and very useful tool for
estimating the value function of a given policy, which in turn is required to
find good policies. Generally speaking, TD learning updates states whenever
they are visited. When the agent lands in a state, its value can be used to
compute the TD-error, which is then propagated to other states.
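The update rule described above can be sketched with a minimal TD(0) policy-evaluation loop. This is an illustrative example, not code from this work: it assumes a simple random-walk chain MDP (states 0..4, terminating on the left with reward 0 and on the right with reward 1), a tabular value function, and a constant step size.

```python
import random

def td0_chain(num_states=5, episodes=5000, alpha=0.05, gamma=1.0, seed=0):
    """TD(0) evaluation of the uniform-random policy on a chain MDP.

    The agent starts in the middle state and steps left or right with
    equal probability; episodes terminate off either end of the chain.
    """
    rng = random.Random(seed)
    V = [0.0] * num_states  # tabular value estimates for non-terminal states
    for _ in range(episodes):
        s = num_states // 2  # start in the middle of the chain
        while True:
            s_next = s + rng.choice([-1, 1])
            if s_next < 0:               # left terminal: reward 0
                r, v_next, done = 0.0, 0.0, True
            elif s_next >= num_states:   # right terminal: reward 1
                r, v_next, done = 1.0, 0.0, True
            else:                        # non-terminal: bootstrap from V
                r, v_next, done = 0.0, V[s_next], False
            # TD-error: bootstrapped target minus the current estimate
            delta = r + gamma * v_next - V[s]
            # Update the value of the visited state toward the target
            V[s] += alpha * delta
            if done:
                break
            s = s_next
    return V
```

For this chain the true values are 1/6, 2/6, ..., 5/6, and the estimates approach them as episodes accumulate; because each update bootstraps from `V[s_next]`, error in one state's estimate influences its neighbors, which is the propagation mechanism discussed here.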
This work studies the problem that temporal-difference (TD) learning with function approximation can converge to worse solutions than Monte Carlo regression, and the problem that approximation error where the value function has sharp discontinuities can propagate further through bootstrapped updates. We find empirical evidence of this leakage propagation, and argue that it arises only in the presence of approximation error. We then show that leakage propagation is consistent with the results of [Tsitsiklis and Van Roy, 1997], although those results do not imply whether or when it occurs. Finally, we test whether the problem can be mitigated by better state representations, and whether such representations can be learned without rewards or privileged information.