BriefGPT.xyz
May, 2024
Efficient Exploration in Average-Reward Constrained Reinforcement Learning: Achieving Near-Optimal Regret With Posterior Sampling
Danil Provodin, Maurits Kaptein, Mykola Pechenizkiy
TL;DR
A new algorithm based on posterior sampling achieves a near-optimal regret bound for learning in infinite-horizon constrained Markov decision processes, and offers practical advantages over existing algorithms.
Abstract
We present a new algorithm based on posterior sampling for learning in …
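To illustrate the core idea behind the paper's approach, the following is a minimal sketch of posterior (Thompson) sampling in a small tabular MDP. It is not the paper's algorithm: it ignores the constraint handling and the average-reward analysis entirely, uses an illustrative finite-horizon planner, and every environment and variable name here is hypothetical. The agent keeps a Dirichlet posterior over transitions, samples a plausible model each episode, and acts greedily with respect to that sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 3, 2

# Hypothetical ground-truth MDP used only to generate experience.
true_P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
true_R = rng.uniform(size=(n_states, n_actions))

# Dirichlet posterior counts over transitions (uniform prior) and
# running sums for a simple mean-reward estimate.
counts = np.ones((n_states, n_actions, n_states))
R_sum = np.zeros((n_states, n_actions))
R_n = np.zeros((n_states, n_actions))

def greedy_policy(P, R, horizon=50):
    """Finite-horizon value iteration on a model; returns a greedy policy."""
    V = np.zeros(n_states)
    for _ in range(horizon):
        Q = R + P @ V          # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] V[s']
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

s = 0
for episode in range(20):
    # 1. Sample a plausible MDP from the current posterior.
    P_hat = np.array([[rng.dirichlet(counts[si, a]) for a in range(n_actions)]
                      for si in range(n_states)])
    R_hat = np.where(R_n > 0, R_sum / np.maximum(R_n, 1), 0.5)
    # 2. Plan in the sampled model, then act for a few steps.
    policy = greedy_policy(P_hat, R_hat)
    for _ in range(10):
        a = policy[s]
        s_next = rng.choice(n_states, p=true_P[s, a])
        # 3. Update the posterior with the observed transition and reward.
        counts[s, a, s_next] += 1
        R_sum[s, a] += true_R[s, a]
        R_n[s, a] += 1
        s = s_next
```

The random sampling step is what drives exploration: state-action pairs with few observations have a diffuse posterior, so sampled models occasionally make them look attractive, and the agent tries them without any explicit exploration bonus.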