The commonly used Q-learning algorithm, when combined with function approximation, induces systematic overestimation of state-action values. These systematic errors can cause instability, poor performance, and in some cases divergence of learning. In this work, we present the \textsc{Averaged Target DQN} (ADQN) algorithm, an adaptation to the DQN class of algorithms
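The averaging idea can be illustrated with a minimal sketch: instead of bootstrapping from a single target network, the TD target is built from Q-values averaged over the $K$ most recent target-network snapshots, which reduces the variance of the estimate. The helper names below (`averaged_q_target`, `q_fn`) are hypothetical, not from the paper's code.

```python
import numpy as np

def averaged_q_target(target_params_history, q_fn, next_states,
                      rewards, dones, gamma=0.99):
    """Form a DQN bootstrap target from Q-values averaged over the
    K most recent target-network parameter snapshots.

    q_fn(params, states) is a hypothetical evaluator returning a
    (batch, num_actions) array of Q-values for the given parameters.
    """
    # Average the Q-value estimates of the K stored snapshots.
    q_avg = np.mean([q_fn(p, next_states) for p in target_params_history],
                    axis=0)
    # Standard max-over-actions bootstrap on the averaged estimate.
    max_next_q = q_avg.max(axis=1)
    return rewards + gamma * (1.0 - dones) * max_next_q
```

With $K=1$ this reduces to the usual DQN target; larger $K$ trades memory and compute for a lower-variance (and hence less overestimation-prone) target.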