In theoretical cognitive science, there is a tension between highly
structured models whose parameters have a direct psychological interpretation
and highly complex, general-purpose models whose parameters and representations
are difficult to interpret. The former typically provide more insight into
cognition, but the latter often perform better. This tension motivates
qDKT, a variant of deep knowledge tracing (DKT) that models each learner's success probability on individual questions over time. qDKT incorporates graph Laplacian regularization together with an initialization scheme inspired by the fastText algorithm, achieves state-of-the-art performance at predicting learner outcomes, and serves as a baseline for new question-centric knowledge tracing models.
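To make the regularization idea concrete, the following is a minimal sketch of a graph Laplacian penalty that smooths predicted success probabilities across questions connected in a question graph (e.g., questions sharing a skill). The adjacency matrix, function name, and quadratic penalty form are illustrative assumptions about the general technique, not the authors' exact implementation.

```python
import numpy as np

def laplacian_regularizer(predictions, adjacency, weight=1.0):
    """Graph Laplacian smoothness penalty (illustrative sketch, not qDKT's exact code).

    For a symmetric adjacency matrix A, the penalty equals
        weight * p^T L p = weight * 0.5 * sum_ij A[i, j] * (p_i - p_j)^2,
    where L = D - A is the combinatorial graph Laplacian. It is small when
    connected questions (e.g., questions tagged with the same skill) receive
    similar predicted success probabilities.
    """
    degree = np.diag(adjacency.sum(axis=1))   # D: diagonal degree matrix
    laplacian = degree - adjacency            # L = D - A
    return weight * predictions @ laplacian @ predictions

# Hypothetical example: questions 0 and 1 share a skill, question 2 is isolated.
adjacency = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])
predictions = np.array([0.9, 0.1, 0.5])      # divergent predictions on linked questions
penalty = laplacian_regularizer(predictions, adjacency)  # → 0.64
```

In training, a term like this would be added to the usual prediction loss so that gradient descent pulls the probabilities of skill-linked questions toward each other.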