AI in Education · Research

How Knowledge Tracing Using Transformers Is Making Learning More Effective

Atharva Deshmukh · NeuroLearn AI Research
April 21, 2026
12 min read

Imagine a classroom where the teacher always knows — with uncanny precision — exactly what each student understands, what they're about to forget, and what they should practice next. This is no longer science fiction. It's what transformer-based knowledge tracing systems are beginning to deliver, and it's quietly reshaping how we think about learning at scale.

What Is Knowledge Tracing?

Knowledge tracing (KT) is the task of modeling a student's evolving understanding of a subject over time — based on their history of interactions with learning material. Given a sequence of questions a student has answered (along with whether each answer was correct or incorrect), a knowledge tracing model predicts the probability that the student will correctly answer the next question on any given concept.
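The task can be made concrete with a toy baseline. The sketch below is my own illustration, not from any KT paper: it shows only the interface every KT model shares, a time-ordered history of (concept, correct) pairs in, a probability of a correct next answer out. Real models replace the smoothed per-concept average with a learned sequence model.

```python
def predict_next(history, concept, prior=0.5, weight=2.0):
    """Toy knowledge-tracing baseline: smoothed per-concept accuracy.

    history: list of (concept_id, correct) pairs, in time order.
    Returns the estimated probability the student answers the next
    question on `concept` correctly. Models like BKT, DKT, and SAINT
    replace this heuristic with a learned model of the sequence.
    """
    hits, total = 0, 0
    for c, correct in history:
        if c == concept:
            hits += int(correct)
            total += 1
    # Laplace-style smoothing pulls unseen concepts toward the prior
    return (hits + weight * prior) / (total + weight)

history = [("fractions", True), ("fractions", True), ("algebra", False)]
print(round(predict_next(history, "fractions"), 3))  # high: two correct answers
print(round(predict_next(history, "limits"), 3))     # unseen: falls back toward the prior
```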

Think of it as a continuous, invisible assessment. While a traditional test gives a snapshot of what a student knows on a single day, knowledge tracing builds a dynamic, time-evolving map of their mastery — concept by concept, moment by moment.

This has enormous practical consequences. If you can reliably predict whether a student will get a question right before they even attempt it, you can intervene precisely — giving them harder problems when they're ready, easier ones when they're struggling, and skipping content they've already mastered.

A Brief History: From Bayes to Deep Learning

Bayesian Knowledge Tracing (BKT)

The field began in the early 1990s with Bayesian Knowledge Tracing, developed by Corbett and Anderson at Carnegie Mellon. BKT models each skill as a binary latent variable: a student either "knows" it or "doesn't know" it. Using a Hidden Markov Model, it estimates the probability of mastery based on four parameters — prior knowledge, learning rate, guess probability, and slip probability.

BKT is elegant and interpretable, but it has serious limitations. It treats each skill in isolation, assumes a fixed learning rate for all students, and can't capture the complex, interdependent way real concepts relate to each other.
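The BKT update itself fits in a few lines. This sketch uses the standard two-step recursion, a Bayes update on the observed answer followed by a learning transition; the parameter values are illustrative, not fitted to any dataset.

```python
def bkt_update(p_mastery, correct, p_learn=0.2, p_guess=0.25, p_slip=0.1):
    """One step of Bayesian Knowledge Tracing for a single skill.

    p_mastery: current P(student knows the skill).
    Step 1: Bayes' rule conditions mastery on the observed answer.
    Step 2: the learning transition accounts for the chance the
    student acquired the skill during this practice opportunity.
    """
    if correct:
        # P(known | correct): knew it and didn't slip, vs. lucky guess
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        # P(known | incorrect): knew it but slipped, vs. failed guess
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability of mastery
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

Note the binary-mastery assumption baked into the code: one scalar probability per skill, with no notion of related skills influencing each other.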

Item Response Theory (IRT)

Item Response Theory takes a different approach — modeling the probability of a correct answer as a function of both the student's latent ability and the item's difficulty, discrimination, and guessing parameters. It's the backbone of most standardized tests. But IRT gives a static snapshot of ability rather than tracking how that ability evolves through learning.
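The three-parameter logistic (3PL) form of IRT makes the contrast with knowledge tracing explicit: there is no time index anywhere in the formula. A minimal sketch, with illustrative parameter values:

```python
import math

def irt_3pl(theta, difficulty, discrimination=1.0, guessing=0.2):
    """Three-parameter logistic IRT model: P(correct answer).

    theta: the student's latent ability; difficulty (b),
    discrimination (a), and guessing (c) are per-item parameters.
    Ability is static here, which is exactly the limitation
    knowledge tracing addresses.
    """
    return guessing + (1 - guessing) / (
        1 + math.exp(-discrimination * (theta - difficulty))
    )

# Same student (theta = 0) facing items of increasing difficulty
for b in (-1.0, 0.0, 1.0):
    print(round(irt_3pl(theta=0.0, difficulty=b), 3))
```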

Deep Knowledge Tracing (DKT)

The landmark 2015 paper by Piech et al. from Stanford introduced Deep Knowledge Tracing — replacing the HMM with an LSTM neural network. By feeding sequences of student interactions into an LSTM, DKT could automatically learn complex patterns of knowledge acquisition that BKT simply couldn't capture. Performance on next-question prediction improved dramatically.

But LSTMs have their own ceiling. They process sequences step by step, which makes it hard to capture long-range dependencies — for example, how a concept a student struggled with six months ago should influence today's prediction. They're also slow to train and difficult to interpret.

Key Insight

Every generation of knowledge tracing models has improved on its predecessor's biggest weakness. The transformer didn't just improve performance — it addressed the fundamental bottleneck of sequential processing, enabling the model to "look everywhere at once" across a student's entire history.

Enter the Transformer

The transformer architecture, introduced by Vaswani et al. in the 2017 paper "Attention Is All You Need," discarded recurrence entirely. Instead of processing one token at a time, transformers use a self-attention mechanism that computes relationships between all positions in a sequence simultaneously. This allows them to model arbitrarily long dependencies without degradation.

For knowledge tracing, this is revolutionary. A student's learning journey isn't a clean sequential chain — it's a web. A mistake on integration might suddenly make sense three weeks later when the student masters limits. The transformer can capture that web.
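A minimal numpy sketch makes the "look everywhere at once" behavior concrete: scaled dot-product self-attention with a causal mask and, for simplicity, no learned query/key/value projections.

```python
import numpy as np

def self_attention(X, mask_future=True):
    """Scaled dot-product self-attention over an interaction sequence
    (one head, no learned projections, for illustration only).

    X: (seq_len, d) matrix, one embedding per past interaction.
    Each output position is a weighted mix of all visible positions,
    so an interaction from months ago can contribute as strongly as
    yesterday's if its content is relevant.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)           # pairwise relevance scores
    if mask_future:                         # causal mask: no peeking ahead
        scores = np.where(np.tri(len(X), dtype=bool), scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ X, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 interactions, 8-dim embeddings
out, attn = self_attention(X)
print(attn[-1].round(2))                    # how step 5 attends to steps 1-5
```

Every row of `attn` is computed in one matrix multiplication, which is why transformers also train far faster than step-by-step LSTMs on long sequences.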

Put in knowledge-tracing terms, the core idea of "Attention Is All You Need" is this: self-attention lets the model weigh each past interaction by how conceptually relevant it is to the current prediction, not merely by how recently it occurred.

Transformer Models for Knowledge Tracing

SAINT — Separated Self-AttentIve Neural Knowledge Tracing

One of the most significant transformer KT architectures is SAINT (Choi et al., 2020), developed with data from the Riiid educational platform. SAINT uses an encoder-decoder structure inspired by the original transformer: the encoder processes exercise information (question content and concept tags), and the decoder processes response information (correct/incorrect answers). The two streams are kept separate until deep in the network, allowing each to develop rich representations before they interact.

SAINT achieved state-of-the-art AUC scores on the EdNet dataset — one of the largest student interaction datasets ever collected, containing over 95 million interactions from 780,000 students.

AKT — Attentive Knowledge Tracing

AKT (Ghosh et al., 2020) introduced a key insight: not all past interactions should be weighted equally. Recent interactions and interactions on closely related concepts should matter more. AKT incorporates monotonic attention — a modified attention mechanism with an exponential decay on distance — ensuring that more recent answers receive higher attention weights, mimicking the natural forgetting curve.

AKT also uses a Rasch model embedding to capture per-item difficulty, merging classical psychometrics with modern deep learning.
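The decay idea can be sketched in a few lines. This is a simplification of AKT's monotonic attention, which actually uses a learned decay rate and a context-aware distance measure; here the distance is just the number of steps back in time.

```python
import numpy as np

def decayed_attention(scores, theta=0.5):
    """Recency-decayed attention over past interactions (simplified
    sketch of AKT-style monotonic attention, not the paper's exact form).

    scores: (n,) raw relevance scores of past interactions to the
    current prediction, oldest first. Subtracting theta * distance
    before the softmax multiplies each weight by exp(-theta * d),
    mimicking an exponential forgetting curve.
    """
    n = len(scores)
    distance = np.arange(n - 1, -1, -1)   # steps back in time
    logits = scores - theta * distance
    w = np.exp(logits - logits.max())
    return w / w.sum()

# With equally relevant content, recent interactions dominate
print(decayed_attention(np.zeros(6), theta=0.5).round(3))
```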

BERT-based Approaches

Researchers have also adapted BERT (Bidirectional Encoder Representations from Transformers) for knowledge tracing. Unlike causal (left-to-right) models, BERT can attend to context from both past and future within a sequence, which makes it powerful for tasks like filling in missing knowledge states — useful when student interaction logs have gaps.

| Model | Architecture | Key Strength | Limitation |
|---|---|---|---|
| BKT | Hidden Markov Model | Interpretable | Binary mastery only |
| DKT | Recurrent Neural Network (LSTM) | Learns sequences | Weak long-range memory |
| SAINT | Encoder-Decoder Transformer | Separates exercise & response | Needs large datasets |
| AKT | Transformer + Rasch Model | Recency-aware attention | More parameters to tune |
| BERT-KT | Bidirectional Transformer | Handles sparse data | High compute cost |

Why This Makes Learning More Effective

The ultimate goal of knowledge tracing isn't academic; it's practical. Accurately modeling what a student knows enables three things that transform learning outcomes: sequencing harder problems when a student is ready for them, stepping back to easier ones when they're struggling, and skipping content they've already mastered.

"The best educational technology doesn't replace the teacher. It gives the teacher a superpower: the ability to know, at any moment, exactly where every student stands and what they need next."

— NeuroLearn Research Team

The Role of Auxiliary Features

Modern transformer KT models go beyond just correct/incorrect answers. They incorporate a rich set of auxiliary features that dramatically improve prediction accuracy:

Elapsed time — how long a student spent on a question — is a powerful signal. A student who answers a hard question in 3 seconds likely guessed. One who spends 8 minutes and gets it wrong is engaging deeply, just not yet successfully.

Lag time — the time between interactions — captures the effect of the forgetting curve. A student who reviews a concept after two weeks of no practice is in a very different state than one who reviewed it yesterday.

Concept graph structure — encoding prerequisite relationships between skills (e.g., fractions are a prerequisite of algebra) — helps the model generalize across related concepts even with sparse data.
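A hedged sketch of how such features might be encoded per interaction. The field names, log scaling, and the five-second guess threshold are illustrative choices of mine, not from any specific paper.

```python
import math

def encode_interaction(concept_id, correct, elapsed_s, lag_s):
    """Encode one student interaction as features for a KT model.

    Common choices: the concept crossed with correctness as a token
    index for an embedding layer, plus log-scaled elapsed time (time
    on the question) and lag time (time since the previous
    interaction), which carry the forgetting-curve signal.
    """
    return {
        # token index: concept id crossed with correctness (DKT-style input)
        "token": concept_id * 2 + int(correct),
        # log-compress heavy-tailed time features; log1p avoids log(0)
        "elapsed": math.log1p(elapsed_s),
        "lag": math.log1p(lag_s),
        # crude flag: a suspiciously fast correct answer may be a guess
        "likely_guess": elapsed_s < 5 and correct,
    }

# A correct answer in 3 seconds, one day after the last interaction
print(encode_interaction(concept_id=7, correct=True, elapsed_s=3, lag_s=86400))
```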

Challenges and Open Questions

Despite impressive progress, transformer-based knowledge tracing is not without challenges. Training these models requires large amounts of student interaction data, which raises serious privacy and ethics questions. Education systems handle some of the most sensitive data imaginable — children's learning struggles, attention patterns, and academic trajectories.

There is also the question of cold start: what does the model predict for a brand-new student with no interaction history? Most systems handle this with generic priors or brief diagnostic assessments, but it remains an active area of research.

Finally, concept definition is an underappreciated problem. What exactly is a "concept"? Is "solving quadratic equations" one concept or five? The granularity at which skills are defined significantly affects how well any knowledge tracing model performs.

How NeuroLearn Uses This

NeuroLearn's AI grading and recall engine are built on transformer-based knowledge state modeling. When our AI grades a student's paper, it doesn't just score the answer — it updates a probabilistic model of that student's concept mastery. The spaced-repetition review sessions scheduled after each test are driven directly by those predicted knowledge states, not by generic intervals.
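As an illustration only (this is a hypothetical sketch, not NeuroLearn's actual code), a scheduler in this spirit might pick the review date at which predicted recall decays to a target threshold, assuming simple exponential forgetting:

```python
import math

def next_review_in_days(p_recall_now, decay_per_day=0.08, target=0.85):
    """Hypothetical review scheduler driven by a predicted knowledge state.

    Assumes recall decays exponentially: p(t) = p0 * exp(-decay * t).
    Solving p0 * exp(-decay * t) = target for t gives the interval;
    students with shakier mastery get their review sooner.
    """
    if p_recall_now <= target:
        return 0.0  # already below target: review immediately
    return math.log(p_recall_now / target) / decay_per_day

print(round(next_review_in_days(0.97), 1))  # strong mastery: review later
print(round(next_review_in_days(0.86), 1))  # shaky mastery: review sooner
```

The point of the sketch is the coupling: the interval is a function of the model's per-concept prediction, not of a fixed calendar.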

What the Research Shows

Empirical results across benchmark datasets are compelling. On the EdNet dataset (Choi et al., 2020), SAINT improved AUC from 0.769 (DKT baseline) to 0.809 — a significant gain in a field where improvements of 0.01 AUC are considered meaningful. On the ASSISTments 2009 dataset, AKT achieved an AUC of 0.784 compared to DKT's 0.742.
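AUC, the metric behind these comparisons, is the probability that a randomly chosen correct response receives a higher predicted score than a randomly chosen incorrect one, with ties counted as half. A minimal reference implementation, quadratic for clarity:

```python
def auc(labels, scores):
    """AUC via pairwise comparison of positive and negative examples.

    labels: 1 for a correct answer, 0 for incorrect.
    scores: the model's predicted P(correct) for each interaction.
    O(n^2) for readability; libraries compute the same quantity
    via ranking in O(n log n).
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 1, 0]
scores = [0.9, 0.4, 0.65, 0.8, 0.7]  # model's predicted P(correct)
print(auc(labels, scores))
```

On this scale, moving from 0.769 to 0.809 means the model correctly ranks a correct/incorrect pair four percentage points more often, which compounds over millions of sequencing decisions.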

More importantly, real-world deployments are showing impact on actual learning outcomes. Platforms that incorporate adaptive sequencing driven by knowledge tracing — as opposed to fixed-order curricula — consistently report higher completion rates, faster mastery timelines, and better performance on external assessments.

A 2022 study from Cal State University found that students using an adaptive learning system with transformer-based knowledge tracing scored an average of 14% higher on final assessments compared to students following a traditional linear curriculum — with no increase in study time.

The Road Ahead

The next frontier for transformer-based knowledge tracing includes multimodal modeling — incorporating not just text-based Q&A, but video watch patterns, writing samples, and even biometric signals like typing rhythm and eye-tracking data. As models become richer, they'll move from predicting "will this student get this question right?" to answering deeper questions: "Is this student frustrated right now?" "Is their confidence misaligned with their actual mastery?" "What's the optimal next learning experience to maximize long-term retention?"

We're entering an era where AI doesn't just automate grading — it acts as a cognitive model of every student, running silently in the background, making every interaction with learning material more precisely targeted, more efficient, and ultimately more human.

That's the promise. And with transformer architectures maturing rapidly, we are closer to keeping it than ever before.

Atharva Deshmukh
AI Research · NeuroLearn

Atharva writes about the intersection of artificial intelligence, cognitive science, and education technology. He is interested in how probabilistic models of human learning can be used to build more humane and effective educational systems.

atharvadeshmukhwork@gmail.com