The paper introduces LoReKT, a framework for improving knowledge tracing (KT) in low-resource scenarios. It pre-trains on data-rich KT datasets and then fine-tunes on low-resource ones, achieving superior AUC and Accuracy compared to baselines across various datasets.
Knowledge tracing is crucial in Intelligent Tutoring Systems to predict student performance based on past interactions. Deep learning models like DKT have shown promise but face challenges with limited data. LoReKT addresses this by transferring knowledge from rich to low-resource datasets through pre-training and fine-tuning.
The importance mechanism in LoReKT prioritizes updating crucial parameters during fine-tuning, preventing overfitting. The model's effectiveness is demonstrated through experiments on public KT datasets, showcasing significant improvements in AUC and Accuracy.
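The idea of updating only the most important parameters during fine-tuning can be sketched as follows. This is a minimal illustration, not the paper's implementation: here importance is approximated by squared gradient magnitude (a common Fisher-information-style proxy), and `keep_ratio`, `importance_mask`, and `masked_update` are hypothetical names chosen for the example.

```python
import numpy as np

def importance_mask(grads, keep_ratio=0.2):
    """Select the most important parameters to update.

    Importance is approximated by squared gradient magnitude, a common
    proxy; LoReKT's exact criterion may differ from this sketch.
    """
    scores = grads ** 2
    k = max(1, int(keep_ratio * scores.size))
    # Threshold at the k-th largest score; everything below is frozen.
    threshold = np.sort(scores.ravel())[::-1][k - 1]
    return scores >= threshold

def masked_update(params, grads, lr=0.1, keep_ratio=0.2):
    """Gradient step that touches only the high-importance parameters,
    leaving the rest frozen to limit overfitting on scarce data."""
    mask = importance_mask(grads, keep_ratio)
    return params - lr * grads * mask

params = np.zeros(10)
grads = np.arange(10, dtype=float)
updated = masked_update(params, grads, lr=1.0, keep_ratio=0.2)
```

Freezing low-importance parameters keeps most of the pre-trained knowledge intact while letting the few parameters that matter most adapt to the small target dataset.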
By incorporating data type embeddings and dataset embeddings, LoReKT enhances the model's ability to integrate information from questions and concepts effectively. This approach leads to improved performance across different KT datasets.
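The input composition described above can be sketched as summing a content embedding with a data-type embedding and a dataset embedding. This is a hedged illustration, assuming additive composition; the table sizes, dimension `D`, and the function name `interaction_embedding` are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension (illustrative)

# Hypothetical lookup tables; sizes are for illustration only.
question_emb = rng.normal(size=(100, D))   # one row per question
concept_emb  = rng.normal(size=(20, D))    # one row per concept
type_emb     = rng.normal(size=(2, D))     # 0 = question, 1 = concept
dataset_emb  = rng.normal(size=(3, D))     # one row per source dataset

def interaction_embedding(item_id, item_type, dataset_id):
    """Compose an input embedding by adding a data-type embedding and a
    dataset embedding to the content embedding, so the model can tell
    questions from concepts and one source dataset from another."""
    content = question_emb[item_id] if item_type == 0 else concept_emb[item_id]
    return content + type_emb[item_type] + dataset_emb[dataset_id]

# Example: question 5, drawn from dataset 2.
x = interaction_embedding(item_id=5, item_type=0, dataset_id=2)
```

Summing auxiliary embeddings into the input is a standard way to let one shared encoder consume heterogeneous inputs from multiple datasets.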
Key insights drawn from the paper by Hengyuan Zha... at arxiv.org, 03-12-2024. https://arxiv.org/pdf/2403.06725.pdf