Abstract
The automatic generation of Multiple-Choice Questions (MCQs) is an emerging task primarily applied in education. The goal is to generate MCQs, comprising the question stem, answer, and distractors, from a given context. In such application scenarios, MCQ generation methods should provide questions of appropriate difficulty for individual students, so as to enhance their reading comprehension abilities and learning experience. A few efforts at Difficulty-Controllable Question Generation (DCQG) have been made recently; however, previous definitions of difficulty in DCQG fail to consider the potential relationship between the difficulty of MCQs and students’ performance. In this paper, we propose Difficulty-Controllable Multiple-Choice Question Generation (DCMCQG), a framework for generating MCQs tailored to students’ abilities. Specifically, we borrow the definition of difficulty from Knowledge Tracing (KT), dynamically modeling difficulty as the error rate of students answering the MCQs. To operationalize this definition, we employ ten different reading comprehension models to simulate students answering MCQs and use the models’ error rates as the difficulty labels for the MCQs. We then fine-tune ChatGLM3-6b [7] to understand the difficulty of MCQs, and propose a difficulty feedback module that guides ChatGLM3-6b toward generating MCQs that meet specified difficulty levels. Automatic evaluation experiments demonstrate that our model maintains the quality of the generated MCQs while successfully aligning them with the specified difficulty levels, and supplementary human evaluation experiments further validate the effectiveness of our method.
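As a minimal illustration of the difficulty definition described above, the label for a single MCQ can be computed as the error rate over a pool of simulated answerers. The function and data below are a hypothetical sketch, not the paper's implementation:

```python
# Hypothetical sketch: the difficulty label of an MCQ is the error rate
# of simulated students (here, reading comprehension models), following
# the Knowledge Tracing-style definition in the abstract.

def difficulty_label(predictions, gold_answer):
    """Fraction of simulated answerers that answer the MCQ incorrectly."""
    errors = sum(1 for p in predictions if p != gold_answer)
    return errors / len(predictions)

# Example: ten reading comprehension models answer one MCQ (options A-D).
model_answers = ["B", "B", "C", "B", "A", "B", "D", "B", "B", "C"]
print(difficulty_label(model_answers, "B"))  # 4 of 10 wrong -> 0.4
```

Under this definition, a difficulty of 0.0 means every simulated student answered correctly, and 1.0 means none did.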