Background information
Students often face challenges in assessing their understanding of theoretical concepts, particularly in technical courses that combine practical and conceptual learning. In the course Introduction to Autonomous Robotics, a significant portion of the grade is based on understanding theoretical content. While automated tools already help students check the correctness of their programming assignments, no comparable solution exists for evaluating conceptual knowledge.
To fill this gap, this project introduces an AI-powered Q&A tool that lets students generate questions on demand, directly from the course’s lecture notes. The tool will help students actively test their understanding and receive suggestions for targeted review. Rather than passively rereading materials, students will engage in self-assessment that highlights knowledge gaps, including areas they may not realize they are struggling with.
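To make the intended workflow concrete, the following is a minimal, illustrative sketch of how question generation from a lecture-note excerpt could work. It is not the Alexandria implementation: the proposal does not fix a model, API, or prompt design, so the sketch assumes an OpenAI-style chat-completion API, a placeholder model name, and illustrative prompt wording.

# Minimal sketch (assumption, not the actual Alexandria implementation):
# generate self-assessment questions from a lecture-note excerpt with an LLM.
# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable;
# the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def generate_questions(lecture_excerpt: str, learning_objective: str, n: int = 3) -> str:
    """Ask the model for open questions tied to a stated learning objective."""
    prompt = (
        f"Learning objective: {learning_objective}\n\n"
        f"Lecture notes:\n{lecture_excerpt}\n\n"
        f"Write {n} short open questions that test conceptual understanding "
        "of these notes. Avoid trivial recall questions; for each question, "
        "add a one-sentence pointer to the part of the notes to review."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    notes = "A differential-drive robot has two independently driven wheels; ..."
    print(generate_questions(notes, "Explain differential-drive kinematics"))

In the actual tool, the excerpt and learning objective would come from the course materials stored in Alexandria, and the output would be filtered and presented to the student together with review suggestions.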
The tool will be developed as part of the existing Alexandria platform and is designed to work with various types of educational content, such as text, figures, and animations. Rather than generating generic or trivial questions, it will focus on meaningful ones that align with the course’s learning objectives. In doing so, it will enhance both learning efficiency and the relevance of student-teacher interactions during class time. The project is positioned as a low-effort, high-impact intervention that supports scalable and personalized learning while keeping teacher workload manageable.
Aim of the project
The goal of this project is to develop and evaluate a proof-of-concept Q&A tool that enables students to self-assess their theoretical knowledge in a structured and personalized way. The tool will dynamically generate relevant questions from course lecture notes, allowing students to check their understanding and identify areas that require further study. This will support continuous learning outside of regular class hours and reduce reliance on predefined assignments, which can be limiting in evolving or newly developed courses.
The tool is intended for integration into the Alexandria platform, ensuring that it benefits from existing infrastructure and aligns with TU/e’s digital education policies. A key focus is the ability to handle diverse educational content and avoid superficial question generation, instead promoting deeper engagement with course material.
The project will be piloted in the Introduction to Autonomous Robotics course and will include both technical and educational evaluations. Feedback and usage data will be collected through surveys, tool-usage logs, and interviews to assess usability, learning impact, and potential improvements. Depending on the outcome, the tool may be scaled to other courses that use Alexandria. In the long term, it could support adaptive learning features and help educators generate exam materials, making it a flexible and sustainable addition to TU/e’s learning ecosystem.