Giving University Exams in the Age of Chatbots
- École Polytechnique de Louvain trials an open-chatbot exam format for its Open Source Strategies course.
- AI-using students were required to pre-announce their intent, share their prompts, and accept full responsibility for errors.
- Adoption was low: only 3 out of 60 students opted to use chatbots during the assessment.
As generative AI becomes ubiquitous, academia is grappling with how to assess student knowledge without banning the very tools that define the modern workforce. At the École Polytechnique de Louvain, a professor known as Ploum recently conducted a fascinating experiment during an "Open Source Strategies" exam, permitting the use of chatbots under strict transparency conditions.
The protocol required students to declare their intent to use AI beforehand, document their specific prompts, and, most importantly, take full accountability for any hallucinations or factual errors generated by the system. This approach shifts the focus from preventing "cheating" to fostering responsible use of a large language model (LLM) in a controlled environment where the output must be verified by the human user.
Surprisingly, despite the availability of these powerful tools, only 3 out of 60 students chose to use them. A subsequent survey revealed that many feared the time cost of prompt engineering, or the risk that the AI would produce flawed reasoning they would then have to correct against the ticking clock of a final exam.
This experiment highlights a growing trend in higher education: moving away from locked-down testing environments toward "open-book, open-AI" formats. By treating the AI as a sophisticated but fallible assistant, educators can assess higher-order thinking and the ability to verify technical output, rather than rote memorization, better preparing students for real-world scenarios.