
HAN Releases Manual Helping Professors Make AI-Proof Exams

by Wittenborg News


https://www.wittenborg.eu/han-releases-manual-helping-professors-make-ai-proof-exams.htm


Ensuring quality by reducing academic fraud

One of the largest universities of applied sciences in the Netherlands, Gelderland-based HAN University of Applied Sciences, has released a guide for lecturers seeking to craft exams that cannot be beaten with tools like ChatGPT. The guide, written by Frank Vonk, chair of the Board of Examiners of HAN's Institute Organisation and Development, and Jorn Bunk, consultant for learning with AI, makes it clear that today's lecturers face a different academic landscape than previous generations did. Just as guides like Vonk and Bunk's are popping up, guides teaching students how to use AI responsibly to enhance their learning are also emerging. Be sure to read Wittenborg's policy on academic dishonesty before following any tips you find online.

Artificial intelligence

Artificial intelligence is just that – artificial. It is not capable of actual thought. It synthesises the information it has been fed in order to mimic the most average response or behaviour. For example, as ScienceGuide explains, if a human is asked to calculate the sum of five plus five, they can work out that the answer is ten. If an AI is asked the same, it might give the same answer. However, this does not mean that the AI can do maths; it has been trained to offer the most common answer found in the pool of data supplied by its developers. This can often lead to wrong answers, because the AI simply reproduces the most common response in its training data, unless its developers have taught it the specific logic underlying a function.

At present, tools like ChatGPT generally cannot be used without a “human in the loop” to correct their various errors. AI-generated text, for example, is often very close to coherent but falls apart on closer inspection, or is so redundant that it becomes unreadable; the time spent correcting all the errors in an AI text could be better spent writing the text oneself.

Furthermore, it is impossible to know whether the information in an AI-generated text is correct, and, depending on the cut-off date of the data pool the AI was trained on, the information may be completely outdated. For example, ChatGPT's pool of knowledge cuts off after September 2021, so if you are writing a paper on a current event, or want to use recent data or information, ChatGPT will not be able to help you. The tool can also make up information that seems plausible but comes with no guarantee of accuracy. Meanwhile, methods of detecting whether a text was produced by AI are becoming more numerous and more accurate with time. As such, students must exercise caution when employing these tools.

Vonk and Bunk's takeaways

One of Vonk and Bunk's tips is to use recent examples, events and information when asking questions. Since ChatGPT only has access to information as recent as September 2021, professors are encouraged to base exam questions on up-to-date information and current events. Teachers should also ask specific questions rather than general ones whose answers are easy to extrapolate. Professors might ask students to reflect on specific sources, or to compare something to a recent situation. Vonk and Bunk also recommend that teachers run their own exams through tools like ChatGPT and compare the output with what students have submitted; texts that lack specific examples or recent information, or that are written far too generally, can then be flagged for further inquiry.
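The comparison step above could be sketched in code. The snippet below is a minimal illustration, not part of the HAN guide: it uses Python's standard-library difflib to flag student answers that closely resemble a chatbot-generated model answer. The function names, the 0.8 threshold and the sample texts are all illustrative assumptions; in practice, a lecturer would paste the actual ChatGPT output in place of the hard-coded ai_answer.

```python
# Illustrative sketch (assumption, not the guide's method): flag student
# submissions whose text closely matches an AI-generated model answer.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a rough 0..1 similarity ratio between two texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_submissions(ai_answer: str, submissions: dict, threshold: float = 0.8) -> list:
    """List the students whose answers match the AI text at or above the threshold."""
    return [student for student, text in submissions.items()
            if similarity(ai_answer, text) >= threshold]

# ai_answer stands in for real chatbot output; the submissions are made up.
ai_answer = "Photosynthesis converts light energy into chemical energy."
submissions = {
    "student_a": "Photosynthesis converts light energy into chemical energy.",
    "student_b": "Plants use sunlight to build sugars from CO2 and water.",
}
print(flag_submissions(ai_answer, submissions))  # → ['student_a']
```

A simple ratio like this only catches near-verbatim overlap; it is a starting point for human review, not proof of misconduct.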

Moreover, as is relevant to educators like Wittenborg, the authors of the guide recommend testing a student's skills rather than only having them write down answers. This lets students demonstrate that they can apply their knowledge practically, while reducing the opportunity to use AI tools. Rather than asking questions that require students to regurgitate factual information – which AI can mimic – it may be better to examine how a student can apply what they have learned in a real-world context. By employing methods like these, professors can ensure that students receive a top-quality education and enter the workforce with the skills they need to excel.

WUP 22/04/2023
by Olivia Nelson
©WUAS Press
