Schools should make students complete some coursework in class “under direct supervision” to ensure they are not cheating, amid fears about artificial intelligence (AI) tools such as ChatGPT, new exam board guidance states.
The Joint Council for Qualifications (JCQ) – which represents exam boards – has published guidance for schools today on “protecting the integrity of qualifications”.
While the majority of qualifications are exam-based and unaffected by AI, there are some assessments such as coursework which allow access to the internet.
It follows reports of schools scrapping homework over fears of cheating, as top universities ban the use of AI in coursework and exams.
Here’s what schools need to know…
1. Misuse of AI is malpractice…
JCQ said chatbots may pose “significant risks” if used by students completing assessments. They can often produce incorrect answers, biased information or fake references, the guidance reads.
Students who misuse AI – where the work is not their own – will have committed malpractice and may attract “severe sanctions”. Any use of AI which means students have not “independently demonstrated their own attainment” is likely to be considered malpractice.
Sanctions for “making a false declaration of authenticity” and “plagiarism” include disqualification and being barred from taking qualifications.
School policies should address “the risks associated with AI misuse”, and staff should communicate the importance of independent work to students.
2. …but AI tools can be used
The exam boards said AI tools must only be used when the conditions of the assessment permit the use of the internet and where students are able to demonstrate the final submission is their “own independent work and independent thinking”.
Students must appropriately reference where they have used AI. For instance, if they use AI to find sources of content, the sources must be verified by students and referenced.
So that teachers can check whether AI use was appropriate, students must “acknowledge its use and show clearly how they have used it”.
Students must keep a copy of the questions they asked and the AI’s answers for reference and authentication purposes. The copy must be non-editable – such as a screenshot – and be submitted with the work, along with a brief explanation of how the AI was used.
3. Consider supervised work and restricting AI in schools
JCQ has set out a list of actions that schools should take to prevent misuse – many of which, it added, are “already in place in centres and are not new requirements”.
Actions include considering whether students should sign a declaration on understanding what AI misuse is.
Schools should consider restricting access to online AI tools on their devices and networks, including those used in exams.
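The guidance does not prescribe any particular blocking mechanism, but as a rough sketch, a school web filter might apply a domain blocklist along these lines. The Python example below is purely illustrative: the domains listed and the is_blocked helper are hypothetical assumptions, not drawn from the JCQ guidance.

```python
# Hypothetical sketch of a domain blocklist check, as a school web
# filter or proxy might apply it. The domains and the helper are
# illustrative assumptions, not part of the JCQ guidance.

BLOCKED_AI_DOMAINS = {
    "chat.openai.com",   # ChatGPT
    "bard.google.com",   # examples only; schools would maintain their own list
    "claude.ai",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname, or any parent domain, is blocklisted."""
    parts = hostname.lower().split(".")
    # Check the full hostname and each parent domain against the list,
    # so "www.chat.openai.com" is caught by the "chat.openai.com" entry.
    return any(".".join(parts[i:]) in BLOCKED_AI_DOMAINS for i in range(len(parts)))

print(is_blocked("chat.openai.com"))  # True
print(is_blocked("www.bbc.co.uk"))    # False
```

In practice, schools would normally configure this in their existing filtering or proxy software rather than write code, but the principle – matching requested hostnames against a maintained list – is the same.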
“Where appropriate”, schools should be “allocating time for sufficient portions of work to be done in class under direct supervision to allow the teacher to authenticate each student’s whole work with confidence”.
This is similar to what Ofqual boss Dr Jo Saxton suggested earlier this month.
Schools should consider whether it’s “appropriate and helpful” to have a “short verbal discussion” with students about their work to confirm “they understand it and that it reflects their own independent work”.
Teachers should also examine “intermediate stages” in the production of work to make sure a student’s final submission “represents a natural continuation of earlier stages”.
4. Look out for typed work and hyperbolic language
JCQ says identifying AI misuse requires the “same skills and observation techniques” teachers already use to check students’ work is their own, such as comparing it against previous work to check for unusual changes.
Potential indicators of AI use include the default use of American spellings, as well as vocabulary which might not be appropriate for the qualification level.
Others include a student handing in typed work when their usual output is handwritten. Staff should also keep an eye out for “overly verbose or hyperbolic language” that may not be in keeping with a student’s usual style.
JCQ points to several services – such as GPTZero and OpenAI’s AI Text Classifier – which can estimate the likelihood that text was produced by AI.
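For readers curious how such services are used programmatically, below is a minimal Python sketch of posting text to a detection API. Everything here is hedged: the endpoint, header and response field are assumptions modelled loosely on GPTZero’s public API and may differ in reality, and the API key is a placeholder.

```python
# Hypothetical sketch of querying an AI-detection service over HTTP.
# The endpoint, header and response field are assumptions modelled
# loosely on GPTZero's public API; consult the provider's documentation.
import requests

API_URL = "https://api.gptzero.me/v2/predict/text"  # assumed endpoint
API_KEY = "your-api-key-here"                       # placeholder

def ai_likelihood(text: str) -> float:
    """Return the service's estimated probability that `text` was AI-generated."""
    response = requests.post(
        API_URL,
        headers={"x-api-key": API_KEY},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    # The field name below is an assumption; real responses may differ.
    return data["documents"][0]["completely_generated_prob"]

print(ai_likelihood("The Industrial Revolution transformed British society."))
```

Whatever the exact interface, such tools return probabilistic estimates rather than proof, which is why JCQ pairs them with the human checks described above.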
5. ‘Detected or suspected’ misuse should be reported
If a teacher’s suspicions are confirmed and the student has not signed the declaration of authentication, a school does not need to report malpractice to the exam board. The matter can be resolved before any declaration is signed.
But if this has been signed and AI misuse is “detected or suspected” by the school, the case must be reported to the relevant exam board.
If misuse is suspected by an exam board marker, or it has been reported, full details will usually be relayed to the school. The board will then consider the case and “if necessary” impose a sanction.
Staff should not accept – without further investigation – work they suspect has been taken from AI tools, as this could encourage the spread of the practice. It could also constitute staff malpractice and attract sanctions.
Notably, OpenAI’s own efforts to foil cheating and detect auto-generated work may be undermined by ChatGPT itself: NBC News has reported that the chatbot can help users fool OpenAI’s anti-cheating tool.
https://www.nbcnews.com/tech/innovation/chatgpt-can-help-fool-openais-anti-cheating-tool-rcna68855