Schools “may wish to review” their homework policies amid fears about the use of artificial intelligence like ChatGPT, the Department for Education has warned.
The department has set out its stance on generative AI in a statement published this morning.
Exam boards yesterday published their own guidance on “protecting the integrity of qualifications” from AI.
The DfE said that when used “appropriately”, generative AI has the potential to reduce workload across the education sector and free up teachers’ time.
But it said schools “may wish to review homework policies, to consider the approach to homework and other forms of unsupervised study as necessary to account for the availability of generative AI”.
It follows reports of schools abandoning homework essays because of AI.
Schools should also “review and strengthen” their cyber security as AI could “increase the sophistication and credibility of attacks”.
Students should be protected from harmful online content, and personal or sensitive data should not be entered into AI tools, the DfE said.
The department warned that the quality and content of any final documents – such as administrative plans – remain the “professional responsibilities of the person who produces it and the organisation they belong to”.
Education sector has ‘lagged in tech adoption’
The DfE will now convene experts to work with the education sector and “share and identify best practice and opportunities to improve education and reduce workload using generative AI”.
The department said that to “harness the potential” of AI, students will need to be “knowledgeable and develop their intellectual capability”.
“The education system should support students, particularly young pupils, to identify and use appropriate resources to support their ongoing education.
“This includes encouraging effective use of age-appropriate resources (which in some instances may include generative AI) and preventing over-reliance on a limited number of tools or resources.”
Speaking at the Bett Show ed tech conference this morning, education secretary Gillian Keegan said the education sector has “often lagged in tech adoption” and that AI is a tool “schools haven’t yet managed to get the most out of”.
She said that tech that doesn’t work is an “expensive and potentially dangerous mistake” and one that “schools cannot afford to make”.
Keegan believes teachers’ work could be “transformed” by AI, but that the technology is not yet at the standard needed.
Sector ‘moving too slow’ on AI
Education experts were quizzed by MPs on using AI in education at the science and technology committee this morning.
Rose Luckin, professor of learner centred design at University College London, warned that the education sector doesn’t have “the in-depth knowledge about AI to be able to do a really good job.
“The technology’s moving at pace, it’s increasingly complex. Even the people developing it don’t always understand the implications of what it does.”
Daisy Christodoulou, director of education at No More Marking, said “speed matters” when responding to AI changes and that too many organisations are “moving very, very slowly”.
“I think we need to have a good, hard look at how we assess. I do think ChatGPT has huge implications for continuous assessment and coursework,” she said.
“I’ve heard a few suggestions about different things you could do…but some of the people making those suggestions don’t realise quite how powerful a tool like ChatGPT is.
“It is capable of producing original, very hard to detect, relatively high-quality responses to any kind of question. We have to be looking at assessments that are in more controlled environments.”