In the 1820s, Charles Babbage designed his Difference Engine, an early mechanical calculating machine. When he showed it off, a Member of Parliament asked him whether it would give the right answer if you put the wrong numbers in. Babbage had to explain that his machine was not capable of that.
Computers have improved a phenomenal amount since then, but in many ways they are still dependent on the quality of human input.
I thought of Babbage’s anecdote recently when using ChatGPT. It’s a new artificial intelligence chatbot that seems as miraculous to our eyes as Babbage’s calculating machine must have seemed in the 19th century. It can produce lengthy, realistic texts in response to human prompts, create teaching resources and mark essays. It has the potential to transform the way teachers work.
But, as with Babbage’s calculating machine, to use it effectively you have to enter useful prompts and check the results you get.
Because of this, several commentators now think that a key skill of the future will be writing good prompts and sense-checking the results. I can agree with this.
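What does “a useful prompt plus a sense-check” actually look like in practice? Here is a minimal sketch in Python, assuming the OpenAI client library and an API key in the environment; the model name and the prompt wording are purely illustrative, not a recommendation:

```python
# A minimal sketch of the "good prompt, then sense-check" workflow.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY set in
# the environment; the model name and prompts here are illustrative only.
from openai import OpenAI

client = OpenAI()

# A vague prompt invites a vague answer; a specific one constrains the output.
prompt = (
    "Write three comprehension questions on 'Ozymandias' for 14-year-olds. "
    "One question should be about vocabulary, one about imagery, and one "
    "about the poem's message. Keep each question under 25 words."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content

# The output is a draft, not an answer: a human still has to read it
# critically for errors and suitability before it goes near a classroom.
print(draft)
```

Note where the human effort sits: in specifying exactly what you want, and in reading the draft critically before using it. Neither step can be delegated to the machine.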
But then, those commentators go on to say that we should throw out the traditional curriculum and just teach kids how to use artificial intelligence chatbots. I cannot agree with this. Why?
Because the way we learn a skill is different to the way we use it. To get good at something, you can’t just blindly copy what an expert does. You have to break the skill down and practise its component parts.
To acquire the skills of writing good prompts, sense-checking and editing, you are going to need to learn to read and write first. There is no short cut.
Here’s a good analogy: if you want to learn to play chess, you still need to learn the basics about how each piece moves. You can’t just say ‘Oh, the AI will do that for me.’ AI bots might be able to play a better game of chess than humans, but if you as a human want to be able to play chess, you are going to have to start with the basics. Likewise, ChatGPT might be able to write better than many students. But if we want our students to understand what it is saying, they are going to have to learn to read and write, starting with the basics.
So, the curriculum doesn’t need to focus on how to use chatbots. We also need to think very carefully about assessment. If we want students to learn to read and write, we need assessments that will monitor their progress towards that goal. If those assessments are being completed by AI, they will lose all value as an indicator of the students’ thinking.
That’s why I think it is perfectly acceptable to ban students from using AI for written assessments, and to make greater use of in-person hand-written exams.
My conclusion is somewhat paradoxical: the best way for students to learn how to use AI in the long term might be to ban it in the short term. That’s not me being a Luddite. At No More Marking, where I work, we think that AI is an enormously powerful technology and we’ve already integrated OpenAI into our Comparative Judgement software.
But it’s precisely because AI is so powerful that it will need regulating. All new technologies have the potential for danger and brilliance within them. To tilt the balance towards brilliance, they need a framework of rules. Cars need traffic lights and zebra crossings. Opioid painkillers need prescriptions and dosage recommendations. Electricity needs regulations about the maximum voltage and minimum height of overhead power cables. ChatGPT is no different, and we should start thinking now about the best way of integrating it into education.