What medicine can teach us about regulating AI in education

The critical issue is accountability. If an AI tool gives inaccurate results, who is responsible?

2 Sep 2023, 5:00

Teachers will be very familiar with the power of analogy. There’s nothing quite like it for making the penny drop, turning once-impenetrable ideas and difficult new concepts suddenly crystal clear.

Analogy works so well because the best way to understand new ideas and concepts is to relate them to something we already know and trust.

In our recent response to the DfE’s AI consultation, I likened AI tools to the actors in the BBC drama Casualty: they look and sound like doctors, but they cannot actually perform surgery and make patients better.

Like the actors, AI tools might use convincing medical language, but they couldn’t – and shouldn’t – ever insert the scalpel.

Taking this analogy to its next logical step, we can think of AI regulation in much the same way as the regulation of medicines.

Any medicine new to market has to be tested and pass rigorous approval procedures. Even when a new cancer ‘wonder drug’ appears, it will still undergo years of trials and due diligence, because a bad medicine is likely to be worse than no medicine at all.

And even when it makes it to the pharmacist’s shelves, it comes with a leaflet listing its contraindications – all its risks and shortcomings – to help you decide whether and how to use it.

The objective of these regulations is clear: to improve patient outcomes, in particular longevity and quality of life.

Similarly, we think there are three key things we need to achieve through regulation if AI is to successfully improve outcomes in the education sector. AI must:

  1. Add clear value to teaching, learning and assessment, avoiding common pitfalls such as bias and harmful or inaccurate information
  2. Offer equality of access – a level playing field – so disadvantaged schools are not left further behind
  3. Not inadvertently put schools at risk, for example through intellectual property issues

The critical issue that regulation must address is accountability

Of course, some regulation is already in place, and the government’s AI white paper has sought to grasp the issue. It recognises that public trust is key, and that trust will be undermined unless the risks and wider concerns about the potential for bias and discrimination are addressed. It sets out five principles to guide and inform the responsible development and use of AI across all sectors of the economy.

One of these principles is fairness, which is clearly critical in the education sector. But to really build that all-important trust, we need more specifics.

Returning to medicine, new products are first approved for safety by the Medicines and Healthcare products Regulatory Agency (MHRA). Then, if a product is to be approved for use in the NHS, it must also be approved by the National Institute for Health and Care Excellence (NICE), which assesses its cost-effectiveness.

We could envisage a general AI regulator performing the equivalent of the MHRA’s safety check, with the DfE then being responsible for a check on the educational effectiveness of the products, somewhat analogous to the role of NICE for the NHS.

This could even include kite-marking products that research shows to have educational value, working with the Education Endowment Foundation, awarding organisations such as AQA, and groups of schools such as those who recently wrote to The Times, led by Sir Anthony Seldon.

This could also usefully include hands-on, meaningful guidance and training for schools. In particular, we believe schools in more disadvantaged communities must have the same opportunity to benefit from AI as every other school.

Ultimately, the critical issue that regulation must address is accountability. Who is responsible for the behaviour of AI? If a teacher uses an automated marking system and it gives inaccurate outcomes, who is responsible: the teacher, the school or the software developer?

By treating AI like we do medicine and adopting a similar regulation framework, we can build much-needed trust, confidence and fairness, and protect students, schools and teachers by providing clear guidelines and accountability.
