Stop blaming teachers for questionable assessments and start fixing the broken system that means they can’t be right, writes Dennis Sherwood
Teachers’ predicted grades are much criticised for being “optimistic”, and teachers are pilloried as a result. But exam grades are themselves unreliable, and so teachers are denied the feedback they need to learn.
Imagine a primary school pupil is looking at some marked arithmetic exercises, and sees
3 + 4 = 7 ✓
2 + 3 = 5 ✗
5 + 1 = 6 ✗
2 + 7 = 9 ✓
You and I know that the marking is incorrect. But the pupil doesn’t. The pupil just thinks, “I’m no good at this, I got half the answers wrong”. The pupil’s parents notice, but they don’t want to make a fuss. And the next week, the same thing happens, and the week after that too.
How does the pupil learn about the number line, about arithmetic? In fact, the pupil can’t. No wonder the pupil loses all self-confidence. No wonder the pupil will fear maths ever after.
Of course, that could never happen.
But now imagine a young teacher has four A level History students, and predicts – to the very best of her ability and without any bias – grade B for each. When the results are announced, they are A, B, B, C. She rejoices for the A, and is sad for the C, but is also likely to think, “I’m no good at this, I got half the predictions wrong”. She is also very puzzled, for she is trying to understand why the students were awarded the A and the C, and what she had failed to recognise in her predictions. But she just can’t work it out.
The same thing happens the following year, and the next too. She just never gets those predictions more than half right. So it is quite natural for the teacher to blame herself, and to lose any self-confidence she may originally have had.
For she may not know that, according to Ofqual’s own research, the probability that an awarded A level History grade is correct is, on average, about 56 per cent. It is therefore perfectly possible – indeed to be expected – that more than two in every five awarded grades for History are wrong. Perhaps all the teacher’s predictions were right.
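A quick calculation shows just how badly a 56 per cent reliability figure distorts the picture. The sketch below is purely illustrative – it assumes, for simplicity, that each grade is right or wrong independently, which is not part of Ofqual’s analysis – but it makes the teacher’s predicament vivid: even if all four of her predictions were perfect, the chance that the four awarded grades all match is only about one in ten.

```python
# Illustrative sketch (my own, not Ofqual's): if each awarded grade is
# correct with probability 0.56, independently, how would four perfect
# predictions appear to the teacher?

p_correct = 0.56    # Ofqual's average reliability figure for A level History
n_students = 4      # the teacher's four History students

# Probability that all four awarded grades match the true grades
p_all_right = p_correct ** n_students

# Probability that at least one awarded grade is wrong
p_some_wrong = 1 - p_all_right

# Expected number of wrong awarded grades among the four
expected_wrong = n_students * (1 - p_correct)

print(f"P(all four grades correct)        = {p_all_right:.2f}")    # ~0.10
print(f"P(at least one grade wrong)       = {p_some_wrong:.2f}")   # ~0.90
print(f"Expected wrong grades (of four)   = {expected_wrong:.2f}") # ~1.76
```

So on these numbers the teacher should *expect* roughly two of her four students’ certificates to show the wrong grade – exactly the pattern she blames on herself.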
But how could she know? How can the teacher learn to assess pupil performance when the information on which she relies is so unreliable? When that feedback loop, so important to all learning, is broken?
No wonder teachers’ predictions are “unreliable”.
And how tragic that teachers take the blame, when the true fault is Ofqual’s in “awarding” exam grades that are themselves unreliable. For as Ofqual’s soon-to-step-down Interim Chief Regulator, Dame Glenys Stacey, acknowledged at the Education Select Committee hearing on 2 September, exam grades “are reliable to one grade either way”. That statement was unqualified, and so presumably applies to all grades in all subjects at all levels. So a certificate showing ABB really means “any grades in the range from A*AA to BCC”.
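The “one grade either way” claim can be made concrete. In the sketch below, the grade scale is the standard A level scale; the function name is my own, purely for illustration:

```python
# Illustrative sketch of "reliable to one grade either way": each
# certificated grade could plausibly be one band higher or one band lower.
# The grade scale is the standard A level scale; the function is mine.

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def one_grade_either_way(grade):
    """Return the (better, worse) grades adjacent to the awarded grade."""
    i = GRADES.index(grade)
    better = GRADES[max(i - 1, 0)]              # can't do better than A*
    worse = GRADES[min(i + 1, len(GRADES) - 1)]  # can't do worse than U
    return better, worse

# A certificate showing ABB could mean anything from A*AA to BCC:
certificate = ["A", "B", "B"]
best_case = [one_grade_either_way(g)[0] for g in certificate]
worst_case = [one_grade_either_way(g)[1] for g in certificate]
print("".join(best_case), "to", "".join(worst_case))  # A*AA to BCC
```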
What use is that?
To make this real, today about 22,000 AS and A level results are announced for the autumn ‘re-sits’. And about 3,000 of those are wrong. But neither the students affected nor their teachers will ever know which ones.
Yet ministers continue to claim that “exams are the fairest way of judging student performance”.
Really?
An update on the numbers at the end… The figure of “about 22,000 AS and A level results” was based on the number of entries as reported by Ofqual here: https://www.gov.uk/government/statistics/entries-for-as-and-a-level-autumn-2020-exam-series.
According to the data published by JCQ today, https://www.jcq.org.uk/wp-content/uploads/2020/12/A-Level-and-AS-Results-Autumn-2020.pdf, the actual number of awards made has been rather lower, nearly 17,500. My estimate of the number of wrong grades actually awarded is therefore lower too, about 2,200 (details available on request).
The main point, however, remains unchanged: grades as awarded are far too unreliable, damaging not only students, but teachers too.