The Knowledge. Can oracy be reliably assessed?

A new report explores the use of new models of assessment to make oracy a reliably measurable part of children’s learning, explains Amanda Moorghen

29 Jan 2024, 5:00

Oracy matters. Strong spoken language skills equip students for success in school and in life. However, there is currently no standardised assessment for oracy. As a result, it can be difficult to see the big picture: there is no national dataset we can draw on to easily and reliably identify schools or areas with the strongest oracy attainment or progress, or to provide gold-standard evidence for the most effective teaching practices.

Voice 21 has been researching new ways to assess oracy in order to meet this need.

Why isn’t there an assessment?

Designing a high-quality oracy assessment is challenging. As with marking essays against rubrics (and in contrast to, say, multiple-choice questions), it can be difficult to ensure that different assessors give the same piece of work the same grade. Concerns about reliable marking led to the removal of the speaking and listening component from GCSEs, and the same concerns persisted in our previous efforts, in conjunction with Oracy Cambridge, to design a traditional oracy assessment.

In addition, compared with the well-rehearsed routines of a written exam, there are logistical challenges in recording students taking part in oracy activities. This makes it difficult to design an assessment that fairly represents the breadth of genres and contexts for talk that are relevant in our education system. However, with the development of the Oracy Framework, we’re much better able to define the different competencies which, together, constitute oracy, laying a solid foundation for the development of an assessment tool.

Developing a reliable assessment

The rise of new software, and teachers’ growing familiarity with technology in the classroom, have made it possible to trial a new approach to oracy assessment: comparative judgement. Traditional ‘absolute’ judgement involves assessors comparing student performances against rubrics or mark sheets. The difficulty of ensuring that all assessors interpret these rubrics consistently is a substantial source of poor reliability.

In comparative judgement approaches, assessors instead compare two pieces of work with each other, which can generally be done more reliably than comparing a piece of work to a rubric. Many judgements of pairs, made by a group of assessors, are then combined algorithmically to create a rank order, from the piece of work most likely to ‘win’ a comparison to the least likely. More familiar grades or scores can then be mapped onto this rank order to communicate the results meaningfully.
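To make the algorithmic step concrete, here is a minimal sketch in Python of one standard way of turning pairwise judgements into a rank order: the Bradley-Terry model, fitted with a simple iterative update. This is an illustrative example only, with invented data; it is not the algorithm used by RM Compare.

```python
from collections import defaultdict

def bradley_terry_rank(judgements, n_iters=100):
    """Rank items from pairwise judgements using the Bradley-Terry
    model: each item gets a 'quality' score such that the chance
    item i beats item j is score_i / (score_i + score_j)."""
    items = {x for pair in judgements for x in pair}
    wins = defaultdict(int)          # comparisons won by each item
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    scores = {item: 1.0 for item in items}
    for _ in range(n_iters):
        new_scores = {}
        for i in items:
            # Standard iterative update: wins divided by the expected
            # number of comparisons 'at stake' against each opponent.
            denom = sum(
                pair_counts[frozenset((i, j))] / (scores[i] + scores[j])
                for j in items
                if j != i and pair_counts[frozenset((i, j))]
            )
            new_scores[i] = wins[i] / denom if denom else scores[i]
        # Normalise so scores stay on a comparable scale across iterations.
        total = sum(new_scores.values())
        scores = {i: s * len(items) / total for i, s in new_scores.items()}
    return sorted(items, key=scores.get, reverse=True)

# Invented example: assessors compared four student videos in pairs.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"),
              ("C", "D"), ("B", "D"), ("A", "C"), ("B", "C")]
print(bradley_terry_rank(judgements))  # -> ['A', 'B', 'C', 'D']
```

Adaptive platforms such as RM Compare go further by also choosing which pairs assessors see next, so that a reliable rank order typically emerges from fewer judgements.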

Our project, ‘Comparing Talk’, has explored the use of RM Compare (an adaptive comparative judgement platform) for the assessment of oracy. We’ve conducted successful proof-of-concept trials and worked with teachers across England and Wales to develop a suite of appropriate tasks.

Our first large-scale pilot

In autumn 2023, we worked with 55 schools across England and Wales to assess 463 videos of year 5 students performing an oracy task (giving a summary). This was our first large-scale pilot of an oracy assessment driven by comparative judgement, and it is key to determining whether this methodology offers a practical alternative to traditional approaches.

Our initial findings are promising. We were able to generate a reliable rank order of oracy performances, with an appropriate level of agreement between assessors. Further, students’ average scores increased with the number of years their school had been a Voice 21 Oracy School: average scores were 50 per cent higher in fourth-year Voice 21 Oracy Schools than in first-year schools. This suggests that our assessment tool is able to track the student progress that teachers in our schools report seeing.

So, can oracy be assessed?

Yes! Our findings suggest a comparative judgement approach may be just what we need to inform the decisions school leaders and policy-makers must make if every child is to receive a high-quality oracy education.

We are continuing to develop our task suite, extend the age ranges served by our assessment, and collaborate with our tech partner RM to ensure our assessment tool is both practical and robust for schools and teachers.

Read more about the pilot’s findings in Voice 21’s Impact Report.

