
Was The Government Right To U-Turn On Exam Results?

Updated: Nov 28, 2022

After what many are describing as a fiasco, we take a look at the pros and cons of the government's exam results U-turn, and put the question to you: did they get it right?

Why they should have stayed the course:

The government was right to have the algorithm-based approach serve as the benchmark for a number of reasons.

One of the most important is that awarding these teacher-predicted grades is not fair on other cohorts of students who have to sit the real exams. Withstanding the pressure of an exam, and having the knowledge necessary to do well in one, is hard - and it is something this year's group of finalists have avoided entirely. Whether our exam-focused education system is the best way of educating and grading students can be debated, but the fact remains that some students who are happy with their predicted grades will be able to move into the next stage of their lives without having undergone a true test of their A-level subject knowledge.

Factored into this should be the argument that teachers may give students 'implausibly high' predicted grades, mainly to stop their schools being perceived as falling behind their local competitors. Introducing moderation into the grading (as the algorithm attempted to do) should have helped to counter this and to provide a fairer, more realistic grade for individual pupils. Indeed, much of the backlash is driven precisely by the fact that this disparity between higher predicted grades and lower awarded grades exists.

Furthermore, a U-turn on this matter is not only politically damaging, but could also damage confidence in the entire governmental approach to data and science. A choice had to be made: either to award grades arrived at by a dispassionate, scientific algorithm, or grades arrived at by humans who might be driven by their own emotions or concerns. Choosing to award the grades predicted by teachers is not the issue per se; the issue is that the calculations behind those grades can never be arrived at through the kind of impersonal, objective forecasting an algorithm offers, and that a government willing to sacrifice its own efforts in producing and applying the algorithm as soon as they become politically controversial or embarrassing does not look like a government that believes in science and technology.

If it cannot be trusted to stick by controversial decisions made over exams, can the government be trusted on hot-button topics like genetic modification, commitments to tackling climate change, or even investment in 5G? Either the government trusts its own staff and experts, or it is willing to abandon anything slightly controversial to appease an emotive public sentiment that might not be as informed or accurate in its opinions as the government's experts.

Why the U-turn was a prudent manoeuvre:

It's obvious that the government should have seen the utter disaster this was going to be - warnings were given back in July about the reliability of a system that takes into account the economic background and composition of an entire school to predict an individual student's results. The algorithm was alleged to show "bias" in awarding lower prospective grades to students from ethnic minority and lower socio-economic backgrounds, as well as higher grades to students with lower abilities, thanks to the averaging effect of considering such a wide data set.

Put simply, the worse a particular type of pupil at a particular school would do on average, the lower the grade awarded to the actual student in question - even if they were individually predicted to achieve well. Relying on broad-brush approaches that arguably stereotype a pupil based on their background and school distorts the performance of a supposedly objective algorithm and discredits the results it produces: the average performance of a whole school or pupil type should not be used to calculate the performance of an individual. The algorithm also had the effect of inflating the estimated performance of private schools, for a variety of reasons.

As well as taking into account the higher average performance of privately-educated pupils, the algorithm gave more weight to teacher-predicted grades wherever a smaller cohort of pupils (thought to be around 15 students or fewer) was inputted, thanks to the paucity of other data points. This led to a larger increase in A-A* grades under the system at private schools (4.9%) than at state schools (2%) - even though last year 43.9% of privately-educated pupils gained such grades, whereas only 21.8% of state-school pupils performed that well. Regardless of opinions about the existence of private schools, the fact that the increase was so much larger for private pupils demonstrates that the results created by the algorithm not only unfairly favour some types of student over others, but that the background of a particular student can be crucial to the grades they get, and to how seriously their teacher's predictions are taken. This is not a fair way of governing applications to universities in particular, which take students from both state and private institutions, as it tends to disadvantage one group over the other.
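To make the averaging and cohort-size effects concrete, here is a minimal sketch in Python of how a blend between a school's historical results and a teacher's prediction might behave. It is purely illustrative and is not Ofqual's published model: the linear weighting, the 15-pupil threshold and the numeric grade scale are all assumptions made up for the example.

# Illustrative sketch only, NOT Ofqual's actual model. It blends a school's
# historical average grade with the teacher-predicted grade, giving the
# teacher's prediction more weight the smaller the cohort.
# Grade points assumed: U = 0 ... A* = 6.

def blended_grade(teacher_grade: float,
                  school_historical_avg: float,
                  cohort_size: int,
                  threshold: int = 15) -> float:
    """Blend a teacher-predicted grade with the school's historical average.

    Below `threshold` pupils the teacher's prediction dominates; at or
    above it, the school's historical average takes over entirely.
    """
    # Weight on the teacher's prediction: 1.0 for a cohort of one,
    # falling linearly to 0.0 as the cohort size reaches the threshold.
    weight_teacher = max(0.0, 1.0 - (cohort_size - 1) / (threshold - 1))
    return (weight_teacher * teacher_grade
            + (1.0 - weight_teacher) * school_historical_avg)

# A pupil predicted an A (5 points) at a school whose history averages a C (3):
print(blended_grade(5, 3, cohort_size=8))    # 4.0: small class keeps much of the A
print(blended_grade(5, 3, cohort_size=30))   # 3.0: large class is pinned to the C

Under any blend of this shape, the same pupil with the same prediction fares better in a small private-school class than in a large state-school one, which is exactly the disparity the figures above describe.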

However, what should be most worrying about the algorithm is its lack of accuracy when fed data points from last year's A-level students - who actually managed to sit their exams. Official figures from Ofqual, which ran exactly that experiment, found the algorithm to be at best around 60% accurate in predicting the results students actually achieved. That alone should have been proof enough that the government needed to act to prevent its use. The Royal Statistical Society allegedly offered its help to Ofqual to arrive at a fairer algorithm, but refused to sign a restrictive non-disclosure agreement designed to keep the methodology under wraps. That a secret and inaccurate algorithm was seriously considered and deployed by the government represents a huge, foreseeable failure that could easily have been avoided - particularly given that the Scottish devolved authorities had faced very similar problems a week earlier, and no lessons were learned from that episode.

Another important factor to consider is that the increase in grades 'achieved' could be beneficial for social mobility in the UK. Given that many universities were expecting a drop in the number of international students from September, these places could instead be awarded to 'home' students, which could make a huge difference to many young people's lives as they gain access to institutions with better reputations (and possibly better-quality courses and staff). Worrying that students might go to a university they "don’t deserve" is also beside the point: even students who make the grade can find themselves unable to cope with the workload of their course, and students can already 'upgrade' to 'better' universities through post-exam clearing.

Students who ultimately cannot work at the level of a particular course can drop out, find another course they are more passionate about, or transfer university - none of which makes the places of students who got in via their predicted grades and went on to thrive and graduate with honours somehow undeserved. The argument applies even less to employers, who have more freedom to let go employees who were hired on predicted grades but don't pull their weight, and to apprenticeship providers, who, if they lack confidence in the grades, can give more weight to interview performance, tutor recommendations or other factors before accepting young people onto training programmes.


