The Higher Education sector needs to explain what’s going on with this trend we’re calling ‘grade inflation’. HESA data show that the pattern of achievement in honours degrees is now very different to how it was a generation ago. This plays into a narrative about university education that’s really unhelpful.
We need arguments and explanations of what’s going on. Here’s Liz Morrish, writing in this post last summer, offering a cogent perspective on the change. Graham Virgo offered some of these arguments to the Telegraph (note that his explanations are downgraded to ‘claims’).
Inevitably the Telegraph turns to Professor Alan Smithers, the Eeyore of academic standards, for a view. He’s not impressed. He says:
“It sounds to me like a narrative designed to bat away criticism of what is an obvious problem,”
“It is possible to come up with an accepted grade distribution. Within the sector you could say that in any university a set proportion would get a first or 2:1.”
This is absolutely not the answer to ‘grade inflation’.
Higher education uses criterion-referenced assessment. We grade students’ work against criteria – setting out our expectations and seeing whether they meet them. Our criteria have changed as assessment has changed and as courses have changed – we mustn’t pretend they haven’t – but that doesn’t mean they are ‘easier’. Crucially, as Morrish points out, we are all better, staff and students, at understanding those criteria, and more students are getting marks in those higher grades. This is where the explanation must focus.
What Alan Smithers is proposing is norm-referenced assessment. This fixes a grade distribution in advance and then plots students against it. Our national examination system is predicated on it: in addition to knowing whether a student has met a set of learning outcomes, we learn where they sit in the distribution of achievement across all candidates.
I’d be happier if we stuck to criterion-referenced assessment for national exams, but the logic does work at this level. It is useful to know which students are performing best. So Ofqual has a formula: because it wants the new GCSE grade 9 to be scarce, it is capped:
Percentage of those achieving at least a grade 7 who will be awarded a grade 9 = 7% + 0.5 × (percentage of students awarded grade 7 and above)
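The formula above can be sketched as a few lines of code. The 7% and 0.5 coefficients come straight from the quoted formula; the 20% cohort figure below is made up purely for illustration.

```python
# Ofqual's cap on grade 9, as quoted above:
# share of the grade-7-and-above group awarded a 9
# = 7% + 0.5 x (percentage of students awarded grade 7 and above)

def grade_9_share_of_top_group(pct_grade_7_and_above):
    """Percentage of grade-7-and-above candidates who get a grade 9."""
    return 7 + 0.5 * pct_grade_7_and_above

# Illustrative (invented) cohort: suppose 20% of entrants reach grade 7+.
share = grade_9_share_of_top_group(20)   # 7 + 0.5 * 20 = 17.0
# So 17% of that top group get a 9, i.e. 3.4% of all entrants.
pct_of_all_entrants = share / 100 * 20
```

The cap is norm-referencing in miniature: the proportion of 9s is fixed by the size of the top group, not by how many candidates meet some absolute standard.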
So, if every entrant in Maths is taking broadly the same exam, you can, as Smithers says, set the proportion getting a 9. Obviously, as Smithers knows, not every university student is taking the same exam. Their assessment is run by autonomous universities offering different courses with different assessments. That’s the strength of our system. A national curriculum in HE would be a disaster, and without one you cannot possibly fix a national assessment. So autonomous universities must continue to do their own assessment.
So, how could you possibly norm-reference thousands of different courses in over a hundred universities? It would turn into a quota system, and because ours is a rather hierarchical sector, it would be monstrous. What if firsts were only awarded to the ‘best’ 10% of students in a university? Is that on each course, or across the university? Tough on you if the cohort of 20 in your year had two outstanding students – they’re getting those firsts however good you are. But if it’s across the university, then tough on the historians, because the physicists are going to get ‘better’ marks and take more of the 10%.
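The cohort-of-20 problem is easy to make concrete. Here is a toy sketch – nobody’s real policy, with invented marks – of what a fixed 10% quota does to a small cohort where more than 10% of students hit first-class marks:

```python
# Toy norm-referencing: award a 'first' only to the top `quota` share
# of a cohort, by rank, regardless of the marks actually achieved.

def award_firsts_by_quota(marks, quota=0.10):
    """Return a list of booleans: True where a student gets a first."""
    n_firsts = max(1, round(len(marks) * quota))
    ranked = sorted(marks, reverse=True)
    cutoff = ranked[n_firsts - 1]          # mark of the last quota place
    return [m >= cutoff for m in marks]

# Invented cohort of 20: four students at first-class level (70+),
# but a 10% quota hands out only two firsts.
cohort = [75, 72, 71, 70] + [65] * 16
firsts = award_firsts_by_quota(cohort)
```

Here the students on 71 and 70 meet the criterion but miss the quota – exactly the unfairness described above, and it only gets worse once you ask whether the quota applies per course or across a whole university.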
Roll that argument out to universities. Should Oxford and Buckingham each only award 10% firsts? Or will the norm-referencing agency allocate differential numbers of firsts to different universities on a pre-calculated basis?
This is a series of straw man arguments, of course, but they are offered because people like Alan Smithers give a glib response to a complex question – one they must know is utterly unworkable and would be monstrously unfair. Let’s talk about how students meet our criteria, let’s accept that the classified honours degree is problematic, but let’s not pretend you could impose a fixed grade distribution.