Lessons unlearned on raising teaching quality

The Higher Education Policy Institute or HEPI has warned the government that it may be about to “flunk” its attempts to initiate the biggest shake-up of UK universities in decades.

The proposals, outlined in a Green Paper published in November, mark a further shift towards a market approach, installing a new regime designed to reward good teaching, which will allow high-performing universities to increase their tuition fees by the rate of inflation.

Under the new Teaching Excellence Framework, or TEF, universities and their departments will be graded using published data on student satisfaction, student retention and graduate employment, among other sources.

The proposals include the abolition of the Higher Education Funding Council for England, or HEFCE, and the Office for Fair Access, and the creation of a single regulator of universities called the Office for Students, or OfS, with the aim of “putting students at the heart of higher education”.

The Green Paper says that the new body will have a statutory duty “to promote the interests of students to ensure that the OfS considers issues primarily from the point of view of students, not providers”.

But HEPI Director Nick Hillman says the proposals suffer from a “lack of memory” – a failure to recall methods already tried over the years.

“The proposals are not sufficiently informed by past attempts to improve university teaching nor by past attempts to use metrics more heavily in assessing research and they ignore the proven benefits of routing public funding for English institutions via an arms-length body,” he says.

A detailed response

HEPI – which describes itself as the UK’s only independent think-tank devoted to higher education – has published a detailed response to the Green Paper, drawing on a range of experts with deep roots in education, and some of their conclusions are highly critical.

Hillman himself warns of the complexity of trying to evaluate teaching quality in a way that is “light touch, fair and consensual, and which does not inadvertently discourage innovation”.

The point is underlined by a poll of 1,005 full-time undergraduate students, published in HEPI’s report, on possible metrics for measuring teaching quality: it shows they have a very different view of which ones are important.

Their favourite metrics, such as “the proportion of students who achieve good degrees” and “graduate employment statistics”, often don’t match others’ preferences. And their placing of “external review by the Quality Assurance Agency or QAA” at the bottom of a list of 18 possible metrics contradicts the view of the government, which intends to make it the key metric for the first year of the TEF.

In a chapter in the HEPI report, Graham Gibbs, former professor at the University of Winchester and director of the Oxford Learning Institute, is sceptical about the value of student opinion on metrics, arguing that their views about what they want are sometimes flatly contradicted by research evidence about what is good for them.

But he strongly questions the underlying assumption of the Green Paper that there is a pressing need to drive up teaching standards, adding that whether an increased use of different metrics linked directly to fee levels would produce more rapid improvements is “open to debate” and that “such a policy will have to do rather well to improve on the record of the last decade”.

Gibbs argues that the use of comparative metrics, such as National Student Survey scores, has already acted as a surprisingly strong lever to improve quality – or at least the metrics about quality – even without consequences for institutional income.

“The scale and sophistication of current teaching improvement efforts in higher education in England is, despite its patchy implementation, among the highest in the world, and with measurable positive consequences. The government should recognise the risks of establishing a competing and different mechanism that could divert current efforts.”

Parallels drawn between the Research Excellence Framework, or REF, and the TEF are flawed, he says, because the REF was established not to reward institutions but to enable selective funding, including the withdrawal of funding from most, which is perceived as a punitive measure.

“The consequences of poor TEF results might also be experienced primarily as punitive: institutional reputations would be harmed, student recruitment and total funding could suffer and teaching quality might then go down,” Gibbs said.

Further, it is not clear whether any additional fee income achieved through the TEF would be spent on teaching.

Gibbs questions the usefulness of a range of metrics – particularly outcome measures and employability measures – on the grounds that they are substantially influenced by the quality of students and do not tell us much about the quality of education they experience. Reliance on student satisfaction surveys is also challenged, because there is “no research evidence that satisfaction predicts learning gains”.

He said: “Reviews of teaching quality measures generally conclude that the only safe thing to do is to use process measures – indicators of what you do with whoever your students are, and measures of how they experience and respond to what you do – rather than input or outcome measures”.

Gibbs said that, in terms of teaching quality, it is easier to distinguish institutions at subject level than by using institutional averages across subjects, and that students need subject-level information. He warned that if grade point averages are used as an outcome measure in the TEF, standards will decline, particularly if there is inadequate control of standards between institutions.

Impact of funding on research

Bahram Bekhradnia, president of HEPI and former director of policy at HEFCE, notes that the research section of the Green Paper, while unambiguously recognising the benefit of public investment in research, contains very little in the way of policy proposals and includes no reference to the impact of the undergraduate funding regime on research students or academic careers.

“The reluctance of English graduates to undertake PhD programmes is a serious matter and one would have expected it to be covered in a green paper that covers research policy,” Bekhradnia said.

Light touch regulation

Roger King, visiting professor at the school of management at the University of Bath, who led the Higher Education Commission’s work on a new regulatory landscape, questions whether the government is really offering students adequate protection as consumers of education by establishing the new OfS as a student champion, given that the QAA’s role in protecting students is passed over in silence.

He argues that the form of regulation proposed offers only weak consumer protection because it essentially “asks the regulated whether they would like to continue to be regulated, to which the unsurprising response appears to be ‘no’. On offer instead is a form of light-touch regulation which is bound to appear attractive to many institutions”.

But he also questions the value of the stated desire to put student choice at the heart of higher education, since many are “poorly advised about what type of learning will benefit them in the long run”.

“Consequently, as higher education is of vital interest for all citizens in a country – for example, in producing people with the right skills for our global economic needs – we have to regulate higher education with more than just the student-consumer transaction in mind,” he says.

It would therefore be better, he says, to change the title of the lead regulator to something – such as Higher Education Council – that reflects the main purposes of higher education institutions and a concern for the interests of all stakeholders.