Recent higher education policy has highlighted the government's desire to strengthen universities' incentives to provide high-quality teaching that prepares students for the labour market.

Examples of this include the introduction of the Teaching Excellence and Student Outcomes Framework (TEF), which grades universities based on various measures associated with teaching quality, and the increased publication of statistics on the average earnings of graduates from different institutions.

However, simple comparisons of the average earnings of graduates can be very misleading. Universities are intentionally academically selective, meaning the characteristics of their students differ. These differences can have a significant impact on the earnings of graduates. An institution like Cambridge may see very high graduate earnings simply because it admits high-ability students, who would have had high earnings regardless, and the result may have nothing to do with the actual impact of the education provided.

Comparable outcomes

The IFS report is a significant contribution towards filling this gap in the evidence. By showing earnings outcomes for graduates of different universities while comparing similar students, these figures strip out the effect of student composition and highlight how the value that degrees directly add to graduates' earnings varies by institution and subject.
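To see why conditioning on student characteristics matters, consider a minimal simulated sketch of the general approach (this illustrates conditioning on observables, not the IFS's actual methodology; the data, coefficients and variable names are invented for the example):

```python
# Illustrative simulation: why raw earnings gaps overstate value added.
# Not the IFS methodology; all data and coefficients here are made up.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Prior attainment drives both admission to a selective institution
# and later earnings.
ability = rng.normal(size=n)
selective = (ability + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

# Assumed true value added of the selective institution: +0.05 log points.
log_earnings = (10.0 + 0.05 * selective + 0.20 * ability
                + rng.normal(scale=0.3, size=n))

# Raw comparison: bundles in the ability of the students admitted.
raw_gap = (log_earnings[selective == 1].mean()
           - log_earnings[selective == 0].mean())

# Conditional comparison: control for prior attainment in a regression.
X = sm.add_constant(np.column_stack([selective, ability]))
fit = sm.OLS(log_earnings, X).fit()

print(f"raw gap:     {raw_gap:.3f}")        # well above 0.05
print(f"conditional: {fit.params[1]:.3f}")  # close to the true 0.05
```

The raw gap reflects both the institution's impact and the ability of the students it admits; only the conditional estimate isolates something like value added.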

The findings are stark. Different institution and subject combinations have vastly different impacts on the earnings of their graduates and, despite common perceptions to the contrary, can matter more for earnings than student characteristics on entry to university. Medicine and economics degrees increase graduates' early-career earnings by 25% more than English and history degrees. Russell Group universities increase earnings by around 10% more than the average degree. The very top universities (LSE, Oxford and Imperial) increase earnings by more than 50% relative to the average degree. Even within the same subject there is a wide range across institutions: the highest-return business courses deliver returns 50% higher than the average degree, while the lowest-return business courses deliver below-average returns.

This newly available data is a significant step forward. Policymakers can evaluate which courses are good at adding value for students. Meanwhile, prospective students have more information when making their choices.

But what does this mean for university accountability?

Universities intrinsically care about the outcomes of their graduates, so this may highlight areas where they are succeeding or doing less well. Or, if students use this kind of information when making their degree choices, as the government hopes, it might change universities' behaviour as they try to attract new students.

Alternatively, the government could choose to use such measures of quality as a direct policy lever. It could link the fees universities are allowed to charge students to graduate earnings, much like the initial proposal to link fee levels to TEF performance. This would likely sharpen universities' focus on the employment outcomes of their students.

In some ways, this would be very positive. Universities which do poorly can learn from those which do well. One theory for the good performance of the University of Bath, for example, is the prevalence of sandwich courses that give students work experience, which is crucial for helping them onto the job ladder. Increasing activities that are successful at improving graduate outcomes could benefit everybody.

But there are obvious drawbacks too. Focusing on employment outcomes might lead universities to neglect courses or modules which offer lower labour market returns but provide value to society or to graduates in other ways. The government must keep in mind the consequences of focusing on a narrow range of outcomes and consider methods of evaluating universities which take into account their wider value to society. Furthermore, by their very nature, these measures of value added, based on employment outcomes five years after graduation, are only available with a considerable time lag. This could weaken the mechanisms that incentivise universities to improve.

Improving university accountability is not straightforward, and there are many potential unintended consequences to reform. But we shouldn't get carried away. The government has simply made this information available, without saying how or whether it intends to use it for policy. And better information for students and policymakers is surely a good thing.

This article was originally published on WonkHE.