
Why the Complete University Guide Doesn’t Measure 'Value Added'

The Complete University Guide uses ten indicators of university performance to construct its ranking of UK universities. They are weighted according to significance and then aggregated to produce the University League Table.

Experience, and consultation with the university community through our Advisory Group, show that the process we use leads to a reliable and consistent snapshot of the UK university system year on year.

The University League Table and its associated 70 subject tables provide our users with a valuable guide to help them make the best choice of course and university, based on objective, audited and official data scrupulously checked with the supplying universities.

One component is, however, missing. The Complete University Guide does not attempt to measure the effectiveness of one university relative to the others in building a student’s knowledge and skill set from admission to graduation.

A measure of this kind, described generally as 'value added', would be, its proponents say, the only true basis for comparison between one university and another.

Critics of league tables such as those published by the Complete University Guide have long argued that they perpetuate an inbuilt bias: towards universities that historically select from the nation’s academic élite – the brightest students at the top schools – and against universities that enrol students with lower qualifications, often from less privileged backgrounds, and lead them through to graduation able to compete in society in general and the jobs market in particular.

It is easier for a top university to admit a student with straight As and graduate them with a good degree three years later than it is for a lower-ranked university to take a student with patchier A-level results through to the equivalent level of knowledge and skills at graduation.

It sounds simple in theory, but in practice – and this debate has been running since league tables first appeared in the 1990s – no one has yet been able to agree on a methodology that achieves the objective while meeting the required tests for robustness and reliability.

The Guardian ranking does include value added, using a complex methodology that draws on qualifications on entry and ultimate degree class. Neither of the other main UK rankings – the Complete University Guide among them – has followed suit.

The main reason for not doing so, in the case of the Complete University Guide, is the complexity of the process used and, more importantly, its lack of transparency. Our competitor does not publish the actual value-added score, but converts it into points, the better to create a differentiated rank order. The difficulty with this approach is that it masks how small the differences between institutions really are – the result of the close bunching of entry qualifications on the one hand and degree classes on the other.

In that ranking, the value-added scores range from 7.4 points out of ten down to 2.5, with only 15 universities scoring below 4.0 points or above 7.0. In other words, there is little to choose between institutions, particularly in the middle order, when it comes to value added.
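To illustrate the point, the short sketch below uses entirely made-up scores (it does not reproduce the Guardian's data or method): a handful of institutions separated by fractions of a point on a ten-point scale still come out in a neatly differentiated rank order, even though the underlying differences are tiny.

```python
# Illustrative sketch only: hypothetical value-added scores, bunched in the
# middle of a ten-point scale, as described in the text above.
import statistics

scores = {
    "Uni A": 5.1,
    "Uni B": 5.0,
    "Uni C": 4.9,
    "Uni D": 4.8,
    "Uni E": 4.7,
}

# Converting the scores into a rank order spreads five near-identical
# institutions across five distinct positions.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for position, (name, score) in enumerate(ranked, start=1):
    print(f"{position}. {name}: {score:.1f}")

# The spread of the underlying scores is tiny relative to the 10-point scale.
print("standard deviation:", round(statistics.stdev(scores.values()), 2))
```

The rank order looks decisive, but the standard deviation of the scores behind it is a small fraction of a point – which is precisely why we are wary of presenting such differences as meaningful.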

Until a better way of calculating the impact of an institution is found and accepted by the university sector, it is unlikely that the Complete University Guide will follow its competitor's lead.

The Guide will continue to rely on solid, audited data, analysed in a transparent and defensible way, in the certainty that this is what would-be students, their parents and advisers, universities and employers deserve.