July 20, 2017

A better ranking system for university teaching?

Who is top dog among UK universities?
Image: © Australian Dog Lover, 2017 http://www.australiandoglover.com/2017/04/dog-olympics-2017-newcastle-april-23.html

Redden, E. (2017) Britain Tries to Evaluate Teaching Quality Inside Higher Ed, June 22

This excellent article describes in detail a new three-tiered rating system of teaching quality at universities introduced by the U.K. government, as well as a thoughtful discussion. As I have a son and daughter-in-law teaching in a U.K. university and grandchildren either as students or potential students, I have more than an academic interest in this topic.

How are the rankings done?

Under the government’s Teaching Excellence Framework (TEF), universities in England and Wales will get one of three ‘awards’: gold, silver and bronze (apparently there are no other categories, such as tin, brass, iron or dross for those whose teaching really sucks). A total of 295 institutions opted to participate in the ratings.

Universities are compared on six quantitative metrics that cover:

  • retention rates
  • student satisfaction with teaching, assessment and academic support (from the National Student Survey)
  • rates of employment/post-graduate education six months after graduation.

However, awards are relative rather than absolute since they are matched against ‘benchmarks calculated to account for the demographic profile of their students and the mix of programs offered.’ 

This process generates a “hypothesis” of gold, silver or bronze, which a panel of assessors then tests against additional evidence submitted for consideration by the university (higher education institutions can make up to a 15-page submission to TEF assessors). Ultimately the decision of gold, silver or bronze is a human judgment, not the pure product of a mathematical formula.
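The benchmarking step described above might be sketched roughly as follows. To be clear, the metric names, the two-point threshold and the "two flags" rule here are my own invented illustration, not the actual TEF methodology, which is more elaborate and, as noted, ends in human judgment rather than a formula.

```python
# Hypothetical sketch of the TEF-style "hypothesis" step described above.
# Metric names, threshold and award rules are illustrative assumptions,
# NOT the actual TEF methodology.

def provisional_award(metrics, benchmarks, threshold=2.0):
    """Compare each metric (in percentage points) against its benchmark.

    A metric counts as a positive flag if it exceeds its benchmark by
    more than `threshold` points, and a negative flag if it falls short
    by more than `threshold` points.
    """
    positives = sum(1 for m in metrics
                    if metrics[m] - benchmarks[m] > threshold)
    negatives = sum(1 for m in metrics
                    if benchmarks[m] - metrics[m] > threshold)
    if positives >= 2 and negatives == 0:
        return "gold"      # well above benchmark on most metrics
    if negatives >= 2:
        return "bronze"    # well below benchmark on most metrics
    return "silver"        # broadly in line with benchmark

# Invented figures for a single institution:
metrics = {"retention": 93.0, "satisfaction": 88.0, "employment": 81.0}
benchmarks = {"retention": 90.0, "satisfaction": 85.0, "employment": 80.0}
print(provisional_award(metrics, benchmarks))  # two metrics well above benchmark -> "gold"
```

The point of the benchmarking is that an institution is judged against what would be expected given its student intake and programme mix, not against a single national bar.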

What are the results?

Not what you might think. Although Oxford and Cambridge universities were awarded gold, so were some less prestigious universities such as Loughborough University, while some more prestigious universities received a bronze. So at least it provides an alternative ranking system to those that focus mainly on research and peer reputation.

What is the purpose of the rankings?

This is less clear. Ostensibly (i.e., according to the government) it is initially aimed at giving potential students a better way of knowing how universities stand with regard to teaching. However, knowing the Conservative government in the UK, it is much more likely to be used to link tuition fees to institutional performance, as part of the government’s free market approach to higher education. (The U.K. government allowed universities to set their own fees, on the assumption that the less prestigious universities would offer lower tuition fees, but guess what – they almost all opted for the highest level possible, and still were able to fill seats).

What are the pros and cons of this ranking?

For a more detailed discussion, see the article itself, but here is my take on it.


First, this is a more thoughtful approach to ranking than the other systems. It focuses on teaching (which will be many potential students’ initial interest in a university) and provides a useful counter-balance to the emphasis on research in other rankings.

Second, it has a more sophisticated approach than just counting up scores on different criteria. It has an element of human judgement and an opportunity for universities to make their case about why they should be ranked highly. In other words, it tries to tie institutional goals to teaching performance, and to take into account the very large differences between universities in the U.K. in terms of student socio-economic background and curricula.

Third, it does provide a simple, understandable ‘award’ system of categorizing universities on their quality of teaching that students and their parents can at least understand.

Fourth, and most important of all, it sends a clear message to institutions that teaching matters. This may seem obvious, but for many universities – and especially faculty – the only thing that really matters is research. Whether this form of ranking will be sufficient to get institutions to pay more than lip service to teaching, though, remains to be seen.


However, there are a number of cons. First, the National Union of Students (NUS) is against it, partly because it is heavily weighted by student satisfaction ratings based on the National Student Survey, which thousands of students have been boycotting (I’m not sure why). One would have thought that students in particular would value some accountability regarding the quality of teaching. But then, the NUS has bigger issues with the government, such as the appallingly high tuition fees (C$16,000 a year; the opposition party in parliament, Labour, has promised free tuition).

More importantly, there are the general arguments about university rankings that still apply to this one. They measure institutional performance, not individual department or instructor performance, which can vary enormously within the same institution. If you want to study physics, it doesn’t help if a university has an overall gold ranking but its physics department is crap, or if you get the one instructor who shouldn’t be allowed in the building.

Also, the quantitative measures are surrogates for actual teaching performance. No one has observed the teaching to develop the rankings, except the students, and student ratings themselves, while an important measure, can also be highly misleading, influenced by instructor personality and the extent to which the instructor makes students work to get a good grade.

The real problem here is two-fold. First, there is the difficulty of assessing quality teaching in the first place: one man’s meat is another man’s poison. There is no general agreement, even within an academic discipline, as to what counts as quality teaching (for instance, understanding, memory of facts, or skills of analysis – maybe all three are important, but can how one teaches to develop these diverse attributes be assessed separately?).

The second problem is the lack of quality data on teaching performance – it just isn’t tracked directly. Since a student may take courses from up to 40 different instructors and from several different disciplines/departments in a bachelor’s program, it is no mean task to assess the collective effectiveness of their quality of teaching. So we are left with surrogates of quality, such as completion rates.

So is it a waste of time – or worse?

No, I don’t think so. People are going to be influenced by rankings, whatever. This particular ranking system may be flawed, but it is a lot better than the other rankings, which are so much influenced by tradition and elitism. It could be used in ways that the data do not justify, such as justifying tuition fee increases or decreased government funding to institutions. It is, though, a first systematic attempt at a national level to assess quality in teaching, and with patience and care it could be considerably improved. But most of all, it is an attempt to ensure accountability for the quality of teaching that takes account of the diversity of students and the different mandates of institutions. It may make both university administrations and individual faculty pay more attention to the importance of teaching well, and that is something we should all support.

So I give it a silver – a good try but there is definitely room for improvement. 

Thanks to Clayton Wright for drawing my attention to this.

Next up

I’m going to be travelling for the next three weeks so my opportunity to blog will be limited – but that has been the case for the last six months. My apologies – I promise to do better. However, a four-hour layover at Pearson Airport does give me some time for blogging!

Another way to rank universities?

Oxford University

Mundell, I. (2013) EU rolls out university ranking European Voice.com, January 24

The European Union is promoting a new system of rankings for Europe’s universities to encourage international comparison. The system, called U-Multirank, aims to correct a perceived bias towards research performance in other international rankings, and so present a more balanced picture of university activities.  The perceived research bias of existing rankings is seen as problematic because it fails to recognise that universities may have other goals and their users may have other priorities. The EU initiative aims to bring out these other university activities and allow institutions to be compared accordingly.

The possibility that U-Multirank offers an alternative has been cautiously welcomed in some quarters, but rejected in others. The League of European Research Universities, which represents 21 elite institutions including Oxford and Cambridge, left the U-Multirank pilot project and remains opposed to its implementation, particularly with public funds.


First, I welcome any move to provide an alternative to the fundamentally flawed university ranking models currently in use, which certainly overvalue research and undervalue teaching. My main concern, though, is that current rankings are deliberately rigged to promote the standings of existing elite universities in the USA and, to a lesser extent, in the UK. The EU is trying to redress that.

However, I doubt whether it is feasible or even desirable to reduce a complex organization such as a university to a single numerical ranking. In any university, some departments will be better than others. Different students will want different things from a university. Rankings tell you nothing about the flexibility they provide for learners. University rankings are an attempt to rig a simple metric to drive high international fee payers to certain institutions. Don’t play or support this stupid game.

See also: World University Rankings: A Reality Based on a Fraud

World University Rankings: A reality based on a fraud

I am still seething with outrage at the methodology used by the THES World University Rankings.

Under a headline ‘University rankings key tool in recruiting foreign students’, in today’s Globe and Mail newspaper, Stephen Toope, the President of the University of British Columbia is reported as saying ‘These rankings are looked at by students and parents as part of their decision-making process.’

Well, if they are, they are being fooled. These university rankings are the equivalent of a Ponzi scheme. Let’s take a close look at the methodology.

Rotten methodology

The THES, on its web site, claims that the rankings are based primarily on a survey of 13,000 or so ‘experienced scholars’ in spring 2010: ‘It examined the perceived prestige of institutions in both research and teaching. There were 13,388 responses, statistically representative of global higher education’s geographical and subject mix.’

Well, I’d like to see that sample broken down. How many of these scholars were chosen from the institutions ultimately ranked in the top 10-30, or in last year’s top 10-30? How many were chosen from countries where English is not the first language? (Only 18 out of the top 100 universities were from non-English speaking countries). My fear is that this is a self-referential methodology – in other words, choose scholars from a sample already biased then ask them to rank each other, with the result that the same institutions come to the top. (Did someone say ‘An Anglo-Saxon hegemony’?)

For instance, quality of teaching was determined primarily by asking these experienced scholars to rank other universities on the quality of their teaching. Now what do they actually know about teaching in other universities? Probably very little (indeed, how much do they know about teaching by other instructors in their own university?). They may have sat on a few program review committees (usually of similar institutions to their own), and they may refer to their own undergraduate and post-graduate teaching institutions, but it’s still a closed circle of the same institutions. These are totally subjective views based on very limited direct personal experience. What is needed are independent measures of actual outcomes of teaching.

The second reason why parents and students are being fooled is the undue weight given to research in the rankings. (Even the teaching ratings are heavily influenced by research criteria.) There is no independent indicator of undergraduate teaching in the rankings. Thus a university that loads all its efforts into research, and provides few resources for teaching, will come out much better than an institution that tries to balance research and teaching.

Now ask yourself, as a parent or potential student: how many undergraduate students, even in the most heavily research-focused university, will go on to do post-graduate research? Probably between 10-15% of a cohort (I exclude those going on to do an applied master’s or other non-research-based graduate studies). What about the other 85% of the students, who depend primarily on good teaching to get a useful bachelor’s or master’s degree? These rankings in no way address this issue. I think they seriously short-change undergraduate teaching, which is where the bulk of university students are.

Better criteria

A critical criterion not included in the ratings of teaching is completion rate. How many students went on to graduate in their chosen programs? Another is employability – how many got good jobs after graduation? I could go on and on (and will do in a future post on performance indicators), but what is needed for rankings are empirically based data that indicate the quality of teaching, preferably from a variety of independent sources.

Who’s to blame?

Not the THES. I blame the universities themselves for failing to provide clear indicators of their own performance. First, let’s recognize that universities serve multiple purposes, of which research, although extremely important, is only one. Many institutions do collect data on the success of their graduates, usually in terms of the numbers employed on graduation. Why isn’t this data used?

Also, why were students not surveyed about teaching? I realise there are problems with this, but the more independent sources used – triangulation – the more likely the results will be reliable.

Does it matter?

It sure does. If Stephen Toope is correct (and I suspect he is), people are going to make substantial financial decisions on data that are really, really, really misleading (hence the similarity to a Ponzi scheme). More importantly, the rankings are a classic case of subjectivity over evidence – surely the last thing on which a university wants its reputation based. Thirdly, it is so discouraging for those in universities who care about teaching to see it not only given so little weight, but also to see how the ratings of teaching were made. Lastly, the ranking system reinforces the status quo. This may well be fine for the top 10 universities, who are well funded, but it does not serve the vast majority of post-secondary institutions, who need to find new ways of delivering teaching and new measures of performance that are not so heavily research biased.

Now don’t misunderstand me. Perhaps if a different system were used the rankings wouldn’t change much, at least among the very elite institutions. But no one really knows. It is clear that even minor changes in methodology do lead to movement up and down the rankings – sometimes quite substantial, due to a slight change on one measurement – and this will affect how students and parents make their choices.
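The sensitivity to methodology is easy to demonstrate with a toy example. The institutions, scores and weights below are entirely invented; the point is only that a modest shift in the weight given to research versus teaching can flip the order of a weighted-sum ranking.

```python
# Toy illustration (invented scores and weights) of how a small change
# in methodology can reorder a ranking built from weighted criteria.

def rank(scores, weights):
    """Return institution names ordered by weighted total, best first."""
    totals = {name: sum(w * s for w, s in zip(weights, vals))
              for name, vals in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical scores on (research, teaching), each out of 100.
scores = {"Univ A": (95, 70), "Univ B": (80, 90)}

print(rank(scores, (0.7, 0.3)))  # research-heavy weights put Univ A first
print(rank(scores, (0.5, 0.5)))  # equal weights flip the order: Univ B first
```

Neither weighting is "correct"; the ranking is an artefact of a choice the ranked institutions never made, which is exactly why methodology changes move institutions up and down the table.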

The real answer to this problem is better efforts by the universities themselves to measure their performance. After all, they have nothing to lose, except their reputation.

See also How much does teaching count in World University Rankings?