I am still seething with outrage at the methodology used by the THES World University Rankings.

Under the headline ‘University rankings key tool in recruiting foreign students’ in today’s Globe and Mail newspaper, Stephen Toope, the President of the University of British Columbia, is reported as saying, ‘These rankings are looked at by students and parents as part of their decision-making process.’

Well, if they are, they are being fooled. These university rankings are the equivalent of a Ponzi scheme. Let’s take a close look at the methodology.

Rotten methodology

The THES, on its website, claims that the rankings are based primarily on a spring 2010 survey of 13,000 or so ‘experienced scholars’. According to the THES, the survey ‘examined the perceived prestige of institutions in both research and teaching. There were 13,388 responses, statistically representative of global higher education’s geographical and subject mix.’

Well, I’d like to see that sample broken down. How many of these scholars were chosen from the institutions ultimately ranked in the top 10-30, or from last year’s top 10-30? How many were chosen from countries where English is not the first language? (Only 18 of the top 100 universities were from non-English-speaking countries.) My fear is that this is a self-referential methodology: choose scholars from an already biased sample, then ask them to rank each other, with the result that the same institutions come out on top. (Did someone say ‘an Anglo-Saxon hegemony’?)
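
To make that worry concrete, here is a toy simulation of the mechanism. Everything in it is invented: the sampling rule (reviewers drawn in proportion to last year’s prestige) and the ‘familiarity’ assumption (reviewers only nominate institutions in their own circle) are mine, not the THES’s. Treat it as a sketch of how self-reference can arise, not as a model of the actual survey.

```python
# Toy simulation (invented numbers, NOT the THES procedure) of a
# self-referential reputation survey: reviewers are drawn mostly from
# institutions that were already ranked highly, and they only nominate
# the institutions in their own circle, so last year's top of the table
# gets nominated far more often regardless of its real quality.

import random

random.seed(1)

N = 50                                  # institution 0 = top of last year's table
true_quality = [random.random() for _ in range(N)]   # invisible to reviewers

# Reviewers are sampled with probability proportional to last year's prestige.
prestige_weights = [1.0 / (rank + 1) for rank in range(N)]

mentions = [0] * N
for _ in range(13388):                  # the survey's reported response count
    home = random.choices(range(N), weights=prestige_weights)[0]
    # Each reviewer nominates the handful of institutions they know,
    # i.e. mostly their own circle of similarly ranked places.
    for i in range(max(0, home - 5), min(N, home + 6)):
        mentions[i] += 1

new_rank = sorted(range(N), key=lambda i: -mentions[i])
best_real = sorted(range(N), key=lambda i: -true_quality[i])
print("survey's top 10:", new_rank[:10])    # essentially last year's top 10 again
print("actual top 10:  ", best_real[:10])   # bears no relation to it
```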

For instance, quality of teaching was determined primarily by asking these experienced scholars to rank other universities on the quality of their teaching. Now what do they actually know about teaching in other universities? Probably very little (indeed, how much do they know about teaching by other instructors in their own university?). They may have sat on a few program review committees (usually at institutions similar to their own), and they may fall back on the institutions where they themselves were taught, but it’s still a closed circle of the same institutions. These are wholly subjective views based on very limited direct personal experience. What are needed are independent measures of actual teaching outcomes.

The second reason why parents and students are being fooled is the undue weight given to research in the rankings. (Even the teaching ratings are heavily influenced by research criteria.) There is no independent indicator of undergraduate teaching in the rankings. Thus a university that pours all its effort into research and provides few resources for teaching will come out much better than an institution that tries to balance research and teaching.

Now ask yourself, as a parent or potential student: how many undergraduate students, even in the most heavily research-oriented university, will go on to do post-graduate research? Probably between 10% and 15% of a cohort (excluding those going on to an applied master’s or other non-research-based graduate studies). What about the other 85-90% of students, who depend primarily on good teaching to get a useful bachelor’s or master’s degree? These rankings in no way address this issue. I think they seriously short-change undergraduate teaching, which is where the bulk of university students is to be found.

Better criteria

A critical criterion not included in the ratings of teaching is completion rate: how many students went on to graduate in their chosen programs? Another is employability: how many got good jobs after graduation? I could go on (and will, in a future post on performance indicators), but what rankings need are empirically based data that indicate the quality of teaching, preferably from a variety of independent sources.

Who’s to blame?

Not the THES. I blame the universities themselves for failing to provide clear indicators of their own performance. First, let’s recognize that universities serve multiple purposes, of which research, although extremely important, is only one. Many institutions do collect data on the success of their graduates, usually in terms of the numbers employed on graduation. Why isn’t this data used?

Also, why were students not surveyed about teaching? I realise there are problems with this, but the more independent sources you draw on (triangulation), the more reliable the results are likely to be.

Does it matter?

It sure does. If Stephen Toope is correct (and I suspect he is), people are going to make substantial financial decisions on data that are really, really, really misleading (hence the similarity to a Ponzi scheme). Second, the rankings are a classic case of subjectivity trumping evidence, surely the last thing on which a university wants its reputation based. Third, it is deeply discouraging for those in universities who care about teaching to see it not only given so little weight, but assessed in the way it was. Lastly, the ranking system reinforces the status quo. This may well be fine for the top 10 universities, which are well funded, but it does not serve the vast majority of post-secondary institutions, which need to find new ways of delivering teaching and new measures of performance that are not so heavily biased towards research.

Now don’t misunderstand me. Perhaps if a different system were used the rankings wouldn’t change much, at least among the very elite institutions. But no one really knows. It is clear that even minor changes in methodology lead to movement up and down the rankings, sometimes quite substantial movement from a slight change in one measurement, and this will affect the choices students and parents make.
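
As a rough illustration of that sensitivity, here is a minimal sketch with made-up indicator scores and weights; nothing in it comes from the actual THES data or weighting scheme. Shifting a modest amount of weight from research to teaching completely reorders four hypothetical institutions.

```python
# Toy illustration (invented scores, not THES data) of how a small change
# in one indicator's weight can reorder a composite ranking.

# Indicator scores out of 100: (research, teaching, citations)
institutions = {
    "A": (95, 60, 90),
    "B": (85, 85, 80),
    "C": (80, 90, 85),
    "D": (90, 70, 88),
}

def rank(weights):
    # Composite score = weighted sum of the three indicators.
    composite = {
        name: sum(w * s for w, s in zip(weights, scores))
        for name, scores in institutions.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

print(rank((0.6, 0.2, 0.2)))   # research-heavy weighting -> ['A', 'D', 'B', 'C']
print(rank((0.4, 0.4, 0.2)))   # shift weight to teaching -> ['C', 'B', 'D', 'A']
```

The same four institutions, with the same scores, end up in the exact opposite order; which is one reason a ranking that hides its weighting choices behind a single league-table number can mislead.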

The real answer to this problem is better efforts by the universities themselves to measure their performance. After all, they have nothing to lose, except their reputation.

See also How much does teaching count in World University Rankings?

COMMENTS

  1. I agree with the observations signalling methodological issues, and with the conclusion.

    I have seen enough anecdotal evidence of self-referencing. Although there are good reasons to limit the scope of the research (e.g. cost, time, language barriers), the conclusions should not misrepresent the truth by generalizing without grounds. As for the cause of such enormous bias, I think sometimes it is cost, and sometimes plain laziness and negligence.

    I live in Toronto, where half of the population is foreign-born (see the information presented by the City of Toronto, based on the 2006 Census data: http://www.toronto.ca/invest-in-toronto/immigration_char.htm ). I have studied and worked with foreign-trained people from Bangladesh, India, the Philippines, and elsewhere. I have found them skilled and knowledgeable enough to compete with locally trained workers. This speaks to the quality of their foreign education.

    I should add that, Canada being a bilingual country, I would expect to see literature reviews drawing on both Anglo-Saxon and francophone sources (i.e. in English and French). The tendency to over-reference the people we trust from the institutions we know is becoming more and more obvious as globalization, social networks and social media become mainstream phenomena.

    In conclusion, I support Tony Bates’ conclusions, adding that universities, which are centres of expertise in research and evaluation, should be in the best position to assess their own performance.

    I wonder if the real debate is not between the old-style academic hierarchy and the democratization of post-secondary education, with a move toward a more service-oriented view of quality. Having taken courses at a “small” and new university (Ryerson), I can rightfully say that the service (the overall student experience) can be better there, even though graduation rates and research outcomes may be behind those of institutions where I felt a bit like a “number”, and where talking to, or even seeing, our professors up close was difficult because of the massive size of the audience.

  2. It comes back to a very well-rehearsed debate in higher education: the value of teaching, the value of a student with a degree from a high-ranking university, the value of a vocational degree from a low-ranking but focused ‘new university’, and the current debate in the UK about how we continue to fund HE in general, where, to quote Terry Pratchett in his recent interview in the THES, ‘now we seem to believe that everyone will benefit from a university education. They don’t, a lot of them – it’s a waste of bloody time.’ Well, not quite, Terry: at least if we do our job right we will have extended their tolerance of ambiguity and enabled them, especially engineers, to look for more complex solutions to world problems than bombs or terror (yes, engineers with HE experience are over-represented among those inflicting terror on our major cities). So teaching is important, and we in the e-learning world must help those in universities to do it better, very often by challenging their pedagogy and moving them from the research-focused world of ratings to the student-focused world of teaching and learning. So change what universities do best, and enable our young people to attend them without amassing huge debts.

  3. U.S. Graduate Education:
    Academy Rankings Tell You a Lot, But Not Who’s No. 1 in Any Field

    Well folks, here is a news report from SCIENCE that should get everyone’s attention! I encourage you to read the full paper, because it provides no ONE answer, but opens up ways to seek and analyse the data from multiple perspectives.

    Here is the abstract:

    Science, 1 October 2010, Vol. 330, No. 6000, pp. 18-19
    DOI: 10.1126/science.330.6000.18
    News of the Week
    U.S. Graduate Education: Academy Rankings Tell You a Lot, But Not Who’s No. 1 in Any Field
    Jeffrey Mervis

    This week’s release of the long-awaited assessment of the quality of U.S. research doctoral programs by the National Academies’ National Research Council (NRC), the first since 1995 and 3 years behind schedule, disgorges a massive amount of information about 5100 doctoral programs in 62 fields at 212 U.S. universities. More than a decade in the making, the assessment is meant to reflect the collective wisdom of the U.S. research community on what defines a top-quality graduate program. But those who simply want to know who’s No. 1 in neuroscience or read a list of the top 10 graduate programs in any particular field will walk away disappointed, because the NRC assessment can look quite different depending on your definition of “best.”
