I am still seething with outrage at the methodology used by the THES World University Rankings.
Under the headline ‘University rankings key tool in recruiting foreign students’ in today’s Globe and Mail, Stephen Toope, President of the University of British Columbia, is reported as saying: ‘These rankings are looked at by students and parents as part of their decision-making process.’
Well, if they are, they are being fooled. These university rankings are the equivalent of a Ponzi scheme. Let’s take a close look at the methodology.
The THES, on its web site, claims that the rankings are based primarily on a survey of 13,000 or so ‘experienced scholars’ in spring 2010. The survey ‘examined the perceived prestige of institutions in both research and teaching. There were 13,388 responses, statistically representative of global higher education’s geographical and subject mix.’
Well, I’d like to see that sample broken down. How many of these scholars were chosen from the institutions ultimately ranked in the top 10-30, or in last year’s top 10-30? How many were chosen from countries where English is not the first language? (Only 18 of the top 100 universities were from non-English-speaking countries.) My fear is that this is a self-referential methodology – in other words, choose scholars from an already biased sample, then ask them to rank each other, with the result that the same institutions come out on top. (Did someone say ‘an Anglo-Saxon hegemony’?)
For instance, the quality of teaching was determined primarily by asking these experienced scholars to rank other universities on the quality of their teaching. Now what do they actually know about teaching at other universities? Probably very little (indeed, how much do they know about teaching by other instructors in their own university?). They may have sat on a few program review committees (usually at institutions similar to their own), and they can refer to the institutions where they did their own undergraduate and post-graduate studies, but it is still a closed circle of the same institutions. These are wholly subjective views based on very limited direct personal experience. What is needed is a set of independent measures of actual teaching outcomes.
The second reason why parents and students are being fooled is the undue weight given to research in the rankings (even the teaching ratings are heavily influenced by research criteria). There is no independent indicator of undergraduate teaching in the rankings. Thus a university that puts all its effort into research, and provides few resources for teaching, will come out much better than an institution that tries to balance research and teaching.
Now ask yourself, as a parent or potential student: how many undergraduate students, even in the most research-intensive university, will go on to do post-graduate research? Probably between 10-15% of a cohort (excluding those going on to an applied master’s or other non-research-based graduate studies). What about the other 85% of students, who are depending primarily on good teaching to get a useful bachelor’s or master’s degree? These rankings in no way address this issue. I think they seriously short-change undergraduate teaching, which is where the bulk of university students are to be found.
A critical criterion not included in the ratings of teaching is completion rate: how many students went on to graduate in their chosen programs? Another is employability: how many got good jobs after graduation? I could go on (and will, in a future post on performance indicators), but what rankings need are empirically based data that indicate the quality of teaching, preferably from a variety of independent sources.
Who’s to blame?
Not the THES. I blame the universities themselves for failing to provide clear indicators of their own performance. First, let’s recognize that universities serve multiple purposes, of which research, although extremely important, is only one. Many institutions do collect data on the success of their graduates, usually in terms of the numbers employed on graduation. Why isn’t this data used?
Also, why were students not surveyed about teaching? I realise there are problems with this, but the more independent sources there are – triangulation – the more reliable the results are likely to be.
Does it matter?
It sure does. If Stephen Toope is correct (and I suspect he is), people are going to make substantial financial decisions on data that are really, really misleading (hence the similarity to a Ponzi scheme). More importantly, the rankings are a classic case of subjectivity over evidence – surely the last thing on which a university wants its reputation based. Thirdly, it is deeply discouraging for those in universities who care about teaching to see it not only given so little weight, but also to see how the ratings of teaching were arrived at. Lastly, the ranking system reinforces the status quo. This may well be fine for the top 10 universities, which are well funded, but it does not serve the vast majority of post-secondary institutions, which need to find new ways of delivering teaching and new measures of performance that are not so heavily research-biased.
Now don’t misunderstand me. Perhaps if a different system were used the rankings wouldn’t change much, at least among the very elite institutions. But no one really knows. It is clear that even minor changes in methodology lead to movement up and down the rankings – sometimes quite substantial movement resulting from a slight change in one measurement – and this will affect where students and parents make their choice.
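To illustrate why the weighting matters so much, here is a minimal sketch in Python, using entirely hypothetical institutions, scores and weights of my own invention (not the THES indicators or data), of how a small shift in the research/teaching balance can reorder a composite ranking:

```python
# Entirely hypothetical institutions and scores, invented for illustration only
# (these are not THES data, indicators or weights).
institutions = {
    "University A": {"research": 95, "teaching": 60},
    "University B": {"research": 80, "teaching": 85},
    "University C": {"research": 70, "teaching": 92},
}

def rank(weight_research):
    """Order institutions by a weighted composite of research and teaching scores."""
    weight_teaching = 1 - weight_research
    composite = {
        name: weight_research * s["research"] + weight_teaching * s["teaching"]
        for name, s in institutions.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# A modest shift in the weighting reorders the table.
print(rank(0.7))  # ['University A', 'University B', 'University C']
print(rank(0.5))  # ['University B', 'University C', 'University A']
```

Whether the real rankings behave exactly like this is something only the compilers can say; the point is simply that a composite index is only as meaningful as the weights, and the underlying measures, that go into it.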
The real answer to this problem is better efforts by the universities themselves to measure their performance. After all, they have nothing to lose, except their reputation.