July 20, 2017

A better ranking system for university teaching?

Who is top dog among UK universities?
Image: © Australian Dog Lover, 2017 http://www.australiandoglover.com/2017/04/dog-olympics-2017-newcastle-april-23.html

Redden, E. (2017) Britain Tries to Evaluate Teaching Quality, Inside Higher Ed, June 22

This excellent article describes in detail a new three-tiered rating system of teaching quality at universities introduced by the U.K. government, and offers a thoughtful discussion of it. As I have a son and daughter-in-law teaching in a U.K. university, and grandchildren who are either students or potential students, I have more than an academic interest in this topic.

How are the rankings done?

Under the government’s Teaching Excellence Framework (TEF), universities in England and Wales will get one of three ‘awards’: gold, silver and bronze (apparently there are no other categories, such as tin, brass, iron or dross for those whose teaching really sucks). A total of 295 institutions opted to participate in the ratings.

Universities are compared on six quantitative metrics that cover:

  • retention rates
  • student satisfaction with teaching, assessment and academic support (from the National Student Survey)
  • rates of employment/post-graduate education six months after graduation.

However, awards are relative rather than absolute since they are matched against ‘benchmarks calculated to account for the demographic profile of their students and the mix of programs offered.’ 

This process generates a “hypothesis” of gold, silver or bronze, which a panel of assessors then tests against additional evidence submitted for consideration by the university (higher education institutions can make up to a 15-page submission to TEF assessors). Ultimately the decision of gold, silver or bronze is a human judgment, not the pure product of a mathematical formula.
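Purely to illustrate the idea of a benchmark-relative 'hypothesis' (the article does not publish the TEF's actual formula or thresholds, so every number, metric name and rule below is an invented assumption), here is a minimal sketch in Python: each metric is flagged where it sits well above or below the institution's demographic benchmark, and the balance of flags suggests gold, silver or bronze before the assessment panel weighs the written submission.

```python
# Illustrative sketch only: a hypothetical benchmark-relative rating.
# This is NOT the actual TEF methodology; thresholds and metric names are invented.

def initial_hypothesis(metrics: dict, benchmarks: dict, flag_threshold: float = 2.0) -> str:
    """Compare each metric (in percentage points) with its demographic benchmark
    and return a provisional 'gold', 'silver' or 'bronze' hypothesis."""
    positive_flags = 0
    negative_flags = 0
    for name, value in metrics.items():
        difference = value - benchmarks[name]  # performance above or below expectation
        if difference >= flag_threshold:
            positive_flags += 1
        elif difference <= -flag_threshold:
            negative_flags += 1
    # Invented decision rule: enough positive flags and no negatives -> gold,
    # several negatives -> bronze, everything else -> silver.
    if positive_flags >= 3 and negative_flags == 0:
        return "gold"
    if negative_flags >= 2:
        return "bronze"
    return "silver"

# Example: a university that beats its benchmarks on teaching satisfaction and retention
metrics = {"teaching_satisfaction": 88.0, "assessment_satisfaction": 85.0,
           "academic_support": 84.0, "retention": 95.0,
           "employment": 93.0, "highly_skilled_employment": 75.0}
benchmarks = {"teaching_satisfaction": 85.0, "assessment_satisfaction": 84.0,
              "academic_support": 83.0, "retention": 91.0,
              "employment": 92.0, "highly_skilled_employment": 74.0}
print(initial_hypothesis(metrics, benchmarks))  # -> "silver" (only two positive flags)
```

The point of the sketch is simply that the metrics generate a starting point relative to each institution's own benchmarks; the final award is a human judgment informed by the written submission, not the output of the formula.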

What are the results?

Not what you might think. Although Oxford and Cambridge universities were awarded gold, so were some less prestigious universities such as Loughborough University, while some more prestigious universities received a bronze. So at least it provides an alternative ranking system to those that focus mainly on research and peer reputation.

What is the purpose of the rankings?

This is less clear. Ostensibly (i.e., according to the government) it is initially aimed at giving potential students a better way of knowing where universities stand with regard to teaching. However, knowing the Conservative government in the UK, it is much more likely to be used to link tuition fees to institutional performance, as part of the government’s free-market approach to higher education. (The U.K. government allowed universities to set their own fees, on the assumption that the less prestigious universities would offer lower tuition fees, but guess what – they almost all opted for the highest level possible, and were still able to fill seats.)

What are the pros and cons of this ranking?

For a more detailed discussion, see the article itself, but here is my take on it.

Pros

First, this is a more thoughtful approach to ranking than the other systems. It focuses on teaching (which will be many potential students’ initial interest in a university) and provides a useful counterbalance to the emphasis on research in other rankings.

Second, it takes a more sophisticated approach than just adding up scores on different criteria. It has an element of human judgement and gives universities an opportunity to make their case about why they should be ranked highly. In other words, it tries to tie institutional goals to teaching performance and to take into account the very large differences between universities in the U.K. in terms of student socio-economic background and curricula.

Third, it does provide a simple, understandable ‘award’ system of categorizing universities on their quality of teaching that students and their parents can at least understand.

Fourth, and most important of all, it sends a clear message to institutions that teaching matters. This may seem obvious, but for many universities – and especially faculty – the only thing that really matters is research. Whether this form of ranking will be sufficient to get institutions to pay more than lip service to teaching, though, remains to be seen.

Cons

However, there are a number of cons. First, the National Union of Students (NUS) is against it, partly because it is heavily weighted towards student satisfaction ratings from the National Student Survey, which thousands of students have been boycotting (I’m not sure why). One would have thought that students in particular would value some accountability regarding the quality of teaching. But then, the NUS has bigger issues with the government, such as the appallingly high tuition fees (C$16,000 a year – the opposition party in parliament, Labour, has promised free tuition).

More importantly, the general arguments about university rankings still apply to this one. They measure institutional performance, not individual department or instructor performance, which can vary enormously within the same institution. If you want to study physics, it doesn’t help if a university has an overall gold ranking but its physics department is crap, or if you get the one instructor who shouldn’t be allowed in the building.

Also, the quantitative measures are surrogates for actual teaching performance. No one has observed the teaching in order to develop the rankings, except the students, and student ratings, while an important measure, can themselves be highly misleading, influenced by instructor personality and the extent to which the instructor makes students work to get a good grade.

The real problem here is twofold. First, there is the difficulty of assessing quality teaching in the first place: one man’s meat is another man’s poison. There is no general agreement, even within a single academic discipline, as to what counts as quality teaching (for instance, understanding, memory of facts, or skills of analysis – perhaps all three are important, but can teaching aimed at developing such diverse attributes be assessed separately?).

The second problem is the lack of quality data on teaching performance – it just isn’t tracked directly. Since a student may take courses from up to 40 different instructors across several different disciplines or departments in a bachelor’s program, it is no mean task to assess the collective effectiveness of their teaching. So we are left with surrogates of quality, such as completion rates.

So is it a waste of time – or worse?

No, I don’t think so. People are going to be influenced by rankings, whatever. This particular ranking system may be flawed, but it is a lot better than the other rankings, which are so heavily influenced by tradition and elitism. It could be used in ways that the data do not justify, such as justifying tuition fee increases or decreased government funding to institutions. It is, though, a first systematic attempt at a national level to assess quality in teaching, and with patience and care it could be considerably improved. But most of all, it is an attempt to ensure accountability for the quality of teaching that takes account of the diversity of students and the different mandates of institutions. It may make both university administrations and individual faculty pay more attention to the importance of teaching well, and that is something we should all support.

So I give it a silver – a good try, but there is definitely room for improvement.

Thanks to Clayton Wright for drawing my attention to this.

Next up

I’m going to be travelling for the next three weeks, so my opportunity to blog will be limited – but that has been the case for the last six months. My apologies – I promise to do better. However, a four-hour layover at Pearson Airport does give me some time for blogging!

How Britain is moving to the privatization of higher education

Shepherd, J. (2011) Private university BPP launches bid to run 10 publicly funded counterparts, The Guardian, June 21

BPP, a subsidiary of the Apollo Group, which owns the University of Phoenix, is negotiating with 10 British universities to manage the administrative side of their operations. BPP is the first private university to be accredited in Britain for 30 years. The aim is to cut costs on the administrative side in order to put more into the academic side. BPP is thus pursuing a two-pronged strategy: partnership with publicly funded universities, and direct competition as a university in its own right.

It is able to do this because the government is moving to having the full cost of teaching covered by tuition fees. The Conservative-Lib Dem coalition was hoping that this would lead to differential fees, with the elite universities charging the maximum (£9,000 per year – $14,500) and the others charging less. However, this has not happened: nearly all the universities are charging the maximum. The government is therefore encouraging the private sector to come in to reduce costs (the government loans students money then claws it back once they are earning, and with higher tuition fees this is costing the government more than it anticipated).

Comment

This seems to me to be the worst kind of politics or policy for higher education. Certainly the law of unintended consequences is working well here, as with previous privatizations in Britain. It looks as though the changes brought in by the government are actually costing it much more in the short run than intended, with increased tuition fees and hence larger loans to students putting even more pressure on a government with huge debts. Opening up universities and colleges to the private sector to drive down costs is likely to lead to further unintended and not always desirable consequences.

However, here’s a different question. Is it such a bad idea to privatize university administration, while leaving the academics to run the academic side? After all, many ancillary services, such as food services and the book store, are already privatized. I know many academics whose gut reaction might be delight at the administration getting its comeuppance.

The problem, though, is that administration costs are mainly driven by academic decisions. For instance, it is academics who demand transcripts, require students to provide lots of data about themselves, and want multiple screens in classrooms or bigger and better lecture theatres. In particular, it is academics who often determine IT costs through their decisions about what technologies to use (or not use). Or is it? Would we spend less on IT if it was driven more by cost than by academic demands? In any case it will be interesting to see what happens when academics in Britain are told to change practices because they are too costly or inefficient on the administrative side. No gain without pain.

What do you think of privatizing the administration? Good or bad?

In the meantime, Apollo Group shares are looking better by the day: shares of Apollo Group (NASDAQ: APOL) opened at $41.51 today. In the past 52 weeks, shares of Apollo Group have traded between a low of $33.75 and a high of $53.61. But with the British government on their side, who knows where the share price could end?
