May 24, 2018

Learning analytics, student satisfaction, and student performance at the UK Open University

There is very little correlation between student satisfaction and student performance. Image: Bart Rienties

Rienties, B. and Toetenel, L. (2016) The impact of learning design on student behaviour, satisfaction and performance: A cross-institutional comparison across 151 modules, Computers in Human Behavior, Vol. 60, pp. 333-341

Li, N. et al. (2017) Online learning experiences of new versus continuing learners: a large-scale replication study, Assessment and Evaluation in Higher Education, Vol. 42, No. 4, pp.657-672

It’s never too late to learn

It’s been a hectic month, with two trips from Vancouver to Ontario and back and one to the UK and back: a total of four keynotes, two panel sessions and two one-day consultancies. By the time I got to the end of the month’s travels, I had learned so much that at a conference in Toronto I had to go to my room and lie down – I just couldn’t take any more!

At my age, it takes time to process all this new information, but I will try to summarise the main points of what I learned in the next three posts.

Learning analytics at the Open University

The Open University, with over 100,000 students, more than 1,000 courses (modules), and most of its teaching online in one form or another, is an ideal context for the application of learning analytics. Fortunately, the OU has some of the world’s leaders in this field.

At the conference on STEM teaching at the Open University at which I gave the opening keynote, the closing keynote was given by Bart Rienties, Professor of Learning Analytics at the Institute of Educational Technology at the UK Open University. Rienties and his team used multiple regression models to link the learning designs of 151 modules (courses), and data from 111,256 students, with students’ behaviour, satisfaction and performance at the Open University UK.
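To give a feel for the method, here is a minimal sketch of this kind of analysis in Python. Everything in it is my own illustration: the file name and column names (e.g. assimilative_pct, communication_pct) are assumptions, not the actual OU dataset or pipeline.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical module-level dataset: one row per module, with the share of
# planned study time per learning design activity, average learner
# satisfaction, and academic retention. All names are illustrative.
modules = pd.read_csv("ou_modules.csv")

# Multiple regression: how much of the variation in academic retention is
# explained by the learning design and satisfaction measures?
model = smf.ols(
    "retention ~ assimilative_pct + communication_pct"
    " + assessment_pct + satisfaction",
    data=modules,
).fit()

print(model.summary())  # coefficients, p-values and R-squared
```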

His whole presentation (40 minutes, including questions) can be accessed online, and is well worth viewing, as it provides a clear summary of the results published in the two detailed papers listed above. As always, if you find my summary of results below of interest or challenging, I strongly recommend you view Bart’s video first, then read the two articles in more detail. Here’s what I took away.

There is little correlation between student course evaluations and student performance

This result is a bit of a zinger. The core dependent variable used was academic retention (the number of learners who completed and passed the module relative to the number of learners who registered for each module). As Rienties and Toetenel (p.340) comment, almost as an aside,

it is remarkable that learner satisfaction and academic retention were not even mildly related to each other…. Our findings seem to indicate that students may not always be the best judge of their own learning experience and what helps them in achieving the best outcome.
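The retention measure defined above is simple arithmetic; here it is as a function, with made-up numbers:

```python
def academic_retention(completed_and_passed: int, registered: int) -> float:
    """The paper's dependent variable: learners who completed and passed
    the module, relative to those who registered for it."""
    return completed_and_passed / registered

# e.g. 720 of 1,000 registered learners completed and passed the module:
print(academic_retention(720, 1000))  # 0.72
```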

The design of the course matters

One of the big challenges in online and blended learning is getting subject matter experts to recognise the importance of what the Open University calls ‘learning design.’ 

Conole (2012, p.121) describes learning design as:

a methodology for enabling teachers/designers to make more informed decisions in how they go about designing learning activities and interventions, which is pedagogically informed and makes effective use of appropriate resources and technologies. LD is focussed on ‘what students do’ as part of their learning, rather than the ‘teaching’ which is focussed on the content that will be delivered.

Thus learning design is more than just instructional design.

However, Rienties et al. comment that ‘only a few studies have investigated how educators in practice are actually planning and designing their courses and whether this is then implemented as intended in the design phase.’

The OU has done a good job of breaking down the elements of learning design, and has mapped them across nearly 200 different courses. The elements of this mapping are set out in Rienties and Toetenel (2016, p.335).

Rienties and Toetenel then analysed the correlations between each of these learning design elements and both learner satisfaction and learner performance. What they found is that what OU students liked did not match what improved their performance. For instance, students were most satisfied with ‘assimilative’ activities, which are primarily content-focused, and disliked communication activities, which are primarily social. However, better student retention was most strongly associated with communication activities and, overall, with the quality of the learning design.
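A sketch of that comparison, again with assumed column names: correlate each design element with satisfaction and with retention, and set the two sets of coefficients side by side.

```python
import pandas as pd

modules = pd.read_csv("ou_modules.csv")  # hypothetical dataset, as above

design_elements = ["assimilative_pct", "communication_pct",
                   "productive_pct", "assessment_pct"]

# Correlate each learning design element with both outcome measures.
comparison = pd.DataFrame({
    "vs_satisfaction": modules[design_elements].corrwith(modules["satisfaction"]),
    "vs_retention": modules[design_elements].corrwith(modules["retention"]),
})

# On the paper's findings, assimilative activities would correlate
# positively with satisfaction, communication activities with retention.
print(comparison)
```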

Rienties and Toetenel conclude:

although more than 80% of learners were satisfied with their learning experience, learning does not always need to be a nice, pleasant experience. Learning can be hard and difficult at times, and making mistakes, persistence, receiving good feedback and support are important factors for continued learning….

An exclusive focus on learner satisfaction might distract institutions from understanding the impact of LD on learning experiences and academic retention. If our findings are replicated in other contexts, a crucial debate with academics, students and managers needs to develop whether universities should focus on happy students and customers, or whether universities should design learning activities that stretch learners to their maximum abilities and ensuring that they eventually pass the module. Where possible, appropriate communication tasks that align with the learning objectives of the course may seem to be a way forward to enhance academic retention.

Be careful what you measure

As Rienties and Toetenel put it:

Simple LA metrics (e.g., number of clicks, number of downloads) may actually hamper the advancement of LA research. For example, using a longitudinal data analysis of over 120 variables from three different VLE/LMS systems and a range of motivational, emotions and learning styles indicators, Tempelaar et al. (2015) found that most of the 40 proxies of “simple” VLE LA metrics provided limited insights into the complexity of learning dynamics over time. On average, these clicking behaviour proxies were only able to explain around 10% of variation in academic performance.

In contrast, learning motivations, emotions (attitudes), and learners’ activities during continuous assessments (behaviour) significantly improved explained variance (up to 50%) and could provide an opportunity for teachers to help at-risk learners at a relatively early stage of their university studies.
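The comparison being made here is between the variance in performance explained by each set of predictors. A minimal sketch of that idea follows; the feature names are my assumptions, and scikit-learn stands in for the authors’ actual statistical pipeline.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

students = pd.read_csv("student_data.csv")  # hypothetical student-level data
y = students["performance"]

# Simple VLE metrics (clicks, downloads) vs richer indicators
# (motivation, emotions, activity during continuous assessment).
feature_sets = {
    "simple click metrics": ["n_clicks", "n_downloads"],
    "richer indicators": ["motivation", "anxiety", "assessment_activity"],
}

for name, cols in feature_sets.items():
    X = students[cols]
    r2 = LinearRegression().fit(X, y).score(X, y)  # explained variance (R^2)
    print(f"{name}: R^2 = {r2:.2f}")
# Tempelaar et al. report around 10% explained variance for the former,
# and up to 50% for the latter.
```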

My conclusions

Student feedback on the quality of a course is really important, but it is more useful as a conversation between students and instructors/designers than as a quantitative ranking of the quality of a course. In fact, using learner satisfaction as a way to rank teaching is highly misleading. Learner satisfaction encompasses a very wide range of factors as well as the teaching of a particular course. It is possible to imagine a highly effective course where teaching in a transmissive or assimilative manner is minimal, but student activities are wide, varied and relevant to the development of significant learning outcomes. Students, at least initially, may not like this, because it may be a new experience for them and because they must take more responsibility for their learning. Thus good communication, and explanation of why particular approaches to teaching have been chosen, are essential (see my response to a question on the video).

Perhaps the biggest limitations of student satisfaction for assessing the quality of teaching, though, are the often very low response rates from students, the limited evaluation questions due to standardization (the same questions irrespective of the nature of the course), and the poor quality of the student responses. This is no way to assess the quality of an individual teacher or a whole institution, yet far too many institutions and governments are building it into their evaluation of teachers/instructors and institutions.

I have been fairly skeptical of learning analytics up to now, because of the tendency to focus more on what is easily measurable (simple metrics) than on what students actually do qualitatively when they are learning. The focus on learning design variables in these studies is refreshing and important, but analysis of students’ actual learning habits will be just as important.

Finally, this research provides quantitative evidence of the importance of learning design in online and distance teaching. Good design leads to better learning outcomes. Why, then, are we not applying this knowledge to the design of all university and college courses, not just online ones? We need a shift in the power balance between university and college subject experts and learning designers, so that the latter are treated as at least equals in the teaching process.

References

Conole, G. (2012) Designing for Learning in an Open World Dordrecht: Springer

Tempelaar, D. T., Rienties, B. and Giesbers, B. (2015) In search for the most informative data for feedback generation: learning analytics in a data-rich context, Computers in Human Behavior, Vol. 47, pp. 157-167 http://dx.doi.org/10.1016/j.chb.2014.05.038

 

‘Making Digital Learning Work’: why faculty and program directors must change their approach

Completion rates for different modes of delivery at Houston Community College

Bailey, A. et al. (2018) Making Digital Learning Work Boston MA: The Boston Consulting Group/Arizona State University

Getting blended learning wrong

I’ve been to several universities recently where faculty are beginning to develop blended or ‘hybrid’ courses which reduce but do not eliminate time on campus. I must confess I have mixed feelings about this. While I welcome such moves in principle, I have been alarmed by some of the approaches being taken.

The main strategy appears to be to move some of the face-to-face lectures online, without changing either the face-to-face or the online lecture format. In particular, there is often resistance to asynchronous approaches to online learning. In one or two cases I have seen, faculty have insisted that students watch the online lectures live so that there can be synchronous online discussion, thus severely limiting the flexibility of ‘any time, any place’ learning for students.

Even more alarming, academic departments seem to be approaching the development of new blended learning programs in the same way as their on-campus programs: identify faculty to teach the courses, then let them loose without any significant faculty development or learning design support. Even worse, there is no project management to ensure that courses are ready on time. Why discuss the design of the online lectures when you don’t do that for your classroom lectures?

Trying to move classroom lectures online without adaptation is bound to fail, as we saw in the early days of fully online learning (and MOOCs). I recognise that blended or hybrid learning is different from fully online learning, but it is also different from face-to-face teaching. The challenge is to identify the added value of the face-to-face component, when most teaching can be done as well or better, and much more conveniently for students, online, and then to combine the two modes of delivery to deliver better learning outcomes more cost-effectively. In particular, faculty are missing the opportunity to change their teaching methods in order to get better learning outcomes, such as the development of high-level intellectual skills.

The real danger here is that poorly designed blended courses or programs will ‘fail’ and it is ‘blended learning’ that is blamed, when really it’s ignorance of best teaching practices on the part of faculty, and program directors especially. The problem is that faculty, and particularly senior faculty such as Deans and program directors, don’t know what they don’t know, which is why the report, ‘Making Digital Learning Work’ is so important. The report provides evidence that digital learning needs a complete change in culture and approaches to course and program development and delivery for most academic departments. Here’s why.

The report

The Arizona State University Foundation and the Boston Consulting Group, funded by the Bill & Melinda Gates Foundation, conducted a study of the return on investment (ROI) of digital learning. The methodology focused on six case studies of institutions that have been pioneers in post-secondary digital education:

  • Arizona State University
  • University of Central Florida
  • Georgia State University
  • Houston Community College
  • The Kentucky Community and Technical College System
  • Rio Salado Community College.

These are all large institutions (over 30,000 students each) and relatively early adopters of online learning. 

The study had three aims:

  • define what ROI means in terms of digital education, and identify appropriate metrics for measuring ROI
  • assess the impact of digital learning formats on institutions’ enrolments, student learning outcomes, and cost structures
  • examine how these institutions implemented digital learning, and identify lessons and promising practices for the field.

The study compared results from three different modes of delivery:

  • face-to-face courses
  • mixed-modality courses, offering a mix of online and face-to-face components, with the online component typically replacing some traditional face-to-face teaching (what I would call ‘hybrid’ learning)
  • fully online courses.

The ROI framework

The study identified three components of ROI for digital learning:

  • impact on student access to higher education
  • impact on learning and completion outcomes
  • impact on economics (the costs of teaching, administration and infrastructure, and the cost to students).

The report is particularly valuable in the way it has addressed the economic issues. Several factors were involved:

  • differences in class size between face-to-face and digital teaching and learning
  • differences in the mix of instructors (tenured and adjunct, full-time and part-time)
  • allocation of additional expenses such as faculty development and learning design support
  • impact of digital learning on classroom and other physical capacity 
  • IT costs specifically associated with digital learning.

The report summarises this framework in a single graphic.

While there are some limitations, which I will discuss later, this is a sophisticated approach to looking at the return on investment in digital learning, and it gives me a great deal of confidence in the findings.
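To see how those factors interact, here is a deliberately oversimplified cost-per-credit-hour calculation with made-up numbers; the report’s actual economic model is far more detailed.

```python
def cost_per_credit_hour(enrolment: int, credit_hours: int,
                         instruction: float, design_support: float,
                         facilities: float, it_costs: float) -> float:
    """Total delivery cost divided by the credit hours produced."""
    total_cost = instruction + design_support + facilities + it_costs
    return total_cost / (enrolment * credit_hours)

# Illustrative only: a face-to-face section vs a redesigned online section
# with larger enrolment, design support and IT costs, but no classroom.
f2f = cost_per_credit_hour(30, 3, instruction=9000, design_support=0,
                           facilities=3000, it_costs=500)
online = cost_per_credit_hour(60, 3, instruction=12000, design_support=2000,
                              facilities=0, it_costs=2500)
print(f"f2f: ${f2f:.0f}, online: ${online:.0f} per credit hour")
```

On these invented numbers the online section comes out cheaper per credit hour, but the point of the framework is precisely that the answer depends on these variables: class size, instructor mix, design support, facilities and IT costs.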

Results

Evidence from the six case studies resulted in the following findings, comparing digital learning with face-to-face teaching.

Digital learning resulted in:

  • equivalent or improved student learning outcomes
  • faster time to degree completion
  • improved access, particularly for disadvantaged students
  • a better return on investment (at four of the institutions): savings for online courses ranged from $12 to $66 per credit hour.

If you have problems believing or accepting these results then I recommend you read the report in full. I think you will find the results justified.

Conditions for success

This is perhaps the most valuable part of the report: although most faculty may not be aware of it, those of us working in online learning have long known about the benefits of digital learning identified above. What this report makes clear, though, are the conditions needed for digital learning to succeed:

  • take a strategic portfolio approach to digital learning. This needs a bit of unpacking because of the terminology. The report argues that the greatest potential to improve access and outcomes while reducing costs lies in increasing the integration of digital learning into the undergraduate experience through mixed-modality (i.e. hybrid) learning. This involves not one single approach to course design but a mix, depending on the demands of the subject and the needs of students. However, there should be somewhat standard course design templates to ensure efficiency in course design and to reduce risk.
  • build the necessary capabilities and expertise to design for quality in the digital realm. The experience of the six institutions emphasises that significant investment needs to be made in instructional design, learning sciences and digital tools and capacity (and – my sidebar – faculty need to listen to what instructional designers tell them)
  • provide adequate student support that takes account of the fact that students will often require that support away from the campus (and 24/7)
  • fully engage faculty and provide adequate faculty development and training by fostering a culture of innovation in teaching
  • tap outside vendors strategically: determine the strategic goals for digital learning first, then decide where outside vendors can add value to in-house capacity
  • strengthen analytics and monitoring: the technology provides better ways to track student progress and difficulties

My comments on the report

This report should be essential reading for anyone concerned with teaching and learning in post-secondary education, but it will be particularly important for program directors. 

It emphasises that blended learning is not so much about delivery as about achieving better learning outcomes and increased access through the re-design of teaching, incorporating the best of face-to-face and online teaching. However, this requires a major cultural change in the way faculty and instructors approach teaching, as indicated by the following:

  • holistic program planning involving all instructors, instructional designers and probably students as well
  • careful advance planning, following best practices, including project management and learning design
  • focusing as much on the development of skills as delivering content
  • identifying the unique ‘affordances’ of face-to-face teaching and online learning: there is no general formula for this but it will require discussion and input from both content experts and learning designers on a course by course basis
  • systematic evaluation and monitoring of hybrid learning course designs, so best (and poor) practices can be identified

I have a few reservations about the report:

  • the case study institutions were carefully selected: they are institutions with a long history of, and/or considerable experience in, online learning. I would like to see more cases built on more traditional universities or colleges that have successfully moved into online and especially blended learning
  • the report did not really deal with the unique context of mixed-modality courses. Many of the results were swamped by the much more established fully online courses. However, hybrid learning is still new, so this presents a challenge in comparing results.

However, these are minor quibbles. Please print out the report and leave it on the desk of your Dean, the Provost, the AVP Teaching and Learning and your program director – after you’ve read it. You could also give them:

Bates, A. and Sangra, A. (2011) Managing Technology in Higher Education San Francisco: Jossey-Bass/John Wiley

But that may be too much reading for the poor souls, who now have a major crisis to deal with.

A better ranking system for university teaching?

Who is top dog among UK universities?
Image: © Australian Dog Lover, 2017 http://www.australiandoglover.com/2017/04/dog-olympics-2017-newcastle-april-23.html

Redden, E. (2017) Britain Tries to Evaluate Teaching Quality Inside Higher Ed, June 22

This excellent article describes in detail a new three-tiered rating system for teaching quality at universities introduced by the U.K. government, and offers a thoughtful discussion of it. As I have a son and daughter-in-law teaching in a U.K. university, and grandchildren who are students or potential students, I have more than an academic interest in this topic.

How are the rankings done?

Under the government’s Teaching Excellence Framework (TEF), universities in England and Wales will get one of three ‘awards’: gold, silver and bronze (apparently there are no other categories, such as tin, brass, iron or dross for those whose teaching really sucks). A total of 295 institutions opted to participate in the ratings.

Universities are compared on six quantitative metrics, covering three areas:

  • retention rates
  • student satisfaction with teaching, assessment and academic support (from the National Student Survey)
  • rates of employment/post-graduate education six months after graduation.

However, awards are relative rather than absolute since they are matched against ‘benchmarks calculated to account for the demographic profile of their students and the mix of programs offered.’ 

This process generates a “hypothesis” of gold, silver or bronze, which a panel of assessors then tests against additional evidence submitted by the university (higher education institutions can make a submission of up to 15 pages to TEF assessors). Ultimately the decision of gold, silver or bronze is a human judgment, not the pure product of a mathematical formula.
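Schematically (and this is only my sketch of the logic, not the actual TEF flagging rules or thresholds), the initial hypothesis might be generated like this:

```python
def initial_tef_hypothesis(metrics: dict, benchmarks: dict,
                           threshold: float = 2.0) -> str:
    """Crude illustration: count metrics markedly above or below their
    demographically adjusted benchmarks and map that to an initial award.
    The real TEF rules and thresholds differ."""
    above = sum(metrics[m] - benchmarks[m] > threshold for m in metrics)
    below = sum(benchmarks[m] - metrics[m] > threshold for m in metrics)
    if above >= 2 and below == 0:
        return "gold"
    if below >= 2:
        return "bronze"
    return "silver"

# Invented figures for one institution (percentages vs benchmark):
metrics = {"retention": 93.0, "satisfaction": 88.0, "employment": 95.0}
benchmarks = {"retention": 90.0, "satisfaction": 87.0, "employment": 91.0}
print(initial_tef_hypothesis(metrics, benchmarks))  # "gold"
```

The panel of assessors then tests this hypothesis against the institution’s written submission before making the final, human judgement.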

What are the results?

Not what you might think. Although Oxford and Cambridge were awarded gold, so were some less prestigious universities, such as Loughborough University, while some more prestigious universities received a bronze. So at least it provides an alternative ranking system to those that focus mainly on research and peer reputation.

What is the purpose of the rankings?

This is less clear. Ostensibly (i.e., according to the government) it is initially aimed at giving potential students a better way of knowing where universities stand with regard to teaching. However, knowing the Conservative government in the UK, it is much more likely to be used to link tuition fees to institutional performance, as part of the government’s free-market approach to higher education. (The U.K. government allowed universities to set their own fees, on the assumption that the less prestigious universities would offer lower tuition fees, but guess what – they almost all opted for the highest level possible, and still were able to fill seats.)

What are the pros and cons of this ranking?

For a more detailed discussion, see the article itself but here is my take on it.

Pros

First this is a more thoughtful approach to ranking than the other systems. It focuses on teaching (which will be many potential students’ initial interest in a university) and provides a useful counter-balance to the emphasis on research in other rankings.

Second it has a more sophisticated approach than just counting up scores on different criteria. It has an element of human judgement and an opportunity for universities to make their case about why they should be ranked highly. In other words it tries to tie institutional goals to teaching performance and tries to take into account the very large differences between universities in the U.K. in terms of student socio-economic background and curricula.

Third, it does provide a simple, understandable ‘award’ system of categorizing universities on their quality of teaching that students and their parents can at least understand.

Fourth, and most important of all, it sends a clear message to institutions that teaching matters. This may seem obvious, but for many universities – and especially faculty – the only thing that really matters is research. Whether this form of ranking will be sufficient to get institutions to pay more than lip service to teaching, though, remains to be seen.

Cons

However, there are a number of cons. First, the national student union is against it, partly because it is heavily weighted by student satisfaction ratings based on the National Student Survey, which thousands of students have been boycotting (I’m not sure why). One would have thought that students in particular would value some accountability regarding the quality of teaching. But then the NUS has bigger issues with the government, such as the appallingly high tuition fees (C$16,000 a year; the opposition Labour party has promised free tuition).

More importantly, the general arguments about university rankings still apply to this one. They measure institutional performance, not individual department or instructor performance, which can vary enormously within the same institution. If you want to study physics, it doesn’t help if a university has an overall gold ranking but its physics department is crap, or if you get the one instructor who shouldn’t be allowed in the building.

Also, the quantitative measures are surrogates for actual teaching performance. No one has observed the teaching in order to develop the rankings, except the students; and student ratings, while an important measure, can themselves be highly misleading, influenced as they are by instructor personality and by how hard the instructor makes students work to get a good grade.

The real problem here is two-fold. First, there is the difficulty of assessing quality teaching in the first place: one man’s meat is another man’s poison. There is no general agreement, even within an academic discipline, as to what counts as quality teaching (for instance, understanding, memory of facts, or skills of analysis – maybe all three are important, but can the way one teaches to develop these diverse attributes be assessed separately?).

The second problem is the lack of quality data on teaching performance – it just isn’t tracked directly. Since a student may take courses from up to 40 different instructors, across several different disciplines/departments, in a bachelor’s program, it is no mean task to assess the collective effectiveness of their teaching. So we are left with surrogates of quality, such as completion rates.

So is it a waste of time – or worse?

No, I don’t think so. People are going to be influenced by rankings, whatever we do. This particular ranking system may be flawed, but it is a lot better than the other rankings, which are heavily influenced by tradition and elitism. It could be used in ways that the data do not justify, such as justifying tuition fee increases or decreased government funding to institutions. It is, though, a first systematic attempt at a national level to assess quality in teaching, and with patience and care it could be considerably improved. Most of all, it is an attempt to ensure accountability for the quality of teaching that takes account of the diversity of students and the different mandates of institutions. It may make both university administrations and individual faculty pay more attention to the importance of teaching well, and that is something we should all support.

So I give it a silver – a good try but there is definitely room for improvement. 

Thanks to Clayton Wright for drawing my attention to this.

Next up

I’m going to be travelling for the next three weeks, so my opportunity to blog will be limited – but that has been the case for the last six months. My apologies – I promise to do better. However, a four-hour layover at Pearson Airport does give me some time for blogging!

A brighter future for Athabasca University?

Mid-career retraining is seen as one possible focus for Athabasca University’s future

Coates, K. (2017) Independent Third-Party Review of Athabasca University Saskatoon, SK

This report, 45 pages in length plus extensive appendices, was jointly commissioned by the Government of Alberta and the Governors of Athabasca University.

Why the report?

Because Athabasca University, established in 1971 as a fully distance, open university, has been in serious trouble over the last 10 years. In 2015, its Acting President issued a report saying that ‘Athabasca University (AU) will be unable to pay its debt in two years if immediate action is not taken.’ It needed an additional $25 million just to solve its IT problems. Two years earlier, AU’s senior administrators were savagely grilled by provincial legislators about the financial management of the university, to such an extent that it seemed the Government of Alberta might well pull the plug on the university.

However, a recent provincial election brought a radical change of government, leading to a new Board and a new President with a five-year term. Although these changes are essential for establishing a secure future for the university, in themselves they are not sufficient. The financial situation of the university is temporarily more secure, but the underlying problem of expenses not being matched by revenue remains. It desperately needs more money from a government that has been short of revenue since the oil industry tanked. Its enrolments have also started to drop, due to competition from campus-based universities now offering fully online programs. Lastly, it still has the same structural problems: an outdated course design and development model, and poor student support services, especially on the academic side.

So although the newish government was willing to suspend judgement, it really needed an independent review before shovelling any new money AU’s way – hence this report.

What does the report say?

I will try to summarise briefly the main findings and recommendations, but as always, it is worth reading the full report, which is relatively concise and easy to read:

  • there is substantial student demand in Alberta, across Canada and internationally for AU’s programs, courses and services;
  • the current business model is not financially sustainable and will not support the institution in the coming decades – but ‘it has the potential if significant changes are made to its structure, approach and program mix, to be a viable, sustainable and highly relevant part of the Alberta post-secondary system’;
  • more money is needed to support its operations, especially if it is to remain headquartered in the (small and somewhat remote) Town of Athabasca; the present government funding arrangement is inadequate for the university’s mix of programs and students, especially regarding the support needed for disadvantaged students and those requiring more flexibility in delivery;
  • the emergence of dozens of credible online university alternatives has undermined AU’s competitive advantage – it no longer has a clear and obvious role within the Provincial post-secondary system;
  • AU should re-brand itself as the leading Canadian centre for online learning and 21st century educational technology, but although it has the educational technology professionals needed to provide leadership, it lacks the ICT model and facilities to rise to this opportunity;
  • Open access: AU should expand its activities for population groups that are under-represented in the Alberta and Canadian post-secondary systems: women in STEM subjects, new Canadians, Indigenous Peoples and students with disabilities;
  • diversification of the student body is necessary to achieve economies of scale; in other words it should expand its reach across Canada and internationally and not limit itself just to Alberta;
  • AU should expand its efforts to educate lifelong learners and should expand its career-focused and advanced educational opportunities – particularly mid-career training and training for new work;
  • although there is overwhelming faculty and staff support for AU’s mandate and general approach, there are considerable institutional and financial barriers to effecting a substantial reorientation in AU operations; however, such a re-orientation is critical for its survival.

My comments

Overall, this is an excellent report. Wisely, it does not dwell on the historical reasons why Athabasca University got itself into its current mess but instead focuses on what its future role should be, what it can uniquely contribute to the province, and what is needed to right the ship, including more money.

However, the main challenges, in my view, remain more internal than external. The Board of Governors, senior administration, faculty, staff and students still need to develop together a clear and shared vision for the future of the institution, one that presents a strong enough value proposition to the government to justify the increased operational and investment funding that is needed. Although the external reviewer does a good job of suggesting what some of the elements of such a vision might be, it has to come from the university community itself. This is long overdue and cannot be delayed much longer, otherwise the government’s patience will understandably run out. Money itself is not the issue: the value proposition that will persuade the government to prioritise funding for AU still has to be made by the university itself. In other words it’s a trust issue – if we give you more money, what will you deliver?

The second major challenge, while strongly linked to vision and funding, is the institutional culture. Major changes in course design, educational technology, student support and administration, marketing and PR are urgently needed to bring AU into advanced 21st century practice in online and distance learning. I fear that while there are visionary faculty and staff at AU who understand this, there is still too much resistance from traditionalists and from those who see change as undermining academic excellence or threatening their comfort zone. Without these necessary structural and cultural changes, though, AU will not be able to implement its vision, no matter how persuasive it is. So there is also a competency issue – if we give you more money, can you deliver on your promises?

I think these are still open questions, but at least the external review offers a vote of confidence in the university. Now it is up to the university community to turn this opportunity into something more concrete. But it needs to move quickly: the window of opportunity is closing fast.

Corruption in higher education: a wake-up call

Staff at Pavol Jozef Safarik University, Kosice, Slovakia have been accused of taking bribes to admit students to the Medical School

Daniel, J. (2016) Combatting Corruption and Enhancing Integrity: A Contemporary Challenge for the Quality and Integrity of Higher Education: Advisory Statement for Effective International Practice: Washington DC/Paris: CHEA/UNESCO

Daniel, J. (2016) Lutter contre la corruption et renforcer l’intégrité : un défi contemporain pour la qualité et la crédibilité de l’enseignement supérieur: Déclaration consultative pour des pratiques internationales efficaces Washington DC/Paris: CHEA/UNESCO

Those of us working in online learning are often berated by academic colleagues about the possible lack of integrity in online learning due to issues such as plagiarism, diploma mills, or ‘easy’ qualifications lacking rigorous academic process. Such cases do occur, but having read this document, it seems that the more traditional areas of higher education are prone to far more egregious forms of corruption.

Where do we find corruption?

At the end of this report, there is a list of references chronicling corruption in higher education in Australia, China, the Czech Republic, Egypt, France, Germany, India, Kenya, Nigeria, Russia, Slovakia, South Africa, and the USA. And those are just the cases that have recently come to light.

The report puts it bluntly:

This Advisory Statement is a wake-up call to higher education worldwide – particularly to quality assurance bodies. HEIs [higher education institutions], governments, employers and societies generally, in both developed and developing countries, are far too complacent about the growth of corrupt practices, either assuming that these vices occur somewhere else or turning a deaf ear to rumours of malpractice in their own organizations.

What kinds of corruption?

You name it, it’s in this report. In fact, the report describes 29 different kinds of corrupt practices. Here are just a few examples:

  • giving institutions licenses, granting degree-awarding powers, or accrediting programmes in return for bribes or favours
  • altering student marks in return for sexual or other favours
  • administrative pressure on academics to alter marks for institutional convenience
  • publishing false recruitment advertising
  • impersonation of candidates and ghost-writing of assignments
  • political pressure on higher education institutions to award degrees to public figures
  • publication by supervisors of research by graduate students without acknowledgement
  • higher education institutions publishing misleading news releases or suppressing inconvenient news

Who is sounding the alarm?

Although the writer of the report is Sir John Daniel (a fellow Research Associate at Contact North, and former Vice-Chancellor of the Open University, Assistant Director-General for Education at UNESCO, and President of the Commonwealth of Learning), the report draws on meetings of expert groups from the following organizations:

  • UNESCO’s International Institute for Educational Planning (IIEP)
  • the International Quality Group of the US Council for Higher Education Accreditation (CHEA/CIQG).

What’s causing this?

Corruption is as much about lack of ethical behaviour and rampant self-interest as it is about policies and practices. The report, though, points to two key factors that are contributing to corruption:

  • the huge appetite for higher education among the young populations of the developing world puts great pressures on admissions processes;
  • the steadily developing sophistication and borderless nature of information and communications technology (ICT) has expanded the opportunities for fraudsters in all walks of life.

What are the recommended solutions?

There are of course no easy solutions here. The report points out that there are both ‘upstream’ possibilities for corruption at the level of government and accrediting agencies, and downstream, from individuals desperate to get into and succeed within an increasingly competitive higher education system. In the middle are the institutions themselves.

The report then separates its recommendations for combatting corruption into several target areas:

  1. the regulation of higher education systems
  2. the teaching role of higher education institutions
  3. student admissions and recruitment
  4. student assessment
  5. credentials and qualifications
  6. research theses and publications
  7. increased public awareness

It is interesting that while the report emphasizes the importance of internal quality assurance processes within HEIs, it also notes that the more ‘mature’ an HE system becomes, the more external quality assurance agencies, such as accreditation boards and government ministries, tend to pass quality assurance responsibilities back to the institutions. The report notes that students themselves have a very important role to play in demanding transparency and whistle-blowing.

A call to action

The report ends with the following:

  • governments, quality assurance agencies and HEIs worldwide must become more aware of the threat that corruption poses to the credibility, effectiveness and quality of higher education, at a time when its importance as a driver of global development has never been higher
  • external quality assurance agencies should do more to review the risks of corruption in their work, and HEIs must ensure that their IQA [internal quality assurance] frameworks are also fit for the purpose of combatting corruption
  • training and supporting staff in identifying and exposing corrupt practices should be stepped up
  • creating networks of organizations that are fighting corruption, and greater North-South collaboration in capacity building for this purpose, are highly desirable

So next time some sanctimonious academic sneers at the academic integrity of online learning, just point them in the direction of this report.