November 28, 2015

What do we mean by quality when teaching in a digital age?


Before I start on my nine steps to quality learning for my open textbook, Teaching in a Digital Age, I needed to 'clear the decks' about what we mean by quality. I thought this bit might be useful to share, as quality is a very slippery concept at the best of times.

The aim of this chapter is to provide some practical guidelines for teachers and instructors to ensure quality teaching in a digital age. Before I can do this, however, it is necessary to clarify what is meant by ‘quality’ in education, because I am using ‘quality’ here in a very specific way.


Probably there is no other topic in education which generates so much discussion and controversy as ‘quality’. Many books have been written on the topic, but I will cut to the chase and give my definition of quality up-front. For the purposes of this book, quality is defined as:

teaching methods that successfully help learners develop the knowledge and skills they will require in a digital age.

This of course is the short answer to the question. A longer answer means looking, at least briefly, at:

  • institutional and degree accreditation
  • internal (academic) quality assurance processes
  • differences in quality assurance between traditional classroom teaching and online and distance education
  • the relationship between quality assurance processes and learning outcomes
  • ‘quality assurance fit for purpose’: meeting the goals of education in a digital age.

This will then provide the foundations for my recommendations for quality teaching that will follow in this chapter.

Institutional and degree accreditation

Most governments act to protect consumers in the education market by ensuring that institutions are properly accredited and that the qualifications they award are valid and recognised as being of 'quality'. However, the manner in which institutions and degrees are accredited varies a great deal. The main divide is between the USA and virtually every other country.

The U.S. Department of Education’s Network for Education Information states in its description of accreditation and quality assurance in the USA:

Accreditation is the process used in U.S. education to ensure that schools, postsecondary institutions, and other education providers meet, and maintain, minimum standards of quality and integrity regarding academics, administration, and related services. It is a voluntary process based on the principle of academic self-governance. Schools, postsecondary institutions and programs (faculties) within institutions participate in accreditation. The entities which conduct accreditation are associations comprised of institutions and academic specialists in specific subjects, who establish and enforce standards of membership and procedures for conducting the accreditation process.

Both the federal and state governments recognize accreditation as the mechanism by which institutional and programmatic legitimacy are ensured. In international terms, accreditation by a recognized accrediting authority is accepted as the U.S. equivalent of other countries’ ministerial recognition of institutions belonging to national education systems.

In other words, in the USA accreditation and quality assurance are effectively self-regulated by the educational institutions and faculty through their control of accreditation agencies, although the government does have some 'weapons of enforcement', mainly through the withdrawal of financial aid from students at any institution that the U.S. Department of Education deems to be failing to meet standards.

In many other countries, government has the ultimate authority to accredit institutions and approve degrees, although in countries such as Canada and the United Kingdom, this is often exercised by arm’s length agencies appointed by government, but consisting mainly of representatives from the various institutions within the system. These bodies have a variety of names, but Degree Quality Assurance Board is a typical title.

However, more important than the formal lines of responsibility for quality is how the accrediting agencies actually operate. Usually, once a degree program is approved, there is little follow-up or monitoring afterwards unless formal complaints are subsequently made about the quality of the program, although many institutions now voluntarily review programs every five years or so. Also, once an institution has been accredited, the accreditation agency may delegate back to the institution the approval of its own degree programs, provided that it has an internal process in place for assuring quality. Where government is formally responsible, however, new degrees may still come to an accrediting agency, to ensure that there is no duplication within the system, that there is a defined market for the program, or where approval to deviate from government guidelines on fees is being requested. Nevertheless, mainly to ensure academic freedom from direct government interference, universities in particular have a large degree of autonomy in most economically advanced countries for determining 'quality' in programming.

However, in recent years, some regulatory agencies, such as the United Kingdom's Quality Assurance Agency for Higher Education, have adopted formal quality assurance processes based on practices that originated in industry. The U.K. QAA's Quality Code for Higher Education, which aims to guide universities on what the QAA is looking for, runs to several hundred pages. Chapter B3 on Learning and Teaching alone is 25 pages long and has seven indicators of quality. Indicator 4 is typical:

Higher education providers assure themselves that everyone involved in teaching or supporting student learning is appropriately qualified, supported and developed.

As a result of pressure from external agencies, many institutions have therefore put in place formal quality assurance processes over and above the normal academic approval processes (see Clarke-Okah et al., 2014, for a typical, low-cost example).

Internal quality assurance

It can be seen then that the internal processes for ensuring quality programs within an institution are particularly important. Although again the process can vary considerably between institutions, at least in universities the process is fairly standard. A proposal for a new degree will usually originate from a group of faculty/instructors within a department. The proposal will be discussed and amended at departmental and/or Faculty meetings, then once approved will go to the university senate for final approval. The administration in the form of the Provost’s Office will usually be involved, particularly where resources, such as new appointments, are required.

Although this is probably an over-generalisation, the proposal will typically contain information about who will teach the course and their qualifications to teach it, the content to be covered within the program (often as a list of courses with short descriptions), a set of required readings, and usually something about how students will be assessed. Increasingly, such proposals may also include broad learning outcomes for the program.

If there is a proposal for courses within a program or the whole program to be delivered fully online, it is likely that the proposal will come under intense internal scrutiny. What is unlikely to be included in a proposal though is what methods of teaching will be used. This is usually considered the responsibility of individual faculty members. It is this aspect of quality with which this chapter is concerned.

Lastly, some institutions require every program to be reviewed after five or more years of operation, or at the discretion of the senior administration. Again whether and how this is done varies considerably. One common approach is for an internal review process, with an internal evaluation report by a committee set up by the department offering the program, followed by a review of the internal committee’s report by external assessors. This review may or (more frequently) may not lead to significant changes in a program, but this will depend on the instructors responsible for teaching the program agreeing to implement any recommended changes. Less frequently, where enrolment for a program has declined to unacceptable levels or where external complaints about a program have been received, the Vice President Academic may call for an external review of the program, in which case anything is possible, up to and including closure of the program.

Jung and Latchem (2012), in a review of quality assurance processes in a large number of online and distance education institutions around the world, make the following important points about quality assurance within institutions:

  • focus on outcomes as the leading measure of quality
  • take a systemic approach to quality assurance
  • see QA as a process of continuous improvement
  • move the institution from external controls to an internal culture of quality
  • poor quality has very high costs, so investment in quality is worthwhile.

In particular, Butcher and Wilson-Strydom (2013) warn:

you should not assume that creating quality assurance structures… automatically improves quality…. Institutional quality assurance structures and processes are important, but beware of making them an exercise in compliance for accountability, rather than a process of learning and self-improvement that really improves quality.

There are many guidelines for quality traditional classroom teaching. Perhaps the best known are those of Chickering and Gamson (1987), based on an analysis of 50 years of research into best practices in teaching. They argue that good practice in undergraduate education:

  1. Encourages contact between students and faculty.
  2. Develops reciprocity and cooperation among students.
  3. Encourages active learning.
  4. Gives prompt feedback.
  5. Emphasizes time on task.
  6. Communicates high expectations.
  7. Respects diverse talents and ways of learning.

Online courses and programs

Because online learning was new, and hence open to concerns about its quality, many guidelines, best practices and quality assurance criteria have also been created and applied to online programming. All these guidelines and procedures have been derived from the experience of previously successful online programs, best practices in teaching and learning, and research and evaluation of online teaching and learning.

Some degree quality assurance boards (such as the QAA in the UK and PEQAB in Ontario) have put in place specific ‘benchmarks’ for online courses. A comprehensive list of online quality assurance standards, organizations and research on online learning can be found in Appendix 3. Graham et al. (2001) applied Chickering and Gamson’s seven principles for face-to-face teaching to the evaluation of four online courses from a mid-western university in the USA, and adapted these principles for online learning.

Thus ensuring quality in online learning is not rocket science. There is plenty of evidence of what works and what doesn't, which will be examined in more detail in this chapter. There is no need to build a bureaucracy around this, but there does need to be some mechanism, some way of calling institutions to account when they fail to meet these standards. However, we should also do the same for campus-based teaching. As more and more already accredited (and 'high quality') campus-based institutions start moving into hybrid learning, establishing quality in the online learning elements of programs will become even more important.

Thus there are plenty of evidence-based guidelines for ensuring quality in teaching, both face-to-face and online. The main challenge then is to ensure that teachers and instructors are aware of these best practices and that institutions have processes in place to ensure that guidelines for quality teaching are implemented and followed.

Quality assurance, innovation and learning outcomes

It may have been noted that most QA processes are front-loaded, i.e. they look at inputs – such as the academic qualifications of faculty, or the processes to be adopted for effective teaching, such as clear learning objectives – rather than outputs, such as what students have actually learned. They also tend to be backward-looking, that is, they focus on past best practices.

This is particularly important for evaluating new teaching approaches. Butcher and Hoosen (2014) state:

The quality assurance of post-traditional higher education is not straightforward, because openness and flexibility are primary characteristics of these new approaches, whereas traditional approaches to quality assurance were designed for teaching and learning within more tightly structured frameworks.

However, Butcher and Hoosen (2014) go on to say that:

fundamental judgements about quality should not depend on whether education is provided in a traditional or post-traditional manner… the growth of openness is unlikely to demand major changes to quality assurance practices in institutions. The principles of good quality higher education have not changed…. Quality distance education is a sub-set of quality education… Distance education should be subject to the same quality assurance mechanisms as education generally.

Such arguments, though, offer a particular challenge for teaching in a digital age, where it is argued that learning outcomes need to include the development of skills such as independent learning, facility in using social media for communication, and knowledge management: skills that have not been explicitly identified in the past. Quality assurance processes are not usually tied to specific types of learning outcomes, but are more closely linked to general performance measures such as course completion rates, time to degree completion and grades based on past learning goals.

Furthermore, we have already seen in Chapters 9 and 10 that new media and new methods of teaching are emerging that have not been around long enough to be subject to analysis of best practices. Too rigid a view of quality assessment based on past practices could have serious negative implications for innovation in teaching and for meeting newly emerging learning needs. 'Best practice' may occasionally need to be challenged, so that new approaches can be experimented with and evaluated.

Quality assurance: fit for purpose in a digital age

Maxim Jean-Louis, the President of Contact North, at a presentation in 2010 to the Higher Education Quality Council of Ontario, made a very useful distinction about different ways of looking at quality in education:

  • Quality as 'Excellence' – a definition that sets abstract goals for institutions and academic communities always to strive to be the best, often taken as having elitist undertones. In post-secondary education this could mean winning Nobel prizes, attracting research funds or recruiting the 'best' faculty as measured by research output and teaching evaluations. The drawback here is that this tends to exclude the work of the 'further education' sectors, and is not applied equally between disciplines (citation counts do not exist for historians and many other subjects).
  • Quality as 'Meeting a pre-determined standard' – a definition that requires only a given standard to be met, e.g. a minimum grade, basic competency, the ability to read, write, use a computer, etc. [It might also include competencies and skills, degree completion rates, time to degree completion, etc.] The drawback of this is that setting and measuring this 'standard' is difficult at best and idealistic at worst.
  • Quality as 'fitness for purpose' – in this construction of quality, we have to decide the extent to which the service or product meets the goals set: does this course or program do what it says it is going to do? Such a construction of quality allows institutions/sectors to define goals themselves according to their mandate, and concentrates on meeting the needs of their customers (whether these be upgrading learners, graduate researchers, industry, etc.).

Quality assurance processes must address the increasing diversity of our educational systems. Distance education organizations are not the same as elite traditional universities and shouldn’t try to be. This would mean looking for different measures of quality in the Open University, for instance, than in Cambridge University. Neither one is necessarily better (depending on what they are trying to achieve), but the learning experience ought to be different, even though the intended learning outcomes may be similar; this will mean different design criteria but not necessarily different criteria for assessing the quality of the learning.

In the meantime, much more attention needs to be directed at what campus-based institutions are doing when they move to hybrid or online learning. Are they following best practices, or, even better, developing innovative, better teaching methods? The design of xMOOCs, and the high drop-out rates at many two-year colleges in the USA that are new to online learning, suggest they are not.

This means that different types of institution will and should evaluate quality differently. If the goal or purpose is to develop the knowledge and skills that learners will need in a digital age, then this is the ‘standard’ by which quality should be assessed, while at the same time taking into account what we already know about general best practices in teaching. The recommendations for quality teaching in a digital age that follow in this chapter are based on these principles.

Over to you

There is so much I wanted to write here about the stupidity of the current system of institutional accreditation and internal quality assurance processes, especially but not exclusively in the United Kingdom, but this section is meant as an introduction to practical guidelines for teaching and learning. So I’ve tried to be uncharacteristically restrained in writing this section. But feedback is even more welcome than usual.

1. (a) First, are there any incorrect facts in this section? This is a large and complex topic and it is easy to get things wrong.

(b) Have I left out anything really important about assessing quality in teaching and learning?

2. One problem with this topic is that it tends to oscillate between two polarised positions: those who believe in absolute truth and those who are relativists. Absolute truthers believe that there is a God-given set of 'quality' standards, set primarily by elite institutions, that everyone else should strive to meet. Relativists (like myself) believe that quality is in the eye of the beholder; it all depends on what your goals are. Hence my definition of quality is tied to the goal – rather limited in one way, and extremely ambitious in another – of developing teaching methods that will help learners develop the knowledge and skills they will need in a digital age. So: any views on my definition of quality? Is it fit for purpose?

3. What do you think of the current system of (a) institutional accreditation and (b) internal quality assurance processes?

My own view is that institutional accreditation is definitely needed to protect against really incompetent or downright dishonest organisations, but, depending on the jurisdiction, it is very much an insider's process and not very transparent, and while current accreditation processes may set minimum standards, they certainly don't do much to improve quality in the system.

Similarly, internal quality assurance processes are far too cosy and protect the status quo. The internal program approval processes are based again on peer review of a very limited kind, often with an 'I'll scratch your back if you'll scratch mine' approach to program approval. I've been on a number of program reviews as an external reviewer, but have rarely seen any significant changes result, despite sometimes scathing reviews from the externals.

And as for formal QA processes, they are the kiss of death for quality, tangling faculty and administrators in incredibly bureaucratic processes without dealing with the real issues around quality teaching and learning.

Of course, all these practices are in the name of protecting academic freedom, which is important – but surely better processes can be devised for improving quality without threatening academic freedom.

4. So lastly, is it wise for me to restrain myself from adding these types of comments in the book – or will I muddy the waters of what is to come if I do?

References and further reading

Butcher, N. and Wilson-Strydom, M. (2013) A Guide to Quality in Online Learning Dallas TX: Academic Partnerships

Butcher, N. and Hoosen, S. (2014) A Guide to Quality in Post-traditional Online Higher Education Dallas TX: Academic Partnerships

Chickering, A., and Gamson, Z. (1987) ‘Seven Principles for Good Practice in Undergraduate Education’ AAHE Bulletin, March 1987.

Clarke-Okah, W. et al. (2014) The Commonwealth of Learning Review and Improvement Model for Higher Education Institutions Vancouver BC: Commonwealth of Learning

Graham, C. et al. (2001) Seven Principles of Effective Teaching: A Practical Lens for Evaluating Online Courses The Technology Source, March/April

Jung, I. and Latchem, C. (2012) Quality Assurance and Accreditation in Distance Education and e-Learning New York/London: Routledge

Why the fuss about MOOCs? Political, social and economic drivers


The end of MOOCs

This is the last part of my chapter on MOOCs for my online open textbook, Teaching in a Digital Age. In a series of prior posts, I have looked at the strengths and weaknesses of MOOCs. Here I summarise this section and look at why MOOCs have gained so much attention.

Brief summary of strengths and weaknesses of MOOCs

The main points of my analysis of the strengths and weaknesses of MOOCs can be summarised as follows:

Strengths:
  • the main value proposition of MOOCs is that through the use of computer automation and/or peer-to-peer communication MOOCs can eliminate the very large variable costs in higher education associated with providing learner support and quality assessment
  • MOOCs, particularly xMOOCs, deliver high quality content from some of the world’s best universities for free to anyone with a computer and an Internet connection
  • MOOCs can be useful for opening access to high quality content, particularly in Third World countries, but to do so successfully will require a good deal of adaptation, and substantial investment in local support and partnerships
  • MOOCs are valuable for developing basic conceptual learning, and for creating large online communities of interest or practice
  • MOOCs are an extremely valuable form of lifelong learning and continuing education
  • MOOCs have forced conventional and especially elite institutions to reappraise their strategies towards online and open learning
  • institutions have been able to extend their brand and status by making public their expertise and excellence in certain academic areas

Weaknesses:
  • the high registration numbers for MOOCs are misleading; less than half of registrants actively participate, and of these, only a small proportion successfully complete the course; nevertheless, absolute numbers of successful participants are still higher than for conventional courses
  • MOOCs are expensive to develop, and although commercial organisations offering MOOC platforms have opportunities for sustainable business models, it is difficult to see how publicly funded higher education institutions can develop sustainable business models for MOOCs
  • MOOCs tend to attract those who already have a high level of education, rather than widening access
  • MOOCs so far have been limited in their ability to develop high-level academic learning, or the high-level intellectual skills needed in a knowledge-based society
  • assessment of the higher levels of learning remains a challenge for MOOCs, to the extent that most MOOC providers will not recognise their own MOOCs for credit
  • MOOC materials may be limited by copyright or time restrictions for re-use as open educational resources

Why the fuss about MOOCs?

It can be seen from the previous section that the pros and cons of MOOCs are finely balanced. Given the obvious questions about the value of MOOCs, and the fact that before MOOCs arrived there had been substantial but quiet progress for over ten years in the use of online learning for undergraduate and graduate programs, you might be wondering why MOOCs have commanded so much media interest. Why have a large number of government policy makers, economists and computer scientists become so ardently supportive of MOOCs? And why has there been such a strong, negative reaction, not only from many traditional university and college instructors, who are right to feel threatened by some of the claims being made for MOOCs, but also from many professionals in online learning (see for instance Bates, 2012; Daniel, 2012; Hill, 2012; Watters, 2013), who might be expected to be more supportive of MOOCs?

It needs to be recognised that the discourse around MOOCs is not usually based on a cool, rational, evidence-based analysis of the pros and cons of MOOCs, but is more likely to be driven by emotion, self-interest, fear, or ignorance of what education is actually about. Thus it is important to explore the political, social and economic factors that have driven MOOC mania.

Massive, free and Made in America!

This is what I will call the intrinsic reason for MOOC mania. It is not surprising that, since the first MOOC from Stanford professors Andrew Ng and Daphne Koller attracted 270,000 sign-ups from around the world, since the course was free, and since it came from professors at one of the most prestigious private universities in the USA, the American media were all over it. It was big news in its own right, however you look at it, especially as courses from Sebastian Thrun, another Stanford professor, and others from MIT and Harvard followed shortly, with equally staggering numbers of participants.

It’s the Ivy Leagues!

Until MOOCs came along, the Ivy League and other elite universities in the USA, such as Stanford, MIT, Harvard and UC Berkeley, as well as many of the most prestigious universities in Canada, such as the University of Toronto and McGill, and elsewhere, had largely ignored online learning in any form.

However, by 2011, online learning, in the form of for-credit undergraduate and graduate courses, was making big inroads at many other, very respectable universities, such as Carnegie Mellon, Penn State, and the University of Maryland in the USA, and also at many of the top-tier public universities in Canada and elsewhere, to the extent that almost one in three students in the USA were taking at least one online course. Furthermore, at least in Canada, the online courses were often getting good completion rates and matching on-campus courses for quality.

The Ivy League and other highly prestigious universities that had ignored online learning were beginning to look increasingly out of touch by 2011. By launching into MOOCs, these prestigious universities could jump to the head of the queue in terms of technology innovation, while at the same time protecting their selective and highly personal and high cost campus programs from direct contact with online learning. In other words, MOOCs gave these prestigious universities a safe sandbox in which to explore online learning, and the Ivy League universities gave credibility to MOOCs, and, indirectly, online learning as a whole.

It’s disruptive!

For years before 2011, various economists, philosophers and industry gurus had been predicting that education was the next big area for disruptive change due to the march of new technologies (see for instance Lyotard, 1979; Tapscott, undated; Christensen and Eyring, 2011).

Online learning in credit courses, though, was being quietly absorbed into the mainstream of university teaching, through blended learning, without any signs of major disruption. Here with MOOCs, by contrast, was a massive change, providing evidence at long last in the education sector to support the theories of disruptive innovation.

It’s Silicon Valley!

It is no coincidence that the first MOOCs were all developed by entrepreneurial computer scientists. Ng and Koller very quickly went on to create Coursera as a private commercial company, followed shortly by Thrun, who created Udacity. Anant Agarwal, a computer scientist at MIT, went on to head up edX.

The first MOOCs were very typical of Silicon Valley start-ups: a bright idea (massive, open online courses with cloud-based, relatively simple software to handle the numbers), thrown out into the market to see how it might work, supported by more technology and ideas (in this case, learning analytics, automated marking, peer assessment) to deal with any snags or problems. Building a sustainable business model would come later, when some of the dust had settled.

As a result, it is not surprising that almost all the early MOOCs completely ignored any pedagogical theory about best practices in teaching online, or any prior research on factors associated with success or failure in online learning. Nor is it surprising that a very low percentage of participants actually successfully complete MOOCs. There is a lot of catching up still to do, but so far Coursera, and to a lesser extent edX, have continued to ignore educators and prior research in online learning; they would rather do their own research, even if it means re-inventing the wheel. The commercial MOOC platform providers, though, are beginning to work out a sustainable business model.

It’s the economy, stupid!

Of all the reasons for MOOC mania, the famous slogan from Bill Clinton's 1992 election campaign resonates most with me. It should be remembered that by 2011 the consequences of the disastrous financial collapse of 2008 were working their way through the economy, and in particular were hitting the finances of state governments in the USA.

The recession meant that states were suddenly desperately short of tax revenues, and were unable to meet the financial demands of state higher education systems. For instance, California's community college system, the nation's largest, suffered about $809 million in state funding cuts between 2008 and 2012, resulting in a shortfall of 500,000 places in its campus-based colleges. Free MOOCs were seen as manna from heaven by the state governor, Jerry Brown.

One consequence of rapid cuts to government funding was a sharp spike in tuition fees, bringing the real cost of higher education sharply into focus. Tuition fees in the USA have increased by 7% per annum over the last 10 years, compared with an inflation rate of 4% per annum. Here at last was a possible way to rein in the high cost of higher education.
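To put these figures in perspective, here is a quick calculation (a minimal sketch in Python, using only the illustrative 7% and 4% rates quoted above):

```python
# Compound growth of tuition fees vs. inflation over a decade,
# using the illustrative rates quoted in the text (7% and 4% per annum).

tuition_growth = 1.07   # 7% per annum nominal increase in tuition fees
inflation = 1.04        # 4% per annum general inflation
years = 10

nominal_increase = tuition_growth ** years             # ~1.97: fees almost double
real_increase = (tuition_growth / inflation) ** years  # ~1.33: about a third higher in real terms

print(f"Nominal tuition increase over {years} years: {nominal_increase - 1:.0%}")
print(f"Real (inflation-adjusted) increase: {real_increase - 1:.0%}")
```

In other words, annual increases that look modest compound into fees that nearly double in nominal terms, and rise by roughly a third in real terms, over ten years.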

Now, though, the economy in the USA is picking up and revenues are flowing back into state coffers, so the pressure for more radical solutions to the cost of higher education is beginning to ease. It will be interesting to see if MOOC mania continues as the economy grows, although the search for more cost-effective approaches to higher education is not going to disappear.

Don’t panic!

These are all very powerful drivers of MOOC mania, which makes it all the more important to try to be clear and cool headed about the strengths and weaknesses of MOOCs. The real test is whether MOOCs can help develop the knowledge and skills that learners need in a knowledge-based society. The answer of course is yes and no.

As a low-cost supplement to formal education, MOOCs can be quite valuable, but not as a complete replacement. They can at present teach conceptual learning, comprehension and, in a narrow range of activities, the application of knowledge. They can be useful for building communities of practice, where already well-educated people, or people with a deep, shared passion for a topic, can learn from one another: another form of continuing education.

However, certainly to date, MOOCs have not been able to demonstrate that they can lead to transformative learning, deep intellectual understanding, evaluation of complex alternatives, and evidence-based decision-making, and without greater emphasis on expert-based learner support and more qualitative forms of assessment, they probably never will, at least without substantial increases in their costs.

At the end of the day, there is a choice between throwing more resources into MOOCs, hoping that some of their fundamental flaws can be overcome without too dramatic an increase in costs, and investing in other forms of online learning and educational technology that could lead to more cost-effective learning outcomes. I know where I would put my money, and it's not into MOOCs.

Over to you

This will be my last contribution to the discussion of MOOCs for my book, so let’s have it!

1. Do you agree with the strengths and weaknesses of MOOCs that I have laid out? What would you add or remove or change?

2. What do you think of the drivers of MOOC mania? Are these accurate? Are there other, more important drivers of MOOC mania?

3. Do you even agree that there is a mania about MOOCs, or is their rapid expansion all perfectly understandable?

References

Bates, T. (2012) What’s right and what’s wrong about Coursera-style MOOCs, Online learning and distance education resources, August 5

Christensen, C. and Eyring, H. (2011) The Innovative University: Changing the DNA of Higher Education New York NY: John Wiley & Sons

Daniel, J. (2012) Making sense of MOOCs: Musings in a maze of myth, paradox and possibility Journal of Interactive Media in Education, Vol. 3

Hill, P. (2012) Four Barriers that MOOCs must overcome to build a sustainable model, e-Literate, July 24

Lyotard, J-F. (1979) La Condition postmoderne: rapport sur le savoir Paris: Minuit

Tapscott, D. (undated) The transformation of education

Watters, A. (2013) MOOC Mania: Debunking the hype around massive, open online courses The Digital Shift, 18 April

A New Zealand analysis of MOOCs


Shrivastava, A. and Guiney, P. (2014) Technological Development and Tertiary Education Delivery Models: The Arrival of MOOCs Wellington NZ: Tertiary Education Commission/Te Amorangi Mātauranga Matua

Why this paper?

Another report for the record on MOOCs, this time from the New Zealand Tertiary Education Commission. The reasoning behind this report:

The paper focuses on MOOCs [rather than doing a general overview of emerging technologies] because of their potential to disrupt tertiary education and the significant opportunities, challenges and risks that they present. MOOCs are also the sole focus of this paper because of their scale and the involvement of the elite United States universities.

What’s in the paper?

The paper provides a fairly standard, balanced analysis of developments in MOOCs, first by describing the different MOOC delivery models, their business models and the drivers behind MOOCs, then by following up with a broad discussion of the possible implications of MOOCs for New Zealand, such as unbundling of services, possible economies of scale, globalization of tertiary (higher) education, adaptability to learners’ and employers’ needs, and the possible impact on New Zealand’s tertiary education workforce.

There is also a good summary of MOOCs being offered by New Zealand institutions.

At the end of the paper some interesting questions for further discussion are raised:

  • What will tertiary education delivery look like in 2030?

  • What kinds of opportunities and challenges do technological developments, including MOOCs, present to the current policy, regulatory and operational arrangements for tertiary teaching and learning in New Zealand?

  • How can New Zealand make the most of the opportunities and manage any associated risks and challenges?

  • Do MOOCs undermine the central value of higher education, or are they just a helpful ‘updating’ that reflects its new mass nature?

  • Where do MOOCs fit within the New Zealand education and qualifications systems?

  • Who values the knowledge and skills gained from a MOOC programme and why?

  • Can economies of scale be achieved through MOOCs without loss of quality?

  • Can MOOCs lead to better learning outcomes at the same or less cost than traditional classroom-based teaching? If so, how might the Government go about funding institutions that want to deliver MOOCs to a mix of domestic and international learners?

  • What kinds of MOOC accreditation models might make sense in the context of New Zealand’s quality-assurance system?

Answers on a postcard, please, to the NZ Tertiary Education Commission.


Am I alone in wondering what has happened to for-credit online education in government thinking about the future? It is as if 20 years of development of undergraduate and graduate online courses and programs never existed. Surely a critical question for institutions and government planners is:

  • what are the relative advantages and disadvantages of MOOCs over other forms of online learning? What can MOOCs learn from our prior experience with credit-based online learning?

There are several reasons for considering this, but one of the most important is the huge investment many institutions, and, indirectly, governments, have already made in credit-based online learning.

By and large, online learning in publicly funded universities, both in New Zealand and in Canada, has been very successful, in terms both of increasing access and of student learning. It is also important to be clear about the differences, and some of the similarities, between credit-based online learning and MOOCs.

Some of the implications laid out in this paper, such as possibilities of consortia and institutional collaboration, apply just as much to credit-based online learning as to MOOCs, and many of the negative criticisms of MOOCs, such as difficulties of assessment and lack of learner support, disappear when applied to credit-based online learning.

Please, policy-makers, realise that MOOCs are not your only option for innovation through online learning. There are more established and well tested solutions already available.

The strengths and weaknesses of MOOCs: Part 2: learning and assessment


The writing of my book, Teaching in a Digital Age, has been interrupted for nearly two weeks by my trip to England for the EDEN Research Workshop. As part of the first draft of the book, I have already published three posts on MOOCs.

In this post, I ask (and try to answer) what participants learn from MOOCs, and I also evaluate the assessment methods that MOOCs use.

What do students learn in MOOCs?

This is a much more difficult question to answer, because so little of the research to date (2014) has tried to address it. (One reason, as we shall see, is that assessment of learning in MOOCs remains a major challenge.) There are at least two kinds of study: quantitative studies that seek to quantify learning gains, and qualitative studies that describe the experience of learners within MOOCs, which indirectly provide some insight into what they have learned.

At the time of writing, the best-conducted study of learning in MOOCs is by Colvin et al. (2014), who investigated 'conceptual learning' in an MIT Introductory Physics MOOC. They compared learner performance not only between different sub-categories of learners within the MOOC, such as those with no physics or math background versus those, such as physics teachers, with considerable prior knowledge, but also with on-campus students taking the same curriculum in a traditional campus teaching format. In essence, the study found no significant differences in learning gains between or within the two types of teaching, but it should be noted that the on-campus students were students who had failed an earlier version of the course and were retaking it.

This research is a classic example of the 'no significant difference' phenomenon in comparative studies in educational technology: other variables, such as differences in the types of students, were as important as the mode of delivery. Also, this MOOC design represents a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It doesn't attempt to develop the skills needed in a digital age as identified in Chapter 1.

There have been far more studies of the experience of learners within MOOCs, particularly focusing on the discussions within MOOCs (see for instance Kop, 2011). In general (although there are exceptions), discussions are unmonitored, and it is left to participants to make connections and respond to other students' comments.

However, there are some strong criticisms of the effectiveness of the discussion element of MOOCs for developing the high-level conceptual analysis required for academic learning. To develop deep, conceptual learning, there is a need in most cases for intervention by a subject expert: to clarify misunderstandings or misconceptions, to provide accurate feedback, to ensure that the criteria for academic learning, such as use of evidence and clarity of argument, are being met, and to provide the necessary input and guidance to seek deeper understanding (see Harasim, 2012).

Furthermore, the more massive the course, the more likely participants are to feel 'overload, anxiety and a sense of loss' if there is not some instructor intervention or structure imposed (Knox, 2014). Firmin et al. (2014) have shown that when there is some form of instructor 'encouragement and support of student effort and engagement', results improve for all participants in MOOCs. Without a structured role for subject experts, participants are faced with a wide variety of quality in the comments and feedback from other participants. There is also a great deal of research on the conditions necessary for the successful conduct of collaborative and co-operative group learning (see, for instance, Dillenbourg, 1999; Lave and Wenger, 1991), and these findings certainly have not been generally applied to the management of MOOC discussions to date.

One counter-argument is that cMOOCs at least develop a new form of learning based on networking and collaboration that is essentially different from academic learning, and that MOOCs are thus more appropriate to the needs of learners in a digital age. Adult participants in particular, it is claimed by Downes and Siemens, have the ability to self-manage the development of high-level conceptual learning. MOOCs are 'demand' driven, meeting the interests of individual students who seek out others with similar interests and the necessary expertise to support them in their learning, and for many this interest may well not include the need for deep, conceptual learning, but rather the appropriate application of prior knowledge in new or specific contexts. MOOCs do appear to work best for those who already have a high level of education, and who therefore bring with them many of the conceptual skills developed in formal education when they join a MOOC; such participants can in turn help those who come without these skills.

Over time, as more experience is gained, MOOCs are likely to incorporate and adapt some of the findings from research on smaller group work for much larger numbers. For instance, some MOOCs are using 'volunteer' or community tutors (Dillenbourg, 2014). The US State Department has organized MOOC camps through US missions and consulates abroad to mentor MOOC participants. The camps include Fulbright scholars and embassy staff who lead discussions on content and topics for MOOC participants in countries abroad (Haynie, 2014). Some MOOC providers, such as the University of British Columbia, pay a small cohort of academic assistants to monitor and contribute to the MOOC discussion forums (Engle, 2014). Engle reported that the use of academic assistants, as well as limited but effective interventions from the instructors themselves, made the UBC MOOCs more interactive and engaging. However, paying people to monitor and support MOOCs will of course increase the cost to providers. Consequently, MOOCs are likely to develop new automated ways to manage discussion effectively in very large groups. The University of Edinburgh is experimenting with automated 'teacherbots' that crawl through online discussion forums and direct predetermined comments to students identified as needing help or encouragement (Bayne, 2014).
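By way of illustration only, a rule-based intervention of the general kind described above might look like the following sketch (entirely hypothetical: the names, keywords and thresholds are invented, and this is not a description of the Edinburgh teacherbot's actual design):

```python
# Hypothetical sketch of a rule-based 'teacherbot': scan forum posts and
# queue a predetermined, templated reply for students who signal confusion
# and have received no peer response. All details are invented.

from dataclasses import dataclass

HELP_KEYWORDS = {"confused", "stuck", "don't understand", "lost", "help"}

@dataclass
class Post:
    author: str
    text: str
    replies: int  # number of replies the post has received so far

def needs_intervention(post: Post) -> bool:
    """Flag posts that signal confusion and have attracted no peer response."""
    signals_confusion = any(k in post.text.lower() for k in HELP_KEYWORDS)
    return signals_confusion and post.replies == 0

def draft_reply(post: Post) -> str:
    """Return a predetermined comment directed at the student."""
    return (f"Hi {post.author}, it sounds as if this topic is proving tricky. "
            "Have a look at this week's worked examples, and do post a "
            "specific question if you are still stuck.")

forum = [
    Post("amira", "I'm completely stuck on assignment 2", replies=0),
    Post("jon", "Great explanation in lecture 3!", replies=4),
]

for post in forum:
    if needs_intervention(post):
        print(draft_reply(post))
```

A production system would presumably use learning analytics rather than simple keyword matching to identify students needing help, but the basic pattern (crawl, classify, respond with a predetermined comment) is the same.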

These results and approaches are consistent with prior research on the importance of instructor presence for successful for-credit online learning. In the meantime, though, there is much work still to be done if MOOCs are to provide the support and structure needed to ensure deep, conceptual learning where this does not already exist in students. The development of the skills needed in a digital age is likely to be an even greater challenge when dealing with massive numbers. However, we need much more research into what participants actually learn in MOOCs and under what conditions before any firm conclusions can be drawn.


Assessment

Assessment of the massive numbers of participants in MOOCs has proved to be a major challenge. It is a complex topic that can be dealt with only briefly here. However, Chapter 5.8 provides a general analysis of different types of assessment, and Suen (2014) provides a comprehensive and balanced overview of the way assessment has been used in MOOCs to date. This section draws heavily on Suen's paper.

Computer marked assignments

Assessment to date in MOOCs has been primarily of two kinds. The first is based on quantitative multiple-choice tests, or on response boxes where formulae or 'correct code' can be entered and automatically checked. Usually participants are given immediate automated feedback on their answers, ranging from a simple right or wrong to more complex responses depending on the answer they have selected, but in all cases the process is usually fully automated.

For straight testing of facts, principles, formulae, equations and other forms of conceptual learning where there are clear, correct answers, this works well. In fact, multiple choice computer marked assignments were used by the UK Open University as long ago as the 1970s, although the means to give immediate online feedback were not available then. However, this method of assessment is limited for testing deep or ‘transformative’ learning, and particularly weak for assessing the intellectual skills needed in a digital age, such as creative or original thinking.
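The basic mechanics of such computer-marked assessment are simple enough to sketch. The following minimal example is hypothetical (the question and feedback text are invented) and shows automated marking with immediate, answer-specific feedback:

```python
# Minimal sketch of an automatically marked multiple-choice item with
# immediate, answer-specific feedback. The question content is invented.

QUESTION = {
    "stem": "Which force keeps a satellite in orbit around the Earth?",
    "options": {"a": "Gravity", "b": "Friction", "c": "Magnetism"},
    "correct": "a",
    "feedback": {
        "a": "Correct: gravity provides the centripetal force for orbital motion.",
        "b": "Incorrect: there is essentially no friction in the vacuum of space.",
        "c": "Incorrect: the Earth's magnetic field is far too weak to hold a satellite in orbit.",
    },
}

def mark(question: dict, answer: str) -> tuple[bool, str]:
    """Return (is_correct, feedback) immediately, with no human marker involved."""
    return answer == question["correct"], question["feedback"][answer]

correct, feedback = mark(QUESTION, "b")
print(correct, "->", feedback)  # False -> the answer-specific feedback for 'b'
```

Everything here is deterministic, which is exactly why the approach scales to massive numbers, and also why it struggles with creative or original work.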

Peer review

The second type of assessment that has been tried in MOOCs has been peer assessment, where participants assess each other’s work. Peer assessment is not new. It has been successfully used for formative assessment in traditional classrooms and in some online teaching for credit (Falchikov and Goldfinch, 2000; van Zundert et al., 2010). More importantly, peer assessment is seen as a powerful way to improve deep understanding and knowledge through the rating process, and at the same time, it can be useful for developing some of the skills needed in a digital age, such as critical thinking, for those participants assessing the work of others.

However, a key feature of the successful use of peer assessment has been the close involvement of an instructor or teacher, in providing benchmarks, rubrics or criteria for assessment, and in monitoring and adjusting peer assessments to ensure consistency and a match with the benchmarks set by the instructor. Although an instructor can provide the benchmarks and rubrics in MOOCs, close monitoring of the multiple peer assessments is difficult if not impossible with the very large numbers of participants in MOOCs. As a result, MOOC participants often become incensed at being randomly assessed by other participants who may not, and often do not, have the knowledge or ability to give a 'fair' or accurate assessment of a participant's work.

Various attempts have been made to get round the limitations of peer assessment in MOOCs, such as calibrated peer reviews, based on averaging all the peer ratings, and Bayesian post hoc stabilization. Although these statistical techniques somewhat reduce the error (or spread) of peer review, they still do not remove the problem of systematic errors of judgement in raters due to misconceptions. This is particularly a problem where a majority of participants fail to understand key concepts in a MOOC, in which case peer assessment becomes the blind leading the blind.
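To illustrate the general idea behind such statistical stabilization (a sketch of a generic shrinkage estimator, not the specific algorithm used by any MOOC platform; the weights are invented), each submission's noisy peer ratings can be pulled towards the course-wide mean:

```python
# Illustrative sketch only: shrinking peer-assigned grades towards the
# course mean reduces random rater error (spread), but does nothing about
# systematic misconceptions shared by the raters.

from statistics import mean

def stabilised_grade(peer_ratings: list[float], course_mean: float,
                     prior_weight: float = 3.0) -> float:
    """Weighted blend of a submission's peer ratings and the course mean.

    The more peer ratings a submission has, the less the course mean counts.
    """
    n = len(peer_ratings)
    return (prior_weight * course_mean + n * mean(peer_ratings)) / (prior_weight + n)

course_mean = 72.0
print(stabilised_grade([40.0, 95.0], course_mean))              # 70.2: a noisy pair is pulled towards 72
print(stabilised_grade([88.0, 90.0, 86.0, 89.0], course_mean))  # ~81.3: many ratings count for more
```

If the raters share a misconception, the peer mean, and hence the stabilised grade, inherits it, which is the 'blind leading the blind' problem noted above.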

Automated essay scoring

This is another area where there have been attempts to automate scoring. Although such methods are increasingly sophisticated, they are currently limited to assessing accurately mainly technical writing skills, such as grammar, spelling and sentence construction. Once again, they do not accurately measure essays in which higher-level intellectual skills are demonstrated.

Badges and certificates

Particularly in xMOOCs, participants may be awarded a certificate or a ‘badge’ for successful completion of the MOOC, based on a final test (usually computer-marked) which measures the level of learning in a course. The American Council on Education (ACE), which represents the presidents of U.S. accredited, degree-granting institutions, recommended offering credit for five courses on the Coursera MOOC platform. However, according to the person responsible for the review process:

what the ACE accreditation does is merely accredit courses from institutions that are already accredited. The review process doesn't evaluate learning outcomes, but is a course content focused review thus obviating all the questions about effectiveness of the pedagogy in terms of learning outcomes. (Book, 2013)

Indeed, most of the institutions offering MOOCs will not accept their own certificates for admission or credit within their own, campus-based programs. Probably nothing says more about the confidence in the quality of the assessment than this failure of MOOC providers to recognize their own teaching.

The intent behind assessment

To evaluate assessment in MOOCs requires an examination of the intent behind assessment. As identified earlier in another chapter of my book, there are many different purposes behind assessment. Peer assessment and immediate feedback on computer-marked tests can be extremely valuable for formative assessment, enabling participants to see what they have understood and to help develop further their understanding of key concepts. In cMOOCs, as Suen points out, learning is measured as the communication that takes place between MOOC participants, resulting in crowdsourced validation of knowledge – it’s what the sum of all the participants come to believe to be true as a result of participating in the MOOC, so formal assessment is unnecessary. However, what is learned in this way is not necessarily academically validated knowledge, which to be fair, is not the concern of cMOOC proponents such as Stephen Downes.

Academic assessment is a form of currency, related not only to measuring student achievement but also to student mobility (e.g. entrance to grad school) and, perhaps more importantly, to employment opportunities and promotion. From a learner's perspective, the validity of the currency – the recognition and transferability of the qualification – is essential. To date, MOOCs have been unable to demonstrate that they are able to assess accurately the learning achievements of participants beyond comprehension and knowledge of ideas, principles and processes (recognizing that there is some value in this alone). What MOOCs have not been able to demonstrate is that they can either develop or assess deep understanding or the intellectual skills required in a digital age. Indeed, this may not be possible within the constraints of massiveness, which is their major distinguishing feature from other forms of online learning, although the lack of valid methods of assessment will not stop computer scientists from trying to find ways to analyze participant online behaviour to show that such learning is taking place.

Up next

I hope the next post will be my last on this chapter on MOOCs. It will cover the following topics:

  • the cost of MOOCs and economies of scale
  • branding
  • the political, economic and social factors that explain the rise of MOOCs.

Over to you

As regular readers know, this is my way of obtaining peer review for my open textbook (so clearly I am not against peer review in principle!). So if I have missed anything important on this topic, or have misrepresented people’s views, or you just plain disagree with what I’ve written, please let me know. In particular, I am hoping for comments on:

  • comprehensiveness of the sources used that address learning and assessment methods in MOOCs
  • arguments that should have been included, either as a strength or a weakness
  • errors of fact

Yes, I’m a glutton for punishment, but you need to be a masochist to publish openly on this topic.


References

Bayne, S. (2014) Teaching, Research and the More-than-Human in Digital Education Oxford UK: EDEN Research Workshop (url to come)

Book, P. (2013) ACE as Academic Credit Reviewer – Adjustment, Accommodation, and Acceptance WCET Learn, July 25

Colvin, K. et al. (2014) Learning an Introductory Physics MOOC: All Cohorts Learn Equally, Including On-Campus Class, IRRODL, Vol. 15, No. 4

Dillenbourg, P. (ed.) (1999) Collaborative-learning: Cognitive and Computational Approaches. Oxford: Elsevier

Dillenbourg, P. (2014) MOOCs: Two Years Later, Oxford UK: EDEN Research Workshop (url to come)

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Falchikov, N. and Goldfinch, J. (2000) Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks Review of Educational Research, Vol. 70, No. 3

Firmin, R. et al. (2014) Case study: using MOOCs for conventional college coursework Distance Education, Vol. 35, No. 2

Haynie, D. (2014) State Department hosts 'MOOC Camp' for online learners US News, January 20

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Knox, J. (2014) Digital culture clash: 'massive' education in the e-Learning and Digital Cultures MOOC Distance Education, Vol. 35, No. 2

Kop, R. (2011) The Challenges to Connectivist Learning on Open Online Networks: Learning Experiences during a Massive Open Online Course International Review of Research into Open and Distance Learning, Vol. 12, No. 3

Lave, J. and Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, Merlot Journal of Online Learning and Teaching, Vol. 9, No. 2

Suen, H. (2014) Peer assessment for massive open online courses (MOOCs) International Review of Research into Open and Distance Learning, Vol. 15, No. 3

van Zundert, M., Sluijsmans, D., van Merriënboer, J. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20, 270-279

Review of ‘Online Distance Education: Towards a Research Agenda.’


Zawacki-Richter, O. and Anderson, T. (eds.) (2014) Online Distance Education: Towards a Research Agenda Athabasca AB: AU Press, pp. 508

It is somewhat daunting to review a book of over 500 pages of research on any topic. I suspect few people other than the editors will read this book from cover to cover. It is more likely to be kept on one's bookshelf (if these still exist in a digital age) for reference whenever needed. Nevertheless, this is an important work that anyone working in online learning needs to be aware of, so I will do my best to cover it as comprehensively as I can.

Structure of the book

The book is a collection of about 20 chapters by a variety of different authors (more on the choice of authors later). Based on a Delphi study and analysis of ‘key research journals’ in the field, the editors have organized the topic into three sections, with a set of chapters on each sub-section, as follows:

1. Macro-level research: distance education systems and theories

  • access, equity and ethics
  • globalization and cross-cultural issues
  • distance teaching systems and institutions
  • theories and models
  • research methods and knowledge transfer

2. Meso-level research: management, organization and technology

  • management and organization
  • costs and benefits
  • educational technology
  • innovation and change
  • professional development and faculty support
  • learner support services
  • quality assurance

3. Micro-level: teaching and learning in distance education

  • instructional/learning design
  • interaction and communication
  • learner characteristics.

In addition, there is a very useful preface from Otto Peters, an introductory chapter by the editors where they justify their structural organization of research, and a short conclusion that calls for a systematic research agenda in online distance education research.

More importantly, perhaps, Terry Anderson and Olaf Zawacki-Richter demonstrate empirically that research in this field has been skewed towards the micro level (about half of all publications). Interestingly, and somewhat surprisingly given its importance, the costs and benefits of online distance education make up the least researched area.

What I liked

It is somewhat invidious to pick out particular chapters, because different readers will have different interests across such a wide-ranging list of topics. I have tended to choose those that were new and/or particularly enlightening for me, but other readers’ choices will differ. By selecting a few excellent chapters, however, I hope to give some idea of the quality of the book.

1. The structuring/organization of research

Anderson and Zawacki-Richter have done an excellent job of providing a structural framework for research in this field. This will be useful for those teaching about online and distance education, and in particular for potential Ph.D. students wondering what to study: this book will provide an essential starting point.

2. Summary of the issues in each area of research

Again, the editors have done an excellent job in their introductory chapter of summarizing the content of each of the chapters that follow, and in so doing pulling out the key themes and issues within each area of research. This alone makes the book worthwhile.

3. Globalization, Culture and Online Distance Education

Charlotte (Lani) Gunawardena of the University of New Mexico has written the most comprehensive and deep analysis of this issue that I have seen. It is an area in which I have a great deal of interest, since most of the online teaching I have done has been with students from around the world, many of them multilingual.

After a general discussion of the issue of globalization and education, she reviews research in the following areas:

  • diverse educational expectations
  • learners and preferred ways of learning
  • socio-cultural environment and online interaction
  • help-seeking behaviours
  • silence
  • language learning
  • researching culture and online distance learning

This chapter should be required reading for anyone contemplating teaching online.

4. Quality assurance in Online Distance Education

I picked this chapter by Colin Latchem because he is so deeply expert in this field that he can make what is often a numbingly boring, but immensely important, topic a fun read, while at the same time ending with some critical questions about quality assurance. In particular, Latchem looks at QA from the following perspectives:

  • definitions of quality
  • accreditation
  • online distance education vs campus-based teaching
  • quality standards
  • transnational online distance education
  • open educational resources
  • costs of QA
  • is online distance education yet good enough?
  • an outcomes approach to QA.

This chapter definitely showcases a master at the top of his game.

5. The elephant in the room: student drop-out

This is a wonderfully funny but ultimately serious argument between Ormond Simpson and Alan Woodley about the elephant in the distance education room that no-one wants to mention. They start by poking the elephant with some sticks (which, they note, is not likely to be a career-enhancing move). The basic argument is that institutions could and should do more to reduce drop-out and increase course completion. This chapter also stunned me by providing hard data on the really low completion rates for most open university students. I couldn’t help comparing these with the high completion rates for online credit courses at dual-mode (campus-based) institutions, at least in Canada (which, of course, are not ‘open’ institutions, in that students must have good high school qualifications).

Woodley’s solution to reducing drop-out is quite interesting (and well argued later in the chapter):

  • make it harder to get in
  • make it harder to get out

In both cases he offers really practical and not too costly solutions that are nevertheless consistent with open access and high-quality teaching.

In summary

The book contains a number of really good chapters that lay out the issues in researching online distance education.

What I disliked

I have to say that I groaned when I first saw the list of contributors: the same old, same old list of distance education experts, with a heavy bias towards open universities. Sure, they are nearly all well-seasoned experts, and there’s nothing wrong with that per se (after all, I see myself as one of them).

But where are the young researchers here, and especially the researchers in open educational resources, MOOCs, and social media applications in online learning, and above all researchers from the many campus-based universities now mainstreaming online learning? There is almost nothing in the book about research into blended learning, and flipped classrooms are not even mentioned. OK, the book is about online distance learning, but those barriers and distinctions are coming down with a vengeance. As it stands, this book will never reach those who most need it: the many campus-based instructors now venturing into online learning for the first time, in one way or another. They do not see themselves primarily as distance educators.

And a few of the articles read more like history lessons than an up-to-date review of research in the field. Readers of this blog will know that I strongly value the history of educational technology and distance learning, but these lessons need to be embedded in the here and now; in particular, the lessons need to be spelled out. It is not enough to know that Stanford University researchers were investigating the costs and benefits of educational broadcasting in developing countries as long ago as 1974; what lessons does that work hold for some of the outrageous claims being made about MOOCs? A great deal, in fact, but this needs explaining in the context of MOOCs today.

Also, the book is focused solely on post-secondary university education. Where is the research on online distance education in the K-12/school sector, or in the two-year college/vocational sector? Maybe these are topics for other books, but this is where the real gap in research publications on online learning lies.

Lastly, although the book is reasonably priced for its size (C$40) and is available as an e-text as well as in print, what a pity it is not an open textbook that could then be updated and crowd-sourced over time.


In conclusion

This is essential reading for anyone who wants to take a professional, evidence-based approach to online learning (distance or otherwise). It will be particularly valuable for students wanting to do research in this area. The editors have done an incredibly good job of presenting a hugely diverse and scattered area in a clear and structured manner. Many of the chapters are gems of insight and knowledge in the field.

However, we have a huge challenge of knowledge transfer in this field. Authors in the book repeatedly lament that many of the new entrants to online learning are woefully ignorant of the research previously done in the field. We need a better way to disseminate this research than a 500-page printed text that only those already expert in the field are likely to access. On the other hand, the book does provide a strong foundation from which to find better ways to disseminate this knowledge. Knowledge dissemination in a digital world, then, is where the research agenda for online learning needs to focus.