August 31, 2015

A future vision for OER and online learning

For each chapter of my online open textbook, Teaching in a Digital Age, I am developing imaginary but hopefully realistic scenarios. In this scenario, developed as a closing to my chapter on ‘Modes of Delivery and Open Education’, I look at how modularization could lead both to wider access to credit courses and to more open use of learning materials.

Figure 10.1 The Hart River, Yukon. Image: © www.protectpeel.ca, CC BY-NC

Research faculty in the Faculties of Land Management and Forestry at the (mythical) University of Western Canada had developed over a number of years a range of ‘learning artefacts’: digital graphics, computer models and simulations about watershed management. These were created partly as a consequence of research conducted by faculty, and partly to generate support and funding for further research.

At a faculty meeting several years ago, after a somewhat heated discussion, faculty members voted to make these resources openly available for re-use for educational purposes under a Creative Commons license that requires attribution and prevents commercial use without specific written permission from the copyright holders, who in this case are the faculty responsible for developing the artefacts. What swayed the vote is that the majority of the faculty actively involved in the research wanted to make these resources more widely available. The agencies responsible for funding the work that led to the development of the artefacts (mainly national research councils) welcomed the move to make these artefacts more widely available as open educational resources.

Initially, the researchers just put the graphics and simulations up on the research group’s web site. It was left to individual faculty members to decide whether to use these resources in their teaching. Over time, faculty started to introduce these resources into a range of on-campus undergraduate and graduate courses.

After a while, though, word seemed to get out about these OER. The research faculty began to receive e-mails and phone calls from other researchers around the world. It became clear that there was a network or community of researchers in this field who were creating digital materials as a result of their research, and it made sense to share and re-use materials from other sites. This eventually led to an international web ‘portal’ of learning artefacts on watershed management.

The researchers also started to get calls from a range of different agencies: government ministries or departments of environment, local environmental groups, First Nations/aboriginal bands and, occasionally, major mining or resource extraction companies, leading to some major consultancy work for the faculty in the department. At the same time, the faculty were able to attract further research funding from non-governmental agencies such as the Nature Conservancy and some ecological groups, as well as from their traditional funding source, the national research councils, to develop more OER.

By this time, instructors had access to a fairly large amount of OER. Two fully online fourth- and fifth-level courses built around the OER were already being offered successfully to undergraduate and graduate students.

A proposal was therefore put forward initially to create a fully online post-graduate certificate program in watershed management, built around existing OER, in partnership with a university in the USA and another in Sierra Leone. This certificate program was to be self-funding from tuition fees, with the fees for the 25 Sierra Leone students initially covered by an international aid agency. The Dean, after a period of hard negotiation, persuaded the university administration that the tuition fees from the certificate program should go directly to the two Faculties whose staff were teaching the program. From these funds, the departments would hire additional tenured faculty to teach or backfill for the certificate, and the Faculties would pay 25 per cent of the tuition revenues to the university as overheads.

This decision was made somewhat easier by a fairly substantial grant from Foreign Affairs Canada to make the certificate program available in English and French to Canadian mining and resource extraction companies with contracts and partnerships in African countries.

Although the certificate program was very successful in attracting students from North America, Europe and New Zealand, it was not taken up widely in Africa beyond the partnership with the university in Sierra Leone, even though there was a lot of interest in the OER and in the issues raised in the certificate courses. After two years of running the certificate, then, the Faculties made two major decisions:

  • another three courses and a research project would be added to the certificate courses, and the whole would be offered as a fully cost-recoverable online master’s in land and water systems. This would attract greater participation from managers and professionals, particularly in African countries, and provide the recognised qualification that many of the certificate students were requesting
  • drawing on the large network of external experts now involved one way or another with the researchers, the university would offer a series of MOOCs on watershed management issues, with volunteer experts from outside the university being invited to participate and provide leadership in the MOOCs. The MOOCs would be able to draw on the existing OER.

Five years later, the following outcomes were recorded by the Dean of one of the faculties at an international conference on sustainability:

  • the online master’s program had doubled the total number of graduate students across the two faculties
  • the master’s program was fully cost-recoverable from tuition fees
  • there were 120 graduates a year from the master’s program
  • the degree completion rate was 64 per cent
  • six new tenured faculty had been hired, plus another six post-doctoral research faculty
  • several thousand students had registered and paid for at least one course in the certificate or master’s program, of whom 45 per cent were from outside Canada
  • over 100,000 students had taken the MOOCs, almost half from developing countries
  • over 1,000 hours of OER on watershed management were now available and had been downloaded many times across the world, attracting more students and revenue to the university
  • the university was now internationally recognised as a world leader in watershed management.

Although this scenario is purely a figment of my imagination, it is influenced by real and exciting work at the University of British Columbia, much of which was developed as open access materials from the start.

Over to you

1. Does this strike you as a realistic scenario?

2. How useful are scenarios like this for thinking about the future? Could you use similar kinds of scenarios in your program planning or for faculty development, for instance?

3. If you have used scenarios for online learning in similar ways, would you be willing to share one?

4. Most of the elements of this scenario already exist at UBC. What I have done though is bring things together from different parts of the university into an integrated single scenario. What could be done within institutions to make this cross-disciplinary transfer of ideas and strategies easier to achieve? (It should be noted that UBC already has a Flexible Learning initiative, including a strategy team within the Provost’s office, which should help with this.)

Next

Just one more post to wrap up the chapter on Modes of Delivery and Open Education: the key takeaways from this chapter.

EDEN research papers: OERs (inc. MOOCs), quality/assessment, social media, analytics and research methods

EDEN has now published a second report on my review of papers submitted to the EDEN research workshop in Oxford a couple of weeks ago. All the full papers for the workshop can be accessed here.

Main lessons (or unanswered questions) I took away:

OERs and MOOCs

  • what does awarding badges or certificates for MOOCs or other OER actually mean? For instance, will institutions give course exemption or credits for the awards, or accept such awards for admission purposes? Or will the focus be on employer recognition? How will participants who are awarded badges know what their ‘currency’ is worth?
  • can MOOCs be designed to go beyond comprehension or networking to develop other critical 21st century skills such as critical thinking, analysis and evaluation? Can they lead to ‘transformational learning’ as identified by Kumar and Arnold (see Quality and Assessment below)?
  • are there better design models for open courses than MOOCs as currently structured? If so, what would they look like?
  • is there a future for learning object repositories when nearly all academic content becomes open and online?

Quality and assessment

  • research may inform but won’t resolve policy issues
  • quality is never ‘objective’ but is value-driven
  • the level of intervention must be sustained and substantial enough to result in significant learning gains
  • there is already plenty of research indicating the necessary conditions for the successful use of online discussion forums; if these conditions are not present, learning will not take place
  • the OU’s traditional model of course design constrains the development of successful collaborative online learning.

Use of social media in open and distance learning

There were surprisingly few papers on this topic. My main takeaway:

  • the use of social media needs to be driven by sound pedagogical theory that takes into account the affordances of social media (as in Sorensen’s study described in an earlier post under course design)

Data analytics and student drop-out

  • institutions/registrars must pay attention to how student data is tagged/labeled for analytic purposes, so that there is consistency in definitions, aggregation and interpretation
  • when developing or applying an analytics software program, consideration needs to be given to the level of analysis and to what potential users of the data are looking for; this means working with instructional designers, faculty and administrators from the beginning
  • analytics need to be integrated with action plans to identify and support at-risk students early (a toy sketch of such an early-warning rule follows below)
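
To make the first and last bullets concrete, here is a toy sketch of an early-warning rule built on consistently tagged activity data. Everything in it – the field names, the thresholds and the rule itself – is an invented illustration, not the design of any particular analytics product.

```python
# Toy early-warning sketch: consistently defined activity fields feed a
# simple flag for students who may need support. All field names and
# thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class WeeklyActivity:
    student_id: str
    week: int          # week of term, defined the same way in every course
    logins: int        # LMS sessions that week
    submissions: int   # assessed work submitted that week

def flag_at_risk(records: list[WeeklyActivity], current_week: int) -> set[str]:
    """Flag students with no submissions and at most two logins
    over the last two weeks."""
    recent: dict[str, list[WeeklyActivity]] = {}
    for r in records:
        if r.week >= current_week - 1:
            recent.setdefault(r.student_id, []).append(r)
    return {
        sid for sid, recs in recent.items()
        if sum(r.submissions for r in recs) == 0
        and sum(r.logins for r in recs) <= 2
    }
```

The sketch makes the point of all three bullets at once: the flag is only computable if ‘week’, ‘login’ and ‘submission’ mean the same thing across courses, and it is only useful if it feeds an action plan, such as a tutor’s email or an offer of support.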

Research methods

Next

If these bullets interest you at all, then I strongly recommend you go and read the original papers in full – click here. My summary is of necessity personal and abbreviated; the papers provide much greater richness of context.

The strengths and weaknesses of MOOCs: Part 2: learning and assessment

Remote exam proctoring

The writing of my book, Teaching in a Digital Age, has been interrupted for nearly two weeks by my trip to England for the EDEN Research Workshop. As part of the first draft of the book, I have already published three posts on MOOCs.

In this post, I ask (and try to answer) the question: what do participants learn from MOOCs? I also evaluate the assessment methods MOOCs use.

What do students learn in MOOCs?

This is a much more difficult question to answer, because so little of the research to date (2014) has addressed it. (One reason, as we shall see, is that assessment of learning in MOOCs remains a major challenge.) There are at least two kinds of study: quantitative studies that seek to quantify learning gains, and qualitative studies that describe the experience of learners within MOOCs, which indirectly provide some insight into what they have learned.

At the time of writing, the best-conducted study of learning in MOOCs is by Colvin et al. (2014), who investigated ‘conceptual learning’ in an MIT Introductory Physics MOOC. They compared learner performance not only between different sub-categories of learners within the MOOC, such as those with no physics or math background and those, such as physics teachers, who had considerable prior knowledge, but also with on-campus students taking the same curriculum in a traditional campus teaching format. In essence, the study found no significant differences in learning gains between or within the two types of teaching, but it should be noted that the on-campus students were students who had failed an earlier version of the course and were retaking it.

This research is a classic example of the ‘no significant difference’ phenomenon in comparative studies of educational technology: other variables, such as differences in the types of students, were as important as the mode of delivery. Also, this MOOC design represents a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It does not attempt to develop the skills needed in a digital age as identified in Chapter 1.
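
For readers unfamiliar with how studies of this kind quantify ‘learning gains’, a measure widely used in physics education research is the normalized gain, g = (post − pre) / (max − pre): the fraction of the available room for improvement that a learner actually gained. The scores below are invented purely to show the arithmetic; they are not data from Colvin et al.

```python
# Normalized gain: what fraction of the available room for improvement
# did a learner actually gain? All scores here are invented.
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    return (post - pre) / (max_score - pre)

mooc_pre_post   = [(40, 70), (55, 80), (30, 65)]
campus_pre_post = [(45, 72), (50, 78), (35, 66)]

def mean_gain(pairs: list[tuple[float, float]]) -> float:
    gains = [normalized_gain(pre, post) for pre, post in pairs]
    return sum(gains) / len(gains)

print(f"MOOC cohort:   {mean_gain(mooc_pre_post):.2f}")    # ~0.52
print(f"Campus cohort: {mean_gain(campus_pre_post):.2f}")  # ~0.51
```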

There have been far more studies of the experience of learners within MOOCs, particularly focusing on the discussions within MOOCs (see, for instance, Kop, 2011). In general (although there are exceptions), discussions are unmonitored, and it is left to participants to make connections and respond to other students’ comments.

However, there are some strong criticisms of the effectiveness of the discussion element of MOOCs for developing the high-level conceptual analysis required for academic learning. To develop deep, conceptual learning, there is a need in most cases for intervention by a subject expert: to clarify misunderstandings or misconceptions, to provide accurate feedback, to ensure that the criteria for academic learning, such as use of evidence and clarity of argument, are being met, and to provide the input and guidance needed to seek deeper understanding (see Harasim, 2012).

Furthermore, the more massive the course, the more likely participants are to feel ‘overload, anxiety and a sense of loss’, if there is not some instructor intervention or structure imposed (Knox, 2014). Firmin et al. (2014) have shown that when there is some form of instructor ‘encouragement and support of student effort and engagement’, results improve for all participants in MOOCs. Without a structured role for subject experts, participants are faced with a wide variety of quality in terms of comments and feedback from other participants. There is again a great deal of research on the conditions necessary for the successful conduct of collaborative and co-operative group learning (see for instance, Dillenbourg, 1999, Lave and Wenger, 1991), and these findings certainly have not been generally applied to the management of MOOC discussions to date.

One counter-argument is that cMOOCs at least develop a new form of learning based on networking and collaboration that is essentially different from academic learning, and that MOOCs are thus more appropriate to the needs of learners in a digital age. Adult participants in particular, it is claimed by Downes and Siemens, have the ability to self-manage the development of high-level conceptual learning. MOOCs are ‘demand’ driven, meeting the interests of individual students who seek out others with similar interests and the necessary expertise to support them in their learning, and for many this interest may well not include the need for deep, conceptual learning but more likely the appropriate application of prior knowledge in new or specific contexts. MOOCs do appear to work best for those who already have a high level of education, who bring with them many of the conceptual skills developed in formal education when they join a MOOC, and who can therefore help those who come without such skills.

Over time, as more experience is gained, MOOCs are likely to incorporate and adapt some of the findings from research on smaller group work for much larger numbers. For instance, some MOOCs are using ‘volunteer’ or community tutors (Dillenbourg, 2014). The US State Department has organized MOOC camps through US missions and consulates abroad to mentor MOOC participants. The camps include Fulbright scholars and embassy staff who lead discussions on content and topics for MOOC participants in countries abroad (Haynie, 2014). Some MOOC providers, such as the University of British Columbia, pay a small cohort of academic assistants to monitor and contribute to the MOOC discussion forums (Engle, 2014). Engle reported that the use of academic assistants, as well as limited but effective interventions from the instructors themselves, made the UBC MOOCs more interactive and engaging. However, paying people to monitor and support MOOCs will of course increase the cost to providers. Consequently, MOOCs are likely to develop new automated ways to manage discussion effectively in very large groups. The University of Edinburgh is experimenting with automated ‘teacherbots’ that crawl through online discussion forums and direct predetermined comments to students identified as needing help or encouragement (Bayne, 2014).
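
To make the teacherbot idea concrete, here is a deliberately naive sketch: scan forum posts for signals of confusion and queue a predetermined reply. The Edinburgh work is far more sophisticated than this; the keyword list and the canned response are invented for illustration only.

```python
# Naive 'teacherbot' sketch: keyword-match forum posts and queue a
# predetermined encouragement. Markers and reply text are invented.
CONFUSION_MARKERS = ("don't understand", "totally lost", "so confused", "giving up")

def queue_bot_replies(posts: list[dict]) -> list[dict]:
    """posts: [{'author': str, 'text': str}] -> queued replies."""
    replies = []
    for post in posts:
        text = post["text"].lower()
        if any(marker in text for marker in CONFUSION_MARKERS):
            replies.append({
                "to": post["author"],
                "text": "This week's topic trips a lot of people up - "
                        "the worked example in unit 2 may help. Keep going!",
            })
    return replies

print(queue_bot_replies([{"author": "ana", "text": "I'm totally lost on entropy"}]))
```

Even this toy version shows why the approach is attractive at scale – the marginal cost of the thousandth intervention is zero – and why it is no substitute for a subject expert: it has no model of what the student actually misunderstands.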

These results and approaches are consistent with prior research on the importance of instructor presence for successful for-credit online learning. In the meantime, though, there is much work still to be done if MOOCs are to provide the support and structure needed to ensure deep, conceptual learning where this does not already exist in students. The development of the skills needed in a digital age is likely to be an even greater challenge when dealing with massive numbers. However, we need much more research into what participants actually learn in MOOCs and under what conditions before any firm conclusions can be drawn.

Assessment

Assessment of the massive numbers of participants in MOOCs has proved to be a major challenge. It is a complex topic that can be dealt with only briefly here. However, Chapter 5.8 provides a general analysis of different types of assessment, and Suen (2014) provides a comprehensive and balanced overview of the way assessment has been used in MOOCs to date. This section draws heavily on Suen’s paper.

Computer marked assignments

Assessment to date in MOOCs has been primarily of two kinds. The first is based on quantitative multiple-choice tests, or response boxes where formulae or ‘correct code’ can be entered and automatically checked. Usually participants are given immediate automated feedback on their answers, ranging from simple right or wrong answers to more complex responses depending on the type of response they have checked, but in all cases, the process is usually fully automated.

For straight testing of facts, principles, formulae, equations and other forms of conceptual learning where there are clear, correct answers, this works well. In fact, multiple choice computer marked assignments were used by the UK Open University as long ago as the 1970s, although the means to give immediate online feedback were not available then. However, this method of assessment is limited for testing deep or ‘transformative’ learning, and particularly weak for assessing the intellectual skills needed in a digital age, such as creative or original thinking.
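
What ‘fully automated’ marking with immediate feedback amounts to can be shown in a few lines. The question, options and feedback strings below are invented; real platforms wrap this core loop in randomization, attempt limits and grade-book integration.

```python
# Core loop of a computer-marked question with immediate feedback.
# The content is invented for illustration.
QUESTION = {
    "prompt": "What is the SI unit of force?",
    "options": {"a": "joule", "b": "newton", "c": "watt"},
    "answer": "b",
    "feedback": {
        "a": "Not quite: the joule measures energy, not force.",
        "b": "Correct: force = mass x acceleration, measured in newtons.",
        "c": "Not quite: the watt measures power, not force.",
    },
}

def mark(choice: str) -> tuple[bool, str]:
    correct = choice == QUESTION["answer"]
    return correct, QUESTION["feedback"].get(choice, "Unrecognized option.")

print(mark("a"))  # (False, 'Not quite: the joule measures energy, not force.')
```

Note that everything assessable here had to be reduced to a closed set of options with one right answer – which is precisely why the method struggles with deep or transformative learning.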

Peer review

The second type of assessment that has been tried in MOOCs has been peer assessment, where participants assess each other’s work. Peer assessment is not new. It has been successfully used for formative assessment in traditional classrooms and in some online teaching for credit (Falchikov and Goldfinch, 2000; van Zundert et al., 2010). More importantly, peer assessment is seen as a powerful way to improve deep understanding and knowledge through the rating process, and at the same time, it can be useful for developing some of the skills needed in a digital age, such as critical thinking, for those participants assessing the work of others.

However, a key feature of the successful use of peer assessment has been the close involvement of an instructor or teacher, in providing benchmarks, rubrics or criteria for assessment, and in monitoring and adjusting peer assessments to ensure consistency and a match with the benchmarks set by the instructor. Although an instructor can provide the benchmarks and rubrics in MOOCs, close monitoring of the multiple peer assessments is difficult if not impossible with the very large numbers of participants in MOOCs. As a result, MOOC participants often become incensed at being randomly assessed by other participants who may not, and often do not, have the knowledge or ability to give a ‘fair’ or accurate assessment of a participant’s work.

Various attempts have been made to get round the limitations of peer assessment in MOOCs, such as calibrated peer review, based on averaging all the peer ratings, and Bayesian post hoc stabilization. Although these statistical techniques somewhat reduce the error (or spread) of peer review, they still do not remove systematic errors of judgement among raters due to misconceptions. This is a particular problem where a majority of participants fail to understand key concepts in a MOOC, in which case peer assessment becomes the blind leading the blind.
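
The statistical intuition behind such calibration can be sketched in a few lines: weight each rater by how closely their marks on instructor-graded calibration pieces matched the instructor’s, then take a weighted rather than a plain average. The weighting formula and the numbers below are illustrative assumptions, not the algorithm of any particular platform.

```python
# Sketch of calibrated peer review: raters who matched instructor marks
# on calibration essays get more weight. Formula and figures are
# illustrative assumptions only.
def rater_weight(rater_marks: list[float], instructor_marks: list[float]) -> float:
    # Mean absolute error on calibration essays; lower error -> higher weight.
    mae = sum(abs(r - i) for r, i in zip(rater_marks, instructor_marks)) / len(rater_marks)
    return 1.0 / (1.0 + mae)

def calibrated_score(peer_ratings: dict[str, float], weights: dict[str, float]) -> float:
    total = sum(weights[r] for r in peer_ratings)
    return sum(weights[r] * score for r, score in peer_ratings.items()) / total

weights = {
    "rater1": rater_weight([70, 60], [72, 61]),  # close to instructor: high weight
    "rater2": rater_weight([90, 95], [60, 55]),  # systematically generous: low weight
}
print(round(calibrated_score({"rater1": 65.0, "rater2": 95.0}, weights), 1))  # ~67
```

The sketch also shows the limit noted above: weighting reduces the spread, but if most raters share the same misconception there is nothing left to calibrate against at the scale of a MOOC.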

Automated essay scoring

This is another area where there have been attempts to automate scoring. Although such methods are increasingly sophisticated, they are currently limited to measuring primarily technical writing skills, such as grammar, spelling and sentence construction. Once again, they do not accurately measure essays in which higher-level intellectual skills are demonstrated.
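
The limitation is easy to see if you sketch what such systems can actually measure. The toy scorer below uses only surface features, with invented weights; it is not any real automated essay scoring engine, and nothing in it can tell a well-evidenced argument from fluent nonsense.

```python
# Toy 'essay scorer' over surface features only. Weights are invented;
# it rewards length and fluent-looking text, not quality of argument.
import re

def surface_features(essay: str) -> dict[str, float]:
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = essay.split()
    return {
        "word_count": float(len(words)),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
    }

def toy_score(essay: str) -> float:
    f = surface_features(essay)
    # Arbitrary linear rubric over surface features only.
    return 0.02 * f["word_count"] + 0.1 * f["avg_sentence_len"] + 0.5 * f["avg_word_len"]

print(round(toy_score("Force equals mass times acceleration. "
                      "It is measured in newtons."), 2))
```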

Badges and certificates

Particularly in xMOOCs, participants may be awarded a certificate or a ‘badge’ for successful completion of the MOOC, based on a final test (usually computer-marked) which measures the level of learning in a course. The American Council on Education (ACE), which represents the presidents of U.S. accredited, degree-granting institutions, recommended offering credit for five courses on the Coursera MOOC platform. However, according to the person responsible for the review process:

‘what the ACE accreditation does is merely accredit courses from institutions that are already accredited. The review process doesn’t evaluate learning outcomes, but is a course content focused review thus obviating all the questions about effectiveness of the pedagogy in terms of learning outcomes.’ (Book, 2013)

Indeed, most of the institutions offering MOOCs will not accept their own certificates for admission or credit within their own, campus-based programs. Probably nothing says more about the confidence in the quality of the assessment than this failure of MOOC providers to recognize their own teaching.

The intent behind assessment

To evaluate assessment in MOOCs requires an examination of the intent behind assessment. As identified earlier in another chapter of my book, there are many different purposes behind assessment. Peer assessment and immediate feedback on computer-marked tests can be extremely valuable for formative assessment, enabling participants to see what they have understood and to help develop further their understanding of key concepts. In cMOOCs, as Suen points out, learning is measured as the communication that takes place between MOOC participants, resulting in crowdsourced validation of knowledge – it’s what the sum of all the participants come to believe to be true as a result of participating in the MOOC, so formal assessment is unnecessary. However, what is learned in this way is not necessarily academically validated knowledge, which to be fair, is not the concern of cMOOC proponents such as Stephen Downes.

Academic assessment is a form of currency, related not only to measuring student achievement but also affecting student mobility (e.g. entrance to grad school) and perhaps more importantly employment opportunities and promotion. From a learner’s perspective, the validity of the currency – the recognition and transferability of the qualification – is essential. To date, MOOCs have been unable to demonstrate that they are able to assess accurately the learning achievements of participants beyond comprehension and knowledge of ideas, principles and processes (recognizing that there is some value in this alone). What MOOCs have not been able to demonstrate is that they can either develop or assess deep understanding or the intellectual skills required in a digital age. Indeed, this may not be possible within the constraints of massiveness, which is their major distinguishing feature from other forms of online learning, although the lack of valid methods of assessment will not stop computer scientists from trying to find ways to analyze participant online behaviour to show that such learning is taking place.

Up next

I hope the next post will be my last on this chapter on MOOCs. It will cover the following topics:

  • the cost of MOOCs and economies of scale
  • branding
  • the political, economic and social factors that explain the rise of MOOCs.

Over to you

As regular readers know, this is my way of obtaining peer review for my open textbook (so clearly I am not against peer review in principle!). So if I have missed anything important on this topic, or have misrepresented people’s views, or you just plain disagree with what I’ve written, please let me know. In particular, I am hoping for comments on:

  • comprehensiveness of the sources used that address learning and assessment methods in MOOCs
  • arguments that should have been included, either as a strength or a weakness
  • errors of fact

Yes, I’m a glutton for punishment, but you need to be a masochist to publish openly on this topic.

References

Bayne, S. (2014) Teaching, Research and the More-than-Human in Digital Education Oxford UK: EDEN Research Workshop (url to come)

Book, P. (2013) ACE as Academic Credit Reviewer–Adjustment, Accommodation, and Acceptance WCET Learn, July 25

Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Dillenbourg, P. (ed.) (1999) Collaborative-learning: Cognitive and Computational Approaches. Oxford: Elsevier

Dillenbourg, P. (2014) MOOCs: Two Years Later, Oxford UK: EDEN Research Workshop (url to come)

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Falchikov, N. and Goldfinch, J. (2000) Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks Review of Educational Research, Vol. 70, No. 3

Firmin, R. et al. (2014) Case study: using MOOCs for conventional college coursework Distance Education, Vol. 35, No. 2

Haynie, D. (2014) State Department hosts ‘MOOC Camp’ for online learners, US News, January 20

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Knox, J. (2014) Digital culture clash: ‘massive’ education in the e-Learning and Digital Cultures MOOC, Distance Education, Vol. 35, No. 2

Kop, R. (2011) The Challenges to Connectivist Learning on Open Online Networks: Learning Experiences during a Massive Open Online Course International Review of Research into Open and Distance Learning, Vol. 12, No. 3

Lave, J. and Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, Merlot Journal of Online Learning and Teaching, Vol. 9, No. 2

Suen, H. (2014) Peer assessment for massive open online courses (MOOCs), International Review of Research into Open and Distance Learning, Vol. 15, No. 3

van Zundert, M., Sluijsmans, D., van Merriënboer, J. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20, 270-279

The dissemination of research in online learning: a lesson from the EDEN Research Workshop

The Sheldonian Theatre, Oxford

The EDEN Research Workshop

I’m afraid I have sadly neglected my blog over the last two weeks, as I was heavily engaged as the rapporteur for the EDEN 8th Research Workshop on challenges for research on open and distance learning, which took place in Oxford, England last week, with the UK Open University as the host and sponsor. I was also there to receive a Senior Fellowship from EDEN, awarded at the Sheldonian Theatre, the official ceremonial hall of the University of Oxford.

Almost 150 participants from more than 30 countries, mainly European, attended the workshop, which featured over 40 selected research papers/presentations. The workshop was highly interactive, with lots of opportunity for discussion and dialogue, and formal presentations were kept to a minimum. Together with some very stimulating keynotes, the workshop provided a good overview of the current state of online, open and distance learning in Europe. From my perspective it was a very successful workshop.

My full, factual report on the workshop will be published next week as a series of three blog posts by António Moreira Teixeira, the President of EDEN, and I will provide a link when these are available. In the meantime, I would like to reflect more personally on one of the issues that came out of the workshop, as it is more broadly applicable.

Houston, we have a problem: no-one reads our research

Well, not no-one, but no-one outside the close group of those doing research in the area. Indeed, although in general the papers for the workshop were of high quality, there were still far too many papers that suggested the authors were unaware of key prior research in the area.

But the real problem is that most practitioners – instructors and teachers – are blissfully unaware of the major research findings about teaching and learning online and at a distance. The same applies to the many computer scientists who are now moving into online learning with new products, new software and new designs. MOOCs are the most obvious example. Andrew Ng, Sebastian Thrun and Daphne Koller – all computer scientists – designed their MOOCs without any consideration of what was already known about online learning, or indeed about teaching or learning in general, other than their experience as lecturers at Stanford University. The same applies to MIT’s and Harvard’s courses on edX, although MIT and Harvard are at least starting to do their own research, while still ignoring or pretending that nothing else has been done before. This results in mistakes being made (unmonitored student discussion), the re-invention of the wheel hyped as innovation or major breakthroughs (online courses for the masses), and surprised delight at discovering what has already been known for many years (e.g. students like immediate feedback).

Perhaps of more concern, though, is that as more and more instructors move into blended and hybrid learning, they too are unaware of best practices based on research and evaluation of online learning, and of what is known about online learners and their behaviour. This applies not only to online course design in general, but particularly to the management of online discussions.

It will of course be argued that MOOCs and hybrid learning are somehow different from previous online and distance courses and therefore the research does not apply. These are revolutionary innovations, so the rules of the game have changed, and what was known before is no longer relevant. This kind of thinking, though, misunderstands the nature of sustainable innovation, which usually builds on past knowledge – in other words, successful innovation is more cumulative than a leap into the dark. Indeed, it is hard to imagine any field other than education where innovators would blithely ignore previous knowledge. (‘I don’t know anything about civil engineering, but I have a great idea for a bridge.’ Let’s see how far that will get you.)

Who’s to blame?

Well, no-one really. There are several reasons why research in online learning is not better disseminated:

  • research into any kind of learning is not easy; there are just so many different variables or conditions that affect learning in any context. This has several consequences:
    • it is difficult to generalize, because learning contexts vary so much
    • clearly significant results are difficult to find when so many other variables are likely to affect learning outcomes
    • thus results are usually hedged with so many reservations that any clear message gets lost
  • because research into online learning is out of the mainstream of educational research it has been poorly funded by the research councils. Thus most studies are small scale, qualitative and practitioner-driven. This means interventions are small scale and therefore do not identify major changes in learning, and the results are mainly of use to the practitioner who did the research, so don’t get more widely disseminated
  • most research in online learning is published in journals that are not read by either practitioners or computer scientists (who publish in their own journals that no-one else reads). Furthermore, there are a large number of journals in the field, so integration of research findings is difficult, although Anderson and Zawacki-Richter (2014) have done a good job of bringing a lot of the research together in one publication – which unfortunately is nearly 500 pages long, and hence unlikely to reach many practitioners, at least in a digestible form
  • online learning is still a relatively new field, less than 20 years old, so it is taking time to build a solid foundation of verifiable research in which people can have confidence
  • most instructors at a post-secondary level have no formal training in any form of teaching and learning, so there are difficulties in bringing research and best practices to their attention.

What can be done?

First let me state clearly that I believe there is a growing and significant body of evidence about best practices in online learning that is evidence-based and research-driven. These best practices are general enough to be applied in a wide variety of contexts. In fact I will shortly write a post called ‘Ten things we know from research in online learning’ that will set out some of the most important results and their implications for teaching and learning online. However, we need more attempts to pull together the scattered research into more generalizable conclusions and more widely distributed forms of communication.

At the same time, we also need to get out the message about the complexity of teaching and learning, without which it will be difficult to evaluate or fully appreciate the findings from research in online learning. What is needed is an understanding that:

  • learning is a process, not a product
  • there are different epistemological positions about what constitutes knowledge and how to teach it
  • above all, identifying desirable learning outcomes is a value-driven decision; acceptance of a diversity of values about what constitutes knowledge is to be welcomed, not restricted, in education, so long as there is genuine choice for teachers and learners
  • however, if we want to develop the skills needed in a digital age, the traditional lecture-based model, whether offered face-to-face or online, is inadequate
  • academic knowledge is different from everyday knowledge: it means transforming understanding of the world through evidence, theory and rational argument/dialogue, and effective teachers/instructors are essential for this
  • learning is heavily influenced by the context in which it takes place: one critical variable is the quality of course design; another is the role of an expert instructor. These variables are likely to be more important than any choice of technology or delivery mode.

There are therefore multiple audiences for the dissemination of research in online learning:

  • practitioners: teachers and instructors
  • senior managers and administrators in educational institutions
  • computer scientists and entrepreneurs interested in educational services or products
  • government and other funding agencies.

I can suggest a number of ways in which research dissemination can be done, but what is needed is a conversation about

(a) how best to identify the key research findings on online learning around which most experienced practitioners and researchers can agree

(b) the best means to get these messages out to the various stakeholders.

I believe that this is an important role for organizations such as EDEN, EDUCAUSE and ICDE, but it is also a responsibility for every one of us who works in the field and believes passionately in the value of online learning.

The strengths and weaknesses of MOOCs: Part I

© Carson Kahn, 2012

How many times has an author cried: ‘Oh, God, I wish I’d never started on this!’? Well, I wanted to have a short section on MOOCs within a chapter on design models for teaching and learning in my online textbook, ‘Teaching in a Digital Age‘ and it is probably poetic justice that the section on MOOCs is now ballooning into a monster of its own.

Although I don’t want to inflate the importance of MOOCs, I fear I’m probably going to have to devote a whole chapter to the topic. (Well, I do have to agree that the topic is relevant to teaching in a digital age.) However, whether MOOCs get their own chapter may well depend on how you, my readers, react to what I’m writing, which I’m putting into this blog via a series of posts.

I’ve already published two posts, one on the key design features of MOOCs in general, and another on the differences between cMOOCs and xMOOCs that has already generated quite a lot of heated comments. Here I’m posting the first part of my discussion of the strengths and weaknesses of MOOCs. I’ll do another couple of posts to wrap it up (I desperately hope).

Strengths and weaknesses of MOOCs

Because at the time of writing most MOOCs are less than three years old, there are not many research publications on MOOCs, although research activities are now beginning to pick up. Much of the research so far on MOOCs comes from the institutions offering MOOCs, mainly in the form of reports on enrolments. The commercial platform providers such as Coursera and Udacity have provided limited research information overall, which is a pity, because they have access to really big data sets. However, MIT and Harvard, the founding partners in edX, are conducting some research, mainly on their own courses. There is very little research to date on cMOOCs, and what there is is mainly qualitative.

However, wherever possible, I have tried to use any research that has been done that provides insight into the strengths and weaknesses of MOOCs. At the same time, we should be clear that we are discussing a phenomenon that to date has been marked largely by political, emotional and often irrational discourse, and in terms of hard evidence, we will have to wait for some time. Thus any analysis must also address philosophical or value issues, which is a sure recipe for generating heated discussion.

Lastly, it should be remembered that in evaluating MOOCs I am applying the criterion of whether they are likely to lead to the kinds of learning needed in a digital age: in other words, do they help develop the knowledge and skills defined in Chapter 1 of Teaching in a Digital Age?

1. Open and free education

MOOCs, particularly xMOOCs, deliver high quality content from some of the world’s best universities for free to anyone with a computer and an Internet connection. This in itself is an amazing value proposition. In this sense, MOOCs are an incredibly valuable addition to educational provision. Who could argue against this? Certainly not me, so long as the argument for MOOCs goes no further.

However, this is not the only form of open and free education. Libraries, open textbooks and educational broadcasting are also open and free to end users and have been for some time, even if they do not have the same power and reach as Internet-based delivery. There are also lessons we can learn from these earlier forms of open and free education that also apply to MOOCs.

The first is that these earlier forms of open and free did not replace the need for formal, credit-based education, but were used to supplement or strengthen it. In other words, MOOCs are a tool for continuing and informal education, which has high value in its own right.

The second lesson is that there have been many attempts in the past to use open and massive education through educational broadcasting and satellite broadcasting in Third World countries (see Bates, 1985), and they all failed miserably for a variety of reasons, the most important being:

  • the high cost of ground equipment (especially security)
  • the need for local support for learners without high levels of education, and its high cost
  • the need to adapt to the culture of the receiving countries
  • the difficulty of covering the operational costs of management and administration, especially for assessment, qualifications and local accreditation.

Also, the priority in most Third World countries is not courses from high-level Stanford University professors, but programs for elementary and high schools. Finally, while mobile phones are widespread in Africa, they operate on very narrow bandwidths. For instance, it costs US$2 to download a typical YouTube video – equivalent to a day’s salary for many Africans. Streamed 50-minute video lectures therefore have limited applicability.

This is not to say that MOOCs could not be valuable in Third World countries. They have features, such as integrated interaction, testing and feedback, and much lower cost, that make them a more powerful medium than educational broadcasting, but they will still face the same challenges as educational broadcasting:

  • being realistic as to what they can actually deliver to countries with no or limited technology infrastructure
  • working in partnership with Third World educational institutions and systems and other partners
  • ensuring that the necessary local support – which costs real money – is put in place
  • adapting the design, content and delivery of MOOCs to the cultural and economic requirements of those countries.

Also, MOOCs need to be compared to other possible ways of delivering mass education in developing countries, within these parameters. The problem comes when it is argued that because MOOCs are open and free to end-users, they will inevitably force down the cost of conventional education, or eliminate the need for it altogether, especially in Third World countries.

Lastly, and very importantly, in many countries, all public education is already in essence open to all and in many cases free to those participating, if grants, endowments and other forms of state support to students are taken into account. MOOCs then will have to deliver the same quality or better at a lower price than public education if they are to replace it. I will return to this point later when I discuss their costs and the political and social issues around MOOCs.

2. The audience that MOOCs mainly serve

In a research report from Ho et al. (2014), researchers at Harvard University and MIT found that on the first 17 MOOCs offered through edX, 66 per cent of all participants, and 74 per cent of all who obtained a certificate, had a bachelor’s degree or above, 71 per cent were male, and the average age was 26. This and other studies also found that a high proportion of participants came from outside the USA, ranging from 40-60 per cent of all participants, indicating strong interest internationally in open access to high quality university teaching.

In a study based on over 80 interviews in 62 institutions ‘active in the MOOC space’, Hollands and Tirthali (2014), researchers at Teachers College, Columbia University, concluded that:

Data from MOOC platforms indicate that MOOCs are providing educational opportunities to millions of individuals across the world. However, most MOOC participants are already well-educated and employed, and only a small fraction of them fully engages with the courses. Overall, the evidence suggests that MOOCs are currently falling far short of “democratizing” education and may, for now, be doing more to increase gaps in access to education than to diminish them.

Thus MOOCs, as is common with most forms of university continuing education, cater to the better educated, older and employed sectors of society.

3. Persistence and commitment

Hill (2013) identified five types of participants in Coursera courses:

© Phil Hill, 2013

The edX researchers (Ho et al., 2014) provided empirical support for Hill’s analysis. They identified different levels of commitment as follows across 17 edX MOOCs:

  • Only Registered: Registrants who never access the courseware (35%).
  • Only Viewed: Non-certified registrants who access the courseware, accessing less than half of the available chapters (56%).
  • Only Explored: Non-certified Registrants who access more than half of the available chapters in the courseware, but did not get a certificate (4%).
  • Certified: Registrants who earn a certificate in the course (5%).

Engle (2014) found similar patterns for the UBC MOOCs on Coursera (also replicated in other studies):

  • of those that initially sign up, between one third and a half do not participate in any other active way
  • of those that participate in at least one activity, between 5-10% go on to successfully complete a certificate

Those going on to achieve certificates are usually within the 5-10 per cent range of those who sign up, and in the 10-20 per cent range of those who actively engage with the MOOC at least once. Nevertheless, the numbers obtaining certificates are still large in absolute terms: over 43,000 across 17 courses on edX and 8,000 across four courses at UBC (between 2,000 and 2,500 certificates per course).
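
These figures hang together arithmetically, as a quick back-of-envelope check shows (using only the rounded numbers quoted above):

```python
# Back-of-envelope check on the certification funnel, using only the
# rounded figures quoted in the paragraph above.
edx_certificates = 43_000
certified_rate = 0.05                                   # ~5% of all who signed up
print(f"Implied edX registrants: ~{edx_certificates / certified_rate:,.0f}")  # ~860,000
print(f"Certificates per edX course: ~{edx_certificates / 17:,.0f}")          # ~2,529
print(f"Certificates per UBC course: ~{8_000 / 4:,.0f}")                      # 2,000
```

In other words, low percentage completion and large absolute reach are the same phenomenon seen from two ends – a point I come back to below.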

Milligan et al. (2013) found a similar pattern of commitment in cMOOCs, from interviewing a relatively small sample of participants (29 out of 2,300 registrants) about halfway through a cMOOC:

  • passive participants: in Milligan’s study, these were those who felt lost in the MOOC and logged in only occasionally.
  • lurkers: they were actively following the course but did not engage in any of the activities (these were just under half those interviewed)
  • active participants (again, just under half those interviewed) who were fully engaged in the course activities.

MOOC participation and persistence rates need to be judged for what they are: a distinctive – and valuable – form of non-formal education. Once again, these results are very similar to research into non-formal educational broadcasts (e.g. the History Channel). One would not expect a viewer to watch every episode of a History Channel series then take an exam at the end. Ho et al. (p.13) produced the following diagram to show the different levels of commitment to xMOOCs:

Ho et al., 2014

Now compare that to what I wrote in 1985 about educational broadcasting in Britain:

(p.99): At the centre of the onion is a small core of fully committed students who work through the whole course, and, where available, take an end-of-course assessment or examination. Around the small core will be a rather larger layer of students who do not take any examination but do enrol with a local class or correspondence school. There may be an even larger layer of students who, as well as watching and listening, also buy the accompanying textbook, but who do not enrol in any courses. Then, by far the largest group, are those that just watch or listen to the programmes. Even within this last group, there will be considerable variations, from those who watch or listen fairly regularly, to those, again a much larger number, who watch or listen to just one programme. 

I also wrote (p.100):

A sceptic may say that the only ones who can be said to have learned effectively are the tiny minority that worked right through the course and successfully took the final assessment…A counter argument would be that broadcasting can be considered successful if it merely attracts viewers or listeners who might otherwise have shown no interest in the topic; it is the numbers exposed to the material that matter…the key issue then is whether broadcasting does attract to education those who would not otherwise have been interested, or merely provides yet another opportunity for those who are already well educated…There is a good deal of evidence that it is still the better educated in Britain and Europe that make the most use of non-formal educational broadcasting.

Exactly the same could be said about MOOCs. In a digital age where easy and open access to new knowledge is critical for those working in knowledge-based industries, MOOCs will be one valuable source or means of accessing that knowledge. The issue is though whether there are more effective ways to do this.

Furthermore, percentages, completion and certification DO matter if MOOCs are being seen as a substitute or a replacement for formal education. Thus MOOCs are a useful – but not really revolutionary – contribution to non-formal continuing education. We need though to look at whether they can meet the demands of more formal education, in terms of ensuring as many students succeed as possible.

To come

I think that’s more than enough for today. In my next post, I will try to cover the following strengths and weaknesses of MOOCs:

4. What do participants learn in MOOCs?

5. Costs and economies of scale

6. Branding

7. Ethical issues

8. Meeting the needs of learners in a digital age.

I will probably then do another short post on:

a. The politico-economic context that drives the MOOC phenomena

b. a short summary.

Over to you

Remembering that this is less than half the section on strengths and weaknesses, and that the criterion I am using for this is the ability of MOOCs to meet the learning needs of a digital age:

1. Are these the right topics for assessing MOOCs’ strengths and weaknesses?

2. Would you have discussed these three topics differently? Do you agree or disagree with my conclusions?

3. Is ‘the ability of MOOCs to meet the learning needs of a digital age’ a fair criterion and if not how should they be judged?

4. Is the educational broadcasting comparison fair or relevant?

References

Bates, A. (1985) Broadcasting in Education: An Evaluation London: Constable

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Friedman, T. (2013) Revolution hits the universities, New York Times, January 26

Hill, P. (2013) Some validation of MOOC student patterns graphic, e-Literate, August 30

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Hollands, F. and Tirthali, D. (2014) MOOCs: Expectations and Reality New York: Teachers College, Columbia University, Center for Benefit-Cost Studies of Education, 211 pp

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, Merlot Journal of Online Learning and Teaching, Vol. 9, No. 2

Yousef, A. et al. (2014) MOOCs: A Review of the State-of-the-Art Proceedings of 6th International Conference on Computer Supported Education – CSEDU 2014, Barcelona, Spain