October 22, 2014

The strengths and weaknesses of MOOCs: Part I

© Carson Kahn, 2012

How many times has an author cried: ‘Oh, God, I wish I’d never started on this!’? Well, I wanted to have a short section on MOOCs within a chapter on design models for teaching and learning in my online textbook, ‘Teaching in a Digital Age’, and it is probably poetic justice that the section on MOOCs is now ballooning into a monster of its own.

Although I don’t want to inflate the importance of MOOCs, I fear I’m probably going to have to devote a whole chapter to the topic. (Well, I do have to agree that the topic is relevant to teaching in a digital age.) However, whether MOOCs get their own chapter may well depend on how you, my readers, react to what I’m writing, which I’m putting into this blog via a series of posts.

I’ve already published two posts: one on the key design features of MOOCs in general, and another on the differences between cMOOCs and xMOOCs, which has generated quite a lot of heated comment. Here I’m posting the first part of my discussion of the strengths and weaknesses of MOOCs. I’ll do another couple of posts to wrap it up (I desperately hope).

Strengths and weaknesses of MOOCs

Because at the time of writing most MOOCs are less than three years old, there are not many research publications on MOOCs, although research activity is now beginning to pick up. Much of the research so far comes from the institutions offering MOOCs, mainly in the form of reports on enrolments. The commercial platform providers such as Coursera and Udacity have provided limited research information overall, which is a pity, because they have access to really big data sets. However, MIT and Harvard, the founding partners in edX, are conducting some research, mainly on their own courses. There is very little research to date on cMOOCs, and what does exist is mainly qualitative.

However, wherever possible I have tried to draw on whatever research provides insight into the strengths and weaknesses of MOOCs. At the same time, we should be clear that we are discussing a phenomenon that to date has been marked largely by political, emotional and often irrational discourse, and for hard evidence we will have to wait some time yet. Thus any analysis must also address philosophical or value issues, which is a sure recipe for generating heated discussion.

Lastly, it should be remembered that in evaluating MOOCs I am applying the criterion of whether they are likely to lead to the kinds of learning needed in a digital age: in other words, do they help develop the knowledge and skills defined in Chapter 1 of Teaching in a Digital Age?

1. Open and free education

MOOCs, particularly xMOOCs, deliver high quality content from some of the world’s best universities for free to anyone with a computer and an Internet connection. This in itself is an amazing value proposition. In this sense, MOOCs are an incredibly valuable addition to educational provision. Who could argue against this? Certainly not me, so long as the argument for MOOCs goes no further.

However, this is not the only form of open and free education. Libraries, open textbooks and educational broadcasting are also open and free to end users and have been for some time, even if they do not have the same power and reach as Internet-based delivery. There are also lessons we can learn from these earlier forms of open and free education that also apply to MOOCs.

The first is that these earlier forms of open and free did not replace the need for formal, credit-based education, but were used to supplement or strengthen it. In other words, MOOCs are a tool for continuing and informal education, which has high value in its own right.

The second lesson is that there have been many attempts in the past to deliver open and massive education through educational and satellite broadcasting in Third World countries (see Bates, 1985), and they all failed miserably for a variety of reasons, the most important being:

  • the high cost of ground equipment (especially security),
  • the need for local support for learners without high levels of education, and its high cost
  • the need to adapt to the culture of the receiving countries
  • the difficulty of covering the operational costs of management and administration, especially for assessment, qualifications and local accreditation.

Also, the priority in most Third World countries is not courses from high-level Stanford University professors, but programs for elementary and high schools. Finally, while mobile phones are widespread in Africa, they operate on very narrow bandwidths. For instance, it costs US$2 to download a typical YouTube video – equivalent to a day’s salary for many Africans. Streamed 50-minute video lectures therefore have limited applicability.

This is not to say that MOOCs could not be valuable in Third World countries. They have features, such as integrated interaction, testing and feedback, and much lower cost, that make them a more powerful medium than educational broadcasting, but they will still face the same challenges as educational broadcasting:

  • being realistic as to what they can actually deliver to countries with no or limited technology infrastructure
  • working in partnership with Third World educational institutions and systems and other partners
  • ensuring that the necessary local support – which costs real money – is put in place
  • adapting the design, content and delivery of MOOCs to the cultural and economic requirements of those countries.

Also, MOOCs need to be compared to other possible ways of delivering mass education in developing countries, within these parameters. The problem comes when it is argued that because MOOCs are open and free to end-users, they will inevitably force down the cost of conventional education, or eliminate the need for it altogether, especially in Third World countries.

Lastly, and very importantly, in many countries, all public education is already in essence open to all and in many cases free to those participating, if grants, endowments and other forms of state support to students are taken into account. MOOCs then will have to deliver the same quality or better at a lower price than public education if they are to replace it. I will return to this point later when I discuss their costs and the political and social issues around MOOCs.

2. The audience that MOOCs mainly serve

In a research report from Ho et al. (2014), researchers at Harvard University and MIT found that on the first 17 MOOCs offered through edX, 66 per cent of all participants, and 74 per cent of all who obtained a certificate, had a bachelor’s degree or above, 71 per cent were male, and the average age was 26. This and other studies also found that a high proportion of participants came from outside the USA, ranging from 40-60 per cent of all participants, indicating strong interest internationally in open access to high quality university teaching.

In a study based on over 80 interviews in 62 institutions ‘active in the MOOC space’, Hollands and Tirthali (2014), researchers at Columbia University Teachers’ College, concluded that:

Data from MOOC platforms indicate that MOOCs are providing educational opportunities to millions of individuals across the world. However, most MOOC participants are already well-educated and employed, and only a small fraction of them fully engages with the courses. Overall, the evidence suggests that MOOCs are currently falling far short of “democratizing” education and may, for now, be doing more to increase gaps in access to education than to diminish them.

Thus MOOCs, as is common with most forms of university continuing education, cater to the better educated, older and employed sectors of society.

3. Persistence and commitment

Hill (2013) identified five types of participants in Coursera courses:

© Phil Hill, 2013

© Phil Hill, 2013

The edX researchers (Ho et al., 2014) provided empirical support for Hill’s analysis. Across 17 edX MOOCs they identified the following levels of commitment (a rough classification sketch follows the list):

  • Only Registered: registrants who never accessed the courseware (35%).
  • Only Viewed: non-certified registrants who accessed the courseware but viewed less than half of the available chapters (56%).
  • Only Explored: non-certified registrants who accessed more than half of the available chapters but did not earn a certificate (4%).
  • Certified: registrants who earned a certificate in the course (5%).
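
To make the taxonomy concrete, here is a minimal sketch of how a registrant might be binned into these four categories from simple activity data. This is my own illustration, not the edX researchers’ code, and the function and field names are assumptions:

```python
# Illustrative only: bins a registrant into the four commitment categories
# above from two simple measures. Thresholds follow the definitions in the list.

def classify_registrant(chapters_accessed: int, total_chapters: int,
                        certified: bool) -> str:
    if certified:
        return "Certified"
    if chapters_accessed == 0:
        return "Only Registered"
    if chapters_accessed < total_chapters / 2:
        return "Only Viewed"
    return "Only Explored"

# Example: a registrant who opened 3 of 12 chapters and earned no certificate
print(classify_registrant(3, 12, certified=False))  # "Only Viewed"
```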

Engle (2014) found similar patterns for the UBC MOOCs on Coursera (also replicated in other studies):

  • of those who initially sign up, between one third and a half do not participate in any other active way
  • of those who participate in at least one activity, between 5 and 10 per cent go on to successfully earn a certificate

Those going on to achieve certificates are usually in the 5-10 per cent range of those who sign up and in the 10-20 per cent range of those who actively engaged with the MOOC at least once. Nevertheless, the numbers obtaining certificates are still large in absolute terms: over 43,000 across 17 courses on edX and 8,000 across four courses at UBC (between 2,000 and 2,500 certificates per course).

Milligan et al. (2013) found a similar pattern of commitment in cMOOCs, from interviewing a relatively small sample of participants (29 out of 2,300 registrants) about halfway through a cMOOC:

  • passive participants: in Milligan’s study these were participants who felt lost in the MOOC and logged in only occasionally
  • lurkers: participants who were actively following the course but did not engage in any of the activities (just under half of those interviewed)
  • active participants (again, just under half of those interviewed), who were fully engaged in the course activities.

MOOC participation and persistence rates need to be judged for what they are: a distinctive – and valuable – form of non-formal education. Once again, these results are very similar to findings from research into non-formal educational broadcasting (e.g. the History Channel). One would not expect a viewer to watch every episode of a History Channel series and then take an exam at the end. Ho et al. (p.13) produced the following diagram to show the different levels of commitment to xMOOCs:

Ho et al., 2014

Now compare that to what I wrote in 1985 about educational broadcasting in Britain:

(p.99): At the centre of the onion is a small core of fully committed students who work through the whole course, and, where available, take an end-of-course assessment or examination. Around the small core will be a rather larger layer of students who do not take any examination but do enrol with a local class or correspondence school. There may be an even larger layer of students who, as well as watching and listening, also buy the accompanying textbook, but who do not enrol in any courses. Then, by far the largest group, are those that just watch or listen to the programmes. Even within this last group, there will be considerable variations, from those who watch or listen fairly regularly, to those, again a much larger number, who watch or listen to just one programme. 

I also wrote (p.100):

A sceptic may say that the only ones who can be said to have learned effectively are the tiny minority that worked right through the course and successfully took the final assessment…A counter argument would be that broadcasting can be considered successful if it merely attracts viewers or listeners who might otherwise have shown no interest in the topic; it is the numbers exposed to the material that matter…the key issue then is whether broadcasting does attract to education those who would not otherwise have been interested, or merely provides yet another opportunity for those who are already well educated…There is a good deal of evidence that it is still the better educated in Britain and Europe that make the most use of non-formal educational broadcasting.

Exactly the same could be said about MOOCs. In a digital age where easy and open access to new knowledge is critical for those working in knowledge-based industries, MOOCs will be one valuable source of, or means of accessing, that knowledge. The issue, though, is whether there are more effective ways to do this.

Furthermore, completion percentages and certification DO matter if MOOCs are being seen as a substitute or replacement for formal education. Thus MOOCs are a useful – but not really revolutionary – contribution to non-formal continuing education. We do, though, need to look at whether they can meet the demands of more formal education, in terms of ensuring that as many students as possible succeed.

To come

I think that’s more than enough for today. In my next post, I will try to cover the following strengths and weaknesses of MOOCs:

4. What do participants learn in MOOCs?

5. Costs and economies of scale

6. Branding

7. Ethical issues

8. Meeting the needs of learners in a digital age.

I will probably then do another short post on:

a. The politico-economic context that drives the MOOC phenomenon

b. A short summary.

Over to you

Remembering that this is less than half the section on strengths and weaknesses, and that the criterion I am using for this is the ability of MOOCs to meet the learning needs of a digital age:

1. Are these the right topics for assessing MOOCs’ strengths and weaknesses?

2. Would you have discussed these three topics differently? Do you agree or disagree with my conclusions?

3. Is ‘the ability of MOOCs to meet the learning needs of a digital age’ a fair criterion and if not how should they be judged?

4. Is the educational broadcasting comparison fair or relevant?

References

Bates, A. (1985) Broadcasting in Education: An Evaluation London: Constable

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Friedman, T. (2013) Revolution hits the universities, New York Times, January 26

Hill, P. (2013) Some validation of MOOC student patterns graphic, e-Literate, August 30

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Hollands, F. and Tirthali, D. (2014) MOOCs: Expectations and Reality New York: Columbia University Teachers’ College, Center for Benefit-Cost Studies of Education, 211 pp

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, Merlot Journal of Online Learning and Teaching, Vol. 9, No. 2

Yousef, A. et al. (2014) MOOCs: A Review of the State-of-the-Art Proceedings of 6th International Conference on Computer Supported Education – CSEDU 2014, Barcelona, Spain

What students learned from an MIT physics MOOC

Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Why this paper?

I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:

  • it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
  • as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
  • the paper indicates, in line with many other comparisons between modes of delivery, that the conditions which are associated with the context of teaching are more important than just the mode of delivery
  • I had to read this paper carefully for my book on ‘Teaching in a Digital Age’, but for reasons of space I will not be able to go into detail on it in the book, so I might as well share my full analysis with you.

What was the course?

8.MReV – Mechanics ReView, an introduction to Newtonian mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed the introductory Newtonian mechanics course in the fall. In other words, it is based on a second-chance course for MIT campus students.

The online version was offered in the summer semester as a free, open access course through edX and was aimed particularly at high school physics teachers but also to anyone else interested. The course consisted of the following components:

  • an online eText, especially designed for the course
  • reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
  • an online discussion area/forum
  • mainly multiple-choice online tests and ‘quizzes’, interspersed on a weekly basis throughout the course.

Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were “passing” or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course. 1,030 students earned certificates.

Thus the study measured only the learning of the most successful online students (in terms of completing the online course).

Methodology (summary)

The study measured primarily ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that generally can be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.

Two methods of measuring learning were used: a comparison between each student’s pre-test and post-test scores, to measure the learning gain during the course; and an analysis based on Item Response Theory (IRT), which does not show absolute learning (as measured by pre-post testing) but rather improvement relative to the “class average”.
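
A brief note on the first measure: in physics education research the pre/post gain is conventionally reported as a normalized gain, i.e. the fraction of the possible improvement a student actually achieves. Here is a minimal sketch, on the assumption (mine, not a claim from the paper) that this is the scale behind the average gain of about 0.3 that I discuss below:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake-style normalized gain: the share of possible improvement achieved.

    A common pre/post measure in physics education research; I am assuming,
    not asserting, that the reported average gain of ~0.3 is on this scale.
    """
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Example: 50% on the pre-test and 65% on the post-test gives a gain of 0.3,
# i.e. the student closed 30% of the gap between the pre-test score and 100%.
print(normalized_gain(50, 65))  # 0.3
```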

Because of the large number of MOOC participants included in the study, the researchers were able to analyse performance across various ‘cohorts’ within the MOOC, such as:

  • physics teachers
  • not physics teachers
  • physics background
  • no physics background
  • college math
  • no math
  • post-graduate qualification
  • bachelor degree
  • no more than high school

Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:

  • four hours of instruction in which staff interacted with small groups of students (a flipped classroom) each week,
  • staff office hours,
  • help from fellow students,
  • available physics tutors,
  • MIT library

Main results (summary)

  • gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes and lower than (but closer to) those found in ‘interactive’ classes, although this result is hedged around with some considerable qualifications (‘more studies on MOOCs need to be done to confirm this’).
  • in spite of the extra instruction that the on-campus students had, there was no evidence of positive weekly relative improvement of the on-campus students compared with the online students (indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse).
  • there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts.

Conclusions

This is a valuable research report, carefully conducted and cautiously interpreted by the authors. Precisely because of this, though, it is really important not to jump to conclusions. In particular, the authors’ own caution at the end of the paper should be noted:

It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.

To this I would add that the design of this MOOC was somewhat different to many other xMOOCs in that it was based on online texts specially designed for the MOOC, and not on video lectures.

I’m still not sure from reading the paper how much students actually learned from the MOOC. About 1,000 who finished the course got a certificate, but it is difficult to interpret the gain in knowledge. The statistical measurement of an average gain of 0.3 doesn’t mean a lot on its own. There is some mention of the difference being between a B and a B+, but I have probably misinterpreted that. If that is the case, though, I would certainly expect students taking a 13-week course to do much better. It would have been more helpful to have graded students on the pre-test and then compared those grades with their grades on the post-test. We could then see, for instance, whether gains were in the order of at least one grade.

Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.


Review of ‘Online Distance Education: Towards a Research Agenda.’

Drop-out: the elephant in the DE room that no-one wants to talk about

Zawacki-Richter, O. and Anderson, T. (eds.) (2014) Online Distance Education: Towards a Research Agenda Athabasca AB: AU Press, 508 pp

It is somewhat daunting to review a book of over 500 pages of research on any topic. I doubt that anyone other than the editors is likely to read this book from cover to cover. It is more likely to be kept on one’s bookshelf (if these still exist in a digital age) for reference whenever needed. Nevertheless, this is an important work that anyone working in online learning needs to be aware of, so I will do my best to cover it as comprehensively as I can.

Structure of the book

The book is a collection of about 20 chapters by a variety of authors (more on the choice of authors later). Based on a Delphi study and an analysis of ‘key research journals’ in the field, the editors have organized the topic into three sections, each with a set of chapters on its sub-topics, as follows:

1. Macro-level research: distance education systems and theories

  • access, equity and ethics
  • globalization and cross-cultural issues
  • distance teaching systems and institutions
  • theories and models
  • research methods and knowledge transfer

2. Meso-level research: management, organization and technology

  • management and organization
  • costs and benefits
  • educational technology
  • innovation and change
  • professional development and faculty support
  • learner support services
  • quality assurance

3. Micro-level: teaching and learning in distance education

  • instructional/learning design
  • interaction and communication
  • learner characteristics.

In addition, there is a very useful preface from Otto Peters, an introductory chapter by the editors in which they justify their structural organization of the research, and a short conclusion that calls for a systematic research agenda for online distance education.

More importantly, perhaps, Terry Anderson and Olaf Zawacki-Richter demonstrate empirically that research in this field has been skewed towards micro-level research (about half of all publications). Interestingly, and somewhat surprisingly given its importance, the costs and benefits of online distance education make up the least researched area.

What I liked

It is somewhat invidious to pick out particular chapters, because different people will have different interests from such a wide-ranging list of topics. I have tended to choose those that I found were new and/or particularly enlightening for me, but other readers’ choices will be different. However, by selecting a few excellent chapters, I hope to give some idea of the quality of the book.

1. The structuring/organization of research

Anderson and Zawacki-Richter have done an excellent job in providing a structural framework for research in this field. This will be useful for those teaching about online and distance education, and in particular for potential Ph.D. students wondering what to study. This book will provide an essential starting point.

2. Summary of the issues in each area of research

Again, the editors have done an excellent job in their introductory chapter of summarizing the content of each of the chapters that follow, and in so doing pulling out the key themes and issues within each area of research. This alone makes the book worthwhile.

3. Globalization, Culture and Online Distance Education

Charlotte (Lani) Gunawardena of the University of New Mexico has written the most comprehensive and deep analysis of this issue that I have seen, and it is an area in which I have a great deal of interest, since most of the online teaching I have done has been with students from around the world, sometimes in multilingual groups.

After a general discussion of the issue of globalization and education, she reviews research in the following areas:

  • diverse educational expectations
  • learners and preferred ways of learning
  • socio-cultural environment and online interaction
  • help-seeking behaviours
  • silence
  • language learning
  • researching culture and online distance learning

This chapter should be required reading for anyone contemplating teaching online.

4. Quality assurance in Online Distance Education

I picked this chapter by Colin Latchem because he is so deeply expert in this field that he is able to make what can be a numbingly boring but immensely important topic a fun read, while at the same time ending with some critical questions about quality assurance. In particular Latchem looks at QA from the following perspectives:

  • definitions of quality
  • accreditation
  • online distance education vs campus-based teaching
  • quality standards
  • transnational online distance education
  • open educational resources
  • costs of QA
  • is online distance education yet good enough?
  • an outcomes approach to QA.

This chapter definitely showcases a master at the top of his game.

5. The elephant in the room: student drop-out

This is a wonderfully funny but ultimately serious argument between Ormond Simpson and Alan Woodley about the elephant in the distance education room that no-one wants to mention. Here they start poking the elephant with some sticks (which they note is not likely to be a career-enhancing move). The basic argument is that institutions should and could do more to reduce drop-out and increase course completion. This chapter also stunned me by providing hard data on the really low completion rates for most open university students. I couldn’t help comparing these with the high completion rates for online credit courses at dual-mode (campus-based) institutions, at least in Canada (which of course are not ‘open’ institutions, in that students must have good high school qualifications).

Woodley’s solution to reducing drop-out is quite interesting (and well argued later in the chapter):

  • make it harder to get in
  • make it harder to get out

In both cases, really practical and not too costly solutions are offered that nevertheless are consistent with open access and high quality teaching.

In summary

The book contains a number of really good chapters that lay out the issues in researching online distance education.

What I disliked

I have to say that I groaned when I first saw the list of contributors: the same old, same old list of distance education experts, with a heavy bias towards open universities. Sure, they are nearly all well-seasoned experts, and there’s nothing wrong with that per se (after all, I see myself as one of them).

But where are the young researchers here, and especially the researchers in open educational resources, MOOCs, social media applications in online learning, and above all researchers from the many campus-based universities now mainstreaming online learning? There is almost nothing in the book about research into blended learning, and flipped classrooms are not even mentioned. OK, the book is about online distance learning, but the barriers and distinctions are coming down with a vengeance. This book will never reach those who most need it: the many campus-based instructors now venturing for the first time into online learning in one way or another. They do not see themselves as primarily distance educators.

And a few of the articles read more like lessons in history than an up-to-date review of research in the field. Readers of this blog will know that I strongly value the history of educational technology and distance learning, but these lessons need to be embedded in the here and now. In particular, the lessons need to be spelled out. It is not enough to know that Stanford University researchers were researching the costs and benefits of educational broadcasting in developing countries as long ago as 1974; what lessons does this hold for some of the outrageous claims being made about MOOCs? A great deal, in fact, but this needs explaining in the context of MOOCs today.

Also, the book is focused solely on post-secondary university education. Where is the research on online distance education in the K-12/school sector or the two-year college/vocational sector? Maybe they are topics for other books, but this is where the real gap exists in research publications on online learning.

Lastly, although the book is reasonably priced for its size (C$40), and is available as an e-text as well as in a fully printed version, what a pity it is not an open textbook that could then be updated and crowd-sourced over time.

Conclusion

This is essential reading for anyone who wants to take a professional, evidence-based approach to online learning (distance or otherwise). It will be particularly valuable for students wanting to do research in this area. The editors have done an incredibly good job of presenting a hugely diverse and scattered area in a clear and structured manner. Many of the chapters are gems of insight and knowledge in the field.

However, we have a huge challenge of knowledge transfer in this field. Repeatedly authors in the book lamented that many of the new entrants to online learning are woefully ignorant of the research previously done in this field. We need a better way to disseminate this research than a 500 page printed text that only those already expert in the field are likely to access. On the other hand, the book does provide a strong foundation from which to find better ways to disseminate this knowledge. Knowledge dissemination in a digital world then is where the research agenda for online learning needs to focus.


Comparing xMOOCs and cMOOCs: philosophy and practice

They’re big: but will they survive? Image: © Wikipedia

The story so far

For my open textbook Teaching in a Digital Age, I am writing a chapter on different design models for teaching and learning. I have started writing the section on MOOCs, and in my previous post, ‘What is a MOOC?’, I gave a brief history and described the key common characteristics of all MOOCs.

In this post I examine the differences in philosophy and practice between xMOOCs and cMOOCs.

Design models for MOOCs

MOOCs are a relatively new phenomenon and as a result are still evolving, particularly in terms of their design. However, the early MOOC courses had readily identifiable designs which still permeate most MOOCs. At the same time, there are two quite different philosophical positions underpinning xMOOCs and cMOOCs, so we need to look at each design model separately.

xMOOCs

I am starting with xMOOCs because at the time of writing they are by far the most common type of MOOC. Because instructors have considerable flexibility in the design of a course, there is a good deal of variation in the details, but in general xMOOCs have the following common design features:

  • specially designed platform software: xMOOCs use specially designed platform software that allows for the registration of very large numbers of participants, provides facilities for the storing and streaming on demand of digital materials, and automates assessment procedures and student performance tracking.
  • video lectures: xMOOCs use the standard lecture mode, but delivered online as recorded video lectures that participants download on demand. These video lectures are normally released on a weekly basis over a period of 10-13 weeks. Initially these were often 50-minute lectures, but as a result of experience some xMOOCs now use shorter recordings (sometimes down to 15 minutes in length), and thus there may be more video segments. Over time, xMOOC courses, as well as the videos, are becoming shorter, some now lasting only five weeks. Various video production methods have been used, including lecture capture (recording face-to-face on-campus lectures, then storing and streaming them on demand), full studio production, and desktop recording by the instructor alone.
  • computer-marked assignments: students complete an online test and receive immediate computerised feedback. These tests are usually offered throughout the course and may be used just for participant feedback, or they may count towards the award of a certificate. Another option is an end-of-course grade or certificate based solely on a final online test. Most xMOOC assignments are based on multiple-choice, computer-marked questions, but some MOOCs have also used text or formula boxes for participants to enter answers, such as code in a computer science course, mathematical formulae, or, in one or two cases, short text answers; in all cases, however, these are computer-marked.
  • peer assessment: some xMOOCs have experimented with assigning students randomly to small groups for peer assessment, especially for more open-ended or more evaluative assignment questions. This has often proved problematic though because of wide variations in expertise between the different members of a group, and because of the different levels of involvement in the course of different participants.
  • supporting materials: sometimes copies of slides, supplementary audio files, urls to other resources, and online articles may be included for downloading by participants.
  • a shared comment/discussion space where participants can post questions, ask for help, or comment on the content of the course.
  • no or very light discussion moderation: the extent to which the discussion or comments are moderated varies probably more than any other feature in xMOOCs, but at its most, moderation is directed at all participants rather than to individuals. Because of the very large numbers participating and commenting, moderation of individual comments by the instructor(s) offering the MOOC is impossible. Some instructors offer no moderation whatsoever, so participants rely on other participants to respond to questions or comments. Some instructors ‘sample’ comments and questions, and post comments in response to these. Some instructors use teaching assistants to comb for or identify common areas of concern shared by a number of participants then the instructor or teaching assistants will respond. However, in most cases, participants moderate each other’s comments or questions.
  • badges or certificates: most xMOOCs award some kind of recognition for successful completion of a course, based on a final computer-marked assessment. However, at the time of writing, MOOC badges or certificates have not been recognised for credit or admission purposes, even by the institutions offering the MOOC and even when the lectures are the same as those for on-campus students. No evidence exists to date about employer acceptance of MOOC qualifications.
  • learning analytics: although to date there has not been a great deal of published information about the use of learning analytics in xMOOCs, the xMOOC platforms have the capacity to collect and analyse ‘big data’ about participants and their performance, enabling, at least in theory, immediate feedback to instructors about areas where the content or design needs improving, and possibly automated cues or hints for individual participants.

xMOOCs therefore primarily use a teaching model focused on the transmission of information, with high quality content delivery, computer-marked assessment (mainly for student feedback purposes), and automation of all key transactions between participants and the learning platform. There is almost no direct interaction between an individual participant and the instructor responsible for the course.

cMOOCs

cMOOCs have a very different educational philosophy from xMOOCs, in that cMOOCs place heavy emphasis on networking and in particular on strong content contributions from the participants themselves.

Key design principles

Downes (2014) has identified four key design principles for cMOOCs:

  • autonomy of the learner: in terms of learners choosing what content or skills they wish to learn, learning is personal, and thus there being no formal curriculum
  • diversity: in terms of the tools used, the range of participants and their knowledge levels, and varied content
  • interactivity: in terms of co-operative learning, communication between participants, resulting in emergent knowledge
  • openness: in terms of access, content, activities and assessment

Thus for the proponents of cMOOCs, learning results not from the transmission of information from an expert to novices, as in xMOOCs, but from sharing of knowledge between participants.

From principles to practice

How these key design principles for cMOOCs are turned into practice is somewhat more difficult to pin down, because cMOOCs depend on an evolving set of practices. Most cMOOCs to date have in fact made some use of ‘experts’, both in the organization and promotion of the MOOC, and in providing ‘nodes’ of content around which discussion tends to revolve. In other words, the design practices of cMOOCs are still more a work in progress than those of xMOOCs.

Nevertheless, I see the following as key design practices to date in cMOOCs:

  • use of social media: partly because most cMOOCs are not institutionally based or supported, they do not at present use a shared platform or platforms but are more loosely supported by a range of ‘connected’ tools and media. These may include a simple online registration system, and the use of web conferencing tools such as Blackboard Collaborate or Adobe Connect, streamed video or audio files, blogs, wikis, ‘open’ learning management systems such as Moodle or Canvas, Twitter, LinkedIn or Facebook, all enabling participants to share their contributions. Indeed, as new apps and social media tools develop, they too are likely to be incorporated into cMOOCs. All these tools are connected through web-based hashtags or other web-based linking mechanisms, enabling participants to identify social media contributions from other participants. Downes (2014) is working on a Learning and Performance Support System that could be used to help both participants and cMOOC organisers to communicate more easily across the whole MOOC and to organise their personal learning. Thus the use of loosely linked/connected social media is a key design practice in cMOOCs (a minimal aggregation sketch follows this list).
  • participant-driven content: in principle, other than a common topic that may be decided by someone wanting to organise a cMOOC, content is decided upon and contributed by the participants themselves, in this sense very much like any other community of practice. In practice though cMOOC organisers (who themselves tend to have some expertise in the topic of the cMOOC) are likely to invite potential participants who have expertise or are known already to have a well articulated approach to a topic to make contributions around which participants can discuss and debate. Other participants choose their own ways to contribute or communicate, the most common being through blog posts, tweets, or comments on other participants’ blog posts, although some cMOOCs use wikis or open source online discussion forums. The key design practice with regard to content is that all participants contribute to and share content.
  • distributed communication: this is probably the most difficult design practice to understand for those not familiar with cMOOCs – and even for those who have participated. With participants numbering in the hundreds or even thousands, each contributing individually through a variety of social media, there is a myriad of interconnections between participants that no single participant can track in total. This results in many sub-conversations, more commonly between two people than in an integrated group discussion, although all conversations are ‘open’ and any other participant can contribute to a conversation if they know it exists. The key design practice with regard to communication, then, is a self-organising network with many sub-components.
  • assessment: there is no formal assessment, although participants may seek feedback from other, more knowledgeable participants, on an informal basis. Basically participants decide for themselves whether what they have learned is appropriate to them.
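
To illustrate the first practice above – many loosely connected tools tied together by a shared course tag – here is a minimal sketch of the aggregation idea. This is my own illustration, not Downes’ software or any actual cMOOC tool, and the hashtag and field names are hypothetical:

```python
# Illustrative only: pulls together contributions from different platforms
# simply by matching a shared course hashtag, which is essentially how
# participants' scattered posts become visible to the whole cMOOC.

from dataclasses import dataclass

@dataclass
class Contribution:
    author: str
    source: str   # e.g. "blog", "twitter", "facebook"
    text: str

COURSE_TAG = "#examplemooc"  # hypothetical course hashtag

def aggregate(contributions: list[Contribution]) -> list[Contribution]:
    """Collect everything carrying the course hashtag, whatever its source."""
    return [c for c in contributions if COURSE_TAG in c.text.lower()]

posts = [
    Contribution("alice", "blog", f"My week 2 reflections {COURSE_TAG}"),
    Contribution("bob", "twitter", "An unrelated tweet about lunch"),
]
print([c.author for c in aggregate(posts)])  # ['alice']
```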

cMOOCs therefore primarily use a networked approach to learning based on autonomous learners connecting with each other across open and connected social media and sharing knowledge through their own personal contributions. There is no pre-set curriculum and no formal teacher-student relationship, either for delivery of content or for learner support. Participants learn from the contributions of others, from the meta-level knowledge generated through the community, and from self-reflection on their own contributions.

This is very much a personal interpretation of how cMOOCs work in practice, based largely on my own experience as a participant. Much more has been written and spoken about the philosophy of cMOOCs than about the implementation of that philosophy, presumably because cMOOC proponents want to leave it open to practitioners to decide how best to put the philosophy into practice.

What is clear though is that Downes was correct in clearly distinguishing cMOOCs from xMOOCs – they are very different beasts.

Coming next to a web page near you

Now for the fun part. Over the next few days I will be writing about the strengths and weaknesses of MOOCs, focusing particularly on the following question:

Can or do MOOCs provide the learning and skills that students will need in the future? 

I can in fact provide you with the short answer now: a resounding NO, for both kinds of MOOC, although one is a bit better than the other! Tune in later for the full details.

Feedback, please

In the meantime, I need to know whether I have got it right in describing the two kinds of MOOCs. Does my description – because that is all it’s meant to be at this stage – match your experience of MOOCs? Have I missed important characteristics? Do I have my facts wrong? Is this useful or is there a better way to approach this topic?

What is a MOOC?

© Giulia Forsythe, 2012 and JISC, 2012

MOOCs as a design model

I have already covered seven different design models for teaching and learning in Chapter 6 of my open textbook, Teaching in a Digital Age. I have dithered a bit over whether MOOCs are a unique design model, because they contain a mix of familiar and somewhat unfamiliar approaches to teaching and learning – and also because there are different forms of MOOCs. I also don’t want to give too much attention to a form of teaching and learning that is already grossly overhyped. However I have decided to bite the bullet. I have to deal with MOOCs somewhere in the book, so a chapter on models of design for teaching and learning seems as good a place as any.

Because this topic is too big for one blog post, I plan a series of three or four posts. I could do a whole book on this topic, but this section of Chapter 6 has to be concise and accurate, while also dealing with the strengths and weaknesses of MOOCs, particularly with regard to meeting the needs of learners in a digital age, which for me means asking the question: can or do MOOCs provide the learning and skills that students will need in the future? Also, please remember this book is aimed at teachers and instructors who are NOT specialists or even experienced in online learning, so the content of this blog post in particular will not come as a surprise to any of my regular readers.

This is the outline I am proposing for my section on MOOCs in Chapter 6:

  • Introduction
  • Brief history
  • Key characteristics of MOOCs
  • the xMOOC design model
  • the cMOOC design model
  • Strengths and weaknesses of MOOCs
  • Personal conclusions, including the political-economic context that has driven the MOOC phenomenon
  • References

I will cover the first three bullets in this post, the design models in one or two more posts, followed by my analysis of MOOCs in my last (couple of) post(s) on this topic.

Introduction

Probably no development in teaching in recent years has been as controversial as the development of Massive Open Online Courses (MOOCs). In 2013, the author Thomas Friedman wrote in the New York Times:

...nothing has more potential to enable us to reimagine higher education than the massive open online course ….For relatively little money, the U.S. could rent space in an Egyptian village, install two dozen computers and high-speed satellite Internet access, hire a local teacher as a facilitator, and invite in any Egyptian who wanted to take online courses with the best professors in the world, subtitled in Arabic…I can see a day soon where you’ll create your own college degree by taking the best online courses from the best professors from around the world ….paying only the nominal fee for the certificates of completion. It will change teaching, learning and the pathway to employment.

Many others have referred to MOOCs as a prime example of the kind of disruptive technology that Clayton Christensen (2010) has argued will change the world of education. Others have argued that MOOCs are not a big deal, just a more modern version of educational broadcasting, that they do not really affect the fundamentals of education, and in particular that they do not address the type of learning needed in the 21st century.

MOOCs can be seen then as either a major revolution in education or just another example of the overblown hyperbole often surrounding technology, particularly in the USA. I shall be arguing that MOOCs are a significant development, but they have severe limitations for developing the knowledge and skills needed in a digital age.

Brief history

Elements of MOOCs have been around for some time. The British Open University, funded by the U.K. government, started offering open degree programs at a distance in 1971, although sadly its degree programs are no longer free. Nevertheless, much of its teaching material is still open through its OpenLearn portal. Some of the British OU’s courses are also quite large (around 5,000 students).

In 2003 the Massachusetts Institute of Technology (MIT) began offering digital video recordings of many of its lectures and accompanying materials, such as slides, for free downloading through its OpenCourseWare (OCW) project. Apple opened iTunes U in its iTunes store in 2007; iTunes U enables educational audio and video files from universities to be downloaded for free, and currently has over 50,000 entries. OpenLearn, OCW and iTunes U are just some examples of open educational resources, free for students (and also instructors) to use in their learning and teaching. However, they are not courses.

Fully online credit courses have been offered by school boards, colleges and universities since 1995, usually in parallel with the on-campus version of the same course. Credit-based online learning has been gaining ground steadily, with annual enrollment in fully online courses growing by between 10 and 20 per cent per year across the higher education system in the USA, resulting in somewhere between 25 and 30 per cent of all credit enrollments by 2012 (Allen and Seaman, 2014; US Department of Education, 2014). However, access to online credit courses requires admission to university and the payment of tuition fees, so although online, they are neither open nor massive.

The term MOOC was used for the first time in 2008 for a course offered by the Extension Division of the University of Manitoba in Canada. This non-credit course, Connectivism and Connective Knowledge (CCK08), was designed by George Siemens, Stephen Downes and Dave Cormier. It enrolled 25 on-campus students who paid a tuition fee, but was also offered online for free as an experiment. Much to the surprise of the instructors, 2,200 students enrolled in the free online version. Downes classified this course, and others like it that followed, as connectivist or cMOOCs, because of their design.

In the fall of 2011, two computer science professors from Stanford University, Sebastian Thrun and Peter Norvig, launched a MOOC, Introduction to AI (artificial intelligence), that attracted over 160,000 enrollments, followed quickly by two other MOOCs, also in computer science, from Stanford instructors Andrew Ng and Daphne Koller. Thrun went on to found Udacity, and Ng and Koller established Coursera. These are for-profit companies using their own specially developed software that handles massive numbers of registrations and provides a platform for the teaching. Udacity and Coursera formed partnerships with other leading universities, in which the universities pay a fee to offer their own MOOCs through these platforms. Udacity has more recently changed direction and is now focusing more on the vocational and corporate training market.

In March 2012, the Massachusetts Institute of Technology (MIT) and Harvard University developed an open source platform for MOOCs called edX, which also acts as a platform for online registration and teaching. edX has also developed partnerships with leading universities, which can offer MOOCs without a direct charge for hosting their courses, although some may pay to become partners in edX. Other platforms for MOOCs, such as the U.K. Open University’s FutureLearn, have also been developed. Because the majority of MOOCs offered through these various platforms are based mainly on video lectures and computer-marked tests, Downes has classified them as xMOOCs, to distinguish them from the more connectivist cMOOCs.

In 2014 there are approximately 1,000 MOOCs available from universities in the USA, and 800 from European institutions. There are now also MOOCs in several languages besides English, mainly Spanish and French.

Key characteristics of MOOCs

All MOOCs have some common features, although we shall see that the term MOOC covers an increasingly wide range of designs.

Massive

In the three years following its launch in 2011, Coursera claimed over 7.5 million sign-ups, with its largest course attracting 240,000 participants. The huge numbers (in the hundreds of thousands) enrolling in the earliest MOOCs have not always been replicated in later MOOCs, but the numbers are still substantial. For instance, in 2013 the University of British Columbia offered several MOOCs through Coursera, with the numbers initially signing up ranging from 25,000 to 190,000 per course (Engle, 2014).

However, even more important than the actual numbers is that in principle MOOCs have infinite scalability. There is technically no limit to their final size, because the marginal cost of adding each extra participant is nil for the institutions offering MOOCs. (In practice this is not quite true, as central technology, backup and bandwidth costs increase, and as we shall see, there can be some knock-on costs for an institution offering MOOCs as numbers increase. However, the cost of each additional participant is so small, given the very large numbers, that it can be more or less ignored). The scalability of MOOCs is probably the characteristic that has attracted the most attention, especially from governments, but it should be noted that this is also a characteristic of broadcast television and radio, so it is not unique to MOOCs.
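
To make the economics concrete, here is a toy calculation (the figures are entirely hypothetical, not actual MOOC budgets) showing why the average cost per participant collapses as enrolment grows when the marginal cost of each extra participant is close to zero:

```python
# Illustrative only: with a fixed, one-off production cost and a near-zero
# marginal cost per extra participant, average cost per participant falls
# roughly in proportion to enrolment.

fixed_production_cost = 100_000   # assumed one-off cost to produce the MOOC
marginal_cost = 0.05              # assumed bandwidth/backup cost per participant

for n in (1_000, 10_000, 100_000):
    average_cost = fixed_production_cost / n + marginal_cost
    print(f"{n:>7,} participants: ~${average_cost:.2f} per participant")
# 1,000 -> ~$100.05; 10,000 -> ~$10.05; 100,000 -> ~$1.05
```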

Open

There are no pre-requisites for participants other than access to a computer/mobile device and the Internet. However, broadband access is essential for xMOOCs that use video streaming, and probably desirable even for cMOOCs. Furthermore, at least for the initial MOOCs, access is free for participants, although an increasing number of MOOCs are charging a fee for assessment leading to a badge or certificate.

However, there is one significant way in which MOOCs offered through Coursera are not fully open. Coursera owns the rights to the materials, so they cannot be repurposed or reused without permission, and the material may be removed from the Coursera site when the course ends. Also, Coursera decides which institutions can host MOOCs on its platform, so access is not open for institutions. edX, on the other hand, is an open source platform, so any institution that joins edX can develop its own MOOCs with its own rules regarding rights to the material. cMOOCs are generally completely open, but since individual participants create much if not all of the material, it is not always clear who owns the rights or how long the MOOC materials will remain available.

It should also be noted that many other kinds of online material are also open and free over the Internet, often in ways that are more accessible for reuse than MOOC material.

Online

MOOCs are offered at least initially wholly online, but increasingly institutions are negotiating with the rights holders to use MOOC materials in a blended format for use on campus. In other words, the institution provides learner support for the MOOC materials through the use of campus-based instructors. For instance at San Jose State University, on-campus students used MOOC materials from Udacity courses, including lectures, readings and quizzes, and then instructors spent classroom time on small-group activities, projects and quizzes to check progress.

Again, though, it should be noted that MOOCs are not unique in offering courses online. There are over 7 million students in the USA alone taking for-credit online courses.

Courses

One characteristic that distinguishes MOOCs from most other open educational resources is that they are organized into a whole course.

However, what this actually means for participants is not exactly clear. Although many MOOCs offer certificates or badges for successful completion of a course, to date these have not been accepted for admission or for credit, even (or especially) by the institutions offering the MOOCs.

Summary

It can be seen that all the key characteristics of MOOCs exist in some form or other outside MOOCs. What makes MOOCs unique though is the combination of the four key characteristics, and in particular the fact that they scale massively and are open and free for participants.

To come

  • the xMOOC design model
  • the cMOOC design model
  • Strengths and weaknesses of MOOCs
  • Personal conclusions, including the political-economic context that has driven the MOOC phenomenon

Over to you

1. Is this an accurate description of MOOCs and their history?

2. Is there something I have left out that needs to be included in this basic description (remembering I will be going into more detail about completion rates, assessment, etc., in describing the strengths and weaknesses)?

Coming next

In a day or two: the design models of xMOOCs and cMOOCs

References

Allen, I. and Seaman, J. (2014) Grade Change: Tracking Online Learning in the United States Wellesley MA: Babson College/Sloan Foundation

Christensen, C. (2010) Disrupting Class, Expanded Edition: How Disruptive Innovation Will Change the Way the World Learns New York: McGraw-Hill

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Friedman, T. (2013) Revolution hits the universities, New York Times, January 26

U.S. Department of Education (2014) Web Tables: Enrollment in Distance Education Courses, by State: Fall 2012 Washington DC: U.S. Department of Education, National Center for Education Statistics