October 31, 2014

Getting ready for the EDEN Research workshop

Oxford: City of Dreaming Spires (Matthew Arnold)

I’m now in England, about to attend the EDEN Research Workshop on research into online learning, which starts tomorrow (Sunday) in Oxford. The event is hosted by the UK Open University, one of the main sources of systematic research on online learning. (EDEN is the European Distance and E-Learning Network.)

This is one of my favourite events, because the aim is to bring together all those in Europe doing research in online learning to discuss their work, the issues and research methods. It’s a great chance for young or new players in the field to make themselves known and connect with other, more experienced, researchers. Altogether there will be about 120 participants, just the right size to get to know everyone over three days. I organised one such EDEN research workshop myself several years ago in Barcelona, when I was working at the Open University of Catalonia, and it was great fun.

The format is very interesting. All the papers are published a week ahead of the workshop, and each author gets just a few minutes in parallel sessions to briefly summarise, with plenty of time for discussion afterwards (what EDEN calls ‘research speed dating’). There are also several research workshops, such as ‘Linking Learning Design with Learning Analytics’, as well as several keynotes (but not too many!). I’m particularly looking forward to Sian Bayne’s ‘Teaching, Research and the More-than-human in Digital Education’. There are also poster sessions, 14 in all.

I am the Chair of the jury for the EDEN award for the best research paper, and also the workshop rapporteur. As a result I have been carefully reading all the papers over the last week, 44 in all, and I’m still trying to work out how to be in several places at the same time so I can cover all the sessions.

As a result, I’ve had to put my book, ‘Teaching in a Digital Age’, on hold for the last few days. However, the EDEN papers have already been so useful, bringing me the latest reviews and updates on research in this area, that it is well worth taking a few more days before getting back to the strengths and weaknesses of MOOCs. I will be much better informed as a result, as there are quite a few research papers on European MOOCs. I will also do a blog post after the conference, summing up what I heard over the three days.

So it looks as though I won’t have much time for dreaming in the city of dreaming spires.


The strengths and weaknesses of MOOCs: Part I

© Carson Kahn, 2012

How many times has an author cried: ‘Oh, God, I wish I’d never started on this!’? Well, I wanted to have a short section on MOOCs within a chapter on design models for teaching and learning in my online textbook, ‘Teaching in a Digital Age’, and it is probably poetic justice that the section on MOOCs is now ballooning into a monster of its own.

Although I don’t want to inflate the importance of MOOCs, I fear I’m probably going to have to devote a whole chapter to the topic. (Well, I do have to agree that the topic is relevant to teaching in a digital age.) However, whether MOOCs get their own chapter may well depend on how you, my readers, react to what I’m writing, which I’m putting into this blog via a series of posts.

I’ve already published two posts, one on the key design features of MOOCs in general, and another on the differences between cMOOCs and xMOOCs, which has generated quite a lot of heated comment. Here I’m posting the first part of my discussion of the strengths and weaknesses of MOOCs. I’ll do another couple of posts to wrap it up (I desperately hope).

Strengths and weaknesses of MOOCs

Because at the time of writing most MOOCs are less than three years old, there are not yet many research publications on MOOCs, although research activity is now beginning to pick up. Much of the research so far comes from the institutions offering MOOCs, mainly in the form of reports on enrolments. The commercial platform providers, such as Coursera and Udacity, have provided limited research information overall, which is a pity, because they have access to really big data sets. However, MIT and Harvard, the founding partners in edX, are conducting some research, mainly on their own courses. There is very little research to date on cMOOCs, and what exists is mainly qualitative.

However, wherever possible, I have tried to use such research as has been done to provide insight into the strengths and weaknesses of MOOCs. At the same time, we should be clear that we are discussing a phenomenon that to date has been marked largely by political, emotional and often irrational discourse; hard evidence will be some time in coming. Thus any analysis must also address philosophical or value issues, which is a sure recipe for generating heated discussion.

Lastly, it should be remembered that in evaluating MOOCs I am applying the criterion of whether they are likely to lead to the kinds of learning needed in a digital age: in other words, do they help develop the knowledge and skills defined in Chapter 1 of Teaching in a Digital Age?

1. Open and free education

MOOCs, particularly xMOOCs, deliver high quality content from some of the world’s best universities for free to anyone with a computer and an Internet connection. This in itself is an amazing value proposition. In this sense, MOOCs are an incredibly valuable addition to educational provision. Who could argue against this? Certainly not me, so long as the argument for MOOCs goes no further.

However, this is not the only form of open and free education. Libraries, open textbooks and educational broadcasting are also open and free to end users and have been for some time, even if they do not have the same power and reach as Internet-based delivery. There are also lessons we can learn from these earlier forms of open and free education that also apply to MOOCs.

The first is that these earlier forms of open and free did not replace the need for formal, credit-based education, but were used to supplement or strengthen it. In other words, MOOCs are a tool for continuing and informal education, which has high value in its own right.

The second lesson is that there have been many attempts in the past to deliver open and massive education through terrestrial and satellite broadcasting in Third World countries (see Bates, 1985), and they all failed miserably, for a variety of reasons, the most important being:

  • the high cost of ground equipment (especially security)
  • the need for local support for learners without high levels of education, and its high cost
  • the need to adapt to the culture of the receiving countries
  • the difficulty of covering the operational costs of management and administration, especially for assessment, qualifications and local accreditation.

Also, the priority in most Third World countries is not courses from high-level Stanford University professors, but programs for elementary and high schools. Finally, while mobile phones are widespread in Africa, they operate on very narrow bandwidths. For instance, it costs US$2 to download a typical YouTube video – equivalent to a day’s salary for many Africans. Streamed 50-minute video lectures therefore have limited applicability.

This is not to say that MOOCs could not be valuable in Third World countries. They have features, such as integrated interaction, testing and feedback, and much lower cost, that make them a more powerful medium than educational broadcasting, but they will still face the same challenges:

  • being realistic as to what they can actually deliver to countries with no or limited technology infrastructure
  • working in partnership with Third World educational institutions and systems and other partners
  • ensuring that the necessary local support – which costs real money – is put in place
  • adapting the design, content and delivery of MOOCs to the cultural and economic requirements of those countries.

Also, MOOCs need to be compared to other possible ways of delivering mass education in developing countries, within these parameters. The problem comes when it is argued that because MOOCs are open and free to end-users, they will inevitably force down the cost of conventional education, or eliminate the need for it altogether, especially in Third World countries.

Lastly, and very importantly, in many countries, all public education is already in essence open to all and in many cases free to those participating, if grants, endowments and other forms of state support to students are taken into account. MOOCs then will have to deliver the same quality or better at a lower price than public education if they are to replace it. I will return to this point later when I discuss their costs and the political and social issues around MOOCs.

2. The audience that MOOCs mainly serve

In a research report from Ho et al. (2014), researchers at Harvard University and MIT found that on the first 17 MOOCs offered through edX, 66 per cent of all participants, and 74 per cent of all who obtained a certificate, had a bachelor’s degree or above, 71 per cent were male, and the average age was 26. This and other studies also found that a high proportion of participants came from outside the USA, ranging from 40-60 per cent of all participants, indicating strong interest internationally in open access to high quality university teaching.

In a study based on over 80 interviews at 62 institutions ‘active in the MOOC space’, Hollands and Tirthali (2014), researchers at Teachers College, Columbia University, concluded that:

Data from MOOC platforms indicate that MOOCs are providing educational opportunities to millions of individuals across the world. However, most MOOC participants are already well-educated and employed, and only a small fraction of them fully engages with the courses. Overall, the evidence suggests that MOOCs are currently falling far short of “democratizing” education and may, for now, be doing more to increase gaps in access to education than to diminish them.

Thus MOOCs, as is common with most forms of university continuing education, cater to the better educated, older and employed sectors of society.

3. Persistence and commitment

Hill (2013) identified five types of participants in Coursera courses:

[Hill’s graphic identifies five types: no-shows, observers, drop-ins, passive participants and active participants] © Phil Hill, 2013

The edX researchers (Ho et al., 2014) provided empirical support for Hill’s analysis. They identified different levels of commitment as follows across 17 edX MOOCs:

  • Only Registered: registrants who never accessed the courseware (35%).
  • Only Viewed: non-certified registrants who accessed less than half of the available chapters (56%).
  • Only Explored: non-certified registrants who accessed more than half of the available chapters but did not earn a certificate (4%).
  • Certified: registrants who earned a certificate in the course (5%).
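
These four mutually exclusive categories are derived from just three facts about each registrant. To make the boundaries concrete, here is a minimal sketch in Python of how such groups could be derived from registration records (the field names are hypothetical; this is not edX’s actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Registrant:
    accessed_courseware: bool    # ever opened any course content
    fraction_of_chapters: float  # share of available chapters accessed (0.0-1.0)
    certified: bool              # earned a certificate

def classify(r: Registrant) -> str:
    """Assign one of Ho et al.'s (2014) four mutually exclusive categories."""
    if r.certified:
        return "Certified"
    if not r.accessed_courseware:
        return "Only Registered"
    if r.fraction_of_chapters <= 0.5:  # exactly half is ambiguous in the
        return "Only Viewed"           # definitions; treated as Viewed here
    return "Only Explored"

# Example: a registrant who viewed a fifth of the chapters but never certified
print(classify(Registrant(True, 0.2, False)))  # -> Only Viewed
```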

Engle (2014) found similar patterns for the UBC MOOCs on Coursera (also replicated in other studies):

  • of those who initially sign up, between one-third and a half do not participate in any other active way
  • of those who participate in at least one activity, between 5 and 10 per cent go on to successfully complete a certificate

Those achieving certificates are usually within the range of 5-10 per cent of those who sign up, and 10-20 per cent of those who actively engage with the MOOC at least once. Nevertheless, the numbers obtaining certificates are still large in absolute terms: over 43,000 across the 17 edX courses, and 8,000 across four courses at UBC (between 2,000 and 2,500 certificates per course).
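
To see how these two ratios compound into an overall certification rate, a quick back-of-the-envelope calculation helps (the figures below are round numbers for illustration, not the actual UBC data):

```python
def overall_certification_rate(never_engage: float, certify_if_active: float) -> float:
    """Share of ALL sign-ups who certify, given the share who never engage
    and the certification rate among those who engage at least once."""
    return (1 - never_engage) * certify_if_active

# e.g. if 40% never engage and 8% of the rest earn a certificate:
print(f"{overall_certification_rate(0.40, 0.08):.1%}")  # -> 4.8%
```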

Milligan et al. (2013) found a similar pattern of commitment in cMOOCs, from interviewing a relatively small sample of participants (29 out of 2,300 registrants) about halfway through a cMOOC:

  • passive participants: those who felt lost in the MOOC and logged in only occasionally
  • lurkers: those actively following the course but not engaging in any of the activities (just under half of those interviewed)
  • active participants (again, just under half of those interviewed), who were fully engaged in the course activities.

MOOC participation and persistence rates need to be judged for what they are: a distinctive – and valuable – form of non-formal education. Once again, these results are very similar to research into non-formal educational broadcasting (e.g. the History Channel). One would not expect a viewer to watch every episode of a History Channel series and then take an exam at the end. Ho et al. (p.13) produced a diagram showing the different levels of commitment to xMOOCs:

Ho et al., 2014

Now compare that to what I wrote in 1985 about educational broadcasting in Britain:

(p.99): At the centre of the onion is a small core of fully committed students who work through the whole course, and, where available, take an end-of-course assessment or examination. Around the small core will be a rather larger layer of students who do not take any examination but do enrol with a local class or correspondence school. There may be an even larger layer of students who, as well as watching and listening, also buy the accompanying textbook, but who do not enrol in any courses. Then, by far the largest group, are those that just watch or listen to the programmes. Even within this last group, there will be considerable variations, from those who watch or listen fairly regularly, to those, again a much larger number, who watch or listen to just one programme. 

I also wrote (p.100):

A sceptic may say that the only ones who can be said to have learned effectively are the tiny minority that worked right through the course and successfully took the final assessment…A counter argument would be that broadcasting can be considered successful if it merely attracts viewers or listeners who might otherwise have shown no interest in the topic; it is the numbers exposed to the material that matter…the key issue then is whether broadcasting does attract to education those who would not otherwise have been interested, or merely provides yet another opportunity for those who are already well educated…There is a good deal of evidence that it is still the better educated in Britain and Europe that make the most use of non-formal educational broadcasting.

Exactly the same could be said about MOOCs. In a digital age, where easy and open access to new knowledge is critical for those working in knowledge-based industries, MOOCs will be one valuable means of accessing that knowledge. The issue, though, is whether there are more effective ways to do this.

Furthermore, percentages, completion and certification DO matter if MOOCs are seen as a substitute or replacement for formal education. Thus MOOCs are a useful – but not really revolutionary – contribution to non-formal continuing education. We do need, though, to look at whether they can meet the demands of more formal education, in terms of ensuring that as many students as possible succeed.

To come

I think that’s more than enough for today. In my next post, I will try to cover the following strengths and weaknesses of MOOCs:

4. What do participants learn in MOOCs?

5. Costs and economies of scale

6. Branding

7. Ethical issues

8. Meeting the needs of learners in a digital age.

I will probably then do another short post on:

a. The politico-economic context that drives the MOOC phenomena

b. a short summary.

Over to you

Remembering that this is less than half the section on strengths and weaknesses, and that the criterion I am using for this is the ability of MOOCs to meet the learning needs of a digital age:

1. Are these the right topics for assessing MOOCs’ strengths and weaknesses?

2. Would you have discussed these three topics differently? Do you agree or disagree with my conclusions?

3. Is ‘the ability of MOOCs to meet the learning needs of a digital age’ a fair criterion and if not how should they be judged?

4. Is the educational broadcasting comparison fair or relevant?

References

Bates, A. (1985) Broadcasting in Education: An Evaluation London: Constable

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Friedman, T. (2013) Revolution hits the universities, New York Times, January 26

Hill, P. (2013) Some validation of MOOC student patterns graphic, e-Literate, August 30

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Hollands, F. and Tirthali, D. (2014) MOOCs: Expectations and Reality New York: Teachers College, Columbia University, Center for Benefit-Cost Studies of Education, 211 pp

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, Merlot Journal of Online Learning and Teaching, Vol. 9, No. 2

Yousef, A. et al. (2014) MOOCs: A Review of the State-of-the-Art Proceedings of 6th International Conference on Computer Supported Education – CSEDU 2014, Barcelona, Spain

What students learned from an MIT physics MOOC


Colvin, K. et al. (2014) Learning an Introductory Physics MOOC: All Cohorts Learn Equally, Including On-Campus Class, IRRODL, Vol. 15, No. 4

Why this paper?

I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:

  • it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
  • as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
  • the paper indicates, in line with many other comparisons between modes of delivery, that the conditions associated with the context of teaching matter more than the mode of delivery itself
  • I had to read this paper carefully for my book, ‘Teaching in a Digital Age’, but for reasons of space I cannot go into detail there, so I might as well share my full analysis with you.

What was the course?

8.MReV – Mechanics ReView, an introduction to Newtonian mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed Introductory Newtonian Mechanics in the fall. In other words, it is based on a second-chance course for MIT campus students.

The online version was offered in the summer semester as a free, open access course through edX and was aimed particularly at high school physics teachers but also to anyone else interested. The course consisted of the following components:

  • an online eText, especially designed for the course
  • reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
  • an online discussion area/forum
  • mainly multiple-choice online tests and ‘quizzes’, interspersed on a weekly basis throughout the course.

Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were ‘passing’ or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course, of whom 1,030 earned certificates.

Thus the study measured only the learning of the most successful online students (in terms of completing the online course).

Methodology (summary)

The study measured primarily ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that generally can be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.

Two methods were used to test learning: a comparison between each student’s pre-test and post-test scores, to measure the learning gain during the course; and an analysis based on Item Response Theory (IRT), which shows not absolute learning (as measured by pre-post testing) but improvement relative to the ‘class average’.
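
A note on the gain measure reported later: pre-/post-test gains in physics education research are conventionally reported as a Hake-style normalized gain, i.e. the fraction of the available improvement a student actually achieves. Assuming the average gain of 0.3 mentioned below follows this convention (my inference, not something the paper makes explicit), the arithmetic looks like this:

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake-style normalized gain: the fraction of the available improvement
    (max_score - pre) actually achieved between pre-test and post-test."""
    return (post - pre) / (max_score - pre)

# Toy example: a student scoring 50% before the course and 65% after
print(normalized_gain(50, 65))  # -> 0.3 (15 points gained out of 50 available)
```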

Because of the large number of MOOC participants included in the study, the researchers were able to compare performance between various ‘cohorts’ within the MOOC participants, such as:

  • physics teachers
  • not physics teachers
  • physics background
  • no physics background
  • college math
  • no math
  • post-graduate qualification
  • bachelor degree
  • no more than high school

Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:

  • four hours of instruction in which staff interacted with small groups of students (a flipped classroom) each week,
  • staff office hours,
  • help from fellow students,
  • available physics tutors,
  • MIT library

Main results (summary)

  • gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes, and lower than (but closer to) those found in ‘interactive’ classes, although this result is hedged with considerable qualifications (‘more studies on MOOCs need to be done to confirm this’).
  • in spite of the extra instruction that the on-campus students had, there was no evidence of positive, weekly relative improvement of the on-campus students compared with the online students (indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse)
  • there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts

Conclusions

This is a valuable research report, carefully conducted and cautiously interpreted by the authors. It is really important, though, not to jump to conclusions. In particular, the authors’ own caution at the end of the paper should be noted:

It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.

To this I would add that the design of this MOOC was somewhat different from that of many other xMOOCs, in that it was based on online texts specially designed for the MOOC, rather than on video lectures.

I’m still not sure from reading the paper how much students actually learned from the MOOC. The 1,000 or so who finished the course earned a certificate, but the gain in knowledge is difficult to interpret. The statistical measure of an average gain of 0.3 doesn’t mean a lot on its own. There is some mention of the difference being between a B and a B+, though I may have misinterpreted that. If that is the case, though, I would certainly expect students taking a 13-week course to do much better than that. It would have been more helpful to grade students on the pre-test and then compare those grades on the post-test; we could then see whether gains were at least one grade, for instance.

Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.


Review of ‘Online Distance Education: Towards a Research Agenda.’

Drop-out: the elephant in the DE room that no-one wants to talk about

Zawacki-Richter, O. and Anderson, T. (eds.) (2014) Online Distance Education: Towards a Research Agenda Athabasca AB: AU Press, 508 pp

It is somewhat daunting to review a book of over 500 pages of research on any topic. I suspect few besides the editors will read this book from cover to cover. It is more likely to be kept on one’s bookshelf (if these still exist in a digital age) for reference whenever needed. Nevertheless, this is an important work that anyone working in online learning needs to be aware of, so I will do my best to cover it as comprehensively as I can.

Structure of the book

The book is a collection of about 20 chapters by a range of authors (more on the choice of authors later). Based on a Delphi study and an analysis of ‘key research journals’ in the field, the editors have organized the topic into three sections, with a chapter on each sub-section, as follows:

1. Macro-level research: distance education systems and theories

  • access, equity and ethics
  • globalization and cross-cultural issues
  • distance teaching systems and institutions
  • theories and models
  • research methods and knowledge transfer

2. Meso-level research: management, organization and technology

  • management and organization
  • costs and benefits
  • educational technology
  • innovation and change
  • professional development and faculty support
  • learner support services
  • quality assurance

3. Micro-level: teaching and learning in distance education

  • instructional/learning design
  • interaction and communication
  • learner characteristics.

In addition, there is a very useful preface from Otto Peters, an introductory chapter by the editors where they justify their structural organization of research, and a short conclusion that calls for a systematic research agenda in online distance education research.

More importantly, perhaps, Terry Anderson and Olaf Zawacki-Richter demonstrate empirically that research in this field has been skewed towards micro-level research (about half of all publications). Interestingly, and somewhat surprisingly given its importance, the least researched area is the costs and benefits of online distance education.

What I liked

It is somewhat invidious to pick out particular chapters, because different people will have different interests from such a wide-ranging list of topics. I have tended to choose those that I found were new and/or particularly enlightening for me, but other readers’ choices will be different. However, by selecting a few excellent chapters, I hope to give some idea of the quality of the book.

1. The structuring/organization of research

Anderson and Zawacki-Richter have done an excellent job in providing a structural framework for research in this field. This will be useful for those teaching about online and distance education, and in particular for potential Ph.D. students wondering what to study. This book will provide an essential starting point.

2. Summary of the issues in each area of research

Again, the editors have done an excellent job in their introductory chapter in summarizing the content of each of the chapters that follows, and in so doing pulling out the key themes and issues within each area of research. This alone makes the book worthwhile.

3. Globalization, Culture and Online Distance Education

Charlotte (Lani) Gunawardena of the University of New Mexico has written the most comprehensive and deep analysis of this issue that I have seen. It is an area in which I have a great deal of interest, since most of the online teaching I have done has been with students from around the world, often in multilingual groups.

After a general discussion of the issue of globalization and education, she reviews research in the following areas:

  • diverse educational expectations
  • learners and preferred ways of learning
  • socio-cultural environment and online interaction
  • help-seeking behaviours
  • silence
  • language learning
  • researching culture and online distance learning

This chapter should be required reading for anyone contemplating teaching online.

4. Quality assurance in Online Distance Education

I picked this chapter by Colin Latchem because he is so deeply expert in this field that he is able to make what can be a numbingly boring but immensely important topic a fun read, while at the same time ending with some critical questions about quality assurance. In particular Latchem looks at QA from the following perspectives:

  • definitions of quality
  • accreditation
  • online distance education vs campus-based teaching
  • quality standards
  • transnational online distance education
  • open educational resources
  • costs of QA
  • is online distance education yet good enough?
  • an outcomes approach to QA.

This chapter definitely showcases a master at the top of his game.

5. The elephant in the room: student drop-out

This is a wonderfully funny but ultimately serious argument between Ormond Simpson and Alan Woodley about the elephant in the distance education room that no-one wants to mention. Here they start poking the elephant with some sticks (which, they note, is not likely to be a career-enhancing move). The basic argument is that institutions should and could do more to reduce drop-out and increase course completion. This chapter also stunned me by providing hard data on the really low completion rates of most open university students. I couldn’t help comparing these with the high completion rates for online credit courses at dual-mode (campus-based) institutions, at least in Canada (which, of course, are not ‘open’ institutions, in that students must have good high school qualifications).

Woodley’s solution to reducing drop-out is quite interesting (and well argued later in the chapter):

  • make it harder to get in
  • make it harder to get out

In both cases, really practical and not too costly solutions are offered that nevertheless are consistent with open access and high quality teaching.

In summary

The book contains a number of really good chapters that lay out the issues in researching online distance education.

What I disliked

I have to say that I groaned when I first saw the list of contributors: the same old, same old list of distance education experts, with a heavy bias towards open universities. Sure, they are nearly all well-seasoned experts, and there’s nothing wrong with that per se (after all, I see myself as one of them).

But where are the young researchers here, and especially the researchers in open educational resources, MOOCs, social media applications in online learning, and above all researchers from the many campus-based universities now mainstreaming online learning? There is almost nothing in the book about research into blended learning, and flipped classrooms are not even mentioned. OK, the book is about online distance learning, but the barriers and distinctions are coming down with a vengeance. This book will never reach those who most need it: the many campus-based instructors now venturing into online learning for the first time in one way or another. They do not see themselves primarily as distance educators.

And a few of the articles read more like history lessons than up-to-date reviews of research in the field. Readers of this blog will know that I strongly value the history of educational technology and distance learning, but these lessons need to be embedded in the here and now; in particular, the lessons need to be spelled out. It is not enough to know that Stanford University researchers were investigating the costs and benefits of educational broadcasting in developing countries as long ago as 1974; what lessons does that work hold for some of the outrageous claims being made about MOOCs? A great deal, in fact, but this needs explaining in the context of MOOCs today.

Also, the book is solely focused on post-secondary university education. Where is the research on online distance education in the K-12/school sector, or the two-year college/vocational sector? Maybe these are topics for other books, but this is where the real gap exists in research publications on online learning.

Lastly, although the book is reasonably priced for its size (C$40), and is available as an e-text as well as in print, what a pity it is not an open textbook that could be updated and crowd-sourced over time.

Conclusion

This is essential reading for anyone who wants to take a professional, evidence-based approach to online learning (distance or otherwise). It will be particularly valuable for students wanting to do research in this area. The editors have done an incredibly good job of presenting a hugely diverse and scattered area in a clear and structured manner. Many of the chapters are gems of insight and knowledge in the field.

However, we have a huge challenge of knowledge transfer in this field. Authors in the book repeatedly lament that many new entrants to online learning are woefully ignorant of the research previously done in this field. We need a better way to disseminate this research than a 500-page printed text that only those already expert in the field are likely to access. On the other hand, the book does provide a strong foundation from which to find better ways to disseminate this knowledge. Knowledge dissemination in a digital world, then, is where the research agenda for online learning needs to focus.


Comparing xMOOCs and cMOOCs: philosophy and practice

They’re big: but will they survive? Image: © Wikipedia

The story so far

For my open textbook, Teaching in a Digital Age, I am writing a chapter on different design models for teaching and learning. I have started writing the section on MOOCs, and in my previous post, ‘What is a MOOC?’, I gave a brief history and described the key characteristics common to all MOOCs.

In this post I examine the differences in philosophy and practice between xMOOCs and cMOOCs.

Design models for MOOCs

MOOCs are a relatively new phenomenon and, as a result, are still evolving, particularly in terms of their design. However, the early MOOCs had readily identifiable designs that still permeate most MOOCs today. At the same time, two quite different philosophical positions underpin xMOOCs and cMOOCs, so we need to look at each design model separately.

xMOOCs

I am starting with xMOOCs because, at the time of writing, they are by far the most common type of MOOC. Instructors have considerable flexibility in course design, so the details vary widely, but in general xMOOCs have the following common design features:

  • specially designed platform software: xMOOCs use specially designed platform software that allows for the registration of very large numbers of participants, provides facilities for the storing and streaming on demand of digital materials, and automates assessment procedures and student performance tracking.
  • video lectures: xMOOCs use the standard lecture mode, but delivered online through recorded video lectures that participants stream or download on demand, normally on a weekly basis over a period of 10-13 weeks. Initially these were often 50-minute lectures, but with experience some xMOOCs now use shorter recordings (sometimes down to 15 minutes in length), and thus there may be more video segments. Over time, xMOOC courses, as well as their videos, have also become shorter, some now lasting only five weeks. Various video production methods have been used, including lecture capture (recording face-to-face on-campus lectures, then storing and streaming them on demand), full studio production, and desktop recording by the instructor alone.
  • computer-marked assignments: students complete an online test and receive immediate computerised feedback. These tests are usually offered throughout the course, and may be used just for participant feedback, may count towards the award of a certificate, or a grade or certificate may be based solely on an end-of-course online test. Most xMOOC assignments are based on multiple-choice, computer-marked questions, but some MOOCs have also used text or formula boxes for participants to enter answers, such as code in a computer science course or mathematical formulae, and in one or two cases short text answers; in all cases, though, these are computer-marked.
  • peer assessment: some xMOOCs have experimented with assigning students randomly to small groups for peer assessment, especially for more open-ended or more evaluative assignment questions. This has often proved problematic though because of wide variations in expertise between the different members of a group, and because of the different levels of involvement in the course of different participants.
  • supporting materials: sometimes copies of slides, supplementary audio files, URLs to other resources, and online articles may be included for downloading by participants.
  • a shared comment/discussion space where participants can post questions, ask for help, or comment on the content of the course.
  • no or very light discussion moderation: the extent to which the discussion or comments are moderated varies probably more than any other feature of xMOOCs, but even at its heaviest, moderation is directed at all participants rather than at individuals. Because of the very large numbers participating and commenting, moderation of individual comments by the instructor(s) offering the MOOC is impossible. Some instructors offer no moderation whatsoever, so participants rely on each other to respond to questions or comments. Some ‘sample’ comments and questions and post responses to these. Others use teaching assistants to identify common areas of concern shared by a number of participants, to which the instructor or teaching assistants then respond. In most cases, however, participants moderate each other’s comments or questions.
  • badges or certificates: most xMOOCs award some kind of recognition for successful completion of a course, based on a final computer-marked assessment. However, at the time of writing, MOOC badges or certificates have not been recognised for credit or admission purposes, even by the institutions offering the MOOC, and even when the lectures are the same as those given to on-campus students. No evidence exists to date about employer acceptance of MOOC qualifications.
  • learning analytics: although to date not much has been published about the use of learning analytics in xMOOCs, the xMOOC platforms have the capacity to collect and analyse ‘big data’ about participants and their performance, enabling, at least in theory, immediate feedback to instructors about areas where the content or design needs improving, and possibly automated cues or hints for individual participants.

xMOOCs therefore primarily use a teaching model focused on the transmission of information, with high quality content delivery, computer-marked assessment (mainly for student feedback purposes), and automation of all key transactions between participants and the learning platform. There is almost no direct interaction between an individual participant and the instructor responsible for the course.
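
As an illustration of how completely these transactions can be automated, here is a toy sketch of computer-marked assessment with immediate feedback. The question, options and feedback text are invented, and real platforms such as edX implement far more elaborate versions of the same idea:

```python
# A toy sketch of xMOOC-style computer-marked assessment. Answers are checked
# automatically and feedback is immediate; no instructor is involved.

quiz = [
    {"q": "A ball is thrown straight up. At its highest point, its acceleration is:",
     "options": ["zero", "g, directed downward", "g, directed upward"],
     "answer": 1,
     "feedback": "Gravity acts throughout the flight, even at the peak."},
]

def mark(responses: list[int]) -> int:
    """Mark one participant's responses, printing feedback for wrong answers."""
    score = 0
    for item, choice in zip(quiz, responses):
        if choice == item["answer"]:
            score += 1
        else:
            print(f"Incorrect: {item['feedback']}")
    return score

print(mark([0]), "out of", len(quiz))
```

The marginal cost of marking one more participant in this model is effectively zero, which is what makes the ‘massive’ in xMOOCs economically feasible.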

cMOOCs

cMOOCs have a very different educational philosophy from xMOOCs, in that they place heavy emphasis on networking, and in particular on strong content contributions from the participants themselves.

Key design principles

Downes (2014) has identified four key design principles for cMOOCs:

  • autonomy of the learner: learners choose what content or skills they wish to learn; learning is personal, and there is no formal curriculum
  • diversity: in terms of the tools used, the range of participants and their knowledge levels, and varied content
  • interactivity: in terms of co-operative learning, communication between participants, resulting in emergent knowledge
  • openness: in terms of access, content, activities and assessment

Thus for the proponents of cMOOCs, learning results not from the transmission of information from an expert to novices, as in xMOOCs, but from sharing of knowledge between participants.

From principles to practice

How these key design principles are turned into practice is somewhat more difficult to pinpoint, because cMOOCs depend on an evolving set of practices. Most cMOOCs to date have in fact made some use of ‘experts’, both in the organization and promotion of the MOOC, and in providing ‘nodes’ of content around which discussion tends to revolve. In other words, the design practices of cMOOCs are still more a work in progress than those of xMOOCs.

Nevertheless, I see the following as key design practices to date in cMOOCs:

  • use of social media: partly because most cMOOCs are not institutionally based or supported, they do not at present use a shared platform but are more loosely supported by a range of ‘connected’ tools and media. These may include a simple online registration system, web conferencing tools such as Blackboard Collaborate or Adobe Connect, streamed video or audio files, blogs, wikis, ‘open’ learning management systems such as Moodle or Canvas, and Twitter, LinkedIn or Facebook, all enabling participants to share their contributions. Indeed, as new apps and social media tools develop, they too are likely to be incorporated into cMOOCs. All these tools are connected through hashtags or other web-based linking mechanisms, enabling participants to identify contributions from other participants. Downes (2014) is working on a Learning and Performance Support System that could help both participants and cMOOC organisers to communicate more easily across the whole MOOC and to organise their personal learning. Thus the use of loosely connected social media is a key design practice in cMOOCs.
  • participant-driven content: in principle, other than a common topic that may be decided by someone wanting to organise a cMOOC, content is decided upon and contributed by the participants themselves, in this sense very much like any other community of practice. In practice though cMOOC organisers (who themselves tend to have some expertise in the topic of the cMOOC) are likely to invite potential participants who have expertise or are known already to have a well articulated approach to a topic to make contributions around which participants can discuss and debate. Other participants choose their own ways to contribute or communicate, the most common being through blog posts, tweets, or comments on other participants’ blog posts, although some cMOOCs use wikis or open source online discussion forums. The key design practice with regard to content is that all participants contribute to and share content.
  • distributed communication: this is probably the most difficult design practice to understand for those not familiar with cMOOCs – and even for those who have participated. With participants numbering in the hundreds or even thousands, each contributing individually through a variety of social media, there are myriad interconnections between participants that are impossible for any single participant to track in total. This results in many sub-conversations, more commonly between two people than as an integrated group discussion, although all conversations are ‘open’ and any other participant can contribute to a conversation if they know it exists. The key design practice with regard to communication is therefore a self-organising network with many sub-components.
  • assessment: there is no formal assessment, although participants may seek feedback from other, more knowledgeable participants, on an informal basis. Basically participants decide for themselves whether what they have learned is appropriate to them.

cMOOCs therefore primarily use a networked approach to learning based on autonomous learners connecting with each other across open and connected social media and sharing knowledge through their own personal contributions. There is no pre-set curriculum and no formal teacher-student relationship, either for delivery of content or for learner support. Participants learn from the contributions of others, from the meta-level knowledge generated through the community, and from self-reflection on their own contributions.
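
To give a feel for how loosely connected tools can add up to a course, here is a toy aggregator in the spirit of (though vastly simpler than) tools such as Downes’s gRSShopper: participants publish wherever they like, and the course ‘feed’ is simply whatever carries the course hashtag (#change11 was a real cMOOC hashtag, used here purely as an example; all other data is invented):

```python
# A toy illustration (not gRSShopper itself) of cMOOC-style aggregation:
# contributions live on participants' own blogs, Twitter, wikis, etc., and
# are pulled together only by a shared course hashtag.

posts = [
    {"author": "ana",   "source": "blog",    "text": "My first reflections #change11"},
    {"author": "ben",   "source": "twitter", "text": "Great web conference today #change11"},
    {"author": "carol", "source": "blog",    "text": "An unrelated post about gardening"},
]

def course_feed(posts: list[dict], hashtag: str) -> list[dict]:
    """Collect contributions, from any tool or source, that carry the course tag."""
    return [p for p in posts if hashtag in p["text"]]

for p in course_feed(posts, "#change11"):
    print(f"{p['author']} via {p['source']}: {p['text']}")
```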

This is very much a personal interpretation of how cMOOCs work in practice, based largely on my own experience as a participant. Much more has been written and spoken about the philosophy of cMOOCs, and much less about its implementation, presumably because cMOOC proponents want practitioners to decide for themselves how best to put that philosophy into practice.

What is clear though is that Downes was correct in clearly distinguishing cMOOCs from xMOOCs – they are very different beasts.

Coming next to a web page near you

Now for the fun part. Over the next few days I will be writing about the strengths and weaknesses of MOOCs, focusing particularly on the following question:

Can or do MOOCs provide the learning and skills that students will need in the future? 

I can in fact provide you with the short answer now: a resounding NO, for both kinds of MOOC, although one is a bit better than the other! Tune in later for the full details.

Feedback, please

In the meantime, I need to know whether I have got it right in describing the two kinds of MOOCs. Does my description – because that is all it’s meant to be at this stage – match your experience of MOOCs? Have I missed important characteristics? Do I have my facts wrong? Is this useful or is there a better way to approach this topic?