
Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses, Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21.

This 32-page report provides a range of data and statistics about the first 17 MOOCs offered through edX by MIT and Harvard.

Methodology

MOOCs raise a number of interesting research challenges, such as how to measure participation and how to define success, and these methodological challenges need to be borne in mind when interpreting the results. The researchers identified the following challenges:

1. Post-hoc research. The research design was established after the courses were designed and delivered, so data on some critical research variables (e.g., socio-economic status) were not collected or available.

2. Variation in the object of the research. Although limited to MOOCs offered on the edX platform, the 17 MOOCs varied considerably in educational objectives, style, length, types of learner and other factors.

3. Measuring levels of participation. Participants varied from those who logged in only once to those who completed a certificate (and some who then went on to take more MOOCs). As a result, the researchers defined four mutually exclusive categories of participation (illustrated in the sketch following this list):

  • Only Registered: registrants who never access the courseware.
  • Only Viewed: non-certified registrants who access the courseware but view less than half of the available chapters.
  • Only Explored: non-certified registrants who access more than half of the available chapters in the courseware.
  • Certified: registrants who earn a certificate in the course.

4. Percentages are misleading when numbers are large. This was a new one for me. I know one should never use percentages when n < 20, especially when generalizing beyond the sample, but in this instance the researchers argue that small percentages (e.g. < 5%) are also misleading when the number the percentage refers to can be very large, e.g. when 3% = 1,400 students who completed a certificate. In such cases, the researchers claim, the absolute numbers matter more than the percentage.

5. Measures of success. The researchers argue that traditional measures of academic success, such as the percentage of those who successfully complete a course, are not valid (the word used is ‘counter-productive’) for open online courses.
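
Since the classification rules and the percentages-versus-counts argument are both easy to misread, here is a minimal sketch in Python of how the four mutually exclusive categories might be applied, and of how absolute counts can be reported alongside percentages. It is illustrative only, not the report's own code: the field names (certified, chapters_viewed, chapters_available) and the toy figures are assumptions of mine, while the report itself works from edX tracking logs.

```python
# A minimal, illustrative sketch (not the report's own code): sort registrants into
# the four mutually exclusive categories, then report absolute counts alongside
# percentages. Field names and toy numbers are assumptions.

from collections import Counter

def classify(registrant: dict) -> str:
    """Assign a registrant to exactly one of the four participation categories."""
    if registrant["certified"]:
        return "Certified"
    if registrant["chapters_viewed"] == 0:      # treated here as never accessing the courseware
        return "Only Registered"
    if registrant["chapters_viewed"] < registrant["chapters_available"] / 2:
        return "Only Viewed"                    # accessed less than half of the chapters
    return "Only Explored"                      # more than half, but no certificate

def summarise(registrants: list) -> None:
    """Print absolute counts and percentages side by side for each category."""
    counts = Counter(classify(r) for r in registrants)
    total = len(registrants)
    for category in ("Only Registered", "Only Viewed", "Only Explored", "Certified"):
        n = counts.get(category, 0)
        print(f"{category:15} {n:8,d}  ({100 * n / total:4.1f}%)")

# Toy data for a single hypothetical course of 10,000 registrants.
toy_course = (
    [{"certified": False, "chapters_viewed": 0, "chapters_available": 14}] * 3500
    + [{"certified": False, "chapters_viewed": 4, "chapters_available": 14}] * 5500
    + [{"certified": False, "chapters_viewed": 10, "chapters_available": 14}] * 450
    + [{"certified": True, "chapters_viewed": 14, "chapters_available": 14}] * 550
)
summarise(toy_course)
```

On this toy course the ‘Certified’ row comes out at 5.5%, yet it still represents 550 learners, which is essentially the report's point about why absolute figures matter when enrolments run into the tens of thousands.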

Main results

Participation

  • 17 MOOCs
  • 841,687 course registrations: average per MOOC: 49,511
  • 597,692 ‘persons’: average of 1.4 MOOCs per person
  • 292,852 (35%) never engaged with the content (“Only registered”)
  • 469,702 (56%) viewed (i.e. clicked on a module) less than half of the content (“Only viewed”)
  • 35,937 (4%) explored more than half the content, but did not get a certificate (average per MOOC: 2,114)
  • 43,196  (5%) earned certificates (average per MOOC: 2,540)

Participants

  • 234,463 (33%) report a high school education or lower
  • 66% of all participants, and 74% of all who obtained a certificate, have a bachelor’s degree or above
  • 213,672 (29%) of all participants, and 33% of all who obtained a certificate, are female
  • 26 was the median age, with 45,844 (6%) over 50 years of age
  • 20,745 (3%) of all participants were from the UN-listed least developed countries
  • there are ‘considerable differences in … demographics such as gender, age … across courses’

Comments

First, congratulations to Harvard and MIT for not only doing this research on MOOCs, but also for making it openly available and releasing it early.

Second, I agree that percentages can be misleading, that a focus on certification is not the best way to assess the value of a MOOC, and that absolute figures matter. However, this is NOT how most commentators and the media have framed MOOCs. Percentages and certification DO matter if MOOCs are being seen as a substitute or replacement for formal education. MOOCs need to be judged for what they are: a distinctive – and valuable – form of non-formal education.

Third, if we do look at absolute numbers, they are in my view not that impressive – an average of 2,540 per course earning a certificate, and fewer than 5,000 per course following more than half the content. The Open University, with completely open access, was getting higher numbers of students completing credit-based foundation courses when it started. The History Channel (a cable TV channel in North America) does far better in terms of numbers. We have already seen overall average numbers for MOOCs dropping considerably as they have become more common, so once we account for the novelty effect, the results are certainly not startling.

Fourth, these results strongly reminded me of the research on educational broadcasting 30 years ago (for more details, see the footnote below). If you substituted ‘MOOC’ for ‘educational television’, the results would be almost identical (except that in educational broadcasting a higher proportion of women than men participated). Perhaps they should read my very old book, “Broadcasting in Education: An Evaluation” (I still have a few copies in a cupboard somewhere).

This brings me to my final point: where is the reference to relevant previous research or theory (see, for instance, the footnote to this post)? There are certainly unique aspects to MOOCs that deserve to be researched. However, while MOOCs may be new, non-formal learning is not, nor is credit-based online learning, nor is open education, nor is educational broadcasting, of which MOOCs are a new format. Much of what we already know about these areas also applies to some aspects of MOOCs. Once again, though, Harvard and MIT seem to live in an environment that pays no attention to what happens outside their cocoon. If it’s not theirs, it doesn’t count. This is simply not good enough. In no other field would you get away with ignoring all previous research or work in related areas such as credit-based online learning, open education or educational broadcasting.

Having got that off my chest, I did find the paper well written, interesting and certainly worth a careful read. I look forward to reading – and reviewing – future papers.

Footnote: MOOCs and the onion theory of educational broadcasting

I eventually found a copy of my book. I blew the dust off it and guess what I found.

Here’s what I wrote about ‘levels of commitment’ in non-formal educational broadcasting in 1984 (p.99):

At the centre of the onion is a small core of fully committed students who work through the whole course, and, where available, take an end-of-course assessment or examination. Around the small core will be a rather larger layer of students who do not take any examination but do enrol with a local class or correspondence school. There may be an even larger layer of students who, as well as watching and listening, also buy the accompanying textbook, but who do not enrol in any courses. Then, by far the largest group, are those that just watch or listen to the programmes. Even within this last group, there will be considerable variations, from those who watch or listen fairly regularly, to those, again a much larger number, who watch or listen to just one programme.

Now compare this to Figure 2 (p.13) of the Harvard/MIT report:

I also wrote (p.100):

A sceptic may say that the only ones who can be said to have learned effectively are the tiny minority that worked right through the course and successfully took the final assessment…A counter argument would be that broadcasting can be considered successful if it merely attracts viewers or listeners who might otherwise have shown no interest in the topic; it is the numbers exposed to the material that matter…the key issue then is whether broadcasting does attract to education those who would not otherwise have been interested, or merely provides yet another opportunity for those who are already well educated…There is a good deal of evidence that it is still the better educated in Britain and Europe that make the most use of non-formal educational broadcasting.

Thanks for the validation of my 1984 theory, Harvard/MIT.

Reference

Bates, A. (1984) Broadcasting in Education: An Evaluation. London: Constable.
