October 21, 2014

What students learned from an MIT physics MOOC


Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Why this paper?

I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:

  • it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
  • as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
  • the paper indicates, in line with many other comparisons between modes of delivery, that the conditions which are associated with the context of teaching are more important than just the mode of delivery
  • I had to read this paper carefully for my book, ‘Teaching in a Digital Age’, but for reasons of space I cannot go into this level of detail there, so I might as well share my full analysis with you.

What was the course?

8.MReV – Mechanics ReView, an introduction to Newtonian mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed Introductory Newtonian Mechanics in the fall. In other words, it is based on a second-chance course for campus-based MIT students.

The online version was offered in the summer semester as a free, open access course through edX, aimed particularly at high school physics teachers but open to anyone else interested. The course consisted of the following components:

  • an online eText, especially designed for the course
  • reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
  • an online discussion area/forum
  • mainly multiple-choice online tests and ‘quizzes’, interspersed on a weekly basis throughout the course.

Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were “passing” or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course; 1,030 of these earned certificates.

Thus the study measured only the learning of the most successful online students (in terms of completing the online course).

Methodology (summary)

The study primarily measured ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that can generally be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.

Two methods were used to test learning: a comparison of each student’s pre-test and post-test scores, to measure the learning gain over the course; and an analysis based on Item Response Theory (IRT), which shows not absolute learning (as measured by pre- and post-testing) but improvement relative to the class average.
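
For readers unfamiliar with these two measures, here is a minimal sketch in Python of how they are typically computed in physics education research. This is my own reconstruction for illustration, not the authors’ code: the normalized gain is the standard Hake measure, and the 2PL formula is a standard IRT model, but I am assuming rather than asserting that these match the paper’s exact implementations.

```python
import math

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: the fraction of the available improvement
    (from the pre-test score up to the maximum) actually achieved."""
    if pre >= max_score:
        return 0.0  # no room left to improve; avoids division by zero
    return (post - pre) / (max_score - pre)

def irt_2pl(theta: float, difficulty: float, discrimination: float = 1.0) -> float:
    """Two-parameter logistic IRT model: the probability that a student of
    ability theta answers an item of a given difficulty correctly. Because
    ability is estimated relative to the group, IRT shows relative rather
    than absolute improvement."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Example (invented numbers): a student scoring 50% on the pre-test and 65%
# on the post-test achieves g = (65 - 50) / (100 - 50) = 0.3.
print(normalized_gain(50, 65))  # 0.3
```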

Because of the large number of MOOC participants included in the study, the researchers were able to compare performance across various ‘cohorts’ within the MOOC, such as:

  • physics teachers
  • non-physics teachers
  • physics background
  • no physics background
  • college math
  • no college math
  • postgraduate qualification
  • bachelor’s degree
  • no more than high school

Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:

  • four hours per week of instruction in which staff interacted with small groups of students (a flipped classroom),
  • staff office hours,
  • help from fellow students,
  • available physics tutors,
  • MIT library

Main results (summary)

  • gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes and lower than (but closer to) those found in ‘interactive’ classes, although this result is hedged with considerable qualifications (‘more studies on MOOCs need to be done to confirm this’)
  • in spite of the extra instruction the on-campus students received, there was no evidence of positive weekly relative improvement for the on-campus students compared with the online students (indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse)
  • there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts

Conclusions

This is a valuable research report, carefully conducted and cautiously interpreted by the authors. For precisely those reasons, it is important not to jump to conclusions that the study itself does not support. In particular, the authors’ own caution at the end of the paper should be noted:

It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.

To this I would add that the design of this MOOC was somewhat different to many other xMOOCs in that it was based on online texts specially designed for the MOOC, and not on video lectures.

I’m still not sure from reading the paper how much students actually learned from the MOOC. About 1,000 of those who finished the course earned a certificate, but it is difficult to interpret the gain in knowledge. The statistical measure of an average (normalized) gain of 0.3 does not mean a lot on its own. There is some mention of the difference being between a B and a B+, but I may have misinterpreted that. If that is the case, though, I would certainly expect students taking a 13-week course to do much better. It would have been more helpful to have graded students on the pre-test and then compared those grades on the post-test; we could then see whether gains were of the order of at least one letter grade, for instance (see the sketch below).
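
To make that suggestion concrete, here is a hypothetical sketch of such a grade-based comparison; the grade boundaries and the (pre, post) scores below are invented purely for illustration.

```python
# Hypothetical sketch of the grade-based pre/post comparison suggested above.
# Grade boundaries and student scores are invented for illustration only.

GRADES = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
ORDER = ["F", "D", "C", "B", "A"]

def to_grade(score: float) -> str:
    """Map a percentage score to a letter grade."""
    return next(grade for cutoff, grade in GRADES if score >= cutoff)

def gained_a_grade(pre: float, post: float) -> bool:
    """Did the post-test grade rise by at least one letter grade?"""
    return ORDER.index(to_grade(post)) - ORDER.index(to_grade(pre)) >= 1

# Invented (pre, post) score pairs for three students:
students = [(55, 72), (65, 71), (80, 85)]
share = sum(gained_a_grade(pre, post) for pre, post in students) / len(students)
print(f"{share:.0%} of students improved by at least one letter grade")  # 67%
```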

Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.


Review of ‘Online Distance Education: Towards a Research Agenda’


Zawacki-Richter, O. and Anderson, T. (eds.) (2014) Online Distance Education: Towards a Research Agenda Athabasca AB: AU Press, 508 pp.

It is somewhat daunting to review a book of over 500 pages of research on any topic. I doubt that anyone other than the editors is likely to read this book from cover to cover; it is more likely to be kept on one’s bookshelf (if these still exist in a digital age) for reference whenever needed. Nevertheless, this is an important work that anyone working in online learning needs to be aware of, so I will do my best to cover it as comprehensively as I can.

Structure of the book

The book is a collection of about 20 chapters by a variety of authors (more on the choice of authors later). Based on a Delphi study and an analysis of ‘key research journals’ in the field, the editors have organized the topic into three sections, with a set of chapters on each sub-section, as follows:

1. Macro-level research: distance education systems and theories

  • access, equity and ethics
  • globalization and cross-cultural issues
  • distance teaching systems and institutions
  • theories and models
  • research methods and knowledge transfer

2. Meso-level research: management, organization and technology

  • management and organization
  • costs and benefits
  • educational technology
  • innovation and change
  • professional development and faculty support
  • learner support services
  • quality assurance

3. Micro-level research: teaching and learning in distance education

  • instructional/learning design
  • interaction and communication
  • learner characteristics.

In addition, there is a very useful preface from Otto Peters, an introductory chapter by the editors where they justify their structural organization of research, and a short conclusion that calls for a systematic research agenda in online distance education research.

More importantly, perhaps, Terry Anderson and Olaf Zawacki-Richter demonstrate empirically that research in this field has been skewed towards micro-level research (about half of all publications). Interestingly, and somewhat surprisingly given its importance, the least researched area is the costs and benefits of online distance education.

What I liked

It is somewhat invidious to pick out particular chapters, because different people will have different interests across such a wide-ranging list of topics. I have tended to choose those that I found new and/or particularly enlightening, though other readers’ choices will differ. However, by selecting a few excellent chapters, I hope to give some idea of the quality of the book.

1. The structuring/organization of research

Anderson and Zawacki-Richter have done an excellent job of providing a structural framework for research in this field. This will be useful for those teaching about online and distance education, and in particular for potential Ph.D. students wondering what to study: this book will provide an essential starting point.

2. Summary of the issues in each area of research

Again, the editors have done an excellent job in their introductory chapter of summarizing the content of each of the chapters that follow and, in so doing, of pulling out the key themes and issues within each area of research. This alone makes the book worthwhile.

3. Globalization, Culture and Online Distance Education

Charlotte (Lani) Gunawardena of the University of New Mexico has written the most comprehensive and deep analysis of this issue that I have seen. It is an area in which I have a great deal of interest, since most of the online teaching I have done has been with students from around the world, sometimes in multilingual classes.

After a general discussion of the issue of globalization and education, she reviews research in the following areas:

  • diverse educational expectations
  • learners and preferred ways of learning
  • socio-cultural environment and online interaction
  • help-seeking behaviours
  • silence
  • language learning
  • researching culture and online distance learning

This chapter should be required reading for anyone contemplating teaching online.

4. Quality assurance in Online Distance Education

I picked this chapter by Colin Latchem because he is so deeply expert in this field that he is able to make what can be a numbingly boring but immensely important topic a fun read, while at the same time ending with some critical questions about quality assurance. In particular, Latchem looks at QA from the following perspectives:

  • definitions of quality
  • accreditation
  • online distance education vs campus-based teaching
  • quality standards
  • transnational online distance education
  • open educational resources
  • costs of QA
  • is online distance education yet good enough?
  • an outcomes approach to QA.

This chapter definitely showcases a master at the top of his game.

5. The elephant in the room: student drop-out

This is a wonderfully funny but ultimately serious argument between Ormond Simpson and Alan Woodley about the elephant in the distance education room that no one wants to mention. Here they start poking the elephant with sticks (which, they note, is not likely to be a career-enhancing move). The basic argument is that institutions should and could do more to reduce drop-out and increase course completion. The chapter also stunned me by providing hard data on the really low completion rates of most open university students. I couldn’t help comparing these with the high completion rates for online credit courses at dual-mode (campus-based) institutions, at least in Canada (which, of course, are not ‘open’ institutions, in that students must have good high school qualifications).

Woodley’s solution to reducing drop-out is quite interesting (and well argued later in the chapter):

  • make it harder to get in
  • make it harder to get out

In both cases, really practical and not too costly solutions are offered that nevertheless are consistent with open access and high quality teaching.

In summary

The book contains a number of really good chapters that lay out the issues in researching online distance education.

What I disliked

I have to say that I groaned when I first saw the list of contributors: the same old, same old list of distance education experts, with a heavy bias towards open universities. Sure, they are nearly all well-seasoned experts, and there’s nothing wrong with that per se (after all, I see myself as one of them).

But where are the young researchers here, and especially the researchers in open educational resources, MOOCs, and social media applications in online learning, and above all the researchers from the many campus-based universities now mainstreaming online learning? There is almost nothing in the book about research into blended learning, and flipped classrooms are not even mentioned. OK, the book is about online distance learning, but the barriers and distinctions are coming down with a vengeance. This book will never reach those who most need it: the many campus-based instructors now venturing for the first time into online learning in one way or another, who do not see themselves as primarily distance educators.

And a few of the articles read more like history lessons than up-to-date reviews of research in the field. Readers of this blog will know that I strongly value the history of educational technology and distance learning, but these lessons need to be embedded in the here and now; in particular, they need to be spelled out. It is not enough to know that Stanford University researchers were studying the costs and benefits of educational broadcasting in developing countries as long ago as 1974; what lessons does this hold for some of the outrageous claims being made about MOOCs? A great deal, in fact, but this needs explaining in the context of MOOCs today.

Also, the book is solely focused on post-secondary university education. Where is the research on online distance education in the K-12/school sector or the two-year college/vocational sector? Maybe these are topics for other books, but this is where the real gap exists in research publications on online learning.

Lastly, although the book is reasonably priced for its size (C$40) and is available as an e-text as well as in print, what a pity it is not an open textbook that could then be updated and crowd-sourced over time.

Conclusion

This is essential reading for anyone who wants to take a professional, evidence-based approach to online learning (distance or otherwise). It will be particularly valuable for students wanting to do research in this area. The editors have done an incredibly good job of presenting a hugely diverse and scattered area in a clear and structured manner. Many of the chapters are gems of insight and knowledge in the field.

However, we have a huge challenge of knowledge transfer in this field. Repeatedly, authors in the book lament that many of the new entrants to online learning are woefully ignorant of the research previously done in this field. We need a better way to disseminate this research than a 500-page printed text that only those already expert in the field are likely to access. On the other hand, the book does provide a strong foundation from which to find better ways to disseminate this knowledge. Knowledge dissemination in a digital world, then, is where the research agenda for online learning needs to focus.


Developing a coherent approach to mobile learning

The JIBC Emergency Social Services app

The Justice Institute of British Columbia (JIBC) is an unusual organization, focusing on the training of police, fire, corrections and paramedical personnel, as well as providing training for social service departments of the BC provincial government.

Tannis Morgan, Associate Dean, JIBC Centre for Teaching, Learning and Innovation, has just published a very interesting post, Mobile Learning at an Applied Institution, on why and how JIBC has developed its mobile learning strategy, which includes issuing pre-loaded iPads to participants in some programs. If you have any interest in mobile learning, the post is well worth reading. Click on the graphic at the end of the post under ‘Presentation for ETUG 2014’ for more information from the slides prepared for an ETUG workshop.

Particularly worth noting is that most of JIBC’s mobile applications are open and free, because the materials are of direct value to all personnel working in this area, whether they are taking a JIBC course or not. Making the material developed for clients such as the government’s emergency social services department open and free has resulted in more enrolments, as employees and managers see the value of the service that JIBC is providing.

‘The Economist’ on online learning

© The Economist, 2014

Insight (2014) The democratisation of learning The Economist, September 26

This is a well-balanced and well-researched article on online learning in post-secondary education, even if it is very U.S.-focused. It doesn’t say anything new, but it covers the right ground.

What I liked is that it places MOOCs within the broader context of what is happening in online learning. A nice article to give to faculty skeptical about online learning (especially if they are economists).

What UBC has learned about doing MOOCs


Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

The University of British Columbia, a premier public research university in Canada, successfully delivered five MOOCs in the spring and summer of 2013, using the Coursera platform. This report is an evaluation of the experience.

The report is particularly valuable because it provides details of course development and delivery, including the media used and costs. Also, UBC has been developing online courses for credit for almost 20 years, so it is interesting to see how this experience influenced the design of its MOOCs.

The MOOCs

1. Game Theory I: K. Leyton-Brown (UBC); M. Jackson and Y. Shoham (Stanford University)

2. Game Theory II: K. Leyton-Brown (UBC); M. Jackson and Y. Shoham (Stanford University)

3. Useful Genetics: R. Redfield (UBC)

4. Climate Literacy: S. Harris and S. Burch (UBC)

5. Introduction to Systematic Program Design: G. Kiczales (UBC)

For comparability, I am going to treat Game Theory I and II as one MOOC, since combined they were about the same length as the other MOOCs (between 8 and 12 weeks).

Basic statistics

  • 330,150 signed up (an average of 82,500 per course)
  • 164,935 logged in at least once (41,000 per course)
  • 12,031 took the final exam (3,000 per course)
  • 8,174 earned a course certificate (2,000 per course)
  • 60-70% already had a post-secondary degree
  • 30-40% were North American, with participants from nearly every country in the world.
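
Spelling out the funnel in that list (my own arithmetic on the report’s aggregate numbers, treating Game Theory I and II as one course):

```python
# Quick funnel arithmetic from the report's aggregate numbers
# (four MOOCs, Game Theory I and II treated as one course).
signed_up, logged_in, took_exam, certified = 330_150, 164_935, 12_031, 8_174

print(f"logged in at least once:   {logged_in / signed_up:.0%} of sign-ups")  # 50%
print(f"took the final exam:       {took_exam / signed_up:.1%} of sign-ups")  # 3.6%
print(f"earned a certificate:      {certified / signed_up:.1%} of sign-ups")  # 2.5%
print(f"pass rate among examinees: {certified / took_exam:.0%}")              # 68%
```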

Course development

None of the instructors had taught an online course before, but all were supported by instructional designers, media development staff, and academic assistants (graduate and undergraduate students).

One major difference between UBC MOOCs and its online credit courses (which are primarily LMS-based) was the extensive use of video, the main component of the MOOC pilot courses.

Video production

305 videos were produced, totalling 65 hours. Each MOOC used a different method of production:

  • Intensive studio (Climate Literacy)
  • Hybrid studio plus instructor desktop (Systematic Program Design)
  • Light studio production (Game Theory I and II)
  • Instructor desktop (Useful Genetics)

Web pages

All the MOOCs except Game Theory also included weekly modules as HTML-based web pages, a variation on Coursera’s default design model. Altogether 98 HTML module pages were developed. The weekly modules were used to give students guidance on learning goals, the amount of work expected, an overview of activities, and additional quiz or assignment help (all standard practice in UBC’s LMS-based credit courses).

Assessment

1,049 quiz questions were developed, of which just over half were graded.

There were four peer assessments in total across all the MOOCs.

Course delivery

As well as the faculty member responsible for each MOOC, graduate and undergraduate academic assistants were a crucial component of all courses, with the following responsibilities:

  • directly assisting learners
  • troubleshooting technical problems
  • conducting quality assurance activities

There was very little one-on-one interaction between the main instructor and learners, but academic assistants monitored and moderated the online forum discussions.

Costs

As always, costing is a difficult exercise. Appendix B of the report gives a pilot total of $217,657, but this excludes academic assistance and, perhaps the most significant cost, instructor time.

Working from the video production costs ($95,350) and the proportion of costs (44%) devoted to video production in Figure 1 of the report, I estimate the direct cost at approximately $216,700, or roughly $54,000 per MOOC, excluding faculty time and co-ordination support but including academic assistance (see the arithmetic below).
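
Spelling out that estimate (my own arithmetic, reconstructed from the report’s video cost figure and the Figure 1 proportions):

```python
# Reconstruction of my estimate: video production cost $95,350, and Figure 1
# of the report shows video production as 44% of direct costs.
video_cost = 95_350
video_share = 0.44
n_moocs = 4  # Game Theory I and II treated as one MOOC

total_direct = video_cost / video_share  # ~ $216,700
per_mooc = total_direct / n_moocs        # ~ $54,000
print(f"total direct cost ≈ ${total_direct:,.0f}, per MOOC ≈ ${per_mooc:,.0f}")
```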

However, the range of costs is almost as important: the video production costs for Climate Literacy, which used intensive studio production, were more than six times those of Systematic Program Design (hybrid studio plus desktop).

MOOCs as OERs

  • the UBC instructors are using their MOOC materials in their own on-campus, for-credit classes in a flipped classroom model
  • courses are left open and active on Coursera for self-paced learning
  • video materials have been ported to YouTube as open-access videos
  • two courses (Climate Literacy and Useful Genetics) added Creative Commons licenses for re-use

Challenges

  • copyright clearance (Coursera owns the copyright, so third-party copyright needs to be cleared)
  • higher than expected time demands on all involved
  • iterative upgrades to the Coursera platform
  • partner relationship management (UBC + Coursera + Stanford University) was time-consuming
  • training and managing academic assistants, especially time management
  • the Coursera platform limited instructors’ ability to develop desired course activities
  • Coursera’s peer assessment functionality in particular was limiting

Lessons

  • UBC’s prior experience in credit-based online learning led to better-designed, more interactive and more engaging MOOCs
  • learners always responded positively to instructor ‘presence’ in forums or course announcements
  • MOOC students were motivated by grades
  • MOOC students were willing to engage critically with instructors’ expertise and teaching
  • open publishing via MOOCs is a strong motivator for instructors
  • MOOCs require significant investment

Conclusion

All the MOOCs received positive feedback and comments from students. UBC was able to gain direct experience in and knowledge of MOOCs, and to consider how this might inform both its for-credit on-campus and its online teaching. UBC was also able to bring its experience in for-credit online learning to strengthen the design of its MOOCs. Lastly, it was able to make the quality of UBC instructors and course materials much more widely known.

Comment

First, congratulations to UBC for

  • experimenting with MOOCs
  • conducting the evaluation
  • making the report publicly available.

It is clear from the comments of participants in the appendices that at least some of the participants (we don’t know how many) were very pleased with the courses. As usual with evaluation reports on MOOCs, though, there is no assessment of learning other than the end-of-course quiz-based tests. I don’t care too much about completion rates, but some systematic measurement of student satisfaction would have been helpful.

It is also significant that UBC has now decided to move from Coursera to edX as its MOOC platform. edX, which is open source and allows partners to modify and adapt the platform, provides the flexibility that Coursera lacked, despite the latter’s many iterative ‘improvements’.

This also demonstrates the hubris of the MOOC platform developers in ignoring best design principles from online learning when they built their platforms. It is clear that UBC’s designers were able to improve the design of their MOOCs by drawing on prior for-credit online experience, but also that the MOOC platforms are still very limited in enabling the kinds of learning activities that lead to student engagement and success.

The UBC report also highlighted the importance (and cost) of providing some form of learner support during course delivery. The use of academic assistants clearly made the MOOCs more interactive and engaging, as did limited but effective interventions from the instructors themselves, once again supporting (and confirming) prior research on the importance of instructor presence for successful for-credit online learning.

I very much appreciate the cost data provided by UBC, and the breakdown of production and delivery costs is extremely valuable, but I have to challenge the idea of providing any costing that excludes the time of the instructors. This is by far the largest and most important cost in MOOCs, and the notion that MOOCs are free of instructor cost flies in the face of any respectable form of economics.

It is clear that MOOCs to date are more expensive per hour of study than LMS-based for-credit online courses. We still do not have enough data to give a precise figure, and in any case, as the UBC study shows, cost is very much a function of design. However, even without instructor costs, the UBC MOOCs, at roughly $54,000 each for an 8- to 12-week course, already cost more than the average 13-week for-credit LMS-based online course, including instructor time.

This is partly due to the increased instructor time needed for preparation and production, but also to the higher cost of video production. I am not against the use of video in principle, but it must add value. Using it for content transmission that can be done much more cheaply through text and/or audio is a waste of the medium’s potential (although it is perhaps more motivating for the instructor).

More importantly, every institution contemplating MOOCs needs to do a cost-benefit analysis. Is it better to invest in MOOCs, in credit-based online learning, or in both? If MOOCs are more expensive, what added benefits do they provide, and do these more than make up not only for the extra cost but also for the lost opportunity of investing in (more) credit-based online learning or other forms of campus-based learning? I know what my answer would be.