July 21, 2017

MIT aims to expand its research into learning

Diffusion tensor imaging: Satrajit Ghosh, MIT

Chandler, D. (2016) New initiatives accelerate learning research and its applications, MIT News, February 2

The President of MIT has announced a significant expansion of the Institute’s programs in learning research and online and digital education, through the creation of the MIT Integrated Learning Initiative (MITili).

The integrated science of learning — now emerging as a significant field of research — will be the core of MITili (to be pronounced “mightily”), a cross-disciplinary, Institute-wide initiative to foster rigorous quantitative and qualitative research on how people learn.

MITili will combine research in cognitive psychology, neuroscience, economics, engineering, public policy, and other fields to investigate what methods and approaches to education work best for different people and subjects. The effort will also examine how to improve the educational experience within MIT and in the world at large, at all levels of teaching.

The findings that spin out of MITili will then be applied to improve teaching on campus and online.

Comment

First, I very much welcome this initiative by a prestigious research university to take seriously research into what MIT calls the ‘science of learning’. Research into learning has generally been poorly funded compared with research into science, engineering and computing.

However, I hope that MIT will approach this in the right way and avoid the hubris they displayed when moving into MOOCs, where they ignored all previous research into online learning.

It is critical that those working in MITili do not assume that there is nothing already known about learning. Exploring what the natural sciences, such as biological research into the relationship between brain function and learning, can contribute to our understanding of learning is welcome, but as much attention needs to be paid to the environmental conditions that support or inhibit learning, to the kinds of teaching approaches that encourage different kinds of learning, and to the previous, well-grounded research into the psychology of learning.

In other words, not only a multi-disciplinary, but also a multi-epistemological approach will be needed, drawing as much from educational research and the social sciences as from the natural sciences. Is MIT willing and able to do this? After all, learning is a human, not a mechanical activity, when all is said and done.

MIT introduces credit-based online learning

Bradt, S. (2015) Online courses + time on campus = a new path to an MIT master’s degree, MIT News, October 7

MIT is famous for its non-credit MOOCs, but now, for the first time, it is offering a credit program at least partially online.

The one-year Master in Supply Chain Management will consist of one semester of online courses and one semester on campus, starting in February 2016. This will run alongside the existing 10-month on-campus program. The online classes that make up the first semester will cost US$150, while the exam will cost $400 to $800. The second semester on campus will cost at least half of what the yearlong program costs, which would mean about another $17,000. Students will still need to meet MIT’s academic standards for admission. MIT expects to take about 30 to 40 students a year into the new program. The program will be offered using MIT’s own edX platform.

Since many other universities have been offering a mix of online and campus-based programs for many years, perhaps of more interest is MIT’s announcement of a new qualification, a MicroMaster, for those who successfully complete just the online portion of the program. MIT states that those who do well on the MicroMaster will ‘significantly enhance their chances of being accepted to the full master’s program’.

Comment

First, congratulations to MIT for finally getting into credit-based online learning. This is a small but significant step.

It will be interesting to see how much the Master’s online courses differ in design from MOOCs. Will there be more interaction with the MIT faculty in the Master’s program? Will MIT use existing best practice in the design of credit-based online learning, or will they use a different model closer to MOOCs? If so, how will that affect the institution’s willingness to accept credit for MOOCs? All interesting questions.

MIT and German research on the [appalling] use of video in xMOOCs

Demonstration is one of the 18 video production styles from a Coursera course “Mechanics: Motion, Forces, Energy and Gravity, from Particles to Planets” (UNSW Australia)


Hansch, A. et al. (2015) Video and Online Learning: Critical Reflections and Findings From the Field, Berlin, DE: Alexander von Humboldt Institut für Internet und Gesellschaft

The study

This exploratory study examines video as an instructional medium and investigates the following research questions:

  • How is video designed, produced, and used in online learning contexts, specifically with regard to pedagogy and cost?
  • What are the benefits and limitations of standardizing the video production process?

Findings are based on a literature review, our observation of online courses, and the results of 12 semi-structured interviews with practitioners in the field of educational video production.

We reviewed a variety of different course and video formats offered on six major platforms: Coursera, edX, Udacity, iversity, FutureLearn, and Khan Academy.

Results

(My summary, the authors’ words in italics)

1. We found documentation on the use of video as an instructional tool for online learning to be a notably underexplored field. To date, little consideration has been given to the pedagogical affordances of video, what constitutes an effective learning video, and what learning situations the medium of video is best suited for.

2. On the whole, we found that video is the main method of content delivery in nearly all MOOCs. MOOC videos tend to be structured as short pieces of content, often separated by assessment questions. This seems to be one of the few best practices that is widely accepted within the field.

3. We found two video production styles that are most commonly used: (1) the talking head style, where the instructor is recorded lecturing into the camera, and (2) the tablet capture with voiceover style (e.g. Khan Academy style).

4. It appears that the use of video in online learning is taken for granted, and there is often not enough consideration given to whether or not video is the right medium to accomplish a MOOC’s pedagogical goals.

5. Video tends to be the most expensive part of MOOC production. There is a tendency for institutions to opt for a professional, studio-style setup when producing video… but… there is little to no research showing the relevance of production value for learning.

6. More research is needed on how people learn from video.

Recommendations

1. Think twice before using video… it seems problematic that online learning pedagogy is concentrated so heavily in this medium. Hence, we want to discourage the use of video in online learning simply because there is an expectation for it, and rather encourage online learning producers and providers to question video’s extensive use at the expense of other pedagogical alternatives.

2. Make the best use of video as a medium…Based on our findings, we have compiled an overview of the medium of video’s affordances for online learning. [Nine ‘affordances’ of video are recommended]

Comment

First, this is not really about video in online learning, but video in xMOOCs, which is just one, fairly esoteric use of video in online learning. Nevertheless, since xMOOCs are in widespread use, it is still a valid and important area of research.

Unfortunately, though, the authors’ literature search was barely adequate. I will forgive the failure to discuss the 20 years of research on television and video at the UK Open University, or the research on the educational effects of television from Sesame Street. But although the authors include a reference to Richard Mayer’s book in the bibliography, the failure in the main text to recognise properly his contribution to what we know about using video for teaching and learning is unforgivable, as is the authors’ conclusion that the use of video as an instructional tool for online learning is a notably underexplored field. Sorry, but it’s the authors who haven’t looked in the right places.

Secondly, it’s not that I disagree with their recommendations, it’s that what they are recommending has been known for a long time. More research is always useful, but first the existing research needs to be read, learned and applied.

Thirdly, this paper reinforces what many of us with experience in online learning and/or in the use of video in education have known all along: those designing xMOOCs have made the most egregious of errors in effective design through sheer ignorance of prior research in the area. Since those making these stupid mistakes in course design come from elite, research-based institutions, the sin of ignoring prior research is even more unforgivable. Once again we have MIT, Stanford, Harvard and the other xMOOC providers having to use new research to rediscover the wheel through ignorance and arrogance.

Fourthly, the real value of this paper comes from the authors’ typology of video production styles. They offer a total of 18 possible production styles, with a short description and a set of questions to be asked about each. This alone makes the paper worth reading for anyone considering using video in online learning, although the authors fail to point out which of the production styles should be avoided, and which used, according to the research.

Lastly, what this paper really reinforces above all is that we should stop taking xMOOCs seriously. They are badly designed by amateurs who don’t know what they are doing. Let’s move on to more important issues in online learning.

 

What students learned from an MIT physics MOOC

Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Why this paper?

I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:

  • it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
  • as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
  • the paper indicates, in line with many other comparisons between modes of delivery, that the conditions which are associated with the context of teaching are more important than just the mode of delivery
  • I had to read this paper carefully for my book, ‘Teaching in a Digital Age’, but for reasons of space I could not go into detail on it there, so I might as well share my full analysis with you.

What was the course?

8.MReV – Mechanics ReView, an introduction to Newtonian mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed the introductory Newtonian mechanics course in the fall. In other words, it is based on a second-chance course for MIT campus students.

The online version was offered in the summer semester as a free, open access course through edX and was aimed particularly at high school physics teachers, but also at anyone else interested. The course consisted of the following components:

  • an online eText, especially designed for the course
  • reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
  • an online discussion area/forum
  • mainly multiple-choice online tests and ‘quizzes’, interspersed on a weekly basis throughout the course.

Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were “passing” or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course. 1,030 students earned certificates.

Thus the study measured only the learning of the most successful online students (in terms of completing the online course).

Methodology (summary)

The study measured primarily ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that generally can be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.

Two methods to test learning were used: a comparison between each student’s pre-test and post-test score to measure the learning gain during the course; and an analysis based on Item Response Theory (IRT), which does not show absolute learning (as measured by pre-post testing), but rather improvement relative to “class average.”
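For readers not familiar with how such pre/post gains are usually reported in physics education research, here is a minimal sketch. It assumes the gain statistic is the normalized gain, g = (post − pre) / (100 − pre), which is the convention such comparisons with ‘traditional’ and ‘interactive’ classes normally rest on; the student scores in the sketch are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch only: Hake-style normalized gain for pre-/post-test scores.
# Scores are percentages (0-100); the data below are invented, not from Colvin et al.

def normalized_gain(pre: float, post: float) -> float:
    """Fraction of the possible improvement actually achieved."""
    if pre >= 100:
        return 0.0  # no room left to improve
    return (post - pre) / (100 - pre)

# Hypothetical students: (pre-test %, post-test %)
students = [(45, 62), (60, 72), (30, 50), (75, 82)]

gains = [normalized_gain(pre, post) for pre, post in students]
average_gain = sum(gains) / len(gains)

print([round(g, 2) for g in gains])  # individual gains
print(round(average_gain, 2))        # class average gain
# A gain of about 0.3 means students closed roughly 30% of the gap between
# their pre-test score and a perfect score, e.g. moving from 50% to 65%.
```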

Because of the large number of MOOC participants included in the study, the researchers were able to analyse performance across various ‘cohorts’ within the MOOC participants, such as:

  • physics teachers
  • not physics teachers
  • physics background
  • no physics background
  • college math
  • no math
  • post-graduate qualification
  • bachelor degree
  • no more than high school

Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:

  • four hours of instruction in which staff interacted with small groups of students (a flipped classroom) each week,
  • staff office hours,
  • help from fellow students,
  • available physics tutors,
  • MIT library

Main results (summary)

  • gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes and lower than (but closer to) those found in ‘interactive’ classes, but this result is hedged with some considerable qualifications (‘more studies on MOOCs need to be done to confirm this’).
  • in spite of the extra instruction that the on-campus students had, there was no evidence of positive, weekly relative improvement of the on-campus students compared with our online students. (Indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse).
  • there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts

Conclusions

This is a valuable research report, carefully conducted and cautiously interpreted by the authors. However, it is really important not to jump to conclusions. In particular, the authors’ own caution at the end of the paper should be noted:

It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.

To this I would add that the design of this MOOC was somewhat different to many other xMOOCs in that it was based on online texts specially designed for the MOOC, and not on video lectures.

I’m still not sure from reading the paper how much students actually learned from the MOOC. About 1,000 who finished the course got a certificate, but it is difficult to interpret the gain in knowledge. The statistical measurement of an average gain of 0.3 doesn’t mean a lot. There is some mention of the difference being between a B and a B+, but I have probably misinterpreted that. If that is the case, though, I would certainly expect students taking a 13-week course to do much better. It would have been more helpful to have graded students on the pre-test and then compared those grades on the post-test. We could then see, for instance, whether gains were in the order of at least one grade better.

Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.

 

 

A review of a Harvard/MIT research paper on edX MOOCs

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses, Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

This 32-page report provides a range of data and statistics about the first 17 MOOCs offered through edX by MIT and Harvard.

Methodology

Doing research on MOOCs raises a number of interesting challenges, such as measuring participation and defining success. These methodological challenges need to be considered in any interpretation of the results. The researchers identified the following challenges:

1. Post-hoc research. The research design was established after the courses were designed and delivered, so data on some key research variables (e.g., socio-economic status) were not available or collected.

2. Variation in the object of the research. Although limited to MOOCs offered on the edX platform, the 17 MOOCs varied considerably in educational objectives, style, length, types of learner and other factors.

3. Measuring levels of participation. Participants varied from those who logged in only once to those who completed a certificate (and then some who went on to take more MOOCs). As a result, the researchers came up with four mutually exclusive categories of participation:

  • Only Registered: Registrants who never access the courseware.
  • Only Viewed: Non-certified registrants who access the courseware, accessing less than half of the available chapters.
  • Only Explored: Non-certified registrants who access more than half of the available chapters in the courseware.
  • Certified: Registrants who earn a certificate in the course.

4. Percentages are misleading when numbers are large. This was a new one for me. I know one should never use percentages when n < 20, especially when generalizing beyond the sample, but in this instance the researchers argue that small percentages (e.g. <5%) are also misleading when the number the percentage refers to can be very large, e.g. when 3% = 1,400 students who completed a certificate. In such cases, the researchers claim, the absolute numbers matter more than the percentage.

5. Measures of success. The researchers argue that traditional measures of academic success, such as the percentage of those who successfully complete a course, are not valid (the word used is ‘counter-productive’) for open online courses.
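To make points 3 and 4 concrete, here is a minimal sketch (my own illustration, not code from the report) of how a registrant might be assigned to one of the four mutually exclusive categories, and of why a small percentage can still represent a large absolute number. The half-of-the-chapters threshold is taken from the category definitions above; the function, its parameters and the example values are assumptions for illustration only.

```python
# Illustrative sketch only: the report's four mutually exclusive participation
# categories, applied to a single hypothetical registrant.

def participation_category(certified: bool, chapters_accessed: int,
                           chapters_available: int) -> str:
    if certified:
        return "Certified"
    if chapters_accessed == 0:
        return "Only Registered"   # never accessed the courseware
    if chapters_accessed < chapters_available / 2:
        return "Only Viewed"       # accessed less than half of the chapters
    return "Only Explored"         # accessed at least half of the chapters

print(participation_category(certified=False, chapters_accessed=3,
                             chapters_available=12))   # -> "Only Viewed"

# Why small percentages still matter in absolute terms: 5% certification across
# 841,687 registrations is still roughly 42,000 certificates.
print(round(841_687 * 0.05))   # ~42,084
```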

Main results

Participation

  • 17 MOOCs
  • 841,687 course registrations: average per MOOC: 51,263
  • 597,692 ‘persons’: average of 1.4 MOOCs per person
  • 292,852 (35%) never engaged with the content (“Only registered”)
  • 469,702 (56%) viewed (i.e. clicked on a module) less than half of the content (“Only viewed”)
  • 35,937 (4%) explored more than half the content, but did not get a certificate (average per MOOC: 2,114)
  • 43,196  (5%) earned certificates (average per MOOC: 2,540)

Participants

  • 234,463 (33%) report a high school education or lower
  • 66% of all participants, and 74% of all who obtained a certificate, have a bachelor’s degree or above
  • 213,672 (29%) of all participants, and 33% of all who obtained a certificate, are female
  • 26 was the median age, with 45,844 (6%) over 50 years of age
  • 20,745 (3%) of all participants were from the UN listed least developed countries
  • there are ‘considerable differences in … demographics such as gender, age … across courses’.

Comments

First, congratulations to Harvard and MIT for not only doing this research on MOOCs, but also for making it openly available and releasing it early.

Second, I agree that percentages can be misleading, that a focus on certification is not the best way to assess the value of a MOOC, and that absolute figures matter for assessing the value of MOOCs. However, this is NOT the way most commentators and the media have focused on MOOCs. Percentages and certification DO matter if MOOCs are being seen as a substitute or replacement for formal education. MOOCs need to be judged for what they are: a somewhat unique – and valuable – form of non-formal education.

Third, if we do look at absolute numbers, they are in my view not that impressive – an average of 2,540 per course earning a certificate, and less than 5,000 per course following more than half the content. The Open University, with completely open access, was getting higher numbers of students completing credit-based foundation courses when it started. The History Channel (a cable TV channel in North America) does a lot better, in terms of numbers. We have already seen overall average numbers for MOOCs dropping considerably as they have become more common. So when we account for the Hawthorne effect, the results are certainly not startling.

Fourth, these results very much reminded me of the research on educational broadcasting 30 years ago (for more details, see the footnote). If you substituted ‘MOOC’ for ‘educational television’, the results would be almost identical (except that there was a higher proportion of women than men participating then). Perhaps they should read my very old book, “Broadcasting in Education: An Evaluation”. (I still have a few copies in a cupboard somewhere.)

Lastly, where is the reference to relevant previous research or theory (see, for instance, the footnote to this post)? There are certainly unique aspects of MOOCs that deserve to be researched. However, while MOOCs may be new, non-formal learning is not, nor is credit-based online learning, nor open education, nor educational broadcasting, of which MOOCs are a new format. Much of what we already know about these areas also applies to some aspects of MOOCs. Once again, though, Harvard and MIT seem to live in an environment that pays no attention to what happens outside their cocoon. If it’s not theirs, it doesn’t count. This is simply not good enough. In no other field would you get away with ignoring all previous research or work in related areas such as credit-based online learning, open education or educational broadcasting.

Having got that off my chest, I did find the paper well written and interesting and certainly worth a careful read. I look forward to reading – and reviewing – future papers.

Footnote: MOOCs and the onion theory of educational broadcasting

I eventually found a copy of my book. I blew the dust off it and guess what I found.

Here’s what I wrote about ‘levels of commitment’ in non-formal educational broadcasting in 1984 (p.99):

At the centre of the onion is a small core of fully committed students who work through the whole course, and, where available, take an end-of-course assessment or examination. Around the small core will be a rather larger layer of students who do not take any examination but do enrol with a local class or correspondence school. There may be an even larger layer of students who, as well as watching and listening, also buy the accompanying textbook, but who do not enrol in any courses. Then, by far the largest group, are those that just watch or listen to the programmes. Even within this last group, there will be considerable variations, from those who watch or listen fairly regularly, to those, again a much larger number, who watch or listen to just one programme.

Now compare this to Figure 2 (p.13) of the Harvard/MIT report:

The MOOC ‘onion’ (Figure 2 from the Harvard/MIT report)

I also wrote (p.100):

A sceptic may say that the only ones who can be said to have learned effectively are the tiny minority that worked right through the course and successfully took the final assessment…A counter argument would be that broadcasting can be considered successful if it merely attracts viewers or listeners who might otherwise have shown no interest in the topic; it is the numbers exposed to the material that matter…the key issue then is whether broadcasting does attract to education those who would not otherwise have been interested, or merely provides yet another opportunity for those who are already well educated…There is a good deal of evidence that it is still the better educated in Britain and Europe that make the most use of non-formal educational broadcasting.

Thanks for the validation of my 1984 theory, Harvard/MIT.

Reference

Bates, A. (1984) Broadcasting in Education: An Evaluation. London: Constable