November 26, 2014

A ‘starter’ bibliography on MOOCs

Image: © educatorstechnology.com, 2014

For the increasing number of students doing master's dissertations or PhDs on MOOCs, I have collected together for convenience all the references made in the chapter on MOOCs in my open textbook, 'Teaching in a Digital Age'. However, there are many other publications, so this cannot be considered a comprehensive list. Also note the date of this blog post: anything published after this will not be here, unless you let me know about it.

In return, I would really appreciate suggestions for other references that you have found to be valuable or influential. I'm now less interested in 'opinion pieces'; I'm looking for papers that reflect actual experience of, or research on, MOOCs.

Balfour, S. P. (2013). Assessing writing in MOOCs: Automated essay scoring and calibrated peer review. Research & Practice in Assessment, Vol. 8.

Bates, A. (1985) Broadcasting in Education: An Evaluation London: Constable

Bates, A. and Sangrà, A. (2011) Managing Technology in Higher Education San Francisco: Jossey-Bass/John Wiley and Co

Bates, T. (2012) What’s right and what’s wrong with Coursera-style MOOCs Online Learning and Distance Education Resources, August 5

Bayne, S. (2014) Teaching, Research and the More-than-Human in Digital Education Oxford UK: EDEN Research Workshop (keynote: no printed record available)

Blackall, L. (2014) Open online courses and massively untold stories, GoogleDocs

Book, P. (2013) ACE as Academic Credit Reviewer – Adjustment, Accommodation, and Acceptance WCET Learn, July 25

Chauhan, A. (2014) Massive Open Online Courses (MOOCS): Emerging Trends in Assessment and Accreditation Digital Education Review, No. 25

Christensen, C. (2010) Disrupting Class, Expanded Edition: How Disruptive Innovation Will Change the Way the World Learns New York: McGraw-Hill

Christensen, C. and Eyring, H. (2011) The Innovative University: Changing the DNA of Higher Education New York: John Wiley & Sons

Christensen, C. and Weise, M. (2014) MOOCs disruption is only beginning, The Boston Globe, May 9

Collins, E. (2013) SJSU Plus Augmented Online Learning Environment Pilot Project Report San Jose CA: The Research and Planning Group for California Colleges

Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Daniel, J. (2012) Making sense of MOOCs: Musings in a maze of myth, paradox and possibility Seoul: Korean National Open University

Dillenbourg, P. (ed.) (1999) Collaborative-learning: Cognitive and Computational Approaches. Oxford: Elsevier

Dillenbourg, P. (2014) MOOCs: Two Years Later, Oxford UK: EDEN Research Workshop (keynote: no printed record available)

Downes, S. (2012) Massively Open Online Courses are here to stay, Stephen’s Web, July 20

Downes, S. (2014) The MOOC of One, Valencia, Spain, March 10

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Falchikov, N. and Goldfinch, J. (2000) Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks Review of Educational Research, Vol. 70, No. 3

Firmin, R. et al. (2014) Case study: using MOOCs for conventional college coursework Distance Education, Vol. 35, No. 2

Friedman, T. (2013) Revolution hits the universities, New York Times, January 26

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Haynie, D. (2014) State Department hosts 'MOOC Camp' for online learners, US News, January 20

Hernandez, R. et al. (2014) Promoting engagement in MOOCs through social collaboration Oxford UK: Proceedings of the 8th EDEN Research Workshop

Hill, P. (2012) Four Barriers that MOOCs Must Overcome to Build a Sustainable Model e-Literate, July 24

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Hollands, F. and Tirthali, D. (2014) MOOCs: Expectations and Reality New York: Columbia University Teachers’ College, Center for Benefit-Cost Studies of Education

Hülsmann, T. (2003) Costs without camouflage: a cost analysis of Oldenburg University's two graduate certificate programs offered as part of the online Master of Distance Education (MDE): a case study, in Bernath, U. and Rubin, E., (eds.) Reflections on Teaching in an Online Program: A Case Study Oldenburg, Germany: Bibliotheks- und Informationssystem der Carl von Ossietzky Universität Oldenburg

Jaschik, S. (2013) MOOC Mess, Inside Higher Education, February 4

Knox, J. (2014) Digital culture clash: 'massive' education in the E-learning and Digital Cultures MOOC Distance Education, Vol. 35, No. 2

Kop, R. (2011) The Challenges to Connectivist Learning on Open Online Networks: Learning Experiences during a Massive Open Online Course International Review of Research in Open and Distance Learning, Vol. 12, No. 3

Lave, J. and Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press

Lyotard, J-F. (1979) La Condition postmoderne: rapport sur le savoir Paris: Minuit

Mackness, J. (2013) cMOOCs and xMOOCs – key differences, Jenny Mackness, October 22

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, Merlot Journal of Online Learning and Teaching, Vol. 9, No. 2

Piech, C., Huang, J., Chen, Z., Do, C., Ng, A., & Koller, D. (2013). Tuned models of peer assessment in MOOCs. Palo Alto, CA: Stanford University.

Rumble, G. (2001) The costs and costing of networked learning, Journal of Asynchronous Learning Networks, Vol. 5, No. 2

Suen, H. (2014) Peer assessment for massive open online courses (MOOCs) International Review of Research in Open and Distance Learning, Vol. 15, No. 3

Tapscott, D. (undated) The transformation of education dontapscott.com

University of Ottawa (2013) Report of the e-Learning Working Group Ottawa ON: The University of Ottawa

van Zundert, M., Sluijsmans, D., van Merriënboer, J. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20, 270-279

Watters, A. (2012) Top 10 Ed-Tech Trends of 2012: MOOCs Hack Education, December 3

Yousef, A. et al. (2014) MOOCs: A Review of the State-of-the-Art Proceedings of the 6th International Conference on Computer Supported Education – CSEDU 2014, Barcelona, Spain

A New Zealand analysis of MOOCs

Shrivastava, A. and Guiney, P. (2014) Technological Development and Tertiary Education Delivery Models: The Arrival of MOOCs Wellington NZ: Tertiary Education Commission/Te Amorangi Mātauranga Matua

Why this paper?

Another report for the record on MOOCs, this time from the New Zealand Tertiary Education Commission. The reasoning behind this report:

The paper focuses on MOOCs [rather than doing a general overview of emerging technologies] because of their potential to disrupt tertiary education and the significant opportunities, challenges and risks that they present. MOOCs are also the sole focus of this paper because of their scale and the involvement of the elite United States universities.

What’s in the paper?

The paper provides a fairly standard, balanced analysis of developments in MOOCs, first by describing the different MOOC delivery models, their business models and the drivers behind MOOCs, then by following up with a broad discussion of the possible implications of MOOCs for New Zealand, such as unbundling of services, possible economies of scale, globalization of tertiary (higher) education, adaptability to learners’ and employers’ needs, and the possible impact on New Zealand’s tertiary education workforce.

There is also a good summary of MOOCs being offered by New Zealand institutions.

At the end of the paper some interesting questions for further discussion are raised:

  • What will tertiary education delivery look like in 2030?

  • What kinds of opportunities and challenges do technological developments, including MOOCs, present to the current policy, regulatory and operational arrangements for tertiary teaching and learning in New Zealand?

  • How can New Zealand make the most of the opportunities and manage any associated risks and challenges?

  • Do MOOCs undermine the central value of higher education, or are they just a helpful ‘updating’ that reflects its new mass nature?

  • Where do MOOCs fit within the New Zealand education and qualifications systems?

  • Who values the knowledge and skills gained from a MOOC programme and why?

  • Can economies of scale be achieved through MOOCs without loss of quality?

  • Can MOOCs lead to better learning outcomes at the same or less cost than traditional classroom-based teaching? If so, how might the Government go about funding institutions that want to deliver MOOCs to a mix of domestic and international learners?

  • What kinds of MOOC accreditation models might make sense in the context of New Zealand’s quality-assurance system?

Answers on a postcard, please, to the NZ Tertiary Education Commission.

Comment

Am I alone in wondering what has happened to for-credit online education in government thinking about the future? It is as if 20 years of development of undergraduate and graduate online courses and programs never existed. Surely a critical question for institutions and government planners is:

  • what are the relative advantages and disadvantages of MOOCs over other forms of online learning? What can MOOCs learn from our prior experience with credit-based online learning?

There are several reasons for considering this, but one of the most important is the huge investment many institutions, and, indirectly, governments, have already made in credit-based online learning.

By and large, online learning in publicly funded universities, both in New Zealand and in Canada, has been very successful in terms of both increasing access and student learning. It is also important to be clear about the differences, and some of the similarities, between credit-based online learning and MOOCs.

Some of the implications laid out in this paper, such as possibilities of consortia and institutional collaboration, apply just as much to credit-based online learning as to MOOCs, and many of the negative criticisms of MOOCs, such as difficulties of assessment and lack of learner support, disappear when applied to credit-based online learning.

Please, policy-makers, realise that MOOCs are not your only option for innovation through online learning. There are more established and well-tested solutions already available.

A review of MOOCs and their assessment tools

What kind of MOOC?

Chauhan, A. (2014) Massive Open Online Courses (MOOCS): Emerging Trends in Assessment and Accreditation Digital Education Review, No. 25

For the record, Amit Chauhan, from Florida State University, has reviewed the emerging trends in MOOC assessments and their application in supporting student learning and achievement.

Holy proliferating MOOCs!

He starts with a taxonomy of MOOC instructional models, as follows:

  • cMOOCs
  • xMOOCs
  • BOOCs (big open online courses): only one example is given, by a professor from Indiana University with a grant from Google; it appears to be a cross between an xMOOC and a cMOOC and had 500 participants.
  • DOCCs (distributed open collaborative course): this involved 17 universities sharing and adapting the same basic MOOC
  • LOOCs (little open online courses): in addition to 15-20 tuition-paying campus-based students, these courses allow a limited number of non-registered students to take the course, also for a fee. Three examples are given, all from New England.
  • MOORs (massive open online research): again, just one example is given, from UC San Diego, which seems to be a mix of video-based lectures and student research projects guided by the instructors
  • SPOCs (small, private, online courses): the example given is from Harvard Law School, which pre-selected 500 students from over 4,000 applicants, who take the same video-delivered lectures as on-campus students enrolled at Harvard
  • SMOCs: (synchronous massive open online courses): live lectures from the University of Texas offered to campus-based students are also available synchronously to non-enrolled students for a fee of $550. Again, just one example.

MOOC assessment models and emerging technologies

Chauhan describes ‘several emerging tools and technologies that are being leveraged to assess learning outcomes in a MOOC. These technologies can also be utilized to design and develop a MOOC with built-in features to measure learning outcomes.’

  • learning analytics on MIT’s 6.002x, Circuits and Electronics. This is a report of the study by Breslow et al. (2013) of the use of learning analytics to study participants’ behaviour on the course to identify factors influencing student performance.
  • personal learning networks on PLENK 2010: this cMOOC is actually about personal learning networks and encouraged participants to use a variety of tools to develop their own personal learning networks
  • mobile learning on MobiMOOC, another connectivist MOOC. The learners in MobiMOOC utilized mobile technologies for accessing course content, knowledge creation and sharing within the network. Data were collected from participant discussion forums and hashtag analysis to track participant behaviour
  • digital badges have been used in several MOOCs to reward successful completion of an end-of-course test, participation in discussion forums, or peer review activities
  • adaptive assessment: assessments based on Item Response Theory (IRT) are designed to adapt automatically to student ability, in order to measure learner performance and learning outcomes. The test items span different difficulty levels and, based on the learner's response to each item, the difficulty level decreases or increases to match the learner's estimated ability (a minimal sketch of this idea follows the list). No example of actual use of IRT in MOOCs was given.
  • automated assessments: Chauhan describes two automated assessment tools, Automated Essay Scoring (AES) and Calibrated Peer Review™ (CPR), that are really automated tools for assessing and giving feedback on writing skills. One study on their use in MOOCs (Balfour, 2013) is cited.
  • recognition of prior learning: I think Chauhan is suggesting that institutions offering RPL can/should include MOOCs in student RPL portfolios.
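
Chauhan does not provide an implementation of adaptive assessment, but the logic he describes can be illustrated with a short sketch. The Python fragment below is purely illustrative: the item bank, the simple step-size ability update and the function names are my own assumptions, not anything taken from the article or from any MOOC platform. It uses the simplest IRT model, the one-parameter (Rasch) model, to pick each successive question close to the learner's current estimated ability.

```python
import math
import random

# Hypothetical item bank: item id -> difficulty in logits (illustrative values only).
ITEM_BANK = {"q1": -2.0, "q2": -1.0, "q3": 0.0, "q4": 1.0, "q5": 2.0}

def p_correct(theta, b):
    """Rasch (1PL) model: probability of a correct response for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, asked):
    """Pick the unasked item whose difficulty is closest to the current ability estimate,
    i.e. the most informative item under the 1PL model."""
    remaining = {i: b for i, b in ITEM_BANK.items() if i not in asked}
    return min(remaining, key=lambda i: abs(remaining[i] - theta))

def update_ability(theta, correct, step=0.5):
    """Crude ability update: up after a correct answer, down after an incorrect one.
    A real IRT engine would re-estimate theta by maximum likelihood over all responses."""
    return theta + step if correct else theta - step

def run_adaptive_test(true_theta, n_items=5, theta_estimate=0.0):
    """Simulate a learner of ability true_theta answering adaptively chosen items."""
    asked = []
    for _ in range(n_items):
        item = next_item(theta_estimate, asked)
        asked.append(item)
        correct = random.random() < p_correct(true_theta, ITEM_BANK[item])
        theta_estimate = update_ability(theta_estimate, correct)
    return theta_estimate

random.seed(1)
print(run_adaptive_test(true_theta=1.0))  # the estimate should drift towards the true ability
```

The point of the sketch is that each learner sees a different sequence of items, so raw scores are not directly comparable across learners; comparisons have to be made on the estimated ability scale.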

Chauhan concludes:

Assessment in a MOOC does not necessarily have to be about course completion.  Learners can be assessed on time-on-task; learner-course component interaction; and a certification of the specific skills and knowledge gained from a MOOC….. Ultimately, the satisfaction gained from completing the course can be potential indicator of good learning experiences.

Alice in MOOCland

Chauhan describes the increasing variation of instructional methods now associated with the generic term 'MOOC', to the point where one has to ask whether the term has any consistent meaning. It's difficult to see how a SPOC, for instance, differs from a typical online credit course, except perhaps in that it uses recorded lectures rather than a learning management system or VLE. The only common factor in these variations is that the course is being offered to some non-registered students, but if they have to pay a $500 fee, surely they are registered students? If a course is neither massive, nor open, nor free, how can it be a MOOC?

Further, if MOOC participants are taking exactly the same course and tests as registered students, will the institution award them credit for it and admit them to the institution? If not, why not? It seems that some institutions really haven't thought this through. I'd like to know what registrars make of all this.

At some point, institutions will need to develop a clearer, more consistent strategy for open learning, in terms of how it can best be provided, how it calibrates with formal learning, and how open learning can be accommodated within the fiscal constraints of the institution, and then where MOOCs might fit with the strategy. It seems that a lot of institutions – or rather instructors – are going into open learning buttock-backwards.

More disturbing for me, though, is the argument Chauhan makes for assessing everything except what participants learn from MOOCs. With the exception of the automated tests, all these tools do is describe all kinds of behaviour except learning. These tools may be useful for identifying factors that influence learning, as a post hoc rationalization, but you need to be able to measure the learning in the first place, unless you see MOOCs as some cruel form of entertainment. I have no problem with trying to satisfy students, and I have no problem with MOOCs as un-assessed, non-formal education, but if you try to assess participants, at the end of the day it's what they learn that matters. MOOCs need better tools for measuring learning, but I didn't see any described in this article.

References

Balfour, S. P. (2013). Assessing writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review. Research & Practice in Assessment, Vol. 8, No. 1

Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D., & Seaton, D. T. (2013). Studying learning in the worldwide classroom: Research into edX's first MOOC. Research & Practice in Assessment, Vol. 8, pp. 13-25

New journal on research into online learning for k-12 educators

Image: © myeducation.com

The Journal of Online Learning Research (JOLR) is a peer-reviewed, international journal devoted to the theoretical, empirical, and pragmatic understanding of technologies and their impact on pedagogy and policy in primary and secondary (K-12) online and blended environments.

This new quarterly journal (premiering January 2015) is open access and is distributed by the EdITLib Digital Library; it is also available in print by subscription for institutions and libraries.

JOLR papers should address online learning, catering particularly to the educators who research, practice, design, and/or administer in primary and secondary schooling in online settings. However, the journal also serves those educators who have chosen to blend online learning tools and strategies in their face-to-face classroom.

Author guidelines and submission details are available on the journal's website.

Thanks to Russell Poulin at WCET for directing me to this.

What students learned from an MIT physics MOOC

Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Why this paper?

I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:

  • it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
  • as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
  • the paper indicates, in line with many other comparisons between modes of delivery, that the conditions which are associated with the context of teaching are more important than just the mode of delivery
  • I had to read this paper carefully for my book, 'Teaching in a Digital Age', but for reasons of space I cannot go into detail on it there, so I might as well share my full analysis with you.

What was the course?

8.MReV – Mechanics ReView, an introduction to Newtonian mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed the introductory Newtonian mechanics course in the fall. In other words, it is based on a second-chance course for campus-based MIT students.

The online version was offered in the summer semester as a free, open access course through edX and was aimed particularly at high school physics teachers but also to anyone else interested. The course consisted of the following components:

  • an online eText, especially designed for the course
  • reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
  • an online discussion area/forum
  • mainly multiple-choice online tests and ‘quizzes’, interspersed on a weekly basis throughout the course.

Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were 'passing' or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course; 1,030 students earned certificates.

Thus the study measured only the learning of the most successful online students (in terms of completing the online course).

Methodology (summary)

The study measured primarily ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that generally can be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.

Two methods to test learning were used: a comparison between each student's pre-test and post-test scores to measure the learning gain during the course; and an analysis based on Item Response Theory (IRT), which does not show absolute learning (as measured by pre- and post-testing), but rather improvement relative to the 'class average'.
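
The summary above does not spell out how the pre-test/post-test comparison is expressed, but such comparisons in physics education research are normally reported as a normalized gain, i.e. the fraction of the possible improvement that was actually achieved. Treating this as the measure behind the 'average gain of 0.3' discussed under Conclusions below is my assumption, not something stated in the paper's methodology summary:

```latex
g = \frac{\text{post-test score} - \text{pre-test score}}{\text{maximum score} - \text{pre-test score}}
```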

Because of the large number of MOOC participants included in the study, the researchers were able to analyse performance between various 'cohorts' within the MOOC participants, such as:

  • physics teachers
  • not physics teachers
  • physics background
  • no physics background
  • college math
  • no math
  • post-graduate qualification
  • bachelor degree
  • no more than high school

Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:

  • four hours of instruction in which staff interacted with small groups of students (a flipped classroom) each week,
  • staff office hours,
  • help from fellow students,
  • available physics tutors,
  • MIT library

Main results (summary)

  • gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes and lower than (but closer to) those found in 'interactive' classes, but this result is hedged around with some considerable qualifications ('more studies on MOOCs need to be done to confirm this').
  • in spite of the extra instruction that the on-campus students had, there was no evidence of positive weekly relative improvement of the on-campus students compared with the online students (indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse)
  • there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts

Conclusions

This is a valuable research report, carefully conducted and cautiously interpreted by the authors. However, for these reasons, it is really important not to jump to conclusions. In particular, the authors’ own caution at the end of the paper should be noted:

It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.

To this I would add that the design of this MOOC was somewhat different to many other xMOOCs in that it was based on online texts specially designed for the MOOC, and not on video lectures.

I'm still not sure from reading the paper how much students actually learned from the MOOC. About 1,000 who finished the course got a certificate, but it is difficult to interpret the gain in knowledge. The statistical measurement of an average gain of 0.3 doesn't mean a lot. There is some mention of the difference being between a B and a B+, but I have probably misinterpreted that. If that is the case, though, I would certainly expect students taking a 13-week course to do much better than that. It would have been more helpful to have graded students on the pre-test and then compared those grades on the post-test. We could then see whether gains were of the order of at least one grade, for instance.
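
To make the 0.3 figure concrete, and assuming it is a normalized gain as sketched under Methodology above, a learner who scored 50% on the pre-test would on average finish at:

```latex
\text{post-test} = 50\% + 0.3 \times (100\% - 50\%) = 65\%
```

That is, the average student closes about a third of the gap between their starting score and a perfect score, which is one reason a grade-on-grade comparison would be easier to interpret.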

Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.