December 16, 2017

Online learning in 2016: a personal review


Image: © Institute for Economics and Peace. Canada is ranked seventh most peaceful. We don’t know where it ranks though in terms of online learning.

A personal review

I am not going to do a review of all the developments in online learning in 2016 (for that, see Audrey Watters' excellent HackEducation Trends). What I am going to do instead is review what I actually wrote about in this blog in 2016, indicating what was of particular interest to me in online learning during the year. I have identified 38 posts in which I explored in some detail issues that bubbled up (at least for me) in 2016.

1. Tracking online learning

Building a national survey of online learning in Canada (134 hits)

A national survey of university online and distance learning in Canada (1,529 hits)

In the USA, fully online enrollments continue to grow in 2014 (91 hits)

Are you ready for blended learning? (389 hits)

What the Conference Board of Canada thinks about online learning (200 hits)

I indulged my obsession with tracking the extent to which online learning is penetrating post-secondary education with five posts on this topic. In a field undergoing such rapid change, it is increasingly important to be able to track exactly what is going on. Thus a large part of my professional activity in 2016 was devoted to establishing, almost from scratch, a national survey of online learning in Canadian post-secondary institutions. I would have written more about this topic, but until the survey is successfully conducted in 2017, I prefer to keep a low profile on the issue.

However, during 2016 it did become clear to me, partly from pilot testing of the questionnaire and partly through visits to universities, that blended learning is gaining ground in Canadian post-secondary education at a much faster rate than I had anticipated. It is also raising critical questions about what is best done online and what face-to-face, and about how to prepare institutions and instructors for what is essentially a revolution in teaching.

This can be best summarized by what I wrote about the Conference Board of Canada’s report:

What is going on is a slowly boiling and considerably variable revolution in higher education that is not easily measured or even captured in individual anecdotes or interviews.

2. Faculty development and training

Getting faculty and instructors into online learning (183 hits)

Initiating instructors to online learning: 10 fundamentals (529 hits)

Online learning for beginners: 10. Ready to go (+ nine other posts on this topic = 4,238 hits)

5 IDEAS for a pedagogy of online learning (708 hits)

This was the area to which I devoted the most space, with ten posts on ‘Online Learning for Beginners’, aimed at instructors resisting or unready for online learning. These ten posts were then edited and published by Contact North as the 10 Fundamentals of Teaching Online.

Two fundamental conclusions: we need not only better organizational strategies to ensure that faculty have the knowledge and training they need for effective teaching and learning in a digital age, but also new teaching strategies and approaches that exploit the benefits and, even more importantly, avoid the pitfalls of blended learning and learning technologies. I have been trying to make a contribution in this area, but much more needs to be done.

3. Learning environments

Building an effective learning environment (6,173 hits)

EDEN 2016: Re-imagining Learning Environments (597 hits)

Culture and effective online learning environments (1,260 hits)

Closely linked to developing appropriate pedagogies for a digital age is the concept of designing appropriate learning environments, based on learners’ construction of knowledge and the role of instructors in guiding and fostering knowledge management, independent learning and other 21st century skills.

This approach, I argued, is a better 'fit' for learners in a digital age than thinking in terms of blended, hybrid or fully online learning. It recognizes not only that technology can be used to design learning environments that are very different from school- or campus-based ones, but also that technology is just one component of a much richer learning context.

4. Experiential learning online

A full day of experiential learning in action (188 hits)

An example of online experiential learning: Ryerson University’s Law Practice Program (383 hits)

Is networked learning experiential learning? (163 hits)

These three posts explored a number of ways in which experiential learning is being done online, as this is a key methodology for developing skills in particular.

5. Open education

Acorns to oaks? British Columbia continues its progress with OERs (185 hits)

Talking numbers about open publishing and online learning (113 hits)

Towards an open pedagogy for online learning (385 hits)

These posts also tracked the development of open publishing and open educational resources, particularly in British Columbia, leading me to conclude that the OER 'movement' has far too narrow a concept of openness, and that in its place we need an open pedagogy in which open educational resources are just one component, and perhaps not the most significant.

6. Technology applications in online learning

An excellent guide to multimedia course design (659 hits)

Is video a threat to learning management systems? (603 hits)

Some comments on synchronous online learning technologies (231 hits)

Amongst all the hype about augmented reality, learning analytics and the application of artificial intelligence, I found it more useful to look at some of the technologies that are in everyday use in online learning, and how these could best be used.

7. Technology and alienation

Technology and alienation: online learning and labour market needs (319 hits)

Technology and alienation: symptoms, causes and a framework for discussion (512 hits)

Technology, alienation and the role of education: an introduction (375 hits)

Automation or empowerment: online learning at the crossroads (1,571 hits)

Why digital technology is not necessarily the answer to your problem (474 hits)

These were more philosophical pieces, prompted to some extent by wider concerns about the impact of technology on jobs and how that influenced Brexit and the Trump phenomenon.

Nevertheless, this issue is also very relevant to the teaching context. In particular, I was challenging the 'Silicon Valley' assumption that computers will eventually replace the need for teachers, and the danger of using algorithms in teaching without knowing who wrote the algorithms, what their philosophy of teaching is, and thus what assumptions have been built into the use of data.

Image: Applift

8. Learning analytics

Learning analytics and learning design at the UK Open University (90 hits)

Examining ethical and privacy issues surrounding learning analytics (321 hits)

Continuing more or less the same theme of analysing the downside as well as the upside of technology in education, these two posts looked at how some institutions, and the UK Open University in particular, are being thoughtful about the implications of learning analytics, and building in policies for protecting privacy and gaining student ‘social license’ for the use of analytics.

9. Assessment

Developing a next generation online learning assessment system (532 hits)

This is an area where much more work needs to be done. If we are to develop new or better pedagogies for a digital age, we will also need better assessment methods. Unfortunately, the focus once again appears to be more on the tools of assessment, such as online proctoring, where large gains were made in 2016, but these tools still serve traditional assessment procedures such as time-restricted exams, multiple-choice tests and essay writing. What we need are new methods of assessment that focus on measuring the types of knowledge and skills needed in a digital age.

For instance, e-portfolios have held a lot of promise for a long time, but are still being used and evaluated at a painfully slow rate. They do, though, offer one method of assessment that much better reflects the needs of assessing 21st century knowledge and skills. We need more of this kind of imagination and creativity in developing new assessment methods.

That was the year that was

Well, it was 2016 from the perspective of someone no longer teaching online or managing online learning:

  • How far off am I, from your perspective?
  • What were the most significant developments for you in online learning in 2016?
  • What did I miss that you think should have been included? Perhaps I can focus on this next year.

I have one more post looking at 2016 to come, but that will be more personal, looking at my whole range of online learning activities in 2016.

In the meantime have a great seasonal break and I will be back in touch some time in the new year.

Developing a next generation online learning assessment system

Facial recognition

Universitat Oberta de Catalunya (2016) An Adaptive Trust-based e-assessment system for learning (@TeSLA) Barcelona: UOC

This paper describes a large, collaborative European Commission project headed by the Open University of Catalonia, called TeSLA (no, not to develop a European electric car): its aim is a state-of-the-art online assessment system that will be accepted as equal to, if not better than, traditional face-to-face assessment in higher education.

The challenge

The project argues that at the moment there is no (European?) online assessment system that:

  • has the same level of trust as face-to-face assessment systems
  • is universally accepted by educational institutions, accreditation agencies and employers
  • incorporates pedagogical as well as technical features
  • integrates with other aspects of teaching and learning
  • provides true and secure ‘authentication’ of authorship.

I added the 'European', as I think this claim might come as a surprise to Western Governors University, which has been successfully using online proctoring for some time. It is also why I used the term 'next generation' in the heading, as the TeSLA project is aiming at something much more technologically advanced than the current WGU system, which consists mainly of a set of web cameras observing learners taking an assessment (click here for a demonstration).

Also, the TeSLA proposal makes a good point when it says any comprehensive online assessment system must also be able to handle formative as well as summative assessment, and that this can be a challenge as formative assessment is often embedded in the day-to-day teaching and learning activities.

But the main reason for this project is that online learning assessment currently lacks the credibility of face-to-face assessment.

The solution

A non-invasive system that is able to provide a quality continuous assessment model, using proportionate and necessary controls that will ensure student identity and authorship [in a way that offers] accrediting agencies and society unambiguous proof of academic progression….

Any solution must work fully online and take into account ‘academic requirements’ for assessment, including enriched feedback, adaptive learning, formative assessment and personalized learning.

This will require the use of technologies that provide reliable and accurate user authentication and identification of authorship, face and voice recognition, and keystroke dynamics recognition (see here for video examples of the proposed techniques).
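
To make the keystroke dynamics idea concrete, here is a minimal sketch of how such a check might work, assuming a stored profile of per-key dwell times. The function names, data format and threshold are invented for illustration and are not the TeSLA project's actual method.

```python
# Minimal sketch of keystroke-dynamics matching (illustrative only; not the
# TeSLA project's actual technique). All names and thresholds are hypothetical.
from statistics import mean, stdev

def enroll_profile(samples):
    """Build a per-key profile of (mean, stdev) dwell times in ms from enrolment sessions."""
    profile = {}
    for key in set().union(*samples):
        times = [s[key] for s in samples if key in s]
        if len(times) >= 2:
            profile[key] = (mean(times), stdev(times))
    return profile

def score_sample(profile, sample):
    """Mean z-score distance between a new typing sample and the stored profile."""
    distances = []
    for key, dwell in sample.items():
        if key in profile:
            mu, sigma = profile[key]
            distances.append(abs(dwell - mu) / (sigma or 1.0))
    return mean(distances) if distances else float("inf")

# Accept the claimed identity only if the distance falls below a tuned threshold.
profile = enroll_profile([{"a": 105, "s": 98}, {"a": 110, "s": 102}, {"a": 101, "s": 95}])
print(score_sample(profile, {"a": 108, "s": 99}) < 2.0)  # True -> plausibly the same typist
```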

The solution must result in

a system based on demonstrable trust between the institution and its students. Student trust is continuously updated according to their interaction with the institution, such as analysis of their exercises, peer feedback in cooperative activities or teacher confidence information. Evidence is continuously collected and contrasted in order to provide such unambiguous proof.

The players

The participants in this project include

  • eight universities,
  • four research centres,
  • three educational quality assurance agencies,
  • three technology companies,
  • from twelve different countries.

In total the project will have a team of about 80 professionals and will use large-scale pilots involving over 14,000 European students.

Comment

I think this is a very interesting project and it is likely to grab a lot of attention. At the end of the day, there could well be some significant improvements to online assessment that will actually transfer to multiple online courses and programs.

However, I spent many years working on large European Commission projects and I am certainly glad I don't have to do that any more. Quite apart from the truly mindless bureaucracy that always accompanies such projects (the form-filling is vast and endless), there are real challenges in getting together participants who can truly contribute to such a project. Participants are determined more by political considerations, such as regional representation, than by technical competence. Such projects in the end are largely driven by two or three key players; the remaining participants are more likely to slow down or inhibit the project, and they certainly divert essential funding away from those most able to make the project succeed. However, these projects are as much about raising the level of learning technologies across all European countries as about becoming a world leader in the field.

These criticisms apply to most European Commission projects, but there are some issues particular to this one:

  1. I am not convinced that there is a real problem here, or at least a problem that requires better technology as a solution. Assessment for online learning has been successfully implemented now for more than 20 years, and while it mostly depends on some form of face-to-face invigilation, this has not proved a major acceptability problem or a barrier to online enrolments. There will always be those who do not accept the equivalence of online learning, and the claimed shortcomings of online assessment are just another excuse for non-acceptance of online learning in general.
  2. Many of the problems of authenticity and authorship are the same for face-to-face assessment. Cheating is not exclusive to online learning, nor is there any evidence that it is more prevalent in online learning where it is provided by properly accredited higher education institutions. Such a study is just as likely to reduce rather than increase trust in online learning by focusing attention on an issue that has not been a big problem to date.
  3. Even if this project does result in more 'trustworthy' online assessment, there are huge issues of privacy and security of data involved, not to mention the likely cost to institutions. Perhaps the most useful outcome from this project will be a better understanding of these risks, and the development of protocols for protecting student privacy and the security of the data collected for this purpose. I wish, though, that a privacy commissioner were among the eighteen participants in this project. I fail to see how such a project could be anything but invasive for students, most of whom will be assessed from home.

For all these reasons, this project is well worth tracking. It has the potential to radically change the way we not only assess online learners, but also how we teach them, because assessment always drives learner behaviour. Whether such changes will be on balance beneficial though remains to be seen.

Keyboard dynamics

Answering questions about teaching online: assessment and evaluation

How to assess students online: remote exam proctoring

Following on from my Contact North webinar on the first five chapters of my book, Teaching in a Digital Age, and my blog post on this yesterday, there were four follow-up questions from the webinar to which I have posted written answers. Here they are:

Unanswered Questions

Q: If multiple choice is not great for applied learning assessment, could you please give us some tips for more effective assessment in the virtual environment?

Big question! There are several ways to assess applied learning, and their appropriateness will depend on the subject area and the learning goals (Look particularly at Appendix A, Section 8). Here are some examples:

  • via project work, where the outcome of the project is assessed (this could be either an individual or a group project). Marking a project that may take several weeks' work on the part of students helps keep the marking workload down, although this may be offset to some extent by the help that may need to be given to learners during the project.
  • through e-portfolios, where students are asked to apply what they are learning to practical real-life contexts. The e-portfolio is then used to assess what students have learned by the end of the course.
  • use of online discussion forums, where students are assessed on their contributions, in particular on their ability to apply knowledge to specific real world situations (e.g. in contemporary international politics)
  • using simulations where students have to input data to solve problems, and make decisions. The simulation collects the data and allows for qualitative assessment by the instructor. (This depends on there being suitable simulations available or the ability to create one.)

Q: I am finding in my post-graduate online courses that the professor is interacting less and less in the weekly online forums. While I know there are competing theories as to how much they should interact with students, do you have an opinion on whether or not professors should interact weekly? Personally, I enjoy their interaction; I find it furthers my learning.

This is another big issue. In general, the research is pretty consistent: in online learning, instructor 'presence' is critical to the success of many students. Look particularly at Chapter 4, Section 4 and Chapter 11, Section 10. However, presence alone is not sufficient. The online discussion must be designed properly to lead to academic learning, and the instructor's interventions should raise the level of thinking in the discussion (see Section 4.4.2 in the book). Above all, the discussion topics must be relevant and, from a student's perspective, clearly contribute to answering assessment questions better. Instructors should, in my view, check their online discussion forums daily and respond or intervene at least weekly. Again, though, this is a design issue: the better the course design, the less need there should be for daily intervention.

Q: Can you give an example of how a MOOC can supplement a face-to-face or fully online course?

I think the best way is to consider a MOOC as an open educational resource (OER). There is a whole chapter (Chapter 10) in the book on OERs. Thus MOOCs (or more likely parts of MOOCs) might be used in a flipped classroom context, where students study the MOOC then come to class to do work around it. But be careful. Many MOOCs are not OER. They are protected by copyright and cannot be used without permission. They may be available only for a limited period. If it is your own MOOC, on the other hand, that’s different. My question is though: is the MOOC material the best OER material available or are there other sources that would fit the class requirement better, such as an open textbook? Or even better, should you look at designing the course completely differently, to increase student interaction, self-learning and the development of higher order thinking skills, by using one of the other teaching methods in the book?

Q: Would better learning analytics reports help teachers have a more relevant role in MOOCS?

Learning analytics can be helpful but usually they are not sufficient. Analytics provide only quantitative or measurable data, such as time on task, demographics about successful or unsuccessful students, analysis of choices in multiple-choice tests, etc. This is always useful information but will not necessarily tell you why students are struggling to understand or are not continuing. Compare this with a good online discussion forum where students can raise questions and the instructor can respond. Students’ comments, questions and discussion can provide a lot of valuable feedback about the design of the course, but require in most cases some form of qualitative analysis by the instructor. This is difficult in massive online courses and learning analytics alone will not resolve this, although they can help, for instance, in focusing down on those parts of the MOOC where students are having difficulties.
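
To illustrate the kind of quantitative data learning analytics typically provide, here is a minimal sketch that computes time on task and forum activity from a hypothetical clickstream. The event format is invented; real platforms each have their own logging schema.

```python
# Illustrative sketch of typical learning analytics measures (time on task,
# activity counts) computed from a made-up clickstream. The event format is
# hypothetical and not tied to any particular platform.
from collections import defaultdict
from datetime import datetime

events = [  # (student_id, event_type, ISO timestamp)
    ("s1", "video_play", "2016-03-01T10:00:00"),
    ("s1", "video_pause", "2016-03-01T10:12:00"),
    ("s1", "forum_post", "2016-03-01T10:30:00"),
    ("s2", "video_play", "2016-03-01T11:00:00"),
    ("s2", "video_pause", "2016-03-01T11:03:00"),
]

def summarise(events):
    minutes_watched = defaultdict(float)
    forum_posts = defaultdict(int)
    last_play = {}
    for student, kind, ts in events:
        t = datetime.fromisoformat(ts)
        if kind == "video_play":
            last_play[student] = t
        elif kind == "video_pause" and student in last_play:
            minutes_watched[student] += (t - last_play.pop(student)).total_seconds() / 60
        elif kind == "forum_post":
            forum_posts[student] += 1
    return dict(minutes_watched), dict(forum_posts)

print(summarise(events))  # measures *what* students did, not *why* they struggled
```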

Any more questions?

I’m more than happy to post regular responses to any questions you may have about online teaching, either related to the book or quite independent of it. Just send them to me at tony.bates@ubc.ca

A review of MOOCs and their assessment tools

What kind of MOOC?

Chauhan, A. (2014) Massive Open Online Courses (MOOCs): Emerging Trends in Assessment and Accreditation, Digital Education Review, No. 25

For the record, Amit Chauhan, from Florida State University, has reviewed the emerging trends in MOOC assessments and their application in supporting student learning and achievement.

Holy proliferating MOOCs!

He starts with a taxonomy of MOOC instructional models, as follows:

  • cMOOCs
  • xMOOCs
  • BOOCs (a big open online course): only one example is given, by a professor from Indiana University with a grant from Google; it appears to be a cross between an xMOOC and a cMOOC and had 500 participants.
  • DOCCs (distributed open collaborative course): this involved 17 universities sharing and adapting the same basic MOOC
  • LOOC (little open online course): as well as 15-20 tuition-paying campus-based students, these courses allow a limited number of non-registered students to take the course, also for a fee. Three examples are given, all from New England.
  • MOORs (massive open online research): again, just one example is given, from UC San Diego, which seems to be a mix of video-based lectures and student research projects guided by the instructors
  • SPOCs (small, private, online courses): the example given is from Harvard Law School, which pre-selected 500 students from over 4,000 applicants, who take the same video-delivered lectures as on-campus students enrolled at Harvard
  • SMOCs: (synchronous massive open online courses): live lectures from the University of Texas offered to campus-based students are also available synchronously to non-enrolled students for a fee of $550. Again, just one example.

MOOC assessment models and emerging technologies

Chauhan describes ‘several emerging tools and technologies that are being leveraged to assess learning outcomes in a MOOC. These technologies can also be utilized to design and develop a MOOC with built-in features to measure learning outcomes.’

  • learning analytics on MIT’s 6.002x, Circuits and Electronics. This is a report of the study by Breslow et al. (2013) of the use of learning analytics to study participants’ behaviour on the course to identify factors influencing student performance.
  • personal learning networks on PLENK 2010: this cMOOC is actually about personal learning networks and encouraged participants to use a variety of tools to develop their own personal learning networks
  • mobile learning on MobiMOOC, another connectivist MOOC. The learners in MobiMOOC utilized mobile technologies for accessing course content, knowledge creation and sharing within the network. Data were collected from participant discussion forums and hashtag analysis to track participant behaviour
  • digital badges have been used in several MOOCs to reward successful completion of an end of course test, participation in discussion forums, or in peer review activities
  • adaptive assessment: assessments based on Item Response Theory (IRT) are designed to adapt automatically to learner ability in order to measure learner performance and learning outcomes. The tests include items at different difficulty levels and, based on the learner's response to each item, the difficulty level decreases or increases to match the learner's ability and potential (see the sketch after this list for the general idea). No example of actual use of IRT in MOOCs was given.
  • automated assessments: Chauhan describes two automated assessment tools, Automated Essay Scoring (AES) and Calibrated Peer Review™ (CPR), that are really automated tools for assessing and giving feedback on writing skills. One study on their use in MOOCs (Balfour, 2013) is cited.
  • recognition of prior learning: I think Chauhan is suggesting that institutions offering RPL can/should include MOOCs in student RPL portfolios.
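
As flagged in the adaptive assessment item above, here is a minimal sketch of how IRT-based adaptive testing works in principle, using a one-parameter (Rasch) model with an invented item bank and a simplified ability update; it is not drawn from any particular MOOC platform.

```python
# Sketch of adaptive testing under a one-parameter (Rasch) IRT model: after each
# response the ability estimate is updated and the next item is chosen to match it.
# The item bank, learning rate and responses below are invented for illustration.
import math

def p_correct(theta, b):
    """Rasch model: probability of a correct answer given ability theta and difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_theta(theta, b, correct, lr=0.5):
    """One gradient step on the log-likelihood of the observed response."""
    return theta + lr * ((1.0 if correct else 0.0) - p_correct(theta, b))

def next_item(theta, bank, used):
    """Pick the unused item whose difficulty is closest to the current ability estimate."""
    return min((i for i in bank if i not in used), key=lambda i: abs(bank[i] - theta))

bank = {"q1": -1.0, "q2": -0.3, "q3": 0.2, "q4": 0.9, "q5": 1.5}  # item difficulties
theta, used = 0.0, set()
for simulated_response in [True, True, False]:  # pretend answers from a learner
    item = next_item(theta, bank, used)
    used.add(item)
    theta = update_theta(theta, bank[item], simulated_response)
    print(item, round(theta, 2))
```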

Chauhan concludes:

Assessment in a MOOC does not necessarily have to be about course completion.  Learners can be assessed on time-on-task; learner-course component interaction; and a certification of the specific skills and knowledge gained from a MOOC….. Ultimately, the satisfaction gained from completing the course can be potential indicator of good learning experiences.

Alice in MOOCland

Chauhan describes the increasing variation of instructional methods now associated with the generic term ‘MOOC’, to the point where one has to ask whether the term has any consistent meaning. It’s difficult to see how a SPOC for instance differs from a typical online credit course, except perhaps in that it uses a recorded lecture rather than a learning management system or VLE. The only common factor in these variations is that the course is being offered to some non-registered students, but then if they have to pay a $500 fee, surely that’s a registered student? If a course is neither massive, nor open, nor free, how can it be a MOOC?

Further, if MOOC participants are taking exactly the same course and tests as registered students, will the institution award them credit for it and admit them to the institution? If not, why not? It seems that some institutions really haven't thought this through. I'd like to know what Registrars make of all this.

At some point, institutions will need to develop a clearer, more consistent strategy for open learning, in terms of how it can best be provided, how it calibrates with formal learning, and how open learning can be accommodated within the fiscal constraints of the institution, and then where MOOCs might fit with the strategy. It seems that a lot of institutions – or rather instructors – are going into open learning buttock-backwards.

More disturbing for me, though, is the argument Chauhan makes for assessing everything except what participants learn from MOOCs. With the exception of automated tests, all these tools do is describe all kinds of behaviour except learning. These tools may be useful for identifying, post hoc, factors that influence learning, but you need to be able to measure the learning in the first place, unless you see MOOCs as some cruel form of entertainment. I have no problem with trying to satisfy students, and I have no problem with MOOCs as un-assessed non-formal education, but if you try to assess participants, at the end of the day it's what they learn that matters. MOOCs need better tools for measuring learning, but I didn't see any described in this article.

References

Balfour, S. P. (2013) Assessing writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review, Research & Practice in Assessment, Vol. 8, No. 1

Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D. and Seaton, D. T. (2013) Studying learning in the worldwide classroom: Research into edX's first MOOC, Research & Practice in Assessment, Vol. 8, pp 13-25

The strengths and weaknesses of MOOCs: Part 2: learning and assessment

Remote exam proctoring

The writing of my book, Teaching in a Digital Age, has been interrupted for nearly two weeks by my trip to England for the EDEN Research Workshop. As part of the first draft of the book, I have already published three posts on MOOCs:

In this post, I ask (and try to answer) what do participants learn from MOOCs, and I also evaluate their assessment methods.

What do students learn in MOOCs?

This is a much more difficult question to answer, because so little of the research to date (2014) has tried to answer this question. (One reason, as we shall see, is that assessment of learning in MOOCs remains a major challenge). There are at least two kinds of study: quantitative studies that seek to quantify learning gains; and qualitative studies that describe the experience of learners within MOOCs, which indirectly provide some insight into what they have learned.

At the time of writing, the most carefully conducted study of learning in MOOCs has been by Colvin et al. (2014), who investigated 'conceptual learning' in an MIT Introductory Physics MOOC. They compared learner performance not only between different sub-categories of learners within the MOOC, such as those with no physics or math background and those, such as physics teachers, who had considerable prior knowledge, but also with on-campus students taking the same curriculum in a traditional campus teaching format. In essence, the study found no significant differences in learning gains between or within the two types of teaching, but it should be noted that the on-campus students were students who had failed an earlier version of the course and were retaking it.

This research is a classic example of the 'no significant difference' phenomenon in comparative studies of educational technology: other variables, such as differences in the types of students, were as important as the mode of delivery. Also, this MOOC design represents a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It does not attempt to develop the skills needed in a digital age, as identified in Chapter 1.

There have been far more studies of the experience of learners within MOOCs, particularly focusing on the discussions within MOOCs (see, for instance, Kop, 2011). In general (although there are exceptions), discussions are unmonitored, and it is left to participants to make connections and respond to other students' comments.

However, there are some strong criticisms of the effectiveness of the discussion element of MOOCs for developing the high-level conceptual analysis required for academic learning. To develop deep, conceptual learning, there is a need in most cases for intervention by a subject expert to clarify misunderstandings or misconceptions, to provide accurate feedback, to ensure that the criteria for academic learning, such as use of evidence and clarity of argument, are being met, and to provide the input and guidance needed to reach deeper understanding (see Harasim, 2012).

Furthermore, the more massive the course, the more likely participants are to feel 'overload, anxiety and a sense of loss' if there is not some instructor intervention or structure imposed (Knox, 2014). Firmin et al. (2014) have shown that when there is some form of instructor 'encouragement and support of student effort and engagement', results improve for all participants in MOOCs. Without a structured role for subject experts, participants are faced with a wide variety of quality in the comments and feedback from other participants. There is again a great deal of research on the conditions necessary for the successful conduct of collaborative and co-operative group learning (see, for instance, Dillenbourg, 1999; Lave and Wenger, 1991), and these findings have certainly not been generally applied to the management of MOOC discussions to date.

One counter-argument is that cMOOCs at least develop a new form of learning based on networking and collaboration that is essentially different from academic learning, and that MOOCs are thus more appropriate to the needs of learners in a digital age. Adult participants in particular, it is claimed by Downes and Siemens, have the ability to self-manage the development of high-level conceptual learning. MOOCs are 'demand' driven, meeting the interests of individual students who seek out others with similar interests and the necessary expertise to support them in their learning, and for many this interest may well not include the need for deep, conceptual learning but more likely the appropriate application of prior knowledge in new or specific contexts. MOOCs do appear to work best for those who already have a high level of education, who bring with them many of the conceptual skills developed in formal education when they join a MOOC, and who can therefore contribute to helping those who come without such skills.

Over time, as more experience is gained, MOOCs are likely to incorporate and adapt some of the findings from research on smaller group work for much larger numbers. For instance, some MOOCs are using 'volunteer' or community tutors (Dillenbourg, 2014). The US State Department has organized MOOC camps through US missions and consulates abroad to mentor MOOC participants. The camps include Fulbright scholars and embassy staff who lead discussions on content and topics for MOOC participants in countries abroad (Haynie, 2014). Some MOOC providers, such as the University of British Columbia, pay a small cohort of academic assistants to monitor and contribute to the MOOC discussion forums (Engle, 2014). Engle reported that the use of academic assistants, as well as limited but effective interventions from the instructors themselves, made the UBC MOOCs more interactive and engaging. However, paying for people to monitor and support MOOCs will of course increase the cost to providers. Consequently, MOOCs are likely to develop new automated ways to manage discussion effectively in very large groups. The University of Edinburgh is experimenting with automated 'teacherbots' that crawl through online discussion forums and direct predetermined comments to students identified as needing help or encouragement (Bayne, 2014).
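
As a toy illustration of the 'teacherbot' idea, the sketch below scans forum posts for signals that a student may need help and replies with a predetermined comment. It is purely illustrative and is not the University of Edinburgh's actual system; the signal words and canned reply are invented.

```python
# Toy teacherbot: flag posts that look like calls for help and attach a canned reply.
# Purely illustrative; not the system described by Bayne (2014).
HELP_SIGNALS = ("stuck", "confused", "give up", "don't understand")
CANNED_REPLY = ("It sounds as if you are finding this week tough. "
                "Have you looked at the worked examples in the unit notes?")

def triage(posts):
    """Return (student, reply) pairs for posts containing a help signal."""
    return [(student, CANNED_REPLY)
            for student, text in posts
            if any(signal in text.lower() for signal in HELP_SIGNALS)]

print(triage([("s1", "I am completely stuck on question 2"), ("s2", "Great week, thanks!")]))
```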

These results and approaches are consistent with prior research on the importance of instructor presence for successful for-credit online learning. In the meantime, though, there is much work still to be done if MOOCs are to provide the support and structure needed to ensure deep, conceptual learning where this does not already exist in students. The development of the skills needed in a digital age is likely to be an even greater challenge when dealing with massive numbers. However, we need much more research into what participants actually learn in MOOCs and under what conditions before any firm conclusions can be drawn.

Assessment

Assessment of the massive numbers of participants in MOOCs has proved to be a major challenge. It is a complex topic that can be dealt with only briefly here. However, Chapter 5.8 provides a general analysis of different types of assessment, and Suen (2014) provides a comprehensive and balanced overview of the way assessment has been used in MOOCs to date. This section draws heavily on Suen’s paper.

Computer marked assignments

Assessment to date in MOOCs has been primarily of two kinds. The first is based on quantitative multiple-choice tests, or response boxes where formulae or ‘correct code’ can be entered and automatically checked. Usually participants are given immediate automated feedback on their answers, ranging from simple right or wrong answers to more complex responses depending on the type of response they have checked, but in all cases, the process is usually fully automated.

For straight testing of facts, principles, formulae, equations and other forms of conceptual learning where there are clear, correct answers, this works well. In fact, multiple choice computer marked assignments were used by the UK Open University as long ago as the 1970s, although the means to give immediate online feedback were not available then. However, this method of assessment is limited for testing deep or ‘transformative’ learning, and particularly weak for assessing the intellectual skills needed in a digital age, such as creative or original thinking.
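
For what it is worth, here is a minimal sketch of a computer-marked multiple-choice item with per-option feedback, of the kind described above; the question and feedback text are invented for illustration.

```python
# Minimal computer-marked multiple-choice item with immediate, per-option feedback.
# Content is invented; real systems store many such items in a question bank.
ITEM = {
    "question": "Which force keeps a satellite in orbit?",
    "answer": "b",
    "feedback": {
        "a": "Incorrect: friction is negligible in orbit.",
        "b": "Correct: gravity provides the centripetal force.",
        "c": "Incorrect: 'centrifugal force' is not a force acting on the satellite.",
    },
}

def mark(item, response):
    """Return (is_correct, immediate feedback tailored to the chosen option)."""
    return response == item["answer"], item["feedback"].get(response, "No feedback available.")

print(mark(ITEM, "c"))  # (False, "Incorrect: 'centrifugal force' is not a force ...")
```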

Peer review

The second type of assessment that has been tried in MOOCs has been peer assessment, where participants assess each other’s work. Peer assessment is not new. It has been successfully used for formative assessment in traditional classrooms and in some online teaching for credit (Falchikov and Goldfinch, 2000; van Zundert et al., 2010). More importantly, peer assessment is seen as a powerful way to improve deep understanding and knowledge through the rating process, and at the same time, it can be useful for developing some of the skills needed in a digital age, such as critical thinking, for those participants assessing the work of others.

However, a key feature of the successful use of peer assessment has been the close involvement of an instructor or teacher, in providing benchmarks, rubrics or criteria for assessment, and in monitoring and adjusting peer assessments to ensure consistency and a match with the benchmarks set by the instructor. Although an instructor can provide the benchmarks and rubrics in MOOCs, close monitoring of the multiple peer assessments is difficult if not impossible with the very large numbers of participants in MOOCs. As a result, MOOC participants often become incensed at being randomly assessed by other participants who may not, and often do not, have the knowledge or ability to give a 'fair' or accurate assessment of their work.

Various attempts have been made to get round the limitations of peer assessment in MOOCs, such as calibrated peer review, based on averaging all the peer ratings, and Bayesian post hoc stabilization, but although these statistical techniques reduce the error (or spread) of peer review somewhat, they still do not remove the systematic errors of judgement made by raters due to misconceptions. This is particularly a problem where a majority of participants fail to understand key concepts in a MOOC, in which case peer assessment becomes the blind leading the blind.
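
As a crude illustration of the two statistical fixes just mentioned, the sketch below averages peer ratings and then shrinks the average toward a prior expected score; the numbers and the shrinkage rule are invented, and real calibrated peer review and Bayesian stabilization are considerably more elaborate. Note that it dampens one wild rating but does nothing about a shared misconception.

```python
# Crude illustration of averaging peer ratings and shrinking toward a prior
# ("stabilization"). Numbers and weights are invented for illustration.
def stabilised_score(peer_ratings, prior_mean=70.0, prior_weight=3.0):
    """Weighted blend of the peer average and a prior expected score (0-100 scale)."""
    n = len(peer_ratings)
    peer_mean = sum(peer_ratings) / n
    return (prior_weight * prior_mean + n * peer_mean) / (prior_weight + n)

print(stabilised_score([40, 55, 95]))  # one wild rating: result pulled toward the prior
print(stabilised_score([30, 32, 28]))  # shared misconception: result stays systematically low
```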

Automated essay scoring

This is another area where there have been attempts to automate scoring. Although such methods are increasingly sophisticated they are currently limited in terms of accurate assessment to measuring primarily technical writing skills, such as grammar, spelling and sentence construction. Once again they do not measure accurately essays where higher level intellectual skills are demonstrated.
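
To illustrate why such systems are largely confined to technical writing skills, here is a small sketch of the kind of surface features an automated scorer can extract reliably; a real AES engine trains a statistical model on many such features, but none of them measures the quality of an argument.

```python
# Surface features of the sort automated essay scoring relies on: length,
# sentence structure, vocabulary spread. Illustrative only.
import re

def surface_features(essay):
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

print(surface_features("Assessment drives learning. Better assessment needs better design."))
```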

Badges and certificates

Particularly in xMOOCs, participants may be awarded a certificate or a ‘badge’ for successful completion of the MOOC, based on a final test (usually computer-marked) which measures the level of learning in a course. The American Council on Education (ACE), which represents the presidents of U.S. accredited, degree-granting institutions, recommended offering credit for five courses on the Coursera MOOC platform. However, according to the person responsible for the review process:

what the ACE accreditation does is merely accredit courses from institutions that are already accredited. The review process doesn't evaluate learning outcomes, but is a course content focused review thus obviating all the questions about effectiveness of the pedagogy in terms of learning outcomes. (Book, 2013)

Indeed, most of the institutions offering MOOCs will not accept their own certificates for admission or credit within their own, campus-based programs. Probably nothing says more about the confidence in the quality of the assessment than this failure of MOOC providers to recognize their own teaching.

The intent behind assessment

To evaluate assessment in MOOCs requires an examination of the intent behind assessment. As identified earlier in another chapter of my book, there are many different purposes behind assessment. Peer assessment and immediate feedback on computer-marked tests can be extremely valuable for formative assessment, enabling participants to see what they have understood and to help develop further their understanding of key concepts. In cMOOCs, as Suen points out, learning is measured as the communication that takes place between MOOC participants, resulting in crowdsourced validation of knowledge – it’s what the sum of all the participants come to believe to be true as a result of participating in the MOOC, so formal assessment is unnecessary. However, what is learned in this way is not necessarily academically validated knowledge, which to be fair, is not the concern of cMOOC proponents such as Stephen Downes.

Academic assessment is a form of currency, related not only to measuring student achievement but also affecting student mobility (e.g. entrance to grad school) and perhaps more importantly employment opportunities and promotion. From a learner’s perspective, the validity of the currency – the recognition and transferability of the qualification – is essential. To date, MOOCs have been unable to demonstrate that they are able to assess accurately the learning achievements of participants beyond comprehension and knowledge of ideas, principles and processes (recognizing that there is some value in this alone). What MOOCs have not been able to demonstrate is that they can either develop or assess deep understanding or the intellectual skills required in a digital age. Indeed, this may not be possible within the constraints of massiveness, which is their major distinguishing feature from other forms of online learning, although the lack of valid methods of assessment will not stop computer scientists from trying to find ways to analyze participant online behaviour to show that such learning is taking place.

Up next

I hope the next post will be my last on this chapter on MOOCs. It will cover the following topics:

  • the cost of MOOCs and economies of scale
  • branding
  • the political, economic and social factors that explain the rise of MOOCs.

Over to you

As regular readers know, this is my way of obtaining peer review for my open textbook (so clearly I am not against peer review in principle!). So if I have missed anything important on this topic, or have misrepresented people’s views, or you just plain disagree with what I’ve written, please let me know. In particular, I am hoping for comments on:

  • comprehensiveness of the sources used that address learning and assessment methods in MOOCs
  • arguments that should have been included, either as a strength or a weakness
  • errors of fact

Yes, I’m a glutton for punishment, but you need to be a masochist to publish openly on this topic.

References

Bayne, S. (2014) Teaching, Research and the More-than-Human in Digital Education, Oxford UK: EDEN Research Workshop (url to come)

Book, P. (2013) ACE as Academic Credit Reviewer – Adjustment, Accommodation, and Acceptance, WCET Learn, July 25

Colvin, K. et al. (2014) Learning an Introductory Physics MOOC: All Cohorts Learn Equally, Including On-Campus Class, IRRODL, Vol. 15, No. 4

Dillenbourg, P. (ed.) (1999) Collaborative-learning: Cognitive and Computational Approaches, Oxford: Elsevier

Dillenbourg, P. (2014) MOOCs: Two Years Later, Oxford UK: EDEN Research Workshop (url to come)

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery, Vancouver BC: University of British Columbia

Falchikov, N. and Goldfinch, J. (2000) Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks, Review of Educational Research, Vol. 70, No. 3

Firmin, R. et al. (2014) Case study: using MOOCs for conventional college coursework, Distance Education, Vol. 35, No. 2

Harasim, L. (2012) Learning Theory and Online Technologies, New York/London: Routledge

Haynie, D. (2014) State Department hosts 'MOOC Camp' for online learners, US News, January 20

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Knox, J. (2014) Digital culture clash: 'massive' education in the e-Learning and Digital Cultures MOOC, Distance Education, Vol. 35, No. 2

Kop, R. (2011) The Challenges to Connectivist Learning on Open Online Networks: Learning Experiences during a Massive Open Online Course, International Review of Research in Open and Distance Learning, Vol. 12, No. 3

Lave, J. and Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation, Cambridge: Cambridge University Press

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, MERLOT Journal of Online Learning and Teaching, Vol. 9, No. 2

Suen, H. (2014) Peer assessment for massive open online courses (MOOCs), International Review of Research in Open and Distance Learning, Vol. 15, No. 3

van Zundert, M., Sluijsmans, D. and van Merriënboer, J. (2010) Effective peer assessment processes: Research findings and future directions, Learning and Instruction, Vol. 20, pp 270-279