May 28, 2015

Why the fuss about MOOCs? Political, social and economic drivers

Daphne Koller’s TED talk on MOOCs

The end of MOOCs

This is the last part of my chapter on MOOCs for my online open textbook, Teaching in a Digital Age. In a series of prior posts, I have looked at the strengths and weaknesses of MOOCs. Here I summarise this section and look at why MOOCs have gained so much attention.

Brief summary of strengths and weaknesses of MOOCs

The main points of my analysis of the strengths and weaknesses of MOOCs can be summarised as follows:

Strengths

  • the main value proposition of MOOCs is that through the use of computer automation and/or peer-to-peer communication MOOCs can eliminate the very large variable costs in higher education associated with providing learner support and quality assessment
  • MOOCs, particularly xMOOCs, deliver high quality content from some of the world’s best universities for free to anyone with a computer and an Internet connection
  • MOOCs can be useful for opening access to high quality content, particularly in Third World countries, but to do so successfully will require a good deal of adaptation, and substantial investment in local support and partnerships
  • MOOCs are valuable for developing basic conceptual learning, and for creating large online communities of interest or practice
  • MOOCs are an extremely valuable form of lifelong learning and continuing education
  • MOOCs have forced conventional and especially elite institutions to reappraise their strategies towards online and open learning
  • institutions have been able to extend their brand and status by making public their expertise and excellence in certain academic areas

Weaknesses

  • the high registration numbers for MOOCs are misleading; less than half of registrants actively participate, and of these, only a small proportion successfully complete the course; nevertheless, absolute numbers of successful participants are still higher than for conventional courses
  • MOOCs are expensive to develop, and although commercial organisations offering MOOC platforms have opportunities for sustainable business models, it is difficult to see how publicly funded higher education institutions can develop sustainable business models for MOOCs
  • MOOCs tend to attract those who already have a high level of education, rather than widen access
  • MOOCs so far have been limited in their ability to develop high-level academic learning, or the high-level intellectual skills needed in a knowledge-based society
  • assessment of the higher levels of learning remains a challenge for MOOCs, to the extent that most MOOC providers will not recognise their own MOOCs for credit
  • MOOC materials may be limited by copyright or time restrictions for re-use as open educational resources

Why the fuss about MOOCs?

It can be seen from the previous section that the pros and cons of MOOCs are finely balanced. Given the obvious questions about the value of MOOCs, and the fact that before MOOCs arrived there had been substantial but quiet progress for over ten years in the use of online learning for undergraduate and graduate programs, you might wonder why MOOCs have commanded so much media interest. Why have so many government policy makers, economists and computer scientists become so ardently supportive of MOOCs? And why has there been such a strong, negative reaction, not only from many traditional university and college instructors, who are right to feel threatened by some of the claims being made for MOOCs, but also from many professionals in online learning (see for instance Bates, 2012; Daniel, 2012; Hill, 2012; Watters, 2013), who might be expected to be more supportive?

It needs to be recognised that the discourse around MOOCs is not usually based on a cool, rational, evidence-based analysis of the pros and cons of MOOCs, but is more likely to be driven by emotion, self-interest, fear, or ignorance of what education is actually about. Thus it is important to explore the political, social and economic factors that have driven MOOC mania.

Massive, free and Made in America!

This is what I will call the intrinsic reason for MOOC mania. The first MOOC from Stanford professors Andrew Ng and Daphne Koller attracted 270,000 sign-ups from around the world; the course was free, and it came from professors at one of the most prestigious private universities in the USA, so it is not surprising that the American media were all over it. It was big news in its own right, however you look at it, especially as courses from Sebastian Thrun, another Stanford professor, and others from MIT and Harvard followed shortly afterwards, with equally staggering numbers of participants.

It’s the Ivy Leagues!

Until MOOCs came along, the Ivy League and other elite universities in the USA, such as Stanford, MIT, Harvard and UC Berkeley, as well as many of the most prestigious universities in Canada, such as the University of Toronto and McGill, and elsewhere, had largely ignored online learning in any form.

However, by 2011, online learning, in the form of for credit undergraduate and graduate courses, was making big inroads at many other, very respectable universities, such as Carnegie Mellon, Penn State, and the University of Maryland in the USA, and also in many of the top tier public universities in Canada and elsewhere, to the extent that almost one in three course enrolments in the USA were now in online courses. Furthermore, at least in Canada, the online courses were often getting good completion rates and matching on-campus courses for quality.

The Ivy League and other highly prestigious universities that had ignored online learning were beginning to look increasingly out of touch by 2011. By launching into MOOCs, these universities could jump to the head of the queue in terms of technology innovation, while at the same time protecting their selective, highly personal and high-cost campus programs from direct contact with online learning. In other words, MOOCs gave these prestigious universities a safe sandbox in which to explore online learning, while the prestigious universities gave credibility to MOOCs and, indirectly, to online learning as a whole.

It’s disruptive!

For years before 2011, various economists, philosophers and industrial gurus had been predicting that education was the next big area for disruptive change due to the march of new technologies (see for instance Lyotard, 1979; Tapscott, undated; Christensen and Eyring, 2011).

Online learning in credit courses, though, was being quietly absorbed into the mainstream of university teaching through blended learning, without any signs of major disruption. MOOCs, by contrast, represented a massive change, providing evidence at long last in the education sector to support the theories of disruptive innovation.

It’s Silicon Valley!

It is no coincidence that the first MOOCs were all developed by entrepreneurial computer scientists. Ng and Koller very quickly went on to create Coursera as a private commercial company, followed shortly by Thrun, who created Udacity. Anant Agarwal, a computer scientist at MIT, went on to head up edX.

The first MOOCs were very typical of Silicon Valley start-ups: a bright idea (massive, open online courses with cloud-based, relatively simple software to handle the numbers), thrown out into the market to see how it might work, supported by more technology and ideas (in this case, learning analytics, automated marking, peer assessment) to deal with any snags or problems. Building a sustainable business model would come later, when some of the dust had settled.

As a result, it is not surprising that almost all the early MOOCs completely ignored both pedagogical theory about best practices in teaching online and prior research on the factors associated with success or failure in online learning. Nor is it surprising that a very low percentage of participants actually complete MOOCs successfully. There is a lot of catching up still to do, but so far Coursera, and to a lesser extent edX, have continued to ignore educators and prior research in online learning. They would rather do their own research, even if it means re-inventing the wheel. The commercial MOOC platform providers, though, are beginning to work out a sustainable business model.

It’s the economy, stupid!

Of all the reasons for MOOC mania, Bill Clinton’s famous election slogan resonates most with me. It should be remembered that by 2011 the consequences of the disastrous financial collapse of 2008 were working their way through the economy, and were particularly affecting the finances of state governments in the USA.

The recession meant that states were suddenly desperately short of tax revenues, and were unable to meet the financial demands of state higher education systems. For instance, California’s community college system, the nation’s largest, suffered about $809 million in state funding cuts between 2008 and 2012, resulting in a shortfall of 500,000 places in its campus-based colleges. Free MOOCs were seen as manna from heaven by the state governor, Jerry Brown.

One consequence of rapid cuts to government funding was a sharp spike in tuition fees, bringing the real cost of higher education sharply into focus. Tuition fees in the USA have increased by 7% per annum over the last 10 years, compared with an inflation rate of 4% per annum. Here at last was a possible way to rein in the high cost of higher education.

Now, though, the economy in the USA is picking up and revenues are flowing back into state coffers, so the pressure for radical solutions to the cost of higher education is beginning to ease. It will be interesting to see whether MOOC mania continues as the economy grows, although the search for more cost-effective approaches to higher education is not going to disappear.

Don’t panic!

These are all very powerful drivers of MOOC mania, which makes it all the more important to try to be clear and cool headed about the strengths and weaknesses of MOOCs. The real test is whether MOOCs can help develop the knowledge and skills that learners need in a knowledge-based society. The answer of course is yes and no.

As a low-cost supplement to formal education they can be quite valuable, but not as a complete replacement. At present they can teach conceptual learning, comprehension and, in a narrow range of activities, the application of knowledge. They can also be useful for building communities of practice, where already well-educated people, or people with a deep, shared passion for a topic, can learn from one another, which is another form of continuing education.

However, certainly to date, MOOCs have not been able to demonstrate that they can lead to transformative learning, deep intellectual understanding, evaluation of complex alternatives, and evidence-based decision-making, and without greater emphasis on expert-based learner support and more qualitative forms of assessment, they probably never will, at least without substantial increases in their costs.

At the end of the day, there is a choice between throwing more resources into MOOCs and hoping that some of their fundamental flaws can be overcome without too dramatic an increase in costs, or whether we would be better investing in other forms of online learning and educational technology that could lead to more cost-effective learning outcomes. I know where I would put my money, and it’s not into MOOCs.

Over to you

This will be my last contribution to the discussion of MOOCs for my book, so let’s have it!

1. Do you agree with the strengths and weaknesses of MOOCs that I have laid out? What would you add or remove or change?

2. What do you think of the drivers of MOOC mania? Are these accurate? Are there other, more important drivers of MOOC mania?

3. Do you even agree that there is a mania about MOOCs, or is their rapid expansion all perfectly understandable?

References

Bates, T. (2012) What’s right and what’s wrong about Coursera-style MOOCs, Online learning and distance education resources, August 5

Christensen, C. and Eyring, H. (2011) The Innovative University: Changing the DNA of Higher Education, New York: John Wiley & Sons

Daniel, J. (2012) Making sense of MOOCs: Musings in a maze of myth, paradox and possibility, Journal of Interactive Media in Education, Vol. 3

Hill, P. (2012) Four Barriers that MOOCs must overcome to build a sustainable model, e-Literate, July 24

Lyotard, J-F. (1979) La Condition postmoderne: rapport sur le savoir, Paris: Minuit

Tapscott, D. (undated) The transformation of education dontapscott.com

Watters, A. (2013) MOOC Mania: Debunking the hype around massive, open online courses The Digital Shift, 18 April

A New Zealand analysis of MOOCs


Shrivastava, A. and Guiney, P. (2014) Technological Development and Tertiary Education Delivery Models: The Arrival of MOOCs, Wellington NZ: Tertiary Education Commission/Te Amorangi Mātauranga Matua

Why this paper?

Another report for the record on MOOCs, this time from the New Zealand Tertiary Education Commission. The reasoning behind this report:

The paper focuses on MOOCs [rather than doing a general overview of emerging technologies] because of their potential to disrupt tertiary education and the significant opportunities, challenges and risks that they present. MOOCs are also the sole focus of this paper because of their scale and the involvement of the elite United States universities.

What’s in the paper?

The paper provides a fairly standard, balanced analysis of developments in MOOCs, first by describing the different MOOC delivery models, their business models and the drivers behind MOOCs, then by following up with a broad discussion of the possible implications of MOOCs for New Zealand, such as unbundling of services, possible economies of scale, globalization of tertiary (higher) education, adaptability to learners’ and employers’ needs, and the possible impact on New Zealand’s tertiary education workforce.

There is also a good summary of MOOCs being offered by New Zealand institutions.

At the end of the paper some interesting questions for further discussion are raised:

  • What will tertiary education delivery look like in 2030?

  • What kinds of opportunities and challenges do technological developments, including MOOCs, present to the current policy, regulatory and operational arrangements for tertiary teaching and learning in New Zealand?

  • How can New Zealand make the most of the opportunities and manage any associated risks and challenges?

  • Do MOOCs undermine the central value of higher education, or are they just a helpful ‘updating’ that reflects its new mass nature?

  • Where do MOOCs fit within the New Zealand education and qualifications systems?

  • Who values the knowledge and skills gained from a MOOC programme and why?

  • Can economies of scale be achieved through MOOCs without loss of quality?

  • Can MOOCs lead to better learning outcomes at the same or less cost than traditional classroom-based teaching? If so, how might the Government go about funding institutions that want to deliver MOOCs to a mix of domestic and international learners?

  • What kinds of MOOC accreditation models might make sense in the context of New Zealand’s quality-assurance system?

Answers on a postcard, please, to the NZ Tertiary Education Commission.

Comment

Am I alone in wondering what has happened to for-credit online education in government thinking about the future? It is as if 20 years of development of undergraduate and graduate online courses and programs never existed. Surely a critical question for institutions and government planners is:

  • what are the relative advantages and disadvantages of MOOCs over other forms of online learning? What can MOOCs learn from our prior experience with credit-based online learning?

There are several reasons for considering this, but one of the most important is the huge investment many institutions, and, indirectly, governments, have already made in credit-based online learning.

By and large, online learning in publicly funded universities, both in New Zealand and in Canada, has been very successful in terms of both increasing access and student learning. It is also important to be clear about the differences, and some of the similarities, between credit-based online learning and MOOCs.

Some of the implications laid out in this paper, such as possibilities of consortia and institutional collaboration, apply just as much to credit-based online learning as to MOOCs, and many of the negative criticisms of MOOCs, such as difficulties of assessment and lack of learner support, disappear when applied to credit-based online learning.

Please, policy-makers, realise that MOOCs are not your only option for innovation through online learning. There are more established and well tested solutions already available.

The strengths and weaknesses of MOOCs: Part 2: learning and assessment

Remote exam proctoring

The writing of my book, Teaching in a Digital Age, has been interrupted for nearly two weeks by my trip to England for the EDEN Research Workshop. As part of the first draft of the book, I have already published three posts on MOOCs.

In this post, I ask (and try to answer) what participants learn from MOOCs, and I also evaluate their assessment methods.

What do students learn in MOOCs?

What participants learn is a much more difficult question to answer, because so little of the research to date (2014) has tried to answer it. (One reason, as we shall see, is that assessment of learning in MOOCs remains a major challenge.) There are at least two kinds of study: quantitative studies that seek to measure learning gains, and qualitative studies that describe the experience of learners within MOOCs, which indirectly provide some insight into what they have learned.

At the time of writing, the best-conducted study of learning in MOOCs is by Colvin et al. (2014), who investigated ‘conceptual learning’ in an MIT Introductory Physics MOOC. They compared learner performance not only between different sub-categories of learners within the MOOC, such as those with no physics or math background and those, such as physics teachers, with considerable prior knowledge, but also with on-campus students taking the same curriculum in a traditional campus teaching format. In essence, the study found no significant differences in learning gains between or within the two types of teaching, but it should be noted that the on-campus students were students who had failed an earlier version of the course and were retaking it.

This research is a classic example of the ‘no significant difference’ finding common in comparative studies of educational technology: other variables, such as differences in the types of students, were as important as the mode of delivery. Also, this MOOC design represents a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It does not attempt to develop the skills needed in a digital age as identified in Chapter 1.

There have been far more studies of the experience of learners within MOOCs, particularly focusing on the discussions within MOOCs (see for instance Kop, 2011). In general (although there are exceptions), discussions are unmonitored, and it is left to participants to make connections and respond to other students’ comments.

However, there are some strong criticisms of the effectiveness of the discussion element of MOOCs for developing the high-level conceptual analysis required for academic learning. To develop deep, conceptual learning, there is in most cases a need for intervention by a subject expert: to clarify misunderstandings or misconceptions, to provide accurate feedback, to ensure that the criteria for academic learning, such as use of evidence and clarity of argument, are being met, and to provide the input and guidance needed to seek deeper understanding (see Harasim, 2012).

Furthermore, the more massive the course, the more likely participants are to feel ‘overload, anxiety and a sense of loss’ if there is not some instructor intervention or structure imposed (Knox, 2014). Firmin et al. (2014) have shown that when there is some form of instructor ‘encouragement and support of student effort and engagement’, results improve for all participants in MOOCs. Without a structured role for subject experts, participants are faced with a wide variety of quality in the comments and feedback from other participants. There is a great deal of research on the conditions necessary for successful collaborative and co-operative group learning (see for instance Dillenbourg, 1999; Lave and Wenger, 1991), but these findings have generally not been applied to the management of MOOC discussions to date.

One counter-argument is that cMOOCs, at least, develop a new form of learning based on networking and collaboration that is essentially different from academic learning, and that MOOCs are thus more appropriate to the needs of learners in a digital age. Adult participants in particular, it is claimed by Downes and Siemens, have the ability to self-manage the development of high-level conceptual learning. MOOCs are ‘demand’ driven, meeting the interests of individual students who seek out others with similar interests and the necessary expertise to support them in their learning; for many, that interest may well not be deep, conceptual learning so much as the application of prior knowledge in new or specific contexts. MOOCs do appear to work best for those who already have a high level of education: they bring with them many of the conceptual skills developed in formal education, and can in turn help those who come without such skills.

Over time, as more experience is gained, MOOCs are likely to incorporate and adapt some of the findings from research on smaller group work for much larger numbers. For instance, some MOOCs are using ‘volunteer’ or community tutors (Dillenbourg, 2014). The US State Department has organized MOOC camps through US missions and consulates abroad to mentor MOOC participants; the camps include Fulbright scholars and embassy staff who lead discussions on content and topics for MOOC participants in countries abroad (Haynie, 2014). Some MOOC providers, such as the University of British Columbia, pay a small cohort of academic assistants to monitor and contribute to the MOOC discussion forums (Engle, 2014). Engle reported that the use of academic assistants, as well as limited but effective interventions from the instructors themselves, made the UBC MOOCs more interactive and engaging. However, paying people to monitor and support MOOCs will of course increase the cost to providers. Consequently, MOOCs are likely to develop new automated ways to manage discussion effectively in very large groups. The University of Edinburgh is experimenting with automated ‘teacherbots’ that crawl through online discussion forums and direct predetermined comments to students identified as needing help or encouragement (Bayne, 2014).

These results and approaches are consistent with prior research on the importance of instructor presence for successful for-credit online learning. In the meantime, though, there is much work still to be done if MOOCs are to provide the support and structure needed to ensure deep, conceptual learning where this does not already exist in students. The development of the skills needed in a digital age is likely to be an even greater challenge when dealing with massive numbers. However, we need much more research into what participants actually learn in MOOCs and under what conditions before any firm conclusions can be drawn.

Assessment

Assessment of the massive numbers of participants in MOOCs has proved to be a major challenge. It is a complex topic that can be dealt with only briefly here. However, Chapter 5.8 provides a general analysis of different types of assessment, and Suen (2014) provides a comprehensive and balanced overview of the way assessment has been used in MOOCs to date. This section draws heavily on Suen’s paper.

Computer marked assignments

Assessment to date in MOOCs has been primarily of two kinds. The first is based on quantitative multiple-choice tests, or on response boxes where formulae or ‘correct code’ can be entered and automatically checked. Participants are usually given immediate automated feedback on their answers, ranging from a simple right or wrong to more complex responses keyed to the particular answer submitted, but in all cases the process is usually fully automated.

For straight testing of facts, principles, formulae, equations and other forms of conceptual learning where there are clear, correct answers, this works well. In fact, multiple-choice computer-marked assignments were used by the UK Open University as long ago as the 1970s, although the means to give immediate online feedback were not available then. However, this method of assessment is limited for testing deep or ‘transformative’ learning, and particularly weak for assessing the intellectual skills needed in a digital age, such as creative or original thinking.
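The logic of this kind of computer-marked item is simple enough to sketch in a few lines. The following is an illustrative sketch only, not the code of any actual MOOC platform; the function name, the sample question and the feedback wording are all hypothetical:

```python
# Minimal sketch (hypothetical, not from any real MOOC platform) of a
# computer-marked item with immediate feedback. Feedback can range from
# a bare right/wrong to a message keyed to the specific wrong answer.

def mark_item(answer, correct, feedback=None):
    """Return (is_correct, feedback_message) for one quiz response."""
    if answer == correct:
        return True, "Correct."
    # Richer responses can be keyed to the particular distractor chosen.
    if feedback and answer in feedback:
        return False, feedback[answer]
    return False, "Incorrect; review the relevant section."

# Hypothetical multiple-choice physics item with per-distractor feedback.
ok, msg = mark_item(
    answer="b",
    correct="c",
    feedback={"b": "You ignored friction; re-check the free-body diagram."},
)
print(ok, msg)
```

The whole process is automatable at any scale, which is exactly why it suits massive courses, and exactly why it is confined to questions with a single checkable right answer.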

Peer review

The second type of assessment that has been tried in MOOCs has been peer assessment, where participants assess each other’s work. Peer assessment is not new. It has been successfully used for formative assessment in traditional classrooms and in some online teaching for credit (Falchikov and Goldfinch, 2000; van Zundert et al., 2010). More importantly, peer assessment is seen as a powerful way to improve deep understanding and knowledge through the rating process, and at the same time, it can be useful for developing some of the skills needed in a digital age, such as critical thinking, for those participants assessing the work of others.

However, a key feature of the successful use of peer assessment has been the close involvement of an instructor or teacher: providing benchmarks, rubrics or criteria for assessment, and monitoring and adjusting peer assessments to ensure consistency and a match with the benchmarks set. Although an instructor can provide the benchmarks and rubrics in MOOCs, close monitoring of the multiple peer assessments is difficult if not impossible with the very large numbers of participants. As a result, MOOC participants often become incensed at being randomly assessed by other participants who may not, and often do not, have the knowledge or ability to give a ‘fair’ or accurate assessment of their work.

Various attempts have been made to get round the limitations of peer assessment in MOOCs, such as calibrated peer reviews, based on averaging all the peer ratings, and Bayesian post hoc stabilization. Although these statistical techniques somewhat reduce the error (or spread) of peer review, they do not remove the systematic errors of judgement that arise from raters’ misconceptions. This is a particular problem where a majority of participants fail to understand key concepts in a MOOC, in which case peer assessment becomes the blind leading the blind.
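The statistical point here can be illustrated with a small simulation. This is a toy sketch with invented numbers, not real MOOC rating data: if every rater shares the same misconception, averaging more ratings narrows the spread of scores, but the average converges on the biased value, not the true one:

```python
# Toy simulation (invented numbers, not real MOOC data) of why averaging
# peer ratings reduces random spread but cannot remove a systematic error
# shared by the raters.
import random

random.seed(1)
TRUE_SCORE = 70
BIAS = -10   # raters sharing a misconception under-rate by 10 points
NOISE = 15   # standard deviation of individual rater error

def peer_average(n_raters):
    """Average rating of one piece of work by n_raters biased, noisy raters."""
    ratings = [TRUE_SCORE + BIAS + random.gauss(0, NOISE) for _ in range(n_raters)]
    return sum(ratings) / n_raters

# With more raters the average settles down, but it settles near
# TRUE_SCORE + BIAS (60), not near the true score of 70.
print(round(peer_average(3), 1), round(peer_average(100), 1))
```

Averaging, calibration and Bayesian stabilization all attack the noise term; none of them can recover the 10 points lost to the shared misconception, which is the formal version of the blind leading the blind.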

Automated essay scoring

This is another area where there have been attempts to automate scoring. Although such methods are increasingly sophisticated, they are currently limited to measuring primarily technical writing skills, such as grammar, spelling and sentence construction. Once again, they do not accurately measure essays in which higher-level intellectual skills are demonstrated.

Badges and certificates

Particularly in xMOOCs, participants may be awarded a certificate or a ‘badge’ for successful completion of the MOOC, based on a final test (usually computer-marked) which measures the level of learning in a course. The American Council on Education (ACE), which represents the presidents of U.S. accredited, degree-granting institutions, recommended offering credit for five courses on the Coursera MOOC platform. However, according to the person responsible for the review process:

‘what the ACE accreditation does is merely accredit courses from institutions that are already accredited. The review process doesn’t evaluate learning outcomes, but is a course content focused review thus obviating all the questions about effectiveness of the pedagogy in terms of learning outcomes.’ (Book, 2013)

Indeed, most of the institutions offering MOOCs will not accept their own certificates for admission or credit within their own, campus-based programs. Probably nothing says more about the confidence in the quality of the assessment than this failure of MOOC providers to recognize their own teaching.

The intent behind assessment

To evaluate assessment in MOOCs requires an examination of the intent behind the assessment. As identified in another chapter of my book, there are many different purposes behind assessment. Peer assessment and immediate feedback on computer-marked tests can be extremely valuable for formative assessment, enabling participants to see what they have understood and helping them to develop further their understanding of key concepts. In cMOOCs, as Suen points out, learning is reflected in the communication that takes place between MOOC participants, resulting in crowdsourced validation of knowledge: it is what the sum of all the participants come to believe to be true as a result of participating in the MOOC, so formal assessment is unnecessary. However, what is learned in this way is not necessarily academically validated knowledge, which, to be fair, is not the concern of cMOOC proponents such as Stephen Downes.

Academic assessment is a form of currency, related not only to measuring student achievement but also to student mobility (e.g. entrance to graduate school) and, perhaps more importantly, to employment opportunities and promotion. From a learner’s perspective, the validity of the currency, that is, the recognition and transferability of the qualification, is essential. To date, MOOCs have been unable to demonstrate that they can accurately assess the learning achievements of participants beyond comprehension and knowledge of ideas, principles and processes (recognizing that there is some value in this alone). What MOOCs have not been able to demonstrate is that they can either develop or assess deep understanding or the intellectual skills required in a digital age. Indeed, this may not be possible within the constraints of massiveness, which is their major distinguishing feature from other forms of online learning, although the lack of valid methods of assessment will not stop computer scientists from trying to find ways to analyze participant online behaviour to show that such learning is taking place.

Up next

I hope the next post will be my last on this chapter on MOOCs. It will cover the following topics:

  • the cost of MOOCs and economies of scale
  • branding
  • the political, economic and social factors that explain the rise of MOOCs.

Over to you

As regular readers know, this is my way of obtaining peer review for my open textbook (so clearly I am not against peer review in principle!). So if I have missed anything important on this topic, or have misrepresented people’s views, or you just plain disagree with what I’ve written, please let me know. In particular, I am hoping for comments on:

  • comprehensiveness of the sources used that address learning and assessment methods in MOOCs
  • arguments that should have been included, either as a strength or a weakness
  • errors of fact

Yes, I’m a glutton for punishment, but you need to be a masochist to publish openly on this topic.

References

Bayne, S. (2014) Teaching, Research and the More-than-Human in Digital Education Oxford UK: EDEN Research Workshop (url to come)

Book, P. (2013) ACE as Academic Credit Reviewer–Adjustment, Accommodation, and Acceptance WCET Learn, July 25

Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Dillenbourg, P. (ed.) (1999) Collaborative-learning: Cognitive and Computational Approaches. Oxford: Elsevier

Dillenbourg, P. (2014) MOOCs: Two Years Later, Oxford UK: EDEN Research Workshop (url to come)

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Falchikov, N. and Goldfinch, J. (2000) Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks Review of Educational Research, Vol. 70, No. 3

Firmin, R. et al. (2014) Case study: using MOOCs for conventional college coursework Distance Education, Vol. 35, No. 2

Haynie, D. (2014) State Department hosts ‘MOOC Camp’ for online learners. US News, January 20

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Knox, J. (2014) Digital culture clash: ‘massive’ education in the E-learning and Digital Cultures MOOC Distance Education, Vol. 35, No. 2

Kop, R. (2011) The Challenges to Connectivist Learning on Open Online Networks: Learning Experiences during a Massive Open Online Course International Review of Research in Open and Distance Learning, Vol. 12, No. 3

Lave, J. and Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, Merlot Journal of Online Learning and Teaching, Vol. 9, No. 2

Suen, H. (2014) Peer assessment for massive open online courses (MOOCs) International Review of Research in Open and Distance Learning, Vol. 15, No. 3

van Zundert, M., Sluijsmans, D., van Merriënboer, J. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20, 270-279

Review of ‘Online Distance Education: Towards a Research Agenda.’

Drop-out: the elephant in the DE room that no-one wants to talk about

Zawacki-Richter, O. and Anderson, T. (eds.) (2014) Online Distance Education: Towards a Research Agenda Athabasca AB: AU Press, pp. 508

It is somewhat daunting to review a book of over 500 pages of research on any topic. I suspect few other than the editors are likely to read this book from cover to cover. It is more likely to be kept on one’s bookshelf (if these still exist in a digital age) for reference whenever needed. Nevertheless, this is an important work that anyone working in online learning needs to be aware of, so I will do my best to cover it as comprehensively as I can.

Structure of the book

The book is a collection of about 20 chapters by a variety of different authors (more on the choice of authors later). Based on a Delphi study and an analysis of ‘key research journals’ in the field, the editors have organized the topic into three sections, each with a set of chapters on its sub-topics, as follows:

1. Macro-level research: distance education systems and theories

  • access, equity and ethics
  • globalization and cross-cultural issues
  • distance teaching systems and institutions
  • theories and models
  • research methods and knowledge transfer

2. Meso-level research: management, organization and technology

  • management and organization
  • costs and benefits
  • educational technology
  • innovation and change
  • professional development and faculty support
  • learner support services
  • quality assurance

3. Micro-level: teaching and learning in distance education

  • instructional/learning design
  • interaction and communication
  • learner characteristics.

In addition, there is a very useful preface from Otto Peters, an introductory chapter by the editors where they justify their structural organization of research, and a short conclusion that calls for a systematic research agenda in online distance education research.

More importantly, perhaps, Terry Anderson and Olaf Zawacki-Richter demonstrate empirically that research in this field has been skewed towards micro-level research (about half of all publications).  Interestingly, and somewhat surprisingly given its importance, costs and benefits of online distance education is the least researched area.

What I liked

It is somewhat invidious to pick out particular chapters, because different people will have different interests from such a wide-ranging list of topics. I have tended to choose those that I found were new and/or particularly enlightening for me, but other readers’ choices will be different. However, by selecting a few excellent chapters, I hope to give some idea of the quality of the book.

1. The structuring/organization of research

Anderson and Zawacki-Richter have done an excellent job in providing a structural framework for research in this field. This will be useful for those teaching about online and distance education, and in particular for potential Ph.D. students wondering what to study. This book will provide an essential starting point.

2. Summary of the issues in each area of research

Again, the editors have done an excellent job in their introductory chapter in summarizing the content of each of the chapters that follows, and in so doing pulling out the key themes and issues within each area of research. This alone makes the book worthwhile.

3. Globalization, Culture and Online Distance Education

Charlotte (Lani) Gunawardena of the University of New Mexico has written the most comprehensive and deep analysis of this issue that I have seen, and it is an area in which I have a great deal of interest, since most of the online teaching I have done has been with students from around the world, often multilingual.

After a general discussion of the issue of globalization and education, she reviews research in the following areas:

  • diverse educational expectations
  • learners and preferred ways of learning
  • socio-cultural environment and online interaction
  • help-seeking behaviours
  • silence
  • language learning
  • researching culture and online distance learning

This chapter should be required reading for anyone contemplating teaching online.

4. Quality assurance in Online Distance Education

I picked this chapter by Colin Latchem because he is so deeply expert in this field that he is able to make what can be a numbingly boring but immensely important topic a fun read, while at the same time ending with some critical questions about quality assurance. In particular Latchem looks at QA from the following perspectives:

  • definitions of quality
  • accreditation
  • online distance education vs campus-based teaching
  • quality standards
  • transnational online distance education
  • open educational resources
  • costs of QA
  • is online distance education yet good enough?
  • an outcomes approach to QA.

This chapter definitely showcases a master at the top of his game.

5. The elephant in the room: student drop-out

This is a wonderfully funny but ultimately serious argument between Ormond Simpson and Alan Woodley about the elephant in the distance education room that no-one wants to mention. Here they start poking the elephant with some sticks (which they note is not likely to be a career-enhancing move). The basic argument is that institutions should and could do more to reduce drop-out and increase course completion. This chapter also stunned me by providing hard data on the really low completion rates for most open university students. I couldn’t help comparing these with the high completion rates for online credit courses at dual-mode (campus-based) institutions, at least in Canada (which of course are not ‘open’ institutions, in that students must have good high school qualifications).

Woodley’s solution to reducing drop-out is quite interesting (and well argued later in the chapter):

  • make it harder to get in
  • make it harder to get out

In both cases, really practical and not too costly solutions are offered that nevertheless are consistent with open access and high quality teaching.

In summary

The book contains a number of really good chapters that lay out the issues in researching online distance education.

What I disliked

I have to say that I groaned when I first saw the list of contributors. The same old, same old list of distance education experts with a heavy bias towards open universities. Sure, they are nearly all well-seasoned experts, and there’s nothing wrong with that per se (after all, I see myself as one of them.)

But where are the young researchers here, and especially the researchers in open educational resources, MOOCs, social media applications in online learning, and above all researchers from the many campus-based universities now mainstreaming online learning? There is almost nothing in the book about research into blended learning, and flipped classrooms are not even mentioned. OK, the book is about online distance learning but the barriers or distinctions are coming down with a vengeance. This book will never reach those who most need it, the many campus-based instructors now venturing for the first time into online learning in one way or another. They don’t see themselves as primarily distance educators.

And a few of the articles were more like lessons in history than an up-to-date review of research in the field. Readers of this blog will know that I strongly value the history of educational technology and distance learning. But these lessons need to be embedded in the here and now. In particular, the lessons need to be spelled out. It is not enough to know that Stanford University researchers were studying the costs and benefits of educational broadcasting in developing countries as long ago as 1974; what lessons does this hold for some of the outrageous claims being made about MOOCs? A great deal, in fact, but this needs explaining in the context of MOOCs today.

Also the book is solely focused on post-secondary university education. Where is the research on online distance education in the k-12/school sector or the two-year college/vocational sector? Maybe they are topics for other books, but this is where the real gap exists in research publications in online learning.

Lastly, although the book is reasonably priced for its size (C$40), and is available as an e-text as well as the fully printed version, what a pity it is not an open textbook that could then be updated and crowd-sourced over time.

Conclusion

This is essential reading for anyone who wants to take a professional, evidence-based approach to online learning (distance or otherwise). It will be particularly valuable for students wanting to do research in this area. The editors have done an incredibly good job of presenting a hugely diverse and scattered area in a clear and structured manner. Many of the chapters are gems of insight and knowledge in the field.

However, we have a huge challenge of knowledge transfer in this field. Repeatedly authors in the book lamented that many of the new entrants to online learning are woefully ignorant of the research previously done in this field. We need a better way to disseminate this research than a 500 page printed text that only those already expert in the field are likely to access. On the other hand, the book does provide a strong foundation from which to find better ways to disseminate this knowledge. Knowledge dissemination in a digital world then is where the research agenda for online learning needs to focus.

 

The role of communities of practice in a digital age


Bank of America’s Vital Voices program links women executives of small and medium sized enterprises from around the world
Image: © Belfast Telegraph, 2014

The story so far

I have published the first five chapters of my open textbook, ‘Teaching in a Digital Age’. I am now working on Chapter 6, ‘Models for Designing Teaching and Learning’.

In my last three posts I discussed respectively the appropriateness for a digital age of the classroom model, the ADDIE model, and the competency-based learning model. In this post, I explore the learning model based on communities of practice.

The theories behind communities of practice

The design of teaching often integrates different theories of learning. Communities of practice are one of the ways in which experiential learning, social constructivism, and connectivism can be combined, illustrating the limitations of trying to rigidly classify learning theories. Practice tends to be more complex.

What are communities of practice?

Definition: 

Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.

Wenger, 2014

The basic premise behind communities of practice is simple: we all learn in everyday life from the communities in which we find ourselves. Communities of practice are everywhere. Nearly everyone belongs to some community of practice, whether it is through our working colleagues or associates, our profession or trade, or our leisure interests, such as a book club. Wenger (2000) argues that a community of practice is different from a community of interest or a geographical community in that it involves a shared practice: ways of doing things that are shared to some significant extent among members.

Wenger argues that there are three crucial characteristics of a community of practice:

  • domain: a common interest that connects and holds together the community
  • community: members of the community are bound by the shared activities they pursue (for example, meetings, discussions) around their common domain
  • practice: members of a community of practice are practitioners; what they do informs their participation in the community; and what they learn from the community affects what they do.

Wenger (2000) has argued that although individuals learn through participation in a community of practice, more important is the generation of newer or deeper levels of knowledge through the sum of the group activity. If the community of practice is centered around business processes, for instance, this can be of considerable benefit to an organization. Smith (2003) notes that:

…communities of practice affect performance… [This] is important in part because of their potential to overcome the inherent problems of a slow-moving traditional hierarchy in a fast-moving virtual economy. Communities also appear to be an effective way for organizations to handle unstructured problems and to share knowledge outside of the traditional structural boundaries. In addition, the community concept is acknowledged to be a means of developing and maintaining long-term organizational memory.

Brown and Duguid (2000) describe a community of practice developed around the Xerox customer service representatives who repaired the machines in the field. The Xerox reps began exchanging tips and tricks over informal meetings at breakfast or lunch and eventually Xerox saw the value of these interactions and created the Eureka project to allow these interactions to be shared across the global network of representatives. The Eureka database has been estimated to have saved the corporation $100 million. Companies such as Google and Apple are encouraging communities of practice through the sharing of knowledge across their many specialist staff.

Technology provides a wide range of tools that can support communities of practice, as indicated by Wenger (2010) in the diagram below:

Image: © Etienne Wenger, 2010

Designing effective communities of practice

Most communities of practice have no formal design and tend to be self-organising systems. They have a natural life cycle, and come to an end when they no longer serve the needs of the community. However, there is now a body of theory and research that has identified actions that can help sustain and improve the effectiveness of communities of practice.

Wenger, McDermott and Snyder (2002) have identified seven key design principles for creating effective and self-sustaining communities of practice, related specifically to the management of the community, although the ultimate success of a community of practice will be determined by the activities of the members of the community themselves. Designers of a community of practice need to:

  1. Design for evolution: ensuring that the community can evolve and shift in focus to meet the interests of the participants without moving too far from the common domain of interest
  2. Open a dialogue between inside and outside perspectives: encourage the introduction and discussion of new perspectives that come or are brought in from outside the community of practice
  3. Encourage and accept different levels of participation: the strength of participation varies from participant to participant. The ‘core’ (the most active members) are those who participate regularly. Others follow the discussions or activities but do not take a leading role in making active contributions. Then there are those (likely the majority) who are on the periphery of the community but may become more active participants if the activities or discussions start to engage them more fully. All these levels of participation need to be accepted and encouraged within the community.
  4. Develop both public and private community spaces: communities of practice are strengthened if they encourage individual or group activities that are more personal or private as well as the more public general discussions; for instance, individuals may decide to blog about their activities, or in a larger online community of practice a small group that live or work close together may also decide to meet informally on a face-to-face basis
  5. Focus on value. Attempts should be made explicitly to identify, through feedback and discussion, the contributions that the community most values, then focus the discussion and activities around these issues.
  6. Combine familiarity and excitement, by focusing both on shared, common concerns and perspectives, but also by introducing radical or challenging perspectives for discussion or action
  7. Create a rhythm for the community: there needs to be a regular schedule of activities or focal points that bring participants together on a regular basis, within the constraints of participants’ time and interests.

Subsequent research has identified a number of critical factors that influence the effectiveness of participants in communities of practice. These include being:

  • aware of social presence: individuals need to feel comfortable in engaging socially with other professionals or ‘experts’ in the domain, and those with greater knowledge must be willing to share in a collegial manner that respects the views and knowledge of other participants (social presence is defined as the awareness of others in an interaction combined with an appreciation of the interpersonal aspects of that interaction.)
  • motivated to share information for the common good of the community
  • able and willing to collaborate.

EDUCAUSE has developed a step-by-step guide for designing and cultivating communities of practice in higher education (Cambridge, Kaplan and Suter, 2005).

Lastly, research on other related sectors, such as collaborative learning or MOOCs, can inform the design and development of communities of practice. For instance, communities of practice need to balance between structure and chaos: too much structure and many participants are likely to feel constrained in what they need to discuss; too little structure and participants can quickly lose interest or become overwhelmed.

Many of the other findings about group and online behaviour, such as the need to respect others, observing online etiquette, and preventing certain individuals from dominating the discussion, are all likely to apply. However, because many communities of practice are by definition self-regulating, establishing rules of conduct and even more so enforcing them is really a responsibility of the participants themselves.

Learning through communities of practice in a digital age

Communities of practice are a powerful manifestation of informal learning. They generally evolve naturally to address commonly shared interests and problems. By their nature, they tend to exist outside formal educational organisations. Participants are not usually looking for formal qualifications, but to address issues in their life and to be better at what they do. Furthermore, communities of practice are not dependent on any particular medium; participants may meet face-to-face socially or at work, or they can participate in online or virtual communities of practice.

It should be noted that communities of practice can be very effective in a digital world, where the working context is volatile, complex, uncertain and ambiguous.  A large part of the lifelong learning market will become occupied by communities of practice and self-learning, through collaborative learning, sharing of knowledge and experience, and crowd-sourcing new ideas and development. Such informal learning provision will be particularly valuable for non-governmental or charitable organizations, such as the Red Cross, Greenpeace or UNICEF, or local government, looking for ways to engage communities in their areas of operation.

These communities of learners will be open and free, and hence will provide a competitive alternative to the high priced lifelong learning programs being offered by research universities. This will put pressure on universities and colleges to provide more flexible arrangements for recognition of informal learning, in order to hold on to their current monopoly of post-secondary accreditation.

One of the significant developments in recent years has been the use of massive open online courses (MOOCs) for developing online communities of practice. To date the focus of the majority of MOOCs from providers such as Coursera, Udacity and edX has been on academic ‘courses’ on topics of widespread interest, such as artificial intelligence or dinosaurs. However, these more instructionist MOOCs are not really developed as communities of practice, because they rely mainly on a transmissive pedagogy, from experts to those considered less expert. Even though massive numbers may participate in online forums, the forums are not constructed to maximise the contributions from the participants (despite the fact that most MOOC participants already have high levels of education). Indeed there is evidence that in really large instructionist MOOCs, participants feel overwhelmed by the magnitude and lack of structuring of the participant contributions (see for instance Knox, 2014).

In comparison, connectivist MOOCs are an ideal way to bring together specialists scattered around the world to focus on a common interest or domain. Connectivist MOOCs are much closer to being virtual communities of practice, in that they put much more emphasis on sharing knowledge between more or less equal participants. However, current connectivist MOOCs do not always incorporate what research indicates are best practices for developing communities of practice, and those wanting to establish a virtual community of practice at the moment need some kind of MOOC provider to get them started and give them access to the necessary MOOC software.

In the long run, MOOCs need to evolve to the point where it is possible for those with a common interest to easily create their own open, online communities of practice. As open source MOOC platforms evolve, it should become easier for people without computer science degrees to create and more importantly manage their own MOOCs, without having to go through a MOOC provider such as Coursera or edX. Also, there are other simpler tools, such as wikis, or more complex ones, such as virtual worlds, that may in the long run have more potential for virtual communities of practice created and organised by the participants themselves.

Although communities of practice are likely to become more rather than less important in a digital age, it is probably a mistake to think of them as a replacement for traditional forms of education. There is no single, ‘right’ approach to the design of teaching. Different groups have different needs. Communities of practice are more of an alternative for certain kinds of learners, such as lifelong learners, and are likely to work best when participants already have some domain knowledge and can contribute personally and in a constructive manner – which suggests the need for at least some form of prior general education or training for those participating in effective communities of practice.

In conclusion, it is clear that in an increasingly volatile, uncertain, complex and ambiguous world, and given the openness of the Internet, the social media tools now available, and the need for sharing of knowledge on a global scale, virtual communities of practice will become even more common and important. Smart educators and trainers will look to see how they can harness the strength of this design model, particularly for lifelong learning. However, merely lumping together large numbers of people with a common interest is unlikely to lead to effective learning. Attention needs to be paid to those design principles that lead to effective communities of practice.

Over to you

Once again, I’m not an expert on communities of practice, so feedback on what I have written on this model of learning will be much appreciated. In particular:

  • Have I got it wrong? Are there important elements missing?
  • Is there good research on the design of communities of practice that I have missed?
  • Do you agree that they are NOT a replacement for other forms of education?
  • Can you really design a community of practice, or do they just evolve naturally? If they can be designed, what conditions are needed for success other than those already discussed in this post?
  • How would you evaluate the success of a community of practice? What would you look for? How could this be identified or described?

I would love to hear from anyone who has attempted to create communities of practice and what design elements they would recommend.

References

Brown, J. and Duguid, P. (2000) ‘Balancing act: How to capture knowledge without killing it’ Harvard Business Review

Cambridge, D., Kaplan, S. and Suter, V. (2005) Community of Practice Design Guide Louisville  CO: EDUCAUSE

Knox, J. (2014) Digital culture clash: “massive” education in the E-learning and Digital Cultures MOOC Distance Education, Vol. 35, No. 2

Wenger, E. (2000) Communities of Practice: Learning, Meaning and Identity Cambridge UK: Cambridge University Press

Wenger, E. (2014) Communities of practice: a brief introduction, accessed 26 September, 2014

Wenger, E., McDermott, R. and Snyder, W. (2002) Cultivating Communities of Practice Boston MA: Harvard Business School Press