November 21, 2014

EDEN research papers: OERs (inc. MOOCs), quality/assessment, social media, analytics and research methods

EDEN has now published the second report of my review of papers submitted to the EDEN research workshop in Oxford a couple of weeks ago. All the full papers for the workshop can be accessed here.

Main lessons (or unanswered questions) I took away:

OERs and MOOCs

  • what does awarding badges or certificates for MOOCs or other OERs actually mean? For instance, will institutions give course exemption or credits for the awards, or accept such awards for admission purposes? Or will the focus be on employer recognition? How will participants who are awarded badges know what their ‘currency’ is worth?
  • can MOOCs be designed to go beyond comprehension or networking to develop other critical 21st century skills, such as critical thinking, analysis and evaluation? Can they lead to ‘transformational learning’ as identified by Kumar and Arnold (see Quality and Assessment below)?
  • are there better design models for open courses than MOOCs as currently structured? If so, what would they look like?
  • is there a future for learning object repositories when nearly all academic content becomes open and online?

Quality and assessment

  • research may inform but won’t resolve policy issues
  • quality is never ‘objective’ but is value-driven
  • the level of intervention must be sustained and substantial enough to result in significant learning gains
  • there’s already a good deal of research indicating the necessary conditions for the successful use of online discussion forums, but if these conditions are not present, learning will not take place
  • the OU’s traditional model of course design constrains the development of successful collaborative online learning.

Use of social media in open and distance learning

There were surprisingly few papers on this topic. My main takeaway:

  • the use of social media needs to be driven by sound pedagogical theory that takes into account the affordances of social media (as in Sorensen’s study described in an earlier post under course design)

Data analytics and student drop-out

  • institutions/registrars must pay attention to how student data are tagged and labelled for analytics purposes, so that there is consistency in definitions, aggregation and interpretation (a sketch of what consistent tagging might look like follows this list)
  • when developing or applying analytics software, consideration needs to be given to the level of analysis and to what potential users of the data are looking for; this means working with instructional designers, faculty and administrators from the beginning
  • analytics need to be integrated with action plans to identify and support at-risk students early
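As a concrete illustration of the first point, here is a minimal sketch of what a consistently tagged learning event might look like. The field names and event vocabulary are my own invention for illustration, not a schema from any of the papers:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical event record: the point is that every system in the
# institution logs the same fields with the same definitions, so that
# aggregation and interpretation stay consistent across tools.
@dataclass
class LearningEvent:
    student_id: str        # one canonical ID, not per-tool identifiers
    course_id: str         # the registrar's course code, not an LMS-internal key
    event_type: str        # drawn from a fixed, documented vocabulary
    occurred_at: datetime  # always recorded in UTC to avoid timezone drift

# A fixed, documented vocabulary of event types shared by all systems.
ALLOWED_EVENT_TYPES = {"login", "page_view", "forum_post", "quiz_submit"}

def validate(event: LearningEvent) -> None:
    """Reject events that do not conform to the shared vocabulary."""
    if event.event_type not in ALLOWED_EVENT_TYPES:
        raise ValueError(f"unknown event type: {event.event_type!r}")

validate(LearningEvent("s123", "EDUC500", "forum_post",
                       datetime.now(timezone.utc)))
```

The point is not these particular fields, but that every system feeding the analytics pipeline uses the same identifiers, the same event vocabulary and the same time conventions.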

Research methods

Next

If these bullets interest you at all, then I strongly recommend you go and read the original papers in full – click here. My summary is of necessity personal and abbreviated, and the papers provide much greater richness of context.

Why the fuss about MOOCs? Political, social and economic drivers

Daphne Koller’s TED talk on MOOCs (click to activate video)

The end of MOOCs

This is the last part of my chapter on MOOCs for my online open textbook, Teaching in a Digital Age. In a series of prior posts, I have looked at the strengths and weaknesses of MOOCs. Here I summarise this section and look at why MOOCs have gained so much attention.

Brief summary of strengths and weaknesses of MOOCs

The main points of my analysis of the strengths and weaknesses of MOOCs can be summarised as follows:

Strengths

  • the main value proposition of MOOCs is that, through the use of computer automation and/or peer-to-peer communication, MOOCs can eliminate the very large variable costs in higher education associated with providing learner support and quality assessment
  • MOOCs, particularly xMOOCs, deliver high quality content from some of the world’s best universities for free to anyone with a computer and an Internet connection
  • MOOCs can be useful for opening access to high quality content, particularly in Third World countries, but to do so successfully will require a good deal of adaptation, and substantial investment in local support and partnerships
  • MOOCs are valuable for developing basic conceptual learning, and for creating large online communities of interest or practice
  • MOOCs are an extremely valuable form of lifelong learning and continuing education
  • MOOCs have forced conventional and especially elite institutions to reappraise their strategies towards online and open learning
  • institutions have been able to extend their brand and status by making public their expertise and excellence in certain academic areas

Weaknesses

  • the high registration numbers for MOOCs are misleading; less than half of registrants actively participate, and of these, only a small proportion successfully complete the course; nevertheless, absolute numbers of successful participants are still higher than for conventional courses
  • MOOCs are expensive to develop, and although commercial organisations offering MOOC platforms have opportunities for sustainable business models, it is difficult to see how publicly funded higher education institutions can develop sustainable business models for MOOCs
  • MOOCs tend to attract those who already have a high level of education, rather than widening access
  • MOOCs so far have been limited in their ability to develop high-level academic learning, or the high-level intellectual skills needed in a knowledge-based society
  • assessment of the higher levels of learning remains a challenge for MOOCs, to the extent that most MOOC providers will not recognise their own MOOCs for credit
  • MOOC materials may be limited by copyright or time restrictions for re-use as open educational resources

Why the fuss about MOOCs?

It can be seen from the previous section that the pros and cons of MOOCs are finely balanced. Given the obvious questions about the value of MOOCs, and the fact that before MOOCs arrived there had been over ten years of substantial but quiet progress in the use of online learning for undergraduate and graduate programs, you might wonder why MOOCs have commanded so much media interest. Why have so many government policy makers, economists and computer scientists become so ardently supportive of MOOCs? And why has there been such a strong negative reaction, not only from many traditional university and college instructors, who are right to feel threatened by some of the claims being made for MOOCs, but also from many professionals in online learning (see for instance Bates, 2012; Daniel, 2012; Hill, 2012; Watters, 2013), who might be expected to be more supportive?

It needs to be recognised that the discourse around MOOCs is not usually based on a cool, rational, evidence-based analysis of the pros and cons of MOOCs, but is more likely to be driven by emotion, self-interest, fear, or ignorance of what education is actually about. Thus it is important to explore the political, social and economic factors that have driven MOOC mania.

Massive, free and Made in America!

This is what I will call the intrinsic reason for MOOC mania. It is not surprising that, since the first MOOC from Stanford professors Andrew Ng and Daphne Koller attracted 270,000 sign-ups from around the world, since the course was free, and since it came from professors at one of the most prestigious private universities in the USA, the American media were all over it. It was big news in its own right, however you look at it, especially as courses from Sebastian Thrun, another Stanford professor, and others from MIT and Harvard followed shortly, with equally staggering numbers of participants.

It’s the Ivy Leagues!

Until MOOCs came along, the elite universities in the USA, such as Stanford, MIT, Harvard and UC Berkeley, as well as many of the most prestigious universities in Canada, such as the University of Toronto and McGill, and elsewhere, had largely ignored online learning in any form.

However, by 2011, online learning, in the form of for credit undergraduate and graduate courses, was making big inroads at many other, very respectable universities, such as Carnegie Mellon, Penn State, and the University of Maryland in the USA, and also in many of the top tier public universities in Canada and elsewhere, to the extent that almost one in three course enrolments in the USA were now in online courses. Furthermore, at least in Canada, the online courses were often getting good completion rates and matching on-campus courses for quality.

The Ivy League and other highly prestigious universities that had ignored online learning were beginning to look increasingly out of touch by 2011. By launching into MOOCs, these prestigious universities could jump to the head of the queue in terms of technology innovation, while at the same time protecting their selective, highly personal and high-cost campus programs from direct contact with online learning. In other words, MOOCs gave these prestigious universities a safe sandbox in which to explore online learning, and the elite universities in turn gave credibility to MOOCs and, indirectly, to online learning as a whole.

It’s disruptive!

For years before 2011, various economists, philosophers and industrial gurus had been predicting that education was the next big area for disruptive change due to the march of new technologies (see for instance Lyotard, 1979; Tapscott, undated; Christensen and Eyring, 2011).

Online learning in credit courses, though, was being quietly absorbed into the mainstream of university teaching through blended learning, without any signs of major disruption. With MOOCs, here was a massive change, providing evidence at long last in the education sector to support the theories of disruptive innovation.

It’s Silicon Valley!

It is no coincidence that the first MOOCs were all developed by entrepreneurial computer scientists. Ng and Koller very quickly went on to create Coursera as a private commercial company, followed shortly by Thrun, who created Udacity. Anant Agarwal, a computer scientist at MIT, went on to head up edX.

The first MOOCs were very typical of Silicon Valley start-ups: a bright idea (massive, open online courses with cloud-based, relatively simple software to handle the numbers), thrown out into the market to see how it might work, supported by more technology and ideas (in this case, learning analytics, automated marking, peer assessment) to deal with any snags or problems. Building a sustainable business model would come later, when some of the dust had settled.

As a result, it is not surprising that almost all the early MOOCs completely ignored pedagogical theory about best practices in teaching online, and prior research on the factors associated with success or failure in online learning. Nor is it surprising that a very low percentage of participants actually complete MOOCs successfully: there is a lot of catching up still to do. So far, though, Coursera, and to a lesser extent edX, have continued to ignore educators and prior research in online learning; they would rather do their own research, even if it means re-inventing the wheel. The commercial MOOC platform providers, meanwhile, are beginning to work out sustainable business models.

It’s the economy, stupid!

Of all the reasons for MOOC mania, Bill Clinton’s famous election slogan resonates most with me. It should be remembered that by 2011 the consequences of the disastrous financial collapse of 2008 were working their way through the economy, and were particularly affecting the finances of state governments in the USA.

The recession meant that states were suddenly desperately short of tax revenues, and were unable to meet the financial demands of state higher education systems. For instance, California’s community college system, the nation’s largest, suffered about $809 million in state funding cuts between 2008 and 2012, resulting in a shortfall of 500,000 places in its campus-based colleges. Free MOOCs were seen as manna from heaven by the state governor, Jerry Brown.

One consequence of rapid cuts to government funding was a sharp spike in tuition fees, bringing the real cost of higher education sharply into focus. Tuition fees in the USA have increased by 7% per annum over the last 10 years, compared with an inflation rate of 4% per annum. Here at last was a possible way to rein in the high cost of higher education.

Now, though, the economy in the USA is picking up and revenues are flowing back into state coffers, so the pressure for more radical solutions to the cost of higher education is beginning to ease. It will be interesting to see whether MOOC mania continues as the economy grows, although the search for more cost-effective approaches to higher education is not going to disappear.

Don’t panic!

These are all very powerful drivers of MOOC mania, which makes it all the more important to try to be clear and cool headed about the strengths and weaknesses of MOOCs. The real test is whether MOOCs can help develop the knowledge and skills that learners need in a knowledge-based society. The answer of course is yes and no.

As a low-cost supplement to formal education, they can be quite valuable, but not as a complete replacement. They can at present teach conceptual learning and comprehension and, in a narrow range of activities, the application of knowledge. They can be useful for building communities of practice, where already well-educated people, or people with a deep, shared passion for a topic, can learn from one another – another form of continuing education.

However, certainly to date, MOOCs have not been able to demonstrate that they can lead to transformative learning, deep intellectual understanding, evaluation of complex alternatives, and evidence-based decision-making, and without greater emphasis on expert-based learner support and more qualitative forms of assessment, they probably never will, at least without substantial increases in their costs.

At the end of the day, there is a choice between throwing more resources into MOOCs, in the hope that some of their fundamental flaws can be overcome without too dramatic an increase in costs, and investing in other forms of online learning and educational technology that could lead to more cost-effective learning outcomes. I know where I would put my money, and it’s not into MOOCs.

Over to you

This will be my last contribution to the discussion of MOOCs for my book, so let’s have it!

1. Do you agree with the strengths and weaknesses of MOOCs that I have laid out? What would you add or remove or change?

2. What do you think of the drivers of MOOC mania? Are these accurate? Are there other, more important drivers of MOOC mania?

3. Do you even agree that there is a mania about MOOCs, or is their rapid expansion all perfectly understandable?

References

Bates, T. (2012) What’s right and what’s wrong about Coursera-style MOOCs, Online learning and distance education resources, August 5

Christensen, C. and Eyring, H. (2011) The Innovative University: Changing the DNA of Higher Education New York: John Wiley & Sons

Daniel, J. (2012) Making sense of MOOCs: Musings in a maze of myth, paradox and possibility, Journal of Interactive Media in Education, Vol. 3

Hill, P. (2012) Four Barriers that MOOCs must overcome to build a sustainable model, e-Literate, July 24

Lyotard, J-F. (1979) La Condition postmoderne: rapport sur le savoir Paris: Minuit

Tapscott, D. (undated) The transformation of education dontapscott.com

Watters, A. (2013) MOOC Mania: Debunking the hype around massive, open online courses The Digital Shift, 18 April

Strengths and weaknesses of MOOCs: Part 3, branding and cost

The MOOC value proposition is that MOOCs can eliminate the variable costs of course delivery. Image: © OpenTuition.com, 2014

The story so far

This is the fifth in a series of posts from my open textbook, Teaching in a Digital Age. I have already published four extracts from the book on MOOCs.

In this post, I examine their value for branding, and their costs.

Branding

Hollands and Tirthali (2014), in their survey of institutional expectations for MOOCs, found that building and maintaining brand was the second most important reason for institutions launching MOOCs (the most important was extending reach, which can also be seen as partly a branding exercise). Institutional branding through the use of MOOCs has been helped by elite universities such as Stanford, MIT and Harvard leading the charge, and by Coursera limiting access to its platform to only ‘top tier’ universities. This of course has led to a bandwagon effect, especially since many of the universities launching MOOCs had previously disdained to move into credit-based online learning. MOOCs provided a way for these elite institutions to jump to the head of the queue in terms of status as ‘innovators’ of online learning, even though they arrived late to the party.

It obviously makes sense for institutions to use MOOCs to bring their areas of specialist expertise to a much wider public, such as the University of Alberta offering a MOOC on dinosaurs, MIT on electronics, and Harvard on Ancient Greek Heroes. MOOCs certainly help to widen knowledge of the quality of individual professors (who are usually delighted to reach more students in one MOOC than in a lifetime of on-campus teaching). MOOCs are also a good way to give a glimpse of the quality of courses and programs offered by an institution.

However, it is difficult to measure the real impact of MOOCs on branding. As Hollands and Tirthali put it:

While many institutions have received significant media attention as a result of their MOOC activities, isolating and measuring impact of any new initiative on brand is a difficult exercise. Most institutions are only just beginning to think about how to capture and quantify branding-related benefits.

In particular, these elite institutions do not need MOOCs to boost the number of applicants for their campus-based programs (none to date is willing to accept successful completion of a MOOC for admission to credit programs), since elite institutions have no difficulty in attracting already highly qualified students.

Furthermore, once every other institution starts offering MOOCs, the branding effect gets lost to some extent. Indeed, exposing poor quality teaching or course planning to many thousands can have a negative impact on an institution’s brand, as the Georgia Institute of Technology found when one of its MOOCs crashed and burned (Jaschik, 2013). By and large, though, most MOOCs succeed in the sense of bringing an institution’s reputation for knowledge and expertise to many more people than it could reach through any other form of teaching.

Costs and economies of scale

The main value proposition of MOOCs is that they are free to participants. Once again, we shall see that this is more true in principle than in practice, because MOOC providers may charge a range of fees, especially for assessment. Although MOOCs may be free for participants, they are not without substantial cost to the provider institutions. Also, there are large differences between the costs of xMOOCs and cMOOCs, the latter being generally much cheaper to develop, although there are still some opportunity or actual costs even for cMOOCs.

Once again, there is very little information to date on the actual costs of designing and delivering a MOOC. However, we do know what the main cost factors are in online and distance learning, from previous research by Rumble (2001) and Hülsmann (2003). Using similar costing methodology, I tracked and analysed the cost of an online masters program at the University of British Columbia over a seven year period (Bates and Sangrà, 2011). This program used mainly a learning management system as the core technology, with instructors both developing the course and providing online learner support and assessment, assisted where necessary by extra adjunct faculty for handling larger class enrolments.

Costs of online learning break down into several categories:

  • initial program planning
  • course development
  • course delivery
  • course maintenance
  • institutional overheads.

Within each of these categories, there are sub-categories, such as the cost of instructors, media production and delivery costs, instructional design, and the cost of producing and delivering support materials. Not all costs apply in all circumstances, of course.

I found in my analysis of the costs of the UBC program that in 2003, development costs were approximately $20,000 to $25,000 per course. However, over a seven year period course development constituted less than 15% of the total cost, and occurred mainly in the first year or so of the program. Delivery costs, which included providing online learner support and student assessment, constituted more than a third of the total cost, and of course continued each year the course was offered (see Figure 6.8 below). Thus in credit-based online learning, delivery costs tend to be more than double the development costs over the life of a program.

Figure 6.8: Costs of an online masters program over seven years (from Bates and Sangrà, 2011, p. 172)
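To make these proportions concrete, here is a minimal sketch of the cost structure of a single online course over a seven-year life. The dollar figures are illustrative assumptions chosen to be consistent with the proportions reported above, not data from the study:

```python
# Illustrative cost model for one online course over a seven-year life,
# using round numbers chosen to match the proportions reported above.
# These figures are assumptions for illustration, not the UBC data.

development = 22_500           # one-off, in year 1 (the $20,000-$25,000 range)
delivery_per_year = 7_500      # learner support and assessment, every year
maintenance_per_year = 3_000   # revisions and fixes, every year
overheads_per_year = 8_500     # planning, administration and infrastructure

years = 7
delivery = delivery_per_year * years
total = (development + delivery
         + (maintenance_per_year + overheads_per_year) * years)

print(f"development share: {development / total:.0%}")           # ~14%, under 15%
print(f"delivery share:    {delivery / total:.0%}")              # ~34%, over a third
print(f"delivery / development: {delivery / development:.1f}x")  # ~2.3x
```

Whatever the exact numbers, the structural point holds: development is a one-off cost, while delivery recurs every year the course runs, so over the life of a program delivery dominates.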

The main difference between MOOCs, credit-based online teaching, and campus-based teaching is that in principle MOOCs eliminate all delivery costs, because MOOCs do not provide learner support or instructor-delivered assessment, although again in practice this is not always true.

We do not have enough cases at the moment to draw firm conclusions about the costs of MOOCs, but we do have some data. The University of Ottawa estimated the cost of developing an xMOOC at around $100,000, based on figures provided to the university by Coursera and on its own knowledge of the cost of developing online courses for credit (University of Ottawa, 2013).

Engle (2014) has reported on the actual cost of five MOOCs from the University of British Columbia. (In essence, there were really four UBC MOOCs, as one was in two shorter parts.) There are two important features concerning the UBC MOOCs that do not necessarily apply to other MOOCs. First, the UBC MOOCs used a wide variety of video production methods, from full studio production to desktop recording, so development costs varied considerably, depending on the sophistication of the video production technique. Second, the UBC MOOCs made extensive use of paid academic assistants, who monitored discussions and adapted or changed course materials as a result of student feedback, so there were substantial delivery costs as well.

Appendix B of the UBC report gives a pilot total of $217,657, but this excludes academic assistance and, perhaps the most significant cost, instructor time. Academic assistance came to 25% of the overall cost in the first year (excluding the cost of faculty). Working from the video production costs ($95,350) and the proportion of total costs devoted to video production (44%, from Figure 1 in the report), I estimate the direct cost at approximately $216,700, or roughly $54,000 per MOOC, excluding faculty time and co-ordination support (i.e. excluding program administration and overheads) but including academic assistance. The range of costs is almost as important: video production costs for the MOOC that used intensive studio production were more than six times those of one of the other MOOCs.
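For readers who want to check the arithmetic behind that estimate, it reconstructs as follows (four MOOCs, since one of the five was in two parts):

$$\text{direct cost} \approx \frac{\$95{,}350}{0.44} \approx \$216{,}700, \qquad \text{cost per MOOC} \approx \frac{\$216{,}700}{4} \approx \$54{,}000.$$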

There is also clearly a large opportunity cost involved in offering xMOOCs. By definition, the most highly valued faculty are involved in offering MOOCs. In a large research university, such faculty are likely to have, at a maximum, a teaching load of four to six courses a year. Although most instructors volunteer to do MOOCs, their time is limited. Either it means dropping one credit course for at least one semester, equivalent to 25% or more of their teaching load, or xMOOC development and delivery replaces time spent doing research. Furthermore, unlike credit-based courses, which typically run for anywhere between five and seven years, MOOCs are often offered only once or twice.

However one looks at it, the cost of xMOOC development, without including the time of the MOOC instructor, tends to be almost double the cost of developing an online credit course using a learning management system, because of the use of video in MOOCs. If the cost of the instructor is included, xMOOC production costs come closer to three times those of a similar-length online credit course, especially given the extra time faculty tend to put in for such a public demonstration of their teaching. The full cost of an xMOOC then seems to be currently around $100,000, but we really need some good, reliable data to substantiate this estimate. There is no reason, though, why xMOOCs could not use cheaper production methods for content delivery, such as an LMS instead of video, or re-using and re-editing recordings of classroom lectures made via lecture capture.

Without learner support or academic assistance, though, delivery costs for MOOCs are zero, and this is where the huge potential for savings exists. The issue then is whether MOOCs can succeed without the cost of learner support and human assessment, or, more likely, whether MOOCs can substantially reduce delivery costs through automation without loss of quality in learner performance. There is no evidence to date that they can do this, and prior research on the importance of instructor presence for successful online credit programs suggests that learner support and assessment remain a major challenge for MOOCs.

In terms of sustainable business models, the elite universities have been able to move into xMOOCs because of generous donations from private foundations and use of endowment funds, but these forms of funding are limited for most institutions. Coursera and Udacity have the opportunity to develop successful business models through various means, such as charging MOOC provider institutions for use of their platform, by collecting fees for badges or certificates, through the sale of participant data, through corporate sponsorship, or through direct advertising.

However, particularly for publicly funded universities or colleges, most of these sources of income are not available or permitted, so it is hard to see how they can begin to recover the cost of a substantial investment in MOOCs, even by ‘cannibalising’ MOOC material for on-campus use. Every time a MOOC is offered, it takes away resources that could be used for online credit programs. Thus institutions are faced with some hard decisions about where to invest their resources for online learning. The case for putting scarce resources into MOOCs is far from clear, unless some way can be found to give credit for successful MOOC completion.

Next

I hope to wrap up what has turned out to be a whole chapter on MOOCs in my next post in this series, which will analyse the political and economic reasons behind the rapid development of MOOCs, and which will also provide some brief conclusions about MOOCs as a design model.

Help!

1. I am very conscious of the limited amount of data on either the success of MOOCs for branding, or in particular the true costs of developing and delivering MOOCs. So any information on the cost of MOOCs that you can provide or direct me to will be very much appreciated.

2. Has anyone attempted to measure the value of MOOCs for ‘branding’ a university or college? How would you do this?

3. Is it reasonable to compare the costs of xMOOCs to the costs of online credit courses? Are they competing for the same funds, or are they categorically different in their funding source and goals? If so, how?

4. Could you make the case that cMOOCs are a better value proposition than xMOOCs – or are they again too different to compare? In particular has anyone got data on the true cost of cMOOCs? How would you cost them?

As always, feedback, criticisms and comments are welcome.

References

Bates, A. and Sangrà, A. (2011) Managing Technology in Higher Education San Francisco: Jossey-Bass/John Wiley and Co

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Hollands, F. and Tirthali, D. (2014) MOOCs: Expectations and Realities New York: Columbia University Teachers’ College

Hülsmann, T. (2003) Costs without camouflage: a cost analysis of Oldenburg University’s two graduate certificate programs offered as part of the online Master of Distance Education (MDE): a case study, in Bernath, U. and Rubin, E. (eds.) Reflections on Teaching in an Online Program: A Case Study Oldenburg, Germany: Bibliotheks- und Informationssystem der Carl von Ossietzky Universität Oldenburg

Jaschik, S. (2013) MOOC Mess, Inside Higher Education, February 4

Rumble, G. (2001) The costs and costing of networked learning, Journal of Asynchronous Learning Networks, Vol. 5, No. 2

University of Ottawa (2013) Report of the e-Learning Working Group Ottawa ON: The University of Ottawa

A New Zealand analysis of MOOCs

Shrivastava, A. and Guiney, P. (2014) Technological Development and Tertiary Education Delivery Models: The Arrival of MOOCs  Wellington NZ: Tertiary Education Commission/Te Amorangi Mātauranga Matua

Why this paper?

Another report for the record on MOOCs, this time from the New Zealand Tertiary Education Commission. The reasoning behind this report:

The paper focuses on MOOCs [rather than doing a general overview of emerging technologies] because of their potential to disrupt tertiary education and the significant opportunities, challenges and risks that they present. MOOCs are also the sole focus of this paper because of their scale and the involvement of the elite United States universities.

What’s in the paper?

The paper provides a fairly standard, balanced analysis of developments in MOOCs, first by describing the different MOOC delivery models, their business models and the drivers behind MOOCs, then by following up with a broad discussion of the possible implications of MOOCs for New Zealand, such as unbundling of services, possible economies of scale, globalization of tertiary (higher) education, adaptability to learners’ and employers’ needs, and the possible impact on New Zealand’s tertiary education workforce.

There is also a good summary of MOOCs being offered by New Zealand institutions.

At the end of the paper some interesting questions for further discussion are raised:

  • What will tertiary education delivery look like in 2030?

  • What kinds of opportunities and challenges do technological developments, including MOOCs, present to the current policy, regulatory and operational arrangements for tertiary teaching and learning in New Zealand?

  • How can New Zealand make the most of the opportunities and manage any associated risks and challenges?

  • Do MOOCs undermine the central value of higher education, or are they just a helpful ‘updating’ that reflects its new mass nature?

  • Where do MOOCs fit within the New Zealand education and qualifications systems?

  • Who values the knowledge and skills gained from a MOOC programme and why?

  • Can economies of scale be achieved through MOOCs without loss of quality?

  • Can MOOCs lead to better learning outcomes at the same or less cost than traditional classroom-based teaching? If so, how might the Government go about funding institutions that want to deliver MOOCs to a mix of domestic and international learners?

  • What kinds of MOOC accreditation models might make sense in the context of New Zealand’s quality-assurance system?

Answers on a postcard, please, to the NZ Tertiary Education Commission.

Comment

Am I alone in wondering what has happened to for-credit online education in government thinking about the future? It is as if 20 years of development of undergraduate and graduate online courses and programs never existed. Surely a critical question for institutions and government planners is:

  • what are the relative advantages and disadvantages of MOOCs compared with other forms of online learning? What can MOOCs learn from our prior experience with credit-based online learning?

There are several reasons for considering this, but one of the most important is the huge investment many institutions, and, indirectly, governments, have already made in credit-based online learning.

By and large, online learning in publicly funded universities, both in New Zealand and in Canada, has been very successful, in terms both of increasing access and of student learning. It is also important to be clear about the differences, and some of the similarities, between credit-based online learning and MOOCs.

Some of the implications laid out in this paper, such as possibilities of consortia and institutional collaboration, apply just as much to credit-based online learning as to MOOCs, and many of the negative criticisms of MOOCs, such as difficulties of assessment and lack of learner support, disappear when applied to credit-based online learning.

Please, policy-makers, realise that MOOCs are not your only option for innovation through online learning. There are more established and well tested solutions already available.

EDEN research papers on learner characteristics, course design and faculty development in online learning

Some of the participants at the EDEN Research workshop, 2014

EDEN has now published my review of some of the research papers submitted to the EDEN research workshop in Oxford a couple of weeks ago. All the full papers for the workshop can be accessed here.

Main lessons I culled from these papers:

Learner characteristics

  • open and distance learners/online learners are much more heterogeneous than on-campus students: social background, institutional differences, prior education/learning experiences, all influence their readiness for online learning
  • as a result, ODL students need much more personalization or individualization of their learning: one size does not fit all
  • special attention needs to be paid to ‘at risk’ students very early in their studies: intense personal/tutor support is critical for such students.

It can be seen that such findings are important not only for the design of for-credit programs but also for MOOCs.

Course design

There were surprisingly few papers directly on this topic (although papers on other topics such as assessment and quality are also relevant of course).

The main lessons for me from this research on course design were:

  • technology offers opportunity for radically new course designs and new approaches to student learning,
  • such new designs need to be driven and informed by sound pedagogical theory/principles and prior research.

Faculty development

Main lessons:

  • we should be working to use technology to decrease faculty workload, not to increase it, as at present
  • this will probably require team teaching, with different skills within the team (subject expert, learner support staff, course designer/pedagogue, technology specialist); it is unrealistic to expect faculty to be expert in all these areas
  • to individualize learning, increased use of adaptive technology and the creation and support of personal learning environments will be necessary to help faculty manage the workload.

Next

Two more reports are expected shortly, covering OERs/MOOCs, quality and assessment, research methods and overall conclusions.

First part of report on EDEN Research Workshop now available

Sian Bayne presenting at EDEN Research Workshop, Oxford

My official report on the 8th EDEN Research Workshop is being released as a series of four blog posts by the President of EDEN, Professor António Moreira Teixeira. The first, which is a very brief summary of the keynote presentations, is available here.

The other three, which provide my personal analysis of the research papers presented at the workshop, will be published on consecutive days later this week and I will let you know when each is published.

Perhaps more importantly, the 40+ papers presented at the EDEN Research Workshop are now available in their entirety as a pdf file. If you have any interest in research in online and/or open and distance education, many of the papers are well worth reading in full, rather than relying on my personal interpretation. Happy reading, Ph.D. students!

A review of MOOCs and their assessment tools

What kind of MOOC?

Chauhan, A. (2014) Massive Open Online Courses (MOOCS): Emerging Trends in Assessment and Accreditation Digital Education Review, No. 25

For the record, Amit Chauhan, from Florida State University, has reviewed the emerging trends in MOOC assessments and their application in supporting student learning and achievement.

Holy proliferating MOOCs!

He starts with a taxonomy of MOOC instructional models, as follows:

  • cMOOCs
  • xMOOCs
  • BOOCs (big open online courses): only one example is given, by a professor from Indiana University with a grant from Google; it appears to be a cross between an xMOOC and a cMOOC and had 500 participants.
  • DOCCs (distributed open collaborative courses): this involved 17 universities sharing and adapting the same basic MOOC
  • LOOCs (little open online courses): alongside 15-20 tuition-paying campus-based students, these courses allow a limited number of non-registered students to take the course, also for a fee. Three examples are given, all from New England.
  • MOORs (massive open online research): again, just one example is given, from UC San Diego, which seems to be a mix of video-based lectures and student research projects guided by the instructors
  • SPOCs (small, private, online courses): the example given is from Harvard Law School, which pre-selected 500 students from over 4,000 applicants; they take the same video-delivered lectures as on-campus students enrolled at Harvard
  • SMOCs (synchronous massive open online courses): live lectures from the University of Texas offered to campus-based students are also available synchronously to non-enrolled students for a fee of $550. Again, just one example.

MOOC assessment models and emerging technologies

Chauhan describes ‘several emerging tools and technologies that are being leveraged to assess learning outcomes in a MOOC. These technologies can also be utilized to design and develop a MOOC with built-in features to measure learning outcomes.’

  • learning analytics on MIT’s 6.002x, Circuits and Electronics. This is a report of the study by Breslow et al. (2013) of the use of learning analytics to study participants’ behaviour on the course to identify factors influencing student performance.
  • personal learning networks on PLENK 2010: this cMOOC is actually about personal learning networks and encouraged participants to use a variety of tools to develop their own personal learning networks
  • mobile learning on MobiMOOC, another connectivist MOOC. The learners in MobiMOOC utilized mobile technologies for accessing course content, knowledge creation and sharing within the network. Data were collected from participant discussion forums and hashtag analysis to track participant behaviour
  • digital badges have been used in several MOOCs to reward successful completion of an end of course test, participation in discussion forums, or in peer review activities
  • adaptive assessment: assessments based on Item Response Theory (IRT) are designed to adapt automatically to learner ability in order to measure performance and learning outcomes. The tests include items at different difficulty levels and, based on the learner’s response to each item, the difficulty level increases or decreases to match the learner’s ability (a minimal sketch of the idea follows this list). No example of actual use of IRT in MOOCs was given.
  • automated assessments: Chauhan describes two automated assessment tools, Automated Essay Scoring (AES) and Calibrated Peer Review™ (CPR), that are really automated tools for assessing and giving feedback on writing skills. One study on their use in MOOCs (Balfour, 2013) is cited.
  • recognition of prior learning: I think Chauhan is suggesting that institutions offering RPL can/should include MOOCs in student RPL portfolios.
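To make the adaptive assessment idea concrete, here is a minimal sketch of item selection and ability estimation under the one-parameter (Rasch) IRT model. This is a generic illustration of the technique, not code from any MOOC platform, and the crude update rule stands in for the maximum-likelihood or Bayesian estimation a real system would use:

```python
import math

def p_correct(theta: float, difficulty: float) -> float:
    """Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def next_item(theta: float, item_bank: dict[str, float]) -> str:
    """Pick the unanswered item whose difficulty is closest to the current
    ability estimate; such items are the most informative under this model."""
    return min(item_bank, key=lambda item: abs(item_bank[item] - theta))

def update_theta(theta: float, difficulty: float, correct: bool,
                 step: float = 0.5) -> float:
    """Crude ability update: move theta up or down in proportion to how
    surprising the response was given the current estimate."""
    surprise = (1.0 if correct else 0.0) - p_correct(theta, difficulty)
    return theta + step * surprise

# Example: a learner of unknown ability works through a small item bank
# (item name -> difficulty on the same scale as theta).
bank = {"q1": -1.0, "q2": 0.0, "q3": 1.0, "q4": 2.0}
theta = 0.0
for response in (True, True, False):   # simulated right/right/wrong answers
    item = next_item(theta, bank)
    theta = update_theta(theta, bank.pop(item), response)
print(f"estimated ability: {theta:.2f}")
```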

Chauhan concludes:

Assessment in a MOOC does not necessarily have to be about course completion.  Learners can be assessed on time-on-task; learner-course component interaction; and a certification of the specific skills and knowledge gained from a MOOC….. Ultimately, the satisfaction gained from completing the course can be potential indicator of good learning experiences.

Alice in MOOCland

Chauhan describes the increasing variation of instructional methods now associated with the generic term ‘MOOC’, to the point where one has to ask whether the term has any consistent meaning. It’s difficult to see how a SPOC for instance differs from a typical online credit course, except perhaps in that it uses a recorded lecture rather than a learning management system or VLE. The only common factor in these variations is that the course is being offered to some non-registered students, but then if they have to pay a $500 fee, surely that’s a registered student? If a course is neither massive, nor open, nor free, how can it be a MOOC?

Further, if MOOC participants are taking exactly the same course and tests as registered students, will the institution award them credit for it and admit them to the institution? If not, why not? It seems that some institutions really haven’t thought this through. I’d like to know what Registrars make of all this.

At some point, institutions will need to develop a clearer, more consistent strategy for open learning, in terms of how it can best be provided, how it calibrates with formal learning, and how open learning can be accommodated within the fiscal constraints of the institution, and then where MOOCs might fit with the strategy. It seems that a lot of institutions – or rather instructors – are going into open learning buttock-backwards.

More disturbing for me though is the argument Chauhan makes for assessing everything except what participants learn from MOOCs. With the exception of automated tests, all these tools do is describe all kinds of behaviour except for learning. These tools may be useful for identifying factors that influence learning, on a post hoc rationalization, but you need to be able to measure the learning in the first place, unless you see MOOCs as some cruel form of entertainment. I have no problem with trying to satisfy students, I have no problem with MOOCs as un-assessed non-formal education, but if you try to assess participants, at the end of the day it’s what they learn that matters. MOOCs need better tools for measuring learning, but I didn’t see any described in this article.

References

Balfour, S. P. (2013) Assessing writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review, Research & Practice in Assessment, Vol. 8, No. 1

Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D. and Seaton, D. T. (2013) Studying learning in the worldwide classroom: research into edX’s first MOOC, Research & Practice in Assessment, Vol. 8, pp. 13-25

The strengths and weaknesses of MOOCs: Part 2: learning and assessment

Remote exam proctoring

The writing of my book, Teaching in a Digital Age, has been interrupted for nearly two weeks by my trip to England for the EDEN Research Workshop. As part of the first draft of the book, I have already published three posts on MOOCs.

In this post, I ask (and try to answer) what participants learn from MOOCs, and I also evaluate MOOC assessment methods.

What do students learn in MOOCs?

This is a much more difficult question to answer, because so little of the research to date (2014) has tried to answer it. (One reason, as we shall see, is that assessment of learning in MOOCs remains a major challenge.) There are at least two kinds of study: quantitative studies that seek to quantify learning gains, and qualitative studies that describe the experience of learners within MOOCs, which indirectly provide some insight into what they have learned.

At the time of writing, the best-conducted study of learning in MOOCs is by Colvin et al. (2014), who investigated ‘conceptual learning’ in an MIT Introductory Physics MOOC. They compared learner performance not only between different sub-categories of learners within the MOOC, such as those with no physics or math background and those, such as physics teachers, who had considerable prior knowledge, but also with on-campus students taking the same curriculum in a traditional campus teaching format. In essence, the study found no significant differences in learning gains between or within the two types of teaching, but it should be noted that the on-campus students were students who had failed an earlier version of the course and were retaking it.

This research is a classic example of the ‘no significant difference’ phenomenon found in comparative studies of educational technology; other variables, such as differences in the types of students, were as important as the mode of delivery. Also, this MOOC design represents a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It does not attempt to develop the skills needed in a digital age as identified in Chapter 1.

There have been far more studies of the experience of learners within MOOCs, particularly focusing on the discussions within MOOCs (see for instance Kop, 2011). In general (although there are exceptions), discussions are unmonitored, and it is left to participants to make connections and respond to other students’ comments.

However, there are some strong criticisms of the effectiveness of the discussion element of MOOCs for developing the high-level conceptual analysis required for academic learning. To develop deep, conceptual learning, there is in most cases a need for intervention by a subject expert: to clarify misunderstandings or misconceptions, to provide accurate feedback, to ensure that the criteria for academic learning, such as use of evidence and clarity of argument, are being met, and to provide the input and guidance needed to reach deeper understanding (see Harasim, 2013).

Furthermore, the more massive the course, the more likely participants are to feel ‘overload, anxiety and a sense of loss’ if there is not some instructor intervention or structure imposed (Knox, 2014). Firmin et al. (2014) have shown that when there is some form of instructor ‘encouragement and support of student effort and engagement’, results improve for all participants in MOOCs. Without a structured role for subject experts, participants are faced with a wide variety of quality in the comments and feedback from other participants. There is again a great deal of research on the conditions necessary for the successful conduct of collaborative and co-operative group learning (see for instance Dillenbourg, 1999; Lave and Wenger, 1991), and these findings certainly have not been generally applied to the management of MOOC discussions to date.

One counter-argument is that cMOOCs at least develop a new form of learning based on networking and collaboration that is essentially different from academic learning, and that MOOCs are thus more appropriate to the needs of learners in a digital age. Adult participants in particular, it is claimed by Downes and Siemens, have the ability to self-manage the development of high-level conceptual learning. MOOCs are ‘demand’ driven, meeting the interests of individual students who seek out others with similar interests and the necessary expertise to support them in their learning; for many, this interest may well not include the need for deep, conceptual learning, but more likely the appropriate application of prior knowledge in new or specific contexts. MOOCs do appear to work best for those who already have a high level of education, who bring with them many of the conceptual skills developed in formal education when they join a MOOC, and who can therefore help those who come without such skills.

Over time, as more experience is gained, MOOCs are likely to incorporate and adapt some of the findings from research on smaller group work for much larger numbers. For instance, some MOOCs are using ‘volunteer’ or community tutors (Dillenbourg, 2014). The US State Department has organized MOOC camps through US missions and consulates abroad to mentor MOOC participants. The camps include Fulbright scholars and embassy staff who lead discussions on content and topics for MOOC participants in countries abroad (Haynie, 2014). Some MOOC providers, such as the University of British Columbia, pay a small cohort of academic assistants to monitor and contribute to the MOOC discussion forums (Engle, 2014). Engle reported that the use of academic assistants, as well as limited but effective interventions from the instructors themselves, made the UBC MOOCs more interactive and engaging. However, paying people to monitor and support MOOCs will of course increase the cost to providers. Consequently, MOOCs are likely to develop new automated ways to manage discussion effectively in very large groups. The University of Edinburgh is experimenting with automated ‘teacherbots’ that crawl through online discussion forums and direct predetermined comments to students identified as needing help or encouragement (Bayne, 2014).
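To give a flavour of the teacherbot idea, here is a minimal sketch of a rule-based bot that scans forum posts and queues predetermined replies. The trigger phrases and canned responses are invented for illustration; the Edinburgh experiments were considerably more sophisticated:

```python
# Minimal sketch of the 'teacherbot' idea: scan forum posts for simple
# signals of confusion or discouragement and queue a predetermined reply.
# Triggers and responses are invented for illustration only.

TRIGGERS = {
    "i give up": "Don't give up! Try re-watching this week's second video.",
    "confused": "Many students find this topic hard; see the week 3 FAQ.",
    "deadline": "Reminder: assignment deadlines are listed on the course page.",
}

def bot_replies(posts: list[dict]) -> list[dict]:
    """Return a predetermined comment for each post matching a trigger."""
    replies = []
    for post in posts:
        text = post["text"].lower()
        for trigger, canned_reply in TRIGGERS.items():
            if trigger in text:
                replies.append({"to": post["author"], "reply": canned_reply})
                break  # at most one canned reply per post
    return replies

forum = [
    {"author": "learner42", "text": "I'm so confused by recursion."},
    {"author": "learner7",  "text": "Great lecture this week!"},
]
for r in bot_replies(forum):
    print(f"@{r['to']}: {r['reply']}")
```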

These results and approaches are consistent with prior research on the importance of instructor presence for successful for-credit online learning. In the meantime, though, there is much work still to be done if MOOCs are to provide the support and structure needed to ensure deep, conceptual learning where this does not already exist in students. The development of the skills needed in a digital age is likely to be an even greater challenge when dealing with massive numbers. However, we need much more research into what participants actually learn in MOOCs and under what conditions before any firm conclusions can be drawn.

Assessment

Assessment of the massive numbers of participants in MOOCs has proved to be a major challenge. It is a complex topic that can be dealt with only briefly here. However, Chapter 5.8 provides a general analysis of different types of assessment, and Suen (2014) provides a comprehensive and balanced overview of the way assessment has been used in MOOCs to date. This section draws heavily on Suen’s paper.

Computer marked assignments

Assessment to date in MOOCs has been primarily of two kinds. The first is based on quantitative multiple-choice tests, or response boxes where formulae or ‘correct code’ can be entered and automatically checked. Usually participants are given immediate automated feedback on their answers, ranging from simple right or wrong answers to more complex responses depending on the type of response they have checked, but in all cases, the process is usually fully automated.

For straight testing of facts, principles, formulae, equations and other forms of conceptual learning where there are clear, correct answers, this works well. In fact, multiple-choice computer-marked assignments were used by the UK Open University as long ago as the 1970s, although the means to give immediate online feedback were not available then. However, this method of assessment is limited for testing deep or ‘transformative’ learning, and particularly weak for assessing the intellectual skills needed in a digital age, such as creative or original thinking.
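As an illustration of how such automated feedback can go beyond a bare right or wrong, here is a minimal sketch of a computer-marked question where each option carries its own canned response. The question and feedback text are invented for illustration:

```python
# Minimal sketch of a computer-marked question with immediate feedback:
# each option carries its own canned response, so feedback goes beyond
# a bare right/wrong. Question and feedback text invented for illustration.

QUESTION = {
    "prompt": "Which circuit element stores energy in an electric field?",
    "options": {
        "a": (False, "No: resistors dissipate energy as heat."),
        "b": (True,  "Correct: capacitors store energy in an electric field."),
        "c": (False, "No: inductors store energy in a magnetic field."),
    },
}

def mark(choice: str) -> tuple[bool, str]:
    """Return (is_correct, option-specific feedback) immediately."""
    return QUESTION["options"][choice]

correct, feedback = mark("c")
print(correct, "-", feedback)  # False - No: inductors store energy ...
```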

Peer review

The second type of assessment that has been tried in MOOCs has been peer assessment, where participants assess each other’s work. Peer assessment is not new. It has been successfully used for formative assessment in traditional classrooms and in some online teaching for credit (Falchikov and Goldfinch, 2000; van Zundert et al., 2010). More importantly, peer assessment is seen as a powerful way to improve deep understanding and knowledge through the rating process, and at the same time, it can be useful for developing some of the skills needed in a digital age, such as critical thinking, for those participants assessing the work of others.

However, a key feature of the successful use of peer assessment has been the close involvement of an instructor or teacher, in providing benchmarks, rubrics or criteria for assessment, and in monitoring and adjusting peer assessments to ensure consistency and a match with the benchmarks set by the instructor. Although an instructor can provide the benchmarks and rubrics in MOOCs, close monitoring of the multiple peer assessments is difficult if not impossible with the very large numbers of participants. As a result, MOOC participants often become incensed at being randomly assessed by other participants who may not, and often do not, have the knowledge or ability to give a ‘fair’ or accurate assessment of their work.

Various attempts have been made to get round the limitations of peer assessment in MOOCs, such as calibrated peer reviews based on averaging all the peer ratings, and Bayesian post hoc stabilization. Although these statistical techniques somewhat reduce the error (or spread) of peer review, they still do not remove systematic errors of judgement in raters due to misconceptions. This is particularly a problem where a majority of participants fail to understand key concepts in a MOOC, in which case peer assessment becomes the blind leading the blind.
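To illustrate the averaging-and-calibration idea, here is a minimal sketch in which each rater first scores a benchmark answer with a known instructor grade, and raters whose benchmark judgement is further off receive less weight. This is a generic illustration of the principle, not any platform's actual algorithm; as argued above, no weighting scheme can correct a misconception shared by most raters:

```python
# Minimal sketch of 'calibrated' peer grading: each rater scores a
# benchmark essay with a known instructor grade; raters whose benchmark
# judgement is further from the instructor's get less weight when their
# peer ratings are averaged. A generic illustration of the idea only.

def calibration_weight(rater_benchmark_score: float,
                       instructor_score: float) -> float:
    """Weight shrinks as the rater's benchmark judgement gets worse."""
    return 1.0 / (1.0 + abs(rater_benchmark_score - instructor_score))

def weighted_grade(peer_scores: list[tuple[float, float]],
                   instructor_score: float = 8.0) -> float:
    """peer_scores: (score given to this essay, score the same rater gave
    the benchmark essay). Returns the calibration-weighted average."""
    weights = [calibration_weight(bench, instructor_score)
               for _, bench in peer_scores]
    total = sum(w * score for w, (score, _) in zip(weights, peer_scores))
    return total / sum(weights)

# Three raters: the third misjudged the benchmark badly (3 vs 8), so
# their harsh score of 2 counts for much less in the weighted average.
print(round(weighted_grade([(7.0, 8.0), (8.0, 7.5), (2.0, 3.0)]), 2))  # 6.91
```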

Automated essay scoring

This is another area where there have been attempts to automate scoring. Although such methods are increasingly sophisticated they are currently limited in terms of accurate assessment to measuring primarily technical writing skills, such as grammar, spelling and sentence construction. Once again they do not measure accurately essays where higher level intellectual skills are demonstrated.

Badges and certificates

Particularly in xMOOCs, participants may be awarded a certificate or a ‘badge’ for successful completion of the MOOC, based on a final test (usually computer-marked) which measures the level of learning in a course. The American Council on Education (ACE), which represents the presidents of U.S. accredited, degree-granting institutions, recommended offering credit for five courses on the Coursera MOOC platform. However, according to the person responsible for the review process:

what the ACE accreditation does is merely accredit courses from institutions that are already accredited. The review process doesn’t evaluate learning outcomes, but is a course content focused review thus obviating all the questions about effectiveness of the pedagogy in terms of learning outcomes. (Book, 2013)

Indeed, most of the institutions offering MOOCs will not accept their own certificates for admission or credit within their own campus-based programs. Probably nothing says more about confidence in the quality of the assessment than this refusal of MOOC providers to recognize their own teaching.

The intent behind assessment

To evaluate assessment in MOOCs requires an examination of the intent behind it. As identified earlier in another chapter of my book, there are many different purposes behind assessment. Peer assessment and immediate feedback on computer-marked tests can be extremely valuable for formative assessment, enabling participants to see what they have understood and to develop further their understanding of key concepts. In cMOOCs, as Suen points out, learning is demonstrated through the communication that takes place between MOOC participants, resulting in crowdsourced validation of knowledge – it is what the sum of all the participants come to believe to be true as a result of participating in the MOOC – so formal assessment is unnecessary. However, what is learned in this way is not necessarily academically validated knowledge, which, to be fair, is not the concern of cMOOC proponents such as Stephen Downes.

Academic assessment is a form of currency, related not only to measuring student achievement but also to student mobility (e.g. entrance to graduate school) and, perhaps more importantly, to employment opportunities and promotion. From a learner’s perspective, the validity of the currency – the recognition and transferability of the qualification – is essential. To date, MOOCs have not been able to demonstrate that they can accurately assess learning achievements beyond comprehension and knowledge of ideas, principles and processes (recognizing that there is some value in this alone). Nor have they been able to demonstrate that they can either develop or assess deep understanding or the intellectual skills required in a digital age. Indeed, this may not be possible within the constraints of massiveness, which is their major distinguishing feature from other forms of online learning – although the lack of valid methods of assessment will not stop computer scientists from trying to find ways to analyze participants’ online behaviour to show that such learning is taking place.

Up next

I hope the next post will be my last on this chapter on MOOCs. It will cover the following topics:

  • the cost of MOOCs and economies of scale
  • branding
  • the political, economic and social factors that explain the rise of MOOCs.

Over to you

As regular readers know, this is my way of obtaining peer review for my open textbook (so clearly I am not against peer review in principle!). So if I have missed anything important on this topic, or have misrepresented people’s views, or you just plain disagree with what I’ve written, please let me know. In particular, I am hoping for comments on:

  • comprehensiveness of the sources used that address learning and assessment methods in MOOCs
  • arguments that should have been included, either as a strength or a weakness
  • errors of fact

Yes, I’m a glutton for punishment, but you need to be a masochist to publish openly on this topic.

References

Bayne, S. (2014) Teaching, Research and the More-than-Human in Digital Education Oxford UK: EDEN Research Workshop (url to come)

Book, P. (2013) ACE as Academic Credit Reviewer–Adjustment, Accommodation, and Acceptance WCET Learn, July 25

Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class IRRODL, Vol. 15, No. 4

Dillenbourg, P. (ed.) (1999) Collaborative-learning: Cognitive and Computational Approaches. Oxford: Elsevier

Dillenbourg, P. (2014) MOOCs: Two Years Later, Oxford UK: EDEN Research Workshop (url to come)

Engle, W. (2014) UBC MOOC Pilot: Design and Delivery Vancouver BC: University of British Columbia

Falchikov, N. and Goldfinch, J. (2000) Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks Review of Educational Research, Vol. 70, No. 3

Firmin, R. et al. (2014) Case study: using MOOCs for conventional college coursework Distance Education, Vol. 35, No. 2

Haynie, D. (2014) State Department hosts ‘MOOC Camp’ for online learners US News, January 20

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Ho, A. et al. (2014) HarvardX and MITx: The First Year of Open Online Courses Fall 2012-Summer 2013 (HarvardX and MITx Working Paper No. 1), January 21

Knox, J. (2014) Digital culture clash: ‘massive’ education in the e-Learning and Digital Cultures MOOC Distance Education, Vol. 35, No. 2

Kop, R. (2011) The Challenges to Connectivist Learning on Open Online Networks: Learning Experiences during a Massive Open Online Course International Review of Research in Open and Distance Learning, Vol. 12, No. 3

Lave, J. and Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press

Milligan, C., Littlejohn, A. and Margaryan, A. (2013) Patterns of engagement in connectivist MOOCs, Merlot Journal of Online Learning and Teaching, Vol. 9, No. 2

Suen, H. (2014) Peer assessment for massive open online courses (MOOCs) International Review of Research in Open and Distance Learning, Vol. 15, No. 3

van Zundert, M., Sluijsmans, D. and van Merriënboer, J. (2010) Effective peer assessment processes: research findings and future directions Learning and Instruction, Vol. 20, pp. 270-279

New journal on research into online learning for K-12 educators

Image: © myeducation.com

The Journal of Online Learning Research (JOLR) is a peer-reviewed, international journal devoted to the theoretical, empirical, and pragmatic understanding of technologies and their impact on pedagogy and policy in primary and secondary (K-12) online and blended learning environments.

This new quarterly journal (premiering January 2015) is open access, distributed through the EdITLib Digital Library, and also available in print by subscription for institutions and libraries.

JOLR papers should address online learning, catering particularly to educators who research, practice, design and/or administer K-12 schooling in online settings. However, the journal also serves educators who blend online learning tools and strategies into their face-to-face classrooms.

For Author Guidelines & to Submit: Click HERE

Thanks to Russell Poulin at WCET for directing me to this.

The dissemination of research in online learning: a lesson from the EDEN Research Workshop

The Sheldonian Theatre, Oxford

The EDEN Research Workshop

I’m afraid I have sadly neglected my blog over the last two weeks, as I was heavily engaged as the rapporteur for the 8th EDEN Research Workshop on challenges for research in open and distance learning, which took place in Oxford, England, last week, with the UK Open University as host and sponsor. I was also there to receive a Senior Fellowship from EDEN, awarded at the Sheldonian Theatre, the official ceremonial hall of the University of Oxford.

There were almost 150 participants at the workshop, from more than 30 countries, mainly European, with over 40 selected research papers/presentations. The workshop was highly interactive, with plenty of opportunity for discussion and dialogue, and formal presentations were kept to a minimum. Together with some very stimulating keynotes, the workshop provided a good overview of the current state of online, open and distance learning in Europe. From my perspective it was a very successful workshop.

My full, factual report on the workshop will be published next week as a series of three blog posts by António Moreira Teixeira, the President of EDEN, and I will provide a link when these are available. In the meantime, I would like to reflect more personally on one of the issues that came out of the workshop, as it is more broadly applicable.

Houston, we have a problem: no-one reads our research

Well, not no-one, but no-one outside the close circle of those doing research in the area. Indeed, although the papers for the workshop were in general of high quality, there were still far too many that suggested the authors were unaware of key prior research in their field.

But the real problem is that most practitioners – instructors and teachers – are blissfully unaware of the major research findings about teaching and learning online and at a distance. The same applies to the many computer scientists who are now moving into online learning with new products, new software and new designs. MOOCs are the most obvious example. Andrew Ng, Sebastian Thrun and Daphne Koller – all computer scientists – designed their MOOCs without any consideration of what was already known about online learning, or indeed about teaching and learning in general, beyond their experience as lecturers at Stanford University. The same applies to MIT’s and Harvard’s courses on edX, although MIT and Harvard are at least starting to do their own research – while still ignoring what has been done before, or pretending that nothing has. The results are mistakes (unmonitored student discussions), the re-invention of the wheel hyped as innovation or a major breakthrough (online courses for the masses), and surprised delight at discovering what has already been known for many years (e.g. that students like immediate feedback).

Perhaps of more concern, though, is that as more and more instructors move into blended and hybrid learning, they too are unaware of best practices based on research and evaluation of online learning, and of what is known about online learners and their behaviour. This applies not only to online course design in general, but particularly to the management of online discussions.

It will of course be argued that MOOCs and hybrid learning are somehow different from previous online and distance courses and therefore the research does not apply. These are revolutionary innovations and therefore the rules of the game have changed. What was known before is therefore no longer relevant. This kind of thinking though misunderstands the nature of sustainable innovation, which usually builds on past knowledge – in other words, successful innovation is more cumulative than a leap into the dark. Indeed, it is hard to imagine any field other than education where innovators would blithely ignore previous knowledge. (‘I don’t know anything about civil engineering, but I have a great idea for a bridge.’ Let’s see how far that will get you.)

Who’s to blame?

Well, no-one really. There are several reasons why research in online learning is not better disseminated:

  • research into any kind of learning is not easy; there are just so many different variables or conditions that affect learning in any context. This has several consequences:
    • it is difficult to generalize, because learning contexts vary so much
    • clearly significant results are difficult to find when so many other variables are likely to affect learning outcomes
    • thus results are usually hedged with so many reservations that any clear message gets lost
  • because research into online learning is out of the mainstream of educational research it has been poorly funded by the research councils. Thus most studies are small scale, qualitative and practitioner-driven. This means interventions are small scale and therefore do not identify major changes in learning, and the results are mainly of use to the practitioner who did the research, so don’t get more widely disseminated
  • most research in online learning is published in journals that are read by neither practitioners nor computer scientists (who publish in their own journals that no-one else reads). Furthermore, there are a large number of journals in the field, so integration of research findings is difficult, although Anderson and Zawacki-Richter (2014) have done a good job of bringing much of the research together in one publication – which, unfortunately, is nearly 500 pages long, and hence unlikely to reach many practitioners in a digestible form
  • online learning is still a relatively new field, less than 20 years old, so it is taking time to build a solid foundation of verifiable research in which people can have confidence
  • most instructors at a post-secondary level have no formal training in any form of teaching and learning, so there are difficulties in bringing research and best practices to their attention.

What can be done?

First, let me state clearly that I believe there is a growing and significant body of research-based evidence about best practices in online learning. These best practices are general enough to be applied in a wide variety of contexts. In fact, I will shortly write a post called ‘Ten things we know from research in online learning’ that will set out some of the most important results and their implications for teaching and learning online. However, we need more attempts to pull together the scattered research into more generalizable conclusions, and more widely distributed forms of communication.

At the same time, we also need to get out the message about the complexity of teaching and learning, without which it will be difficult to evaluate or fully appreciate the findings from research in online learning. This means understanding that:

  • learning is a process, not a product;
  • there are different epistemological positions about what constitutes knowledge and how to teach it;
  • above all, identifying desirable learning outcomes is a value-driven decision, and acceptance of a diversity of values about what constitutes knowledge is to be welcomed, not restricted, in education, so long as there is genuine choice for teachers and learners;
  • however, if we want to develop the skills needed in a digital age, the traditional lecture-based model, whether offered face-to-face or online, is inadequate;
  • academic knowledge is different from everyday knowledge: it means transforming understanding of the world through evidence, theory and rational argument/dialogue, and effective teachers/instructors are essential for this;
  • learning is heavily influenced by the context in which it takes place: one critical variable is the quality of course design; another is the role of an expert instructor. These variables are likely to be more important than any choice of technology or delivery mode.

There are therefore multiple audiences for the dissemination of research in online learning:

  • practitioners: teachers and instructors
  • senior managers and administrators in educational institutions
  • computer scientists and entrepreneurs interested in educational services or products
  • government and other funding agencies.

I can suggest a number of ways in which research dissemination can be done, but what is needed is a conversation about

(a) how best to identify the key research findings on online learning around which most experienced practitioners and researchers can agree

(b) the best means to get these messages out to the various stakeholders.

I believe that this is an important role for organizations such as EDEN, EDUCAUSE and ICDE, but it is also a responsibility for every one of us who works in the field and believes passionately in the value of online learning.