December 22, 2014

EDEN research papers: OERs (inc. MOOCs), quality/assessment, social media, analytics and research methods


EDEN has now published a second report on my review of papers submitted to the EDEN research workshop in Oxford a couple of weeks ago. All the full papers for the workshop can be accessed here.

Main lessons (or unanswered questions) I took away:

OERs and MOOCs

  • what does awarding badges or certificates for MOOCs or other OER actually mean? For instance, will institutions give course exemption or credits for the awards, or accept such awards for admission purposes? Or will the focus be on employer recognition? How will participants who are awarded badges know what their ‘currency’ is worth?
  • can MOOCs be designed to go beyond comprehension or networking to develop other critical 21st century skills such as critical thinking, analysis and evaluation? Can they lead to ‘transformational learning’ as identified by Kumar and Arnold (see Quality and Assessment below)?
  • are there better design models for open courses than MOOCs as currently structured? If so, what would they look like?
  • is there a future for learning object repositories when nearly all academic content becomes open and online?

Quality and assessment

  • research may inform but won’t resolve policy issues
  • quality is never ‘objective’ but is value-driven
  • interventions must be sustained and substantial enough to result in significant learning gains
  • there is already plenty of research indicating the necessary conditions for successful use of online discussion forums, but if these conditions are not present, learning will not take place
  • the OU’s traditional model of course design constrains the development of successful collaborative online learning.

Use of social media in open and distance learning

There were surprisingly few papers on this topic. My main takeaway:

  • the use of social media needs to be driven by sound pedagogical theory that takes into account the affordances of social media (as in Sorensen’s study described in an earlier post under course design)

Data analytics and student drop-out

  • institutions/registrars must pay attention to how student data is tagged/labeled for analytic purposes, so there is consistency in definitions, aggregation and interpretation;
  • when developing or applying an analytics software program, consideration needs to be given to the level of analysis and what potential users of the data are looking for; this means working with instructional designers, faculty and administrators from the beginning
  • analytics need to be integrated with action plans to identify and support at-risk students early (a rough illustrative sketch follows this list)
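To make the tagging and early-warning points above a little more concrete, here is a minimal sketch of how consistently labelled engagement data might feed a simple at-risk flag. This is my own illustration, not taken from any of the workshop papers; the field names and thresholds are hypothetical.

```python
# Illustrative only: hypothetical field names and thresholds, not from the EDEN papers.
from dataclasses import dataclass

@dataclass
class StudentWeek:
    student_id: str
    week: int
    logins: int            # consistently defined, e.g. distinct days with LMS activity
    assignments_done: int  # assignments submitted on time this week
    forum_posts: int       # messages posted in course forums

def at_risk(records: list[StudentWeek], min_logins: int = 2,
            min_assignments: int = 1) -> set[str]:
    """Flag students whose engagement falls below agreed thresholds,
    so support staff can follow up early (the 'action plan' step)."""
    flagged = set()
    for r in records:
        if r.logins < min_logins and r.assignments_done < min_assignments:
            flagged.add(r.student_id)
    return flagged

# Example: week-3 records for two hypothetical students
weekly = [
    StudentWeek("s001", 3, logins=0, assignments_done=0, forum_posts=1),
    StudentWeek("s002", 3, logins=4, assignments_done=2, forum_posts=5),
]
print(at_risk(weekly))  # {'s001'} – a candidate for an early support intervention
```

The point of the sketch is simply that the definitions behind fields such as ‘logins’ have to be agreed and applied consistently before any such flag can be interpreted across courses or institutions.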

Research methods

Next

If these bullets interest you at all, then I strongly recommend you go and read the original papers in full – click here. My summary is of necessity personal and abbreviated; the papers provide much greater richness of context.

 

 

First part of report on EDEN Research Workshop now available

Sian Bayne presenting at EDEN Research Workshop, Oxford

My official report on the 8th EDEN Research Workshop is being released as a series of four blog posts by the President of EDEN, Professor Antonio Moreira Teixeira. The first, which is a very brief summary of the keynote presentations, is available here.

The other three, which provide my personal analysis of the research papers presented at the workshop, will be published on consecutive days later this week and I will let you know when each is published.

Perhaps more importantly, the 40+ papers presented at the EDEN Research Workshop are now available in their entirety as a pdf file. If you have any interest in research in online and/or open and distance education, many of the papers are well worth reading in full, rather than relying on my personal interpretation. Happy reading, Ph.D. students!

The dissemination of research in online learning: a lesson from the EDEN Research Workshop

The Sheldonian Theatre, Oxford

The EDEN Research Workshop

I’m afraid I have sadly neglected my blog over the last two weeks, as I was heavily engaged as the rapporteur for the EDEN 8th Research Workshop on challenges for research on open and distance learning, which took place in Oxford, England last week, with the UK Open University as the host and sponsor. I was also there to receive a Senior Fellowship from EDEN, awarded at the Sheldonian Theatre, the official ceremonial hall of the University of Oxford.

There were almost 150 participants at the workshop, from more than 30 countries (mainly European), with over 40 selected research papers/presentations. The workshop was highly interactive, with lots of opportunity for discussion and dialogue, and formal presentations were kept to a minimum. Together with some very stimulating keynotes, the workshop provided a good overview of the current state of online, open and distance learning in Europe. From my perspective it was a very successful workshop.

My full, factual report on the workshop will be published next week as a series of three blog posts by Antonio Moreira Teixeira, the President of EDEN, and I will provide a link when these are available, but in the meantime I would like to reflect more personally on one of the issues that came out of the workshop, as this issue is more broadly applicable.

Houston, we have a problem: no-one reads our research

Well, not no-one, but no-one outside the close group of those doing research in the area. Indeed, although in general the papers for the workshop were of high quality, there were still far too many papers that suggested the authors were unaware of key prior research in the area.

But the real problem is that most practitioners – instructors and teachers – are blissfully unaware of the major research findings about teaching and learning online and at a distance. The same applies to the many computer scientists who are now moving into online learning with new products, new software and new designs. MOOCs are the most obvious example. Andrew Ng, Sebastian Thrun and Daphne Koller – all computer scientists – designed their MOOCs without any consideration of what was already known about online learning – or indeed about teaching or learning in general – other than their experience as lecturers at Stanford University. The same applies to MIT’s and Harvard’s courses on edX, although MIT/Harvard are at least starting to do their own research, but again ignoring or pretending that nothing else has been done before. This results in mistakes being made (unmonitored student discussion), the re-invention of the wheel hyped as innovation or major breakthroughs (online courses for the masses), and surprised delight at discovering what has already been known for many years (e.g. students like immediate feedback).

Perhaps of more concern though is that as more and more instructors move into blended and hybrid learning, they too are unaware of best practices based on research and evaluation of online learning, and knowledge about online learners and their behaviour. This applies not only to online course design in general, but also particularly to the management of online discussions.

It will of course be argued that MOOCs and hybrid learning are somehow different from previous online and distance courses, and therefore the research does not apply: these are revolutionary innovations, the rules of the game have changed, and what was known before is no longer relevant. This kind of thinking though misunderstands the nature of sustainable innovation, which usually builds on past knowledge – in other words, successful innovation is more cumulative than a leap into the dark. Indeed, it is hard to imagine any field other than education where innovators would blithely ignore previous knowledge. (‘I don’t know anything about civil engineering, but I have a great idea for a bridge.’ Let’s see how far that will get you.)

Who’s to blame?

Well, no-one really. There are several reasons why research in online learning is not better disseminated:

  • research into any kind of learning is not easy; there are just so many different variables or conditions that affect learning in any context. This has several consequences:
    • it is difficult to generalize, because learning contexts vary so much
    • clearly significant results are difficult to find when so many other variables are likely to affect learning outcomes
    • thus results are usually hedged with so many reservations that any clear message gets lost
  • because research into online learning is out of the mainstream of educational research, it has been poorly funded by the research councils. Thus most studies are small scale, qualitative and practitioner-driven. This means interventions are small scale and therefore do not identify major changes in learning, and the results are mainly of use to the practitioner who did the research, so they don’t get more widely disseminated
  • most research in online learning is published in journals that are not read by either practitioners or computer scientists (who publish in their own journals that no-one else reads). Furthermore, there are a large number of journals in the field, so integration of research findings is difficult, although Anderson and Zawacki-Richter (2014) have done a good job in bringing a lot of the research together in one publication – which unfortunately is nearly 500 pages long, and hence unlikely to reach many practitioners, at least in a digestible form
  • online learning is still a relatively new field, less than 20 years old, so it is taking time to build a solid foundation of verifiable research in which people can have confidence
  • most instructors at a post-secondary level have no formal training in any form of teaching and learning, so there are difficulties in bringing research and best practices to their attention.

What can be done?

First let me state clearly that I believe there is a growing and significant body of research-based evidence about best practices in online learning. These best practices are general enough to be applied in a wide variety of contexts. In fact I will shortly write a post called ‘Ten things we know from research in online learning’ that will set out some of the most important results and their implications for teaching and learning online. However, we need more attempts to pull together the scattered research into more generalizable conclusions and more widely distributed forms of communication.

At the same time, we need also to get out the message about the complexity of teaching and learning, without which it will be difficult to evaluate or appreciate fully the findings from research in online learning. This means understanding that:

  • learning is a process, not a product
  • there are different epistemological positions about what constitutes knowledge and how to teach it
  • above all, identifying desirable learning outcomes is a value-driven decision, and acceptance of a diversity of values about what constitutes knowledge is to be welcomed, not restricted, in education, so long as there is genuine choice for teachers and learners
  • however, if we want to develop the skills needed in a digital age, the traditional lecture-based model, whether offered face-to-face or online, is inadequate
  • academic knowledge is different from everyday knowledge; academic knowledge means transforming understanding of the world through evidence, theory and rational argument/dialogue, and effective teachers/instructors are essential for this
  • learning is heavily influenced by the context in which it takes place: one critical variable is the quality of course design; another is the role of an expert instructor. These variables are likely to be more important than any choice of technology or delivery mode.

There are therefore multiple audiences for the dissemination of research in online learning:

  • practitioners: teachers and instructors
  • senior managers and administrators in educational institutions
  • computer scientists and entrepreneurs interested in educational services or products
  • government and other funding agencies.

I can suggest a number of ways in which research dissemination can be done, but what is needed is a conversation about

(a) how best to identify the key research findings on online learning around which most experienced practitioners and researchers can agree

(b) the best means to get these messages out to the various stakeholders.

I believe that this is an important role for organizations such as EDEN, EDUCAUSE and ICDE, but it is also a responsibility for every one of us who works in the field and believes passionately in the value of online learning.

Getting ready for the EDEN Research workshop

Oxford: City of Dreaming Spires (Matthew Arnold)

I’m now in England, about to attend the EDEN Research Workshop on research into online learning that starts tomorrow (Sunday) in Oxford, with the event being hosted by the UK Open University, one of the main sources of systematic research in online learning. (EDEN is the European Distance and e-Learning Network.)

This is one of my favourite events, because the aim is to bring together all those in Europe doing research in online learning to discuss their work, the issues and research methods. It’s a great chance for young or new players in the field to make themselves known and connect with other, more experienced, researchers. Altogether there will be about 120 participants, just the right size to get to know everyone over three days. I organised one such EDEN research workshop myself several years ago in Barcelona, when I was working at the Open University of Catalonia, and it was great fun.

The format is very interesting. All the papers are published a week ahead of the workshop, and each author gets just a few minutes in parallel sessions to briefly summarise, with plenty of time for discussion afterwards (what EDEN calls ‘research speed dating’). There are also several research workshops, such as ‘Linking Learning Design with Learning Analytics,’ as well as several keynotes (but not too many!). I’m particularly looking forward to Sian Bayne’s ‘Teaching, Research and the More-than-human in Digital Education.’ There are also poster sessions, 14 in all.

I am the Chair of the jury for the EDEN award for the best research paper, and also the workshop rapporteur. As a result I have been carefully reading all the papers over the last week, 44 in all, and I’m still trying to work out how to be in several places at the same time so I can cover all the sessions.

As a result I’ve had to put my book, ‘Teaching in a Digital Age‘, on hold for the last few days. However, the EDEN papers have already been so useful, bringing me the latest reviews and updates on research in this area, that it is well worth taking a few more days before getting back to the strengths and weaknesses of MOOCs. I will be much better informed as a result, as there are quite a few research papers on European MOOCs. I will also do a blog post after the conference, summing up what I heard during the three days.

So it looks like I won’t have much time for dreaming in the city of dreaming spires.

 

 

What students learned from an MIT physics MOOC


Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Why this paper?

I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:

  • it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
  • as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
  • the paper indicates, in line with many other comparisons between modes of delivery, that the conditions which are associated with the context of teaching are more important than just the mode of delivery
  • I had to read this paper carefully for my book, ‘Teaching in a Digital Age’, but for reasons of space I will not be able to go into much detail on it there, so I might as well share my full analysis with you.

What was the course?

8.MReV – Mechanics ReView, an introduction to Newtonian Mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed Introductory Newtonian Mechanics in the fall. In other words, it is based on a second-chance course for MIT campus students.

The online version was offered in the summer semester as a free, open access course through edX and was aimed particularly at high school physics teachers, but was also open to anyone else interested. The course consisted of the following components:

  • an online eText, especially designed for the course
  • reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
  • an online discussion area/forum
  • mainly multiple-choice online tests and ‘quizzes’, interspersed on a weekly basis throughout the course.

Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were “passing” or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course. 1,030 students earned certificates.

Thus the study measured only the learning of the most successful online students (in terms of completing the online course).

Methodology (summary)

The study measured primarily ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that generally can be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.

Two methods to test learning were used: a comparison between each student’s pre-test and post-test score to measure the learning gain during the course, and an analysis based on Item Response Theory (IRT), which does not show absolute learning (as measured by pre- and post-testing) but rather improvement relative to the “class average”.
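As a point of reference for readers unfamiliar with how such gains are usually expressed: pre-/post-test gains in physics education research are commonly reported as a normalized gain (the fraction of the possible improvement actually achieved), and the average gain of about 0.3 mentioned later in this post is of that kind, if my reading of the paper is correct. A minimal sketch of the calculation, with invented scores:

```python
# Hedged illustration: assumes the reported gain is the commonly used normalized gain,
# g = (post - pre) / (max - pre), averaged over students. All scores below are invented.
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the possible improvement actually achieved between pre- and post-test."""
    return (post - pre) / (max_score - pre)

pre_post = [(40, 62), (55, 70), (30, 55)]  # (pre %, post %) for three hypothetical students
gains = [normalized_gain(pre, post) for pre, post in pre_post]
print(round(sum(gains) / len(gains), 2))  # ≈ 0.35 – of the same order as the 0.3 reported
```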

Because of the large number of MOOC participants included in the study, the researchers were able to analyse performance between various ‘cohorts’ within the MOOC participants, such as:

  • physics teachers
  • not physics teachers
  • physics background
  • no physics background
  • college math
  • no math
  • post-graduate qualification
  • bachelor degree
  • no more than high school

Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:

  • four hours of instruction in which staff interacted with small groups of students (a flipped classroom) each week,
  • staff office hours,
  • help from fellow students,
  • available physics tutors,
  • MIT library

Main results (summary)

  • gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes and lower than (but closer to) those found in ‘interactive’ classes, although this result is hedged around with some considerable qualifications (‘more studies on MOOCs need to be done to confirm this’)
  • in spite of the extra instruction that the on-campus students had, there was no evidence of positive, weekly relative improvement of the on-campus students compared with the online students (indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse)
  • there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts

Conclusions

This is a valuable research report, carefully conducted and cautiously interpreted by the authors. However, for these reasons, it is really important not to jump to conclusions. In particular, the authors’ own caution at the end of the paper should be noted:

It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.

To this I would add that the design of this MOOC was somewhat different to many other xMOOCs in that it was based on online texts specially designed for the MOOC, and not on video lectures.

I’m still not sure from reading the paper how much students actually learned from the MOOC. About 1,000 who finished the course got a certificate, but it is difficult to interpret the gain in knowledge. The statistical measurement of an average gain of 0.3 doesn’t mean a lot. There is some mention of the difference being between a B and a B+, but I have probably misinterpreted that. If that is the case, though, I certainly would expect students taking a 13-week course to do much better. It would have been more helpful to have graded students on the pre-test and then compared those grades on the post-test. We could then see, for instance, whether gains were in the order of at least one grade.
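To illustrate the kind of grade-band comparison I have in mind, here is a minimal sketch; the grade boundaries and scores are entirely invented for the example, not taken from the paper.

```python
# Illustration of the grade-band comparison suggested above; cut-offs and scores are invented.
def grade(score: float) -> int:
    """Map a percentage score to a grade band (0 = F ... 4 = A); boundaries are hypothetical."""
    bands = [50, 60, 70, 80]  # F/D, D/C, C/B, B/A cut-offs
    return sum(score >= b for b in bands)

def improved_one_grade(pre: float, post: float) -> bool:
    return grade(post) >= grade(pre) + 1

pairs = [(58, 72), (65, 69), (45, 81)]  # hypothetical (pre %, post %) pairs
share = sum(improved_one_grade(p, q) for p, q in pairs) / len(pairs)
print(f"{share:.0%} of students improved by at least one grade")  # 67% in this toy example
```

Reporting results in this form would make the size of the learning gain much easier for practitioners to interpret than a single gain statistic.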

Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.