September 2, 2014

Special edition on research on MOOCs in the journal ‘Distance Education’

The University of Toronto is one of a number of institutions conducting research on MOOCs; their results are still to come

The August 2014 edition of the Australian-based journal, Distance Education (Vol. 35, No. 2), is devoted to new research on MOOCs. There is a guest editor, Kemi Jona, from Northwestern University, Illinois, as well as the regular editor, Som Naidu.

The six articles in this edition are fascinating, both in terms of their content, but even more so in their diversity. There are also three commentaries, by Jon Baggaley, Gerhard Fischer and myself.

My commentary provides my personal analysis of the six articles.

MOOCs are a changing concept

In most of the literature and discussion about MOOCs, there is a tendency to talk about 'instructionist' MOOCs (e.g. the xMOOCs of Coursera, edX and Udacity) or 'connectivist' MOOCs (e.g. the cMOOCs of Downes, Siemens and Cormier). Although this is still a useful distinction, representing very different pedagogies and approaches, the articles in this edition show that MOOCs come in all sizes and varieties.

Indeed, it is clear that the design of MOOCs is undergoing rapid development. This is partly a result of more players coming into the market, and partly because of the kinds of research now being conducted on MOOCs themselves. There is also, sadly much more slowly, a growing recognition by some of the newer players that much is already known about open and online education that needs to be applied to the design of MOOCs, while accepting that certain aspects, in particular the scale, make MOOCs unique.

The diversity of MOOC designs

These articles clearly illustrate such developments. The MOOCs covered by the articles range from:

  • video-recorded MOOC lectures watched in isolation by learners (Adams et al.)
  • MOOC video lectures watched in co-located groups in a flipped classroom mode, without instructor or tutorial support (Nan Li et al.)
  • MOOCs integrated into regular campus-based programs with some learner support (Firmin et al.)
  • MOOCs using participatory and/or connectivist pedagogy (Andersen, Knox)

The size of the MOOC populations studied here also differed enormously, from 54 students per course to 42,000.

It is also clear that MOOC material is increasingly being extracted from the 'massive', open context and used in very specific 'closed' contexts, such as flipped classrooms. At this point one questions the difference between such use of MOOCs and regular for-credit online programming, which in many cases also uses recorded video lectures, online discussion and, increasingly, other sources of open educational materials. In such campus-based contexts I would expect the same quality standards to apply to the MOOC+ course designs as are already applied to credit-based online learning. Some of the research findings in these articles indirectly support the need for this.

The diversity of research questions on MOOCs

Almost as interesting is the range of questions covered by these articles, which include:

  • capturing the lived experience of being in a MOOC (Adams et al.; Knox)
  • the extent to which learners can/should create their own content, and the challenges around that (Knox; Andersen)
  • how watching video lectures in a group affects learner satisfaction (Nan Li et al.)
  • what ‘massive’ means in terms of a unique pedagogy (Knox)
  • the ethical implications of MOOCs (Marshall)
  • reasons for academic success and failure in ‘flipped’ MOOCs (Firmin et al.; Knox)

What is clear from the articles is that MOOCs raise some fundamental questions about the nature of learning in digital environments. In particular, the questions of how far learners need guidance and support in MOOCs, and how this can best be provided, were common themes across several of the papers, with no definitive answers.

The diversity of methodology in MOOC research

Not surprisingly, given the range of research questions, there is also a very wide range of methodologies used in the articles in this edition, including:

  • phenomenology (Adams et al.)
  • heuristics (Marshall)
  • virtual ethnography (Knox; Andersen)
  • quasi-experimental comparisons (Nan Li et al.)
  • data and learning analytics (Firmin et al.)

The massiveness of MOOCs, their accessibility, and the wide range of questions they raise make the topic a very fertile area for research, and this is likely to generate new methods of research and analysis in the educational field.

Lessons learned

Readers are likely to draw a variety of conclusions from these studies. Here are mine:

  • the social aspect of learning is extremely important, and MOOCs offer great potential for exploiting this kind of learning, but organizing and managing social learning on a massive scale, without losing the potential advantages of collaboration at scale, is a major challenge that still remains to be adequately addressed. The Knox article in particular describes in graphic detail the sense of being overwhelmed by information in open connectivist MOOCs. We still lack methods or designs that properly support participants in such environments. This is a critical area for further research and development.
  • a lecture on video is still a lecture, whether watched in isolation or in groups. The more we attempt to support this transmissive model through organized group work, 'facilitators' or 'advisors', the closer we move towards conventional (and traditional) education and the further away from the core concept of a MOOC.
  • MOOCs have a unique place in the educational ecology. MOOCs are primarily instruments for non-formal learning. Trying to adapt MOOCs to the campus not only undermines their primary purpose, but risks moving institutions in the wrong direction. We would be better re-designing our large lecture classes from scratch, using criteria, methods and standards appropriate to the goals of formal higher education. My view is that in the long run, we will learn more from MOOCs about handling social learning at scale than about transmitting information at scale. We already know about that. It’s called broadcasting.
  • lastly, there was surprisingly little in the articles about what actual learning took place. In some cases, it was a deliberate research strategy not to enquire into this, relying more on student or instructor feelings and perceptions. While other potential benefits, such as institutional branding, stimulating interest, providing a network of connections, and so on, are important, the question remains: what are participants actually learning from MOOCs, and does this justify the hype and investment (both institutionally and in participants’ time) that surrounds them?

Cultural and ethical issues

The Marshall paper provides an excellent overview of ethical issues, but there is almost no representation of perspectives on MOOCs from outside Western contexts. I would have liked to have seen more on cultural and ethical issues arising from the globalization of MOOCs, based on actual cases or examples. Given the global implications of MOOCs, other perspectives are needed. Perhaps this is a topic for another issue.

Happy reading

I am sure you will be as fascinated and stimulated by these articles as I am, and equally sure you will come away with different conclusions from mine. A flood of other articles on this topic will no doubt follow. Nevertheless, these articles are important in setting the research agenda, and should be essential reading for MOOC designers as well as future researchers on this topic.

How to get the articles

To obtain access to these articles, go to: http://www.tandfonline.com/toc/cdie20/current#.U-1WqrxdWh1

Submitting a doctoral thesis on online learning? Some things to keep in mind


Old people often complain that the world is going to hell in a hand-basket, that standards are falling, and that it used to be better in our day. Having examined over 40 doctoral students during the last 45 years, often as the external examiner, I could easily fall into that trap. On the contrary, though, I am impressed with the quality of the theses I have been examining recently, partly because of the quality of the students, partly because of the quality of the supervision, and partly because online learning and educational technology in general have matured as a field of study.

However, one advantage of being old is that you begin to see patterns or themes that either come round every 10 years or so or never go away, and that certainly applies to Ph.D. theses in this field. So I thought I might offer some advice to students as to what examiners tend to look for in theses in this field, although technically it should be the supervisors doing this, not me.

Who’s being examined: student or supervisor?

When I have failed a student (which is rare but has happened) it’s ALWAYS been because the standard of supervision was so poor that the student never stood a chance. Somewhat more frequently (although still fairly uncommon), the examiners’ recommendation was pass with substantial revision, or ‘adequate’ in some European countries. Both these classifications carry a significant message to the academic department that the supervisor(s) weren’t doing their job properly. (Although to be fair, in at least one case the thesis was submitted almost in desperation by the department, because the student had exhausted all his many different supervisors, and was running out of the very generous time allowed to submit.)

So the good news, students, is that, despite what might appear to be the opposite, by the time it comes to submitting your thesis for exam, the university is (or should be) 100 per cent behind you in wanting to get you through. (In recent years, this pressure from the university on examiners to pass students sometimes appears to be almost desperate, because a successful Ph.D. may carry a very significant weight towards the performance indicators for the university.)

Criteria for success

So at the risk of over-simplification, here is my advice for students, in particular, on what I, as an examiner, tend to look for in a thesis, starting with the most important. My comments apply mainly, but not exclusively, to traditional, research-based theses.

Level 1

I have three main criteria which MUST be met for a pass:

  • is it original?
  • does it demonstrate that the student is capable of conducting independent research?
  • does the evidence support the conclusions drawn in the thesis?

Originality

The minimum a doctoral thesis must do is tell me something that was not already known in the field. Now this can still be what students often see as a negative outcome: their main hypothesis is found to be false. That's fine, if it is a commonly held hypothesis in the field. (Example: 'digital natives are different from digital immigrants': no evidence was found for this in the study.) If it disproves or questions current wisdom, that's good, even if the result was not what you were expecting. In fact, that's really good, because the 'null hypothesis' approach (trying to prove your hypothesis false) is a more rigorous test than trying to find evidence to support something you already thought to be true before you started the research (see Popper, 1959, on this).

Competence in research

For students, there are three good reasons for doing a Ph.D.:

  • because you want an academic position in a university or college
  • because you want to work as a full-time researcher outside the university
  • because you have a burning question to answer (e.g. what's best done face-to-face, and what online, when teaching quantum physics?)

However, the main purpose of a Ph.D. (as distinct from other post-graduate qualifications) from a professional or institutional perspective is to enable students to conduct independent research. Thus the thesis must demonstrate this competency. In a sense, it is a trust issue: if this person does research, we should be able to trust him or her to do it within the norms and values of the subject discipline. (This is why it is stupid to even think of cheating by falsifying data or plagiarism: if found out, you will never get an academic job in a university, never mind the Ph.D.)

Evidence-based conclusions

My emphasis here is on ensuring that appropriate conclusions are drawn from whatever evidence is used (which should include the literature review as well as the actual data collected). If for instance the results are contrary to what might be expected from the literature review, some explanation or discussion is needed about why there is this difference. It may have to be speculative, but such contradictions need to be addressed and not ignored.

Level 2

Normally (although there will be exceptions) a good thesis will also meet the following criteria:

  • there is a clear narrative and structure to the thesis
  • there is a clear data audit trail, and all the raw/original data is accessible to examiners and the general public, subject to normal privacy/ethical requirements
  • the results are meaningfully significant

Narrative and structure

Even in an applied thesis, this is a necessary component of a good thesis. The reader must be able to follow the plot – and the plot must be clear. The usual structure for a thesis in our field is:

  • identification of an issue or problem
  • review of relevant previous research/studies
  • identification of a research question or set of questions
  • methodology
  • results
  • conclusions and discussion.

However, other structures are possible. In an applied degree, the structure will or should be different, but even so the reader should be able to follow clearly, in the main body of the thesis, the rationale for the study, how it was conducted, the results, and the conclusions.

Data audit

Most – but not all – theses in the educational technology field have an empirical component. Data is collected, analysed and interpreted. All these steps have to be competently conducted, whether the data is mainly quantitative, qualitative or both. This usually means ensuring that there is a clear trail linking raw data through analysis into conclusions that can be followed and checked easily by a diligent reader (in this case, the examiners). This is especially important with qualitative data, because it is easy to cherry-pick comments that support your prior prejudices or assumptions while ignoring those that don’t fit. As an examiner, I do want access to raw data, even if it’s in an appendix or an online database.

However, I am also willing to accept a thesis that is pure argument. Nevertheless, this is a very risky option because this means offering something that is quite original and which can be adequately defended against the whole collective wisdom of the field. In the field of educational technology, it is hard to see how this can be done without resorting to some form of empirical evidence – but perhaps not impossible.

Significance of the research question and results

This is often the best test of how much the thesis is mainly the work of the supervisor and how much the student. A good supervisor can more or less frogmarch a student through the various procedural steps in doing a doctoral thesis, but what the supervisor cannot – or should not – provide is the original spark of a good research question, and the ability to see the significance of the study for the field as a whole. This is why orals are so important – this is the place to say why your study matters, but it also helps if you address this at the beginning and end of your written thesis as well.

Too often I have seen students who have asked questions that inevitably produce results that are trivial, already known, or are completely off-base. Even more tragic is when the student has an unexpected but important, well-founded set of data, but is unable to see the significance of the data for the field in general.

The problem is that supervisors quite rightly drill it into students that they must choose a research question that is manageable by an individual working mainly alone, and that their conclusions must be based on the data collected. But this does not mean that the research question needs to be trivial, or that once the conclusions have been properly drawn there should be no further discussion of their significance for the field as a whole. This is the real test of a student's academic ability.

Tips for success

There are thousands of possible tips one could give to help Ph.D. students, but I will focus on just a few issues that seem to come up a lot in theses in this area:

1. Do a master's degree in online learning first

This will give you a good overview of the issues involved in online learning and should provide some essential preparatory skills, such as an introduction to research methods and extensive writing.

Do this prior to starting a Ph.D. See: Recommended graduate programs in e-learning for a list of appropriate programs.

Do it online if possible, so you know what it's like to be an online student.

At a minimum, take a course on research methods in the social sciences/online learning.

2. Get a good supervisor

The trick is to find a supervisor willing to accept your proposed area of research. Try to find someone in the local Faculty of Education with an interest in online learning and try to negotiate a research topic of mutual interest. This is really the hardest and most important part. Getting the right supervisor is absolutely essential. However, there are many more potential students than education faculty interested in research in online learning.

If you find a willing and sympathetic local faculty member with an interest in online learning but worried they don't have the right expertise to supervise your particular interest, suggest a committee with an external supervisor (anywhere in the world) who really has the expertise and who may be willing to share the supervision with your local supervisor. Again, though, your chances of getting either an internal or external supervisor are much higher if that person already knows you or is aware of your work. Doing an online masters might help here, since some of the instructors on the course may be interested in supervising you for a Ph.D., especially if they know your work through the masters. But again, good professors with expertise in online learning are already likely to have a full supervision load, so it is not easy. (And don't ask me – I'm retired!)

This means that even before applying for a Ph.D., you need to do some homework. Identify a topic with some degree of flexibility, have in mind an internal and an external supervisor, and show that you have done the necessary courses such as research methods, educational theory, etc., that will prepare you for a Ph.D. (or are willing to do them first).

3. Develop a good research question

See above. Ideally, it should meet the following requirements:

a. The research is likely to add something new to our knowledge in the field

b. The results of the research (positive, negative or descriptive) are likely to be significant/important for instructors, students or an institution

c. You can do the research to answer the question on your own, within a year or so of starting to collect data

d. It can be done within the ethical requirements of research

It is even better if you can collect data as part of your everyday work, for example by researching your own online teaching.

4. Get a good understanding of sampling and the level of statistics that your study requires

Even if you are doing a qualitative study, you really need to understand sampling – choosing subjects to participate in the study. The two issues you need to watch out for are:

1. Bias in the initial choice of subjects, especially choosing subjects that are likely to support any hypotheses or assumptions you may already have. (Hence the danger of researching your own teaching – but you can turn this to advantage by taking care to identify your prior assumptions in advance and being careful not to be unduly influenced by them in the design of the research).

2. Focusing too much on the number of respondents and not on the response rate, especially in quantitative studies. Studies with response rates of 40 per cent or less are usually worthless, because the responders are unlikely to be representative of the whole group. (This is why student evaluation data is really dangerous: the response is usually biased towards successful students, who are more likely to complete the questionnaires than unsuccessful students.) When choosing a sample, try to find independent data that can help you identify the extent of the likely bias due to non-responders. For instance, if looking at digital natives, check the age distribution of your responders against the age distribution of the whole group from which you drew the sample, if that is available. If you had a cohort of 100 students and 20 responded, how does the average age of the responders compare with the average age of the whole 100? If the average age of responders is much lower than that of non-responders, what significance does this have for your study? A minimal sketch of this kind of check follows below.
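To make this concrete, here is a minimal sketch in Python of the responder-versus-cohort comparison suggested above. All the numbers and variable names are invented for illustration, not taken from any real study:

```python
# Minimal sketch (hypothetical data): compare the age profile of survey
# responders against the full cohort to gauge likely non-response bias.

cohort_ages = [19, 21, 23, 25, 27, 29, 31, 33, 35, 45]  # full cohort, from registration records
responder_ages = [19, 21, 22, 23, 25]                   # those who returned the questionnaire

def mean(ages):
    return sum(ages) / len(ages)

response_rate = len(responder_ages) / len(cohort_ages)
print(f"Response rate: {response_rate:.0%}")
print(f"Mean age of responders:  {mean(responder_ages):.1f}")
print(f"Mean age of full cohort: {mean(cohort_ages):.1f}")

# If responders are markedly younger than the cohort as a whole, findings
# about, say, 'digital natives' may reflect who chose to respond rather
# than the group you meant to study.
```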

Understanding statistics is a whole other matter. If you intend to do anything quantitatively more complicated than adding up your data, make sure you understand the necessary statistics, especially what 'statistically significant' means. For instance, if you have a very large sample, even small differences are likely to be statistically significant, but they may not be meaningfully significant. Conversely, small samples make it harder to obtain statistically significant results, so drawing conclusions from small samples can be very dangerous, even when the differences look large. The sketch below illustrates both points.
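As a purely illustrative sketch (the data below are simulated solely to show the statistical point, not drawn from any study), the following Python fragment demonstrates both halves of the warning: a trivially small difference reaching statistical significance with a large sample, and a large-looking difference failing to reach it with a small one:

```python
# Simulated example: statistical significance depends heavily on sample size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Large samples: a true difference of only 0.5 points on a 100-point exam.
large_a = rng.normal(70.0, 10.0, 50_000)
large_b = rng.normal(70.5, 10.0, 50_000)
_, p_large = stats.ttest_ind(large_a, large_b)
print(f"Large n: difference = {large_b.mean() - large_a.mean():.2f} points, p = {p_large:.2g}")
# p will almost certainly be < 0.05, yet half a point is educationally meaningless.

# Small samples: an apparent 5-point difference between two groups of 12.
small_a = rng.normal(70.0, 10.0, 12)
small_b = rng.normal(75.0, 10.0, 12)
_, p_small = stats.ttest_ind(small_a, small_b)
print(f"Small n: difference = {small_b.mean() - small_a.mean():.2f} points, p = {p_small:.2g}")
# The difference looks large, but with n = 12 per group it may easily
# fail to reach statistical significance.
```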

5. Avoid tautological research design or quantitative designs with no independent variables

Basically, this means asking a question, stating a hypothesis, or designing research in such a way that the question or hypothesis itself provides the answer. To elaborate: research question: 'What is quality in online learning?' Answer: 'It is defined by what educators say makes for quality in online courses, and my research shows that these are clear learning objectives, accessibility, learner engagement, etc.' There is no independent variable here to validate the statements made by educators. (An independent variable might be exam results, participation rates of disabled people, etc.) Education is full of such self-justifications that have no clear, independent variables against which such statements have been tested. Merely re-iterating what people currently think is not original research.

For this reason, I am very skeptical of Delphi studies, which merely re-iterate already established views and opinions. I always ask: 'Would a thorough literature review have provided the same results?' Usually it would, and indeed a literature review generally gives a far more comprehensive and reliable overview of the topic.

6. Write well

Easily said, but not easily done. However, writing that is clear, well-structured, evidence-based, grammatically correct and well argued makes a huge difference when it comes to the examination of the thesis. I have seen really weak research studies get through on the sheer quality of the writing. I have seen other really good research studies sent back for major revision because they were so badly written.

Writing is a skill, so it gets better with practice. This usually means writing the same chapter several times until you get it right. Write the first draft, put it away and come back to it several days later. Re-read it and then clarify or improve what you’ve written. Do it again, and again, until you are satisfied that someone who knows nothing about the subject beforehand can understand it. (Don’t assume that all the examiners will be expert in your particular topic.) If you can, get someone such as a spouse who knows nothing about the subject to read through a chapter and ask them just to put question marks alongside sentences or paragraphs they don’t understand. Then re-write them until they do.

The more practice and feedback you can get on your writing, the better, and this is best done long before you get to a final draft.

Is the Ph.D. process broken?

A general comment about the whole Ph.D. process: while not completely broken, it is probably the most costly and inefficient academic process in the whole university, riddled with bureaucracy, lack of clarity for students, and certainly in the non-quantitative areas, open to all kinds of challenges regarding the process and standards.

This is further complicated by a move in recent years to applied rather than research theses. In an applied thesis, the aim is to come up with something useful that can be applied in the field, such as the design of an e-portfolio template that can be used for an end-of-course assessment, rather than the traditional research thesis. I believe this to be a step in the right direction. Unfortunately, though, education departments often struggle to provide clear guidance to both students and examiners about the criteria for assessing such new degrees, which makes it even more of a shot in the dark in deciding whether a thesis is ready for submission.

Other suggestions or criticisms

These are (as usual) very personal comments. I’m sure students would like to hear from other examiners in this field, particularly if there is disagreement with my criteria and advice. And I’d like to hear from doctoral students themselves. Suggestions for further readings on the Ph.D. process would also be welcome.

I would also like to hear from those who question the whole Ph.D. process. I must admit to mixed feelings. We do need to develop good quality researchers in the field, and I think a research thesis is one way of doing this. I do feel though that the whole process could be made more efficient than it is at the moment.

In the meantime, good luck to all of you who are struggling with your doctoral studies in this field – we need you to succeed!

Reference

Popper, K. (1959) The Logic of Scientific Discovery. London: Routledge.

Conference: 8th EDEN Research Workshop on research in online learning and distance education


What: Challenges for research into Open & Distance Learning: Doing Things Better: Doing Better Things

The focus of the event is on quality research, discussed in an unusual workshop setting with informal and intimate surroundings. The session formats will promote opportunities for collaboration, including parallel 'research-speed-dating' papers, team symposia sessions, workshops and demonstrations.

When: 26-28 October, 2014

Where: Oxford Spires Four Pillars Hotel, Oxford, England

Who: The Open University (UK) is the host institution in collaboration with the European Distance and E-Learning Network. Main speakers include:

  • Sian Bayne, Digital Education, University of Edinburgh, UK
  • Cristobal Cobo, Research Fellow, Oxford Internet Institute, University of Oxford, UK
  • Pierre Dillenbourg, CHILI Lab, EPFL Center for Digital Education, Swiss Federal Institute of Technology, Lausanne, Switzerland
  • Allison Littlejohn, Director, Caledonian Academy, Glasgow Caledonian University, Chair in Learning Technology, UK
  • Philipp Schmidt, Executive Director, Peer 2 Peer University / MIT Media Lab fellow, USA
  • Willem van Valkenburg, Coordinator, Delft Open Education Team, Delft University of Technology, The Netherlands

How: Submissions of papers, workshop themes, posters and demonstrations are due by September 1: see http://www.eden-online.org/2014_oxford/call.html

A balanced research report on the hopes and realities of MOOCs


Hollands, F. and Tirthali, D. (2014) MOOCs: Expectations and Reality. New York: Columbia University Teachers College, Center for Benefit-Cost Studies of Education, 211 pp.

We are now beginning to see a number of new research publications on MOOCs. The journal Distance Education will be publishing a series of research articles on MOOCs in June, but now Hollands and Tirthali have produced a comprehensive research analysis of MOOCs.

What the study is about

We have been watching for evidence that MOOCs are cost-effective in producing desirable educational outcomes compared to face-to-face experiences or other online interventions. While the MOOC phenomenon is not mature enough to afford conclusions on the question of long-term cost-effectiveness, this study serves as an exploration of the goals of institutions creating or adopting MOOCs and how these institutions define effectiveness of their MOOC initiatives. We assess the current evidence regarding whether and how these goals are being achieved and at what cost, and we review expectations regarding the role of MOOCs in education over the next five years. 

The authors used interviews with over 80 individuals across 62 institutions 'active in the MOOCspace', cost analysis, and analysis of other research on MOOCs to support their findings. They identified six goals from the 29 institutions in the study that offered MOOCs, with the following analysis of success or otherwise in accomplishing those goals:

1. Extending reach (65% of the 29 institutions)

Data from MOOC platforms indicate that MOOCs are providing educational opportunities to millions of individuals across the world. However, most MOOC participants are already well-educated and employed, and only a small fraction of them fully engages with the courses. Overall, the evidence suggests that MOOCs are currently falling far short of “democratizing” education and may, for now, be doing more to increase gaps in access to education than to diminish them. 

2. Building and maintaining brand (41%)

While many institutions have received significant media attention as a result of their MOOC activities, isolating and measuring impact of any new initiative on brand is a difficult exercise. Most institutions are only just beginning to think about how to capture and quantify branding-related benefits.

3. Reducing costs or increasing revenues (38%)

… revenue streams for MOOCs are slowly materializing but we do not expect the costs of MOOC production to fall significantly given the highly labor-intensive nature of the process. While these costs may be amortized across multiple uses and multiple years, they will still be additive costs to the institutions creating MOOCs. Free, non-credit bearing MOOCs are likely to remain available only from the wealthiest institutions that can subsidize the costs from other sources of funds. For most institutions, ongoing participation in the current MOOC experimentation will be unaffordable unless they can offer credentials of economic value to attract fee-paying participants, or can use MOOCs to replace traditional offerings more efficiently, most likely by reducing expensive personnel.

4. Improving educational outcomes (38%)

for the most part, actual impact on educational outcomes has not been documented in any rigorous fashion. Consequently, in most cases, it is unclear whether the goal of improving educational outcomes has been achieved. However, there were two exceptions, providing evidence of improvement in student performance as a result of adopting MOOC strategies in on-campus courses.

5. Innovation in teaching and learning (38%)

It is abundantly clear that MOOCs have prompted many institutions and faculty members to engage in new educational activities. The strategies employed online such as frequent assessments and short lectures interspersed with questions are being taken back on-campus. It is less clear what has been gained by these new initiatives because the value of innovation is hard to measure unless it can be tied to a further, more tangible objective. We … conclude that most institutions are not yet making any rigorous attempt to assess whether MOOCs are more or less effective than other strategies to achieve these goals.

6. Research on teaching and learning (28%)

A great deal of effort is being expended on trying to improve participant engagement and completion of MOOCs and less effort on determining whether participants actually gain skills or knowledge from the courses…. While the potential for MOOCs to contribute significantly to the development of personalized and adaptive learning is high, the reality is far from being achieved.

Cost analysis

The report investigates the costs of developing MOOCs compared to those for credit-based online courses, but found wide variations and lack of reliable data.

Conclusions from the report

The authors came to the following conclusions:

1. there is no doubt that online and hybrid learning is here to stay and that MOOCs have catalyzed a shift in stance by some of the most strongly branded institutions in the United States and abroad.

2. MOOCs could potentially affect higher education in more revolutionary ways by:

  • offering participants credentials of economic value

  • catalyzing the development of true adaptive learning experiences

However, either of these developments faces substantial barriers and will require major changes in the status quo.

My comments on the report

First, this is an excellent, comprehensive and thoughtful analysis of the expectations and realities of MOOCs. It is balanced, but where necessary critical of the unjustified claims often made about MOOCs. This report should be required reading for anyone contemplating offering MOOCs.

Different people will take away different conclusions from this report, as one would expect from a balanced study. From my perspective, though, it has done little to change my views about MOOCs. MOOC providers to date have made little effort to identify the actual learning that takes place. It seems to be enough for many MOOC proponents to just offer a course, on the assumption that if people participate they will learn.

Nevertheless, MOOCs are evolving. Some of the best practices that have been used in credit-based online courses are now being gradually adopted as more MOOC players enter the market with experience of credit-based online learning. MOOCs will eventually occupy a small but important niche as an alternative form of non-formal, continuing and open education. They have proved valuable in making online learning more acceptable within traditional institutions that have resisted online learning previously. But no-one should fear them as a threat to credit-based education, either campus-based or online.

Research from the Michigan Virtual University on a connectivist MOOC


Ferdig, R. et al. (2014) Findings and reflections from the 'K-12 Teaching in the 21st Century' MOOC. Lansing, MI: Michigan Virtual Learning Research Institute.

We are now beginning to get some in-depth research or evaluations of MOOCs. This one is from a team at Kent State University that developed a five-week 'connectivist' MOOC aimed principally at three distinct audiences: high school students interested in becoming teachers, preservice teachers, and inservice teachers in the K-12 system.

I provide here a very brief summary of the report (as always, you should read the report for yourself if my summary gets you interested). Italics are direct quotes from the report.

Goal of the MOOC

How can we get teachers to think more deeply about reinventing education?

MOOC design

facilitators take on the role of connecting people around an idea for the purpose of bettering our understanding of the idea. A connectivist-based MOOC draws on the extensive number of participants as well as the existing open repository of content to develop an experience. Participants are both teachers and learners in a process – not a product.

The course was designed around four principles often associated with teaching in the 21st century: connected learning, personalization, collaboration, and reflection.

Core technology

Coursesites by Blackboard provided the basic platform for content and discussion, supplemented by the use of participants' social media networks and technologies. In addition, participants were asked to create an 'artifact' to represent their learning.

Use of partners/co-facilitators

Kent State provided core facilitators for the MOOC, but they also invited other co-facilitators from schools, colleges and universities both in Michigan and from several other states.

Qualifications

Badges and continuing education units were given for successful participation.

Main results

Participants (data at time of enrollment, i.e. all participants)

Start of course: 673; end of course: 848; mainly from Michigan and surrounding states, although 12 were international

School teachers: 42%; K-12 students: 23%; post-secondary students: 16%; other (inc. school administrators, university faculty): 19%; 80% female.

Participants’ response to the MOOC (168 participants who completed a post-course survey)

Most participants who responded enjoyed the MOOC, with in-service teachers enjoying it the most. The main criticism (especially from the K-12 students) was the amount of work involved in following the MOOC.

Very active participation in the online discussion forums (within the Coursesites LMS)

There were over 6,000 actual posts (comments) and over 65,000 'hits'/looks over a five-week period, from just over 300 of the participants – but almost two-thirds did not participate at all.

Types of participation

Lurkers (i.e. did not participate in LMS discussion forums – they may have participated through social media): 63%. There were accounts created in Facebook, Twitter, Delicious and blogs related to the course which indicated active social media connections both for registered participants and with those who had not registered for the course but were interested. However, these numbers were relatively small, and hard to measure.

Passive participation was defined as doing the minimum amount of work required to complete the course. Some of the passive participants were K-12 students forced to complete the MOOC for a class requirement.

There were also preservice teachers and inservice teachers who could be described as passive participants. These participants often completed the course; however, much like the high school students, their posts were limited to one or two sentences per post. Their comments were also superficial, for example, "Nice job" or "I like what you did."

Active participants participated in four ways:

  • informing personal practice
  • sharing the MOOC with their communities
  • leadership within the MOOC community
  • critical colleagues

The authors’ main conclusions

The seeking and sharing of digital media highlights that people want to form and engage in communities, and the growing interest in MOOCs shows this is true of educational communities as well….

Learning takes place in communities; depending on the implementation, technology has the capability to create and sustain the communities' learning and practice…. Evidence in this report suggests that such activities can lead to positive outcomes, particularly as they relate to getting teachers to think more deeply about teaching and learning in the 21st century.

My comments

Even though (or perhaps because) this is a self-evaluation, this is a very useful report. I was fascinated, for instance, that this course ended with more participants than when it started, due to the 'publicity' of social media connections during the course itself. It was interesting too that some of the participants in this MOOC were not necessarily willing participants, being forced to participate as part of a formal credit program. This seems to me to go against the whole purpose of a connectivist MOOC.

More importantly for me, the report highlights some of the ways research can be conducted on MOOCs, and also some of the challenges. The study identifies the importance, from a research perspective, of having some kind of platform that can gather student data and track student behaviour, such as levels or types of participation. However, given the importance of social media for connectivist MOOCs, some way of accurately tracking related social media activity is critical. It seems to me that this is a problem that appropriate software could solve (further development of gRSShopper?), although privacy issues would need to be addressed as well. (Perhaps the spy agencies can help here – just joking!)

I agree completely with the authors when they write:

Researchers have already provided ample evidence that asking if a technology works is the wrong question. A more appropriate question is: under what conditions do certain types of MOOCs work?

Another, even more pertinent, question is: what prior research into credit-based online learning applies – and what does not apply – to different kinds of MOOCs? This might save a lot of time re-inventing the wheel, particularly for xMOOCs. I am getting sick of hearing from research on xMOOCs that immediate feedback helps retention – we have known that for nearly 100 years. We do need, though, to assess for instance the importance and most useful roles, if any, of instructors/facilitators/subject matter experts in MOOCs, and whether MOOCs can succeed with reduced 'expert' participation. This report suggests almost the opposite – that connectivist MOOCs work best with a wide range of facilitators – but what are the hidden costs of this?

Finally, I also agree with the authors that completion rates are not the best measure of success for MOOCs. This MOOC does seem to have raised some interesting questions for participants; I'm just curious about their answers. Despite the very good work done by the instructors/researchers of this MOOC, I am still left with the question: what did the participants actually learn from this MOOC? For instance, what would an analysis of the student 'artifacts' have told us about their learning? Unless we try to answer questions about what actual learning took place, it will remain difficult if not impossible to measure the true value of different kinds of MOOCs, and I think that would be a pity.

In the meantime, this report is definitely recommended reading for anyone interested in doing research on or evaluating MOOCs.