October 31, 2014

Getting ready for the EDEN Research workshop

Oxford: City of Dreaming Spires (Matthew Arnold)

I’m now in England, about to attend the EDEN Research Workshop on research into online learning, which starts tomorrow (Sunday) in Oxford. The event is hosted by the UK Open University, one of the main sources of systematic research in online learning. (EDEN is the European Distance and E-Learning Network.)

This is one of my favourite events, because the aim is to bring together all those in Europe doing research in online learning to discuss their work, the issues and research methods. It’s a great chance for young or new players in the field to make themselves known and connect with other, more experienced, researchers. Altogether there will be about 120 participants, just the right size to get to know everyone over three days. I organised one such EDEN research workshop myself several years ago in Barcelona, when I was working at the Open University of Catalonia, and it was great fun.

The format is very interesting. All the papers are published a week ahead of the workshop, and each author gets just a few minutes in parallel sessions to summarise briefly, with plenty of time for discussion afterwards (what EDEN calls ‘research speed dating’). There are also several research workshops, such as ‘Linking Learning Design with Learning Analytics,’ as well as several keynotes (but not too many!). I’m particularly looking forward to Sian Bayne’s ‘Teaching, Research and the More-than-human in Digital Education.’ There are also poster sessions, 14 in all.

I am the Chair of the jury for the EDEN award for the best research paper, and also the workshop rapporteur. As a result I have been carefully reading all the papers over the last week, 44 in all, and I’m still trying to work out how to be in several places at the same time so I can cover all the sessions.

As a result I’ve had to put my book, ‘Teaching in a Digital Age’, on hold for the last few days. However, the EDEN papers have already been so useful, bringing me the latest reviews and updates on research in this area, that it is well worth taking a few more days before getting back to the strengths and weaknesses of MOOCs. I will be much better informed as a result, as there are quite a few research papers on European MOOCs. I will also do a blog post after the conference, summing up what I heard during the three days.

So it looks as though I won’t have much time for dreaming in the city of dreaming spires.


What students learned from an MIT physics MOOC



Colvin, K. et al. (2014) Learning an Introductory Physics MOOC: All Cohorts Learn Equally, Including On-Campus Class, IRRODL, Vol. 15, No. 4

Why this paper?

I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:

  • it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
  • as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
  • the paper indicates, in line with many other comparisons between modes of delivery, that the conditions which are associated with the context of teaching are more important than just the mode of delivery
  • I had to read this paper carefully for my book, ‘Teaching in a Digital Age’, but for reasons of space I cannot go into detail on it there, so I might as well share my full analysis with you.

What was the course?

8.MReV – Mechanics ReView, an introduction to Newtonian mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed Introductory Newtonian Mechanics in the fall. In other words, it is based on a second-chance course for MIT campus students.

The online version was offered in the summer semester as a free, open access course through edX and was aimed particularly at high school physics teachers but also to anyone else interested. The course consisted of the following components:

  • an online eText, especially designed for the course
  • reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
  • an online discussion area/forum
  • mainly multiple-choice online tests and ‘quizzes’, interspersed on a weekly basis throughout the course.

Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were “passing” or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course. 1,030 of these students earned certificates.

Thus the study measured only the learning of the most successful online students (in terms of completing the online course).

Methodology (summary)

The study measured primarily ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that generally can be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.

Two methods to test learning were used: a comparison between each student’s pre-test and post-test score to measure the learning gain during the course; and an analysis based on Item Response Theory (IRT) which does not show absolute learning (as measured by pre-post testing), but rather improvement relative to “class average.”
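For readers unfamiliar with these measures, here is a minimal sketch of both: the normalized (‘Hake’) gain that is standard in physics education research, and the simplest one-parameter (Rasch) IRT model. This is illustrative only: the paper’s actual scoring scale and IRT specification may well differ, and the function names are my own.

```python
import math

def normalized_gain(pre_score, post_score, max_score=100.0):
    """Hake's normalized gain: the fraction of the possible
    improvement (from the pre-test score up to a perfect score)
    that a student actually achieved."""
    if pre_score >= max_score:
        raise ValueError("pre-test score is already at ceiling")
    return (post_score - pre_score) / (max_score - pre_score)

def rasch_p_correct(ability, difficulty):
    """One-parameter (Rasch) IRT model: the probability that a
    student of a given ability answers an item of a given
    difficulty correctly. Both are on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A student scoring 40/100 before and 58/100 after has closed
# 30% of the gap to a perfect score:
print(normalized_gain(40, 58))    # 0.3
# When ability equals item difficulty, the Rasch model predicts
# a 50% chance of a correct answer:
print(rasch_p_correct(1.2, 1.2))  # 0.5
```

In the physics education literature, traditional lecture courses typically show average normalized gains in the low 0.2s, and interactive-engagement courses roughly twice that, which gives some context for the comparisons the authors make.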

Because of the large number of MOOC participants included in the study, the researchers were able to analyse performance between various ‘cohorts’ within the MOOC participants, such as:

  • physics teachers
  • not physics teachers
  • physics background
  • no physics background
  • college math
  • no math
  • post-graduate qualification
  • bachelor degree
  • no more than high school

Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:

  • four hours of instruction each week in which staff interacted with small groups of students (a flipped classroom),
  • staff office hours,
  • help from fellow students,
  • available physics tutors,
  • MIT library

Main results (summary)

  • gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes and lower than (but closer to) those found in ‘interactive’ classes, but this result is hedged around with some considerable qualifications (‘more studies on MOOCs need to be done to confirm this’.)
  • in spite of the extra instruction that the on-campus students had, there was no evidence of positive, weekly relative improvement of the on-campus students compared with the online students (indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse)
  • there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts

Conclusions

This is a valuable research report, carefully conducted and cautiously interpreted by the authors. However, for these reasons, it is really important not to jump to conclusions. In particular, the authors’ own caution at the end of the paper should be noted:

It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.

To this I would add that the design of this MOOC was somewhat different to many other xMOOCs in that it was based on online texts specially designed for the MOOC, and not on video lectures.

I’m still not sure from reading the paper how much students actually learned from the MOOC. About 1,000 who finished the course got a certificate, but it is difficult to interpret the gain in knowledge. The statistical measurement of an average gain of 0.3 doesn’t mean a lot on its own. There is some mention of the difference being between a B and a B+, although I may have misinterpreted that. If that is the case, though, I would certainly expect students taking a 13-week course to do much better. It would have been more helpful to have graded students on the pre-test and then compared those grades on the post-test. We could then see whether gains were in the order of at least one grade, for instance.

Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.


Review of ‘Online Distance Education: Towards a Research Agenda.’

Drop-out: the elephant in the DE room that no-one wants to talk about

Zawacki-Richter, O. and Anderson, T. (eds.) (2014) Online Distance Education: Towards a Research Agenda, Athabasca AB: AU Press, 508 pp.

It is somewhat daunting to review a book of over 500 pages of research on any topic. I suspect that few other than the editors will read this book from cover to cover. It is more likely to be kept on one’s bookshelf (if these still exist in a digital age) for reference whenever needed. Nevertheless, this is an important work that anyone working in online learning needs to be aware of, so I will do my best to cover it as comprehensively as I can.

Structure of the book

The book is a collection of about 20 chapters by a variety of different authors (more on the choice of authors later). Based on a Delphi study and analysis of ‘key research journals’ in the field, the editors have organized the topic into three sections, with a set of chapters on each sub-section, as follows:

1. Macro-level research: distance education systems and theories

  • access, equity and ethics
  • globalization and cross-cultural issues
  • distance teaching systems and institutions
  • theories and models
  • research methods and knowledge transfer

2. Meso-level research: management, organization and technology

  • management and organization
  • costs and benefits
  • educational technology
  • innovation and change
  • professional development and faculty support
  • learner support services
  • quality assurance

3. Micro-level: teaching and learning in distance education

  • instructional/learning design
  • interaction and communication
  • learner characteristics.

In addition, there is a very useful preface from Otto Peters, an introductory chapter by the editors where they justify their structural organization of research, and a short conclusion that calls for a systematic research agenda in online distance education research.

More importantly, perhaps, Terry Anderson and Olaf Zawacki-Richter demonstrate empirically that research in this field has been skewed towards micro-level research (about half of all publications). Interestingly, and somewhat surprisingly given its importance, the costs and benefits of online distance education are the least researched area.

What I liked

It is somewhat invidious to pick out particular chapters, because different people will have different interests from such a wide-ranging list of topics. I have tended to choose those that I found were new and/or particularly enlightening for me, but other readers’ choices will be different. However, by selecting a few excellent chapters, I hope to give some idea of the quality of the book.

1. The structuring/organization of research

Anderson and Zawacki-Richter have done an excellent job in providing a structural framework for research in this field. This will be useful both for those teaching about online and distance education and, in particular, for potential Ph.D. students wondering what to study. This book will provide an essential starting point.

2. Summary of the issues in each area of research

Again, the editors have done an excellent job in their introductory chapter in summarizing the content of each of the chapters that follows, and in so doing pulling out the key themes and issues within each area of research. This alone makes the book worthwhile.

3. Globalization, Culture and Online Distance Education

Charlotte (Lani) Gunawardena of the University of New Mexico has written the most comprehensive and deep analysis of this issue that I have seen. It is an area in which I have a great deal of interest, since most of the online teaching I have done has been with students from around the world, sometimes in multiple languages.

After a general discussion of the issue of globalization and education, she reviews research in the following areas:

  • diverse educational expectations
  • learners and preferred ways of learning
  • socio-cultural environment and online interaction
  • help-seeking behaviours
  • silence
  • language learning
  • researching culture and online distance learning

This chapter should be required reading for anyone contemplating teaching online.

4. Quality assurance in Online Distance Education

I picked this chapter by Colin Latchem because he is so deeply expert in this field that he is able to make what can be a numbingly boring but immensely important topic a fun read, while at the same time ending with some critical questions about quality assurance. In particular Latchem looks at QA from the following perspectives:

  • definitions of quality
  • accreditation
  • online distance education vs campus-based teaching
  • quality standards
  • transnational online distance education
  • open educational resources
  • costs of QA
  • is online distance education yet good enough?
  • an outcomes approach to QA.

This chapter definitely showcases a master at the top of his game.

5. The elephant in the room: student drop-out

This is a wonderfully funny but ultimately serious argument between Ormond Simpson and Alan Woodley about the elephant in the distance education room that no-one wants to mention. Here they start poking the elephant with some sticks (which, they note, is not likely to be a career-enhancing move). The basic argument is that institutions should and could do more to reduce drop-out and increase course completion. This chapter also stunned me by providing hard data on really low completion rates for most open university students. I couldn’t help comparing these with the high completion rates for online credit courses at dual-mode (campus-based) institutions, at least in Canada (which of course are not ‘open’ institutions, in that students must have good high school qualifications).

Woodley’s solution to reducing drop-out is quite interesting (and later well argued):

  • make it harder to get in
  • make it harder to get out

In both cases, really practical and not too costly solutions are offered that nevertheless are consistent with open access and high quality teaching.

In summary

The book contains a number of really good chapters that lay out the issues in researching online distance education.

What I disliked

I have to say that I groaned when I first saw the list of contributors. The same old, same old list of distance education experts with a heavy bias towards open universities. Sure, they are nearly all well-seasoned experts, and there’s nothing wrong with that per se (after all, I see myself as one of them.)

But where are the young researchers here, and especially the researchers in open educational resources, MOOCs, social media applications in online learning, and above all researchers from the many campus-based universities now mainstreaming online learning? There is almost nothing in the book about research into blended learning, and flipped classrooms are not even mentioned. OK, the book is about online distance learning but the barriers or distinctions are coming down with a vengeance. This book will never reach those who most need it, the many campus-based instructors now venturing for the first time into online learning in one way or another. They don’t see themselves as primarily distance educators.

And a few of the articles were more like lessons in history than an up-to-date review of research in the field. Readers of this blog will know that I strongly value the history of educational technology and distance learning, but these lessons need to be embedded in the here and now. In particular, the lessons need to be spelled out. It is not enough to know that Stanford University researchers were researching the costs and benefits of educational broadcasting in developing countries as long ago as 1974; what lessons does this hold for some of the outrageous claims being made about MOOCs? A great deal, in fact, but this needs explaining in the context of MOOCs today.

Also, the book is solely focused on post-secondary university education. Where is the research on online distance education in the K-12/school sector or the two-year college/vocational sector? Maybe these are topics for other books, but this is where the real gap exists in research publications in online learning.

Lastly, although the book is reasonably priced for its size (C$40), and is available as an e-text as well as in a fully printed version, what a pity it is not an open textbook that could then be updated and crowd-sourced over time.

Conclusion

This is essential reading for anyone who wants to take a professional, evidence-based approach to online learning (distance or otherwise). It will be particularly valuable for students wanting to do research in this area. The editors have done an incredibly good job of presenting a hugely diverse and scattered area in a clear and structured manner. Many of the chapters are gems of insight and knowledge in the field.

However, we have a huge challenge of knowledge transfer in this field. Repeatedly authors in the book lamented that many of the new entrants to online learning are woefully ignorant of the research previously done in this field. We need a better way to disseminate this research than a 500 page printed text that only those already expert in the field are likely to access. On the other hand, the book does provide a strong foundation from which to find better ways to disseminate this knowledge. Knowledge dissemination in a digital world then is where the research agenda for online learning needs to focus.


Special edition on research on MOOCs in the journal ‘Distance Education’

The University of Toronto is one of a number of institutions conducting research on MOOCs; their results are still to come

The August 2014 edition of the Australian-based journal, Distance Education (Vol.35, No. 2.), is devoted to new research on MOOCs. There is a guest editor, Kemi Jona, from Northwestern University, Illinois, as well as the regular editor, Som Naidu.

The six articles in this edition are fascinating, both in terms of their content, but even more so in their diversity. There are also three commentaries, by Jon Baggaley, Gerhard Fischer and myself.

My commentary provides my personal analysis of the six articles.

MOOCs are a changing concept

In most of the literature and discussion about MOOCs, there is a tendency to talk about ‘instructionist’ MOOCs (i.e. Coursera, edX, Udacity, xMOOCs) or ‘connectivist’ MOOCs (i.e. Downes, Siemens, Cormier, cMOOCs). Although this is still a useful distinction, representing very different pedagogies and approaches, the articles in this edition show that MOOCs come in all sizes and varieties.

Indeed, it is clear that the design of MOOCs is undergoing rapid development: partly as a result of more players coming into the market; partly because of the kinds of research now being conducted on MOOCs themselves; and, sadly much more slowly, because of a recognition by some of the newer players that much is already known about open and online education that needs to be applied to the design of MOOCs, while accepting that certain aspects, in particular their scale, make MOOCs unique.

The diversity of MOOC designs

These articles illustrate clearly such developments. The MOOCs covered by the articles range from

  • MOOC video recorded lectures watched in isolation by learners (Adams et al.)
  • MOOC video lectures watched in co-located groups in a flipped classroom mode without instructor or tutorial support (Nan Li et al.)
  • MOOCs integrated into regular campus-based programs with some learner support (Firmin et al.)
  • MOOCs using participatory and/or connectivist pedagogy (Anderson, Knox)

Also the size of the different MOOC populations studied here differed enormously, from 54 students per course to 42,000.

It is also clear that MOOC material is increasingly being extracted from its ‘massive’, open context and used in very specific ‘closed’ contexts, such as flipped classrooms. At this point one questions the difference between such use of MOOCs and regular for-credit online programming, which in many cases also uses recorded video lectures or online discussion, and increasingly other sources of open educational materials. In such campus-based contexts I would expect the same quality standards to apply to the MOOC+ course designs as are already applied to credit-based online learning. Some of the research findings in these articles indirectly support the need for this.

The diversity of research questions on MOOCs

Almost as interesting is the range of questions covered by these articles, which include:

  • capturing the lived experience of being in a MOOC (Adams et al.; Knox)
  • the extent to which learners can/should create their own content, and the challenges around that (Knox; Andersen)
  • how watching video lectures in a group affects learner satisfaction (Nan Li et al.)
  • what ‘massive’ means in terms of a unique pedagogy (Knox)
  • the ethical implications of MOOCs (Marshall)
  • reasons for academic success and failure in ‘flipped’ MOOCs (Firmin et al.; Knox)

What is clear from the articles is that MOOCs raise some fundamental questions about the nature of learning in digital environments. In particular, the question of the extent to which learners need guidance and support in MOOCs, and how this can best be provided, were common themes across several of the papers, with no definitive answers.

The diversity of methodology in MOOC research

Not surprisingly, given the range of research questions, there is also a very wide range of methodologies used in the articles in this edition, ranging from

  • phenomenology (Adams et al.)
  • heuristics (Marshall)
  • virtual ethnography (Knox; Andersen)
  • quasi-experimental comparisons (Nan Li et al.)
  • data and learning analytics (Firmin et al.)

The massiveness of MOOCs, their accessibility, and the wide range of questions they raise make the topic a very fertile area for research, and this is likely to generate new methods of research and analysis in the educational field.

Lessons learned

Readers are likely to draw a variety of conclusions from these studies. Here are mine:

  • the social aspect of learning is extremely important, and MOOCs offer great potential for exploiting this kind of learning, but organizing and managing social learning on a massive scale, without losing the potential advantages of collaboration at scale, is a major challenge that still remains to be adequately addressed. The Knox article in particular describes in graphic detail the sense of being overwhelmed by information in open connectivist MOOCs. We still lack methods or designs that properly support participants in such environments. This is a critical area for further research and development.
  • a lecture on video is still a lecture, whether watched in isolation or in groups. The more we attempt to support this transmissive model through organized group work, ‘facilitators’, or ‘advisors’ the closer we move towards conventional (and traditional) education and the further away from the core concept of a MOOC.
  • MOOCs have a unique place in the educational ecology. MOOCs are primarily instruments for non-formal learning. Trying to adapt MOOCs to the campus not only undermines their primary purpose, but risks moving institutions in the wrong direction. We would be better re-designing our large lecture classes from scratch, using criteria, methods and standards appropriate to the goals of formal higher education. My view is that in the long run, we will learn more from MOOCs about handling social learning at scale than about transmitting information at scale. We already know about that. It’s called broadcasting.
  • lastly, there was surprisingly little in the articles about what actual learning took place. In some cases, it was a deliberate research strategy not to enquire into this, relying more on student or instructor feelings and perceptions. While other potential benefits, such as institutional branding, stimulating interest, providing a network of connections, and so on, are important, the question remains: what are participants actually learning from MOOCs, and does this justify the hype and investment (both institutionally and in participants’ time) that surrounds them?

Cultural and ethical issues

The Marshall paper provides an excellent overview of ethical issues, but there is almost no representation of perspectives on MOOCs from outside Western contexts. I would have liked to have seen more on cultural and ethical issues arising from the globalization of MOOCs, based on actual cases or examples. Given the global implications of MOOCs, other perspectives are needed. Perhaps this is a topic for another issue.

Happy reading

I am sure you will be as fascinated and stimulated by these articles as I am. I am also sure you will come away with different conclusions from mine. I am sure we will see a flood of other articles soon on this topic. Nevertheless, these articles are important in setting the research agenda, and should be essential reading for MOOC designers as well as future researchers on this topic.

How to get the articles

To obtain access to these articles, go to: http://www.tandfonline.com/toc/cdie20/current#.U-1WqrxdWh1

Submitting a doctoral thesis on online learning? Some things to keep in mind

© Relativity Media, 2011

Old people often complain that the world is going to hell in a hand-basket, that standards are falling, and that it used to be better in our day. Having examined over 40 doctoral students in the last 45 years, often as the external examiner, I could easily fall into that trap. On the contrary, though, I am impressed with the quality of the theses I have been examining recently, partly because of the quality of the students, partly because of the quality of the supervision, and partly because online learning and educational technology in general have matured as a field of study.

However, one advantage of being old is that you begin to see patterns or themes that either come round every 10 years or so or never go away, and that certainly applies to Ph.D. theses in this field. So I thought I might offer some advice to students as to what examiners tend to look for in theses in this field, although technically it should be the supervisors doing this, not me.

Who’s being examined: student or supervisor?

When I have failed a student (which is rare but has happened) it’s ALWAYS been because the standard of supervision was so poor that the student never stood a chance. Somewhat more frequently (although still fairly uncommon), the examiners’ recommendation was pass with substantial revision, or ‘adequate’ in some European countries. Both these classifications carry a significant message to the academic department that the supervisor(s) weren’t doing their job properly. (Although to be fair, in at least one case the thesis was submitted almost in desperation by the department, because the student had exhausted all his many different supervisors, and was running out of the very generous time allowed to submit.)

So the good news, students, is that, despite what might appear to be the opposite, by the time it comes to submitting your thesis for exam, the university is (or should be) 100 per cent behind you in wanting to get you through. (In recent years, this pressure from the university on examiners to pass students sometimes appears to be almost desperate, because a successful Ph.D. may carry a very significant weight towards the performance indicators for the university.)

Criteria for success

So at the risk of over-simplification, here is my advice for students, in particular, on what I, as an examiner, tend to look for in a thesis, starting with the most important. My comments apply mainly, but not exclusively, to traditional, research-based theses.

Level 1.

I have three main criteria which MUST be met for a pass:

  • is it original?
  • does it demonstrate that the student is capable of conducting independent research?
  • does the evidence support the conclusions drawn in the thesis?

Originality

The minimum a doctoral thesis must do is tell me something that was not already known in the field. Now this can still be what students often see as a negative outcome: their main hypothesis is found to be false. That’s fine, if it is a commonly held hypothesis in the field. (Example: digital natives are different from digital immigrants: no evidence was found for this in the study.) If it disproves or questions current wisdom, that’s good, even if the result was not what you were expecting. In fact, that’s really good, because the ‘null hypothesis’ – I’m trying to prove my hypothesis is false – is a more rigorous test than trying to find evidence to support something you actually thought to be true before you started the research (see Karl Popper (1934) on this).

Competence in research

For students, there are three good reasons for doing a Ph.D.:

  • because you want an academic position in a university or college
  • because you want to work as a full-time researcher outside the university
  • because you have a burning question to answer (e.g., what is best done face-to-face, and what online, when teaching quantum physics?)

However, the main purpose of a Ph.D. (as distinct from other post-graduate qualifications) from a professional or institutional perspective is to enable students to conduct independent research. Thus the thesis must demonstrate this competency. In a sense, it is a trust issue: if this person does research, we should be able to trust him or her to do it within the norms and values of the subject discipline. (This is why it is stupid to even think of cheating by falsifying data or plagiarism: if found out, you will never get an academic job in a university, never mind the Ph.D.)

Evidence-based conclusions

My emphasis here is on ensuring that appropriate conclusions are drawn from whatever evidence is used (which should include the literature review as well as the actual data collected). If for instance the results are contrary to what might be expected from the literature review, some explanation or discussion is needed about why there is this difference. It may have to be speculative, but such contradictions need to be addressed and not ignored.

Level 2

Normally (although there will be exceptions) a good thesis will also meet the following criteria:

  • there is a clear narrative and structure to the thesis
  • there is a clear data audit trail, and all the raw/original data is accessible to examiners and the general public, subject to normal privacy/ethical requirements
  • the results must be meaningfully significant

Narrative and structure

Even in an applied thesis, this is a necessary component of a good thesis. The reader must be able to follow the plot – and the plot must be clear. The usual structure for a thesis in our field is:

  • identification of an issue or problem
  • review of relevant previous research/studies
  • identification of a research question or set of questions
  • methodology
  • results
  • conclusions and discussion.

However, other structures are possible. In an applied degree, the structure will, or should, be different, but even so the reader should be able to follow clearly, in the main body of the thesis, the rationale for the study, how it was conducted, the results, and the conclusions.

Data audit

Most – but not all – theses in the educational technology field have an empirical component. Data is collected, analysed and interpreted. All these steps have to be competently conducted, whether the data is mainly quantitative, qualitative or both. This usually means ensuring that there is a clear trail linking raw data through analysis into conclusions that can be followed and checked easily by a diligent reader (in this case, the examiners). This is especially important with qualitative data, because it is easy to cherry-pick comments that support your prior prejudices or assumptions while ignoring those that don’t fit. As an examiner, I do want access to raw data, even if it’s in an appendix or an online database.

However, I am also willing to accept a thesis that is pure argument. Nevertheless, this is a very risky option because this means offering something that is quite original and which can be adequately defended against the whole collective wisdom of the field. In the field of educational technology, it is hard to see how this can be done without resorting to some form of empirical evidence – but perhaps not impossible.

Significance of the research question and results

This is often the best test of how much the thesis is mainly the work of the supervisor and how much the student. A good supervisor can more or less frogmarch a student through the various procedural steps in doing a doctoral thesis, but what the supervisor cannot – or should not – provide is the original spark of a good research question, and the ability to see the significance of the study for the field as a whole. This is why orals are so important – this is the place to say why your study matters, but it also helps if you address this at the beginning and end of your written thesis as well.

Too often I have seen students who have asked questions that inevitably produce results that are trivial, already known, or are completely off-base. Even more tragic is when the student has an unexpected but important, well-founded set of data, but is unable to see the significance of the data for the field in general.

The problem is that supervisors quite rightly drill it into students that they must choose a research question that is manageable by an individual working mainly alone, and that their conclusions must be based on the data collected. But this does not mean that the research question needs to be trivial, or that, once the conclusions have been properly drawn, there should be no further discussion of their significance for the field as a whole. This is the real test of a student’s academic ability.

Tips for success

There are thousands of possible tips one could give to help Ph.D. students, but I will focus on just a few issues that seem to come up a lot in theses in this area:

1. Do a masters degree on online learning first

This will give you a good overview of the issues involved in online learning and should provide some essential preparatory skills, such as an introduction to research methods and extensive writing practice.

Do this prior to starting a Ph.D. See: Recommended graduate programs in e-learning for a list of appropriate programs.

Do it online if possible so you know what it’s like to be an online student.

At a minimum, take a course on research methods in the social sciences/online learning.

2. Get a good supervisor

The trick is to find a supervisor willing to accept your proposed area of research. Try to find someone in the local Faculty of Education with an interest in online learning and try to negotiate a research topic of mutual interest. This is really the hardest and most important part. Getting the right supervisor is absolutely essential. However, there are many more potential students than education faculty interested in research in online learning.

If you find a willing and sympathetic local faculty member with an interest in online learning but worried they don’t have the right expertise to supervise your particular interest, suggest a committee with an external supervisor (anywhere in the world) who really has the expertise and who may be willing to share the supervision with your local supervisor. Again, though, your chances of getting either an internal or external supervisor are much higher if that person already knows you or is aware of your work. Doing an online masters might help here, since some of the instructors on the course may be interested in supervising you for a Ph.D., especially if they know your work through the masters. But again, good professors with expertise in online learning are already likely to have a full supervision load, so it is not easy. (And don’t ask me – I’m retired!)

This means that even before applying for a Ph.D., you need to do some homework. Identify a topic with some degree of flexibility, have in mind an internal and an external supervisor, and show that you have done the necessary courses such as research methods, educational theory, etc., that will prepare you for a Ph.D. (or are willing to do them first).

3. Develop a good research question

See above. Ideally, it should meet the following requirements:

a. The research is likely to add something new to our knowledge in the field

b. The results of the research (positive, negative or descriptive) are likely to be significant/important for instructors, students or an institution

c. You can do the research to answer the question on your own, within a year or so of starting to collect data.

d. It can be done within the ethical requirements of research

It is even better if you can collect data as part of your everyday work, for example by researching your own online teaching.

4. Get a good understanding of sampling and the level of statistics that your study requires

Even if you are doing a qualitative study, you really need to understand sampling – choosing subjects to participate in the study. The two issues you need to watch out for are:

1. Bias in the initial choice of subjects, especially choosing subjects that are likely to support any hypotheses or assumptions you may already have. (Hence the danger of researching your own teaching – but you can turn this to advantage by taking care to identify your prior assumptions in advance and being careful not to be unduly influenced by them in the design of the research).

2. Focusing too much on the number of respondents and not on the response rate, especially in quantitative studies. Studies with response rates of 40 per cent or less are usually worthless, because the responders are unlikely to be representative of the whole group (which is why student evaluation data is really dangerous: the response rate is usually biased towards successful students, who are more likely to complete the questionnaires than unsuccessful students). When choosing a sample, try to find independent data that can help you identify the extent of the likely bias due to non-responders. For instance, if looking at digital natives, check the age distribution of your responders against the age distribution of the whole group from which you drew the sample, if that is available. If you had a cohort of 100 students, and 20 responded, how does the average age of the responders compare with the average age of the whole 100? If the average age of responders is much lower than that of non-responders, what significance does this have for your study?
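This kind of non-response check is easy to do. A minimal sketch in Python, using entirely invented ages for a hypothetical cohort of 100 with 20 responders:

```python
from statistics import mean

# Hypothetical data: ages for a cohort of 100 students (all figures invented).
cohort_ages = [22] * 40 + [30] * 40 + [45] * 20      # the whole cohort (n = 100)
responder_ages = [22] * 14 + [30] * 5 + [45] * 1     # the 20 who responded

response_rate = len(responder_ages) / len(cohort_ages)  # 0.20 – well below 40%

cohort_mean = mean(cohort_ages)        # average age of the whole cohort
responder_mean = mean(responder_ages)  # average age of responders only

# A large gap between the two means signals non-response bias: conclusions
# about 'digital natives' drawn from responders alone would skew young.
bias = cohort_mean - responder_mean
```

Here the responders average roughly 25 against a cohort average of nearly 30, so any age-related conclusion drawn from the responders alone would need to address that gap.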

Understanding statistics is a whole other matter. If you intend to do anything quantitatively more complicated than simple counts and totals, make sure you understand the necessary statistics, especially what ‘statistically significant’ means. For instance, if you have a very large sample, even small differences are likely to be statistically significant, but they may not be meaningfully significant. Conversely, small samples make it harder to reach statistical significance, so drawing conclusions from a small sample can be very dangerous, even when the differences look large.
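To see why sample size drives statistical significance, here is a hedged sketch with invented numbers, using a simplified two-sample t statistic that assumes equal group sizes and a common standard deviation:

```python
from math import sqrt

def two_sample_t(mean_a, mean_b, sd, n):
    """Approximate t statistic for two equal-sized groups with a common SD."""
    se = sd * sqrt(2 / n)          # standard error of the difference in means
    return (mean_a - mean_b) / se

# A 1-point difference on a 100-point test, with a standard deviation of 15:
t_small = two_sample_t(71.0, 70.0, sd=15.0, n=30)     # 30 students per group
t_large = two_sample_t(71.0, 70.0, sd=15.0, n=20000)  # 20,000 per group

# With |t| greater than roughly 1.96, the difference is 'significant'
# at the 5% level. The same 1-point gap is not significant with small
# groups, but is highly 'significant' with huge ones...
significant_small = abs(t_small) > 1.96   # False
significant_large = abs(t_large) > 1.96   # True
# ...yet a 1-point difference is unlikely to matter for teaching practice.
```

The identical difference in means flips from ‘not significant’ to ‘significant’ purely because the sample grew, which is exactly why statistical significance and meaningful significance must be judged separately.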

5. Avoid tautological research design or quantitative designs with no independent variables

Basically, this means asking a question, stating a hypothesis, or designing research in such a way that the question or hypothesis itself provides the answer. To elaborate: research question: ‘What is quality in online learning?’ Answer: ‘It is defined by what educators say makes for quality in online courses, and my research shows that these are clear learning objectives, accessibility, learner engagement, etc.’ There is no independent variable here to validate the statements made by educators. (An independent variable might be exam results, participation rates of disabled people, etc.) Education is full of such self-justifications that have no clear, independent variables against which such statements have been tested. Merely re-iterating what people currently think is not original research.

For this reason, I am very skeptical of Delphi studies, which merely re-iterate already established views and opinions. I always ask: ‘Would a thorough literature review have provided the same results?’ The answer is usually: ‘No, you get a far more comprehensive and reliable overview of the topic from the literature review.’

6. Write well

Easily said, but not easily done. However, writing that is clear, well-structured, evidence-based, grammatically correct and well argued makes a huge difference when it comes to the examination of the thesis. I have seen really weak research studies get through on the sheer quality of the writing. I have seen other really good research studies sent back for major revision because they were so badly written.

Writing is a skill, so it gets better with practice. This usually means writing the same chapter several times until you get it right. Write the first draft, put it away and come back to it several days later. Re-read it and then clarify or improve what you’ve written. Do it again, and again, until you are satisfied that someone who knows nothing about the subject beforehand can understand it. (Don’t assume that all the examiners will be expert in your particular topic.) If you can, get someone such as a spouse who knows nothing about the subject to read through a chapter and ask them just to put question marks alongside sentences or paragraphs they don’t understand. Then re-write them until they do.

The more practice and feedback you can get on your writing, the better, and this is best done long before you get to a final draft.

Is the Ph.D. process broken?

A general comment about the whole Ph.D. process: while not completely broken, it is probably the most costly and inefficient academic process in the whole university, riddled with bureaucracy, lack of clarity for students, and certainly in the non-quantitative areas, open to all kinds of challenges regarding the process and standards.

This is further complicated by a move in recent years to applied rather than research theses. In an applied thesis, the aim is to come up with something useful that can be applied in the field, such as the design of an e-portfolio template that can be used for an end-of-course assessment, rather than the traditional research thesis. I believe this to be a step in the right direction. Unfortunately, though, education departments often struggle to provide clear guidance to both students and examiners about the criteria for assessing such new degrees, which makes it even more of a shot in the dark to decide whether a thesis is ready for submission.

Other suggestions or criticisms

These are (as usual) very personal comments. I’m sure students would like to hear from other examiners in this field, particularly if there is disagreement with my criteria and advice. And I’d like to hear from doctoral students themselves. Suggestions for further readings on the Ph.D. process would also be welcome.

I would also like to hear from those who question the whole Ph.D. process. I must admit to mixed feelings. We do need to develop good quality researchers in the field, and I think a research thesis is one way of doing this. I do feel though that the whole process could be made more efficient than it is at the moment.

In the meantime, good luck to all of you who are struggling with your doctoral studies in this field – we need you to succeed!

Reference

Popper, K. (1959) The Logic of Scientific Discovery London: Routledge (first published in German as Logik der Forschung, 1934)