Nine steps to quality online learning: Step 9: Evaluate and innovate
In this post I discuss the importance of evaluating each offering of an online course, how best to do this, and then the importance of maintaining and improving the course.
This is the last in a series of ten posts on designing quality online courses. The nine steps are aimed mainly at instructors who are new to online learning, or who have tried online learning without much help or success. The first nine posts should be read before this one.
A condensed version covering all the main posts in this series can be found on the Contact North web site: What you need to know about teaching online: nine key steps. (French version: Ce que le personnel enseignant doit savoir sur l’enseignement en ligne: neuf étapes clés.)
The ten posts are also being translated into Portuguese by Professor Luis Roberto Brudna Holzle, Federal University, Brazil, available at Science Blogs: Nove passos para uma aprendizagem on-line de qualidade
Steps 1-8: Building a strong foundation
The emphasis in this series of posts is on getting the fundamentals of online teaching right. The discerning reader will have noted that there isn’t much in these posts about exciting new tools, MOOCs, the Khan Academy, MIT’s edX, and many other new developments in online learning. These tools and new programs offer great potential, and we will discuss some of them in this post. However, whatever tools or revolutionary programs are being used, what we know about how people learn does not change a great deal over time. Learning is a process, and you ignore the factors that influence that process at your peril.
I’ve focused mainly on using LMSs, because that is what most institutions currently have, and they provide an adequate ‘framework’ within which the key processes of teaching and learning can be managed. If you get these fundamentals right, they will transfer well to new tools and programs; conversely, tools that do not support the key processes of learning are likely to be a passing fad and will eventually die. For example, MOOCs may reach hundreds of thousands of students, but if there is no suitable communication with, or ‘online presence’ from, an instructor, then most students will fail (as is the case at the moment). MOOCs will survive and grow if they can accommodate the core processes of clear learning outcomes, learner support, clear structure, management of student and faculty workload, and so on.
The last key ‘fundamental’ of the teaching and learning process is evaluation and innovation: assessing what has been done, and then looking at ways to improve on it.
Why evaluation is important
This step isn’t specific to online teaching; it applies to all forms of teaching. However, especially for instructors new to it, online teaching is different and therefore likely to be seen as higher risk. Online and distance learning are always held to a higher standard than conventional teaching, so more effort is required to justify their use. For tenure and promotion, if you are teaching online it is important to be able to provide evidence that your teaching has been at least as successful as your classroom courses. Online learning itself is continually developing: new tools and new approaches to teaching online are constantly becoming available. They provide the opportunity to experiment a little to see if the results are better, and if we do that, we need to evaluate the impact of the new tool or course design. It’s what professionals do. But the main reason is that teaching is like golf: we strive for perfection but can never achieve it. It is always possible to improve, and one of the best ways of doing so is through a systematic analysis of past experience.
What to evaluate
In Step 1, I defined quality online learning very narrowly. It is outcomes based:
By quality, I mean ‘Reaching the same level or better with an online course as for an equivalent face-to-face course.’ This has two quantitative critical performance indicators:
- completion rates will be at least as good if not better for the online version
- grades or measures of learning will be at least as good if not better for the online version.
On a qualitative level, I suggested one other criterion:
- quality online learning will lead to new, different and more relevant learning outcomes that are better served by online learning.
So these are the minimum requirements. The first two are easily measured in quantitative terms. We should be aiming for a completion rate for an online course of at least 85%, i.e. of 100 students starting the course, at least 85 complete by passing the end-of-course assessment. (Unfortunately, many classroom courses fail to achieve this rate, but if we value good teaching, we should be trying to bring as many students as possible to the set standard.)
The second criterion is to compare the grades. We would expect at least as many As and Bs in the online version as in a classroom version. (I am assuming that students take the same exams, whether in class or online, and are marked to the same standards.)
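These first two criteria can be computed directly from enrolment and grade records. As a minimal sketch (the grade lists and the 85% threshold below are illustrative, not data from any real course):

```python
from collections import Counter

# Hypothetical records: one grade per enrolled student,
# with None marking a student who did not complete the course.
online_grades = ["A", "B", "B", "C", None, "A", "B", "C", "D", None]
classroom_grades = ["B", "C", "A", "B", None, "C", "B", "A", "C", "D"]

def completion_rate(grades):
    """Fraction of enrolled students who completed the course."""
    completed = [g for g in grades if g is not None]
    return len(completed) / len(grades)

def grade_distribution(grades):
    """Counts of each grade among the students who completed."""
    return Counter(g for g in grades if g is not None)

print(f"Online completion:    {completion_rate(online_grades):.0%}")
print(f"Classroom completion: {completion_rate(classroom_grades):.0%}")
print("Online grade distribution:", dict(grade_distribution(online_grades)))
print("85% target met:", completion_rate(online_grades) >= 0.85)
```

The same two functions applied to both versions of the course give a like-for-like comparison, provided (as noted above) the assessments and marking standards are the same.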
The third criterion is more difficult, because it suggests a change in the intended learning goals for a course that is delivered online. This might include assessing students’ communication skills, or their ability to find, evaluate, analyze and apply information appropriately within the subject domain, which are not assessed in the classroom version. This requires a qualitative judgement as to which learning goals are most important, and this may require endorsement or support from a departmental curriculum committee or even an external accreditation body.
However, even if we measure the course by these three criteria, we will not necessarily know what worked and what didn’t in the course. We need to look more closely at factors that may have influenced students’ ability to learn. We have laid out in the various steps some of these factors. Some of the questions to which you may want to get answers are as follows:
- What learning outcomes did most students struggle with?
- Were the learning outcomes or goals clear to students?
- Was the teaching material clear and well structured?
- Was the LMS easily accessible and available 24×7?
- Did students behave in the online discussion forums in the way expected?
- What topics generated good discussion and what didn’t?
- Did students draw on the course materials in their discussion forums or assignments?
- Did students make use of the podcasts?
- How many students logged in to the webcasts, and did these students do better or worse than those who didn’t?
- Were the students overloaded with work?
- Was it too much work for me as an instructor?
- If so, what could I do to better manage my workload (or the students’) without losing quality?
- How satisfied were the students with the course?
I will now suggest some ways these questions can be answered without creating a huge amount of extra work.
How to evaluate factors contributing to or inhibiting learning on an online course
There is a range of resources you can draw on to do this, in fact many more than for evaluating classroom courses, because online learning leaves a traceable digital trail of evidence:
- student grades
- individual student participation rates in online activities, such as self-assessment questions, discussion forums, webinars
- qualitative analysis of the discussion forums, for instance the quality and range of comments, indicating the level or depth of engagement or thinking
- student assignments and exam answers
- student questionnaires
- online focus groups.
However, before starting it is useful to draw up a list of questions as in the previous section, and then look at which sources are most likely to provide answers to those questions.
A word about student questionnaires. Many institutions have a ‘standard’ end-of-course student questionnaire. These are often useless for evaluating online courses: the questions need to be adapted to an online learning environment, but because such questionnaires are used for cross-course comparisons, the people who manage them are reluctant to allow a different version for online teaching. Secondly, because these questionnaires are usually completed voluntarily after the course has ended, response rates are notoriously low (often less than 20%). Results from such low response rates are worthless or at best highly misleading: they are heavily biased towards successful students, and in most cases students who dropped out will not even receive the questionnaire. Yet it is the students who struggled or dropped out that you most need to hear from.
I find small focus groups work better than student questionnaires, and for these I prefer synchronous tools such as Blackboard Collaborate. I deliberately approach seven or eight specific students covering the full range of achievement, from drop-out to A, and conduct a one-hour discussion around specific questions about the course. If a selected student does not want to participate, I try to find another in the same category.
In addition, at the end of a course, I tend to look at the student grades and identify which students did well and which struggled. I then go back to the beginning of the course and track their online participation as far as possible (the next generation of learning analytics will make this much easier). I find that some factors are student-specific (e.g. a gregarious student who communicates with everyone) and some are course-specific, e.g. related to the learning goals or the way I have explained or presented content. This qualitative approach will often suggest changes to the content, or to the way I interact with students, for the next version of the course. I may also decide next time to manage more carefully students who ‘hog’ the conversation.
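Even without specialized learning-analytics software, the kind of tracking described above can be sketched from a raw LMS activity export. The records and student names below are invented for illustration; real data would come from whatever activity log your LMS provides.

```python
from collections import Counter

# Hypothetical LMS activity export: one (student, activity_type) pair
# per logged event over the whole course.
activity_log = [
    ("ana", "forum_post"), ("ana", "webcast"), ("ana", "forum_post"),
    ("ben", "webcast"), ("ana", "quiz"), ("cara", "forum_post"),
    ("ben", "quiz"),
]

# Final outcomes, with "DNF" marking a student who did not complete.
final_grades = {"ana": "A", "ben": "C", "cara": "DNF"}

# Count each student's total logged online activity.
participation = Counter(student for student, _ in activity_log)

# Line participation up against outcomes to look for patterns,
# e.g. whether struggling students were also disengaged early on.
for student, grade in sorted(final_grades.items()):
    events = participation.get(student, 0)
    print(f"{student}: grade {grade}, {events} logged activities")
```

A simple tally like this will not tell you *why* a student struggled, but it quickly flags which students to follow up with, for example in the focus groups described above.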
Usually I spend quite a bit of time at the end of the first presentation of an online course evaluating it and making changes in the next version, usually working with a trusted instructional designer. After that I concentrate mainly on ensuring completion rates and grades are at the standard I have aimed for.
What I am more likely to do in the third or subsequent offerings is to look at ways to improve the course that are the result of new external factors, such as new software (e.g. an e-portfolio package), or new processes (e.g. student-generated content, using mobile phones or cameras, collecting project-related data). This keeps the course ‘fresh’ and interesting. However, I usually limit myself to one substantive change, partly for workload reasons but also because this way it is easier to measure the impact of the change.
It is indeed an exciting time to be an instructor. In particular, the new generation of web 2.0 tools, including WordPress, new instructor-focused ‘lightweight’ LMSs such as Instructure, open educational resources, mobile learning, tablets and iPads, electronic publishing, and MOOCs, all offer a wide variety of opportunities for innovation and experiment. Many of these can be easily integrated within an existing LMS and course structure. I will discuss in another post how some of these tools can radically change the design and delivery of online learning.
However, it is important to remember that the aim is to enable students to learn effectively. We do have enough knowledge and experience to be able to design ‘safe’, effective learning around standard LMSs. Many of the new web 2.0 tools have not been thoroughly evaluated in post-secondary educational settings, and it is already clear that some of the newer tools or approaches are not proving to be as effective as older approaches to online learning. New is not always better. Thus for instructors starting in online learning, I would urge caution. Follow the experienced route, then gradually add and evaluate new tools and new approaches to learning as you become more experienced.
The nine steps are based on two foundations: effective learning strategies resulting from tested learning theories; and experience of successfully teaching online. The focus has been on instructors new to online learning. The posts are meant to lead you into working with other professionals, such as instructional and web designers, and preferably in a team with other online instructors.
The approach I have suggested is quite conservative, and some may wish to jump straight into what I would call second-generation online learning, or e-learning 2.0. Nevertheless, even (or especially) when working without a learning management system, it is important to remember that most students need clear learning goals, a clear structure or timetable of work, manageable study workloads, and instructor communication and presence. Most students also learn best, especially online, in a social environment that draws on and contributes to the knowledge and experience of other students.
Evaluation of the nine steps
In the spirit of this blog, your feedback will help me evaluate the nine steps/ten posts of this series. So here are some questions:
- If you are new to online teaching, how helpful were these posts for you? What didn’t work, or what was missing?
- If you work with instructors who are new to or struggling with online teaching, would you refer these posts to them?
- Do you think this is too conservative an approach to teaching online? Too much focus on LMSs and not enough on web 2.0 tools?
- Do you agree that there are ‘fundamental processes of learning’ that are relatively independent of different tools?
- To what extent are these guidelines applicable to all kinds of teaching, not just online teaching? What do you think is ‘special’ that you need to know about online teaching?
- Was there something critical for quality online learning missing in the nine steps?
Your feedback either as a comment to this post or as an e-mail will be much appreciated and will make the next version much better!
I will revisit in another set of posts two issues:
- advanced online course design, based on the use of web 2.0 tools
- the new campus: designing for hybrid learning
I was surprised to find, when conducting a mini-review of formative evaluation in online teaching, how little there is on the topic, and how little of what exists is helpful to the individual instructor trying to improve a course, or could be recommended by me. If you know of any practical guide to formative evaluation of online teaching that will help individual instructors, please let me know! The best article by far that I found is:
Gunawardena, C., Lowe, C. & Carabajal, K. (2000). Evaluating Online Learning: models and methods. In D. Willis et al. (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2000 (pp. 1677-1684). Chesapeake, VA: AACE.
Note: there is a big difference between summative evaluation (which identifies the overall effectiveness of online learning compared with, for instance, classroom teaching) and formative evaluation, which seeks to learn from past experience to improve future performance. There is a large literature on summative evaluation of online learning, quality standards, and criteria. There is a much smaller literature specifically on formative evaluation of online teaching that enables an individual teacher to improve their teaching (although there is a much bigger literature on formative evaluation in classroom teaching). In this post, I have been focusing on formative evaluation, carried out by the instructor mainly to improve a specific course.