September 21, 2017

Answering questions about teaching online: assessment and evaluation

How to assess students online: remote exam proctoring

Following on from my Contact North webinar on the first five chapters of my book, Teaching in a Digital Age, and my blog post on this yesterday, there were four follow-up questions from the seminar to which I posted written answers. Here they are:

Unanswered Questions

Q: If multiple choice is not great for applied learning assessment, could you please give us some tips for more effective assessment in the virtual environment?

Big question! There are several ways to assess applied learning, and their appropriateness will depend on the subject area and the learning goals (Look particularly at Appendix A, Section 8). Here are some examples:

  • via project work, where the outcome of the project is assessed. (This could be either an individual or a group project.) Marking a project that may take several weeks' work on the part of students helps keep the marking workload down, although this may be offset to some extent by the help that may need to be given to learners during the project.
  • through e-portfolios, where students are asked to apply what they are learning to practical real-life contexts. The e-portfolio is then used to assess what students have learned by the end of the course.
  • use of online discussion forums, where students are assessed on their contributions, in particular on their ability to apply knowledge to specific real-world situations (e.g. in contemporary international politics).
  • using simulations where students have to input data to solve problems, and make decisions. The simulation collects the data and allows for qualitative assessment by the instructor. (This depends on there being suitable simulations available or the ability to create one.)

Q: I am finding in my post-graduate online courses that the professor is interacting less and less in the online weekly forums. I know there are competing theories as to how much they should interact with students, but do you have an opinion on whether or not professors should interact weekly? Personally, I enjoy their interaction; I find it furthers my learning.

This is another big issue. In general, the research is pretty consistent: in online learning, instructor ‘presence’ is critical to the success of many students. Look particularly at Chapter 4, Section 4 and Chapter 11, Section 10. However, presence alone is not sufficient. The online discussion must be designed properly to lead to academic learning, and the instructor’s interventions should raise the level of thinking in the discussion (see 4.4.2 in the book). Above all, the discussion topics must be relevant and, from a student’s perspective, clearly contribute to answering assessment questions better. Instructors should, in my view, check their online discussion forums daily and respond or intervene at least weekly. Again, though, this is a design issue: the better the course design, the less they should need to log in daily.

Q: Can you give an example of how a MOOC can supplement a face-to-face or fully online course?

I think the best way is to consider a MOOC as an open educational resource (OER). There is a whole chapter (Chapter 10) in the book on OERs. Thus MOOCs (or more likely parts of MOOCs) might be used in a flipped classroom context, where students study the MOOC then come to class to do work around it. But be careful. Many MOOCs are not OER. They are protected by copyright and cannot be used without permission. They may be available only for a limited period. If it is your own MOOC, on the other hand, that’s different. My question is though: is the MOOC material the best OER material available or are there other sources that would fit the class requirement better, such as an open textbook? Or even better, should you look at designing the course completely differently, to increase student interaction, self-learning and the development of higher order thinking skills, by using one of the other teaching methods in the book?

Q: Would better learning analytics reports help teachers have a more relevant role in MOOCS?

Learning analytics can be helpful but usually they are not sufficient. Analytics provide only quantitative or measurable data, such as time on task, demographics about successful or unsuccessful students, analysis of choices in multiple-choice tests, etc. This is always useful information but will not necessarily tell you why students are struggling to understand or are not continuing. Compare this with a good online discussion forum where students can raise questions and the instructor can respond. Students’ comments, questions and discussion can provide a lot of valuable feedback about the design of the course, but require in most cases some form of qualitative analysis by the instructor. This is difficult in massive online courses and learning analytics alone will not resolve this, although they can help, for instance, in focusing down on those parts of the MOOC where students are having difficulties.

Any more questions?

I’m more than happy to post regular responses to any questions you may have about online teaching, either related to the book or quite independent of it. Just send them to me at tony.bates@ubc.ca

Measuring the success of an open textbook

I have just come back from two days at one of my favourite communities of practice, the British Columbian Educational Technology Users’ Group (ETUG) annual workshop, this year at Simon Fraser University, Burnaby, BC. (More on this workshop in another post).

I was there to report on the early response to my online open textbook, Teaching in a Digital Age. I thought it might be of interest to share some of my presentation as a blog post, because it raises some questions about how to measure the success of an open textbook.

Student savings

This is the obvious and most important measure of success: how much does an open textbook save by reducing one of the major costs of education? Here in British Columbia, the average annual cost of textbooks for BC post-secondary institutions is $1,200 per student, if they bought all the required textbooks new (which of course, many don’t).

BCcampus, which has an extensive open textbook project, with over 60 open textbooks currently available, has been tracking their adoption by post-secondary institutions in BC. They have found that to date (actually, April 15, 2015):

  • 146 known adoptions of Open Textbooks at 14 of the 25 public post-secondary institutions across British Columbia
  • A student cost savings ranging from $475K to $700K (see their post on how they calculate student savings to see why they report the savings as a range).
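BCcampus explains its actual method in the post referenced above. Purely as an illustration of how a savings range of this shape can arise, the sketch below multiplies the known adoption count by an assumed average enrolment and an assumed range of replaced-textbook costs; every figure except the adoption count is a made-up assumption, not BCcampus data.

```python
# Illustrative estimate of student savings from open textbook adoptions.
# Only the adoption count comes from the BCcampus figures above; the
# enrolment and cost numbers are hypothetical assumptions.

adoptions = 146                 # known adoptions across BC institutions
students_per_adoption = 35      # assumed average enrolment per adoption
cost_low, cost_high = 93, 137   # assumed replaced-textbook cost range ($)

savings_low = adoptions * students_per_adoption * cost_low
savings_high = adoptions * students_per_adoption * cost_high

print(f"Estimated savings: ${savings_low:,} - ${savings_high:,}")
```

Reporting a range rather than a single figure makes the uncertainty in the enrolment and cost assumptions explicit.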

More importantly, other studies have shown that when open textbooks are available, students make greater use of them, and students tend to perform better, as they have them from the first day of the course.

However, my open textbook is aimed at instructors and faculty, and my main aim is not to save them money, but to make the book more accessible than if it had been commercially published. So I need to use other criteria of success. I also have to compare the response to the book to the effort and cost of doing the book.

Quantitative responses

There were several deliberate marketing initiatives:

  • March 2014 – April 2015: the use of my blog to publish draft chapters. However, my blog readership, which consists mainly of professionals in the field of online learning, is a secondary market, although a very important one, as they can bring the book to the attention of the main market I am trying to reach, instructors and faculty;
  • April 7, 2015: I posted the final version of the book to the BCcampus Open Textbook web site on April 6, 2015, and wrote a blog post the next day announcing that the book was finished and now fully available in various formats;
  • April 22, 2015: the Western Inter-State Co-operative for Educational Technology (WCET) is a national (U.S.), member-driven non-profit that brings together colleges and universities, higher education organizations and companies to collectively improve the quality and reach of e-learning programs. They had invited me to write a blog post for their newsletter promoting the book, and they published this on April 22;
  • April 29, 2015: Contact North in Ontario organised a world wide media release about the book
  • May 5, 2015: Academica.ca picked up on the Contact North media release (as did some other specialist media) and published a short announcement about the book.

The BCcampus open textbook web site tracks the number of visits to the book. The following shows the site traffic between April 29 and May 28:

Figure 2: Book visitors, April 29 – May 28

Between April 7 (my blog post) and April 29 (Contact North’s media blitz) the site traffic had been averaging about 1,000 visits a day, but with considerable daily variations. You can see that after a peak around April 30, the number of visits started to decline. Figure 3 illustrates why:

Figure 3: Book downloads, April 1 – May 24 (source: BCcampus Open Textbook project)

Up until April 7, most people had been accessing and reading the html version of the book direct from the BCcampus web site. From April 7, though, visitors started downloading the book and reading it off-line or via a mobile version. Figure 4 shows the total number of downloads:

Figure 4: Book download statistics, April 7 – May 24 (source: BCcampus)

Thus there were 8,205 downloads of the whole book between April 7 and May 24, i.e. over a seven week period. Interestingly, although the book is available in several mobile or tablet formats, 80% of the downloads were as pdfs.

Although the quantitative data is interesting, I’m not quite sure what to make of it. My best-selling commercial book struggled to exceed 10,000 copies over a period of 10 years or so, so 8,000+ downloads suggests a lot of readers; on the other hand, it is easy to download something that is online and free, then never read it, or read only a few pages.

In any case, there are over half a million people in North America alone in the market I’m trying to reach. Although marketing is important, it drives the numbers without indicating the level of engagement. So I am looking increasingly to qualitative feedback to measure the book’s success.

Qualitative responses

These can be classified in the following ways:

1. Individual faculty and instructors

This is probably my key market, but also the hardest to reach, because these instructors are discipline- and subject-based, and are unlikely to stumble across this book except by accident. To date, a small number (fewer than 10) of individual instructors have contacted me to say that they are reading the book and finding it useful, but it is early days yet. I suspect that, like one of my other books, on effective teaching with technology in higher education, it will take time for the word to get out to this market. That earlier book was in fact selling more copies five years after publication than in its first year.

2. Faculty development

This is where someone has read the book, and wants to use it for the professional development of other instructors, usually within the same department. I have so far heard from the following who want to do this:

  • Neuroscience, CalTech
  • Professional development for digital education, Loma Linda University
  • College of Engineering, Drexel University
  • Bachelor of Technology, McMaster University
  • Faculty of Agriculture, Dalhousie University

Usually the person contacting me has been a dean or head of department, which is particularly rewarding for me.

3. Adoption as a text book for a course

The book has already been adopted by at least one instructor on teacher-training or graduate education courses at the following universities:

  • UBC (Master in Educational Technology)
  • Simon Fraser University (Professional Development certificate)
  • Royal Roads University (Master of Arts in Learning and Technology)
  • Vancouver Island University (Online Learning & Teaching Graduate Diploma (OLTD) Program)
  • Vancouver Community College (Provincial Instructor Diploma Program)
  • Seattle Pacific University (MEd in Digital Education).

Again, although these all lead to some form of qualification, they are also providing professional development opportunities for instructors and faculty. And in this case it is saving students money, although the students are mainly professionals.

4. Accreditation agencies

This was even more unexpected. I have received notification from two accrediting agencies that they are either using or recommending the book for continuing professional development:

  • Accrediting Bureau of Health Education Schools, USA
  • Texas Higher Education Coordinating Board

5. Graduate students use in theses/dissertations

I have heard from a few graduate students who are using the book to help with their dissertation or thesis. I’m hoping this will increase over time.

6. External reviews

One graduate student wrote to me to say she wanted to quote me in her thesis, but her supervisor warned her not to do this, as the book had not been formally peer reviewed. I have in fact commissioned three independent reviews, which will be published at the end of this month (June) alongside the book. It will be nearly a year before any formal reviews will be published, at least in traditional journals, but these will be important, so I am wondering which journals to send it to (as a pdf) – any suggestions?

Conclusions

It is still early days to be evaluating the success of my open textbook, but there are some surprises here for me.

  1. I didn’t expect so many people to download the whole book, and I certainly didn’t expect those that did download to use the pdf version. I actually designed the book for online use on a laptop or desktop computer, as the book has embedded multimedia and is meant to be interactive, and a resource that is dipped into rather than read from cover to cover. In another post, I discussed the problem of formatting graphics in the different versions, and I am hoping to have a slightly redesigned version for tablets soon, but maybe one should design for a pdf version from scratch.
  2. It may or may not be significant that I haven’t heard anything yet from units that are professionally responsible for faculty development, such as teaching and learning centres. I will be more than happy if I can reach directly individual faculty and instructors and their deans and heads of department. Teaching and learning centres are probably at their busiest now and already had their workshops designed before the book came out. However, I am hoping that it will eventually be seen as an essential resource by such centres, but maybe I need to network more with organisations such as the Society for Teaching and Learning in Higher Education to make them also more aware of the book.
  3. Authors need to make sure they are getting regular reports on the utilisation of their open textbooks. The software is there to do this, but it is not always accessible by the authors themselves without running a special report, so there is a bit of work to be done on the interface of Pressbooks to make this information more comprehensive and readily available to authors directly. However, interpreting such data is also tricky, even when it is available, and needs to be balanced with qualitative assessments as well.

I will write about the cost of doing an open textbook in another post, then end with a final post which will discuss whether I felt the whole exercise was worth it. In the meantime, I am wondering if any readers have suggestions for better ways to evaluate the success of an open textbook.

Ensuring quality teaching in a digital age: key takeaways

Building the foundations of quality teaching and learning

Building the foundations of quality teaching and learning

I have now completed and published Chapter 11, ‘Ensuring quality teaching in a digital age’, for my online open textbook, Teaching in a Digital Age.

Unlike earlier chapters, I have not published this as a series of blog posts, as it is based on an earlier set of blog posts called: ‘Nine steps to quality online learning.’

However, there are some substantial changes. The focus here is as much on applying basic principles of course design to face-to-face and blended/hybrid learning as to fully online course design.

More importantly, this chapter attempts to pull together all the principles from all previous ten chapters into a set of practical steps towards the design of quality teaching in a digital age.

Purpose of the chapter

When you have read this chapter, and in conjunction with what has been learned in previous chapters, you should be able to:

  • define quality in terms of teaching in a digital age
  • determine what your preferred approaches are to teaching and learning
  • decide what mode of delivery is most appropriate for any course you are responsible for
  • understand why teamwork is essential for effective teaching in a digital age
  • make best use of existing resources for any course
  • choose and use the right technology and tools to support your learning
  • set appropriate learning goals for teaching in a digital age
  • design an appropriate course structure and set of learning activities
  • know when and how to communicate with learners
  • evaluate your teaching, make necessary improvements, and improve your teaching through further innovation.

Key takeaways

1. For the purposes of this book, quality is defined as: teaching methods that successfully help learners develop the knowledge and skills they will require in a digital age.

2. Formal national and institutional quality assurance processes do not guarantee quality teaching and learning. In particular, they focus on past ‘best’ practices, processes to be done before actual teaching, and often ignore the affective, emotional or personal aspects of learning. Nor do they focus particularly on the needs of learners in a digital age.

3. New technologies and the needs of learners in a digital age require a re-thinking of traditional campus-based teaching, especially where it has been based mainly on the transmission of knowledge. This means re-assessing the way you teach and determining how you would really like to teach in a digital age. This requires imagination and vision rather than technical expertise.

4. It is important to determine the most appropriate mode of delivery, based on teaching philosophy, the needs of students, the demands of the discipline, and the resources available.

5. It is best to work in a team. Blended and especially fully online learning require a range of skills that most instructors are unlikely to have. Good course design not only enables students to learn better but also controls teacher and instructor workload. Courses look better with good graphic and web design and professional video production. Specialist technical help frees up teachers and instructors to concentrate on the knowledge and skills that students need to develop.

6. Full use should be made of existing resources, including institutionally-supported learning technologies, open educational resources, learning technology staff, and the experience of your colleagues.

7. The main technologies you will be using should be mastered, so you are professional and knowledgeable about their strengths and weaknesses for teaching.

8. Learning goals that are appropriate for learners in a digital age need to be clearly defined. The skills students need should be embedded within their subject domain, and these skills should be formally assessed.

9. A coherent and clearly communicable structure, and learning activities for a course, should be developed that are manageable in terms of workload for both students and instructor.

10. Regular and on-going instructor/teacher presence, especially when students are studying partly or wholly online, is essential for student success. This means effective communication between teacher/instructor and students. It is particularly important to encourage inter-student communication, either face-to-face or online.

11. The extent to which the new learning goals of re-designed courses aimed at developing the knowledge and skills needed in a digital age have been achieved should be carefully evaluated and ways in which the course could be improved should be identified.

Over to you

Although the previous blog posts on nine steps to quality online learning were well received (they have been used in some post-secondary education courses), feedback on this revised book version will be much appreciated. I haven’t seen anything similar that tries to integrate basic principles across all three modes of delivery, so I am especially interested to see how these are perceived in terms of regular classroom and blended learning.

Up next

The final chapter, which will take a brief look at the institutional policies and strategies needed to support teachers and instructors wanting to teach well in a digital age. It will deal explicitly with what we should expect (and more importantly, not expect) of teachers and instructors, issues around faculty development and teacher training, working methods for teachers and instructors, and learning technology support.

I aim to finish this (and the whole book, at least in first draft form) by March 14. French and Spanish translations are already under way.

Nine steps to quality online learning: Step 9: Evaluate and innovate

© Hilary Page-Bucci, 2002

In this post I discuss the importance of evaluating each offering of an online course, how best to do this, and then the importance of maintaining and improving the course.

This is the last in a series of 10 posts on designing quality online courses. The nine steps are aimed mainly at instructors who are new to online learning, or have tried online learning without much help or success. The first nine posts (which should be read before this post) are:

Nine steps to quality online learning: Introduction

Nine steps to quality online learning: Step 1: Decide how you want to teach online

Nine steps to quality online-learning: Step 2: Decide on what kind of online course

Nine steps to quality online learning: Step 3: Work in a Team

Nine steps to quality online learning: Step 4: Build on existing resources

Nine steps to quality online learning: Step 5: Master the technology

Nine steps to quality online learning: Step 6: Set appropriate learning goals

Nine steps to quality online learning: Step 7: Design course structure and learning activities

Nine steps to quality online learning: Step 8: Communicate, communicate, communicate

A condensed version covering all the main posts in this series can be found on the Contact North web site: What you need to know about teaching online: nine key steps. (French version: Ce que le personnel enseignant doit savoir sur l’enseignement en ligne : neuf étapes clés)

The ten posts are also being translated into Portuguese by Professor Luis Roberto Brudna Holzle, Federal University, Brazil, available at Science Blogs: Nove passos para uma aprendizagem on-line de qualidade

Steps 1-8: Building a strong foundation

The emphasis in this series of posts is on getting the fundamentals of online teaching right. The discerning reader will have noted that there isn’t much in these posts about exciting new tools, MOOCs, the Khan Academy, MIT’s edX, and many other new developments in online learning. These tools and new programs offer great potential and we will discuss some of these in this post. However, it doesn’t matter what tools or revolutionary programs are being used, what we know of how people learn does not change a great deal over time, and we do know that learning is a process, and you ignore the factors that influence that process at your peril.

I’ve focused mainly on using LMSs, because that is what most institutions currently have, and they provide an adequate ‘framework’ within which the key processes of teaching and learning can be managed. But if you get these fundamentals right they will transfer well to the new tools and programs; if they don’t transfer well, such tools are likely to be a passing fad and will eventually die, because they don’t support the key processes that support learning. For example, MOOCs may reach hundreds of thousands of students, but if there is no suitable communication with or ‘online presence’ from an instructor, then most students will fail (as is the case at the moment). MOOCs will survive and grow if they can accommodate the core processes of clear learning outcomes, learner support, clear structure, management of student and faculty workload, etc.

The last key ‘fundamental’ of the teaching and learning process is evaluation and innovation: assessing what has been done, and then looking at ways to improve on it.

Why evaluation is important

This step isn’t specific to online teaching; it applies to all forms of teaching. However, especially for instructors new to it, online teaching is different and therefore likely to be seen as higher risk. Online and distance learning are always held to a higher standard than conventional teaching, so more effort is required to justify their use. For tenure and promotion, if you are teaching online it is important to be able to provide evidence that the teaching has been at least as successful as your classroom courses. Online learning itself is continually developing: new tools and new approaches to teaching online are constantly becoming available. They provide the opportunity to experiment a little to see if the results are better, and if we do that, we need to evaluate the impact of using a new tool or course design. It’s what professionals do. But the main reason is that teaching is like golf: we strive for perfection but can never achieve it. It’s always possible to improve, and one of the best ways of doing that is through a systematic analysis of past experience.

What to evaluate

In Step 1, I defined quality online learning very narrowly. It is outcomes based:

 By quality, I mean ‘Reaching the same level or better with an online course as for an equivalent face-to-face course.’ This has two quantitative critical performance indicators:

  • completion rates will be at least as good if not better for the online version
  • grades or measures of learning will be at least as good if not better for the online version.

On a qualitative level, I suggested one other criterion:

  • quality online learning will lead to new, different and more relevant learning outcomes that are better served by online learning.

So these are the minimum requirements. The first two are easily measured in quantitative terms. We should be aiming for completion rates for an online course of at least 85%, i.e. of 100 students starting the course, 85 complete by passing the end of course assessment (unfortunately, many classroom courses fail to achieve this rate, but if we value good teaching, we should be trying to bring as many students as possible to the set standard).

The second criterion is to compare the grades. We would expect at least as many As and Bs in our online version as in a classroom version. (I am assuming that students are taking the same exams, etc., whether they are in class or online, and are being marked to the same standards).
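The first two criteria can be computed directly from course records. As a minimal sketch, with entirely made-up student records (the record layout and all names are my own assumptions, not any particular LMS's export format):

```python
# Sketch of the two quantitative criteria above, using made-up records.
# Each record: (student_id, completed_course, final_grade or None).

online = [("s1", True, "A"), ("s2", True, "B"), ("s3", False, None),
          ("s4", True, "B"), ("s5", True, "C")]
classroom = [("c1", True, "B"), ("c2", True, "C"), ("c3", False, None),
             ("c4", True, "A"), ("c5", False, None)]

def completion_rate(records):
    """Fraction of starters who completed by passing the assessment."""
    return sum(1 for _, done, _ in records if done) / len(records)

def grade_counts(records):
    """Distribution of final grades among completers."""
    counts = {}
    for _, _, grade in records:
        if grade is not None:
            counts[grade] = counts.get(grade, 0) + 1
    return counts

print(f"Online completion: {completion_rate(online):.0%}")
print(f"Classroom completion: {completion_rate(classroom):.0%}")
print("Online grades:", grade_counts(online))
print("Classroom grades:", grade_counts(classroom))
```

Comparing the two grade distributions, rather than just the means, shows whether the online version is producing at least as many As and Bs.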

The third criterion is more difficult, because it suggests a change in the intended learning goals for a course that is delivered online. This might include assessing students’ communication skills, or their ability to find, evaluate, analyze and apply information appropriately within the subject domain, which are not assessed in the classroom version. This requires a qualitative judgement as to which learning goals are most important, and this may require endorsement or support from a departmental curriculum committee or even an external accreditation body.

However, even if we measure the course by these three criteria, we will not necessarily know what worked and what didn’t in the course. We need to look more closely at factors that may have influenced students’ ability to learn. We have laid out in the various steps some of these factors. Some of the questions to which you may want to get answers are as follows:

  • What learning outcomes did most students struggle with?
  • Were the learning outcomes or goals clear to students?
  • Was the teaching material clear and well structured?
  • Was the LMS easily accessible and available 24×7?
  • Did students behave in the online discussion forums in the way expected?
  • What topics generated good discussion and what didn’t?
  • Did students draw on the course materials in their discussion forums or assignments?
  • Did students make use of the podcasts?
  • How many students logged in to the webcasts and did these students do better or worse than those that didn’t?
  • Were the students overloaded with work?
  • Was it too much work for me as an instructor?
  • If so, what could I do to better manage my workload (or the students’) without losing quality?
  • How satisfied were the students with the course?

I will now suggest some ways that these questions can be answered without again causing a huge amount of work.

Analysis of a sample of exam answers will often provide information about course structure and the presentation of materials

How to evaluate factors contributing to or inhibiting learning on an online course

There is a range of resources you can draw on to do this, much more in fact than for evaluating classroom courses, because online learning leaves a traceable digital trail of evidence.

  • student grades
  • individual student participation rates in online activities, such as self-assessment questions, discussion forums, webinars
  • qualitative analysis of the discussion forums, for instance the quality and range of comments, indicating the level or depth of engagement or thinking
  • student assignments and exam answers
  • student questionnaires
  • online focus groups.

However, before starting it is useful to draw up a list of questions as in the previous section, and then look at which sources are most likely to provide answers to those questions.

One word about student questionnaires. Many institutions have a ‘standard’ end-of-course student questionnaire. These are often useless for the purposes of evaluating online courses: the questions need to be adapted to an online learning environment, but because such questionnaires are used for cross-course comparisons, the people who manage them are reluctant to have a different version for online teaching. Secondly, because these questionnaires are usually completed voluntarily by students after the course has ended, completion rates are notoriously low (often less than 20%). Results from such low response rates are usually worthless, or at best highly misleading: students who dropped out of the course won’t even receive the questionnaire in most cases, so responses tend to be heavily biased towards successful students. Yet it is the students who struggled or dropped out that you most need to hear from.
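This non-response bias is easy to demonstrate with toy numbers. The sketch below assumes a small hypothetical cohort where only the satisfied students return the questionnaire; all figures are invented for illustration:

```python
# Toy demonstration of non-response bias in end-of-course questionnaires.
# Hypothetical satisfaction scores (1-5) for a whole cohort; in this
# scenario only students scoring 4 or 5 bother to respond.

cohort = [5, 5, 4, 4, 4, 3, 2, 2, 1, 1]        # true scores, all students
respondents = [s for s in cohort if s >= 4]    # only satisfied students reply

true_mean = sum(cohort) / len(cohort)
survey_mean = sum(respondents) / len(respondents)

print(f"True mean satisfaction:     {true_mean}")
print(f"Surveyed mean satisfaction: {survey_mean}")
```

The survey overstates satisfaction because the students most likely to be dissatisfied never appear in the sample, which is exactly why low response rates mislead.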

I find small focus groups work better than student questionnaires, and for this I prefer synchronous tools such as Blackboard Collaborate. I will deliberately approach 7-8 specific students covering the full range of achievement, from drop-out to A, and conduct a one hour discussion around specific questions about the course. If one selected student does not want to participate, I try to find another in the same category.

In addition, at the end of a course I look at the student grades and identify which students did well and which struggled. I then go back to the beginning of the course and track their online participation as far as possible (the next generation of learning analytics will make this much easier). Some of the factors I find are student-specific (e.g. a gregarious student who communicates with everyone) and some are course-specific, e.g. related to the learning goals or the way I have explained or presented content. This qualitative approach often suggests changes to the content, or to the way I interact with students, for the next version of the course. I may also decide to manage more carefully, next time, students who ‘hog’ the conversation.
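The end-of-course tracking just described can be sketched in a few lines of code. This is a minimal, hypothetical sketch: the student names, forum-post counts, and grade-band thresholds are all invented for illustration, and in practice the data would come from your LMS gradebook and activity-log exports.

```python
# Hypothetical gradebook and forum-activity exports; real student IDs and
# counts would come from your LMS.
grades = {"ana": 88, "ben": 52, "carla": 91, "dev": 40, "eli": 74}
forum_posts = {"ana": 23, "ben": 4, "carla": 31, "dev": 2, "eli": 12}

def band(score):
    """Crude achievement bands; adjust the thresholds to your grading scheme."""
    if score >= 80:
        return "high"
    if score >= 60:
        return "mid"
    return "struggling"

# Group forum-post counts by achievement band.
by_band = {}
for student, score in grades.items():
    by_band.setdefault(band(score), []).append(forum_posts.get(student, 0))

# Average posts per band: a first, rough signal of who engaged and who did not.
summary = {b: round(sum(posts) / len(posts), 1) for b, posts in by_band.items()}
for b in sorted(summary):
    print(b, summary[b])
# high 27.0
# mid 12.0
# struggling 3.0
```

A crude summary like this will not explain anything on its own, but it does flag where to look, for example that the struggling students in this invented data barely participated in the forums, which is exactly the kind of pattern worth following up with a closer qualitative review.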

Innovate

I usually spend quite a bit of time at the end of the first offering of an online course evaluating it and making changes for the next version, working wherever possible with a trusted instructional designer. After that, I concentrate mainly on ensuring that completion rates and grades are at the standard I have aimed for.

What I am more likely to do in the third or subsequent offerings is to look at ways to improve the course that are the result of new external factors, such as new software (e.g. an e-portfolio package), or new processes (e.g. student-generated content, using mobile phones or cameras, collecting project-related data). This keeps the course ‘fresh’ and interesting. However, I usually limit myself to one substantive change, partly for workload reasons but also because this way it is easier to measure the impact of the change.

It is indeed an exciting time to be an instructor. In particular, the new generation of web 2.0 tools, including WordPress, new instructor-focused ‘lightweight’ LMSs such as Instructure, open educational resources, mobile learning, tablets and iPads, electronic publishing, and MOOCs, all offer a wide variety of opportunities for innovation and experiment. These can be integrated easily within an existing LMS and existing course structure. I will discuss in another post how some of these tools can radically change the design and delivery of online learning.

However, it is important to remember that the aim is to enable students to learn effectively. We have enough knowledge and experience to design ‘safe’, effective learning around standard LMSs. Many of the new web 2.0 tools have not been thoroughly evaluated in post-secondary educational settings, and it is already clear that some of the newer tools or approaches are not proving as effective as older approaches to online learning. New is not always better. For instructors starting in online learning, then, I would urge caution: follow the experienced route, then gradually add and evaluate new tools and approaches as you gain experience.

Summary

The nine steps are based on two foundations: effective learning strategies resulting from tested learning theories; and experience of successfully teaching online. The focus has been on instructors new to online learning. The posts are meant to lead you into working with other professionals, such as instructional and web designers, and preferably in a team with other online instructors.

The approach I have suggested is quite conservative, and some may wish to jump straight into what I would call second-generation online learning, or e-learning 2.0. Nevertheless, even (or especially) when working without a learning management system, it is important to remember that most students need clear learning goals, a clear structure or timetable of work, manageable study workloads, and instructor communication and presence. Most students also learn best, especially online, in a social environment that draws on and contributes to the knowledge and experience of other students.

Evaluation of the nine steps

In the spirit of this blog, your feedback will help me evaluate the nine steps/ten posts of this series. So here are some questions:

  1. If you are new to online teaching, how helpful were these posts for you? What didn’t work, or what was missing?
  2. If you work with instructors who are new to or struggling with online teaching, would you refer them to these posts?
  3. Do you think this is too conservative an approach to teaching online? Too much focus on LMSs and not enough on web 2.0 tools?
  4. Do you agree that there are ‘fundamental processes of learning’ that are relatively independent of different tools?
  5. To what extent are these guidelines applicable to all kinds of teaching, not just online teaching? What do you think is ‘special’ that you need to know about online teaching?
  6. Was there something critical for quality online learning missing in the nine steps?

Your feedback either as a comment to this post or as an e-mail will be much appreciated and will make the next version much better!

Next steps

I will revisit in another set of posts two issues:

  • advanced online course design, based on the use of web 2.0 tools
  • the new campus: designing for hybrid learning

Further reading

I was surprised to find, when conducting a mini-review of formative evaluation in online teaching, how little there is on the topic, and how little of what does exist is helpful to the individual instructor trying to improve their course, or could be recommended by me. If you know of any practical guide to formative evaluation of online teaching that would help individual instructors, please let me know! The best article by far that I found is:

Gunawardena, C., Lowe, C. & Carabajal, K. (2000). Evaluating Online Learning: models and methods. In D. Willis et al. (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2000 (pp. 1677-1684). Chesapeake, VA: AACE.

Note: there is a big difference between summative evaluation (which assesses the overall effectiveness of online learning, compared with, for instance, classroom teaching) and formative evaluation, which seeks to learn from past experience in order to improve future performance. There is a large literature on summative evaluation of online learning, quality standards, and criteria, but a much smaller literature specifically on formative evaluation of online teaching that enables an individual teacher to improve their teaching (although there is a much bigger literature on formative evaluation in classroom teaching). In this post, I have been focusing on formative evaluation, carried out by the instructor mainly to improve a specific course.


Journal of Learning Design: Vol. 4, No. 4, 2011

The latest edition of the Journal of Learning Design is now available.
From the editorial:
This issue presents six papers from a wide range of disciplines and locations… The papers in this issue fit loosely into three categories that move from the macro in considering whole curriculum or course design, through to finer levels of evaluating teaching and learning materials, to end with the micro decisions made in the selection of teaching methods or technologies to support those methods.

Contents

Editorial

Measuring Course Learning Outcomes 1-9
Mohsen Keshavarz of University of Tehran, IRAN
Full Paper

Leading Learning Design: Investigating Program Leaders’ Initial Conceptions Of Graduate Attributes 10-20
Kylie Readman of University of the Sunshine Coast, AUSTRALIA
Full Paper

Course Cohesion: An Elusive Goal For Tertiary Education 21-30
Nan Bahr and Margaret Lloyd of Queensland University of Technology, AUSTRALIA
Full Paper

Evaluation Of Learning Materials – A Holistic Framework 31-44
Jeppe Bundsgaard of Aarhus University and Thomas Hansen of University College Lillebaelt, DENMARK
Full Paper

Blended Learning Using Role-Plays, Wikis And Blogs  45-55
Michele Ruyters and Kathy Douglas of RMIT University, and Siew Fang Law of Victoria University, AUSTRALIA
Full Paper

Nobody Says No: Student Self-Censorship In A Collaborative Knowledge Building Activity 56-68
Alan Roberts and Rod Nason of Queensland University of Technology, AUSTRALIA
Full Paper