Ruhe, V. and Zumbo, B. (2009) Evaluation in Distance Education and e-Learning. London/New York: Guilford Press

Contents

Preface

1. Why do we need a new approach to evaluation in distance education and e-learning?

2. The theory and practice of program evaluation

3. Evaluation theory and practice in distance education

4. Messick’s framework: what do evaluators need to know?

5. Getting started

6. The unfolding model: scientific evidence

7. The unfolding model: values and consequences

8. Findings from two authentic case studies

9. Bringing it all together

Appendix A: Summary of the 1994 Program Evaluation Standards

Appendix B: Glossary

Appendix C: List of Associations

References

Author index

Subject index

About the authors

Review: for an excellent review, see Kennedy, M. (2009) in IRRODL, Vol. 10, No. 2.

My comments: I was about to write a full review of this book when I read Mary Kennedy’s review in the latest edition of IRRODL. I agree with nearly all the points made by Mary in her review, and I recommend you read her review first, but I do have a few comments of my own to add. (What? You’re surprised?)

I agree with Mary that this book would be a useful beginner’s guide to evaluation. It covers the basic methods of data collection and analysis, and provides a useful framework (‘the unfolding model’) for conducting ‘scientific’ evaluation. I found the focus on unintended consequences particularly valuable. For an individual teacher wanting to evaluate their own teaching, this is a useful handbook.

However, there are some serious omissions in the book, which result from a very academic approach to evaluation. For instance, although learning technology or distance education units often conduct internal evaluation studies of the kind described in the book (though not often enough), the main form of evaluation in North American universities is external peer review, and there is no discussion of this at all in the book. As someone whose department was basically closed down by an external program review that simply ignored the ‘scientific’ evaluation information provided to the review committee, I would have welcomed some critique, or even discussion, of the reality of evaluation in e-learning and distance education.

That reality is that evaluation, particularly of e-learning and distance education in universities and colleges, is a highly charged political process, often based on power struggles between the VP Academic, deans, faculty, and staff in the unit responsible for supporting e-learning or distance education. Evaluation in this context is never a cool, logical, scientific process, even though that would be my preference. Learning technology and/or distance education units tend to be treated as peripheral and, if necessary, disposable; they therefore have little institutional power and are particularly vulnerable to power plays and internal machinations. Yet none of this is even acknowledged, let alone discussed, in the book.

My second major criticism is that the book somehow misses the essence of both distance education and e-learning. There is an academic discussion of the importance of identifying values and learning outcomes, but the authors don’t focus on the actual values underpinning distance education and e-learning, such as access, flexibility, innovation in teaching, skills for a knowledge-based society, and support for lifelong learners, nor, more importantly, on how to measure whether the values or goals specific to e-learning and distance education have been achieved. Indeed, it would be really helpful for many instructors to go through a process that enables them to clarify the benefits of using technology for teaching, and their specific reasons for using technology in a particular course or program, as a basic first step in evaluation.

The third omission is any discussion of quality standards and quality assurance in e-learning and distance education. There is a very large literature on this topic, which, in evaluation terms, is basically focused on formative evaluation – making sure that courses or programs are well designed before students are exposed to them. Again, this approach has its strengths and weaknesses, but it is not even mentioned in the book.

Another omission is any discussion of the costs of evaluation, particularly in terms of the time it demands. Especially when an institution moves into e-learning for the first time, the emphasis is on production and delivery: get the programs out the door, or ‘ship, ship, ship’, as the mantra used to be in the software world. There are obvious dangers in this, but again, politically, you have to show proof of concept for anything that is even mildly innovative in a university. ‘Scientific’ evaluation unfortunately tends to be seen as a luxury in such a climate – the ‘product’ either succeeds or it doesn’t. Don’t misunderstand me: I’m not recommending this approach, but some acknowledgement of the reality in which evaluation has to occur would have made the book much more valuable.

In brief, the book adopts a cookie-cutter approach to evaluation. The shape is nice, but you don’t actually get to taste the cake.
