March 2, 2015

What do we mean by quality when teaching in a digital age?

© Insights, 2012

Before I start on my nine steps to quality learning for my open textbook, Teaching in a Digital Age, I needed to ‘clear the decks’ on what we mean by quality. I thought this bit might be useful to share, as quality is a very slippery concept at the best of times.

The aim of this chapter is to provide some practical guidelines for teachers and instructors to ensure quality teaching in a digital age. Before I can do this, however, it is necessary to clarify what is meant by ‘quality’ in education, because I am using ‘quality’ here in a very specific way.

Definitions

Probably there is no other topic in education which generates so much discussion and controversy as ‘quality’. Many books have been written on the topic, but I will cut to the chase and give my definition of quality up-front. For the purposes of this book, quality is defined as:

teaching methods that successfully help learners develop the knowledge and skills they will require in a digital age.

This of course is the short answer to the question. A longer answer means looking, at least briefly, at:

  • institutional and degree accreditation
  • internal (academic) quality assurance processes
  • differences in quality assurance between traditional classroom teaching and online and distance education
  • the relationship between quality assurance processes and learning outcomes
  • ‘quality assurance fit for purpose’: meeting the goals of education in a digital age.

This will then provide the foundations for my recommendations for quality teaching that will follow in this chapter.

Institutional and degree accreditation

Most governments act to protect consumers in the education market by ensuring that institutions are properly accredited and that the qualifications they award are valid and recognised as being of ‘quality’. However, the manner in which institutions and degrees are accredited varies a great deal. The main difference is between the USA and virtually any other country.

The U.S. Department of Education’s Network for Education Information states in its description of accreditation and quality assurance in the USA:

Accreditation is the process used in U.S. education to ensure that schools, postsecondary institutions, and other education providers meet, and maintain, minimum standards of quality and integrity regarding academics, administration, and related services. It is a voluntary process based on the principle of academic self-governance. Schools, postsecondary institutions and programs (faculties) within institutions participate in accreditation. The entities which conduct accreditation are associations comprised of institutions and academic specialists in specific subjects, who establish and enforce standards of membership and procedures for conducting the accreditation process.

Both the federal and state governments recognize accreditation as the mechanism by which institutional and programmatic legitimacy are ensured. In international terms, accreditation by a recognized accrediting authority is accepted as the U.S. equivalent of other countries’ ministerial recognition of institutions belonging to national education systems.

In other words, in the USA, accreditation and quality assurance is effectively self-regulated by the educational institutions and faculty through their control of accreditation agencies, although the government does have some ‘weapons of enforcement’, mainly through the withdrawal of student financial aid for students at any institution that the U.S. Department of Education deems to be failing to meet standards.

In many other countries, government has the ultimate authority to accredit institutions and approve degrees, although in countries such as Canada and the United Kingdom, this is often exercised by arm’s length agencies appointed by government, but consisting mainly of representatives from the various institutions within the system. These bodies have a variety of names, but Degree Quality Assurance Board is a typical title.

However, more important than the formal lines of responsibility for quality is how the accrediting agencies actually operate. Usually, once a degree program is approved, there is little follow-up or monitoring afterwards, unless formal complaints are subsequently made about the quality of the program, although many institutions now voluntarily review programs every five years or so. Also, once an institution has been accredited, the accreditation agency may delegate back to the institution the approval of its own degree programs, provided that it has an internal process in place for assuring quality. Where government is formally responsible, however, new degrees may still come to an accrediting agency, to ensure there is no duplication within the system, that there is a defined market for the program, or where approval to deviate from government guidelines on fees is requested. Nevertheless, mainly to ensure academic freedom from direct government interference, universities in particular have a large degree of autonomy in most economically advanced countries for determining ‘quality’ in programming.

However, in recent years, some regulatory agencies such as the United Kingdom’s Quality Assurance Agency for Higher Education have adopted formal quality assurance processes based on practices that originated in industry. The U.K. QAA’s Quality Code for Higher Education, which aims to guide universities on what the QAA is looking for, runs to several hundred pages. Chapter B3, on Learning and Teaching, is 25 pages long and has seven indicators of quality. Indicator 4 is typical:

Higher education providers assure themselves that everyone involved in teaching or supporting student learning is appropriately qualified, supported and developed.

As a result of pressure from external agencies, many institutions have therefore put in place formal quality assurance processes over and above the normal academic approval processes (see Clarke-Okah et al., 2014, for a typical, low-cost example).

Internal quality assurance

It can be seen then that the internal processes for ensuring quality programs within an institution are particularly important. Although again the process can vary considerably between institutions, at least in universities the process is fairly standard. A proposal for a new degree will usually originate from a group of faculty/instructors within a department. The proposal will be discussed and amended at departmental and/or Faculty meetings, then once approved will go to the university senate for final approval. The administration in the form of the Provost’s Office will usually be involved, particularly where resources, such as new appointments, are required.

Although this is probably an over-generalisation, significantly the proposal will contain information about who will teach the course and their qualifications to teach it, the content to be covered within the program (often as a list of courses with short descriptions), a set of required readings, and usually something about how students will be assessed. Increasingly, such proposals may also include broad learning outcomes for the program.

If a proposal calls for courses within a program, or the whole program, to be delivered fully online, it is likely to come under intense internal scrutiny. What is unlikely to be included in a proposal, though, is what methods of teaching will be used. This is usually considered the responsibility of individual faculty members. It is this aspect of quality with which this chapter is concerned.

Lastly, some institutions require every program to be reviewed after five or more years of operation, or at the discretion of the senior administration. Again whether and how this is done varies considerably. One common approach is for an internal review process, with an internal evaluation report by a committee set up by the department offering the program, followed by a review of the internal committee’s report by external assessors. This review may or (more frequently) may not lead to significant changes in a program, but this will depend on the instructors responsible for teaching the program agreeing to implement any recommended changes. Less frequently, where enrolment for a program has declined to unacceptable levels or where external complaints about a program have been received, the Vice President Academic may call for an external review of the program, in which case anything is possible, up to and including closure of the program.

Jung and Latchem (2012), in a review of quality assessment processes in a large number of online and distance education institutions around the world, make the following important points about quality assurance processes within institutions:

  • focus on outcomes as the leading measure of quality
  • take a systemic approach to quality assurance
  • see QA as a process of continuous improvement
  • move the institution from external controls to an internal culture of quality
  • poor quality has very high costs so investment in quality is worthwhile.

In particular, Butcher and Wilson-Strydom (2013) warn:

you should not assume that creating quality assurance structures… automatically improves quality…. Institutional quality assurance structures and processes are important, but beware of making them an exercise in compliance for accountability, rather than a process of learning and self-improvement that really improves quality.

There are many guidelines for quality traditional classroom teaching. Perhaps the best known are those of Chickering and Gamson (1987), based on an analysis of 50 years of research into best practices in teaching. They argue that good practice in undergraduate education:

  1. Encourages contact between students and faculty.
  2. Develops reciprocity and cooperation among students.
  3. Encourages active learning.
  4. Gives prompt feedback.
  5. Emphasizes time on task.
  6. Communicates high expectations.
  7. Respects diverse talents and ways of learning.

Online courses and programs

Because online learning was new and hence open to concern about its quality, there have also been many guidelines, best practices and quality assurance criteria created and applied to online programming. All these guidelines and procedures have been derived from the experience of previously successful online programs, best practices in teaching and learning, and research and evaluation of online teaching and learning.

Some degree quality assurance boards (such as the QAA in the UK and PEQAB in Ontario) have put in place specific ‘benchmarks’ for online courses. A comprehensive list of online quality assurance standards, organizations and research on online learning can be found in Appendix 3. Graham et al. (2001) applied Chickering and Gamson’s seven principles for face-to-face teaching to the evaluation of four online courses from a mid-western university in the USA, and adapted these principles for online learning.

Thus ensuring quality in online learning is not rocket science. There is plenty of evidence of what works and what doesn’t, which will be examined in more detail in this chapter. There is no need to build a bureaucracy around this, but there does need to be some mechanism, some way of calling institutions to account when they fail to meet these standards. However, we should also do the same for campus-based teaching. As more and more already accredited (and ‘high quality’) campus-based institutions start moving into hybrid learning, establishing quality in the online elements of programs will become even more important.

Thus there are plenty of evidence-based guidelines for ensuring quality in teaching, both face-to-face and online. The main challenge then is to ensure that teachers and instructors are aware of these best practices and that institutions have processes in place to ensure that guidelines for quality teaching are implemented and followed.

Quality assurance, innovation and learning outcomes

It may have been noted that most QA processes are front-loaded, i.e. they look at inputs – such as the academic qualifications of faculty, or the processes to be adopted for effective teaching, such as clear learning objectives – rather than outputs, such as what students have actually learned. They also tend to be backward-looking, that is, they focus on past best practices.

This is particularly important for evaluating new teaching approaches. Butcher and Hoosen (2014) state:

The quality assurance of post-traditional higher education is not straightforward, because openness and flexibility are primary characteristics of these new approaches, whereas traditional approaches to quality assurance were designed for teaching and learning within more tightly structured frameworks.

However, Butcher and Hoosen (2014) go on to say that:

fundamental judgements about quality should not depend on whether education is provided in a traditional or post-traditional manner… the growth of openness is unlikely to demand major changes to quality assurance practices in institutions. The principles of good quality higher education have not changed…. Quality distance education is a sub-set of quality education… Distance education should be subject to the same quality assurance mechanisms as education generally.

Such arguments though offer a particular challenge for teaching in a digital age, where it is argued that learning outcomes need to include the development of skills such as independent learning, facility in using social media for communication, and knowledge management, skills that have not been explicitly identified in the past. Quality assurance processes are not usually tied to specific types of learning outcomes, but are more closely linked to general performance measures such as course completion rates, time to degree completion and grades based on past learning goals.

Furthermore, we have already seen in Chapters 9 and 10 that new media and new methods of teaching are emerging that have not been around long enough to be subject to analysis of best practices. A too rigid view of quality assessment based on past practices could have serious negative implications for innovation in teaching and for meeting newly emerging learning needs. ‘Best practice’ may need occasionally to be challenged, so new approaches can be experimented with and evaluated.

Quality assurance: fit for purpose in a digital age

Maxim Jean-Louis, the President of Contact North, at a presentation in 2010 to the Higher Education Quality Council of Ontario, made a very useful distinction about different ways of looking at quality in education:

  • Quality as ‘Excellence’ – a definition that sets abstract goals for institutions and academic communities of always striving to be the best, mainly taken as having elitist undertones. In post-secondary education this could mean winning Nobel prizes, attracting research funds, or recruiting the ‘best’ faculty as measured by research output and teaching evaluations. The drawback here is that this tends to exclude the work of the ‘further education’ sectors, and is not applied equally between disciplines (citation counts do not exist for historians and many other subjects).
  • Quality as ‘Meeting a pre-determined standard’ – a definition that requires only a given standard to be met, e.g. a minimum grade, basic competency, the ability to read, write, use a computer, etc. [It might also include competencies and skills, degree completion rates, time to degree completion, etc.] The drawback of this is that setting and measuring this ‘standard’ is difficult at best and idealistic at worst.
  • Quality as ‘fitness for purpose’ – in this construction of quality, we have to decide the extent to which the service or product meets the goals set – does this course or program do what it says it was going to do? Such a construction of quality allows institutions/sectors to define goals themselves, according to their mandate, and concentrates on meeting the needs of their customers (whether these be upgrading learners, graduate researchers, industry, etc.).

Quality assurance processes must address the increasing diversity of our educational systems. Distance education organizations are not the same as elite traditional universities and shouldn’t try to be. This would mean looking for different measures of quality in the Open University, for instance, than in Cambridge University. Neither one is necessarily better (depending on what they are trying to achieve), but the learning experience ought to be different, even though the intended learning outcomes may be similar; this will mean different design criteria but not necessarily different criteria for assessing the quality of the learning.

In the meantime, much more attention needs to be directed at what campus-based institutions are doing when they move to hybrid or online learning. Are they following best practices, or even better, developing innovative, better teaching methods? The design of xMOOCs, and the high drop-out rates at many two-year colleges in the USA that are new to online learning, suggest they are not.

This means that different types of institution will and should evaluate quality differently. If the goal or purpose is to develop the knowledge and skills that learners will need in a digital age, then this is the ‘standard’ by which quality should be assessed, while at the same time taking into account what we already know about general best practices in teaching. The recommendations for quality teaching in a digital age that follow in this chapter are based on these principles.

Over to you

There is so much I wanted to write here about the stupidity of the current system of institutional accreditation and internal quality assurance processes, especially but not exclusively in the United Kingdom, but this section is meant as an introduction to practical guidelines for teaching and learning. So I’ve tried to be uncharacteristically restrained in writing this section. But feedback is even more welcome than usual.

1. (a) First, are there any incorrect facts in this section? This is a large and complex topic and it is easy to get things wrong.

(b) Have I left out anything really important about assessing quality in teaching and learning?

2. One problem with this topic is that it tends to gravitate between two polarised positions: those who believe in absolute truth and those who are relativists. Absolute truthers believe that there is a God-given set of ‘quality’ standards, set primarily by elite institutions, that everyone else should strive to meet. Relativists (like myself) believe that quality is in the eye of the beholder; it all depends on what your goals are. Hence my definition of quality is set around a goal that is rather limited in one way – and extremely ambitious in another – of developing teaching methods that will help learners develop the knowledge and skills they will need in a digital age. So: any views on my definition of quality? Is it fit for purpose?

3. What do you think of the current system of (a) institutional accreditation and (b) internal quality assurance processes?

My own view is that institutional accreditation is definitely needed to protect against really incompetent or downright dishonest organisations, but, depending on the jurisdiction, it is very much an insider’s process and not very transparent, and while current accreditation processes may set minimum standards, they certainly don’t do much to improve quality in the system.

Similarly, internal quality assurance processes are far too cosy and protect the status quo. The internal program approval processes are based again on peer review of a very limited kind, often with an ‘I’ll scratch your back if you’ll scratch mine’ approach to program approval. I’ve been on a number of program reviews as an external reviewer, but rarely see any significant changes, despite sometimes scathing reviews from the externals.

And as for formal QA processes, they are the kiss of death for quality, tangling faculty and administrators in incredibly bureaucratic processes without dealing with the real issues around quality teaching and learning.

Of course, all these practices are in the name of protecting academic freedom, which is important – but surely better processes can be derived for improving quality without threatening academic freedom.

4. So lastly, is it wise for me to restrain myself from adding these types of comments in the book – or will I muddy the waters of what is to come if I do?

References and further reading

Butcher, N. and Wilson-Strydom, M. (2013) A Guide to Quality in Online Learning Dallas TX: Academic Partnerships

Butcher, N. and Hoosen, S. (2014) A Guide to Quality in Post-traditional Online Higher Education Dallas TX: Academic Partnerships

Chickering, A., and Gamson, Z. (1987) ‘Seven Principles for Good Practice in Undergraduate Education’ AAHE Bulletin, March 1987.

Clarke-Okah, W. et al. (2014) The Commonwealth of Learning Review and Improvement Model for Higher Education Institutions Vancouver BC: Commonwealth of Learning

Graham, C. et al. (2001) Seven Principles of Effective Teaching: A Practical Lens for Evaluating Online Courses The Technology Source, March/April

Jung, I. and Latchem, C. (2012) Quality Assurance and Accreditation in Distance Education and e-Learning New York/London: Routledge

Choosing design models for a digital age

Image: http://www.keepcalm-o-matic.co.uk/p/keep-calm-and-make-the-right-choice-3/

Oh dear, it appears that I missed posting the conclusion to Chapter 6, Models for Designing Teaching and Learning, of my book, ‘Teaching in a Digital Age’, so here it is:

Choosing a model

This chapter covers a range of different design models or approaches to teaching. There are many more that could have been included. However, it is clear that there is a choice of possible models, depending on a number of factors, most of which are listed in Chapter 5, Building an Effective Learning Environment.

Your choice of model will then depend very much on the context in which you are teaching. However, I have suggested that a key criterion should be the suitability of the design model for developing the knowledge and skills that learners will need in a digital age. Other critical factors will be the demands of the subject domain, characteristics of the learners you will likely be teaching, the resources available, especially in terms of supporting learners, and probably most important of all, your own views and beliefs about what constitutes ‘good teaching.’

Furthermore, the models by and large are not mutually exclusive. They can probably be mixed and matched to a certain degree, but there are limitations in doing this. Moreover, a consistent approach will be less confusing not only to learners, but also to you as a teacher or instructor.

So: how would you go about choosing an appropriate design model? I set out below in Figure 6.20 one way of doing this. I have chosen five criteria as headings along the top of the table:

  • epistemological basis: on what epistemological view of knowledge is this model based? Does the model suggest a view of knowledge as content that must be learned, or a rigid (‘correct’) way of designing learning (objectivist)? Does the model suggest that learning is a dynamic process, and that knowledge needs to be discovered and is constantly changing (constructivist)? Does the model suggest that knowledge lies in the connections and interpretations of different nodes or people on networks, and that connections matter more for creating and communicating knowledge than the individual nodes or people themselves (connectivist)? Or is the model epistemologically neutral, in that it could be used to teach from different epistemological positions?
  • 20th century learning: does this design model lead to the kind of learning that would prepare people for an industrial society, with standardised learning outcomes? Will it help identify and select a relatively small elite for higher education or senior positions in society? Does it enable learners to be easily organised into similarly performing groups?
  • 21st century learning: does the model encourage the development of the soft skills and the effective management of knowledge needed in a digital world? Does the model enable and support the appropriate educational use of the affordances of new technologies? Does it provide the kind of educational support that learners need to succeed in a volatile, uncertain, complex and ambiguous world? Does it enable and encourage learners to become global citizens?
  • academic quality: does it lead to deep understanding and transformative learning? Does it enable students to become experts in their chosen subject domain?
  • flexibility: does the model meet the needs of the diversity of learners today? Does it encourage open and flexible access to learning? Does it help teachers and instructors to adapt their teaching to ever changing circumstances?

Now these are my criteria, and you may well want to use different criteria (cost is another important factor), but I have drawn up the table this way because it has helped me consider better where I stand on the different models. Where I think the model is strong on a particular criterion, I have given it three stars, where weak, one star, and n/a for not applicable. Again, you may – no, should – rank the models differently. (See, that’s why I’m a constructivist – if I was an objectivist, I’d tell you what damned criteria to use!)

Figure 6.20 A comparison of different design models

It can be seen that the only model that ranks highly on all three criteria of 21st century learning, academic quality and flexibility is online collaborative learning. Experiential learning and agile design also score highly. Transmissive lectures come out worst. This is a pretty fair reflection of my preferences. However, if you are teaching first year civil engineering to over 500 students, your criteria and rankings will almost certainly be different from mine. So please see Figure 6.20 as a heuristic device and not a general recommendation.

Common design characteristics

It is worth noting that, once again, there is extensive research and experience pointing to the key factors to be taken into consideration in the successful implementation of teaching, whichever design model is being used. In essence we are talking about using best practices in the design of teaching. Although different design models take different approaches to teaching, a significant number of core principles in the design of teaching and learning extend across several of the models. These can be summarised as follows:

  • know your students: identify the key characteristics of the students you will be or could be teaching, and how that will influence your methods of teaching
  • know what you are trying to achieve: in any particular course or program what are the critical areas of content and the particular skills or learning outcomes that students need to achieve as a result of your teaching? What is the best way to identify and assess these desired outcomes?
  • know how students learn: what drives learning for your students? How do you engage or motivate students?  How can you best support that learning?
  • know how to implement this knowledge: What kind of learning environment do you need to create to support student learning? What design model(s) will work best for you within that environment?
  • know how to use technology to support your teaching: this is really a sub-set of the previous point, and is discussed in much more detail in other chapters
  • know what resources you have, and what can be done within the constraints you have to work with
  • ensure that the assessment of students actually measures the intended learning outcomes – and unintended ones.

Design models and the quality of teaching and learning

Lastly, the review of different models indicates some of the key issues around quality:

  • first, what students learn is more likely to be influenced by choosing an appropriate design model for the context in which you are teaching, than by focusing on a particular technology or delivery method. Technology and delivery method are more about access and flexibility and hence learner characteristics than they are about learning. Learning is affected more by pedagogy and the design of instruction.
  • second, different design models are likely to lead to different kinds of learning outcomes. This is why there is so much emphasis in this book on being clear about what knowledge and skills are needed in a digital age. These are bound to vary somewhat across different subject domains, but only to a limited degree. Understanding of content is always going to be important, but the skills of independent learning, critical thinking, innovation and creativity are even more important. Which design model is most likely to help develop these skills in your students?
  • third, quality depends not only on the choice of an appropriate design model, but also on how that approach to teaching is implemented. Online collaborative learning can be done well, or it can be done badly. The same applies to other design models. Following core design principles is critical for the successful use of any particular design model. Also there is considerable research on what the conditions are for success in using some of the newer models. The findings from such research need to be applied when implementing a particular model.
  • lastly students and teachers get better with practice. If you are moving to a new design model, give yourself (and your students) time to get comfortable with it. It will probably take two or three courses where the new model is applied before you begin to feel comfortable that it is producing the results you were hoping for. However, it is better to make some mistakes along the way than to continue to teach comfortably, but not produce the graduates that are needed in the future.

Even when we have chosen a particular design model or teaching approach, though, it still has to be implemented. The remaining chapters in this book will focus then on implementation.

Feedback, please

1. What other criteria might you have used for deciding on an appropriate model?

2. Is this the best way to make a decision about a particular design approach to teaching? If not, how would you go about it?

3. Any other comments about design models for teaching and learning? Any important ones missed?

Next

Chapter 8, on ‘Understanding Technology in Education.’ (Chapter 7 on MOOCs has already been published.)

EDEN research papers: OERs (inc. MOOCs), quality/assessment, social media, analytics and research methods


EDEN has now published a second report on my review of papers submitted to the EDEN research workshop in Oxford a couple of weeks ago. All the full papers for the workshop can be accessed here.

Main lessons (or unanswered questions) I took away:

OERs and MOOCs

  • what does awarding badges or certificates for MOOCs or other OER actually mean? For instance, will institutions give course exemptions or credits for the awards, or accept such awards for admission purposes? Or will the focus be on employer recognition? How will participants who are awarded badges know what their ‘currency’ is worth?
  • can MOOCs be designed to go beyond comprehension or networking to develop other critical 21st century skills such as critical thinking, analysis and evaluation? Can they lead to ‘transformational learning’ as identified by Kumar and Arnold (see Quality and Assessment below)?
  • are there better design models for open courses than MOOCs as currently structured? If so what would they look like?
  • is there a future for learning object repositories when nearly all academic content becomes open and online?

Quality and assessment

  • research may inform but won’t resolve policy issues
  • quality is never ‘objective’ but is value-driven
  • the level of intervention must be sustained and substantial enough to result in significant learning gains
  • there is already plenty of research indicating the conditions necessary for the successful use of online discussion forums; if these conditions are not present, learning will not take place
  • the OU’s traditional model of course design constrains the development of successful collaborative online learning.

Use of social media in open and distance learning

There were surprisingly few papers on this topic. My main takeaway:

  • the use of social media needs to be driven by sound pedagogical theory that takes into account the affordances of social media (as in Sorensen’s study described in an earlier post under course design)

Data analytics and student drop-out

  • institutions/registrars must pay attention to how student data is tagged/labeled for analytic purposes, so there is consistency in definitions, aggregation and interpretation;
  • when developing or applying an analytics software program, consideration needs to be given to the level of analysis and what potential users of the data are looking for; this means working with instructional designers, faculty and administrators from the beginning
  • analytics need to be integrated with action plans to identify and support at-risk students early
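The analytics points above can be illustrated with a deliberately simple sketch: consistently defined activity fields feeding a basic early-warning flag that an intervention plan could act on. All field names, thresholds, and data are invented for illustration; none are drawn from the workshop papers.

```python
# Hypothetical early-warning sketch: the point is that the flag is only as
# good as the consistency of the definitions behind each field.

from dataclasses import dataclass

@dataclass
class StudentActivity:
    student_id: str
    logins_last_14_days: int    # consistency matters: what counts as a 'login'?
    assignments_submitted: int
    assignments_due: int

def at_risk(a: StudentActivity) -> bool:
    """Flag students with low engagement early in the course."""
    submission_rate = (a.assignments_submitted / a.assignments_due
                       if a.assignments_due else 1.0)
    return a.logins_last_14_days < 3 or submission_rate < 0.5

cohort = [
    StudentActivity("s1", logins_last_14_days=10,
                    assignments_submitted=3, assignments_due=3),
    StudentActivity("s2", logins_last_14_days=1,
                    assignments_submitted=1, assignments_due=3),
]
flagged = [a.student_id for a in cohort if at_risk(a)]
print(flagged)  # only s2 is flagged for follow-up
```

The thresholds here are arbitrary; in practice they would be set with instructional designers, faculty and administrators, as the second bullet suggests, and the flag would trigger a support action rather than just a report.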

Research methods

Next

If these bullets interest you at all, then I strongly recommend you go and read the original papers in full – click here. My summary is of necessity personal and abbreviated and the papers provide much greater richness of context.


Conference in Crete on quality in open education


Heraklion, Crete: but the conference may not be here

What: SCOPE 2014: Changing the trajectory: quality for opening up education

‘In order to make open learning and education more relevant and feasible for organizations as well as learners, innovations have to be combined with well-proven learning traditions and flexible quality standards. In addition new models for recognition of open learning are needed: education institutions need a better understanding of how open education processes can contribute to excellent learning and high quality education provision, and certification schemes need to incorporate more flexible concepts of open education.’

Who: EFQUEL (European Foundation for Quality in e-Learning)

When: 7-9 May, 2014

Where: Somewhere on the Greek island of Crete in the Mediterranean: the exact venue will be announced soon.

How: Submissions of scientific papers related to the conference (max 8 pages) must be sent to papersubmission@efquel.org by February 10th, 2014 using the official template (see http://eif.efquel.org/call-for-papers/). Interactive workshop proposals can make use of another template also available on the website.

Tom Carey’s reflections on the HEQCO report on online learning and productivity: 2 – What we left out – and why.

©acreelman.blogspot.com, 2013


Carey, T., & Trick, D. (2013). How Online Learning Affects Productivity, Cost and Quality in Higher Education: An Environmental Scan and Review of the Literature. Toronto: Higher Education Quality Council of Ontario.

Tom Carey is one of the authors of the above study, and as an example of the best of reflective practice, he has kindly provided his thoughts about the report, now that it is finished. His first thoughts were published yesterday. This is the second part.

Tom Carey: Part II of Reflections on Researching and Writing on Emerging Developments in Online Learning

In yesterday’s guest post, I provided some reflections on the process and product of a research project for the Higher Education Quality Council of Ontario (HEQCO): How Online Learning Affects Productivity, Cost and Quality in Higher Education: An Environmental Scan and Review of the Literature. That post described the Results that Surprised in the report, from my perspective as an author. Today’s post provides some insights about what we chose to not include in the report, following the old advice of expert film editors that the most interesting scenes in a movie may be those “left behind on the cutting room floor”.

Developments We Didn’t Include

There were some emerging developments on our original target list for which we could not find compelling examples at scale: the Semantic Web, Mobile Learning, Ubiquitous Connectivity, etc. I am sure these are going to be important, but in the interests of preserving the Teachable Moment aspects we focused only on developments with convincing data for impacts on learning outcomes and productivity: convincing in the context of Ontario higher education institutions. For example, the Ithaka study of the Open Learning Initiative software allowed us to highlight Adaptive Learning Systems at scale (and the recent follow-up book by William Bowen contains several other insights we could cite if we were starting now).

Similarly, the report only deals with Open Educational Resources as a sideline in the discussion of Open and Affordable Textbooks: the rationale was that the textbook developments were a hot topic in “peer” higher education systems ‒ British Columbia, California, New York, etc. ‒ and represented low-hanging fruit in terms of potential for building collaborations amongst students, faculty and academic leaders within institutions. And we didn’t do any justice to the Canadian developments in connectivist MOOCs, mostly because we had our hands full trying to help our target audience make sense of the instructionist MOOCs that were hogging the headlines, and we couldn’t work out how to get beyond that without losing their attention. (I have already apologized to George for this: Stephen, Dave et al can consider themselves included in the apology.) These choices about which innovations to highlight may have bypassed the disruptive in favour of the radical, but helping decision makers to make sense of – and act on – opportunities for radical change was more than enough for us to bite off.

Issues We Couldn’t Include

Some of the content we wrote but could not include in the report was just not clear enough or complete enough to be included in the public document. For a few topics, we were keenly aware that more work had to be done but that we had not made sense of what that work might be. We didn’t want to go on at length in the report about these points for several reasons: calling for further research sounds like too familiar an ending to a Research Report, including more than a quick mention for what is not yet clear seemed to detract from our Teachable Moment goal, and some of the further exploration needed would be an outcome of our proposed Call for Action through collaborative sense-making across institutions.  For those interested in ‘where to next’ in terms of understanding the impact of emerging developments, here is my personal list of high priority issues that need more clarity.

The different roles of various online learning interactions in various contexts: I would like to have referenced the work by Terry Anderson and others on how increases in one form of learning interaction can result in a decreased need for another type of interaction. This was implicit in our Call to Action around understanding and leveraging scalability: use more scalable interactions where appropriate in order to redirect resources – especially time – into other interactions which are less readily scaled.

Here is my current woefully incomplete attempt to reframe our analysis of emerging developments in online learning using different types of learning interactions – whether online or not:

  • Learner-content interactions can be used effectively to advance Quality and Productivity for technical mastery outcomes, e.g., performance tasks with single solutions and predictable pathways to completion (allowing adaptive systems to provide task guidance)
  • Learner-learner interactions can be used effectively to advance Quality and Productivity for some of the question-and-answer and formative feedback roles traditionally carried out through learner-instructor interactions, and seem to be essential (at the moment?) for outcomes involving complex challenges with diverse ways of knowing.
  • Learner-instructor interactions appear to be essential for outcomes involving deep personal change related to learning itself:  grappling with threshold concepts, enhancing learning mindsets and strategies, and ‘getting better at getting better’ for knowledge-intensive work
  • Learner-expert interactions are required for formation of learners’ identity and practice as members of knowledge-building communities, whether in professional/career contexts or in their roles as community members and global citizens.

Much more work to be done in this area, including ensuring that the outcomes listed above that are not readily scaled don’t get left out in the quest for greater productivity:  if we neglect such outcomes, where would the ‘higher’ be in higher education?

Institutional productivity gains may be possible at scale in traditional institutions: you may have noticed that the list of interactions above has unbundled the role of “instructor” (who can apply expertise in pedagogical content knowledge) from the role of “practice expert” who can help learners transition into full engagement with knowledge-practice networks. Traditional institutions may struggle to unbundle such roles, or even to respect their differing contributions.

Traditional institutions – and in Ontario higher education, that is all we have at the moment ‒ may also struggle to reinvest the results of productivity gains from online learning beyond a course context. As long as we think of ‘workload’ in terms of ‘courses taught’, any savings in effort may disappear into other localized activities. How can we reframe the work, and workload, of teaching to optimize educator and learner time, without resorting to an alien ‘managerial’ language? (Mention “activity-based costing” in a budget meeting and the challenges to reinvesting productivity gains become all too clear.)

I tried adapting the idea of Constructive Alignment from instructional design, with an expansion into “Productive Alignment” where educators also include in their designs the goal of optimizing resource usage. If certain students can achieve certain learning outcomes with reduced learner-instructor interactions, e.g., with MOOC resources used in a hybrid course format, then effective instructional design requires that we achieve this Productive Alignment to optimize time and resources. I couldn’t explain this notion effectively enough to include it in the report, but I am convinced that some such changes in the ways we talk about educator roles and responsibilities are going to be needed if the full potential of online learning is to be achieved. And this is going to be both more necessary and more difficult with the ‘higher’ learning experiences and outcomes listed above, which develop slowly over the course of a program and are not readily described as discrete competencies to be tested in a short-term performance task.

Dealing with Quality and Productivity in tandem is a fiscal, political and pedagogical necessity: finally, I wish we had been able to make a better case for the pedagogical rationale for dealing with Scalability, Quality and Productivity issues in parallel. We did include some rationale for determined action now on systematic collaborations across institutions to understand and leverage the emerging potential of online learning.

However, that argument was framed mostly in terms of fiscal and political realities. The fiscal reality across higher education systems requires that we get more focused on deploying the least resources to achieve the highest level of outcomes, and the political reality requires that we in public higher education either ‘do or be done to’ in our dealings with Quality and Productivity.

But there is another rationale that is more closely linked to our purposes and ideals in higher education. Even if we did not have our current fiscal constraints and the expectations of stricter constraints in the future…even if public higher education had the full confidence of political leaders as to our ability to change and adapt to our changing circumstances…if our students see us clinging to traditional practices and structures rather than taking on our challenges with boldness and confidence, what model are we presenting to them about how to deal with challenges in their workplaces, their families and communities, in the earth’s environment and the global knowledge economy?  Will our plea for engagement with the knowledge and wisdom of the past, present and future fall on deaf ears if we don’t practice what we preach?

Making that case was beyond our reach in this project, but it remains on my personal to-do list. I keep thinking of Parker Palmer’s concise formulation in The Courage to Teach: “How we teach is a critical part of what we teach”. That is the pedagogical rationale for our taking charge of higher education’s fate by applying emerging knowledge and wisdom about online learning…with care, compassion and courage.