December 22, 2014

Choosing design models for a digital age

Image: http://www.keepcalm-o-matic.co.uk/p/keep-calm-and-make-the-right-choice-3/

Oh, dear, it appears that I missed posting the conclusion to Chapter 6, Models for Designing Teaching and Learning, of my book, ‘Teaching in a Digital Age’, so here it is:

Choosing a model

This chapter covers a range of different design models or approaches to teaching, and many more could have been included. It is clear, though, that there is a choice of possible models, depending on a number of factors, most of which are listed in Chapter 5, Building an Effective Learning Environment.

Your choice of model will therefore depend very much on the context in which you are teaching. However, I have suggested that a key criterion should be the suitability of the design model for developing the knowledge and skills that learners will need in a digital age. Other critical factors will be the demands of the subject domain, the characteristics of the learners you are likely to be teaching, the resources available (especially for supporting learners), and, probably most important of all, your own views and beliefs about what constitutes ‘good teaching.’

Furthermore, the models by and large are not mutually exclusive. They can probably be mixed and matched to a certain degree, but there are limitations in doing this. Moreover, a consistent approach will be less confusing not only to learners, but also to you as a teacher or instructor.

So: how would you go about choosing an appropriate design model? I set out below in Figure 6.20 one way of doing this. I have chosen five criteria as headings along the top of the table:

  • epistemological basis: on what epistemological view of knowledge is the model based? Does the model treat knowledge as content that must be learned, and suggest a rigid (‘correct’) way of designing learning (objectivist)? Does the model suggest that learning is a dynamic process, and that knowledge needs to be discovered and is constantly changing (constructivist)? Does the model suggest that knowledge lies in the connections and interpretations of different nodes or people on networks, and that connections matter more for creating and communicating knowledge than the individual nodes or people on the network (connectivist)? Or is the model epistemologically neutral, in that the same model could be used to teach from different epistemological positions?
  • 20th century learning: does this design model lead to the kind of learning that would prepare people for an industrial society, with standardised learning outcomes? Will it help identify and select a relatively small elite for higher education or senior positions in society? Does it enable learning to be easily organised into similarly performing groups of learners?
  • 21st century learning: does the model encourage the development of the soft skills and the effective management of knowledge needed in a digital world? Does the model enable and support the appropriate educational use of the affordances of new technologies? Does it provide the kind of educational support that learners need to succeed in a volatile, uncertain, complex and ambiguous world? Does it enable and encourage learners to become global citizens?
  • academic quality: does it lead to deep understanding and transformative learning? Does it enable students to become experts in their chosen subject domain?
  • flexibility: does the model meet the needs of the diversity of learners today? Does it encourage open and flexible access to learning? Does it help teachers and instructors to adapt their teaching to ever changing circumstances?

Now these are my criteria, and you may well want to use different ones (cost is another important factor), but I have drawn up the table this way because it has helped me consider better where I stand on the different models. Where I think a model is strong on a particular criterion, I have given it three stars; where weak, one star; and n/a for not applicable. Again, you may – no, should – rank the models differently. (See, that’s why I’m a constructivist – if I were an objectivist, I’d tell you what damned criteria to use!)

Figure 6.20 A comparison of different design models

It can be seen that the only model that ranks highly on all three criteria of 21st century learning, academic quality and flexibility is online collaborative learning. Experiential learning and agile design also score highly. Transmissive lectures come out worst. This is a pretty fair reflection of my preferences. However, if you are teaching first year civil engineering to over 500 students, your criteria and rankings will almost certainly be different from mine. So please see Figure 6.20 as a heuristic device and not a general recommendation.
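If you want to make that kind of weighting explicit, here is one way the exercise could be done programmatically. This is a minimal sketch, not part of the book: the models, star values and weights below are illustrative placeholders, and you should substitute your own criteria and rankings.

```python
# Hypothetical sketch: turning star ratings like those in Figure 6.20
# (3 = strong, 1 = weak, None = n/a) into a weighted score.
# All models, criteria, ratings and weights below are illustrative
# placeholders -- substitute your own.

ratings = {
    "online collaborative learning": {"21st century": 3, "academic quality": 3, "flexibility": 3},
    "experiential learning":         {"21st century": 3, "academic quality": 3, "flexibility": 2},
    "transmissive lectures":         {"21st century": 1, "academic quality": 1, "flexibility": 1},
}

# Weights should reflect your own teaching context and priorities.
weights = {"21st century": 0.5, "academic quality": 0.3, "flexibility": 0.2}

def score(model_ratings: dict, weights: dict) -> float:
    """Weighted average over the criteria that apply (n/a criteria are skipped)."""
    applicable = {c: r for c, r in model_ratings.items() if r is not None}
    total_weight = sum(weights[c] for c in applicable)
    return sum(weights[c] * r for c, r in applicable.items()) / total_weight

# Rank the models, highest score first.
for model in sorted(ratings, key=lambda m: score(ratings[m], weights), reverse=True):
    print(f"{model}: {score(ratings[model], weights):.2f}")
```

The point of the sketch is the same as the table’s: the ranking changes as soon as the weights do, which is why the figure is a heuristic device and not a recommendation.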

Common design characteristics

It is worth noting that, once again, there is extensive research and experience pointing to the key factors to be taken into consideration for the successful implementation of teaching, whichever design model is used. In essence, we are talking about using best practices in the design of teaching. Although different design models take different approaches to teaching, a significant number of core principles in the design of teaching and learning extend across several of the models. These can be summarised as follows:

  • know your students: identify the key characteristics of the students you will be or could be teaching, and how that will influence your methods of teaching
  • know what you are trying to achieve: in any particular course or program what are the critical areas of content and the particular skills or learning outcomes that students need to achieve as a result of your teaching? What is the best way to identify and assess these desired outcomes?
  • know how students learn: what drives learning for your students? How do you engage or motivate students?  How can you best support that learning?
  • know how to implement this knowledge: What kind of learning environment do you need to create to support student learning? What design model(s) will work best for you within that environment?
  • know how to use technology to support your teaching: this is really a sub-set of the previous point, and is discussed in much more detail in other chapters
  • know what resources you have, and what can be done within the constraints you have to work with
  • ensure that the assessment of students actually measures the intended learning outcomes – and unintended ones.

Design models and the quality of teaching and learning

Lastly, the review of different models indicates some of the key issues around quality:

  • first, what students learn is more likely to be influenced by choosing an appropriate design model for the context in which you are teaching, than by focusing on a particular technology or delivery method. Technology and delivery method are more about access and flexibility and hence learner characteristics than they are about learning. Learning is affected more by pedagogy and the design of instruction.
  • second, different design models are likely to lead to different kinds of learning outcomes. This is why there is so much emphasis in this book on being clear about what knowledge and skills are needed in a digital age. These are bound to vary somewhat across different subject domains, but only to a limited degree. Understanding of content is always going to be important, but the skills of independent learning, critical thinking, innovation and creativity are even more important. Which design model is most likely to help develop these skills in your students?
  • third, quality depends not only on the choice of an appropriate design model, but also on how that approach to teaching is implemented. Online collaborative learning can be done well, or it can be done badly; the same applies to other design models. Following core design principles is critical for the successful use of any particular model. There is also considerable research on the conditions for success in using some of the newer models, and its findings need to be applied when implementing a particular model.
  • lastly, students and teachers get better with practice. If you are moving to a new design model, give yourself (and your students) time to get comfortable with it. It will probably take two or three courses in which the new model is applied before you begin to feel confident that it is producing the results you were hoping for. However, it is better to make some mistakes along the way than to continue to teach comfortably but not produce the graduates that are needed in the future.

Even when we have chosen a particular design model or teaching approach, though, it still has to be implemented. The remaining chapters in this book will therefore focus on implementation.

Feedback, please

1. What other criteria might you have used for deciding on an appropriate model?

2. Is this the best way to make a decision about a particular design approach to teaching? If not, how would you go about it?

3. Any other comments about design models for teaching and learning? Any important ones missed?

Next

Chapter 8, on ‘Understanding Technology in Education.’ (Chapter 7 on MOOCs has already been published.)

EDEN research papers: OERs (inc. MOOCs), quality/assessment, social media, analytics and research methods


EDEN has now published a second report on my review of papers submitted to the EDEN research workshop in Oxford a couple of weeks ago. All the full papers for the workshop can be accessed here.

Main lessons (or unanswered questions) I took away:

OERs and MOOCs

  • what does awarding badges or certificates for MOOCs or other OER actually mean? For instance, will institutions give course exemptions or credits for the awards, or accept such awards for admission purposes? Or will the focus be on employer recognition? How will participants who are awarded badges know what their ‘currency’ is worth?
  • can MOOCs be designed to go beyond comprehension or networking to develop other critical 21st century skills such as critical thinking, analysis and evaluation? Can they lead to ‘transformational learning’ as identified by Kumar and Arnold (see Quality and Assessment below)?
  • are there better design models for open courses than MOOCs as currently structured? If so, what would they look like?
  • is there a future for learning object repositories when nearly all academic content becomes open and online?

Quality and assessment

  • research may inform but won’t resolve policy issues
  • quality is never ‘objective’ but is value-driven
  • an intervention must be long and substantial enough to result in significant learning gains
  • there is already a lot of research indicating the necessary conditions for the successful use of online discussion forums, but if these conditions are not present, learning will not take place
  • the OU’s traditional model of course design constrains the development of successful collaborative online learning.

Use of social media in open and distance learning

There were surprisingly few papers on this topic. My main takeaway:

  • the use of social media needs to be driven by sound pedagogical theory that takes into account the affordances of social media (as in Sorensen’s study described in an earlier post under course design)

Data analytics and student drop-out

  • institutions/registrars must pay attention to how student data is tagged/labeled for analytic purposes, so there is consistency in definitions, aggregation and interpretation;
  • when developing or applying an analytics software program, consideration needs to be given to the level of analysis and what potential users of the data are looking for; this means working with instructional designers, faculty and administrators from the beginning
  • analytics need to be integrated with action plans to identify and support at-risk students early (a hypothetical sketch of such a rule follows below)
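To make that last point concrete, here is a minimal, entirely hypothetical sketch of how consistently tagged activity data might feed a simple early-warning rule that is tied to an action plan rather than just a report. The field names and thresholds are my own assumptions for illustration, not any institution’s actual schema.

```python
# Hypothetical sketch of an early-warning rule built on consistently
# tagged student activity data. The field names and thresholds are
# illustrative assumptions, not any real institution's schema.

from dataclasses import dataclass

@dataclass
class StudentActivity:
    student_id: str
    logins_last_14_days: int
    assignments_submitted: int
    assignments_due: int
    forum_posts: int

def at_risk(a: StudentActivity) -> bool:
    """Flag a student when two or more disengagement signals co-occur."""
    signals = [
        a.assignments_submitted < a.assignments_due,  # falling behind on work
        a.logins_last_14_days < 3,                    # low presence in the LMS
        a.forum_posts == 0,                           # no participation
    ]
    return sum(signals) >= 2

def action_plan(a: StudentActivity) -> str:
    """Tie the flag to a concrete follow-up, not just a dashboard entry."""
    if at_risk(a):
        return f"Refer {a.student_id} to the instructor for a check-in this week."
    return f"No action needed for {a.student_id}."

print(action_plan(StudentActivity("s001", logins_last_14_days=1,
                                  assignments_submitted=2,
                                  assignments_due=4, forum_posts=0)))
```

Note that the rule only works if the underlying fields mean the same thing across courses and systems, which is exactly the consistency-of-tagging point in the first bullet above.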

Research methods

Next

If these bullets interest you at all, then I strongly recommend you go and read the original papers in full – click here. My summary is of necessity personal and abbreviated and the papers provide much greater richness of context.

Conference in Crete on quality in open education


Heraklion, Crete: but the conference may not be here

What: SCOPE 2014: Changing the trajectory: quality for opening up education

‘In order to make open learning and education more relevant and feasible for organizations as well as learners, innovations have to be combined with well-proven learning traditions and flexible quality standards. In addition new models for recognition of open learning are needed: education institutions need a better understanding of how open education processes can contribute to excellent learning and high quality education provision, and certification schemes need to incorporate more flexible concepts of open education.’

Who: EFQUEL (European Foundation for Quality in e-Learning)

When: 7-9 May, 2014

Where: Somewhere on the Greek island of Crete in the Mediterranean: the exact venue will be announced soon 

How: Submissions of scientific papers related to the conference (max 8 pages) must be sent to papersubmission@efquel.org by February 10th, 2014 using the official template (see http://eif.efquel.org/call-for-papers/). Interactive workshop proposals can make use of another template also available on the website.

Tom Carey’s reflections on the HEQCO report on online learning and productivity: 2 – What we left out – and why.

©acreelman.blogspot.com, 2013

Carey, T., & Trick, D. (2013). How Online Learning Affects Productivity, Cost and Quality in Higher Education: An Environmental Scan and Review of the Literature. Toronto: Higher Education Quality Council of Ontario.

Tom Carey is one of the authors of the above study, and as an example of the best of reflective practice, he has kindly provided his thoughts about the report, now that it is finished. His first thoughts were published yesterday. This is the second part.

Tom Carey: Part II of Reflections on Researching and Writing on Emerging Developments in Online Learning

In yesterday’s guest post, I provided some reflections on the process and product of a research project for the Higher Education Quality Council of Ontario (HEQCO): How Online Learning Affects Productivity, Cost and Quality in Higher Education: An Environmental Scan and Review of the Literature. That post described the Results that Surprised in the report, from my perspective as an author. Today’s post provides some insights about what we chose to not include in the report, following the old advice of expert film editors that the most interesting scenes in a movie may be those “left behind on the cutting room floor”.

Developments We Didn’t Include

There were some emerging developments on our original target list for which we could not find compelling examples at scale: the Semantic Web, Mobile Learning, Ubiquitous Connectivity, etc. I am sure these are going to be important, but in the interests of preserving the Teachable Moment aspects we focused only on developments with convincing data for impacts on learning outcomes and productivity: convincing in the context of Ontario higher education institutions. For example, the Ithaka study of the Open Learning Initiative software allowed us to highlight Adaptive Learning Systems at scale (and the recent follow-up book by William Bowen contains several other insights we could cite if we were starting now).

Similarly, the report only deals with Open Educational Resources as a sideline in the discussion of Open and Affordable Textbooks: the rationale was that the textbook developments were a hot topic in “peer” higher education systems ‒ British Columbia, California, New York, etc. ‒ and represented low-hanging fruit in terms of potential for building collaborations amongst students, faculty and academic leaders within institutions. And we didn’t do any justice to the Canadian developments in connectivist MOOCs, mostly because we had our hands full trying to help our target audience make sense of the instructionist MOOCs that were hogging the headlines, and couldn’t work out how to get beyond that without losing their attention. (I have already apologized to George for this; Stephen, Dave et al. can consider themselves included in the apology.) These choices about which innovations to highlight may have bypassed the disruptive in favour of the radical, but helping decision makers make sense of – and act on – opportunities for radical change was more than enough for us to bite off.

Issues We Couldn’t Include

Some of the content we wrote but could not include in the report was just not clear enough or complete enough for the public document. For a few topics, we were keenly aware that more work had to be done, but we had not yet made sense of what that work might be. We didn’t want to go on at length in the report about these points, for several reasons: calling for further research sounds like too familiar an ending to a Research Report; including more than a quick mention of what is not yet clear seemed to detract from our Teachable Moment goal; and some of the further exploration needed would be an outcome of our proposed Call for Action through collaborative sense-making across institutions. For those interested in ‘where to next’ in terms of understanding the impact of emerging developments, here is my personal list of high-priority issues that need more clarity.

The different roles of various online learning interactions in various contexts: I would like to have referenced the work by Terry Anderson and others on how increases in one form of learning interaction can result in a decreased need for another type of interaction. This was implicit in our Call to Action around understanding and leveraging scalability: use more scalable interactions where appropriate in order to redirect resources – especially time – into other interactions which are less readily scaled.

Here is my current woefully incomplete attempt to reframe our analysis of emerging developments in online learning using different types of learning interactions – whether online or not:

  • Learner-content interactions can be used effectively to advance Quality and Productivity for technical mastery outcomes, e.g., performance tasks with single solutions and predictable pathways to completion (allowing adaptive systems to provide task guidance)
  • Learner-learner interactions can be used effectively to advance Quality and Productivity for some of the question-and-answer and formative feedback roles traditionally carried out through learner-instructor interactions, and seem to be essential (at the moment?) for outcomes involving complex challenges with diverse ways of knowing.
  • Learner-instructor interactions appear to be essential for outcomes involving deep personal change related to learning itself:  grappling with threshold concepts, enhancing learning mindsets and strategies, and ‘getting better at getting better’ for knowledge-intensive work
  • Learner-expert interactions are required for formation of learners’ identity and practice as members of knowledge-building communities, whether in professional/career contexts or in their roles as community members and global citizens.

Much more work to be done in this area, including ensuring that the outcomes listed above that are not readily scaled don’t get left out in the quest for greater productivity:  if we neglect such outcomes, where would the ‘higher’ be in higher education?

Institutional productivity gains may be possible @ scale in traditional institutions: you may have noticed that the list of interactions above has unbundled the role of “instructor” (who can apply expertise in pedagogical content knowledge) from the role of “practice expert” who can help learners transition into full engagement with knowledge-practice networks. Traditional institutions may struggle to unbundle such roles, or even to respect their differing contributions.

Traditional institutions – and in Ontario higher education, that is all we have at the moment ‒ may also struggle to reinvest the results of productivity gains from online learning beyond a course context. As long as we think of ‘workload’ in terms of ‘courses taught’, any savings in effort may disappear into other localized activities. How can we reframe the work, and workload, of teaching to optimize educator and learner time, without resorting to an alien ‘managerial’ language? (Mention “activity-based costing” in a budget meeting and the challenges to reinvesting productivity gains become all too clear!)

I tried adapting the idea of Constructive Alignment from instructional design, with an expansion into “Productive Alignment” where educators also include in their designs the goal of optimizing resource usage. If certain students can achieve certain learning outcomes with reduced learner-instructor interactions, e.g., with MOOC resources used in a hybrid course format, then effective instructional design requires that we achieve this Productive Alignment to optimize time and resources. I couldn’t explain this notion effectively enough to include it in the report, but I am convinced that some such changes in the ways we talk about educator roles and responsibilities are going to be needed if the full potential of online learning is to be achieved. And this is going to be both more necessary and more difficult with the ‘higher’ learning experiences and outcomes listed above, which develop slowly over the course of a program and are not readily described as discrete competencies to be tested in a short-term performance task.

Dealing with Quality and Productivity in tandem is a fiscal, political and pedagogical necessity: finally, I wish we had been able to make a better case for the pedagogical rationale for dealing with Scalability, Quality and Productivity issues in parallel. We did include some rationale for determined action now on systematic collaborations across institutions to understand and leverage the emerging potential of online learning.

However, that argument was framed mostly in terms of fiscal and political realities. The fiscal reality across higher education systems requires that we get more focused on deploying the least resources to achieve the highest level of outcomes, and the political reality requires that we in public higher education either ‘do or be done to’ in our dealings with Quality and Productivity.

But there is another rationale that is more closely linked to our purposes and ideals in higher education. Even if we did not have our current fiscal constraints and the expectations of stricter constraints in the future…even if public higher education had the full confidence of political leaders as to our ability to change and adapt to our changing circumstances…if our students see us clinging to traditional practices and structures rather than taking on our challenges with boldness and confidence, what model are we presenting to them about how to deal with challenges in their workplaces, their families and communities, in the earth’s environment and the global knowledge economy?  Will our plea for engagement with the knowledge and wisdom of the past, present and future fall on deaf ears if we don’t practice what we preach?

Making that case was beyond our reach in this project, but it remains on my personal to-do list. I keep thinking of Parker Palmer’s concise formulation in The Courage to Teach: “How we teach is a critical part of what we teach”. That is the pedagogical rationale for our taking charge of higher education’s fate by applying emerging knowledge and wisdom about online learning…with care, compassion and courage.

Tom Carey’s reflections on the HEQCO report on online learning and productivity: 1-Catching a teachable moment

© Leblanc, P., EDUCAUSE, 2013

Carey, T., & Trick, D. (2013). How Online Learning Affects Productivity, Cost and Quality in Higher Education: An Environmental Scan and Review of the Literature. Toronto: Higher Education Quality Council of Ontario.

Tom Carey is one of the authors of the above study, and as an example of the best of reflective practice, he has kindly provided his thoughts about the report, now that it is finished. The reflection is in two parts. The second part will follow tomorrow.

Tom Carey:

Tony’s last post presented a summary and response to a report authored by David Trick and myself for the Higher Education Quality Council of Ontario (HEQCO): How Online Learning Affects Productivity, Cost and Quality in Higher Education: An Environmental Scan and Review of the Literature. In parallel, I prepared this guest post with my reflections on what surprised me during the process of researching and writing the Environmental Scan of emerging developments in online learning. Tomorrow’s guest post will reflect on what we did not or could not include in the final version – partly because we were left with a lot of sense-making still to do by the time we had hit our limits for pages and time (looking for help here, folks).

David may perhaps want to chime in on the surprises and omissions of the Literature Review where he so capably took the lead. Tony was also involved as an expert advisor, along with George Siemens and Ed Walker – our thanks again to all of you. We hope that our complementary perspectives will spark some more dialogue about these important issues: there have also been comments from Terry Anderson and visual notes by Giulia Forsythe.

Results that Surprised

My colleague Carl Bereiter pointed out to me long ago that the most interesting question to ask about a research project concerns the “results that surprise” – I think this was his opening query at every Ph.D. thesis defense. As someone with a long involvement in online learning, OER and distance education, here is my list of surprises:

We didn’t end up writing a research report: I’d better clarify that for our helpful partners at HEQCO, who sponsored the report and supported us in the process. We did write a Research Report as stated in the contract, but we found the decisions we made about content and tone were influenced equally by the mandate for a Research Report, the opportunity for a Teachable Moment and the growing sense that a Call for Collective Action through collaboration across institutions was needed.

The Teachable Moment came in part from the clamour about MOOCs, where the participation of prominent institutions had caused a sudden jump in attention – and perhaps even credibility – for online learning amongst some academic and political leaders. We tried to seize that moment, however brief it might be, to make the case for online learning as an ally in the challenges faced by higher education across the world: how to educate more students, with deeper and more lasting learning outcomes, in the face of fiscal constraints driven by demographic and economic factors over which we had no control. (More on the Call for Collective Action below…)

It seemed to help if we presented MOOCs as a cumulative development: as we worked on the report, we came to the conclusion that one way to make sense of MOOCs was to consider them as the aggregation of several other factors with longer histories and more evidence of success. The report positions MOOCs (the instructionist type, at least) as building on the other developments in online learning that we highlighted: Affordable Online Texts, Adaptive Systems, Learning Analytics and a variety of approaches to optimize student-instructor interactions. Framing MOOCs this way, as leveraging all of these advances at once to drive down the marginal cost of more students, seemed to reduce the magic of offering courses without charge. We make no claim that this is a complete portrayal of the MOOC phenomenon as of May 2013 – it certainly was not complete – and leave it to readers to assess how helpful that particular perspective may be in diminishing the image of MOOCs as a totally new (and alien?) invention.

There are still open questions about ‘significant difference’: as someone who had institutional oversight for online learning, I was already familiar with the body of research demonstrating that online learners could do at least as well as their counterparts in traditional courses. There were some caveats of course, and David summarized very clearly what the research said about what kinds of students might be best able to take advantage of online learning.  While data at the course level was available, we did not find similar evidence about ‘no significant difference’ at the program level…or for deeper learning outcomes that do not lend themselves to large scale objective measures that can be replicated across different student cohorts.

Of course, a big part of that lack of data derives from the difficulty of measuring such outcomes within a program – let alone across different ways to offer and support programs over time. It happened that as we were wrestling with these issues I was also working on program-level outcomes with several professional schools at a university on Canada’s west coast. Many of the outcomes we were trying to define and assess, such as ‘professional formation’ and ‘epistemic fluency’ (a term I learned about from Peter Goodyear of the University of Sydney), struck me as requiring interactions quite different from any of the course examples where ‘no significant difference’ results had appeared. Steve Gilbert of the TLT Group shared a label for this: The Learning That Matters Most. I describe below how this evolved into The Learning That Scales Least (at least as far as our current evidence shows).

The concluding Call to Collective Action was about scalability much more than technology: my thanks go out to the members of the Ontario Universities Council on E-Learning for getting on to me about this when I presented the report’s conclusions to them earlier this month. They convinced me to soften the focus on online learning in the concluding recommendations, in favour of a more level playing field around the evidence – at this moment and subject to change – about scaling up teaching and learning environments. If I were writing that part of the report now, my recommendation for the collaborative action required across the colleges and universities in the province would be something like this (but more concise, clearer, etc.):

We need to work together on understanding and leveraging emerging developments to scale up learning experiences ‒ wherever appropriate ‒ so that the resulting gains in productivity will allow us to sustain and advance the student outcomes requiring other kinds of learning experiences (that are not readily scaled).

The “collective” part of this conclusion came out of our difficulty in making sense of the emerging developments that we had highlighted: to paraphrase race car driver Mario Andretti, we concluded that “if you think you know what is happening, you’re not going deep enough”. We didn’t put anything that folksy in the report, but we did conclude that we couldn’t get much further on definitive statements about “where are MOOCs going” and the like. If we couldn’t make sense of emerging directions, perhaps we could “make sense of making sense”: outline a course of action in which institutions could collaborate to share the risks of the necessary investment in systematic experimentation. (There is more on this Call to Action in tomorrow’s guest post on Themes We Couldn’t Include…).

Thanks, Tom!