À l’école, Jean Marc Côté, 1901.

In this post, I attempt to examine what Sanjay Sarma of MIT calls ‘the magic of the campus’: how at least some of this can be created in an online environment, and in particular whether or not it can or should be scaled to increase productivity.

The story so far

This is a continuation of the exploration of the potential for online learning to increase educational ‘productivity’, building on the previous posts in this series.

In the last post, I disaggregated the activities encompassed in online teaching, as a first step in identifying those that are easily scaled (and hence can lead to reductions in unit costs), and those that are either currently difficult to scale, or indeed should NOT be scaled.

I broke online teaching down into the following activities:

  1. Content development and delivery (the topic of the previous post)
  2. Student activities
  3. Learner support
  4. Assessment
  5. Planning, administration and overheads

I argued in my previous post that content development and delivery offered major economies of scale:

  • online content development and delivery is already resulting in increased productivity in post-secondary education, although it has yet to be well documented and publicised (except for MOOCs)
  • at the same time, there is still room for even greater increases in productivity through online course development and delivery, especially through the use of open educational resources and sharing of content across different institutions
  • online content development and delivery is only one component of online teaching; other components such as learner support and assessment are even more important
  • care is needed, then, because productivity changes in methods of online content development and delivery could have knock-on cost and productivity consequences in other areas of course delivery, such as learner support and assessment; in looking at productivity issues, all these factors need to be examined together

Supporting student learning

Why learner support is critical for productivity

Before we can start on this topic, it needs to be recognized that students vary enormously in their need for support in learning. Many lifelong learners, who have already been through post-secondary education and have families, careers and a great deal of life experience, can be self-managed, autonomous learners, identifying what they need to learn and how best to do it. At the other extreme, there are students for whom the K-12 system was a disaster, who lack basic learning skills or foundations, such as reading, writing and mathematics, and therefore lack confidence in learning; these students will need a lot of support to succeed. At the same time, there will be exceptions in both categories: some lifelong learners will need a lot of support, while some who have otherwise failed in the formal education system will nevertheless have the confidence and determination to succeed, given a second chance, mainly through their own efforts.

There are also different attitudes from instructors and institutions towards the need for learner support. Some faculty may believe that ‘It’s my job to instruct and yours to learn’ – in other words, once students are presented with the necessary content through lectures or reading, the rest is up to them.

Nevertheless, the reality is that in a system of mass higher education, faculty will have to deal with students with a wide range of needs in terms of learner support, unless we are willing to sacrifice the future of many thousands of learners – which certainly is not productive. Thus a productive educational system will focus particularly on enabling as many students as possible to succeed (which of course is why course completion and graduation rates are so important).

Different types of learner support

Tom Carey in his reflections on the HEQCO productivity report (of which he was one of the authors), reframed his analysis of emerging developments in online learning using different types of learning interactions:

  • Learner-content interactions can be used effectively to advance quality and productivity for technical mastery outcomes, e.g., performance tasks with single solutions and predictable pathways to completion (allowing adaptive systems to provide task guidance)
  • Learner-learner interactions can be used effectively to advance quality and productivity for some of the question-and-answer and formative feedback roles traditionally carried out through learner-instructor interactions, and seem to be essential (at the moment?) for outcomes involving complex challenges with diverse ways of knowing.
  • Learner-instructor interactions appear to be essential for outcomes involving deep personal change related to learning itself:  grappling with threshold concepts, enhancing learning mindsets and strategies, and ‘getting better at getting better’ for knowledge-intensive work
  • Learner-expert interactions are required for formation of learners’ identity and practice as members of knowledge-building communities, whether in professional/career contexts or in their roles as community members and global citizens.

Note that these four categories of interactions are not specific to online learning, but are also usually found on campuses. Tom, though, speculates as to how online learning could lead to more productive ways of providing this support.

Can you scale up ‘real’ learning online?

I think this is a good starting point but I would like to take the discussion further, and in particular focus on the point that Tom and David Trick raise in the HEQCO main report (p. 42):

We conclude that the purpose of adopting online learning should be to preserve and sustain what we value most in higher education: instruction that enables learners to develop new ways of knowing – and doing and being – that will prepare them to face the challenges of our times. This may at first seem paradoxical, since much of this “learning that matters most” may be the least amenable to scale up with online learning.

The concern is that the means by which higher education enables ‘new ways of knowing’ (mentoring, one-on-one discussion and argument between professor and student, individualized instruction and informal assessment as an ongoing process, and student ‘bonding’ over intellectual and other pursuits, such as experimental and engineering design) cannot be scaled up through technology, although much of it may be replicated in online learning, just not at scale.

The problem here is that it is essential to be clear about what the magic of the campus really is, before even beginning to discuss whether it can be reproduced online and then scaled up. (I don’t want to get into the discussion of whether the magic of the campus is actually present on many campuses – I have been to too many campuses which are solely commuter campuses with lectures and little else. However, the issue here is about scaling up quality higher education, which for the sake of argument I will assume includes ‘qualitative’ activities such as personal mentoring.)

I will therefore work through the four types of learner support, with an analysis of whether they can support higher-level learning outcomes online and at scale, and if so, how. This post focuses on the first form of interaction.

Learner-content interactions

Computer-marked feedback

Psychologists such as Skinner have shown, since the 1930s, that comprehension and memorization can be improved through immediate feedback. Distance educators in the 1960s started building short self-assessment questions into print-based learning materials (with correct or model answers on separate pages), and school textbooks also frequently use such methods. Research also showed that self-assessment questions need to be spaced regularly, but that too many self-assessment activities become counter-productive, with students skipping them. Designers of MOOCs have recently ‘rediscovered’ that immediate student feedback can also be provided online, and that students like it. The advantage of online self-assessment is that the answers are ‘hidden’ until the student attempts an answer; research has shown this requires more effort and results in greater learning gains than jumping straight to the answer. There are of course huge economies of scale: once the self-assessment item is designed, it can be used by an unlimited number of learners.
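
To make the mechanics concrete, here is a minimal sketch in Python of such an item: the model answer stays hidden until the learner makes an attempt, and feedback is immediate. The class and the question are purely hypothetical, not drawn from any particular platform:

```python
# A minimal, hypothetical sketch of an online self-assessment item with
# immediate feedback. The class and data are invented for illustration.

class SelfAssessmentItem:
    """A question whose model answer stays hidden until the learner responds."""

    def __init__(self, prompt, model_answer, feedback):
        self.prompt = prompt
        self.model_answer = model_answer
        self.feedback = feedback  # {True: ..., False: ...}, shown immediately

    def attempt(self, learner_answer):
        # The model answer is revealed only after an attempt has been made,
        # which is what produces the extra effort and learning gain noted above.
        correct = learner_answer.strip().lower() == self.model_answer.lower()
        return {"correct": correct,
                "model_answer": self.model_answer,
                "feedback": self.feedback[correct]}

# Once designed, the same item can be attempted by any number of learners:
item = SelfAssessmentItem(
    prompt="In which decade did Skinner's work on immediate feedback begin?",
    model_answer="1930s",
    feedback={True: "Right: his work dates from the 1930s onwards.",
              False: "Not quite: re-read the opening of this section."})
print(item.attempt("1930s"))
```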

Tom Carey is correct in noting that computer-marked questions or feedback work particularly well for ‘technical mastery outcomes, e.g., performance tasks with single solutions and predictable pathways to completion (allowing adaptive systems to provide task guidance)’, but instructional designers in open and distance learning institutions and in corporate training have also designed some fairly sophisticated self-assessment questions that require critical thinking, problem-solving or sentence-based ‘qualitative’ answers, as well as multiple-choice, single-answer questions. For self-assessment purposes, these more qualitative questions can work very well with a range of sample answers, and with automated feedback on why some answers are ‘better’ than others. However, they are more difficult to design (and hence more costly), but the cost is justified provided there are enough learners.
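
As an illustration only, here is one greatly simplified way such a qualitative item might work: the learner's free-text answer is compared against ranked sample answers, with feedback explaining why some answers are better than others. The keyword matching below is a deliberate oversimplification of what real designs do, which is partly why they cost more to build:

```python
# A hypothetical sketch of a 'qualitative' self-assessment item. The sample
# answers, concepts and feedback text are all invented for illustration.

SAMPLE_ANSWERS = [
    # (concepts the answer must touch on, feedback explaining the ranking)
    ({"unit costs", "fixed costs", "scale"},
     "Strong: links scale to spreading fixed development costs over more learners."),
    ({"scale"},
     "Partial: mentions scale, but not why unit costs fall as numbers grow."),
]

def qualitative_feedback(answer: str) -> str:
    text = answer.lower()
    for concepts, feedback in SAMPLE_ANSWERS:  # best-ranked answers first
        if all(concept in text for concept in concepts):
            return feedback
    return "Compare your answer with the sample answers: what is missing?"

print(qualitative_feedback(
    "Scale matters because fixed costs are spread out, lowering unit costs."))
```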

The problems start to arise with computer-marked questions when they are used for formal student assessment. Computer-marked assignments in general do not work well for assessments dealing with complex issues, requiring creative or critical thinking, evaluation of alternative explanations, or complex problem-solving that requires the integration of several elements; in other words, they do not handle well the assessment of learning outcomes requiring higher-order learning skills. Students sometimes come up with valid answers that have not been anticipated by the designers of the questions, and handling semantics or meaning in everyday language in computer-assessed questions is still a major challenge for computing. Also, for formal assessment purposes, the items need to be changed for each assessment to avoid cheating. There is a whole industry built around the validity and reliability of computer-marked assignments. Experience suggests that such testing has some value in particular circumstances, but does not work well for assessing many kinds of learning outcomes, especially the higher-order learning skills. More importantly, there are serious epistemological or philosophical objections to the use of computer-marked assignments in many subject areas, such as literature, history, education and even business studies, where qualitative judgements are core to the subject area.
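
One of these practical points, varying the items between sittings, is straightforward to sketch. The following is a hypothetical illustration of drawing each student's test from a larger, validated item bank; the bank, seeds and function name are invented:

```python
# A hypothetical sketch of varying computer-marked items between sittings
# by drawing each student's test from a larger, validated item bank.

import random

ITEM_BANK = [f"item-{i:03d}" for i in range(1, 101)]  # 100 validated items

def build_test(student_id: str, n_items: int = 10) -> list[str]:
    # Seeding on the student id gives each student a different but
    # reproducible selection (useful for later re-marking or appeals).
    rng = random.Random(student_id)
    return rng.sample(ITEM_BANK, n_items)

print(build_test("student-42"))
```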

There are, then, real opportunities for online computer-marked questions as self-assessment, helping to build a good foundation, but automated forms of formal assessment are more limited at advanced levels of learning, except perhaps in mathematically-based subject areas. Nevertheless, more could be done in online learning to build in regular and creative forms of self-assessment, since, used judiciously, it could increase learning effectiveness.

Moving the work to learners

I would suggest that there is another (and better) way of increasing productivity in learner-content interactions: the re-design of courses. This would mean a move to more of a discovery approach to teaching and learning, in which students interact more autonomously with learning materials, managing their own interactions with course content through knowledge management and project work, but within a monitored learning design.

For instance, students would be assessed on a particular subject or topic through an e-portfolio of work that demonstrates the knowledge and skills they have developed in the subject area. The content itself becomes a means to an end: the focus would be as much on students developing higher-order learning skills as on mastering content.

The subject expert/senior professor would be more of a teaching consultant: guiding students, determining sources for students to search or use, designing projects to be worked on, providing principles and criteria for developing their project work, and providing rubrics and guidelines for assessment.

Scaling would be handled by the appointment of teaching assistants or preferably adjuncts who would monitor group work, provide individual mentoring where necessary, and would assess the final work of the students. The senior professor would monitor the teaching assistants or adjuncts, intervene directly with student groups where necessary, and would ensure consistency in assessment between TAs or adjuncts.
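
To make the assessment side of this model concrete, here is a hypothetical sketch of a weighted rubric as a data structure, together with a simple moderation check the senior professor might use to spot inconsistency between markers. The criteria, weights, scores and threshold are all invented for illustration:

```python
# A hypothetical rubric as a data structure, plus a simple moderation check.
# Criteria, weights, scores and the divergence threshold are invented.

RUBRIC = {
    "evidence of inquiry":       0.3,  # weight of each criterion
    "analysis and evaluation":   0.4,
    "communication of findings": 0.3,
}

def portfolio_score(marks: dict[str, float]) -> float:
    # marks: criterion -> score out of 10, as awarded by a TA or adjunct
    return sum(RUBRIC[criterion] * score for criterion, score in marks.items())

# The senior professor could flag portfolios where two markers diverge,
# as part of ensuring consistency between TAs or adjuncts:
ta_1 = portfolio_score({"evidence of inquiry": 8, "analysis and evaluation": 7,
                        "communication of findings": 9})
ta_2 = portfolio_score({"evidence of inquiry": 5, "analysis and evaluation": 6,
                        "communication of findings": 7})
if abs(ta_1 - ta_2) > 1.0:
    print(f"Flag for moderation: scores {ta_1:.1f} vs {ta_2:.1f}")
```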

The productivity goal is to move as much of the effort in learning as possible to the students, enabling one senior professor to manage many more students, especially in first- and second-year programs, without increasing his or her overall teaching load, while at the same time ensuring adequate support for learners through the use of lower-paid but still highly skilled teaching assistants or adjuncts supervised by the senior professor. This would not lead to such huge productivity gains as can be achieved in online content delivery, but it would still be an advance on current methods, especially in terms of higher-quality learning outcomes focused on the development of skills that students will need in work and society.
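
Some back-of-envelope arithmetic shows why this model improves unit costs without matching the economies of scale of content delivery. All figures below are hypothetical placeholders, not data from this post or the HEQCO report:

```python
# Back-of-envelope staffing arithmetic for the model above. Every figure
# is a hypothetical placeholder, not data from this post or the report.

import math

PROFESSOR_COST = 120_000   # assumed annual cost of one senior professor
ADJUNCT_COST = 40_000      # assumed annual cost of one adjunct or TA
STUDENTS_PER_ADJUNCT = 40  # assumed supportable group size per adjunct

def cost_per_student(n_students: int) -> float:
    # One professor supervises however many adjuncts the cohort requires.
    adjuncts = math.ceil(n_students / STUDENTS_PER_ADJUNCT)
    return (PROFESSOR_COST + adjuncts * ADJUNCT_COST) / n_students

for n in (40, 200, 400):
    print(f"{n} students: ${cost_per_student(n):,.0f} per student")
```

On these assumed figures, the cost per student falls from $4,000 at 40 students to about $1,300 at 400, then levels off: a real gain, but one bounded by the adjunct staffing that keeps learner support in place.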

Conclusion

As they say in Britain, there’s more than one way to skin a cat. Online learning does not necessarily have to equate with massive content delivery or computer-marked assessment. There may be other ways in which online learning – or, put another way, better course design that uses online learning appropriately – could improve productivity. In particular, a move towards giving students more autonomy in finding, analyzing, evaluating and applying information, within a learning design that provides appropriate learner support, could enable senior professors to work more productively, and enable learners to interact with content in a more interesting and productive manner than through computer-marked assessment or feedback.

Next

In the next post on this topic, I will discuss ways in which learner-learner interactions can be used to improve outcomes while reducing costs (I hope!). In the meantime, comments, suggestions and alternative ways to recreate the magic of the campus more productively online will be very welcome.

Webinar

Tom Carey will be doing a CIDER webinar on Tuesday (October 2) entitled:

What Kinds of Learning Can We Scale with Online Resources and Activities (and what can’t we scale)?

Description: This presentation will summarize recent research for the Higher Education Quality Council of Ontario on the impact of emerging developments in online learning for Quality and Productivity in higher education. The main discussion points will be the analysis of scalable learning resources and activities, and the ways in which different learning interactions may (or may not) be scalable.

If you can’t log in live (https://connect.athabascau.ca/cidersession), the session will be recorded.
