Can ‘the magic of the campus’ be replicated online – and at scale?

The story so far

This is a continuation of the discussion on whether online learning can increase educational ‘productivity.’ Previous posts in this series include:

There is a CIDER webinar presentation on the HEQCO report available from here.

In the last post, I concluded:

  • there are major economies of scale in using computer-based feedback for facilitating comprehension and technical mastery outcomes
  • computer-based feedback, when well designed, can also be useful in providing student feedback for more complex forms of learning, such as alternative strategies, critical thinking and evaluation
  • however, computer-based analyses to date are inadequate for formal assessment of these higher order learning skills, where deep expertise and qualitative assessment are required, and where learners may provide new insights or alternative explanations
  • redesign of courses with a greater focus on student discovery (finding, analyzing and applying content) within a learning design offers more modest but still significant potential for increases in productivity, mainly through better learning outcomes (development of 21st century skills) and through more effective use of senior research professors’ time.

Learner-instructor interaction and economies of scale

In this current post, I focus on learner-instructor interaction, and discuss whether online learning can provide economies of scale in this area. This is particularly important, because research on credit-based online learning has shown that course delivery (which includes both learner support and student assessment) accounted for the largest share of the overall cost of an online program (37%) over the life of the program, almost three times more than course development (Bates and Sangrà, 2011).

Can we scale ‘the learning that matters most’?

This important question has been raised in the HEQCO report by Tom Carey and David Trick. It is this issue I wish to address here, since scaling up the delivery of content, and learner-content interaction, through online learning is relatively easy, although both depend on good course design for effective learning.

What is more challenging is whether we can also scale the kind of ‘learning that matters most’, namely helping students when they struggle with new concepts or ideas, helping students to gain deep understanding of a topic or subject, helping students to evaluate a range of different ideas or practices, providing students with professional formation or development, understanding the limits of knowledge, and above all enabling students to find, evaluate and apply knowledge appropriately in new or ill-defined contexts.

Before looking at whether or not such activities can be scaled, it is important to challenge the view, such as Sanjay Sarma’s at MIT, that such forms of learning can only be achieved on campus. There is also more than a hint of this assumption in the HEQCO report, at least with respect to undergraduate education. Those of us who have taught online know that it is possible to develop these kinds of learning outcomes online, especially but not exclusively at graduate level. Strategies such as scaffolding or supporting knowledge construction through online discussion and dialogue, student reflection through e-portfolios, and above all personal online interventions and communication between students and instructor, have all been found to lead to learning outcomes at least equivalent to those of students studying the same subjects on campus (see references below).

There will remain a relatively small number of learning activities that matter most that are best done on campus, such as the development of hands-on skills, but there will be others, such as knowledge management, that may well be best done online. More importantly, there will be some students who really need the environment provided by a campus, and others who will prefer an online environment.

The issue is not whether the learning that matters most can be done online, but whether it can be scaled up through online learning. Certainly, I would argue that the main criticism of xMOOCs is that they spectacularly fail to address this form of learning. However, cMOOCs, when they operate at the level of communities of practice with relatively shared levels of understanding and knowledge among the participants, do have at least the potential for such economies of scale while maintaining or even improving the quality of learning outcomes. The challenge, though, is how one accounts for the hidden costs of the participation of experts in such professional sharing, which relies heavily on volunteering or ‘moonlighting’ from a paid job by those with the expertise. I suspect, though, that even if these costs were calculated, cMOOCs would still prove more ‘productive’ than conventional campus-based classes for this type of learner. However, the cost-effectiveness research has yet to be done.

The greater challenge, though, is scaling up the kinds of interaction between students and instructors that enable diagnosis of a student’s learning difficulties, that facilitate deep understanding of a subject, and that encourage creative and original thinking, especially within undergraduate education. Adaptive learning and learning analytics may help to some extent, but in my view cannot yet come close to matching the skill of an experienced instructor. If instructors are to have enough time to engage in these kinds of dialogue and communication with students, there is clearly a limit on the number of students they can handle. Thus there is a possibility of small increases in productivity in this aspect of teaching and learning, aided by developments such as adaptive learning and learning analytics, but not major ones.

Scaling the assessment of ‘learning that matters most’

When ‘the magic of the campus’ is raised, one of the implicit assumptions is that student assessment is more valid because of the personal knowledge that faculty develop of a student as a whole person, and not just of their formal academic work: how they conduct themselves in class discussion (not just what they say, but how they say it), their interests and knowledge outside the formal curriculum (e.g. do they read widely or participate in valued extra-curricular activities), and the impression students make in social activities with faculty. This ‘tacit’ knowledge of a particular student that faculty acquire on campus can heavily influence the final assessment of a student, beyond that of the final exam. As they say at Oxford University, ‘Is he one of us?’

I was fortunate to have done my undergraduate degree in a department where every ‘honours’ student was well known by every faculty member. We were told that in the final exam, we could not get a worse grade than was already determined, but we could improve on it by a really good performance. In other words, the final exam was more of a rite of passage – the assessment was already more or less in place. This was only possible because of the ‘deep’ knowledge that faculty had already gained of the students. The fear that many faculty have of online learning is that this kind of knowledge of a student is impossible ‘at a distance.’

Again, however, at least some elements of this ‘getting to know students’ can be achieved online, through continuous assessment, the use of e-portfolios and participation in online discussions. Indeed, the similarities between online learning and campus teaching are often greater than the differences. The problem is scaling up this kind of in-depth academic relationship between student and instructor, for both classroom and online teaching. Although the actual ratio may be difficult to specify, it is clear that this kind of relationship cannot be built up when each instructor is responsible for thousands of students.

The fact is, though, that undergraduate students in most public universities are not in the fortunate position that I was. Even in their final year, many find themselves in classes of over 100 students. They will probably be better off in an online class of 30 students, and even in an online class of 100 they may have more personal interaction with the instructor than in a lecture theatre, if the course is well designed. However, scaling up much beyond this is not going to enable the more personal intellectual relationship to develop that allows for the more informal ‘I know what this student is capable of’ assessment, either online or on campus.

In short, for assessment based on deep knowledge of a student’s progress and capabilities, the scope for economies of scale is limited. In this sense, teacher:student ratios do matter, so economies of scale through online learning will be difficult to achieve for these higher order learning skills.

Conclusions

This has been a particularly difficult blog to write, which suggests I may still not be thinking clearly about this topic, so please help me out! However, here is where I stand on this issue so far:

1. The ‘learning that matters most’ mainly concerns university teaching, but I suspect it increasingly applies to technical, vocational and corporate training as well; the aim is to develop the knowledge and skills needed in a knowledge-based society.

2. Online learning can, in most cases, handle the ‘learning that matters most’ as well as on-campus teaching, although there will always be some exceptions.

3. However, there are major difficulties in scaling up the learner support and assessment activities that are needed for the learning that matters most, both online and on campus. The danger in scaling up is the loss of quality in terms of learning outcomes.

4. Adaptive learning software that helps individualize learning, and learning analytics, may enable instructors to handle slightly more students without loss of quality, but cannot as yet replace a skilled instructor, and probably never will.

5. New online course designs built around the use of new technologies have greater potential for increases in productivity for the learning that matters most – through producing better learning outcomes – than does scaling up, i.e. increasing student:teacher ratios.

6. We need more empirical research on the relationship between teaching methods, mode of delivery, costs, and the type of learning outcomes that constitute the ‘learning that matters most’ (not to mention better definitions).

Your input

First I’d really welcome responses to this post. In particular:

  • Is ‘the learning that matters most’ a useful concept for university teaching? Do you agree with my descriptions of it?
  • Have I missed something obvious in the possibility for scaling these learner support and assessment activities?
  • Can adaptive learning software and learning analytics take some or all of the load off instructors in developing such learning outcomes?
  • What would new online course designs that increase productivity look like? Do you have actual examples that have been implemented?

Next

In my next post on this topic, I will discuss an area where I think there is huge potential for increasing productivity through online learning, and that is through savings in physical overheads.

References

Anderson, T., Rourke, L., Garrison, R. and Archer, W. (2001) ‘Assessing teaching presence in a computer conferencing context’ Journal of Asynchronous Learning Networks, Vol. 5, No. 2

Baker, C. (2010) ‘The Impact of Instructor Immediacy and Presence for Online Student Affective Learning, Cognition, and Motivation’ The Journal of Educators Online, Vol. 7, No. 1

Bates, A. and Sangrà, A. (2011) Managing Technology in Higher Education: Strategies for Transforming Teaching and Learning San Francisco: Jossey-Bass/John Wiley and Sons

Garrison, D. R. and Cleveland-Innes, M. (2005) ‘Facilitating cognitive presence in online learning: Interaction is not enough’ American Journal of Distance Education, Vol. 19, No. 3

Harasim, L. (2012) Learning Theory and Online Technologies New York/London: Routledge

Jonassen, D., Davidson, M., Collins, M., Campbell, J. and Haag, B. (1995) ‘Constructivism and Computer-mediated Communication in Distance Education’ American Journal of Distance Education, Vol. 9, No. 2, pp. 7-26

Palloff, R. and Pratt, K. (2007) Building Online Learning Communities San Francisco: Jossey-Bass/John Wiley and Sons

Richardson, J. C. and Swan, K. (2003) ‘Examining social presence in online courses in relation to students’ perceived learning and satisfaction’ Journal of Asynchronous Learning Networks, Vol. 7, No. 1, pp. 68-88

Salmon, G. (2000) E-moderating London/New York: Routledge

Sheridan, K. and Kelly, M. (2010) ‘The Indicators of Instructor Presence that are Important to Students in Online Courses’ MERLOT Journal of Online Learning and Teaching, Vol. 6, No. 4

7 COMMENTS

    • Thanks for this, Tony. I appreciate your taking a hard look at scaling “the learning that really counts”.

      Attempts at scaling by increasing the size of the lecture theatre, or by substituting canned video for student-teacher interaction as is often done in xMOOCs, do limit the type and depth of learning activities and the resultant deep learning.

      Since by scaling we mean that there are a large number of learners, the range of errors and effective feedback likely follows a long-tail pattern, with many similar comments, suggestions and assessments of errors making up the bulk of the student-teacher interactions. These common teacher interventions can be aggregated, stored and applied automatically, using a tool as simple as a Word macro or a more sophisticated system, as sketched below.
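      As a rough sketch of what such aggregation might look like in code (the error patterns and canned comments below are invented for illustration, and a real feedback bank would be far richer), a stored set of common interventions could be matched against submissions like this:

      ```python
      import re

      # Hypothetical 'feedback bank': frequent instructor comments keyed by the
      # error patterns that trigger them. In practice this would grow over time
      # by aggregating and storing an instructor's most common interventions.
      FEEDBACK_BANK = {
          r"\bvery unique\b": "'Unique' is absolute; drop the qualifier.",
          r"\bshould of\b": "Write 'should have', not 'should of'.",
          r"\bcorrelation\b.*\bcaus(e|es|ed|ation)\b":
              "Take care: correlation alone does not establish causation.",
      }

      def suggest_feedback(submission: str) -> list[str]:
          """Return stored comments whose trigger pattern appears in the text;
          anything unmatched is left for the instructor to handle personally."""
          return [comment for pattern, comment in FEEDBACK_BANK.items()
                  if re.search(pattern, submission, flags=re.IGNORECASE)]

      for comment in suggest_feedback(
              "The correlation is very unique and clearly causes the effect."):
          print("-", comment)
      ```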

      Denise Whitelock has also developed a tool, “Open Mentor”, that assesses the feedback given by tutors, as a way to understand effective feedback, with possible scaling and integration with portfolios and peer assessment. See her article at http://www.teleurope.eu/mod/file/download.php?file_guid=75634

      We can also expect continuing improvements in machine-marked essay algorithms, such as those used in various Latent Semantic Analysis-based tools, or the type of extraction of themes employed in projects like Open Essayist, being developed at the Open University: see http://oro.open.ac.uk/37548/1/LAK%20final.pdf
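      To make the LSA idea concrete, here is a minimal, hypothetical sketch using a standard scikit-learn pipeline – one common way of implementing the approach, not the method of any particular tool; the essays and grades are invented:

      ```python
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      # Invented 'training set': essays already graded by an instructor.
      graded_essays = [
          "Photosynthesis converts light energy into chemical energy in plants.",
          "Plants use sunlight, water and carbon dioxide to synthesise glucose.",
          "The industrial revolution began in eighteenth-century Britain.",
      ]
      grades = np.array([85.0, 80.0, 55.0])

      # LSA core: TF-IDF vectors projected into a low-dimensional 'semantic'
      # space with truncated SVD, so essays using different words for similar
      # ideas end up close together.
      vectorizer = TfidfVectorizer(stop_words="english")
      tfidf = vectorizer.fit_transform(graded_essays)
      svd = TruncatedSVD(n_components=2, random_state=0)
      lsa_vectors = svd.fit_transform(tfidf)

      def predict_grade(essay: str) -> float:
          """Estimate a grade as the similarity-weighted average of the grades
          of the already-marked essays in the LSA space."""
          vec = svd.transform(vectorizer.transform([essay]))
          sims = cosine_similarity(vec, lsa_vectors)[0]
          weights = np.clip(sims, 0.0, None) + 1e-9  # ignore dissimilar essays
          return float(np.average(grades, weights=weights))

      print(predict_grade("Sunlight drives the synthesis of glucose in plant cells."))
      ```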

      So I don’t think these tools can yet be automated to the point where they provide the type of deep understanding of learners that you describe in your undergrad experience, Tony. But we are seeing, and will continue to see, gains in productivity tools that COULD be used by online teachers to make their work more efficient and more effective – and thus allow gradual scaling up.

      But adoption by these same teachers remains a HUGE challenge.

      • Thanks, Terry – excellent points

        Yes, I believe learning analytics can be helpful in identifying which teaching interventions help most, and some of these may well be automated so that instructors have to spend less time dealing with common problems.
        However, this should not be at the expense of good course design, which if done well should avoid creating some of the problems in the first place. Using analytics as a research tool for instructors is fine with me, but that’s different from using it to self-correct courses, which I have seen argued elsewhere.

        I don’t agree with the computer start-up mentality of throwing something out to see if it works and then correcting it later through analytics, if that means ignoring what research has already identified as a strategy that’s not likely to work. On ethical grounds, students shouldn’t be used as ‘testers’ for some start-up nerd’s bright idea if there is strong evidence in advance that it isn’t going to work, and I think this criticism can justly be made of Coursera-style MOOCs and their use of peer review and automated testing. Just because it’s free doesn’t mean that we should waste students’ time.

        Similarly, I have no objection in principle to automated essay marking, provided (a) that the very real semantic issues can be properly addressed, which they have not been to date, and (b) that there is always an instructor overseeing the process, to deal with exceptions and unanticipated student responses. We will have to see whether the technology gets to the point where it leads to real productivity gains, but it sure isn’t there yet. I would suggest that e-portfolios, for instance, offer much more in productivity gains than automated testing – they might cost more, but the output is better.

        I agree also that getting instructors to adopt these new tools as they evolve is a big challenge, but an even greater challenge is to get the research about what we already know about learning online out to instructors and computer scientists before they start experimenting with students’ lives. Hence we need many more CIDER-type professional development opportunities. Indeed, we need to pay much more attention generally to how we as a profession can disseminate research findings and best practices to teachers and software designers so that it actually influences their practice (the Carnegie Mellon Open Learning Initiative is a good example of this, but it is a very expensive process). In particular, we need to find a way to counter the hubris of computer scientists who have mistaken beliefs about what constitutes good teaching. Any suggestions?!

        I guess what I’m saying is that there are other parts of the online teaching and learning process (such as good design and scaling content development and delivery) that offer up much greater opportunities for increasing educational productivity than trying to automate learner support. Learner support is always in my view going to be labour-intensive if we are to develop the kind of learning that really matters.

  1. Tony, I agree that our focus in the HEQCO report on “online learning” would have been better framed around “scalability”.

    Here is a practical example from our current work, in which we would like to explore ways to scale up instruction but don’t yet see a way to do it without compromising ‘the learning that matters most’. The instructional approach is a design studio, in which students work collaboratively over several weeks in small teams on complex design projects. There is no expectation of a single best solution for these kinds of complex challenges, and there are numerous parallel learning objectives in terms of gaining intimacy with the design materials, understanding of design processes, learning to give and receive constructive criticism, collaborative teamwork, etc.

    In addition to their interaction within a collaborative team, students receive feedback and guidance in several forms:

    – cooperative interactions amongst students across teams. Because there is commonly a shared physical space in which artifacts from the design process are created and demonstrated, students interact across teams within the space, and then also benefit from the instructors’ feedback to other teams because they understand some of the context and challenges that the other teams are experiencing. This can be a very effective and efficient way for students to gain knowledge of a variety of design situations.

    – informal ‘desk critiques’, where instructors stop by the student work areas to informally inquire about progress, rationale and sticky points, and to provide feedback and guidance.

    – more formal design critiques, which have a scheduled and semi-public nature (often labeled ‘charrettes’, after the French word for the carts on which architectural models were brought forward for examination and critique). A typical design studio course might have two such formal “crits” – interim and final – to which outside experts are often invited to provide additional input and evaluation.

    Design studio classes have historically involved a relatively small number of students: the current norm seems to be around 20. There are several reasons for this beyond the obvious implications of workload for instructors:

    – a key element of the instructional method is what students learn from the critiques of other teams’ projects, and this requires that they have some familiarity with the processes and products of those teams. Design teachers and students suggest that the current norm for class size is in part determined by the number of other projects that students can track over the course of the projects, since without that knowledge of process and product they are not able to adequately absorb the value of the critiques of other teams’ projects.

    – a secondary limitation is the time commitment of the external experts who bring particular value to the design crits. In addition to the implications of larger class sizes for their workload, there are also logistical issues in scheduling busy practitioners into synchronous sessions where they can interact with the students (and each other).

    This is far removed from the ‘technical mastery’ learning tasks where most of the energy on adaptive systems and learning analytics is showing potential to scale up instruction – and, in some cases, to pay off. The issue of scalability can be separated from the issue of what parts of this process might be handled through online learning – as a way to increase access to design education, say – but it must be dealt with whatever the learning environment. (E.g., see the Sloan Consortium conference presentation
    http://sloanconsortium.org/effective_practices/re-creating-studio-based-model-online-art-and-design-education)

    The design studio is not just an element of the teaching and learning environment: it is also a key element in professional practice, so that it serves as a form of work-integrated learning within professional and practice-based education (http://www.csu.edu.au/__data/assets/pdf_file/0007/315736/2011_PPBE_Guidelines1.pdf). There are lots of other aspects of such ‘professional formation’ where our current methods do not easily scale up, including affective, meta-cognitive and dispositional outcomes.
