7 Responses
  1. [...] via Improving productivity in online learning: can we scale ‘the learning that matters most’?. [...]

    • Terry Anderson
      October 24, 2013 - 9:24 am

      Thanks for this Tony. I appreciate your taking a hard look at scaling “the learning that really counts”.

      Attempts at scaling by increasing the size of the lecture theatre, or by substituting canned video for student-teacher interaction as is often done in xMOOCs, do limit the type and depth of learning activities and the deep learning that results.

      Since by scaling we mean that there are a large number of learners, the range of errors and effective feedback likely follows a long-tail pattern, with many similar comments, suggestions, assessments of errors, etc. making up the bulk of the student-teacher interactions. These common teacher interventions can be aggregated, stored and applied automatically, whether with a tool as simple as a Word macro or with more sophisticated systems.
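
      A minimal sketch of that long-tail idea, in Python: aggregate the feedback an instructor has already written, keep the most frequent comment per recurring error, and reuse it as a canned response. The error tags, sample comments and function names here are hypothetical illustrations, not any particular tool.

      ```python
      # Sketch: build a reusable "feedback bank" from an instructor's past comments.
      # All tags, texts and names are illustrative assumptions, not a real system.
      from collections import Counter

      # (error_tag, feedback_text) pairs as an instructor might log them
      feedback_log = [
          ("citation-format", "Please use APA style for in-text citations."),
          ("thesis-missing", "Your introduction needs an explicit thesis statement."),
          ("citation-format", "Please use APA style for in-text citations."),
          ("citation-format", "Check the citation format against the APA guide."),
          ("thesis-missing", "Your introduction needs an explicit thesis statement."),
      ]

      def build_feedback_bank(log, min_count=2):
          """Keep only comments used at least `min_count` times, one per error tag."""
          counts = Counter(log)
          bank = {}
          for (tag, text), n in counts.most_common():
              if n >= min_count and tag not in bank:
                  bank[tag] = text  # the most frequent comment wins for each tag
          return bank

      bank = build_feedback_bank(feedback_log)

      # A recognised error tag gets the stored comment automatically;
      # anything unrecognised goes back to the tutor for a hand-written response.
      for tag in ["citation-format", "argument-structure"]:
          print(tag, "->", bank.get(tag, "no canned feedback; needs tutor attention"))
      ```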

      Denise Whitelock has also developed a tool, Open Mentor, that analyses the feedback given by tutors as a way to understand effective feedback, with possible scaling and integration with portfolios and peer assessment. See her article at http://www.teleurope.eu/mod/file/download.php?file_guid=75634

      We can also expect continuing improvements in machine-marked essay algorithms, such as those used in various Latent Semantic Analysis-based tools, or the kind of theme extraction employed in projects like Open Essayist, being developed at the Open University: see http://oro.open.ac.uk/37548/1/LAK%20final.pdf
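
      To make the Latent Semantic Analysis approach concrete, here is a minimal sketch assuming scikit-learn is installed: a new essay is projected into a low-dimensional semantic space and compared with reference essays a human has already graded. The sample texts and the similarity-weighted scoring rule are illustrative assumptions only, not how Open Essayist or any production marker works.

      ```python
      # Sketch of LSA-style essay scoring (illustrative only).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import TruncatedSVD
      from sklearn.metrics.pairwise import cosine_similarity

      reference_essays = [
          "Online learning scales lectures but interaction is hard to scale.",
          "Deep learning requires feedback, dialogue and careful course design.",
          "Video lectures alone rarely produce deep conceptual understanding.",
      ]
      reference_grades = [0.6, 0.9, 0.7]  # grades previously assigned by a human marker

      new_essay = "Dialogue and feedback, not just video, drive deep understanding."

      # Build the TF-IDF term-document matrix and reduce it with truncated SVD (LSA).
      vectorizer = TfidfVectorizer(stop_words="english")
      tfidf = vectorizer.fit_transform(reference_essays + [new_essay])
      lsa = TruncatedSVD(n_components=2, random_state=0)
      vectors = lsa.fit_transform(tfidf)

      # Similarity of the new essay to each graded reference essay in LSA space.
      sims = cosine_similarity(vectors[-1:], vectors[:-1])[0]

      # Crude predicted grade: similarity-weighted average of the human grades.
      predicted = sum(s * g for s, g in zip(sims, reference_grades)) / sims.sum()
      print(f"similarities: {sims.round(2)}, predicted grade: {predicted:.2f}")
      ```

      Even in a sketch like this, the hard part Tony raises below remains: the semantic judgement, and the exceptions, still need a human in the loop.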

      So I don’t think these tools can yet be automated to the point where they provide the type of deep understanding of learners that you describe in your undergrad experience, Tony. But we are seeing, and will continue to see, gains in productivity tools that COULD be used by online teachers to make their work more efficient and more effective – and thus allow gradual scaling up.

      But adoption by these same teachers remains a HUGE challenge.

      • Tony Bates
        October 25, 2013 - 6:27 pm

        Thanks, Terry – excellent points

        Yes, I believe learning analytics can be helpful in identifying which teaching interventions help most, and some of these may well be automated so that instructors have to spend less time dealing with common problems.
        However, this should not be at the expense of good course design, which if done well should avoid creating some of the problems in the first place. Using analytics as a research tool for instructors is fine with me, but that’s different from using it to self-correct courses, which I have seen argued elsewhere.

        I don’t agree with the computer start-up mentality of throwing something out to see if it works and then correcting later through analytics, if that means ignoring what research has already identified as a strategy that’s not likely to work. On ethical grounds, students shouldn’t be used as ‘testers’ for some start-up nerd’s bright idea if there is strong evidence in advance that it isn’t going to work, and I think this criticism can justly be levelled at Coursera-style MOOCs and their use of peer review and automated testing. Just because it’s free doesn’t mean that we should waste students’ time.

        Similarly I have no objection in principle to automated essay marking, provided (a) that the very real semantic issues can be properly addressed, which they have not been to date, and (b) that there is always an instructor overseeing the process, to deal with exceptions and unanticipated student responses. We will have to see whether the technology gets to the point where it leads to real productivity gains, but it sure isn’t there yet. I would suggest that e-portfolios, for instance, offer much more in productivity gains than automated testing – they might cost more, but the output is better.

        I agree also that getting instructors to adopt these new tools as they evolve is a big challenge, but an even greater challenge is to get the research about what we already know about learning online out to instructors and computer scientists before they start experimenting with students’ lives. Hence we need many more CIDER-type professional development opportunities. Indeed, we need to pay much more attention generally to how we as a profession can disseminate research findings and best practices to teachers and software designers so that they actually influence their practice (the Carnegie Mellon Open Learning Initiative is a good example of this – but it is a very expensive process). In particular we need to find a way to counter the hubris of computer scientists who have mistaken beliefs about what constitutes good teaching. Any suggestions?!

        I guess what I’m saying is that there are other parts of the online teaching and learning process (such as good design and scaling content development and delivery) that offer up much greater opportunities for increasing educational productivity than trying to automate learner support. Learner support is always in my view going to be labour-intensive if we are to develop the kind of learning that really matters.

  2. Tom Carey
    November 4, 2013 - 11:24 am

    Tony, I agree that our focus in the HEQCO report on “online learning” would have been better framed around “scalability”.

    Here is a practical example from our current work, in which we would like to explore ways to scale up instruction but don’t yet see a way to do it without compromising ‘the learning that matters most’. The instructional approach is a design studio, in which students work collaboratively over several weeks in small teams on complex design projects. There is no expectation of a single best solution for these kinds of complex challenges, and there are numerous parallel learning objectives: gaining intimacy with the design materials, understanding design processes, learning to give and receive constructive criticism, collaborative teamwork, etc.

    In addition to their interaction within a collaborative team, students receive feedback and guidance in several forms:

    – cooperative interactions amongst students across teams. Because there is commonly a shared physical space in which artifacts from the design process are created and demonstrated, students interact across teams within the space, and then also benefit from the instructors’ feedback to other teams because they understand some of the context and challenges that the other teams are experiencing. This can be a very effective and efficient way for students to gain knowledge of a variety of design situations.

    – informal ‘desk critiques’, where instructors stop by the student work areas to informally inquire about progress, rationale and sticky points, and to provide feedback and guidance.

    – more formal design critiques which have a scheduled and semi-public nature (often labeled ‘charettes’, after the French word for the carts on which architectural models were brought forward for examination and critique). A typical design studio course might have two such formal “crits” – interim and final – in which outside experts are often invited in to provide additional input and evaluation.

    Design studio classes have historically involved a relatively small number of students: the current norm seems to be around 20. There are several reasons for this beyond the obvious implications of workload for instructors:

    – a key element of the instructional method is what students learn from the critiques of other teams’ projects, and this requires that they have some familiarity with the processes and products of those teams. Design teachers and students suggest that the current norm for class size is determined in part by the number of other projects that students can track over the course of the projects, since without that knowledge of process and product they are not able to adequately absorb the value of the critiques of other teams’ projects.

    – a secondary limitation is the time commitment from the external experts who bring particular value to the design crits. In addition to the implications of larger class sizes for their workload, there are also logistical issues in scheduling busy practitioners into synchronous sessions where they can interact with the students (and each other).

    This is far removed from the ‘technical mastery’ learning tasks where most of the energy on adaptive systems and learning analytics is showing potential to scale up instruction and, in some cases, pay off. The issue of scalability can be separated from the issue of which parts of this process might be handled through online learning – as a way to increase access to design education, say – but it must be dealt with whatever the learning environment. (E.g., see the Sloan Consortium conference presentation
    http://sloanconsortium.org/effective_practices/re-creating-studio-based-model-online-art-and-design-education)

    The design studio is not just an element of the teaching and learning environment: it is also a key element of professional practice, so that it serves as a form of work-integrated learning within professional and practice-based education (http://www.csu.edu.au/__data/assets/pdf_file/0007/315736/2011_PPBE_Guidelines1.pdf). There are lots of other aspects of such ‘professional formation’ where our current methods do not easily scale up, including affective, meta-cognitive and dispositional outcomes.

  3. [...] Continue reading: Improving productivity in online learning: can we scale “the learning t… [...]

  4. […] Improving productivity in online learning: can we scale the ‘learning that matters most’? […]
