January 22, 2018

Comparing online learning in k-12 and post-secondary education in Canada

Barbour, M. and LaBonte, R. (2017) State of the Nation: K-12 E-Learning in Canada, 2016 Edition, The Canadian eLearning Network

Why a post on online learning in the k-12 sector?

My blog, rightly or wrongly, is focused primarily on post-secondary education, for several reasons. The first is that I’ve always had a problem keeping up with developments in online learning in just the post-secondary education sector, and I decided very early on that I could not do justice to both sectors. Secondly, my experience of online learning has been almost entirely in the post-secondary sector, so it made sense to focus there. Thirdly, I did teach (face-to-face) for three years, many years ago, in the k-12 sector, so I am well aware that there are considerable differences in funding, context and approaches. My wife is also now a retired school teacher and I learned early in my marriage not to mess in her area of considerable expertise.

However, it would be foolish to deny that there are also many synergies between the two sectors, and both sectors lose by being isolated from the other. This became obvious when I was doing research on the national survey of online learning in Canadian post-secondary education. For instance, when designing the web site (after we had collected the data) I came across the ‘State of the Nation’ web site, a set of research reports on the Canadian k-12 sector of which, to my shame, I was totally ignorant. I deeply wished that I had read these reports before I started on the post-secondary survey.

The ‘State of the Nation’ Reports

A pan-Canadian network of K-12 online and blended learning schools and organizations – the Canadian e-Learning Network, or CANeLearn – was formed at a July 2013 summit meeting of key stakeholders in Montréal. CANeLearn’s mission is to provide leadership that champions student success in online and blended learning and to provide members with networking, collaboration and research opportunities. Its initial focus is on sharing resources, professional development and research.

The 2016 edition is the ninth edition of the report, which, together with brief issues papers, ‘vignettes’ and individual program surveys, is available on a new web site.

The website includes a profile for each jurisdiction that is organized in the following manner:

  • a detailed description of the distance, online and blended learning programs operating in that jurisdiction;
  • a discussion of the various legislative and regulatory documents that govern how these distance, online and blended learning programs operate;
  • links to previous annual profiles;
  • an exploration of the history of e-learning in that jurisdiction; 
  • links to vignettes (i.e., stories designed to provide a more personalized perspective of those involved in K–12 e-learning) for that jurisdiction; 
  • links to any brief issues papers (i.e., more detailed discussions of specific issues related to the design, delivery and support of K–12 e-learning) in that jurisdiction;
  • the most recent responses to the individual program survey; and
  • an overview of the jurisdiction’s policies related to importing and exporting e-learning.

Finally, the website includes a blog that allows the research team to share relevant news and comment on issues related to K-12 distance, online and blended learning in Canada.

Key findings

As always, it is important to read the actual report, especially as the k-12 system in Canada is complex and devolved, so there are often qualifications and caveats to most of the findings. Here, though, are my own main take-aways from the report, with comparisons to our national post-secondary education survey:

  1. Online and distance programs are available in the public k-12 sector in almost all provinces and territories: this is very similar in the post-secondary sector.
  2. Approximately 5.7% of the 5.1 million k-12 students are enrolled in an online or distance education program (see the quick calculation below this list). In Canadian post-secondary education, we estimate that approximately 12% of college course enrolments and 16% of university course enrolments are online.
  3. Over the last few years, online and distance enrolments in the k-12 sector have remained steady (between 5.5% and 6% of all students), whereas there has been rapid growth over the last five years in all post-secondary sectors except the CEGEP system in Québec.
  4. Tracking blended learning has proved as difficult in the k-12 sector as in the post-secondary sector.
  5. Even though this report represents the ninth annual State of the Nation: K-12 E-Learning in Canada study, reliable data are still lacking in many jurisdictions. There is no requirement in either sector to track online or distance education activities, but without systematic and reliable data collection in this area, it is difficult to measure the impact of policy decisions or the extent to which Canadian education is moving to digital learning.
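
As a back-of-the-envelope check on what the k-12 figure in point 2 means in absolute numbers, here is a minimal sketch in Python (purely illustrative; the two numbers are simply those quoted above from the report):

```python
# Rough conversion of the quoted percentage into an approximate headcount.
k12_students = 5_100_000   # total K-12 students cited in the report
online_share = 0.057       # proportion enrolled in an online or distance education program

print(f"Approximate K-12 online/distance enrolment: {k12_students * online_share:,.0f}")
# -> roughly 290,000 students
```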

In addition to these national findings, the report provides a useful province-by-province breakdown of online and distance education activity.

Conclusions

Although 5-6% of students enrolled in online and distance education programs may not seem like a great deal of activity compared with the 12-16% at the post-secondary level, it should be remembered that online and distance programs in k-12 are often focused mainly on the older age groups, particularly grades 11 and 12. Distance and online learning also require a good deal of self-discipline and independent learning skills, which tend to develop with age.

As the report states:

Canada continues to have one of the highest per capita student enrollment in online courses and programs of any jurisdiction in the world and was one of the first countries to use the Internet to deliver distance learning courses to students.

But perhaps the most striking similarity between the two studies is the continued difficulty of obtaining reliable data and the almost grassroots, bottom-up approach to finding resources, designing the studies, and disseminating the results. This is both the strength and limitation of these two studies.

Maybe it is time for national and provincial agencies to start taking online and digital learning seriously, and find ways to fund and organise basic data collection in this area on a more systematic and consistent basis.

Five old educational technologies

Etherington, C. (2018) Five educational technologies, circa 1918, eLearning Inside News, January 1

Despite rumours, I was not around in 1918, but this article is a very nice reminder of what was happening 100 years ago with educational technologies. The five technologies are:

  • magic lanterns
  • chalkboards
  • ink pens
  • abacuses
  • radio

When I started teaching, in 1965, in my school it was still compulsory for students to use ink pens (not ‘nasty Biros’, which were available then). This was a real problem for left-handed pupils, who tended to drag their hand across the wet ink when writing from left to right. I fought hard to get an exemption but my headmistress was adamant – no exceptions were allowed. We have made at least some advances since then regarding accessibility and accommodation to the needs of minorities.

As the article points out, radio was still a couple of years away from actually being used for instructional purposes, although it was increasingly available by 1918. The first BBC adult educational radio program was broadcast in 1924 and was about fleas: a talk on Insects in Relation to Man.

Nevertheless, these old technologies also illustrate how little has changed in many classrooms in terms of pedagogy. PowerPoint is nothing more than a merger of a magic lantern and a chalkboard, but the form of teaching remains the same.

It is much easier, then, to identify changes in technology over 100 years, but far less progress has been made on improving teaching methods – or do you disagree?

A slightly longer video on how educational institutions should rethink their organization in an online world

The Open University of Catalonia has produced a slightly longer (6 minutes 17 seconds) YouTube video of me talking (in English) about how educational institutions should rethink their organization in an online world.

Click here or the image above to view the video 

 

Results from the Canadian survey of online learning now available

Bates, T. (ed.) (2017) Tracking Online and Distance Education in Canadian Universities and Colleges: 2017. Vancouver BC: The National Survey of Online and Distance Education in Canadian Post-Secondary Education.

The anglophone version of the public report, as well as the full technical report, is now available for free download (click on the title above or go to onlinelearningsurveycanada.ca – you will be asked for your e-mail address and a password).

The francophone version of the public report will be available on October 27 from https://formationenlignecanada.ca

Key findings of the report are:

  • Canada is a ‘mature’ online learning market: almost all Canadian colleges and universities now offer online courses and many have been doing so for 15 years or more;
  • there is at least one institution in every province that offers online courses or programs;
  • online enrolments have expanded at a rate of 10%-15% per annum over the last five years (see the quick compounding sketch below this list);
  • online learning now constitutes between 12% and 16% of all post-secondary teaching for credit;
  • online learning courses can be found in almost all subject areas;
  • online learning is providing students with increased access and greater flexibility;
  • two-thirds of Canadian post-secondary institutions see online learning as very or extremely important for their future plans;
  • most institutions have, or are developing, a strategy or plan for online learning;
  • LMSs are used in almost every institution, but no particular brand dominates the Canadian market;
  • a wide range of technologies is being used with or alongside the LMS, the most common (in over half the institutions) being online conferencing/webinar technologies, video-streaming and print;
  • OER are used, though moderately, in just under half of all institutions, and open textbooks in less than 20%;
  • little or no use was reported of learning analytics, AI applications or competency-based learning, although such use is difficult to track, as it is instructor- rather than institution-driven;
  • hybrid learning (defined as a reduction in classroom time replaced by online learning activities) is widespread in terms of institutions, but low in use in most institutions (less than 10% of classes), although again this is not easily tracked; however, it was reported to lead to innovative teaching;
  • MOOCs were delivered in less than 20% of institutions in the 12 months prior to the survey, and one third reported they did not intend to offer MOOCs in the future;
  • the main benefits of online learning were seen as:
    • increased access/flexibility
    • increased enrolments
    • more innovative teaching;
  • the main barriers were seen as:
    • lack of resources (particularly learning technology support staff)
    • faculty resistance
    • lack of government support (reported most in Québec and least in Ontario);
  • there were difficulties in obtaining reliable online course enrolment data: most institutions are not systematically tracking this and there are variations between provinces;
  • the report ends by recommending a standard system for reporting on digital learning.
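
As a rough illustration of what the enrolment growth rate quoted above implies over time, here is a minimal sketch in Python (the 10% and 15% figures are the ones reported in the findings; the compounding itself is just arithmetic, not data from the survey):

```python
# What 10%-15% annual growth compounds to over five years.
for annual_growth in (0.10, 0.15):
    five_year_factor = (1 + annual_growth) ** 5
    print(f"{annual_growth:.0%} per annum over 5 years -> about {five_year_factor:.2f}x "
          f"({five_year_factor - 1:.0%} total growth)")
# 10% per annum over 5 years -> about 1.61x (61% total growth)
# 15% per annum over 5 years -> about 2.01x (101% total growth)
```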

Implications

The report deliberately does not draw out any implications or make any value judgements. Readers should draw their own conclusions. However, here are my personal thoughts on the results; these do not necessarily reflect those of the rest of the team:

  • smaller institutions (below 2,000 students) found lack of resources particularly difficult and were less likely to offer online courses: what could be done to provide better support for such institutions that want to offer more online teaching?
  • government support to institutions for online learning varied widely from province to province, and this showed in the figures for enrolment and for innovative teaching: some provinces may need to reconsider their policies and support for online learning or they will fall further behind other provinces in online provision for students
  • many institutions are in the process of developing strategies or plans for online learning: what worked and what did not work in those institutions that already have plans in place that could help inform those institutions now still developing plans in this area?

Next steps

This report would not have been possible without the support of many different organizations which are listed in the report itself. In particular, though, we are indebted to the staff in all the institutions who responded to the survey.

This is the first national snapshot of online and distance learning for both colleges and universities in Canada but its value will be much enhanced by a more longitudinal set of studies. The research team is working with potential sponsors to establish a stronger organizational structure, more secure long-term funding, and a more representative steering committee for the survey. I will be reporting back as these developments evolve.

In the meantime, thanks to everyone who helped make this report a reality.

A better ranking system for university teaching?

[Image: Who is top dog among UK universities? © Australian Dog Lover, 2017, http://www.australiandoglover.com/2017/04/dog-olympics-2017-newcastle-april-23.html]

Redden, E. (2017) Britain Tries to Evaluate Teaching Quality, Inside Higher Ed, June 22

This excellent article describes in detail a new three-tiered rating system for teaching quality at universities introduced by the U.K. government, and provides a thoughtful discussion of it. As I have a son and daughter-in-law teaching in a U.K. university, and grandchildren who are either students or potential students, I have more than an academic interest in this topic.

How are the rankings done?

Under the government’s Teaching Excellence Framework (TEF), universities in England and Wales will get one of three ‘awards’: gold, silver and bronze (apparently there are no other categories, such as tin, brass, iron or dross for those whose teaching really sucks). A total of 295 institutions opted to participate in the ratings.

Universities are compared on six quantitative metrics that cover:

  • retention rates
  • student satisfaction with teaching, assessment and academic support (from the National Student Survey)
  • rates of employment/post-graduate education six months after graduation.

However, awards are relative rather than absolute since they are matched against ‘benchmarks calculated to account for the demographic profile of their students and the mix of programs offered.’ 

This process generates a “hypothesis” of gold, silver or bronze, which a panel of assessors then tests against additional evidence submitted for consideration by the university (higher education institutions can make up to a 15-page submission to TEF assessors). Ultimately the decision of gold, silver or bronze is a human judgment, not the pure product of a mathematical formula.
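
To make this two-stage process a little more concrete, here is a minimal sketch of how the ‘hypothesis’ stage might be modelled. The metric names, tolerance and flagging rules below are my own assumptions, purely for illustration – they are not the official TEF methodology – and, as noted above, the final award rests with a panel of assessors, not with a formula.

```python
# Hypothetical sketch of the TEF 'hypothesis' step: compare an institution's
# metrics against its benchmarks and derive a provisional rating for the panel
# to test against the institution's written submission.
# All names, values and thresholds here are illustrative assumptions only.

TOLERANCE = 0.02  # assumed: within 2 percentage points of benchmark counts as 'in line'

def provisional_rating(metrics: dict, benchmarks: dict) -> str:
    """Return a provisional 'gold', 'silver' or 'bronze' hypothesis."""
    above = below = 0
    for name, value in metrics.items():
        diff = value - benchmarks[name]   # award is relative to the benchmark, not absolute
        if diff > TOLERANCE:
            above += 1
        elif diff < -TOLERANCE:
            below += 1
    if above >= 2 and below == 0:         # mostly above benchmark -> gold hypothesis
        return "gold"
    if below >= 2:                        # mostly below benchmark -> bronze hypothesis
        return "bronze"
    return "silver"                       # broadly in line with benchmark

# Example: an institution above its benchmarks on two of three (hypothetical) metrics
metrics = {"retention": 0.92, "student_satisfaction": 0.86, "employment_outcomes": 0.95}
benchmarks = {"retention": 0.88, "student_satisfaction": 0.85, "employment_outcomes": 0.91}
print(provisional_rating(metrics, benchmarks))  # -> gold (still subject to the panel's judgement)
```

The point of the sketch is simply that the rating is benchmark-relative: two institutions with identical raw scores can end up with different provisional ratings if their student profiles, and therefore their benchmarks, differ.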

What are the results?

Not what you might think. Although Oxford and Cambridge were awarded gold, so were some less prestigious universities, such as Loughborough University, while some more prestigious universities received a bronze. So at least it provides an alternative to ranking systems that focus mainly on research and peer reputation.

What is the purpose of the rankings?

This is less clear. Ostensibly (i.e., according to the government) it is initially aimed at giving potential students a better way of knowing how universities stand with regard to teaching. However, knowing the Conservative government in the UK, it is much more likely to be used to link tuition fees to institutional performance, as part of the government’s free market approach to higher education. (The U.K. government allowed universities to set their own fees, on the assumption that the less prestigious universities would offer lower tuition fees, but guess what – they almost all opted for the highest level possible, and still were able to fill seats).

What are the pros and cons of this ranking?

For a more detailed discussion, see the article itself, but here is my take on it.

Pros

First, this is a more thoughtful approach to ranking than the other systems. It focuses on teaching (which will be many potential students’ initial interest in a university) and provides a useful counter-balance to the emphasis on research in other rankings.

Second, it takes a more sophisticated approach than simply adding up scores on different criteria. There is an element of human judgement, and universities have an opportunity to make their case about why they should be ranked highly. In other words, it tries to tie institutional goals to teaching performance, and it tries to take into account the very large differences between universities in the U.K. in terms of student socio-economic background and curricula.

Third, it does provide a simple, understandable ‘award’ system of categorizing universities on their quality of teaching that students and their parents can at least understand.

Fourth, and most important of all, it sends a clear message to institutions that teaching matters. This may seem obvious, but for many universities – and especially faculty – the only thing that really matters is research. Whether, though, this form of ranking will be sufficient to get institutions to pay more than lip service to teaching remains to be seen.

Cons

However, there are a number of cons. First, the National Union of Students (NUS) is against it, partly because it is heavily weighted by student satisfaction ratings from the National Student Survey, which thousands of students have been boycotting (I’m not sure why). One would have thought that students in particular would value some accountability for the quality of teaching. But then, the NUS has bigger issues with the government, such as the appallingly high tuition fees (C$16,000 a year – the opposition party in parliament, Labour, has promised free tuition).

More importantly, the general arguments about university rankings still apply to this one. They measure institutional performance, not individual department or instructor performance, which can vary enormously within the same institution. If you want to study physics, it doesn’t help that a university has an overall gold ranking if its physics department is crap, or if you get the one instructor who shouldn’t be allowed in the building.

Also, the quantitative measures are surrogates for actual teaching performance. No one observed the teaching in order to develop the rankings, except the students, and student ratings, while an important measure, can themselves be highly misleading, influenced by instructor personality and by how hard the instructor makes students work to get a good grade.

The real problem here is two-fold. First, there is the difficulty of assessing quality teaching in the first place: one man’s meat is another man’s poison. There is no general agreement, even within an academic discipline, as to what counts as quality teaching (for instance, understanding, memory of facts, or skills of analysis – maybe all three are important, but can the way one teaches to develop these diverse attributes be assessed separately?).

The second problem is the lack of quality data on teaching performance – it just isn’t tracked directly. Since a student may take courses from up to 40 different instructors across several different disciplines or departments in a bachelor’s program, it is no mean task to assess the collective quality of their teaching. So we are left with surrogates for quality, such as completion rates.

So is it a waste of time – or worse?

No, I don’t think so. People are going to be influenced by rankings, whatever. This particular ranking system may be flawed, but it is a lot better than the other rankings, which are so heavily influenced by tradition and elitism. It could be used in ways that the data do not justify, such as supporting tuition fee increases or cuts in government funding to institutions. But it is a first systematic attempt at a national level to assess quality in teaching, and with patience and care it could be considerably improved. Most of all, it is an attempt to ensure accountability for the quality of teaching that takes account of the diversity of students and the different mandates of institutions. It may make both university administrations and individual faculty pay more attention to the importance of teaching well, and that is something we should all support.

So I give it a silver – a good try but there is definitely room for improvement. 

Thanks to Clayton Wright for drawing my attention to this.

Next up

I’m going to be travelling for the next three weeks, so my opportunity to blog will be limited – but that has been the case for the last six months. My apologies – I promise to do better. However, a four-hour layover at Pearson Airport does give me some time for blogging!