April 29, 2017

Is networked learning experiential learning?

Image: © Justin Grimes, The Guardian, 2013

Campbell, G. (2016) Networked learning as experiential learning, Educause Review, Vol. 51, No. 1, January 11

This is an interesting, if somewhat high-level, discussion by the Vice-Provost for Learning Innovation at Virginia Commonwealth University, USA, of the importance of networked learning as experiential learning:

the experience of building and participating within a digitally mediated network of discovery and collaboration is an increasingly necessary foundation for all other forms of experiential learning in a digital age. Moreover, the experience of building and participating within a digitally mediated network of discovery is itself a form of experiential learning, indeed a kind of metaexperiential learning that vividly and concretely teaches the experience of networks themselves.

This article might be useful for those who feel a need for a pedagogical or philosophical justification for networked learning. However, I have two closely related reservations about Campbell’s argument:

  • first, Campbell appears in one part of the article to be arguing that students need some kind of academic training to understand the underlying nature of digital networking, but he is not very clear about what that training entails or indeed what that underlying nature is, beyond the purely technical;
  • second, I struggled to see what the consequences of the argument are for me as a teacher: what should I be doing to ensure that students are using networked learning as experiential learning? Does this happen automatically?

I think Campbell is arguing that instructors should move away from selecting and packaging information for students, and allow them to build knowledge through digital networks both within and outside the academy. I of course agree with this part of the argument, but the hard part is knowing the best ways to do this so that learners achieve the knowledge and skills they will need.

As with all teaching methods, networked learning and/or experiential learning can be done well or badly. I would like to see (a) a more precise description of what networked learning means to Campbell in terms of practice, and (b) some guidelines or principles to support instructors in using networked learning as a form of experiential learning. This needs to go beyond what we know about collaborative learning in online groups, although even the application of what we know about this would be a big step forward for most instructors.

Without a clear analysis of how digital networking results in learning, and how this differs from non-digital networked learning, networked learning runs the risk of being yet another overworked buzzword that really doesn’t help a great deal.

Despite my reservations I encourage you to take a look at this article and see if you can make more sense of it than I have, because I believe that this is a very important development/argument that needs further discussion and critical analysis.

For a more pragmatic take on this topic see:

LaRue, B. and Galindo, S. (2009) ‘Synthesizing Corporate and Higher Education Learning Strategies’, in Rudestam, K. and Schoenholtz-Read, J. (eds.) Handbook of Online Learning: Innovations in Higher Education and Corporate Training. Thousand Oaks, CA: Sage Publications.


What students spend on textbooks and how it affects open textbooks

Avoid bookstore line-ups – adopt an online, open textbook
Image: The Saskatoon StarPhoenix

Hill, P. (2015) Bad Data Can Lead To Bad Policy: College students don’t spend $1,200+ on textbooks, e-Literate, November 8

Caulfield, M. (2015) Asking What Students Spend on Textbooks Is the Wrong Question, Hapgood, November 9

Just wanted to draw your attention to two really interesting and useful blog posts about the cost of textbooks.

First, thanks to Phil Hill for correcting what I and many others have been saying: that students are spending more than $1,000 a year on textbooks. It turns out that what students are actually spending is around $530-$640 (all figures in this post are in U.S. dollars and refer to U.S. post-secondary education). Furthermore, student spending on textbooks has actually declined (slightly) over the last few years (probably as a result of increasing tuition fees – something has to give).

Mike Caulfield, however, points out that the actual cost of recommended textbooks is over $1,000 a year (or, more accurately, between $968 and $1,221, depending on the mix of rental and newly purchased books), and that this is the figure that counts, because if students are spending less, they are putting their studies at risk by not using the recommended texts.

For instance, a report by consumer advocacy group U.S. PIRG found that the cost of textbooks has jumped 82% since 2002. As a result, 65% of about 2,000 students say they have opted out of buying (or renting) a required textbook because of the price. According to the survey, 94% of the students who had skipped buying textbooks believed it could hurt their performance in class. Furthermore, 48% of the students said that they had altered which classes they take due to textbook costs, either taking fewer classes or different classes.

More importantly, students and, significantly, their families do not look at the cost of textbooks in isolation. They also take into account tuition fees and the cost of living, especially if they are studying away from home. So they are likely to consider what they are expected to spend on textbooks, rather than what they will actually spend, when deciding on post-secondary education. The high cost of textbooks is just one more factor that acts as a deterrent for many low-income families.

Whether you take the actual expenditure of around $600 a year per student or the required spending of $1,221, having open textbooks available not only results in very real savings for students but will also have a more important psychological effect, encouraging some students and parents who might not otherwise consider post-secondary education to do so. Getting the methodology of costing textbooks right is important if we are to measure the success of open textbooks, but whichever way you look at it, open textbooks are the right way to go.
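If you like to see the arithmetic laid out, here is a minimal back-of-the-envelope sketch (in Python, purely for illustration) of the annual per-student savings under both measures. The assumption that an open textbook costs the student nothing is mine and ignores optional print-on-demand copies.

```python
# Back-of-the-envelope comparison of annual per-student textbook costs
# (U.S. dollars, figures from the two posts discussed above).
# Assumption (mine): open textbooks cost the student $0; optional
# print-on-demand copies are ignored.

actual_spending = (530, 640)    # what students actually spend (Hill)
required_cost = (968, 1221)     # cost of recommended texts (Caulfield)
open_textbook_cost = 0

for label, (low, high) in [("actual spending", actual_spending),
                           ("required texts", required_cost)]:
    print(f"Savings vs {label}: ${low - open_textbook_cost}-"
          f"${high - open_textbook_cost} per student per year")
```

Either way, the saving per student per year is in the hundreds of dollars.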

Lastly, if you are encouraging your students to become digitally literate, I suggest that you ask them to read the two posts, which, as well as dealing with an issue in which your students will have a major interest, are paragons of well-researched writing, and above all courteous and respectful in their differences.

Who are your online students?

Clinefelter, D. L., & Aslanian, C. B. (2015). Online college students 2015: Comprehensive data on demands and preferences. Louisville, KY: The Learning House, Inc.

The survey

This is an interesting report based on a survey of 1,500 individuals nationwide (USA) who:

  • were at least 18 years of age;
  • had a minimum of a high school degree or equivalent; and
  • were recently enrolled, currently enrolled or planning to enroll in the next 12 months in either a fully online undergraduate or graduate degree program or a fully online certificate or licensure program.

Main findings

This is a very brief summary of a 53-page report packed with data, which I strongly recommend reading in full, especially if you are involved with marketing or planning online programs or courses. Here is a brief tasting menu:

1. Competition for online students is increasing

Data from the National Student Clearinghouse Research Center (2015) show that college enrollments [in the USA] declined by close to 2%, yielding 18.6 million college students today. About 5.5 million of these students are studying partially or fully online. At the same time, competition for these online students is increasing. Between 2012 and 2013, 421 institutions launched online programs for the first time, an increase of 23% to 2,250 institutions.

2. The main motivation for online students is to improve their work prospects

Roughly 75% of online students seek further education to change careers, get a job, earn a promotion or keep up to date with their skills… Colleges that want to excel in attracting prospective online students must prepare them for and connect them to the world of work.

3. As competition for students stiffens, online students expect policies and processes tailored to their needs

For example, the amount of transfer credit accepted has consistently been ranked one of the top 10 factors in selecting an institution.

4. In online education, everything is local

Half of online students live within 50 miles of their campus, and 65% live within 100 miles… It is critical that institutions have a strong local brand so that they are at the top of students’ minds when they begin to search for a program of study.

5. Affordability is a critical variable

Forty-five percent of respondents to the 2015 survey reported that they selected the most inexpensive institution. … Thus, it is not surprising that among 23 potential marketing messages, the most appealing were “Affordable tuition” and “Free textbooks.”

6. We could do better

21% reported “Inconsistent/poor contact and communication with instructors,” and 17% reported “Inconsistent/poor quality of instruction.” More contact with regular faculty was requested, especially to act as advisors.

7. Blended learning is an option – for some

About half of the respondents indicated they would attend a hybrid or low-residency option if their program was not available fully online. But 30% said they would not attend if their program was not available online.

8. The program or major drives the selection process

60% indicated they selected their program of study first and then considered institutions.

9. Online students are diverse

Online students have a wide range of needs and backgrounds. Even the age factor is changing, with more and more students under 25 years of age choosing to study online for their undergraduate degrees.

10. Cost matters

Undergraduates reported paying $345 per credit, and graduate students reported paying $615 per credit, on average (equivalent to around $25,000 for a full degree). Applicants need clear and easily accessible information about the costs of studying online and the financial aid rules regarding online students.

As I said, this is just a taste of an information-packed report, which is useful not only to those marketing programs to students, but also for convincing faculty of the importance of online learning.

But remember: this is a study of online students in the USA. There may be problems in generalising too much to other jurisdictions.

A review of MOOCs and their assessment tools

What kind of MOOC?

Chauhan, A. (2014) Massive Open Online Courses (MOOCS): Emerging Trends in Assessment and Accreditation, Digital Education Review, No. 25

For the record, Amit Chauhan, from Florida State University, has reviewed the emerging trends in MOOC assessments and their application in supporting student learning and achievement.

Holy proliferating MOOCs!

He starts with a taxonomy of MOOC instructional models, as follows:

  • cMOOCs
  • xMOOCs
  • BOOCs (big open online courses): only one example is given, by a professor from Indiana University with a grant from Google; it appears to be a cross between an xMOOC and a cMOOC and had 500 participants.
  • DOCCs (distributed open collaborative course): this involved 17 universities sharing and adapting the same basic MOOC
  • LOOCs (little open online courses): as well as 15-20 tuition-paying campus-based students, these courses allow a limited number of non-registered students to take the course, also for a fee. Three examples are given, all from New England.
  • MOORs (massive open online research): again, just one example is given, from UC San Diego, which seems to be a mix of video-based lectures and student research projects guided by the instructors
  • SPOCs (small, private, online courses): the example given is from Harvard Law School, which pre-selected 500 students from over 4,000 applicants, who take the same video-delivered lectures as on-campus students enrolled at Harvard
  • SMOCs (synchronous massive open online courses): live lectures from the University of Texas offered to campus-based students are also available synchronously to non-enrolled students for a fee of $550. Again, just one example.

MOOC assessment models and emerging technologies

Chauhan describes ‘several emerging tools and technologies that are being leveraged to assess learning outcomes in a MOOC. These technologies can also be utilized to design and develop a MOOC with built-in features to measure learning outcomes.’

  • learning analytics on MIT’s 6.002x, Circuits and Electronics. This is a report of the study by Breslow et al. (2013) of the use of learning analytics to study participants’ behaviour on the course to identify factors influencing student performance.
  • personal learning networks on PLENK 2010: this cMOOC is actually about personal learning networks and encouraged participants to use a variety of tools to develop their own personal learning networks
  • mobile learning on MobiMOOC, another connectivist MOOC. The learners in MobiMOOC utilized mobile technologies for accessing course content, knowledge creation and sharing within the network. Data were collected from participant discussion forums and hashtag analysis to track participant behaviour
  • digital badges have been used in several MOOCs to reward successful completion of an end of course test, participation in discussion forums, or in peer review activities
  • adaptive assessment: assessments based on Item Response Theory (IRT) are designed to adapt automatically to student learning and ability in order to measure learner performance and learning outcomes. The test items span different difficulty levels and, based on the learner’s response to each item, the difficulty level increases or decreases to match the learner’s ability (a minimal sketch of this adaptive selection is given after this list). No example of actual use of IRT in MOOCs was given.
  • automated assessments: Chauhan describes two automated assessment tools, Automated Essay Scoring (AES) and Calibrated Peer Review™ (CPR), that are really automated tools for assessing and giving feedback on writing skills. One study on their use in MOOCs (Balfour, 2013) is cited.
  • recognition of prior learning: I think Chauhan is suggesting that institutions offering RPL can/should include MOOCs in student RPL portfolios.
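Chauhan does not give a worked example of adaptive assessment, so the following is my own minimal sketch (not from the article) of the general idea behind IRT-based adaptive testing: the next item presented is the one whose difficulty is closest to the current estimate of the learner’s ability, and that estimate moves up or down after each response. A real implementation would estimate ability by maximum likelihood over an IRT model; the simple shrinking-step update here is only meant to show the mechanism.

```python
import math
import random

def prob_correct(ability, difficulty):
    """Rasch (1PL) model: probability of answering an item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def adaptive_quiz(item_bank, respond, n_items=10):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate, then nudge the estimate up (correct) or down
    (incorrect), with a shrinking step size."""
    ability, step, used = 0.0, 1.0, set()
    for _ in range(n_items):
        item = min((i for i in item_bank if i not in used),
                   key=lambda i: abs(item_bank[i] - ability))
        used.add(item)
        ability += step if respond(item) else -step
        step *= 0.7   # smaller adjustments as evidence accumulates
    return ability

# Toy usage: a simulated learner with true ability 0.8 on a 10-item bank.
bank = {f"q{k}": d for k, d in
        enumerate([-2, -1, -0.5, 0, 0.5, 1, 1.5, 2, 2.5, 3])}
learner = lambda item: random.random() < prob_correct(0.8, bank[item])
print(round(adaptive_quiz(bank, learner), 2))
```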

Chauhan concludes:

Assessment in a MOOC does not necessarily have to be about course completion. Learners can be assessed on time-on-task; learner-course component interaction; and a certification of the specific skills and knowledge gained from a MOOC… Ultimately, the satisfaction gained from completing the course can be [a] potential indicator of good learning experiences.

Alice in MOOCland

Chauhan describes the increasing variation of instructional methods now associated with the generic term ‘MOOC’, to the point where one has to ask whether the term has any consistent meaning. It’s difficult to see how a SPOC, for instance, differs from a typical online credit course, except perhaps in that it uses a recorded lecture rather than a learning management system or VLE. The only common factor in these variations is that the course is being offered to some non-registered students, but if they have to pay a $500 fee, surely that makes them registered students? If a course is neither massive, nor open, nor free, how can it be a MOOC?

Further, if MOOC participants are taking exactly the same course and tests as registered students, will the institution award them credit for it and admit them to the institution? If not, why not? It seems that some institutions really haven’t thought this through. I’d like to know what Registrars make of all this.

At some point, institutions will need to develop a clearer, more consistent strategy for open learning, in terms of how it can best be provided, how it calibrates with formal learning, and how open learning can be accommodated within the fiscal constraints of the institution, and then where MOOCs might fit with the strategy. It seems that a lot of institutions – or rather instructors – are going into open learning buttock-backwards.

More disturbing for me, though, is the argument Chauhan makes for assessing everything except what participants learn from MOOCs. With the exception of automated tests, all these tools do is describe all kinds of behaviour except learning. These tools may be useful for identifying factors that influence learning, as a post hoc rationalization, but you need to be able to measure the learning in the first place, unless you see MOOCs as some cruel form of entertainment. I have no problem with trying to satisfy students, and I have no problem with MOOCs as un-assessed non-formal education, but if you try to assess participants, at the end of the day it’s what they learn that matters. MOOCs need better tools for measuring learning, but I didn’t see any described in this article.


References

Balfour, S. P. (2013) Assessing writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review™, Research & Practice in Assessment, Vol. 8, No. 1

Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D., & Seaton, D. T. (2013) Studying learning in the worldwide classroom: Research into edX’s first MOOC, Research & Practice in Assessment, Vol. 8, pp. 13-25

What students learned from an MIT physics MOOC

Colvin, K. et al. (2014) Learning in an Introductory Physics MOOC: All Cohorts Learn Equally, Including an On-Campus Class, IRRODL, Vol. 15, No. 4

Why this paper?

I don’t normally review individual journal articles, but I am making an exception in this case for several reasons:

  • it is the only research publication I have seen that attempts to measure actual learning from a MOOC in a quantitative manner (if you know of other publications, please let me know)
  • as you’d expect from MIT, the research is well conducted, within the parameters of a quasi-experimental design
  • the paper indicates, in line with many other comparisons between modes of delivery, that the conditions which are associated with the context of teaching are more important than just the mode of delivery
  • I had to read this paper carefully for my book, ‘Teaching in a Digital Age’, but for reasons of space I will not be able to go into detail on it there, so I might as well share my full analysis with you.

What was the course?

8.MReV – Mechanics ReView, an introduction to Newtonian mechanics, is the online version of a similar course offered on campus in the spring for MIT students who failed Introductory Newtonian Mechanics in the fall. In other words, it is based on a second-chance course for MIT campus-based students.

The online version was offered in the summer semester as a free, open access course through edX and was aimed particularly at high school physics teachers but also to anyone else interested. The course consisted of the following components:

  • an online eText, especially designed for the course
  • reference materials both inside the course and outside the course (e.g., Google, Wikipedia, or a textbook)
  • an online discussion area/forum
  • mainly multiple-choice online tests and ‘quizzes’, interspersed on a weekly basis throughout the course.

Approximately 17,000 people signed up for 8.MReV. Most dropped out with no sign of commitment to the course; only 1,500 students were “passing” or on track to earn a certificate after the second assignment. Most of those completing less than 50% of the homework and quiz problems dropped out during the course and did not take the post-test, so the analysis included only the 1,080 students who attempted more than 50% of the questions in the course. 1,030 students earned certificates.

Thus the study measured only the learning of the most successful online students (in terms of completing the online course).

Methodology (summary)

The study measured primarily ‘conceptual’ learning, based mainly on multiple-choice questions demanding a student response that generally can be judged right or wrong. Students were given a pre-test before the course and a post-test at the end of the course.

Two methods to test learning were used: a comparison between each student’s pre-test and post-test score, to measure the learning gain during the course; and an analysis based on Item Response Theory (IRT), which does not show absolute learning (as measured by pre- and post-testing) but rather improvement relative to the “class average.”
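If, as is common in physics education research, the learning gain reported is the normalized (‘Hake’) gain, that is, the fraction of the possible improvement a student actually achieves, it can be computed as in the short sketch below. The scores in the example are mine, not the paper’s.

```python
def normalized_gain(pre, post, max_score=100):
    """Normalized ('Hake') gain: the fraction of the possible
    improvement between pre-test and post-test actually achieved."""
    return (post - pre) / (max_score - pre)

# Illustrative only: a student scoring 50% before the course and 65%
# after it shows a normalized gain of 0.3.
print(normalized_gain(50, 65))   # 0.3
```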

Because of the large number of MOOC participants included in the study, the researchers were able to analyse performance between various ‘cohorts’ within the MOOC participants, such as:

  • physics teachers
  • not physics teachers
  • physics background
  • no physics background
  • college math
  • no math
  • post-graduate qualification
  • bachelor degree
  • no more than high school

Lastly, the scores of the MOOC participants were compared with the scores of those taking the on-campus version of the course, which had the following features:

  • four hours of instruction in which staff interacted with small groups of students (a flipped classroom) each week,
  • staff office hours,
  • help from fellow students,
  • available physics tutors,
  • MIT library

Main results (summary)

  • gains in knowledge for the MOOC group were generally higher than those found in traditional, lecture-based classes and lower than (but closer to) those found in ‘interactive’ classes, but this result is hedged around with considerable qualifications (‘more studies on MOOCs need to be done to confirm this’).
  • in spite of the extra instruction that the on-campus students had, there was no evidence of positive, weekly relative improvement of the on-campus students compared with the online students. (Indeed, if my reading of Figure 5 in the paper is correct, the on-campus students did considerably worse.)
  • there was no evidence within the MOOC group that cohorts with low initial ability learned less than the other cohorts

Conclusions

This is a valuable research report, carefully conducted and cautiously interpreted by the authors. Nevertheless, it is really important not to jump to conclusions. In particular, the authors’ own caution at the end of the paper should be noted:

It is … important to note the many gross differences between 8.MReV and on-campus education. Our self-selected online students are interested in learning, considerably older, and generally have many more years of college education than the on-campus freshmen with whom they have been compared. The on-campus students are taking a required course that most have failed to pass in a previous attempt. Moreover, there are more dropouts in the online course … and these dropouts may well be students learning less than those who remained. The pre- and posttest analysis is further blurred by the fact that the MOOC students could consult resources before answering, and, in fact, did consult within course resources significantly more during the posttest than in the pretest.

To this I would add that the design of this MOOC was somewhat different to many other xMOOCs in that it was based on online texts specially designed for the MOOC, and not on video lectures.

I’m still not sure from reading the paper how much students actually learned from the MOOC. About 1,000 who finished the course got a certificate, but it is difficult to interpret the gain in knowledge. The statistical measurement of an average gain of 0.3 doesn’t mean a lot on its own. There is some mention of the difference being between a B and a B+, but I have probably misinterpreted that. If that is the case, though, I certainly would expect students taking a 13-week course to do much better than that. It would have been more helpful to have graded students on the pre-test and then compared those grades on the post-test. We could then see, for instance, whether gains were in the order of at least one grade.
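To make that suggestion concrete, here is a minimal sketch of the kind of analysis I have in mind. The letter-grade cut-offs are hypothetical (the paper does not give MIT’s actual grade boundaries), and the example scores are invented purely for illustration.

```python
# Hypothetical letter-grade cut-offs, purely for illustration; the
# paper does not give MIT's actual grade boundaries.
CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]
ORDER = "FDCBA"   # worst to best, for comparing grades

def letter(score):
    """Map a percentage score to a letter grade."""
    return next(grade for cutoff, grade in CUTOFFS if score >= cutoff)

def improved_by_a_grade(pre_scores, post_scores):
    """Proportion of students whose post-test grade is at least one
    letter grade above their pre-test grade."""
    improved = sum(ORDER.index(letter(post)) - ORDER.index(letter(pre)) >= 1
                   for pre, post in zip(pre_scores, post_scores))
    return improved / len(pre_scores)

# Invented scores, not from the paper.
print(improved_by_a_grade([55, 62, 71, 80], [68, 74, 78, 93]))  # 0.75
```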

Finally, this MOOC design suits a behaviourist-cognitivist approach to learning that places heavy emphasis on correct answers to conceptual questions. It is less likely to develop the skills I have identified as being needed in a digital age.