Assessment 2

The story so far

Chapter 5 of my open textbook, ‘Teaching in a Digital Age’, which I am currently writing and publishing as I go, is about the design of teaching and learning.

I started Chapter 5 by suggesting that instructors should think about design through the lens of constructing a comprehensive learning environment in which teaching and learning will take place. I have been working through the various components of a learning environment, focusing particularly on how the digital age affects the way we need to look at some of these components.

I briefly described some of the key components of an effective learning environment in a series of earlier blog posts.

In this post, I examine the assessment of students as a key component, and how assessment methods need to be adapted to meet the needs of a digital age. This is the last component I’m discussing; it will be followed by a final post on the value of designing teaching and learning through the lens of a comprehensive learning environment.

Learner assessment

‘I was struck by the way assessment always came at the end, not only in the unit of work but also in teachers’ planning….Assessment was almost an afterthought…

Teachers…are being caught between competing purposes of …assessment and are often confused and frustrated by the difficulties that they experience as they try to reconcile the demands.’

Earl, 2003

Learner assessment in a digital age

Because assessment is a huge topic, it is important to be clear that the purpose of this section is (a) to consider assessment as one of the components that constitute an effective and comprehensive learning environment, and (b) to examine briefly the extent to which assessment is, or should be, changing in a digital age. Assessment will be a recurring theme in this book, so the treatment in this section is deliberately cursory.

Probably nothing drives the behaviour of students more than how they will be assessed. Not all students are instrumental in their learning, but given the competing pressures on students’ time in a digital age, most ‘successful’ learners focus on what will be examined and how they can most effectively (i.e. in as little time as possible) meet the assessment requirements. Therefore decisions about methods of assessment will in most contexts be fundamental to building an effective learning environment.

The purpose of assessment

There are many different reasons for assessing learners. It is important to be clear about the purpose of the assessment, because it is unlikely that one single assessment instrument will meet all assessment needs. Here are some reasons (you can probably think of many more):

  1. to improve and extend students’ learning
  2. to assess students’ knowledge and competence in terms of desired learning goals or outcomes
  3. to provide the teacher/instructor with feedback on the effectiveness of their teaching and how it might be improved
  4. to provide information for employers about what the student knows and/or can do
  5. to filter students for further study, jobs or professional advancement
  6. for institutional accountability and/or financial purposes.

I have deliberately ordered these in terms of their importance for creating an effective learning environment. In terms of the needs of a digital age, assessment needs to focus on developing as well as assessing skills. This means that continuous or formative assessment will be as important as summative or ‘end-of-course’ assessment.

A question to be considered is whether there is a need for assessment of learning in the first place. There may be contexts, such as a community of practice, where learning is informal, and the learners themselves decide what they wish to learn and whether they are satisfied with what they have learned. In other cases, learners may not want or need to be formally evaluated or graded, but do want or need feedback on how they are doing with their learning: ‘Do I really understand this?’ or ‘How am I doing compared to other learners?’

However, even in these contexts, some informal methods of assessment by experts, specialists or more experienced participants could help other participants extend their learning, by providing feedback and by indicating the level of competence or understanding that a participant has achieved or has still to reach. Finally, students themselves can extend their learning by participating in both self-assessment and peer assessment, preferably with guidance and monitoring from a more knowledgeable or skilled instructor.

Methods of assessment

The form the assessment takes, as well as its purpose, will be influenced by the instructors’ or examiners’ underlying epistemology: what they believe constitutes knowledge, and therefore how students need to demonstrate their knowledge. The form of assessment should also be influenced by the knowledge and skills that students need in a digital age, which means focusing as much on assessing skills as on knowledge of content.

There is a wide range of possible assessment methods. I have selected just a few to illustrate how technology can change the way we assess learners in ways that are relevant to a digital age:

  • computer-based multiple-choice tests: good for testing ‘objective’ knowledge of facts, ideas, principles, laws, and quantitative procedures in mathematics, science and engineering, etc., and cost-effective for these purposes. This form of testing, though, tends to be limited in assessing high-level intellectual skills, such as complex problem-solving, creativity, and evaluation, and is therefore less likely to be useful for developing or assessing many of the skills needed in a digital age (a minimal sketch of automated scoring follows this list).
  • written essays or short answers: good for assessing comprehension and some of the more advanced intellectual skills, such as critical thinking, but labour-intensive, open to subjectivity, and not good for assessing practical skills. Experiments are taking place with automated essay marking, using developments in artificial intelligence, but so far automated essay marking still struggles with reliably identifying valid semantic meaning (for a balanced and more detailed account of the current state of machine grading, see Mayfield, 2013, and Parachuri, 2013).
  • project work: either individual or, more commonly, group-based, project work encourages the development of authentic skills that require understanding of content, knowledge management, problem-solving, collaborative learning, evaluation, creativity and practical outcomes. Designing valid and practical project work requires a high level of skill and imagination from the instructor.
  • e-portfolios (an online compendium of student work): enable self-assessment through reflection, knowledge management, recording and evaluation of learning activities, such as teaching or nursing practice, and recording of an individual’s contribution to project work (as an example, see the use of e-portfolios in Visual Arts and Built Environment at the University of Windsor); usually self-managed by the learner, but can be made available or adapted for formal assessment purposes or job interviews.
  • simulations, educational games (usually online) and virtual worlds: facilitate the practice of skills, such as complex and real-time decision-making, operation of (simulated or remote) complex equipment, the development of safety procedures and awareness, risk-taking and risk assessment in a safe environment, and activities that require a combination of manual and cognitive skills (see the training of Canadian Border Service officers at Loyalist College, Ontario). These are currently expensive to develop, but become cost-effective with multiple use, where they replace the use of extremely expensive equipment, where operational activities cannot be halted for training purposes, or where they are available as open educational resources.
Image: Virtual world border crossing, Loyalist College, Ontario
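To illustrate why computer-based multiple-choice testing is so cheap to run at scale, here is a minimal, hypothetical scoring sketch in Python. The quiz items, answer key and feedback wording are invented for illustration only; real quiz engines in learning management systems add item banks, randomization, analytics and security on top of this basic idea.

    # Minimal, hypothetical sketch of automated multiple-choice scoring.
    # The questions, answer key and feedback text are invented for illustration;
    # they are not drawn from any particular testing platform.

    ANSWER_KEY = {
        "Q1": "B",  # e.g. a factual recall item
        "Q2": "D",  # e.g. a simple quantitative procedure
        "Q3": "A",
    }

    def score_attempt(responses):
        """Return the total score and per-item feedback for one student attempt."""
        feedback = {}
        correct = 0
        for item, right_answer in ANSWER_KEY.items():
            given = responses.get(item)
            if given == right_answer:
                correct += 1
                feedback[item] = "correct"
            else:
                feedback[item] = f"incorrect (you chose {given!r}, expected {right_answer!r})"
        return correct, feedback

    if __name__ == "__main__":
        # One simulated attempt: marking is instant and identical for every student.
        student_responses = {"Q1": "B", "Q2": "C", "Q3": "A"}
        total, notes = score_attempt(student_responses)
        print(f"Score: {total}/{len(ANSWER_KEY)}")
        for item, note in notes.items():
            print(f"  {item}: {note}")

The point is simply that, once an answer key exists, marking and feedback are instantaneous and identical for every student, which is what makes this form of testing so economical compared with marking essays or project work.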

It can be seen that some of these assessment methods are both formative, in helping students to develop and increase their competence and knowledge, and summative, in assessing knowledge and skill levels at the end of a course or program.

In conclusion

Nothing is likely to drive student learning more than the method of assessment. At the same time, assessment methods are changing rapidly and are likely to continue to change. Assessment of skills development needs to be continuous and formative as well as summative. There is an increasing range of digitally based tools that can enrich the quality and range of student assessment. Therefore the choice of assessment methods, and their relevance to the other components, are vital elements of any effective learning environment.

Over to you

Your views, comments and criticisms are always welcome. In particular:

  • are there other methods of assessment relevant to a digital age that I should have included?
  • there is still a heavy reliance on computer-based multiple-choice tests in much teaching, mainly for cost reasons. However, although there are exceptions, in general these really don’t assess the high-level conceptual skills needed in a digital age. Are there other methods that are just as economical, particularly in terms of instructor time, but more suitable for assessment in a digital age? For instance, do you think automated essay grading is a viable alternative?
  • would it be helpful to think about assessment right at the start of course planning, rather than at the end? Is this feasible?

Or any other comments on assessment as a critical component of a learning environment, please!

Next up

Why thinking in terms of a comprehensive learning environment is necessary but not sufficient when designing a course or program.

 

References

Earl, L. (2003) Assessment as Learning Thousand Oaks, CA: Corwin Press

Mayfield, E. (2013) Six ways the edX Announcement Gets Automated Essay Grading Wrong, e-Literate, April 8

Parachuri, V. (2013) On the automated scoring of essays and the lessons learned along the way, vicparachuri.com, July 31

 

4 COMMENTS

  1. Prof. Bates,
    I think it would be interesting to include some notes on learning analytics, since this issue will become increasingly important in assessment in online learning environments, not only for teachers but also for the students themselves.
    Regards, and I keep reading …

    • Thanks, Raidell.
      I completely agree with you about the growing importance of learning analytics. I plan, however, to deal with this in more detail later in the book, especially in the next chapter on the design of teaching and learning. Your comment though is spot on and much appreciated.

  2. Hello! I would like to reference your book in my work, but I can’t find a publishing year or publisher anywhere. Can you please tell me this information?

    • Hi, Laura

      I understand your difficulty in referencing the book. Because it is an open access publication and was thus independently published, it is not possible to follow conventional referencing practice.

      I suggest you reference the book as follows (following guidelines provided to me by the Library of the University of British Columbia):

      Bates, A.W. (2015) Teaching in a Digital Age: Guidelines for Designing Teaching and Learning in a Digital Age Victoria, BC: BCcampus (accessed at https://opentextbc.ca/teachinginadigitalage/ on [date])

      BCcampus is the organization that provides the software and the hosting of the book on its server (although in truth as an open publication it could be hosted anywhere, and a permanent digital copy resides in the library of the University of British Columbia).

      Although the book was independently published, please note that the book was peer reviewed before publication and the reviews are included in an appendix of the book.

      I hope this is helpful and that you are finding the book useful,
