Assessment – many ways to do it


Jisc’s 2020 Future of Assessment report identifies five desirable features that assessors should design their assessments to deliver: authentic, accessible, appropriately automated, continuous and secure. These can sometimes seem to conflict: for example, if you decide that “secure” assessment requires the student to be online throughout their exam, you create an “accessibility” problem for students who may not have the required broadband provision in their best location for taking the test. But that example highlights some differences between the features, which may help us avoid those clashes and produce assessments that are better for everyone.

First is how much control we, as assessors, have over the definition of the terms. “Accessible” is almost entirely defined externally: either by law, for some groups of students, or by circumstance, such as broadband disparity. If we want our assignments to be “accessible” then we don’t have much choice about what that means. At the opposite extreme, if you define “secure” as meaning “conducted within the assessment rules”, it’s clear that we have a lot of freedom to change those rules and make nearly any kind of assessment “secure”. This needn’t create a free-for-all: security rules should still offer an appropriate combination of prevention (before the assessment), enforcement (during it), and verification (afterwards).

For example, if you think that collusion among students is a “security” problem, you can redefine the assessment as group, rather than individual, work; if you think using external resources is a “security” problem, make it an open-book research question rather than a test of memory. In each case, what used to be a “security” problem becomes an authentic assessment of a valuable transferable skill. It may well be that the only non-negotiable aspect of “secure” is “assessment was completed by the student”: but even here there are several possibilities, including checking that the assessment product is consistent with previous work in quality, style and so on. Sometimes a student will “get it” on the eve of the exam and their mark will jump beyond the expected range; sometimes they may have a bad day and do significantly worse. In both cases we should investigate gently, rather than jumping to conclusions about what happened.
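To make that “expected range” idea concrete, here is a minimal sketch (in Python) of how a marking team might flag a result that sits well outside a student’s previous marks, purely as a prompt for gentle follow-up. The function name, the two-standard-deviation threshold and the assumption that all marks sit on a common scale are illustrative choices of mine, not anything taken from the Jisc report.

# Illustrative sketch only: flag marks well outside a student's established
# range so staff can follow up gently. The threshold and minimum-history
# values are assumptions for illustration, not recommended policy.
from statistics import mean, stdev

def unusual_mark(previous_marks: list[float], new_mark: float,
                 threshold: float = 2.0) -> bool:
    """Return True if new_mark lies more than `threshold` standard
    deviations from the student's previous average, in either direction."""
    if len(previous_marks) < 3:
        return False  # too little history to define an "expected range"
    avg, spread = mean(previous_marks), stdev(previous_marks)
    if spread == 0:
        return new_mark != avg
    return abs(new_mark - avg) / spread > threshold

# A sudden jump (or slump) is flagged for investigation, not treated as
# proof of anything.
print(unusual_mark([52, 55, 58, 54], 85))  # True:  investigate gently
print(unusual_mark([52, 55, 58, 54], 57))  # False: within expected range

Note that the check is deliberately symmetric: a mark far below the expected range deserves the same gentle enquiry as one far above it.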

This raises another difference between the features: when they have to be present. Accessibility generally needs to be delivered at the time(s) of assessment, though even here there may be some scope for prior or subsequent adjustment. For security, the moment of assessment is just one among a host of stages where measures – both preventive and corrective – can be applied, and these can and should be designed to work together. Traditional invigilators will generally give warnings and make notes of anything that looks like a breach of the rules: only if a student’s behaviour is disrupting others will they be asked to leave. Combining those notes, the student’s output and other relevant information to produce an assessment score is done later, by markers.

To avoid conflicts between the features, it may be helpful to start with the ones where we have fewest options. Those are probably “accessible” and “authentic”: the latter offers some choices around which skill(s) or scenario(s) we want to be authentic to, but once that decision has been made, the only question is how close to reality we need to get. “Appropriately automated” and “continuous” are likely to be somewhat constrained by external factors, including assessment facilities, technologies and staff workload. But, as well as being a little more flexible than accessible and authentic, these two are hard even to define until those first two have been settled. And, as discussed above, “secure” has the widest suite of options, so it ought to be possible to apply it to nearly anything the first four have produced. There may still be some negotiation and adjustment between the five, but getting the sequence right should avoid the painted-into-a-corner situation created by starting with the feature that is actually the most flexible.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
