A fascinating discussion session with colleagues who worked on Jisc’s “Future of Assessment” report. When that was written, in the first months of 2020, its intention was to look at how things might change over the next five years. Little did we know…
When the pandemic hit, suddenly many of the things we had expected to happen by 2025 needed to be done by June. It was very quickly apparent that traditional exam halls were not going to be possible for the 2020 cycle, so there was a very rapid pivot to other ways of assessing students’ abilities. And that was amazingly successful. As I commented: the Future came 57 months early!
Now that we know it is possible to do assessments under lockdown conditions, can we use that experience as an opportunity to think about what the future of assessment might actually look like? Maybe getting back to normal in terms of travel, meetings and gatherings shouldn’t just mean reverting to the traditional forms of assessment?
We gradually coalesced around five questions:
What are we actually assessing, and is that what we want to assess? Knowledge, skill, ability to work under pressure, ability to write/type intensively for an extended period, short-term memory, long-term memory, digital wealth?
Which individuals or groups does that put at a disadvantage? And is the disadvantage something – like lack of computers, bandwidth or a quiet place for undisturbed assessment – which can be fixed by providing appropriate resources, or something inherently incompatible between the student and the assessment style?
Can we reduce the stress of being assessed? We should at least be wary of approaches that increase stress.
Can we make our exams more realistic? How many jobs actually require us to sit at a desk for three hours, with no access to external information resources?
What is malpractice? Can we adapt our style of assessment to make it ineffective or meaningless? Maybe a student who can use reference materials quickly to produce a well-argued response within the time limit is actually demonstrating their knowledge of the subject? Having done both open-book and exam hall assessments during my law degrees, I felt the former tested my subject knowledge, the latter my exam technique. And have the words “digital” and “remote” triggered an excessive concern with on-the-spot “security”, overriding other important concerns? If we take a broader view of the context in which assessment takes place, there may be ways to detect cheating that don’t require us to discard the other priorities for effective assessment.