[Script for a presentation at a recent Westminster Education Forum event…]
Back in February 2020 we knew what assessment looked like. Jisc had just published “The Future of Assessment”, setting five targets – Authentic, Accessible, Appropriately Automated, Continuous, and Secure – to aim for by 2025. Then COVID made us all look at assessment through a new, and much more urgent, lens.
It seems to me there were three responses, each invoking a different role for technology.
First: technology as defender of the existing process. This saw the lockdown requirement for remote assessment as a threat, with technology as the solution. In its most extreme form, this produced e-proctoring, which makes assessment even more stressful, even less Authentic, and even less Accessible for those who don’t have their own private space or technology, or who lack the bandwidth for a live video connection throughout the assessment. These – I hope – are temporary measures…
Then there’s technology as facilitator. Can it help us adopt different forms of assessment that address both the problems we knew about in 2020 and those of the pandemic? Here I’m thinking of things like open book – probably a more Authentic preparation for the workplace – and, if we actually want to test memory, rapid-fire multiple choice. Technology should create opportunities here: advance downloading and on-device management of resources and timings, so network connectivity is no longer critical; automated marking of multiple-choice responses; supporting markers by highlighting or grouping desired features in long-form essays; or suggesting when they may be marking inconsistently. Incidentally, this takes a very different approach to assessment security: redefining “misconduct” as a feature, rather than a bug. If someone can select appropriate quotes and examples from a whole library, rather than just their memorised notes, or look up multiple-choice answers within tight time limits, then they probably know the subject pretty well.
Finally, something to bear in mind as we contemplate a “return to…”. Can technology be an enabler for kinds of assessment that are otherwise impractical? Could it give us a better perspective on assessing group work – or at least an early indication of things going wrong – by looking at patterns of communication among the students? Having technology inspect discussion content may be too intrusive; or maybe not, if the alternative is accusations of “letting the team down” that damage personal relationships and individual confidence. Or could we use technology to move from assessment of learning, through assessment for learning, to assessment as learning, where the assessment activities themselves are relevant and productive learning experiences? For example, students could learn by providing anonymised feedback – with tutor guidance – on one another’s work. These exciting ideas, which are already being tested, were presented at a EUNIS workshop in November 2021.
So, will technology be a defender, a facilitator, or an enabler of our post-pandemic assessment system? Could pandemic-enforced changes move us towards better assessments?
Informed by experiences of the pandemic, Jisc has now published revised principles. Assessment should:
- Help learners understand what good looks like;
- Support the personalised needs of learners;
- Foster active learning;
- Develop autonomous learners;
- Manage staff and learner workload effectively;
- Foster a motivated learning community;
- Promote learner employability.
We will need humans and technology to work together to deliver those.