Measuring Student Workloads

Discussions of student wellbeing tend to focus on providing individual support for those who are struggling to cope. That’s great, but likely to demand a lot of skilled staff time. A few years ago Bangor University investigated whether the university might be contributing to stress through excessive or spiky workloads. Addressing causes of stress would, of course, benefit many students at once. And quite possibly staff, too…

The Bangor researchers considered a department that had a single catalogue of modules and assignments. From that, timelines of student workload could be extracted in a consistent fashion. When I heard the work presented, they were planning to model how different student behaviours would affect the timing and intensity of workloads: the student who works steadily as soon as an assignment is set may have a different experience to the one who leaves everything to the last minute. Tutors could then be helped to adjust their assignments or schedules to avoid creating excessive peaks across the cohort.
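The modelling idea can be sketched very simply. The assignment data below is entirely made up, and the two behaviours are the crudest possible caricatures (effort spread evenly versus everything in the deadline week), but they show how the same assignment schedule produces very different weekly workload profiles:

```python
from collections import defaultdict

# Hypothetical assignment records: (week set, week due, estimated hours).
# These numbers are illustrative, not real module data.
assignments = [
    (1, 5, 10),   # set in week 1, due in week 5
    (3, 5, 8),
    (4, 6, 12),
]

def weekly_load(assignments, behaviour):
    """Hours of work per week under a given student behaviour.

    'steady'      - effort spread evenly from set week to due week.
    'last_minute' - all effort lands in the due week.
    """
    load = defaultdict(float)
    for set_week, due_week, hours in assignments:
        if behaviour == "steady":
            weeks = range(set_week, due_week + 1)
            for w in weeks:
                load[w] += hours / len(weeks)
        else:  # 'last_minute'
            load[due_week] += hours
    return dict(load)

print(weekly_load(assignments, "steady"))
print(weekly_load(assignments, "last_minute"))   # {5: 18.0, 6: 12.0}
```

Even this toy version makes the point: the last-minute student faces an 18-hour week that the steady worker never sees, and shifting one deadline would flatten it.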

Expanding and reproducing that idea at institution scale would require a central source of information about modules and assignments. Depending on institutional practice and technology, that might be available from a VLE or the Jisc Learning Analytics Service. Refinements could include distinguishing formative from summative assignments and different assessment types, and adding exams, but even partial data can generate ‘heat maps’ of assessments and dates across courses or faculties that suggest where pinch points may exist.
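A heat map of this kind needs nothing more than deadline dates. As a minimal sketch (module names and deadlines invented for illustration), counting deadlines per module per week and printing them as a grid is enough to make a shared pinch point visible:

```python
from collections import Counter

# Illustrative deadlines: (module, week due). Not real catalogue data.
deadlines = [
    ("LAW101", 5), ("LAW101", 10),
    ("CS202", 5), ("CS202", 6),
    ("ECON150", 5),
]

modules = sorted({m for m, _ in deadlines})
weeks = range(1, 13)
counts = Counter(deadlines)

# Text 'heat map': one row per module, one column per teaching week.
print("week     " + " ".join(f"{w:2d}" for w in weeks))
for m in modules:
    print(f"{m:8s} " + " ".join(f"{counts[(m, w)]:2d}" for w in weeks))
```

Here week 5 immediately stands out: three modules all set deadlines in the same week, which is exactly the kind of pattern a tutor or timetabler could act on.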

If that “demand-side” information isn’t available, then perhaps there are “supply-side” proxies that could be used instead? A colleague pointed out that the act of submitting assignments also produces records, and that those might be a more consistent source of cross-institution data. Logs from submission or checking systems should at least show how many assessments were completed each week, revealing peaks and troughs. Additional details, such as the number of submission attempts and their proximity to the deadline, might reveal common strategies that may need support.
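The supply-side version is, if anything, even simpler, since it only needs timestamps. A sketch, assuming a log of (submission date, assessment id) pairs (the field layout is an assumption; real submission systems will differ), bucketed by ISO week so no personal data is involved:

```python
from collections import Counter
from datetime import date

# Illustrative submission-log rows: (date submitted, assessment id).
log = [
    (date(2024, 3, 4), "A1"),
    (date(2024, 3, 5), "A1"),
    (date(2024, 3, 12), "A2"),
    (date(2024, 3, 13), "A2"),
    (date(2024, 3, 14), "A2"),
]

# The ISO week number gives a consistent weekly bucket: just counts,
# no student identifiers required.
per_week = Counter(ts.isocalendar().week for ts, _ in log)
print(sorted(per_week.items()))   # [(10, 2), (11, 3)]
```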

Doing this at assessment, module or programme level shouldn’t require any personal data: just counts. Determining whether particular combinations result in high workloads probably does require linking submissions by the same individual (“students doing Intellectual Property and Data Protection made three submissions that week”), but should be possible using strong pseudonyms that don’t identify who the students are. The same is, I think, true of the approach using “demand-side” data: either can be done in a privacy-protecting way. The aim here isn’t to identify “steady workers” and “last-minuters”, but to adjust our demands so as to make life tolerable for both.
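One standard way to build such strong pseudonyms is a keyed hash: the same student ID always maps to the same token, so submissions can be linked across modules, but without the (securely held) key nobody can recompute or reverse the mapping. A minimal sketch, with an obviously illustrative key:

```python
import hashlib
import hmac

# In practice the key would be generated randomly and stored securely;
# this value is purely illustrative.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(student_id: str) -> str:
    """Stable pseudonym for a student ID via HMAC-SHA256."""
    digest = hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# The same student always yields the same token; different students differ.
print(pseudonymise("s1234567"))
print(pseudonymise("s7654321"))
```

Destroying the key once the analysis is complete removes even the theoretical ability to re-identify anyone, which fits the stated aim: patterns, not people.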

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
