I’ve been reading about Slow Computing and the need for ‘digital forgetting’. But, unlike the GDPR Right to Erasure, human forgetting isn’t clean: it more often involves uncertainty than simple elimination. That leaves our database in a different state: whereas digital erasure has no effect on the records that remain, much of our human memory is still present but of uncertain quality. We don’t know which of those memories are accurate, so we should be wary of placing too much reliance on them.
So what would happen if there were a Right to Data Decay? We might imagine databases inhabited by chaos monkeys that randomly alter a small percentage of field values each time they run. To mimic human forgetting, the likelihood of alteration could relate to the age of the record or the length of time since it was last used. Over time, old or unused records would become increasingly unreliable, and relying on them increasingly hazardous. Provided the database holder bears the cost of that unreliability, they might well be motivated to introduce more frequent re-validation processes and/or simply to dispose of data once it approached the age of unreliability.
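As a thought experiment, such a decay monkey could be sketched in a few lines. Everything here is an assumption for illustration: the record layout, the field names, and the half-life decay curve (the chance of corruption doubles towards certainty as a record's idle time passes each half-life) are all hypothetical, not a proposal for any real system.

```python
import random
from datetime import datetime, timedelta

def decay_probability(last_used: datetime, now: datetime,
                      half_life_days: float = 365.0) -> float:
    """Chance that one pass corrupts a record, rising with time since last use.

    Hypothetical curve: 0 for a record used today, 0.5 after one half-life,
    approaching 1 as the record lies unused for ever longer.
    """
    idle_days = max((now - last_used).days, 0)
    return 1.0 - 0.5 ** (idle_days / half_life_days)

def chaos_monkey_pass(records: list[dict], now: datetime,
                      rng: random.Random) -> int:
    """Blank one randomly chosen field in each record whose decay fires.

    Returns how many records were altered. Keys 'id' and 'last_used'
    are treated as metadata and never corrupted.
    """
    altered = 0
    for rec in records:
        if rng.random() < decay_probability(rec["last_used"], now):
            field = rng.choice(
                [k for k in rec if k not in ("id", "last_used")])
            rec[field] = None  # the stored value is now unreliable
            altered += 1
    return altered
```

Run periodically, a pass like this leaves fresh records untouched while steadily degrading stale ones, which is exactly the incentive structure described above: re-validate, or dispose.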
Those seem like incentives that benefit both the data controller and the data subjects. So do we need such a right? Probably not, once you realise that the real world already develops in ways that make data go out of date. The university still sending glossy alumni brochures to the person who sold our house a decade ago is wasting money by failing to recognise that; the online book-seller still recommending children’s books for relatives who are now grown up is devaluing its other recommendations to me and, presumably, to “people like me” too; data science models that incorporate outdated data may well lose accuracy, especially where the purpose of the model is to change the very reality that it reflects.
Recognise data decay, save money, improve services!