I’m delighted to announce that the Journal of Learning Analytics has published our paper on why and how we developed the Jisc Wellbeing Analytics Code of Practice. If you want to know the context that prompted our interest in data-supported wellbeing, or how we mined the GDPR for all possible safeguards, then have a look […]
Getting a Feel for AI Terrain
Decisions about whether or not to use Artificial Intelligence (AI) should take several factors into account, including the institution’s objectives, purpose and culture, its readiness, and issues relating to the particular application. Jisc’s Pathway Towards Responsible, Ethical AI is designed to help you with that detailed investigation and decision-making. But I wondered whether there might be a check […]
Legal cases aren’t often a source of guidance on system management but, thanks to the cooperation of the victims of a ransomware attack, a recent Monetary Penalty Notice (MPN) from the Information Commissioner (ICO) is an exception. Vulnerability management was mentioned in previous MPNs (e.g. Carphone Warehouse, Cathay Pacific, and DSG), but they don’t go […]
Change: A Feature, not a Bug
Reading the Machine Learning literature, you could get the impression that the aim is to develop a perfect model of the real world. That may be true when you are trying to distinguish between dogs and muffins, but for a lot of applications in education, I suspect that a model that achieved perfection would be […]
Swaddling AI
I’ve been reading a fascinating paper on “System Safety and Artificial Intelligence”, which applies ways of thinking about safety-critical software to Artificial Intelligence (AI). What follows is very much my interpretation: I hope it’s accurate, but do read the paper as there’s lots more to think about. AI is a world of probabilities, statistics and data. That […]
Visualising the Draft EU AI Act
I’m hoping to use the EU’s draft AI Act as a way to think about how we can safely use Artificial Intelligence. The Commission’s draft sets a number of obligations on both providers and users of AI; formally these only apply when AI is used in “high-risk” contexts, but they seem like a useful “have […]
Explaining Network Telemetry
Today’s GEANT Telemetry and Data Workshop featured a really interesting series of talks on how to gather and share information about the performance of networks. One of the most positive things was a clear awareness that this information can be sensitive both to individuals and to connected organisations. So, as the last speaker, I decided […]
GDPR Article 21 provides a “right to object” whenever personal data are processed based on either Legitimate Interests or Public Interests. In both cases, an individual can highlight “grounds relating to his or her personal situation” and require the data controller to consider whether there remain “compelling legitimate grounds for the processing which override the […]
I was invited to contribute to a seminar on the Right to Object (RtO). Normally this GDPR provision is seen as a way to prevent harm to a particular individual because of their special circumstances. But I wondered whether data controllers could also use the RtO process as an opportunity to review whether their processing […]
When the Government first announced plans to regulate online discussion platforms, I wondered whether small organisations would be able to outsource the compliance burden to a provider better equipped to deliver a rapid and effective response. Clause 180(2) of the Online Safety Bill suggests the answer is yes: The provider of a user-to-user service is to […]