
Regulating Online Harms

This morning’s Westminster e-Forum event on regulating online harms contained a particularly rich discussion of both the challenges and opportunities of using regulation to reduce the amount of harmful content we see on the Internet. The Government published a white paper in April 2019 and an initial response to comments in February 2020. A full response is expected later this year, with legislation to follow in this Parliament.

One of the biggest challenges is the ambition to address both content that is illegal – typically well-defined in laws such as the Protection of Children Act – and content that is lawful but harmful. In the latter category the current pandemic has drawn particular attention to misinformation (accidental) and disinformation (deliberate): apparently a single item of “junk news” can achieve greater reach through social media than individual articles in the mainstream media. The Government’s initial response has recognised that these do need to be treated separately and suggests the approach to lawful-but-harmful content will focus on processes rather than individual items of content.

Among the challenges identified:

  • The scope of the regulation is still unclear. Although the Government has stated that “less than 5% of UK businesses” will be covered, it has not provided any suggestion of how the current broad definition – any website allowing user-generated content, comments, or interaction – will be reduced. TechUK noted the very wide range of size and capability of organisations in the UK’s digital sector: regulation must not require measures that only large organisations can deliver. It may help to consider different ways of reducing the visibility of harmful content: preventing, disrupting, de-ranking and de-monetising as well as removing. Appropriate measures of “success” will be needed, which may well vary between types of harm, approaches to reduction and types/scales of regulated entity.
  • The range of lawful content to be addressed is not clear, likely to be dynamic and, by its nature, hard to deal with at scale. Targets of hate-speech have changed dramatically over the past three months: could systems and processes (which may need to review millions of items a day) keep up with changes in vocabulary? There may also be a high level of dependency on context: whether a statement is teasing or bullying may well depend on the relationship between the individuals. Victoria Nash from the Oxford Internet Institute suggested we may well need specific legislation on statements by political leaders: should these be judged by the same standards as those applied to citizens? This should definitely not be left to platforms and regulators to decide.
  • Although the White Paper mentioned the importance of protecting free (lawful) speech, this has not been supported by the same level of detail as the commitments on rapid removal. Internet regulation has a long, and largely undistinguished, history of creating one-sided incentives that make it safer for platforms to remove material than to justify a decision to continue publishing it. The White Paper set ambitious targets for removal of material within 24 hours, but accurate decisions need to be given as much credit as fast ones. Ansgar Koene raised an interesting point that clarity is particularly important for UK legislation since, unlike Germany, we share a language with countries whose approach to this free speech balance may be significantly different. Global platforms cannot simply apply our national law to content in “our” national language.

The sessions also included presentations on some models of regulation that could be more widely adopted.

  • The Internet Watch Foundation’s work in using public reports and its own proactive investigations to generate takedown notices, URL lists and hash lists for child abuse images is well known. The depth of its collaboration with industry is perhaps less so. The charity has been industry-funded since its origins twenty years ago. It now works with technical experts from members to understand the complex ways in which material is distributed and how this might best be disrupted, and it holds hackathons where members can explore new ways of using IWF data in their systems.
  • The Internet Commission is a newer non-profit organisation that aims to identify and promote best practices in content moderation. These look broadly at the moderation process: reporting, moderation, promotion, notice, appeal, resources and governance. Companies from diverse sectors have volunteered for an independent assessment, involving confidential bilateral review, private benchmarking and information exchange. Among the common issues already emerging are the relationship between automated and human moderation, and how to provide rights of appeal. The first annual report, due before Christmas, will be interesting both for its content and for its model.
  • Finally, since many of the problems are social in origin, we should not ignore the contribution of social solutions. Though attempts at “awareness raising” and “digital literacy” have been common for many years, panellists suggested it might be more fruitful to focus on “digital civility”. Rather than trying to teach people how to assess the source of material they receive, this might usefully focus on responsible habits in how they use it. Ofcom’s digital literacy surveys suggest that while a lot of people are concerned about material they encounter, many don’t do anything about it.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
