
EU Notice and Action Consultation

I’ve sent in a Janet response to the EU’s consultation “A Clean and Open Internet: Procedures for notifying and acting on illegal content hosted by online intermediaries”. At the moment the E-Commerce Directive (transposed into UK law as the Electronic Commerce (EC Directive) Regulations 2002) says that websites aren’t liable for unlawful material (either criminal or civil) that is posted to their sites by third parties until they are either notified of alleged illegality or gain knowledge of it by other means (including their own investigations). Once they do know about the material they must remove it “expeditiously” to avoid liability thereafter.

This has been criticised both for discouraging sites from moderating or checking what is posted, and for encouraging them to remove material as soon as any complaint is received. The latter problem was highlighted as a human rights issue by the Law Commission in 2002, and I've recently discovered that the OFT pointed out last year that it could also be a consumer protection one (if reports of bad service are suppressed by legal threats).

Within the limits of what is mostly a checkbox form for responding, I've tried to highlight those problems, particularly as they affect education organisations, which may be expected (and sometimes required by law) both to check content proactively and to promote free speech. At the moment liability law actually discourages both of those.

However, in designing a better system, it seems to me that there are two different kinds of illegality that may need to be dealt with separately. For one kind it is impossible to tell from the posted content alone whether or not it is unlawful: for example, content can't be defamatory if it is true, and it can't breach copyright if the poster has permission to post it. The website host can't determine those things from the information it has. For that sort of material I've suggested that the poster does need a “right of reply” to an allegation of law-breaking, whether that is established by a right to have material put back after takedown (as in US law for copyright) or by requiring the poster to be contacted before material is removed (as seems to be the idea for the UK's new Defamation Bill). For the other kind of material – which includes malware and indecent images of children – it is clear from the material itself that it is unlawful, so a right of reply would simply delay the removal of something that is plainly unlawful to publish or distribute.
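To make the two tracks concrete, here's a minimal Python sketch of how a host might route incoming notices. All the names and the interface are my own invention for illustration; neither the consultation nor my response defines anything like this.

```python
from dataclasses import dataclass
from enum import Enum, auto


class NoticeKind(Enum):
    MANIFESTLY_UNLAWFUL = auto()  # e.g. malware, indecent images of children
    CONTEXT_DEPENDENT = auto()    # e.g. defamation or copyright allegations


@dataclass
class Notice:
    content_id: str
    kind: NoticeKind
    complaint: str


def remove_content(content_id: str) -> None:
    # Stand-in for the host's actual takedown machinery.
    print(f"taking down {content_id}")


def invite_reply(content_id: str, complaint: str) -> None:
    # Stand-in for contacting the poster (or offering put-back).
    print(f"forwarding complaint about {content_id} to its poster: {complaint!r}")


def handle_notice(notice: Notice) -> str:
    """Route a notice down one of the two tracks described above."""
    if notice.kind is NoticeKind.MANIFESTLY_UNLAWFUL:
        # Unlawfulness is apparent from the material itself, so a right
        # of reply would only delay removal.
        remove_content(notice.content_id)
        return "removed"
    # Unlawfulness turns on facts the host cannot see (the truth of a
    # statement, the existence of a licence), so the poster gets a right
    # of reply: contact before removal (the Defamation Bill model) or
    # put-back after takedown (the US copyright model).
    invite_reply(notice.content_id, notice.complaint)
    return "awaiting poster reply"
```

Of course the interesting question in practice is who decides which track a notice belongs on; the sketch only illustrates the consequences of that decision, which is why the two kinds of material need different legal treatment.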

The consultation also asks about systems for reporting problems to websites. From the responses I've had from members of the Janet community it seems that most websites do react quickly when told of problems (given the legal position it would be odd if they didn't), but that it can be difficult to find where to send reports. I've therefore agreed with the Commission's suggestion that reporting mechanisms should be made more obvious, but pointed out that these may need to suit both a human entering a single problem report and an automated system reporting a batch of problems, such as a range of phishing sites. The consultation suggests that sites providing reporting interfaces should only have to respond to reports sent through them – I've suggested that, at the very least, a report arriving by another channel shouldn't trigger the loss of liability protection.
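As an illustration of the “one mechanism, two audiences” point, here's a small Python sketch of a reporting entry point that accepts either a single report (as a web form might submit) or a JSON array of them (as an automated phishing feed might). The field names and payload shape are purely hypothetical.

```python
import json
from dataclasses import dataclass


@dataclass
class ProblemReport:
    url: str       # location of the allegedly unlawful material
    reason: str    # e.g. "phishing", "defamation", "copyright"
    reporter: str  # contact address for any follow-up


def parse_reports(payload: str) -> list[ProblemReport]:
    """Accept one report object or a batch, through the same entry point."""
    data = json.loads(payload)
    items = data if isinstance(data, list) else [data]
    return [ProblemReport(**item) for item in items]


# A single human-entered report and an automated batch, handled identically:
single = '{"url": "https://example.org/page", "reason": "defamation", "reporter": "someone@example.net"}'
batch = json.dumps([
    {"url": f"https://example.org/phish/{i}", "reason": "phishing",
     "reporter": "feed@cert.example.net"}
    for i in range(3)
])
print(parse_reports(single))
print(parse_reports(batch))
```

Keeping the two paths identical after parsing means a site maintains one response process, whatever the volume or source of the reports.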

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
