[UPDATE] Recordings from the event are now available
David Clark of MIT is one of the best people to take a long view of the Internet: he has been working on it since the 1970s. So his suggestion – in a Weizenbaum Institute Symposium yesterday – that the 2020s may see as dramatic a change in Internet regulation as the 1990s is significant.
Before the 1990s, most Internet development had taken place in the public sector, in research and military organisations. Once the technology had been shown to work, and had become usable (to some extent) by private individuals and organisations, government reduced its involvement and commercial organisations were allowed, indeed encouraged, to take more of a role. To facilitate this, regulation of the new medium was deliberately light-touch: in both the USA and Europe, existing models of publisher liability were rejected as being too onerous for a developing commercial market.
There’s no question that that created an explosion of new ideas, services and possibilities: for Jisc’s recent 25th anniversary a number of us “silverlocks” reflected on what we were doing with networks in 1994 (I was setting up Cardiff University’s first web server) and our younger interviewers were amazed at how primitive it all sounded. But David’s sense is that Governments are now looking, increasingly unhappily, at the consequences of that decision.
Much of that is down to simple economics. The original vision of the Internet was a mesh of cooperating entities, providing distributed services. That’s OK, perhaps, for large research universities, which are used to collaborating. But it’s a really hard model to sustain: it’s much easier to build a centralised encyclopedia or discussion group than a distributed one. There’s no need for entities to agree to define (and then, which may be even harder, implement) complex standards and protocols; nor to persuade users that it might be better not to head to the service where all their friends are; nor to explain to regulators why discussing future plans is positive for competition and not the early stages of a cartel. Centralised services are also much easier to monetise: the Internet has lots of protocols for moving bytes, very few for moving pennies. Dominant players emerge naturally; no evil intent is required.
But once you have dominant players, offering frictionless and essentially free exchange of information, they naturally become a focus for societal problems that are normally constrained by either friction or economics. The list of issues that Governments are being told to “do something” about is increasing: privacy, trust, use of platforms by malicious and adverse interests, societal and economic dependency on large platforms, anonymity, national security, concentration of power, erosion of democracy, taxation… And there are signs that policy-makers in many countries and regions are looking to respond to at least some of those demands.
But in the 1990s the Government stakeholders were relatively few and shared a common approach, and the commercial ones were small and fragmented. Now, by contrast, Governments have very diverse views on Internet regulatory policy: many even hold competing policies (for example on encryption) in different departments. The consistent view is now on the commercial side – at least among the large platforms. It is now at least as common for platform choices to change Government policies (notably on COVID contact-tracing apps and link taxes) as the other way around. As David concluded, we may well be at the start of a change in Internet regulation as significant as that of twenty-five years ago, but significantly slower and messier.