The recent increased awareness of federated social networks has produced some discussion about their status under new “platform regulation” laws, such as the UK Online Safety Bill. Most of this has focussed on whether federated instances might be covered by legislation and, if so, what their operators’ responsibilities are.
But this post uses them as a way to look at content regulation in general. In particular, are these laws about controlling what we post, or what we read? On a centralised platform such as Facebook or Twitter there is no difference: the operator controls both what its users post and what they can see. In a federated system, by contrast, each instance has its own community of people who both post and read, but members of that community can also choose to read content posted on other instances by people who have no relationship with the local instance or its policies. What light does that difference shed on how we think about regulation?
Posting is fundamental to the definitions in the Online Safety Bill: a service that doesn’t allow posting (clause 2 says “generated”, “uploaded”, “shared”) isn’t a user-to-user service, so falls immediately out of scope. Services that allow interaction but limit it to “expressing a view” (via likes, votes, etc.) on provider content are also exempt (see Schedule 1 clause 4). Posting is also at the heart of the federated model, in which different instances set different policies: these may be defined in advance by an instance operator, so that those who find them welcoming can join, or an existing community may agree its preferred rules and create an instance to implement them. Perhaps the strongest community link is an instance for employees, whose contracts may already contain a policy on acceptable posting. This is very different to a centralised social network, where a single policy covers all posters and readers, however (un)comfortable they may be with it.
Reading isn’t as deeply embedded in the Bill, though groups of readers are likely to be a consideration in the required risk assessments. Two features of current federated systems support group-appropriate reading: as above, federated instances are expected to set and enforce their own rules for what is posted locally, and members of an instance can choose what (if any) content they see from outside it. Such choices are more effective in the currently normal situation where federated instances don’t use algorithms to select or promote extra content to individual users. An individual reader can start from their local timeline (which should follow the instance’s policies) and use controls to narrow or widen their personal policies by blocking, following, searching or reading a broader timeline. Instance operators can block whole external instances, typically because of incompatible policy or practice, but readers who want content from a blocked instance can still get it, either by joining that instance or by reading its public feed. Both of these routes are outside the control, or even the visibility, of the blocking instance’s operator.
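The layered reading controls described above can be sketched as a toy model. To be clear, this is an illustration of the logic, not any real Fediverse software or API: the class and field names (`Instance`, `Reader`, `blocked_instances`, and so on) are invented for this example.

```python
# Toy model of the two layers of reading control: operator-level
# instance blocks, with per-reader choices layered on top.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    instance: str   # domain of the instance the post was made on
    text: str

@dataclass
class Instance:
    domain: str
    blocked_instances: set = field(default_factory=set)  # operator-level blocks

    def operator_allows(self, post: Post) -> bool:
        # The operator can suppress whole external instances from
        # the timelines their instance serves.
        return post.instance not in self.blocked_instances

@dataclass
class Reader:
    home: Instance
    blocked_instances: set = field(default_factory=set)  # personal blocks

    def timeline(self, posts):
        # A reader's view is filtered first by their home instance's
        # policy, then narrowed further by their own choices.
        return [p for p in posts
                if self.home.operator_allows(p)
                and p.instance not in self.blocked_instances]

    def public_feed(self, posts, instance_domain):
        # ...but a reader can still fetch a blocked instance's public
        # feed directly, outside the home operator's control or sight.
        return [p for p in posts if p.instance == instance_domain]
```

The point the model makes is that the operator-level block only shapes what the home instance serves; the `public_feed` path bypasses it entirely, which is why reader-side choice matters to any regulatory analysis.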
Federated social networks offer an alternative way to think about platform regulation. It will be interesting to see whether Parliament or OFCOM incorporate this additional perspective as they develop and implement the UK legislation.