Thinking about blocking

Throughout the time I’ve been working for Janet, the possibility of using technology to block undesirable activity on networks and computers keeps coming up. Here are four questions I use to think about whether and how technology is likely to be effective in reducing a particular kind of activity:

Where is the list?

Any technology needs a set of instructions. In the case of blocking, we need to tell it how to distinguish things that should be blocked from things that should be allowed. Typically, that’s a list of Internet locations. One day machine learning may get closer to understanding content or intention, but we’ll still need to provide it with a good/bad model.

So, can we get that list from someone else, or do we have to create and maintain it ourselves? Maintained lists of different categories of activity may be available, either free or as part of commercial services or appliances. If we have to create a new list, do we have the skills, resources and permission (in some cases including legal) to do that? How will we keep it up to date, and handle any challenges to our decisions to include or exclude particular actions or content?
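If we do maintain our own list, the update and challenge processes matter as much as the entries themselves. A minimal sketch of one way to keep an auditable record (the class and field names are illustrative, not from any real filtering service):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BlocklistEntry:
    location: str          # e.g. a domain or URL
    reason: str            # why it was added
    added: datetime
    challenges: list = field(default_factory=list)  # record of disputes

class Blocklist:
    def __init__(self):
        self.entries: dict[str, BlocklistEntry] = {}

    def add(self, location: str, reason: str):
        self.entries[location] = BlocklistEntry(
            location, reason, datetime.now(timezone.utc))

    def challenge(self, location: str, note: str):
        # Keep a record of every dispute, even ones that are rejected,
        # so inclusion decisions can be reviewed later.
        self.entries[location].challenges.append(note)

    def remove(self, location: str):
        self.entries.pop(location, None)
```

The point of the sketch is that each entry carries its own justification and dispute history, which is what makes challenges to inclusion or exclusion decisions answerable.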

Where is the technology?

Internet technologies typically give us four different ways to specify things to be blocked: network (IP) addresses, domain names (DNS), application identifiers such as URLs and email addresses, and content inspection (e.g. keywords or hash values). Each offers different precision, depending on the nature of the unwanted activity, so we should choose the one that most accurately defines what we want to block. Errors are likely in both directions – over-blocking that prevents legitimate activity and under-blocking that allows some unwanted activity – but choosing the right blocking mechanism should minimise both. Modern technologies such as cloud hosting and Content Delivery Networks (CDNs) involve a lot of sharing of both domain names and IP addresses, so those rarely offer good precision. Application identifiers are usually the most precise, but extracting and checking them adds delay and raises privacy issues. Content inspection is unreliable outside a narrow set of applications, such as detecting repeat appearances of known illegal images.
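The precision difference between layers can be illustrated with a minimal sketch (all hostnames, addresses and list entries here are invented) comparing an IP-level and a URL-level block against two sites sharing one host:

```python
from urllib.parse import urlparse

# Hypothetical blocklists at two different layers.
BLOCKED_IPS = {"203.0.113.7"}                       # blocks every site on the address
BLOCKED_URLS = {"http://shared.example/bad-page"}   # blocks one resource only

# Hypothetical shared (CDN-style) hosting: two names, one address.
HOSTS = {"shared.example": "203.0.113.7", "innocent.example": "203.0.113.7"}

def blocked_by_ip(url: str) -> bool:
    """IP-layer check: coarse, affects everything on the address."""
    return HOSTS[urlparse(url).hostname] in BLOCKED_IPS

def blocked_by_url(url: str) -> bool:
    """Application-layer check: precise, but needs sight of the full URL."""
    return url in BLOCKED_URLS
```

Here the IP-layer check over-blocks `innocent.example` because it shares an address with the listed site, while the URL-layer check touches only the listed page – at the cost of having to inspect each request.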

Whatever technology layer we choose for blocking, we need some equipment to implement the block, and some way to ensure that network traffic goes through that equipment. Depending on the approach chosen, existing routers (IP), resolvers (DNS) or proxies (identifiers and content) may offer relevant functions; otherwise new equipment will be needed. Note that forcing traffic through blocking equipment is likely to create a single point of failure. Blocking and resilience are very hard to reconcile.

Who are the users?

A few kinds of activity – notably, active threats to connected computers – can be blocked for every user of the network. More often institutions will want to choose which blocks to apply and to whom, so should opt-in to the blocking, rather than having it imposed. If institutions need to make local changes to make blocking effective, imposing it before they are ready will have unpredictable results, possibly undermining existing protection measures. To assess the effectiveness of blocking, or to use the blocked content in research or teaching, particular individuals or locations will need to be exempted from the block.

These issues have implications for where the blocking equipment is located, who configures it and has access to logs. Equipment should be placed where it will have access to as much of the traffic to be checked as possible and (because most technologies add delay) as little other traffic as can be arranged. Where fine-grained per-user or per-location control is needed, this must be managed by the organisation that can identify the individuals and locations that should be (temporarily) exempted: typically their institution. Note that such fine-grained control is technically complex to implement for IP and DNS blocks. Where access to logs is required – for example to provide help to those who may have tried to undertake prohibited activities – this should also be at institutional level.
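The per-user exemptions described above amount to a policy lookup performed by the institution before the block is applied. A minimal sketch, with invented identifiers and a simple expiry mechanism standing in for whatever access-control system an institution actually runs:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical institutional exemption table: user -> exemption expiry.
EXEMPTIONS = {
    "researcher01": datetime.now(timezone.utc) + timedelta(days=30),
}

def is_blocked(user: str, url: str, blocklist: set) -> bool:
    """Apply the block unless the user holds a current (unexpired) exemption."""
    expiry = EXEMPTIONS.get(user)
    if expiry is not None and expiry > datetime.now(timezone.utc):
        return False   # temporarily exempt, e.g. for research or teaching
    return url in blocklist
```

Making exemptions time-limited keeps them "temporary" by default: access lapses unless someone positively renews it, rather than persisting until someone remembers to revoke it.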

How will people react?

Technical blocks can always be circumvented, so are most effective against activity that no one should want to encounter. Even if recipients welcome the block, we still need to consider how malicious actors will respond: they might simply change location so we have to update lists more frequently; but they may also move activity closer to legitimate services to make over-blocking more likely and more disruptive.

Attempting to block activity that users desire gives them an incentive to circumvent the block. They can use different connectivity (home or mobile), but there are many technical ways to evade blocks without changing network. The activity may then continue but be invisible to those operating the network. Worse, most evasion technologies circumvent all blocks, including those for unwanted activity such as viruses, ransomware and other threats to devices and individuals. As our Guide to Filtering on Janet explains, it is particularly important that technical measures against desired content are part of a wider awareness, behaviour and support process: information and warnings may help reduce deliberate circumvention.


Two examples show how the questions can help explore the use of technology against different types of unwanted activity.

Distributed Denial of Service (DDoS)

  • Where is the list? DDoS attacks against Janet and its customers are usually identified and blocked using a combination of source IP addresses and packet characteristics. Live information is available from commercial sources as well as Jisc’s own threat analysts.
  • Where is the technology? Although some attacks can be blocked using existing routers, it is more efficient (and less disruptive to the routers’ intended function) to redirect suspect traffic to a dedicated cleaning service, where malicious traffic can be identified and blocked and legitimate traffic from the same sources (which are usually compromised computers) forwarded to its intended destination.
  • Who are the users? DDoS attacks can target any institution or service. Since blocks are typically temporary (the average attack duration in mid-2022 was one hour, the maximum six) and precise, they can be applied to protect all users of the Janet network.
  • How will people react? Targets of blocked attacks should welcome the protection provided. Attackers can, and do, switch both the sources and targets of their attacks when blocked. Hence it is essential that blocks reflect live information from the network, as well as from external sources.
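The "source addresses plus packet characteristics" combination used by a cleaning service might look like the following sketch. The threshold and the amplification heuristic are invented for illustration; real scrubbing services use far richer signatures:

```python
# Hypothetical scrubbing decision: drop packets only when they match BOTH a
# known attack source AND attack-like characteristics. Traffic from the same
# (usually compromised) sources that doesn't look like attack traffic is
# forwarded, since those machines also have legitimate users.
ATTACK_SOURCES = {"198.51.100.9"}

def scrub(src_ip: str, src_port: int, size: int) -> str:
    # e.g. large responses from port 53 resembling DNS amplification
    attack_like = src_port == 53 and size > 512
    if src_ip in ATTACK_SOURCES and attack_like:
        return "drop"
    return "forward"
```

Requiring both conditions is what makes the block precise enough to apply network-wide: neither a listed source sending ordinary traffic nor an unlisted source sending large DNS responses is dropped on its own.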


Terrorist content

  • Where is the list? Lists of content, including some regulated by terrorism laws, are available through filtering services.
  • Where is the technology? Terrorist content is frequently published through otherwise legitimate social media and hosting services. Unwanted content therefore needs to be defined at URL level, suitable for application proxies able to make these distinctions.
  • Who are the users? Universities UK has guidance on how to provide researchers with the access they need to security-sensitive material. Such access must be managed by the institution that can vet requests for access, verify the identities of authorised researchers, and provide appropriate access control facilities.
  • How will people react? The Home Office Prevent Duty Guidance warns that some individuals may be drawn in by this kind of material. They may quickly adopt technologies to evade any blocks, so institutions’ Prevent strategies should aim to provide appropriate advice and support to anyone showing early signs of interest.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
