Seismic Gaps in the Online Harms consultation
This is one of the most far-reaching consultations for a long time, so we asked Fraser Henderson, the Institute's respected specialist on online issues, to look at the detail. Here is his assessment…
It looks like the Government is getting serious about tackling online harms with the launch of this week's consultation on new safety measures, to be complemented by new laws.
This is taking shape in the form of a new "duty of care" and the creation of an independent regulator. The scope of the consultation suggests an ability to influence the proposed regulatory powers and where they apply, but there are some seismic gaps, such as no ability to comment on the responsibilities set out in the draft duty of care, or on how these might dovetail with a code of practice.
For example, one of the requirements in the proposed duty is that users who have suffered harm should be directed to support, yet this might equally be considered simple good practice. Similarly, an obvious but unlisted duty might be telling users that their content is being screened.
Along the same lines, consultees are spared questions about whether they think the idea is good or not to begin with, which is always a recipe for discontent. Moreover, it has already been decided which 'known harms' will be excluded from scope, and the proposals suggest that the regulator will not be responsible for policing truth and accuracy (despite disinformation being listed as a harm).
Yet there is another, more complex and inherent problem. That is, technology policy has a tendency to lag behind the real world. Reduce harms from the visible web and risk shifting users onto the dark web or peer-to-peer networks. That said, we must give the Government credit for recognising this and separating out issues around privacy.
I don't think there is any doubt that the internet can cause harm, but it is a regulation nightmare, not least because of the distributed nature of the content and the speed at which content is both published and propagated. This is a beast of an issue, and a better starting place might have been the publication of an issues paper. For example, it is unclear whether the Government has solicited alternative ideas or engaged industry in any pre-consultation activity, if only to see if the preferred options are technically feasible.
The key argument against this new duty is that new freedoms are the joy of the internet. Love it or loathe it, explosive contributions such as WikiLeaks could only have existed because they were not censored and were not easily prevented. The White Paper hints at these benefits, noting that 'care will be given to ensuring freedom of expression is preserved'. On balance, however, the supporting information is overly weighted towards the positive impacts of the proposed changes, something consultation professionals are always careful to balance when informing a debate.
A more liberal approach would call for better education, support and control over harmful content at the 'consumer' side. This is partly embedded in the new proposals and the consultation, but it is the interpretation of harm that will cause friction. For example, could lobbying be considered harassment or intimidating online behaviour?
So, this is a classic case of a consultation where it is not immediately clear what has and has not been decided, and what is up for influence, let alone what will happen with responses.
Arguably there should have been a separate industry consultation, whose feedback could have been weighed into the body of evidence so that consultees could make more informed choices. After all, many of the companies to which this new policy will apply claim to already be taking action to prevent or reduce online harms. It is the users who are actually creating the harms; these companies are merely facilitating them. And I doubt users will sit back and relax, believing they are free from any online harm, once the new regulator is in place.
In terms of who the new proposals affect, the suggestion is that they are targeted at companies which provide services or tools to UK users that 'allow users to share or discover user-generated content or interact with each other online'. Does this mean machine-generated content or chatbots are exempt?
It is also a shame that there is no catch-all "anything else?" question. There are many unresolved issues around how to regulate in the UK when a service provider is not trading from this country or is serving a worldwide audience. Likewise, internet service providers are cited as an enforcement option of last resort; given how simple it would have been to include, it seems strange that there is no consultation question challenging the logic of this. As a bare minimum, hyperlinks to the relevant proposal points would have made it simpler to digest these complex issues.
In essence, this feels like a consultation where the proposal is so well advanced that the consultation questions cloud the fact that key principles cannot be influenced, and where industry input is lacking to the extent that the practical consequences of the proposal cannot easily be weighed.
Perhaps we should have some legislation around the online harms of poor consultation practice? Responses online only, please!