Regulation around online safety is desperately needed, as highlighted most recently by sports professionals boycotting social media over racial hatred online.
The ‘Online Safety Bill’ will introduce landmark laws aiming to protect young and vulnerable people on the internet and crack down on social media abuse. The new Bill is a welcome step in the right direction but has divided critics: some argue that it does not go far enough to protect online users, while others believe that its rules around moderating and taking down content will limit freedom of expression.
What is the new Bill?
The ‘Online Safety Bill’ was introduced to the House of Commons on 12 May 2021 with the aim of providing a single regulatory framework to keep online users (particularly children) safe, whilst also safeguarding freedom of expression. The Bill will establish Ofcom as the independent regulator responsible for overseeing and enforcing compliance with the new laws. The draft Bill marks a significant moment in the Government’s efforts to tackle harm that is caused online.
Key ways the Online Safety Bill could affect you
Services that host user-generated content, such as social media sites and apps, will be required to take action against illegal abuse
- For example, this includes racist posts on Facebook or Twitter, and sexual harassment or threats sent via direct message on Instagram
- Services will be required to assess the risks their sites pose to vulnerable people, as well as to protect children from inappropriate content
- The largest and most popular social media sites (category 1 platforms) will be required to meet higher standards. This will include cracking down on content that is lawful but still harmful, such as encouragement of self-harm and mis/disinformation.
- Services will be required to report child sexual exploitation and abuse content.
Companies will need to put in place safeguards for freedom of expression
- Ofcom will set out these safeguards in codes of practice. They could include measures such as employing human moderators.
- Service users will need to be able to appeal content-removal decisions made without good reason. Users will also be able to appeal directly to Ofcom.
- Category 1 platforms will have additional duties such as reviewing and publishing up-to-date assessments of their impact on freedom of expression.
- These measures are intended to ensure that companies do not over-remove content in order to meet their duties.
Companies will be required to protect political content and must remain politically neutral when carrying out moderation
- Category 1 platforms will be required to protect content defined as ‘democratically important’. For example, Facebook posts or tweets promoting or opposing government policy or a political party ahead of a vote in Parliament, election or referendum, or campaigns on live political issues.
- Companies must set out clear policies to protect such content.
- Companies are expected to take context into account when moderating. For example, a campaign could release violent footage to raise awareness about violence against a specific group. The company may choose to keep the content up, but it should carry warnings and the policy must be applied consistently.
There is special protection for both professional and amateur journalism
- Content on news publishers’ websites (the BBC, Sky, The Telegraph, The Guardian, etc.), including when that content is shared by users on platforms, will not be limited by any of the rules in the Bill.
- Category 1 platforms will have a duty to safeguard UK users’ access to journalistic content shared on their platforms. This means platforms will have to consider the importance of journalism when moderating content.
- Citizen journalists’ (e.g. amateur bloggers) content will have the same protections as professional journalists’ content.
Companies will be responsible for cracking down on fraudulent user-generated content
- This includes romance scams (e.g., when somebody is led to believe a person is romantically interested in them and is tricked into paying for something, or giving money to that person)
- It also includes fake investment scams (e.g., when somebody promises to invest your money in a scheme that does not exist and takes your money!)
- Fraud via advertising, emails or cloned websites, however, is not covered by the Bill. The Government’s reason for this is that the Bill focuses on user-generated content.
Companies will be punished for failing to remove content
- Ofcom will be given the power to fine companies up to £18m or 10% of their annual global turnover (whichever is higher) if they fail to take down harmful content
- Ofcom will also have the power to block access to non-compliant sites
- Senior managers of companies could also be subject to criminal action if they fail to protect their users from harmful content!
Protection vs. freedom of speech: Fierce critics on both sides
The Bill is a “wasted opportunity to put into place future proofed legislation to provide an effective and all-encompassing regulatory framework to keep people safe online”, according to Labour MP Jo Stevens.
Protection from abuse
- The National Society for the Prevention of Cruelty to Children (NSPCC) says that the new law risks falling short if it does not tackle the complexities of online abuse and fails to learn the lessons from other regulated sectors.
- Some of the NSPCC’s main concerns are that the Bill does not tackle child sex abuse at an early stage and that it fails to place responsibilities on tech firms to address the cross-platform nature of abuse.
Impact on freedom of speech
- Many critics are concerned about the effect the legislation will have on freedom of speech, and that a balance needs to be found between censoring harmful content and allowing expression.
- Jim Killock, Executive Director of the Open Rights Group, criticised the Bill as a “flawed approach”, arguing that treating online speech as inherently dangerous will only lead to over-reaction and content removal, heavily impeding free speech on these platforms.
Lack of protection from fraudulent ads
- The Bill does not cover fraud via advertising, emails or cloned websites, and this decision has received significant backlash.
- Martin Lewis, founder of MoneySavingExpert.com, believes the Government has stumbled at the first fence and failed to protect millions “from one of the most damaging online harms to their financial and mental health.”
- However, the Government argues it has worked closely with industry, regulators and consumer groups to tackle this issue and that the Home Office will publish a Fraud Action Plan after the 2021 spending review. The Department for Digital, Culture, Media and Sport (DCMS) will also consult on online advertising, including the role it plays in enabling online fraud, later this year.
What happens next?
There is still a long way to go before the Bill becomes law! In the next stage, the Bill will be scrutinised by MPs on the DCMS Select Committee; it will then need to be reviewed (and possibly amended) in the House of Commons and approved by the House of Lords before it can be enacted. Watch this space!
Many thanks to James Tate for this post.