FTI Consulting Public Affairs Snapshot: The Online Safety Act has received Royal Assent, but its journey is far from over

After a much-delayed passage through Parliament, the Online Safety Act finally received Royal Assent yesterday. The controversial legislation has taken many forms since the Government’s 2017 Internet Safety Strategy, but the overarching aim to protect users online, particularly children, has remained constant throughout. However, many of the finer details of how the Act will affect online services are yet to be determined.

The legislation signals a monumental shift in liability online by making online platforms responsible for the content they host. As such, the Act will undoubtedly have an immense impact on providers of user-to-user online content and search services, including social media, video games, and instant messaging platforms, to name a few.

The Act introduces a tiered approach, whereby all services will be required to protect users from illegal content, with additional obligations on services likely to be accessed by children to prevent them from encountering harmful content. There will also be further restrictions for so-called Category 1 services, the largest and highest-risk providers, which will need to give adult users greater control.

To ensure the regime has teeth, the penalties for breaches are severe. The regulator, Ofcom, has the power to issue fines of up to £18 million or 10 per cent of annual global turnover, whichever is greater, and senior managers can be held criminally liable for non-compliance.
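To make the scale of that penalty ceiling concrete, the sketch below works through the greater-of calculation for two hypothetical firms. It is purely illustrative, not a compliance tool: the function name and turnover figures are invented for the example, and it simply assumes the cap is the greater of £18 million and 10 per cent of annual global turnover, as described above.

```python
# Illustrative sketch of the Online Safety Act's penalty ceiling:
# the greater of £18 million or 10% of annual global turnover.
# Hypothetical helper for exposition only, not legal or compliance advice.

FIXED_CAP_GBP = 18_000_000   # fixed £18 million element
TURNOVER_SHARE = 0.10        # 10% of annual global turnover


def max_fine_gbp(annual_global_turnover_gbp: float) -> float:
    """Return the maximum fine available to the regulator, in GBP."""
    return max(FIXED_CAP_GBP, TURNOVER_SHARE * annual_global_turnover_gbp)


# Smaller platform (£50m turnover): the fixed £18m element binds.
print(f"£{max_fine_gbp(50_000_000):,.0f}")      # £18,000,000

# Large platform (£10bn turnover): the turnover-based element dominates.
print(f"£{max_fine_gbp(10_000_000_000):,.0f}")  # £1,000,000,000
```

For the largest platforms, in other words, the turnover-based element dwarfs the fixed £18 million figure, which is what gives the regime its deterrent weight.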

The road to this point has been long, with delays attributed to the pandemic and the downfalls of Boris Johnson and Liz Truss, as well as to considerable revisions prompted by concerns that the original draft was overly burdensome. But while the full impact of the legislation is yet to be determined, one thing is certain: it looks very different from its debut as the Internet Safety Strategy, and even from the draft Bill published in 2021.

Major changes include the Government’s significant decision last year to axe provisions that would have forced big tech platforms to take down material deemed “legal but harmful” to adults: offensive content that does not constitute a criminal offence. Such material is now beyond the scope of the new law.

Attempts to regulate such content spurred concerns among some Conservative backbenchers that the Bill was too far-reaching and placed too much responsibility on social media companies, potentially to the point of threatening free speech online. A divisive issue, its initial inclusion drew both support and criticism in the wider public arena.

In response to these concerns, the Act now requires the largest platforms (Category 1) to enforce their terms and conditions for users, as well as to give adults more control over the content they see. From this point forward, if a platform’s terms explicitly prohibit content, Ofcom will have the power to ensure that the company’s policies are enforced.

Equally controversial are the provisions that could require platforms to break end-to-end encryption where necessary to identify certain kinds of content, including terrorist and child sexual abuse material. These provisions unsurprisingly drew widespread criticism from privacy activists and messaging services alike, with some services even threatening to pull out of the UK on these grounds.

In what has largely been seen as a climbdown by the Government, the legislation now states that firms must use “accredited technology” to identify such content if they receive a notice to do so from Ofcom. However, given that no accreditation scheme yet exists, this provision seems likely to have limited impact in practice, at least in the near term.

Meanwhile, the Labour Party has characterised the Act as one gutted by Conservative infighting. Many critics argue it has significant gaps, most notably the absence of any plan to tackle misinformation and disinformation.

Labour has called for the Government to commit to a review of the legislation within five years of its enactment, with one of its core concerns being the use of algorithms that target people with harmful material. The Opposition has also indicated that it would make the Act more stringent if elected, including introducing changes to address “legal but harmful” material.

When the Government first published its Internet Safety Strategy in 2017, it positioned the UK as a world leader in tackling online harms. Fast forward to 2023, and the UK is playing catch-up with its counterparts, including the European Union and Australia, which passed the Digital Services Act in 2022 and the Online Safety Act in 2021, respectively.

Unfortunately for the Government, this game of catch-up is far from over. Yesterday, Ofcom set out its timeline for online safety implementation. Taking a phased approach, the regulator will start by publishing a consultation on illegal harms on 9 November and will set out its draft guidance for categorisation thresholds by Spring 2024.

In-scope services will be waiting eagerly for Ofcom to advise the Government on where these thresholds should sit, given that this will determine the level of obligations companies face. There are already concerns that the thresholds could unintentionally capture up to 25,000 organisations under Category 1, subjecting high-reach, low-risk platforms to the most demanding compliance obligations even though they pose considerably less harm to users.

So, while the Online Safety Act has reached the end of its Parliamentary journey, this is just the beginning of the UK’s online safety regime. With key provisions yet to be determined by Ofcom, secondary legislation yet to be set out, and the possibility of further legislative changes from a Labour government, the tech sector will need to continue engaging closely with this regime as it evolves. Attention in the short term will, of course, turn to the AI Safety Summit as the Government continues to grapple with the economic and societal implications of emerging technology.

The views expressed in this article are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.

©2023 FTI Consulting, Inc. All rights reserved. www.fticonsulting.com
