Riot police face anti-migration demonstrators in Rotherham, U.K., on August 4, 2024. (Christopher Furlong | Getty Images)

LONDON — Ofcom, the U.K.'s media regulator, was appointed last year to police harmful and illegal content online under strict new online safety laws. But even as online misinformation surrounding a recent stabbing in Southport spilled over into real-world violence, Ofcom has been unable to enforce these rules effectively.
In Southport, a 17-year-old named Axel Rudakubana attacked children at a Taylor Swift-themed dance class, resulting in the deaths of three girls. Misinformation quickly spread on social media, falsely identifying the attacker as an asylum seeker. This false narrative fueled violent far-right protests, leading to attacks on shops and mosques.
U.K. officials have urged social media companies to combat false information more aggressively. Technology Minister Peter Kyle has engaged with firms like TikTok, Meta, Google, and X to address the spread of misinformation during the riots. However, Ofcom’s ability to act is limited because the full powers granted by the Online Safety Act have not yet been implemented.
The Online Safety Act will eventually require social media platforms to identify, mitigate and manage the risks of harmful content. Once fully in force, the act will allow Ofcom to impose fines of up to £18 million or 10% of a company's global annual revenue, whichever is greater, and senior managers could face jail time for repeated breaches. Until then, the regulator cannot penalize companies for online safety violations.
An Ofcom spokesperson told CNBC that the organization is working swiftly to implement the act, but that the new duties on tech firms won't fully take effect until 2025. Ofcom is currently developing risk assessment guidance and codes of practice on illegal harms, both of which must be in place before the act's duties can be enforced.
In an open letter to social media companies, Gill Whitehead, Ofcom's group director for online safety, stressed the urgency of tackling harmful content now, even before the new laws take effect. She also acknowledged the need to balance removing illegal content with protecting free speech.
Ofcom plans to release its final codes of practice and guidance on online harms by December 2024. Platforms will then have three months to conduct risk assessments for illegal content. These codes will be reviewed by the U.K. Parliament, and if approved, the online safety duties will become enforceable shortly thereafter. Provisions to protect children from harmful content will be effective from spring 2025, with the largest services facing enforcement from 2026.