Is It Possible To Make Social Media Safer?
On the proposed social media ban for under 16s in the UK.
I have to admit that as a parent, my knee-jerk reaction was in favour of a blanket social media ban for under 16s. And professionally, I’ve been driven to absolute distraction over the past 15 years by the failure of online platforms to self-regulate, and to identify and mitigate risks to human rights globally. Enough.
But there’s a reason this article is not titled “Why I support a blanket social media ban for under 16s”: like most things, it’s a bit more complicated than that.
Political momentum is behind a ban. The Online Safety Act in the UK places obligations on platforms to protect users, particularly children, from harmful content, but the completely foreseeable situation of X’s AI tool Grok being used by men to create undressed images of women and children perhaps drove many in government to lose patience.
The Children’s Wellbeing and Schools Bill was happily making its way through the House of Lords this week when a proposed amendment was voted in requiring the government to introduce regulation to prevent under 16s from accessing social media.
So the Bill goes back to the House of Commons and a ban is on the cards. Sixty Labour MPs wrote to Prime Minister Keir Starmer to support the ban. Starmer, in an effort to slow things down, announced a consultation to gather evidence and report in the summer.
Child safety groups are largely against a blanket ban. Technically implementing a ban gives rise to a problematic age verification industry extracting data from children (an article for another time). Of course teenagers will try to evade the ban; they are teenagers. Downloads of VPNs, tools that disguise your location and can mask the country you are in, surged when the Online Safety Act came into force last July as age verification kicked in. A less reported amendment to the Children’s Wellbeing and Schools Bill is one requiring regulations to prevent children accessing VPNs.
Everyone is scrutinising Australia closely, where a social media ban was enacted in December, to see how it works out.
What would a ban achieve?
I completely understand the desire for a ban. As I wrote recently, platforms struggle to get safety right. They have failed to enforce their own age limits (currently 13+), failed to consistently remove content harmful to children such as suicide and self-harm material, failed to address the “sextortion” phenomenon, and failed to stamp out online grooming. Trust and safety is expensive and many platforms have rolled back investment. Even services dedicated to children struggle to get it right; looking at you, Roblox.
Longer-term studies link social media use to increased depression and anxiety; some of these studies come from Meta itself, leaked by whistleblowers.
All this adds up to social media platforms not being safe for children, because they are not made for them.
A ban would support parents overwhelmed by navigating online safety resources, providing something concrete to help them set boundaries and reassurance that they are not the only ones setting these rules. It takes a village, as the saying goes.
On the other hand, a ban would let companies off the hook. There is broad awareness that social media platforms make money (a lot of it) by keeping you on the platform to show you adverts and it is this business model that has given rise to the problematic features and algorithms designed to achieve this goal.
The UK’s social media consultation will look at the features of social media that drive addictive or compulsive use such as “infinite scrolling”. Looking under the hood like this is key. We need to get to grips with how features are designed and why, because another bit of tech will be along in a minute that will present similar issues.
So the options here are a ban or making social media safer, but is this actually possible?
Is it possible to make social media safer?
Remember when the EU brought in regulation on political advertising last year, and Google and Meta withdrew political advertising from the region? This indicates it was easier to cut off that part of the business than to comply with regulation demanding more transparency and accountability.
Is it possible to make social media safer without starting from scratch with a different business model? Is that what we should ultimately be encouraging? Social media is not the internet, there are infinite possibilities for connection, innovation and positive experiences. We don’t need to be trapped in this small corner of the web.
What are we trying to ban?
Are we really talking about social media in general, or are we talking about the platforms with staggering levels of harm and non-compliance with existing laws? Are we really talking about X here? Australia’s ban includes YouTube but not YouTube Kids, seemingly sidestepping the issue of screentime and recommender systems. Messaging apps like WhatsApp are not included.
As companies scramble to incorporate AI into their platforms in the most reckless way (Grok, again), the EU and UK began to explore options for blocking whole platforms. This is a slippery slope, of course, and would not pass the test set out in human rights law for restricting freedom of expression (which everyone seems to have forgotten exists): whether a measure is legal, necessary and proportionate.
Film-style classifications that apply risk-based age ratings to platforms could target the worst platforms, support parents in enacting boundaries, and keep the connection benefits. The question remains whether this would amount to a technically implemented ban or guidance.
What do young people think?
Great question. Young people’s voices are often left out of this debate and we need them to feature prominently. When asked, young people want online spaces for them and are often doing their best to navigate an online world not designed for them.
“Young people” are not all the same, and using social media for connection, exploration and information is not a positive experience across the board. Esther Grey, the mother of Brihanna Grey, who was murdered in 2023, supports a ban, writing to Keir Starmer to describe how her daughter’s self-harm and eating disorder were exacerbated by an obsession with TikTok influencers.
Where next?
Ultimately, banning social media lets platforms off the hook and won’t change their design or business model, the core elements which impact people of all ages, not just children. Building solutions that start with the needs of the most vulnerable creates effective solutions for everyone. This approach also avoids a “cliff edge” of children coming to platforms at 16 where nothing has changed and all the harms still exist.
The government needs to enact measures that force companies to change. Certain features considered addictive could be banned, age classifications could be put in place to support parents in setting boundaries, with perhaps a targeted, temporary ban on certain platforms kept in their back pocket as the incentive. The question still remains whether it is actually possible to make platforms safer, and whether companies have both the will and the way.