Makers of AI chatbots that put children at risk will face massive fines or even see their services blocked in the UK under law changes to be announced by Keir Starmer on Monday.
Emboldened by Elon Musk’s X stopping its Grok AI tool from creating sexualised images of real people in the UK after public outrage last month, ministers are planning a “crackdown on vile illegal content created by AI”.
With more and more children using chatbots for everything from help with their homework to mental health support, the government said it would “move fast to shut a legal loophole and force all AI chatbot providers to abide by illegal content duties in the Online Safety Act or face the consequences of breaking the law”.
Starmer is also planning to accelerate new restrictions on social media use by children if they are agreed by MPs after a public consultation into a possible under-16 ban. It means that any changes to children’s use of social media, which may include other measures such as restricting infinite scrolling, could happen as soon as this summer.
But the Conservatives dismissed the government’s claim to be acting quickly as “more smoke and mirrors” given the consultation has not yet started.
“Claiming they are taking ‘immediate action’ is simply not credible when their so-called urgent consultation does not even exist,” said Laura Trott, the shadow education secretary. “Labour have repeatedly said they do not have a view on whether under-16s should be prevented from accessing social media. That is not good enough. I am clear that we should stop under-16s accessing these platforms.”
The moves come after the online regulator Ofcom admitted it lacked powers to act against Grok because images and videos created by a chatbot without it searching the internet fall outside the scope of the existing laws unless they amount to pornography. The change to bring AI chatbots under the Online Safety Act could happen within weeks, although the loophole has been known about for more than two years.
“Technology is moving really fast, and the law has got to keep up,” said Starmer. “The action we took on Grok sent a clear message that no platform gets a free pass. Today we are closing loopholes that put children at risk, and laying the groundwork for further action.”
Companies that breach the Online Safety Act can face fines of up to 10% of global revenue, and regulators can apply to the courts to block access to their services in the UK.
If AI chatbots are used as search engines, to produce pornography or in user-to-user contexts, they are already covered by the act. But they can be used to create material that encourages people to self-harm or take their own lives, or even to generate child sexual abuse material, without facing sanction. That is the loophole the government says it wants to close.
The chief executive of the NSPCC, Chris Sherwood, said young people were contacting its helpline reporting harms caused by AI chatbots and that he did not trust tech companies to design them safely.
In one case, a 14-year-old girl who talked to an AI chatbot about her eating habits and body dysmorphia was given inaccurate information. In other cases, the charity has seen “young people who are self-harming even having content served up to them of more self-harming”.
“Social media has produced huge benefits for young people, but lots of harm,” Sherwood said. “AI is going to be that on steroids if we’re not careful.”
OpenAI, the $500bn San Francisco startup behind ChatGPT, one of the UK’s most popular chatbots, and xAI, which makes Grok, were approached for comment.
Since the Californian 16-year-old Adam Raine took his own life after, his family allege, “months of encouragement from ChatGPT”, OpenAI has launched parental controls and is rolling out age-prediction technology to restrict access to potentially harmful content.
The government is also to consult on forcing social media platforms to make it impossible for users to send and receive nude images of children – a practice that is already illegal.
Liz Kendall, the technology secretary, said: “We will not wait to take the action families need, so we will tighten the rules on AI chatbots and we are laying the ground so we can act at pace on the results of the consultation on young people and social media.”
The Molly Rose Foundation, which was set up by the father of 14-year-old Molly Russell, who killed herself after viewing harmful content online, called the steps “a welcome downpayment”. But it called on the prime minister to commit to a new Online Safety Act “that strengthens regulation and makes clear that product safety and children’s wellbeing is the cost of doing business in the UK”.