Artificial intelligence is transforming how people access information, but not always for the better. Recent research by the BBC and global partners has revealed that popular AI chatbots often distort news content, raising alarm bells about accuracy, public trust, and the future of journalism. At the same time, policymakers and public figures in the UK are calling for clearer regulatory frameworks as AI’s influence spreads through media, workplaces, and daily life.
A major study led by the BBC, involving several public service broadcasters worldwide, found that AI assistants frequently misrepresent or mishandle factual news content, even when drawing directly from trusted journalism. These systems, including tools from major tech companies, produced answers with significant issues nearly half the time, showing problems with sourcing, accuracy, and context.
Experts warn this pattern could erode public confidence in both traditional media and new information channels. In the research, common errors ranged from incorrect dates and figures to misquoted statements and misinterpreted events.
Why it matters: As more people, especially younger audiences, turn to AI assistants for news, these inaccuracies could deeply influence public understanding and civic engagement.
In a controlled experiment, BBC journalists gave AI platforms 100 real news articles and asked them to summarise and answer questions about the content. The results were striking.
One senior BBC editor highlighted a fundamental concern: if AI tools can’t reliably reflect the facts, they could unintentionally misinform millions of users.
These findings aren’t just technical glitches; they have real-world implications.
The problem isn’t confined to the UK. An international study found similar issues across languages and regions, suggesting that the risk of AI-driven misinformation is widespread.
While the technology evolves quickly, policies governing its use lag behind.
UK leaders, including London Mayor Sadiq Khan, have highlighted the risks AI poses to jobs and economic well-being as automation expands.
Simultaneously, parliamentary committees have warned that regulators need stronger frameworks to oversee AI’s use across sectors, including finance, media, and public services. Advocates argue that without clear laws, both consumers and institutions could face serious harm.
Despite ongoing debate about regulation, much of the UK’s approach remains reactive rather than proactive. Critics say authorities have adopted a “wait-and-see” strategy instead of setting enforceable standards to protect users, safeguard data, and ensure accountability when AI systems fail.