
AI May Destroy Jobs If Left Unchecked, Says Sadiq Khan

Written by Chetan Sharma · Reviewed by Chetan Sharma · Last Updated Jan 23, 2026

Artificial intelligence is transforming how people access information, but not always for the better. Recent research by the BBC and global partners has revealed that popular AI chatbots often distort news content, raising alarm bells about accuracy, public trust, and the future of journalism. At the same time, policymakers and public figures in the UK are calling for clearer regulatory frameworks as AI’s influence spreads through media, workplaces, and daily life.

AI Chatbots: Helping or Hindering Trustworthy News?

A major study led by the BBC, involving several public service broadcasters worldwide, found that AI assistants frequently misrepresent or mishandle factual news content even when drawing directly from trusted journalism. These systems, including tools from major tech companies, produced answers containing significant issues nearly half the time and showed problems with sourcing, accuracy, and context.

Experts warn this pattern could erode public confidence in both traditional media and new information channels. In the research, common errors ranged from incorrect dates and figures to misquoted statements and misinterpreted events.

Why it matters: As more people, especially younger audiences, turn to AI assistants for news, these inaccuracies could deeply influence public understanding and civic engagement.

Inside the BBC’s AI News Accuracy Test

In a controlled experiment, BBC journalists gave AI platforms 100 real news articles and asked them to summarise and answer questions about the content. The results were striking:

  • Nearly half of all AI responses had significant problems
  • Some tools incorrectly quoted the source or changed factual details
  • Errors ranged from misrepresenting public health guidelines to omitting crucial context altogether

One senior BBC editor highlighted a fundamental concern: if AI tools can’t reliably reflect the facts, they could unintentionally misinform millions of users.

Why AI Misrepresentation Matters for Society

These findings aren’t just technical glitches; they have real-world implications:

  • Public trust in news institutions could weaken
  • Misinformation can spread faster than ever through AI channels
  • People may act on false or distorted information

The problem isn’t confined to the UK. An international study found similar issues across languages and regions, suggesting that the risk of AI-driven misinformation is widespread.

UK’s Struggle With AI Regulation

While the technology evolves quickly, policies governing its use lag behind.

Political Voices Call for Action

UK leaders, including London Mayor Sadiq Khan, have highlighted the potential risks AI poses to jobs and economic well-being as automation expands.

Simultaneously, parliamentary committees have warned that regulators need stronger frameworks to oversee AI’s use across sectors, including finance, media, and public services. Advocates argue that without clear laws, both consumers and institutions could face serious harm.

Why Current Rules Aren’t Enough

Although there’s an ongoing debate about regulation, much of the UK’s approach remains reactive rather than proactive. Critics say that authorities have adopted a “wait-and-see” strategy rather than setting enforceable standards to protect users, safeguard data, and ensure accountability when AI systems fail.
