
Meta Expands AI Age Detection Across Facebook and Instagram

Written by Kelvin Chan · Last Updated May 6, 2026

Meta is expanding its AI-powered age detection systems across Facebook and Instagram, introducing a controversial new method that analyzes visual signals like height and bone structure to determine whether users are underage. The move comes as the company faces increasing legal and regulatory pressure over child safety on its platforms.

Meta Is Moving Beyond Birthdate-Based Verification

For years, Meta largely relied on self-reported birthdays to determine a user’s age. That approach has proven ineffective, especially as younger users increasingly bypass restrictions by entering false birthdates during sign-up.

The company’s new AI system instead combines multiple signals across a user’s account. These include:

  • visual analysis of uploaded photos and videos
  • text-based clues in captions, comments, bios, and posts
  • behavioral patterns and interaction history

Meta says the AI looks for “general themes and visual cues” such as height or bone structure to estimate a person’s approximate age. The company emphasized that this is “not facial recognition,” saying the system does not identify specific individuals.

The Rollout Comes Amid Heavy Regulatory Pressure

The timing of the launch is significant. Just days before the announcement, the European Commission said Meta may be violating the EU’s Digital Services Act by failing to prevent children under 13 from accessing Facebook and Instagram. Regulators stated that Meta lacked effective age verification systems and failed to adequately remove underage accounts.

Meta is also facing growing scrutiny in the United States and Australia over teen safety, online addiction concerns, and harmful content exposure involving minors. In one recent case, a New Mexico jury reportedly ordered Meta to pay $375 million related to allegations around child safety failures.

Accounts Flagged by AI Could Be Removed Automatically

If Meta’s systems determine that a user is likely under 13, the account may be deactivated immediately. Users would then need to verify their age through identification documents or additional age-check systems to restore access.

The company says the technology is currently active in select countries, including the United States, ahead of a broader international rollout.

Teen Accounts Are Becoming Central to Meta’s Safety Strategy

Meta has already been aggressively expanding its “Teen Accounts” system across Instagram, Facebook, and Messenger. These accounts apply stricter protections automatically, including:

  1. private-by-default settings
  2. restricted messaging from strangers
  3. content limitations
  4. livestream restrictions for younger users

The company says its AI systems have already moved “hundreds of millions” of teen users into protected account settings, even when those users registered as adults.

AI Age Detection Could Become Standard Across Social Media

Meta’s latest move reflects a larger shift happening across the tech industry. Governments are increasingly demanding stronger age verification systems, while platforms are trying to avoid requiring full ID verification for every user.

That has created a middle ground where AI-based “age estimation” systems are becoming more common. Companies like Yoti and k-ID already offer similar technology that estimates age through visual analysis rather than identity matching.

The larger question now is whether users and regulators will accept AI systems analyzing physical traits like bone structure as part of routine platform moderation.
