Google will use AI to determine if you're an underage user

The Mountain View tech giant has been under fire recently


Key notes

  • Google is testing AI to restrict adult content for users under 18.
  • YouTube will also apply age filters, rolling out globally in 2026.
  • Google expands parental controls and adds content warnings in Messages.

Google is introducing a new AI-driven feature to help protect younger users across its platforms, including YouTube.

The Mountain View tech giant said that it will start testing a machine learning model that can estimate whether a user is under 18 based on their activity, such as search history, video preferences, and account age.

If the AI predicts a user is underage, it will apply age filters to restrict access to adult content and provide a safer, age-appropriate experience. The feature is part of Google’s wider push on child safety and will be tested in the US this year, with a global rollout expected in 2026.

YouTube will also introduce a feature to filter out adult content for younger viewers, using similar AI technology to predict whether someone is underage and apply content restrictions accordingly.

Google is also working on expanding its parental controls. The “School Time” feature, which was previously available only on smartwatches, will now be available on Android phones and tablets. It will allow parents to limit which apps and features their kids can use during school hours to minimize distractions.

“We are also rolling out a new sensitive content warning feature in Google Messages, this feature is opt-in for adults, managed via Android Settings, and is opt-out for users under 18 years, with parental controls for supervised accounts,” Google says.

Google has recently removed its pledge not to use AI for weapons or surveillance, saying it needs to be involved in important global discussions and government contracts.

The company believes this will help in areas like cybersecurity and biology, but critics worry it goes against the ethical standards Google set in 2018 to avoid harmful uses of AI.
