YouTube expands its deepfake detection tool to cover journalists and officials

YouTube has announced that it is expanding its AI-powered impersonation detection tool to a new group of users, including government officials, political candidates, and journalists, with the aim of combating the spread of deepfake videos that exploit the faces of prominent figures.

The tool works by analyzing facial features in uploaded videos and lets the affected person review a flagged video and request its removal through the privacy complaints process, while satire and parody content remains exempt. To use the service, eligible users must verify their identity by presenting a government-issued ID and recording a short video of themselves; the platform says this data will not be used to train its AI models.

Leslie Miller, YouTube’s vice president of government affairs, said the expansion is intended to “protect the credibility of public debate,” given the heightened risks faced by those working in the public sphere. The announcement is part of a broader plan for 2026 that aims to increase transparency around the use of AI, label AI-generated content, and remove harmful synthetic material, in addition to supporting a draft federal law (the NO FAKES Act) that would ensure a rapid response to requests to take down unauthorized content.

Going forward, YouTube intends to roll the tool out to all target groups, explore detection of voice-cloning techniques, and enable individuals to monetize authorized uses of their likeness, along the lines of its Content ID copyright system.