YouTube announced it is expanding its AI-powered deepfake detection technology to cover government officials, political candidates, and journalists. The move aims to help these individuals spot videos that falsely use their faces or digital likenesses, and then request removal if the content breaks YouTube's rules.
A Tool to Detect Fake Faces
The new technology detects faces that have been created or altered by AI tools. Such videos can spread fake news and sway public opinion by showing public figures saying or doing things they never did, according to a report by TechCrunch that was also covered by Al Arabiya Business.
YouTube launched this technology last year for creators in the YouTube Partner Program, with about 4 million creators gaining access after early tests.
The technology works much like YouTube's Content ID system, which identifies videos containing copyrighted material.
Protecting Public Discussion
YouTube's VP of Government Affairs and Public Policy, Leslie Miller, said the tool helps protect public discussion at a time when AI is raising the risk of impersonation, especially in politics.
She added that YouTube wants to balance free speech with fighting the risks posed by AI technology that can convincingly replicate public figures' faces and voices.
Not All Content Will Be Removed
YouTube explained that detecting a deepfake does not mean it will automatically be deleted. Each removal request will be reviewed against YouTube's privacy rules, and some videos may be deemed political satire, which is allowed on the platform.
Verify Identity First
To use the new tool, participants in the test must verify their identity by uploading a selfie and a government-issued ID. They can then create a profile that lets them view videos matching their face and request removal of fake content.
Regulating AI Use
YouTube also supports a proposed US bill, the NO FAKES Act, which aims to regulate AI use by prohibiting digital replicas of people's voices or likenesses made without their permission.
Future Expansion
YouTube plans to improve the technology to detect fake voices as well, and to protect famous people and brands from being exploited by AI.
YouTube also said it will continue adding labels that indicate when content is AI-generated; how prominently these labels appear will depend on how sensitive the video's topic is.