AI video generation is a real concern. Even in an era when AI videos tend to sport attributes that give the game away — like Coca-Cola’s rapidly transforming semi-truck in last year’s Christmas ad — the technology is often good enough to fool audiences, which is why some platform owners are trying to get ahead of potentially problematic deepfakes.
Today, YouTube is expanding its likeness detection tool to cover politicians, government employees, political candidates, and reporters. The company launched the feature for its YouTube partners last year, but those who fall into these newly protected categories won’t need to be in that program to participate. Much like Content ID, YouTube’s likeness detection looks for facial matches in AI-generated content on the platform, then allows a matched participant to send a takedown request for that specific video.
YouTube says it doesn’t automatically pull all matched content — there are specific carveouts for parody and satire, even of world leaders — but it does look for anything that violates its existing privacy guidelines. Those who qualify for the program will need to verify their identity with Google, though the company states this data is not used to train AI models. YouTube is also using this announcement to call for Congress to pass the NO FAKES Act, which it says “establishes a federal right of publicity and acts as a blueprint for international adoption to ensure technology serves — and never replaces — human creativity.”
Unfortunately, if you aren’t in YouTube’s Partner Program or in one of these supported public-facing roles, likeness detection remains out of reach for the time being.
More on YouTube:
- YouTube for Android TV now shows Premium family plan members at account switcher
- A persistent YouTube bug is placing immovable ads in fullscreen videos
- YouTube test lets AI create a new video using someone else’s Shorts