That’s good, but soon every video will be partially AI because it’ll be built into the tools. Just like every photo out there is retouched with Lightroom/Photoshop.
Creators must disclose content that:
Makes a real person appear to say or do something they didn’t do
Alters footage of a real event or place
Generates a realistic-looking scene that didn’t actually occur
So they want deepfakes to be clearly labeled, but if the entire video was scripted by ChatGPT, no AI label is required?
Generates a realistic-looking scene that didn’t actually occur
Doesn’t this describe, like, every mainstream live-action film or television show?
Technically, yes… but if it’s in a movie/show, you already know it’s fiction.
Bold of you to assume that everyone knows movies and shows aren’t real.
That’s a win, but it would need to be enforced… which is harder to do.
Harder, but with multiple generations of people being trained to question every link and image on screen? Not necessarily impossible.
People will report this for sure if they feel confident.
There will definitely be false flags, though.
Will this apply to advertisers, too? They don’t block outright scams, so probably not. Money absolves all sins.
Your YouTube is not working optimally if you’re seeing ads there.
My point was that ads are a big part of the typical user’s experience, and it’s hypocritical to require disclosure for AI content but not apply the same standard to paid content.