• Jeena@jemmy.jeena.net · 8 months ago

    That’s good, but soon every video will be partially AI because it’ll be built into the tools, just like every photo out there is retouched with Lightroom/Photoshop.

  • redcalcium@lemmy.institute · 8 months ago

    Creators must disclose content that:

    - Makes a real person appear to say or do something they didn’t do
    - Alters footage of a real event or place
    - Generates a realistic-looking scene that didn’t actually occur

    So they want deepfakes to be clearly labeled, but if the entire video was scripted by ChatGPT, the AI label isn’t required?

    • Uvine_Umbra@discuss.tchncs.de · edited · 8 months ago

      Harder, yes, but with multiple generations of people being trained to question every link and image on screen? Not necessarily impossible.

      People will report this for sure if they feel confident.

      There will definitely be false reports, though.

  • AmidFuror@fedia.io · 8 months ago

    Will this apply to advertisers, too? They don’t even block outright scams, so probably not. Money absolves all sins.

      • AmidFuror@fedia.io · 8 months ago

        My point was that ads are a big part of the typical user’s experience, and it’s hypocritical to require AI disclosure from creators but not apply the same standard to paid content.