When Adobe Inc. released its Firefly image-generating software last year, the company said the artificial intelligence model was trained mainly on Adobe Stock, its database of hundreds of millions of licensed images. Firefly, Adobe said, was a “commercially safe” alternative to competitors like Midjourney, which learned by scraping pictures from across the internet.

But behind the scenes, Adobe was also relying in part on AI-generated content to train Firefly, including images from those same AI rivals. In numerous presentations and public posts about how Firefly is safer than the competition because of its training data, Adobe never made clear that its model actually used images from some of those competitors.

    • General_Effort@lemmy.world · 8 months ago

      It doesn’t matter how the image was made. What matters is what the image looks like and how it affects the model during training.

      • Even_Adder@lemmy.dbzer0.com · 8 months ago

        That’s what I’m saying. Synthetic images can make your model’s output look better, but if you’re aiming for “realistic” output, synthetic images are fundamentally not real images, and too many of them will bias your model in a slightly different direction.