Scientists have built a framework that gives generative AI systems like DALL·E 3 and Stable Diffusion a major speed boost by distilling them into smaller models, without compromising the quality of their output.
More mediocre images for everyone!
While I think the realism of some models is fantastic and the flexibility of others is great, it's starting to feel like we're reaching a plateau on quality. Most of the white papers I've seen posted lately are about speed, or about some alternate way of doing what ControlNet or inpainting can already do.
Well, when it’s fast enough you can do it in real time. How about making old games look like they looked to you as a child?
There's way more to a game's look than textures, though. Arguably, ray tracing will have a greater impact. Not to mention that for retro games you could just generate the textures beforehand; no need to do it in real time.
I meant putting the whole image through AI, not just the textures. Tell it how you want it to look, and suddenly a grizzled old Mario is jumping on a realistic turtle with blood splattering everywhere.
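For what it's worth, the building blocks for that already exist. A distilled model like SD-Turbo can restyle an image in a couple of denoising steps, so a frame-by-frame loop is at least conceivable. Here's a minimal sketch using the diffusers library; the model choice, prompt, strength, and frame handling are all illustrative assumptions, not a claim of actual real-time performance:

```python
# Sketch of the "whole frame through AI" idea: restyle each game frame with a
# distilled, few-step Stable Diffusion image-to-image pipeline. Assumes the
# diffusers library and the stabilityai/sd-turbo checkpoint; the prompt and
# file names are hypothetical placeholders.
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
).to("cuda")

def restyle_frame(frame: Image.Image, prompt: str) -> Image.Image:
    # SD-Turbo is distilled to need very few steps. For image-to-image,
    # strength * num_inference_steps must be >= 1, hence 2 steps at 0.5.
    return pipe(
        prompt=prompt,
        image=frame.resize((512, 512)),
        num_inference_steps=2,
        strength=0.5,
        guidance_scale=0.0,  # Turbo-style models are run without CFG
    ).images[0]

# Hypothetical usage on one captured frame:
frame = Image.open("mario_frame.png").convert("RGB")
out = restyle_frame(frame, "gritty photorealistic plumber, dramatic lighting")
out.save("mario_frame_restyled.png")
```

Even at two steps per frame you'd be fighting for playable frame rates, and keeping the style consistent across frames is its own unsolved problem, but it shows roughly what the pipeline would look like.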