It’s literally just a JSON map of per-pixel words used to “encode” the color.
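
For illustration, a guess at what that per-pixel map might look like (the key layout and colour words here are assumptions, not OOP’s actual format):

```python
# Hypothetical per-pixel "word" map of the kind described above; the key
# names and word choices are guesses, not the real format OOP used.
import json

# One colour word per pixel for a tiny 2x2 image.
encoded = {
    "0,0": "crimson",
    "0,1": "crimson",
    "1,0": "navy",
    "1,1": "forest green",
}

raw_bytes = 2 * 2 * 3                  # plain 24-bit RGB: 3 bytes per pixel
json_bytes = len(json.dumps(encoded))  # the JSON "encoding" of the same pixels
print(raw_bytes, json_bytes)           # the JSON map comes out several times larger
```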

The worst part of AI-generated content is that people won’t give new ideas, art, etc. the benefit of the doubt and will just assume it’s slop.

  • altkey (he\him)@lemmy.dbzer0.com · 1 day ago

    Reading the OP and thinking about their misinformed understanding of what they are doing, I came upon an idea I propose to all of you: the almighty Babylonian Compression Algorithm.

    As long as we have every possible (say, 256×256 px) image in the database, we can cut an image down to just a reference to a file in said database.

    It produces a bit-by-bit copy of any image without any compression loss, so it puts OOP’s project to shame. The little, almost non-existent problem is having access to said database, bloated with every existing but also not-yet-existing image. But since OOP’s solution depends on proprietary ChatGPT running on someone else’s server, we are on par there.

    • Armok_the_bunny@lemmy.world · 1 day ago

      Funny enough, that actually wouldn’t be a more efficient compression algorithm: the file reference would, at best, be exactly the same size as the image it points to, because using any fewer bits would lead to duplicate reference locations.
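
      A quick sanity check of that, assuming 24-bit RGB and 256×256 px images (the numbers are only there to make the pigeonhole argument concrete):

      ```python
      # Counting argument: a reference that can distinguish every possible image
      # needs at least as many bits as the image itself.
      bits_per_image = 256 * 256 * 24            # 1,572,864 bits of raw pixel data
      distinct_images = 2 ** bits_per_image      # entries in the "Babylonian" database

      # log2 of the database size is the minimum width of a unique reference.
      bits_per_reference = distinct_images.bit_length() - 1
      print(bits_per_reference == bits_per_image)  # True: the index is as big as the image
      ```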

      • qaz@lemmy.worldOP · 1 day ago

        Funny thing is that it would probably still be more efficient than OOP’s approach, since that stores a word in a JSON map for each pixel.
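
        Rough numbers for a 256×256 image, where the ~15 bytes of JSON per pixel is a pure guess at the key, word, and punctuation overhead:

        ```python
        # Size comparison under the assumptions above; only the JSON overhead is estimated.
        pixels = 256 * 256

        raw_bytes = pixels * 3          # uncompressed 24-bit RGB
        reference_bytes = pixels * 3    # the Babylonian index needs as many bits as the raw image
        json_bytes = pixels * 15        # one word per pixel in a JSON map (rough estimate)

        print(raw_bytes, reference_bytes, json_bytes)  # 196608, 196608, 983040
        ```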