• toad31@lemmy.cif.su · edited · 11 hours ago

    I’ll never understand how a statistical model for “next most probable pixel” can arrive at shit like this.

    Probably because you have no experience writing or reading code for AI. I doubt you’ve read a single book on AI or taken any classes related to it.

    Why would you think you could understand this? Dunning-Kruger effect?

    • pyre@lemmy.world · 5 hours ago

      “I’ll never understand this”

      “Why would you think you could understand this?”

      maybe you should work on understanding basic English

    • scarabic@lemmy.world · 6 hours ago

      I’ve read enough about how language models work: they build a statistical graph of words from a huge corpus of training input. But no amount of training on pixels adds up to z-index errors like this. Also, wow, your attitude is hostile as hell. I feel sorry for you having to walk around like that. Try not to have a terrible day now, hm?