• 🍉 Albert 🍉@lemmy.world · 34 points · 15 hours ago

    Which fucking sucks, because AI was actually getting good: it could detect tumours, it could work things out fast, it could recognise images as a tool for the visually impaired…

    But LLMs are none of those things. All they can do is produce text that looks plausible.

    LLMs are an impressive technology, but so far they’re nearly useless and mostly a nuisance.

        • kromem@lemmy.world · 2 points · 3 hours ago

          That’s not…

          sigh

          Ok, so just real quick top level…

          Transformers (what LLMs are) build world models from the training data (Google “Othello-GPT” for associated research).

          This happens because the model has to combine a lot of different pieces of information into one coherent internal representation (what’s called the “latent space”).
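          (Concretely, this is testable: freeze the model and train a tiny probe on its hidden states to see whether the board state can be read back out, Othello-GPT style. A minimal sketch; `model`, `games`, and the sizes here are stand-ins, not the actual research setup:)

          ```python
          # Hypothetical probe sketch: can a board state be decoded from the
          # latent space of a transformer trained only on move sequences?
          import torch
          import torch.nn as nn

          HIDDEN = 512    # assumed hidden size of the toy model
          SQUARES = 64    # 8x8 Othello board
          STATES = 3      # empty / black / white per square

          probe = nn.Linear(HIDDEN, SQUARES * STATES)
          opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
          loss_fn = nn.CrossEntropyLoss()

          for moves, board in games:                  # move tokens, true board labels
              with torch.no_grad():
                  h = model(moves).hidden_states[-1]  # (seq_len, HIDDEN)
              logits = probe(h[-1]).view(SQUARES, STATES)
              loss = loss_fn(logits, board)           # board: (SQUARES,) in {0,1,2}
              opt.zero_grad()
              loss.backward()
              opt.step()

          # If the probe decodes the board far above chance, the model built an
          # internal model of the game purely from text.
          ```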

          This process is medium agnostic. Given text it will do it with text, given photos it will do it with photos, and given both it will do it with both, specifically fitting the intersection of the two together.
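          (Mechanically, that’s because every modality becomes the same kind of vector before the transformer ever sees it. A toy illustration; the sizes and patch layout are made up, not any particular model:)

          ```python
          # Text tokens and image patches projected into one shared embedding
          # space, processed by one stack of transformer blocks.
          import torch
          import torch.nn as nn

          D = 512                                  # shared embedding width
          text_emb = nn.Embedding(32_000, D)       # token id -> vector
          patch_proj = nn.Linear(16 * 16 * 3, D)   # flat 16x16 RGB patch -> vector
          blocks = nn.TransformerEncoder(
              nn.TransformerEncoderLayer(d_model=D, nhead=8, batch_first=True),
              num_layers=4,
          )

          tokens = torch.randint(0, 32_000, (1, 12))   # fake caption
          patches = torch.randn(1, 196, 16 * 16 * 3)   # fake 14x14 patch grid

          seq = torch.cat([text_emb(tokens), patch_proj(patches)], dim=1)
          out = blocks(seq)   # one latent space, shaped jointly by both modalities
          ```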

          The “suitcase full of tools” becomes its own integrated tool where each part influences the others. That’s why you can ask a multimodal model to carve the answer to a text question into an apple and get a picture of it.

          There’s a pretty big difference in the UI/UX of code written by multimodal models vs text-only models, for example, or in the utility of sharing a photo and saying what needs to be changed.
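          (That photo workflow is literally just attaching the image to the request. A quick example using the OpenAI Python SDK’s chat-completions interface; the model name and file path are placeholders:)

          ```python
          # Send a UI screenshot plus an instruction to a multimodal model.
          import base64
          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          with open("ui_screenshot.png", "rb") as f:
              image_b64 = base64.b64encode(f.read()).decode()

          resp = client.chat.completions.create(
              model="gpt-4o",
              messages=[{
                  "role": "user",
                  "content": [
                      {"type": "text",
                       "text": "Move the submit button below the form and fix the contrast."},
                      {"type": "image_url",
                       "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                  ],
              }],
          )
          print(resp.choices[0].message.content)
          ```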

          The idea that an old-school NN would be better than modern multimodal transformers at any even slightly generalized task is… certainly a position. Just not one that seems particularly in touch with reality.