• jsomae@lemmy.ml
    2 days ago

    Machine learning algorithm from 2017, scaled up a few orders of magnitude so that it finally more or less works, then repackaged and sold by marketing teams.
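    (For reference, the 2017 algorithm being alluded to is the transformer from "Attention Is All You Need", whose core operation is scaled dot-product attention: softmax(QKᵀ/√d)·V. A minimal, illustrative numpy sketch — not any particular library's implementation:)

    ```python
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Core transformer operation: softmax(Q @ K.T / sqrt(d_k)) @ V."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
        # Row-wise softmax (shift by max for numerical stability)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V  # each output is a weighted mix of the values

    # Toy self-attention: 3 tokens, 4-dimensional embeddings
    rng = np.random.default_rng(0)
    x = rng.standard_normal((3, 4))
    out = scaled_dot_product_attention(x, x, x)
    print(out.shape)  # (3, 4)
    ```

    (Scaling this up is mostly a matter of more layers, more attention heads, and bigger weight matrices — the operation itself is unchanged, which is the commenter's point.)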

    • SoftestSapphic@lemmy.world
      2 days ago

      Adding weights doesn’t make it a fundamentally different algorithm.

      We have hit a wall where these programs have combed over the totality of the internet and all available datasets and texts in existence.

      There isn’t any more training data to improve with, and these programs have started polluting the internet with bad data that will make them even dumber and less correct in the long run.

      We’re done here until there’s a fundamentally new approach that isn’t repetitive training.

      • outhouseperilous@lemmy.dbzer0.com
        2 days ago

        Okay but have you considered that if we just reduce human intelligence enough, we can still maybe get these things equivalent to human level intelligence, or slightly above?

        We have the technology.

        Also literally all the resources in the world.

      • jsomae@lemmy.ml
        2 days ago

        Transformers were pretty novel in 2017; I don’t know if they were really around before that.

        Anyway, I’m doubtful that a larger corpus is what’s needed at this point. (Though that said, there’s a lot more text remaining in instant messenger chat logs like Discord that probably has yet to be integrated into LLMs. Not sure.) I’m also doubtful that scaling up is going to keep working, but it wouldn’t surprise me that much if it does keep working for a long while. My guess is that there are some small tweaks to be discovered that really improve things a lot but still basically look like repetitive training, as you put it. Who can really say though.