• agamemnonymous@sh.itjust.works · 4 hours ago

    The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.

    Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.

    ¹ Logistic, actually, but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.
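    A quick numerical sketch of that footnote (the constants K, r, and t0 here are arbitrary, chosen only for illustration): a logistic curve K / (1 + e^(-r(t - t0))) is nearly indistinguishable from the matching exponential K · e^(r(t - t0)) while it is still far below its ceiling K.

    ```python
    import math

    # Arbitrary illustrative constants: ceiling K, growth rate r, midpoint t0.
    K, r, t0 = 1000.0, 0.5, 20.0

    def logistic(t):
        # Logistic curve: saturates at K as t grows large.
        return K / (1.0 + math.exp(-r * (t - t0)))

    def early_exponential(t):
        # The exponential that the logistic curve tracks well below its ceiling.
        return K * math.exp(r * (t - t0))

    # Early on (t well below t0), the two are nearly identical.
    for t in range(0, 16, 5):
        print(f"t={t:2d}  logistic={logistic(t):8.3f}  exponential={early_exponential(t):8.3f}")
    ```

    The divergence only becomes visible as the curve approaches its upper limit, which is the footnote’s point: until then, “exponential” is a fair description.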

    • SuperNerd@programming.dev · 3 hours ago

      Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers, let alone rockets, and LLMs will not get there.

    • dreugeworst@lemmy.ml · 3 hours ago

      why is it very likely to do that? we have no evidence to believe this is true at all, and several decades of slow, plodding ai research suggest that real improvement comes incrementally, as in other research areas.

      to me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums