Either (you genuinely believe) you are 18 (24, 36, does not matter) months away from curing cancer or you’re not.
What would we as outsiders observe if they told their investors that they were 18 months away two years ago and now the cash is running out in 3 months?
Now I think the current iteration of AI is trying to get to the moon by building a better ladder, but what do I know.
The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.
Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.
¹ Logistic, actually, but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.
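(To make the footnote concrete: while x is far below the ceiling K, logistic growth dx/dt = r·x·(1 − x/K) reduces to plain exponential growth dx/dt = r·x, because the (1 − x/K) factor is close to 1. A quick Python sketch of both closed-form curves, with parameter values made up purely for illustration:)

    import math

    # Illustrative parameters (assumptions, not from the thread):
    K = 1_000_000.0   # carrying capacity, the "upper limit"
    x0 = 1.0          # starting value
    r = 1.0           # growth rate

    def logistic(t):
        # Closed-form solution of dx/dt = r*x*(1 - x/K)
        return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

    def exponential(t):
        # Closed-form solution of dx/dt = r*x
        return x0 * math.exp(r * t)

    for t in range(0, 14, 2):
        l, e = logistic(t), exponential(t)
        print(f"t={t:2d}  logistic={l:12.1f}  exponential={e:12.1f}  ratio={l/e:.3f}")

(The ratio stays near 1 until x gets within sight of K, which is the “close enough for the near future” part.)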
Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers or rockets, and LLMs will not get there.
Why is it very likely to do that? We have no evidence to believe this is true at all, and several decades of slow, plodding AI research suggest that real improvement comes incrementally, as in other research areas.
To me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums.