ehm… cure cancer? I thought “cute cancer”. Sorry bout that
We are closer to making horny chatbots than a superintelligence figuring out a cure for cancer.
Actually, if the latter wins, would that super AI win a Nobel prize?
It would probably go to whoever uses it to find the cure… And to none of the authors who wrote the data that it was trained on
That’s how the Nobel prize always works. The prize goes to whoever manages to cross the finish line, not the thousands of scientists before them who conducted the preliminary research.
To be fair, a better pattern finder could indeed lead to better ways of curing cancer.
Well that is just basicness implied as if it was intelligence. If you cannot work with anyone, do not fucking cry when you are the common problem. Quote me cus I will be quoting myself.
What?
What you trying to say there Skippy?
Skippy? Are you gonna act like being told to go fuck yourself was unwarranted? Suck my dick you want dialogue gurgle it
There’s not a single world where LLMs cure cancer, even if we decided to give the entirety of our energy output and water to a massive server using every GPU ever made to crunch away for months.
Not strictly LLMs, but neural nets are really good at protein folding, something that very much directly helps with understanding cancer, among other things. I know an answer doesn’t magically pop out, but it’s important to recognise the use cases where NNs actually work well.
I’m trying to guess what industries might do well if the AI bubble does burst. I imagine there will be huge AI datacenters filled with so-called “GPUs” that can no longer even do graphics. They don’t even do floating point calculations anymore, and I’ve heard their integer matrix calculations are lossy. So, basically useless for almost everything other than AI.
One of the few industries that I think might benefit is pharmaceuticals. I think maybe these GPUs can still do protein folding. If so, the pharma industry might suddenly have access to AI resources at pennies on the dollar.
integer calculations are only “lossy” in the sense that they’re integers. There is nothing extra there to lose. Those GPUs have plenty of uses.
But giving all the resources to LLMs slows/prevents those useful applications of AI.
which fucking sucks, because AI was actually getting good: it could detect tumours, it could figure things out fast, it could recognise images as a tool for the visually impaired…
But LLMs are none of those things. All they can do is look like text.
LLMs are an impressive technology, but so far, nearly useless and mostly a nuisance.
Multimodal LLMs are definitely a thing, though.
yeah, but it’s better to use the right tool for the job than to throw a suitcase full of tools at a problem
That’s not…
sigh
Ok, so just real quick top level…
Transformers (what LLMs are) build world models from the training data (Google “Othello-GPT” for associated research).
This happens because the model needs to combine a lot of different pieces of information in a coherent way, inside what’s called the “latent space”.
This process is medium agnostic. If given text it will do it with text, if given photos it will do it with photos, and if given both it will do it with both and specifically fitting the intersection of both together.
The “suitcase full of tools” becomes its own integrated tool where each part influences the others. That’s why you can ask a multimodal model for the answer to a text question carved into an apple and get a picture of it.
There’s a pretty big difference in the UI/UX of code written by multimodal models vs text-only models, for example, or in the utility of sharing a photo and saying what needs to be changed.
The idea that an old school NN would be better at any slightly generalized situation over modern multimodal transformers is… certainly a position. Just not one that seems particularly in touch with reality.
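The shared-latent-space idea above can be sketched in a few lines. Everything here is a toy illustration, not any real model’s architecture: the “encoders” are just random linear projections standing in for learned ones, and the fusion step is deliberately the simplest one possible.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8

# Toy "encoders": random linear maps standing in for trained networks.
# Text features are 16-dim, image features are 32-dim, but both project
# into the same 8-dim latent space.
W_text = rng.normal(size=(LATENT_DIM, 16))
W_image = rng.normal(size=(LATENT_DIM, 32))

def encode_text(x: np.ndarray) -> np.ndarray:
    return W_text @ x

def encode_image(x: np.ndarray) -> np.ndarray:
    return W_image @ x

# Both modalities land in the same space, so they can be compared or
# combined downstream -- the "integrated tool" point from the comment.
t = encode_text(rng.normal(size=16))
i = encode_image(rng.normal(size=32))
fused = t + i  # trivially simple fusion; real models use cross-attention
print(fused.shape)
```

The point is only that once both modalities live in one vector space, a single downstream model can operate on their intersection rather than on two disconnected tools.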
And it’s clear we’re nowhere near achieving true AI, because those chasing it have made no moves to define the rights of an artificial intelligence.
Which means that either they know they’ll never achieve one by following the current path, or that they’re evil sociopaths who are comfortable enslaving a sentient being for profit.
It’s DEFINITELY both.
they’re evil sociopaths who are comfortable enslaving a sentient being for profit.
i mean, look what is happening in the united states. that would be completely unsurprising to happen here.
They sure do cure horny though.
There are tons of AIs besides the chat bots. There are definitely cancer hunter seekers.
Good thing I said “LLM” not “AI”.
No money in curing cancer with an LLM. Heaps of money in taking advantage of increasingly alienated and repressed people.
There’s loads of money in curing cancer. For one you can sell the cure for cancer to people with cancer.
What a weird take, researchers use AI already? Some researchers even research things that, gasp, are not monetisable right away!
You could sell the cure for a fortune. Imagine something that can reliably cure late stage cancers. You could charge a million for the treatment, easily.
Yes, selling the actual cure would be profitable… but an LLM would only ever provide the text for synthesizing it, none of the extensive testing, licensing, manufacturing, etc. An existing pharmaceutical company would have to believe the LLM and then front the costs for development, testing, and manufacture, which constitute a large proportion of the cost of bringing a treatment to market. Burning compute time on that is a waste of resources, especially when fleecing horny losers is available right now. It is just business.
and LLMs hallucinate a lot of shit they “know” nothing about. a big pharma company spending millions of dollars on an LLM hallucination would crack me the fuck up were it not such a serious disease.
Right, that is why I originally said there is no money in a cancer cure invented by LLM. It’s just not a serious possibility.
Well, guess I know what I’m using ASI for.
But how else would it find the hard lump on your testicles?
Oh, that’s why they are restricting “organic” porn, to sell AI porn. Damn.
Either (you genuinely believe) you are 18 (24, 36, does not matter) months away from curing cancer or you’re not.
What would we as outsiders observe if they told their investors that they were 18 months away two years ago and now the cash is running out in 3 months?
Now I think the current iteration of AI is trying to get to the moon by building a better ladder, but what do I know.
The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.
Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.
¹ Logistic, actually, but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.
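The footnote’s point, that a logistic curve is indistinguishable from an exponential one early on and only later flattens toward its cap, is easy to check numerically. The growth rate, cap, and midpoint below are arbitrary illustrative numbers:

```python
import math

def logistic(t: float, cap: float = 1000.0, rate: float = 0.5,
             midpoint: float = 20.0) -> float:
    # Standard logistic curve: grows roughly exponentially while far
    # below the cap, then saturates as it approaches it.
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t: float, rate: float = 0.5) -> float:
    # A pure exponential matched to the logistic curve's starting value.
    return logistic(0) * math.exp(rate * t)

# Early on (far below the cap) the two curves nearly coincide...
early_ratio = logistic(5) / exponential(5)
# ...but past the midpoint the logistic flattens while the exponential
# keeps exploding.
late_ratio = logistic(40) / exponential(40)
print(early_ratio, late_ratio)
```

With these numbers the early ratio is within about 0.1% of 1, while by t = 40 the logistic value is a tiny fraction of the exponential one, which is exactly the “close enough for the near future” caveat.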
Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers or rockets, and LLMs will not get there.
why is it very likely to do that? we have no evidence to believe this is true at all, and several decades of slow, plodding ai research suggest real improvement comes incrementally, like in other research areas.
to me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums
What about an AI naughty nurse that does both?
You can use AI to fulfill your fantasies! for example: having healthcare (if you’re not American, this joke does not apply)
i’d rather lucid dream i have healthcare my friend. then i can use my care bear stare laser beam to apply vengeance to incompetent healthcare providers and administrators such that they will never know what it is like to satiate their hunger again. then ride a giant flying tardigrade named Hairy Terry off into the sunset.
LLMs only let me imagine it, not (from my perception) experience it. and remember, no crimes without Hairy Terry on lookout
Porn can pay your way through school, so to speak
every once in a while i think about selling feet pics (mine are recognizable) but i don’t think people want pictures of my gender’s feet
You can’t solve cancer because cancer is not a problem. It’s a solution. Humans are the problem.
careful not to cut yourself on that edge.
My dog died of cancer.
Sorry for your loss.
The kind of slop not even AI spits out
Wow, you’re really edgy and cool, aren’t you? How does it feel to be a 13 year old nihilist that just realized that people are bad sometimes?
You should lead by example.
I’m procrastinating. It’ll take me half a century or so.
False dichotomy.
People using AI to cure cancer are not the people implementing weird chatbots. Doing one has zero effect on the other.
Pretty sure this is a direct dig against Sam Altman specifically who is making huge claims despite no evidence that they’re making progress on AGI.
The people actually using AI to cure cancer were probably doing it before OpenAI (remember when we called it Machine Learning?) and haven’t been going to the media and Musking the completion date
Yeah, the people actually making progress were doing it before Sam Altman. RFdiffusion was made by the same people who released Rosetta@home 20 years ago
Musking the completion date
🤣🤣🤣
Well, current AI seems incredibly good as a basic assistant. Just ask it for sources, it will only piss you off 10% of the time.
It’s certainly not useless, I’m not a blanket hater
Yeah it’s not a very good one though because it’s predicated on the idea that a company can’t make more than one product.
I also don’t believe OpenAI is anywhere close to AGI, but obviously they can try to make AGI and make horny chatbots at the same time.
No, it’s still a pretty good dig. Their inflated valuation hinges on AGI but the only news they actually provide is that they’re going to let subscribers fuck their chatbots for $200/month (or whatever it costs)
It couldn’t be more obvious that they’re grasping at straws
It’s making fun of things like: “ChatGPT boss predicts when AI could cure cancer”.
sam altman and openai just announced they are allowing erotica for “verified users”
it’s only a matter of time before they allow full-blown pornographic content, the only thing is that you have to verify your ID. so, openai and the “gubment” will know all the depraved shit you will ask for, and i guarantee it will be used against you or others at some point.
it’ll either become extremely addictive for the isolated who want the HER experience, or it will be used to undermine political dissidents and anti-fascist users.
despite what people think, openai does in fact hand over data to the authorities (immediate government representatives) and that information is saved and flagged if they deem it necessary.
basically if you say anything to chatgpt, you can assume at some point it will be shared with law enforcement/government/government-adjacent surveillance corporations like palantir.
they used to say they would refuse to make this type of content, knowing full well the implications of what might happen if they did. now due to “public demand” they are folding.
my advice: get a dumb phone, a digital camera, and a laptop to still have access to the internet and tools. reduce your physical ability to access the internet so readily. it’s saturated with AI, deep fakes, agents, and astroturfing bots that want you plugged in 100% of the time so they can further your addictions, manipulate you, and extract as much data from you as possible.
Fully automated luxury kompromat
basically if you say anything to chatgpt, you can assume at some point it will be shared with law enforcement/government/government-adjacent surveillance corporations like palantir.
That’s why I have all my private chats with DeepSeek.
I’m an adult, there’s no reason I can’t have the bot talk dirty to me. That’s a lot of text for essentially saying you wish the censorship stayed.
Surveillance state and data extraction are real issues that need to be tackled at the root (which isn’t AI).
That is absolutely not what he is saying. He is saying that governments across the world are starting to crack down on anything they deem unsocial behavior, and the companies that provide you with those services are 100% willing to sell you out when asked to.
I should be allowed to buy crack cocaine or a prostitute since it is no one’s business what i do in my free time. Unfortunately Uncle Sam disagrees, so it’s in my best interest to not pay for those services with a credit card that can be traced right back to me.
No, the thought here is that they’re going to hit the holy Grail
A super intelligent being would be able to cure cancer from first principles, just like anything else. It would understand the laws of reality so well it would be like magic, coming up with wonders we might never understand
That’s the idea anyways. A digital deity
Yes, AI already helps in oncology research and has for years and years, probably decades.
Think about all of the erotic chatbots that those oncologist phds could have created instead.
and then they could get together with boston dynamics and we’ve got
I mean, at least they’d be smart and horny, right?
My favorite kind of phd
Pound HarDer?
Probably High on Drugs
ProbablyPretty
Maybe that’s what they have been doing all this time: every time someone new comes in thinking they are infallible and will solve the issues, they see the amazing anatomically correct sex chats the scientists have made and get sucked into a world of nonstop orgasms.
This is the most sane take in the whole post.
You’re getting downvoted because of how you put it. Most people do not understand the difference between AI used for research (like protein sequencing) and LLMs.
Also, the people making LLMs are not making protein sequencers.
No, OP is about how OpenAI said they were releasing a chatbot with PhD level intelligence about half a year ago (or was it a year ago?) and now they are saying that they’ll make horny chats for verified adults (i.e. paying customers only).
What happened to the PhD level intelligence Sam?! Where is it?
I agree, for most people ‘AI’ is ChatGPT and their perception of the success of AI is based on social media vibes and karma farming hot takes, not a critical academic examination of the state of the field.
I’m not remotely worried about the opinions of random Internet people, many of which are literally children just dogpiling on a negative comment.
Reasonable people understand my point and I don’t care enough about the opinions of idiots to couch my language for their benefit.
You’re my role model for the day
Ah I see the misunderstanding. Government pivoting is the problem.
NIH blood cancer research was defunded a few months ago, while around the same time the government announced they will be building $500 billion worth of datacenters for LLMs.
“If LLM becomes AGI we won’t need the image-recognition linear algebra knowledge anymore, obviously.”
Researchers are still good and appreciated no matter what annoying company is deploying their work.
Exactly. Gen AI is a very large field.