“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”
Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.
I’ve been working on an internal project for my job - a quarterly report on the most bleeding-edge use cases of AI - and what’s been achieved is genuinely impressive.
So why is the AI at the top end amazing yet everything we use is a piece of literal shit?
The answer is the chatbot. If you have the technical nous to program machine learning tools, they can accomplish truly stunning things at speeds not seen before.
If you don’t know how to do, for example, a Fourier transform, you lack the skills to use the tools effectively. That’s no one’s fault - not everyone needs that knowledge - but it does explain the gap between promise and delivery. AI can only help you do faster what you already know how to do.
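Roughly the kind of thing I mean, as a quick NumPy sketch (toy signal, purely for illustration): using an FFT to pull the dominant frequency out of a noisy signal.

```python
import numpy as np

# Toy signal, purely for illustration: a 5 Hz sine wave buried in noise,
# sampled at 100 Hz for 10 seconds.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

# Fourier transform: move from the time domain to the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# The peak of the magnitude spectrum is the dominant frequency (~5 Hz here).
dominant = freqs[np.argmax(np.abs(spectrum))]
print(f"Dominant frequency: {dominant:.1f} Hz")
```

If you don’t already know why you’d reach for a transform like this, an AI assistant suggesting one won’t get you much further.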
Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.
Exactly - I find AI tools very useful and they save me quite a bit of time, but they’re still tools. Better at some things than others, but the bottom line is that they’re dependent on the person using them. Plus the more limited the problem scope, the better they can be.
Yes, but the problem is that while a lot of these AI tools are very easy to use, the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task, pick an AI tool to do it, and call it done - but the result is bad and they can’t tell.
True, though this applies to most tools, no? For instance, I’m forced to sit through horrible presentations because someone was given a task to do, they created a PowerPoint (badly) and gave a presentation (badly). I don’t know if this is inherently a problem with AI…
LLMs could be useful for translation between programming languages. I recently asked one to generate server code given client code in a different language, and the LLM-generated code was spot on!
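Something along those lines looks roughly like this (a sketch with the openai Python client; the model name and the embedded snippet are just placeholders for illustration, not the actual code involved):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder client-side snippet in one language (here TypeScript) to translate.
client_code = """
async function fetchUser(id: string) {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
"""

prompt = (
    "Here is TypeScript client code that calls a REST endpoint.\n"
    "Write the matching server-side handler in Go, keeping the same route "
    "and error behaviour:\n\n" + client_code
)

# Model name is a placeholder; any recent chat model is called the same way.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The useful part is that you already know what the server side should look like, so you can check the output.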
I remain skeptical of using solely LLMs for this, but it might be relevant: DARPA is looking into their usage for C to Rust translation. See the TRACTOR program.
For coding it’s also useful for doing the menial grunt work that’s easy but just takes time.
You’re not going to replace a senior dev with it, of course, but it’s a great tool.
My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.
Just that you call an LLM “AI” shows how unqualified you are to comment on the “successes”.
What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. Anyone with the equivalent of undergrad statistics, calculus, and algebra can read them; you don’t need a qualification, you can just Google each term you’re unfamiliar with.
While I understand your objection to the nomenclature, in this particular context all major AI-production houses, including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA), count LLMs as part of their AI collateral.
The mechanism of machine learning based on training data, as used by LLMs, is at its core statistics without contextual understanding; the output is therefore only statistically predictable, not reliable. Labeling this as “AI” is misleading at best, and in practice directly undermines democracy and freedom, because the impressively intelligent-looking output leads naive people to believe the software knows what it is talking about.
People who condone the use of the term “AI” for this kind of statistical approach are naive at best, snake oil vendors, or outright enemies of humanity.
Not this again… LLMs are a subset of ML, which is a subset of AI.
AI is very very broad and all of ML fits into it.
No, and if you label statistics as AI you contribute to the destruction of civil rights by lying to people.
This is the issue with current public discourse, though. AI has become shorthand for the current GenAI hype cycle, meaning that for many people AI has effectively become a subset of ML.
LLMs are a type of machine learning. Input is broken into tokens, which are then fed through a type of neural network called a transformer model.
The models are trained with a process known as deep learning, which involves probabilistic analysis of unstructured data and eventually enables the model to recognize distinctions between pieces of content.
That’s like textbook machine learning. What you said about interpreting sentiment isn’t wrong, but it does so with machine learning algorithms.
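To make that concrete, here’s a minimal sketch of that exact pipeline - text in, tokens, a transformer forward pass, scores for the next token out - assuming the Hugging Face transformers library and the small gpt2 checkpoint, chosen purely for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small checkpoint purely for illustration; any causal LM works the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 1. Input is broken into tokens (integer IDs into the model's vocabulary).
inputs = tokenizer("LLMs are a type of machine", return_tensors="pt")
print(inputs["input_ids"])

# 2. The token IDs are fed through the transformer network.
with torch.no_grad():
    outputs = model(**inputs)

# 3. The output is a score for every vocabulary item at each position;
#    the highest-scoring one is the model's guess for the next token.
next_token_id = outputs.logits[0, -1].argmax().item()
print(tokenizer.decode([next_token_id]))
```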
I’m a researcher in ML and LLMs absolutely fall under ML. Learning in the term “Machine Learning” just means fitting the parameters of a model, hence just an optimization problem. In the case of an LLM this means fitting parameters of the transformer.
A model doesn’t have to be intelligent to fall under the umbrella of ML. Linear least squares is considered ML; in fact, it’s probably the first thing you’ll do if you take an ML course at a university. Decision trees, nearest neighbor classifiers, and linear models all are machine learning models, despite the fact that nobody would consider them to be intelligent.
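To illustrate, “learning” here really is just parameter fitting - a minimal sketch (NumPy, made-up data purely for illustration) of linear least squares posed as exactly that kind of optimization problem:

```python
import numpy as np

# Made-up data purely for illustration: y = 3x + 1 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3 * x + 1 + rng.normal(scale=0.5, size=100)

# "Learning" = fitting the parameters (slope, intercept) that minimize
# the squared error -- an optimization problem, nothing more.
X = np.column_stack([x, np.ones_like(x)])
params, *_ = np.linalg.lstsq(X, y, rcond=None)
slope, intercept = params
print(f"learned slope={slope:.2f}, intercept={intercept:.2f}")
```

An LLM is the same idea scaled up: far more parameters, a fancier model, but still fitting parameters to minimize a loss.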
LLMs are deep learning models built on multi-head attention/transformer layers. They are absolutely machine learning, as they use a blend of supervised and unsupervised training (plus some reinforcement learning in recent developments like DeepSeek).
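For the curious, a multi-head attention layer is just an ordinary trainable module. A minimal sketch using PyTorch’s built-in nn.MultiheadAttention, with random tensors purely for illustration:

```python
import torch
import torch.nn as nn

# Toy dimensions purely for illustration: a batch of 2 sequences,
# 8 tokens each, 64-dimensional embeddings, 4 attention heads.
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
tokens = torch.randn(2, 8, 64)

# Self-attention: each token's output is a learned, weighted mix of all
# tokens in the sequence. The layer's weights are fitted by training,
# just like any other machine learning model's parameters.
output, weights = attn(tokens, tokens, tokens)
print(output.shape)   # torch.Size([2, 8, 64])
print(weights.shape)  # torch.Size([2, 8, 8]) -- averaged over heads
```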