As far as I can tell, this product never panned out. It was backed by 132 people for over 150k GBP in 2017. It was called the “Cyclotron Bike”.
Starting off with “we’ve heard your feedback” is something I’ve never heard from an abusive parent?
It’s run well for me. A little hiccup with text entry, but that’s standard.
About to be a lot of “accidental” falls out of windows.
Maybe more apt for me would be, “We don’t need to teach math, because we have calculators.” Like…yeah, maybe a lot of people won’t need the vast amount of domain knowledge that exists in programming, but all this stuff originates from human knowledge. If it breaks, what do you do then?
I think someone else in the thread said good programming is about the architecture (maintainable, scalable, robust, secure). Many LLMs are legit black boxes, and it takes humans to understand what’s coming out, why, and whether it’s valid.
Even if we have a fancy calculator doing things, there still need to be people who do math and can check. I’ve worked more with analytics than LLMs, and more times than I can count, the data was bad. You have to validate before everything else; otherwise it’s garbage in, garbage out.
It sounds like a poignant quote, but it also feels superficial. Like, something a smart person would say to a crowd to make them go, “Ahh!” but it doesn’t hold water for long.
I generally agree. It’ll be interesting to see what happens with models, the datasets behind them (particularly copyright claims), and more localized AI models. There have been tasks where AI greatly helped and sped me up, particularly quick Python scripts to solve a rote problem, along with early / rough documentation.
However, using this output as justification to shed head count is questionable for me because of the further business impacts (succession planning, tribal knowledge, human discussion around creative efforts).
If someone is laying people off specifically to gap fill with AI, they are missing the forest for the trees. Morale impacts whether people want to work somewhere, and I’ve been fortunate enough to enjoy the company of 95% of the people I’ve worked alongside. If our company shed major head count in favor of AI, I would probably have one foot in and one foot out.
This has been my general worry: the tech is not good enough, but it looks convincing to people with no time. People don’t understand you need at least an expert to process the output, and likely a pretty smart person for the inputs. It’s “trust but verify”, like working with a really smart parrot.
Yeah, this phrase makes way more sense within the context of a game or game theory. For me, it goes back to fighting games or sports. People play to win in those settings. The rules are heavily defined, and the players must abide. These other examples are people misusing the phrase.
There was a similar study reported the other day about using fMRI imaging and AI to recreate the “thought content” of someone’s brain. It required training the AI on that specific person’s brain, among other training. It does seem these techniques can work with some specialized models, but yeah, it doesn’t seem like hooking someone’s brain up to this would create a movie of their mind or something.
I think the more dangerous part is “This is step 0,” when this tech would have seemed impossible 10 years ago. Very strange times.
Easy back for me. The original RoA is one of my favorite platform fighters. I’m happy to support Dan and crew for their next venture. I can’t wait till beta opens. :)
What if Mark has been a sentient AI for some time?