I wouldn’t really say it’s the outside going all the way through us because there’s doors along the way. That would be like saying if you have a front door and a back door, your house is outside.
You jumped into a thread about trump and epstein’s preference for young girls with your own preference for small breasts and to talk about sex you’ve had. No one asked and no one gives a shit about that here, plus it’s weird af to use a thread about pedophiles as a springboard for talking about your own sexual preferences.
If you want to talk about your preferences, I suggest something like chaturbate or a sex-focused community or thread (where the topic isn’t about potential or likely pedophiles).
I think translations should involve a pair of people who both know both languages, with each fluent in a different one. Or a single person fluent in both, but if you don’t know the other language, it can be hard to verify that fluency, and I’m sure it’s not very hard to find people willing to lie about their proficiency to get a job.
Though it’s not really relevant to anyone not speaking that language, even if it’s not review bombing.
Some arcades were actually a bit more manipulative than that in that they’d get harder depending on how long it was since you last put a quarter in.
Mortal Kombat was one. I noticed this pattern on the SNES version of MK3 (can’t remember if the one I had was Ultimate or not): I’d easily win one fight, then get demolished by the next fighter. Then I’d continue, and that same fighter would be easy, only for the one after that to be much more difficult. I didn’t have to put quarters into my SNES, but they just used the same tuning as the arcade machines.
Eventually when I played that game, I was spending much more time on the space invaders minigame lol.
Also, if the CEO of Target decides he really doesn’t like a popular shirt and is able to force everyone to only shop at Target, then he can come a lot closer to snuffing out the existence of that shirt.
“Haha the website runner thinks I’d actually be interested in this”
Yeah, if your reaction to “website wants permission to push notifications to your device” isn’t some mix between mirth and revulsion, I don’t fundamentally understand you.
The water tasted like damp basement! Oddly enough, it was a kinda pleasant flavour.
I mean, even if they had to pay in the event of self-inflicted injury, that first quote is fraud on its own, though I bet the huge payout is also a part of it.
It’s because they are horrible at problem solving and creativity. They are based on word association from training purely on text. The technological singularity will need to innovate on its own so that it can improve the hardware it runs on and its software.
Even though github copilot has impressed me by implementing a 3-file Python script from start to finish such that I barely wrote any code, I had to hold its hand the entire way and give it very specific instructions about every function as we added the pieces one by one to build it up. And even then, it would get parts I failed to specify completely wrong, and it initially implemented things in a very inefficient way.
There are fundamental things that the technological singularity needs that today’s LLMs lack entirely. I think the changes that would be required to get there will also change them from LLMs into something else. The training is a part of it, but fundamentally, LLMs are massive word association engines. Words (or vectors translated to and from words) are their entire world, and they can only describe things with those words because they were trained on other people doing that.
I don’t hate AI or LLMs. As much as it might mess up civilization as we know it, I’d like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay than realize that.
I just think a lot of people are fooled by their conversational capability into thinking they are more than what they are. And because these models are massive, with billions or trillions of weights that the data is encoded into, no one understands how they work well enough to definitively say “this is why it suggested glue as a pizza topping,” which puts the question of whether they approach AGI in a grey zone.
I’ll agree though that it was maybe too much to say they don’t have knowledge. “Having knowledge” is a pretty abstract and hard to define thing itself, though I’m also not sure it directly translates to having intelligence (which is also poorly defined tbf). Like one could argue that encyclopedias have knowledge, but they don’t have intelligence. And I’d argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).
Calling the errors “hallucinations” is kinda misleading because it implies there’s regular real knowledge but false stuff gets mixed in. That’s not how LLMs work.
LLMs are purely about word associations to other words. They’re just massive enough to attach a lot of context to those associations and seem conversational about almost any topic, but there’s no depth to any of it. Where it seems like there is, it’s only because the contexts in the training data got very specific, which is bound to happen when a model is trained on every online conversation its owners (or rather people hired by people hired by its owners) could get their hands on.
All it does is predict, given the set of tokens provided and already predicted (plus a bit of randomness), the most likely token to come next, then repeat until it predicts an “end” token.
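That loop can be sketched in a few lines of Python. To keep it self-contained, a tiny hand-written probability table stands in for the actual model (every name and probability here is made up for illustration; a real LLM conditions on the whole context, not just the last token):

```python
import random

# Toy "model": next-token probabilities keyed by the previous token only.
# A real LLM computes this distribution from the entire context window.
PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "a": {"cat": 0.4, "dog": 0.4, "<end>": 0.2},
    "cat": {"sat": 0.5, "<end>": 0.5},
    "dog": {"ran": 0.5, "<end>": 0.5},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(max_tokens=20, seed=None):
    rng = random.Random(seed)
    tokens = ["<start>"]
    for _ in range(max_tokens):
        dist = PROBS[tokens[-1]]
        # The "bit of randomness": sample from the distribution
        # instead of always taking the single most likely token.
        next_tok = rng.choices(list(dist), weights=list(dist.values()))[0]
        if next_tok == "<end>":
            break  # the model predicted the end-of-sequence token
        tokens.append(next_tok)
    return tokens[1:]

print(" ".join(generate(seed=1)))
```

Real systems add refinements (temperature, top-k/top-p filtering, repetition penalties), but the outer loop really is this shape: sample a token, append it, repeat until “end”.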
Earlier on when using LLMs, I’d ask it about how it did things or why it would fail at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn’t do. Its capabilities don’t actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn’t even have to reflect how it really works.
Those aren’t pens.
Not sure where 1440p would land, but after using my monitor for a while, I was going to upgrade to 4k, then realized I’m not disappointed with my current resolution at all and instead opted for a 1440p ultrawide. Haven’t regretted it at all.
My TV is 4k, but I have no intention of even seriously looking at anything 8k.
Screen specs seem like a mostly solved problem. It would be great if focus could shift to efficiency improvements instead of adding more unnecessary power. Actually, boot time could be way better, too (i.e. get rid of the smart shit running on a weak processor, emphasis on the first part).
Or maybe you get gravel in the same sense that someone could own Jupiter or a star. “You now own all the gravel in that quarry!” But it doesn’t inform the workers of that fact, or the officials who still rely on whatever paperwork was filled out by the agents of the guy who paid them to ensure the quarry belongs to his corporation’s corporation. The whole idea of ownership is pretty abstract in the first place.
Could be that every pill just means that, under the jurisdiction of the entity who made the pills, you are legally allowed to do what the pills claim, though you need to figure out the rest from there, and people from other jurisdictions are able to disagree even if you do figure out the how.
Unless he was as skilled in robotics and engineering as a fish was at climbing trees.
Get a “replace the pin in the grenade before the time runs out” alarm clock. Then, if you sleep in anyways, it won’t be your problem anymore.
I thought you were going to say it was a BBQ sauce spiced with Carolina Reapers. In which case, avoid using heavy amounts in the groin area and sleep with goggles on. Maybe even tie your hands up so you don’t scratch anywhere in your sleep.
Add to the confusion by briefly turning off your camera, changing your shirt, then turning the camera back on.