

The Algorithm™
Not joking, there is a not insignificant portion of the population spurred to click videos based on thumbnails like these.
I’m a lonely smut writer in Portugal! Feel free to say hello! :3


I actually just spent the better part of this afternoon doing just that! I messed around with Mint and it basically ran perfectly fine. Literally no issues at all (besides me not understanding how some things worked). All my critical stuff works perfectly well, with the sole exception being a game I run where mods are pretty heavily Windows-based. I did find a decent Linux community around that, though, and they seem to be running things pretty well, too.
I know I shouldn’t dual boot with a partition, but that’s what I’m gonna do to see if I can make it a couple weeks without anything major going wrong. I tried the live boot disk, but at the moment all I have is an external HDD and it makes some things insanely slow, so partitioning is the move for now. I’ll drop Windows in a couple weeks, though.
Edit: I’ll probably also try out some other distros once I’ve got Mint set up as well, just to be sure I’m not missing out on something else that will work. For now, I just want easy to use and easy to learn.
I appreciate the advice! All my looking around so far has me thinking Mint or Bazzite, but I think Mint will end up being what I go with so I actually learn how to troubleshoot in case I need to move to something else in the future.
I’ll look for a free drive and try and test this afternoon!
I would if I could handle significant downtime on my computer. I absolutely cannot have my only device not working for some elements of my life at the moment, so I’m hesitant to just swap and hope. Someone else recommended testing live distros, though, so I may try that for a couple weeks just to see if anything goes wrong.
I unfortunately don’t have the money for a second SSD at the moment. I considered partitioning my drive, but I only have 500GB and it feels like that would be a pretty big issue if I don’t figure things out quickly enough.
A live distro? I’m not actually sure what that is. I’ll do a bit of looking around about it, though!
I’ve heard that as well, but at this point I can’t really risk something going wrong and not working at the moment. I am tempted to give it a shot, but I need to at least make sure I have a fallback if something goes wrong.
I actually super appreciate these videos. I’m absolutely convinced at this point that, regardless of compatibility issues, my next system will be Linux, but I have absolutely bounced off of it the last couple times I tried. Being able to see different people’s initial experiences with different distros feels pretty invaluable to me at this point.
At the moment, I only have a laptop and can’t afford either a new system or for my current system to go down, though, so I’ve been hesitant about making any actual changes. That said, my laptop’s about six years old at this point and starting to really struggle, so I know it’s coming in the next year or so.
EDIT: I found an external HDD to work with! I’m setting it up with Cinnamon now and I’ll spend the next couple weeks trying to treat it like my main driver to see if I can handle any quirks that come with it.


Puritanism.


For your first question, what you’re describing is a problem with education and staffing, not a problem of the tool itself. I’m not suggesting you keep around ‘one old man who hates AI’; my pitch is that you bar the use of AI for the human-level checks.
For your second, yes I saw the part about how news and media are representing AI in healthcare, but I don’t really see how news or media are relevant here. Could you explain this a bit for me?
I don’t intend to gloss over the issues with Generative AI/LLMs. I tried to be specific in separating ML from them in my original comment, where I said LLMs in their public-facing form (ChatGPT, Claude, whatever) aren’t very useful.
The original comment I replied to asked “is “AI” even useful (etc)” but also mentioned LLMs. I was trying to make the point that LLMs aren’t the only type of AI and that others can be employed to great effect. If that was unclear, that’s my bad but that was my intention.
The reason I don’t want to engage with a hypothetical is because I could just as easily counter with “what if it diagnoses at a 100% success rate? What if fear of losing skills results in doctors never wanting to use AI, resulting in more deaths?” Neither hypothetical argument is really very helpful for the discussion. I promise you I’ve thought about this a lot (but again, I’m not an expert, nor am I in the field), but more importantly I have friends finishing doctorates in the bioinformatics field whom I get some insight from, and I’m, at least at this point, convinced of the benefits.

I don’t live in the US. I wouldn’t say it’s a nihilistic comment to suggest fixing the system, though.


I read both articles you linked, but I’m not really seeing how they support your point. The first article seemed to support the idea that healthcare staff would welcome more seamless, user-friendly AI tools in the field and the second discussed biases within tools they selected for cancer diagnoses and a tool they used to reduce those biases. Am I misunderstanding what you’re saying somewhere?
Also, with regard to the reduction in diagnostic accuracy of diagnosticians with AI, I would need to see the specific article to be sure, but if it’s the one that was posted across reddit a few months back, I read through that one as well. It seemed to agree with a similar article about students writing papers with and without the use of ChatGPT (group A writes with it, group B writes without it, and afterwards both are asked to write without the LLM. Group B’s essays were shown to be better. This is a hugely reductive description of the experiment, but it gets the idea across). Again, it makes sense that if you use a tool to facilitate an action, that tool is replacing that skill and you get “rusty”. It does not mean that the existence of a tool would reduce skill in those who do not use it, though. My suggestion of using it as a screening tool wouldn’t affect the diagnostician’s skill unless they also used it, which sorta defeats the purpose of them being a human check on the process, post-screening flag.
I can’t speak to your other points as they’re hypothetical. Obviously, I wouldn’t advocate for an inaccurate tool that causes an already overworked field to take on more work. I’m only suggesting that ML is a tool that has use-cases and can be used to supplement current processes to improve outcomes. These tools can be, and are being, improved constantly. If they’re employed thoughtfully, I just think they can be a huge benefit.


Regarding the doctor’s signature thing, it seems a bit premature to say a single flawed study invalidates the entire field and tech, especially when the tech was working as intended in that case and the problem was user error in the study.
And of course, like any tool it should be utilized thoughtfully. Any form of technology directly takes away from the skill previously utilized to get results. Flint and steel took away from the skill of rubbing sticks together. The combustion engine took away from many different professional skills.
Consider that, in this case, we don’t just have to replace diagnosis but could augment it instead. What if every hospital around the world could augment regular medical care with a single machine processing results? Every single check-up could include a quick cancer screening. If the machine flags you as ‘at risk’, a doctor could then see you for human diagnosis and validation. The skill of diagnosis is still needed and utilized, but now everyone can have regular screening instead of overwhelming an already overtaxed healthcare system.
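To make the screen-then-verify idea concrete, here’s a minimal sketch of that workflow. Everything in it is hypothetical (the model, the threshold, the patient data are placeholders, not any real clinical system); the point is only that the ML stage filters broadly with a deliberately low threshold, and a human makes every actual diagnosis.

```python
# Minimal sketch of the "screen, then human-verify" workflow.
# All names and values here are hypothetical placeholders.

def screen_patients(patients, risk_model, threshold=0.2):
    """Flag patients whose risk score meets a deliberately low threshold
    (favoring sensitivity over precision), queuing each flag for a doctor."""
    flagged = []
    for patient in patients:
        score = risk_model(patient)
        if score >= threshold:
            flagged.append((patient, score))  # goes on to human diagnosis
    return flagged

# Toy stand-in for a trained model: a precomputed score lookup.
scores = {"patient_a": 0.05, "patient_b": 0.45, "patient_c": 0.10}
flagged = screen_patients(scores, lambda p: scores[p])
# Only patient_b meets the threshold and is routed to a human check.
```

The design choice doing the work is the low threshold: the screening stage is tuned to over-flag rather than miss cases, since a false positive only costs a doctor’s review while a false negative costs a missed screening.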
Again, all I’m saying is that there are practical, useful use-cases for the technology; they’re just not what we’re doing with it.
Edit: as an afterthought, I’m no expert here. As far as I understand, LLMs are a type of ML, but ML encompasses a way broader category of ‘AI’. I’m mostly against LLMs for just general use like they are currently. I am advocating for ML as a whole, with thoughtful application.


Generative AI in its current, public-facing form? Probably not. It’s sort of an invention-of-the-internet situation. It CAN be used to facilitate learning, share information, and improve lives. Will it be used for that? No.
A friend of mine is training local LLMs to work in tandem for early detection of diseases. I saw a pitch recently about using AI to insulate moderators from the bulk of disturbing imagery (a job that essentially requires people to frequently look at death, CSAM, and violence and SIGNIFICANTLY ruins their mental health). There are plenty of GOOD ways to use it, but it’s a flawed tech that requires people to responsibly build it and responsibly use it, and it’s not being used that way.
Instead it’s being scaled up and pushed into every possible application both to justify the expenses and enrich terrible people, because we as a society incentivize that.
Edit: hugely belated, I misspoke here after checking with my friend. He’s using local models, but they aren’t LLMs. This is why I’m no expert. 😅

Sure, but at some point ya gotta think, “Maybe I should destroy the de-limbing machine,” instead of continuing to put part of your body in there.
(This isn’t a criticism of you or your beliefs, just a jokey perspective.)
My god, I could understand some level of argument with regard to format innovation. But for music to be added to slides? Obscenity. Barbarism.
Holy shit, they’re a poet
Oh damn. Well… ignore my comment then. 😭
You’re 18. It’s somewhere in the early 2000s and you’ve just graduated. You’re soaking in the warm summer night air in your bed watching The Office. The world is so far away and simultaneously rushing at you at the speed of fuck. Your Blink-182 CD loops back again.
Suddenly, a bright pinprick of light bathes your dim room in an eerie blue-white glow. The light begins to grow and you realize it’s undulating, like a fluid unbound by gravity as it roils in the air. You’re too stunned to speak and cover your eyes against the harsh light. Your hairs stand on end and chills run along your skin. Somewhere inside, you associate such luminosity with heat, but the sphere—no, the disc—seems to be consuming the energy in the room, like some kind of ethereal whirlpool.
You gasp as a shadow moves through the shimmer. First a hand, then the upper half of what looks like a torso. The figure cocks their head as they look around the room. You can only make out their silhouette, but… they’re vaguely familiar.
It’s… you! They’re different, a bit more worn down, perhaps, but they’re unmistakably you. After a moment, your breath catches in your throat. They’re older. Your mind, stunned by the absurdity of what has just occurred, finally catches up.
“You’re me… from the future,” you say. The statement immediately sounds stupid. Of course they are. The portal, the older you: what else could be happening? You scramble for a pen and an old school journal at your nightstand. You’ve fantasized about this before. You know what to do. Write down what they say and you’ll be rich. No, you’ll stop some horrible cataclysm. Maybe you’ll keep your true love from leaving!
You turn back to yourself expectantly, anxiety causing your hand to shake on the page. You’re holding your breath. Your lungs burn, but you can hardly bring yourself to care.
The older you looks down at you from the swirling light.
“You are eighteen,” they say with a shit-eating grin. In an instant, the light is gone. Darkness floods your room again as if nothing at all had happened. Outside your window, crickets continue to chirp. Your mind races, generations of genetically perfected pattern recognition searching for meaning in the words until you remember the shitposts about this exact scenario on 4chan.
“Oh go fuck yourself,” you say, tossing aside the journal.
There was an interview I saw recently with Amodei where he said that Anthropic aren’t categorically against autonomous weapons, only that they didn’t think they were ready, seemingly implying they would make mistakes similar to how LLMs hallucinate. A lot of the media coverage around them seemed to imply that they had a higher ethical standard than the others, and I mean… maybe? I guess it could be argued that wanting to minimize collateral damage is more ethical, but regardless, I think it’s important to keep perspective when we see how they act in the coming weeks and months.