Meanwhile, we have people making the web worse by not linking to source & giving us images of text instead of proper, accessible, searchable, failure-tolerant text.
- OpenAI Text Crawler
You don’t think the disabled use technology? Or that search engine optimization existed before LLMs? Or that text sticks around when images break?
Lack of accessibility wouldn’t stop LLMs: it could probably process images into text the hard way & waste more energy in the process. That’d be great, right?
A hyphen isn’t a quotation dash.
Are we playing the AI game? Let’s pretend we’re AI. Here’s some fun punctuation:
‒−–—―…:
Beep bip boop.
Yeah, that’s definitely fair. Accessibility is important. It is unfortunate, though, that AI companies abuse accessibility and organization tags to train their LLMs.
See how Stable Diffusion porn uses danbooru tags, and situations like this:
https://youtube.com/watch?v=NEDFUjqA1s8
Decentralized, media-based communities have the rare ability to hide their data from scraping.
I didn’t have the patience to sit through 19 minutes of video, so I tried to read through the transcript. Then I saw the stuttering & weird, verbose fuckery going on there. Copilot, however, summarized the video, which revealed it was about deliberate obfuscation of subtitle files to attempt to thwart scrapers.
This seems hostile to the user, and doesn’t seem to work as intended, so I’m not sure what to think of it. I know people who have trouble sequencing information and rely on transcripts. Good accessibility benefits nondisabled users, too (an additional incentive for it).
Not trying to be overly critical. I’ll have to look into danbooru tags: unfamiliar with those. Thanks.
Patience and nuance are rare virtues in 2025
I’m not sure this is so much virtues becoming rarer as inconvenient demands emerging: a video that could have been an article is a problem of the modern age.
Articles can be read quickly & processed structurally by jumping around sections. Videos, however, can be agonizing, because they resist that sort of processing. Transcripts can alleviate the problem somewhat, but obfuscating them undoes that. And we’ve got things to do.
The video probably would have been an apprenticeship in the 1800s
Normally I would agree, but Twitter and Discord are the sole exceptions. The original sources can get hit by meteors for all I care. No… I hope their datacenters do get hit, with no one in them of course.
It is silly that people think not posting text can somehow stop LLM crawlers.
Agreed.
Not linking to the source, though, because you hate the hosting platform is petty vindictiveness that does more to hurt the uninvolved on accessibility & usability than it does to hurt the platform. To prevent traffic to platforms, linking to alternatives like proxies for those services & web archival snapshots is common practice around here.
So hating the AI hypenukes is ‘old man yelling at cloud’ but only being allowed to grab images of text is “people” making the web worse? Point made with an image of minimal text? from lemmynsfw?
Well goddamn.
Did you notice the alt text? Here’s the markdown:

When that image breaks, the alt text & a broken image icon render in its place, so readers will still understand the message. People using accessibility technology (like screen readers) can now understand the image. Search engines can find the image by its alt text.
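For anyone curious how that works under the hood, here’s a rough sketch of the HTML that markdown image syntax typically renders to. The URL is a made-up placeholder, and the alt wording is the suggestion offered further down this thread, not the post’s actual values.

```html
<!-- roughly what a markdown renderer produces for  -->
<img
  src="https://example.invalid/old-man-yells-at-cloud.png"
  alt="Simpsons meme of an old man yelling at a cloud">
<!-- if the src ever breaks, most browsers fall back to showing the alt text
     (usually beside a broken-image icon), screen readers announce the alt
     text either way, and search engines index the image by it -->
```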
I think griping over inaccessible text & the lack of a link to the real text is more compelling, because it’s a direct choice of the author: it directly impacts the user, the complaint goes directly to the author whose choice impacts the user, and the author has direct control over it & can choose to fix it at any time. There’s a good chance of an immediate remedy.
Griping over AI, however, adds little that isn’t posted frequently around here & is a bit like yelling at clouds: we aren’t about to stop that technology by yelling about it on here. I’m sure it feels good, though. It could feel better with a link & proper text.
Yeah, and I noticed it didn’t describe the image at all, unless one had already seen the image and knew what it was. So for visually impaired users (i.e., one of the main groups who would benefit from alt text) it is insufficient at best.
Griping over AI, however, isn’t adding anything that isn’t posted frequently around here
Specific to the OP, the issue is that those of us who know gen-AI is an enormous piece of shit, with only downsides for things we care about like culture and learning, might feel like we’re going a little crazy in a culture that only seems able to share love for it in public places like work. Even public criticism of it has been limited to economic and ecological harms. I haven’t seen that particular angle much before, and, as someone else posted here, I felt recognized by it.
Yeah, and I noticed it didn’t describe the image at all
How would you state it over the phone? Alt text is a succinct alternative that conveys (accurate/equivalent) meaning in context, much like reading a comment with an image to someone over the phone. If you would have said “Simpsons meme of an old man yelling at a cloud”, then that would also suffice. It doesn’t need to go into elaborate detail.
In those discussions, people often talk about having had enough, losing their minds, and it making people dumber, too. I get that it helps to feel recognized, so would it feel better to broaden the reach of that message for more recognition?
“A screenshot of The Simpsons showing a hand holding a newspaper article featuring a picture of Grandpa Simpson shaking his fist at the sky and scowling, with the headline ‘Old Man Yells At Clouds’”
It doesn’t need to go into elaborate detail.
It depends on how much you care about what someone who needs or wants the alt text needs to know.
so would it feel better to broaden the reach of that message for more recognition?
Absolutely. And, ironically, it’s one of the possible use cases of AI where it might-sort-of-kinda-work-okay-to-help-although-it-needs-work-because-it’s-still-kind-of-sucky.
It depends on how much you care about what someone who needs or wants the alt text needs to know.
The accessibility advocates at WebAIM in the previous link don’t seem to think a verbal depiction (which an algorithm could do) is adequate. They emphasize what an algorithm does poorly: conveying meaning in context.
Their 1st example indicates less is better: they don’t dive into incidental details of the astronaut’s dress, props, hand placement, but merely give her title & name.
They recommend
not include phrases like “image of …” or “graphic of …”, etc
and calling it a screenshot is non-essential to context. The hand holding a newspaper isn’t meaningful in context, either. The headline already states the content of the picture, redundancy is discouraged, and unless the context refers to the picture (it doesn’t), it’s also non-essential to context.
The best alternative text will depend on the context and intended content of the image.
Unless gen-AI mindreads authors, I expect it will have greater difficulty delivering meaning in context than verbal depictions.
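To make that guidance concrete, here’s a hypothetical side-by-side of the two alt wordings proposed earlier in this thread (the URL is again a placeholder):

```html
<!-- over-detailed: names the medium, describes incidental framing, and restates the headline -->
<img src="https://example.invalid/old-man-yells-at-cloud.png"
     alt="A screenshot of The Simpsons showing a hand holding a newspaper article featuring a picture of Grandpa Simpson shaking his fist at the sky and scowling, with the headline 'Old Man Yells At Clouds'">

<!-- succinct, meaning-in-context: what you'd actually say over the phone -->
<img src="https://example.invalid/old-man-yells-at-cloud.png"
     alt="Simpsons meme of an old man yelling at a cloud">
```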
Geez, for someone who ostensibly wants people to use alt text, you’re super picky about it.
Good luck?
If you want to fuss about common industry guidelines, then take it up with them. It’s even in standard guidelines: