“A screenshot of The Simpsons showing a hand holding a newspaper article featuring a picture of Grandpa Simpson shaking his fist at the sky and scowling, with the headline ‘Old Man Yells At Clouds’”
It doesn’t need to go into elaborate detail.
It depends on how much you care about what someone who needs or wants the alt text needs to know.
“so would it feel better to broaden the reach of that message for more recognition?”
Absolutely. And, ironically, it’s one of the possible use cases of AI where it might-sort-of-kinda-work-okay-to-help-although-it-needs-work-because-it’s-still-kind-of-sucky.
“It depends on how much you care about what someone who needs or wants the alt text needs to know.”
The accessibility advocates at WebAIM, in the previous link, don’t seem to think a verbal depiction (which an algorithm could produce) is adequate.
They emphasize what an algorithm does poorly: convey meaning in context.
Their first example indicates that less is better: they don’t dive into incidental details of the astronaut’s dress, props, or hand placement, but merely give her title and name.
They recommend that alt text not include phrases like “image of …” or “graphic of …”, etc., and calling it a screenshot is non-essential to the context anyway.
The hand holding the newspaper isn’t meaningful in context, either. The headline already states the content of the picture, and redundancy is discouraged; unless the surrounding text refers to the picture (it doesn’t), describing it is also non-essential to the context.
The best alternative text will depend on the context and intended content of the image.
Unless gen-AI can read authors’ minds, I expect it will have a harder time delivering meaning in context than it does producing verbal depictions.
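To make that trimming concrete, here’s a rough sketch of the before and after in HTML. The file name and the trimmed wording are my own illustration (one possible reading of the guidelines, not the one true alt text):

```html
<!-- Original: leads with the medium ("screenshot") and incidental framing
     (the hand holding the newspaper) -->
<img src="old-man-yells-at-clouds.png"
     alt="A screenshot of The Simpsons showing a hand holding a newspaper
          article featuring a picture of Grandpa Simpson shaking his fist at
          the sky and scowling, with the headline 'Old Man Yells At Clouds'">

<!-- Trimmed: keeps only what carries the meme's meaning in this context;
     the exact wording is a judgment call, not a prescription -->
<img src="old-man-yells-at-clouds.png"
     alt="Grandpa Simpson under the newspaper headline 'Old Man Yells At Clouds'">
```

Even the shorter version is a judgment call; in a thread that’s already about the meme, the headline alone might do.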
Geez, for someone who ostensibly wants people to use alt text, you’re super picky about it.
Good luck?
If you want to fuss about common industry guidelines, then take it up with them. It’s even in standard guidelines: