YouTube threatens to suspend creators who fail to disclose AI-generated videos: Starting next year, YouTubers must indicate whether they’re posting AI-generated videos, such as ones that realistically depict an event that never happened.
What do you want to bet this is just going to end up being another way for them to mass demonetize videos?
I think the reasoning is that if you train AIs on AI-generated data, you can end up in a situation where development stalls and no further improvement happens.
This is the number one reason I don’t trust AI trained on public data. It’s going to be learning from itself indefinitely and basically going inbred.
I’m Mr. Meeseeks, look at me!
Worse than that, it can lead to artifacts similar to what happens when you jpeg a jpeg too much
So just png it once in a while.
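Not from the article, but for anyone curious what “learning from itself” looks like in the most stripped-down case, here’s a toy sketch: the “model” is just a Gaussian fit by mean and standard deviation, and each generation is trained only on samples from the previous one. The sample size (25) and generation count are arbitrary choices for illustration; the point is the compounding loss, much like re-saving a JPEG over and over.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Human-made" training data: one batch of real samples.
real_data = rng.normal(loc=0.0, scale=1.0, size=25)

# "Train" the model: here that just means estimating a mean and std.
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 101):
    # Each new generation trains only on the previous model's own output.
    synthetic = rng.normal(mu, sigma, size=25)
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 20 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

# Typical result: std shrinks toward 0 over the generations, i.e. the "model"
# forgets the tails (the rare, novel stuff) first and ends up repeating itself.
```

Each refit uses only a finite sample of the model’s own output, so the fitted variance can only decay on average; that’s the statistical version of the “inbreeding” point above.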
Unless it’s Elsa pegging Spiderman on YTkids with 10 midrolls for shitty pay-to-win mobile games and 10 million views.
It seems like they’re mostly concerned with people/events being falsely depicted. I think this is a valid concern. Within the next year or two it will be possible to make a video of anyone you want saying and doing whatever you want. Nobody will be able to trust anything they see.
This is the best summary I could come up with:
YouTube is rolling out new rules for AI content, including a requirement that creators reveal whether they’ve used generative artificial intelligence to make realistic-looking videos.
In a blog post Tuesday outlining a number of AI-related policy updates, YouTube said creators who don’t disclose whether they’ve used AI tools to make “altered or synthetic” videos face penalties, including content removal or suspension from the platform’s revenue-sharing program.
“Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform,” Jennifer Flannery O’Connor and Emily Moxley, vice presidents for product management, wrote in the blog post.
Under the latest changes, which will take effect by next year, YouTubers will get new options to indicate whether they’re posting AI-generated videos that, for example, realistically depict an event that never happened or show someone saying or doing something they didn’t actually do.
The platform is also deploying AI to root out content that breaks its rules, and the company said the technology has helped detect “novel forms of abuse” more quickly.
YouTube’s privacy complaint process will be updated to allow requests for the removal of an AI-generated video that simulates an identifiable person, including their face or voice.
The original article contains 331 words, the summary contains 206 words. Saved 38%. I’m a bot and I’m open source!