The real danger lies in those images that are crafted with the explicit intention of deceiving people — the ones that are so convincingly realistic that they could easily pass for authentic historical photographs.
At a fundamental, meta level, the issue is this: are people allowed to use AI to deceive other people?
Should all realistic AI-generated things be labeled as such?
There’s no realistic way to enforce that. The answer is to go the other way. We used to have systems in place for the accountability of information. We need to bring back journalistic and historical institutions that act as trustworthy sources, citing their work and showing their research.
You can still mandate by law that any AI-generated product carry a label identifying it as such. We do the same thing today with other products that are manufactured and sold (recycling icons, etc.).
As far as enforcement goes, the public themselves would ultimately be the enforcers (or at least additional ones), as the recent British royal family photo scandal suggests.
But ultimately, humanity has to start considering laws that affect the whole species, ones that don’t just stop at an individual country’s border.
Don’t get me started on the sham that is recycling icons 😂
I’m all for regulation that would require media companies to disclose that something is fake if it could reasonably be taken as truth. But that doesn’t solve the problem of anyone with a computer pumping fake images onto the web. What you’re suggesting would require a world government with chip-level access to anything with a CPU.
As for the public enforcing the truth: that’s what I’m suggesting. Assume anything you see online could be fake. Only trust institutions that back up their media with verifiable facts.
The problem with that is that for data, it’s much easier to lie and get away with it. If a bot throws up an unlabelled AI generated image, law enforcement agencies would have a much harder time tracking down who made it.
There could be hundreds, or even thousands, and the moment they pin one down, more will appear.
By comparison, physical products can only be manufactured and imported so quickly. There are physical factories that can be tracked down, and it’s prohibitively expensive to spin up a new product line every time the last one is shut down.
If a bot throws up an unlabelled AI generated image, law enforcement agencies would have a much harder time tracking down who made it.
Well, they would just start with the person who owns the user account, or the site the account is associated with (we might see the end of sock puppet accounts). Or they would get that information from the NSA (the government knows every one of your porn fetishes).
Honestly, I realize this isn’t as easy to do as I’m making it sound, and making it actually work would be kind of ugly and not completely fair to all parties. But it is doable, and needed.
We shouldn’t just throw up our hands on day one, say “fuck it, nothing can be done about it”, and then all suffer in the pollution of the human conversational sphere until no one can converse with anyone anymore because of all the garbage.
When we stop talking to each other because we think everything is AI generated, that’s a formula for the destruction of the human race. We have to be able to talk to each other, and be confident that we’re actually talking to each other and not a robot.
Well for the majority of human existence we got by on talking to each other in person. So I think the collapse of humanity is a bit dramatic.
Now, as we’ve seen with torrenting, if any country doesn’t comply with or enforce laws about how its citizens interact with the internet, you can just VPN through that country to do what you want.
Ok so
Create the infrastructure for an entire world government.
Force every country to join and fully enforce laws tying every person to their online accounts.
Of course, this will create a dangerous police state, like China’s government, in many countries where speaking out against your government is dealt with harshly. So either abolish free speech or fix all the corruption in every country in the world.
Of course, this level of control over the world will itself attract a lot of corruption, so build an unassailable global set of checks and balances for how this government should be run that literally everyone on Earth can agree on.
Well for the majority of human existence we got by on talking to each other in person. So I think the collapse of humanity is a bit dramatic.
We have never had the ability to con each other so completely, and in such large numbers, as we do today with the Internet and specialized networks.
And more importantly, you always knew you were talking to another person, not a conflict bot, an astroturfing bot, a political party bot, etc. Now you don’t, which is my point. We can’t solve problems if we don’t know whether we’re talking to a person or a non-person.
I wouldn’t be so quick to dismiss what I’m saying.
Well, not something that harsh, but I think we’re looking at losing some of the faux anonymity that we have (no more sock puppet accounts, etc.).
Most people haven’t thought far enough ahead about what it means, and all of the ramifications, if we let AI run rampant in the human ‘public square’.
Instead of duplicating my other comment on this subject, I’ll just link to it here.
Physical products are not the same as digital products. Your suggestions are very unrealistic.
Or: proper journalism.
If the last couple of years prove anything, that’s not going to save us, not on its own.
You’re assuming that 100% of people are aware enough to consume proper journalism and make the right decisions.
Right now, large swaths of people are being convinced of things that are not true through improper journalism.