As more AI creation tools arrive, the risk of deepfakes and of misrepresentation through AI simulations rises with them, posing a potentially significant threat to democracy through misinformation.
Indeed, just this week, X owner Elon Musk shared a video that depicted U.S. Vice President Kamala Harris making disparaging remarks about President Joe Biden, which many have suggested should be labeled as a deepfake to avoid confusion.
“This is amazing”
— Elon Musk (@elonmusk) July 26, 2024 (pic.twitter.com/KpnBKGUUwn)
Musk has essentially laughed off suggestions that anyone could believe that the video is real, claiming that it’s a parody and “parody is legal in America.” But when you’re sharing AI-generated deepfakes with hundreds of millions of people, there is indeed a risk that at least some of them will be convinced that the content is legit.
So while this example seems pretty clearly fake, it underlines the risk of deepfakes and the need for better labeling to limit misuse.
Which is what a group of U.S. senators has proposed this week.
Yesterday, Sens. Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis introduced the bipartisan “NO FAKES” (Nurture Originals, Foster Art, and Keep Entertainment Safe) Act, which would establish penalties for individuals and platforms that produce or host deepfake content.
As per the announcement:
“The NO FAKES Act would hold individuals or companies liable for damages for producing, hosting, or sharing a digital replica of an individual performing in an audiovisual work, image, or sound recording that the individual never actually appeared in or otherwise approved – including digital replicas created by generative artificial intelligence (AI). An online service hosting the unauthorized replica would have to take down the replica upon notice from a right holder.”
So the bill would essentially empower individuals to request the removal of deepfakes that depict them in situations that never actually occurred, with certain exclusions.
Including, you guessed it, parody:
“Exclusions are provided for recognized First Amendment protections, such as documentaries and biographical works, or for purposes of comment, criticism, or parody, among others. The bill would also largely preempt state laws addressing digital replicas to create a workable national standard.”
So, ideally, this would establish a legal process for facilitating the removal of deepfakes, though the specifics could still enable AI-generated content to proliferate, both under the listed exclusions and within the legal parameters around proving that such content is indeed fake.
Because what if there’s a dispute over the legitimacy of a video? Does a platform then have legal recourse to leave that content up until it’s proven fake?
It seems there could be grounds to push back against such claims, as opposed to removing content on demand, which could mean that some of the more convincing deepfakes still get through.
A key focus, of course, is AI-generated sex tapes and misrepresentations of celebrities. In instances like these, the parameters for what should be removed generally seem clear-cut, but as AI technology improves, I do see some risk in actually proving what’s real, and in enforcing removals accordingly.
But regardless, the bill is another step toward enabling enforcement against unauthorized AI-generated likenesses, which should, at the least, establish stronger legal penalties for creators and hosts, even with some gray areas.
You can read the full proposed bill here.