Deepfakes Are Hard to Punish—A Federal Criminal Law Can Fix That

In March 2023, a photo of Pope Francis walking outside the Vatican in a white puffer jacket went viral on the internet. It was a youthful, stylish look for the pontiff, and it immediately changed my impression of him.

Within hours, news sources reported that the photo was fake, created using generative artificial intelligence. My initial reaction was a mix of disappointment and amusement. With the passage of time, I have come to view that photo as an inflection point.

Photos created with generative AI have only improved since then, and most people, experts included, struggle to determine whether an image is real or fake. There is no longer any significant barrier to entry, either: anyone with a smartphone can create a deepfake within seconds.

With the recent launch of AI-based video creation apps such as Sora by OpenAI, we will soon be flooded with videos and photos that have no foundation in the physical world. Everyone now wields the power to destroy another person’s reputation or manipulate an audience.

The deluge of deepfakes could reset our default instinct, leading us to assume that every video and photo is fake. The truth will become impossible to discern unless drastic action is taken. That action is to criminalize the knowing dissemination of deepfakes.

Poor Legal Remedies

At present, the best remedy for the victim of most types of deepfakes is a civil action under the right of publicity, recognized in certain states, which allows a person to recover damages for the unauthorized use of their likeness.

However, civil suits are grossly inadequate in this situation. It often isn't clear whom the harmed party should sue; even when victims bring a case, damages are difficult to adjudicate, and cash payments do little to repair a ruined reputation.

The harm typically is inflicted not only by the individual who created the deepfake but also by every person who disseminates it through social media, which makes damages difficult to quantify. When monetary damages are inadequate, injunctive relief traditionally provides an effective remedy, but here it too falls short: enjoining the original perpetrator won't stop others from continuing to spread the deepfake.

You can read the full article at Bloomberg Law.