

Government AI regulation could censor protected speech online



Edan Kauer is a former FIRE intern and a sophomore at Georgetown University.


Elliston Berry was just 14 years old when a male classmate at Aledo High in North Texas used AI to create fake nudes of her based on images he took from her social media. He then did the same to seven other girls at the school and shared the images on Snapchat. 

Now, two years later, Berry and her classmates have advocated for Senator Ted Cruz’s Take It Down Act (TIDA), a recently enacted law that gives social media platforms 48 hours to remove “revenge porn” once it is reported. The law treats any non-consensual intimate imagery (NCII), including AI deepfakes, as falling under this category. But despite its noble intentions, the law’s dangerously vague wording is a threat to free speech.

This law, which covers both adults and minors, makes it illegal to publish an image of an identifiable minor that meets the definition of “intimate visual depiction” — defined as certain explicit nudity or sexual conduct — with intent to “abuse, humiliate, harass, or degrade” the minor or to “arouse or gratify the sexual desire of any person.”


That may sound like a no-brainer, but deciding what content this text actually covers — including what counts as “arousing,” “humiliating,” or “degrading” — is highly subjective. The law risks chilling protected digital expression, prompting social media platforms to censor harmless content, like a family beach photo, a sports team picture, or images of injuries or scars, in order to avoid legal penalties or to respond to bad-faith reports.

Civil liberties groups such as the Electronic Frontier Foundation (EFF) have warned that the language of the law itself raises censorship concerns because it is vague and therefore easily exploited:

Take It Down creates a far broader internet censorship regime than the Digital Millennium Copyright Act (DMCA), which has been [abused] to [take down lawful speech]. But at least the DMCA has an anti-abuse provision and protects services from copyright claims should they comply. This bill contains none of those minimal speech protections and essentially greenlights misuse of its takedown regime … Congress should focus on enforcing and improving these existing protections, rather than opting for a broad takedown regime that is bound to be abused. Private platforms can play a part as well, improving reporting and evidence collection systems.

Nor does the law address the possibility of people filing bad-faith reports.

In the 2002 case Ashcroft v. Free Speech Coalition, the Supreme Court held that the language of the Child Pornography Prevention Act (CPPA) was so broad that it could have been used to censor protected speech. Congress passed the CPPA to combat the circulation of computer-generated child pornography, but as Justice Anthony Kennedy wrote in the majority opinion, the language of the CPPA could be used to censor material that appears to depict child pornography without actually doing so.


Also in 2002, the Supreme Court heard Ashcroft v. ACLU, which arose after Congress passed the Child Online Protection Act (COPA) to prevent minors from accessing adult content online. But again, because of the bill’s broad language, the Court found the law would restrict adults exercising their First Amendment right to access mature content.

Like the Take It Down Act, both were laws created to protect children from sexual exploitation online, yet drafted with vague and overly broad standards that threaten protected speech.

Unfortunately, stories like the one at Aledo High are becoming more common as AI becomes more accessible. Last year, boys at Westfield High School in New Jersey used AI to circulate fake nudes of Francesca Mani, then 14 years old, and other girls in her class. Westfield High administrators were caught off guard, having never dealt with this type of incident. Although the Westfield police were notified and the perpetrators were suspended for up to two days, parents criticized the school for its weak response.


A year later, the school district developed a comprehensive response and amended its bullying policy to cover harassment carried out through “electronic communication,” a category that includes AI-generated images. What’s true for Westfield High is true for America: existing laws are often more than adequate to deal with emerging tech issues. By classifying AI material under electronic communication as a category of bullying, Westfield High demonstrates that the creation of new AI-specific policies is redundant. On a national scale, the same can be said for classifying and prosecuting instances of child abuse online.

While we must acknowledge that online exploitation is a very real issue, we cannot solve the problem at the expense of other liberties. Once we grant the government the power to silence the voices we find distasteful, we open the door to censorship. Though it is essential to address the very real harms of emerging AI technology, we must also keep our First Amendment rights intact.
