Georgia Bill Seeks to Ban AI Deepfakes in Elections, Make Releasing Them a Felony Crime


A bill filed last week would make it a felony in Georgia to release “deepfake” audio and images, including those created using artificial intelligence (AI) technology, in the context of an election.

Georgia State Senator John Albers (R-Alpharetta) filed SB 392, which, according to its summary, would make it a criminal offense to use deepfake technology to interfere with an election.

Read the full story

The Senate’s ‘No Section 230 Immunity for AI Act’ Would Strip Artificial Intelligence Developers of Liability Protection Under Section 230

The Senate could soon take up a bipartisan bill clarifying whether liability protections extend to artificial intelligence-generated content, a question with considerable implications for online speech and the development of AI technology.

Republican Missouri Sen. Josh Hawley and Democratic Connecticut Sen. Richard Blumenthal in June introduced the No Section 230 Immunity for AI Act, which would clarify that liability protections under Section 230 of the Communications Decency Act do not apply to text and visual content created by artificial intelligence. Hawley may attempt to hold a vote on the bill in the coming weeks, his office told the Daily Caller News Foundation.

Read the full story

Commentary: AI Is Coming for Art’s Soul

While AI-based technology has recently been used to summon deepfakes and create a disturbing outline for running a death camp, the ever-pervasive digital juggernaut has also been used to write books under the byline of well-known authors.

The Guardian recently reported that five books had appeared for sale on Amazon, apparently written by author Jane Friedman. Only, they weren’t written by Friedman at all: they were written by AI. When Friedman submitted a claim to Amazon, the company said it would not remove the books because she had not trademarked her name.

Read the full story

Detecting Deepfakes by Looking Closely Reveals a Way to Protect Against Them

by Siwei Lyu

Deepfake videos are hard for untrained eyes to detect because they can be quite realistic. Whether used as personal weapons of revenge, to manipulate financial markets or to destabilize international relations, videos depicting people doing and saying things they never did or said are a fundamental threat to the longstanding idea that “seeing is believing.” Not anymore.

Most deepfakes are made by showing a computer algorithm many images of a person, and then having it use what it saw to generate new face images. At the same time, their voice is synthesized, so it both looks and sounds like the person has said something new.

Some of my research group’s earlier work allowed us to detect deepfake videos that did not include a person’s normal amount of eye blinking – but the latest generation of deepfakes has adapted, so our research has continued to advance. Now, our research can identify the manipulation of a video by looking closely at the pixels of specific frames. Taking one step further, we also developed an active measure to protect individuals from becoming victims of deepfakes.

Finding flaws

In two recent research papers, we described ways to detect deepfakes with flaws…
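The eye-blink cue described above can be illustrated with a short, self-contained sketch. The snippet below is not the researchers’ actual detector; it assumes a facial-landmark tracker has already produced one eye-aspect-ratio (EAR) value per frame, and the specific thresholds and blink-rate cutoff are illustrative assumptions, loosely based on the fact that people typically blink around 15 to 20 times per minute.

```python
# Minimal sketch of blink-rate-based deepfake screening.
# Assumes per-frame eye-aspect-ratio (EAR) values were already extracted
# by a landmark tracker; EAR drops sharply while the eye is closed.
# Threshold values below are illustrative assumptions, not published settings.

def count_blinks(ear_values, closed_thresh=0.2, min_closed_frames=2):
    """Count blinks as runs of consecutive frames with EAR below threshold."""
    blinks = 0
    closed_run = 0
    for ear in ear_values:
        if ear < closed_thresh:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:  # count a blink that ends the clip
        blinks += 1
    return blinks

def looks_synthetic(ear_values, fps=30.0, min_blinks_per_minute=6.0):
    """Heuristic: far fewer blinks than the human norm is suspicious."""
    minutes = len(ear_values) / fps / 60.0
    if minutes == 0:
        return False
    rate = count_blinks(ear_values) / minutes
    return rate < min_blinks_per_minute

# Example: a 60-second clip at 30 fps containing a single brief blink.
frames = [0.3] * 1800
frames[100:104] = [0.1] * 4
print(looks_synthetic(frames))  # True: ~1 blink/minute is far below normal
```

As the excerpt notes, newer deepfake generators have adapted and now reproduce blinking, which is why a practical detector layers many cues, including the pixel-level frame analysis the author describes, rather than relying on any single tell.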

Read the full story