New Hampshire officials launched an investigation after a robocall using an AI-generated imitation of President Biden's voice urged voters to skip the state's primary. The call was traced to a Texas-based organization. New Hampshire is now among 39 states considering measures to increase transparency around AI-generated deepfake ads and calls. Most states focus on requiring disclosure of AI-produced content rather than controlling or banning its distribution.
In Wisconsin, a law requires political ads created using AI to include a disclaimer. Non-compliance results in a $1,000 fine per violation. Florida passed a stricter law, making non-disclosure of AI-enabled messages a criminal misdemeanor. Arizona is considering similar measures.
Congress, unlike the states, is interested in regulating the content of deepfakes. Several bills aim to ban their circulation, including one prohibiting the distribution of AI-generated material targeting a candidate for federal office. Another bill would remove Section 230 protections for AI-generated content, exposing online platforms to legal liability for hosting such material. Neither bill has received a committee vote. Is there a tech solution to this problem?