On Thursday, lawmakers in the House introduced a bipartisan bill that would mandate disclaimers on AI-generated content online, requiring digital signatures on such content to inform users of its AI origin. Sponsors, including Rep. Anna Eshoo (D-CA) and Rep. Neal Dunn (R-FL), say the measure would bring transparency by helping users distinguish deepfakes and other AI-manipulated media. The Federal Trade Commission would enforce the requirement, and violators could face civil lawsuits.

This legislative move is part of broader efforts to regulate the rapidly growing landscape of AI-generated content. In 2023, major tech firms including Amazon, Google, Meta, Microsoft, and OpenAI adopted voluntary guidelines under Biden administration oversight, incorporating commitments such as cybersecurity testing and disclaimer practices. Google and Meta also introduced disclosure requirements for political ads that use AI-generated content.

Additionally, President Biden issued an executive order governing AI use within federal agencies, while the EU has outlined extensive AI regulations set to take effect in 2025, focusing on public safety, developer obligations, and continuous oversight.
