Eighteen countries, led by the United States, have unveiled an international agreement aimed at keeping artificial intelligence (AI) systems safe from rogue actors and urging providers to follow "secure by design" principles. The 20-page document, jointly published on Sunday by the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre, aims to protect AI systems from misuse and ensure they operate securely and as intended.

The document is divided into four sections, offering recommendations for each stage of AI development, from design to maintenance. It emphasizes the need to safeguard AI assets, responsibly release AI systems, and continuously monitor them post-deployment.

Participating countries include Australia, Canada, Chile, the Czech Republic, Estonia, Germany, Israel, Italy, Japan, Nigeria, Poland, and Singapore. Although the agreement is nonbinding and offers broad advice, it underscores the growing global emphasis on AI safety and responsible development. Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, highlighted the agreement's insistence that AI system safety be prioritized over rapid market deployment.

This initiative aligns with the Biden administration’s recent executive order on AI, which sets new safety standards and promotes privacy in AI training data. The order also reviews the use of personal data in government agencies and supports AI research in critical areas like health care and climate change.
