Global cybersecurity officials, led by CISA’s Jen Easterly, are urging tech companies to embed robust safeguards in AI systems. This push aims to prevent exploitation by rogue states, terrorists, and others for cyberattacks and weapon creation. The emphasis is on reducing vulnerabilities that can be weaponized.

CISA, along with international partner agencies, has released guidelines for secure AI development, emphasizing the need to design, deploy, and operate AI systems responsibly and securely. The initiative follows concerns raised by the release of OpenAI's ChatGPT, which, while innovative, carries potential for abuse.

Key security concerns include adversarial machine learning, where attackers manipulate AI to prompt unauthorized actions or extract sensitive data. Sami Khoury of Canada’s Cyber Centre warns of dangers like AI data poisoning and sophisticated cybercrimes if security is overlooked in AI systems.

The guidelines aim to mitigate risks such as dataset manipulation, AI-assisted cybercrimes, and unauthorized malware creation. OpenAI has established restrictions on its tools to prevent illegal and harmful uses, including disinformation and weapon development.

The Canadian Press