Several leading AI companies, including Amazon, Google, Meta, and Microsoft, have pledged to follow a set of AI safeguards brokered by President Joe Biden's administration. The goal of these voluntary commitments is to ensure that AI products are safe and accountable before they are released to the public.
The commitments include third-party oversight of AI systems, independent security testing, disclosure of potential vulnerabilities, and the use of digital watermarking to distinguish AI-generated images from real ones.
These voluntary pledges respond to growing concern over the potential risks posed by AI. While lawmakers work on enacting binding regulations, the commitments serve as an interim measure to help keep AI products safe and secure.