Biden announces ‘strongest’ regulations yet to ensure safety of AI
Developers of powerful AI will have to share safety results and critical information with the US government.
United States President Joe Biden has issued a sweeping executive order to regulate the development of artificial intelligence (AI) amid growing concern about its potential impact on everything from national security to public health.
“To realise the promise of AI and avoid the risk, we need to govern this technology,” Biden said on Monday. “In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run.”
The executive order includes a provision that developers of the most powerful AI models must notify the government of their work and share safety test results.
It also calls on the National Institute of Standards and Technology to establish “rigorous standards” for testing AI prior to its release, the Department of Commerce to develop guidelines for identifying AI-generated content, and agencies funding “life science projects” to establish “strong new standards of biological synthesis screening” to ensure AI cannot engineer biohazards.
Biden also called on Congress to pass data privacy legislation and for the Department of Justice to address “algorithmic discrimination” by landlords and in federal benefits programmes.
White House Deputy Chief of Staff Bruce Reed hailed the measures as “the strongest set of actions any government in the world has ever taken on AI safety, security and trust” and the “next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks”.
Fears about the risks of AI have grown sharply since the release last year of OpenAI’s ChatGPT, whose capabilities caught regulators and government officials around the world off guard.