As expected for months, the Biden Administration on Monday issued an executive order laying out standards and basic regulations for safety and security around artificial intelligence.
Calling these "the most sweeping actions ever taken to protect Americans from the potential risks of AI systems," the order establishes a framework of expectations for AI companies as they continue to develop new tools and technologies. The order follows meetings at the White House and on Capitol Hill with the leaders of major AI companies, including OpenAI's Sam Altman and Google's Sundar Pichai, and it comes ahead of a major AI safety summit this week in the UK called by Prime Minister Rishi Sunak.
"AI can help government deliver better results for the American people," says the administration in the order. "It can expand agencies’ capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. However, use of AI can pose risks, such as discrimination and unsafe decisions."
Among other initial regulations, the executive order requires companies to loop the federal government in as they perform safety tests of any new AI tool, in accordance with the Defense Production Act. And the order directs the Department of Commerce to "develop guidance for content authentication and watermarking to clearly label AI-generated content," in an effort to combat deep-fake videos and the like.
Further, the order states:
The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
When it comes to issues of civil rights and the application of AI in criminal justice — such as in sentencing — the order recommends "developing best practices" around such AI applications, but doesn't go much further. As TechCrunch notes, "some might interpret the order as lacking real teeth, as much of it seems to be centered around recommendations and guidelines."
Those "real teeth" would come from actual legislation passed in Congress, but as we all know, Republicans aren't really capable of cooperating on much of anything these days, and getting such legislation passed given the current state of Congress would be nearly impossible.
Altman of OpenAI has not yet commented on the order, but Jack Clark, co-founder and head of policy at SF-based Anthropic, said on X today, "We’re pleased to see such a heavy emphasis on testing and evaluating AI systems in the Executive Order. You can’t manage what you can’t measure, and with this order the government has made meaningful steps towards creating third-party measurement and oversight of AI systems."
St. Thomas University College of Law Professor Kevin Frazier says in a comment to the Chronicle, "On the whole, this EO outlines several meaningful steps, but its effectiveness is highly dependent on execution."
Frazier adds, "I would also like to see the Administration and Congress recognize that this is too important of a regulatory challenge to leave up to folks on the Hill and AI lab leaders."
The order addresses issues of consumer privacy and the potential harm to workers as well. The administration calls on Congress "to pass bipartisan data privacy legislation to protect all Americans, especially kids." And it vaguely directs someone in government, or in the AI industry, to "Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection."
"The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide," the order states. This will include "lead[ing] an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety."
Photo via 2001: A Space Odyssey/MGM