President Joe Biden has issued a comprehensive executive order regulating federal agencies' use of artificial intelligence, a significant move to address the risks of AI.
A senior administration official says the order reshapes AI usage across the federal government, affecting areas like health care, education, trade, housing and more.
“President Biden believes that we have an obligation to harness the power of AI for good while protecting people from potentially profound risks. In response to the President's leadership on the subject, 15 major American technology companies have begun to implement voluntary commitments to be sure that AI technology is safe, secure and trustworthy before releasing it to the public. That is not enough, however,” a senior administration official said.
The order mandates that developers of advanced AI systems disclose safety test results and other essential data, establishes safety standards and tools, protects against AI-enabled fraud, sets cybersecurity measures, and calls for a National Security Memorandum directing further AI security actions.
The executive order also requires developers to share safety test results with the government under the Defense Production Act, sets standards for red-team testing, and directs the Department of Commerce to craft guidance to help authenticate content and pursue fixes to vulnerabilities in critical software.
“President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security and trust. It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” said White House Deputy Chief of Staff Bruce Reed.
The order is broad in scope. Its objectives include safeguarding Americans' privacy through privacy-preserving techniques, strengthening related research and technology, evaluating data collection methods, providing guidance to various entities, tackling algorithmic discrimination, ensuring fairness within the criminal justice system, and crafting AI best practices for workers.
The order directs coordination on best practices for investigating AI-related civil rights violations and for AI’s use in the criminal justice system, and it directs the Department of Health and Human Services to establish a safety program for AI in health care. It also requires an examination of AI’s potential impact on the labor market.
While prior White House initiatives aimed at addressing AI have faced criticism, the new order will empower numerous agencies to exert influence in the market. Congress has attempted to craft legislation addressing AI risks, but no comprehensive measure has been introduced thus far; the order renews calls on Congress to pass data privacy legislation.
Biden is scheduled to meet with Congressional leaders on Tuesday to discuss AI.
"The two words we approach this with are one, urgency, we got to move quickly. We can't move too quickly, or screw because we'll screw it up. But other countries or bad actors will get ahead of us. But the second is humility. It's about the hardest thing I've attempted to undertake legislatively," Senate Majority Leader Chuck Schumer said on Monday.
Additionally, the administration lays out goals for working with international partners and efforts to establish an international framework, with officials saying they’ve had conversations with dozens of countries.
"We've been working with both close allies and partners in agreement on those principles, as well as a broader set of countries to get agreement on how we can all ensure that we develop models in a safe and secure way," said Anne Neuberger, deputy national security advisor for Cyber and Emerging Technology.
Vice President Kamala Harris is expected to speak at an AI summit in the United Kingdom this week.
While the Biden administration views the executive order as a step forward in regulating AI, legal challenges may be ahead.
"I've heard people talk about potential first amendment issues and and others there, I think it's gonna take some time to work out how this this applies and who it applies to," said Cameron Kerry, a global thought leader on privacy and AI for the Brookings Institution.