A U.S. initiative to encourage “responsible” military use of artificial intelligence has picked up support from dozens of other countries, officials announced this week, as governments grapple with the potential dangers of the new technology.
The State Department announced that 45 foreign governments have joined the U.S. in launching implementation of the “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.”
The declaration was unveiled in February and has been promoted by Vice President Kamala Harris. She previously said that 31 nations had signed on to the accord, but that figure has since grown.
The document establishes a set of 10 norms for military use of AI, including complying with international law, properly training personnel, implementing safeguards and subjecting the technology to “rigorous” testing and review.
“The Declaration lays the foundation for states to engage in continued dialogue on responsible military use of the full range of AI applications, including sharing best practices, expert-level exchanges, and capacity-building activities,” the State Department said in a statement.
The U.S. has also launched initiatives aimed at AI safety more broadly, including the creation of a new AI safety institute and policy guidance for the federal government’s use of the technology.
“History will show that this was the moment when we had the opportunity to lay the groundwork for the future of AI,” Harris said in a major policy speech in London this month. “And the urgency of this moment must then compel us to create a collective vision of what this future must be.”
That speech came days after President Biden signed what the White House called a “landmark” executive order to protect Americans from the potential risks of AI. The order requires that companies developing “foundational AI” models share their safety-test results with the federal government.
“The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board,” the White House said of the safety standards the order directs agencies to develop. “The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.”
The growth of AI technology has also raised concerns about its impact on the labor market. Biden’s order requires that a report on those potential impacts be drawn up.
Meanwhile, the Senate held a bipartisan AI forum in September, featuring tech leaders including Tesla CEO Elon Musk, Microsoft co-founder Bill Gates and Meta CEO Mark Zuckerberg.
Fox Business’ Greg Norman contributed to this report.