On Sunday, the U.S., Britain, and more than a dozen other nations unveiled the first detailed international agreement on protecting AI from rogue actors, urging companies to create AI systems that are “secure by design.”
A 20-page paper released Sunday by the 18 governments stated that firms developing and deploying AI must protect consumers and the public from exploitation.
The non-binding pact offers general guidelines such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.
Still, Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries had put their names to the idea that AI systems must put safety first.
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the rules are “an agreement that the most important thing that needs to be done at the design phase is security.”
The pact is the latest in a series of government initiatives, few of which carry enforcement teeth, to shape the development of AI, whose influence on industry and society is growing rapidly.
Besides the U.S. and Britain, the 18 signatories to the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
The framework recommends releasing models only after security testing to prevent hackers from hijacking AI technology.
It does not address controversial uses of AI or how data is collected for these models. AI’s growth has raised fears about its potential to undermine democracy, boost fraud, and cause massive job losses.
European politicians are ahead of the U.S. in crafting AI regulations. France, Germany, and Italy have agreed to “mandatory self-regulation through codes of conduct” for foundation AI models, which are designed to produce a broad range of outputs.
Biden has pushed for AI legislation, but a divided Congress has made little progress. In October, the White House issued an executive order to reduce AI risks to consumers, workers, and minorities while strengthening national security.