AI

US, Britain, and other countries ink agreement to make AI ‘secure by design’

Artificial Intelligence words are seen in this illustration taken March 31, 2023. REUTERS/Dado Ruvic/Illustration


On Sunday, the U.S., Britain, and more than a dozen other nations unveiled the first detailed international agreement on protecting AI from rogue actors, urging companies to make AI systems that are “secure by design.”

In a 20-page document released Sunday, the 18 governments stated that firms developing and deploying AI must protect customers and the wider public from misuse.

The non-binding pact sets out general guidelines, such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.

Still, Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries had signed on to the idea that AI systems must put safety first.

“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, saying the rules are “an agreement that the most important thing that needs to be done at the design phase is security.”

The pact is the latest in a succession of government initiatives, few of which carry enforcement power, to shape the development of AI, whose rapid rise is reshaping industry and society.

Besides the U.S. and the U.K., the 18 signatories include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.

The framework recommends releasing AI models only after security testing, to prevent hackers from hijacking the technology.

It does not address controversial uses of AI or how data is collected to train these models. AI’s growth has raised fears about its potential to undermine democracy, fuel fraud, and cause massive job losses.

European politicians are further along than the U.S. in crafting AI regulations. France, Germany, and Italy have agreed to “mandatory self-regulation through codes of conduct” for foundation AI models, which can produce a broad range of outputs.

President Biden has pushed for AI legislation, but a divided Congress has made little progress. In October, the White House issued an executive order aimed at reducing AI risks to consumers, workers, and minorities while strengthening national security.



