THE BIZNOB – Global Business & Financial News – A Business Journal – Focus On Business Leaders, Technology – Entrepreneurship – Finance – Economy – Politics & Lifestyle

Technology

OpenAI CEO visits South Korea to promote AI development

According to a plan published on OpenAI's website on Monday, the artificial intelligence company has established a framework for handling safety in its most advanced models, including giving the board the power to reverse safety decisions. OpenAI, which is backed by Microsoft (MSFT.O), will deploy its latest technology only if it is judged safe in specific areas such as cybersecurity and nuclear threats. The company is also creating an advisory group that will review safety reports and send them to its executives and board of directors; executives will make the decisions, but the board can overrule them. Since ChatGPT's debut a year ago, the potential risks of AI have been front of mind for both AI researchers and the general public. Generative AI's ability to write poems and essays has impressed users, but it has also raised concerns that the technology could spread misinformation and manipulate people. In April, a group of AI industry executives and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. A Reuters/Ipsos poll in May found that more than two-thirds of Americans are concerned about the possible negative effects of AI, and 61% believe it could pose a threat to society.

OpenAI CEO Sam Altman will meet with South Korean President Yoon Suk Yeol to boost AI competitiveness.

Altman visited Israel, Jordan, Qatar, UAE, India, and South Korea this week after meeting with politicians and national leaders across Europe last month to discuss AI’s potential and risks.
“People are focused on not stifling innovation, and any regulatory framework has got to make sure that the benefits of this technology come to the world,” Altman told nearly 100 South Korean businesses on Friday.

Since Microsoft Corp (MSFT.O)-backed OpenAI launched ChatGPT last year, generative AI has grown rapidly and become popular, prompting lawmakers worldwide to address safety concerns.

The EU’s draft AI Act is anticipated to become law this year, while the US is considering updating existing rules for AI.

South Korea’s new AI regulations, less stringent than the EU’s, are pending legislative approval.

A parliament committee passed an AI law draft in February that ensures freedom to produce AI products and services unless regulators judge them harmful to people’s lives, safety, and rights.

In April, South Korea’s Ministry of Science and ICT announced plans to promote local AI development, including providing training datasets for “hyperscale” AI, while discussions on AI ethics and legislation continued.

Naver (035420.KS), Kakao (035720.KS), and LG (003550.KS) are among the few South Korean tech corporations that have established foundation models for artificial intelligence in a market dominated by the US and China.

South Korean startups, meanwhile, are pursuing niche or specialized markets that big tech in the US and China has not yet addressed.

LG AI Research chairman Kyunghoon Bae said, “In order for Korean companies to have strength in the global AI ecosystem, each company must first secure specialised technology for vertical AI,” or AI optimized for specific needs.

Naver wants to develop localized AI applications for non-English-speaking markets such as Japan and Southeast Asia, as well as politically sensitive regions like the Middle East.


