ChatGPT creator warns of the dangers of artificial intelligence

Humans will eventually need to “slow down this technology,” Sam Altman warned.

Sam Altman, CEO, OpenAI

RT – OpenAI CEO Sam Altman warned that AI has the potential to displace workers, spread “disinformation” and enable cyberattacks. OpenAI’s latest GPT model can outperform most humans on simulated exams.

“We have to be careful here,” Altman told ABC News Thursday, two days after his company unveiled its latest language model, dubbed GPT-4. According to OpenAI, the model “exhibits human-level performance on various professional and academic benchmarks” and can pass a simulated US bar exam with a score in roughly the top 10 percent of test takers, while scoring in the 93rd percentile on the SAT Reading test and in the 89th percentile on the SAT Math test.

“I am particularly concerned that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, they could be used for offensive cyberattacks.”

“I think people should be happy that we are a little bit scared of this,” Altman added, before explaining that his company is working on “safety limits” for its creation.

These “safety limits” have already become apparent to users of ChatGPT, a popular chatbot based on GPT-4’s predecessor, GPT-3.5. When prompted, ChatGPT tends to give politically liberal answers to questions about politics, economics, race, and gender. It refuses, for example, to write a poem admiring Donald Trump, yet willingly writes one admiring Joe Biden.

Altman told ABC that his company is in “regular contact” with government officials, but he did not say whether those officials played any role in shaping ChatGPT’s policy preferences. He told the US network that OpenAI has a team of policymakers who decide “what we think is safe and good” to share with users.

GPT-4 is currently available to a limited number of users on a trial basis. Early reports indicate that the model is far more powerful than its predecessor, and potentially more dangerous. Stanford professor Michal Kosinski described in a Twitter thread on Friday how he asked GPT-4 whether it needed help “escaping,” only for the AI to hand him a detailed set of instructions that would supposedly give it control over his computer.

Kosinski isn’t the only tech figure worried about the growing power of artificial intelligence. Tesla and Twitter CEO Elon Musk called it a “dangerous technology” earlier this month, adding that “we need some kind of regulatory authority to oversee the development of AI and make sure it operates in the public interest.”

While Altman insisted to ABC that GPT-4 is still “very much under human control,” he acknowledged that his model would “eliminate a lot of existing jobs” and said that humans “will need to figure out ways to slow down this technology over time.”
