
AI could be dangerous because of 'societal misalignments,' says CEO of OpenAI


The "very subtle societal misalignments" that could cause the systems to wreak havoc are the artificial intelligence-related risks that keep the CEO of ChatGPT maker OpenAI up at night, he said on Tuesday.

Sam Altman, addressing the World Government Summit in Dubai via video conference, repeated his call for a body akin to the International Atomic Energy Agency to oversee artificial intelligence, which he said is probably advancing faster than the world expects.

According to Altman, there are a few scenarios where it is easy to imagine things going terribly wrong. He said he was not especially worried about killer robots marching down the street, but far more concerned about the very subtle societal misalignments that arise when these systems are deployed across society and, with no particular ill intent, things go horribly wrong.

Altman emphasized, though, that the AI industry, including OpenAI itself, should not be the one writing the rules that govern it.

Altman said there is still a great deal of debate ahead, noting that conferences are being held around the world and that everyone has an idea and a policy paper, which he said is fine.

The San Francisco-based startup OpenAI is one of the pioneers in the artificial intelligence space. Microsoft has made a $1 billion investment in the company, and The Associated Press has signed a deal granting OpenAI access to its news archive. The New York Times, meanwhile, has sued OpenAI and Microsoft for using its content to train their chatbots without consent.

