
OpenAI calls for the regulation of artificial intelligence to ensure human safety

Leaders of OpenAI, the organization responsible for developing ChatGPT, are advocating for the regulation of “superintelligent” AI systems.

They call for the creation of a regulatory body, similar to the International Atomic Energy Agency, to protect humanity from the dangers of creating an intelligence capable of destruction.

In a brief statement posted on the company’s website, co-founders Greg Brockman and Ilya Sutskever, along with CEO Sam Altman, called for the creation of an international regulatory body to begin work on certifying AI systems.

The proposal calls for audits and tests of compliance with security standards, as well as restrictions on deployment and levels of security. These measures are intended to reduce the existential risks associated with such systems and to protect humanity from potential harm.

For decades, researchers have highlighted the potential dangers of superintelligence. With the rapid progress of artificial intelligence development, however, these risks are becoming increasingly tangible.

The Center for AI Safety (CAIS), based in the US, is dedicated to mitigating the societal risks of artificial intelligence and has identified eight categories of risk it considers “catastrophic” or “existential” in the development of AI, serious risks to which we are all exposed.

OpenAI leaders are calling for AI regulation

OpenAI’s leaders suggest it is conceivable that within the next ten years, AI systems will reach expert skill levels in many areas and become as productive as some of the largest companies in existence today.


This rapid development of artificial intelligence has the potential to dramatically transform many industries, bringing efficiency and automation to a wide range of productive activities.

In terms of both its potential benefits and the protections it will require, the leaders’ statement says, superintelligence will be more powerful than any technology humanity has dealt with in the past.

There is the potential for a significantly more prosperous future, but the risks involved must be carefully managed to reach that scenario. Faced with the possibility of existential risk, they note, it is crucial to be proactive rather than simply respond to situations as they arise.

In the immediate term, the group stresses the need for a “certain level of coordination” among companies engaged in advanced AI research, to ensure that increasingly powerful models are integrated into society smoothly, with safety as the priority.

This coordination could take the form of collective initiatives or agreements that limit the advancement of AI capabilities. Such approaches will be key to ensuring that AI is developed in a controlled and responsible manner, with the risks involved taken into account.