ChatGPT’s Parent OpenAI Now Looking at Humans to Regulate AI, Proposes a Regulatory Body
In a blog post on Monday, Microsoft-backed OpenAI shared its reasoning for regulating AI.
ChatGPT maker OpenAI has proposed creating a new international body to regulate artificial intelligence (AI). The company, led by CEO Sam Altman, said that within the next ten years AI systems could exceed expert skill level in most domains and carry out as much productive activity as today’s largest corporations.
In a blog post on Monday, OpenAI explained its reasoning for regulating AI. The company stated, “The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems.”
“We don’t yet know how to design such a mechanism, but we plan to experiment with its development,” it added.
The blog post, written by OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever, compared ‘superintelligence’ to nuclear energy and proposed creating an authority akin to the International Atomic Energy Agency to curb the risks of AI.
OpenAI’s plan to tackle AI challenges
In the blog post, OpenAI laid out a three-point agenda for managing the risks of future superintelligent AI systems.
1) Coordination among AI makers
OpenAI’s blog post suggested that the companies behind AI systems such as Bard and Bing, along with labs like Anthropic, should coordinate their efforts so that the development of ‘superintelligence’ proceeds in a way that ensures safety and smooths the integration of AI systems into society.
To that end, governments around the world could set up a regulatory project involving the leading AI developers, or the companies could collectively agree to limit the growth of AI capabilities to a certain rate per year.
2) International regulatory body
OpenAI has also proposed the idea of a new international body, much like the International Atomic Energy Agency. The new body would ideally have the authority to inspect systems, require audits, test for compliance with safety standards, and place restrictions on deployment.
3) Safer superintelligence
OpenAI also said it is working to make its artificial intelligence systems safer by aligning them with human values and intent.