Big Tech giants Google and Microsoft, ChatGPT maker OpenAI, and AI startup Anthropic announced on July 26 that they would form a new industry watchdog group to help regulate AI development.
In a joint statement released on the Google blog, the companies revealed their new Frontier Model Forum aimed at monitoring the “safe and responsible” development of frontier AI models.
It pointed out that while governments across the world have already begun efforts to regulate AI development and deployment, “further work is needed on safety standards and evaluations.”
The current core goals of the initiative are advancing research on AI safety, identifying best practices for the responsible development and deployment of frontier models, collaborating with governments and civil leaders, and supporting efforts to develop applications.
Membership in the Forum is open to organizations that fit the predefined criteria, which include developing and deploying frontier models.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, the vice chair and president of Microsoft.
According to the announcement, the Frontier Model Forum will establish an advisory board in the coming months to guide the group’s priorities and strategy. The statement also says the founding companies plan to consult “civil society and governments” on the design of the Forum.
Anna Makanju, the vice president of global affairs at OpenAI, said there is a “profound benefit” to society from advanced AI systems, but that realizing this potential requires oversight and governance.