The Group of Seven (G7) industrialized countries is scheduled to agree on an artificial intelligence (AI) code of conduct for developers on Oct. 30, according to a report by Reuters.
According to the report, the code has 11 points that aim to promote “safe, secure, and trustworthy AI worldwide” and help “seize” the benefits of AI while addressing and mitigating the risks it poses.
The plan was drafted by G7 leaders in September. It says it offers voluntary guidance for “organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.”
Additionally, it suggests that companies should publish reports on the capabilities, limitations, use and misuse of the systems they are building. It also recommends robust security controls for those systems.
The G7 comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, with the European Union also participating.
Cointelegraph has reached out to the G7 for confirmation of the development and additional information.
Related: New data poisoning tool would punish AI for scraping art without permission
This year’s G7 summit took place in Hiroshima, Japan, where a meeting of the participating digital and tech ministers was held on April 29 and 30.
Topics covered in the meeting included emerging technologies, digital infrastructure and AI, with an agenda item specifically dedicated to responsible AI and global AI governance.
The G7’s AI code of conduct comes as governments around the world try to navigate the emergence of AI, balancing its useful capabilities against its risks. The EU was among the first governing bodies to establish guidelines with its landmark EU AI Act.