The United States National Institute of Standards and Technology (NIST) and the Department of Commerce are soliciting members for the newly established Artificial Intelligence (AI) Safety Institute Consortium.
In a document published to the Federal Register on Nov. 2, NIST announced the formation of the new AI consortium, along with an official notice requesting applicants with the relevant credentials.
Per the NIST document, the purpose of the collaboration is to create and implement specific policies and measurements to ensure US lawmakers take a human-centered approach to AI safety and governance.
Collaborators will be required to contribute to a laundry list of related functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.
These efforts come in response to a recent executive order issued by US President Joseph Biden. As Cointelegraph recently reported, the executive order established six new standards for AI safety and security, though none appear to have been legally enshrined.
Related: UK AI Safety Summit begins with global leaders in attendance, remarks from China and Musk
While many European and Asian states have begun instituting policies governing the development of AI systems with respect to user and citizen privacy, security, and the potential for unintended consequences, the US has comparatively lagged in this arena.