United States Senator Michael Bennet has urged tech companies to label AI-generated content and monitor any misleading content produced by artificial intelligence (AI).
In a June 29 letter sent to executives of major tech companies involved with AI, including ChatGPT creator OpenAI, as well as Microsoft, Meta, Twitter and Alphabet, Bennet stressed that users should be made aware when content has been created with AI.
Bennet said fake images have disruptive consequences for the economy and trust, especially when they are politically oriented.
The senator also stressed that while some companies have begun labeling some AI-generated content, their policies are “alarmingly reliant on voluntary compliance.”
In the letter, Bennet asks the company executives to respond by July 31 to concerns about their standards for identifying AI-generated content, how those standards are implemented, and the repercussions for rule violations.
None of the companies have responded except Twitter, which reportedly replied with a poop emoji.
Related: Inflection AI raises $1.3B in funding led by Microsoft and Nvidia
The same fear, that unlabeled AI content could lead to misinformation, has been expressed by European lawmakers as well.
On June 5, European Commission Vice President Vera Jourova told the media that she believes companies deploying generative AI tools with the “potential to generate disinformation” should label the content those tools create in order to stop its spread.
Although the U.S. does not currently have comprehensive AI legislation in place, on June 8, U.S. lawmakers introduced two bipartisan bills targeting transparency and innovation in the AI space.
One of the bills was proposed by Democratic Senator Gary Peters.