Video streaming platform YouTube has released new community guidelines on the disclosure of artificial intelligence (AI) used in content.
The platform published a blog post on Nov. 14 saying the updates will require creators to inform viewers when the content being shown is “synthetic.”
An example given in the update was an AI-generated video that “realistically depicts” an event that never happened, or content showing a person saying or doing something they didn’t actually do.
This information will be displayed to viewers in two ways, according to YouTube: a new label added to the description panel and, for content about “sensitive topics,” a more prominent label on the video player itself.
Sensitive topics, according to YouTube, include political elections, “ongoing conflicts,” public health crises and public officials.
YouTube says it will work with creators to help its community better understand the new guidelines. However, creators who do not abide by the rules may have their content removed and face “suspension from the YouTube Partner Program, or other penalties.”
Related: Google sues scammers over creation of fake Bard AI chatbot
The platform also touched on the topic of AI-generated deepfakes, which have become increasingly common and realistic. It said it is introducing a new feature that will allow users to request the removal of a synthetic video that “simulates an identifiable individual, including their face or voice, using our privacy request process.”
Recently, multiple celebrities and public figures, including Tom Hanks, MrBeast, Gayle King and Jennifer Aniston, have battled deepfake videos of themselves endorsing products.