YouTube, the Google-owned video platform, has announced a new set of rules for artificial intelligence (AI) content. Under the new rules, creators will soon be required to disclose whether a video was produced using generative AI. The move is intended to curb the potentially harmful effects of generative AI technology.
In a blog post published on November 14, YouTube Vice Presidents of Product Management Jennifer Flannery O'Connor and Emily Moxley outlined a number of AI policy updates. Mandatory disclosure requirements are the main highlight of the new policy. YouTube said that while AI enables powerful new forms of storytelling, it can also be used to create content that has the potential to mislead viewers, particularly when they are unaware that a video has been altered or synthetically created. "Specifically, we'll require creators to disclose when they've created altered or synthetic content that is realistic, including using AI tools," YouTube said in the blog post.
YouTube also said it would introduce new content labels. When uploading content, creators will have the option to disclose whether it contains realistic altered or synthetic material. YouTube says this is especially important when content relates to sensitive topics such as elections, ongoing conflicts, and public health crises. Notably, creators who fail to follow these policies may face strict consequences, including content removal, suspension from the YouTube Partner Program, or other penalties. These changes will take effect next year.
In addition, through YouTube's new privacy request process, creators and artists will be able to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice. The platform is also deploying AI technology for content moderation: with the help of generative AI, YouTube aims to address such threats at scale.