YouTube has announced that creators will now be required to disclose whether the content in their videos has AI-generated realistic elements. In a blog post, Google’s video streaming platform said this change will give users more transparency about the content they are watching.
YouTube said it is introducing a new tool within its Creator Studio that will require creators to disclose to viewers whether the content includes a person, place, scene or event that has been created or altered using “synthetic media”. However, this requirement does not extend to animation or special effects where the content is clearly unrealistic. Additionally, creators are not required to disclose whether they have used generative AI for production assistance, such as generating scripts, content ideas, automated captions, and more.
For most videos that contain AI-generated realistic individuals, scenes or altered footage of real events and places, YouTube will add a label in the expanded description. However, for videos covering more sensitive topics such as health, news, and elections, a prominent label will be displayed on the video itself.
YouTube said that the labels will start to appear to viewers in the coming weeks. The label will first appear on the smartphone app for both Android and iOS, and then on the desktop and TV versions. The company said it is currently working on enforcement measures for creators who consistently choose not to disclose this information. Additionally, it plans to add labels to videos even if the creator has not disclosed altered content, especially for videos that have the potential to confuse or mislead people.