Children’s cartoons get a free pass from YouTube’s deepfake disclosure rules

YouTube is updating its rulebook for the era of deepfakes. Starting today, anyone uploading a video to the platform must disclose certain uses of synthetic media, including generative artificial intelligence, so viewers know that what they are seeing is not real. YouTube says the rule applies to “realistic” altered media, such as “making it look like a real building is on fire” or “swapping one person’s face with another’s.”

The new policy shows YouTube taking steps to curb the spread of AI-generated misinformation as the U.S. presidential election approaches. Equally striking is what the policy permits: AI-generated animation aimed at children is exempt from the new synthetic-content disclosure rules.

YouTube’s new policy exempts animated content entirely from the disclosure requirements. This means the emerging wave of get-rich-quick creators churning out AI-generated videos can continue to target children without revealing their methods. Parents concerned about the quality of hastily produced nursery-rhyme videos will have to identify AI-generated cartoons on their own.

YouTube’s new policy also says creators don’t need to flag the use of AI for “minor” edits that are “primarily aesthetic,” such as beauty filters or cleaning up video and audio. Using artificial intelligence to “generate or improve” scripts or subtitles is likewise permitted without disclosure.

There’s no shortage of low-quality content on YouTube produced without AI, but generative AI tools speed up video production by lowering the barrier to entry. YouTube parent company Google recently said it was adjusting its search algorithms to eliminate the recent spate of AI-generated clickbait enabled by tools like ChatGPT. Video generation technology is less mature but is improving rapidly.

Longstanding issues

YouTube is a children’s entertainment giant that dwarfs rivals like Netflix and Disney. The platform has struggled in the past to moderate the vast amount of content aimed at children, and it has been criticized for hosting videos that appear child-friendly on the surface but on closer inspection contain disturbing themes.

WIRED recently reported on the rise of YouTube channels aimed at children that appear to use AI video generation tools to create shoddy videos featuring generic 3D animation and off-kilter renditions of popular nursery rhymes.

The exemption for animation in YouTube’s new policy could mean that parents of children who watch popular, carefully vetted channels like PBS Kids or Ms. Rachel won’t be able to easily filter such videos from search results, or stop YouTube’s recommendation algorithm from automatically playing AI-generated cartoons.

Under the new rules, some problematic AI-generated content aimed at children does need to be flagged. In 2023, the BBC investigated a series of videos targeting older children that used AI tools to promote pseudoscience and conspiracy theories, including climate change denial. These videos mimicked traditional live-action educational videos (for example, showing the real pyramids of Giza), so unsuspecting viewers might mistake them for factually accurate educational content. (The pyramid video went on to suggest the structures could generate electricity.) The new policy should crack down on videos like these.

“We require children’s content creators to disclose content that has been meaningfully altered or synthetically generated to appear realistic,” YouTube spokesperson Elena Hernandez said. “We do not require disclosure of content that is clearly unrealistic and does not mislead viewers into believing it is real.”

YouTube Kids, a dedicated children’s app, is curated using a combination of automated filters, human review, and user feedback to surface well-made children’s content. But many parents simply use the main YouTube app to find videos for their children, relying on titles, channel listings, and thumbnails to judge what is appropriate.

So far, most of the apparently AI-generated children’s content WIRED has found on YouTube has been shoddy in ways similar to more traditional low-effort children’s animation: ugly visuals, incoherent plots, and zero educational value. But it is not uniquely ugly, incoherent, or pedagogically worthless.

AI tools make it easier to produce this type of content, and at greater volume. WIRED found some channels uploading lengthy videos, some running more than an hour. Requiring labels on AI-generated children’s content could help parents filter out cartoons that may have been published with little or no human review.
