Bollywood AI Videos Disappear From YouTube Following Reuters Investigation
New Delhi – Scores of deepfake videos featuring Bollywood stars have been removed from YouTube after a Reuters story highlighted the proliferation of AI-generated content exploiting the likenesses of Indian actors without their consent. The takedowns began shortly after Reuters published its findings on February 29, 2024, revealing a surge in unauthorized AI videos on the platform.
The removal of these videos underscores growing concerns about the misuse of artificial intelligence and the lack of robust safeguards to protect individuals from digital impersonation. The issue is especially acute in India's prolific film industry, where celebrity images are widely circulated and easily exploited. The videos, often sexually suggestive or depicting fabricated endorsements, raised alarm among actors and industry stakeholders, prompting calls for stricter regulations and greater platform accountability.
Reuters identified at least 83 deepfake videos featuring prominent Bollywood actors, which had garnered millions of views before being flagged for removal. The videos used AI technology to convincingly portray actors in scenarios they never participated in, raising legal and ethical questions about consent, defamation, and intellectual property rights.
Aditya Kalra, Company News Editor for Reuters in India, led the reporting, focusing on the business strategies of major companies affected by the issue. Arpan Chaturvedi, a Reuters correspondent based in New Delhi, contributed to the coverage by reporting on the legal ramifications and court cases related to the unauthorized use of celebrity likenesses.
YouTube confirmed to Reuters that it removed the videos for violating its policies prohibiting misleading or deceptive content. The platform said it is continuously refining its detection systems and working with industry partners to address the evolving threat of deepfakes. However, experts warn that the rapid advancement of AI technology demands a more proactive and comprehensive approach to combat the spread of unauthorized AI-generated content.