Now Norwegians can also make AI videos with Google

Google’s Veo 3 AI Video Generator Debuts in Norway Amid Controversy

The Veo 3 AI video generator from **Google** is now available in Norway for paying subscribers, yet it faces criticism for allegedly fueling the spread of racist content online, highlighting the challenges of AI content moderation.

Veo 3 Capabilities and Limitations

**Google** Norway is offering the Veo 3 model to Gemini AI Pro subscribers, enabling them to produce eight-second video clips at 720p resolution with a 16:9 aspect ratio. A notable enhancement in version three is the generation of sound, including speech. Currently, users are limited to three videos per day.
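The article describes the consumer Gemini app, but Veo can also be reached programmatically. The sketch below is a minimal illustration of how a video request could look with Google’s google-genai Python SDK; it is not Google’s documented workflow for the Norwegian AI Pro offering, and the model identifier, configuration fields, and polling interval are assumptions that may differ from the current API.

```python
import time

# Assumes the google-genai SDK (pip install google-genai) and an API key
# available to the client (e.g. via the GEMINI_API_KEY environment variable).
from google import genai
from google.genai import types

client = genai.Client()

# The model name below is an assumption; check Google's model list for the
# current Veo identifier available to your account.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",
    prompt="A robot lawnmower trimming a garden at sunset",
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Video generation runs as a long-running operation, so poll until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the first generated clip.
video = operation.response.generated_videos[0].video
client.files.download(file=video)
video.save("lawnmower_sunset.mp4")
```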

Initial user tests revealed inconsistent results. A video of a robot lawnmower at sunset, for example, was reasonably credible, while others, such as friends playing a board game, were easily identifiable as AI-generated because of unnatural interactions. A sand volleyball clip was poor enough to speak for itself. More elaborate prompts, such as four Tek nerds battling a dragon, also yielded unexpected results, with the dragon appearing in the background instead of being the focus.

Another experiment involved transforming a robot vacuum cleaner into headphones, which also produced mixed results.

Concerns Over Hateful Content

The release of Veo 3 coincides with mounting concerns about its potential misuse. Reports indicate the tool is contributing to a surge of racist videos on TikTok. According to Statista, TikTok removed 176.2 million videos globally in the first quarter of 2024 for violating community guidelines, illustrating the scale of the content moderation challenge.

Media Matters for America, an organization focused on combating misinformation, has identified numerous videos, seemingly generated with Veo 3, that promote hateful stereotypes. These videos, often eight seconds long and marked with the Veo identifier, target black Americans and other minority groups, sometimes depicting dehumanizing or violent scenarios. Disturbingly, one video portraying police brutality has amassed over 14 million views, while others recreate historical traumas such as concentration camps and Ku Klux Klan attacks.

Google’s Response

**Google** acknowledges these concerns and says it built Veo with responsibility and safety in mind. **Google’s** Veo website states: “We block harmful requests and results, we test how new features can affect safety, and we have both our own teams and external experts who try to find and fix potential problems before release.”

**Sondre Ronander**, **Google** Norway’s PR manager, stated in an email to Tek.no: “We have safety rules to stop videos that do not follow our Generative AI guidelines, while at the same time creating tools that try to do what the user is asking for. This means that they can create content that some people find offensive, if the user has asked for it.” **Ronander** also emphasized: “We have clear rules for what kind of content we allow and do not allow. We are also building protective mechanisms to prevent abuse, and we are constantly developing new methods to make our services secure.”

**Google** also intends to integrate Veo 3 technology directly into YouTube Shorts, as announced at the Cannes festival.
