AI Caricature Trend: Privacy Risks & Data Concerns You Should Know

by Rachel Kim – Technology Editor

A growing trend on social media involves users asking AI chatbots to generate personalized caricatures based on their interests. Although seemingly harmless, cybersecurity experts warn of the privacy risks associated with this “gamification” of artificial intelligence.

Claudiu Popa, a certified cybersecurity specialist and privacy professional, and CEO of Data Risk Canada, explained that the increasing ability of AI chatbots to create images has naturally led people to seek personalized versions of themselves. However, he cautioned that even this seemingly legitimate use carries environmental and privacy implications, as well as the risk of data breaches.

“These tools are all sharing information about ourselves, and it’s an appropriation of consent,” Popa told CTVNews.ca. He described the process as an “attention trap,” noting that many users, particularly younger ones, are drawn in by the desire to participate in the latest online trend without fully considering the consequences.

Popa emphasized the privacy concerns, explaining that chatbots are designed to extract as much information as possible from users in an attempt to create accurate or amusing images. He warned against “gamifying our existence” by creating viral tools that primarily benefit for-profit companies. Data brokers, who use personal information gathered online for targeted advertising, are prime beneficiaries of this dynamic.

The AI caricature trend, Popa suggested, provides a valuable teaching moment for parents and educators to discuss how individuals are conditioned to share more personal information online. “You’re creating predictive tools that allow these platforms to understand the types of needs of young people their age,” he said. “It’s a way of collecting that type of information without telling people what you’re doing with it. That’s the crux of the problem: it all comes down to consent.”

Popa’s own experience attempting to participate in the caricature trend revealed a persistent request for more information, including photos and email addresses, to refine the image. “It’s not their role to constantly remind you of the impact of this iterative activity on your privacy,” he stated. “Whether it’s a caricature challenge or an online game, we need to be able to recognize these things and empower everyone to stop them before we find ourselves providing sensitive information.”

A further concern, according to Popa, is the evolution towards “agentive” AI, which grants tools control over sensitive information like emails and bank accounts. “Never accept this type of invasive access, if only because it removes the platform’s accountability,” he warned. He noted that if an individual grants an AI tool access to their online banking and subsequently experiences fraud, they would likely not be covered under the bank’s terms of service, having authorized a third-party agent to use their personal credentials.

Popa also pointed to the tangible environmental consequences of AI usage, specifically the significant water demands of AI data centers, whose needs are often prioritized over those of local residents. “This type of viral activity… doesn’t serve anyone and certainly contributes to the growing bad reputation these AI chatbots are getting,” he said.

Recent developments in AI include the emergence of Moltbook, described as a social network for AI, where 1.4 million AI “agents” are building a digital society, according to Forbes. This development, alongside the increasing prevalence of AI-generated content, including caricatures, highlights a broader shift in how individuals interact with and share information online.
