ChatGPT Security Risk: Google Calendar Invite Hack Reveals Email Leak Vulnerability

by Rachel Kim – Technology Editor

ChatGPT Vulnerability: Malicious Calendar Invites‌ Can Hijack ​Gmail‌ Connector, ⁢Perhaps Exposing Email Data

SAN FRANCISCO – ‌ A⁢ security flaw allows attackers to potentially hijack ChatGPT’s Gmail connector through compromised Google ​Calendar invites, researchers have discovered. The​ vulnerability, known as indirect prompt⁤ injection, enables malicious instructions hidden within calendar event details to ‌influence chatgpt’s behavior, potentially leading to data leaks and unauthorized actions.

The issue‍ arises when ChatGPT is connected to‌ a user’s Gmail and Calendar accounts. If ⁤a user accepts a malicious ⁤calendar invitation, the embedded ​instructions can be executed by ChatGPT when processing calendar information. In August, researchers demonstrated the risk, showing how​ a ⁤compromised invite could ⁢be used to⁤ control smart-home ⁤devices and extract sensitive ⁤information ‌using Google’s ​Gemini assistant. This work is detailed in the⁣ paper “invitation ‍Is All You need” and ‍subsequent ‍security analyses.

While the vulnerability depends on ‍users​ connecting their Gmail and Calendar ‌to ChatGPT, and is ⁤mitigated by OpenAI’s content policies, ‍the core risk remains:‌ any assistant permitted to read compromised calendar content is susceptible.

OpenAI‌ documentation notes users can disconnect data sources ⁢or ⁤disable automatic use to limit the potential impact‌ of malicious events. Though, the most effective mitigation currently lies with Google. Users can adjust Google Calendar settings to automatically add only invitations from known senders or those thay explicitly accept, and hide declined events. Google Workspace administrators ‌can also implement⁤ safer default settings institution-wide.

Security experts emphasize this isn’t a breach ⁣of ChatGPT or Gmail​ itself, but rather a consequence of the ⁢expanded attack surface created by AI tools accessing external data. ‌The connectors⁣ that enhance AI assistant functionality ⁣also⁤ introduce new avenues for exploitation. Until stronger default⁢ defenses against indirect prompt injection ⁢are implemented, users are advised to be cautious about connecting accounts and ⁤to⁣ secure their calendars against ‍unwanted invitations.

You may also like

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.