Deepfakes, AI Phishing & Dark LLMs: Rising Cybercrime Threat
Rise in AI-driven Cybercrime activity – Key Takeaways
* Significant Growth: AI-related cybercrime activity on the dark web has seen a massive increase, with first-time posts referencing AI keywords up 371% between 2019 and 2025. This isn’t a temporary spike, but a sustained trend.
* ChatGPT Impact: The release of ChatGPT in late 2022 significantly accelerated this growth, and interest has remained consistently high since.
* Established Market: By 2025, the dark web shows a stable underground market for AI misuse, with tens of thousands of forum discussions annually.
* LLM Exploitation: 251+ posts explicitly focus on exploiting large language models (LLMs), primarily those based on OpenAI systems.
* AI Crimeware Vendors: A structured economy has emerged with at least three vendors offering self-hosted, unrestricted Dark LLMs for $30-$200/month, some claiming over 1,000 users.
* Impersonation Services Surge: Mentions of deepfake tools for bypassing identity verification have risen 233% year-on-year.
* Deepfake Costs: Synthetic identity kits start at $5, while real-time deepfake platforms range from $1,000-$10,000.
* Deepfake Fraud: One institution recorded 8,065 deepfake-enabled fraud attempts between January and August 2025, resulting in $347 million in verified global losses.
* AI-Assisted Malware: AI is being integrated into malware-as-a-service platforms and remote access tools, including AI-generated phishing.
Taken together, these figures depict a rapidly growing and increasingly sophisticated AI-powered cybercrime landscape.
