Since the public debut of ChatGPT in late 2022, large language models have undergone a rapid, illicit repurposing. What began as a curiosity for generating human-like prose has become essential infrastructure for cybercriminals. Using generative AI, bad actors have moved beyond the clumsy, grammatically suspect phishing emails of the past, replacing them with sophisticated, targeted campaigns and hyperrealistic deepfakes designed to slip past even the most skeptical human filters.
The impact of AI on the digital underworld is less about a single breakthrough and more about the industrialization of the entire exploit lifecycle. Criminals now deploy AI to obfuscate malware code, making it far harder for traditional security software to detect, and to automate the tedious work of scanning networks for vulnerabilities. Once inside a system, AI tools can rapidly sift through vast troves of stolen data to identify the most lucrative assets, effectively turning a messy data breach into a streamlined extraction process.
This technological shift has significantly lowered the barrier to entry for aspiring attackers. Interpol has already raised alarms regarding scam centers in Southeast Asia that use inexpensive AI tools to manage high volumes of victims and pivot their operations with unprecedented speed. As these tools become cheaper and more accessible, the threat landscape is shifting from artisanal, manual attacks to a persistent, automated siege, challenging the systems designed to keep the digital world secure.
With reporting from MIT Technology Review.


