Google is shifting its defensive strategy in the ongoing war against digital fraud, integrating its Gemini AI model to identify and neutralize malicious advertisements. The move comes as bad actors increasingly use the Google Ads ecosystem to impersonate legitimate brands, leveraging sophisticated phishing schemes to distribute malware and harvest user data. By applying the contextual reasoning of a large language model, Google aims to catch subtle deceptive patterns that traditional automated systems often miss.
The scale of the intervention is vast. According to recent disclosures, the company's safety measures have already removed approximately 8.3 billion malicious ads. Google also suspended nearly 25 million advertiser accounts and blocked 602 million ads specifically linked to fraud attempts during its most recent reporting cycle. This algorithmic offensive is a direct response to attackers who purchase ad space to redirect unsuspecting users to spoofed URLs that mimic trusted services.
As generative AI lowers the barrier for criminals to create convincing fraudulent content, defenses must evolve in kind. The deployment of Gemini represents a shift toward "AI vs. AI" security, where the platform's integrity depends on its ability to outpace the creative tactics of attackers. For Google, the stakes are as much about trust as they are about safety, as the company seeks to keep the commercial core of its business from becoming a primary vector for cybercrime.
With reporting from Canaltech.