Telegram has long positioned itself as the last redoubt of digital privacy, a sanctuary for dissidents and the privacy-conscious. That same infrastructure, however, is increasingly being co-opted to facilitate a more intimate and pervasive form of harm. In Spain, a network of Telegram groups has emerged where thousands of users exchange non-consensual imagery, using generative AI to digitally "undress" or otherwise humiliate women who, in many cases, are not public figures but private citizens and micro-influencers with modest online footprints.
The mechanics of this harassment are fueled by the democratization of AI tools. What once required sophisticated technical skill — creating convincing deepfakes or manipulated imagery — now requires little more than a prompt and a photograph. Within these digital enclaves, users request so-called "tributes" or solicit help to alter photos of acquaintances, creating a marketplace of degradation that thrives on the platform's hands-off approach to content moderation.
The architecture of impunity
The phenomenon is not without precedent. Non-consensual intimate imagery — sometimes referred to as "revenge porn" — has been a documented problem since the early days of social media. What has changed is the scale, the ease of production, and the degree of anonymity afforded to perpetrators. Earlier waves of image-based abuse typically required access to real intimate photographs, which at least imposed a limiting friction. Generative AI has removed that constraint entirely. A clothed photograph scraped from a social media profile is now sufficient raw material.
Telegram's role in this ecosystem is structural, not incidental. The platform's architecture — large group capacity, minimal content moderation, end-to-end encryption for private chats, and a permissive stance toward channel creation — makes it a natural host for communities that would be swiftly removed from more heavily moderated platforms such as Meta's services or Reddit. Telegram has historically resisted cooperation with law enforcement and content takedown requests, framing its posture as a principled defense of free expression and user privacy. For the targets of these groups, that principle translates into a near-total absence of recourse.
Spain is not the only country grappling with this problem. South Korea confronted a similar crisis when networks distributing deepfake pornography targeting students and acquaintances were uncovered, prompting legislative action and public outcry. In both cases, the pattern is consistent: generative AI lowers the barrier to producing abusive content, encrypted platforms provide distribution infrastructure, and existing legal frameworks struggle to keep pace.
Platform economics and regulatory friction
This is not merely a failure of oversight; it is a feature of the current platform economy. Telegram's subscription-based model and its tolerance for large, semi-public groups allow the service to sustain engagement — and, by extension, revenue — in environments where abusive content flourishes. The incentive structure does not reward proactive moderation. It rewards growth and retention.
The European Union's Digital Services Act, which imposes content moderation obligations on platforms operating within the bloc, represents one regulatory response. But enforcement against a platform headquartered outside EU jurisdiction and philosophically opposed to moderation mandates remains an open challenge. Spain's own legal framework criminalizes the distribution of non-consensual intimate imagery, yet prosecution depends on identifying anonymous users within encrypted channels — a task that is technically demanding and, under current platform cooperation norms, often impractical.
The tension at the center of this issue is not new, but generative AI has sharpened it considerably. On one side sits the legitimate demand for encrypted communication — a tool that protects journalists, activists, and ordinary citizens from state surveillance. On the other sits the observable reality that the same encryption shields the systematic production and distribution of fabricated intimate imagery targeting people who never consented to any of it. Neither side of that equation is trivial, and neither can be wished away by the other.
What remains unresolved is whether the regulatory instruments now being deployed across Europe can impose meaningful accountability on platforms that treat moderation as antithetical to their identity — or whether the architecture of impunity will continue to outpace the architecture of protection.
With reporting from El País Tecnología.