In the early days of generative artificial intelligence, the "tells" were structural and often grotesque: six-fingered hands in images or nonsensical citations in text. But as large language models have matured, their idiosyncrasies have become more subtle, migrating from factual errors to stylistic tics. Among the most pervasive of these is a specific rhetorical pivot: "It’s not just [X] — it’s [Y]."
This construction has become a hallmark of the synthetic voice. It functions as a low-stakes transition, providing a veneer of insight and momentum without requiring a complex logical leap. While humans certainly use the phrase, its overrepresentation in AI-generated copy suggests a preference within the models for certain persuasive rhythms found in their training data—likely from marketing materials and mid-tier explanatory journalism.
The emergence of these linguistic fingerprints highlights a growing homogeneity in digital communication. As AI-generated content begins to saturate the web, the models risk entering a feedback loop, training on their own predictable structures. This specific phrase is no longer a mere stylistic choice; it has become a diagnostic tool, a signifier that the prose in front of you was assembled by probability rather than intent.
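For readers curious what such a diagnostic might look like in practice, the pivot can be approximated with a simple pattern match. The regex, function name, threshold-free counting, and sample text below are illustrative assumptions for this article, not an established detection method, and a heuristic like this will produce both false positives and false negatives:

```python
import re

# A rough, illustrative heuristic: flag the "not just X — it's Y"
# pivot and a few close variants. This is a sketch, not a validated
# AI-text detector; real classifiers use far richer features.
PIVOT = re.compile(
    r"\bnot (?:just|only|merely)\b"   # "not just/only/merely X"
    r"[^.;:!?]{0,80}?"                # the X clause (kept short, lazy)
    r"(?:—|--|;|,)\s*"                # the dash/semicolon/comma pivot
    r"(?:it'?s|it is)\b",             # "... it's Y"
    re.IGNORECASE,
)

def count_pivots(text: str) -> int:
    """Count occurrences of the pivot construction in a passage."""
    return len(PIVOT.findall(text))

sample = (
    "This tool is not just fast — it's transformative. "
    "It is not merely a library; it's a movement."
)
print(count_pivots(sample))  # → 2
```

A count like this only becomes meaningful relative to a baseline: the construction is legitimate English, so the signal lies in its frequency per thousand words compared with human-written prose of the same genre.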
With reporting from TechCrunch.



