The corporate world is currently fixated on the "North Star" of human redundancy. From IBM’s projected replacement of 7,800 roles to Klarna’s aggressive downsizing in favor of AI assistants, the trajectory has a cold, compelling internal logic. By automating routine cognitive work and thinning out junior positions, boards hope to see productivity gains compound year over year. In this view, AI is the internal combustion engine of the 21st century—an inevitable force that makes the previous era’s labor models look like the horse and buggy.
Yet, as we outsource the "cognitive load," we may be inadvertently flattening the landscape of human output. A 2024 study in the UK involving 300 writers highlights this tension. When asked to produce short fiction, those aided by GPT-4 were judged to be more creative on average than those working alone. At first glance, this seems to validate the machine’s role as a creative multiplier, a tool that elevates the floor of human performance.
The risk, however, lies in the ceiling. This "moral momentum" for automation assumes that intelligence is a solo performance that can be upgraded with a better processor. But if every writer, coder, and strategist uses the same underlying models to optimize their work, the collective result is a regression toward a polished, algorithmic mean. We may be gaining efficiency at the cost of the very outliers and idiosyncratic perspectives that define true innovation.
With reporting from 3 Quarks Daily.