The most significant barrier to innovation is rarely a lack of talent; it is the friction of the blank page. For students, entrepreneurs, and mid-career professionals, the transition from a vague concept to a concrete project plan has long been a notorious bottleneck — one characterized less by an absence of good ideas than by the paralysis that accompanies too many possible directions and no obvious first step.
Large language models like ChatGPT are increasingly being deployed as a remedy for this "cold start" problem. Rather than staring at an empty document, users can input specific parameters — goals, constraints, target audiences, timelines — and receive a structured spectrum of viable starting points within minutes. The practice is gaining traction across corporate innovation teams, university programs, and freelance workflows alike, turning what was once an open-ended creative struggle into something closer to a curation exercise.
From Generator to Curator
The shift carries implications that extend well beyond convenience. For decades, the dominant model of professional creativity placed the individual at the center of ideation: the strategist who conjures the concept, the designer who sketches the first wireframe, the writer who drafts the opening line. Generative AI disrupts that sequence not by replacing the professional but by compressing the earliest, least structured phase of the work.
When a product manager can prompt a model to generate fifteen potential feature concepts for a given user segment, the cognitive task changes. It moves from divergent thinking under uncertainty — historically the most energy-intensive phase — to convergent evaluation against known criteria. The professional still decides what is worth pursuing. But the raw material arrives faster, and in greater volume, than any single mind could produce unassisted.
This pattern has historical parallels. The introduction of spreadsheet software in the early 1980s did not eliminate the need for financial analysts; it eliminated the hours spent on manual arithmetic and allowed analysts to spend more time on interpretation and scenario modeling. Search engines did not replace researchers; they collapsed the time between a question and the universe of available answers. In each case, a bottleneck was removed, and the human role migrated toward higher-order judgment. Generative AI appears to be doing the same for the ideation layer of knowledge work.
The Risks of Frictionless Starting Points
The convenience, however, introduces its own set of tensions. When the cost of generating an idea approaches zero, the risk of shallow execution rises. A brainstorming session that yields twenty project concepts in five minutes may create an illusion of progress while masking the deeper analytical work required to determine which concepts are genuinely viable. Speed of ideation is not the same as quality of ideation.
There is also the question of homogeneity. Large language models are trained on broad corpora of existing text, which means their outputs tend to reflect dominant patterns in the data. If thousands of professionals use the same tool with similar prompts, the resulting project ideas may converge toward a narrow band of conventional thinking — precisely the opposite of what innovation demands. The curation role, then, is not merely about selecting the best AI-generated option. It requires the professional to recognize when the model's suggestions are derivative and to push beyond them.
For educational settings, the dynamic is particularly delicate. Students using AI to overcome the blank page may develop stronger project-scoping skills, or they may skip the formative struggle that builds creative confidence in the first place. Institutions are still working out where to draw the line between productive assistance and intellectual outsourcing.
What emerges is not a simple story of efficiency gains. The blank page had a function: it forced confrontation with ambiguity, rewarded original thinking, and served as a filter for commitment. Generative AI removes that friction — but whether what replaces it is a net improvement depends entirely on how rigorously the human on the other side of the screen exercises judgment. The tool has changed. The burden of discernment has not.
With reporting from Exame Inovação.