The rapid integration of artificial intelligence into the search experience has created a curious paradox of adoption without conviction. According to a recent survey of 2,200 U.S. adults conducted by Yelp and Morning Consult, nearly two-thirds of Americans have used an AI-powered search tool in the last six months. However, the enthusiasm for the technology largely stops at the results page: only 15% of respondents say they trust the information provided “a lot.”
This skepticism is rooted in what users describe as a “walled garden” effect. Roughly half of those surveyed feel that AI results are isolated from the broader web, making it difficult to verify their claims. Instead of acting as a portal to information, these models often act as a barrier: 63% of users find themselves double-checking AI-generated answers against traditional news sites and review platforms. This extra step suggests that while AI is convenient, it has yet to become authoritative.
The industry has made significant strides in reducing “hallucinations”—the confident fabrication of facts that plagued early models. Yet, solving the technical glitch has not solved the psychological one. When platforms strip away citations and direct links to the primary sources that inform their answers, they erode the user’s agency. For AI search to move from a novelty to a core utility, product builders must shift their focus from the accuracy of the output to the transparency of the process.
With reporting from *Fast Company*.



