For decades, the science of learning has quietly accumulated a wealth of knowledge on how students absorb information and master new skills. Cognitive psychology, neuroscience, and educational research have produced robust findings on spaced repetition, retrieval practice, interleaving, and feedback loops — techniques that demonstrably improve retention and comprehension. Yet there remains a persistent friction between the findings published in academic journals and the practical tools that actually reach the classroom. This disconnect often leaves teachers stranded between outdated pedagogical methods and products marketed as "innovative" that lack a rigorous evidentiary basis.
The challenge, as Sandra Liu Huang, president of Learning Commons, frames it, is primarily one of synthesis. Research is inherently incremental and dense, requiring years of meta-analysis to distill into actionable strategies. The system as it stands places an impossible demand on educators: to act as full-time researchers, continuously reviewing academic literature while simultaneously managing the real-time, individualized needs of a diverse classroom. The result is a market flooded with edtech products whose claims of efficacy often rest on little more than testimonials and pilot studies of questionable design.
The translation problem in education technology
The gap between research and practice is not unique to education. Medicine spent decades building the infrastructure of evidence-based practice — randomized controlled trials, systematic reviews, clinical guidelines — before it became standard for physicians to rely on institutional knowledge systems rather than individual literature review. Education has no comparable pipeline. Academic papers on learning science are written for other academics. Product developers, meanwhile, operate under commercial pressures that reward speed to market and feature novelty over pedagogical rigor.
This structural misalignment means that even well-intentioned developers may cherry-pick studies that support a predetermined product vision rather than designing around the weight of evidence. Teachers, who are the end users, have limited time and few reliable signals to distinguish between a tool built on solid research and one dressed in the language of science without the substance. The What Works Clearinghouse, established by the U.S. Department of Education to review educational interventions, represents one attempt to provide such signals, but its reviews are slow and cover only a fraction of the products on the market.
The problem compounds at scale. School districts making procurement decisions often rely on vendor presentations and peer recommendations rather than independent evidence reviews. Without a shared standard for what constitutes sufficient evidence, purchasing decisions default to marketing effectiveness.
Toward an infrastructure of evidence in product design
Huang's argument points toward a different model: embedding learning science into the product development cycle itself, rather than treating it as an afterthought or a marketing claim. This would mean involving researchers at the design stage, building products around instructional principles with established empirical support, and committing to ongoing measurement of learning outcomes in real classroom conditions.
Such an approach has precedents in adjacent fields. In healthcare, the concept of "translational research" — moving findings from bench to bedside — required new institutions, funding mechanisms, and professional roles to bridge the gap between laboratory discovery and clinical application. Education may need its own version: intermediary organizations, shared evidence standards, and incentive structures that reward developers for demonstrating genuine efficacy rather than simply claiming it.
The tension, however, is real. Rigorous evidence takes time to generate, and the edtech market moves fast. Startups face pressure from investors to grow quickly; districts face pressure from parents and policymakers to adopt the latest tools. A system that demands higher evidentiary standards risks slowing adoption of genuinely useful innovations alongside the ineffective ones.
The question facing the sector is whether it can build the connective tissue between research and product design without creating bureaucratic bottlenecks that stifle experimentation. The stakes extend beyond any single product or classroom. If learning science remains locked in journals while classrooms fill with tools built on intuition and hype, the cost is borne by students — particularly those in under-resourced schools least equipped to recover from ineffective interventions.
With reporting from Fast Company.