For decades, the field of robotics suffered from a persistent gap between ambition and utility. While researchers envisioned machines capable of navigating the complexities of human environments—aiding the elderly or performing hazardous tasks—the reality was often confined to the repetitive precision of auto-plant assembly lines. We aimed for the versatile androids of science fiction and arrived, instead, at the Roomba. This history of over-promising and under-delivering left venture capital wary of the sector for years.

That hesitation has evaporated. In 2025, investors poured $6.1 billion into humanoid robotics, a fourfold increase over the previous year. This influx of capital isn’t driven by a sudden improvement in hardware, but by a fundamental shift in how machines are taught to interact with the physical world. The industry is moving away from the brittle, rule-based programming of the past toward the same large-scale learning models that have revolutionized digital intelligence.

The traditional approach to a task as simple as folding a shirt required an exhaustive list of instructions: calculating fabric deformation, identifying collars, and adjusting for every possible rotation. This "if-then" logic fails the moment it encounters the unpredictability of a real home. By treating physical movement as a data problem rather than a geometric one, roboticists are finally building systems that can generalize, turning the static tools of the factory into the adaptable agents of the future.
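The contrast can be sketched in a few lines of code. This is a toy illustration, not any robot's actual control stack: `rule_based_fold` and `LearnedPolicy` are hypothetical names, and the "learned" model is a stand-in for a neural network trained on demonstration data.

```python
def rule_based_fold(shirt):
    """Hand-coded 'if-then' logic: handles only the cases its author listed."""
    if shirt.get("collar") == "crew" and shirt.get("rotation_deg", 0) == 0:
        return "fold"
    # Any unanticipated collar or rotation breaks the pipeline.
    raise ValueError("unhandled shirt configuration")


class LearnedPolicy:
    """Stand-in for a large learned model that maps observations to actions.

    A real system would be a network trained on thousands of demonstrations;
    here it simply returns an action for any input to show the interface.
    """

    def act(self, observation):
        # A trained policy generalizes across configurations rather than
        # failing on inputs its programmer never enumerated.
        return "fold"


policy = LearnedPolicy()
print(policy.act({"collar": "v-neck", "rotation_deg": 37}))  # → fold
```

The rule-based version raises an error on a rotated v-neck shirt; the learned policy treats it as just another observation. That, in miniature, is the shift from geometric rules to data-driven generalization the article describes.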

With reporting from MIT Technology Review.