Last summer, AI systems from Google and OpenAI demonstrated near-mastery of abstract reasoning, correctly solving five of the six problems at the International Mathematical Olympiad. For the world's elite high school mathematicians, such a feat represents the pinnacle of cognitive training. Yet just months later, a software engineer in Sri Lanka, Anuradha Weeraman, exposed the fragility of that brilliance. When she asked several advanced chatbots whether she should walk or drive her car to a mechanic located only 50 meters away, they uniformly suggested walking, overlooking the premise of the question: the broken car itself had to reach the shop.
This dissonance—the ability to navigate high-level mathematics while failing at basic situational logic—is increasingly described by researchers and engineers as "jagged intelligence." Unlike human development, which typically scales across a broad front of related skills, AI capabilities advance in sharp, unpredictable peaks. In domains like computer programming and theoretical math, where rules are rigid and data is abundant, the models are peerless. In the messy, context-heavy realm of "common sense," they remain remarkably hollow.
The "jagged frontier" of AI suggests that we are not yet dealing with a general intelligence, but rather a sophisticated pattern-matcher that lacks a coherent world model. For users and developers, this creates a deceptive experience: the more "human" a chatbot sounds when discussing poetry or physics, the more jarring its failures in practical reasoning become. As we integrate these systems into our infrastructure, the primary challenge lies in identifying exactly where the peaks end and the valleys begin.
With reporting from La Nación.