The narrative of Tesla’s ascent has long been tethered to the promise of Full Self-Driving (FSD), a software suite marketed as the harbinger of a post-accident era. However, a recent investigative report from Swiss public broadcaster RTS suggests that this vision may have been sustained by the systematic suppression of failure. The report alleges that Tesla concealed thousands of incidents, including fatal accidents, to forestall regulatory intervention and keep testing its autonomous systems on public roads.
The implications of such a data gap are profound. For autonomous vehicles to move from experimental novelties to reliable infrastructure, the feedback loop between failure and refinement must remain unbroken. If the allegations are accurate, Tesla prioritized rapid iteration over the transparency that public safety requires. By obscuring the frequency and severity of crashes, the company may have bypassed the very guardrails designed to evaluate whether its AI is truly fit for the complexities of human-dominated traffic.
This controversy arrives at a critical juncture for the industry, as the boundary between "driver-assist" and "self-driving" remains dangerously blurred. For regulators and the public alike, the question is no longer just about when the technology will be ready, but how much risk is being offloaded onto unsuspecting drivers in the interim. As the gap between corporate claims and road realities widens, the cost of innovation is being recalculated in increasingly human terms.
With reporting from RTS.