NHTSA escalates review over crashes in low visibility
US auto safety regulators have intensified their scrutiny of Tesla’s Full Self-Driving system, upgrading an existing preliminary investigation to a more serious engineering analysis after reviewing crashes tied to poor visibility conditions. The move increases pressure on Tesla at a time when the company is leaning heavily on autonomy as a central part of its long-term strategy.
The National Highway Traffic Safety Administration is examining whether Tesla’s FSD software may contain safety defects that make it risky to use in situations such as fog, sun glare and other conditions that reduce roadway visibility. According to the agency, the system may at times fail to detect deteriorating visibility or fail to warn the driver early enough when camera performance is affected.
The change in status matters because an engineering analysis is a more serious phase of the regulatory process. It means investigators are moving beyond an initial review and taking a closer look at whether a wider safety problem may exist. For Tesla, that shifts the issue from a series of complaints to a broader question about how one of its most heavily promoted systems performs in real-world driving conditions.
Camera visibility is at the center of regulators’ concern
The agency’s focus is not on a single crash pattern, but on the way Tesla’s system behaves when its visual inputs are degraded. Regulators say that in the collisions they reviewed, the software did not properly recognize common roadway conditions that impaired camera visibility and did not alert the driver until just before the crash occurred.
That issue is especially important because Tesla’s driver-assistance system depends largely on camera-based perception. If those cameras are obstructed by glare, fog or airborne particles, the system may lose some of its ability to correctly interpret the road environment. The investigation is therefore centered on whether Tesla’s warning and detection systems are robust enough to compensate for those limitations before a dangerous situation develops.
The probe now covers 3.2 million Tesla vehicles, including the Model S, Model X, Model 3, Model Y and Cybertruck, all of which can use the company’s FSD-branded driver-assistance technology. The scale of the review shows how central the software has become across Tesla’s lineup and why any adverse regulatory finding could have broad implications.
Crash complaints pushed the case into a more serious stage
The investigation was elevated after a series of complaints involving crashes in which FSD was reportedly active within 30 seconds of impact. Among the incidents reviewed is one in which a Tesla driver using FSD struck and killed a pedestrian. That fatal case appears to have helped push regulators toward a deeper examination of whether the problem is systemic rather than isolated.
The agency’s concern is not simply that crashes occurred while the feature was in use. It is that the software may not have detected visibility-related limitations quickly enough or warned drivers in a meaningful way before the collision became unavoidable. That distinction goes to the core of how Tesla presents the system: as a driver-assistance tool that still requires supervision, but one that is expected to handle increasingly complex tasks.
If regulators conclude that drivers are not being adequately warned when the system’s effectiveness is compromised, that could raise wider questions about whether Tesla’s supervision model functions as intended in road conditions that are common rather than rare or extreme.
Autonomy ambitions now face a broader regulatory test
The timing is significant because Tesla has made autonomy one of the central pillars of its future growth story. The company continues to market FSD as an advanced driver-assistance system while also tying its broader vision to robotaxis and more autonomous transportation services. A deeper federal review therefore goes beyond a technical defect question and touches one of the company’s most important strategic narratives.
An engineering analysis does not automatically lead to a recall or enforcement action, but it does move the case closer to that possibility. Regulators will now examine in greater detail how the software responds to reduced visibility, how warnings are delivered to drivers and whether earlier updates have meaningfully addressed the risks under review.
For Tesla, the outcome could matter well beyond compliance. If the probe identifies broader weaknesses, it could affect public trust in the system and intensify debate over how aggressively the company has positioned its self-driving technology. If Tesla can satisfy regulators that the system’s performance and warnings are adequate, it may limit the damage. For now, however, the escalation leaves the company facing a sharper test of whether its autonomy claims can withstand closer regulatory scrutiny under real-world conditions.