There’s more to this murky situation. Back in 2019, researchers demonstrated how adversarial inputs, which could be something as simple as stickers placed on the road, could trick a Tesla into steering a Model S toward oncoming traffic. Notably, a human driver can plainly see that the stickers aren’t real lane markings, but the camera-driven system on a Tesla can misread them as a valid lane and follow them.
Autopilot’s reliability caveats around lane identification put the onus on users in two ways: first, they need to keep the camera sensors clean; second, if the cameras fail, they must quickly heed the warnings and take control of the car based on their own awareness of the road. There is little in the way of a proactive fallback.
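To make that reactive handoff pattern concrete, here is a minimal sketch in Python. Everything in it is hypothetical, from the state names to the confidence threshold, since Tesla’s control software is not public; it simply models a policy in which the only fallback for a degraded camera is the human driver.

```python
from enum import Enum, auto

class AutopilotState(Enum):
    ENGAGED = auto()
    WARNING = auto()          # driver alerted, system still steering
    DRIVER_CONTROL = auto()   # system disengaged, human must drive

# Hypothetical threshold: below this, lane detection is treated as unreliable.
MIN_LANE_CONFIDENCE = 0.6

def step(state: AutopilotState, lane_confidence: float,
         driver_hands_on_wheel: bool) -> AutopilotState:
    """One control-loop tick of a purely reactive handoff policy.

    There is no secondary sensing path here: once the camera's lane
    confidence degrades, the only 'fallback' is the human driver.
    """
    if state is AutopilotState.ENGAGED and lane_confidence < MIN_LANE_CONFIDENCE:
        return AutopilotState.WARNING       # e.g. chimes and a dash alert
    if state is AutopilotState.WARNING:
        if driver_hands_on_wheel:
            return AutopilotState.DRIVER_CONTROL
        if lane_confidence >= MIN_LANE_CONFIDENCE:
            return AutopilotState.ENGAGED   # camera recovered, resume
    return state

state = AutopilotState.ENGAGED
state = step(state, lane_confidence=0.3, driver_hands_on_wheel=False)
print(state)  # AutopilotState.WARNING: the burden now shifts to the driver
```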
For example, some carmakers rely on geofencing: their cars engage an automated driving system only on certain pre-mapped roads, verified via GPS. Tesla’s support documents mention no such GPS-linked safeguard for scenarios where lane markings are tricky to process, even though satellite positioning paired with map data could serve as a secondary reference for safe navigation.
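For contrast, here is a minimal sketch of how such a geofence gate can work, again in Python with entirely hypothetical coordinates and helper names: engagement is permitted only when the GPS fix lands inside a pre-approved, mapped road segment, independent of what the camera sees.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical allowlist of mapped highway segments: a centerline point
# (lat, lon) plus a radius in meters within which ADAS may engage.
APPROVED_SEGMENTS = [
    (37.4910, -121.9440, 250.0),
    (37.5030, -121.9390, 250.0),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def may_engage(gps_lat: float, gps_lon: float) -> bool:
    """Geofence gate: allow engagement only on pre-mapped segments,
    regardless of how confident the camera is about lane markings."""
    return any(
        haversine_m(gps_lat, gps_lon, lat, lon) <= radius
        for lat, lon, radius in APPROVED_SEGMENTS
    )

print(may_engage(37.4911, -121.9441))  # True: on a mapped segment
print(may_engage(36.0000, -120.0000))  # False: off the map, system stays off
```

The point of the design is that the location check runs before, and independently of, the vision stack, so a confused camera on an unmapped road never gets the chance to steer.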
Tesla, on the other hand, has repeatedly avoided legal heat by pointing to the explicit warnings it publishes about the limitations of its Autopilot and Full Self-Driving systems. The automaker isn’t technically wrong: it markets its ADAS tech at SAE Level 2 autonomy, which strictly specifies that the driver must remain ready to take control of the vehicle at all times, irrespective of the road ahead and the situation around them.