Earlier this month, news broke about how security researchers used a drone to fly over a Tesla and automatically hack into its infotainment system. Although the framing is clearly click-bait, there is a good deal of legitimate concern about this type of hack. For example, does it allow a bad actor to remotely take over any Tesla? Could it threaten vehicles from other manufacturers? What lessons can Tesla and other automakers learn from this? We’ve digested the security whitepaper and YouTube video describing the low-level details of the hack and have put together an FAQ to answer some of these questions.
What’s the hack called?
Researchers Ralf-Philipp Weinmann and Benedikt Schmotzle officially call their hack T-BONE, which seems an appropriate name given that a T-bone is a type of side-impact crash that is often fatal.
What models did the hack affect?
The hack affected all models of Tesla at the time of its detection: the Model 3, Model S, and Model X.
How long does the hack take to execute?
A successful exploit typically takes between 30 and 45 seconds, although in “bad luck” cases it could take up to 3 minutes.
Is this hack some sort of anti-Tesla propaganda?
Not at all – it was Tesla themselves who submitted a Model 3 to the PWN2OWN 2020 competition, thereby opening themselves up to the hack. PWN2OWN is an event where hackers compete to take over products donated by participating companies. A successful exploit demonstration wins the hackers the product (such as a car), while the manufacturer gains insight into its cybersecurity vulnerabilities.
The pandemic ended up cancelling the automotive portion of the PWN2OWN 2020 event; however, the researchers persevered and developed a working exploit. They informed Tesla through their bug bounty program so Tesla could fix the problem before any details were released to the public – which Tesla did. Far from exposing weakness in their cars, Tesla’s involvement with these types of events shows they take security seriously.
Can the hack take over driving or steering functions?
By itself, no. It only allows access to functions that can be controlled by the infotainment system, such as opening doors, changing seat position, using the climate control system, messing with the audio system, or changing the vehicle modes.
However, the researchers believe that with more time they probably could have reached the driving features as well. The approach would be to chain a second hack using weaknesses elsewhere in the system, allowing them to send arbitrary vehicle bus messages or take over another vehicle module via the infotainment system.
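Tesla’s internal network architecture isn’t public, but as a purely hypothetical illustration of why that chaining step matters: on a generic embedded Linux system that exposes a SocketCAN interface (assumed here to be named can0), sending an arbitrary vehicle bus frame takes only a few system calls. Keeping the infotainment system isolated from anything with this kind of bus access is exactly what prevents a hack like T-BONE from escalating.

```c
/* Hypothetical illustration only: sending one arbitrary CAN frame from a
 * generic embedded Linux system via SocketCAN. The interface name "can0"
 * and the frame contents are made up; nothing here reflects Tesla's
 * actual gateway design, which is intended to block exactly this. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);           /* raw CAN socket */
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr = {0};
    strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);          /* hypothetical bus interface */
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr = { .can_family = AF_CAN,
                                 .can_ifindex = ifr.ifr_ifindex };
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }

    /* Made-up ID and payload -- any process with bus access can do this. */
    struct can_frame frame = { .can_id = 0x123, .can_dlc = 2,
                               .data = { 0xDE, 0xAD } };
    if (write(s, &frame, sizeof(frame)) != sizeof(frame)) {
        perror("write"); return 1;
    }

    close(s);
    return 0;
}
```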
Does the hack affect non-Tesla vehicles?
It’s almost certain that portions of the Tesla drone hack affect vehicles from other OEMs.
Two of the vulnerabilities used to break into the infotainment system were within a piece of software called ConnMan (or connection manager), which is used to manage network connections. ConnMan is an open-source module contributed to and maintained by Intel. Because it is lightweight and easy to modify for different connectivity options, it’s widely used in IoT and embedded platforms.
According to the researchers, there are other OEMs that use ConnMan. They contacted a number of them to make them aware of the problem, and although the specific list of OEMs was not shared publicly, they are quoted in Forbes as saying “half of the industry uses ConnMan”. As it turns out, ConnMan is also a default component in automotive reference implementations such as AGL’s unified code base (UCB) and the GENIVI reference architecture.
Is ConnMan unsafe to use?
The vulnerabilities found in ConnMan during the T-BONE hack have been patched. However, the researchers’ assessment is that other vulnerabilities may remain in the code, and they recommend using a safer alternative. (As part of their response to this hack, Tesla replaced ConnMan with dnsmasq.)
How exactly did the hackers get in?
As with most hacks, the Tesla drone hack was a multi-part exploit. The hackers took advantage of the fact that Tesla vehicles automatically connect to a service WiFi network with the SSID “Tesla Service”. The credentials for that network had been posted online by various people, so the hackers could broadcast their own access point with the same SSID and credentials from a drone flying overhead, and the vehicle would connect to it.
Once the car connected, they used a vulnerability in ConnMan’s DHCP component to read arbitrary regions of the stack, which leaked the addresses of the functions they needed. They then used a separate vulnerability in the DNS component to deliver a small piece of malicious code. That code disabled the infotainment system’s firewall and started a mini-server that could download and run a larger executable. With this mechanism in place as a proof of concept, the hackers downloaded specially crafted software that used Tesla’s infotainment system to open the doors.
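To give a feel for the underlying bug class – this is a minimal hypothetical sketch, not ConnMan’s actual code – the DNS flaw boils down to copying attacker-controlled reply data into a fixed-size buffer on the stack without checking its length. The DHCP information leak is what made the overflow practical, because it handed the hackers the addresses that the randomization described later in this article would otherwise have hidden.

```c
/* Minimal sketch of the bug class only -- NOT ConnMan's real code.
 * A fixed-size stack buffer receives attacker-controlled data with no
 * length check, so an oversized DNS reply overwrites adjacent stack
 * memory (locals, the canary, and eventually the return address). */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

int handle_dns_reply(const uint8_t *reply, size_t reply_len)
{
    char name[64];                      /* fixed-size buffer on the stack */

    /* BUG: reply_len comes straight from the network and can exceed 64. */
    memcpy(name, reply, reply_len);     /* classic stack buffer overflow  */

    (void)name;                         /* ... parse and use name ...     */
    return 0;
}
```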
Does the hack require a drone?
No – the drone primarily provided a bit of splash, and the researchers admit it was done for the cool factor. However, they also suggest that a drone hovering over a Tesla supercharger station could provide a target-rich environment, and if it flies high enough it could go undetected.
A similar method that would probably be more practical is to operate the Tesla drone hack from a laptop in a nearby car. The target car could be stationary or in motion, so long as the attacking vehicle stays within WiFi range for at least 45 seconds. The researchers also note that, with some limitations, a version of this attack should have been possible over a cellular connection.
Does the hack work on any version of Tesla software?
The hack must be fine-tuned and hand-modified by the hacker for each specific software version. It was tested on Tesla software versions 2020.4.1 (the PWN2OWN 2020 firmware version provided by Tesla) and 2018.42.3 (from a salvaged infotainment unit the researchers acquired off eBay).
Since Tesla doesn’t automatically push OTA updates but relies on user-initiated downloads, multiple versions of Tesla software are in the field at any one time. This means that the Tesla drone hack cannot be used to exploit any arbitrary vehicle. A hacker would need to obtain firmware for the software version that’s running in an intended target vehicle, examine it for the specific addresses required in the hack, and hand-modify the attack scripts. Additionally, this would only work on earlier versions of the software since Tesla software releases after October 2020 have been patched to remove these vulnerabilities.
What poor security practices does this hack reveal?
A successful hack always reveals security practices that need improvement. A few things come to mind with the Tesla drone attack.
- The connection to a fixed SSID for service diagnostics was automatic. To increase security, this connection should require an explicit action from within the vehicle rather than happening automatically, and the whole process of initiating diagnostics should be much better protected.
- WPA2-PSK credentials for this network were published through various Twitter accounts. Tesla could have discovered this and invalidated the earlier credentials. However, since the credentials could also be extracted from the firmware, proactively revoking them may not have provided much benefit by itself.
- The ConnMan code contained vulnerabilities in its DNS and DHCP services. These could have been detected earlier by fuzz testing, which is how the researchers uncovered them.
Does the hack reveal any good security practices?
There will always be undiscovered bugs that can lead to vulnerabilities, so measures that force hackers to work around them make turning a vulnerability into an exploit more difficult. Tesla had several such measures in place:
- The stack was marked as non-executable. This is a simple protective measure, but it forces hackers to use complicated workarounds like return-oriented programming, which requires precise knowledge of the firmware being used.
- The stack, text, and library base addresses were randomized. This requires the hackers to employ additional methods to leak software addresses before any exploit can proceed, further complicating the attack.
- The software used stack canaries, which detect overflows and immediately terminate the application if they have been modified. This made it harder for the hackers to take advantage of the vulnerability: they had to craft their overflows very carefully to avoid corrupting the canaries.
- Tesla runs applications at the least privilege level necessary and creates unique users for different services. They also use tools to filter syscalls (see the sketch after this list). All of this makes it harder to jump from a compromised module to something with more privileges.
- Tesla is working with the security community through PWN2OWN and their bug bounty program. Automakers engaging with experts is a best practice to help improve a vehicle’s security.
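We don’t know Tesla’s exact configuration, but as a generic illustration of the syscall filtering mentioned above: on Linux, a service can install a seccomp allow-list (shown below using libseccomp) so that even fully compromised code can only make a handful of system calls. The other mitigations in this list – non-executable stacks, canaries, and randomized addresses – are typically switched on with compiler and linker flags such as -fstack-protector-strong, -pie, and -Wl,-z,noexecstack.

```c
/* Generic illustration of a syscall allow-list with libseccomp --
 * not Tesla's actual configuration. Any syscall outside the short
 * list below kills the process, limiting what injected code can do.
 * Build: gcc filter.c -lseccomp */
#include <seccomp.h>
#include <unistd.h>

int main(void)
{
    /* Default action: kill the process for any syscall not explicitly allowed. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);
    if (!ctx) return 1;

    /* Allow only the minimum this (hypothetical) service needs. */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    if (seccomp_load(ctx) < 0) {        /* install the filter in the kernel */
        seccomp_release(ctx);
        return 1;
    }
    seccomp_release(ctx);

    /* From here on, calling socket(), execve(), etc. terminates the process. */
    write(STDOUT_FILENO, "filter active\n", 14);
    return 0;
}
```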
Are there any recommendations that come out of this Tesla drone hack?
Yes. The researchers summarized their work with four observations. We expand on each with its implications for the greater automotive industry and what can be done about them.
“Stack buffer overflows are still an exploitable problem in 2020.”
Mitigation strategies like non-executable stack memory, stack canaries, and randomized addresses are all excellent technologies that make exploits more difficult. However, they don’t stop the problem; they only slow it. There are tools in the experienced hacker’s toolkit to work around all these strategies.
We think that continued reliance on traditional languages and techniques that have built-in memory-safety issues will leave automakers open to attacks. Languages like C have no inherent way to guard against stack-related vulnerabilities. They should be reconsidered in favor of memory-safe languages like Rust, Ivory, Erlang, or Haskell. If developers are using C++, they should use a modern C++ variant (such as C++17 or C++20). Consider restricting C++ use to a subset containing only functional programming or other memory-safe design patterns and paradigms. If none of these things are possible, the resulting software should be considered a high security risk.
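Even where switching languages isn’t feasible, the hypothetical pattern sketched earlier can at least be written defensively – though in C the bounds check is only a convention that the next edit can silently remove, whereas a memory-safe language enforces it by construction:

```c
/* The same hypothetical copy as in the earlier sketch, written
 * defensively. In C this check is voluntary; a memory-safe language
 * would enforce the bound (or use a growable type) automatically. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

int handle_dns_reply(const uint8_t *reply, size_t reply_len)
{
    char name[64];

    if (reply == NULL || reply_len >= sizeof(name))
        return -1;                      /* reject oversized or missing input */

    memcpy(name, reply, reply_len);
    name[reply_len] = '\0';
    /* ... parse and use name ... */
    return 0;
}
```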
“If you don’t use or understand bug fuzzing technology, you’ll miss vulnerabilities in your code.”
Third Law believes that fuzzing should be part of your testing for all components in the automotive software stack. Get your engineering teams familiar with fuzzing technology. Learn what it does, how it can help point out errors in code, and what its limitations are.
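As a concrete starting point, a fuzzing harness is usually only a few lines wrapping the function under test – here the hypothetical handle_dns_reply() from the earlier sketches – and a coverage-guided fuzzer such as libFuzzer or AFL++ then generates and mutates the inputs for you:

```c
/* Minimal libFuzzer-style harness. handle_dns_reply() is the hypothetical
 * parser from the earlier sketches, standing in for whatever component
 * you want to test. Build with clang:
 *   clang -g -fsanitize=fuzzer,address harness.c parser.c */
#include <stddef.h>
#include <stdint.h>

int handle_dns_reply(const uint8_t *reply, size_t reply_len);  /* function under test */

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    /* The fuzzer calls this repeatedly with generated inputs and
     * reports any crash, hang, or sanitizer-detected memory error. */
    handle_dns_reply(data, size);
    return 0;                           /* non-zero values are reserved */
}
```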
“Automotive cybersecurity research and hack development is possible without requiring actual hardware.”
The researchers were able to emulate nearly everything they needed for the hack on desktop machines. We’ve seen very intelligent peers fall into the trap of thinking automotive systems will be difficult to crack because the hardware is hard or expensive to acquire.
We think it’s best to build embedded systems with the security principles and rigor needed as if the system was fully reverse engineered and sitting on a desktop under constant attack – because it probably will be.
“Infotainment systems have become very similar to desktops.”
Because Linux was used in the Tesla, the researchers were able to use a lot of Linux tools and insights from desktop development to reverse engineer the car. We aren’t against Linux development in embedded systems – far from it. But a monoculture environment benefits hackers tremendously.
Moving automotive software development to specialized OSes like QNX Neutrino RTOS or Green Hills INTEGRITY doesn’t make it impossible for hackers to gain a foothold, but it would markedly increase the effort and friction required to make any forward progress.