Reading time: 7 minutes
George Hotz, the first person to jailbreak an iPhone as well as the CEO and brains behind Comma, is a controversial figure in the world of self-driving cars. He’s refreshingly honest, insightful, and intriguing. What’s the controversy? Well – the things he says are often pretty unorthodox. One commenter said about his unpredictability: “[He’s] the Kanye West of Silicon Valley”. I personally think he sounds more like a younger, crazier Elon Musk.
Hotz frequently shares opinions that irritate others in the industry, creating a bit of a stir. As an example, Comma recently tweeted about Lidar, which got a rise out of a number of people.
The debate this generated got us thinking about “mythbusting” the top George Hotz controversial self-driving statements. So, let’s start with this one.
“Lidar’s a scam.”
As cool as it might sound to have a head-mounted laser, it is definitely true that humans can drive cars without spinning lasers. However, that's a flawed analogy. You could also say that humans drive cars without the need for silicon processors, so chips are a scam. Or that humans can drive without smartphones, so any smartphone-based self-driving system (like the Comma Two) is a scam.
Technological solutions might take inspiration from biological analogs, but they don't have to duplicate them. If our technology had to exactly copy birds or fish, we'd never have had jet planes or propeller boats. Lidar, radar, ultrasonic – even side-mounted or 360° vision cameras – add redundant mechanisms in order to gain enough information to duplicate (or improve on) a human's ability to drive. Lidar isn't intended to replicate human vision, and there's nothing "scammy" about it, especially when you're trying to make self-driving cars that are safer than human drivers.
“Self-driving cars are a scam.”
Hotz says that SAE Level 4 robotaxis won’t be viable until at least 2030, and that in general, self-driving is a scam. Hotz believes that using HD maps to support self-driving is bound to fail. That’s because it requires an infrastructure to work – cars that depend on centimeter-level mapping and location accuracy won’t be generically useful unless every road and every circumstance are covered in real-time.
As another part of the scam, he rails against self-driving companies who he feels are taking advantage of investors. As an example, he said that Zoox has raised $990 million and is running out of money yet hasn’t made any revenue. He feels similarly about Waymo, Uber, and Cruise. Hotz offers that most self-driving car companies are taking tons of money and delivering little for it.
Finally, the main reason he thinks autonomy is a scam is not the lack of technology, but economic reality. For example, he says it costs $250,000 to outfit a Waymo vehicle that still requires a full-time "safety" driver. As Hotz says, "It's not a product, it's a press demo." His point is that the cost ratio is completely skewed – a dedicated human driver is available for far less, and until the cost of self-driving drops below the cost of a human driver, it's not economically viable.
Is he right? Partially, but let’s take a look at each part of his argument.
The first part is easiest to assess. Certainly, an approach to self-driving that doesn’t rely on HD maps would be better than one that does. It would be better to avoid the effort in building and maintaining global high-resolution maps, and it would be much better if the technology dynamically adapted to changing road geometries. Any vehicle technology that independently works well is preferable to one that’s dependent on expensive-to-deploy and difficult-to-maintain infrastructure.
As far as companies taking investors' money, he is also right that billions have been invested in autonomous tech over the years without creating legitimate products. After seeing some fancy Silicon Valley offices and blow-out parties, I think even the biggest self-driving fanfolk would be hard-pressed to argue that every penny of these investments has been spent with investor intentions in mind. But are these companies intentionally ripping off their investors? I feel it's unlikely that they are; however, it's probably not provable without much more detailed research into every company accepting VC money. If we're being generous to Hotz by acknowledging that some of these companies might not be fully scrupulous (with examples like Uber's Travis Kalanick in mind), we can call this part a "maybe."
He's also right that the cost of a self-driving system far outweighs the cost of the human driver it might replace. However, this only matters if you're buying an autonomously-outfitted car. Most of these self-driving companies won't be selling cars; they'll be offering mobility-as-a-service (MaaS). In a MaaS model, the cost of the autonomous system will be amortized over several years, which could make robotaxis cheaper than a human-driven taxi over the long haul. Hotz's passionate opinion here probably stems from the fact that putting our transportation in the hands of big tech is something that he – and many others – find unsettling.
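The amortization point can be made concrete with a back-of-envelope sketch. The $250,000 hardware figure is the Waymo estimate Hotz cites; the driver salary and vehicle service life below are purely illustrative assumptions, not figures from any company.

```python
# Back-of-envelope: amortized autonomy hardware vs. a human driver.
# $250,000 is the figure Hotz cites for outfitting a Waymo vehicle;
# the 5-year service life and $40,000 salary are hypothetical.

def cost_per_year(system_cost: float, service_years: int) -> float:
    """Spread a one-time autonomy hardware cost over the vehicle's service life."""
    return system_cost / service_years

autonomy_annual = cost_per_year(250_000, service_years=5)  # $50,000/year
driver_annual = 40_000                                     # hypothetical salary

print(f"Autonomy per year: ${autonomy_annual:,.0f}")
print(f"Driver per year:   ${driver_annual:,.0f}")
```

Under these made-up numbers the system still loses: it would need a service life past roughly six years, or cheaper sensors, to beat the driver. The MaaS bet is that both of those will happen.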
“[Autonomous] permits are a showboating thing.”
Here, Hotz is talking about the government of California (as well as other locales like Nevada, Michigan, and Florida) granting permits to companies to allow their self-driving technology on public roadways. Hotz says that the companies who get these permits aren’t at all safer and are just spending money in order to get more press.
It’s hard to argue with the publicity portion of his statement. Certainly, the companies who apply for these permits want to be seen as leading in autonomous trials and are willing to spend money with the government to do so. And for the most part, the press that it generates looks like “bragging rights” for the first to claim each new level, or for the first to have the most permitted self-driving cars. Just as important is the publicity generated for the states. They want to garner attention as autonomous leaders and thereby attract more technology business.
But is there a non-cynical view of this? Could government permits actually make self-driving safer? Let’s look at California’s permit process.
California’s Autonomous Vehicle Testing permit that most people have opted for (currently 56 permit holders) requires a driver. As Hotz argues, this requirement is meaningless if the car is driving itself. The Uber fatality in Arizona is a perfect case in point: when drivers have nothing to do, they are inherently distracted and unprepared to take over control of the vehicle in split-second emergencies. I agree with Hotz here. The car needs to actively prevent distracted driving through monitoring, such as with a driver-facing camera and inattention monitor, or even the less effective Tesla-style “hands on wheel” check. Without these safety-checks being mandatory, a self-driving car with a permit isn’t any safer just because a person sits behind the wheel.
What about California’s Autonomous Vehicle Tester program for a completely driverless car? In addition to the previous permit’s requirements, this tester program includes several other conditions that must be met:
- Self-driving system must operate at SAE Level 4 or 5
- Self-driving vehicle must have a link to a remote operator and these operators must be properly trained
- The company who owns a self-driving vehicle must notify police and emergency services when tests are run, and the vehicle must communicate vehicle-owner information in case of a crash
Given that few cars claim L4 or 5, this program contains a much smaller pool of candidates. (There are currently only six companies registered with a driverless permit – Waymo, Zoox, Cruise, Baidu, Nuro, and AutoX.) The remote operator requirement should provide a big boost to the vehicle's safety if the operators are properly trained and certified like those at Designated Driver. (Disclaimer: I'm friends with the CEO.)
So what’s the bottom line – are permits a showboating thing? Seems like the permits have some safety benefits, but perhaps not for the majority of those currently in use.
“’If’ statements kill.”
This one might not make a lot of sense unless you’re a programmer but Bloomberg provides a great interpretation:
As Hotz puts it in developer parlance, "'If' statements kill." They're unreliable and imprecise in a real world full of vagaries and nuance. It's better to teach the computer to be like a human, who constantly processes all kinds of visual clues and uses experience, to deal with the unexpected rather than teach it a hard-and-fast policy.
This matches Comma's use of machine learning algorithms and massive data sets to train their self-driving technology. It also matches what practitioners at every other self-driving company are doing: the exact same thing.
With enough data, you can program machines to write poems and stories, defeat Go masters, or win at Jeopardy – even drive cars. It’s also the case that machine learning systems can fail if they haven’t been trained properly. When the data sets don’t match reality, our AIs can fail too.
However, the 'if' corresponds to making true/false (Boolean) decisions. We know these are bad in a messy world, especially in life-or-death situations that might never have been anticipated. Most AI systems built in the latter half of the twentieth century were little more than complex decision trees: if/then/else taken to the extreme. They were very brittle, and they couldn't manage situations they weren't explicitly programmed to handle. We've moved away from these systems to data-driven AIs for a reason.
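The contrast can be sketched in a few lines. This is a toy illustration, not anything from Comma's codebase: the scenario, distances, and "training data" are all invented.

```python
# Toy contrast: a hand-coded 'if' policy vs. a threshold learned from data.
# All numbers and the scenario are invented for illustration.

def brittle_policy(obstacle_distance_m: float) -> str:
    """A hard-coded rule: correct only for the cases its author imagined."""
    if obstacle_distance_m < 10.0:
        return "brake"
    return "cruise"

def fit_threshold(examples: list[tuple[float, str]]) -> float:
    """A data-driven stand-in: derive the braking threshold from labeled
    (distance, action) examples, so new data shifts the behavior instead
    of requiring someone to rewrite the 'if'."""
    brake_dists = [d for d, action in examples if action == "brake"]
    cruise_dists = [d for d, action in examples if action == "cruise"]
    # Midpoint between the closest 'cruise' and farthest 'brake' example.
    return (max(brake_dists) + min(cruise_dists)) / 2

data = [(3.0, "brake"), (8.0, "brake"), (15.0, "cruise"), (25.0, "cruise")]
threshold = fit_threshold(data)  # learned, not hand-picked
```

Real systems learn millions of parameters rather than one threshold, but the principle Hotz is pointing at is the same: the behavior comes from data, not from a programmer enumerating cases.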
“We are living in a simulation.”
Although this isn't self-driving related, it is one of the wackier things Hotz has said, so we're going to look at it just for fun. The belief that we're living in the Matrix is another thing Hotz has in common with Elon, who has also claimed this. Interestingly enough, so do an increasing number of regular people – probably because the romantic idea of being able to "hack" yourself some superpowers, like Neo does, is so appealing. The basic thrust of the argument is that if you consider all of the possible universes with hyper-intelligent beings creating universe simulations, the number of virtual worlds would outweigh the real worlds, and the statistical likelihood of us being in one of those simulations is near certainty.
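The counting at the heart of the argument is simple enough to write down. Assuming you can't tell a simulation from base reality, and each real universe hosts some number of simulations, the odds of being in the one real universe fall off quickly:

```python
# The simulation argument as pure counting: one real universe plus
# n indistinguishable simulations gives a 1/(n+1) chance of being real.

def p_base_reality(sims_per_real: int) -> float:
    """Probability of being in base reality, given sims_per_real
    indistinguishable simulations per real universe."""
    return 1 / (sims_per_real + 1)

print(p_base_reality(1))      # one simulation: a coin flip
print(p_base_reality(1_000))  # a thousand simulations: ~0.1% real
```

The whole force of the argument sits in the premise that n is large, which is exactly the premise questioned below.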
I don’t find this line of reasoning sound at all. It would assume that there are a huge number of hyper-intelligent beings, that they would develop incredibly sophisticated and vast computers, that they would dedicate these vast resources to running simulations, and that these virtual worlds would outnumber those in (as far as we know) an infinite physical reality.
Given the vast complexity of the universe as we know it, it doesn’t sound plausible in the slightest. The “gotcha” on this particular theory is that our electronic overlords would keep us from seeing outside our virtual environment, so we could never know for certain. Although I think that it’s about as improbable as I could imagine, I have to concede that it’s not impossible.
What’s the verdict?
Did we learn anything from mythbusting George Hotz? He may irritate people sometimes, but he’s not as off-the-wall as he can seem at first blush. While he is guilty of oversimplification, much of what he says is – if not outright true – somewhat true, or true with caveats. Probably the reason he’s so controversial is that he’s not beholden to corporate interests, and as such he always speaks his mind. If only we were all so liberated.