It remains lost on me why people so badly want self-driving cars, and why both the automotive and tech industries are racing against time, each other and all logic to bring them to us.
Actually, I’m not so sure people do want them. The industry seems to want them. Maybe the tech sector has run out of interesting ways for us to send cat pictures to each other and it needs something new to market. Normal people seem quite content to drive their cars with their eyes, hands and feet, but the people who know these things insist that self-driving cars are the future, and so we’d all better get ready for them.
There is one problem, which some of us have been trying to point out ever since this nonsense gained prominence: Cars can’t actually drive themselves, no matter how clever you think the technology is. The technology can’t instantly see, understand and react to everything the human eye can. And expecting it to is going to get people killed.
One would seem to need little more than common sense to recognize this, but since some people always insist on “studies” before they’ll believe anything, the folks at AAA have got you covered:
Testing by AAA shows that electronic driver assist systems on the road today may not keep vehicles in their lanes or spot stationary objects in time to avoid a crash.
The tests brought a warning from the auto club that drivers shouldn’t think that the systems make their vehicles self-driving, and that they should always be ready to take control.
AAA also said that use of the word “pilot” by automakers in naming their systems can make some owners believe the vehicles can drive themselves.
“These systems are made as an aid to driving, they are not autonomous, despite all of the hype around vehicle autonomy,” said Greg Brannon, AAA’s director of automotive engineering. “Clearly having ‘pilot’ in the name may imply a level of unaided driving, which is not correct for the current state of the development of these systems.”
Now let’s game this out a little:
The proposition is that self-driving car technology can do a lot of things, but that the actual person sitting behind the wheel should stand ready to take back the controls at any moment in case the supposedly autonomous system fails. I have a few questions about all this:
First, how is the driver supposed to know the autonomous system is about to fail? Is it going to announce, “I don’t recognize our impending impact with that truck”?
Second, what does the driver have to do to take back the controls? Push a button? Just turn the wheel? Hit the brake? If you’re about to crash into something, is there even time to do this?
Third, what if the technology keeps trying to drive the car while the human is trying to override it? What happens then?
Fourth, if the driver has to be ready to take over at any time, I’m assuming that means drivers should be expected to be at a 100 percent level of alertness at all times. Who do you think is going to be more alert? Someone who’s actually driving the car the whole time? Or someone who’s letting the car “drive itself” but is supposed to still be paying attention just in case?
All the arguments I hear in favor of driverless cars rely on statistical models that insist the technology, for all its faults, will still get fewer people killed than unbridled human error. It basically assumes and accepts a certain number of mishaps, but assures us that we shouldn’t worry because if you leave humans at the controls things will be much worse.
You see the problem with this, right? That model might be defensible if autonomous vehicle technology replaced the entire driving population, good drivers and bad alike. But what if you took only the bad drivers out of the mix, left the good drivers, and then forced the good drivers to give way to autonomous vehicle technology?
In other words, if you’re a good driver, you’re probably much less likely to get in an accident driving yourself than in an “autonomous” vehicle. That means that once you cede control to the so-called driverless car, you are now at the same risk of an accident as the worst driver on the road. If you’re a good driver, why would you accept that risk?
Not only that, but you’re now at greater risk of being crashed into by a car that, before autonomous vehicle technology, would not have crashed into you because it also had a good driver behind the wheel.
I’m starting to understand why these people like this idea so much. It’s like socialism for driver safety. It equalizes everyone’s risk, and takes away the advantage of those who drive responsibly. It also makes you responsible for taking over at a moment’s notice, but gives you no real way to know when that moment is upon you.
So what we’re left with is this: We’re going to replace human drivers with technology we don’t need, which isn’t as safe as the best human drivers, who will now be at greater risk of being injured or killed. And for those of you who demand studies, we’ve got a study from AAA that shows all this.
I think I’ll just keep driving my regular car if it’s all the same to you. I don’t know why anyone would do otherwise.