The problem with human intelligence is that its normal range is too broad. The gap between the lower and upper limits of normal is so wide, and the average lands so low, that the ordinary performance of typical human functions is beyond the capacity of a majority of the population. The low average also explains the fraud that flummoxes a large segment of the populace. This phenomenon explains why nothing works very well. Pick any important activity and give it a grade. Start with the government, the media, the military, the Post Office, the DMV, computers, smartphones – fill in the rest. That anything works at all is a tribute to machine over man. Why worry about artificial intelligence when native thinking is in the pit? We’d probably do better with fewer people at the top of the intelligence range; there would be less design and production of systems and gadgets beyond the capacity of the rest of us.
If you would understand how confused we are, consider self-driving cars. Read up on Cruise, a self-driving car company headquartered in San Francisco, California, that tests and develops autonomous vehicle technology. The company is a largely autonomous subsidiary of General Motors. Would you be surprised to learn that occasionally a self-driving car gets into an accident? What standard should one use to judge the safety and utility of such a machine? Mechanical failure is not an unknown phenomenon.
Apparently, the standard that government and regulators use is perfection. One and done: if the autonomous car gets into an accident, the entire enterprise is stopped. Why assume the autonomous car is at fault? The accident may be caused by another driver or by conditions beyond the control of any self-driving car. And what of ordinary cars driven by distracted, impaired, careless, or otherwise negligent humans? There are about 20,000 auto accidents every day in the US. Unless you are involved in one of them, nobody seems very concerned about the carnage in the streets. GM doesn’t suspend sales of its non-self-driving cars. Most of these accidents don’t even merit a mention in the local press or on TV. A high school science project could likely design a self-driving car safer than the ones now on the road operated by humans.
In addition to auto accidents, planes occasionally fall from the sky, trains run off the rails, trucks driven by professionals crash into whatever is handy, and drivers frequently drive like maniacs. A self-driving car is likely already safer than one driven by the average human, but it will never be 100% risk-free. Given our weak capacity for risk-benefit analysis, we will still prefer mayhem to protection, whether or not we get behind the wheel.
Each time you drive to the supermarket or to a friend’s house you risk death or dismemberment, and you do so without a care in the world as you listen to the latest podcast along the way. Have you ever run a red light, failed to stop at a sign requiring immobility, gotten a speeding ticket, or been in an accident? If you’re honest, the answer is yes to most of these – and to a few more I’m too nice to list.
So revamp your expectations of self-driving cars. Don’t let California set the standard for their safety – don’t let California set any standard. We’d doubtless be a lot safer if humans stopped driving cars under any circumstances. We just aren’t smart enough to operate lethal machines unless we are at war or have harm as our goal. Leave machines to the care of highly trained professionals (who, though fallible, will screw up less than we do) or to sentient algorithms with better attention spans than ours.