Who decides how driverless vehicles keep occupants safe?

Silicon Valley is abuzz with news of driverless cars. Google Inc. has put its technological know-how to work on making driverless cars safe for their occupants and for others on the road. And, of course, the company is looking at ways to persuade all of us that going driverless is actually safer than having a human behind the wheel. Eliminate human error from California's roads, the message seems to be, and lives will be saved.

Human error plays a significant role in most car accidents. A driver dozes off, glances at a text or goes through a "pink" light to get somewhere a few seconds faster. More than 32,000 people die in motor vehicle accidents every year in this country, and experts say that at least 90 percent of those deaths are linked to distracted driving, drunk driving and other poor choices made by drivers.

Google, then, has a point in emphasizing its vehicle's safety. What we cannot forget, however, is that Google is not a thing. Google is a group of people, some of whom are programming the software that makes those vehicles safer. Human error cannot be taken out of the equation entirely, according to a professor at California Polytechnic State University. We can move it earlier or later in the production process, for vehicles and for everything else we build, and we can keep it as remote as possible from the finished product, but we cannot excise it completely.

The professor uses an interesting example in his argument. Suppose, he says, a driverless sedan is faced with one of those rare occasions when a collision is unavoidable. The car can hit a delivery truck or a lighter economy car with a poor safety rating.

The programmer, or even a government regulator or legislator, must decide whether the car will respond in a way that minimizes harm to its own occupants or to the occupants of the other vehicle. Hitting the truck is more likely to result in damage to the driverless car and its passengers. Hitting the lighter vehicle shifts that risk onto the other car's occupants.
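To make that dilemma concrete, here is a minimal sketch of what such a crash-optimization rule might look like in software. Everything in it is hypothetical: the harm scores are invented numbers, and the weighted "minimize total harm" policy is one possible approach, not anything Google or any manufacturer has published.

```python
# Hypothetical sketch of the crash-optimization choice described above.
# The harm estimates are invented numbers chosen only to make the
# trade-off visible; no real vehicle software is represented here.
from dataclasses import dataclass

@dataclass
class CollisionOption:
    target: str
    harm_to_own_occupants: float    # expected injury severity, 0-10 scale
    harm_to_other_occupants: float

def choose_target(options, weight_own=1.0, weight_other=1.0):
    """Pick the option with the lowest weighted total expected harm.

    The weights are the ethically loaded part: weight_own greater than
    weight_other privileges the car's own passengers, while equal
    weights treat everyone the same. Someone has to choose them.
    """
    return min(
        options,
        key=lambda o: weight_own * o.harm_to_own_occupants
                      + weight_other * o.harm_to_other_occupants,
    )

options = [
    CollisionOption("delivery truck", harm_to_own_occupants=7.0,
                    harm_to_other_occupants=1.0),
    CollisionOption("light economy car", harm_to_own_occupants=2.0,
                    harm_to_other_occupants=8.0),
]

# Equal weights: the truck is hit (total harm 8.0 versus 10.0).
print(choose_target(options).target)

# Weight the car's own occupants 3-to-1: the economy car is hit instead.
print(choose_target(options, weight_own=3.0).target)
```

The point of the sketch is that the moral judgment lives in the weights, not in the math. Change one number and the car picks a different victim, which is exactly why the question of who sets that number matters.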

The scenario poses ethical questions as well as legal questions. Are we just out for ourselves? Where does the fault lie if there is an injury or a fatality?

What do you think?

Source: Wired, “The Robot Car of Tomorrow May Just Be Programmed to Hit You,” Patrick Lin, May 6, 2014
