In 2014, as publications and automakers began making greater noise about autonomous vehicles, researchers at MIT's Media Lab put some questions to the public. The institute's Moral Machine experiment offered up a series of scenarios in which a self-driving car that has lost its brakes must hit one of two targets, then asked respondents which of the two they'd prefer to see the car hit.
Four years later, the results are in. If our future vehicles are to drive themselves, they’ll need to have moral choices programmed into their AI-controlled accident avoidance systems. And now we know exactly who the public would like to see fall under the wheels of these cars.
However, there's a problem: agreement on whom to sacrifice differs greatly from country to country.