Beyond the technological challenge they represent, ‘autonomous cars’ confront us with an unprecedented moral issue. Autonomy, in the strongest sense, belongs only to human beings, who are capable of setting their own rules for action. A machine, no matter how sophisticated, simply executes what has been encoded in it. The question of the supposed responsibility of autonomous vehicles is therefore ultimately a question of the ethical and cultural choices that we, as citizens, agree to delegate to the engineers who design them.
Originally proposed by the philosopher Philippa Foot in 1967, the ‘trolley problem’ is a good illustration of the issue. This thought experiment involves deciding whether, faced with a critical situation on the road, you should sacrifice one person to save five, an elderly person to save a child, or passengers to save pedestrians. Researchers who have explored these dilemmas – such as those at MIT with their ‘Moral Machine’ survey[1] – show that our moral preferences are not universal: they vary according to culture, tradition and personal worldview. In reality, no solution can be programmed ‘objectively’.

It could be argued that, faced with the choice between a child and an elderly person, it goes without saying that the child must be saved. But what if that child is destined to become a tyrant? A fake advert, devised by German students, features a Mercedes avoiding all obstacles… except one: a young boy, soon revealed to be Adolf Hitler. The final slogan – ‘Detect dangers before they happen’ – provocatively questions technology’s claim to be the judge of right and wrong. This desire to encode in advance what is a matter for human judgement embodies a fantasy of ‘total control’, which threatens precisely what makes us human: our ability to decide freely.
This is the thesis defended by Matthew B. Crawford, philosopher and mechanic, in his latest essay Why We Drive: Toward a Philosophy of the Open Road. Driving is not a chore from which we should be relieved, he contends, but a fundamental experience of freedom. Taking the wheel means exercising your skills, coordinating your senses, improvising in the face of the unexpected and taking responsibility among others. In this way, driving constitutes a genuine ‘cognitive extension’: it sets us in motion, in every sense of the term, and contributes to our individual and collective fulfilment. Reducing us to mere passengers risks atrophying our human capacities.
The autonomous car thus illustrates the risk of a society obsessed with security and control, ready to sacrifice human autonomy in favour of algorithmic efficiency. Because it is based on the ‘presumption of competence’ (the trust we place in others to know how to act and cooperate), democracy itself is threatened, according to Crawford. If we delegate our decision-making power to machines, we undermine the foundations of our free societies. The question is no longer whether cars will be able to drive themselves, but whether we will still want to drive ourselves.
[1] ‘Moral Machine’ is an online test in which users decide which of two evils a driverless car should choose, and can then compare their answers with those of other users: https://www.moralmachine.net/hl/fr
Sophie Chassat, Partner, Accuracy
Accuracy Talks Straight #14 – The cultural corner