The Ethics of Self-Driving Cars
Author Name:
Instructor Name:
Course Name:
Date:
The Ethics of Self Driving Cars
While most of the debate surrounding automated vehicles in general, and self-driving cars in particular, has been dominated by their technological impact, their efficiency, and their intricate algorithms, the real issue that has begun to surface in recent times is the ethical dimension of such cars. Whether self-driving cars are really as good an idea as they are advertised to be must be thoroughly questioned on ethical grounds, since the introduction of such machines would have a direct impact on human life and its ethical foundations.
Fundamentally, the ethical problems with self-driving cars arise in situations where the safety of human life comes into question. Although technology has done everything possible to make driving safe and to design cars that minimize loss of life and property, accidents are still considered inevitable. What chiefly distinguishes an accident from premeditated homicide is that the driver has no intention to kill. The consequences of the action are merely accidental, and therefore the driver does not ethically bear the burden of the harm that befalls others as a consequence, although the act may have legal ramifications.
In the case of self-driving cars, on the other hand, the human factor is eliminated and algorithms take over. An "If X then Y" or "Either X or Y" rule takes the place of spontaneous reaction, and the car effectively "chooses" which obstacles to spare and which to hit. Assuming the collision is unavoidable, such a choice to kill can still be termed murder. The biggest ethical question such a scenario poses, then, is how decisions that involve possible damage to human life and property can be justified. Furthermore, it is very unclear who or what would bear the ethical responsibility for the damage caused by such cars.
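The "If X then Y" style of rule-based decision-making described above can be sketched as follows. This is purely a hypothetical illustration: the maneuver names and branching conditions are assumptions for the sake of the example, not any manufacturer's actual logic.

```python
# Hypothetical sketch of "If X then Y" collision-response branching.
# The maneuvers and conditions are illustrative assumptions only.

def choose_response(obstacle_ahead: bool, lane_left_clear: bool,
                    lane_right_clear: bool) -> str:
    """Return a maneuver using simple rule-based branching."""
    if not obstacle_ahead:
        return "continue"       # no hazard detected
    if lane_left_clear:         # If X (left lane clear) then Y (swerve left)
        return "swerve_left"
    if lane_right_clear:
        return "swerve_right"
    return "brake"              # last resort when no lane is clear

print(choose_response(True, False, True))  # → swerve_right
```

The point of the sketch is that every outcome, including who gets hit, is fixed in advance by whoever wrote the branches; there is no spontaneous human reaction anywhere in the loop.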
When confronted with the ethical problems surrounding self-driving cars, their advocates have often responded by pointing out that such cars can be programmed so intricately that human error is taken out of the equation. What is more, it is claimed that the decisions taken by these cars would always be based on verifiable facts, and that they can process many more variables than the human mind can and would therefore presumably take more logical decisions. The problem with this line of reasoning is that, in eliminating human error, it also eliminates the human factor from the scenario. A consequence is that while a decision taken by a machine might be logical, it cannot be ethical, since reaching an ethical conclusion requires a conscious human being. Goodall discusses the dilemma of self-driving cars having to make ethical decisions that otherwise require human foresight, such as instances where crashing is inevitable and an ethical choice must be made among whatever options are available (93).
The most common justifications offered for self-driving cars are grounded in arguments that stem mainly from three ethical philosophies: consequentialism (particularly utilitarianism), hedonism, and determinism. In order to furnish a sound argument against self-driving cars, let us first analyze the premises on which their proponents base their conclusions. We can then counter these claims through alternative philosophical concepts that allow the use of self-driving cars to be evaluated in a rationally justifiable manner.
Let us first imagine a situation where collision is inevitable. Consider a small hatchback self-driving car, A, moving along a road. Suddenly the car becomes aware of an obstacle, B, a very deep trench. To the right of A is a smaller car, C, with three passengers including an infant; to the left, a car with one passenger, D, is parked; and an SUV, E, is following A. When the obstacle appears, A can apply the brakes and be hit from behind by E, turn right and hit C, turn left and hit D, or fall into the trench B. The consequence of braking would be grave for A, since it is a small car and E is an SUV. If A turns right, it hits the smaller car with the infant but itself sustains minimal damage. If it turns left, it hits the car with only one passenger, whereas if it does nothing and simply falls into the trench, that would be suicide.
Firstly, let us evaluate the situation using utilitarian ethics. Being a consequentialist approach, utilitarianism would judge the decision taken by the self-driving car on the basis of its consequences. The principle of utility holds that the action resulting in maximum benefit and minimum harm is the right one. In other words, if multiple choices exist, each should be evaluated by summing its benefits and subtracting its harms. Hence, the most ethical choice would perhaps be turning left and hitting D. Note that in this case the notion of self-sacrifice has been sidelined and the safety of A as well as C is prioritized over that of D, merely because the benefits seemingly outweigh the harm and the ends justify the means.
Secondly, from a purely hedonistic perspective and its pleasure-and-pain principle, the ethical choice must involve avoiding the trench, since not avoiding it would result in pain, and A must also not apply the brakes, since the SUV would hit it from behind. A then has only the choice of hitting either C or D, as both choices spare it harm. This in itself seems very problematic, as cars programmed in such a selfish way would always put their own safety first without any concern for others.
Thirdly, evaluating the scenario from a determinist perspective, the very concept behind self-driving cars already seems deterministic in its assumption that a set of predetermined factors will contribute to an unavoidable collision. Moreover, determinism cannot supply a sound answer to the ethical problems of self-driving cars, since it holds that all events, including moral choices, have predetermined causes, so that actions have no intrinsic value in themselves. This perspective is problematic because it would lead to an absolute lack of accountability for the actions of self-driving cars and is, in reality, impracticable.
Clearly, the justifications for the ethical competency of self-driving cars suffer from serious drawbacks. Not only are these arguments based on the assumption that an accident would be inevitable, but no convincing reason is provided as to how a car that lacks the human factor could take an ethical decision. Further, by programming cars in either the utilitarian or the hedonistic way, we would be programming them toward killing, not saving lives.
Now, moving on to an analysis based on Kantian deontological ethics, another aspect of the problem with self-driving cars comes to the fore. This deontological perspective requires actions to be labelled good or bad based on the intrinsic value of the actions themselves, as opposed to their consequences. Applying it entails that self-driving cars must work on a system that bases the car's response on the intrinsic value of every option at hand. This, however, is nearly impossible, since these cars run on algorithms that can only weigh the consequences of actions, never their intrinsic value. This drawback weighs heavily against the argument for autonomous vehicles.
Additionally, from a human rights perspective, the ethical dilemma surrounding self-driving cars is further compounded. By the very definition of human rights in postmodern times, the algorithms proposed for self-driving cars can be called into question. Since every human being has, by virtue of their humanity, the right to be treated fairly and equally with all other human beings, the science that determines how a self-driving car chooses to minimize damage during an accident is itself discriminatory. The grounds on which such a vehicle would choose to crash into a certain vehicle or pedestrian while sparing another are themselves open to criticism and suspicion. The ethical argument against self-driving cars from a human rights perspective is thus: since all human beings have a right to be treated equally, and since self-driving cars discriminate between human beings in the case of an unavoidable accident, self-driving cars violate basic human rights and are therefore unethical.
In response to this objection, developers have attempted to build mechanisms elaborate enough to let autonomous cars take decisions based on ample information about the passengers in nearby vehicles rather than mere superficial characteristics. Supporters argue that, with access to very detailed information, such cars could determine which car carries the most vulnerable passengers and which car's passengers would likely survive a collision. However, providing such detailed information about nearby passengers further exposes them to risk, for instance the risk of racial profiling.
Further, moving on to the question of accountability, it is interesting to reconsider the very concept of a self-driving car, given that since the inception of driving it has been assumed that the driver must bear the onus of responsibility for the consequences of his actions. In the absence of a human driver, on whom does the ethical responsibility for loss of life or property fall? Whether the owner of the car is responsible, or whether the responsibility falls on the engineers who programmed it, is a perplexing issue. Although legally either party can be taken to task for an accident, it is unclear where the ethical responsibility lies.
Interestingly, the whole discussion surrounding self-driving cars also brings us to another impasse. Perhaps, in an interesting turn of human history, as the lines between human and artificial intelligence blur, even machines might have to be held ethically accountable for their decisions and actions. In fact, the argument that human life and its safety precede those of a machine might easily be brought into question in the near future, bringing to naught the claim that hitting an empty car while avoiding one with passengers is more ethical.
In conclusion, based on what we know about self-driving cars and the way they function, as revealed by elaborate tests and thought experiments, it is clear that such vehicles are dangerous to say the least. That they pose a real danger to human life and have the tendency to turn into killer machines if let loose cannot be ignored. Therefore, unless the developers of such cars can resolve the ethical dilemma they pose by devising a different system that somehow incorporates the human element, self-driving cars are a bad idea from an ethical perspective.
Works Cited
Goodall, Noah J. "Machine Ethics and Automated Vehicles." Road Vehicle Automation, Springer International Publishing, -.
Lin, Patrick. "The Ethical Dilemma of Self-Driving Cars." TED-Ed, http://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin