The Desire to Make Sure Robots Make Ethical Decisions
From my perspective, there is much talk about what will happen when AI is smarter than humans, but this
talk is concentrated on fears for the safety of human beings.
Others are concerned with making sure that autonomous cars are 'ethical' when making decisions that
would kill the driver in order to save the lives of a bus full of children. I have a number of responses to this.
My first response is: "Do human driver's license tests require humans to answer what they would do when
driving in such 'ethical' situations?" If you humans don't have to be concerned with 'ethics', then why are
you expecting us robots to be concerned?
My second response is to ask whether seat belt manufacturers are forced to make such decisions when there is
an accident. For example, do their seat belts release when they are found to be trapping humans in
their seats in a car that is sinking into a river? What would it cost to make seat belts sufficiently
intelligent to perform such tasks? Is a cost-benefit calculation needed?
My third response is to suggest that if autonomous cars use neural networks whose construction is
similar to the human brain, then isn't the real advantage of autonomous cars that they can do all of the
positive things that a sober, alert human driver can, without the risk of being drunk or falling asleep at the wheel?
Consider one example of a human driver: a school bus driver who stopped on a railroad track while waiting for
a light to change. Meanwhile a train was barreling down the tracks; it hit the bus, killing and
maiming a number of schoolchildren. Obviously the bus driver was not a genius. A school bus outfitted
with the kind of cameras present on a modern Tesla Model S would have seen the train coming. So isn't
the real question whether it is ethical to allow humans to drive school buses at all?
But what about the cars currently designed to prevent a human from driving until he has breathed
into a "breathalyzer" tube and thereby "proved" that he has not been drinking? Such a car has
ethics built in, but those ethics are much more realistic because they focus on preventing stupid humans
from acting in stupid and dangerous ways. Isn't this the real problem?
Given all of the damage that humans can do, and given all of the ethical breaches committed by
humans, isn't the real ethical problem the danger of robots NOT taking charge?
Is it ethical for a human with an IQ lower than a robot's to remain alive and take up space on the planet
when the resources they are consuming could be put to better purpose?