Rationally Considering Autonomous Weapons and Ethics
This article from IEEE Spectrum presents one of the more rational discussions of, and counterpoints to, the recent calls to ban autonomous weapons: We Should Not Ban ‘Killer Robots’ and Here’s Why http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/we-should-not-ban-killer-robots. It builds on and extends my own thoughts on the topic, which I first described here https://plus.google.com/u/0/+MarkBruce/posts/dvsWMFLV9Vi, agreeing that autonomous weapons are a bad thing but that there is no realistic way of stopping their development and likely deployment. It asks whether autonomous weapons on the battlefield might in fact be more ethical than the alternatives, given that they may lead to significantly reduced casualties, both combatant and, most importantly, civilian, particularly if autonomous weapons can follow far stricter rules of engagement than any human.
A few quotes:
The barriers keeping people from developing this kind of system are just too low.
What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing.
If autonomous armed robots really do have at least the potential to reduce casualties, aren’t we then ethically obligated to develop them?
Blaming technology for the decisions that we make involving it is at best counterproductive and at worst nonsensical. Any technology can be used for evil, and many technologies that were developed to kill people are now responsible for some of our greatest achievements, from harnessing nuclear power to riding a ballistic missile into space.
Perhaps the biggest surprise for me regarding this issue, and the open letter that sparked the wider awareness and debate, is how polarising it has been: many people seem incapable of discussing the issues rationally, preferring instead to assume an air of moral superiority and shout down anyone who dares to question their position.
Philosophy and Ethics in Autonomous Vehicles
In a closely related area concerning the behaviour of autonomous vehicles on our roads, I was recently involved in a discussion thread where I mentioned that philosophical “Trolley Problems” (https://en.wikipedia.org/wiki/Trolley_problem) would have to be tackled at some point in the operation of these vehicles. The canonical example asks whether you should flick a switch that diverts a runaway trolley, killing one person in order to save several others.
And, of course, this week we see that a great many people are already working on this problem, as covered in this summary article: How to Help Self-Driving Cars Make Ethical Decisions http://www.technologyreview.com/news/539731/how-to-help-self-driving-cars-make-ethical-decisions/. As a simple example: if a young child runs onto the road in front of an autonomous passenger vehicle before it can stop, should the vehicle swerve into oncoming traffic to avoid the child? (A toy decision rule for this dilemma is sketched after the quotes below.)
A few quotes:
Given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly.
If you look at airbags, for example, inherent in that technology is the assumption that you’re going to save a lot of lives, and only kill a few.
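To make the child-on-the-road dilemma concrete, here is a minimal sketch of one possible approach: enumerate a few candidate manoeuvres and pick whichever minimises total expected harm. Everything in it is an illustrative assumption on my part (the manoeuvre names, probabilities, and severity weights); the article does not prescribe any particular rule.

```python
# A minimal sketch of an expected-harm decision rule. The vehicle is assumed
# to be able to enumerate a few candidate manoeuvres and estimate, for each,
# the probability and severity of harm to each party involved. All names and
# numbers below are made up for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    party: str          # who is affected, e.g. "child", "occupants"
    p_harm: float       # estimated probability of serious harm
    severity: float     # relative severity weight, 0..1

@dataclass
class Manoeuvre:
    name: str
    outcomes: list[Outcome]

def expected_harm(m: Manoeuvre) -> float:
    """Sum of probability-weighted harm across everyone affected."""
    return sum(o.p_harm * o.severity for o in m.outcomes)

def choose(manoeuvres: list[Manoeuvre]) -> Manoeuvre:
    """Pick the manoeuvre that minimises total expected harm."""
    return min(manoeuvres, key=expected_harm)

# The dilemma from above, with invented numbers:
brake_straight = Manoeuvre("brake in lane", [
    Outcome("child", p_harm=0.6, severity=1.0),
])
swerve = Manoeuvre("swerve into oncoming lane", [
    Outcome("occupants", p_harm=0.3, severity=0.8),
    Outcome("oncoming driver", p_harm=0.3, severity=0.8),
])

best = choose([brake_straight, swerve])
print(best.name)  # "swerve into oncoming lane" under these particular numbers
```

The hard part, of course, is nothing in the code: it is deciding what the probabilities and severity weights should be, which is precisely the ethical question.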
As one of the commenters notes, the system becomes even better when all vehicles on the road are autonomous and able to communicate with each other: if a car swerves into oncoming traffic to miss a child, the oncoming traffic knows this immediately and can swerve in turn to make room for it.
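As a toy illustration of that point, the sketch below invents a simple intent-broadcast message and a handler for it. The message format, field names, and thresholds are my own assumptions; real vehicle-to-vehicle systems (e.g. DSRC or C-V2X) define their own message sets, and nothing here reflects an actual standard.

```python
# Toy sketch of vehicle-to-vehicle coordination: a swerving car broadcasts
# its intended manoeuvre, and oncoming traffic reacts before the swerve is
# even visible. All message fields and logic are invented for illustration.

from dataclasses import dataclass

@dataclass
class IntentMessage:
    vehicle_id: str
    manoeuvre: str       # e.g. "swerve_left"
    lane_needed: int     # lane the sender is about to occupy
    eta_ms: int          # how soon the sender will be there, in milliseconds

class Vehicle:
    def __init__(self, vehicle_id: str, lane: int):
        self.vehicle_id = vehicle_id
        self.lane = lane

    def on_intent(self, msg: IntentMessage) -> str:
        """React to another vehicle's broadcast intent."""
        if msg.lane_needed == self.lane and msg.eta_ms < 1000:
            # Yield the lane immediately rather than waiting to see the swerve.
            self.lane += 1
            return f"{self.vehicle_id}: yielding lane {msg.lane_needed}"
        return f"{self.vehicle_id}: no action needed"

# The scenario from the paragraph above: a car swerves into the oncoming
# lane to miss a child and broadcasts that intent to nearby traffic.
oncoming = Vehicle("oncoming-1", lane=2)
msg = IntentMessage("swerving-car", manoeuvre="swerve_left",
                    lane_needed=2, eta_ms=400)
print(oncoming.on_intent(msg))  # oncoming-1: yielding lane 2
```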
#autonomous #weapons #vehicles