Rationally Considering Autonomous Weapons and Ethics

This article from IEEE Spectrum presents one of the more rational discussions of, and counterpoints to, the recent push to ban autonomous weapons: We Should Not Ban ‘Killer Robots’ and Here’s Why http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/we-should-not-ban-killer-robots. It builds on and extends my own thoughts on the topic, first described here https://plus.google.com/u/0/+MarkBruce/posts/dvsWMFLV9Vi: it agrees that autonomous weapons are a bad thing, but argues that there is no way of stopping their development and likely deployment. It asks whether autonomous weapons on the battlefield might in fact be more ethical than the alternatives, given that they may lead to significantly reduced casualties, both combat and, most importantly, civilian, particularly if autonomous weapons could be made to follow far stricter rules of engagement than any human.

A few quotes:

The barriers keeping people from developing this kind of system are just too low.

What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing.

If autonomous armed robots really do have at least the potential to reduce casualties, aren’t we then ethically obligated to develop them?

Blaming technology for the decisions that we make involving it is at best counterproductive and at worst nonsensical. Any technology can be used for evil, and many technologies that were developed to kill people are now responsible for some of our greatest achievements, from harnessing nuclear power to riding a ballistic missile into space.

Perhaps the biggest surprise for me regarding this issue, and the open letter that sparked the wider awareness and debate, is how polarising it has been, and how many people seem incapable of discussing the issues rationally, preferring instead to assume an air of moral superiority while shouting down anyone who dares to suggest otherwise.

Philosophy and Ethics in Autonomous Vehicles

In a closely related area, the behaviour of autonomous vehicles on our roads, I was recently involved in a discussion thread in which I mentioned that philosophical “Trolley Problems” (https://en.wikipedia.org/wiki/Trolley_problem) will have to be tackled at some point in the operation of these vehicles. In the most basic example, you can flick a switch that results in one person being killed in order to save many others from being killed.

And, of course, we see this week that a great many people are already working on this problem, as covered in the summary article How to Help Self-Driving Cars Make Ethical Decisions http://www.technologyreview.com/news/539731/how-to-help-self-driving-cars-make-ethical-decisions/. Again, as a simple example: if a young child runs onto the road in front of an autonomous passenger vehicle before it can stop, should the vehicle swerve into oncoming traffic to avoid the child?
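To make the shape of the problem concrete, here is a minimal sketch of a naive utilitarian decision rule for that swerve example. Everything in it, the outcome names, the cost model, and the numbers, is my own hypothetical illustration, not anything proposed in the article:

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        """One possible action and its predicted harm (all values hypothetical)."""
        action: str
        expected_casualties: float  # probability-weighted harm to others
        occupant_risk: float        # risk to the vehicle's own passengers

    def choose_action(outcomes, occupant_weight=1.0):
        """Naive utilitarian rule: pick the action minimising total expected harm.

        occupant_weight encodes the contested question of whether a vehicle
        may weigh its own passengers differently from bystanders.
        """
        return min(outcomes,
                   key=lambda o: o.expected_casualties + occupant_weight * o.occupant_risk)

    # The swerve-or-brake dilemma, with made-up numbers:
    options = [
        Outcome("brake_in_lane", expected_casualties=0.9, occupant_risk=0.0),
        Outcome("swerve_oncoming", expected_casualties=0.3, occupant_risk=0.4),
    ]
    print(choose_action(options).action)  # swerve_oncoming

Of course, the entire difficulty lives in those numbers and that weight; estimating casualties in real time and agreeing on the weighting is exactly where the trolley-problem debate bites.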

A few quotes:

Given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly.

If you look at airbags, for example, inherent in that technology is the assumption that you’re going to save a lot of lives, and only kill a few.

As one of the commenters notes, the system becomes even better when all vehicles on the road are autonomous and able to communicate with each other: for example, if a car swerves into oncoming traffic to miss a child, the oncoming traffic will know this and can react instantly, swerving in turn to make room for the vehicle.
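A minimal sketch of that vehicle-to-vehicle idea might look like the following. The message format and handler logic are hypothetical assumptions for illustration only, not any real V2V standard:

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ManeuverIntent:
        """Hypothetical broadcast message: 'I am about to occupy this space.'"""
        vehicle_id: str
        action: str      # e.g. "swerve_left"
        target_lane: int
        eta_ms: int      # milliseconds until the manoeuvre begins

    def broadcast(intent):
        # A real system would send this over a radio link; here we just serialise it.
        return json.dumps(asdict(intent))

    def on_intent_received(raw, my_lane):
        """Oncoming vehicle's handler: yield space if the swerve enters our lane."""
        intent = ManeuverIntent(**json.loads(raw))
        if intent.target_lane == my_lane:
            return "shift_right_and_brake"  # react instantly to make room
        return "continue"

    msg = broadcast(ManeuverIntent("car_42", "swerve_left", target_lane=2, eta_ms=150))
    print(on_intent_received(msg, my_lane=2))  # shift_right_and_brake

The point is only that coordination turns a blind two-body collision problem into a negotiated one; the hard engineering is in the latency, trust, and authentication of those messages.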

#autonomous   #weapons   #vehicles

Undoubtedly Well-Intentioned. Probably Ineffectual.

The Future of Life Institute has published a very well-intentioned open letter seeking a ban on offensive autonomous weapons, and is soliciting signatures from those active in artificial intelligence and related fields: http://futureoflife.org/AI/open_letter_autonomous_weapons

I agree with all of the concerns, risks, and reasons that they list: that autonomous weapons will be possible in years, not decades, and that they have the potential to transform warfare to an extent on par with or surpassing gunpowder or nuclear weapons; that autonomous weapons will likely filter quickly through black markets and have significant destabilising potential; and that starting a military AI arms race is a bad idea. In addition, while Nick Bostrom hasn’t put his name to this letter, I think he is correct in identifying a number of serious risks in developing advanced AIs, especially when combined with weapons technology.

But I disagree that calling for a ban like this will in any way ameliorate or address those risks. Kevin Kelly is right: autonomous weapons are inevitable, and banning the inevitable sets you backwards https://plus.google.com/u/0/+KevinKelly/posts/ee2uPh2jTpP. Seeking to ban, delay, or put the brakes on the technology only gives up your equal footing with everyone else and cedes the advantage to groups who will continue with it regardless. A ban drives development temporarily underground, where you can’t see it and where it might take you by surprise.

Technological prohibition only postpones the arrival of that technology. In a globally interconnected network of agents, ideas, information, and tools, the ecosystem on which the technium evolves, banning a technology in one part of the network only shifts the fitness landscape; the local maximum representing that technology will still be there, and it will still be climbed, still sought out by other parts of the network selecting for it.

This recent, relevant piece by Aaron Frank, Can We Control Our Technological Destiny - Or Are We Just Along For the Ride? http://singularityhub.com/2015/07/12/can-we-control-our-technological-destiny-or-are-we-just-along-for-the-ride/, is also worth considering in this light. It reinforces the inherently evolutionary nature of technological development, references prominent thinkers in the field including Susan Blackmore and, once again, Kevin Kelly, and suggests that we humans are not directors of the evolution and development of the technium but merely vehicles for it, via technological memes. If there is one thing evolution has shown time and again, it is that it is smarter than we are. Better to co-opt it and learn from it than to temporarily suppress it.

Many countries tried to ban GMO crops; GMO crops are everywhere. The USA restricted embryonic stem cell research; the expertise developed elsewhere anyway before coming back to, and now being driven by, the USA. Or look at simple psychoactive drug compounds, banned in most countries and yet available everywhere. And here we have a proposal seeking to ban an inherently digital technology, one that can be copied and transported far more easily than any of the above. It was John Gilmore who said, “The Internet interprets censorship as damage and routes around it.” In a similar way we might say, “Evolution interprets an adaptive ceiling as pressure and flows around it.”

In addition, the logic quickly follows Cold War MAD-ness. Do we really expect China to trust that the US military won’t work on developing autonomous weapons, and do we really expect the USA to trust that the Chinese military won’t do the same? The question answers itself, and raises another: whether a military arms race in autonomous weapons technology is already underway. Especially when, at some point in the future, it will take trivially little effort to take state-of-the-art AI, autonomous drone and robot platforms, and weapons technology, and recombine them.
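The trust problem here is the classic prisoner’s dilemma. A toy payoff matrix, with numbers that are purely my own illustrative assumptions, shows why “develop” dominates for both sides:

    # Toy arms-race game: each side chooses to refrain or develop.
    # Payoffs (row player, column player) are illustrative utilities only.
    payoffs = {
        ("refrain", "refrain"): (3, 3),  # mutual restraint
        ("refrain", "develop"): (0, 4),  # unilateral restraint is worst
        ("develop", "refrain"): (4, 0),
        ("develop", "develop"): (1, 1),  # costly arms race
    }

    def best_response(opponent_action):
        """Row player's best reply to a fixed opponent action."""
        return max(("refrain", "develop"),
                   key=lambda a: payoffs[(a, opponent_action)][0])

    # 'develop' is the best reply whatever the other side does, so mutual
    # development is the only equilibrium, even though both sides would
    # prefer mutual restraint.
    print(best_response("refrain"), best_response("develop"))  # develop develop

Which is exactly why a ban that depends on unverifiable mutual trust is unlikely to hold.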

My main worry with such bans is that they risk leaving us worse off, more vulnerable, less protected, less able. I want to see the people on that list, many of whom I’ve heard of and respect, contribute to the evolution of this technology as best as they are able because I think we’re all better off by having those contributions than not. At the very least they would help develop a greater, more robust ecosystem of protective options, from autonomous anti-drone drones to kill switches and methods of evasion. Ultimately a ban seems to risk a very one-sided developmental process; like an animal birthed into a virgin ecosystem and finding itself with no natural predators and able to run ten times as fast as its prey. 

#evolution   #technium   #autonomous   #weapons

This Week in Technology

Originally shared by Andrij “Andrew” Harasewych
3D printed rockets, portable laser weapons, a virtual world creator, a maglev train speed record, cheaper genetic testing for cancers, iris recognition systems, and more! http://www.futurism.co/tech-weekly/

3D Printed Rocket: http://www.realtechtoday.com/technology/the-first-3d-printed-rocket-will-launched-soon/

Tactical Laser Weapon: http://spectrum.ieee.org/tech-talk/aerospace/military/tactical-laser-weapon-module-can-laserify-almost-anything

Creating Virtual Worlds from Text: http://cordis.europa.eu/news/rcn/122725_en.html

Japanese Maglev Train: http://www.bbc.com/news/world-asia-32391020

Breast Cancer Testing: http://www.forbes.com/sites/matthewherper/2015/04/21/start-up-pledges-to-cut-cost-of-breast-cancer-genetic-testing-from-4000-to-249/

Iris Recognition: http://www.biometricupdate.com/201504/carnegie-mellon-researchers-developing-long-range-iris-recognition-solution

#ScienceSunday   #Science #Technology #Tech #3DPrinting #Weapons #Lasers #VR #Muse #ProjectMuse #MagLev #Japan #BreastCancer #OvarianCancer #Genetics #health #medicine #biometrics