Undoubtedly Well-Intentioned. Probably Ineffectual.

The Future of Life Institute has published a well-intentioned open letter calling for a ban on offensive autonomous weapons, and is soliciting signatures from those active in artificial intelligence and related fields: http://futureoflife.org/AI/open_letter_autonomous_weapons

I agree with all of the concerns, risks, and reasons they list: that autonomous weapons will be possible in years, not decades; that they have the potential to transform warfare to an extent on par with or surpassing gunpowder or nuclear weapons; that autonomous weapons will likely filter quickly through black markets and have significant destabilising potential; and that starting a military AI arms race is a bad idea. In addition, while Nick Bostrom hasn't put his name to this letter, I think he is correct in identifying a number of serious risks in developing advanced AIs, especially when combined with weapons technology.

But I disagree that calling for a ban like this will in any way ameliorate or address those risks. Kevin Kelly is right: autonomous weapons are inevitable, and banning the inevitable sets you backwards (https://plus.google.com/u/0/+KevinKelly/posts/ee2uPh2jTpP). Seeking to ban, delay, or put the brakes on this technology only gives up your equal footing with everyone else and cedes the advantage to other groups who will continue with it regardless. A ban drives the technology temporarily underground, where you can't see it and where it might take you by surprise.

Technological prohibition only postpones a technology's arrival. In a globally interconnected network of agents, ideas, information, and tools acting as the ecosystem on which the technium evolves, banning a technology in one part of the network only shifts the fitness landscape; the local maximum representing that technology will still be there, and it will still be climbed, still be sought out by other areas of the network selecting for it.
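To make that intuition concrete, here is a minimal toy sketch of my own (the landscape, the regions, and the hill-climbing rule are all illustrative assumptions, not anything from the letter or from Kelly): several independent "regions" climb the same capability peak, one region bans further development, and the peak is still reached by the rest.

```python
import random

def fitness(x):
    # A simple single-peak capability landscape: the peak sits at x = 10.
    return -(x - 10) ** 2

def climb(position, steps=200, step_size=0.5):
    # Naive stochastic hill climbing: keep a random move only if it improves fitness.
    for _ in range(steps):
        candidate = position + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(position):
            position = candidate
    return position

random.seed(0)
regions = {"A": 0.0, "B": 1.0, "C": -2.0}   # hypothetical starting capabilities
banned = {"A"}                              # region A "bans" further development

for name, start in regions.items():
    final = start if name in banned else climb(start)
    print(f"Region {name}: start = {start:+.1f}, final = {final:+.2f} (peak at +10.00)")
```

The banned region simply stays where it started; the others converge on the peak anyway, which is the whole point: the ban changes who gets there, not whether anyone does.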

A recent, relevant piece by Aaron Frank, "Can We Control Our Technological Destiny - Or Are We Just Along For the Ride?" (http://singularityhub.com/2015/07/12/can-we-control-our-technological-destiny-or-are-we-just-along-for-the-ride/), is also worth considering in this light. It reinforces the inherently evolutionary nature of technological development, references prominent thinkers in the field including Susan Blackmore and, once again, Kevin Kelly, and suggests we humans are not directors of, but merely vehicles for, the evolution and development of the technium via technological memes. If there is one thing evolution has shown time and again, it is that it is smarter than we are. Better to co-opt and learn from it than to temporarily suppress it.

Many countries tried to ban GMO crops; GMO crops are everywhere. The USA tried to restrict embryonic stem cell research; ESC expertise developed elsewhere anyway before coming back to, and being driven by, the USA. Even simple psychoactive drug compounds, banned in most countries, are available everywhere. And yet here we have a proposal seeking to ban an inherently digital technology, one that can be copied and transported far more easily than any of the above. It was John Gilmore who said, "The Internet interprets censorship as damage and routes around it." In a similar way we might say that evolution interprets an adaptive ceiling as pressure and flows around it.

In addition, the logic quickly follows Cold War MAD-ness. Do we really expect China to trust that the US military won't work on developing autonomous weapons, and do we really expect the USA to trust that the Chinese military won't do the same? It's a silly question, and it raises the question of whether a military arms race in autonomous weapons technology is already underway. Especially when, at some point in the future, it will take trivially little effort to take state-of-the-art AI, autonomous drone, and robot technology and recombine it with weapons technology.
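The arms-race logic can be sketched as a one-shot prisoner's dilemma. The payoff numbers below are purely illustrative assumptions of mine, not anything from the letter; the point is only that whatever the other side does, "develop" pays better than "refrain", so restraint holds only with mutual trust and verification.

```python
payoffs = {  # (our choice, their choice) -> our payoff; higher is better
    ("refrain", "refrain"): 3,   # mutual restraint: best collective outcome
    ("refrain", "develop"): 0,   # we refrain, they develop: worst for us
    ("develop", "refrain"): 4,   # we develop, they refrain: unilateral edge
    ("develop", "develop"): 1,   # mutual development: costly arms race
}

for their_choice in ("refrain", "develop"):
    best = max(("refrain", "develop"), key=lambda ours: payoffs[(ours, their_choice)])
    print(f"If they {their_choice}, our best response is to {best}")
# Prints "develop" in both cases: developing is the dominant strategy.
```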

My main worry with such bans is that they risk leaving us worse off: more vulnerable, less protected, less able. I want to see the people on that list, many of whom I've heard of and respect, contribute to the evolution of this technology as best they are able, because I think we're all better off with those contributions than without them. At the very least they would help develop a broader, more robust ecosystem of protective options, from autonomous anti-drone drones to kill switches and methods of evasion. Ultimately a ban risks a very one-sided developmental process, like an animal birthed into a virgin ecosystem, finding itself with no natural predators and able to run ten times as fast as its prey.

#evolution   #technium   #autonomous   #weapons
