Combat robotics

After reading this article (http://www.dailyrecord.co.uk/news/uk-world-news/its-not-terminator-putins-robot-10240379) I couldn’t help but write down a few thoughts on combat robotic systems.

There is no denying that advances in robotics technology have utterly revolutionised warfare in the past decade. From moving munitions aboard a ship, to clearing mines, to delivering precision strikes, automated systems have enabled a rapidity and flexibility of military action never before imagined. Despite the utility of these systems, however, the ethics and legality of war have never been more open to debate: as human beings are increasingly cut out of the loop of deadly decision making, our ability to control the wars we initiate is incrementally diminished.

Combat drone systems may currently rely on human operators, but as the technology progresses it will be increasingly constrained by the processing power of the human brain. Robotic swarms and nanite systems already under research will necessarily require automated decision making in order to operate, and from that stage it is only a small step to a fully autonomous robot making its own decisions about whether to kill.

This may sound like the hysterical protestations of a digital luddite, but it should be remembered how much of the science fiction of the past has become the science fact of today; people are accustomed to the idea of ‘killer robots’ in the films they watch, yet seemingly unable to grasp that such technologies are already under development. A prevalent argument is that these dangerous technologies are “still a long way off” and that it is therefore not worth considering their ramifications just yet. However, drone technology already exists and remains unfettered by any codified international law; it is better to prepare for the worst now than to begin planning only when things are too far gone to halt.

There is an interesting area of debate surrounding whether certain avenues of research should simply not be explored, and I believe combat robotics should sit near the top of that list, alongside biological weaponry and certain designs of nuclear weapon that were thankfully abandoned during the Cold War.

Whilst I fully advocate the use of battlefield drones as a vital asset in modern warfare, completing the three-D missions (those which are dull, dirty, and dangerous) to a superb standard, a human being should always remain in the loop of their control. There may be modest gains to be had from allowing machines the autonomy to decide for themselves when to act, but, at the risk of an argumentative cliché, this begins a slippery slope whose abyssal bottoms should well remain unplumbed. The downing of Iran Air Flight 655, we should remember, resulted from an automated system telling its human crew that it had spotted a hostile target; such mistakes will invariably occur, and they become far more dangerous when human judgement is removed entirely. The consequences of malfunctions in automated defence systems may run far greater still; I have linked the most terrifying example below. So long as wars are fought for human goals, the participants should also be human; it is not simply that the idea of machines deciding who should live or die is distasteful, but that the inevitable consequence of granting them such power would be too appalling for the mind to truly comprehend.


https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
