Even if we could solve these problems, another one may then arise. Suppose we could build a robot that targets only combatants and leaves no collateral damage: an armed robot with a perfectly accurate targeting system. Well, oddly enough, this may run afoul of a rule proposed by the International Committee of the Red Cross (ICRC), whose SIrUS criteria would ban weapons that cause more than 25% field mortality or more than 5% hospital mortality. The ICRC is the only institution named as a controlling authority in international humanitarian law (IHL), so its rules carry real weight. A robot that kills nearly everything it aims at could have a mortality rate approaching 100%, far above the ICRC's 25% threshold. And that level of lethality may well be achievable, given the superhuman accuracy of machines, again assuming we can eventually solve the distinction problem. Such a robot would be so fearsome, inhumane, and devastating that it would threaten the implicit value of a fair fight, even in war. Poison, for instance, is likewise banned for being inhumane and too effective. This notion of a fair fight comes from just-war theory, which is the foundation of IHL.
(Hat tip to Zinnia Jones via Tumblr.)