The serious stuff: autonomous weapons

by Matthew Cobb

Not really the usual WEIT fare, and certainly not what I normally post here, but I feel it is pretty important. This is a 7-minute video, ‘Slaughterbots’, about autonomous weapons and what the future could hold. Watch it and be chilled.

The video was made by the Campaign to Stop Killer Robots (once that might have sounded funny). On their webpage, they point out that an intergovernmental meeting is taking place right now:

“Representatives from more than 70 states are expected to attend the first meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts (GGE) on lethal autonomous weapons systems on 13-17 November 2017, as well as participants from UN agencies such as UNIDIR, the International Committee of the Red Cross (ICRC)”

They made the video to draw attention to the problem and to pressure the GGE meeting, which “is not working towards a specific outcome or negotiating a new CCW protocol to ban or regulate lethal autonomous weapons”. Nevertheless, 19 nations have supported a ban on the development of such devices, and the European Parliament voted to ban “development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.” Their website explains:

“More than 3000 artificial intelligence experts signed an open letter in 2015 affirming that they have “no interest in building AI weapons and do not want others to tarnish their field by doing so.” Another 17,000 individuals also endorsed this call. The signatories include Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, Professor Stephen Hawking, and Professor Noam Chomsky. They include more than 14 current and past presidents of artificial intelligence and robotics organizations and professional associations such as AAAI, IEEE-RAS, IJCAI, and ECCAI. They include Google DeepMind chief executive Demis Hassabis and 21 of his lab’s engineers, developers and research scientists.”

If the scientists potentially involved in making this stuff are worried, we should all be. How can we stop the future described in the video from coming to be?

97 Comments

  1. Brujo Feo
    Posted November 16, 2017 at 9:57 am | Permalink

    The Terminator: “In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug…”

  2. GBJames
    Posted November 16, 2017 at 10:02 am | Permalink

    It does not bode well.

  3. allison
    Posted November 16, 2017 at 10:05 am | Permalink

    Aggression by robots would violate Asimov’s First Law of Robotics, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

    • DrBrydon
      Posted November 16, 2017 at 11:30 am | Permalink

      That would assume that the robot’s creator also held that as a belief.

    • Posted November 16, 2017 at 11:37 am | Permalink

      Good luck trying to implement them!

    • Posted November 16, 2017 at 1:07 pm | Permalink

      There are a number of science fiction books that segue from Asimov’s three robot laws to robots that don’t adhere to them.

      Our police forces are already too heavily militarized by being given weapons from the military; I believe those weapons include robots.

      Robots are being used in some Walmarts to perform functions that free up human employees to help customers. Robots are being used in care facilities to assist the elderly in timely and accurate pill taking. I think I recently read about sex robots also.

      I think there should be thorough discussion by the appropriate participants about whether or not to increase the capabilities and uses of military robots.

      • Posted November 17, 2017 at 11:14 am | Permalink

        Sexbots have progressed to the point (even as of 2012 or so) that Korean courts apparently ruled that they aren’t *yet* prostitutes.

    • barael
      Posted November 17, 2017 at 3:10 am | Permalink

      You mean the laws Asimov came up with just so he could write novels about how easily they can be subverted?

  4. Posted November 16, 2017 at 10:26 am | Permalink

    “How can we stop the future described in the video from coming to be?”

    We can’t. At least not with certainty. I am old enough to remember the ridiculous “duck and cover” routines that were supposed to protect us in the event of nuclear war, a far scarier scenario than the movie. One that very nearly happened in real life.

    So what do we do about the AI weapons threat? I think if we look back on the Cold War MAD policies, learn from the successes and mistakes, and apply those lessons to this new terror, we can find a way to mitigate the threat. Whatever solutions we come up with, we will have to accept some leakiness. AI weapons will be developed.

    • barael
      Posted November 17, 2017 at 3:13 am | Permalink

      Biological and chemical weapons have not proliferated (to an alarming degree) mostly thanks to treaties banning them and biologists/chemists anathematizing them. We just need governments and AI researchers to do the same.

    • Posted November 18, 2017 at 4:25 am | Permalink

      One huge trouble with AI weapons treaties is that most of the relevant technologies are dual-use. We already have the small explosives imagined in the video; it’s the AI facial recognition and quick-maneuvering capabilities that are lacking. But both of those technologies have enormous numbers of civilian uses.

      • Posted November 21, 2017 at 1:57 am | Permalink

        But the research to develop them will be largely financed by the military…

  5. Mark Reaume
    Posted November 16, 2017 at 10:30 am | Permalink

    The scariest part of this video is the realization that it is technically feasible to develop this technology today. I wouldn’t have said that even a few years ago.

  6. Randall Schenck
    Posted November 16, 2017 at 10:35 am | Permalink

    Yes, after all, who needs regulations.

  7. Posted November 16, 2017 at 10:41 am | Permalink

    You can’t stop these weapons from being developed, because some rogue state or organisation is guaranteed to develop them. At that point, everybody will be forced to join this particular arms race. My guess is that every country with significant military resources is pouring lots of money into the development of such weapons even as we speak. This certainly applies to the USA and China. It would be crazy not to.

    • BJ
      Posted November 16, 2017 at 11:03 am | Permalink

      It really is the large and powerful nations that are/will be developing this tech, rather than rogue states that lack the resources and brainpower to do so. Even if conventions against the development of such weapons are agreed upon, the development will simply be done in secret. As in the Cold War and many other situations before, nobody wants to end up in a war, only to discover that the other side has been developing massive advantages for years, while the “good guys” dithered and tried to follow the rules.

      The only regulation on arms races that has ever proven (mostly) effective is the imposition of rules by a more powerful state or organization upon less powerful states. The largest and most powerful are exempt from such regulation by force, though economic force sometimes makes a dent.

    • infiniteimprobabilit
      Posted November 16, 2017 at 5:21 pm | Permalink

      No, it is NOT necessary to develop autonomous weapons to counter other autonomous weapons. Superior firepower from conventional weapons will take them out. Or alternatively remote-controlled weapons, if there is too much aversion to risking lives against these things. (Land mines are a sort of crude precursor to autonomous weapons.)

      The risk is that it will appeal to idiots like Drumpf or countries like the USA – yes *exactly* like the USA – that believe in gadgetry, love starting wars in other people’s countries, but are extremely averse to casualties on their own side.

      cr

      • Posted November 16, 2017 at 5:30 pm | Permalink

        “The risk is that it will appeal to idiots like Drumpf or countries like the USA – yes *exactly* like the USA – that believe in gadgetry, love starting wars in other people’s countries, but are extremely averse to casualties on their own side.”

        Yes.

        • Posted November 17, 2017 at 11:15 am | Permalink

          I’ve been told by people working on warbots for the US that “minimize US soldier casualties” is one of the goals.

          • Posted November 18, 2017 at 1:56 am | Permalink

            Is this bad?

            • infiniteimprobabilit
              Posted November 18, 2017 at 2:13 am | Permalink

              It is if you think that citizens of all countries should have equal rights.

              In particular, that approach enables an aggressor to go to war with even less consideration for public opinion at home, and hence less restraint.

              cr

              • Posted November 18, 2017 at 9:05 am | Permalink

                I don’t think that you understand how wars are fought. This isn’t sport, and equal rights don’t mean that when I’m sent into battle my side will guarantee the other side a fair fight; it will try to get the advantage. In fact, I expect my side to do everything possible to maximize the risk for the enemy while minimizing it for me.

              • Posted November 18, 2017 at 5:14 pm | Permalink

                And yet, during the Vietnam war, under the draft, people, especially men, did care very much about being involved in a war that affected them personally. Without the draft, we all have less of a stake in what is done in our name.

              • Posted November 18, 2017 at 10:15 pm | Permalink

                So, by your logic, America has to guarantee that its own soldiers are killed in wars to get them involved?

              • Posted November 18, 2017 at 11:49 pm | Permalink

                No. Americans would have more of a stake in their own country if we still had the draft, and, better yet, some sort of National Service that applies to women and men equally and that exempts *no one.* Then maybe we wouldn’t be so willing to “get us involved” in yet more continuing, undeclared wars.

              • infiniteimprobabilit
                Posted November 18, 2017 at 5:59 pm | Permalink

                S A Gould – the Vietnam war was very much a case of the US public being involved in what was going on. Both due to press coverage and US casualties.

                Would it have made such an impact on American public opinion (to the point where that virtually forced a cessation) if the US involvement had been limited to e.g. supplying killbots to the South Vietnamese to do their deadly work out of sight of cameras?

                I agree that many American draftees had a double reason to be outraged – they were being forced to risk their lives in a war that they felt was morally unjustifiable (to put it mildly).

                cr

      • BJ
        Posted November 16, 2017 at 5:54 pm | Permalink

        “Superior firepower from conventional weapons will take them out.”

        Whether conventional or remotely controlled, it still requires human control, and humans (especially properly trained and/or enlisted humans) cannot be produced endlessly.

        “The risk is that it will appeal to idiots like Drumpf or countries like the USA – yes *exactly* like the USA – that believe in gadgetry, love starting wars in other people’s countries, but are extremely averse to casualties on their own side.”

        You may think it’s all just stupid stuff that appeals to imbeciles who like toys, but you hit on an important point: if one side has autonomous weapons and the other side requires soldiers on the battlefield, the latter side will quickly be decimated and lose. This makes the power and significance of autonomous weapons clear, among other advantages they would confer.

        I don’t necessarily want to see any of this happen, but I don’t see how anyone can stop it. China knows the US’s R&D teams will be working on this stuff; the US knows China’s R&D teams will be working on this stuff. Russia knows the EU will be working on this stuff; the EU knows Russia will be working on this stuff. And on and on and on.

        It’s the same as the proliferation of any other weapon that changes the landscape of war: no army wants to be the one left behind.

        • infiniteimprobabilit
          Posted November 18, 2017 at 2:20 am | Permalink

          What happens then, of course, is that the high-tech country is able to walk all over the opposition on the battlefield. Even guerilla war becomes impossible. So then the other side resorts to what it can practically do, which is ‘taking the war to the enemy’ in low-tech fashion i.e. terrorism.

          cr

  8. Posted November 16, 2017 at 10:48 am | Permalink

    These weapons are inevitable, though I suspect their presence will be regulated. Unlike assault weapons, which have been around far longer than any sufficient regulation, assault drones will have the scientific community involved from the start, which will help put administrative, if not engineered, controls on the technology. Unlike guns, regulations can precede the industry of assault drones.

    In the public, who would want such a device? I imagine some gun owners may lose some of their motivation to hoard guns and rather pick up a few drones for protection. It may not be long before intended or, more likely, unintentional assassinations take place.

    • DrBrydon
      Posted November 16, 2017 at 11:57 am | Permalink

      Regulations, as you say, are inevitable, but the components and skill to create these things are going to be public. Very sophisticated, battlefield-ready devices will require a large investment, and more skill, but I would bet that there is already someone out there who has mounted a shotgun on an autonomous device. It could get scary fast. (As I wrote this, though, I couldn’t find anything for ‘shotgun robot’ on YouTube, but there is a video about magnetic shotgun rounds for anti-robot work.)

      • athiest in a foxhole
        Posted November 16, 2017 at 2:58 pm | Permalink

        Bad guys are already using off-the-shelf recreational drones for military purposes in Iraq and Syria.

        http://www.washingtonpost.com/world/national-security/use-of-weaponized-drones-by-isis-spurs-terrorism-fears/2017/02/21/9d83d51e-f382-11e6-8d72-263470bf0401_story.html?utm_term=.3dbdb5ec0ee5

        So if civilian robotics and drone development continues (which it will), it won’t take much to take any off-the-shelf system and adapt it for nefarious purposes.

        Self driving cars with built in wifi or other systems that talk to the smart road and other smart cars: Just hack some part of the software, or get a job as a mechanic and install custom hardware that allows remote operation. They won’t need to steal a vehicle to run over a bunch of people – they’ll just hack a vehicle to do it.

        A few years ago I saw a show on one of the science or military channels about future weapons. They showed a self-driving four-wheeler ATV (like a Kawasaki or Polaris). They demoed a couple of weapons systems that could be mounted on it – a .50-cal machine gun and an anti-tank missile, both adapted from weapons the US Army currently has in its arsenal. They both worked just fine. The anti-tank missile could be set in position and wait for anything that looked like a tank to come by, and it would automatically shoot it. The next thing they talked about on that episode was legality and would we even want machines making the shoot – don’t shoot decision….

        • gravelinspector-Aidan
          Posted November 16, 2017 at 5:25 pm | Permalink

          The next thing they talked about on that episode was legality and would we even want machines making the shoot – don’t shoot decision….

          The officer who chooses whether to set up the kill bot has already made the kill/no kill decision.

    • gravelinspector-Aidan
      Posted November 16, 2017 at 5:18 pm | Permalink

      In the public, who would want such a device? I imagine some gun owners may lose some of their motivation to hoard guns and rather pick up a few drones for protection.

      Joe Random Gun Nut probably won’t wait 5 years to spend hundreds of thousands of dollars on an autonomous people-killing machine when they can buy effective off-the-shelf ones yesterday for at most hundreds of dollars.
      A land mine – anti-personnel or anti-tank – is an effective way of killing people who attempt to go into an area that you don’t want them to go into. It just doesn’t have any ability to discriminate between the person who set it and anyone else. Given that you’ve already decided to deny anyone else access to that piece of land, and you’re willing to use lethal force to back up that decision, you’re already most of the way down the road to minefields.
      If you really want discrimination between mine-setter and [rest of the world], then even my pretty crude electronics skills could probably rig up something using … a portable short range radio transmitter, a number of receivers, and a lot of wiring to suppress the detonator mechanisms (per-mine, or per group of mines) … which should allow me to suppress detonation for maintenance, or even access. A lot more fiddly, many more potential failure points. But essentially an off the shelf area-denial tool tomorrow, and a damned sight cheaper than an AI system. Less flexible too, but you’ve already made the decision to kill people who go into an area, so I suspect that availability trumps flexibility.
      There’s moral difference between a land mine and a kill-bot in my mind. That ship sailed … in the 16th century.

      • infiniteimprobabilit
        Posted November 16, 2017 at 5:25 pm | Permalink

        “There’s moral difference between a land mine and a kill-bot in my mind.”

        Did you mean “NO moral difference”?

        I agree, landmines are a sort of crude precursor to autonomous weapons.

        cr

        • gravelinspector-Aidan
          Posted November 17, 2017 at 3:40 pm | Permalink

          Damn. dodgy keyboard. Normally it produces misspellings which are easier to catch than omissions.

  9. Laurance
    Posted November 16, 2017 at 11:09 am | Permalink

    Oh, just what we need! More stuph to be stressed about! As if Trump Trauma weren’t enough!

    Life has always been uncertain. Earth isn’t always a friendly place. Natural disasters are real.

    But earthquakes and tornadoes just happen, that’s all. There’s no intention or deliberate malevolence behind them.

    Trump, nuclear war, and now these slaughterbots are different. This is malevolence that is on purpose. Somebody wants to do harm, somebody wants to hurt people.

    I need to find a balance between activism and self-care. On the one hand I don’t want to sit on my butt and do nothing about it. OTOH I don’t want to get so obsessed that I get all miserable and crazy (I was miserable and crazy during the Vietnam War).

    So I do take time out for beautiful things. It could be argued that maintaining one’s emotional well-being in the face of these malevolent horrors we now live with is an act of resistance.

    • Posted November 16, 2017 at 11:23 am | Permalink

      “More stuph to be stressed about! As if Trump Trauma weren’t enough!”

      Want more stuff to be stressed about from the Trumpistas? The U.S. is lifting the ban on elephant trophies. Trump Jr. can finally bring home that bloody tail he hacked off a dead elephant – the one he was photographed holding like a prize.

      • Mark R.
        Posted November 16, 2017 at 11:41 am | Permalink

        It’s amazing all the stupid little things he does for a tiny minority of people. His sense of “helping” people is seriously flawed…well that must come from the fact that he is seriously flawed.

  10. Posted November 16, 2017 at 11:40 am | Permalink

    There is a whole sub-sub-specialty covering the ethics of warbots and related topics as part of the computing and philosophy movement. People involved are already bumping up against the state of the art in developmental moral psychology and other topics related to the *human* aspect.

    I was in the very first class ever taught on robot ethics (by Peter Danielson at UBC), and that was almost 18 years ago. His conclusion was that the course *then* was way too late. Now what?

    I might add that I’ve been told that the US UCMJ does not allow a bot to question the legality of the conflict as a whole, as it is not an officer. (This is of course against the precedent at Nuremberg, but …)

    • Heather Hastie
      Posted November 16, 2017 at 1:14 pm | Permalink

      University courses are too late.

      We need to take a cue from religion. Get to the kids. Teach them this stuff from the beginning.

      I know the state schools in a few countries, including NZ, do teach ethics from the start. Methinks a greater emphasis is needed in the future.

      The problem is always going to be those who teach children they’re part of the in-group and others are part of an out-group. There will always be some who decide the out-group needs to be eliminated.

      • Posted November 17, 2017 at 11:18 am | Permalink

        I agree, but nothing is being done yet that I know of. About 10 years ago I saw that my old high school (I graduated almost 25 years ago) had been doing “online presentations” and such for years at that point. I took a glance at their curriculum (because traditionally my high school had been at the forefront of computing education for kids) and I saw almost *nothing* about media and computing ethics or the like. Quebec mandates a “moral education” curriculum, so it could be an interesting cross-over topic, but …

  11. Randall Schenck
    Posted November 16, 2017 at 11:48 am | Permalink

    A civilized world should be able to set regulations and rules concerning these weapons as they once did with things like chemical weapons or torture. You can probably count the United States out on that.

  12. rickflick
    Posted November 16, 2017 at 11:50 am | Permalink

    It’s frightening. The slaughterbots may be impossible to avoid. Maybe we’ll just have to face their inevitability and build millions of anti-bot-bots.

    • Posted November 16, 2017 at 12:19 pm | Permalink

      In terms of fighting off large-scale attacks by AI weapons we have an ace in the hole: EMP. The US has an active program developing non-nuclear EMP devices, which could render anything electronic useless including, I suspect, these AI weapons.

      The current non-nuclear EMP tech wouldn’t be useful against a small AI weapons attack; even though it is non-nuclear, it’s still a helluva bomb.

      • darrelle
        Posted November 16, 2017 at 1:46 pm | Permalink

        It adds costs, but electronics can be hardened effectively against EMP. Nothing’s perfect, there are limits, etc., but not an ace in the hole against a prepared opponent.

        • Posted November 16, 2017 at 1:52 pm | Permalink

          Raytheon and the US Military are somewhat more hopeful about their effectiveness than you.

          • infiniteimprobabilit
            Posted November 16, 2017 at 5:31 pm | Permalink

            Sort of a scorched-earth tactic because it would even more reliably fuck up every electronic device within its radius. Everybody’s cars, computers, fridges, phones – all dead.

            Some asshole (i.e. the military) might love to use it on an ‘enemy’ population, but it would be a really hard choice to use it on a domestic killbot outbreak.

            cr

    • Posted November 16, 2017 at 2:14 pm | Permalink

      The truly terrifying thing is: in the USA we will continue to give even more $$$ to the Pentagon, and to destroy innocent civilians everywhere with our current program of just taking out a whole village, surrounding hospitals, schools, etc. (The only way to be ‘safe.’) Because, let’s face it, Americans don’t much care who we kill anyway. So who needs that much precision?

      • Posted November 16, 2017 at 2:42 pm | Permalink

        “Because, let’s face it, Americans don’t much care who we kill anyway.”

        I do not agree with this statement at all. It is base nonsense.

        • Posted November 16, 2017 at 3:30 pm | Permalink

          We don’t particularly care about incarceration or shootings of blacks by cops, and we’re not treating the people in Puerto Rico with quite the enthusiasm we’d show if they were Texans, much less the rest of the (non-white) world. And clearly, this statement is just my own.

          • Posted November 16, 2017 at 3:58 pm | Permalink

            Obviously, it is a deliberate overstatement to say “Americans don’t care too much who we kill”, but there are way too many killings that could be prevented if the majority of Americans felt more strongly about them. I’m thinking about domestic gun violence, the police killing at the drop of a hat, military interventions with “collateral damage”, death penalty executions. Did I leave any out?

            • Posted November 16, 2017 at 4:25 pm | Permalink

              Leave any out? Probably. But I agree with your list and assessment.

              • Posted November 16, 2017 at 5:43 pm | Permalink

                I know it’s hyperbole, but like most hyperbole, it’s baloney.

                If it were really true that “We don’t particularly care about incarceration or shootings of blacks by cops…” we wouldn’t be having the national crisis we ARE having about the incarceration and shooting of blacks by cops.

                Blacks are incarcerated at a higher rate than all other races, that’s true, but it does not follow from that fact that Americans don’t care about it. There are many reasons for the problem, but claiming it is derived from unconcern about others’ lives is nothing more than an assertion, one which sweeps aside any other reasons.

                And shootings of blacks by cops? Actually whites are shot by cops at a higher rate than blacks (it’s true….look it up). Do we care about that? If not, is it because we have so little concern for the lives of white people?

                Or could it be that these problems -which are very real- have other causes and characteristics other than “Americans don’t much care who we kill anyway?”

                “And clearly, this statement is just my own.” <- Not baloney

              • Brujo Feo
                Posted November 16, 2017 at 6:13 pm | Permalink

                Um, no, mikeyc, it is not true that whites are killed by cops “at a higher rate” than blacks, unless by “at a higher rate” you mean “in raw numbers.” In which case you could have just said: “Cops kill more whites than blacks,” which would be true.

                But per capita, it’s not even close. Blacks are approximately 2.5 times more likely to end up toe-tagged in a cop shooting than whites are. Read it here: https://www.snopes.com/do-police-kill-more-whites-than-black-people/

              • Posted November 16, 2017 at 9:28 pm | Permalink

                “If it were really true that “We don’t particularly care about incarceration or shootings of blacks by cops…” we wouldn’t be having the national crisis we ARE having about the incarceration and shooting of blacks by cops.”

                National crisis?? You mean more talk and no action? We’ve done that before. Many, many times before. I’m from Chicago.

              • mikeyc
                Posted November 16, 2017 at 11:35 pm | Permalink

                Brujo feo – you’re using the wrong denominator. The rate should not be per capita. It should be per encounter with police.

              • Brujo Feo
                Posted November 17, 2017 at 2:11 am | Permalink

                mikeyc…was this intended as humor?

                Do you leave no room for the argument that “encounters-with-police-per-capita” might be part of the problem? You’ve heard the expression “DWB”?

              • Posted November 17, 2017 at 10:41 am | Permalink

                No, Brujo, that was NOT humor. When you ask what the rate is at which different races are killed by the police and you use per capita, you are comparing two populations that are not the same. You are leaving out the police – a critical player in the comparison. It’s like trying to define the risk of shark attack without including sharks – a yak herder in Ulan Bator has a considerably lower risk of shark attack than a surfer off Cape Town. To get a meaningful idea of your risk of shark attack, you MUST take the shark into account.

                When you ask what is the likelihood that you will be killed by the police when you encounter the police, your risk is higher if you are white. This is true whether the encounters with police are over violent or non-violent crimes.

                You did put your finger on the right point though – differences between the races in the frequency of encounters with police are the source of much of the current race/police problems. Blacks are far more likely to have encounters with the police than whites, though when they do they are less likely to die from it than whites. There are many reasons for this higher frequency of encounters, including the fact that blacks commit more crimes than any other race – a fact that itself has a myriad of causes, but I think it is also caused by the deep and intractable racial problems in society in general and many police forces in particular.
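
                To make the denominator point concrete, here is a toy calculation with invented numbers (they are not real statistics, only an illustration of how the choice of denominator can flip a comparison):

                ```python
                # Hypothetical, made-up numbers purely to illustrate how the choice of
                # denominator (population vs. police encounters) can flip a rate comparison.
                groups = {
                    "group_a": {"population": 200_000_000, "encounters": 10_000_000, "deaths": 500},
                    "group_b": {"population": 40_000_000, "encounters": 5_000_000, "deaths": 200},
                }

                for name, g in groups.items():
                    per_capita = g["deaths"] / g["population"] * 1_000_000     # per million residents
                    per_encounter = g["deaths"] / g["encounters"] * 1_000_000  # per million encounters
                    print(f"{name}: {per_capita:.1f} per million residents, "
                          f"{per_encounter:.1f} per million encounters")

                # group_a: 2.5 per million residents, 50.0 per million encounters
                # group_b: 5.0 per million residents, 40.0 per million encounters
                # With these invented figures, group_b looks worse per capita while
                # group_a looks worse per encounter, which is why the two commenters
                # above can each point to a different ratio.
                ```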

  13. Posted November 16, 2017 at 11:53 am | Permalink

    We can’t stop it. Safeguards in software can always be hacked. Morality clauses can always be ignored. And tiny drones with face recognition and shaped explosives are only one of many such threats.

    • Laurance
      Posted November 16, 2017 at 2:58 pm | Permalink

      Face recognition. Now, if we found ourselves in danger from these things, could we change our faces? Like with a mask? Or makeup? Or those Groucho glasses with the mustache and big nose?

      • Posted November 16, 2017 at 3:10 pm | Permalink

        The ones with Groucho glasses are the first to be killed!

        • Posted November 16, 2017 at 3:14 pm | Permalink

          *exits quietly removing Groucho glasses*

  14. darrelle
    Posted November 16, 2017 at 12:04 pm | Permalink

    Like any other technology, we can work to regulate it by a variety of methods, but it will always be a risk. Even if initially specific weapons applications are prevented by agreements among the big state actors, or something similar, that is unlikely to last.

    The problem is that eventually robotics and the computing systems that run them will be sophisticated enough that the top-of-the-line Roomba GPHB (General Purpose House Bot), available at all major retailers, will be capable enough to function as a killer bot simply by handing it a gun, giving it some simple verbal instructions and a bit of time to learn how to use the gun.

  15. Posted November 16, 2017 at 12:10 pm | Permalink

    On a more positive note, see this announcement of an AI Alignment Prize:

    “Stronger than human artificial intelligence would be dangerous to humanity. It is vital any such intelligence’s goals are aligned with humanity’s goals.”

    https://www.lesserwrong.com/posts/YDLGLnzJTKMEtti7Z/announcing-the-ai-alignment-prize/

    … but it won’t be enough, I’m afraid.

    • Gregory Kusnick
      Posted November 16, 2017 at 12:46 pm | Permalink

      Unfortunately, before we can align AI with our goals, we have to be able to articulate those goals with sufficient precision to be implemented by a machine. My suspicion is that in attempting to do so, we’ll run into something along the lines of Gödel’s Incompleteness Theorem, Turing’s halting problem, or the Arrow Impossibility Theorem, and discover that our most cherished values are ultimately incoherent, and that there can be no consistent system of ethics that encapsulates them all.

      • Posted November 16, 2017 at 12:53 pm | Permalink

        This sounds like a version of the argument that the brain has some sort of “secret sauce” that computers do not, and that limits to algorithms are not limits to brains. IMHO, it is just a matter of our lack of knowledge as to what algorithms the brain uses. Just because we do not know this now is not reason enough to bring in magic. We’ll figure it out, but perhaps not soon enough to save us from this apocalypse.

        • Gregory Kusnick
          Posted November 16, 2017 at 1:17 pm | Permalink

          You’ve misunderstood my point. I’m not suggesting that brains are somehow magical; I’m saying they’re all too fallible, and we could very well be fooling ourselves by thinking that the ethical intuitions cobbled together for us by natural selection can be formalized and codified into anything resembling a rigorous algorithm.

          • Posted November 16, 2017 at 1:24 pm | Permalink

            Ok, I see what you mean. But don’t we solve that problem all the time with legal contracts? There’s a long history of research into using programming languages (ie, hard logic) to express contracts. These won’t be infallible, of course, but much more air-tight because they are computable.

            • Gregory Kusnick
              Posted November 16, 2017 at 3:13 pm | Permalink

              Depends on what you mean by “solve”. Not infallible may be OK for legal contracts, where the consequences of an unintended loophole are relatively circumscribed. For world-governing AI, the potential consequences are much more serious, so we’d want something much closer to genuine infallibility.

              • Posted November 16, 2017 at 3:20 pm | Permalink

                I don’t think there’s any possibility of creating an infallible solution. Whether our protection is implemented as software, hardware, legalese, or moral commitment, it can always be circumvented.

              • infiniteimprobabilit
                Posted November 16, 2017 at 5:44 pm | Permalink

                If I was in charge of the world, I would conclude that the biggest threat to the future environmental sustainability of the Earth was – humans.

                cr

              • Posted November 17, 2017 at 1:39 pm | Permalink

                These days, I don’t believe we could collectively agree on Asimov’s Three Laws of Robotics as a good thing.

      • barael
        Posted November 17, 2017 at 3:23 am | Permalink

        >Unfortunately, before we can align AI with our goals, we have to be able to articulate those goals with sufficient precision to be implemented by a machine.

        This isn’t actually quite so; recent ideas for solving this involve using a (limited) kind of superintelligence itself to do the heavy lifting, since it can, by definition, learn our goals and ethics much better than we can express them ourselves. A kind of ethical bootstrapping via machine learning.

        Not that that isn’t an awesomely difficult thing to do as well, but it does avoid the chicken-and-egg problem.
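
        As a rough sketch of what that bootstrapping might look like, here is a toy preference-learning example. The Bradley-Terry-style choice model and the synthetic data are my own assumptions for illustration, not anything proposed in this thread: a hidden "value" vector is recovered purely from pairwise choices, without anyone writing the values down explicitly.

        ```python
        # Toy sketch: recover a hidden "value" vector from pairwise preferences alone.
        # Entirely synthetic data; a Bradley-Terry / logistic choice model is assumed.
        import numpy as np

        rng = np.random.default_rng(0)

        true_w = np.array([2.0, -1.0, 0.5])      # hidden human "values" to be recovered
        feats_a = rng.normal(size=(500, 3))      # features of option A in each comparison
        feats_b = rng.normal(size=(500, 3))      # features of option B

        # Simulated human choices: prefer A with probability given by a logistic choice model.
        p_prefer_a = 1 / (1 + np.exp(-(feats_a - feats_b) @ true_w))
        prefers_a = (rng.random(500) < p_prefer_a).astype(float)

        # Fit reward weights from the comparisons alone (gradient descent on the logistic loss).
        w = np.zeros(3)
        lr = 0.1
        for _ in range(2000):
            pred = 1 / (1 + np.exp(-(feats_a - feats_b) @ w))
            grad = (feats_a - feats_b).T @ (pred - prefers_a) / len(prefers_a)
            w -= lr * grad

        print("recovered weights:", np.round(w, 2))   # close to true_w, up to sampling noise
        ```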

        • Gregory Kusnick
          Posted November 17, 2017 at 3:53 am | Permalink

          My suspicion is that one of the first things such an ethical superintelligence might say will be something along the lines of “Wait, you want to maximize human well-being and personal freedom at the same time?” — and then show us a proof of why it can’t be done.

        • Posted November 18, 2017 at 4:37 am | Permalink

          +1. Since children can do that, a sufficiently bright AI can do it too. Admittedly children start from an advanced baseline of instincts about fairness etc., but those traits are ultimately revealed through human behavior, which the AI can study.

      • Posted November 17, 2017 at 11:23 am | Permalink

        The only relevant one is the halting problem – but I would argue it applies to us. So if we want an AI to share something like “our” values, IMO we have to train it. How we do that when its interactions with the environment are ex hypothesi different than ours, I have no idea. People in the robot ethics communities are struggling with this, and we even get people asserting that we can “program ethics” somehow, even “divine command theories”, which in my view are almost fools’ errands.

        • Posted November 17, 2017 at 11:37 am | Permalink

          I don’t see much difference between training an AI to be ethical and building an “ethics module” into its software. Code is code and it all can be hacked, be given “bad ethics” in the first place, or simply not behave as we intended (ie, bugs).

        • Gregory Kusnick
          Posted November 17, 2017 at 1:19 pm | Permalink

          Arrow seems relevant to the extent that voting is an instance of applied utilitarianism: You want to maximize the number of people who are happy with the outcome while minimizing the possibility of perverse outcomes. Arrow showed that there can be no system that satisfies all the criteria we would rationally expect of it. It doesn’t seem so far-fetched to imagine that the same could be true of ethics writ large.
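
          For a concrete flavour of the kind of result Arrow’s theorem generalizes, here is the classic three-voter cycle (a toy illustration only, nothing specific to AI ethics): each pairwise majority vote is decisive, yet together they are cyclic, so no consistent collective ranking satisfies them all.

          ```python
          # Classic Condorcet cycle: three ballots whose pairwise majorities cannot be
          # assembled into any consistent overall ranking.
          voters = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]  # each ballot: best -> worst

          def majority_prefers(x, y):
              """True if a strict majority of ballots rank x above y."""
              votes = sum(ballot.index(x) < ballot.index(y) for ballot in voters)
              return votes > len(voters) / 2

          for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
              print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")

          # All three lines print True: A beats B, B beats C, C beats A -- a cycle,
          # so no "social preference" ordering is consistent with these ballots.
          ```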

  16. kevin7alexander
    Posted November 16, 2017 at 12:21 pm | Permalink

    As others have pointed out, these weapons are inevitable. The US would never sign on to a ban. American politicians love Jesus, but they worship weapons.
    Nukes are not even nearly the scariest thing out there; MAD can work only because it is so difficult to make nuclear weapons that only governments have the power to produce them in any dangerous numbers. This is not so with micro-robots. Someone somewhere will crank up a production line and in little time turn out enough of these things to kill everyone on earth.

    • Posted November 16, 2017 at 12:31 pm | Permalink

      Actually, I think you’ve got this a bit wrong. Nukes are far scarier for their potential for large-scale damage. These AI weapons are unlikely to be useful for that – EMPs can destroy them with ease. So, unlike nukes, they aren’t much use as strategic weapons.

      But there is a way in which these can become very big problems. EMP devices are not much use tactically – even the non-nuclear EMP devices are very large bombs. So AI weapons can be very dangerous because they would be used tactically, and the use of EMP devices in response to tactical weapons could make things much worse.

      • Gregory Kusnick
        Posted November 16, 2017 at 1:03 pm | Permalink

        “EMPs can destroy them with ease.”

        That may be overly optimistic. As I understand it, EMPs do most of their damage by inducing high voltages in long conductors such as network cables and power transmission lines. Small, self-contained devices are relatively easy to shield from such effects. In a lightning storm, it’s the stuff plugged into the wall that gets fried, not the stuff in your pockets.
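
        A back-of-the-envelope illustration of that point, with numbers chosen purely for illustration (real EMP coupling depends on the pulse’s frequency content, geometry, and orientation): to first order, an electrically short conductor picks up a peak voltage of very roughly field strength times length, so a pocket-sized device couples far less energy than a long cable run.

        ```python
        # Crude illustration only: V ~ E * L is a rough upper bound for short conductors,
        # and the linear scaling breaks down for very long runs, but it shows why long
        # wiring is the vulnerable part. The field strength below is an assumed value.
        E_FIELD_V_PER_M = 50_000  # illustrative peak field, V/m

        for length_m in (0.05, 1.0, 100.0):  # pocket gadget, appliance cord, building wiring run
            v_peak = E_FIELD_V_PER_M * length_m
            print(f"{length_m:>6} m of conductor -> up to ~{v_peak / 1000:,.0f} kV induced")
        ```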

        • darrelle
          Posted November 16, 2017 at 1:41 pm | Permalink

          Yes, EMP is not a killer app against killer robots. Perhaps an arms race, but the US military, and others I’m sure, developed EMP-resistant electronics during the latter half of the Cold War.

          Whether or not equipment is hardened against EMP is more a question of cost/risk analysis than whether it’s possible (which it is, it’s been done). If the US, for one example, was committed to developing killer AI robots and concluded that it was likely enough that opponents would use EMP to defend against them, then they would pay the added costs to have them hardened.

          • Posted November 16, 2017 at 2:00 pm | Permalink

            Technology marches on. Things have changed somewhat since the Cold War. Only last month Boeing tested Raytheon’s newest EMP weapon against military-hardened electronic targets:

            “Though speculation exists surrounding the weapon’s effectiveness against military-hardened electronics, the prospects of its use are bright.

            “This technology marks a new era in modern-day warfare,” said Keith Coleman, Boeing Phantom Works’ CHAMP Program Manager. “In the near future, this technology may be used to render an enemy’s electronic and data systems useless even before the first troops or aircraft arrive.”
            http://mil-embedded.com/news/raytheon-emp-missile-tested-by-boeing-usaf-research-lab

            These are not the bad old bomb-style EMP devices, which are effective against hardened electronics. But those are bombs, so they’re not of much use unless you don’t mind destroying things and people along with the electronics. The new and improved™ generation of EMP missiles doesn’t even damage structures.

            A brave new world with such wonders in it.

            • darrelle
              Posted November 16, 2017 at 2:19 pm | Permalink

              Sounds like the age old arms race. Someone increases the capability of an offensive system and gains a clear advantage. For some period of time. Until someone else devises an improved defense against it. Weapons and defenses are never really completely obsolete; they just become less effective. How many times have people said some new weapon will be unstoppable? How many times has that been the case? For very long, anyway.

              If this system turns out to be as effective as an ace in the hole against electronics on a large enough scale to crush a future AI killer-bot apocalypse then it would be more significant than a nuclear arsenal. I hope it ain’t so.

  17. claudia baker
    Posted November 16, 2017 at 12:38 pm | Permalink

    Oh great. Another thing to keep me awake at night.

    • Laurance
      Posted November 16, 2017 at 3:11 pm | Permalink

      Hello, Claudia…another thing to keep us awake at night, yes, but I am making an effort to NOT be kept awake. In my earlier post I was talking about self-care as a form of resistance.

      Today I’m doing art work. I’m painting, and I have music going while I paint. I’m focusing on making something beautiful today. Tomorrow I am going to our book club. Right now we’re reading “The Poetry of Impermanence, Mindfulness, and Joy,” edited by John Brehm. For an hour I will be with people I like, reading things that put us in a good frame of mind.

      We’re all suffering from Trump Trauma, and we find that spending some time focusing on good things helps us keep from going sh*t-smearing crazy at the horror show going on and the peril we’re constantly in with this dangerous lunatic in the White House.

      Claudia, I hope you and others can find something good to help you get some peace of mind in these troubling times.

      I remind myself that if I go batty from stress I won’t be of much use when it’s time to take action.

      • claudia baker
        Posted November 16, 2017 at 5:17 pm | Permalink

        Thank you for this, Laurance. Yes, I am doing some things to try to ease the stress. Trying to practice mindfulness is probably the biggest thing. It’s very hard, but is getting easier with practice. And meditation. It’s helping.

  18. Dean Reimer
    Posted November 16, 2017 at 1:14 pm | Permalink

    Daniel Suarez’s 2013 novel “Kill Decision” is all about this possibility. It will scare the crap out of you.

    Suarez did a TED talk on the subject of “lethal autonomy.” You can see it here: https://www.ted.com/talks/daniel_suarez_the_kill_decision_shouldn_t_belong_to_a_robot

  19. Posted November 16, 2017 at 1:28 pm | Permalink

    This is a case in which I wish we could trust the wisest of us to determine what AI weapons will exist and how they will be used. DARPA has been heavily involved in funding university studies on numerous potential weapons and defenses. We can assume some forms of AI weaponry will be created and used. Will it be used in accordance with the rules of “humane warfare”? There are ample examples, now and throughout history, of our governmental and military leadership using inhumane weapons and military tactics, sometimes lying to us, sometimes reporting what they want us to believe, and frequently treating us like mushrooms. This will not be different.

  20. clarkia
    Posted November 16, 2017 at 2:15 pm | Permalink

    Time to dust off that burka. Or at least the Trump mask.

    • Laurance
      Posted November 16, 2017 at 3:13 pm | Permalink

      Yep. That’s what I was just wondering. If some sort of mask or face altering garment could keep one of those awful things away if it was aimed at us.

  21. Posted November 16, 2017 at 7:50 pm | Permalink

    @darrelle at 2:19 pm:

    —Sounds like the age old arms race. Someone increases the capability of an offensive system and gains a clear advantage. For some period of time. Until someone else devises an improved defense against it.—

    This reminds me of a quote:

    “Whenever man develops a better mousetrap, Nature immediately comes up with a better mouse.” (James Carswell).

  22. Posted November 16, 2017 at 7:57 pm | Permalink

    Is this AI threat comparable to the “neutral” discovery of nuclear fission leading to the development of the atom bomb?

  23. helenahankart
    Posted November 17, 2017 at 8:17 am | Permalink

    Only a good robot with a gun can stop a bad robot with a gun

  24. Mark Reaume
    Posted November 17, 2017 at 8:27 am | Permalink

    The Future of Life Institute (https://futureoflife.org/), started by Max Tegmark and friends, is quite interesting. I’m reading Max’s book on AI now (Life 3.0: Being Human in the Age of Artificial Intelligence), which goes over many of the scenarios that could arise if AIs break out.

    The institute has many heavy hitters on the board, and they also look at the effects of biotech, climate change and nuclear weapons.

  25. Posted November 17, 2017 at 11:27 am | Permalink

    BTW, I should also have mentioned that there is already a *textbook* on this subject:

    _Governing Lethal Behavior in Autonomous Robots_, by Ronald Arkin.

    Arkin is the one who told me that the US UCMJ regards a bot as “ineligible” for the precedent at Nuremberg.

