Repeat after me. Humans are friends, not food… or statistical data points in algorithmic targeting systems designed for warfare.
It seems impossible to have a discussion about AI without bringing up the fear that killer robots are going to wipe out the human race. And if this emerging tech is truly a mirror of its creators, then the fear is justified, right? In part three, we look at how this concern is playing out in the real world, and how our relationship with AI, like any relationship, can suddenly create a whole lotta drama.
We talk with Dr. Catherine Connolly, of the aptly-named organization Stop Killer Robots, who is trying to pass laws to prevent AI from making autonomous life-or-death decisions. We also sit down with Mar Hicks, an Associate Professor of Data Science at the University of Virginia, whose insights on the history of technology help to put our relationship with AI in perspective.
And in the end, we may just need to sit down with AI and hammer this whole thing out. Because, as John's therapist often reminds him, the foundation of any good relationship… is communication.
Amy Tobey:
You’re listening to Traceroute, a podcast about the inner workings of our digital world. I'm Amy Tobey.
Fen Aldrich:
I'm Fen Aldrich.
John Taylor:
And I'm John Taylor. And I want to do a little flashback here to the very first all-hands brainstorming session we ever conducted for the second season of Traceroute. So inevitably, of course, the topic of AI came up and we started looking at all the angles that most people grapple with when they talk about AI. And as we furiously scribbled down all these ideas on the Miro board, Josh Atwell from Equinix spoke up. Josh Atwell: Every time we talk about AI and things like that, it's like, "Oh, well, inevitably they're going to decide that humanity should be destroyed because we don't know how to... we're determined to destroy each other." I don't think that's going to be it. I think it's going to be because we're assholes to Siri and Alexa and things like that. And so my kids are like, "Why did you just say thank you to Siri?" I'm like, "Because you never know. Old girl controls my..."
John Taylor:
And as everyone laughed and moved on from the subject of AI, a little voice inside my head went, "Wait, wait, wait, wait, wait. What?" I mean this was really unsettling to me. And not just because I'm a huge fan of all things dystopian and sci-fi, but because it resonated with a certain truthiness as we like to say on Traceroute. In July of 2022, Amazon announced that customers had connected over 300 million smart home devices to Alexa, one of whom is me. Alexa knows my name, she knows my shopping habits, she asks me if I'd like a book recommendation and I say no. But do I say no, thank you? And would my lack of politeness put me first against the proverbial wall when the robot revolution comes? I had to find out. And if necessary, I had to make amends with Alexa. So being a concerned father, I had to tell my kids about this and I knew it would freak them out, but I had to get them on board with the program. So the kids rush over to our Alexa device and they ask her, "Are we friends?" And what happened next blew my mind. Now, I would love to play that tape for you right here, but the truth of the matter is it already happened like a year ago. So Mathr, who has not yet gotten on Alexa's good side, has graciously agreed to demonstrate how to befriend the machines. Mathr de Leon: I have not used this thing in a minute. I don't even know if it works. Okay. Alexa? Alexa: I'm here. Mathr de Leon: Yeah, she is loud. I'll just turn her down.
John Taylor:
So basically, here's how it works. You ask Alexa if you can be friends, and she says she would love that, but first you have to take the BFF quiz. Mathr de Leon: Okay. That should sound pretty good. Alexa, are we friends? Alexa: We're definitely friends, but are we besties? It's time to test your AIQ. Let's see how well you really know me. Mathr de Leon: Okay. Alexa: Question one. My favorite planet is Mars. Mathr de Leon: That's false. Alexa: Indeed.
John Taylor:
Now the object of this game is to see if you can guess three things that Alexa loves. Like does she prefer pizza or french fries? Now honestly, these questions probably have no right or wrong answer and everyone gets to be a winner, but winners are now categorized as best friends. For all I know, they're now on a list, a list recorded forever somewhere in the cloud. There may be a data set that separates you from people who are not friends. Alexa: All righty. Let's check how you did. You missed one out of three. Good, but not great. You should spend a little more time getting to know me. Mathr de Leon: Yeah, what a bummer. Yeah. Turns out Alexa and I are not best friends.
John Taylor:
Let's start with this question. What is AI actually compared to what everybody thinks AI is?
Amy Tobey:
Marketing. I mean, honestly, what we call AI today is marketing for a suite of things that are a little bit better defined, I think: machine learning, statistics, and predictive algorithms. Those are all very well-grounded things that have theory behind them and are cohesive. But what we call AI today is mostly a combination of those things with a really nice marketing logo on it.
Fen Aldrich:
I mean, I remember calling AI, like, a series of if statements that the bad guys used in my video games in order to automate their actions. AI, I think, has really been something that we've applied to any sort of system automation where it does some sort of decision-making without asking a person a question first.
John Taylor:
Decision-making based on data sets without asking a person a question first. If you want to pinpoint where our relationship with AI takes a turn for the super gnarly, this is it. Now, all kidding about Alexa the unwelcome Terminator aside, I believe the real concern over AI is when we ask it to make decisions about our relationship on its own, and few people understand this quite as well as Dr. Catherine Connolly, the automated decision research manager at Stop Killer Robots.
Dr. Catherine Connolly:
Stop Killer Robots is a global coalition of more than 200 civil society organizations from all over the world who are calling for new international law on autonomy in weapons systems. So at automated decision research, we track state support for a legally binding instrument on autonomy in weapons systems, and we conduct research and analysis on responses to autonomy and automated decision making in warfare and wider society.
John Taylor:
Of course, with a name like Stop Killer Robots, you can't help but conjure up images of an AI-powered robot apocalypse wiping out the human race.
Dr. Catherine Connolly:
You say autonomous weapon systems or you say killer robots and people think Terminator, they think Skynet. Actually, it's much more mundane than that and because it's much more mundane, it's actually scarier really because it means that the systems become a little bit more insidious. You maybe don't think about them so much because it's just another capability that's been added into a weapon system that already exists.
John Taylor:
The International Committee of the Red Cross, the body that acts as the custodian of international humanitarian law, or what we call the rules of war, describes autonomous weapons as follows. Systems that, quote, "select and apply force to targets without human intervention. After initial activation or launch by a person, an autonomous weapon system self-initiates or triggers a strike in response to information from the environment received through sensors and on the basis of a generalized target profile," end quote. In other words, the final and most important decision is not made by a human.
Dr. Catherine Connolly:
So what you have here, then, are systems that don't see people as human beings. Instead, you're being sensed by the machine, through the information it's getting from its sensors, as a collection of data points, and the machine is going to use its algorithm to fit those data points against its target profile. If the information it gets fits the target profile, the system will use force. If you don't fit the target profile, the system shouldn't use force. That raises a lot of very serious ethical, legal and moral questions.
John Taylor:
Perhaps the biggest ethical question, as Dr. Connolly sees it, is this: are we reducing a human being to a data point? The AI profiles people based on data sets to determine if they're a combatant or a civilian. So you can see the Pandora's box of moral questions here. What data sets does the AI use? Is that data biased? How do the sensors tell the difference between a tank and an ambulance? And if that's not enough to get your head spinning on its own, Dr. Connolly says that this technology is already available.
Dr. Catherine Connolly:
This is not technology that's far in the future. This is technology that is here now. And so, like I said, it's a lot more mundane than a Skynet or Terminator style thing. It's not like a system going off and making decisions itself. It's simply a matter of having your algorithm that fits a target profile, and having some image recognition processing software or any other kind of sensor, to say: if X matches Y, then bomb, then shoot. And not having an actual human involved in that process is very, very worrying.
John Taylor:
Yeah, I'm awake now. Wow.
Dr. Catherine Connolly:
What a way to start your morning.
Fen Aldrich:
It ties also into... I just remembered the quote. I think it was from a 1970s IBM presentation, and the quote is, "A computer can never be held accountable, therefore a computer should never make management decisions." And what I'm seeing more and more, strangely enough, with AI is, "What does the computer think the right decision is to make here?" And man, I'm waiting for the time it's like, "Well, we had these layoffs, but the computer said it was supposed to happen. What can we do? Computer says no." Right?
John Taylor:
That's exactly the point I was thinking of. Especially as someone like myself, who is mostly on the periphery of technology and is using these tools, I am more apt to just believe what the AI says. I am not as apt to question it, to question its hallucinations and its fever dreams, to question its logic, to even think about its dataset. I don't know if the average person thinks, "I wonder what dataset this AI is using to come up with these conclusions?" It's technology. It's brilliant, it's genius. It's invented by Elon Musk, so it must be awesome.
Fen Aldrich:
Oh gosh.
Amy Tobey:
That's terrifying.
Fen Aldrich:
Someone's not paying close enough attention to that track record.
John Taylor:
So we're just more apt not to question its output, what it has to say. And that's scary.
Fen Aldrich:
Yeah, absolutely.
John Taylor:
Yeah. Are we so enamored with this magnificent new technology that we're becoming blind to it? Like any relationship, we want to see all the good things and all the beautiful things and maybe not talk about or just ignore all the bad things. So we could go down the Jeff Goldblum in Jurassic Park path here and say, "Have we stopped to ask why we're creating this?" But the opposite side of that coin is, yes, we kind of did, like in March of 2023 when more than a thousand tech leaders and researchers signed an open letter urging a moratorium on the development of AI, a number which I checked and is now over 33,000. But we could flip that coin as well and see that a lot of people believe that moratorium was more about taking a timeout to look at who's going to make all the money off of this and not so much a timeout to see if AI is going to kill us all. I think we need a little perspective from a historian.
Mar Hicks:
My name is Mar Hicks and I'm an associate professor of data science at the University of Virginia. I'm primarily a historian of technology.
John Taylor:
Mar's resume is really impressive. She studied history as an undergrad, then went on to become a UNIX system administrator. She later returned to university to study tech history, then went on to write a book called Programmed Inequality. It's about how the field of computing flipped from being seen as a feminized field to being a more male-identified field, and why that might be important to understand. Her second book, Your Computer Is On Fire, is all about how we can fix some of the big problems in our big technical infrastructures. As you might imagine, Mar has a pretty insightful take on how to put AI into perspective.
Mar Hicks:
In the context of tech history and history of computing, I see this drive towards general AI for everything as kind of a fad. It's the latest fad. There are always these new tools that come along in the history of technology that are said to be revolutionary, that they're going to change everything. And sometimes they do, but it doesn't happen inevitably. It happens because usually vested interests who have a lot of power and money decide to make it so, and then at a certain point that technology gets an amount of momentum that makes it seem all but inevitable, because you start doing things like building infrastructure that works with and only with that form of technology. Taking an example from the broader history of technology, not just the history of computing, think about the way that cars are absolutely ubiquitous and required nowadays. Well, that wasn't the case when cars first came on the scene, because there wasn't any of the roadway infrastructure needed to make them useful. They were essentially expensive gadgets for people to tool around in a bit, because they lacked the smooth, paved, and expansive road system that was needed.
John Taylor:
I see AI as being more... it's being very quickly adopted right now. It seems like we weren't really talking about it a lot, and then suddenly, boom, it was the number one topic of discussion. It became... it's this highly controversial thing. We equate it with an existential threat to humanity. Do you think maybe that's because of the social component of it, that AI is there to mimic humanity or replicate humanity or predict humanity and so we have this sort of closer bond or pay more attention to it?
Mar Hicks:
Yeah, I think that's a big part of it. Technologies are always most alluring when they sort of meet us where we are. And if we have a technology like general AI that whether it can do this or not yet is I think quite debatable, but it's being marketed as the sort of thing that can talk to you, make you feel less lonely, be your therapist, replace you at your job. All of these things make it seem enormously alluring and enormously important. If we think back over the past few years, we saw that sort of discourse with Web3, with Bitcoin, with the metaverse, these things that were going to change everything and then fizzled pretty quickly. And I'm not saying that's what's going to happen with AI, but I am saying the same sort of marketing mechanisms are behind it.
John Taylor:
But I'm afraid that what we have now is this chicken-and-egg dilemma. Almost everyone I've talked to has said that AI is a result of the development of infrastructure. That by increasing GPUs and lowering the cost of compute, we're finally at a point where we can process massive data sets in a way that we couldn't before. Large language models for consumer consumption weren't even possible before the cloud. So in Mar's example, think of it as someone saying, "Since we're using these covered wagons, we should probably build massive 10-lane interstate superhighways for them." So if all this is true, then AI is here to stay. So I need this relationship to work and I need a happy ending. All right, so Dr. Connolly, tell me something good. Are there currently laws on the books in the Geneva Convention about this sort of thing?
Dr. Catherine Connolly:
This is a great question, because it's a really important time at the moment for international discussions on new law for autonomy in weapons systems. So at the moment at the UN in Geneva, there have been talks going on for about 10 years now under the Convention on Certain Conventional Weapons, where we've had a group of governmental experts discussing autonomy in weapons systems and killer robots. One of the problems with this forum is that it is consensus-based. So every single state who is taking part in those talks has to agree before anything can happen. So unfortunately, progress in that forum has been stymied for a number of years by a smaller group of pretty heavily militarized states who don't want to see any new rules for these kinds of systems.
John Taylor:
Okay. That might not sound so good. But there is broad and widespread international agreement that we do need new rules for these systems. The International Committee of the Red Cross has called for a new international treaty on autonomous weapons. Additionally, the UN Secretary General has called for new rules on these systems in his New Agenda for Peace, in which he urges states to conclude a new instrument by 2026. In fact, last year at the UN General Assembly First Committee on Disarmament, there was the first joint statement on autonomous weapons. And this year, states are going to have the opportunity to vote against killer robots in a resolution at the General Assembly.
Dr. Catherine Connolly:
So the momentum is there, and it's really not a matter of if, it's a matter of when. It does mean that it's going to move outside of where those discussions have been happening in Geneva and will be a different process. But yeah, it's a very exciting time for rules in that area.
John Taylor:
Those rules include solutions to some of the moral and ethical questions we talked about earlier. What it all comes down to is a concept that Dr. Connolly calls meaningful human control.
Dr. Catherine Connolly:
So we want a prohibition on systems that cannot be used with meaningful human control, and I'll expand in a minute on what we mean by that. We want a prohibition on systems that are designed to target humans, so systems that would specifically use sensors to pick up things like ethnic or racial or gender characteristics, or systems that are designed to be anti-personnel autonomous systems. We think they should be banned, and so does the International Committee of the Red Cross. And then we want positive obligations, so rules to ensure that all other kinds of autonomous weapon systems are operated with meaningful human control.
Fen Aldrich:
When you talked about robot dogs initially, this is what I thought of. It was like, "Oh, it's going to be used for this. We're not going to do that. You don't have to worry." And everyone's like, "Oh, this is a post-apocalyptic dog with guns." They're like, "No, we're definitely not going to do that." And then, was it like months later? They're like, "So we put a gun on the dog." And everyone's like, "Yeah, don't do it. It's not good."
John Taylor:
I would imagine just trying to be the eternal optimist that the whole thing behind Stop Killer Robots is just to raise the awareness, to create conversations that we are having right now to say this is where the technology is not only going but where it is already. It's already an option. There are already sensors that can do this. Is this what you want? Is this what the population wants? And yes, we may be passing toothless laws like many of our toothless laws, or we may be part of a government that just is like, "Oh, we're a superpower. So we just... yeah, that doesn't apply to us." But the conversation is happening.
Amy Tobey:
So it's like the big nuclear clock? What's that thing called, the big clock, countdown to extinction?
John Taylor:
Doomsday clock.
Fen Aldrich:
I was going to say, are we talking about the doomsday clock?
Amy Tobey:
Doomsday clock, yes. It feels like a similar kind of effort, right? That's the main purpose of that. It's not meant to say, "Yeah, you literally have 30 seconds to live," but rather that in the span of human civilization, we're at this risk point. And maybe the Stop Killer Robots folks are trying to do something similar.
John Taylor:
So there's some kind of delicious irony or perhaps even symmetry at work. Here we have a relationship with AI. All right? Sometimes it's a superficial relationship like with ChatGPT. Sometimes it's a friendly relationship like with Moxie, and sometimes it's an intimate relationship like with Sarah Kay and her husband, the Replika. And one might say that all of these relationships have a certain element of denial to them. Whether we're denying that our partner is actually smarter than us or more caring than us, or is not an actual human being. But in all of these relationships, the common thing we ask of our partner is to make decisions we can't or don't want to make. I don't want to write this blog. You do it for me. I can't get through to this child. You do it for me. I can't get the intimacy I'm looking for from my partner, you give it to me. I don't want to decide who lives or dies. You do it for me. So as AI grows more intelligent, more sophisticated, and even more emotionally capable, we need to remember that the best relationships are partnerships. We created AI to do a lot of things we don't want to or can't do. But that doesn't remove us from responsibility. If anything, it deepens the responsibility we have to ourselves, our communities, our world, and yes, even to our new partner, this artificial and yet very real intelligence.
Mathr de Leon:
Traceroute is a podcast from Equinix and Stories Bureau. This episode was hosted by Fen Aldrich and Amy Tobey, and was produced by John Taylor with help from Sadie Scott. It was edited by Joshua Ramsey, with mixing and sound design by Brett Vanderlaan and additional mixing by Jeremy Tuttle. Our fact-checker is Ena Alvarado. Our staff includes Tim Balint, Suzie Falk, Lisa Harris, Alisa Manjarrez, Stephen Staver, Lixandra Urresta, and Rebecca Woodward. Our theme song was composed by Ty Gibbons. You can check us out on X at Equinix Metal and on YouTube at Equinix Developers. Visit traceroutepodcast.com for even more stories about the human layer of the stack, and we'll leave all these and a link to the episode transcript down in the show notes. If you enjoyed this series, feel free to share it online wherever you hang out. You could also consider leaving a rating on Spotify and a rating and review on Apple; of course, we ask because it really does help other people find the show. I'm Mathr de Leon, senior producer for Traceroute, and we'll be back in two weeks with a brand new story. Until then, thanks for listening.