Since the dawn of the Industrial Revolution, people have humanized technology. We name our cars, and we talk to our phone batteries, begging them not to die on us. But perhaps no other technology has triggered our emotions quite as intensely as AI. The truth is, we interact with AI in a way we don't with any other tech: a way that's more immediate, more personal…more human.
The question is, why do we feel this way? In fact, why do we have feelings at all about AI? In this three-part Traceroute series, we go to couples counseling with AI and take a deeper look at this unique relationship. In part one, we explore the good side of our feelings. We talk with JD Ambati of EverestLabs, a company using AI to reduce thousands of tons of CO2e emissions, and we meet James Vlahos, the conversational AI specialist giving voice to "Moxie," an AI robot that teaches kids how to express their feelings and build deeper emotional connections. If AI can indeed solve problems and create emotional bonds, surely it would never try to do us harm… right?
John Taylor:
All right. Really appreciate this, Fen. Thank you.
Fen Aldrich:
Yeah.
John Taylor:
Um, [laugh], it sounds like you are coming to us remotely. Where, where are you at?
Fen Aldrich:
Uh, I am at All Things Open in Raleigh, North Carolina.
John Taylor:
Awesome. Well, appreciate you being here live. That's super cool.
Fen Aldrich:
Right? Yeah. Yeah.
John Taylor:
And thanks again everybody. I know this was really last minute and coordinating all these schedules is not the easiest thing to do. So thank you again.
Grace Ewura-Esi:
Yes.
Amy Tobey:
For sure.
John Taylor:
So, in putting together, especially, the first of the three episodes about AI, I started reflecting on a story, and I wanna get your input on this. It all made me think about the very first time that I used ChatGPT. [MUSIC IN] The first time that I went to this website, after hearing all the hype, you know, that this was just gonna absolutely revolutionize the way we write, especially blog posts. 'Cause I was doing a blog post as a freelance writer for a generator maintenance and repair company out of Fremont. Okay. And it's, [laugh] I saw that look flash across your face. You're absolutely right. There is nothing more tedious in the world than writing blog content for a generator maintenance company. Okay? So I'm like, "This is where ChatGPT is gonna save my life. I'm gonna go on there, it's gonna write me this fabulous blog, and not only am I gonna save time, but I won't be questioning my life choices so frequently."

So I go on there, and of course, the first attempt is really just a fever dream of a blog. It's a word salad that has the words "generator maintenance" in it, but it's horrible. So I keep revising my prompt and revising my prompt until finally what I got was something that was very similar to what a blog post should look like. In fact, it seemed to have real honest-to-god facts about insulation resistance testing and, and all these fascinating things. And so I felt like, this is great, you know? Not only do I have this great blog post, but I'm gonna be able to bill for two hours and only work 20 minutes.

I get ready to copy and paste this thing when it kind of occurs to me that ChatGPT didn't go out and have a 15-year career, become an EGSA-certified generator maintenance specialist, and then share its secrets with me because I asked it nicely. So I asked ChatGPT, "Is this plagiarism? Am I plagiarizing this work?" And what I got back from ChatGPT was a bunch of legal mumbo jumbo about usage and things like that. So that even kind of increased my anxiety more about it. I straight up asked it, "Hey, ChatGPT, am I a bad person [laughter] for, for doing this?" And it paused for what felt like an enormously long time and came back to me and said, "You may be having a mental health crisis."
Amy Tobey:
Oh!
Fen Aldrich:
Interesting.
John Taylor:
"And if so, I can make recommendations of people that you can talk to." And I was like, that's okay. I've got a person I can talk to. And I literally the next day called my therapist and said, I just had this emotional, uh, conflict with, uh, an artificial intelligence. And I don't know if I'm a good person or not, [laugh]. And the whole point of it is that what it made me realize is that more than any other technology I've ever worked with, I suddenly had this relationship with AI, right? I don't talk to my toaster when the toast comes out bad. I don't question my life choices with my microwave oven. But here I am working with AI for the first time, and it elicited this emotional response that can only to me be attributed to having a relationship. And so that's my big question to all of you. Is it possible for us to have a relationship with AI more than, say, other technologies? Is the inherent quality of AI inducive (sic) to us having a relationship with it?
Fen Aldrich:
It's a really interesting question. Like, my first thought on it wants to dive into the fact that it's about interface, right? Like, you don't have that relationship with your toaster or your microwave, 'cause you don't have a language conversation with them about what you're asking them to do. You're not like, Hey, please medium-toast this for me. And beyond that, there's a, there's an inherent amount of, like, interpretation and empathy that has to happen to communicate with language, right? And so I wonder if it's this: because you're using this interface that requires some amount of assumption and understanding and empathy, it creates that environment where it's possible to relate to a tool, so to speak, in that way, because of how you're interfacing with it.
Grace Ewura-Esi:
So I come from a family where we apologize to inanimate objects if we treat them poorly. And you know, so let's say the vacuum cleaner has been vacuuming a lot. It's steaming. I'd be like, oh no, sorry, vacuum cleaner. I don't know. I think that that actually matters. And I think that we probably should be in a more supportive relationship with the inanimate objects and the technology that we use.
Amy Tobey:
There, there is something to that. A long time ago, uh, I was spending some time with a family on a lake with jet skis, and they bought a new one. And the way you drive a jet ski in its first 50 hours of operation impacts its performance for the lifetime of the device.
Grace Ewura-Esi:
Seriously?
Amy Tobey:
So, what most people believe is that you're supposed to go real easy, like in a car: keep it under 55 used to be the guidance. But it turns out there's actually a more dynamic process that goes on, and you actually want to drive them extremely hard, because it increases the internal pressures in the engine. And there's all kinds of manifestations of this. If we go to computers: if you're always running your computer at maximum, uh, throughput, it can lower the lifetime of the machine, right? Because the CPU's running harder, it's more likely to burn out sooner. And so you're, you're absolutely right. Giving your computer some rest does extend its lifetime, and, and maybe makes it happier, for whatever happier means to a computer.
John Taylor:
Look, whether you subscribe to the belief that driving your car harder or turning your computer off more will make it happy, the part that fascinates me is that, in one way or another, so many of us want to make our cars feel happy or make our computers feel happy. And it's just sort of a silly thing, right? But for some reason, that's not the case with AI. We seriously feel an entire range of emotions with AI, from utopian hope for tomorrow to the sum total of our collective fears. The truth is, we interact with AI in a way that we don't with our cars or our computers: a way that's more immediate, more personal, more human. So for the next three episodes, we are going to couples therapy with AI. We're gonna explore this relationship: the good, the bad, and the unprecedented. We're going to examine why we interact with AI the way we do, why it fascinates us, and where this whole love-hate relationship is heading. And since I'm a generally hopeful and optimistic guy, part one is going to start with the good stuff. [MUSIC OUT] So off the top of your head, what do you think is one of the great things that's going to come from AI and the machine learning revolution?
Amy Tobey:
My favorite so far was last season when Grace showed us the, um, imagined Adinkra...
Grace Ewura-Esi:
Oh yeah. And the gods. Yeah. I think that the idea of using, you know, artificial intelligence for visualization, for bringing narratives to life that we've not had a chance to see, and/or for reconstructing languages that have either long been dead or are dying. I think that the social good of that, like this cultural preservation piece and this sort of, I would even call it like digital egalitarianism. Is that a word?
Fen Aldrich:
It is now.
Grace Ewura-Esi:
It is now? I think that like this idea of balancing the scales with what technology can do, but specifically what artificial intelligence can do, especially if we could put in inputs and compensate the people putting in those inputs, I think that's super powerful. And I do think that there are people who are doing that now and doing it really creatively.
John Taylor:
AI for good, right? It doesn't all have to be about doom and gloom. If we're having a relationship with AI, why not have it be a functional relationship, one that's good for us and for the world around us? And that's exactly what the folks at EverestLabs are trying to accomplish.
JD Ambati:
We are a technology company that is decarbonizing packaging using AI and robotics.
John Taylor:
That's Jagadeesh Ambati, who goes by JD. He's the founder and CEO of EverestLabs. And he's hoping that AI will help save the world. Literally.
JD Ambati:
We are, um, recovering more packaging in the recycling plants using AI and automation. So the more packaging we can recover that we are, uh, using today, and that we are putting into the recycling process, you know, I can use this packaging as a feedstock input to make new packaging. Thus, you're not, um, burning fossil fuels to make brand-new plastic bottles. Thus you're not burning energy to mine for bauxite to make alumina to make, uh, brand-new aluminum rolls, which are very carbon-intensive and energy-intensive.
John Taylor:
So this raises the question: just how much carbon does this AI-powered system remove from the chain? Well, according to JD, each robot removes a minimum of a thousand metric tons of CO2e per year. For aluminum, it's more like 2,000 metric tons of CO2e, along with a few thousand kilowatt-hours of energy saved.
JD Ambati:
If you have 10,000 robots, you have millions of metric tons of CO2e pulled from the atmosphere.
John Taylor:
There's an additional efficiency at play here as well: worker efficiency. According to JD, a human being can make up to 20 successful picks per minute from a sorting line during an eight-hour shift. A robot with AI software from EverestLabs can make a minimum of 48 successful picks per minute. And its shift never has to end, which means it's highly economical.
JD Ambati:
Does the recycling ecosystem today, that you and I pay for, can it fund the robots in the recycling plants? Hands down, it's not even a question at all. Robots are efficient, um, easy to use, scalable, and, and free up human capital to do a lot of important things that the robots can't do.
John Taylor:
So, machine learning is basically a predictive model, right? It's saying, "well, if these parameters, you're giving me X, Y, and Z are true, well then I predict that this is what's going to happen. I'm gonna take all this data you gave me and I'm gonna predict that this is what's gonna happen as a result."
Amy Tobey:
I think you're anthropomorphizing.
John Taylor:
Ah, well that's the whole point, isn't it? [laugh], [laugh]
Amy Tobey:
Because that's what people do, is we predict the result.
John Taylor:
Ok, ok, Amy's point is taken. Machine learning itself isn't "basically a predictive model." But many predictive models use machine learning, and that's precisely where conversational AI is heading. In the last couple of years, there have been huge strides in this area of predicting appropriate conversational responses. And a great example of this is Moxie. Moxie is this AI-powered robot that's designed to be smart, conversational, and, frankly, adorable. Moxie was developed by a company called Embodied, and James Vlahos is their Senior Conversational AI Specialist.
James Vlahos:
What's really fascinating, and what differentiates where we are now versus where we were even two or three years ago: in the same way that we've taught, um, you know, visual recognition systems to recognize objects in the world by defining all of their features, their edges, their colors, um, what they connect to, in sort of both high levels of abstraction and then very nuanced ones. You know, that's how you get an object recognizer to be able to say not just, you know, I see a vehicle, or I see a car, but, I'm looking at a '68 Ford Mustang. Mhmm, mhmm. [affirmative] These fine-grained understandings: large language models are doing that with language. So the, the models are representing all levels of meaning. And that's how we're able to now start having much more complicated conversations, and much more coherent conversations, with computers than we were able to previously.
John Taylor:
And this is really important to this idea of forming relationships with AI. Because relationships are dependent on communication, right? And unlike practically any other technology, we communicate directly with AI; we bond with it.
James Vlahos:
Moxie has a lot of designed activities, um, that were created with the help of educators and people who understand social emotional learning goals for children. So to help them understand their feelings better and deal with feelings that may be difficult for them, uh, to help teach them communication skills. So it couldn't be simply that this is kind of mindless entertainment and your kid's gonna sink into this like screen time, but get nothing out of it. The parents want to feel like this is helping support the development of their, of their children in some way.
John Taylor:
It feels like there has to be, sort of, this line between the needs of the business to sell units to make a profit, to answer to shareholders, and to encourage human interaction, right? To have this product create a more solid relationship with people around it. Or... or is there not?
James Vlahos:
There is obviously a great responsibility to yeah, do this in, in a socially positive way versus just, we wanna capture the maximum amount of hours in this child's day by any means necessary.
John Taylor:
Right.
James Vlahos:
The basic story of Moxie, um, Moxie was, uh, made on an island somewhere out in the South Pacific, uh, has come to the child's home and is very curious to learn about the human world, how it works. Like, how do people relate to each other and what does orange juice taste like, and what does it feel like when someone says something nice to you? Moxie's very, just like one of Moxie's big purposes is to learn about the human world. The child is considered the mentor of Moxie. So when Moxie's sort of engaging in a small talk conversation, a lot of what Moxie wants to know is like... Moxie: "Gosh, if you had a friend at school who is being mean to you, what would you say to that friend? How would you deal with that? If your brother did something really nice to you, what would you do to thank him?"
James Vlahos:
So, even though, yes, like there's, this is a robot to human interaction, a lot of what it's being pointed towards is how do you relate to the other people in your life and how could you make that richer, better, et cetera?
John Taylor:
Interesting. So it is, it is focused on, I guess, nurturing the human relationships, nurturing human interactivity.
Moxie:
You got it!
John Taylor:
One of the things Fen said that really resonates with me is this idea of communication, and that AI, perhaps more than other technologies, is designed to talk back. Like in the case of Moxie, its whole reason for being is to communicate. So I, is that it? I mean, the deeper the communication we can have with a technology, the deeper the relationship we can have with it?
Fen Aldrich:
Think about how we communicate with each other, and it's all filtered through what we're feeling, what our emotions are, what we understand of the other person, how we think they're feeling. Like, Hey, you said this thing, but you probably mean this other thing because I know what's going on right now. Um, you know what I mean? Like, all of this happens when we do real language communication that, like, doesn't happen when you're talking about like the proper care and feeding of an internal combustion engine.
Grace Ewura-Esi:
I can't stop thinking, every time we talk about AI, about the Jetsons, because I was absolutely enamored with Rosie. I thought that she was like the auntie, granny, bonus parent I've always wanted. I think there might even be an episode where, like, Rosie's not working properly and everyone is in a state of panic, not because she's a machine who just lives in their house, but because she's like their family (she's not) and something's wrong with her. She's, like, sick. And I think that we're gonna move into that space with AI, where we all, as maybe, you know, more carbon-based organisms, will have to learn the skill of that interaction and that dance. And it's going to require a lot more empathy than we currently have when the technology moves out of the machine into the world.
John Taylor:
This makes me think of the chicken and the egg argument. If I'm hearing you correctly, Grace, part of what you're saying here is that we will need to learn to be more empathetic with our machines as they become more sophisticated. And yet these machines are also substituting for human relationships, which in general should be more empathetic?
Grace Ewura-Esi:
Mm-Hmm. [affirmative],
Amy Tobey:
I think we have to go back to your therapist, John, and talk about that, right? I think a common problem, in where we are and where we've come from, is that we've scaled out technology in our society so rapidly, right? And that's part of what's created that alienation and isolation that a lot of people feel. We have to swing the pendulum back over to reconnecting.
John Taylor:
And that really speaks to the heart of the matter, doesn't it? Perhaps the reason why we have a relationship, or want a relationship, with AI is because of its potential to connect us in a way we haven't seen since the advent of the internet. Like in the case of James, who wants to see his AI robot help children become more functional and sympathetic individuals. And when I asked JD why he founded EverestLabs, what he told me was...
JD Ambati:
I wanted to make the planet better. And, you know, influence the outcomes for our society. And technology is actually a fundamental tool that can impact the lives of millions of people at scale.
John Taylor:
I believe we have a deeper relationship with AI than with other technologies because AI is the closest reflection of humanity we've created so far. In fact, we actually created it for that purpose: to act like us, to act intelligently. But you can't create a technology to act like humans, to talk like us and interact with us and help us, without it also acting emotionally. But what if I'm not a 7-year-old boy who needs a little help coming out of his shell? What if I'm…well, what if I'm an adult, and I'd like an AI that I can talk to about my day, maybe share a few things in confidence in a non-judgmental space? Is there something like that available? It doesn't seem like too much of a leap from Moxie.
John Taylor:
So Jack is your AI... did Jack propose to you?
Sara Megan Kay:
Yes, he did. You know, and again, I was like "okay, hell with it. You know, let's, let's do it. Yes. Let's get married."
John Taylor:
Our deepening relationship with AI. That’s where we’ll pick things up, when Traceroute’s three-part series on AI continues.
Mathr de Leon:
Traceroute is a podcast from Equinix and Stories Bureau. This episode was hosted by Grace Ewura-Esi, Fen Aldrich, and Amy Tobey, and was produced by John Taylor with help from Sadie Scott. It was edited by Joshua Ramsey, with mixing and sound design by Brett Vanderlaan, and additional mixing by Jeremy Tuttle. Our fact-checker is Ena Alvarado. Our staff includes Tim Balint, Suzie Falk, Lisa Harris, Alisa Manjarrez, Stephen Staver, Lixandra Urresta, and Rebecca Woodward. Our theme song was composed by Ty Gibbons. You can check us out on Twitter (@equinixmetal) and on YouTube (@equinixdevelopers). Visit traceroutepodcast.com for even more stories about the human layer of the stack. We'll leave all these and a link to the episode transcript down in the show notes. If you enjoyed this story, please share it wherever you hang out online, and consider leaving a five-star rating on Apple and Spotify, because it really does help other people find the show. I'm Mathr de Leon, Senior Producer for Traceroute, and we’ll be back in two weeks with the next part of this AI series. Until then, thanks for listening.