These days, our producer John Taylor's got more on his plate than just production for Traceroute. You see, by night he's also… kind of a rock star. And his band is really more like a second family. Lately, though, that family is going through ch-ch-ch-ch-changes. For starters, his keyboardist, Arman, has moved away and left John at a real crossroads. Does he hire a new face to fill the void? Or does he cling to the vibe he's shared with Arman for the past year and turn instead to tech for a solution?
In this episode of Traceroute, we delve into a quest for the perfect tool to help bridge the physical and digital divides that increasingly appear between musicians like John and Arman. From California’s Bay Area to the buzzing streets of Hong Kong, we find a host of technologists who, spurred on by the impacts of a global pandemic, are already hard at work tackling the kind of low-latency global networking solutions that just might be the key to keeping the band in one semi-remote piece.
But within this particular stack, there lies an even bigger conundrum. Because even if we somehow manifest the right tool for the job, is the magic of this so-called “vibe” even replicable? Or is there another solution altogether?
Amy Tobey:
You're listening to Traceroute. I'm Amy Tobey.
Mathr de Leon:
And I'm Mathr de Leon, filling in for John Taylor who's got a bit of a predicament. You see, when he is not producing for Traceroute, John moonlights as a bit of a rock star. And for the past 35 years, John and his brother Steve have collaborated with a number of very talented musicians, who've not only helped shape the sound and trajectory of their band, The Uninvited, but have become what John calls a second family. And the newest member of that family is a young up-and-comer from San Diego.
Arman Sedgwick-Billimoria:
Hello, my name is Arman Sedgwick-Billimoria.
Mathr de Leon:
Arman is an Emmy-nominated sound editor, mixer, Foley artist, and composer, who also happens to be...
Amy Tobey:
An outstanding musician.
Mathr de Leon:
Back in 2016, Arman relocated to San Francisco to pursue a degree in cinema with an emphasis in sound design. And a little over a year ago, he was scrolling through Facebook.
Arman Sedgwick-Billimoria:
And I saw this post from this big, goofy dude named JT and he's like, "Hey, looking for a keyboardist." And immediately was like, wow, I love the music, I love the sound, I love the vibe, I love the energy. And I shot my shot so to speak, and immediately got a message back, "Hey, thanks for reaching out. Can you make Tuesday's rehearsal?" And I was like, "Sure, absolutely." And apparently nailed the audition on the fly and the rest is history, so to speak.
Mathr de Leon:
Except that part is history. After seven years in the city, Arman is at a crossroads and unfortunately...
Arman Sedgwick-Billimoria:
There isn't really much of a film industry scene in the Bay Area. And so I had to make the tough decision. Yes, music is always going to be the dream. I'm still going to pursue that, but I also need to make money. You got to survive.
Mathr de Leon:
I'm going to hand the baton to John here and ask you, this might be an uncomfortable question. So this happens all the time, right? I mean, historically someone moves away, you find a new keyboard player, you find someone to replace them. With all the talent in and around the city, why not just find a new keyboard player?
John Taylor:
That's a fantastic question, and one would think that that is absolutely the logic. But for us, vibe is the single most important thing. The band is my outlet of joy. This is what I do to make up for the dystopian hellscape that is the rest of my life, right? Playing with the band is the ultimate source of joy. It's a family. The right band is a family and everybody is on the same page and is bringing happiness and love and this incredible vibe. That's just super critical. And it's not easy. It is not easy to find that family, to make that vibe. It's just not easy.
Mathr de Leon:
So with Arman back in Southern California, John's predicament isn't that he can't find another keyboard player. It's that Arman and the band are a family, and they don't want to give up the experience of jamming together in real time, nor do they simply want to default to what we'll call asynchronous collaboration. What they do want is to keep the vibe alive. Amy, if you recall our first recording session together, when Grace asked you what you were most looking forward to from the future of tech, you said that someday you'd love to sing a duet with someone over a Zoom call, right?
Mathr de Leon:
And if we're able to address the latency that occurs between two people communicating at a great distance, not only would we be able to sing in time, but we might also hear the nuances of each other's performance, and then we'd really know what it means to be remotely in sync. So in trying to help John with his situation, the big question that I think we're here to try and answer is, is this possible? Will we ever get to a point when network latency is so low that John and Arman can maintain the same vibe online that they've grown accustomed to when performing together offline?
Amy Tobey:
Well, I have a thing I'm going to send you in Slack real quick to take a look at. And so there's this thing that's been going around for a little over a decade. Basically it's a list of how long different things in a computer take. It starts at the top, an L1 cache reference, which means on the CPU, only a little tiny bit of that is the actual processor, and then a bunch of it is actually memory that's right on the same silicon. And so that takes half a nanosecond to go and get some data and bring it back to the core.
Amy Tobey:
And then as you go down this, we get out of nanoseconds and we get into microseconds. Stuff like transfer one kilobyte over a one-gigabit network, and then we get to SSDs, where we can read 4K randomly in 150 microseconds. And if you look at the next one, for hard drives, all of the times in the first 16 lines are below what is considered the human ability to perceive.
Mathr de Leon:
So this doc we’re referencing, “Latency Numbers Every Programmer Should Know” by Jeff Dean, based on a post by Peter Norvig, is actually from 2019 and might be a little out of date. But suffice it to say, there are a lot of processes in computing that add latency, and we’ll get more into those later. But there’s one in particular that jumps out here, and that’s line 16, which is the time it takes a packet to make the trip from California to the Netherlands and back: 150 ms. And while that doesn’t fall quite below the threshold of what most people perceive as obvious latency, which is actually between 100 and 120 ms, it does fall well below the threshold at which networked communication starts to break down, around 250-300 ms. And that’s where it gets interesting. Because why is there such a discrepancy between what we perceive and what we can tolerate when communicating online? Like, in a way, this feels to me like we’re adapting to the shortcomings of our digital environment.
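As a quick aside, the table entries Amy and Mathr walk through here are easy to sanity-check in a few lines of Python. The nanosecond values below come from the widely circulated "Latency Numbers Every Programmer Should Know" table, and the 100 ms perception threshold is the figure quoted above; this is an editorial sketch, not part of the episode.

```python
# Entries from "Latency Numbers Every Programmer Should Know," in nanoseconds.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Send 1 KB over 1 Gbps network": 10_000,       # 10 microseconds
    "Read 4 KB randomly from SSD": 150_000,        # 150 microseconds
    "Disk seek": 10_000_000,                       # 10 milliseconds
    "Round trip CA <-> Netherlands": 150_000_000,  # 150 milliseconds
}

# ~100 ms: the low end of what most people notice as obvious latency.
PERCEPTION_THRESHOLD_NS = 100_000_000

perceptible = [name for name, ns in LATENCIES_NS.items()
               if ns >= PERCEPTION_THRESHOLD_NS]
print(perceptible)  # only the transatlantic round trip crosses the line
```

Everything short of the transatlantic round trip sits comfortably below what a listener would ever notice, which is exactly Amy's point about the first lines of the table.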
Amy Tobey:
And that's how Zoom does its trick, is it kind of dials in the latency and tries to match people up, but keep it under that human perception threshold.
Mathr de Leon:
Interesting, so it’s almost like digital sleight-of-hand. Like a clever element of its design that seems reliant on our sort of natural tendency to compensate for sensory input that doesn’t line up. Right? Which makes a ton of sense when you understand how we actually perceive the world around us. Because… the real world is not a zero-latency environment. OK, let me give you an example. Imagine you're at a comedy club and maybe you're 10 feet from the stage. The time it takes a joke to travel that distance is around eight or nine milliseconds, which is generally thought to be below that human ability to perceive. But then the joke enters your ear and causes your eardrums to vibrate. Those vibrations are transmitted through the middle ear to the inner ear, which converts them into electrical signals. And then those signals travel along the auditory nerve to the brain, where it interprets the joke as specific sounds, locates its origin, integrates it with other sensory input. And then: laughter. The whole process can take up to a few hundred milliseconds. But because our brains are so good at compensating for these naturally occurring latencies, the experience feels instantaneous.
Amy Tobey:
So when we talk about things like communicating over the speed of light, most people think speed of light is instantaneous because it's so fast. How long could it take to go from Michigan to California? Well, it turns out the answer to that is about 65 milliseconds at the speed of light in glass.
Mathr de Leon:
So there it is. It's the wrinkle. Light makes its way through space at an astonishing 300,000 kilometers per second. That's rounded up. The theoretical speed limit of the universe. However, that's not how light moves through the internet. Think of it this way. Let's say you're at your desk and with a pencil and the steadiest hand imaginable, you draw a straight line from one end of a piece of paper to the other. That's speed of light in a vacuum, straight as can be just hurtling through space at top speed.
Mathr de Leon:
Now say you're on a train and you try to draw that same line on that same piece of paper, and you've got a steady hand, but it's nowhere near as stable as when you were doing this at home. But when you're done, it looks pretty good, right? These are two relatively straight lines. But if you were to zoom in, you'd see hundreds, if not thousands, of tiny little zigzags where your pencil was vibrating from side to side along to the rhythm of the train. That's the speed of light in glass, bouncing back and forth, back and forth, all the way down the cable slowed by the material's refractive index.
Mathr de Leon:
So while your data is technically being carried along at the speed of light, it's continuously being reflected within the material's core. Meaning its true path is in fact longer than the straight line from end to end. And that is just one of several issues that technologists face when developing low latency networks. If we want to recreate the experience of jamming in real time so that John and Arman can continue their working relationship with the same vibe they're accustomed to, then the issue of this tech starts to sound a bit more like an issue of physics.
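Amy's 65-millisecond figure can be roughed out from the refractive-index slowdown Mathr just described. Here is a back-of-the-envelope sketch, where the fiber index (~1.47) and the route distances are my own ballpark assumptions, not figures from the episode.

```python
C_VACUUM_KM_S = 299_792  # speed of light in a vacuum, km/s
FIBER_INDEX = 1.47       # typical refractive index of optical fiber (assumed)

def fiber_rtt_ms(route_km: float) -> float:
    """Best-case round trip over fiber: no routers, no queuing, no buffers."""
    speed_km_s = C_VACUUM_KM_S / FIBER_INDEX  # ~204,000 km/s in glass
    return 2 * route_km / speed_km_s * 1_000

# Michigan to California is roughly 3,200 km as the crow flies, but real
# cable routes zigzag; doubling the straight-line distance lands near
# Amy's 65 ms.
print(round(fiber_rtt_ms(3_200 * 2)))  # 63
```

The same function makes the table's 150 ms California-to-Netherlands round trip plausible: even a generous 11,000 km cable path gives about 108 ms of pure propagation before routers and buffers add the rest.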
Amy Tobey:
That's where we hit the wall, I think, because I have to explain this to people all the time. They'll be like, "Well, can't we make it faster?" It's like, we are already running up against the limits of physics. And so unless you want to go and prove Einstein wrong, that's pretty much as fast as we can go. This is why you don't see choirs on Zoom, or bands on Zoom where the guitar player's in California and the singer is in New York and the bass player is in Brazil. Well, the reason why is because it doesn't work.
Mathr de Leon:
Well, it doesn't work yet. As Amy said, we're already butting up against the wall of physics. I mean, I ran a traceroute between John in the Bay Area and Arman in San Diego. And the round-trip latency between them can get up over 80 milliseconds, and that's like eight times the latency threshold at which musicians can still play in sync. So clearly there's no way they're going to jam in that environment. But what if we could build a tool that allows for that kind of remote musical collaboration? What would it look like? What are the core elements we first need to consider?
Mathr de Leon:
Well, since we've already established that speed is going to be essential, let's take a closer look at the latency issue.
Ilias Bergström:
There's many causes for latency.
Mathr de Leon:
That's Ilias Bergström.
Ilias Bergström:
I am currently a developer at Elk.
Mathr de Leon:
Elk is a Linux-based operating system developed specifically for real-time audio. And allegedly a device running Elk Audio OS...
Ilias Bergström:
...has less than a millisecond round-trip latency. That is low.
Mathr de Leon:
As the senior software engineer for Elk Audio, Ilias knows pretty much all there is to know about the causes of latency.
Ilias Bergström:
Absolutely. We have the latency of the network itself. That connection also has jitter, so how much the latency varies over time. There's internet relay servers. My voice now doesn't reach you directly over a single wire. It jumps from point to point. If we hang up and call each other again, it might actually follow a different route. There's the analog to digital conversion, which takes time. Then also you don't immediately play back every single morsel of sound you receive, you have to buffer it. And that adds latency. The more jitter you have, the bigger the buffer is.
Ilias Bergström:
There is time also just encoding and decoding the audio from a compressed signal to a raw sound. And on top of all that, there's the software. To write low-latency software, you have to use a programming language that's close to the metal. For example, C++. That's what we use. You explicitly allocate and de-allocate all the resources of the computer, work with the timings of what happens, and you don't let some library or framework do it for you. In a normal computer program, there's many little moments where the computer waits ever so slightly, for microseconds.
Ilias Bergström:
Sometimes one thing is ready after another. They need to wait for each other. You need to pause. All of these little pauses and all of these little waits add up and result in unusable latency. You have to be very careful about what you're doing, so that none of them is ever waiting on any other one, or pausing for any reason, if you want to have the lowest latency. And that's what we developers at Elk do: making sure that doesn't happen.
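To put rough numbers on the sources Ilias lists, here is a toy one-way latency budget in Python. Every figure is an illustrative assumption (a 48 kHz sample rate, a 256-sample jitter buffer, placeholder conversion and codec costs); none of it is measured Elk data.

```python
SAMPLE_RATE_HZ = 48_000  # samples per second (assumed)

def buffer_ms(frames: int) -> float:
    """Latency added by holding `frames` samples before playback."""
    return frames / SAMPLE_RATE_HZ * 1_000

# One direction of a networked jam, summing Ilias's latency sources.
# All values are placeholders for illustration.
budget_ms = {
    "A/D conversion": 1.0,
    "encode audio": 1.0,
    "network propagation": 8.0,
    "jitter buffer": buffer_ms(256),  # more jitter -> a bigger buffer
    "decode audio": 1.0,
    "D/A conversion": 1.0,
}

total_ms = sum(budget_ms.values())
print(round(total_ms, 1))  # 17.3
```

Even with optimistic placeholders, the jitter buffer alone costs more than five milliseconds, which is one reason a system chasing Elk's numbers has to keep buffers tiny and never let the software stall.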
Mathr de Leon:
I find Ilias to be such an interesting figure in this story because aside from his work as a developer and an academic who has published dozens of papers, Ilias is at heart a performer himself. And in our conversation he expressed the kind of longing for more people in our increasingly remote society to have access to just the sort of experience that John and his band so desperately cling to.
Ilias Bergström:
There's more people playing music today than ever. There's a guitar in every other home. There's synthesizers that you can buy for like a hundred dollars or you just get a laptop and you run some software and you learn to make music. That's fantastic. But on the flip side, I'm not sure that all of these people that do play music play with others. And I would say people are missing out because it's a wonderful thing.
Mathr de Leon:
And that brings us to the second thing this tool we're looking for really needs, accessibility. If John and Arman are going to be able to get together and jam out on a regular basis, then this thing not only has to be lightning fast, but it really ought to be almost plug and play. And Ilias is far from the only person working on this problem.
Alexander Carôt:
Alexander Carôt is my full name.
Mathr de Leon:
Alex is a professor of media computer science at Anhalt University of Applied Sciences in Germany.
Alexander Carôt:
I prefer working with audiovisual signals including the transmission of audiovisual signals. And in particular, network music performances.
Mathr de Leon:
Network music performance, or distributed music, as Alex refers to it, is an idea that he has spent the better part of his career, as both a musician and an academic, working to solve. And interestingly, he seems to think that we're not quite there yet.
Alexander Carôt:
This distributed music thing is not necessarily a plug and play and internationally adopted principle. You also have to know what you're doing. It is pretty difficult to develop a system that is plug and play. And this is what most artists typically don't like, to fight with technology and figure out stuff that goes beyond their artistic expression.
Mathr de Leon:
And Alex would know, because like I said, he's been developing this tech for most of his adult career.
Alexander Carôt:
Actually, you being in California is a nice coincidence, because 22 years ago I was at Stanford University, at CCRMA, where I discovered the precursor of the JackTrip software. And that is the basis of all I do, which is currently known as the Soundjack project.
Mathr de Leon:
So much like the platform Ilias is working on, Alex's Soundjack project is a software application designed for high-quality, low-latency audio streaming over the internet. It was developed specifically to help overcome geographical constraints between musicians seeking to perform in real time over a network. And its inspiration, JackTrip, is widely regarded as the predecessor of most modern remote music collaboration systems. So in our quest for the perfect tool, we still need to iron out this matter of accessibility.
Mathr de Leon:
If Alex is right and the technology doesn't inherently lend itself to a plug-and-play mindset, it might considerably raise the barrier to entry for the average user. Speaking of average users...
Russell Gavin:
When the pandemic hit, like so many people, I'm a musician, I'm a music teacher, and we were like, what are we going to do?
Mathr de Leon:
That's Russ Gavin, director of Bands at Stanford University in California. When he's not teaching classes...
Russell Gavin:
I also get to interact with the Stanford Center for Computer Research in Music and Acoustics.
Mathr de Leon:
Also known as CCRMA, which as Alex mentioned, is where JackTrip got its start. And through Russ's connection to CCRMA...
Russell Gavin:
I knew JackTrip as a thing. I had seen enough of its coolness and I was like, "Oh, I need to do this." And then it was very quickly made aware to me that it was a command line tool.
Mathr de Leon:
So this is where I'm starting to think that maybe Alex is right, because I just pulled up a document called "Installing and running JackTrip command line 1.3.0 on Windows 10." It's dated May 21st, 2021, and was posted by Cynthia Payne, who appears to have been involved with JackTrip as a volunteer, a SAS instructor and consultant, and a live stream and recording engineer since 2020. Let me tell you, as helpful as I'm sure this document is, its 1,400-plus words of detailed step-by-step operating procedure for JackTrip are more than enough to make me say: hard pass. And unfortunately, Russ had a similar sentiment.
Russell Gavin:
And somebody was like, "Can you open the terminal on your MacBook?" And I was like, "Well, no, I cannot. What are you talking about?" I am, but a simple musician out here trying to survive when I can't be in the room with other musicians. What are we doing here?
Mathr de Leon:
So Russ is stuck trying to figure out how to get JackTrip to work on his machine. But little does he know that only a mile away in Palo Alto, California, longtime software engineer and entrepreneur Mike Dickey is already on the case. Now, Mike is exactly the kind of person you want on a job like this. This is a guy who, back in the 1980s, taught himself to code using Rainbow Magazine and his dad's TRS-80 Color Computer. And so if you've never seen one of these old computing magazines, I want you right now to head over to archive.org. Search Rainbow Magazine.
Mathr de Leon:
December 1982 should be the first result. I want you to open her up. Turn to page 98 and check out an article titled "Go Adventuring with GAPAD" by Geoff Wells. Now, GAPAD stands for General All Purpose Adventure Driver, and it looks to be a framework of around 6,000 lines of code intended to simplify the process of making your very own text-based adventure game. Back then, programs like GAPAD were commonly published as a kind of tutorial for using machines like the TRS-80. In fact, utilizing Rainbow's wealth of articles, you might even be inspired to modify the code yourself, expand on it.
Mathr de Leon:
Maybe share your creations with other Color Computer enthusiasts within your community. Not only would readers have been learning essential programming concepts and design principles, but the hands-on approach and DIY culture of early personal computing may likely have been the foundation upon which an entire generation of software engineers like Mike built their careers. Now flash forward to May of 2020, Mike's son is singing in a boys choir. And of course at that time, the unfortunate truth of the matter was that...
Russell Gavin:
Hey, if you like singing in choir, you're not going to do it for a long time.
Mathr de Leon:
Exactly. Nobody's trying to meet up for a rousing chorus of Miserere Mei in the midst of a global pandemic. But of course, Mike's not having any of it. So he's like...
Russell Gavin:
"Wait, are we sure? Can't we use modern technology to solve this issue?" And he went through all of the available protocols, all the technology that was out there with no allegiance to anything except getting his son's choir to be able to sing together.
Mathr de Leon:
And that's how Mike discovered JackTrip.
Russell Gavin:
He got to know Chris.
Alexander Carôt:
Chris Chafe, who is like the father of all this stuff.
Mathr de Leon:
That's Alex again, who as it turns out, cites the early work of JackTrip's inventors, Chris Chafe and Juan-Pablo Caceres as the impetus for his motivation to develop Soundjack.
Alexander Carôt:
Hey Chris. So whenever you hear this: when people ask me what was the reason for working with network technologies, I always mention that you were the first who created a working product rather than just spreading a rough idea.
Russell Gavin:
And Chris said, "You should talk to Mike." And so I showed up and I was like, "I hear you doing some cool stuff. I struggled with this command line aspect." And he gave me a couple of pieces of hardware and I immediately went home and plugged it in and gave one to one of my friends across town. And I was like, "Let's check this out." And plug it in, start playing. And immediately it was not the famous latency that blew my mind. It was the audio quality. I could hear the articulations of the saxophone on the other end.
Russell Gavin:
I could hear the overtones and the sound. There was a level of depth to the audio experience that was unlike anything I'd ever experienced in an online setting of any kind. The latency was great too. And we played duets for half an hour and it was like, "Whoa, we're living in Star Trek land." And at that moment where I first experienced it, I knew that this technology existing was going to have a profound impact on my field, which is music and music education.
Russell Gavin:
And in June of 2020, that summer, he just started getting these hardware pieces out to members of the choir. And by late July, they were having 40 person rehearsals online in full real time. Uncompressed, lossless audio. You can tune, you can... It was amazing.
Mathr de Leon:
So at this point, since rising demand for distributed music technology is inevitable, Chris, Mike, Russ and a number of other colleagues decide to form JackTrip Labs. The goal is to take that command line tool and that little piece of hardware and make them obsolete. To transform JackTrip into a standalone virtual studio, accessible over a basic internet connection, the kind found in most homes. Now I know what you're thinking.
Mathr de Leon:
It's super rad that Mike was able to get 40 kids online and relatively synced up for choir rehearsals during the pandemic, but these kids are likely all within a few miles of one another. And that proximity really matters because according to Russ...
Russell Gavin:
If you're on a fiber internet connection within a couple of hundred miles of somebody, we're going to get you round trip connections in that five, six millisecond range. And so it really does hit that sweet spot where our brains are telling us that we are in person.
Mathr de Leon:
And no doubt this technology is incredible, but John and Arman are at least 500 miles apart. And the further you push past that proximity, the harder it gets for our brains to deal with the effect. Here's Alex again.
Alexander Carôt:
Others think that it is the same thing as being in the same room, and this is just not true. It is getting close to true the closer you are physically. However, even then it is a different thing. In an ideal case, it feels like a situation where you're in a sound studio, in two separate rooms just connected via microphone cable. But it is really difficult adopting this principle in a worldwide, plug-and-play manner.
Alexander Carôt:
And this is again, not the case. And even if we do that within Germany and Switzerland and so on, with a physical distance up to, let's say, 1000 kilometers, it is not the same thing.
Mathr de Leon:
So the band's future with Arman hangs in the balance. Unless we can get that round trip latency down to within a reasonable figure, kind of seems like they might have to settle for the asynchronous option after all. That or recruit a new member of the fam. But we have another stop to make, all the way out in Alpine country.
Janine Hacker:
My name is Janine Hacker. I'm an assistant professor at the University of Liechtenstein.
Mathr de Leon:
As an assistant professor, Janine's research delves into how we use different technologies and what their impact is on society. She's particularly keen to study communications and collaboration technologies. And as luck would have it...
Janine Hacker:
I have been singing in various choirs since I was a child. And during the pandemic, I also had some experiences with online rehearsals via Zoom where basically everyone except for the conductor is muted.
Mathr de Leon:
Which again is less than ideal. But like I said, Janine is a researcher and this is her specialty. So of course she's going to be asking questions, trying to understand her situation a bit better. And in the middle of lockdown, what you really need is a touch of serendipity.
Janine Hacker:
During the pandemic, I virtually met Heike Henning, who is a professor in the field of choral and music pedagogy. And she was basically conducting her choirs in the Nuremberg region via Jamulus.
Mathr de Leon:
Jamulus is yet another networked music performance software, this one written by Volker Fischer and ninety-nine other contributors, all credited on GitHub. What's interesting about Professor Henning's situation is that while her choirs were back in Nuremberg, Germany, she herself had taken up a professorship at the Mozarteum in Salzburg, Austria, nearly 200 miles away.
Janine Hacker:
I thought, well, that's actually really interesting. Why not try to initiate a project on this?
Mathr de Leon:
So Janine together with her new colleague, Professor Henning, set out to identify and evaluate technology solutions that contribute to state-of-the-art online choir rehearsals.
Janine Hacker:
And then, somehow we got connected to Alex.
Mathr de Leon:
Yep, same Alex.
Janine Hacker:
And we're super lucky to actually have this connection, him being the developer of one of the most renowned software packages in this field. And he very quickly wrote the proposal for this project.
Alexander Carôt:
It was one week or something, right?
Janine Hacker:
Or one weekend, I guess. More or less.
Mathr de Leon:
And thus, the Choir@Home Project was born.
Janine Hacker:
And Choir@Home is also the short name [laughs]
Mathr de Leon:
Yeah, but its full name is “How to carry out virtual choir rehearsals with the help of digital tools,” which is great, but I think we'll just stick with Choir@Home. So, with Alex on board, they switched from Jamulus to Soundjack, secured funding from Erasmus+, which is one of the European Union's most significant student exchange initiatives, and set about developing an experiment they called the online lab choir.
Janine Hacker:
The general objective now of this project is to enable choirs to carry out online rehearsals. When I went into this project, I thought, "Okay, when we use the software, it is really close to singing together in one room." And I have to say it is not. It is still quite different. And in our project, we already use some hardware that makes it more or less plug and play, but as we also found out in the last couple of weeks, it is still not really plug and play. And I think Alex is nodding. Let's say the human factor is quite big in this, right?
Mathr de Leon:
This again hints at the need for accessibility, right? What she's getting at here is that while two-thirds of the lab's 35 participants had at least 20 years' experience singing in choirs, many of them were of an age demographic that historically does not have a great track record of adapting quickly to new technologies, i.e. the human element. Which makes it all the more interesting that so many applied to be part of the lab. And Janine wanted to know what their motivations were for participating.
Janine Hacker:
And many of them, they were just curious. They wanted to see does it work. Some of them also have some, let's say, key roles in their own choirs. So they wanted to see what's possible, maybe in terms of organizing also hybrid rehearsals in the future, or maybe engaging in collaborations with choirs in other countries.
Mathr de Leon:
But perhaps most interesting is the idea that, according to Janine, participation in amateur choirs is for many people not really about the singing itself.
Janine Hacker:
They care a lot about this real connection to people, real in terms of physical presence. If you ask the typical chorister in amateur choirs, maybe 90% of the experience is actually going there for social reasons and not really for singing.
Mathr de Leon:
And this hits on the real crux of what's missing from so many remote collaboration platforms.
Janine Hacker:
Some of them had participated in Zoom rehearsals during the pandemic, which did not go well, right? What you typically have there is people having their cameras off and you are just speaking to whoever. You don't really know who's there. You just have those black tiles. And it can be quite frustrating with no feedback.
Mathr de Leon:
No feedback.
Alexander Carôt:
Well, and we're tunnel visioned into the fourth wall, right? And so we don't have peripheral vision to our peers. We don't have smell.
Mathr de Leon:
Interesting point.
Alexander Carôt:
So we cut off three out of five senses, probably only four are relevant to singing. But taste might have an element that we just don't know about.
Mathr de Leon:
And I could totally imagine the scene that Janine is describing. Like she said, little black squares, cameras on, cameras off, awkward silences, people talking over one another, terrible Wi-Fi connections, muffled audio. I mean, just imagine trying to lead a choir like that, or trying to sing in a choir like that. Typically in these kinds of situations, you need to listen very closely to the other voices in your group.
Janine Hacker:
That's not exactly happening in this online environment. Actually, to be a choir that is together, you have to unlearn listening to the others to some extent.
Mathr de Leon:
So there it is, our trifecta, speed, accessibility, and feedback. And strangely, that last one feels the most out of reach, at least for now. But in the interest of due diligence, I reached out to someone with a very unique perspective on the whole idea.
Pamela Pavliscak:
Hi, I am Pamela Pavliscak. I am a tech emotionographer. That means I study how technology and emotion affect our world. I run a research studio called Subjective to explore these big questions with the biggest technology companies of today.
Mathr de Leon:
She's also a professor at Pratt, and she has a lot to say about the way we experience feedback in the real world. I asked her whether not having any control over the environment itself impacts the way we feel in sync, like we're really collaborating or communicating with people.
Pamela Pavliscak:
I think there is something to that. I think we put so much emphasis on nonverbal communication, but we forget that a lot of how we pick up mood is contextual. It's what's going on around us, right? So I think of how many times a pet cat wanders onto screen on Zoom and everyone's like, "Oh, finally something good's happening here in this meeting." And it just changes the mood of the room. And that's something that just has to do with the environment itself and those cues that are in it.
Pamela Pavliscak:
So I think technology can't yet imitate that rich sensory experience that we have in person. All the smells, the little movements, the lighting, as you say. But it could.
Mathr de Leon:
Are you talking about Metaverse type stuff? Where are we going?
Pamela Pavliscak:
Well, I don't know. I mean, I think that a lot of it's so theoretical because where this technology is at now, it doesn't give that mood at all, does it? Because you're kind of isolated. You're wearing a headset, you're feeling cut off from everyone around you, and then you're in this immersive world that feels not very rich, not very realistic, not very satisfying. That may change. I think there might be other ways to enhance the sensory cues that we get that give us that vibe maybe through wearables connected to home devices, for instance.
Mathr de Leon:
So this is the stone I really want to take a peek under, because I love the idea of wearables and the role they might play in enhancing our connection to others in online spaces. So, one more trip around the world, to a little shop in the Sheung Wan district of Hong Kong.
Florian Simmendinger:
My name is Florian Simmendinger. I'm a wearable nerd because it's my life to make wearables.
Mathr de Leon:
Florian is the founder of a company called Soundbrenner, whose mission is to make music practice addictive. Their flagship product is a smartwatch with a twist.
Florian Simmendinger:
As a musician, it's pure torture to sit there for an hour with this annoying click in your head. So what if you could simply feel it, and replace the annoying sound with a vibration? That was kind of the starting premise of the company. When it comes to music, it is a full body experience. That's when it's most enjoyable. So it's never going to feel the same to sit at home listening with your headphones as when you're live in a place and you can feel the bass through your whole body.
Florian Simmendinger:
So I think the human brain is wired to enjoy it for some reason, to activate all the senses at once.
Mathr de Leon:
The self-proclaimed wearable nerd makes a wearable metronome. Go figure. Now, while wearables are not the sole focus of his work, and Soundbrenner isn't explicitly a haptics company, Florian has to admit...
Florian Simmendinger:
There's just something cool about haptics. When the Apple Watch initially launched, they listed three features as the headline features. And one of them was actually haptic feedback.
Mathr de Leon:
I actually remember this. It was the 2014 Apple keynote, and it was the end of the day, so most of the audience thought the presentation was over. But just as Tim Cook is about to walk offstage, he hesitates. And, like the great Lieutenant Columbo in one of his finest gotcha moments, he turns back to the crowd and says, "Just one more thing." And then he plays that sexy reveal video, like they do for every new product, to a standing ovation, of course. And sure enough, one of the key features is that the Apple Watch provides new, intimate ways to connect and communicate directly from your wrist.
Mathr de Leon:
And then the voice of Jony Ive goes on to describe how you can tap the watch to get someone's attention, or draw a little picture, write a little note, even share your heartbeat. And most of these interactions are accompanied by haptic feedback. And what he's suggesting here is that while technology often inhibits the subtle ways that humans communicate with one another, features like these might one day redefine how we interact with one another through our devices. I remember being super into this.
Mathr de Leon:
And then I saw the price tag, and because at the time I was still very much under the umbrella of starving artist, I said, "Hard pass." But I did try the Soundbrenner Core, and I've got to say, I kind of dig it. I mean, I don't know that it would make practice "addictive" for me. But if I could integrate feedback like that into some kind of virtual performance or online rehearsal, and if it could be a little more subtle, not so much a metronome but maybe just a little push to let me know that there's someone on the other side, I think that would be so satisfying.
Mathr de Leon:
And let's say we get there. We solve for speed with low-latency software that's close to the metal, optimized for deployment over high-bandwidth peer-to-peer networks. And we solve for accessibility with an open-source, free-to-use interface that inches ever closer to a true plug-and-play standard with each new iteration. And let's say we solve for feedback by employing our personal, and often wearable, devices to deliver the kind of sensory input that we take for granted in our everyday real-world interactions.
Mathr de Leon:
It seems like that might do it because according to all the technologists and teachers and researchers working on this stuff, we're right there on the edge. But is it enough? Will we ever truly capture the essence of that vibe that John and Arman and every musician alive are chasing after?
Amy Tobey:
I feel like that's more wishful thinking than what's actually going to happen. I don't think it's ever really going to reach the experience of being in a room with a group of people singing together.
Mathr de Leon:
Well, we'll find out if Amy's right on the next episode of Traceroute. Traceroute is a podcast from Equinix and Stories Bureau. This episode was hosted by Amy Tobey and was produced by Mathr de Leon with help from Sadie Scott. It was edited by Joshua Ramsey with mixing, sound design, and original composition by Brett Vanderlaan and additional mixing by Jeremy Tuttle. Our fact-checker is Ena Alvarado. Our staff includes Tim Balint, Suzie Falk, Lisa Harris, Alisa Manjarrez, Stephen Staver, Lixandra Urresta, and Rebecca Woodward. Our theme song was composed by Ty Gibbons. You can check us out on X @equinixmetal, and on YouTube at Equinix Developers. Visit Traceroutepodcast.com for even more stories about the human layer of the stack. We’ll leave all these links and a link to the transcript down in the show notes. Special thanks to our producer John Taylor and The Uninvited for the use of their music in this episode. And a very special thanks to all the voices who helped contribute to this story. If you liked it, it'd be awesome if you could share this wherever you hang out online. Maybe drop us a rating on Spotify, a rating and review on Apple. I mean, this is just how other people find the show, so any little bit helps. I'm Mathr de Leon, Senior Producer for Traceroute, and we'll be back in two weeks with the conclusion of this story. Until then, thanks for listening.