Jake Moshenko and Joey Schorr didn’t set out to build a business around container image storage.
When the two left their jobs on the APIs team at Google for the startup life, their fledgling business, DevTable, focused on creating developer tools that would allow people to code in the browser. “As part of that,” Moshenko explains, “people wanted to spin up development servers, which required that we adopt a containerization strategy. And at the time, this was like Docker super-early, pre-release version.” As such, there was no Docker private registry, no Docker hub—and DevTable itself needed a place to store and manage its private container images.
“We said, ‘Hey, if we need this, I bet some other people need this,’” says Moshenko. “So we went ahead and built it in about a month, crazy fast.” They introduced the first version of this container registry, which they called Quay, at a Docker meetup in New York City in October 2013. It was almost an aside during a presentation about DevTable. Quite simply, they hadn’t anticipated the immediate, positive response. The pair had hit upon a huge need in the space of storing binaries, and the customers rushed in. Still bootstrapping their business, the pair were working toward profitability when two things happened in their space in 2014. First, a competitor emerged. Second, CoreOS came calling.
Alex Polvi, the CEO of CoreOS, had actually been at the same Docker meetup where Quay was launched. He was there pitching his own product, an operating system for running containers. Back then, Moshenko says, “We had a chat with Alex and said, ‘Hey, where are you telling people to store their images?’ And at the time there wasn’t a really good story. Joey had kept in touch with Alex, and about a year later, it just made sense for us to join forces.”
Quay has run almost entirely on Amazon since the beginning, but one of the company’s early decisions was to add a build cluster to their infrastructure. After experimenting with several other platforms, the Quay team signed on with Packet, now Equinix Metal, last year to run their builds as virtual machines on Kubernetes. “We needed to give the Docker engines that are doing our builds a very native feeling so that they can be as effective as possible but not make any compromises on security,” Moshenko says. “So what we’re essentially doing with Equinix Metal is building our own GCE [Google Compute Engine].”
“The virtual machines that we start on the Equinix Metal instances start in like 6 seconds, and they’re ready to do a build after 30 seconds, so it’s way, way faster than our EC2 cluster,” says Moshenko. “The other thing is that because we’re paying for the underlying machines and not the virtual machines, we step around the problem with EC2’s billing model where they charge you for an entire hour, regardless of how long you use the machine. So we just kind of found the cost sweet spot as well as the user experience sweet spot when we switched to Equinix Metal.”
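The rounding-up effect Moshenko describes in the hourly billing model can be illustrated with a toy calculation. The build durations and the comparison below are invented for illustration, not Quay’s actual numbers:

```python
import math

# Hypothetical build durations in minutes (made-up example data).
build_minutes = [4, 7, 12, 3, 55, 20]

# Per-full-hour billing: every build, however short, is rounded up to a
# whole billed hour on its own machine.
billed_hours = sum(math.ceil(m / 60) for m in build_minutes)

# What was actually used, in hours.
actual_hours = sum(build_minutes) / 60

print(billed_hours)              # 6 hours billed
print(round(actual_hours, 2))    # 1.68 hours actually used
```

Under this model the short builds pay for roughly 3.5x more machine time than they consume, which is the gap Quay steps around by paying for the underlying bare metal machines instead of per-hour virtual machines.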
In fact, what Quay is currently doing isn’t possible on any other virtualized cloud provider; only Equinix Metal supports nested virtualization. “If you were to run a virtual machine on EC2, you have to run it in pure software virtualization, which causes a 60x speed decrease,” says Moshenko. “Because Equinix Metal is giving us the actual bare metal machines, we still have that first layer of virtualization extensions that we can use to isolate our guest machines from one another. None of the other cloud providers provide that. We could have used a classic rack-space-style infrastructure provider, but we really like having the API to spin machines up and down without making a phone call or without signing any contracts. It’s like an Amazon-style experience but with the bare metal machines.”
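On Linux, the hardware virtualization extensions Moshenko refers to (Intel VT-x or AMD-V) show up as CPU flags, and KVM-accelerated guests are possible when the `/dev/kvm` device node exists. This is a generic check, not Quay’s actual tooling:

```python
import os
import re

def hw_virt_available(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises VT-x ("vmx") or AMD-V ("svm")."""
    try:
        with open(cpuinfo_path) as f:
            return bool(re.search(r"\b(vmx|svm)\b", f.read()))
    except OSError:
        return False  # not a Linux host, or /proc unavailable

def kvm_usable():
    """Return True if the KVM device node is present."""
    return os.path.exists("/dev/kvm")

if __name__ == "__main__":
    if hw_virt_available() and kvm_usable():
        print("Hardware-accelerated guests possible")
    else:
        print("Falling back to software virtualization")
```

On a bare metal machine both checks typically pass; inside a cloud VM without nested virtualization support, the `vmx`/`svm` flags are hidden from the guest and only software virtualization remains, which is the slowdown Moshenko describes.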