
No More Warming up the Disk: Announcing our Block Storage Service


Zac Smith, Global Head, Edge Infrastructure Services

A guiding principle at Packet has always been to provide powerful yet easy-to-use infrastructure building blocks. In a world that can seem dominated by virtualization, our goal is to keep the base infrastructure layers (compute, network and storage) as clean and simple as possible - so you can do whatever you need to do on top of it (including virtualization!).

A major tool in our toolset is our focus on single tenancy. In fact, when we started the company we were dead set on avoiding multi-tenancy at all costs - it was sort of our mantra! With most of our product set rolled out, how’d we do? Here’s the roundup:

#1 Compute - Single tenant all the way! We only serve up dedicated servers: from our "tiny but mighty" Type 0 to our beefy Type 3, you get a fully dedicated machine with no virtualization, hypervisor or other layers in between. This might seem obvious if you’ve taken a look at our website, but it’s kind of amazing how many new users are convinced we delivered them a VM (likely because our dedicated servers are provisioned quickly and controlled by an API - traditionally the territory of virtualization). Of course, it’s no surprise that many of our customers take our dedicated instances and then slice them up into virtual machines, containers or other goodies.

#2 Network - Single tenant, most of the way. We don’t give you virtual switch ports on a software network sitting above our physical network; we give you pairs of line rate ports in 1Gbps or 10Gbps flavors. Sure, there are parts of our network fabric that are shared, but in general we strive to remove as many points of multi-tenancy as possible, and the network is a major part of that story.

#3 Storage - Multi-tenancy, here we come. Well, to be honest, we struggled with this one. You can see from our server configurations that we crafted both the Type 1 and Type 3 servers to have redundant disk options. By default we provision boot volumes with a software RAID setup. In our Type 3 server, we also invested in 1.6TB of super-fast NVMe flash. All of this to say: our thinking was that people would enjoy the local, large and fast disks at a great price point, but would also use them to export storage volumes to other server instances when necessary.

We were wrong.  And this is the part of the story where we skid off the single-tenant tracks and dive happily into multi-tenancy.

First, The Backstory

Customers have used our crazy fast local flash storage for some amazing workloads (such as tripling the speed of Elasticsearch clusters or relational databases), but making, managing and exporting storage volumes across the network isn’t something most people dream of spending their free time on. Just look at Amazon’s Elastic Block Store - an industry-standard success story despite being a fairly ‘meh’ product, especially in comparison to superstars like S3. Poor IOPS performance, failures, pre-warming - you get the idea: convenience trumps doing it yourself any day.

At the same time, new software from people like Portworx and Rancher is improving the experience of managing volumes, particularly in container-focused environments. But at the end of the day, the underlying storage fabric can still seem equal parts brilliance and dark magic. Better just not to ask!

Build It or Buy It

After our beta process last Spring/Summer, we knew we’d need to make storage easier for our customers to procure, use and manage - specifically, network accessible block devices. But I’ll be honest: building a multi-tenant, highly resilient and high performance block storage service scared the crap out of me! Not only would we need to build and maintain something that entire companies spend years working on, we’d have to make it good enough to deploy out in the wild. It didn’t take long for us to decide that this was the one product we wanted to buy, not build. That decision is worth a brief side note:


Packet believes in fundamental infrastructure being available to all developers, operators and businesses. We also think it’s a bad thing (™) to lock customers in with proprietary, can-only-be-bought-from-us services. Our recent introduction of the Packet Private Rack is part of our strategy to ensure customers have choice in where and how to deploy their infrastructure. Core infrastructure shouldn’t be proprietary: you should have a reasonable path to running Packet-like services (e.g. bare metal servers, non-overlay networks and block storage) in your own data center or colocation facility.

Our Search

Along with other members of the engineering team, I started hunting for a suitable scale-out, multi-tenant solution that we could offer to our users while solving some of the common usability gripes about EBS (primarily around consistency). We quickly zeroed in on a company that was running in stealth at the time, founded by the creators and maintainers of the Linux-IO Target (LIO), the iSCSI target stack of the Linux kernel (more here: http://linux-iscsi.org/wiki/LIO).

Tiering, Tiering and more Tiering

Years ago, while running the cloud division of Internap, I was one of the first service provider customers of SolidFire, the all-flash SAN vendor. All that flash was pretty great, but essentially you were paying for a Ferrari, even if much of the time you only needed something to get around town to pick up the kids and hit the grocery store. This made it hard to price, and even harder to make money on!

With our new solution, we wanted layers of incrementally slower (and cheaper) storage media, orchestrated by smart software without introducing delay in the IO path. Need 100k read IOPS or 10k write IOPS? You’ve got them, courtesy of NVMe and caching. Need just 500 read IOPS? Spinning disks to the rescue.

This pragmatic approach is very complicated to orchestrate, but it makes the economics much more favorable to a service provider like us (in comparison, say, to a big bank buying a storage cluster for their private datacenter).  We think it's the future of block storage.

Integration, Testing, and the Product

We started working on the project in September, receiving a lab cluster and a production cluster (each with three nodes for high availability) from our vendor, along with beta releases of their API and management portals. Over the next few months, Dave Laube worked closely with the team to stress test the system: revving up IOPS, soaking the drives for days on end, randomly pulling drives out, purposely failing clusters - your typical Navy SEAL training.

In the meantime, Lucas led our API and portal integrations, stitching together the user experience and management tools we would need to deploy, manage and bill for our block service.

Finally, with security and beta testing underway, we’re super excited to release this awesome product to our customers.  Here are the features:

  • In the name of simplicity, we’re starting with two performance tiers:
  • $0.07/GB per month gets you 500 IOPS per volume
  • $0.15/GB per month gets you 15,000 IOPS per volume
  • You can set unlimited snapshot policies
  • You can scale volumes from 100GB up to 15TB
  • We offer easy management of iSCSI targets via our metadata service (see the sketch below)
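
To put the pricing in context: a 1TB volume on the performance tier runs $150 per month (1,000GB × $0.15/GB), while the same volume on the standard tier costs $70.

And for the curious, here’s roughly what attaching a volume via the metadata service looks like from a server. This is a minimal sketch rather than our official tooling: the metadata URL and the JSON field names (`volumes`, `iqn`, `ips`, `name`) are assumptions for illustration, and the attach step drives the standard open-iscsi `iscsiadm` CLI.

```python
# Minimal sketch: attach block volumes advertised by the (assumed) metadata
# service using open-iscsi. The endpoint and field names are illustrative.
import json
import subprocess
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"  # assumed endpoint

def fetch_volumes():
    """Fetch the volumes the metadata service advertises to this instance."""
    with urllib.request.urlopen(METADATA_URL) as resp:
        metadata = json.load(resp)
    return metadata.get("volumes", [])

def attach(volume):
    """Discover the volume's iSCSI target and log in to it."""
    portal = volume["ips"][0]  # assumed field: target portal address
    iqn = volume["iqn"]        # assumed field: iSCSI qualified name
    # Discover targets on the portal, then log in to the specific IQN.
    subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True,
    )
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    for vol in fetch_volumes():
        print("attaching", vol.get("name", vol["iqn"]))
        attach(vol)
```

Once the login succeeds, the volume shows up as a regular block device on the host, ready to be formatted and mounted like any local disk.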

What’s Next?

Our experience building the Packet block storage platform has been fantastic, so we’re already working on some wishlist items: proper IPv6 support (because we roll everything dual stack here at Packet!), a brand new Flocker driver for container volume management, and replication across geographic facilities for our Amsterdam and San Jose datacenters.

Additionally, we’ll be offering full support for the scale-out block service in our Packet Private Rack solution, allowing large scale customers to benefit from integrated block storage in their own datacenters.

Published on 10 February 2016
