Custom Partitioning and RAID (CPR)¶
Equinix Metal™ offers a Custom Partitioning and RAID (CPR) feature, which allows you to specify the disk configuration when you deploy a server from your reserved hardware.
Reserved Hardware Requirement¶
A reserved device is required because the exact drive layout must be known in advance for the API to apply your customization. The Custom Partitioning and RAID feature is not available for on-demand instances. However, all of our hardware can be converted to reserved hardware, so reach out to support@equinixmetal.com to arrange a reservation.
Provisioning with CPR¶
To use CPR, deploy a server from your hardware reservations through the API. When you send your POST request to the projects/<id>/devices endpoint, include your CPR configuration as a JSON object in the "storage" parameter of the request body.
curl -X POST \
-H "Content-Type: application/json" \
-H "X-Auth-Token: <API_TOKEN>" \
"https://api.equinix.com/metal/v1/projects/{id}/devices" \
-d '{
"hardware_reservation_id": "<reservation_id>",
"hostname": "your-hostname",
"operating_system": "<os_slug>",
"storage": "<CPR_JSON_definition>"
}'
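Building the request body programmatically avoids quoting mistakes in the shell. The sketch below, in Python, assembles a minimal body with an inline CPR definition; the reservation ID, hostname, and OS slug are placeholders, and depending on your API version the "storage" value may need to be a pre-serialized JSON string rather than a nested object.

```python
import json

# A minimal CPR definition: one disk, a BIOS partition, and a root partition.
cpr = {
    "disks": [
        {
            "device": "/dev/sda",
            "wipeTable": True,
            "partitions": [
                {"label": "BIOS", "number": 1, "size": 4096},
                {"label": "ROOT", "number": 2, "size": 0},
            ],
        }
    ],
    "filesystems": [
        {
            "mount": {
                "device": "/dev/sda2",
                "format": "ext4",
                "point": "/",
                "create": {"options": ["-L", "ROOT"]},
            }
        }
    ],
}

body = {
    "hardware_reservation_id": "your-reservation-id",  # placeholder
    "hostname": "your-hostname",                       # placeholder
    "operating_system": "ubuntu_20_04",                # example slug
    "storage": cpr,
}

# Serialize once; pass this as the POST body.
payload = json.dumps(body)
```

From here, send `payload` with your HTTP client of choice, with the same `Content-Type` and `X-Auth-Token` headers as in the curl example above.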
Example CPR JSON Definitions¶
To use CPR, create a JSON object that states:
- Which disks you want to format.
- How you want those disks partitioned.
- What filesystem should be created on each partition.
- Where to mount each filesystem once created.
Use the examples below as a starting point for your own configuration.
General Example¶
The c3.small.x86 comes with 2 x 480 GB drives. This example takes one of those drives, creates three partitions (BIOS, swap, and root), then formats the swap partition as swap and the root partition as ext4.
{
"disks": [
{
"device": "/dev/sda",
"wipeTable": true,
"partitions": [
{
"label": "BIOS",
"number": 1,
"size": 4096
},
{
"label": "SWAP",
"number": 2,
"size": "3993600"
},
{
"label": "ROOT",
"number": 3,
"size": 0
}
]
}
],
"filesystems": [
{
"mount": {
"device": "/dev/sda3",
"format": "ext4",
"point": "/",
"create": {
"options": [
"-L",
"ROOT"
]
}
}
},
{
"mount": {
"device": "/dev/sda2",
"format": "swap",
"point": "none",
"create": {
"options": [
"-L",
"SWAP"
]
}
}
}
]
}
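A common mistake is referencing a device under "filesystems" that was never defined under "disks". A quick sanity check before submitting the request might look like the following sketch; it assumes the simple /dev/sdXN naming used in these examples and does not cover RAID device names.

```python
def defined_partitions(cpr):
    """Collect every partition device name implied by the 'disks' section."""
    parts = set()
    for disk in cpr.get("disks", []):
        for p in disk.get("partitions", []):
            parts.add(f'{disk["device"]}{p["number"]}')
    return parts

def check_filesystems(cpr):
    """Return filesystem mount devices with no matching partition."""
    parts = defined_partitions(cpr)
    return [fs["mount"]["device"] for fs in cpr.get("filesystems", [])
            if fs["mount"]["device"] not in parts]

# The general example from above, condensed:
cpr = {
    "disks": [{"device": "/dev/sda", "wipeTable": True, "partitions": [
        {"label": "BIOS", "number": 1, "size": 4096},
        {"label": "SWAP", "number": 2, "size": "3993600"},
        {"label": "ROOT", "number": 3, "size": 0},
    ]}],
    "filesystems": [
        {"mount": {"device": "/dev/sda3", "format": "ext4", "point": "/",
                   "create": {"options": ["-L", "ROOT"]}}},
        {"mount": {"device": "/dev/sda2", "format": "swap", "point": "none",
                   "create": {"options": ["-L", "SWAP"]}}},
    ],
}
print(check_filesystems(cpr))  # -> [] when every mount maps to a partition
```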
RAID and NVMe Examples¶
This example is on the m1.xlarge.x86, which comes with 6 x 480 GB SSDs. Two of the drives are partitioned (BIOS, SWAP, and ROOT), and then the SWAP and ROOT partitions are set up with software RAID 1.
{
"disks":[
{
"device":"/dev/sda",
"wipeTable":true,
"partitions":[
{
"label":"BIOS",
"number":1,
"size":4096
},
{
"label":"SWAPA1",
"number":2,
"size":"3993600"
},
{
"label":"ROOTA1",
"number":3,
"size":0
}
]
},
{
"device":"/dev/sdb",
"wipeTable":true,
"partitions":[
{
"label":"BIOS",
"number":1,
"size":4096
},
{
"label":"SWAPA2",
"number":2,
"size":"3993600"
},
{
"label":"ROOTA2",
"number":3,
"size":0
}
]
}
],
"raid":[
{
"devices":[
"/dev/sda2",
"/dev/sdb2"
],
"level":"1",
"name":"/dev/md/SWAP"
},
{
"devices":[
"/dev/sda3",
"/dev/sdb3"
],
"level":"1",
"name":"/dev/md/ROOT"
}
],
"filesystems":[
{
"mount":{
"device":"/dev/md/ROOT",
"format":"ext4",
"point":"/",
"create":{
"options":[
"-L",
"ROOT"
]
}
}
},
{
"mount":{
"device":"/dev/md/SWAP",
"format":"swap",
"point":"none",
"create":{
"options":[
"-L",
"SWAP"
]
}
}
}
]
}
The s3.xlarge has 2 x 960 GB SSDs, 12 x 8 TB HDDs, and 2 x 256 GB NVMe flash drives. In this example, the two 960 GB SSDs hold ROOT and SWAP in RAID 1, and the twelve 8 TB HDDs hold DATA in RAID 10. The NVMe drives are left for other purposes.
{
"disks": [
{
"device": "/dev/sdn",
"wipeTable": true,
"partitions": [
{
"label": "BIOS",
"number": 1,
"size": 4096
},
{
"label": "SWAP",
"number": 2,
"size": "8G"
},
{
"label": "ROOT",
"number": 3,
"size": 0
}
]
},
{
"device": "/dev/sdm",
"wipeTable": true,
"partitions": [
{
"label": "BIOS",
"number": 1,
"size": 4096
},
{
"label": "SWAP",
"number": 2,
"size": "8G"
},
{
"label": "ROOT",
"number": 3,
"size": 0
}
]
},
{
"device": "/dev/sdj",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sda",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdc",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdi",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdb",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdk",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdf",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdd",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdg",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sde",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdl",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
},
{
"device": "/dev/sdh",
"wipeTable": true,
"partitions": [
{
"label": "DATA",
"number": 1,
"size": 0
}
]
}
],
"raid": [
{
"devices": [
"/dev/sda1",
"/dev/sdb1",
"/dev/sdc1",
"/dev/sdd1",
"/dev/sde1",
"/dev/sdf1",
"/dev/sdg1",
"/dev/sdh1",
"/dev/sdi1",
"/dev/sdj1",
"/dev/sdk1",
"/dev/sdl1"
],
"level": "10",
"name": "/dev/md/DATA"
},
{
"devices": [
"/dev/sdm2",
"/dev/sdn2"
],
"level": "1",
"name": "/dev/md/SWAP"
},
{
"devices": [
"/dev/sdm3",
"/dev/sdn3"
],
"level": "1",
"name": "/dev/md/ROOT"
}
],
"filesystems": [
{
"mount": {
"device": "/dev/md/ROOT",
"format": "ext4",
"point": "/",
"create": {
"options": [
"-L",
"ROOT"
]
}
}
},
{
"mount": {
"device": "/dev/md/DATA",
"format": "ext4",
"point": "/DATA",
"create": {
"options": [
"-L",
"DATA"
]
}
}
},
{
"mount": {
"device": "/dev/md/SWAP",
"format": "swap",
"point": "none",
"create": {
"options": [
"-L",
"SWAP"
]
}
}
}
]
}
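Writing out twelve identical disk entries by hand is tedious and error-prone. They can be generated instead; this is a sketch in Python, assuming the drive letters a through l shown in the example above.

```python
# Generate one identical DATA disk entry per HDD (the 12 x 8 TB drives).
data_disks = [
    {
        "device": f"/dev/sd{letter}",
        "wipeTable": True,
        "partitions": [{"label": "DATA", "number": 1, "size": 0}],
    }
    for letter in "abcdefghijkl"
]

# The matching RAID member list for the "raid" section.
raid_members = [f'{d["device"]}1' for d in data_disks]
```

Append `data_disks` to the "disks" array and use `raid_members` as the "devices" list of the RAID 10 entry before serializing the full CPR object.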
The m2.xlarge.x86 has 2 x 120 GB SSDs and 1 x 3.8 TB NVMe drive, so this example configures RAID for the ROOT and SWAP partitions and mounts the NVMe drive during deployment.
{
"disks": [
{
"device": "/dev/sda",
"wipeTable": true,
"partitions": [
{
"label": "BIOS",
"number": 1,
"size": 4096
},
{
"label": "SWAP",
"number": 2,
"size": "8G"
},
{
"label": "ROOT",
"number": 3,
"size": 0
}
]
},
{
"device": "/dev/sdb",
"wipeTable": true,
"partitions": [
{
"label": "BIOS",
"number": 1,
"size": 4096
},
{
"label": "SWAP",
"number": 2,
"size": "8G"
},
{
"label": "ROOT",
"number": 3,
"size": 0
}
]
},
{
"device": "/dev/nvme0n1",
"wipeTable": true,
"partitions": [
{
"label": "VAR1",
"number": 1,
"size": 0
}
]
}
],
"raid": [
{
"devices": [
"/dev/sda3",
"/dev/sdb3"
],
"level": "0",
"name": "/dev/md/ROOT"
},
{
"devices":[
"/dev/sda2",
"/dev/sdb2"
],
"level":"1",
"name":"/dev/md/SWAP"
}
],
"filesystems": [
{
"mount": {
"device": "/dev/md/ROOT",
"format": "ext4",
"point": "/",
"create": {
"options": [
"-L",
"ROOT"
]
}
}
},
{
"mount": {
"device": "/dev/nvme0n1p1",
"format": "ext4",
"point": "/var",
"create": {
"options": [
"-L",
"VAR1"
]
}
}
},
{
"mount":{
"device":"/dev/md/SWAP",
"format":"swap",
"point":"none",
"create":{
"options":[
"-L",
"SWAP"
]
}
}
}
]
}
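Note the design choice in this example: ROOT uses RAID 0, which stripes across both members for capacity and throughput but offers no redundancy, while SWAP uses RAID 1 mirroring. A rough way to reason about usable capacity when picking a level is sketched below; it is simplified and ignores md metadata overhead.

```python
def usable_capacity(level, member_sizes):
    """Approximate usable capacity of an md array (metadata overhead ignored)."""
    n, smallest = len(member_sizes), min(member_sizes)
    if level == "0":    # striping: roughly the sum of members, no redundancy
        return smallest * n
    if level == "1":    # mirroring: one member's worth
        return smallest
    if level == "10":   # striped mirrors: half the members' worth
        return smallest * n // 2
    raise ValueError(f"unhandled level {level}")

# Two ~112 GB ROOT partitions (120 GB SSDs minus BIOS and 8 GB swap):
print(usable_capacity("0", [112, 112]))    # -> 224
print(usable_capacity("1", [8, 8]))        # -> 8
print(usable_capacity("10", [8000] * 12))  # -> 48000
```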
CPR for UEFI-only Servers¶
The c1.large.arm, c2.large.arm, and c2.medium.x86 servers are UEFI only, so you are required to use a FAT32 EFI partition mounted at /boot/efi. This is an example configuration for the c2.medium.x86, which has 2 x 120 GB SSDs and 2 x 480 GB SSDs.
{
"disks": [
{
"device": "/dev/sda",
"wipeTable": true,
"partitions": [
{
"label": "BIOS",
"number": 1,
"size": "512M"
},
{
"label": "SWAP",
"number": 2,
"size": "3993600"
},
{
"label": "ROOT",
"number": 3,
"size": 0
}
]
}
],
"filesystems": [
{
"mount": {
"device": "/dev/sda1",
"format": "vfat",
"point": "/boot/efi",
"create": {
"options": [
"-F",
"32",
"-n",
"EFI"
]
}
}
},
{
"mount": {
"device": "/dev/sda3",
"format": "ext4",
"point": "/",
"create": {
"options": [
"-L",
"ROOT"
]
}
}
},
{
"mount": {
"device": "/dev/sda2",
"format": "swap",
"point": "none",
"create": {
"options": [
"-L",
"SWAP"
]
}
}
}
]
}
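The "create.options" values line up with the flags of the corresponding mkfs tool (mkfs.vfat takes -F 32 for FAT32 and -n for a volume label; mkfs.ext4 takes -L for a label). The mapping below is purely illustrative; how the installer actually invokes these tools is an assumption here, not something the API documents.

```python
def mkfs_command(mount):
    """Sketch of the mkfs invocation a filesystem entry appears to describe."""
    tool = {"ext4": "mkfs.ext4", "vfat": "mkfs.vfat", "swap": "mkswap"}
    return [tool[mount["format"]], *mount["create"]["options"], mount["device"]]

# The EFI filesystem entry from the example above:
efi = {"device": "/dev/sda1", "format": "vfat", "point": "/boot/efi",
       "create": {"options": ["-F", "32", "-n", "EFI"]}}
print(" ".join(mkfs_command(efi)))  # mkfs.vfat -F 32 -n EFI /dev/sda1
```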