Deploying a multi-node Nutanix cluster on Metal
A step-by-step guide on creating a 3 node Nutanix cluster on Equinix Metal without DHCP.
As an ecosystem partner for Metal, Nutanix offers solutions for users looking to deploy VMs or container workloads. This guide will focus on bringing up a multi-node Nutanix Cluster in Equinix Metal.
NOTE: This guide is not intended to represent best practices for Nutanix, nor a hardened, enterprise-ready platform, but rather cover the considerations when bringing up an environment.
Desired end-state
The desired end-state for all deployments is 3 Nutanix nodes joined by a VLAN, plus a jump host. It is depicted below:
Note that we'll still need to access the Nutanix cluster; the jump host facilitates that access, both to the cluster itself and to the user virtual machines.
Nutanix on Equinix Metal
Equinix Metal currently has two validated server configurations for running Nutanix AHV (Acropolis Hypervisor):
For this guide, we'll be using the m3.large.opt-c2s1.x86 server. The Dallas metro will be used, due to the availability of resources there.
Understanding Nutanix networking
Before starting with the deployment for Nutanix on Metal, it is important to first understand the default networking that Nutanix configures on the server as part of the boot process. This will also assist with visualising the changes that need to be made in order to successfully create the desired end-state architecture.
The main networking components that we need to be familiar with are:
- Controller VM (CVM): The virtual machine responsible for the configuration and control of the cluster
- Acropolis Hypervisor (AHV): The Nutanix virtualisation layer installed on the Metal server
- Default networking: A set of Virtual Switches / Bridges used for connectivity of the server uplinks, CVM and AHV components
This environment can be visualised as follows:
A few notes from experience:
- Both AHV br0 and CVM eth0 are configured to receive addresses via DHCP
- By default, the Nutanix node is only given a /31 private IP block, yielding a single usable IP address
- As AHV is created first, it leases this IP, leaving no available IP for the CVM external interface, which falls back to a link-local address
- Eth1 of the CVM is the internal interface that communicates with AHV. It is always configured with 192.168.5.2 and 192.168.5.254
- The AHV virbr0 interface is in the same subnet as CVM Eth1 (192.168.5.0/24) and always has the IP address of 192.168.5.1
- The private IPv4 address that Equinix Metal deploys to the server is on br0 of the AHV; this is the reason that the Nutanix console is accessible over SSH at this IP address
- The IPMI interface is configured with a DHCP address from the 10.x.x.x range and connected to the SOS platform for Metal
For all of the configurations deployed in this guide, the internal management network (the one that both br0 and eth0 will be placed in) will exist on VLAN 50 and represent the network 192.168.50.0/24. IPs from this range will be allocated dynamically through DHCP in some steps and configured manually as static IPs in others.
Major steps in the guide
With a base level knowledge of the environment and the changes that need to be made, it is time to build the environment. This will be broken up into two main components:
- 1. Deploying the infrastructure - heavily centralised around workflows in the Equinix Metal portal, and
- 2. Configuring the Nutanix environment - predominantly terminal based configuration steps to bring the cluster up.
1. Deploying the infrastructure
The deployment of the infrastructure in Equinix Metal will be done through the Equinix Metal console.
In this step, we'll create our Nutanix servers, a jump host, a VLAN, and then associate the servers with that VLAN.
Provisioning the Nutanix servers
For this step, we'll be using Reserved Servers. Go to the console and choose to deploy a reserved server.
We'll choose a custom request, the Dallas metro, and the m3.large.opt-c2s1.x86 server type. Don't forget to increase the quantity to 3 as we'll be creating a cluster.
Once reserved, choose to deploy the reserved hardware. Select the recently deployed servers and choose Nutanix LTS 6.5 from the Operating System drop down. For this guide we'll use the following hostnames:
- nutanix-ahv01
- nutanix-ahv02
- nutanix-ahv03
NOTE: At this point, it is worth mentioning that Nutanix does not allow for the hypervisor to be deployed with a Public IP address. Click on the Configure option and ensure that the Public IPv4 and Public IPv6 options are disabled.
It is for this purpose that access to this device will be managed through the SOS console. Alternatively, it is possible to use the Private IP address to access the console via the private L3 network as well, however, changes to the networking for the Nutanix Hypervisor (AHV) will result in a network drop and potential loss of access if misconfigured.
Create a jump host
We'll also need to create a jump host, with a public IP address, that can also access the Nutanix servers. The jump host will be used to verify access to the Nutanix Prism console once the deployment is complete.
To do this, deploy another server in the same metro; a c3.small.x86 is sufficient. Use the latest version of Ubuntu and give it a name like nutanix-jump.
Setting up a VLAN
Once the servers come up, we can convert their interfaces to hybrid bonded mode and attach the management VLAN (50).
To do this, go to the Networking > Layer 2 VLAN option in the navigation menu.
Choose to create a new VLAN with the following attributes:
- Metro: The same metro that the servers are located in
- Description: Management
- VNID: 50
Associating the VLAN with each server
Once the VLAN is created, view the details of each Nutanix server (and jump host). Click on the Networking tab, and choose the Convert to other network type button. Convert it with the following options:
- Type: Hybrid
- Hybrid: Bonded
- VLAN: The one you just created
NOTE: Repeat this step for each Nutanix server and the jump host.
With all of the Metal networking done, we will now begin to prepare each node to be joined into a single Nutanix cluster.
2. Configuring the Nutanix environment
NOTE: For the rest of this guide, we will focus on just a single node, but the actions MUST be completed on each other Nutanix node.
Use SOS to access the Nutanix servers
To access the Nutanix servers, we'll use SOS. To locate the SOS address for the server and initiate an SSH session, click on the terminal icon and copy the command into a terminal.
For example, it may look like the command below:
ssh 3b26382b-735f-481b-8580-2924f90ab3ab@sos.da11.platformequinix.com
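If you'll be connecting to each server's SOS console repeatedly, an entry in ~/.ssh/config saves retyping the UUID. A minimal sketch; the alias name is hypothetical, and the UUID and metro shown are placeholders for your own server's values:

```
# ~/.ssh/config -- hypothetical alias; substitute your server's UUID and metro
Host sos-nutanix-ahv01
    HostName sos.da11.platformequinix.com
    User 3b26382b-735f-481b-8580-2924f90ab3ab
```

With this in place, the session above becomes simply ssh sos-nutanix-ahv01.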
Through the SOS console, we will be connecting to AHV, for which the default credentials are:
username: root
password: nutanix/4u
NOTE: Nutanix has default credentials for both AHV and CVM, which are documented in the AHV Admin Guide.
Once logged in, you'll now want to connect to the CVM that is within the AHV.
Remember that the CVM is reachable on the internal bridge (192.168.5.0/24) at either 192.168.5.2 or 192.168.5.254. We can SSH to this VM using the credentials below (noting the capital "N" this time in the password):
username: admin
password: Nutanix/4u
For instance, this may look like:
root@ahv# ssh admin@192.168.5.2
If it is the first-time logging into the CVM, you will be prompted to change the password from the default. NOTE that you will need to re-login after changing the password.
Once re-logged into the CVM, we can view the current interface configurations, namely eth0 (external) and eth1 (internal). It is also worth noting the CVM is always referenced by the external IP:
Summary of networking configuration
Below is a reference table that summarizes the configuration for the entire environment (all in VLAN 50):
| Server | Component | IP Address |
|---|---|---|
| nutanix-jump | Management interface | 192.168.50.254 |
| nutanix-ahv01 | AHV | 192.168.50.10 |
| nutanix-ahv01 | CVM | 192.168.50.11 |
| nutanix-ahv02 | AHV | 192.168.50.20 |
| nutanix-ahv02 | CVM | 192.168.50.21 |
| nutanix-ahv03 | AHV | 192.168.50.30 |
| nutanix-ahv03 | CVM | 192.168.50.31 |
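The addressing plan above can also be kept as a small local shell snippet and sanity-checked before any addresses are typed into the consoles. This is purely a local helper built on this guide's addressing; it is not part of the Nutanix tooling:

```shell
#!/bin/sh
# Management-network addressing plan for VLAN 50 (192.168.50.0/24).
# Format: server component ip
PLAN="
nutanix-jump  mgmt 192.168.50.254
nutanix-ahv01 ahv  192.168.50.10
nutanix-ahv01 cvm  192.168.50.11
nutanix-ahv02 ahv  192.168.50.20
nutanix-ahv02 cvm  192.168.50.21
nutanix-ahv03 ahv  192.168.50.30
nutanix-ahv03 cvm  192.168.50.31
"

# Every IP must sit inside 192.168.50.0/24 (checked here by string prefix).
echo "$PLAN" \
  | awk 'NF { if ($3 !~ /^192\.168\.50\.[0-9]+$/) { print "BAD: " $0; bad=1 } } END { exit bad }' \
  && echo "plan OK"
```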
As there is no common network between all of our CVMs (across the three nodes), we will begin by creating 3 single-node clusters. This allows for the appropriate configurations to be made on each server, after which, the single-node clusters will be destroyed and a new 3-node cluster will be created.
Creating a single node cluster
To create a single node cluster on the Nutanix server we'll be using the cluster command, specifically setting the redundancy_factor option to 1.
admin@cvm# cluster -s <cvm_ip> --redundancy_factor=1 create
NOTE: The cvm_ip value will be the external IP of the CVM (eth0), which is currently configured as a link-local address in the 169.254.x.x range, so for example:
admin@cvm# cluster -s 169.254.100.254 --redundancy_factor=1 create
Executing this will start the cluster creation process; it takes some time and produces a lot of output. Once it completes, check the status of the cluster:
admin@cvm# cluster status
Output similar to the following (with different PIDs) should be displayed.
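The link-local address passed to the cluster command can be read from ip on the CVM (ip -4 addr show eth0). As a sketch of extracting it, run here against a captured sample of that output rather than a live CVM:

```shell
#!/bin/sh
# Sample output of `ip -4 addr show eth0` captured from a CVM before
# reconfiguration; on a real node, pipe the live command instead.
SAMPLE='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 169.254.100.254/16 brd 169.254.255.255 scope link eth0
       valid_lft forever preferred_lft forever'

# Pull the IPv4 address out of the "inet" line and strip the prefix length.
CVM_IP=$(echo "$SAMPLE" | awk '/inet /{ sub(/\/.*/, "", $2); print $2 }')
echo "cluster -s $CVM_IP --redundancy_factor=1 create"
```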
Update the external IP address of the CVM
Now that the cluster has been created, we can modify the external IP address of the CVM using the external_ip_reconfig command.
In order to execute this script, we first need to stop the current cluster.
admin@cvm# cluster stop
You should notice that every process except VipMonitor is shut down. We also need to restart Genesis, a bootstrap service that ensures everything is synced and performs some housekeeping for the CVM:
admin@cvm# genesis restart
At this point, we can execute the script to reconfigure the external CVM IP. Due to the files within the CVM that the script interacts with, it needs to be run with elevated privileges:
admin@cvm# sudo external_ip_reconfig
This will start a workflow to reconfigure the network mask, default gateway, and interface IP of the CVM. For our node, these details will be:
- Netmask: 255.255.255.0
- Default Gateway: 192.168.50.254
- CVM IP: 192.168.50.31
This reconfiguration will take a little time to complete, but afterwards we should log out of the CVM and use virsh to restart the VM:
root@ahv# virsh reboot <vm_name>
Which for this particular node is:
root@ahv# virsh reboot NTNX-G93STD3-A-CVM
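The CVM domain name differs per node (it embeds the node's serial), so it can be pulled out of virsh list rather than typed by hand. A sketch, parsing a captured sample of virsh list --all output; on a live AHV host you would pipe the real command instead:

```shell
#!/bin/sh
# Captured sample of `virsh list --all` from an AHV host; on a live host,
# replace this sample with the real command's output.
SAMPLE=' Id   Name                 State
----------------------------------------
 1    NTNX-G93STD3-A-CVM   running'

# The CVM domain always carries "CVM" in its name.
CVM_NAME=$(echo "$SAMPLE" | awk '/CVM/{ print $2 }')
echo "virsh reboot $CVM_NAME"
```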
Upon logging back into the CVM, we should notice that the referenced external IP has already changed:
Update the VLAN at the CVM level
With the IP changed, we also need to configure the VLAN that this network exists on. Remember, the management network defined in Metal was VLAN 50. This process is also covered in Nutanix's own documentation.
To do this, we'll use the change_cvm_vlan command that is pre-installed on each CVM. Remember to log back into the CVM if needed.
admin@cvm# change_cvm_vlan <vlan_id>
Which for this environment:
admin@cvm# change_cvm_vlan 50
NOTE: The password that this process is asking for is that of AHV, not CVM! This is because the network changes are being configured on the virtual switch of the host.
These changes will take a while to complete, but once successful, we can validate them at the AHV level. Note the tag on vnet0 (where the external interface of the CVM is attached).
root@ahv# ovs-vsctl show
That completes the configuration of the CVM; now it is on to AHV, where similar changes will need to be made.
Update br0 at the AHV level
For each AHV we'll need to make the following changes:
- br0 needs to be configured in the same subnet (VLAN 50): 192.168.50.0/24
- br0 needs to be tagged with VLAN 50
Log into the AHV as root and show the network-scripts configuration file for br0:
root@ahv# cat /etc/sysconfig/network-scripts/ifcfg-br0
Right now, the interface is configured to receive addresses via DHCP (from Metal). Update the file with nano or vi to have the following configuration:
root@ahv# nano /etc/sysconfig/network-scripts/ifcfg-br0
It should look like the following:
DEVICE="br0"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="ethernet"
BOOTPROTO="none"
PERSISTENT_DHCLIENT=1
NETMASK="255.255.255.0"
IPADDR="192.168.50.30"
GATEWAY="192.168.50.254"
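If you'd rather not hand-edit the file on each of the three hosts, the same contents can be generated from the node's target IP. A sketch that writes to /tmp for illustration; on a real AHV host the target would be /etc/sysconfig/network-scripts/ifcfg-br0:

```shell
#!/bin/sh
# Generate an ifcfg-br0 with a static address. IPADDR varies per node
# (.10, .20, .30); the netmask and gateway are common to all three.
IPADDR="192.168.50.30"
OUT="/tmp/ifcfg-br0"   # on a real host: /etc/sysconfig/network-scripts/ifcfg-br0

cat > "$OUT" <<EOF
DEVICE="br0"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="ethernet"
BOOTPROTO="none"
PERSISTENT_DHCLIENT=1
NETMASK="255.255.255.0"
IPADDR="$IPADDR"
GATEWAY="192.168.50.254"
EOF

grep -q 'BOOTPROTO="none"' "$OUT" && echo "wrote $OUT"
```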
To make the changes take effect, the network service will need to be restarted:
root@ahv# /etc/init.d/network restart
Restart the Genesis service (you must be logged into the CVM):
admin@cvm# genesis restart
And finally, tag the br0 interface to place AHV in the management network.
root@ahv# ovs-vsctl set port br0 tag=<vlan_tag>
Which for this deployment would be:
root@ahv# ovs-vsctl set port br0 tag=50
Again, this configuration change can be confirmed with the command below, by checking that the br0 port now has a tag of 50:
root@ahv# ovs-vsctl show
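That check can also be scripted. The sketch below parses a trimmed, captured fragment of ovs-vsctl show output and confirms the br0 port carries tag 50; on a live host you would pipe the real command instead:

```shell
#!/bin/sh
# Trimmed sample of `ovs-vsctl show` output after tagging; the Bridge/Port
# layout follows Open vSwitch's standard formatting.
SAMPLE='    Bridge br0
        Port br0
            tag: 50
            Interface br0
                type: internal'

# Find the "Port br0" stanza and read its tag value.
TAG=$(echo "$SAMPLE" | awk '/Port br0/{found=1} found && /tag:/{print $2; exit}')
[ "$TAG" = "50" ] && echo "br0 tagged with VLAN $TAG"
```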
Verify with a ping test
We can verify the configuration works with a ping test between AHV and CVM across vs0, as both interfaces are in VLAN 50:
admin@cvm# ping 192.168.50.30
NOTE: At this point, there is a chance that the CVM will not display the external IP, which causes the ping test to fail. Restarting the CVM from virsh solves the issue:
root@ahv# virsh reboot <vm_name>
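The ping check can also be looped across every address in the management network. A small helper, assuming the iputils ping flags (-c, -W) and substituting your own IP list:

```shell
#!/bin/sh
# Ping each management IP once with a 1-second timeout and report status.
check_host() {
    if ping -c 1 -W 1 "$1" > /dev/null 2>&1; then
        echo "$1 reachable"
    else
        echo "$1 UNREACHABLE"
    fi
}

# Management-network addresses from the plan; trimmed here to the AHV/CVM
# pair of a single node for brevity.
for ip in 192.168.50.30 192.168.50.31; do
    check_host "$ip"
done
```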
Destroy the cluster (yes, really)
Since all hosts were brought up as single-node clusters to configure the IPs, we first have to destroy those clusters. This removes the CVM IP from the current cluster and allows us to reassign it to a new cluster.
Log in to the CVM and run the cluster destroy command.
admin@cvm# cluster destroy
NOTE: After destruction, the CVM password might be reset. If it has, log out of the CVM and log back in using the default credentials (username: admin, password: Nutanix/4u).
Verify that it is complete by checking the cluster's status on each node:
admin@cvm# cluster status
Setup a 3-node cluster
With all of this completed, we are left with 3 nodes that have their AHV and CVMs configured in a single management network, and the 3-node cluster can now be defined.
From any CVM, run the following command:
admin@cvm# cluster -s <cvm_ips> create
In this example, it is:
admin@cvm# cluster -s 192.168.50.11,192.168.50.21,192.168.50.31 create
Nutanix will go through and build a new cluster containing the 3 nodes. Again, this will take some time to complete.
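If the CVM addresses are already held in a variable or file, the comma-separated -s argument can be assembled rather than typed. A small local sketch using the IPs from this guide's plan:

```shell
#!/bin/sh
# CVM external IPs from the addressing plan, one per node.
CVM_IPS="192.168.50.11 192.168.50.21 192.168.50.31"

# Join with commas for the cluster command's -s argument.
JOINED=$(echo "$CVM_IPS" | tr ' ' ',')
echo "cluster -s $JOINED create"
```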
Testing the cluster
Once the configuration is finished, we are ready to test the cluster. Since the CVM interface is in the same network as the management interface of the jump host, we should be able to reach our Nutanix cluster from there.
Log onto the jump host, load a browser and hit any of the CVM IP addresses. If everything has worked, the Nutanix Prism Console should be visible.
This portal uses the same credentials as the CVM itself. Once logged in and a few T&Cs have been accepted, the management console will present itself, ready for further configuration.
Congratulations! You have successfully deployed a multi-node Nutanix cluster.
Summary
In this guide we covered how to deploy the required infrastructure (three Nutanix hosts and a jump host) through Equinix Metal, and how to configure each host to work in a multi-node configuration. To learn more, explore the Nutanix and Equinix Metal documentation.
Last updated: 25 April, 2024