
Hardware and Network Requirements

Harvester is an HCI solution that runs on bare-metal servers, and each node must meet minimum hardware and network requirements for installing and running Harvester.

A three-node cluster is required to fully realize the multi-node features of Harvester. The first node that is added to the cluster is by default the management node. When the cluster has three or more nodes, the two nodes added after the first are automatically promoted to management nodes to form a high availability (HA) cluster.
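
After installation, you can check which nodes hold the management role. The following is a minimal sketch, assuming you have kubectl access to the cluster's kubeconfig; because Harvester runs on RKE2, management nodes carry the control-plane and etcd roles:

```
# Management nodes show control-plane,etcd,master in the ROLES column;
# compute nodes show <none>.
kubectl get nodes
```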

Certain versions of Harvester support the deployment of single-node clusters. Such clusters do not support high availability, multiple replicas, or live migration.

Hardware Requirements

Harvester nodes have the following hardware requirements and recommendations for installation and testing.

| Hardware | Development/Testing | Production |
| --- | --- | --- |
| CPU | x86_64 (with hardware-assisted virtualization); 8 cores minimum | x86_64 (with hardware-assisted virtualization); 16 cores minimum |
| Memory | 32 GB minimum | 64 GB minimum |
| Disk capacity | 250 GB minimum (180 GB minimum when using multiple disks) | 500 GB minimum |
| Disk performance | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet etcd speed requirements. Only local disks and hardware RAID are supported. | 5,000+ random IOPS per disk (SSD/NVMe); management node storage must meet etcd speed requirements. Only local disks and hardware RAID are supported. |
| Network card count | Management cluster network: 1 NIC required, 2 NICs recommended. VM workload network: 1 NIC required, at least 2 NICs recommended (does not apply to the witness node). | Management cluster network: 1 NIC required, 2 NICs recommended. VM workload network: 1 NIC required, at least 2 NICs recommended (does not apply to the witness node). |
| Network card speed | 1 Gbps Ethernet minimum | 10 Gbps Ethernet minimum |
| Network switch | Port trunking for VLAN support | Port trunking for VLAN support |
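
The Disk performance row requires management node storage to meet etcd's speed requirements. One common way to sanity-check this before installation is the fio benchmark recommended in the etcd documentation; the following is a minimal sketch (the directory, size, and job name are illustrative):

```
# Benchmark small sequential writes followed by fdatasync, mimicking
# how etcd persists its write-ahead log. Run this against the disk
# that will back the management node.
fio --rw=write --ioengine=sync --fdatasync=1 \
  --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-check

# In the output, check the fsync/fdatasync latency percentiles:
# a 99th percentile under roughly 10 ms is generally considered
# adequate for etcd.
```
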
important
  • For best results, use YES-certified hardware for SUSE Linux Enterprise Server (SLES) 15 SP3 or SP4. Harvester is built on SLE technology and YES-certified hardware has additional validation of driver and system board compatibility. Laptops and nested virtualization are not supported.
  • Each node must have a unique product_uuid (read from /sys/class/dmi/id/product_uuid) to prevent errors during VM live migration and other operations. For more information, see Issue #4025. A quick uniqueness check is sketched after this list.
  • Harvester has a built-in management cluster network (mgmt). To achieve high availability and the best performance in production environments, use at least two NICs in each node to set up a bonded NIC for the management network (see step 6 in ISO Installation). You can also create custom cluster networks for VM workloads. Each custom cluster network requires at least two additional NICs to set up a bonded NIC in every involved node of the Harvester cluster. The witness node does not require additional NICs. For more information, see Cluster Network.
  • During testing, you can use a single NIC for the built-in management cluster network (mgmt) and for a VM network that is also carried by mgmt. In this configuration, high availability and optimal performance are not guaranteed.
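
The product_uuid requirement above can be verified before the cluster is assembled. The following is a minimal sketch, assuming SSH access with Harvester's default rancher user (the hostnames are illustrative):

```
# Print each node's DMI product UUID; every value must be unique.
for node in node1 node2 node3; do
  printf '%s: ' "$node"
  ssh "rancher@$node" cat /sys/class/dmi/id/product_uuid
done
```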

CPU Specifications

Live Migration functions correctly only if the CPUs of all physical servers in the Harvester cluster have the same specifications. This requirement applies to all operations that rely on Live Migration functionality, such as automatic VM migration when Maintenance Mode is enabled.

Newer CPUs (even those from the same vendor, generation, and family) can have varying capabilities that may be exposed to VM operating systems. To ensure VM stability, Live Migration checks if the CPU capabilities are consistent, and blocks migration attempts when the source and destination are incompatible.

When creating a cluster, adding hosts to it, or replacing hosts, always use CPUs with the same specifications to prevent operational constraints.
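
Before adding or replacing a host, you can compare CPU models and feature flags across machines. The following is a minimal sketch, assuming SSH access with Harvester's default rancher user (the hostnames are illustrative):

```
# Capture the CPU model and feature flags from each node.
for node in node1 node2 node3; do
  ssh "rancher@$node" lscpu | grep -E '^(Model name|Flags):' > "/tmp/cpu-$node.txt"
done

# Any diff output indicates mismatched CPU specifications.
diff /tmp/cpu-node1.txt /tmp/cpu-node2.txt
diff /tmp/cpu-node1.txt /tmp/cpu-node3.txt
```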

Network Requirements

Harvester nodes have the following network requirements for installation.

Port Requirements for Harvester Nodes

Harvester nodes require the following port connections or inbound rules. Typically, all outbound traffic is allowed.

| Protocol | Port | Source | Description |
| --- | --- | --- | --- |
| TCP | 2379 | Harvester management nodes | etcd client port |
| TCP | 2381 | Harvester management nodes | etcd metrics collection |
| TCP | 2380 | Harvester management nodes | etcd peer port |
| TCP | 2382 | Harvester management nodes | etcd client port (HTTP only) |
| TCP | 10010 | Harvester management and compute nodes | containerd |
| TCP | 6443 | Harvester management nodes | Kubernetes API |
| TCP | 9345 | Harvester management nodes | Kubernetes API |
| TCP | 10252 | Harvester management nodes | kube-controller-manager health checks |
| TCP | 10257 | Harvester management nodes | kube-controller-manager secure port |
| TCP | 10251 | Harvester management nodes | kube-scheduler health checks |
| TCP | 10259 | Harvester management nodes | kube-scheduler secure port |
| TCP | 10250 | Harvester management and compute nodes | kubelet |
| TCP | 10256 | Harvester management and compute nodes | kube-proxy health checks |
| TCP | 10258 | Harvester management nodes | cloud-controller-manager |
| TCP | 10260 | Harvester management nodes | cloud-controller-manager |
| TCP | 9091 | Harvester management and compute nodes | Canal calico-node felix |
| TCP | 9099 | Harvester management and compute nodes | Canal CNI health checks |
| UDP | 8472 | Harvester management and compute nodes | Canal CNI with VxLAN |
| TCP | 2112 | Harvester management nodes | kube-vip |
| TCP | 6444 | Harvester management and compute nodes | RKE2 agent |
| TCP | 10246/10247/10248/10249 | Harvester management and compute nodes | NGINX worker processes |
| TCP | 8181 | Harvester management and compute nodes | nginx-ingress-controller |
| TCP | 8444 | Harvester management and compute nodes | nginx-ingress-controller |
| TCP | 10245 | Harvester management and compute nodes | nginx-ingress-controller |
| TCP | 80 | Harvester management and compute nodes | NGINX |
| TCP | 9796 | Harvester management and compute nodes | node-exporter |
| TCP | 30000-32767 | Harvester management and compute nodes | NodePort port range |
| TCP | 22 | Harvester management and compute nodes | sshd |
| UDP | 68 | Harvester management and compute nodes | Wicked |
| TCP | 3260 | Harvester management and compute nodes | iscsid |
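
If a firewall sits between nodes, you can spot-check TCP reachability of the ports above from another node. The following is a minimal sketch using nc, with an illustrative management node address and a few of the critical ports from the table:

```
# Probe a few critical TCP ports on a management node
# (10.0.0.10 is an illustrative address).
for port in 2379 2380 6443 9345 10250; do
  nc -zv -w 3 10.0.0.10 "$port"
done

# Note: UDP ports (for example 8472 for VxLAN) cannot be verified
# reliably with a simple probe; confirm them in your firewall rules.
```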

Port Requirements for Integrating Harvester with Rancher

To integrate Harvester with Rancher, make sure that all Harvester nodes can connect to TCP port 443 of the Rancher load balancer.

When you provision Kubernetes clusters on Harvester VMs from Rancher, the guest cluster nodes must also be able to connect to TCP port 443 of the Rancher load balancer; otherwise, Rancher cannot manage the clusters. For more information, refer to Rancher Architecture.
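
You can confirm this connectivity from a Harvester node using Rancher's /ping health endpoint; a minimal sketch follows (the hostname is illustrative):

```
# Expect the response body "pong" when TCP port 443 is reachable
# and the Rancher server is healthy. -k skips certificate validation
# for self-signed setups.
curl -fk https://rancher.example.com/ping
```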

Port Requirements for K3s or RKE/RKE2 Clusters

For the port requirements for guest clusters deployed inside Harvester VMs, refer to the following links: