The Harvester node driver is used to provision VMs in the Harvester cluster. In this section, you'll learn how to configure Rancher to use the Harvester node driver to launch and manage Kubernetes clusters.
You can now provision RKE1/RKE2 Kubernetes clusters in Rancher v2.6.3+ with the built-in Harvester node driver.
Additionally, Harvester can now provide built-in load balancer support as well as raw cluster persistent storage support to the guest Kubernetes cluster.
Harvester v1.0.0 is compatible with Rancher v2.6.3+.
Harvester Node Driver
The Harvester node driver is enabled by default as of Rancher v2.6.3. You can go to the Cluster Management > Node Drivers page to manage the Harvester node driver manually.
When the Harvester node driver is enabled, you can create Kubernetes clusters on top of the Harvester cluster and manage them from Rancher.
RKE1 Kubernetes Cluster
Click to learn how to create RKE1 Kubernetes Clusters.
RKE2 Kubernetes Cluster
Click to learn how to create RKE2 Kubernetes Clusters.
K3s Kubernetes Cluster
Click to learn how to create K3s Kubernetes Clusters.
Topology Spread Constraints
Available as of v1.0.3
In your guest Kubernetes cluster, you can use topology spread constraints to control how workloads are spread across the Harvester VMs among failure-domains such as regions and zones. This can help to achieve high availability as well as efficient resource utilization of your cluster resources.
The minimum RKE2 versions required to support the sync topology label feature are as follows:
| Supported RKE2 Version |
| --- |
In addition, the cloud provider version installed via the Apps of RKE/K3s must be >= v0.1.4.
Sync Topology Labels to the Guest Cluster Node
During the cluster installation, the Harvester node driver automatically synchronizes topology labels from VM nodes to guest cluster nodes. Currently, only zone topology labels are supported.
Label synchronization only takes effect during guest node initialization. To avoid a node drifting to another region or zone, it is recommended to add node affinity rules during cluster provisioning, so that the VMs are scheduled to the same zone even after rebuilding.
Configure topology labels on the Harvester nodes through Hosts > Edit Config > Labels. For example, add topology labels as follows:
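As a sketch, using the Kubernetes well-known topology label keys (the region and zone values below are placeholders; substitute your own):

```yaml
# Labels added to a Harvester node via Hosts > Edit Config > Labels
topology.kubernetes.io/region: region-1
topology.kubernetes.io/zone: zone-1
```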
Create a guest Kubernetes cluster using the Harvester node driver. It is recommended to add node affinity rules during provisioning, which helps avoid nodes drifting to other zones after VM rebuilding.
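A node affinity rule that pins VMs to a single zone could look like the following sketch (the `zone-1` value is a placeholder matching the label set on the Harvester hosts):

```yaml
# Node affinity fragment pinning scheduling to one zone
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - zone-1
```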
After the cluster is successfully deployed, confirm that guest Kubernetes node labels are successfully synchronized from the Harvester VM node.
Now deploy workloads on your guest Kubernetes cluster, and you should be able to manage them using the topology spread constraints.
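For instance, a Deployment that spreads its replicas evenly across zones might use a `topologySpreadConstraints` stanza like this (the `demo` names and image are illustrative, not from the Harvester docs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                  # at most 1 replica difference between zones
          topologyKey: topology.kubernetes.io/zone    # the label synced from the Harvester VM nodes
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: demo
      containers:
        - name: web
          image: nginx
```

With `whenUnsatisfiable: DoNotSchedule`, pods stay pending rather than violating the skew; use `ScheduleAnyway` for a soft constraint.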