
Creating an RKE1 Kubernetes Cluster

You can now provision RKE1 Kubernetes clusters on top of the Harvester cluster in Rancher v2.6.3+ with the built-in Harvester node driver.


note
  • A VLAN network is required for the Harvester node driver.
  • The Harvester node driver only supports cloud images.

When you create a Kubernetes cluster hosted by the Harvester infrastructure, node templates are used to provision the cluster nodes. These templates use Docker Machine configuration options to define an operating system image and settings/parameters for the node.

Node templates can use cloud credentials to access the credential information required to provision nodes in the infrastructure provider. The same cloud credentials can be used by multiple node templates, so you do not have to re-enter access keys for the same cloud provider. Cloud credentials are stored as Kubernetes secrets.

You can create cloud credentials in two contexts: during creation of a node template for a cluster, or up front from the Cluster Management page as described below.

All cloud credentials are bound to your user profile and cannot be shared with other users.

Create Your Cloud Credentials

  1. Click ☰ > Cluster Management.
  2. Click Cloud Credentials.
  3. Click Create.
  4. Click Harvester.
  5. Enter your cloud credential name.
  6. Select "Imported Harvester Cluster".
  7. Click Create.


Create Node Template

You can use the Harvester node driver to create node templates and eventually node pools for your Kubernetes cluster.

  1. Configure the Cloud Credentials.
  2. Configure Instance Options:
    • Configure the CPU, memory, and disk
    • Select an OS image that is compatible with the cloud-init config.
    • Select a network that the node driver is able to connect to; currently, only VLAN is supported.
    • Enter the SSH User; this username is used to SSH into the nodes. For example, the default user of the Ubuntu cloud image is ubuntu.
  3. (Optional) Configure Advanced Options if you want to customize the cloud-init config of the VMs (see the example after this list).
  4. Enter a RANCHER TEMPLATE name.
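
The user data under Advanced Options is standard cloud-init configuration. As a minimal sketch (assuming a Debian/Ubuntu-based cloud image; adapt package names to your distribution), the following installs and enables the qemu-guest-agent that RKE1 provisioning relies on to detect the VM IP:

#cloud-config
package_update: true
packages:
  # guest agent used to report the VM IP address (assumed package name)
  - qemu-guest-agent
runcmd:
  # start the agent now and on every boot
  - systemctl enable --now qemu-guest-agent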

See nodes hosted by an infrastructure provider for more information.

Add Node Affinity

Available as of v1.0.3 + Rancher v2.6.7

The Harvester node driver now supports scheduling a group of machines to particular nodes through node affinity rules, which can improve high availability and resource utilization.

Node affinity rules can be added to the node template during cluster creation. Click Add Node Template, or edit an existing node template via RKE1 Configuration > Node Templates:

  1. Check the Advanced Options tab and click Add Node Selector.
  2. Set the priority to Required if you want the scheduler to place the machines only on nodes that meet the rules.
  3. Click Add Rule to specify the node affinity rules. For example, for the topology spread constraints use case, you can add the region and zone labels as follows (see the equivalent Kubernetes affinity sketch after these steps):
    key: topology.kubernetes.io/region
    operator: in list
    values: us-east-1
    ---
    key: topology.kubernetes.io/zone
    operator: in list
    values: us-east-1a
  4. Click Create to save the node template. After the cluster is installed, you can check whether its machine nodes are scheduled according to the affinity rules.
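
For reference, rules like the ones above correspond to a standard Kubernetes node affinity term along the following lines. This is only an illustrative sketch: the region and zone values are placeholders, and the exact spec generated for the VMs may differ.

affinity:
  nodeAffinity:
    # the "Required" priority maps to a hard scheduling requirement
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/region
              operator: In
              values:
                - us-east-1
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east-1a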

Create RKE1 Kubernetes Cluster

Users can create an RKE1 Kubernetes cluster from the Cluster Management page via the Harvester RKE1 node driver.

  1. Select the Clusters menu.
  2. Click the Create button.
  3. Toggle the switch to RKE1.
  4. Select Harvester node driver.
  5. Enter Cluster Name (required).
  6. Enter Name Prefix (required).
  7. Enter Template (required).
  8. Select etcd and Control Plane (required).
  9. Under Cluster Options, set Cloud Provider to Harvester if you want to use the Harvester Cloud Provider and CSI Driver.
  10. Click Create.


Using the Harvester RKE1 Node Driver in an Air-Gapped Environment

RKE1 provisioning relies on qemu-guest-agent to get the IP of the virtual machine and on docker to set up the RKE cluster. However, it may not be feasible to install qemu-guest-agent and docker in an air-gapped environment.

You can address the installation constraints with the following options:

Option 1. Use a VM image with qemu-guest-agent and docker installed.

Option 2. Configure the cloud-init user data to enable the VMs to install qemu-guest-agent and docker via an HTTP(S) proxy.

Example user data in Harvester node template:

#cloud-config
apt:
  http_proxy: http://192.168.0.1:3128
  https_proxy: http://192.168.0.1:3128
write_files:
  - path: /etc/environment
    content: |
      HTTP_PROXY="http://192.168.0.1:3128"
      HTTPS_PROXY="http://192.168.0.1:3128"
    append: true
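
With the proxy configured, the same user data can also be extended to install the required packages. The following is a sketch assuming an Ubuntu cloud image, where the Docker package is named docker.io; adjust package names for other distributions:

#cloud-config
apt:
  http_proxy: http://192.168.0.1:3128
  https_proxy: http://192.168.0.1:3128
package_update: true
packages:
  # assumed package names for an Ubuntu cloud image
  - qemu-guest-agent
  - docker.io
runcmd:
  # start both services now and on every boot
  - systemctl enable --now qemu-guest-agent
  - systemctl enable --now docker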