# Upgrade from v1.6.x to v1.7.x

## General Information
An Upgrade button appears on the Dashboard screen whenever a new Harvester version that you can upgrade to becomes available. For more information, see Start an upgrade.
Clusters running v1.6.x can be upgraded to v1.7.x directly because Harvester allows the underlying components to move forward by at most one minor version per upgrade. Harvester v1.6.0 and v1.6.1 use the same minor version of RKE2 (v1.33), while Harvester v1.7.0 and v1.7.1 use the next minor version (v1.34). For more information, see Upgrade paths.
For information about upgrading Harvester in air-gapped environments, see Prepare an air-gapped upgrade.
v1.7.x uses NetworkManager instead of wicked, which was used in earlier versions of Harvester. If you modified the management interface configuration after the initial installation, you must perform additional manual steps to avoid issues during the upgrade. For more information, see Migration from wicked to NetworkManager.
Host IP addresses configured via DHCP may change during upgrades. This prevents the cluster from starting correctly and requires manual recovery steps. For details, see Host IP address may change during upgrade when using DHCP.
## Update Harvester UI Extension on Rancher v2.13
You must use a compatible version (v1.7.x) of the Harvester UI Extension to import Harvester v1.7.x clusters on Rancher v2.13.
1. On the Rancher UI, go to local > Apps > Repositories.
2. Locate the repository named harvester, and then select ⋮ > Refresh.
3. Go to the Extensions screen.
4. Locate the extension named Harvester, and then click Update.
5. Select a compatible version, and then click Update.
6. Allow some time for the extension to be updated, and then refresh the screen.
## Migration from wicked to NetworkManager
Harvester v1.7.x transitions from wicked to NetworkManager for network management. Because there is no direct 1:1 mapping between the legacy ifcfg files and NetworkManager's connection profiles, an in-place migration of the existing network configuration is not possible.
During upgrades, Harvester generates new NetworkManager connection profiles using the original installation settings stored in /oem/harvester.config. The legacy ifcfg files in /oem/90_custom.yaml remain on the system, but NetworkManager ignores these files and instead stores its configuration under /etc/NetworkManager.
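For illustration, NetworkManager stores each connection profile as a keyfile under /etc/NetworkManager/system-connections/. A hypothetical generated profile for the management bridge might look like the following (the profile name `bridge-mgmt` and bridge name `mgmt-br` match the names used later in this document; the exact contents Harvester generates may differ):

```
$ cat /etc/NetworkManager/system-connections/bridge-mgmt.nmconnection
[connection]
id=bridge-mgmt
type=bridge
interface-name=mgmt-br

[bridge]
stp=false

[ipv4]
method=auto
```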
| Scenario | Action Required |
|---|---|
| You installed v1.1 or later, and never manually modified the management interface or DNS configuration. | None |
| You manually modified the management interface configuration by editing the /oem/90_custom.yaml file or by adding CloudInit resources that modify the ifcfg files. | Required (Custom configuration will be ignored after the upgrade to v1.7.0.) |
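If you are unsure whether the second scenario applies, one quick check is to search the legacy file for ifcfg entries (a sketch; the installer also writes its own entries to this file, so compare any matches against an unmodified installation):

```
# List ifcfg-related entries in the legacy configuration file
$ grep -n "ifcfg" /oem/90_custom.yaml
```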
If action is required, choose one of the following methods:
- Pre-upgrade (Recommended): Edit the `/oem/harvester.config` file on each node. Configure the relevant network settings, particularly `os.dns_nameservers` and `install.management_interface` (see the sketch after this list). For more information, see Harvester Configuration.

  > **Note:** If you initially installed v1.0, ensure that `install.management_interface` follows the updated schema required by later Harvester versions.

- Post-upgrade: Use the `nmcli` tool to manually re-apply your custom configuration to the new NetworkManager connection profiles.
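A minimal sketch of the relevant `/oem/harvester.config` settings, assuming a DHCP-managed interface named ens3 (both the interface name and the DNS server below are placeholders; verify the full schema in Harvester Configuration):

```yaml
os:
  dns_nameservers:
    - 192.168.0.1          # placeholder: your DNS server
install:
  management_interface:
    interfaces:
      - name: ens3         # placeholder: your NIC name
    method: dhcp           # or "static" with ip, subnet_mask, and gateway
    bond_options:
      mode: active-backup
      miimon: 100
```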
If you encounter any issues during the upgrade, you can perform the following workarounds:
| Scenario | Workaround | Result |
|---|---|---|
| A node becomes stuck in "Waiting Reboot" state. | Log in via the console and verify the network configuration using the nmcli tool. If necessary, manually correct the configuration, then reboot the node. | The upgrade automatically resumes. |
| Errors occur when you manually change the configuration. | If you want to revert to the automatically generated NetworkManager connection profiles, run the command harvester-installer generate-network-config. | The NetworkManager connection profiles in /etc/NetworkManager/system-connections/ are recreated based on the configuration specified in /oem/harvester.config. |
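For example, reverting to the generated profiles might look like this (a sketch; profile names vary by configuration):

```
# Recreate connection profiles from /oem/harvester.config
$ harvester-installer generate-network-config

# Reload profiles from disk and verify the result
$ nmcli connection reload
$ nmcli connection show
```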
## Known Issues
### 1. Host IP address may change during upgrade when using DHCP
Harvester v1.7.x uses NetworkManager instead of wicked, which was used in earlier versions of Harvester. These two network stacks have different defaults for generating DHCP client IDs.
If the host IP addresses are configured using DHCP, a Harvester upgrade and subsequent reboot may cause the DHCP server to assign IP addresses that are different from what hosts previously used. Consequently, the affected hosts are unable to join the cluster on startup because of the IP address change.
This issue typically occurs when the DHCP server allocates IP addresses based solely on the DHCP client ID. You are unlikely to encounter this issue when the DHCP server is configured to allocate fixed IP addresses based on the MAC address (as demonstrated in the Harvester iPXE Examples).
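For example, with dnsmasq as the DHCP server (shown purely as an illustration; the MAC address and IP address are placeholders), a MAC-based static lease looks like this:

```
# /etc/dnsmasq.d/harvester.conf
# Always assign this IP address to the node with this MAC address,
# regardless of the DHCP client ID the node sends
dhcp-host=52:54:00:dd:c7:05,192.168.0.131
```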
The impact of this issue varies by cluster size:
- Single-node clusters: Harvester fails to start after rebooting because the IP address has changed.
- Multi-node clusters: Management nodes become stuck in the "Waiting Reboot" state.
To address the issue, perform the following steps:
> **Note:** You must perform the steps for each affected node after the upgrade is completed and the IP address has changed.
1. Log in to the affected node. You can either access the node via SSH at its new IP address or use the console.
2. In the `/var/lib/wicked` directory, check for the lease XML file (named similar to `/var/lib/wicked/lease-mgmt-br-dhcp-ipv4.xml`).

   If you are using a VLAN, the file name includes the VLAN ID (for example, `/var/lib/wicked/lease-mgmt-br.2017-dhcp-ipv4.xml`).

3. View the file and identify the DHCP client ID.
   ```
   $ cat /var/lib/wicked/lease-mgmt-br-dhcp-ipv4.xml
   <lease>
   ...
     <ipv4:dhcp>
       <client-id>ff:00:dd:c7:05:00:01:00:01:30:ae:a0:d3:52:54:00:dd:c7:05</client-id>
   ...
     </ipv4:dhcp>
   </lease>
   ```

4. Use the `nmcli` command to set the DHCP client ID for the appropriate NetworkManager connection profile. The connection profile you need to modify depends on whether your node uses a VLAN.
   - No VLAN: Add the DHCP client ID to the `bridge-mgmt` connection profile.
   - VLAN used: Add the DHCP client ID to the `vlan-mgmt` connection profile.
   For example, in the no VLAN case:

   ```
   $ nmcli con modify bridge-mgmt \
       ipv4.dhcp-client-id \
       ff:00:dd:c7:05:00:01:00:01:30:ae:a0:d3:52:54:00:dd:c7:05
   ```

   Be sure to replace the client ID in the example with the actual client ID from your wicked lease file.
5. Reboot the node.
The DHCP server should return the original IP address and the affected node should be able to join the cluster.
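After the reboot, you can confirm the fix (a quick sketch; the profile and bridge names below assume the no VLAN case):

```
# Confirm the client ID stored in the connection profile
$ nmcli -g ipv4.dhcp-client-id connection show bridge-mgmt

# Confirm the management bridge regained the expected IP address
$ ip -4 addr show mgmt-br
```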