Upgrade from v1.7.x to v1.8.x
General Information
An Upgrade button appears on the Dashboard screen whenever a new Harvester version that you can upgrade to becomes available. For more information, see Start an upgrade.
Clusters running v1.7.x can be upgraded directly to v1.8.x because the upgrade advances underlying components by no more than one minor version, which is the maximum Harvester allows. Harvester v1.7.0 and v1.7.1 use the same minor version of RKE2 (v1.34), while Harvester v1.8.0 uses the next minor version (v1.35). For more information, see Upgrade paths.
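Before starting the upgrade, you can confirm the RKE2 minor version that your cluster currently runs. The following is a minimal sketch that assumes kubectl access to the Harvester cluster:

    # The VERSION column reports each node's Kubernetes (RKE2) version,
    # for example a v1.34.x build on Harvester v1.7.x clusters.
    kubectl get nodes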
For information about upgrading Harvester in air-gapped environments, see Prepare an air-gapped upgrade.
Support for legacy BIOS booting is removed in v1.8.0. Existing Harvester clusters that use this boot mode will continue to function, but upgrading to later versions may require re-installation in UEFI mode. To avoid issues and disruptions, use UEFI in new installations.
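To check the boot mode of an existing node, you can look for the EFI variables directory, which exists only on systems booted in UEFI mode. This is a minimal sketch to run from a shell on the node:

    # /sys/firmware/efi is present only when the node was booted with UEFI.
    [ -d /sys/firmware/efi ] && echo "UEFI" || echo "Legacy BIOS"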
Update Harvester UI Extension on Rancher v2.14
You must use a compatible version (v1.8.x) of the Harvester UI Extension to import Harvester v1.8.x clusters on Rancher v2.14.
- On the Rancher UI, go to local > Apps > Repositories.
- Locate the repository named harvester, and then select ⋮ > Refresh.
- Go to the Extensions screen.
- Locate the extension named Harvester, and then click Update.
- Select a compatible version, and then click Update.
- Allow some time for the extension to be updated, and then refresh the screen.
Known Issues
1. Virtual Machines Fail to Migrate with "KubeVirt Not Ready" Error
After upgrading from v1.7.x to v1.8.x, virtual machines may fail to migrate with the error message "KubeVirt is not ready". This issue is caused by a race condition in which a virt-handler pod is created with missing annotations that are required by KubeVirt to determine whether the pod is up-to-date.

The KubeVirt operator continuously waits for the outdated virt-handler pod to terminate, preventing the KubeVirt custom resource from reaching the "Available" state. This blocks virtual machine operations including live migration.
This issue has been observed in three-node clusters with one witness node, but may occur in other configurations as well.
Identifying the Issue
- Check the KubeVirt custom resource status:

    kubectl get kubevirt/kubevirt -n harvester-system -o yaml | yq '.status.conditions'

  If the issue is present, you will see the Available condition set to False with the reason DeploymentInProgress:

    - lastProbeTime: "2026-04-18T17:42:39Z"
      lastTransitionTime: "2026-04-18T17:42:39Z"
      message: Deploying version 1.7.0-150700.3.16.2 with registry registry.suse.com/suse/sles/15.7
      reason: DeploymentInProgress
      status: "False"
      type: Available

- Check the virt-operator logs:

    kubectl logs deployment/virt-operator -n harvester-system --tail 10 | grep waiting

  You should see messages indicating that the DaemonSet is waiting for outdated pods to terminate:

    {"component":"virt-operator","level":"info","msg":"DaemonSet virt-handler waiting for out of date pods to terminate.","pos":"readycheck.go:63","timestamp":"2026-04-20T02:19:14.503468Z"}

- Identify the problematic virt-handler pod by checking which pod is missing the required KubeVirt annotations:

    kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -o json | \
      jq -r '.items[] | "\(.metadata.name):\n" + ((.metadata.annotations // {}) | to_entries | map(select(.key | startswith("kubevirt.io/install-strategy-"))) | map("  \(.key): \(.value)") | join("\n")) + "\n"'

  The output will show each pod with its KubeVirt install-strategy annotations. The problematic pod will have no annotations listed:

    virt-handler-64r9v:
      kubevirt.io/install-strategy-identifier: 9890638436fb4150e2046eff9f500bc4f18812f8
      kubevirt.io/install-strategy-registry: registry.suse.com/suse/sles/15.7
      kubevirt.io/install-strategy-version: 1.7.0-150700.3.16.2

    virt-handler-wzmdv:

  The pod with no annotations (in this example, virt-handler-wzmdv) is the problematic one that needs to be deleted.
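If you only need the name of the affected pod, the following is a compact sketch built on the same kubectl and jq tooling; it prints only the virt-handler pods that have no install-strategy annotations, so the output should contain just the pod to delete in the workaround below:

    # Print only virt-handler pods that lack kubevirt.io/install-strategy-* annotations.
    kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -o json | \
      jq -r '.items[] | select([(.metadata.annotations // {}) | keys[] | startswith("kubevirt.io/install-strategy-")] | any | not) | .metadata.name'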
Workaround
Delete the problematic virt-handler pod. Kubernetes will automatically recreate it with the correct annotations.
- Identify the name of the problematic pod (for example, virt-handler-wzmdv).

- Delete the problematic pod:

    kubectl delete pod virt-handler-wzmdv -n harvester-system

- Wait for the pod to be recreated and verify that the KubeVirt custom resource is now available:

    kubectl get kubevirt/kubevirt -n harvester-system -o yaml | yq '.status.conditions[] | select(.type == "Available")'

  The Available condition should now be set to True:

    - lastProbeTime: "2026-04-18T17:45:00Z"
      lastTransitionTime: "2026-04-18T17:45:00Z"
      message: All components ready
      reason: AllComponentsReady
      status: "True"
      type: Available

- Verify that virtual machine operations are now working correctly.
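For example, you can confirm that the VirtualMachineInstances are running and retry a live migration. This is a minimal sketch; it assumes the virtctl client is available, and <vm-name> and <namespace> are placeholders for your own virtual machine:

    # Confirm that VirtualMachineInstances are running across all namespaces.
    kubectl get vmi -A

    # Retry a live migration for a specific virtual machine.
    virtctl migrate <vm-name> -n <namespace>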
Related issue: #10447