- Dec 4, 2025
- Parsed from source: Dec 4, 2025
- Detected by Releasebot: Dec 6, 2025
Proxmox Datacenter Manager 1.0 (stable)
Proxmox Datacenter Manager ships its first stable release, a centralized multi-cluster admin for Proxmox environments. Built in Rust with a modern frontend, it adds LDAP/OpenID auth, live migrations across clusters, centralized updates, and SDN integration.
We're very excited to present the first stable release of our new Proxmox Datacenter Manager!
Proxmox Datacenter Manager is an open-source, centralized management solution to oversee and manage multiple, independent Proxmox-based environments. It provides an aggregated view of all your connected nodes and clusters and is designed to manage complex and distributed infrastructures, from local installations to globally scaled data centers. Its multi-cluster management enables operations such as live migration of virtual guests between clusters without any cluster network requirements.
The project is fully developed in the Rust programming language, from the backend API server to the CLI tools to a completely new frontend. The frontend is built on the new widget toolkit that we developed over the last few years. This offers a more modern web user interface experience, not only in terms of appearance and functionality, but also in terms of accessibility, speed, and compatibility. Proxmox Datacenter Manager is licensed under the GNU Affero General Public License v3 (GNU AGPLv3).
A big THANK YOU to our global community for the substantial support during the beta phase! We are particularly grateful for the stress tests performed, the detailed bug reports, and contributions of other kinds - thank you so much, your collaboration is fantastic!
Main Features
- Based on Debian Trixie 13.2, with latest security updates
- Linux kernel based on version 6.17 with ZFS 2.3.4 included
- Authentication: support for LDAP, Active Directory, and OpenID Connect realms
- Custom views for tailored overviews, filtered by remotes, resources, resource type, or tags, with dedicated access control
- Support for Proxmox Virtual Environment and Proxmox Backup Server remotes
- Efficient central metric collection
- Powerful search functionality to quickly find resources (filtered by resource type, status, and more)
- Privilege Management for Proxmox Datacenter Manager users from the access control UI
- Centralized system update overview
- Initial Software-Defined Networking integration with EVPN configuration between clusters
- Enterprise support available for existing customers with active Basic or higher subscriptions for their Proxmox remotes
- Open-source license: GNU AGPLv3
Release notes
https://pdm.proxmox.com/docs/roadmap.html#proxmox-datacenter-manager-1-0
Press release
https://www.proxmox.com/en/about/company-details/press-releases/proxmox-datacenter-manager-1-0
Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso
Documentation
Community Forum
Bugtracker
Source code
This latest release features enhancements and updates shaped by your ideas and support. A huge thank you to all of our community members and customers reporting bugs, submitting patches and getting involved in testing - THANK YOU!
FAQ
Q: How does this integrate into Proxmox Virtual Environment and Proxmox Backup Server?
A: You can add arbitrary Proxmox hosts or clusters as remotes. Proxmox Datacenter Manager will then monitor them and provide basic management using only the API.
Q: How many different Proxmox VE hosts and/or clusters can I manage with a single Datacenter Manager instance?
A: Due to the early stage of development, there are still some pain points, but we are confident that we will be able to handle large setups with a moderate amount of resources. We have run tests with over 5000 remotes and over 10000 virtual guests to confirm the performance expectations of our new UI framework. We are targeting similar numbers for the backend.
Q: What Proxmox VE and Proxmox Backup Server versions are supported?
A: The minimum required Proxmox VE version is 8.4 and the minimum required Proxmox Backup Server version is 3.4.
We will support all actively supported Proxmox project releases, but encourage frequent upgrades of both PDM and the PVE and PBS remotes to leverage all features.
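To verify that an existing remote meets these minimums, you can check directly on the respective node (a quick sketch; output formats may differ between versions):
pveversion -v
dpkg -s proxmox-backup-server | grep Version
Run the first command on a Proxmox VE node and the second on a Proxmox Backup Server node.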
Q: Can I upgrade a beta installation to the stable 1.0 via apt?
A: Yes, upgrading from the beta is possible via apt and the GUI. We recommend using the pdm-enterprise repository on upgrade for the most stable experience.
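On the Proxmox Datacenter Manager host itself, the upgrade then follows the usual apt workflow (a minimal sketch, assuming the pdm-enterprise repository is already configured):
apt update
apt full-upgrade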
Q: Can I upgrade Proxmox Datacenter Manager Alpha to this 1.0 version?
A: Yes, please follow the upgrade instructions on
https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Upgrade_from_Alpha_to_1
Q: Can I install Proxmox Datacenter Manager alongside Proxmox VE or Proxmox Backup Server?
A: Yes, but installing alongside other Proxmox projects is not the recommended setup (expert use only).
Q: What environments does Proxmox Datacenter Manager support?
A: Proxmox Datacenter Manager will work everywhere a standard x86-64/AMD64 Debian system is supported.
Q: Are there any recommended system requirements for the Proxmox Datacenter Manager?
A: Yes, see
https://pdm.proxmox.com/docs/installation.html#system-requirements
Q: What network setups are supported between Proxmox Datacenter Manager and remotes?
A: In general, the Proxmox Datacenter Manager needs to be able to connect to all Proxmox VE remotes directly to send API requests and query load and usage metrics. Remotes, on the other hand, do not need to be able to connect to the Datacenter Manager directly. Reverse proxies between Proxmox Datacenter Manager and any of its Proxmox VE remotes are not supported; we recommend using tunneling (for example, WireGuard or OpenVPN) for hosts that must not be exposed directly to a non-private network.
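A quick way to confirm that the Datacenter Manager host can reach a remote's API directly is a plain HTTPS request against the standard API port (the hostname is a placeholder; -k skips certificate verification for self-signed certificates, and even an authentication error in the response proves network reachability):
curl -k https://pve-remote.example.com:8006/api2/json/version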
Q: Where can I get more information about feature updates?
A: Check the
roadmap
,
forum
, the
mailing list
, and/or subscribe to our
newsletter.
Best regards,
Thomas
Do you already have a Commercial Support Subscription? - If not, Buy now and read the documentation.
- Nov 26, 2025
- Parsed from source: Nov 26, 2025
- Detected by Releasebot: Dec 6, 2025
Proxmox Backup Server 4.1 released!
Proxmox Backup Server 4.1 arrives with Debian 13.2, Linux 6.17.2 and ZFS 2.3.4, plus new traffic control, configurable verify parallelism and S3 rate limiting. Expect stronger performance, stability and enterprise storage enhancements for streamlined backups.
Proxmox Backup Server 4.1 release notes
We're pleased to announce the release of Proxmox Backup Server 4.1.
This version is based on Debian 13.2 (“Trixie”), uses Linux kernel 6.17.2-1 as the new stable default, and comes with ZFS 2.3.4 for reliable, enterprise-grade storage and improved hardware support.
Highlights
- User-based traffic control for more fine-grained bandwidth management across backup and restore operations
- Configurable parallelism for verify jobs to optimize runtimes and balance I/O and CPU usage
- Rate limiting for S3 endpoints (technology preview) to keep S3-based backup and restore traffic from congesting shared networks
- Numerous performance, stability, and usability improvements across the stack
You can find all details in the full release notes, and as always, we’re really looking forward to your feedback and experiences with Proxmox Backup Server 4.1!
Release notes
https://pbs.proxmox.com/wiki/Roadmap#Proxmox_Backup_Server_4.1
Press release
https://www.proxmox.com/en/about/company-details/press-releases/proxmox-backup-server-4-1
Download
https://www.proxmox.com/en/downloads
Alternate ISO download:
https://enterprise.proxmox.com/iso
Documentation
Community Forum
Bugtracker
Source code
Thanks for your contributions, ideas, and support — with your insights, we’ve introduced meaningful updates to enhance your experience.
FAQ
Q: Can I upgrade the latest Proxmox Backup Server 3.x to 4.1 with apt?
A: Yes, please follow the upgrade instructions on https://pbs.proxmox.com/wiki/index.php/Upgrade_from_3_to_4
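In broad strokes, the upgrade consists of switching the package repositories from Debian Bookworm to Trixie as described in that article and then running a full distribution upgrade (a compressed sketch only; the linked wiki page is the authoritative procedure and includes additional checks):
apt update
apt dist-upgrade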
Q: How does this integrate into Proxmox Virtual Environment?
A: Just add a Proxmox Backup Server datastore as a new storage target in your Proxmox VE. Make sure that you run the latest Proxmox VE 9.x.
Q: Is Proxmox Backup Server still compatible with older clients or Proxmox VE releases?
A: We are actively testing the compatibility of all the major versions currently supported, including the previous one. This means that you can safely back up from Proxmox VE 8 to Proxmox Backup Server 4, or from Proxmox VE 9 to Proxmox Backup Server 3. However, full compatibility with major client versions that are two or more releases apart, for example Proxmox VE 7 based on Debian 11 Bullseye and Proxmox Backup Server 4 based on Debian 13 Trixie, is supported on a best-effort basis only.
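To check which client version a particular host is running, the client provides a version subcommand (a quick sketch):
proxmox-backup-client version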
Q: How long will Proxmox Backup Server 3.4 receive bug fixes and security support?
A: Proxmox Backup Server 3.4 will receive security updates and critical bug fixes until August 2026. This support window provides an overlap of approximately one year after the release of Proxmox Backup Server 4, giving users ample time to plan their upgrade to the new major version.
For more information on the support lifecycle of Proxmox Backup Server releases, please visit:
https://pbs.proxmox.com/docs/faq.html#how-long-will-my-proxmox-backup-server-version-be-supported
Q: How do I install the proxmox-backup-client on my Debian or Ubuntu server?
A: We provide a "Proxmox Backup Client-only Repository". See https://pbs.proxmox.com/docs/installation.html#client-installation
For Debian derivatives we recommend installing the proxmox-backup-client-static package to avoid issues with different system library versions.
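A short sketch of installing the static client and running a first backup (the repository user, host, and datastore names are placeholders, not values from this release):
apt install proxmox-backup-client-static
proxmox-backup-client backup etc.pxar:/etc --repository backup-user@pbs@pbs.example.com:datastore1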
Q: What will happen with the existing backup tool (vzdump) in Proxmox Virtual Environment?
A: You can still use vzdump. The new backup is an additional, but very powerful, way to back up and restore your VMs and containers.
Q: Is there any recommended server hardware for the Proxmox Backup Server?
A: We recommend enterprise-grade server hardware components with fast local SSD/NVMe storage. Access and response times from rotating drives will slow down all backup server operations. See https://pbs.proxmox.com/docs/installation.html#recommended-server-system-requirements
Q: Can I install Proxmox Backup Server on Debian, in a VM, as an LXC container, or alongside Proxmox VE?
A: Yes, but none of these is the recommended setup (expert use only).
Q: Where can I get more information about upcoming features?
A: Follow the announcement forum and the pbs-devel mailing list https://lists.proxmox.com/cgi-bin/mailman/listinfo/pbs-devel, subscribe to our newsletter https://www.proxmox.com/news, and see https://pbs.proxmox.com/wiki/index.php/Roadmap.
Best regards,
Thomas
Do you already have a Commercial Support Subscription? - If not, Buy now and read the documentation.
- Nov 19, 2025
- Parsed from source: Nov 19, 2025
- Detected by Releasebot: Dec 6, 2025
Proxmox VE 9.1
Proxmox VE 9.1 ships with kernel 6.17 as the default and updated QEMU, LXC, and more. Highlights include creating LXC containers from OCI images, TPM support in qcow2, deeper SDN status in the GUI, and broad virtualization and storage enhancements.
Based on Debian Trixie (13.2)
Latest 6.17.2-1 Kernel as new stable default
QEMU 10.1.2
LXC 6.0.5
ZFS 2.3.4
Ceph Squid 19.2.3
Highlights
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in the qcow2 format.
This allows taking snapshots of VMs with a TPM state on file-level storages such as NFS or CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
A new vCPU flag allows to enable nested virtualization on top of a vCPU type that corresponds to the host CPU vendor and generation.
This can be an alternative to using the full
host
vCPU type.
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guests.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Kernel 6.17 as new stable default.
Seamless upgrade from Proxmox VE 8.4, see
Upgrade from 8 to 9.
Changelog Overview
Before upgrading, please consider
Known Issues & Breaking Changes.
Enhancements in the web interface (GUI)
Allow to initiate bulk actions directly from the Tag View.
The Tag View provides a convenient overview over virtual guests, grouped according to their assigned tags.
Right-clicking on a tag in the resource tree now allows to conveniently initiate bulk actions for virtual guests with that tag.
Right-clicking on a VM in the resource tree now allows to reset the VM (issue 4248).
Improvements to the new mobile web interface introduced in Proxmox VE 9.0:
Support login using an OpenID Connect (OIDC) realm.
The VM hardware and option panels now show pending changes.
Allow to edit VM options directly from the mobile web interface.
Improve detection of mobile Firefox (issue 6657).
Fix an issue where the consent banner was not rendered as Markdown.
Move the global search bar to the middle of the screen and adapt to the screen size for improved visibility.
Use icons that are more suitable for high-resolution display (issue 6599).
Increase the thresholds for warning about high memory usage for the cluster, node, and guest summary pages.
Fix an issue where no login dialog would be shown after a session has expired.
Fix an issue where resource pool members would not be displayed correctly after adding or removing a member (issue 6385).
Fix performance issues where the GUI would be slowed down in setups with many guests.
The datacenter, node, and guest summary pages now show two separate selectors for graph timeframe and aggregation type.
Previously, both settings were combined into one dropdown box.
The journal viewer now keeps the horizontal scrolling position even when refreshing the journal (issue 6679).
Fix an issue where APT changelogs or package descriptions with non-ASCII characters would be displayed incorrectly.
Clarify that the graphs in the storage summary show usage in bytes.
Fix an issue where the first guest would not update its status in the resource tree for five minutes after creation.
Allow filtering for name, node or VMID when adding guests to a resource pool.
Fix an issue where certain setting dialogs, available only to highly-privileged users by default, were susceptible to XSS (PSA-2025-00013-1).
Fix an issue where the task description of some task types would be incomplete.
Fix a regression where the resource tree tags would not have the correct color (issue 6815).
Add some missing links to the online documentation (issue 6443).
Updated translations, among others:
Czech
French
Georgian
German
Italian
Japanese
Korean
Polish
Spanish
Swedish
Traditional Chinese
Ukrainian
Virtual machines (KVM/QEMU)
New QEMU version 10.1.2:
See the
upstream changelog
for details.
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in a non-raw format such as qcow2, and is then provided to QEMU using the QEMU storage daemon.
This allows taking snapshots of VMs with a TPM state on file-level storages such as directory, NFS and CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
The
host
CPU type makes all CPU flags visible within the guest, which may not be desirable in all setups.
As an alternative, a new vCPU flag
nested-virt
allows to specifically enable nested virtualization.
The flag has to be enabled on a vCPU type corresponding to the host CPU vendor and generation, enabling it on a generic
x86-64-v*
vCPU type is not sufficient.
nested-virt
automatically resolves to the vendor-specific virtualization flag.
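A minimal sketch of how this could look on the CLI, assuming the new flag is set through the existing flags property of the cpu option (VMID 100 and the Cascadelake-Server vCPU type are placeholders; pick a type matching your host's vendor and generation):
qm set 100 --cpu 'Cascadelake-Server,flags=+nested-virt'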
Add initial support for Intel Trust Domain Extensions (TDX).
On supported platforms, such as specific recent Intel CPUs, TDX can isolate guest memory from the host.
TDX also requires support in the guest. Windows guests currently do not support TDX.
Initial support for enabling TDX attestation is also available.
Some features like live migration are unsupported.
With Intel TDX and AMD SEV, Proxmox VE now provides initial integration of all major vendor-specific confidential computing technologies.
Allow disabling Kernel Samepage Merging (KSM) for specific VMs (issue 5291).
KSM can optimize memory usage in setups that run many similar VMs, but in some setups, it may be required to disable KSM.
Instead of completely disabling KSM, it is now possible to disable KSM only for specific VMs.
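For comparison, completely disabling KSM on a node has so far typically meant stopping the ksmtuned service host-wide (a sketch of the old, node-wide approach, not the new per-VM option):
systemctl disable --now ksmtuned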
Ensure newly created EFI disks contain new Microsoft UEFI CA 2023 keys to avoid guest boot failures with Secure Boot (issue 6985).
Newer Windows ISOs will only be signed by this new CA and thus require the keys to be present.
Existing EFI disks are currently not updated automatically to avoid issues with existing Windows installations, and need to be updated manually.
PCI passthrough: Improve compatibility with devices that are already bound to the correct VFIO driver.
By default, PCI devices are bound to the
vfio-pci
driver and reset.
Some devices, such as certain NICs or GPUs, may support targeted VFIO drivers that provide more features.
The new
driver
option can be configured to keep the binding to the targeted VFIO drivers.
This option is currently only available via API and CLI.
Pressing escape during early boot of an OVMF VM will now lead to a boot device menu rather than the "EFI Firmware Setup" menu.
The reason is that choosing a different boot device is the most common use case.
The "EFI Firmware Setup" can still be selected in the boot entry menu and reached from there.
The
qm import
CLI command now supports the
import-from
parameter.
Enable Joliet for cloud-init disks to avoid issues with the nocloud ISO format on Windows guests (issue 6989).
Remove notice marking AMD SEV-ES and SEV-SNP as highly experimental.
Avoid applying pending changes when resuming from hibernation (issue 6934).
Fix a short-lived regression where aarch64 VMs would not start with machine type 10.1 or higher (issue 7014).
Disable High Precision Event Timer (HPET) for recent Linux VMs and machine version 10.1 to avoid increased CPU usage.
Avoid failing live migration of certain VMs due to different MTU settings on the source and target node.
Allow setting the
guest-phys-bits
CPU option, which may be required for PCI(e) passthrough on certain Intel CPUs (issue 6378).
Allow setting the
aw-bits
machine option to work around issues when using the Intel vIOMMU driver on newer machine versions (issue 6608).
Remote migration: Increase timeout for volume activation to avoid migration failures (issue 6828).
Ensure correct ordering of machine versions (issue 6648).
Fix an issue where a migration with conntrack state could fail due to unfortunate timing.
Fix an issue where FreeBSD guests could get stuck when canceling a SCSI request when using VirtIO SCSI (issue 6810).
Backport a fix for an issue where enabling the VNC clipboard for a Windows guest could cause the mouse pointer to get stuck.
Avoid blocking the QEMU guest agent if an
fsfreeze
command never returns.
Such situations can arise if an in-guest shutdown is triggered after initiating the freeze, but before it is finished.
This is implemented by querying the completion status multiple times instead of blocking the socket for up to one hour.
Integrate fixes for Spectre branch target injection ("VMScape", see PSA-2025-00016-1).
Fix an issue where the Disk IO graphs would show spikes if the QEMU monitor was unavailable when collecting statistics (issue 6207).
Fix an issue where a VM with an unavailable QEMU monitor would slow down statistics collection.
Fix a regression where VM template backups with QEMU machine version 10 and IDE/SATA with a read-only volume could fail (issue 6675).
Fix an issue where SCSI device passthrough would not work with QEMU machine version 10 (issue 6680).
Fix an issue where the CPU could not be edited if properties with dashes had been added manually to the VM CPU config.
Avoid a "timeout waiting on systemd" error after a live migration.
Avoid logging spurious QEMU monitor errors.
Containers (LXC)
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
The entrypoint for containers can now be modified in the container options.
Customization of environment variables.
The container options now allow to set environment variables that are passed to the container's entrypoint.
This is especially useful for application containers that can be configured using environment variables.
Host-managed DHCP for containers without a network stack.
For system containers, DHCP management is delegated to the network management stack shipped by the container.
However, application containers may not ship a network management stack.
Enabling host-managed DHCP allows such containers to acquire DHCP leases via the DHCP tooling on the host.
Host-managed DHCP is automatically enabled for application containers created from OCI images.
New LXC version 6.0.5:
See the
upstream changelog
for details.
Improve compatibility with guest distributions:
Add support for openSUSE Leap 16.0 (issue 6731).
Support Ubuntu 25.10 and 26.04 LTS releases, and warn instead of erroring out if the release is unknown.
Warn instead of erroring out if the Debian version in a container is higher than the known maximum.
Fix compatibility issues with Debian 13 point releases.
Fix DHCP issues in Debian 13 containers by actively installing
isc-dhcp-client
and
ifupdown2
.
Disable
systemd-networkd-wait-online.service
in Debian 13 containers to prevent spurious error messages and long delays during startup.
Improve compatibility with AlmaLinux 10 and CentOS 10 containers.
Lift restrictions on /proc and /sys if nesting is enabled to avoid issues in certain nested setups (issue 7006).
Regenerate snakeoil SSH certificates and keypairs shipped in Debian-based containers (PSA-2025-00017-1).
Show dynamically assigned IP addresses in the Network tab.
Document that systemd inside a container requires nesting to be enabled (issue 6897).
Enable Router Advertisements if the container is configured with DHCPv6 (issue 5538).
General improvements for virtual guests
Support for node-independent bulk actions.
Bulk actions already allow to conveniently start, shut down, suspend, or migrate many virtual guests located on a specific node.
In addition, it is now possible to initiate bulk actions on the datacenter level.
The new datacenter-level bulk actions are also accessible by right-clicking on the datacenter in the resource tree.
Fix an issue where the API would not generate PNGs for RRD graphs.
Fix an overly strict permission check for NICs without a bridge.
HA Manager
Allow adding VMs and containers as HA resources after creation or restore.
The HA state is automatically set to
started
if "Start after creation"/"Start after restore" is enabled.
When deleting a resource, also optionally purge it by updating all rules containing the resource (issue 6613).
Significantly reduce the time to compute the currently used resources for Cluster Resource Scheduling (CRS) with the static load scheduler (technology preview).
Adapt the
pve-ha-simulator
package to the changes added to the HA Manager for PVE 9.0.
Fix a few issues due to missing imports and newly added files (issue 6839).
Add support for configuring resources for the static-load scheduler.
Fix an issue where disabled HA rules could not be re-enabled in the GUI.
Fix an issue where HA rules could not be edited in the GUI.
Support parsing version strings of nodes running legacy versions of Proxmox VE (< 8.0) in mixed clusters.
Fix a warning due to a wrong emptiness check for blocking resources (issue 6881).
Fix issues that would incorrectly mark rules as unsatisfiable, resulting from the false assumption to also consider ignored resources.
Apply negative affinity constraints before positive ones, in order not to limit the number of available nodes unnecessarily.
Remove the computed and obsolete
group
field from the service status data.
Improved management for Proxmox VE clusters
Improve reliability of opening shells for
root@pam
to other cluster nodes (issue 6789).
Improvements to the
/cluster/metrics/export
API endpoint for pull-style metric collection:
Use credentials from original request for proxying.
Increase timeout for collecting metrics in a cluster.
ACME Account view: Improve handling of providers that do not return an e-mail field, such as Let's Encrypt.
Fix a short-lived regression that caused an error when listing ACME plugins (issue 6932).
API viewer: Add description panel to return section (issue 6830).
Improvements to the extended metrics collection introduced in PVE 9.0, based on the feedback from the community:
On upgraded nodes, data is now transparently fetched from RRD files in the old format.
This results in more robust and simpler code and also provides data for longer time windows.
Migrating RRD data when upgrading from Proxmox VE 8.4 is not necessary anymore.
Fix two glitches with the resolution calculation for the longer, aggregated timeframes.
Fix a timing issue when reading from RRD files before the metrics are written to them (issue 6615).
Backup/Restore
File restore from container backups on a Proxmox Backup Server instance: Add support for symlinks when downloading a directory as a ZIP archive (issue 4995).
Improvements to the backup provider API:
Fix an issue where backups with fleecing would fail for VMs with a TPM state (issue 6882).
Storage
Improvements to snapshots as volume chains (technology preview):
Disallow disabling volume-chain snapshots if a qcow2 image exists, as this can have unintended consequences.
Fix an issue where taking a snapshot would fail after a disk move (issue 6713).
Fix an issue where cloning a VM from a snapshot would fail.
Fix cluster-locking behavior when performing snapshot operations on a shared LVM storage.
As "snapshot as volume chains" requires machine version 10 or higher, fail early when attempting to start a VM with a lower machine version.
Improvements to the LVM-thick plugin:
Use the more performant
blkdiscard
instead of
cstream
for wiping removed volumes when possible.
Fix an issue where to-be-removed volumes would not be activated before attempting to wipe the volume (issue 6941).
LVM-thin plugin: Avoid LVM warning when creating a template.
iSCSI plugin: Add initial support for portals that return hostnames instead of IP addresses during discovery.
ZFS plugin: Avoid overly strict error detection when deleting a volume (issue 6845).
Improvements to the ESXi import:
Fix an issue where live import from an ESXi storage would fail for QEMU machine version 10.
Ensure the FUSE process is cleaned up after removing an ESXi storage (issue 6073).
Ceph
Mapping volumes of Windows VMs with KRBD now sets the
rxbounce
map option (issue 5779).
This fixes an issue where Windows VMs with disks on an RBD storage with KRBD enabled would cause warnings and degraded performance on the host.
Simplify Ceph installation on air-gapped clusters by allowing to choose a 'manual' repository option in the wizard and
pveceph install
(issue 5244).
The usage data printed by
pveceph pool
commands now also mentions the unit.
Fix a short-lived issue where OSDs newly created under Proxmox VE 9.0 would not become active automatically after reboot (issue 6652).
Fix a regression where creating OSDs with a separate DB/WAL device would fail (issue 6747).
Access control
Allow to delete the comment of an API token by setting it to an empty value (issue 6890).
Fix an issue where virtual NIC settings would display bridges that cannot actually be used due to insufficient permissions.
Add an API endpoint for verifying token-owned VNC authentication tickets to be used by termproxy.
Firewall & Software Defined Networking
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guest NICs.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Allow interfaces starting with
nic
and
if
as bridge ports (issue 6806).
Improvements to the nftables firewall (technology preview):
Fix atomicity when updating ipsets, to ensure consistent filtering.
Move the conntrack statement to the forward chain so it gets properly re-created on flush (issue 6831).
Create chains in the host table of a VNet firewall only if the host firewall is enabled.
When creating firewall entries, only use the IP address of a container instead of the whole subnet.
Fix up the ipfilter matching logic in the firewall (issue 6336).
Add support for legacy ipsets and alias names in the firewall (issue 6107).
Support overlapping ipsets via the auto-merge flag.
Merge the "management" and "local_network" ipsets.
Correctly document that NDP is actually enabled by default.
Fix an issue where the GUI would not allow to remove the fabric property from an EVPN controller or VXLAN zone.
Correctly set the maximum for ASN numbers for controllers.
Add better documentation for SDN controller, VNet and zone endpoints and their returned properties.
Add fabrics status to SDN status function, allowing the web UI to display it.
Make the
rollback
and
lock
endpoints return null in accordance with the specified return type.
Add descriptions to fields in the "rule" return type.
Document return type of the firewall's
list_rules
endpoint.
Print a task warning when reloading the FRR configuration fails.
Improved management of Proxmox VE nodes
Renew node certificate also if the current certificate is expired (issue 6779).
This avoids issues in case a node had been shut down for a long time.
Improved
pvereport
to provide a better status overview:
Add replication job configuration.
Fix a few issues due to missing imports and newly added files (issue 6839).
Provide additional context if initialization fails (issue 6434).
Pretty-print corosync error codes in the logs to facilitate troubleshooting.
Fix an issue where the information about installed package versions would not be broadcast to the cluster in some situations (issue 5894).
Installation ISO
Allow pinning the names of network interfaces in the GUI, TUI, and auto installer.
When a kernel version changes, the interface name may change due to new (PCIe) features being picked up. This can lead to broken network configurations.
By pinning the network interfaces, the names will not change between kernel versions.
Including this functionality in the installer ensures that a node will not change its interface names across updates.
Enable
import
content type by default on storages set up by the installer.
Setup directory for the rrdcached database to avoid a race condition.
Mark
/etc/pve
immutable to avoid clobbering it when it is not mounted.
Set the
compatibility_level
of postfix to
3.6
to avoid a warning.
Remove a warning by setting the timezone before configuring postfix.
Do not create deprecated
/etc/timezone
.
Add an option to
proxmox-auto-install-assistant
to verify the hashed root password.
Do not select a Debian mirror and use the CDN for Debian repositories instead.
Notable changes
Kernel 6.17 is reported to fix a memory leak issue when using Intel NICs with the
ice
driver and MTU 9000,
see here for more details.
Known Issues & Breaking Changes
NVIDIA vGPU Compatibility
If you are using NVIDIA's GRID/vGPU technology, its driver must be compatible with the kernel you are using. As of November 2025, NVIDIA's vGPU host drivers are not compatible with kernel 6.17.
To avoid a failing update and keep vGPU support working, you have two options:
Postpone the update until kernel 6.17 is supported
Prevent installing the kernel headers for 6.17 and pin your kernel to 6.14
This can be done by manually installing the kernel-specific version and uninstalling the proxmox-default-headers package:
apt install proxmox-headers-6.14
apt remove proxmox-default-headers
After that, make sure your boot loader loads the 6.14 kernel by default, e.g. by using
proxmox-boot-tool
to pin the kernel.
proxmox-boot-tool kernel pin
You can list the exact available versions with
proxmox-boot-tool kernel list
. See the
admin guide
for more details.
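For example, assuming 6.14.11-4-pve is the 6.14 kernel currently installed (check with proxmox-boot-tool kernel list), the pin command would be:
proxmox-boot-tool kernel pin 6.14.11-4-pve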
Make sure you update the pin when the 6.14 kernel gets an update, or remove it when NVIDIA's driver is compatible with kernel 6.17. For the list of tested versions see
NVIDIA vGPU on Proxmox VE
.
Potential issues booting into kernel 6.17 on some Dell PowerEdge servers
Some users have reported failure to boot into kernel 6.17 and machine check errors on certain Dell PowerEdge servers, while kernel 6.14 boots successfully. It is reported that enabling SR-IOV Global and I/OAT DMA in the firmware helps. See
this forum thread
.
If enabling SR-IOV Global and I/OAT DMA does not resolve the issue, we recommend pinning the 6.14 kernel:
proxmox-boot-tool kernel pin 6.14.11-4-pve
You can list the exact available versions with
proxmox-boot-tool kernel list
. See the
admin guide
for more details.
Compatibility issues of LINSTOR/DRBD and kernel 6.17
Users reported that currently, the DRBD kernel module is not yet compatible with kernel 6.17 and building the module for 6.17 via DKMS will fail, see
the forum for more details
.
Until a fix is available, a workaround is to manually install and pin a 6.14 kernel.
- Nov 19, 2025
- Parsed from source: Nov 19, 2025
- Detected by Releasebot: Dec 6, 2025
Proxmox VE 9.1
Proxmox VE 9.1 ships with kernel 6.17 as the default and enables creating LXC containers from OCI images, including initial application container support. TPM state in qcow2 and nested virtualization controls expand VM capabilities, while GUI and SDN updates improve visibility and bulk actions.
Based on Debian Trixie (13.2)
Latest 6.17.2-1 Kernel as new stable default
QEMU 10.1.2
LXC 6.0.5
ZFS 2.3.4
Ceph Squid 19.2.3
Highlights
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in the qcow2 format.
This allows taking snapshots of VMs with a TPM state on file-level storages such as NFS or CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
A new vCPU flag allows to enable nested virtualization on top of a vCPU type that corresponds to the host CPU vendor and generation.
This can be an alternative to using the full
host
vCPU type.
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guests.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Kernel 6.17 as new stable default.
Seamless upgrade from Proxmox VE 8.4, see
Upgrade from 8 to 9.
Changelog Overview
Before upgrading, please consider
Known Issues & Breaking Changes.
Enhancements in the web interface (GUI)
Allow to initiate bulk actions directly from the Tag View.
The Tag View provides a convenient overview over virtual guests, grouped according to their assigned tags.
Right-clicking on a tag in the resource tree now allows to conveniently initiate bulk actions for virtual guests with that tag.
Right-clicking on a VM in the resource tree now allows to reset the VM (issue 4248).
Improvements to the new mobile web interface introduced in Proxmox VE 9.0:
Support login using an OpenID Connect (OIDC) realm.
The VM hardware and option panels now show pending changes.
Allow to edit VM options directly from the mobile web interface.
Improve detection of mobile Firefox (issue 6657).
Fix an issue where the consent banner was not rendered as Markdown.
Move the global search bar to the middle of the screen and adapt to the screen size for improved visibility.
Use icons that are more suitable for high-resolution display (issue 6599).
Increase the thresholds for warning about high memory usage for the cluster, node, and guest summary pages.
Fix an issue where no login dialog would be shown after a session has expired.
Fix an issue where resource pool members would not be displayed correctly after adding or removing a member (issue 6385).
Fix performance issues where the GUI would be slowed down in setups with many guests.
The datacenter, node, and guest summary pages now show two separate selectors for graph timeframe and aggregation type.
Previously, both settings were combined into one dropdown box.
The journal viewer now keeps the horizontal scrolling position even when refreshing the journal (issue 6679).
Fix an issue where APT changelogs or package descriptions with non-ASCII characters would be displayed incorrectly.
Clarify that the graphs in the storage summary show usage in bytes.
Fix an issue where the first guest would not update its status in the resource tree for five minutes after creation.
Allow filtering for name, node or VMID when adding guests to a resource pool.
Fix an issue where certain setting dialogs, available only to highly-privileged users by default, were susceptible to XSS (PSA-2025-00013-1).
Fix an issue where the task description of some task types would be incomplete.
Fix a regression where the resource tree tags would not have the correct color (issue 6815).
Add some missing links to the online documentation (issue 6443).
Updated translations, among others:
Czech
French
Georgian
German
Italian
Japanese
Korean
Polish
Spanish
Swedish
Traditional Chinese
Ukrainian
Virtual machines (KVM/QEMU)
New QEMU version 10.1.2:
See the
upstream changelog
for details.
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in a non-raw format such as qcow2, and is then provided to QEMU using the QEMU storage daemon.
This allows taking snapshots of VMs with a TPM state on file-level storages such as directory, NFS and CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
The
host
CPU type makes all CPU flags visible within the guest, which may not be desirable in all setups.
As an alternative, a new vCPU flag
nested-virt
allows to specifically enable nested virtualization.
The flag has to be enabled on a vCPU type corresponding to the host CPU vendor and generation, enabling it on a generic
x86-64-v*
vCPU type is not sufficient.
nested-virt
automatically resolves to the vendor-specific virtualization flag.
Add initial support for Intel Trust Domain Extensions (TDX).
On supported platforms, such as specific recent Intel CPUs, TDX can isolate guest memory from the host.
TDX also requires support in the guest. Windows guests currently do not support TDX.
Initial support for enabling TDX attestation is also available.
Some features like live migration are unsupported.
With Intel TDX and AMD SEV, Proxmox VE now provides initial integration of all major vendor-specific confidential computing technologies.
Allow disabling Kernel Samepage Merging (KSM) for specific VMs (issue 5291).
KSM can optimize memory usage in setups that run many similar VMs, but in some setups, it may be required to disable KSM.
Instead of completely disabling KSM, it is now possible to disable KSM only for specific VMs.
Ensure newly created EFI disks contain new Microsoft UEFI CA 2023 keys to avoid guest boot failures with Secure Boot (issue 6985).
Newer Windows ISOs will only be signed by this new CA and thus require the keys to be present.
Existing EFI disks are currently not updated automatically to avoid issues with existing Windows installations, and need to be updated manually.
PCI passthrough: Improve compatibility with devices that are already bound to the correct VFIO driver.
By default, PCI devices are bound to the
vfio-pci
driver and reset.
Some devices, such as certain NICs or GPUs, may support targeted VFIO drivers that provide more features.
The new
driver
option can be configured to keep the binding to the targeted VFIO drivers.
This option is currently only available via API and CLI.
Pressing escape during early boot of an OVMF VM will now lead to a boot device menu rather than the "EFI Firmware Setup" menu.
The reason is that choosing a different boot device is the most common use case.
The "EFI Firmware Setup" can still be selected in the boot entry menu and reached from there.
The
qm import
CLI command now supports the
import-from
parameter.
Enable Joliet for cloud-init disks to avoid issues with the nocloud ISO format on Windows guests (issue 6989).
Remove notice marking AMD SEV-ES and SEV-SNP as highly experimental.
Avoid applying pending changes when resuming from hibernation (issue 6934).
Fix a short-lived regression where aarch64 VMs would not start with machine type 10.1 or higher (issue 7014).
Disable High Precision Event Timer (HPET) for recent Linux VMs and machine version 10.1 to avoid increased CPU usage.
Avoid failing live migration of certain VMs due to different MTU settings on the source and target node.
Allow setting the
guest-phys-bits
CPU option, which may be required for PCI(e) passthrough on certain Intel CPUs (issue 6378).
Allow setting the
aw-bits
machine option to work around issues when using the Intel vIOMMU driver on newer machine versions (issue 6608).
Remote migration: Increase timeout for volume activation to avoid migration failures (issue 6828).
Ensure correct ordering of machine versions (issue 6648).
Fix an issue where a migration with conntrack state could fail due to unfortunate timing.
Fix an issue where FreeBSD guests could get stuck when canceling a SCSI request when using VirtIO SCSI (issue 6810).
Backport a fix for an issue where enabling the VNC clipboard for a Windows guest could cause the mouse pointer to get stuck.
Avoid blocking the QEMU guest agent if an
fsfreeze
command never returns.
Such situations can arise if an in-guest shutdown is triggered after initiating the freeze, but before it is finished.
This is implemented by querying the completion status multiple times instead of blocking the socket for up to one hour.
Integrate fixes for Spectre branch target injection ("VMScape", see PSA-2025-00016-1).
Fix an issue where the Disk IO graphs would show spikes if the QEMU monitor was unavailable when collecting statistics (issue 6207).
Fix an issue where a VM with an unavailable QEMU monitor would slow down statistics collection.
Fix a regression where VM template backups with QEMU machine version 10 and IDE/SATA with a read-only volume could fail (issue 6675).
Fix an issue where SCSI device passthrough would not work with QEMU machine version 10 (issue 6680).
Fix an issue where the CPU could not be edited if properties with dashes had been added manually to the VM CPU config.
Avoid a "timeout waiting on systemd" error after a live migration.
Avoid logging spurious QEMU monitor errors.
Containers (LXC)
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
The entrypoint for containers can now be modified in the container options.
Customization of environment variables.
The container options now allow to set environment variables that are passed to the container's entrypoint.
This is especially useful for application containers that can be configured using environment variables.
Host-managed DHCP for containers without a network stack.
For system containers, DHCP management is delegated to the network management stack shipped by the container.
However, application containers may not ship a network management stack.
Enabling host-managed DHCP allows such containers to acquire DHCP leases via the DHCP tooling on the host.
Host-managed DHCP is automatically enabled for application containers created from OCI images.
New LXC version 6.0.5:
See the
upstream changelog
for details.
Improve compatibility with guest distributions:
Add support for openSUSE Leap 16.0 (issue 6731).
Support Ubuntu 25.10 and 26.04 LTS releases, and warn instead of erroring out if the release is unknown.
Warn instead of erroring out if the Debian version in a container is higher than the known maximum.
Fix compatibility issues with Debian 13 point releases.
Fix DHCP issues in Debian 13 containers by actively installing
isc-dhcp-client
and
ifupdown2
.
Disable
systemd-networkd-wait-online.service
in Debian 13 containers to prevent spurious error messages and long delays during startup.
Improve compatibility with AlmaLinux 10 and CentOS 10 containers.
Lift restrictions on /proc and /sys if nesting is enabled to avoid issues in certain nested setups (issue 7006).
Regenerate snakeoil SSH certificates and keypairs shipped in Debian-based containers (PSA-2025-00017-1).
Show dynamically assigned IP addresses in the Network tab.
Document that systemd inside a container requires nesting to be enabled (issue 6897).
Enable Router Advertisements if the container is configured with DHCPv6 (issue 5538).
General improvements for virtual guests
Support for node-independent bulk actions.
Bulk actions already allow to conveniently start, shut down, suspend, or migrate many virtual guests located on a specific node.
In addition, it is now possible to initiate bulk actions on the datacenter level.
The new datacenter-level bulk actions are also accessible by right-clicking on the datacenter in the resource tree.
Fix an issue where the API would not generate PNGs for RRD graphs.
Fix an overly strict permission check for NICs without a bridge.
HA Manager
Allow adding VMs and containers as HA resources after creation or restore.
The HA state is automatically set to
started
if "Start after creation"/"Start after restore" is enabled.
When deleting a resource, also optionally purge it by updating all rules containing the resource (issue 6613).
Significantly reduce the time to compute the currently used resources for Cluster Resource Scheduling (CRS) with the static load scheduler (technology preview).
Adapt the
pve-ha-simulator
package to the changes added to the HA Manager for PVE 9.0.
Fix a few issues due to missing imports and newly added files (issue 6839).
Add support for configuring resources for the static-load scheduler.
Fix an issue where disabled HA rules could not be re-enabled in the GUI.
Fix an issue where HA rules could not be edited in the GUI.
Support parsing version strings of nodes running legacy versions of Proxmox VE (< 8.0) in mixed clusters.
Fix a warning due to a wrong emptiness check for blocking resources (issue 6881).
Fix issues that would incorrectly mark rules as unsatisfiable, resulting from the false assumption to also consider ignored resources.
Apply negative affinity constraints before positive ones, in order not to limit the number of available nodes unnecessarily.
Remove the computed and obsolete
group
field from the service status data.
Improved management for Proxmox VE clusters
Improve reliability of opening shells for
root@pam
to other cluster nodes (issue 6789).
Improvements to the
/cluster/metrics/export
API endpoint for pull-style metric collection:
Use credentials from original request for proxying.
Increase timeout for collecting metrics in a cluster.
ACME Account view: Improve handling of providers that do not return an e-mail field, such as Let's Encrypt.
Fix a short-lived regression that caused an error when listing ACME plugins (issue 6932).
API viewer: Add description panel to return section (issue 6830).
Improvements to the extended metrics collection introduced in PVE 9.0, based on the feedback from the community:
On upgraded nodes, data is now transparently fetched from RRD files in the old format.
This results in more robust and simpler code and also provides data for longer time windows.
Migrating RRD data when upgrading from Proxmox VE 8.4 is not necessary anymore.
Fix two glitches with the resolution calculation for the longer, aggregated timeframes.
Fix a timing issue when reading from RRD files before the metrics are written to them (issue 6615).
Backup/Restore
File restore from container backups on a Proxmox Backup Server instance: Add support for symlinks when downloading a directory as a ZIP archive (issue 4995).
Improvements to the backup provider API:
Fix an issue where backups with fleecing would fail for VMs with a TPM state (issue 6882).
Storage
Improvements to snapshots as volume chains (technology preview):
Disallow disabling volume-chain snapshots if a qcow2 image exists, as this can have unintended consequences.
Fix an issue where taking a snapshot would fail after a disk move (issue 6713).
Fix an issue where cloning a VM from a snapshot would fail.
Fix cluster-locking behavior when performing snapshot operations on a shared LVM storage.
As "snapshot as volume chains" requires machine version 10 or higher, fail early when attempting to start a VM with a lower machine version.
Improvements to the LVM-thick plugin:
Use the more performant
blkdiscard
instead of
cstream
for wiping removed volumes when possible.
Fix an issue where to-be-removed volumes would not be activated before attempting to wipe the volume (issue 6941).
LVM-thin plugin: Avoid LVM warning when creating a template.
iSCSI plugin: Add initial support for portals that return hostnames instead of IP addresses during discovery.
ZFS plugin: Avoid overly strict error detection when deleting a volume (issue 6845).
Improvements to the ESXi import:
Fix an issue where live import from an ESXi storage would fail for QEMU machine version 10.
Ensure the FUSE process is cleaned up after removing an ESXi storage (issue 6073).
Ceph
Mapping volumes of Windows VMs with KRBD now sets the
rxbounce
map option (issue 5779).
This fixes an issue where Windows VMs with disks on an RBD storage with KRBD enabled would cause warnings and degraded performance on the host.
Simplify Ceph installation on air-gapped clusters by allowing to choose a 'manual' repository option in the wizard and
pveceph install
(issue 5244).
The usage data printed by
pveceph pool
commands now also mentions the unit.
Fix a short-lived issue where OSDs newly created under Proxmox VE 9.0 would not become active automatically after reboot (issue 6652).
Fix a regression where creating OSDs with a separate DB/WAL device would fail (issue 6747).
Access control
Allow to delete the comment of an API token by setting it to an empty value (issue 6890).
Fix an issue where virtual NIC settings would display bridges that cannot actually be used due to insufficient permissions.
Add an API endpoint for verifying token-owned VNC authentication tickets to be used by termproxy.
Firewall & Software Defined Networking
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guest NICs.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Allow interfaces starting with
nic
and
if
as bridge ports (issue 6806).
Improvements to the nftables firewall (technology preview):
Fix atomicity when updating ipsets, to ensure consistent filtering.
Move the conntrack statement to the forward chain so it gets properly re-created on flush (issue 6831).
Create chains in the host table of a VNet firewall only if the host firewall is enabled.
When creating firewall entries, only use the IP address of a container instead of the whole subnet.
Fix up the ipfilter matching logic in the firewall (issue 6336).
Add support for legacy ipsets and alias names in the firewall (issue 6107).
Support overlapping ipsets via the auto-merge flag.
Merge the "management" and "local_network" ipsets.
Correctly document that NDP is actually enabled by default.
Fix an issue where the GUI would not allow to remove the fabric property from an EVPN controller or VXLAN zone.
Correctly set the maximum for ASN numbers for controllers.
Add better documentation for SDN controller, VNet and zone endpoints and their returned properties.
Add fabrics status to SDN status function, allowing the web UI to display it.
Make the
rollback
and
lock
endpoints return null in accordance with the specified return type.
Add descriptions to fields in the "rule" return type.
Document return type of the firewall's
list_rules
endpoint.
Print a task warning when reloading the FRR configuration fails.
Improved management of Proxmox VE nodes
Renew node certificate also if the current certificate is expired (issue 6779).
This avoids issues in case a node had been shut down for a long time.
Improved
pvereport
to provide a better status overview:
Add replication job configuration.
Improvements to the
pve8to9
upgrade tool:
Remove obsolete checks and warnings concerning RRD migration.
LVM: Detect a manual
thin_check_options
override that could cause problems with thin pool activation after upgrade.
Check that grub is set up to update removable grub installations on ESP.
Detect problematic situations if the
systemd-boot
metapackage is installed, as this can cause problems on upgrade.
Increase severity of failing boot-related checks.
Fix false positives when checking unified cgroup v2 support for containers.
Clarify informational and warning messages.
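As with earlier major-version checkers, the checklist script can be run ahead of the upgrade to surface these situations (assuming the same invocation style as previous pveXtoY tools):
pve8to9 --full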
Fix a buffer overflow issue in vncterm/spiceterm handling of ANSI escape sequences (PSA-2025-00018-1).
vncterm/spiceterm are spawned when a user with sufficient privileges initiates a VNC or SPICE session, for accessing a node or container console via the GUI.
Fix an issue where CLI command completion would print a "Can't use an undefined value" error (issue 6762).
Improve robustness of alternative names support for network interfaces introduced in Proxmox VE 9.0.
Fix a regression that caused an error if a VLAN interface was defined on top of a bond member.
Lower the timeout of retrieving disk SMART data to 10 seconds, in order to avoid blocking the GUI (issue 6224).
Add timestamps to debug logs of pveproxy and pvedaemon to facilitate troubleshooting.
Updates to packages rebuilt from Debian GNU/Linux upstream versions:
systemd is rebuilt to work around issues in current upstream handling of systemd-boot.
The package version was increased to match upstream.
rrdtool is rebuilt to provide a native systemd unit file.
The ListenStream directive was updated to use /run as the directory for the socket, which avoids a deprecation warning.
Installation ISO
Allow pinning the names of network interfaces in the GUI, TUI and auto installer.
When a kernel version changes, the interface name may change due to new (PCIe) features being picked up. This can lead to broken network configurations.
By pinning the network interfaces, the names will not change between kernel versions.
Including this functionality in the installer ensures that a node will not change its interface names across updates.
Enable the import content type by default on storages set up by the installer.
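For an existing storage, roughly the same effect can be achieved by adding import to its content types, for example (the storage name and the rest of the content list are only illustrative, keep whatever the storage already serves):
# add 'import' to the storage's content types
pvesm set local --content iso,vztmpl,backup,import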
Set up the directory for the rrdcached database to avoid a race condition.
Mark /etc/pve immutable to avoid clobbering it when it is not mounted.
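Assuming the standard filesystem immutable attribute is what is used here, the equivalent manual steps on an existing installation would look roughly like this (only meaningful while the pmxcfs cluster filesystem is not mounted on /etc/pve):
# check whether the immutable flag is set on the underlying directory
lsattr -d /etc/pve
# set it manually
chattr +i /etc/pve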
Set the compatibility_level of postfix to 3.6 to avoid a warning.
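On an installation set up before this change, the equivalent manual step would be roughly:
# persist the compatibility level in main.cf and reload postfix
postconf -e 'compatibility_level = 3.6'
systemctl reload postfix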
Remove a warning by setting the timezone before configuring postfix.
Do not create the deprecated /etc/timezone file.
Add an option to proxmox-auto-install-assistant to verify the hashed root password.
Do not select a Debian mirror; use the CDN for Debian repositories instead.
Notable changes
Kernel 6.17 is reported to fix a memory leak issue when using Intel NICs with the ice driver and MTU 9000, see here for more details.
Known Issues & Breaking Changes
NVIDIA vGPU Compatibility
If you are using NVIDIA's GRID/vGPU technology, its driver must be compatible with the kernel you are using. As of November 2025, NVIDIA's vGPU host drivers are not compatible with kernel 6.17.
To avoid a failing update and to keep vGPU support working, you have two options:
Postpone the update until kernel 6.17 is supported
Prevent installing the kernel headers for 6.17 and pin your kernel to 6.14
This can be done by manually installing the kernel-specific headers package and uninstalling the proxmox-default-headers package:
apt install proxmox-headers-6.14
apt remove proxmox-default-headers
After that, make sure your boot loader loads the 6.14 kernel by default, e.g. by using proxmox-boot-tool to pin the kernel.
proxmox-boot-tool kernel pin
You can list the exact available versions with proxmox-boot-tool kernel list. See the admin guide for more details.
Make sure you update the pin when the 6.14 kernel gets an update, or remove it once NVIDIA's driver is compatible with kernel 6.17. For the list of tested versions, see NVIDIA vGPU on Proxmox VE.
Potential issues booting into kernel 6.17 on some Dell PowerEdge servers
Some users have reported failure to boot into kernel 6.17 and machine check errors on certain Dell PowerEdge servers, while kernel 6.14 boots successfully. It is reported that enabling SR-IOV Global and I/OAT DMA in the firmware helps. See this forum thread.
If enabling SR-IOV Global and I/OAT DMA does not resolve the issue, we recommend pinning the 6.14 kernel:
proxmox-boot-tool kernel pin 6.14.11-4-pve
You can list the exact available versions with proxmox-boot-tool kernel list. See the admin guide for more details.
Compatibility issues of LINSTOR/DRBD and kernel 6.17
Users reported that the DRBD kernel module is not yet compatible with kernel 6.17 and that building the module for 6.17 via DKMS will fail; see the forum for more details.
Until a fix is available, a workaround is to manually install and pin a 6.14 kernel.
- Parsed from source:Nov 19, 2025
- Detected by Releasebot:Dec 6, 2025
Proxmox VE 9.1
Proxmox VE ships with kernel 6.17 as the new stable default and updated virtualization stack. It adds OCI image templates for LXC containers, TPM state in qcow2, nested virtualization controls and Intel TDX, plus notable GUI and API improvements.
Based on Debian Trixie (13.2)
Latest 6.17.2-1 Kernel as new stable default
QEMU 10.1.2
LXC 6.0.5
ZFS 2.3.4
Ceph Squid 19.2.3Highlights
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in the qcow2 format.
This allows taking snapshots of VMs with a TPM state on file-level storages such as NFS or CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
A new vCPU flag allows to enable nested virtualization on top of a vCPU type that corresponds to the host CPU vendor and generation.
This can be an alternative to using the full
host
vCPU type.
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guests.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Kernel 6.17 as new stable default.
Seamless upgrade from Proxmox VE 8.4, see
Upgrade from 8 to 9
.Changelog Overview
Before upgrading, please consider
Known Issues & Breaking Changes
.Enhancements in the web interface (GUI)
Allow to initiate bulk actions directly from the Tag View.
The Tag View provides a convenient overview over virtual guests, grouped according to their assigned tags.
Right-clicking on a tag in the resource tree now allows to conveniently initiate bulk actions for virtual guests with that tag.
Right-clicking on a VM in the resource tree now allows to reset the VM (
issue 4248
).
Improvements to the new mobile web interface introduced in Proxmox VE 9.0:
Support login using an OpenID Connect (OIDC) realm.
The VM hardware and option panels now show pending changes.
Allow to edit VM options directly from the mobile web interface.
Improve detection of mobile Firefox (
issue 6657
).
Fix an issue where the consent banner was not rendered as Markdown.
Move the global search bar to the middle of the screen and adapt to the screen size for improved visibility.
Use icons that are more suitable for high-resolution display (
issue 6599
).
Increase the thresholds for warning about high memory usage for the cluster, node, and guest summary pages.
Fix an issue where no login dialog would be shown after a session has expired.
Fix an issue where resource pool members would not be displayed correctly after adding or removing a member (
issue 6385
).
Fix performance issues where the GUI would be slowed down in setups with many guests.
The datacenter, node, and guest summary pages now show two separate selectors for graph timeframe and aggregation type.
Previously, both settings were combined into one dropdown box.
The journal viewer now keeps the horizontal scrolling position even when refreshing the journal (
issue 6679
).
Fix an issue where APT changelogs or package descriptions with non-ASCII characters would be displayed incorrectly.
Clarify that the graphs in the storage summary show usage in bytes.
Fix an issue where the first guest would not update its status in the resource tree for five minutes after creation.
Allow filtering for name, node or VMID when adding guests to a resource pool.
Fix an issue where certain setting dialogs, available only to highly-privileged users by default, were susceptible to XSS (
PSA-2025-00013-1
).
Fix an issue where the task description of some task types would be incomplete.
Fix a regression where the resource tree tags would not have the correct color (
issue 6815
).
Add some missing links to the online documentation (
issue 6443
).
Updated translations, among others:
Czech
French
Georgian
German
Italian
Japanese
Korean
Polish
Spanish
Swedish
Traditional Chinese
UkrainianVirtual machines (KVM/QEMU)
New QEMU version 10.1.2:
See the
upstream changelog
for details.
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in a non-raw format such as qcow2, and is then provided to QEMU using the QEMU storage daemon.
This allows taking snapshots of VMs with a TPM state on file-level storages such as directory, NFS and CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
The
host
CPU type makes all CPU flags visible within the guest, which may not be desirable in all setups.
As an alternative, a new vCPU flag
nested-virt
allows to specifically enable nested virtualization.
The flag has to be enabled on a vCPU type corresponding to the host CPU vendor and generation; enabling it on a generic
x86-64-v*
vCPU type is not sufficient.
nested-virt
automatically resolves to the vendor-specific virtualization flag.
Add initial support for Intel Trust Domain Extensions (TDX).
On supported platforms, such as specific recent Intel CPUs, TDX can isolate guest memory from the host.
TDX also requires support in the guest. Windows guests currently do not support TDX.
Initial support for enabling TDX attestation is also available.
Some features like live migration are unsupported.
With Intel TDX and AMD SEV, Proxmox VE now provides initial integration of all major vendor-specific confidential computing technologies.
Allow disabling Kernel Samepage Merging (KSM) for specific VMs (
issue 5291
).
KSM can optimize memory usage in setups that run many similar VMs, but in some setups, it may be required to disable KSM.
Instead of completely disabling KSM, it is now possible to disable KSM only for specific VMs.
Pressing escape during early boot of an OVMF VM will now lead to a boot device menu rather than the "EFI Firmware Setup" menu.
The reason is that choosing a different boot device is the most common use case.
The "EFI Firmware Setup" can still be selected in the boot entry menu and reached from there.
The
qm import
CLI command now supports the
import-from
parameter.
Enable Joliet for cloud-init disks to avoid issues with the nocloud ISO format on Windows guests (
issue 6989
).
Remove notice marking AMD SEV-ES and SEV-SNP as highly experimental.
PCI passthrough: Improve compatibility with devices that are already bound to the correct VFIO driver.
By default, PCI devices are bound to the
vfio-pci
driver and reset.
Some devices, such as certain NICs or GPUs, may support targeted VFIO drivers that provide more features.
The new
driver
option can be configured to keep the binding to the targeted VFIO drivers.
This option is currently only available via API and CLI.
- Nov 19, 2025
- Parsed from source:Nov 19, 2025
- Detected by Releasebot:Dec 6, 2025
Proxmox VE 9.1
Proxmox VE release upgrade brings Kernel 6.17 as the stable default with QEMU 10.1.2 and LXC 6.0.5 plus ZFS 2.3.4 and Ceph; adds OCI-based LXC templates, TPM support in qcow2, nested virtualization controls, and enhanced SDN/GUIs for deeper visibility.
Based on Debian Trixie (13.2)
Latest 6.17.2-1 Kernel as new stable default
QEMU 10.1.2
LXC 6.0.5
ZFS 2.3.4
Ceph Squid 19.2.3
Highlights
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in the qcow2 format.
This allows taking snapshots of VMs with a TPM state on file-level storages such as NFS or CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
A new vCPU flag allows to enable nested virtualization on top of a vCPU type that corresponds to the host CPU vendor and generation.
This can be an alternative to using the full
host
vCPU type.
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guests.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Kernel 6.17 as new stable default.
Seamless upgrade from Proxmox VE 8.4, see Upgrade from 8 to 9.
Changelog Overview
Before upgrading, please consider Known Issues & Breaking Changes.
Enhancements in the web interface (GUI)
Allow to initiate bulk actions directly from the Tag View.
The Tag View provides a convenient overview over virtual guests, grouped according to their assigned tags.
Right-clicking on a tag in the resource tree now allows to conveniently initiate bulk actions for virtual guests with that tag.
Right-clicking on a VM in the resource tree now allows to reset the VM (
issue 4248
).
Improvements to the new mobile web interface introduced in Proxmox VE 9.0:
Support login using an OpenID Connect (OIDC) realm.
The VM hardware and option panels now show pending changes.
Allow to edit VM options directly from the mobile web interface.
Improve detection of mobile Firefox (
issue 6657
).
Fix an issue where the consent banner was not rendered as Markdown.
Move the global search bar to the middle of the screen and adapt to the screen size for improved visibility.
Use icons that are more suitable for high-resolution display (
issue 6599
).
Increase the thresholds for warning about high memory usage for the cluster, node, and guest summary pages.
Fix an issue where no login dialog would be shown after a session has expired.
Fix an issue where resource pool members would not be displayed correctly after adding or removing a member (
issue 6385
).
Fix performance issues where the GUI would be slowed down in setups with many guests.
The datacenter, node, and guest summary pages now show two separate selectors for graph timeframe and aggregation type.
Previously, both settings were combined into one dropdown box.
The journal viewer now keeps the horizontal scrolling position even when refreshing the journal (
issue 6679
).
Fix an issue where APT changelogs or package descriptions with non-ASCII characters would be displayed incorrectly.
Clarify that the graphs in the storage summary show usage in bytes.
Fix an issue where the first guest would not update its status in the resource tree for five minutes after creation.
Allow filtering for name, node or VMID when adding guests to a resource pool.
Fix an issue where certain setting dialogs, available only to highly-privileged users by default, were susceptible to XSS (
PSA-2025-00013-1
).
Fix an issue where the task description of some task types would be incomplete.
Fix a regression where the resource tree tags would not have the correct color (
issue 6815
).
Add some missing links to the online documentation (
issue 6443
).
Updated translations, among others:
Czech
French
Georgian
German
Italian
Japanese
Korean
Polish
Spanish
Swedish
Traditional Chinese
Ukrainian
Virtual machines (KVM/QEMU)
New QEMU version 10.1.2:
See the
upstream changelog
for details.
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in a non-raw format such as qcow2, and is then provided to QEMU using the QEMU storage daemon.
This allows taking snapshots of VMs with a TPM state on file-level storages such as directory, NFS and CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
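As a rough, non-authoritative sketch, attaching a TPM state volume to an existing VM on a file-level storage could look like the command below; the VMID 100 and the storage name local are placeholders, and whether the state actually ends up in qcow2 depends on the storage and its configuration:
# Placeholder example: add a v2.0 TPM state volume for VM 100 on the file-level storage "local"
qm set 100 --tpmstate0 local:1,version=v2.0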
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
The
host
CPU type makes all CPU flags visible within the guest, which may not be desirable in all setups.
As an alternative, a new vCPU flag
nested-virt
allows to specifically enable nested virtualization.
The flag has to be enabled on a vCPU type corresponding to the host CPU vendor and generation; enabling it on a generic
x86-64-v*
vCPU type is not sufficient.
nested-virt
automatically resolves to the vendor-specific virtualization flag.
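As a hedged illustration only (the exact property syntax below is an assumption, and EPYC-Rome merely stands in for a model matching your host's vendor and generation), enabling the flag from the CLI could look roughly like this:
# Assumed syntax: vendor/generation-specific vCPU type with the new nested-virt flag enabled
qm set 100 --cpu 'EPYC-Rome,flags=+nested-virt'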
Add initial support for Intel Trust Domain Extensions (TDX).
On supported platforms, such as specific recent Intel CPUs, TDX can isolate guest memory from the host.
TDX also requires support in the guest. Windows guests currently do not support TDX.
Initial support for enabling TDX attestation is also available.
Some features like live migration are unsupported.
With Intel TDX and AMD SEV, Proxmox VE now provides initial integration of all major vendor-specific confidential computing technologies.
Allow disabling Kernel Samepage Merging (KSM) for specific VMs (
issue 5291
).
KSM can optimize memory usage in setups that run many similar VMs, but in some setups, it may be required to disable KSM.
Instead of completely disabling KSM, it is now possible to disable KSM only for specific VMs.
Ensure newly created EFI disks contain new Microsoft UEFI CA 2023 keys to avoid guest boot failures with Secure Boot (
issue 6985
).
Newer Windows ISOs will only be signed by this new CA and thus require the keys to be present.
Existing EFI disks are currently not updated automatically to avoid issues with existing Windows installations, and need to be updated manually.
PCI passthrough: Improve compatibility with devices that are already bound to the correct VFIO driver.
By default, PCI devices are bound to the
vfio-pci
driver and reset.
Some devices, such as certain NICs or GPUs, may support targeted VFIO drivers that provide more features.
The new
driver
option can be configured to keep the binding to the targeted VFIO drivers.
This option is currently only available via API and CLI.
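For illustration, and with the caveat that only the option name driver comes from these notes while the device address and value below are placeholders, keeping a device bound to its targeted VFIO driver might be configured roughly like this:
# Placeholder sketch: keep the existing targeted VFIO driver binding for a passed-through device
qm set 100 --hostpci0 '0000:01:00.0,driver=<targeted-vfio-driver>'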
Pressing escape during early boot of an OVMF VM will now lead to a boot device menu rather than the "EFI Firmware Setup" menu.
The reason is that choosing a different boot device is the most common use case.
The "EFI Firmware Setup" can still be selected in the boot entry menu and reached from there.
The
qm import
CLI command now supports the
import-from
parameter.
Enable Joliet for cloud-init disks to avoid issues with the nocloud ISO format on Windows guests (
issue 6989
).
Remove notice marking AMD SEV-ES and SEV-SNP as highly experimental.
Avoid applying pending changes when resuming from hibernation (
issue 6934
).
Fix a short-lived regression where aarch64 VMs would not start with machine type 10.1 or higher (
issue 7014
).
Disable High Precision Event Timer (HPET) for recent Linux VMs and machine version 10.1 to avoid increased CPU usage.
Avoid failing live migration of certain VMs due to different MTU settings on the source and target node.
Allow setting the
guest-phys-bits
CPU option, which may be required for PCI(e) passthrough on certain Intel CPUs (
issue 6378
).
Allow setting the
aw-bits
machine option to work around issues when using the Intel vIOMMU driver on newer machine versions (
issue 6608
).
Remote migration: Increase timeout for volume activation to avoid migration failures (
issue 6828
).
Ensure correct ordering of machine versions (
issue 6648
).
Fix an issue where a migration with conntrack state could fail due to unfortunate timing.
Fix an issue where FreeBSD guests could get stuck when canceling a SCSI request when using VirtIO SCSI (
issue 6810
).
Backport a fix for an issue where enabling the VNC clipboard for a Windows guest could cause the mouse pointer to get stuck.
Avoid blocking the QEMU guest agent if an
fsfreeze
command never returns.
Such situations can arise if an in-guest shutdown is triggered after initiating the freeze, but before it is finished.
This is implemented by querying the completion status multiple times instead of blocking the socket for up to one hour.
Integrate fixes for Spectre branch target injection ("VMScape", see
PSA-2025-00016-1
).
Fix an issue where the Disk IO graphs would show spikes if the QEMU monitor was unavailable when collecting statistics (
issue 6207
).
Fix an issue where a VM with an unavailable QEMU monitor would slow down statistics collection.
Fix a regression where VM template backups with QEMU machine version 10 and IDE/SATA with a read-only volume could fail (
issue 6675
).
Fix an issue where SCSI device passthrough would not work with QEMU machine version 10 (
issue 6680
).
Fix an issue where the CPU could not be edited if properties with dashes had been added manually to the VM CPU config.
Avoid a "timeout waiting on systemd" error after a live migration.
Avoid logging spurious QEMU monitor errors.
Containers (LXC)
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
The entrypoint for containers can now be modified in the container options.
Customization of environment variables.
The container options now allow to set environment variables that are passed to the container's entrypoint.
This is especially useful for application containers that can be configured using environment variables.
Host-managed DHCP for containers without a network stack.
For system containers, DHCP management is delegated to the network management stack shipped by the container.
However, application containers may not ship a network management stack.
Enabling host-managed DHCP allows such containers to acquire DHCP leases via the DHCP tooling on the host.
Host-managed DHCP is automatically enabled for application containers created from OCI images.
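The following is only a hypothetical sketch of creating a container from an OCI image that has already been added to a storage as a container template; the VMID, storage and volume names are placeholders and are not taken from these notes:
# Hypothetical: create an application container from an OCI image stored as a template on "local"
pct create 200 local:vztmpl/nginx-latest-oci.tar --hostname web01 --memory 512 --net0 name=eth0,bridge=vmbr0,ip=dhcp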
New LXC version 6.0.5:
See the
upstream changelog
for details.
Improve compatibility with guest distributions:
Add support for openSUSE Leap 16.0 (
issue 6731
).
Support Ubuntu 25.10 and 26.04 LTS releases, and warn instead of erroring out if the release is unknown.
Warn instead of erroring out if the Debian version in a container is higher than the known maximum.
Fix compatibility issues with Debian 13 point releases.
Fix DHCP issues in Debian 13 containers by actively installing
isc-dhcp-client
and
ifupdown2
.
Disable
systemd-networkd-wait-online.service
in Debian 13 containers to prevent spurious error messages and long delays during startup.
Improve compatibility with AlmaLinux 10 and CentOS 10 containers.
Lift restrictions on /proc and /sys if nesting is enabled to avoid issues in certain nested setups (
issue 7006
).
Regenerate snakeoil SSH certificates and keypairs shipped in Debian-based containers (
PSA-2025-00017-1
).
Show dynamically assigned IP addresses in the Network tab.
Document that systemd inside a container requires nesting to be enabled (
issue 6897
).
Enable Router Advertisements if the container is configured with DHCPv6 (
issue 5538
).
General improvements for virtual guests
Support for node-independent bulk actions.
Bulk actions already allow to conveniently start, shut down, suspend, or migrate many virtual guests located on a specific node.
In addition, it is now possible to initiate bulk actions on the datacenter level.
The new datacenter-level bulk actions are also accessible by right-clicking on the datacenter in the resource tree.
Fix an issue where the API would not generate PNGs for RRD graphs.
Fix an overly strict permission check for NICs without a bridge.
HA Manager
Allow adding VMs and containers as HA resources after creation or restore.
The HA state is automatically set to
started
if "Start after creation"/"Start after restore" is enabled.
When deleting a resource, also optionally purge it by updating all rules containing the resource (
issue 6613
).
Significantly reduce the time to compute the currently used resources for Cluster Resource Scheduling (CRS) with the static load scheduler (technology preview).
Adapt the
pve-ha-simulator
package to the changes added to the HA Manager for PVE 9.0.
Fix a few issues due to missing imports and newly added files (
issue 6839
).
Add support for configuring resources for the static-load scheduler.
Fix an issue where disabled HA rules could not be re-enabled in the GUI.
Fix an issue where HA rules could not be edited in the GUI.
Support parsing version strings of nodes running legacy versions of Proxmox VE (< 8.0) in mixed clusters.
Fix a warning due to a wrong emptiness check for blocking resources (
issue 6881
).
Fix issues that would incorrectly mark rules as unsatisfiable because ignored resources were wrongly taken into account.
Apply negative affinity constraints before positive ones, in order not to limit the number of available nodes unnecessarily.
Remove the computed and obsolete
group
field from the service status data.
Improved management for Proxmox VE clusters
Improve reliability of opening shells for
root@pam
to other cluster nodes (
issue 6789
).
Improvements to the
/cluster/metrics/export
API endpoint for pull-style metric collection:
Use credentials from original request for proxying.
Increase timeout for collecting metrics in a cluster.
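As a non-authoritative example of pull-style collection against this endpoint, a request with an API token could look as follows; the host name, realm, user and token are placeholders:
# Placeholder example: pull cluster metrics with an API token
curl -k -H 'Authorization: PVEAPIToken=monitor@pve!metrics=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
  'https://pve1.example.com:8006/api2/json/cluster/metrics/export'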
ACME Account view: Improve handling of providers that do not return an e-mail field, such as Let's Encrypt.
Fix a short-lived regression that caused an error when listing ACME plugins (
issue 6932
).
API viewer: Add description panel to return section (
issue 6830
).
Improvements to the extended metrics collection introduced in PVE 9.0, based on the feedback from the community:
On upgraded nodes, data is now transparently fetched from RRD files in the old format.
This results in more robust and simpler code and also provides data for longer time windows.
Migrating RRD data when upgrading from Proxmox VE 8.4 is not necessary anymore.
Fix two glitches with the resolution calculation for the longer, aggregated timeframes.
Fix a timing issue when reading from RRD files before the metrics are written to them (
issue 6615
).
Backup/Restore
File restore from container backups on a Proxmox Backup Server instance: Add support for symlinks when downloading a directory as a ZIP archive (
issue 4995
).
Improvements to the backup provider API:
Fix an issue where backups with fleecing would fail for VMs with a TPM state (
issue 6882
).
Storage
Improvements to snapshots as volume chains (technology preview):
Disallow disabling volume-chain snapshots if a qcow2 image exists, as this can have unintended consequences.
Fix an issue where taking a snapshot would fail after a disk move (
issue 6713
).
Fix an issue where cloning a VM from a snapshot would fail.
Fix cluster-locking behavior when performing snapshot operations on a shared LVM storage.
As "snapshot as volume chains" requires machine version 10 or higher, fail early when attempting to start a VM with a lower machine version.
Improvements to the LVM-thick plugin:
Use the more performant
blkdiscard
instead of
cstream
for wiping removed volumes when possible.
Fix an issue where to-be-removed volumes would not be activated before attempting to wipe the volume (
issue 6941
).
LVM-thin plugin: Avoid LVM warning when creating a template.
iSCSI plugin: Add initial support for portals that return hostnames instead of IP addresses during discovery.
ZFS plugin: Avoid overly strict error detection when deleting a volume (
issue 6845
).
Improvements to the ESXi import:
Fix an issue where live import from an ESXi storage would fail for QEMU machine version 10.
Ensure the FUSE process is cleaned up after removing an ESXi storage (
issue 6073
).
Ceph
Mapping volumes of Windows VMs with KRBD now sets the
rxbounce
map option (
issue 5779
).
This fixes an issue where Windows VMs with disks on an RBD storage with KRBD enabled would cause warnings and degraded performance on the host.
Simplify Ceph installation on air-gapped clusters by allowing to choose a 'manual' repository option in the wizard and
pveceph install
(
issue 5244
).
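As a sketch only, and assuming the repository selection reuses the existing --repository switch of pveceph install (the value 'manual' is taken from the note above but the exact spelling is not otherwise verified), an air-gapped installation could be started like this:
# Assumed invocation: install Ceph packages while pointing pveceph at a manually managed repository
pveceph install --repository manual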
The usage data printed by
pveceph pool
commands now also mentions the unit.
Fix a short-lived issue where OSDs newly created under Proxmox VE 9.0 would not become active automatically after reboot (
issue 6652
).
Fix a regression where creating OSDs with a separate DB/WAL device would fail (
issue 6747
).
Access control
Allow to delete the comment of an API token by setting it to an empty value (
issue 6890
).
Fix an issue where virtual NIC settings would display bridges that cannot actually be used due to insufficient permissions.
Add an API endpoint for verifying token-owned VNC authentication tickets to be used by termproxy.
Firewall & Software Defined Networking
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guest NICs.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Allow interfaces starting with
nic
and
if
as bridge ports (
issue 6806
).
Improvements to the nftables firewall (technology preview):
Fix atomicity when updating ipsets, to ensure consistent filtering.
Move the conntrack statement to the forward chain so it gets properly re-created on flush (
issue 6831
).
Create chains in the host table of a VNet firewall only if the host firewall is enabled.
When creating firewall entries, only use the IP address of a container instead of the whole subnet.
Fix up the ipfilter matching logic in the firewall (
issue 6336
).
Add support for legacy ipsets and alias names in the firewall (
issue 6107
).
Support overlapping ipsets via the auto-merge flag.
Merge the "management" and "local_network" ipsets.
Correctly document that NDP is actually enabled by default.
Fix an issue where the GUI would not allow to remove the fabric property from an EVPN controller or VXLAN zone.
Correctly set the maximum for ASN numbers for controllers.
Add better documentation for SDN controller, VNet and zone endpoints and their returned properties.
Add fabrics status to SDN status function, allowing the web UI to display it.
Make the
rollback
and
lock
endpoints return null in accordance with the specified return type.
Add descriptions to fields in the "rule" return type.
Document return type of the firewall's
list_rules
endpoint.
Print a task warning when reloading the FRR configuration fails.
Improved management of Proxmox VE nodes
Renew node certificate also if the current certificate is expired (
issue 6779
).
This avoids issues in case a node had been shut down for a long time.
Improved
pvereport
to provide a better status overview:
Add replication job configuration.
Improvements to the
pve8to9
upgrade tool:
Remove obsolete checks and warnings concerning RRD migration.
LVM: Detect a manual
thin_check_options
override that could cause problems with thin pool activation after upgrade.
Check that grub is set up to update removable grub installations on ESP.
Detect problematic situations if the
systemd-boot
metapackage is installed, as this can cause problems on upgrade.
Increase severity of failing boot-related checks.
Fix false positives when checking unified cgroup v2 support for containers.
Clarify informational and warning messages.
Fix a buffer overflow issue in vncterm/spiceterm handling of ANSI escape sequences (
PSA-2025-00018-1
).
vncterm/spiceterm are spawned when a user with sufficient privileges initiates a VNC or SPICE session, for accessing a node or container console via the GUI.
Fix an issue where CLI command completion would print a "Can't use an undefined value" error (
issue 6762
).
Improve robustness of alternative names support for network interfaces introduced in Proxmox VE 9.0.
Fix a regression that caused an error if a VLAN interface was defined on top of a bond member.
Lower the timeout of retrieving disk SMART data to 10 seconds, in order to avoid blocking the GUI (
issue 6224
).
Add timestamps to debug logs of
pveproxy
and
pvedaemon
to facilitate troubleshooting.
Updates to packages rebuilt from Debian GNU/Linux upstream versions:
systemd
is rebuilt to work around issues in current upstream handling of
systemd-boot
.
The version got increased to match upstream.
rrdtool
is rebuilt to provide a native systemd unit file.
The
ListenStream
directive was updated to not print a deprecation warning by using
/run
as directory for the socket.
Installation ISO
Allow pinning the names of network interfaces in the GUI, TUI and auto installer.
When a kernel version changes, the interface name may change due to new (PCIe) features being picked up. This can lead to broken network configurations.
By pinning the network interfaces, the names will not change between kernel versions.
Including this functionality in the installer ensures that a node will not change its interface names across updates.
Enable
import
content type by default on storages set up by the installer.
Set up the directory for the rrdcached database to avoid a race condition.
Mark
/etc/pve
immutable to avoid clobbering it when it is not mounted.
Set the
compatibility_level
of postfix to
3.6
to avoid a warning.
Remove a warning by setting the timezone before configuring postfix.
Do not create deprecated
/etc/timezone
.
Add an option to
proxmox-auto-install-assistant
to verify the hashed root password.
Do not select a Debian mirror and use the CDN for Debian repositories instead.
Notable changes
Kernel 6.17 is reported to fix a memory leak issue when using Intel NICs with the
ice
driver and MTU 9000,
see here for more details.
Known Issues & Breaking Changes
NVIDIA vGPU Compatibility
If you are using NVIDIA's GRID/vGPU technology, its driver must be compatible with the kernel you are using. As of November 2025, NVIDIA's vGPU host drivers are not compatible with kernel 6.17.
To avoid a failing update and to keep vGPU support working, you have two options:
Postpone the update until kernel 6.17 is supported
Prevent installing the kernel headers for 6.17 and pin your kernel to 6.14
This can be done by manually installing the kernel-specific header package and uninstalling the proxmox-default-headers package:
apt install proxmox-headers-6.14
apt remove proxmox-default-headers
After that, make sure your boot loader loads the 6.14 kernel by default, e.g. by using
proxmox-boot-tool
to pin the kernel.
proxmox-boot-tool kernel pin
You can list the exact available versions with
proxmox-boot-tool kernel list
. See the
admin guide
for more details.
Make sure you update the pin when the 6.14 kernel gets an update, or remove it when NVIDIA's driver is compatible with kernel 6.17. For the list of tested versions see
NVIDIA vGPU on Proxmox VE
.
Potential issues booting into kernel 6.17 on some Dell PowerEdge servers
Some users have reported failure to boot into kernel 6.17 and machine check errors on certain Dell PowerEdge servers, while kernel 6.14 boots successfully. It is reported that enabling SR-IOV Global and I/OAT DMA in the firmware helps. See
this forum thread
.
If enabling SR-IOV Global and I/OAT DMA does not resolve the issue, we recommend pinning the 6.14 kernel:
proxmox-boot-tool kernel pin 6.14.11-4-pve
You can list the exact available versions with
proxmox-boot-tool kernel list
. See the
admin guide
for more details.
Compatibility issues of LINSTOR/DRBD and kernel 6.17
Users reported that currently, the DRBD kernel module is not yet compatible with kernel 6.17 and building the module for 6.17 via DKMS will fail, see
the forum for more details
.
Until a fix is available, a workaround is to manually install and pin a 6.14 kernel.
- Nov 19, 2025
- Parsed from source:Nov 19, 2025
- Detected by Releasebot:Dec 6, 2025
Proxmox VE 9.1
Proxmox VE 9.1 ships with Kernel 6.17 as the default, adds OCI-based LXC containers created from images, and TPM support in qcow2. It enhances nested virtualization controls, SDN visibility, and the GUI, and broadens VM and container compatibility for a smoother upgrade.
Based on Debian Trixie (13.2)
Latest 6.17.2-1 Kernel as new stable default
QEMU 10.1.2
LXC 6.0.5
ZFS 2.3.4
Ceph Squid 19.2.3
Highlights
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in the qcow2 format.
This allows taking snapshots of VMs with a TPM state on file-level storages such as NFS or CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
A new vCPU flag allows to enable nested virtualization on top of a vCPU type that corresponds to the host CPU vendor and generation.
This can be an alternative to using the full
host
vCPU type.
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guests.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Kernel 6.17 as new stable default.
Seamless upgrade from Proxmox VE 8.4, see Upgrade from 8 to 9.
Changelog Overview
Before upgrading, please consider Known Issues & Breaking Changes.
Enhancements in the web interface (GUI)
Allow to initiate bulk actions directly from the Tag View.
The Tag View provides a convenient overview over virtual guests, grouped according to their assigned tags.
Right-clicking on a tag in the resource tree now allows to conveniently initiate bulk actions for virtual guests with that tag.
Right-clicking on a VM in the resource tree now allows to reset the VM (issue 4248).
Improvements to the new mobile web interface introduced in Proxmox VE 9.0:
Support login using an OpenID Connect (OIDC) realm.
The VM hardware and option panels now show pending changes.
Allow to edit VM options directly from the mobile web interface.
Improve detection of mobile Firefox (issue 6657).
Fix an issue where the consent banner was not rendered as Markdown.
Move the global search bar to the middle of the screen and adapt to the screen size for improved visibility.
Use icons that are more suitable for high-resolution display (issue 6599).
Increase the thresholds for warning about high memory usage for the cluster, node, and guest summary pages.
Fix an issue where no login dialog would be shown after a session has expired.
Fix an issue where resource pool members would not be displayed correctly after adding or removing a member (issue 6385).
Fix performance issues where the GUI would be slowed down in setups with many guests.
The datacenter, node, and guest summary pages now show two separate selectors for graph timeframe and aggregation type.
Previously, both settings were combined into one dropdown box.
The journal viewer now keeps the horizontal scrolling position even when refreshing the journal (issue 6679).
Fix an issue where APT changelogs or package descriptions with non-ASCII characters would be displayed incorrectly.
Clarify that the graphs in the storage summary show usage in bytes.
Fix an issue where the first guest would not update its status in the resource tree for five minutes after creation.
Allow filtering for name, node or VMID when adding guests to a resource pool.
Fix an issue where certain setting dialogs, available only to highly-privileged users by default, were susceptible to XSS (PSA-2025-00013-1).
Fix an issue where the task description of some task types would be incomplete.
Fix a regression where the resource tree tags would not have the correct color (issue 6815).
Add some missing links to the online documentation (issue 6443).
Updated translations, among others: Czech, French, Georgian, German, Italian, Japanese, Korean, Polish, Spanish, Swedish, Traditional Chinese, Ukrainian.
Virtual machines (KVM/QEMU)
New QEMU version 10.1.2:
See the upstream changelog for details.
Support for TPM state in qcow2 format.
Some VM guest workloads require attaching a virtual Trusted Platform Module (TPM), for example newer Windows guests.
The state of a virtual TPM can now be stored in a non-raw format such as qcow2, and is then provided to QEMU using the QEMU storage daemon.
This allows taking snapshots of VMs with a TPM state on file-level storages such as directory, NFS and CIFS shares.
Storages with "snapshots as volume chains" (technology preview) enabled now support taking offline snapshots of VMs with TPM state.
Fine-grained control of nested virtualization for VM guests.
Some VM guest workloads need access to the host CPU's virtualization extensions for nested virtualization.
Examples are nested hypervisors or Windows guests with Virtualization-based Security enabled.
The
host
CPU type makes all CPU flags visible within the guest, which may not be desirable in all setups.
As an alternative, a new vCPU flag
nested-virt
allows to specifically enable nested virtualization.
The flag has to be enabled on a vCPU type corresponding to the host CPU vendor and generation; enabling it on a generic
x86-64-v*
vCPU type is not sufficient.
nested-virt
automatically resolves to the vendor-specific virtualization flag.
Add initial support for Intel Trust Domain Extensions (TDX).
On supported platforms, such as specific recent Intel CPUs, TDX can isolate guest memory from the host.
TDX also requires support in the guest. Windows guests currently do not support TDX.
Initial support for enabling TDX attestation is also available.
Some features like live migration are unsupported.
With Intel TDX and AMD SEV, Proxmox VE now provides initial integration of all major vendor-specific confidential computing technologies.
Allow disabling Kernel Samepage Merging (KSM) for specific VMs (issue 5291).
KSM can optimize memory usage in setups that run many similar VMs, but in some setups, it may be required to disable KSM.
Instead of completely disabling KSM, it is now possible to disable KSM only for specific VMs.
Ensure newly created EFI disks contain new Microsoft UEFI CA 2023 keys to avoid guest boot failures with Secure Boot (issue 6985).
Newer Windows ISOs will only be signed by this new CA and thus require the keys to be present.
Existing EFI disks are currently not updated automatically to avoid issues with existing Windows installations, and need to be updated manually.
PCI passthrough: Improve compatibility with devices that are already bound to the correct VFIO driver.
By default, PCI devices are bound to the
vfio-pci
driver and reset.
Some devices, such as certain NICs or GPUs, may support targeted VFIO drivers that provide more features.
The new
driver
option can be configured to keep the binding to the targeted VFIO drivers.
This option is currently only available via API and CLI.
Pressing escape during early boot of an OVMF VM will now lead to a boot device menu rather than the "EFI Firmware Setup" menu.
The reason is that choosing a different boot device is the most common use case.
The "EFI Firmware Setup" can still be selected in the boot entry menu and reached from there.
The
qm import
CLI command now supports the
import-from
parameter.
Enable Joliet for cloud-init disks to avoid issues with the nocloud ISO format on Windows guests (issue 6989).
Remove notice marking AMD SEV-ES and SEV-SNP as highly experimental.
Avoid applying pending changes when resuming from hibernation (issue 6934).
Fix a short-lived regression where aarch64 VMs would not start with machine type 10.1 or higher (issue 7014).
Disable High Precision Event Timer (HPET) for recent Linux VMs and machine version 10.1 to avoid increased CPU usage.
Avoid failing live migration of certain VMs due to different MTU settings on the source and target node.
Allow setting the
guest-phys-bits
CPU option, which may be required for PCI(e) passthrough on certain Intel CPUs (issue 6378).
Allow setting the
aw-bits
machine option to work around issues when using the Intel vIOMMU driver on newer machine versions (issue 6608).
Remote migration: Increase timeout for volume activation to avoid migration failures (issue 6828).
Ensure correct ordering of machine versions (issue 6648).
Fix an issue where a migration with conntrack state could fail due to unfortunate timing.
Fix an issue where FreeBSD guests could get stuck when canceling a SCSI request when using VirtIO SCSI (issue 6810).
Backport a fix for an issue where enabling the VNC clipboard for a Windows guest could cause the mouse pointer to get stuck.
Avoid blocking the QEMU guest agent if an
fsfreeze
command never returns.
Such situations can arise if an in-guest shutdown is triggered after initiating the freeze, but before it is finished.
This is implemented by querying the completion status multiple times instead of blocking the socket for up to one hour.
Integrate fixes for Spectre branch target injection ("VMScape", see PSA-2025-00016-1).
Fix an issue where the Disk IO graphs would show spikes if the QEMU monitor was unavailable when collecting statistics (issue 6207).
Fix an issue where a VM with an unavailable QEMU monitor would slow down statistics collection.
Fix a regression where VM template backups with QEMU machine version 10 and IDE/SATA with a read-only volume could fail (issue 6675).
Fix an issue where SCSI device passthrough would not work with QEMU machine version 10 (issue 6680).
Fix an issue where the CPU could not be edited if properties with dashes had been added manually to the VM CPU config.
Avoid a "timeout waiting on systemd" error after a live migration.
Avoid logging spurious QEMU monitor errors.
Containers (LXC)
Create LXC containers from OCI images.
Open Container Initiative (OCI) images are a popular format for distributing templates for system or application containers.
OCI images can now be uploaded manually or downloaded from image registries, and then be used as templates for LXC containers.
This allows to create full system containers from suitable OCI images.
Initial support for creating application containers from suitable OCI images is also available (technology preview).
The entrypoint for containers can now be modified in the container options.
Customization of environment variables.
The container options now allow to set environment variables that are passed to the container's entrypoint.
This is especially useful for application containers that can be configured using environment variables.
Host-managed DHCP for containers without a network stack.
For system containers, DHCP management is delegated to the network management stack shipped by the container.
However, application containers may not ship a network management stack.
Enabling host-managed DHCP allows such containers to acquire DHCP leases via the DHCP tooling on the host.
Host-managed DHCP is automatically enabled for application containers created from OCI images.
New LXC version 6.0.5:
See the upstream changelog for details.
Improve compatibility with guest distributions:
Add support for openSUSE Leap 16.0 (issue 6731).
Support Ubuntu 25.10 and 26.04 LTS releases, and warn instead of erroring out if the release is unknown.
Warn instead of erroring out if the Debian version in a container is higher than the known maximum.
Fix compatibility issues with Debian 13 point releases.
Fix DHCP issues in Debian 13 containers by actively installing
isc-dhcp-client
and
ifupdown2
.
Disable
systemd-networkd-wait-online.service
in Debian 13 containers to prevent spurious error messages and long delays during startup.
Improve compatibility with AlmaLinux 10 and CentOS 10 containers.
Lift restrictions on /proc and /sys if nesting is enabled to avoid issues in certain nested setups (issue 7006).
Regenerate snakeoil SSH certificates and keypairs shipped in Debian-based containers (PSA-2025-00017-1).
Show dynamically assigned IP addresses in the Network tab.
Document that systemd inside a container requires nesting to be enabled (issue 6897).
Enable Router Advertisements if the container is configured with DHCPv6 (issue 5538).
General improvements for virtual guests
Support for node-independent bulk actions.
Bulk actions already allow conveniently starting, shutting down, suspending, or migrating many virtual guests located on a specific node.
In addition, it is now possible to initiate bulk actions on the datacenter level.
The new datacenter-level bulk actions are also accessible by right-clicking on the datacenter in the resource tree.
Fix an issue where the API would not generate PNGs for RRD graphs.
Fix an overly strict permission check for NICs without a bridge.
HA Manager
Allow adding VMs and containers as HA resources after creation or restore.
The HA state is automatically set to started if "Start after creation"/"Start after restore" is enabled.
When deleting a resource, also optionally purge it by updating all rules containing the resource (issue 6613).
Significantly reduce the time to compute the currently used resources for Cluster Resource Scheduling (CRS) with the static load scheduler (technology preview).
Adapt the pve-ha-simulator package to the changes added to the HA Manager for PVE 9.0.
Fix a few issues due to missing imports and newly added files (issue 6839).
Add support for configuring resources for the static-load scheduler.
Fix an issue where disabled HA rules could not be re-enabled in the GUI.
Fix an issue where HA rules could not be edited in the GUI.
Support parsing version strings of nodes running legacy versions of Proxmox VE (< 8.0) in mixed clusters.
Fix a warning due to a wrong emptiness check for blocking resources (issue 6881).
Fix issues that would incorrectly mark rules as unsatisfiable, caused by wrongly also considering ignored resources.
Apply negative affinity constraints before positive ones, in order not to unnecessarily limit the number of available nodes.
Remove the computed and obsolete group field from the service status data.
Improved management for Proxmox VE clusters
Improve reliability of opening shells for root@pam to other cluster nodes (issue 6789).
Improvements to the /cluster/metrics/export API endpoint for pull-style metric collection (an example query follows this list):
Use credentials from the original request for proxying.
Increase timeout for collecting metrics in a cluster.
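For reference, the endpoint can be queried for pull-style collection, for example via pvesh; additional parameters beyond the plain path shown here are not covered in this changelog:
pvesh get /cluster/metrics/export --output-format json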
ACME Account view: Improve handling of providers that do not return an e-mail field, such as Let's Encrypt.
Fix a short-lived regression that caused an error when listing ACME plugins (issue 6932).
API viewer: Add description panel to return section (issue 6830).
Improvements to the extended metrics collection introduced in PVE 9.0, based on the feedback from the community:
On upgraded nodes, data is now transparently fetched from RRD files in the old format.
This results in more robust and simpler code and also provides data for longer time windows.
Migrating RRD data when upgrading from Proxmox VE 8.4 is not necessary anymore.
Fix two glitches with the resolution calculation for the longer, aggregated timeframes.
Fix a timing issue when reading from RRD files before the metrics are written to them (issue 6615).
Backup/Restore
File restore from container backups on a Proxmox Backup Server instance: Add support for symlinks when downloading a directory as a ZIP archive (issue 4995).
Improvements to the backup provider API:
Fix an issue where backups with fleecing would fail for VMs with a TPM state (issue 6882).
Storage
Improvements to snapshots as volume chains (technology preview):
Disallow disabling volume-chain snapshots if a qcow2 image exists, as this can have unintended consequences.
Fix an issue where taking a snapshot would fail after a disk move (issue 6713).
Fix an issue where cloning a VM from a snapshot would fail.
Fix cluster-locking behavior when performing snapshot operations on a shared LVM storage.
As "snapshot as volume chains" requires machine version 10 or higher, fail early when attempting to start a VM with a lower machine version.
Improvements to the LVM-thick plugin:
Use the more performant blkdiscard instead of cstream for wiping removed volumes when possible (see the sketch after this list).
Fix an issue where to-be-removed volumes would not be activated before attempting to wipe the volume (issue 6941).
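Roughly speaking, the wipe step now prefers discarding over rate-limited zeroing; a simplified sketch of the two approaches, with a placeholder device path (this is an illustration, not the actual plugin code):
# fast path: discard all blocks of the removed logical volume
blkdiscard /dev/pve/vm-100-disk-0
# fallback: overwrite with zeroes at a limited rate
cstream -i /dev/zero -o /dev/pve/vm-100-disk-0 -T 10 -v 1 -b 1048576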
LVM-thin plugin: Avoid LVM warning when creating a template.
iSCSI plugin: Add initial support for portals that return hostnames instead of IP addresses during discovery.
ZFS plugin: Avoid overly strict error detection when deleting a volume (issue 6845).
Improvements to the ESXi import:
Fix an issue where live import from an ESXi storage would fail for QEMU machine version 10.
Ensure the FUSE process is cleaned up after removing an ESXi storage (issue 6073).
Ceph
Mapping volumes of Windows VMs with KRBD now sets the rxbounce map option (issue 5779).
This fixes an issue where Windows VMs with disks on an RBD storage with KRBD enabled would cause warnings and degraded performance on the host.
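For context, rxbounce is a krbd map option that can also be passed when mapping an image by hand; a hedged example with placeholder pool and image names:
rbd device map vm-pool/vm-100-disk-0 -o rxbounce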
Simplify Ceph installation on air-gapped clusters by allowing a 'manual' repository option to be chosen in the wizard and in pveceph install (issue 5244).
The usage data printed by pveceph pool commands now also mentions the unit.
Fix a short-lived issue where OSDs newly created under Proxmox VE 9.0 would not become active automatically after reboot (issue 6652).
Fix a regression where creating OSDs with a separate DB/WAL device would fail (issue 6747).
Access control
Allow deleting the comment of an API token by setting it to an empty value (issue 6890).
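For example, this can be done via the token update API, e.g. with pvesh; the user and token names below are placeholders:
pvesh set /access/users/alice@pve/token/monitoring --comment ''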
Fix an issue where virtual NIC settings would display bridges that cannot actually be used due to insufficient permissions.
Add an API endpoint for verifying token-owned VNC authentication tickets to be used by termproxy.
Firewall & Software Defined Networking
More detailed status reporting for the Software-Defined Networking (SDN) stack in the GUI.
Local bridges and VNets report the currently connected guest NICs.
EVPN zones additionally report the learned IPs and MAC addresses.
Fabrics are now part of the resource tree and report routes, neighbors, and interfaces.
Allow interfaces starting with nic and if as bridge ports (issue 6806).
Improvements to the nftables firewall (technology preview):
Fix atomicity when updating ipsets, to ensure consistent filtering.
Move the conntrack statement to the forward chain so it gets properly re-created on flush (issue 6831).
Create chains in the host table of a VNet firewall only if the host firewall is enabled.
When creating firewall entries, only use the IP address of a container instead of the whole subnet.
Fix up the ipfilter matching logic in the firewall (issue 6336).
Add support for legacy ipsets and alias names in the firewall (issue 6107).
Support overlapping ipsets via the auto-merge flag.
Merge the "management" and "local_network" ipsets.
Correctly document that NDP is actually enabled by default.
Fix an issue where the GUI would not allow removing the fabric property from an EVPN controller or VXLAN zone.
Correctly set the maximum allowed ASN for controllers.
Add better documentation for SDN controller, VNet and zone endpoints and their returned properties.
Add fabrics status to the SDN status function, allowing the web UI to display it.
Make the rollback and lock endpoints return null in accordance with the specified return type.
Add descriptions to fields in the "rule" return type.
Document the return type of the firewall's list_rules endpoint.
Print a task warning when reloading the FRR configuration fails.
Improved management of Proxmox VE nodes
Renew the node certificate even if the current certificate has already expired (issue 6779).
This avoids issues in case a node has been shut down for a long time.
Improve pvereport to provide a better status overview:
Add replication job configuration.
Improvements to the pve8to9 upgrade tool (a usage example follows this list):
Remove obsolete checks and warnings concerning RRD migration.
LVM: Detect a manual thin_check_options override that could cause problems with thin pool activation after upgrade.
Check that GRUB is set up to update removable GRUB installations on the ESP.
Detect problematic situations if the systemd-boot metapackage is installed, as this can cause problems on upgrade.
Increase severity of failing boot-related checks.
Fix false positives when checking unified cgroup v2 support for containers.
Clarify informational and warning messages.
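As a usage reminder, the checklist can be run on each node before the upgrade; a short example, assuming it follows the same invocation as the earlier pveXtoY tools:
# run all checks, including the ones that take longer
pve8to9 --full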
Fix a buffer overflow issue in vncterm/spiceterm handling of ANSI escape sequences (PSA-2025-00018-1).
vncterm/spiceterm are spawned when a user with sufficient privileges initiates a VNC or SPICE session, for accessing a node or container console via the GUI.
Fix an issue where CLI command completion would print a "Can't use an undefined value" error (issue 6762).
Improve robustness of alternative names support for network interfaces introduced in Proxmox VE 9.0.
Fix a regression that caused an error if a VLAN interface was defined on top of a bond member.
Lower the timeout of retrieving disk SMART data to 10 seconds, in order to avoid blocking the GUI (issue 6224).
Add timestamps to debug logs of pveproxy and pvedaemon to facilitate troubleshooting.
Updates to packages rebuilt from Debian GNU/Linux upstream versions:
systemd is rebuilt to work around issues in current upstream handling of systemd-boot. The version was increased to match upstream.
rrdtool is rebuilt to provide a native systemd unit file. The ListenStream directive was updated to use /run as the directory for the socket, so it no longer prints a deprecation warning.
Installation ISO
Allow pinning network interface names in the GUI, TUI and automated installer.
When the kernel version changes, interface names may change due to new (PCIe) features being picked up. This can lead to broken network configurations.
By pinning the network interfaces, the names will not change between kernel versions.
Including this functionality in the installer ensures that a node will not change its interface names across updates.
Enable the import content type by default on storages set up by the installer.
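On existing installations, the same can be achieved manually, e.g. for the local directory storage; a sketch only, since --content replaces the whole list, so include the content types already enabled on that storage (check pvesm status and the storage documentation for the exact set supported by your storage type):
pvesm set local --content iso,vztmpl,backup,snippets,import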
Set up the directory for the rrdcached database to avoid a race condition.
Mark /etc/pve immutable to avoid clobbering it when it is not mounted.
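This roughly corresponds to setting the immutable attribute on the mount point; shown only as an illustration of the mechanism:
# only meaningful while the pmxcfs file system is not mounted on /etc/pve
chattr +i /etc/pve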
Set the compatibility_level of postfix to 3.6 to avoid a warning.
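On an already-installed system, the equivalent manual change would look roughly like this (illustration only):
postconf -e compatibility_level=3.6
systemctl reload postfix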
Remove a warning by setting the timezone before configuring postfix.
Do not create the deprecated /etc/timezone file.
Add an option to proxmox-auto-install-assistant to verify the hashed root password.
Do not select a Debian mirror and use the CDN for Debian repositories instead.
Notable changes
Kernel 6.17 is reported to fix a memory leak issue when using Intel NICs with the ice driver and MTU 9000; see here for more details.
Known Issues & Breaking Changes
NVIDIA vGPU Compatibility
If you are using NVIDIA's GRID/vGPU technology, its driver must be compatible with the kernel you are using. As of November 2025, NVIDIA's vGPU host drivers are not compatible with kernel 6.17.
To avoid a failed update and keep vGPU support working, you have two options:
Postpone the update until kernel 6.17 is supported by NVIDIA's vGPU driver
Prevent installing the kernel headers for 6.17 and pin your kernel to 6.14
This can be done by manually installing the kernel-specific header package and uninstalling the proxmox-default-headers package:
apt install proxmox-headers-6.14
apt remove proxmox-default-headers
After that, make sure your boot loader loads the 6.14 kernel by default, e.g. by using proxmox-boot-tool to pin the kernel.
proxmox-boot-tool kernel pin
You can list the exact available versions with proxmox-boot-tool kernel list. See the admin guide for more details.
Make sure you update the pin when the 6.14 kernel gets an update, or remove it once NVIDIA's driver is compatible with kernel 6.17. For the list of tested versions, see NVIDIA vGPU on Proxmox VE.
Potential issues booting into kernel 6.17 on some Dell PowerEdge servers
Some users have reported failure to boot into kernel 6.17 and machine check errors on certain Dell PowerEdge servers, while kernel 6.14 boots successfully. It is reported that enabling SR-IOV Global and I/OAT DMA in the firmware helps. See this forum thread.
If enabling SR-IOV Global and I/OAT DMA does not resolve the issue, we recommend pinning the 6.14 kernel:
proxmox-boot-tool kernel pin 6.14.11-4-pve
You can list the exact available versions with proxmox-boot-tool kernel list. See the admin guide for more details.
Compatibility issues of LINSTOR/DRBD and kernel 6.17
Users have reported that the DRBD kernel module is not yet compatible with kernel 6.17, and that building the module for 6.17 via DKMS will fail; see the forum for more details.
Until a fix is available, a workaround is to manually install and pin a 6.14 kernel.