> This works also on PVE 9 but ... it's utilizing fast clone pretty randomly. Most VMs are just "normal" merged (slow). Are there any constraints on the chain maybe?

I'm not sure how you are testing, but can you confirm these 3 variables are set as follows after a boot? Proxmox ZFS kernel parameters may be set differently after a reboot or a Proxmox upgrade, due to the appliance design of Proxmox. Also, have you forced a synthetic full on each full run with the registry entry "HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\ForceTransform", as mentioned previously?
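For reference, setting that registry value from an elevated command prompt on the Veeam Backup & Replication server looks roughly like this. This is a sketch only: the value name (ForceTransform) and REG_DWORD type are taken from the discussion above, so verify them against Veeam's own KB for your version before relying on it.

```shell
:: Sketch: force a synthetic full transform on each full run
:: (verify the exact key path, value name, and data against Veeam's KB)
reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v ForceTransform /t REG_DWORD /d 1 /f

:: Confirm the value was written
reg query "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v ForceTransform
```

The Veeam services generally need a restart before a new registry value takes effect.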
All my tests were on Rocky9 to give me complete control of the tin.
I do use Proxmox from a home/lab perspective, but not as a backup target for Veeam.
Code:
echo "zfs_bclone_enabled:`cat /sys/module/zfs/parameters/zfs_bclone_enabled`"
echo "zfs_bclone_wait_dirty:`cat /sys/module/zfs/parameters/zfs_bclone_wait_dirty`"
echo "zfs_dio_enabled:`cat /sys/module/zfs/parameters/zfs_dio_enabled`"
zfs_bclone_enabled:1
zfs_bclone_wait_dirty:1
zfs_dio_enabled:0

Looking quickly at the release notes, there is one change that now defaults zfs_bclone_wait_dirty to 1 (hooray!): https://github.com/openzfs/zfs/pull/17455
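Since Proxmox may reset these parameters after a reboot or an upgrade, one way to pin them is a modprobe.d options file. This is a minimal sketch assuming a standard modprobe.d setup; the parameter names come from the check above, and the file name zfs-bclone.conf is arbitrary.

```shell
# Sketch: persist the ZFS module parameters across reboots (run as root)
cat > /etc/modprobe.d/zfs-bclone.conf <<'EOF'
options zfs zfs_bclone_enabled=1
options zfs zfs_bclone_wait_dirty=1
EOF

# Apply immediately without a reboot (the zfs module must already be loaded)
echo 1 > /sys/module/zfs/parameters/zfs_bclone_enabled
echo 1 > /sys/module/zfs/parameters/zfs_bclone_wait_dirty
```

On systems where ZFS is loaded from the initramfs (e.g. ZFS-on-root), `update-initramfs -u -k all` may also be needed so the early-boot copy of the options matches.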
OpenZFS is looking better than ever!
(By the way, the latest Proxmox PVE 9.0.6 still seems to be tracking a slightly earlier OpenZFS version for now: 2.3.3.)
Code:
zfs --version
zfs-2.3.4-1
zfs-kmod-2.3.4-1

Statistics: Posted by ashleyw — Aug 27, 2025 3:54 am





