Just an update.
I've been running some tests on V13 with ZFS 2.3.4 on Rocky 9.6, which we've been running since we deployed V13 a couple of weeks ago.
There are two unresolved issues we've noticed.
1. Bulk deletions are quite slow in ZFS.
During my testing I had to kill large test backups numerous times, and deleting 30+ TB spread across several hundred files took longer than expected (on the order of 30 minutes). There are some changes around async deletes that can be made, but the issue seems to be more pronounced as soon as block cloning enters the mix (which is the default). Slow deletions seem to be a well-documented ZFS phenomenon. I don't consider this a blocker, as this activity is mostly drip-fed during normal Veeam workloads rather than the mass deletions I've been doing.
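For anyone who wants to reproduce the deletion behaviour, this is roughly how I've been measuring it (a Python sketch; the pool name, dataset path, and file pattern are made up for illustration). The unlinks themselves return quickly; it's the background reclaim, visible via the pool's `freeing` property, that drags on:

```python
import subprocess
import time
from pathlib import Path

POOL = "backup01"                        # hypothetical pool name
SCRATCH = Path("/mnt/backup01/scratch")  # hypothetical dataset path

def freeing_bytes(pool: str) -> int:
    """Bytes ZFS is still reclaiming in the background (pool 'freeing' property)."""
    out = subprocess.run(
        ["zpool", "get", "-H", "-p", "-o", "value", "freeing", pool],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

t0 = time.monotonic()
for f in SCRATCH.glob("*.vbk"):
    f.unlink()                           # the unlink itself returns quickly
print(f"unlinks took {time.monotonic() - t0:.1f}s")

# The space comes back asynchronously; this is where the 30 minutes go.
while (pending := freeing_bytes(POOL)) > 0:
    print(f"still freeing {pending / 2**40:.2f} TiB")
    time.sleep(30)
```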
2. Performance of Synthetic Fulls is slower than expected.
As part of my performance testing, I ran an active full backup, followed by two incrementals and then a synthetic full.
Active full duration: 3hr 22min, Processed: 43.9TB, Read: 38TB, Transferred: 21TB, 336 VMs
Incremental duration: 45min, Processed: 39.8TB, Read: 1.5TB, Transferred: 318GB, 336 VMs
Synthetic full duration: 6hr 3min, Processed: 39.8TB, Read: 1.9TB, Transferred: 416GB, 336 VMs
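For context, a quick back-of-envelope on those numbers (just processed size divided by duration) shows how far apart the runs are:

```python
# Back-of-envelope processing rates from the three runs above.
runs = {
    "active full":    (43.9, 202),  # processed TB, duration in minutes
    "incremental":    (39.8, 45),
    "synthetic full": (39.8, 363),
}
for name, (tb, minutes) in runs.items():
    print(f"{name}: {tb / (minutes / 60):.1f} TB/h processed")
```

That works out to roughly 13 TB/h for the active full, 53 TB/h for the incremental, and under 7 TB/h for the synthetic full, which is what makes the synthetic stand out.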
During a synthetic full cycle, each VM will very quickly have an incremental done on it, and then the status of that VM in the job will change to:
“Creating synthetic full backup (xx% done) [fast clone]”
I'm assuming the duration of the cloning stage is related to the number of blocks being cloned on the storage repository for that VM's backup.
Out of the 6hr 3min, I'd estimate 5hr 15min was spent with the various VMs sitting in that "Creating synthetic full backup" stage.
Theoretically, the performance of the synthetic should be much closer to that of the incremental: on a synthetic run an incremental is made first, and then block cloning should kick in, making the rest of the synthetic full relatively quick.
This doesn't appear to be the case: as you can see from the durations, the synthetic full took nearly twice as long as the active full.
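To make the expectation concrete: with block cloning, "copying" a file is just writing new block references, so a clone should complete in near-constant time regardless of file size. A minimal sketch of that mechanism (hypothetical file names; FICLONE is the standard Linux reflink ioctl, which OpenZFS honours when block cloning is enabled):

```python
import fcntl
import shutil
import time

FICLONE = 0x40049409  # Linux FICLONE ioctl: clone src's blocks into dst by reference

def reflink(src: str, dst: str) -> None:
    # No data is copied; ZFS only records new references in the BRT.
    with open(src, "rb") as s, open(dst, "wb") as d:
        fcntl.ioctl(d.fileno(), FICLONE, s.fileno())

# Hypothetical files on the repository: the clone should be near-instant,
# while the byte-for-byte copy of a multi-TB .vbk takes hours.
for label, copy_fn in (("reflink", reflink), ("bytecopy", shutil.copyfile)):
    t0 = time.monotonic()
    copy_fn("full.vbk", f"{label}-out.vbk")
    print(f"{label}: {time.monotonic() - t0:.2f}s")
```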
We've reached out to the ZFS team, and the initial thoughts are that the BRT (block reference table) ZFS uses to keep track of cloned blocks is performing slower than expected, possibly due to a lack of RAM in the ARC (adaptive replacement cache). Sadly, the remediation suggested so far is to use a ZFS "special" device to speed up BRT activity.
This is not something we have the appetite for at this stage, as the special device itself could become a point of failure and add to the complexity and cost of deployment.
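For anyone wanting to poke at the same theory, the ARC sizing and the pool-level block-cloning counters can be checked with standard tooling; a small sketch (the pool name is hypothetical):

```python
import subprocess

POOL = "backup01"  # hypothetical pool name

# ARC size vs its configured ceiling, from the kstats ZFS exposes.
stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f.readlines()[2:]:   # skip the two header lines
        parts = line.split()         # columns: name, type, data
        if len(parts) == 3:
            stats[parts[0]] = int(parts[2])
print(f"ARC: {stats['size'] / 2**30:.1f} GiB used of "
      f"{stats['c_max'] / 2**30:.1f} GiB max")

# Pool-level block-cloning (BRT) counters, OpenZFS 2.2+ pool properties.
subprocess.run(["zpool", "get", "bcloneused,bclonesaved,bcloneratio", POOL])
```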
The opinion of the Veeam support teams is that XFS on hardware-based controllers does not exhibit these issues, so until there are easy, workable solutions, it's not desirable for Veeam to officially support ZFS.
At this stage I need to do further testing to see if these issues can be addressed through ZFS tuning changes, or if coding/design changes need to be made to ZFS.
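The first knobs I plan to look at are the stock OpenZFS module parameters around ARC sizing, block cloning, and async frees; a quick way to dump the current values (whether these are the right parameters to tune is exactly what I still need to establish):

```python
from pathlib import Path

# OpenZFS module parameters that look relevant to the two issues above;
# whether tuning them actually helps is still an open question on our side.
PARAMS = [
    "zfs_arc_max",                 # ARC ceiling in bytes (0 = auto-sized)
    "zfs_bclone_enabled",          # block cloning on/off
    "zfs_async_block_max_blocks",  # max blocks freed per txg by async destroy
    "zfs_free_min_time_ms",        # min time per txg spent freeing blocks
]

base = Path("/sys/module/zfs/parameters")
for p in PARAMS:
    f = base / p
    print(f"{p} = {f.read_text().strip()}" if f.exists() else f"{p}: not present")
```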
It's unlikely the performance drop on synthetics is Veeam-related, but I can't rule that out either, given that we are likely one of only a handful of people pushing Veeam workloads to a ZFS target like this.
For our backups the advantages far outweigh the disadvantages: we have plenty of storage on tap, and our backup window is much larger than the run time, so we are happy to live with longer-than-expected synthetic fulls once a week or so while the problems are isolated and fixed.
From our perspective, the critical part of backups is always reliability and durability, and in our practical experience ZFS now excels at both.
If anyone has any ideas, or has seen these types of issues in other environments that support block cloning/reflinks, that would be fantastic, to see if there is any correlation.
Shout out to Hannes and the team from Veeam, and also Alex from iXsystems, for all their suggestions/help to get this far.
thanks
Ashley
[Screenshot: active full job statistics]
[Screenshot: incremental job statistics]
[Screenshot: synthetic full job statistics]