For anyone still following this, I've been running a large number of tests under 2.3 RC3 and have reached the following conclusions.
- The message "Agent: Failed to process method {Transform.CompileFIB}: Resource temporarily unavailable" still occurs when synthetics are enabled and block cloning is enabled on ZFS.
- There is no agreed root cause yet as to which side of the fence the issue sits on; Veeam suspects it is related to OpenZFS. However, using the OpenZFS block cloning reliability tests I have been unable to isolate the issue to OpenZFS.
- The following don't eliminate the errors, but they do seem to affect the stage of the job at which the errors occur:
-> Changing the block size from 4MB to 8MB (or even downwards).
-> Changing the compression level at the job level from optimal to none.
-> Reducing the number of concurrent jobs hitting the repository.
-> The number of jobs hitting the repositories simultaneously during the transformation stage does not appear to correlate with the failures.
- There appear to be long-standing issues with synthetic backups and the load they place on the target storage device, even when block cloning is in use, so even on commercial solutions many people seem to run with synthetic backups disabled.
- As a backup target with synthetic backups disabled, an OpenZFS appliance (in our case running Rocky 9 on commodity tin with a standard HBA controller and 23 spindles split over 4 vdevs) can easily saturate a 10Gb link for the entire duration of an Active Full backup, so we are in the process of upgrading our interconnects to 25Gb.
- I'm working with Veeam and the OpenZFS team on figuring out the best way forward with this (thanks to @hannesk).
- This is what active fulls look like with this configuration, which appears to be healthy throughput:
![Image]()
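In case it helps anyone reproduce or A/B test this on their own repository pool, this is roughly the kind of check I mean. The pool name `tank` is a placeholder, and the `bclone*` properties and `zfs_bclone_enabled` module parameter assume OpenZFS 2.2 or newer:

```shell
# Check whether the block_cloning feature flag is enabled/active on the pool
# ("tank" is a placeholder pool name - substitute your repository pool).
zpool get feature@block_cloning tank

# Show how much data is actually being shared via block cloning;
# bcloneused/bclonesaved staying at 0 means clones are not being created.
zpool get bcloneused,bclonesaved,bcloneratio tank

# OpenZFS exposes a runtime module parameter to toggle block cloning without
# touching the pool feature flags, which is handy for testing whether the
# Transform.CompileFIB failures track block cloning (1 = enabled):
cat /sys/module/zfs/parameters/zfs_bclone_enabled
# echo 0 > /sys/module/zfs/parameters/zfs_bclone_enabled   # disable, as root
```

Disabling via the module parameter is reversible and avoids permanently flipping a pool feature, so it's the easier lever for this kind of testing.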

Statistics: Posted by ashleyw — Nov 12, 2024 5:10 am