Hi, I just wanted to say thanks for posting this.
We have been in a similar situation since our migration from VMware to Nutanix AHV - high churn, long immutability, Wasabi Capacity tier. Our issue was compounded by having removed the original vCenter while keeping the jobs in a disabled state. Over the last few months our Capacity tier has been growing by 1 TB+ per day.
Like yourself, we have had a long-running support case open to resolve the issue and have been anxiously waiting for immutability periods to end, hoping to see a large drop like the one in your most recent screenshots. The most recent patch was released on June 17, very soon after our case closed, but it was never suggested to us. When it came out we didn't apply it, figuring it was a security hotfix for people running a domain-joined stack. But the release notes contain this:
"Background checkpoint removal process may lag behind the addition of new data due to poor deletion API call performance on certain on-prem object storage devices, causing continuous backup accumulation. To work around this issue, these API calls will now be called concurrently instead of sequentially."
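To make the release note concrete, here is a minimal sketch (not Veeam's actual code) of why issuing deletion API calls concurrently instead of sequentially clears a backlog faster when each call to the object storage endpoint is slow. The latency figure and key names are made up for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

API_LATENCY = 0.05  # assumed per-call round trip to a slow object storage endpoint

def delete_object(key: str) -> str:
    # Stand-in for a single DeleteObject-style API call.
    time.sleep(API_LATENCY)
    return key

keys = [f"checkpoint/{i}" for i in range(40)]  # hypothetical checkpoint objects

start = time.time()
for k in keys:  # sequential: total time is roughly 40 * latency
    delete_object(k)
sequential = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:  # concurrent: ~latency * 40 / 8
    list(pool.map(delete_object, keys))
concurrent = time.time() - start

print(f"sequential {sequential:.2f}s, concurrent {concurrent:.2f}s")
```

With per-call latency dominating, the concurrent version finishes several times faster, which matches the release note's claim that removal can now keep up with the addition of new data.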
Prior to the case, we hadn't been fully aware that different types of retention apply depending on the configuration. Normal retention only applies while the source job exists and runs, so it can clean up its own files. Without the source job, "background" retention applies instead and keeps 3+ restore points indefinitely. That was exactly our situation: we had removed vCenter but kept the jobs (disabled) while retention / immutability still applied. Initially we assumed this might be the cause of our growing Capacity tier. After deleting the jobs, which switches them to "orphaned" retention, we did see restore points being removed from the Performance tier (no longer surfaced in the console), but we have been waiting a long time to see corresponding drops in the buckets beyond the background churn.
Actual retention with immutability can be hard to reason about; the explanation we were offered is as follows. Because of the object storage structure (metadata "block maps" plus block data), when immutability is configured, effectively all restore points under current retention are immutable. Any new object placed there, whether a data block or metadata, is initially made immutable for 90 days, and that period is extended later if required by job logic, block reuse, etc. In other words, if a new object depends on earlier data, that earlier data also remains immutable for the period required by the new data. This is not necessarily surfaced in the console: depended-upon data can persist long after it has disappeared from there.
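The dependency behaviour described above can be modelled in a few lines. This is only our mental model of the explanation we were given, not Veeam internals; the block names, dates, and the flat 90-day window are illustrative assumptions:

```python
from datetime import date, timedelta

IMMUTABLE_DAYS = 90  # initial immutability window, per the explanation above

immutable_until = {}  # block id -> immutability expiry date

def write_restore_point(created: date, new_blocks, reused_blocks):
    """Record a restore point: new blocks get the base window, and any
    earlier blocks it reuses have their immutability extended to match."""
    expiry = created + timedelta(days=IMMUTABLE_DAYS)
    for b in new_blocks:
        immutable_until[b] = expiry
    for b in reused_blocks:
        # Dependency: earlier data must stay immutable as long as the
        # newest restore point that references it.
        immutable_until[b] = max(immutable_until[b], expiry)

# A January restore point writes blocks "a" and "b"; a March incremental
# writes "c" but still depends on (reuses) block "a".
write_restore_point(date(2025, 1, 1), new_blocks=["a", "b"], reused_blocks=[])
write_restore_point(date(2025, 3, 1), new_blocks=["c"], reused_blocks=["a"])

print(immutable_until["a"])  # extended by the March restore point
print(immutable_until["b"])  # unreferenced, keeps its original 90-day window
```

This is why a block can stay immutable (and billable) long after the restore point that originally wrote it has vanished from the console.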
It's also frustratingly hard to get a sense of what VBR is doing in the Capacity tier. We noticed that the logs under ".\ProgramData\Veeam\Backup\System\Retention" are the source of the VBR console's History > System > Background retention > Retention job entries, where you see "Failed to perform retention Error: Unable to delete backup in the Capacity Tier because it is immutable until...". It's hard to get an overview within the console, but you can use tools like Agent Ransack to search the logs and see where immutability still applies over long periods (especially since the console often truncates the logs). One thing we noticed is that such deletion attempts seem to be made only once (per-VM backups): there don't appear to be subsequent attempts after the initial tidy-up fails because it is blocked by immutability.
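If you'd rather not click through Agent Ransack hits, a small script can pull the "immutable until" dates out of the retention logs and show the latest expiry per file. A rough sketch; the log path, file extension, and date format in the regex are assumptions based on what we saw, so adjust them for your install:

```python
import re
from pathlib import Path

# Assumed location of the background retention logs (see the path above).
LOG_DIR = Path(r"C:\ProgramData\Veeam\Backup\System\Retention")

# Assumed message shape: "... immutable until 2025-09-01 ..." - the date
# format may differ with your locale.
PATTERN = re.compile(r"immutable until\s+(\d{4}-\d{2}-\d{2})")

latest = {}  # log file name -> latest "immutable until" date seen in it
for log in LOG_DIR.rglob("*.log"):
    for line in log.read_text(errors="ignore").splitlines():
        m = PATTERN.search(line)
        if m:
            latest[log.name] = max(latest.get(log.name, ""), m.group(1))

# Longest-held immutability first.
for name, until in sorted(latest.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: immutable until {until}")
```

Running this periodically gives a rough countdown of when the remaining immutable data should finally become deletable.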
We've applied the patch this afternoon and are already seeing this directory fill up with many more logs than usual:
'.\ProgramData\Veeam\Backup\System\CheckpointRemoval\2025-07-17\WasabiBucketName'
...Update: 24 hours later we're down 20 TB.
Appreciate you creating this thread. If we hadn't seen it we'd still be waiting for orphaned-job immutability to end - though I'm not sure we'd have seen a drop without the patch, given the above and your experience.
Have a great weekend.
Statistics: Posted by le0n — Jul 18, 2025 1:03 pm