Good day all.
I have a question about best practices for backing up Windows shares from a NetApp CIFS server. For reference, our current config is as follows:
1. NetApp ONTAP storage is set up and added as a "NAS Filer" for the source data.
2. The destination storage is a Wasabi S3 bucket
3. Approximately 300 Windows shares, each with its own backup job running daily. They are all primary backups; none of them are "Archive Backups".
4. All shares are currently set to a maximum retention of 84 months (7 years, per our requirement to keep backups that long).
5. All share backup start times are spread across the entire evening, roughly 10 jobs per 30-minute slot. Example: the first 10 shares start at 6:00 pm, the next 10 at 6:30 pm, the next 10 at 7:00 pm, and so on.
6. We have limited bandwidth. Our networking team was only able to provide us 300 Mb/s during the day and 500 Mb/s at night.
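For anyone wanting to picture the stagger in point 5, here is a minimal sketch of that schedule (share names, the 6:00 pm first start, and the function name are my own illustration, not anything from Veeam):

```python
from datetime import datetime, timedelta

def staggered_schedule(num_shares=300, per_slot=10,
                       first_start=datetime(2025, 6, 11, 18, 0),
                       slot_gap=timedelta(minutes=30)):
    """Assign each share a start time: `per_slot` shares per 30-min slot."""
    starts = {}
    for i in range(num_shares):
        starts[f"share{i + 1:03d}"] = first_start + (i // per_slot) * slot_gap
    return starts

schedule = staggered_schedule()
# 300 shares / 10 per slot = 30 slots, so the last group kicks off
# 14.5 hours after the first one (6:00 pm -> 8:30 am the next morning).
```

That 14.5-hour spread is why a single long-running job can push the tail of the schedule into the next business day.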
The setup is working right now, but some of these shares are huge, over 17 TB. A "post-processing" activity runs every time a backup job completes, so the 3-4 shares of this size spend all night in post-processing, even when there are no new files to back up. This is severely slowing down the rest of the backups.
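A rough back-of-the-envelope calculation shows the scale involved: even a single full read of one of those big shares would blow past a nightly window on this link (best-case math only, ignoring protocol overhead and any per-object latency to S3):

```python
def transfer_hours(size_tb, link_mbps):
    """Best-case wall-clock hours to move size_tb (decimal TB)
    over a link_mbps link, with zero overhead assumed."""
    bits = size_tb * 1e12 * 8
    return bits / (link_mbps * 1e6) / 3600

# Moving a full 17 TB share over the 500 Mb/s night link:
print(round(transfer_hours(17, 500), 1))  # -> 75.6 (hours, i.e. 3+ nights)
```

So anything that forces the job to re-read or re-touch the whole 17 TB data set, rather than just the changed files, is going to dominate the schedule regardless of how the jobs are staggered.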
What can I do to resolve this issue? Would using the archive tier help in any way? We don't have separate "performance storage" and "capacity storage": the source is the NetApp, and the destination repository is the same Wasabi S3 account. I can create multiple buckets, but they would all be on the same Wasabi endpoint.
Are we doing this right? Or am I missing anything here? (Mind you, we use Veeam Backup and Replication exclusively to back up Windows shares; we don't use it with VMware or for any other type of backup.)
Statistics: Posted by HenryA — Jun 11, 2025 2:48 pm