
VMware vSphere • Re: Poor restore performance when restoring in AllFlash environment

I checked some slides from 2023-03 about the HPE Apollo 4510 tests. They were done with 45 VMs in parallel, using data that is around 50% compressible (which is realistic). 45 * 100 MByte/s * 2 would be around 9 GByte/s, and the Apollo 4510 tests indeed reached up to 12 GByte/s backup speed and up to 9 GByte/s restore speed with 45 VMs in parallel.
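For reference, here is that arithmetic as a quick Python sketch (the 100 MByte/s per-stream figure and the 2x factor for ~50% compressible data are the assumptions from above, not measured values):

```python
# Back-of-the-envelope check of the Apollo 4510 restore numbers.
# Assumptions from the post: ~100 MByte/s effective per-stream rate,
# ~50% compressible data (2x factor), 45 VMs restored in parallel.
streams = 45
per_stream_mb_s = 100
compression_factor = 2

aggregate_gb_s = streams * per_stream_mb_s * compression_factor / 1000
print(f"Expected aggregate restore rate: ~{aggregate_gb_s:.0f} GByte/s")
# -> ~9 GByte/s, matching the observed restore speed with 45 parallel VMs
```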

The same would probably apply here if there were 45 disks restoring in parallel to work around the "per-stream limit".

The key issue I have always seen with direct SAN is that it writes synchronously, and the per-stream performance can be relatively low depending on the SAN array. Internal SSDs in the ESXi hosts have better per-stream performance than SAN storage. I remember being involved in restore tests for a mirrored all-flash 3PAR 20000 a few years ago, and the results were similar to what we see in this thread. HPE was involved and everything was tested in detail, and everything operated "as expected":
- the VixDiskLib test from Veeam support was aligned with the restore performance for direct SAN
- restoring the data to a non-mirrored LUN doubled the restore performance
- using HotAdd was fastest, because it can write asynchronously (see the sketch after this list)
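A rough model of why asynchronous writing wins (my own simplification, not how VixDiskLib or HotAdd is actually implemented): a synchronous writer keeps only one I/O in flight, so per-stream throughput is capped by the array's write latency, while an asynchronous writer keeps several I/Os in flight and hides that latency. The I/O size and latency below are assumed example values:

```python
# Little's law: throughput = data in flight / latency.
block_mb = 1.0       # assumed I/O size in MByte
latency_ms = 8.0     # assumed round-trip write latency to the array

def throughput_mb_s(queue_depth: int) -> float:
    """Sustained rate with `queue_depth` writes in flight."""
    return block_mb * queue_depth / (latency_ms / 1000)

print(f"synchronous  (QD=1): {throughput_mb_s(1):.0f} MByte/s")  # ~125
print(f"asynchronous (QD=8): {throughput_mb_s(8):.0f} MByte/s")  # ~1000
```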

The per-stream limit for SAN storage has been between 100 and 150 MByte/s for a long time. I heard some vendors improved it to 250-500 MByte/s, but in general one needs parallel streams to get the best performance from a SAN array. That's why customers often split a 10 TB VM into 10x 1 TB VMDKs for better parallel restore performance.
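To illustrate the effect of splitting (hypothetical numbers, using a 125 MByte/s per-stream limit from the range above):

```python
# Restore time: one 10 TB VMDK on a single stream vs. ten 1 TB VMDKs
# restored in parallel, each getting its own stream.
vm_tb = 10
per_stream_mb_s = 125  # assumed per-stream limit

hours_single = vm_tb * 1_000_000 / per_stream_mb_s / 3600
hours_split = (vm_tb / 10) * 1_000_000 / per_stream_mb_s / 3600
print(f"1 x 10 TB VMDK : ~{hours_single:.0f} h")  # ~22 h
print(f"10 x 1 TB VMDK : ~{hours_split:.1f} h")   # ~2.2 h
```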

Statistics: Posted by HannesK — Jul 07, 2025 6:08 am


