Hey there,
I'm quite interested in this project - been using Unraid for over a decade, but over the last few years have been using it virtualized under Proxmox. Performance is fine (15 disks, 2 parity) - I can max out my LSI HBA.
I attempted to use nonraid, so moved the super.dat, and it imported just fine (my disks are XFS), array started, etc.
However I am seeing something very odd happen - I went to do a bit of cleanup on one of my disks and was doing an rsync from disk1 to disk2, and the performance was absolutely abysmal. Here's what was happening:
- Performance was really bad (14-25MB/s).
- The rsync operation with the --progress flag would not update the speed in real time, only every 30-60 seconds.
- If I cancelled the rsync (Ctrl-C), it would pause for a good 30-40 seconds while, I assume, it flushed the data from memory (I can tell because iostat -x 2 shows writes continuing on parity even after Ctrl-C has been hit).
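To sanity-check that the post-Ctrl-C pause really is dirty page cache draining (rather than nonraid doing something else), the kernel's own counters can be watched while the parity writes finish - this is a generic Linux check, nothing nonraid-specific:

```shell
# Watch the dirty/writeback page-cache counters; Dirty should fall
# toward zero over the same 30-40s window that the parity device
# keeps writing in iostat.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Running it in a loop (e.g. `watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'`) makes it easy to see whether the hang lines up with the Dirty counter draining.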
What's curious: on the same Proxmox system I purposely invalidated my parity, mounted disk1 and disk2 manually, and ran the same rsync operation. Performance was 160-180MB/s, which is about as fast as I can expect (yes, I know this isn't including parity - see below), --progress showed real-time speed, and cancelling took effect immediately, with no hanging or flushing from memory.
Under Unraid, I would do this rsync operation (with valid parity) quite often (via unbalance and directly with rsync), and performance was always 70-80MB/s, which I believe is what you can expect from a disk-to-disk copy while parity is being calculated.
I spent 3-4 hours last night trying to troubleshoot this, and the common thread appears to be nonraid - granted, I don't really have any other way to validate that.
Any thoughts? Thanks!