fabi
Junior Member
Posts: 61
Post by fabi on Jul 20, 2007 13:55:42 GMT 7
I also thought about open files, but since Samba is being shut down this shouldn't matter, I think — the connections are lost. One additional thing to kill in my case is the rsync daemon, as I also installed the rsync module (which of course also lives on the RAID). Another thing is that I have two sshd(-like) processes:

root@127.0.0.1:~# ps | grep sshd
 2583 root 1468 S /raid/data/module/SSHD/system/bin/sshd -f /raid/data/
20800 root 1788 S sshd: sys@pts/0

So instead of adding the process number (which gets forgotten every second time anyway) I replaced it with

killall sshd

To check whether everything worked I also added ps to the script, right before the umount. For further checks I copied lsof over from another Linux box (as it is not available in the Thecus BusyBox) and added

/app/lsof /raid/data

to the script.

After some more trial and error it seems to have done something. With my 5x500GB RAID5 I'm now up to 1.9TB total size and 1.78TB data size, which I think is what other people get if they start with this config right from the beginning. Now it's (very slowly) rebuilding; hopefully everything is okay.

Thanks for the script and the help. I was wondering, though, whether the original script(s) could be changed so this manual step would not be necessary when snapshots are not needed. Even better would be if the snapshot percentage could be entered on the RAID config page. But from what I hear, Thecus doesn't seem willing to add that.
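The pre-umount cleanup described above can be sketched as a small shell fragment. This is a minimal sketch based on the steps in the post, not the actual Thecus upgrade script: the mount point /raid/data and the /app/lsof location are taken from the post, and the exact placement within the script is an assumption.

```shell
#!/bin/sh
# Hedged sketch of the manual cleanup before unmounting the RAID.
# Assumes a BusyBox environment like the Thecus; adjust paths as needed.

MNT=/raid/data

# Kill everything that may hold files open on the RAID.
# killall catches both sshd processes by name, so no PIDs to remember.
killall sshd 2>/dev/null
killall rsync 2>/dev/null   # the rsync module also lives on the RAID

# Sanity check: see what is still running before the umount.
ps

# Extra check with lsof copied over from another Linux box
# (not part of the BusyBox install), if present.
[ -x /app/lsof ] && /app/lsof "$MNT"

# On the NAS this must succeed before the resize can proceed;
# elsewhere it will simply report that nothing is mounted there.
umount "$MNT" 2>/dev/null || echo "umount of $MNT did not succeed"
```

The point of the ps and lsof steps is purely diagnostic: if anything still shows open files under the mount point, the umount (and therefore the resize) will fail.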
Post by pellicle on Sept 23, 2007 23:46:16 GMT 7
I have just encountered the same issue. I am on the most current firmware, 2.00.01. The expand after manually adding a spare disk to the RAID array gives an error message indicating there is not enough space. As others have encountered, all the space was allocated to the snapshot, with no option to do otherwise.