We should have a module (or better yet, the functionality built into the system from the beginning) that would let us expand the size of a RAID if there were more unallocated room available on either disk. Rationale: I'm presently confronted with the task of upgrading my n2100 to a larger disk. However, the only way I can see to do this is to back up EVERYthing to an external drive, then put TWO new disks in, format as RAID1, then copy back. So, not only do I have the expense of the two new disks, I have to find a THIRD empty disk to backup to.
RIDICULOUS!!! Not to mention the amount of time it would take to do all this over the network!!!
It would be MUCH better if we could put one new, bigger disk in, have it rebuild the RAID onto that disk, then take out the old disk, and have it rebuild and expand the RAID1 to fill the capacity of the new disks. I know it can be done... there is a module/functionality for it on the n5200.
As it is possible on the N5200, it should be possible to do it the same way on the N2100.
The way should be like this:
1) Replace the second disk with a new, bigger one and wait until the RAID is healthy again
2) Replace the first disk with another big one and wait again until the RAID is healthy again
3) Resize the RAID so that the whole disk space is used
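On the command line (logged in as root via ssh), the steps above would look roughly like this. This is a sketch only: the device names (/dev/sdb2 and /dev/md0) are assumptions for a typical two-bay setup and may differ on your box, and I haven't run this on an N2100.

```shell
# After physically swapping in a new disk (steps 1 and 2), partition it to
# match the surviving disk, then add its data partition back into the array.
# /dev/sdb2 is an ASSUMED partition name -- check yours first!
mdadm /dev/md0 --add /dev/sdb2

# Watch the rebuild progress; wait until the array shows as healthy
cat /proc/mdstat

# Step 3: once BOTH disks are the new, bigger ones, grow the array to use
# all the space the underlying partitions provide
mdadm --grow /dev/md0 --size=max

# ...and grow the filesystem to match the new array size
resize2fs /dev/md0
```

Naturally, having a backup before any of this is still strongly advised.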
I haven't tried this yet, neither on the N5200 nor on an N2100, so this is only unproven theory!
But as always when doing such things: you really should have a backup of your data!! So needing another disk for the backup is not a technical requirement but a safety precaution.
I'm not averse to trying this, but I'm concerned that the smaller disk might automatically be assigned as the "mirror" disk. When I put the larger disk in the second slot, it became Disk 1, even though the smaller (with all the data on it) had previously been Disk 1. So, even though I'm 90% sure that it would try to mirror from the disk that has RAID1 established to the non-raid, I'm a little hesitant to risk it, unless someone has more specific info about how the RAID controller decides which disk to mirror to and from.
Again, all this apprehension and blind guessing could be saved by a module or built-in functionality for this.
I went ahead and put in the new, larger drive and rebuilt the raid.
As for actually expanding the RAID to maximum size on the larger drive, here's what I've found out:
There's a command-line program specifically for exactly this! I logged in as root via ssh and found it's already compiled on the n2100. The command is mdadm --grow. There's no man page for it on the n2100, so see the following: linuxmanpages.com/man8/mdadm.8.php
Usage for expanding a raid: mdadm --grow device options
mdadm --help for instructions.
Now all I need is some specific instruction on how to apply it in my case. It seems like Thecus would have thought to build a module in to do this automatically, ESPECIALLY since the program is already there!
But it's cool that the program's at least there! It looks like this should actually be quite easy.
I'm assuming I can ignore everything about creating boot drives and GRUB, and skip straight to the point at which both new drives are installed with healthy RAID1 at the original disk's partition size. Basically, I think I can start at the point "Expand the array to the new size"
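For what it's worth, here is how I picture that last stretch of the procedure as commands. This is unverified on the n2100, and it assumes the data filesystem on /dev/md0 (mounted at /raid) can actually be unmounted first:

```shell
# Expand the array to the maximum size the partitions allow
mdadm --grow /dev/md0 --size=max

# The filesystem should be checked before resizing; that normally means
# unmounting it first (which, as it turns out below, is not trivial here)
umount /raid
e2fsck -f /dev/md0

# Expand the ext3 filesystem to fill the grown array
resize2fs /dev/md0

# Remount the data filesystem (assumes an /etc/fstab entry for /raid)
mount /raid
```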
It seems fairly straightforward. I'm hoping for some comment on it from people who know better, before I waste a lot of time trying to get it to work. Can it really be that simple?
As far as I can see you are absolutely right. And you still have the original drives as a backup.
If I were you I would give mdadm --grow /dev/md0 --size=max followed by resize2fs /dev/md0 a try. But to say this clearly: I am not a pro!
md0 is your data partition. But I have no idea what md1 is...

buechse:/dev# mdadm --query /dev/md0
/dev/md0: 231.44GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md0: No md super block found, not an md component.
buechse:/dev# mdadm --query /dev/md1
/dev/md1: 980.38MiB raid1 4 devices, 0 spares. Use mdadm --detail for more detail.
/dev/md1: No md super block found, not an md component.

And detailed:

buechse:/dev# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Tue Jul 10 12:27:41 2007
     Raid Level : raid1
     Array Size : 242685824 (231.44 GiB 248.51 GB)
    Device Size : 242685824 (231.44 GiB 248.51 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Jul 3 06:01:19 2008
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
ok, now I'm probably going to sound really dumb with this next question, but...
I did e2fsck -f, and got this:
N2100:/app/etc/sysconfig# e2fsck -f
Usage: e2fsck [-panyrcdfvstDFSV] [-b superblock] [-B blocksize]
              [-I inode_buffer_blocks] [-P process_inode_size]
              [-l|-L bad_blocks_file] [-C fd] [-j external_journal]
              [-E extended-options] device

Emergency help:
 -p                   Automatic repair (no questions)
 -n                   Make no changes to the filesystem
 -y                   Assume "yes" to all questions
 -c                   Check for bad blocks and add them to the badblock list
 -f                   Force checking even if filesystem is marked clean
 -v                   Be verbose
 -b superblock        Use alternative superblock
 -B blocksize         Force blocksize when looking for superblock
 -j external_journal  Set location of the external journal
 -l bad_blocks_file   Add to badblocks list
 -L bad_blocks_file   Set badblocks list
N2100:/app/etc/sysconfig#
So my question is, do I just run this on /dev/md0? I've actually never had to specify a specific device before, at least not using fsck.
And what's the difference between fsck and e2fsck?
Edit: I tried this, and yikes!
N2100:/app/etc/sysconfig# e2fsck -f /dev/md0
e2fsck 1.39 (29-May-2006)
/dev/md0 is mounted.
WARNING!!! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage.
N2100:/app/etc/sysconfig# umount -a
umount: Couldn't umount /raid: Invalid argument
umount: Couldn't umount /app: Inappropriate ioctl for device
umount: Couldn't umount /opt: Device or resource busy
umount: Couldn't umount /usr/lib: Device or resource busy
umount: Couldn't umount /proc: No such file or directory
umount: Couldn't umount /dev/pts: No such file or directory
umount: Couldn't umount /var: No such file or directory
umount: Couldn't umount /: No such file or directory
umount: Couldn't umount /: No such file or directory
N2100:/app/etc/sysconfig# e2fsck -f /dev/md0
e2fsck 1.39 (29-May-2006)
/dev/md0 is mounted.
WARNING!!! Running e2fsck on a mounted filesystem may cause SEVERE filesystem damage.
Do you really want to continue (y/n)? no
check aborted.
N2100:/app/etc/sysconfig#
That's a little peculiar. I even tried forcing it:
N2100:/app/etc/sysconfig# umount -af
umount: forced umount of /raid failed!
N2100:/app/etc/sysconfig#
I didn't try to umount my /raid until now, but now I did. This seems to be a bug in busybox. Search the net for "busybox umount Invalid argument" to get some info... I was trying to stop all services and modules still accessing /raid but still can't umount. I was able to remount it read-only (with -r).
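If the busybox build on the N2100 includes the fuser applet (an assumption on my part; I haven't checked this firmware), it could at least show what is still holding /raid open:

```shell
# List the PIDs of processes with files open on the /raid mount
fuser -m /raid

# Fallback without fuser: see which processes have their working
# directory somewhere under /raid (errors from dead PIDs suppressed)
ls -l /proc/*/cwd 2>/dev/null | grep /raid
```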
You could try to run e2fsck on the ro-mounted filesystem. But mdadm will definitely get a problem with the mounted fs.
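As a sketch of that idea (untested on the N2100, and assuming busybox mount supports the remount option here):

```shell
# Remount the data filesystem read-only so nothing can write to it
mount -o remount,ro /raid

# A read-only check: -n answers "no" to all questions, so e2fsck
# reports problems but changes nothing. Actual repairs would still
# need a real umount.
e2fsck -fn /dev/md0
```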
Add to wishlist: New busybox version...
3* N2100 fw 2.1.09 with 512MB RAM, 500GB RAID1 and fanmod
  PID USER     STATUS   RSS  PPID %CPU %MEM COMMAND
10165 root     R        772 18066  0.7  0.1 top
18029 root     S       1732  2569  0.1  0.3 sshd
 2577 root     S       6656  2162  0.0  1.2 httpd
 2575 root     S       6600  2162  0.0  1.2 httpd
 3620 root     S       6548  2162  0.0  1.2 httpd
 2587 root     S       6508  2162  0.0  1.2 httpd
 2574 root     S       6508  2162  0.0  1.2 httpd
 2576 root     S       6504  2162  0.0  1.2 httpd
 3646 root     S       6472  2162  0.0  1.2 httpd
 2162 root     S       4832     1  0.0  0.9 httpd
 2930 root     S       2352     1  0.0  0.4 smbd
 2942 root     S       2344  2930  0.0  0.4 smbd
 2569 root     S       1488     1  0.0  0.2 sshd
10155 root     S       1444     1  0.0  0.2 nmbd
 3101 root     S       1388     1  0.0  0.2 afpd
 2204 root     S       1096     1  0.0  0.2 cupsd
 3013 root     S       1044  2991  0.0  0.2 upnpd
 3000 root     S       1044  2991  0.0  0.2 upnpd
 2991 root     S       1044  2985  0.0  0.2 upnpd
 3011 root     S       1044  2991  0.0  0.2 upnpd
 9234 root     S       1044  2991  0.0  0.2 upnpd
 2985 root     S       1044     1  0.0  0.2 upnpd
 2211 root     S        836     1  0.0  0.1 udpr
Yes, there are plenty of processes running, but none of them should be running from /raid... yet it is "busy" for some reason.
I also wanted to see what IS mounted as separate filesystems:
N2100:/app/etc# mount
rootfs on / type rootfs (rw)
/dev/root on / type ext2 (rw)
tmpfs on /var type tmpfs (rw)
devpts on /dev/pts type devpts (rw)
proc on /proc type proc (rw,nodiratime)
/dev/cloop0 on /usr/lib type ext2 (ro)
/dev/cloop2 on /opt type ext2 (ro)
/dev/mtdblock3 on /app type jffs2 (rw,noatime)
/dev/md0 on /raid type ext3 (rw,noatime,nodiratime)
N2100:/app/etc#
Finally, I wanted to see exactly what IS SUPPOSED to be mounted:
Unfortunately, I'm not quite sure what to make of it.
There seem to be no processes running that would access /raid, and as for fstab, I'm not sure what to make of it... I expected to see a lot more there, given how many things are mounted. At a loss for the moment.