|
Post by giessle on Feb 4, 2007 20:02:34 GMT 7
Hi,
I started with 2 drives (RAID 0) in the N5200 and added 3 more drives later. I chose RAID Migration to get a 5-disk RAID-0. After 1-2 days the N5200 was done with the job, but my "Data Capacity" hadn't changed; only the "Total Capacity" had grown.
==> In short: does RAID Migration not work at all? Why is Thecus wasting time describing the feature in the FAQ with some tables, but providing no helpful howto?
Has anyone succeeded with RAID Migration, either RAID0 --> RAID5 or RAID0 --> RAID0?
Regards, giessle
|
|
|
Post by giessle on Feb 4, 2007 20:33:42 GMT 7
/dev/vg0/lv0      1.3T     1.2T   46.0G  96%  /raid/data
/dev/vg0/syslv 1007.9M    32.1M  924.6M   3%  /raid/sys
==> 4x750GB = 3TB (RAID5)
Apparently Thecus doesn't care what data capacity is inside the box.
Is the only solution "Remove RAID" = "destroy all data" and start over with an empty RAID5?
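A quick sanity check on the numbers above (a sketch; the 5x 750GB figure comes from this thread). RAID5 gives up one disk's worth of space to parity, and vgdisplay reports binary units (TiB, which it labels "TB") while disks are sold in decimal GB, which is why 3000 GB shows up later as "2.73 TB":

```shell
# RAID5 usable space: (N - 1) of N disks carry data, one disk's worth is parity.
disks=5
disk_gb=750
usable_gb=$(( (disks - 1) * disk_gb ))
echo "${usable_gb} GB"    # the "4x750GB = 3TB" above

# The same 3000 decimal GB expressed in binary TiB (what vgdisplay calls "TB"):
awk 'BEGIN { printf "%.2f TiB\n", 3000e9 / (1024 ^ 4) }'
```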
|
|
|
Post by giessle on Feb 4, 2007 21:28:09 GMT 7
root@127.0.0.1:~# vgdisplay
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.73 TB
  PE Size               4.00 MB
  Total PE              714437
  Alloc PE / Size       714189 / 2.72 TB
  Free PE / Size        248 / 992.00 MB
  VG UUID               iIwXFX-cSYT-GXMk-ETeI-1mlb-DNce-i2CSXJ
==> So part of the Thecus N5200 has recognized that 5 disks are inside. What's next? resize2fs needs an unmounted filesystem. By the way, is that the right way to do it?
|
|
|
Post by omega on Feb 4, 2007 21:41:26 GMT 7
Hi giessle,
I checked the source code and I can see several calls to resize2fs. There are scripts /img/bin/migrate_raid.sh, resize_raid.sh and jbod_resize.sh, and all of them appear to take care of resizing the filesystem as well.
So in my opinion everything is in place, but somehow it didn't work for you. Thecus should be able to tell you what went wrong and how to fix your situation now.
Andreas
|
|
|
Post by giessle on Feb 4, 2007 22:31:13 GMT 7
Thanks omega for your answer.
I had the same problem with my second Thecus N5200, built from 5x 500GB disks (Hitachi), but that was with firmware 1.00.4. I moved all files to other devices and then started with a blank RAID5.
I have asked Thecus; I'll post their answer here.
|
|
|
Post by omega on Feb 5, 2007 1:19:11 GMT 7
Giessle, did you do the same upgrade from RAID-0 to RAID-5 with your second N5200? Yes, please keep us informed. BTW, another very interesting command for your issue is mdadm --detail /dev/md0, just in case you don't know it. If there is something interesting in the output, you might consider posting it here too.
Andreas
|
|
|
Post by kevincy on Feb 5, 2007 22:09:23 GMT 7
> root@127.0.0.1:~# vgdisplay
> [vgdisplay output quoted from giessle's post above]
Hello, try to display more info with the following commands:
pvdisplay /dev/md0
vgdisplay vg0
lvdisplay
|
|
|
Post by giessle on Feb 7, 2007 3:35:55 GMT 7
root@127.0.0.1:/raid/data/daten# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.02
  Creation Time : Sat Feb  3 16:18:05 2007
     Raid Level : raid5
     Array Size : 2926335744 (2790.77 GiB 2996.57 GB)
    Device Size : 731583936 (697.69 GiB 749.14 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Feb  6 21:30:36 2007
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 03476644:a841a4a9:7273174d:94311b2d
         Events : 0.2761

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
       4       8       66        4      active sync   /dev/sde2
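As a cross-check (a small sketch using the numbers copied from the mdadm output above), the reported Array Size is exactly (Raid Devices - 1) times the Device Size, as expected for a RAID5 array:

```shell
# Values copied from the mdadm --detail output above (units: 1K blocks).
raid_devices=5
device_size=731583936
array_size=$(( (raid_devices - 1) * device_size ))
echo "$array_size"    # matches the reported "Array Size : 2926335744"
```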
|
|
|
Post by giessle on Feb 7, 2007 3:37:17 GMT 7
I still got no answer from thecus.
|
|
|
Post by giessle on Feb 7, 2007 3:37:35 GMT 7
root@127.0.0.1:/raid/data/daten# pvdisplay /dev/md0
  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               vg0
  PV Size               2.73 TB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              714437
  Free PE               248
  Allocated PE          714189
  PV UUID               u3uM1u-Ipyv-m56S-9UBR-KFvI-9MX0-jkVop3
|
|
|
Post by giessle on Feb 7, 2007 3:38:42 GMT 7
root@127.0.0.1:/raid/data/daten# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg0/syslv
  VG Name                vg0
  LV UUID                Rgklpp-Zp18-zgXb-C9XB-zFTW-ZU0f-otBHOP
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg0/lv0
  VG Name                vg0
  LV UUID                ePcdtA-Il9N-3lWM-7MER-NPLp-ccZM-vfo38m
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                2.72 TB
  Current LE             713933
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1
|
|
|
Post by giessle on Feb 7, 2007 3:44:21 GMT 7
omega: did you do the same upgrade from RAID-0 to RAID-5 with your second N5200?
Yes, the second N5200 initially had 2 disks and I migrated from a 2-disk RAID0 to a 5-disk RAID5. With firmware 1.00.4, only 1GB of data capacity and 1GB of snapshot space were left. With firmware >= 1.00.5 the snapshot space is removed until the snapshot function works properly, but that space is not available for data either.
|
|
|
Post by kevincy on Feb 7, 2007 7:16:33 GMT 7
> root@127.0.0.1:/raid/data/daten# lvdisplay
> [lvdisplay output quoted from giessle's post above]
The migration process looks correct so far. Could you run the following commands and post the output?
df
cat /proc/partitions
|
|
|
Post by giessle on Feb 9, 2007 4:54:03 GMT 7
root@127.0.0.1:~# df
Filesystem          1k-blocks       Used  Available Use% Mounted on
/dev/ram0               54533      36168      18365  66% /
/dev/hda5                9523       2095       6936  23% /app
tmpfs                  131072        308     130764   0% /tmpfs
/dev/hda3               30301      17040      11697  59% /cloopfs
/dev/cloop0              9911       7596       1803  81% /lib/modules
/dev/cloop1             39663      31988       5627  85% /opt
/dev/cloop2              3963       1987       1772  53% /img/www
/dev/cloop3              3019       2424        442  85% /img/bin
/dev/vg0/lv0       1438572828 1298755392   66742136  95% /raid/data
/dev/vg0/syslv        1032088      32880     946780   3% /raid/sys
root@127.0.0.1:~# cat /proc/partitions
major minor  #blocks  name

   3     0      62592 hda
   3     1       2032 hda1
   3     2      11776 hda2
   3     3      31296 hda3
   3     4          1 hda4
   3     5       9840 hda5
   3     6       7600 hda6
 240     0      10304 cloop0
 240     1      41024 cloop1
 240     2       4160 cloop2
 240     3       3136 cloop3
   8     0  732574584 sda
   8     1     987966 sda1
   8     2  731584035 sda2
   8    16  732574584 sdb
   8    17     987966 sdb1
   8    18  731584035 sdb2
   8    32  732574584 sdc
   8    33     987966 sdc1
   8    34  731584035 sdc2
   8    48  732574584 sdd
   8    49     987966 sdd1
   8    50  731584035 sdd2
   8    64  732574584 sde
   8    65     987966 sde1
   8    66  731584035 sde2
   9     1     987904 md1
   9     0 2926335744 md0
 253     0    1048576 dm-0
 253     1 2924269568 dm-1
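Comparing the two listings above shows the problem directly: the md0 array is about twice the size of the filesystem mounted on /raid/data, so the array grew but the filesystem never did. A small sketch with the figures copied from the log:

```shell
# Both sizes in 1K blocks, taken from the output above.
md0_blocks=2926335744      # RAID array size (cat /proc/partitions, md0)
fs_blocks=1438572828       # filesystem size (df, /dev/vg0/lv0)
pct=$(( fs_blocks * 100 / md0_blocks ))
echo "filesystem covers ${pct}% of the array"   # about 49%
```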
|
|
|
Post by kevincy on Feb 9, 2007 11:05:40 GMT 7
> root@127.0.0.1:~# df
> [df and /proc/partitions output quoted from giessle's post above]
Hello, I have some conclusions from your log:
- Your migration almost finished, but the filesystem resize step failed during the migration process.
- If you have not rebooted the system since the migration, run the following command; it may make it easier to see what happened:
cat /tmp/resize.tmp
- If that file does not exist, you can run the commands below. They may help you recover the missing space, but they are very technical, so be careful and double-check every step.
/app/bin/service stop
umount /raid/data
umount /raid/sys
df   <-- make sure /raid/data is no longer mounted; if so, continue
resize2fs -f /dev/vg0/lv0   <-- this will take a long time; when it finishes, please report its messages
-----------------------------------------
When resize2fs has finished, run the following commands to check whether it succeeded:
mount /dev/vg0/lv0 /raid/data
df   <-- check whether the size of /raid/data has changed
-------------------------------------------------------------------
If everything is finished, you can reboot the system.
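The steps above can be collected into a single script. This is only a sketch: the run wrapper and the DRY_RUN flag are my additions for safety, not part of the firmware, while the paths (/app/bin/service, /dev/vg0/lv0, /raid/data, /raid/sys) come from the posts in this thread. Leave DRY_RUN set to preview the commands; unset it only on the actual N5200.

```shell
#!/bin/sh
# Offline-resize procedure from the post above, wrapped for a dry run first.
# DRY_RUN is a safety switch added for this sketch; unset it to really execute.
DRY_RUN=1

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "would run: $*"
    else
        "$@" || { echo "FAILED: $*" >&2; exit 1; }
    fi
}

run /app/bin/service stop         # stop file services so the volume can unmount
run umount /raid/data
run umount /raid/sys
run resize2fs -f /dev/vg0/lv0     # grow the filesystem to fill the LV (slow)
run mount /dev/vg0/lv0 /raid/data
run df                            # verify /raid/data is now larger
```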
|
|