RAID System Errors
I – Introduction
II – How to Fix if RAID seems “In Degraded Mode”
III – How to Fix if RAID seems “Unmounted”
IV – How to Fix if RAID seems Not Active
V – RAID HDD order seems wrong, e.g. “RAID 5 – Drives: 2 4 3”, and device seems Not Active
VI – How to Fix if RAID shows as “Single Disk”
VII – User Removed the RAID Volume
VIII – How to Fix if 2 HDDs Give Errors on RAID 5, or 3 HDDs on RAID 6
IX – RAID fail – HDDs have no partitions
X – RAID fail – Partitions have no md superblock
XI – No md0 for array
XII – NAS fail – Mount HDD(s) with another QNAP NAS
I – Introduction
Warning: This document is intended for professional users only. If you do not know what you are doing, you may damage your RAID and lose data. QNAP Support Taiwan is very good at solving these kinds of RAID corruptions, and my advice is to contact them directly in such cases.
You can download the QNAP NAS Data Recovery document below:
NAS is OK but cannot access data
- raidtab is broken or missing
- Check the RAID settings and configure the correct raidtab
HDDs have no partitions
- Use parted to recreate the partitions
Partitions have no md superblock
- mdadm -CfR --assume-clean
RAID array can't be assembled or status is inactive
- Check the above and make sure every disk in the RAID is present
RAID array can't be mounted
- e2fsck, e2fsck -b
Able to mount RAID but data has disappeared
- umount and e2fsck; if that does not work, try data recovery
RAID is degraded, read-only
- Back up the data, then mdadm -CfR; if that does not work, recreate the RAID
NAS fail
- Mount HDD(s) with another QNAP NAS (system migration)
- Mount HDD(s) with a PC (R-Studio / ext3/4 reader) (3rd-party tools)
Data deleted accidentally by user/administrator
- Data recovery company, PhotoRec, R-Studio/R-Linux
Introduction to the mdadm command
# mdadm -E /dev/sda3 > tells you whether the partition is an md member disk
# mdadm -Af /dev/md0 /dev/sd[a-d]3 > force-assembles the available md member disks into the RAID array
———————————–
# mdadm -CfR -l5 -n8 --assume-clean /dev/md0 /dev/sd[a-h]3 > recreates the array and overwrites the md superblock on each disk
> -CfR = force-create the RAID array and run it immediately
> -l5 = RAID 5 array
> -n8 = number of member disks (8 in this example)
> --assume-clean = skip resyncing the data partitions
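Putting these together, a typical check-before-repair sequence (a sketch only, assuming a 4-bay NAS whose data partitions are /dev/sda3 to /dev/sdd3) would be:
# cat /proc/mdstat
** see which md arrays the kernel currently knows about and their status
# mdadm -E /dev/sda3
** repeat for sdb3, sdc3 and sdd3; compare the RAID level, device order and event counts
# mdadm -Af /dev/md0 /dev/sd[a-d]3
** only try to force-assemble the array after confirming every member looks sane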
Introduction to Two Scripts
# config_util
Usage: config_util input
input=0: Check if any HD existed.
input=1: Mirror ROOT partition.
input=2: Mirror Swap Space (not yet).
input=4: Mirror RFS_EXT partition.
>> Usually we run config_util 1 to get md9 ready
# storage_boot_init
Usage: storage_boot_init phase
phase=1: mount ROOT partition.
phase=2: mount DATA partition, create storage.conf and refresh disk.
phase=3: Create_Disk_Storage_Conf.
>> Usually we run storage_boot_init 1 to mount md9
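Putting the two scripts together, a typical recovery session on a NAS whose configuration volume is not mounted (a sketch only; the exact phases needed depend on the model and the failure) usually looks like:
# config_util 1
** mirror/assemble the ROOT partition so md9 is ready
# storage_boot_init 1
** mount the ROOT (config) partition, md9
# storage_boot_init 2
** mount the DATA partition and recreate storage.conf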
II – How to Fix if RAID seems “In Degraded Mode”
If your RAID status looks like the case described below, use this section. If not, please do not try anything from this section:
A – QNAP FAQ Advice
Log in to the QNAP web UI and go to Disk Management -> Volume Management. One of your HDDs will show a “Read/Write” error instead of “Normal” status, or the NAS will not recognize that there is an HDD in that slot at all.
Simply remove the broken HDD, wait at least 20 seconds, and insert the new HDD. If more than one HDD appears broken, do not change two HDDs at the same time. Wait for the NAS to finish synchronizing the first HDD, and only after it completes, change the other broken HDD.
If you lose more HDDs than the RAID's failure tolerance allows, quickly back up your data to another QNAP or an external HDD.
Note: The new HDD must be the same size as your other HDDs. The NAS will not accept a smaller replacement HDD, and I do not advise using a larger one in these cases either. You can use a different brand of HDD, or HDDs with a different SATA speed.
Also, if the slot still appears empty even after you replace the HDD with a new one, it may be a hardware problem with the QNAP SATA cable or mainboard. Send the device to the vendor for repair, or open the device, unplug the SATA cable from the mainboard, and plug it back in.
B – QNAP RAID Recovery Document
RAID fail – RAID is degraded, read-only
• When the status is degraded, read-only, there are more disk failures than the RAID can support; you need to help the user check which disks are faulty if the web UI is not helpful
– Check klog or dmesg to find the faulty disks (see the sketch after this list)
• Ask the user to back up the data first
• If the disks look OK, after the backup, try “mdadm -CfR --assume-clean” to recreate the RAID
• If the above does not work, recreate the RAID
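To find the faulty disks from the kernel log, a quick check (a sketch only; the exact messages vary by disk and controller) is to grep dmesg for error lines, for example:
# dmesg | grep -i error
# dmesg | grep -i 'sd[a-h]'
** disks that show repeated read/medium errors or bus resets are the likely failed members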
C – My Advice
If your system shows “In Degraded Mode, Failed Drive X”, you have probably lost more HDDs than the RAID can tolerate, so take your backup and reinstall the QNAP from the beginning.
III – How to Fix if RAID Becomes “Unmounted”
If your RAID status looks like the case described below, follow this section. If not, please do not try anything from this section:
IF YOU HAVE CRITICAL DATA ON THE QNAP, PLEASE CONTACT QNAP TAIWAN SUPPORT
A – QNAP FAQ Solution
Q : My NAS lost all its settings, and all HDDs are shown as unmounted.
A : In case of corrupt/lost config:
1. Power off the NAS. Remove the HDD(s)
2. Power on the NAS
3. After a short beep and a long beep, plug the HDD back into the NAS
4. Run QNAP Finder, it will find the NAS, do NOT configure it!
5. Connect to the NAS by telnet on port 13131 (e.g. with PuTTY)
6. Run the following commands to recover with the default config
Use the following commands if using 1 drive (if you have more than 1 HDD, skip these commands):
#mount /dev/sda1 /mnt
# cd /mnt/.config/
# cp /etc/default_config/uLinux.conf /mnt/.config/
# reboot
Use the following commands if using 2 drives (not tested); if you have more than 2 HDDs, skip these commands:
# mdadm -A /dev/md9 /dev/sda1 /dev/sdb1
# mount /dev/md9 /mnt
# cd /mnt/.config/
# cp /etc/default_config/uLinux.conf /mnt/.config/
# reboot
7. The above procedure will reset the configuration back to default, and you will then need to reconfigure the NAS. All the shares should be available again now.
Please remember NOT to re-initialize the HDD, since this will format it and all your data will be lost.
8. To be prepared the next time this happens, always make sure you have a working backup of your personal uLinux.conf!
Note: uLinux.conf is the main settings configuration file.
Taken from: QNAP FAQ
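To keep such a backup ready, a simple copy works (a sketch only; /etc/config/uLinux.conf is the usual location of the live file on a running NAS, but the path can differ by firmware, and the destination share below is just an example):
# cp /etc/config/uLinux.conf /share/Public/uLinux.conf.bak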
If you have 4 or more HDDs, follow the procedure below. Do not start the QNAP without its HDDs, unlike the first two cases above:
RAID fail – Can't be mounted, status unmounted
(from the official QNAP RAID recovery document)
1. Make sure the raid status is active (more /proc/mdstat)
2. Try mounting manually:
# mount /dev/md0 /share/MD0_DATA -t ext3
# mount /dev/md0 /share/MD0_DATA -t ext4
# mount /dev/md0 /share/MD0_DATA -o ro (read only)
3. Use e2fsck / e2fsck_64 to check:
# e2fsck -ay /dev/md0 (auto and continue with yes)
4. If there are many errors during the check, there may not be enough memory and you need to create more swap space.
Use the following commands to create more swap space:
[~] # more /proc/mdstat…….
md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
530048 blocks [2/2] [UU]
……….
[~] # swapoff /dev/md8
[~] # mdadm -S /dev/md8
mdadm: stopped /dev/md8
[~] # mkswap /dev/sda2
Setting up swapspace version 1, size = 542859 kB
no label, UUID=7194e0a9-be7a-43ac-829f-fd2d55e07d62
[~] # mkswap /dev/sdb2
Setting up swapspace version 1, size = 542859 kB
no label, UUID=0af8fcdd-8ed1-4fca-8f53-0349d86f9474
[~] # mkswap /dev/sdc2
Setting up swapspace version 1, size = 542859 kB
no label, UUID=f40bd836-3798-4c71-b8ff-9c1e9fbff6bf
[~] # mkswap /dev/sdd2
Setting up swapspace version 1, size = 542859 kB
no label, UUID=4dad1835-8d88-4cf1-a851-d80a87706fea
[~] # swapon /dev/sda2
[~] # swapon /dev/sdb2
[~] # swapon /dev/sdc2
[~] # swapon /dev/sdd2
[~] # e2fsck_64 -fy /dev/md0
If there is no file system superblock or the check fails, you can try a backup superblock.
1. Use the following command to find the backup superblock locations:
# /usr/local/sbin/dumpe2fs /dev/md0 | grep superblock
Sample output:
Primary superblock at 0, Group descriptors at 1-6
Backup superblock at 32768, Group descriptors at 32769-32774
Backup superblock at 98304, Group descriptors at 98305-98310
..163840…229376…294912…819200…884736…1605632…2654208…4096000… 7962624… 11239424… 20480000…
23887872…71663616…78675968..102400000..214990848..512000000…550731776…644972544
2. Now check and repair a Linux file system using alternate superblock # 32768:
# e2fsck -b 32768 /dev/md0
Sample output:
fsck 1.40.2 (12-Jul-2007)
e2fsck 1.40.2 (12-Jul-2007)
/dev/sda2 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
…….
Free blocks count wrong for group #241 (32254, counted=32253).
Fix? yes
………
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda2: 59586/30539776 files (0.6% non-contiguous), 3604682/61059048 blocks
3. Now try to mount file system using mount command:
# mount /dev/md0 /share/MD0_DATA -t ext4
RAID fail – able to mount but data has disappeared
• If the mount is OK but the data has disappeared, unmount the RAID and run e2fsck again (you can try a backup superblock); see the sketch below
• If it still fails, try a data recovery program (PhotoRec, R-Studio) or contact a data recovery company
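A hedged outline of that retry, reusing the commands from the section above (adjust the filesystem type and superblock number to your setup):
# umount /dev/md0
# e2fsck_64 -fy /dev/md0
** if the plain check fails, retry with a backup superblock, e.g. e2fsck_64 -fy -b 32768 /dev/md0
# mount /dev/md0 /share/MD0_DATA -t ext4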
IV – How to Fix if RAID seems Not Active
If your RAID status looks like the case described below, follow this section. If not, please do not try anything from this section:
Update your QNAP firmware to version 3.7.2 or later using QNAP Finder. Then go to Disk Management -> RAID Management, choose your RAID and press “Recover” to fix it. If that does not work, check the inactive RAID scenarios below or contact QNAP Support Taiwan directly.
If It Does Not Work:
IF YOU HAVE CRITICAL DATA ON THE QNAP, PLEASE CONTACT QNAP TAIWAN SUPPORT
RAID fail – RAID can't be assembled or status is inactive:
1. Check the partitions and md superblock status
2. Check whether any RAID disk is missing or faulty
3. Use “mdadm -CfR --assume-clean” to recreate the RAID, as in the example below
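For example (a sketch only, assuming a 4-disk RAID 5 built on the usual third partitions; confirm the real level, disk count, and order with “mdadm -E” before running anything destructive):
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3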
V – RAID HDD order seems wrong, e.g. “RAID 5 – Drives: 2 4 3”, and device seems Not Active
If your RAID drive order looks like the example in the title above, first try RAID recovery. If it still fails, follow these steps:
Download WinSCP and log in to your QNAP. Go to /etc/raidtab and first take a backup of this file!
Then double-click the file. In this table, sda is your first HDD, sdb your second, and sdc your third, and here their order is wrong.
The correct table should look like the one below, so modify the raidtab accordingly:
RAID-5
raiddev /dev/md0
raid-level 5
nr-raid-disks 3
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
In this case I have 4 HDDs, so change nr-raid-disks to 4 and also add these lines:
device /dev/sdd3
raid-disk 3
It should look like this:
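(Reconstructed here as a sketch, assuming the standard sda to sdd layout:)
raiddev /dev/md0
raid-level 5
nr-raid-disks 4
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3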
Now save this file and restart your QNAP.
VI – How to Fix if All of Your HDDs Show as “Single Disk” Even Though You Have a RAID, or the RAID Was Accidentally Removed
I highly recommend contacting QNAP Support Taiwan, but if you know what you are doing, here is how to fix it:
RAID Issue – raidtab is broken
- raidtab is used to check whether a disk is in a RAID group or single, and to show the RAID information in the web UI.
- If a disk is in a RAID but the web UI shows it as single, or the RAID information differs from the actual on-disk RAID data (checked with mdadm -E), then the raidtab is probably corrupt. In that case you need to manually edit the raidtab file so that it matches the actual RAID status.
- The following templates show the raidtab contents for each configuration:
Single
No raidtab
RAID-0 Striping
raiddev /dev/md0
raid-level 0
nr-raid-disks 2
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
RAID-1 Mirror
raiddev /dev/md0
raid-level 1
nr-raid-disks 2
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
JBOD Linear
raiddev /dev/md0
raid-level linear
nr-raid-disks 3
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
RAID-5
raiddev /dev/md0
raid-level 5
nr-raid-disks 3
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
RAID-5 + Hot spare
raiddev /dev/md0
raid-level 5
nr-raid-disks 3
nr-spare-disks 1
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
spare-disk 0
RAID-5 + Global Spare
The raidtab is the same as for RAID-5.
In uLinux.conf, add a line if the global spare disk is disk 4:
[Storage]
GLOBAL_SPARE_DRIVE_4 = TRUE
RAID-6
raiddev /dev/md0
raid-level 6
nr-raid-disks 4
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3
RAID-10
raiddev /dev/md0
raid-level 10
nr-raid-disks 4
nr-spare-disks 0
chunk-size 4
persistent-superblock 1
device /dev/sda3
raid-disk 0
device /dev/sdb3
raid-disk 1
device /dev/sdc3
raid-disk 2
device /dev/sdd3
raid-disk 3
VII – User Removed the RAID Volume
# more /proc/mdstat
**Check if the RAID is really removed
# mdadm -E /dev/sda3
** Check if the MD superblock is really removed
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 3 /dev/sda3 /dev/sdb3 /dev/sdc3
** Recreate the RAID, assuming it is a 3-HDD RAID 5
# e2fsck -y /dev/md0
** Check the file system, answering “yes” to all questions. On 64-bit models, use e2fsck_64
# mount /dev/md0 /share/MD0_DATA -t ext4
** mount the RAID back
# vi /etc/raidtab
** Manually recreate the raidtab (see the templates in section VI)
# reboot
** Need to add the removed network share(s) back after reboot
VIII – How to Fix if 2 HDDs Give Errors on RAID 5, or 3 HDDs on RAID 6
If you cannot reach your data on the QNAP, plug the HDDs into another QNAP (I have saved all the data of two customers this way before).
If you can reach your data, quickly back it up to another QNAP or an external drive. After the backup completes, install the QNAP RAID system again.
IX – RAID fail – HDDs have no partitions
When you use the following commands to check an HDD, there are no partitions, or only one partition.
# parted /dev/sdx print
The following is a sample.
# blkid ** this command shows all partitions on the NAS
Note: fdisk -l cannot show the correct partition table for 3TB HDDs
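To check every disk at once, a small shell loop helps (a sketch only, assuming a 4-bay model; add or remove devices to match yours):
# for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do parted $d print; done
** each healthy disk should report 4 partitions, as in the sample output further below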
The following tool (x86 models only) can help calculate the correct partition sizes according to the HDD size. Save it on your NAS and make sure the file size is 10,086 bytes.
ftp://csdread:csdread@ftp.qnap.com/NAS/utility/Create_Partitions
1. Get every disk size:
# cat /sys/block/sda/size
625142448
2. Get the disk's partition list. It should contain 4 partitions if the disk is normal:
# parted /dev/sda print
Model: Seagate ST3320620AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 543MB 543MB primary ext3 boot
2 543MB 1086MB 543MB primary linux-swap(v1)
3 1086MB 320GB 318GB primary ext3
4 320GB 320GB 510MB primary ext3
3. Run the tool on your NAS to get the recovery commands:
# Create_Partitions /dev/sda 625142448
/dev/sda size 305245
disk_size=625142448
/usr/sbin/parted /dev/sda -s mkpart primary 40s 1060289s
/usr/sbin/parted /dev/sda -s mkpart primary 1060296s 2120579s
/usr/sbin/parted /dev/sda -s mkpart primary 2120584s 624125249s
/usr/sbin/parted /dev/sda -s mkpart primary 624125256s 625121279s
If the disk contains no partitions, run all 4 commands.
If the disk contains only 1 partition, run the last 3 commands.
If the disk contains only 2 partitions, run the last 2 commands.
If the disk contains only 3 partitions, run the last command.
4. Check the disk's partitions after recovery. It should contain 4 partitions now:
# parted /dev/sda print
Model: Seagate ST3320620AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 543MB 543MB primary ext3 boot
2 543MB 1086MB 543MB primary linux-swap(v1)
3 1086MB 320GB 318GB primary ext3
4 320GB 320GB 510MB primary ext3
5. Then run “sync” or reboot the NAS for the new partitions to take effect.
X – RAID fail – Partitions have no md superblock
• If one or all HDD partitions are lost, or the partitions have no md superblock for some unknown reason, use the mdadm -CfR command to recreate the RAID.
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3…
Note:
1. Make sure the disks are in the correct sequence. Use “mdadm -E” or check the raidtab to confirm.
2. If one of the disks is missing or has a problem, replace that disk with the word “missing”.
For example:
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 missing /dev/sdc3 /dev/sdd3
XI – No md0 for array
Manually create md0 with mdadm -CfR, using the same command format as in section X (confirm the RAID level, disk count, and order with mdadm -E first).
XII – NAS fail – Mount HDD(s) with another QNAP NAS
- The user can plug the HDD(s) into another NAS of the same model to access the data.
- The user can plug the HDD(s) into a different NAS model to access the data by performing a system migration:
o http://docs.qnap.com/nas/en/index.html?system_migration.htm
o Note: the TS-101/201/109/209/409/409U series do not support system migration.
- Since the firmware is also stored on the HDD(s), its version may differ from the firmware on the NAS. A firmware upgrade may be required after the above operation.
CONTACT US!
Disclaimer: Win-Pro Consultancy is a reseller of QNAP Products. For Technical Support, please visit www.qnap.com
If you are interested in QNAP Products:
Hotline : +65 6100 2100 (SALES)
IT Support: +65 6100 8324 (TECH)
Phone Number : +65 6717 8729
Fax Number : +65 6717 5629
Address:
38 Jalan Pemimpin
#07-04, M38
Singapore 577178