Cannot start dirty degraded array

thegameksk
Posts: 3
Joined: 2021-04-20 01:42

Cannot start dirty degraded array

#1 Post by thegameksk »

I had a 10-HDD RAID 5 array. After a reboot a few days ago I got the above message. I turned off the NAS and went through my connections. After the reboot it said the RAID can't start because 2 HDDs are non-operational. I powered off and found the 2 suspect HDDs. One spins but isn't seen by the NAS anymore. The other one's data port was damaged. I was able to get that one working again, and I'm now back to the "cannot start dirty degraded array" message followed by errors. 9 HDDs show up under Disks. Under RAID Management my RAID is missing. Under File Systems it's missing too. Is there any way to get back in so I can back up my data?

steve_v
df -h | grep > 20TiB
Posts: 1396
Joined: 2012-10-06 05:31
Location: /dev/chair
Has thanked: 78 times
Been thanked: 173 times

Re: Cannot start dirty degraded array

#2 Post by steve_v »

thegameksk wrote:I had a 10-HDD RAID 5 array...
2 HDDs are non-operational...
One spins but isn't seen by the NAS anymore...
The other one's data port was damaged.
Welcome to why RAID5 sucks, and why RAID in general is not a backup. If you really have two disks down (or the contents of the second are trashed), your data is toast. I've been there with RAID5 arrays, and these days I won't touch one with a 10' pole, especially one with >5 disks.
thegameksk wrote:Anyway to get back in so I can backup my data?
If that second disk is actually okay and has valid data on it, probably. RAID recovery is a bitch though, and you want to be _extremely_ careful.
If the only thing wrong is that the array was stopped dirty, and the remaining disks all have good, in-sync data, you may get away with manually assembling it, e.g. something like 'mdadm --assemble --force /dev/[array] /dev/[disk1] /dev/[disk2] /dev/[diskN]' (list only the disks that are actually present; unlike --create, --assemble takes no "missing" keyword). Ensure the order is correct.
Have a look at this thread too; there's more than one way to force mdadm to start a dirty array.
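
To make that concrete, a minimal sketch (every device name below is a placeholder, not your actual layout):

Code: Select all

# Stop the array first if it is sitting there half-assembled
# (/dev/md0 and the /dev/sdX names are placeholders -- adjust to your setup):
mdadm --stop /dev/md0
# Forced assemble, listing every member disk that is still present and readable:
mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd
Then check /proc/mdstat before you mount anything.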

However...

In this scenario it's very easy to make things worse, to the point the array is unrecoverable. Before you do anything, read this, this, this, the mdadm manuals, and possibly this as a variation on not hosing good drives. And back up your superblocks.
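
For the superblocks, dumping the --examine output of every member disk to a file that does not live on the array is cheap insurance. A sketch, assuming your members happen to be sd[a-j]:

Code: Select all

# Record the md superblock details of each member disk (UUIDs, device roles,
# data offsets) somewhere OFF the array. /dev/sd[a-j] is an assumed pattern.
for d in /dev/sd[a-j]; do
    echo "== $d =="
    mdadm --examine "$d"
done > /root/md-superblocks-$(date +%F).txt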

It's been a while since I ran an mdraid array (I've moved to ZFS), so I'll probably be of limited hand-holding value. Posting the output from the likes of 'mdadm --examine' for each of your disks certainly won't hurt though, and I'm sure there are better RAID-heads than me around here somewhere... Or at least there used to be.
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.

thegameksk
Posts: 3
Joined: 2021-04-20 01:42

Re: Cannot start dirty degraded array

#3 Post by thegameksk »

steve_v wrote:Welcome to why RAID5 sucks, and why RAID in general is not a backup. If you really have two disks down (or the contents of the second are trashed), your data is toast. I've been there with RAID5 arrays, and these days I won't touch one with a 10' pole, especially one with >5 disks.
Originally I did use ZFS, but that gave me an issue with my data randomly disappearing, so I was advised to go this route. Two years later, here I am. I was in the middle of a backup when all of a sudden I couldn't log in because my OS drive was full. Rebooted and here I am. I erased some Docker images that it said weren't in use anymore so I could log back into OMV.
steve_v wrote:If that second disk is actually okay and has valid data on it, probably. RAID recovery is a bitch though, and you want to be _extremely_ careful.
If the only thing wrong is that the array was stopped dirty, and the remaining disks all have good, in-sync data, you may get away with manually assembling it, e.g. something like 'mdadm --assemble --force /dev/[array] /dev/[disk1] /dev/[disk2] /dev/[diskN]' (list only the disks that are actually present; unlike --create, --assemble takes no "missing" keyword). Ensure the order is correct.
Have a look at this thread too; there's more than one way to force mdadm to start a dirty array.

However...

In this scenario it's very easy to make things worse, to the point the array is unrecoverable. Before you do anything, read this, this, this, the mdadm manuals, and possibly this as a variation on not hosing good drives. And back up your superblocks.

It's been a while since I ran an mdraid array (I've moved to ZFS), so I'll probably be of limited hand-holding value. Posting the output from the likes of 'mdadm --examine' for each of your disks certainly won't hurt though, and I'm sure there are better RAID-heads than me around here somewhere... Or at least there used to be.
I am such a newbie, so please forgive me any errors. I do have an extra HDD that I can add if need be.

root@openmediavault:~# mdadm --examine /dev/sdj
/dev/sdj:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : c90ec44d:fdd5404c:4ed47794:e3453a6b
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 3f593c57 - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 8
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

root@openmediavault:~# mdadm --examine /dev/sdk1
/dev/sdk1:
MBR Magic : aa55
root@openmediavault:~# mdadm --examine /dev/sdk5
mdadm: No md superblock detected on /dev/sdk5.
root@openmediavault:~# mdadm --examine /dev/sdf
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 3bab9df0:3a5341a7:12078972:eb0169a2
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 527344d4 - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 9
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

root@openmediavault:~# mdadm --examine /dev/sdh
/dev/sdh:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : b528e4da:b283bb18:56a2707d:361194a1
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b8469d52 - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 7
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

root@openmediavault:~# mdadm --examine /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 0bcbdb06:4bb894db:868bd887:504b5e3d
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 4d499784 - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

root@openmediavault:~# mdadm --examine /dev/sdi
/dev/sdi:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 426106c7:9bfc6657:64ab8885:83638801
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 4b20aa22 - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 6
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

root@openmediavault:~# mdadm --examine /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 5105237a:4f94d090:95ba4ab1:13ee894f
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b16a7fa5 - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 5
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

root@openmediavault:~# mdadm --examine /dev/sdg
/dev/sdg:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 8c351a8b:64cc9c56:41c0660a:23eee930
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : c2a9edad - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

root@openmediavault:~# mdadm --examine /dev/sda
/dev/sda:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 33b84d96:29b2400e:15d10512:5a50321b
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 7768c926 - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 4
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

root@openmediavault:~# mdadm --examine /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : cf2c9347:55c545b7:71a4b1ee:090e62bb
Name : openmediavault:0
Creation Time : Mon Apr 8 22:55:52 2019
Raid Level : raid5
Raid Devices : 10
Avail Dev Size : 19532611584 (9313.88 GiB 10000.70 GB)
Array Size : 87896752128 (83824.88 GiB 90006.27 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : active
Device UUID : 4b93f915:42581b24:a9e90bef:feff2835
Internal Bitmap : 8 sectors from superblock
Update Time : Tue Apr 13 09:42:41 2021
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 3ec128d - correct
Events : 6937212
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAA.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

LE_746F6D617A7A69
Posts: 932
Joined: 2020-05-03 14:16
Has thanked: 7 times
Been thanked: 65 times

Re: Cannot start dirty degraded array

#4 Post by LE_746F6D617A7A69 »

First of all, I suppose that your NAS is not based on Debian (just because it uses the Linux kernel does not mean that it's Debian-compatible), so you should ask the developers of your NAS system for help.
You may try the --force option with --assemble mode, which relaxes some of the conditions normally required to assemble the array.

You should also check the output of:

Code: Select all

mdadm --assemble --scan -v
This can show more detailed information about why assembling the array has failed.
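
If that scan shows all the members but refuses to start the array because it is dirty/degraded, the two suggestions combine into something like the following sketch (it assumes the array is defined in /etc/mdadm/mdadm.conf, which is where --scan looks):

Code: Select all

# See which members mdadm finds and why assembly is refused:
mdadm --assemble --scan -v
# If every remaining member looks sane and in sync, retry with --force,
# which relaxes the dirty/degraded checks:
mdadm --assemble --scan --force -v
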
Bill Gates: "(...) In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating system."
The_full_story and Nothing_have_changed

thegameksk
Posts: 3
Joined: 2021-04-20 01:42

Re: Cannot start dirty degraded array

#5 Post by thegameksk »

I was able to force it back together and am backing up my data now. Thanks!
