RAID superblocks being wiped on reboot


Re: RAID superblocks being wiped on reboot

Postby LE_746F6D617A7A69 » 2020-10-23 13:58

p.H wrote:If you want to use whole drives /dev/sdX and /dev/sdY (not partitions) as RAID members for the RAID 1 array /dev/mdN:
Code: Select all
# unmount partitions on the RAID drives (if they are mounted)
umount /dev/sdX1
umount /dev/sdY1

# erase signatures on the RAID drives and their partitions
wipefs -a /dev/sdX1
wipefs -a /dev/sdX
wipefs -a /dev/sdY1
wipefs -a /dev/sdY

# create RAID array
mdadm --create /dev/mdN --level=1 --raid-devices=2 /dev/sdX /dev/sdY

# erase signatures on the RAID array
wipefs -a /dev/mdN

# format RAID array as ext4
mkfs.ext4 /dev/mdN
^ While the above commands are technically correct, there are a few things to consider:

1) wipefs theoretically wipes RAID superblocks too, but it's better to use mdadm --zero-superblock --force /dev/sd[ab] - a more reliable method.

2) there's no need to wipe filesystem superblocks, because:
    a) you may want to re-create the array in place without destroying the data,
    b) mkfs.whatever will erase the superblocks anyway

3) for RAID1 arrays smaller than 100GB, use --bitmap=internal in Create mode (much faster sync operations)

4) use --assume-clean when creating the array - this disables the initial sync process, which saves a *lot* of time. The initial sync makes sense only if the array is re-created in place (so it keeps old data), and is completely useless otherwise - see the combined sketch below.
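
Putting points 1), 3) and 4) together, creating a fresh array could look like this (a sketch only - X, Y and N are placeholders for your actual devices):
Code: Select all
# 1) zero any old md superblocks - more reliable than wipefs for this
mdadm --zero-superblock --force /dev/sdX /dev/sdY

# 3) + 4) create with an internal write-intent bitmap, skipping the initial sync
mdadm --create /dev/mdN --level=1 --raid-devices=2 --bitmap=internal --assume-clean /dev/sdX /dev/sdY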

Re: RAID superblocks being wiped on reboot

Postby p.H » 2020-10-23 14:54

LE_746F6D617A7A69 wrote:wipefs theoretically wipes RAID superblocks too, but it's better to use mdadm --zero-superblock

mdadm --zero-superblock erases only RAID superblocks. I wanted to make sure to erase every other superblock too.

LE_746F6D617A7A69 wrote:You may want to re-create the array in place without destroying the data,

Oh no, you don't want that. In such a situation you want to wipe all that mess and make sure you start from a clean state.

LE_746F6D617A7A69 wrote:mkfs.whatever will erase the superblocks anyway

I would not rely on any mkfs to erase all previous metadata signatures. Some can be located quite far away from the beginning of the device and left untouched.
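
For instance, running wipefs without options merely lists the signatures it finds, together with their offsets - a quick way to check what a mkfs would have left behind (sdX is a placeholder):
Code: Select all
# list every signature libblkid can detect on the device, with its offset
wipefs /dev/sdX

# then erase them all explicitly
wipefs -a /dev/sdX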

Re: RAID superblocks being wiped on reboot

Postby LE_746F6D617A7A69 » 2020-10-23 15:31

p.H wrote:
LE_746F6D617A7A69 wrote:You may want to re-create the array in place without destroying the data,

Oh no, you don't want that. In such a situation you want to wipe all that mess and make sure you start from a clean state.
Yes, I don't need that - it's the OP who asked about such a possibility ;)

p.H wrote:
LE_746F6D617A7A69 wrote:mkfs.whatever will erase the superblocks anyway

I would not rely on any mkfs to erase all previous metadata signatures. Some can be located quite far away from the beginning of the device and left untouched.
??? Most filesystems use more than just one superblock - even such shit as NTFS has 2 copies of it.

Re: RAID superblocks being wiped on reboot

Postby PlunderingPirate9000 » 2020-10-24 13:03

p.H wrote:If you want to use whole drives /dev/sdX and /dev/sdY (not partitions) as RAID members for the RAID 1 array /dev/mdN:
Code: Select all
# unmount partitions on the RAID drives (if they are mounted)
umount /dev/sdX1
umount /dev/sdY1

# erase signatures on the RAID drives and their partitions
wipefs -a /dev/sdX1
wipefs -a /dev/sdX
wipefs -a /dev/sdY1
wipefs -a /dev/sdY

# create RAID array
mdadm --create /dev/mdN --level=1 --raid-devices=2 /dev/sdX /dev/sdY

# erase signatures on the RAID array
wipefs -a /dev/mdN

# format RAID array as ext4
mkfs.ext4 /dev/mdN


This seems to work - I've rebooted and the array has come up as md127.

The only difference between this and what I was doing before is the lack of the `-F` switch to mkfs.ext4, and the additional wipefs steps. I guess skipping wipefs was causing some kind of problem?

I have then run

Code: Select all
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf


It adds this line to the file

Code: Select all
ARRAY /dev/md/sagittarius:10 metadata=1.2 name=sagittarius:10 UUID=f68adb8b:8eb45692:0224171b:0ee245a8


After rebooting, the array device is still md127 - do I need to run update-initramfs -u to fix this? It should be "md10".
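
(I'm guessing the fix would be something like this - edit the ARRAY line to pin the device name, then rebuild the initramfs; /dev/md10 is just the name I want:)
Code: Select all
# in /etc/mdadm/mdadm.conf, point the ARRAY line at /dev/md10, e.g.:
#   ARRAY /dev/md10 metadata=1.2 name=sagittarius:10 UUID=f68adb8b:8eb45692:0224171b:0ee245a8
# then rebuild the initramfs so its embedded copy matches
sudo update-initramfs -u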

Re: RAID superblocks being wiped on reboot

Postby PlunderingPirate9000 » 2020-10-24 13:06

Actually, I forgot to mention - I also did a run of dd if=/dev/zero of=... to zero the entire disks sda/sdb before recreating this array. This may have helped?

Re: RAID superblocks being wiped on reboot

Postby PlunderingPirate9000 » 2020-10-24 13:30

Yes, updating the initramfs and rebooting brings the array up with the expected number.

Re: RAID superblocks being wiped on reboot

Postby p.H » 2020-10-24 14:03

PlunderingPirate9000 wrote:The only difference between this and what I was doing before is the lack of `-F` switch to mkfs.ext4

Why did you use -F? What warning did you need to override? IMO -F is useful only when running mke2fs non-interactively in a script (and its author knows what they are doing).
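
For illustration, a hypothetical unattended provisioning snippet is about the only place it belongs - there, -F suppresses the confirmation prompt mke2fs would otherwise wait on:
Code: Select all
# hypothetical scripted format: -F answers mke2fs's "Proceed anyway? (y,N)" prompt
mkfs.ext4 -F -L data /dev/mdN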

PlunderingPirate9000 wrote:After rebooting the array device is still 127 - do I need to run update-initramfs -u to fix this?

You probably need to update the initramfs if the /etc/mdadm/mdadm.conf embedded in the initramfs (which may differ from the one in the root filesystem) contains an outdated, conflicting ARRAY definition with the same number and a different UUID.
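
You can check this directly; on Debian, lsinitramfs and unmkinitramfs (from initramfs-tools-core) will show the embedded copy - the /tmp path below is just an example:
Code: Select all
# confirm an mdadm.conf is embedded in the current initramfs
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf

# extract it and compare with the copy in /etc
unmkinitramfs /boot/initrd.img-$(uname -r) /tmp/initrd
diff /tmp/initrd/main/etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf
# (drop "main/" if the initramfs has no early-microcode section)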

PlunderingPirate9000 wrote:I also did a run of dd if=/dev/zero of=... to zero the entire disks sda/sdb before recreating this array. This may have helped?

Yes, but overwriting whole disks is not necessary and takes a lot of time. Erasing only the first few MB should be enough. Also, with flash storage (SSD, SD card, USB flash drive) it causes unnecessary wear. By comparison, wipefs overwrites only a few blocks.
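
Something along these lines would do (sdX is a placeholder - triple-check the target before pointing dd at it):
Code: Select all
# zero only the first 8 MiB, where the common signatures live, instead of the whole drive
dd if=/dev/zero of=/dev/sdX bs=1M count=8 conv=fsync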
p.H
 
Posts: 1521
Joined: 2017-09-17 07:12
