Simple RAID1 setup with Debian 10.6

Ask for help with issues regarding the installation of the Debian OS.
Fondor1
Posts: 4
Joined: 2020-11-28 06:55

Simple RAID1 setup with Debian 10.6

#1 Post by Fondor1 »

I am attempting what I believe should be a straightforward RAID1 fresh install using debian-live-10.6.0-i386-xfce from a USB drive onto two 3 TB drives. The partitions are set up as shown in the image below (sorry for the phone photo, there is no web access on this PC yet). The install appears to complete fine, and GRUB seems to install successfully to the MBR of both sda and sdb with

Code: Select all

grub-install /dev/sdx
Upon reboot, I get dumped to the grub rescue console with the note

Code: Select all

error:disk mduuid/xxxxx... not found
How do I go about troubleshooting this? I booted into the rescue console and viewed grub.cfg; the UUID in the menu entry appears to match that of the /dev/sdX RAID1 partitions when viewed with blkid. Any idea what I can try next?

[Image: partition layout]

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Simple RAID1 setup with Debian 10.6

#2 Post by p.H »

A possible cause is a flawed BIOS which cannot read beyond 2 TiB (or some other limit). A workaround is to create the partition/array containing /boot within that limit.

What does "ls" show at the grub rescue prompt ? It should show the drives (hd<number>), partitions (hd<number>,gpt<number>), and RAID arrays (md/<number>)
Fondor1 wrote:I appear to install grub successfully to the MBR of both sda and sdb with

Code: Select all

grub-install /dev/sdx
Why didn't you just specify both drives in the installer?

Note: you should not give the same name to multiple partitions. Partition names, labels, and UUIDs are expected to be unique.

Fondor1
Posts: 4
Joined: 2020-11-28 06:55

Re: Simple RAID1 setup with Debian 10.6

#3 Post by Fondor1 »

What does "ls" show at the grub rescue prompt ?
ls shows each of the physical drives and partitions as expected, but no RAID arrays appear. This makes me think perhaps it's not loading the right raid modules, so I will go check that.
p.H wrote:Why didn't you just specify both drives in the installer?
The installer only gives the option to install to the MBR of one drive (and it recommended the first drive). I swapped over to another console and installed it on the second drive simultaneously with grub-install.

Good point about giving partitions the same name. I've started over with a fresh install, and each partition is now named uniquely. That said, the UUID assigned to the RAID partitions sda1/sdb1 is not something I picked; it was assigned during the install. The PARTUUID and UUID_SUB are unique for each partition, but the UUID is the same for every partition that is part of the same RAID array. Is that desired behavior? Looking at another Debian system I set up many years ago, this appears to be the case there too, but that older system is laid out differently (the boot partition is not on the RAID).

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Simple RAID1 setup with Debian 10.6

#4 Post by p.H »

Fondor1 wrote:perhaps it's not loading the right raid modules
The core image built by grub-mkimage (called by grub-install) must include all modules needed to access /boot/grub (biosdisk, part_gpt, mdraid1x, ext2 or other filesystem used by /boot/grub...). grub-install should be smart enough to detect which modules are required, but you can include extra modules with --modules=<list>.
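For example, something like this would force the relevant modules into the core image (a sketch only; adjust the module list and devices to your layout):

Code: Select all

grub-install --modules="biosdisk part_gpt diskfilter mdraid1x ext2" /dev/sda
grub-install --modules="biosdisk part_gpt diskfilter mdraid1x ext2" /dev/sdb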
Fondor1 wrote:The installer only gives the option to install to the MBR of one drive
No, the installer also offers the option to specify an arbitrary list of devices.
Fondor1 wrote:The PARTUUID and UUID_SUB are unique for each partition, but the UUID is the same for every partition that is part of the same RAID array. Is that desired behavior?
Yes. PARTUUID is the partition UUID. UUID is the RAID array UUID. UUID_SUB is the RAID member UUID.
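You can cross-check this with mdadm (illustrative commands; the device names are examples):

Code: Select all

# array UUID (the value blkid reports as UUID= on every member)
mdadm --detail /dev/md2 | grep UUID
# per-member Device UUID (blkid's UUID_SUB=) plus the shared Array UUID
mdadm --examine /dev/sda2 | grep -E 'Array UUID|Device UUID'
mdadm --examine /dev/sdb2 | grep -E 'Array UUID|Device UUID'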

Fondor1
Posts: 4
Joined: 2020-11-28 06:55

Re: Simple RAID1 setup with Debian 10.6

#5 Post by Fondor1 »

I attempted the re-installation with a smaller partition size (1 TB) to see if it was a motherboard limitation. No change in behavior; I still get a GRUB error on mduuid/1352998c58b95ad218f3f10d6fbe3974. I booted into the system with Debian rescue mode to investigate further. Below are the output of blkid and the contents of mdadm.conf, grub.cfg, and /proc/mdstat:

blkid:

Code: Select all

/dev/sda2: UUID="1352998c-58b9-5ad2-18f3-f10d6fbe3974" UUID_SUB="d9a19caf-0071-c774-093f-76fefdb6c6f8" LABEL="weber2:2" TYPE="linux_raid_member" PARTLABEL="RAIDA" PARTUUID="ab3b6e9d-a607-47e5-9371-01a0b4f564b5"
/dev/sda3: UUID="5a9732e5-cdd2-09e6-31fa-e8d362825b7f" UUID_SUB="167f5b6a-e9f0-fbf5-cc0a-14e7387d2931" LABEL="weber2:1" TYPE="linux_raid_member" PARTLABEL="SWAPA" PARTUUID="46eecfff-fddd-4fc4-a755-143ccf5c2236"
/dev/sdb2: UUID="1352998c-58b9-5ad2-18f3-f10d6fbe3974" UUID_SUB="a7709e3e-797b-16f9-73f4-245dd483eb13" LABEL="weber2:2" TYPE="linux_raid_member" PARTLABEL="RAIDB" PARTUUID="2d61cce7-ad4f-4416-842a-2356bc8f4d5b"
/dev/sdb3: UUID="5a9732e5-cdd2-09e6-31fa-e8d362825b7f" UUID_SUB="a232bc4f-4e52-69b0-847d-01fd6009cadf" LABEL="weber2:1" TYPE="linux_raid_member" PARTLABEL="SWAPB" PARTUUID="bfbaa5cc-0679-4a48-a1f9-01da594fcbf5"
/dev/sdc1: UUID="2020-09-26-11-45-40-00" LABEL="d-live 10.6.0 xf i386" TYPE="iso9660" PTUUID="3204e694" PTTYPE="dos" PARTUUID="3204e694-01"
/dev/sdc2: SEC_TYPE="msdos" UUID="DEB0-0001" TYPE="vfat" PARTUUID="3204e694-02"
/dev/md2: LABEL="RAIDFS" UUID="0433b683-fffe-44ba-a43d-294e3f13a9d5" TYPE="ext4"
/dev/md1: UUID="e64c19bd-ec24-47b7-93b5-c8230373d1f3" TYPE="swap"
/dev/sda1: PARTUUID="c02b21ab-3ca8-4bcd-b7db-e1d4a3710095"
/dev/sdb1: PARTUUID="74c37abe-091e-4def-bd20-55be2c762aaf"
/dev/sdh1: UUID="AB45-CF31" TYPE="vfat" PARTUUID="90909090-01"
/etc/mdadm/mdadm.conf

Code: Select all

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/2  metadata=1.2 UUID=1352998c:58b95ad2:18f3f10d:6fbe3974 name=weber2:2
ARRAY /dev/md/1  metadata=1.2 UUID=5a9732e5:cdd209e6:31fae8d3:62825b7f name=weber2:1

# This configuration was auto-generated on Sun, 01 Jan 2006 23:33:00 -0700 by mkconf
/boot/grub/grub.cfg

Code: Select all

#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=a
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if [ x$feature_default_font_path = xy ] ; then
   font=unicode
else
insmod part_gpt
insmod part_gpt
insmod diskfilter
insmod mdraid1x
insmod ext2
set root='mduuid/1352998c58b95ad218f3f10d6fbe3974'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint='mduuid/1352998c58b95ad218f3f10d6fbe3974'  0433b683-fffe-44ba-a43d-294e3f13a9d5
else
  search --no-floppy --fs-uuid --set=root 0433b683-fffe-44ba-a43d-294e3f13a9d5
fi
    font="/usr/share/grub/unicode.pf2"
fi

if loadfont $font ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_US
  insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
  set timeout=30
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
  # Fallback normal timeout code in case the timeout_style feature is
  # unavailable.
  else
    set timeout=5
  fi
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
insmod part_gpt
insmod part_gpt
insmod diskfilter
insmod mdraid1x
insmod ext2
set root='mduuid/1352998c58b95ad218f3f10d6fbe3974'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint='mduuid/1352998c58b95ad218f3f10d6fbe3974'  0433b683-fffe-44ba-a43d-294e3f13a9d5
else
  search --no-floppy --fs-uuid --set=root 0433b683-fffe-44ba-a43d-294e3f13a9d5
fi
insmod png
if background_image /usr/share/desktop-base/futureprototype-theme/grub/grub-4x3.png; then
  set color_normal=white/black
  set color_highlight=black/white
else
  set menu_color_normal=cyan/blue
  set menu_color_highlight=white/blue
fi
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
	set gfxpayload="${1}"
}
set linux_gfx_mode=
export linux_gfx_mode
menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-0433b683-fffe-44ba-a43d-294e3f13a9d5' {
	load_video
	insmod gzio
	if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
	insmod part_gpt
	insmod part_gpt
	insmod diskfilter
	insmod mdraid1x
	insmod ext2
	set root='mduuid/1352998c58b95ad218f3f10d6fbe3974'
	if [ x$feature_platform_search_hint = xy ]; then
	  search --no-floppy --fs-uuid --set=root --hint='mduuid/1352998c58b95ad218f3f10d6fbe3974'  0433b683-fffe-44ba-a43d-294e3f13a9d5
	else
	  search --no-floppy --fs-uuid --set=root 0433b683-fffe-44ba-a43d-294e3f13a9d5
	fi
	echo	'Loading Linux 4.19.0-11-686 ...'
	linux	/boot/vmlinuz-4.19.0-11-686 root=UUID=0433b683-fffe-44ba-a43d-294e3f13a9d5 ro  quiet
	echo	'Loading initial ramdisk ...'
	initrd	/boot/initrd.img-4.19.0-11-686
}
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-0433b683-fffe-44ba-a43d-294e3f13a9d5' {
	menuentry 'Debian GNU/Linux, with Linux 4.19.0-11-686' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.19.0-11-686-advanced-0433b683-fffe-44ba-a43d-294e3f13a9d5' {
		load_video
		insmod gzio
		if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
		insmod part_gpt
		insmod part_gpt
		insmod diskfilter
		insmod mdraid1x
		insmod ext2
		set root='mduuid/1352998c58b95ad218f3f10d6fbe3974'
		if [ x$feature_platform_search_hint = xy ]; then
		  search --no-floppy --fs-uuid --set=root --hint='mduuid/1352998c58b95ad218f3f10d6fbe3974'  0433b683-fffe-44ba-a43d-294e3f13a9d5
		else
		  search --no-floppy --fs-uuid --set=root 0433b683-fffe-44ba-a43d-294e3f13a9d5
		fi
		echo	'Loading Linux 4.19.0-11-686 ...'
		linux	/boot/vmlinuz-4.19.0-11-686 root=UUID=0433b683-fffe-44ba-a43d-294e3f13a9d5 ro  quiet
		echo	'Loading initial ramdisk ...'
		initrd	/boot/initrd.img-4.19.0-11-686
	}
	menuentry 'Debian GNU/Linux, with Linux 4.19.0-11-686 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.19.0-11-686-recovery-0433b683-fffe-44ba-a43d-294e3f13a9d5' {
		load_video
		insmod gzio
		if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
		insmod part_gpt
		insmod part_gpt
		insmod diskfilter
		insmod mdraid1x
		insmod ext2
		set root='mduuid/1352998c58b95ad218f3f10d6fbe3974'
		if [ x$feature_platform_search_hint = xy ]; then
		  search --no-floppy --fs-uuid --set=root --hint='mduuid/1352998c58b95ad218f3f10d6fbe3974'  0433b683-fffe-44ba-a43d-294e3f13a9d5
		else
		  search --no-floppy --fs-uuid --set=root 0433b683-fffe-44ba-a43d-294e3f13a9d5
		fi
		echo	'Loading Linux 4.19.0-11-686 ...'
		linux	/boot/vmlinuz-4.19.0-11-686 root=UUID=0433b683-fffe-44ba-a43d-294e3f13a9d5 ro single 
		echo	'Loading initial ramdisk ...'
		initrd	/boot/initrd.img-4.19.0-11-686
	}
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f  ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
  source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
/proc/mdstat

Code: Select all

Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md1 : active raid1 sda3[0] sdb3[1]
      2026496 blocks super 1.2 [2/2] [UU]
      
md2 : active raid1 sda2[0] sdb2[1]
      976430080 blocks super 1.2 [2/2] [UU]
      [======>..............]  resync = 33.3% (325926400/976430080) finish=116.5min speed=93016K/sec
      bitmap: 6/8 pages [24KB], 65536KB chunk

unused devices: <none>
The fact that there is a resync in progress seems questionable to me, but perhaps this is normal? The system was configured with RAID1 during the installation process, so a resync operation suggests that data was not truly being mirrored during the install. Does the resync mean the RAID was not operational during the install?

After reboot, it dropped back to the grub rescue prompt. The RAID array still does not show up with "ls" at the grub prompt. Trying to list the partition contents with "ls (hd0,2)" individually fails, saying the contents are encrypted (they are not). Is that related to the resync operation above? Any suggestions on where to go from here? Thanks for your feedback so far!

Bloom
df -h | grep > 90TiB
Posts: 504
Joined: 2017-11-11 12:23
Been thanked: 26 times

Re: Simple RAID1 setup with Debian 10.6

#6 Post by Bloom »

A RAID array during resync is available. You can't see the RAID array with ls because it needs to be mounted first. In order to boot from it, it needs to be in /etc/fstab. Can you show us yours?
Don't put swap partitions in RAID. Just define the swap on both drives and don't make a RAID of that. Swap will handle it nicely itself.

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Simple RAID1 setup with Debian 10.6

#7 Post by p.H »

Fondor1 wrote:The fact that there is a resync in progress seems questionable to me, but perhaps this is normal?
Yes. There is an initial sync to synchronize all members in the array, unless the array was created with --assume-clean.
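For illustration only, this is roughly how an array would be created without that initial sync; it is not what the installer does, and the device names are just examples:

Code: Select all

# create a RAID1 array and skip the initial resync
mdadm --create /dev/md2 --level=1 --raid-devices=2 --assume-clean /dev/sda2 /dev/sdb2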
Bloom wrote:A RAID array during resync is available.
For Linux, yes. But GRUB may have limitations. I do not remember about arrays during resync, but I observed once that GRUB could not use an array with a missing member which was still declared as active in the other member superblocks.
Bloom wrote:You can't see the RAID array with ls because it needs to be mounted first. In order to boot from it, it needs to be in /etc/fstab.
Wrong. GRUB does not care about Linux mounts and fstab.
Bloom wrote:Don't put swap partitions in RAID. Just define the swap on both drives and don't make a RAID of that. Swap will handle it nicely itself.
Wrong. Independent swap areas are treated either like RAID linear (if different priorities, the default) or RAID 0 (if same priority), i.e. without any redundancy in both cases.
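For illustration, two plain swap areas with the same priority are striped by the kernel, i.e. RAID 0 behaviour. In /etc/fstab that would look roughly like this (the UUIDs are placeholders):

Code: Select all

# equal priorities => the kernel stripes across both swap areas (no redundancy)
UUID=11111111-2222-3333-4444-555555555551  none  swap  sw,pri=1  0  0
UUID=11111111-2222-3333-4444-555555555552  none  swap  sw,pri=1  0  0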
Fondor1 wrote:Trying to list the partition contents with "ls (hd0,2)" individually fails saying the contents are encrypted (they are not). Is that related to the resync operation above?
No, it is just because GRUB cannot find a known filesystem type on this partition, which is expected as it contains a RAID superblock, not a filesystem.

Can GRUB see the other array, whose sync is complete? You can wait until the resync is complete and try to boot again.

Independent note: I would rather not put the swap at the end of the drives, after a huge partition. That area has the slowest sequential speed, and likely the worst access time, because it is the farthest from the most used areas.

Bloom
df -h | grep > 90TiB
Posts: 504
Joined: 2017-11-11 12:23
Been thanked: 26 times

Re: Simple RAID1 setup with Debian 10.6

#8 Post by Bloom »

For booting, you NEED an entry in /etc/fstab. Reference either /dev/md2 (or whatever the array is called) or the UUID of the RAID array.

But since the original poster tried to see the contents with ls, that won't work unless the RAID array is mounted.

On the RAID array, there needs to be a partition (/dev/md2p1), and it must be formatted and contain a Debian installation. Only then can it be booted. I have seen RAID arrays boot fine without a partition, with the RAID volume formatted directly, but that is not the proper way to do it.

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Simple RAID1 setup with Debian 10.6

#9 Post by p.H »

Bloom wrote:For booting, you NEED an entry in /etc/fstab
Wrong. Just try it.
1) GRUB does not use /etc/fstab.
2) The kernel and initramfs do not use it either. They use the root= kernel parameter passed by GRUB to the kernel command line.
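For illustration, on a booted system you can see exactly what GRUB passed; the example line below is based on the grub.cfg posted above:

Code: Select all

cat /proc/cmdline
# BOOT_IMAGE=/boot/vmlinuz-4.19.0-11-686 root=UUID=0433b683-fffe-44ba-a43d-294e3f13a9d5 ro quiet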
Bloom wrote:But since the original poster tried to see the contents with ls, that won't work unless the RAID array is mounted.
Wrong. The OP is stuck in the grub rescue shell. GRUB does not mount filesystems. It just reads them.
Bloom wrote:On the RAID array, there needs to be a partition (/dev/md2p1)
Wrong. Linux RAID did not support partitioned arrays initially; that was added later. The standard is still to use LVM on top of unpartitioned arrays when you need multiple volumes. The Debian installer cannot create a partitioned array.

CwF
Global Moderator
Posts: 2681
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 196 times

Re: Simple RAID1 setup with Debian 10.6

#10 Post by CwF »

p.H wrote:
Bloom wrote:But since the original poster tried to see the contents with ls, that won't work unless the RAID array is mounted.
Wrong. The OP is stuck in the grub rescue shell. GRUB does not mount filesystems. It just reads them.
The modules likely need to be loaded (mdraid*.mod?) in grub rescue so it can understand the array, then point it at grub.cfg to execute, then run update-grub from within the OS. Just guessing, since I'd never use software RAID, but this is similar to when grub does not understand an encrypted volume... maybe... I've done that a few times: load luks, open a slot, enter the key, point it at /path/to/grub.cfg, and boom, we're in.
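Roughly what that would look like at the rescue prompt, if the array were visible at all (a sketch only; the device name depends on what "ls" actually reports):

Code: Select all

grub rescue> set root=(md/2)
grub rescue> set prefix=(md/2)/boot/grub
grub rescue> insmod normal
grub rescue> normal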

Fondor1
Posts: 4
Joined: 2020-11-28 06:55

Re: Simple RAID1 setup with Debian 10.6

#11 Post by Fondor1 »

I waited until the RAID had finished syncing and rebooted. No change in behavior; it still dropped to the GRUB rescue prompt. I loaded the mdraid1x module and listed the drives, hoping the RAID would be recognized, but it was not:

[Image: grub rescue "ls" output]

I am a bit baffled that this is not an issue others have experienced before, considering this is a really rudimentary setup. Is there any chance there is still a hardware limitation? What reasons might there be for GRUB not to recognize the RAID even after loading the mdraid1x module?

Thanks for the recommendation regarding swap, p.H. I will put swap at the front end of the disk when we finally solve this issue and I reformat to use the whole disk.

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Simple RAID1 setup with Debian 10.6

#12 Post by p.H »

GRUB modules are in /boot/grub, which is in the RAID array. GRUB cannot load modules if /boot/grub is unreachable.
mdraid1x is already included in the core image, so insmod does not do anything.
If you want to be sure about a BIOS/hardware limitation, you can test with the following layouts (a rough sketch of layout a) is at the end of this post):
a) a very small array (1 GB) with member partitions at the beginning of the drives, holding /boot
b) a plain partition (no RAID) at the end of a drive, holding /boot

If a) works and b) does not work, it is a BIOS/hardware limitation.
If a) does not work and b) works, it is not a hardware/BIOS limitation.
In the other two cases, no clue.
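A rough sketch of layout a), assuming the small members end up as sda2/sdb2 and slot 1 stays the BIOS boot partition; the installer's partitioner can create the same thing:

Code: Select all

# ~1 GB RAID1 for /boot, member partitions at the start of both drives (example devices)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md0
# use /dev/md0 as /boot during the install; everything else stays as before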
