MacMini RAID1 ESP UEFI (SOLVED)

New to Debian (Or Linux in general)? Ask your questions here!
Deekee
Posts: 91
Joined: 2022-07-02 17:50
Has thanked: 5 times
Been thanked: 3 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#41 Post by Deekee »

I applied all the changes and the system is still booting :-) but no matter how I set the boot order via efibootmgr, it always mounts sdb1.

I'm probably missing something here again ...

How can I find out from which partition the system is currently booting?

If I grep dmesg I just see the UUID of the RAID md0

Code: Select all

# dmesg | grep "BOOT_IMAGE"
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.10.0-18-amd64 root=UUID=d2c78252-cc96-4e19-a8fc-426038c3465a ro quiet
[    0.049057] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.10.0-18-amd64 root=UUID=d2c78252-cc96-4e19-a8fc-426038c3465a ro quiet
[    1.263831]     BOOT_IMAGE=/boot/vmlinuz-5.10.0-18-amd64

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#42 Post by p.H »

Deekee wrote: 2022-09-21 13:30 Do I just need to set one of the above, so changing the UUID will do?
Yes, you only need to set the one that you will use in fstab, whatever you choose.
Deekee wrote: 2022-09-21 13:30 Setting the UUID on the second disk is basically straightforward
You could do it more easily with "fatlabel" (the UUID is called "volume ID") from the package "dosfstools".
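A minimal sketch of this suggestion, assuming dosfstools 4.2 or newer (where fatlabel gained the -i/--volume-id option); the device and ID value below are placeholders:

```shell
# show the current volume ID (serial number) of the ESP on the second disk
fatlabel -i /dev/sdb1
# set it to match the first ESP's volume ID (8 hex digits; placeholder value)
fatlabel -i /dev/sdb1 12AB34CD
```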
Deekee wrote: 2022-09-21 13:30 What I don't get is what the entries Boot0080 and BootFFFF are for?
I guess they were automatically generated from the "removable media path" of some previous EFI partition on either SATA disk.
Deekee wrote: 2022-09-21 16:16 no matter how I set the boot order via efibootmgr, it always mounts sdb1.
Maybe I did not make it clear enough in my previous post that there is no relationship at all between the booted EFI partition and the mounted EFI partition (and conversely).
Also sda and sdb may not be the same disk at each boot.
Deekee wrote: 2022-09-21 16:16 How can I find out from which partition the system is currently booting?
efibootmgr shows the booted entry number in BootCurrent. With the PARTUUID you can find which partition it is.
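For example (the partition table below is illustrative; match the PARTUUID from the booted entry by eye):

```shell
# BootCurrent is the number of the entry the firmware actually booted
efibootmgr -v | grep BootCurrent
# the verbose Boot00XX line for that entry contains a GPT PARTUUID;
# compare it against the partitions on both disks:
lsblk -o NAME,PARTUUID
```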
Deekee wrote: 2022-09-21 16:16 If I grep dmesg I just see the UUID of the RAID md0
The kernel command line shows the root filesystem which is not related with the booted partition in any way.
I guess you could set a GRUB variable with different values in /EFI/debian/grub.cfg of each EFI partition and insert this variable in GRUB_CMDLINE_LINUX in /etc/default/grub so that its value appears as a dummy parameter in the kernel command line.

Segfault
Posts: 993
Joined: 2005-09-24 12:24
Has thanked: 5 times
Been thanked: 17 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#43 Post by Segfault »

Actually, the EFI partition does not need to be mounted at all for normal operation. Being FAT, it is even better left unmounted: it won't be damaged in case of a computer crash or power outage.
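One way to follow that advice is a noauto entry in /etc/fstab (the UUID is a placeholder), so the ESP is only mounted by hand when it is actually needed, e.g. before running grub-install:

```shell
# /etc/fstab entry — noauto keeps the ESP unmounted during normal operation:
#   UUID=XXXX-XXXX  /boot/efi  vfat  noauto,umask=0077  0  1
# mount it manually only when needed:
mount /boot/efi
```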

Deekee
Posts: 91
Joined: 2022-07-02 17:50
Has thanked: 5 times
Been thanked: 3 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#44 Post by Deekee »

So, I started all over again, and this time I also added a swap partition.

After checking the RAID I've seen that the swap partition hasn't synced yet!

Code: Select all

# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sdb3[0] sda3[1]
      960027648 blocks super 1.2 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md0 : active (auto-read-only) raid1 sda2[1] sdb2[0]
      15616000 blocks super 1.2 [2/2] [UU]
      	resync=PENDING
      
unused devices: <none>

Code: Select all

# mdadm --query --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Sep 30 16:54:21 2022
        Raid Level : raid1
        Array Size : 15616000 (14.89 GiB 15.99 GB)
     Used Dev Size : 15616000 (14.89 GiB 15.99 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Sep 30 17:49:09 2022
             State : clean, resyncing (PENDING) 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : macmini:0  (local to host macmini)
              UUID : a341bb13:f7a9576a:3f2828cc:dd8f2f92
            Events : 15

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8        2        1      active sync   /dev/sda2
The root partition seems to be just fine!

Code: Select all

# mdadm --query --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri Sep 30 16:57:46 2022
        Raid Level : raid1
        Array Size : 960027648 (915.55 GiB 983.07 GB)
     Used Dev Size : 960027648 (915.55 GiB 983.07 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Oct  1 14:48:34 2022
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : macmini:1  (local to host macmini)
              UUID : e654aa42:ae092ddb:7d4c9fa6:086fb54c
            Events : 1013

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8        3        1      active sync   /dev/sda3
... so I'm wondering what I'm missing here?

Deekee
Posts: 91
Joined: 2022-07-02 17:50
Has thanked: 5 times
Been thanked: 3 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#45 Post by Deekee »

md0 was in auto-read-only mode!

Code: Select all

# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid1 sdb2[0] sda2[1]
      15616000 blocks super 1.2 [2/2] [UU]
      	resync=PENDING
      
md1 : active raid1 sdb3[0] sda3[1]
      960027648 blocks super 1.2 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>
So I executed the following command to switch the array to read-write state.

Code: Select all

# mdadm --readwrite /dev/md0
.. and it began the resync process immediately.

Code: Select all

# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb2[0] sda2[1]
      15616000 blocks super 1.2 [2/2] [UU]
      [=====>...............]  resync = 25.5% (3991040/15616000) finish=0.9min speed=200533K/sec
      
md1 : active raid1 sdb3[0] sda3[1]
      960027648 blocks super 1.2 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk

unused devices: <none>
... and after a short while it was fully synced.

Code: Select all

# mdadm --query --detail /dev/md?
/dev/md0:
           Version : 1.2
     Creation Time : Fri Sep 30 16:54:21 2022
        Raid Level : raid1
        Array Size : 15616000 (14.89 GiB 15.99 GB)
     Used Dev Size : 15616000 (14.89 GiB 15.99 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Oct  1 18:15:51 2022
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : macmini:0  (local to host macmini)
              UUID : a341bb13:f7a9576a:3f2828cc:dd8f2f92
            Events : 30

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8        2        1      active sync   /dev/sda2
/dev/md1:
           Version : 1.2
     Creation Time : Fri Sep 30 16:57:46 2022
        Raid Level : raid1
        Array Size : 960027648 (915.55 GiB 983.07 GB)
     Used Dev Size : 960027648 (915.55 GiB 983.07 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Oct  1 18:47:32 2022
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : macmini:1  (local to host macmini)
              UUID : e654aa42:ae092ddb:7d4c9fa6:086fb54c
            Events : 1013

    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8        3        1      active sync   /dev/sda3
What is bugging me now, and what I don't understand, is:
  • md0 has Consistency Policy : resync
    md1 has Consistency Policy : bitmap
The question is: is this a potential problem or not? Please elaborate a little, as I could not find an answer by googling.

Have all a nice weekend!

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#46 Post by p.H »

Deekee wrote: 2022-10-01 12:55 the swap partition hasn't synched yet
Because the array is started in auto-read-only mode until the first write occurs. Then the resync can begin.
Deekee wrote: 2022-10-01 17:02 md0 has Consistency Policy : resync
md1 has Consistency Policy : bitmap
md0 has no bitmap so can do only full resync. As mentioned in mdadm(8), an internal bitmap is automatically added only when creating an array bigger than 100 GiB. See --bitmap= and --consistency-policy=.
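If one wanted md0 to use a bitmap anyway, it can be added to the existing array (though for a small swap array a full resync is cheap, so this is optional):

```shell
# add an internal write-intent bitmap to the existing array
mdadm --grow --bitmap=internal /dev/md0
# confirm the consistency policy changed
mdadm --detail /dev/md0 | grep -i 'consistency\|bitmap'
```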

Deekee
Posts: 91
Joined: 2022-07-02 17:50
Has thanked: 5 times
Been thanked: 3 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#47 Post by Deekee »

Nearly finished I guess :-)

I configured mdadm for e-mail notifications and ran the following simple test to ensure e-mail notifications are working:

Code: Select all

# mdadm --monitor --scan --test --oneshot /dev/md[[:digit:]]*
... and it nicely delivered two e-mails into my inbox, with the results from /proc/mdstat.

I would like to have a test e-mail on startup, so I added "--test" to the following line in the /etc/default/mdadm options file.

Code: Select all

# DAEMON_OPTIONS:
#   additional options to pass to the daemon.
DAEMON_OPTIONS="--syslog --test"
The mdadm monitor process is actually running:

Code: Select all

# ps aux | grep mdadm
root         481  0.0  0.0   3196  2316 ?        Ss   17:26   0:00 /sbin/mdadm --monitor --scan
But I don't get an e-mail when I restart the system, and I don't understand why --test hasn't been added to the monitor process.

Again, as always, many thanks in advance for any hint.

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#48 Post by p.H »

/etc/default/mdadm was used by the SysV initscript /etc/init.d/mdadm, but I don't think the native mdmonitor systemd unit uses it.
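Under systemd, a drop-in override for the mdmonitor unit would be one way to get the same effect; this is a sketch, not a tested recipe, and the mdadm path is assumed from the initscript above:

```shell
# opens an editor for /etc/systemd/system/mdmonitor.service.d/override.conf
systemctl edit mdmonitor
# add there, to clear and replace the unit's original ExecStart:
#   [Service]
#   ExecStart=
#   ExecStart=/sbin/mdadm --monitor --scan --syslog --test
systemctl restart mdmonitor
```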

Deekee
Posts: 91
Joined: 2022-07-02 17:50
Has thanked: 5 times
Been thanked: 3 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#49 Post by Deekee »

Yeah, you're absolutely right, it's even stated in line 4 of /etc/init.d/mdadm.

Those who can read are at a clear advantage :lol:

Code: Select all

# cat /etc/init.d/mdadm
#!/bin/sh
#
# Start the MD monitor daemon for all active MD arrays if desired.
# This script is not used under systemd.
#
# Copyright © 2001-2005 Mario Joußen <joussen@debian.org>
# Copyright © 2005-2009 Martin F. Krafft <madduck@debian.org>
# Distributable under the terms of the GNU GPL version 2.
#
### BEGIN INIT INFO
# Provides:          mdadm
# Required-Start:    $local_fs $syslog
# Required-Stop:     $local_fs $syslog sendsigs 
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: MD monitoring daemon
# Description:       mdadm provides a monitor mode, in which it will scan for
#                    problems with the MD devices. If a problem is found, the
#                    administrator is alerted via email, or a custom script is
#                    run.
### END INIT INFO
#
set -eu

MDADM=/sbin/mdadm
MDMON=/sbin/mdmon
RUNDIR=/run/mdadm
PIDFILE=$RUNDIR/monitor.pid
DEBIANCONFIG=/etc/default/mdadm

test -x "$MDADM" || exit 0

test -f /proc/mdstat || exit 0

START_DAEMON=true
test -f $DEBIANCONFIG && . $DEBIANCONFIG

. /lib/lsb/init-functions

is_true()
{
  case "${1:-}" in
    [Yy]es|[Yy]|1|[Tt]|[Tt]rue) return 0;;
    *) return 1;
  esac
}

case "${1:-}" in
  start)
    if [ -x /usr/bin/systemd-detect-virt ] && /usr/bin/systemd-detect-virt --quiet --container; then
      log_daemon_msg "Not starting MD monitoring service in container"
      log_end_msg 0
      exit 0
    fi

    if is_true $START_DAEMON; then
      log_daemon_msg "Starting MD monitoring service" "mdadm --monitor"
      mkdir -p $RUNDIR
      set +e
      start-stop-daemon -S -p $PIDFILE -x $MDADM -- \
        --monitor --pid-file $PIDFILE --daemonise --scan ${DAEMON_OPTIONS:-}
      log_end_msg $?
      set -e
    fi
    if [ "$(echo $RUNDIR/md[0-9]*.pid)" != "$RUNDIR/md[0-9]*.pid" ]; then
      log_daemon_msg "Restarting MD external metadata monitor" "mdmon --takeover --all"
      set +e
      $MDMON --takeover --all
      log_end_msg $?
      set -e
    fi
    ;;
  stop)
    if [ -f $PIDFILE ] ; then
      log_daemon_msg "Stopping MD monitoring service" "mdadm --monitor"
      set +e
      start-stop-daemon -K -p $PIDFILE -x $MDADM
      rm -f $PIDFILE
      log_end_msg $?
      set -e
    fi
    for file in $RUNDIR/md[0-9]*.pid ; do
      [ ! -f "$file" ] && continue
      ln -sf $file /run/sendsigs.omit.d/mdmon-${file##*/}
    done
    ;;
  status)
    status_of_proc -p $PIDFILE "$MDADM" "mdadm" && exit 0 || exit $?
    ;;
  restart|reload|force-reload)
    ${0:-} stop
    ${0:-} start
    ;;
  *)
    echo "Usage: ${0:-} {start|stop|status|restart|reload|force-reload}" >&2
    exit 1
    ;;
esac

exit 0

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#50 Post by p.H »

That /etc/default/mdadm is used by /etc/init.d/mdadm, and that /etc/init.d/mdadm is not used by systemd, does not imply that /etc/default/mdadm is not used by any systemd unit.

I meant that I did not think that the native mdmonitor systemd unit used /etc/default/mdadm because

Code: Select all

systemctl cat mdmonitor mdmonitor-oneshot
does not report any mention of this file. mdmonitor-oneshot uses /usr/lib/mdadm/mdadm_env.sh, but this file does not exist in my installation and no package seems to provide it.

Deekee
Posts: 91
Joined: 2022-07-02 17:50
Has thanked: 5 times
Been thanked: 3 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#51 Post by Deekee »

What would be the correct way to update GRUB if I made some changes to /etc/default/grub when running a RAID1?

Normally I do ...

Code: Select all

update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.10.0-18-amd64
Found initrd image: /boot/initrd.img-5.10.0-18-amd64
Warning: os-prober will be executed to detect other bootable partitions.
Its output will be used to detect bootable binaries on them and create new boot entries.
done

Code: Select all

lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0 931.5G  0 disk  
├─sda1    8:1    0   953M  0 part  /boot/efi
├─sda2    8:2    0  14.9G  0 part  
│ └─md0   9:0    0  14.9G  0 raid1 [SWAP]
└─sda3    8:3    0 915.7G  0 part  
  └─md1   9:1    0 915.6G  0 raid1 /
sdb       8:16   0 931.5G  0 disk  
├─sdb1    8:17   0   953M  0 part  /otherboot
├─sdb2    8:18   0  14.9G  0 part  
│ └─md0   9:0    0  14.9G  0 raid1 [SWAP]
└─sdb3    8:19   0 915.7G  0 part  
  └─md1   9:1    0 915.6G  0 raid1 /
If I check grub.cfg on the current boot disk (sda1)

Code: Select all

cat /boot/efi/EFI/debian/grub.cfg
search.fs_uuid bc7bbf9f-6266-434a-a367-ff1df05dcfe2 root mduuid/e654aa42ae092ddb7d4c9fa6086fb54c 
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
It points to the RAID

Code: Select all

lsblk -o +UUID /dev/md1
NAME MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT UUID
md1    9:1    0 915.6G  0 raid1 /          bc7bbf9f-6266-434a-a367-ff1df05dcfe2
If I check grub.cfg on the alternate boot disk (sdb1), which I mounted temporarily as /otherboot, it points to the same.

Code: Select all

cat /otherboot/EFI/debian/grub.cfg
search.fs_uuid bc7bbf9f-6266-434a-a367-ff1df05dcfe2 root mduuid/e654aa42ae092ddb7d4c9fa6086fb54c 
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
Does this mean that a simple update-grub will do?

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#52 Post by p.H »

Deekee wrote: 2022-10-07 10:51 What would be the correct way to update GRUB if I made some changes to /etc/default/grub when running a RAID1
It depends on which options you changed in /etc/default/grub. Most options only affect the outcome of update-grub (/boot/grub/grub.cfg), but a few options such as GRUB_DISTRIBUTOR and GRUB_ENABLE_CRYPTODISK also affect the outcome of grub-install (subdirectory name in /boot/efi/EFI, EFI boot entry name, inclusion of crypto drivers in the unsigned core image).

Note that /boot/efi/EFI/debian/grub.cfg is generated by grub-install, not update-grub.
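In practice that means two distinct commands, run as needed (the --efi-directory value assumes the ESP is mounted at /boot/efi, as in this thread):

```shell
# regenerate /boot/grub/grub.cfg after editing /etc/default/grub
update-grub
# only needed when options affecting the EFI image or boot entry changed
grub-install --target=x86_64-efi --efi-directory=/boot/efi
```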

Also note that in

Code: Select all

search.fs_uuid bc7bbf9f-6266-434a-a367-ff1df05dcfe2 root mduuid/e654aa42ae092ddb7d4c9fa6086fb54c
bc7bbf9f-6266-434a-a367-ff1df05dcfe2 is the UUID of the filesystem inside /dev/md1 as shown by
Deekee wrote: 2022-10-07 10:51 lsblk -o +UUID /dev/md1
whereas e654aa42ae092ddb7d4c9fa6086fb54c is the UUID of the RAID array itself, as would be shown by lsblk on its components /dev/sda3 and /dev/sdb3.
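Both UUIDs can be seen side by side (device names as in this thread):

```shell
# the filesystem UUID lives on md1; the RAID UUID lives on the member partitions
lsblk -o NAME,TYPE,UUID /dev/md1 /dev/sda3 /dev/sdb3
```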

Deekee
Posts: 91
Joined: 2022-07-02 17:50
Has thanked: 5 times
Been thanked: 3 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#53 Post by Deekee »

I just changed the following:

Code: Select all

GRUB_CMDLINE_LINUX_DEFAULT="quiet intremap=off"
So I think

Code: Select all

sudo update-grub
will do!

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: MacMini RAID1 ESP UEFI (SOLVED)

#54 Post by p.H »

Yes. GRUB_CMDLINE_LINUX_DEFAULT is used only by grub-mkconfig/update-grub.
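After running update-grub and rebooting, the new parameter can be verified from the running system:

```shell
# the parameter should now appear in the kernel command line
cat /proc/cmdline
grep -o 'intremap=off' /proc/cmdline
```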
