How to migrate disk drives from Legacy BIOS to UEFI

Ask for help with issues regarding the Installations of the Debian O/S.
gldickens3
Posts: 10
Joined: 2013-10-29 19:04

How to migrate disk drives from Legacy BIOS to UEFI

#1 Post by gldickens3 »

Hello everybody,

I have a Debian server that has been online and running since 2013 using Legacy BIOS, and I want to migrate it to a new motherboard. The new motherboard is a SUPERMICRO MBD-X12SAE-5-O ATX server motherboard, which does not support Legacy BIOS.

My current server is running Linux software RAID1 with two drives in the RAID1 array and I therefore need to migrate these disk drives from my current Legacy BIOS to UEFI on the new motherboard. Is anyone aware of any Debian specific instructions for migrating disk drives from Legacy BIOS to UEFI?

I have found a Red Hat-specific set of instructions here: Move your Linux from legacy BIOS to UEFI in place with minimal downtime. Does anybody know whether these Red Hat instructions would work with Debian? Otherwise, please recommend instructions for migrating these disk drives under Debian. I would also like instructions for doing the migration with drives that are running RAID1.

Any and all input will be greatly appreciated.

Thanks,

Gordon

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: How to migrate disk drives from Legacy BIOS to UEFI

#2 Post by p.H »

Each case is specific. Can you post the current disk partition tables?

gldickens3
Posts: 10
Joined: 2013-10-29 19:04

Re: How to migrate disk drives from Legacy BIOS to UEFI

#3 Post by gldickens3 »

Hi p.H,

Thanks very much for your reply and help. Here are my partitions as listed by "parted":

# parted /dev/sda
GNU Parted 3.4
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA WDC WD1003FZEX-0 (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system     Flags
 1      1048kB  16.0GB  16.0GB  extended                  boot
 5      1049kB  16.0GB  16.0GB  logical   linux-swap(v1)  raid
 2      16.0GB  18.0GB  2000MB  primary   ext4            raid
 3      18.0GB  1000GB  982GB   primary   ext4            raid

/dev/sdb is exactly the same since it is in a RAID1 array with /dev/sda.

Here are my partitions in my RAID1 array:

# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[0] sdb3[1]
      959051776 blocks super 1.2 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk

md1 : active raid1 sda2[0] sdb2[1]
      1950720 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda5[0] sdb5[1]
      15614976 blocks super 1.2 [2/2] [UU]

Please let me know your advice and recommendations for migrating these disk drives from Legacy BIOS to UEFI.

Thanks,

Gordon

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: How to migrate disk drives from Legacy BIOS to UEFI

#4 Post by p.H »

No free space to create an EFI system partition, so you will need to shrink/remove an existing partition.
What is the use of the RAID arrays? Can you post the output of "lsblk"?

Edit:
The basic operations to migrate to UEFI boot are quite simple:
- create an EFI system partition (ESP), formatted as FAT (at least 34 MB is recommended so it can be formatted as FAT32)
- mount it on /boot/efi
- install grub-efi (during configuration, select "install into the removable media path")
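On a system that already has free space, those three steps might look like the following. This is only a sketch: it assumes a GPT-partitioned /dev/sda with free space around the 18 GB mark and that the new partition comes up as /dev/sda4 — adjust device and partition numbers for your own layout.

```shell
# Sketch only: /dev/sda, the 18.0-18.2GB range and partition number 4
# are assumptions -- check your own free space and numbering first.

# Create a ~200 MB ESP and flag it as an EFI system partition
parted /dev/sda mkpart ESP fat32 18.0GB 18.2GB
parted /dev/sda set 4 esp on

# Format it as FAT32 and mount it where GRUB expects it
mkfs.vfat -F 32 /dev/sda4
mkdir -p /boot/efi
mount /dev/sda4 /boot/efi

# Replace the BIOS GRUB with the EFI one; when debconf asks, choose
# "install into the removable media path"
apt install grub-efi-amd64
grub-install --target=x86_64-efi --efi-directory=/boot/efi
update-grub
```

Remember to add the new ESP to /etc/fstab so it is mounted on every boot.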

Things can get a bit more complicated when there is no free space to create an EFI partition so you need to shrink an existing partition. Things can get even more complicated when software RAID is involved for two reasons:
- shrinking a software RAID array is awkward; in some cases it may be simpler to delete and re-create it
- Debian does not support multiple EFI partitions for boot redundancy out of the box; this topic has already been discussed in other threads here.

gldickens3
Posts: 10
Joined: 2013-10-29 19:04

Re: How to migrate disk drives from Legacy BIOS to UEFI

#5 Post by gldickens3 »

Hi p.H,

I am using RAID1 for data redundancy in case of a drive hardware failure. That way, when a drive fails, I am then able to remove it from the RAID1 array and add a new drive to the array without any loss of data and minimal down time.

Here is the output from the "lsblk" command:

# lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    0 931.5G  0 disk
├─sda1      8:1    0     1K  0 part
├─sda2      8:2    0   1.9G  0 part
│ └─md1     9:1    0   1.9G  0 raid1 /boot
├─sda3      8:3    0 914.7G  0 part
│ └─md2     9:2    0 914.6G  0 raid1 /
└─sda5      8:5    0  14.9G  0 part
  └─md0     9:0    0  14.9G  0 raid1 [SWAP]
sdb         8:16   0 931.5G  0 disk
├─sdb1      8:17   0     1K  0 part
├─sdb2      8:18   0   1.9G  0 part
│ └─md1     9:1    0   1.9G  0 raid1 /boot
├─sdb3      8:19   0 914.7G  0 part
│ └─md2     9:2    0 914.6G  0 raid1 /
└─sdb5      8:21   0  14.9G  0 part
  └─md0     9:0    0  14.9G  0 raid1 [SWAP]

Please note that my boot partition is currently about 1.9 GB, so I could shrink it to create the needed extra space; however, that could be complicated in a RAID1 configuration. So, I think that I have two options:

Option 1
  • Delete the RAID1 array and choose one of the drives, such as sda, as my primary OS drive.
  • Shrink the boot partition on the selected primary OS drive to create extra space
  • Create the EFI partition from the extra space and install grub-efi as you described.
  • Recreate the RAID1 array with the one drive only.
  • Then, add the second drive back to the array.
Option 2

A second option would be to do a fresh Debian installation on new drives under UEFI and recover from an rsync image backup of the original file system from the Legacy installation (omitting the /boot partition) as follows:
  • Perform an rsync image backup of the entire root file system on md2 (/), omitting md1 (/boot).
  • Install new disk drives and install Debian from scratch with a new UEFI partitioning scheme.
  • Restore all files from the rsync backup, excluding /etc/fstab, which should be kept from the new UEFI installation.
  • The boot partition should not be restored; it should keep the files from the new UEFI Debian installation.
  • Reboot; if everything worked properly, the system should then be running under UEFI.
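The backup and restore sides of Option 2 could be sketched roughly as follows. The mount points /mnt/backup and /mnt/newroot are assumptions for illustration, not paths from this thread:

```shell
# Sketch only: /mnt/backup (backup disk) and /mnt/newroot (new root
# filesystem, mounted from a rescue environment) are assumed names.

# Back up the root filesystem, excluding /boot and the
# pseudo-filesystems that must not be copied
rsync -aAXH --numeric-ids \
    --exclude=/boot \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/run --exclude=/tmp --exclude=/mnt \
    / /mnt/backup/

# After the fresh UEFI installation, restore onto the new root,
# keeping the new system's fstab (and its /boot) untouched
rsync -aAXH --numeric-ids \
    --exclude=/etc/fstab \
    /mnt/backup/ /mnt/newroot/
```

-aAXH preserves permissions, ACLs, extended attributes and hard links; --numeric-ids avoids UID/GID remapping between the two systems.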
p.H, and anybody else who wants to contribute: which of these two methods sounds best to you? I think the rsync method with new drives may be easier and less risky than attempting to migrate the existing drives, not to mention that I have successfully done similar rsync migrations before, so I am relatively confident in that approach.

Thanks,

Gordon

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: How to migrate disk drives from Legacy BIOS to UEFI

#6 Post by p.H »

gldickens3 wrote: 2022-12-16 18:46 I am using RAID1 for data redundancy in case of a drive hardware failure.
Of course. What else would you use RAID1 for? That is not what I was asking. I wanted to know what the 3 RAID arrays were used for: /boot, / and swap respectively, as shown by lsblk.

May I ask why you created a separate RAID array for /boot? Did you fear (or know) that the BIOS was flawed and could not read the whole 1 TB drives? IME the limit is usually at 2 TiB (2.2 TB), matching the DOS partition table / 32-bit LBA addressing limit. Such a limit should not exist with UEFI, so you can get rid of that RAID array and move /boot into the root filesystem.
gldickens3 wrote: 2022-12-16 18:46 Delete the RAID1 array and choose one of the drives, such as sda, as my primary OS drive.
Which RAID array? /dev/md1 (/boot)? What do you mean by "primary OS drive"? If you want full redundancy, both drives must be equal.
gldickens3 wrote: 2022-12-16 18:46 Recreate the RAID1 array with the one drive only.
Then, add the second drive back to the array.
What is the point of doing this instead of recreating the RAID array with both drives at once? This does not make any sense. Also, you need to create an EFI partition on both drives for boot redundancy.
gldickens3 wrote: 2022-12-16 18:46 A second option would be to do a fresh Debian installation on new drives under UEFI and recover from an rsync image backup
This requires 3 times the storage space:
- original drives
- back-up storage
- new drives
Fine if you have it all. But:
gldickens3 wrote: 2022-12-16 18:46 Recover from the rsync image all files from the backup excluding /etc/fstab
What is the point of installing a new system if you are going to overwrite most of it with the old one? Why not just restore the configuration and data?
Note that the new network configuration may need to be preserved too.

If you feel more comfortable with the backup+restore method, go with it, especially if you want to replace the drives (with bigger, faster or just newer ones). However, I am kind of a lazy guy and would make as few changes as possible.

Method 3:
Move the contents of /boot into the root filesystem.
Delete /dev/md1 and its member partitions.
Create EFI partitions on both drives.
Mount one on /boot/efi and the other one on /boot/efi2.
Update fstab (remove /boot, add /boot/efi and /boot/efi2 with "nofail" option).
Install grub-efi.
Install GRUB on /boot/efi2:

Code: Select all

grub-install --target=x86_64-efi --force-extra-removable --efi-directory=/boot/efi2
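For Method 3's fstab step, the new entries might look like this (the UUIDs below are placeholders for illustration; read the real ones with blkid):

```
# /etc/fstab -- UUIDs are placeholders; get the real ones with blkid.
# The old /boot line (md1) has been removed.
UUID=XXXX-XXXX  /boot/efi   vfat  umask=0077,nofail  0  1
UUID=YYYY-YYYY  /boot/efi2  vfat  umask=0077,nofail  0  1
```

The "nofail" option lets the system boot even if one of the two ESPs is missing, e.g. after a drive failure.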
Method 4:
Disable the swap.
Delete /dev/md0 and its member partitions.
Create EFI partitions on both drives.
Re-create a RAID array in the remaining free space for swap.
Update fstab, enable swap, install grub-efi, etc.
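The swap teardown and rebuild in Method 4 could be sketched like this. The device names follow the lsblk output earlier in this thread; the repartitioning step itself is left as a comment because its exact numbers depend on your layout:

```shell
# Sketch only: device names are taken from the lsblk output above.
swapoff -a

# Tear down the swap array and wipe its member superblocks
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda5 /dev/sdb5

# ...repartition here: carve an ESP out of each drive's freed space,
# leaving the remainder for a new swap partition on each drive...

# Re-create a RAID1 array for swap in the remaining space
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
mkswap /dev/md0
swapon /dev/md0
```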

It is possible to shrink an existing RAID array without deleting+re-creating it but it is more complicated and not necessary here as either /boot or swap can be recreated easily.

Additionally, with both methods, you may want to convert the partition tables to GPT with gdisk before installing grub-efi. This requires 33 free sectors at the beginning and at the end of each drive for the primary and backup GPT headers and partition tables.
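The gdisk conversion itself is largely automatic: gdisk reads the msdos table, converts it to GPT in memory, and writing with 'w' makes it permanent. A sketch (back up your partition tables first):

```shell
# sgdisk (same package as gdisk) can do the MBR-to-GPT conversion
# non-interactively; run it on both drives.
sgdisk --mbrtogpt /dev/sda
sgdisk --mbrtogpt /dev/sdb

# Verify the result
gdisk -l /dev/sda
```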
