Fixing broken boot partition with an encrypted filesystem

Postby blub » 2017-01-20 12:35

This guide helps you boot a Linux system again that has an encrypted LUKS partition and an unencrypted boot partition. With it you should be able to repair your boot partition even if it was destroyed, your Linux kernel image is gone or corrupted, your initramfs is broken, or GRUB isn't working.

It is assumed you're using the GRUB 2 bootloader, a Debian-based distribution (otherwise certain config file locations and the apt-get commands may differ) and a LUKS setup that contains an LVM volume group inside the encrypted LUKS partition, as described in [1]. This is also the default setup if you installed Ubuntu with the encrypted-filesystem option. The guide was written while rescuing an Ubuntu system, but it should be the same for all Debian-based distros.

This is essentially a combination of the guides [1], [2], [3], with the additional caveat that you can't simply copy the grub/initramfs/crypttab/fstab configuration from a working setup: the "install Ubuntu with encrypted filesystem" option somehow doesn't create half of the required configuration, yet the filesystem encryption still works. If someone knows why this configuration isn't necessary in the default install but is needed to restore functionality, please let me know.

Required ingredients: a live-system boot CD or USB stick with the same OS/version as on the system to be rescued.

Most commands must be run as root; I suggest you use a root shell instead of typing "sudo" every time.

This guide assumes you know how to use the shell and gparted, how to spot a UUID, and how mounting devices works.

Boot from your live system to run the actions. In the following, it is assumed that the system you want to repair is on /dev/sda.

Creating a new boot partition

Skip this step if you already have a boot partition.

If you lost your boot partition entirely for whatever reason, create a new one using gparted. This should be straightforward: just create a new empty ext4 partition wherever your old boot partition was, or anywhere with free space on the same hard disk as the LUKS partition.

Later, in the restore section, you will edit your /etc/fstab to accommodate the new partition.

Get the required information: UUIDs

In the following, it is assumed your boot partition is /dev/sdaX (e.g. /dev/sda1) and your LUKS partition is /dev/sdaY. You can see in gparted which partition has which /dev/ path.

First, get the UUIDs of your partitions. Find out the boot partition UUID with

Code: Select all
 blkid /dev/sdaX

which returns something like

Code: Select all
 /dev/sdaX: UUID="6e7d20d5-b54f-45d9-b1cc-b4d6ecfad193" TYPE="ext4"

The UUID should look something like 6e7d20d5-b54f-45d9-b1cc-b4d6ecfad193. Copy it into a text editor or somewhere you can find it again later; do the same for all the following information.
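
If you want to avoid copy-and-paste mistakes, the bare UUID value can also be extracted programmatically. A small sed sketch (the sample line is the one from this guide, not from your system; alternatively, `blkid -s UUID -o value /dev/sdaX` prints the bare value directly):

```shell
# Pull just the UUID value out of a blkid-style line, so it can be
# pasted into config files without the quotes:
line='/dev/sdaX: UUID="6e7d20d5-b54f-45d9-b1cc-b4d6ecfad193" TYPE="ext4"'
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"   # prints 6e7d20d5-b54f-45d9-b1cc-b4d6ecfad193
```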

Get the luks partition UUID with

Code: Select all
 cryptsetup luksUUID /dev/sdaY

which returns a UUID on success:

Code: Select all
 9537a729-392b-4335-aeb5-f5ab2d64b7c1
Or an error message if you got the wrong partition:

Code: Select all
 Device /dev/sdaY is not a valid LUKS device.

If you get the error message instead of a UUID, then /dev/sdaY is not a valid LUKS partition. Re-check with gparted whether you picked the right partition. If the problem persists, get help elsewhere; this tutorial won't help you in that case.

Get the required information: Target name

Get the target name by looking at /etc/crypttab. It should contain a line like this:

Code: Select all
 TARGETNAME UUID=9537a729-392b-4335-aeb5-f5ab2d64b7c1 none luks,discard

You will need to edit this line later; for now just remember the target name.
If there are additional lines for swap or other partitions, you need the one for your root partition, most probably the first one.

If you don't have an /etc/crypttab (anymore?), create a new one containing the line above, but substitute your LUKS volume UUID for the UUID. As the target name, take what was displayed on the "Unlocking disk" password prompt screen back when it still worked: the target name is written at the end of the line, in parentheses.
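
If you prefer to extract the target name on the shell instead of eyeballing the file, the first field of the first real entry is what you want. A sketch on the sample entry from above (for real use, point the awk command at /etc/crypttab):

```shell
# Print the target name (first field) of the first non-comment,
# non-empty crypttab entry:
entry='TARGETNAME UUID=9537a729-392b-4335-aeb5-f5ab2d64b7c1 none luks,discard'
printf '%s\n' "$entry" | awk 'NF && $1 !~ /^#/ {print $1; exit}'
# prints TARGETNAME
```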

Open LUKS partition and mount root volume

Here you can also see that your data is still there. Using the target name you found out earlier, open your luks volume with

Code: Select all
 cryptsetup luksOpen /dev/sdaY TARGETNAME

which should give you a password prompt.

After a successful open, scan for LVM volume groups using

Code: Select all
 vgscan
The reply will look like:

Code: Select all
 Reading all physical volumes. This may take a while...
 Found volume group "VGNAME" using metadata type lvm2

Note down the volume group name (VGNAME in the sample output).

Make the LVM volumes in that group available to the kernel and scan for partitions, using the volume group name you just found out:

Code: Select all
 vgchange -a y VGNAME

The reply should look similar to:

Code: Select all
 ACTIVE     '/dev/VGNAME/root'   [5.00 GB]     inherit
 ACTIVE     '/dev/VGNAME/usr'    [6.00 GB]     inherit
 ACTIVE     '/dev/VGNAME/home'   [128 GB]      inherit
 ACTIVE     '/dev/VGNAME/swap'   [2048 MB]     inherit
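
If you want to grab the device paths from this output in a script, the quoted paths can be extracted with sed. A sketch on a sample line from above:

```shell
# Extract the quoted /dev/... path from a vgchange/lvscan-style line:
sample="  ACTIVE     '/dev/VGNAME/root'   [5.00 GB]     inherit"
printf '%s\n' "$sample" | sed -n "s/.*'\(\/dev\/[^']*\)'.*/\1/p"
# prints /dev/VGNAME/root
```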

Now you could actually mount your partitions and look at your files. But instead we will chroot into the system to be rescued and take rescue measures from there.

Chroot into encrypted system

Using your target name and volume group name, mount all relevant volumes on a new mountpoint. Compare with the vgchange output from earlier: you always need to mount the root volume, plus the /usr volume in case you have one. Home and swap volumes, should you have them, don't need to be mounted. You could use any folder name instead of the target name, but that might lead to error messages about /etc/crypttab later.

Code: Select all
 mkdir /media/TARGETNAME
 mount /dev/VGNAME/root /media/TARGETNAME
 mount /dev/VGNAME/usr /media/TARGETNAME/usr
 mount -o bind /proc /media/TARGETNAME/proc
 mount -o bind /dev /media/TARGETNAME/dev
 mount -o bind /sys /media/TARGETNAME/sys

Note that we also bind-mounted /proc, /sys and /dev from the live system, which is necessary for chrooting into your system from the live system.

Now chroot into your system-to-be-saved using

Code: Select all
 chroot /media/TARGETNAME /bin/bash

Then, as the first thing, mount the boot partition of your hard disk into your new system:

Code: Select all
 mount /dev/sdaX /boot

(You may want to make sure the /boot mountpoint is empty first.)
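
One way to check this, as a sketch (check_empty is just a helper name made up for this guide):

```shell
# Succeeds only if the given directory exists and contains no entries:
check_empty() {
    [ -d "$1" ] && [ -z "$(ls -A "$1")" ]
}

# usage: warn before mounting over leftover files
check_empty /boot || echo "/boot is not empty -- inspect it first" >&2
```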

Restoration, part 1: re-install kernel image

If you had to re-create your boot partition, or your kernel image in /boot is missing (e.g. after a failed update) or corrupted, the easiest way to recreate it is to re-install the corresponding linux-image package.

You can skip this step if you know you have a valid linux image installation.

The tricky part here is finding out which kernel image is the newest one installed on your system (and if the newest one was causing problems, you might want to install the one before it). Get a list of all image packages with

Code: Select all
 apt-cache search --names-only linux-image-

This returns a long list of linux-image packages; now pick the right image from that list.

To find out the installed version, use

Code: Select all
 uname --kernel-release

which returns e.g.

Code: Select all
 3.16.0-4-amd64
In this case, your kernel image would be linux-image-3.16.0-4-amd64.

So re-install it with

Code: Select all
 apt-get install --reinstall linux-image-3.16.0-4-amd64

If you suspect the problems are caused by your current kernel version, install an earlier version instead. Note that the list generated above with apt-cache is not sorted by version number; double-check which one really is the kernel version you want.
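
The package name can also be built from the kernel release string directly. Caveat: even inside the chroot, `uname -r` reports the *live* system's running kernel, which is one more reason the live medium should match the installed version, as required above:

```shell
# Build the linux-image package name from the running kernel release:
release=$(uname --kernel-release)
echo "linux-image-$release"
# e.g. linux-image-3.16.0-4-amd64 on the system from this guide
```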

It is better to reinstall an already installed version here instead of thinking "well, I could just go for a newer kernel if I'm already doing this" because otherwise all kinds of problems might arise. You can still update your kernel normally after your system works again.

After the reinstall, check in /boot whether there is an image (called something like "vmlinuz-3.16.0-4-amd64") and a config (called something like "config-3.16.0-4-amd64").

Restoration, part 2: crypttab

Edit the file /etc/crypttab from which you got the target name earlier. If it isn't there, create it and add the following line, using the correct target name and LUKS partition UUID; if it is there and contains your target line, edit that line to look like the following one and re-check the target name and LUKS partition UUID. Remember to put your volume group name in place of VGNAME.

Code: Select all
 TARGETNAME UUID=9537a729-392b-4335-aeb5-f5ab2d64b7c1 none luks,retry=1,lvm=VGNAME

Note that if the file existed, there might be additional lines for swap partitions. You can fix them later.

If there are lines for /home, /usr and other partitions, you need to fix them here as well. Unfortunately, I'm not entirely sure how they should look - if you know this better than me, let me know. I guess you have to use an additional target name for every mount point with the corresponding UUID, and then make sure each one corresponds to an fstab entry so it is auto-mounted on boot (or not, because crypttab is an fstab replacement?). Not sure; help with this part would be appreciated.
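
One thing that is easy to verify either way: a crypttab entry has exactly four fields (target name, source device, key file, options). A quick well-formedness check, sketched on the line from above:

```shell
# Count the fields of a crypttab entry; a valid entry has exactly four:
entry='TARGETNAME UUID=9537a729-392b-4335-aeb5-f5ab2d64b7c1 none luks,retry=1,lvm=VGNAME'
fields=$(printf '%s\n' "$entry" | awk '{print NF}')
[ "$fields" -eq 4 ] && echo "crypttab line looks well-formed"
```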

Restoration, part 3: fstab

In case you created a new boot partition earlier, you need to update the boot partition UUID in the file /etc/fstab. Look for a line like this:

Code: Select all
 # /boot was on /dev/sda1 during installation
 UUID=a9c62f1d-b300-4589-b337-aa7141092c60 /boot           ext4    defaults        0       2

and replace the UUID value here with the boot partition UUID you found earlier.
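
The substitution can also be done with sed. A hedged sketch, shown here on a sample copy; for real use, point it at /etc/fstab (after making a backup) and substitute your actual new UUID for the sample one:

```shell
# Replace the UUID on the /boot line of an fstab-style file:
new=6e7d20d5-b54f-45d9-b1cc-b4d6ecfad193
printf '%s\n' 'UUID=a9c62f1d-b300-4589-b337-aa7141092c60 /boot ext4 defaults 0 2' > /tmp/fstab.sample
sed -i "s|^UUID=[0-9a-fA-F-]*\([[:space:]]\{1,\}/boot[[:space:]]\)|UUID=$new\1|" /tmp/fstab.sample
cat /tmp/fstab.sample
# the /boot line now carries the new UUID
```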

You might also need to replace the line in fstab that describes your root partition. Unfortunately I have no clue how to do this correctly, but if it helps you, here is the default line from a working encrypted-filesystem xubuntu installation:

Code: Select all
 /dev/mapper/xubuntu--vg-root /               ext4    errors=remount-ro 0       1

If you know this better than me, please let me know.

Restoration, part 4: initial ramdisk (initrd)

It is essential to have your boot partition mounted under /boot before doing this part; make sure you did that as described above.

If your kernel was re-installed or your initrd is missing, you need a new initrd; and since we need to update the initrd configuration anyway, you should regenerate it in any case.

Create a file named /etc/initramfs-tools/conf.d/cryptroot in the chrooted environment to contain this line, replacing TARGETNAME with the LUKS target name you found earlier, and the UUID value with the UUID of the LUKS partition:

Code: Select all
 CRYPTROOT=target=TARGETNAME,source=/dev/disk/by-uuid/9537a729-392b-4335-aeb5-f5ab2d64b7c1
Then (re-)create the initial ramdisk image in /boot using

Code: Select all
 update-initramfs -k all -c

This could take a while.

Restoration, part 5: GRUB

Again, first make sure you have your boot partition mounted under /boot.

Install or re-install the GRUB bootloader onto the hard disk MBR and write the grub files into /boot with:

Code: Select all
 grub-install /dev/sda

You should get back a success message:

Code: Select all
 Installation finished. No error reported.

Now edit the config file /etc/default/grub and find the line that looks like this:

Code: Select all
 GRUB_CMDLINE_LINUX=""
Change it to look like this, replacing TARGETNAME, VGNAME and the LUKS partition UUID with the appropriate values:

Code: Select all
 GRUB_CMDLINE_LINUX="cryptopts=target=TARGETNAME,source=/dev/disk/by-uuid/9537a729-392b-4335-aeb5-f5ab2d64b7c1,lvm=VGNAME"
Then run the following command to apply the grub configuration:

Code: Select all
 update-grub
If you want to double-check your GRUB installation, you can use the tool described in [4].


Exit the chroot shell (with the command "exit"). You should now be able to reboot your system (remove your live CD/USB stick) and get a password prompt again that unlocks your partition and boots your original GNU/Linux installation.
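
If you prefer to tear everything down cleanly instead of just rebooting, the mounts have to come off in reverse order before deactivating LVM and closing the LUKS mapping. As a sketch, this prints the commands; drop the echo to actually run them as root (TARGETNAME and VGNAME are the placeholders used throughout this guide):

```shell
# Print the teardown commands in reverse mount order:
for cmd in \
    'umount /media/TARGETNAME/boot' \
    'umount /media/TARGETNAME/sys' \
    'umount /media/TARGETNAME/dev' \
    'umount /media/TARGETNAME/proc' \
    'umount /media/TARGETNAME/usr' \
    'umount /media/TARGETNAME' \
    'vgchange -a n VGNAME' \
    'cryptsetup luksClose TARGETNAME'
do
    echo "$cmd"
done
```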

What remains to be done is fixing the swap partition(s). I haven't done this myself, so maybe I'll include it later if someone tells me how it would be done.

[1] ... VM_on_LUKS
[2] ... -dual-boot
[3] ... d-lvm.html
[4] ... -installed

Re: Fixing broken boot partition with an encrypted filesystem

Postby radiosarre » 2017-01-28 21:14

Hi blub, I got the same issue. Thank you for posting this thread.

Re: Fixing broken boot partition with an encrypted filesystem

Postby Ludogre » 2017-12-04 11:45

Thanks for this very useful tutorial.

Just three comments.

1. On the Debian 9.0 live CD, cryptsetup and lvm2 aren't installed by default, so we have to install them before using LVM and encrypted partitions.
2. With this file /etc/initramfs-tools/conf.d/cryptroot, my system failed to boot.
3. And last, though it may have been due to my second point, the cryptsetup command was not available in my initramfs.

To manage to restore my boot partition, I had to manually install these files from another installation: config-*, initrd.img-* and vmlinuz-*. During the boot process, I modified the grub entry to select the "new" kernel in single-user mode. Once the system was started, I reinstalled my kernel with the "dpkg-reconfigure" command.

