[Solved] Trying to resize /boot partition but stuck

jrv
Posts: 43
Joined: 2012-05-03 12:20

[Solved] Trying to resize /boot partition but stuck

#1 Post by jrv »

I have been running debian for many years, and the initial partitioning of the disk is no longer workable. I have a 2 TB disk that was partitioned with 243 MiB for /boot and the remainder for LVM. Over the past year or two, kernel compiles have been running out of disk space on the /boot partition, and I have been limping along by moving the older initrd image off the boot partition before updating. I have plenty of free space in the LVM partition, and I would like to move a small portion (1G) of it to the boot partition. The disk uses MBR. The first partition is /dev/sdd1, with an ext2 filesystem. The LVM physical volume (lvm2 pv) is /dev/sdd5, which is wrapped in the extended partition /dev/sdd2. The LVM has two logical volumes, root and swap_1.
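For reference, the layout and the state of /boot can be seen with:

Code: Select all

# partitions and the LVM stack on the disk
lsblk /dev/sdd
# physical volumes, volume groups and logical volumes
pvs && vgs && lvs
# how full /boot actually is
df -h /boot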

I created a debian live cd and booted from it, then installed gparted and partitionmanager in the live environment. Some of the posts I found suggested it might be possible to shrink the lvm in gparted or partitionmanager, but I did not find a way to do that. Instead I used the command line:

Code: Select all

lvresize --verbose --resizefs -L -1G /dev/mapper/euterpe--vg-root
This seemed to do what I expected, but the unallocated space ended up "between" the two logical volumes. I moved it to the end of the volume group by first determining where the logical volumes were currently located:

Code: Select all

pvs -v --segments
This gave the starting extents and sizes of the segments, as well as the 256-extent (1G) unallocated segment.

Code: Select all

pvmove --alloc anywhere /dev/sdd5:476614-476869
This took the last 1G of the swap_1 logical volume and moved it into the 1G gap, leaving the last 1G of the physical volume free.
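Re-running the segment listing is an easy way to confirm the free extents now sit at the end of the PV (and if a pvmove gets interrupted, running pvmove with no arguments resumes it):

Code: Select all

# confirm the 256 free extents are now the last segment on the PV
pvs -v --segments /dev/sdd5
# resume any interrupted move
pvmove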

I then used partitionmanager to shrink the sdd5 partition by 1G, and its containing sdd2 partition by 1G. This left me with the layout shown below:

Code: Select all

# parted /dev/sdd print free
Model: ATA ST2000DM006-2DM1 (scsi)
Disk /dev/sdd: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type      File system  Flags
        32.3kB  1049kB  1016kB            Free Space
 1      1049kB  256MB   255MB   primary   ext2         boot
        256MB   257MB   1048kB            Free Space
 2      257MB   1999GB  1999GB  extended
 5      257MB   1999GB  1999GB  logical                lvm
        1999GB  1999GB  2097kB            Free Space
        1999GB  2000GB  1075MB            Free Space
There is 2M of unallocated space listed under /dev/sdd2, which I will just ignore if I can't reclaim it easily. I am guessing this is due to the sdd2 partition not being aligned, something I saw mentioned in another post.
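As an aside, parted can check whether a partition starts on an optimal boundary, which should confirm (or refute) the alignment guess:

Code: Select all

parted /dev/sdd align-check optimal 2
parted /dev/sdd align-check optimal 5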

There is also 1G of space at the end that is not part of any partition. Here I am stuck. I think I have to move that 1G of space from after /dev/sdd2 to before it, so it is adjacent to /dev/sdd1 (the boot partition). I was hoping to use partitionmanager or gparted to move the /dev/sdd2 partition (and its contained /dev/sdd5 partition) down, which would in turn move the unallocated 1G of space up. But both partitionmanager and gparted have a lock on the sdd2 and sdd5 partitions. I found something that says to use "Deactivate" in gparted to allow the partition to be moved, but I am concerned that deactivating means data loss. I can't find any documentation on what "deactivate" actually does.

So, questions:

1) Am I correct that the unallocated 1G currently located below partition /dev/sdd2 has to be moved above /dev/sdd2 and so next to /dev/sdd1 in order to expand /dev/sdd1 into the unallocated space?
2) Can this be done with gparted or partitionmanager without losing data?
3) Is deactivating the LVM partition in gparted the correct way to begin that process?
Last edited by jrv on 2024-05-17 20:06, edited 1 time in total.

wizard10000
Global Moderator
Posts: 1023
Joined: 2019-04-16 23:15
Location: southeastern us
Has thanked: 114 times
Been thanked: 169 times

Re: Trying to resize /boot partition but stuck

#2 Post by wizard10000 »

1) Am I correct that the unallocated 1G currently located below partition /dev/sdd2 has to be moved above /dev/sdd2 and so next to /dev/sdd1 in order to expand /dev/sdd1 into the unallocated space?
Yes. Expanding a partition requires that the unallocated space be adjacent to the partition you want to resize.
2) Can this be done with gparted or partitionmanager without losing data?
A good backup is strongly recommended. A software, hardware or power failure during partitioning can trash your drive. Can you do it without data loss? Probably, but any data loss will most likely be catastrophic.
3) Is deactivating the LVM partition in gparted the correct way to begin that process?
Yes, you need to remove the lock before repartitioning.

Hope this helps -
we see things not as they are, but as we are.
-- anais nin

jrv
Posts: 43
Joined: 2012-05-03 12:20

Re: Trying to resize /boot partition but stuck

#3 Post by jrv »

Thanks for the reply.
wizard10000 wrote: 2024-05-16 08:41
2) Can this be done with gparted or partitionmanager without losing data?
A good backup is strongly recommended. A software, hardware or power failure during partitioning can trash your drive. Can you do it without data loss? Probably, but any data loss will most likely be catastrophic.
Before beginning I made two backups: one using my standard tar-based backup and a second using fsarchiver. The latter is new to me; I encountered it during my research for this. I will probably revisit it later because it has the very interesting (to me) ability to back up an lvm snapshot, which is something else I learned about during this research. The tar archive is more useful day-to-day because I can dig individual files out of it rather than restore the whole thing when someone who shall remain anonymous deletes the wrong file. The fsarchiver backup looks like it would only be useful in the case of a complete disk failure or corruption. I did not mention backups because they weren't directly related to the question at hand.
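For anyone curious, the basic fsarchiver usage is a single command run against an unmounted (or read-only) filesystem; the archive path here is just an example:

Code: Select all

# archive the root filesystem to a file on some other disk (example path)
fsarchiver savefs /mnt/backup/root.fsa /dev/mapper/euterpe--vg-root
# restore later with: fsarchiver restfs /mnt/backup/root.fsa id=0,dest=/dev/mapper/euterpe--vg-root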
wizard10000 wrote: 2024-05-16 08:41
3) Is deactivating the LVM partition in gparted the correct way to begin that process?
Yes, you need to remove the lock before repartitioning.
I hunted through the gparted code and found that for deactivation it executes:

Code: Select all

lvm vgchange -a n {volgroup}
At this point I am still not clear on what deactivation does in lvm. But I conjecture that even though the disk is not mounted in the traditional sense when booting into a live cd (it does not show in the list that "mount" generates), the OS still scans the disks and activates any lvm volumes it finds on them. Again, purely conjecture. Anyway, I will now go forward with deactivation and attempt to move the lvm partition so the unallocated space is adjacent to the /boot partition.
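For anyone following along: as far as I can tell, deactivation only removes the device-mapper nodes for the logical volumes (the /dev/mapper/euterpe--vg-* entries) so that nothing holds /dev/sdd5 open; it does not touch the data on the physical volume. From the live cd the activation state can be checked, and the deactivation undone, with something like:

Code: Select all

# show whether the logical volumes are currently active
lvscan
# deactivate every LV in the volume group (what gparted runs)
vgchange -a n euterpe-vg
# reactivate them again when done
vgchange -a y euterpe-vg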

pbear
Posts: 492
Joined: 2023-08-27 15:05
Location: San Francisco
Has thanked: 2 times
Been thanked: 81 times

Re: Trying to resize /boot partition but stuck

#4 Post by pbear »

I've only dabbled with LVM in test boxes. If you haven't been all along, I heartily recommend reading the Arch Wiki.
Will mention, by the way, that if I were in your shoes I would reinstall. Probably faster and definitely cleaner.

jrv
Posts: 43
Joined: 2012-05-03 12:20

Re: Trying to resize /boot partition but stuck

#5 Post by jrv »

This is an after-action report for the benefit of anyone who might be in a similar situation.

After deciding that deactivating the LVM partition was ok, I booted into my live cd again. I now had the boot partition first as the primary partition, the extended partition second containing the LVM partition, followed by some unallocated space not contained in any partition. At this point I ran into a snag: I could not find a way in gparted to move the unallocated space to the left, past the nested pair of partitions holding the lvm, so it would be adjacent to the boot partition.

Instead, I decided to expand the container partition (sdd2) to include the unallocated space again. I deactivated the lvm partition and edited the container to take in the unallocated space following it. Once I had done that, I moved the contained partition (sdd5) to the right so the unallocated disk space was at the beginning of the container partition rather than at the end. In the resize dialog for the nested partition there were text boxes labeled "Free space preceding (MiB)", "New size (MiB)" and "Free space following (MiB)". The "following" space was my 1G of empty disk space, so I changed the "preceding" space to that size and the "following" space to zero, then applied (edit, apply). I could not figure out a way to perform this move using the mouse and the gui slider, but typing in the values worked.

Because this was copying from the disk to itself, I was hoping it would be reasonably fast, but in fact it took ten hours to move 1.82TB. So be prepared; that's a lot of TikTok viewing. If I had realized it was going to take ten hours I would have checked the weather before starting to see if there were any storms in the forecast.

At the end of this move I now had the unallocated space at the beginning of the container partition (sdd2). I selected the container partition and brought up the resize dialog. Unlike the contained LVM partition, it reported that there was no unallocated space before and none after. Again I could not figure out how to use the gui and the slider to resize the partition, and with no space reported before or after, I also did not see at first how to move the unallocated space at the start of the container partition outside of it. Then I noticed that the dialog displayed two values, "Minimum size" and "Maximum size", and that the "Minimum size" was exactly 1G smaller than the "Maximum size". So I set the "New size" to the minimum size, and here I hit another snag. I got the error message,

Code: Select all

gparted bug a partition cannot end (some number here) after the end of the device (%2).

This seemed to be due to the sdd2 partition not being aligned on a suitable boundary. So instead of resizing the container partition (sdd2) to the reported minimum size, I resized it to the minimum size plus 2M, because I knew that was the amount that was unallocated before I started. I had to restart gparted before I could get this to work, but once I applied the new size to the container partition (sdd2) I had the unallocated space, minus 2M, outside the sdd2 partition and adjacent to the sdd1 partition. From here it was all downhill: I resized the boot partition (sdd1) so it used all the unallocated space and applied. I was expecting to have to use resize2fs to expand the first partition's filesystem, but apparently gparted did this for me automatically.
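Had gparted not done that, growing the filesystem by hand after enlarging the partition would have looked roughly like this (with the partition unmounted):

Code: Select all

# check the filesystem first, then grow it to fill the enlarged partition
e2fsck -f /dev/sdd1
resize2fs /dev/sdd1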

End result: profit! The /boot partition now has 1.24G of space, with no data loss on any partition. The process was not as smooth as silk; a couple of times it was not clear to me how to proceed with gparted, but in the end I stumbled through the maze.

Aside: the live cd figured out my 27 inch monitor on boot, but when the live cd screen display timed out, it would put up a black screen and would not display the dialog to log back in. I had to type the password in blind (the password for the default user "user" is "live"). Once logged in again the screen would remember how to draw itself and display properly, albeit with a dialog for reconfiguring the display drawn on it. The first time this happened it was quite disconcerting because I was waiting for a two terabyte partition to finish moving. This was to take many hours, and I wanted to look at the progress. If logging in blind hadn't worked I would have had to guess when the partition was done moving and then push the power switch. Needless to say I'm glad it didn't come to that. I think this is because the monitor powers down after being idle for a long time, and when it revives to put up the login screen, the live disk isn't able to perform its monitor recognition code. Whatever defaults the live disk is using do not match what the monitor wants, and so until the login was complete all I could see was a black screen.

Aside: when performing gparted operations the "deactivation" of the lvm disk was not sticky. I would apply an operation and afterwards find the partition was locked and active again. I kept having to deactivate the same partition. This wasn't a problem; it was just a bit surprising.

Aside: disk device names can change between boots of the live cd. I have referred to the disk here as "sdd" and the partitions as "sdd1," "sdd2" and "sdd5," but at one point they came up as "sdc" instead of "sdd". Not a problem; just something to be aware of.

jrv
Posts: 43
Joined: 2012-05-03 12:20

Re: Trying to resize /boot partition but stuck

#6 Post by jrv »

pbear wrote: 2024-05-17 01:54 I've only dabbled with LVM in test boxes. If you haven't been all along, I heartily recommend reading the Arch Wiki.
Will mention, by the way, that if I were in your shoes I would reinstall. Probably faster and definitely cleaner.
I got into this situation when I first installed debian long ago, maybe fifteen years ago. I believe at the time the boot code could not read an LVM volume, so you had to have a separate partition for it. That limitation has since gone away, but for a legacy system like mine, it's my legacy. I will also guess that lvm was the default at the time, because it is unlikely I would have chosen it otherwise. I believe MBR was also the default partitioning scheme back then.

For a system like the one I built, a single-disk system, lvm is not particularly helpful. It might be useful if I had built a system with a really small drive and then decided to expand it in place without a re-install. Then I could add another disk and grow the machine into it without having to worry about balancing the data across the disks; the total storage would simply be the sum of the two disks, regardless of where the files live in the directory tree (a rough sketch of this is below). There are of course downsides with regard to potential failures: if one of the disks fails you can have files split across the two disks and so lost, plus you have to deal with a degraded lvm volume group.
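Roughly, growing onto a second disk would look like this (the device name here is made up for illustration):

Code: Select all

# turn the new disk (or a partition on it) into a physical volume
pvcreate /dev/sde1
# add it to the existing volume group
vgextend euterpe-vg /dev/sde1
# grow the root LV and its filesystem into the new space
lvextend -r -l +100%FREE /dev/mapper/euterpe--vg-root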

Since I first installed lvm I see it has acquired new functionality, e.g. raid and snapshots. I recently built a raid using zfs, but I am not entirely enamored with zfs because it gives me the warning "zfs taints the kernel." This seems to be an issue of licensing differences rather than a code problem per se, but it would be nice to move this zfs raid over to lvm, because neatness counts. Snapshots are also a feature I had not noticed before, and I may look at performing backups against a snapshot rather than the live files. This page has a cool script for creating a backup using an lvm snapshot; it uses fsarchiver, but you could substitute a tar-based archive (or something else) without too much difficulty. The gist of the snapshot approach is sketched below.
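As I understand it, a snapshot-based backup boils down to something like this (the snapshot size and paths are illustrative; the snapshot only needs enough room for the changes made while the backup runs):

Code: Select all

# create a temporary snapshot of the root LV
lvcreate -s -L 5G -n rootsnap /dev/mapper/euterpe--vg-root
# mount it read-only and archive it
mkdir -p /mnt/rootsnap
mount -o ro /dev/euterpe-vg/rootsnap /mnt/rootsnap
tar -czf /mnt/backup/root-snapshot.tar.gz -C /mnt/rootsnap .
# clean up
umount /mnt/rootsnap
lvremove -y /dev/euterpe-vg/rootsnap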

As for whether reinstalling would be faster and/or cleaner, that's not clear to me. After fifteen years there's a fair amount of customization on this machine. I know, for instance, that I would have to at least reinstall all the software I have loaded, remount the zfs partition, re-enable all the timer scripts I have built and re-install the various thunar scripts I have created. A re-install *would* give me a gpt disk, a single bootable volume (instead of having /boot in a separate partition) and perhaps other benefits I have not considered. But even if I had thought of the idea, I think I would have gone down the path of patching the existing problem rather than possibly creating new ones. I knew that resizing the /boot partition would not introduce new work, assuming I didn't suffer a catastrophic data loss.

With regard to documentation, I often call upon the Arch linux documentation for solid information. Their lvm doc looks pretty good. Unfortunately in this case my difficulty was more with gparted, and their explanation about what "deactivation" does is just about as informative as other information on the interwebs, i.e. it does not really say what is going on. For lvm as such I think red hat documentation may be as good.
