My external cloud drive (with my backups) failed, so I started rethinking my system.
I see a lot of people using mdadm for software RAID, and it works for /boot.
But I also liked the flexibility LVM provides for growing storage.
(And a good friend has been using LVM for some of his needs, different from mine...)
So I ended up rebuilding my system to use my two 1 TB disks as follows:
 50 GB  /dev/sd[ab]1  /      /dev/md1 (mdadm)
 10 GB  /dev/sd[ab]2  swap   (mdadm)
871 GB  /dev/sd[ab]3  /home  (LVM2, using the raid1 segment type, since the legacy "mirror" type is considered obsolete)
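For reference, a sketch of how a layout like this could be built. The device names match the table above; the VG/LV names "vg_home"/"lv_home" are placeholders I made up, not anything from my actual setup:

```shell
# mdadm RAID1 for / (sketch only; run against the partitions above)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# LVM raid1 for /home: both partitions become PVs in one VG,
# then a raid1-type LV mirrors the data across them.
pvcreate /dev/sda3 /dev/sdb3
vgcreate vg_home /dev/sda3 /dev/sdb3                      # placeholder VG name
lvcreate --type raid1 -m 1 -l 100%FREE -n lv_home vg_home # one mirror copy
mkfs.ext4 /dev/vg_home/lv_home
```

The point of --type raid1 (rather than the old --type mirror) is that it uses the MD raid1 kernel code under LVM, which is why the "mirror" segment type is considered obsolete.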
Within a few days, /dev/sdb decided it should have been in the junk bin...
Now the /home partition will not mount: device-mapper hangs, and LVM never activates the LV.
The mdadm arrays came up fine. (No, /dev/sdb3 is not under mdadm; it uses LVM's raid1.)
According to lvm.conf, the default activation_mode is "degraded", so I am confused why the LV does not activate.
I am running Debian 9 (just upgraded, since I was changing things so much anyway!).
I have not found a way to verify that activation_mode is actually "degraded"; it is acting as if it were set to "complete".
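In case it helps others, here is what I would expect to check. "vg_home"/"lv_home" are placeholder names, and I have not been able to confirm these get past the hang on my box:

```shell
# Show the effective activation_mode from lvm.conf (default: "degraded")
lvmconfig activation/activation_mode

# Check the LV's health; a missing raid1 leg shows up in these fields
lvs -o name,lv_attr,lv_health_status vg_home

# Force degraded activation explicitly, overriding whatever the config says
lvchange -ay --activationmode degraded vg_home/lv_home
```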
I have seen suggestions (from Google searches) to try the commands below.
I have also wondered if I should convert the raid1 LV back to a linear volume to discard the destroyed PV.

You can activate what's left of the volume group with:
vgchange --partial -ay <VolGroup>
You can remove the missing PV from the VG with:
vgreduce --removemissing <VolGroup>
Note that this will also remove any logical volumes that were using the
missing physical volume. You can run it with --test first to see what
effect it will have.
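Putting those suggestions together into the order I would try them ("vg_home"/"lv_home" are placeholder names again):

```shell
# Dry run first: shows what vgreduce would do without changing anything
vgreduce --test --removemissing vg_home

# Without --force this only drops missing PVs that no LV still uses;
# if the raid1 LV still references the dead PV, it will refuse.
vgreduce --removemissing vg_home

# Dropping the dead mirror leg converts the LV to linear (single copy),
# which is the raid1-to-linear conversion I wondered about above.
lvconvert -m0 vg_home/lv_home
```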
Thoughts?
Tom