LVM raid1

Postby w2vy » 2018-02-07 15:14

How ironic things can be...

My external cloud drive failed (with my backups) so I started re-thinking my system

I see a lot of people using mdadm for a software raid and it works for /boot
But I also liked the flexibility that LVM provides in growing storage needs.
(and a good friend has been using LVM for some of his needs, different from mine...)

So I ended up rebuilding my system to use my two 1 TB disks like this:

50 GB /dev/sd[ab]1 -> /dev/md1, mounted as / (mdadm RAID1)
10 GB /dev/sd[ab]2 -> swap (mdadm RAID1)
871 GB /dev/sd[ab]3 -> /home (lvm2, configured as raid1 since the old mirror segment type is considered obsolete)
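For reference, a layout like this could be built roughly as follows. The device names, the VG name `vg`, and the LV name `home` are illustrative, not taken from the actual system:

```shell
# mdadm RAID1 arrays for / and swap (assumed partition names)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# LVM RAID1 for /home on the third partitions
pvcreate /dev/sda3 /dev/sdb3
vgcreate vg /dev/sda3 /dev/sdb3
lvcreate --type raid1 -m 1 -l 100%FREE -n home vg
```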

In a matter of a few days /dev/sdb decided it should have been in the junk bin...

Now the /home partition will not mount: device-mapper hangs and LVM never activates the LV.

The mdadm arrays came up fine. Note: /dev/sdb3 is not under mdadm; it uses LVM for RAID1.

According to lvm.conf the default activation_mode is "degraded", so I am confused why the LV does not activate.

I am running Debian 9 (just upgraded, since I was changing so many things anyway).

I have not found a way to verify that the LV will activate in degraded mode; it is acting as if activation_mode were set to "complete".
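For what it's worth, the effective setting can be queried and a degraded activation forced explicitly; the VG name `vg` here is just a placeholder:

```shell
# Show the effective lvm.conf setting (the default is "degraded")
lvmconfig activation/activation_mode

# Try to activate the VG explicitly in degraded mode
vgchange -ay --activationmode degraded vg

# Inspect the RAID LV and its health status
lvs -a -o name,segtype,devices,lv_health_status vg
```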

I have seen suggestions (from Google searches) to try:

You can activate what's left of the volume group with:

vgchange --partial -ay <VolGroup>

You can remove the missing PV from the VG with:

vgreduce --removemissing <VolGroup>

Note that this will also remove any logical volumes that were using the
missing physical volume. You can run it with --test first to see what
effect it will have.

I have also wondered if I should convert the raid1 back to a linear volume to discard the destroyed pv
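If the goal really is to drop RAID and keep a single linear copy, lvconvert can reduce the mirror count to zero, leaving a plain linear LV on the surviving PV. A sketch with assumed VG/LV names:

```shell
# Preview what removing the missing PV would do
vgreduce --test --removemissing vg

# Drop to zero mirror images: the raid1 LV becomes a linear LV
lvconvert -m 0 vg/home
```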

Posts: 48
Joined: 2011-02-07 14:06

Re: LVM raid1

Postby dryden » 2018-02-07 20:47

LVM should activate a RAID 1 with a missing leg just fine.

It is possible to vgreduce --removemissing a RAID 1 PV, I think.

But, LVM cannot automatically rebuild RAID 1 until you add a new PV.

So I have no solution for you, but LVM is a bit of a toy system.

You might be able to manually edit /etc/lvm/backup/<vgname>

Lots of things don't work when a PV is missing.

Restoring a backup is done with vgcfgrestore <vgname> but that might not work when the PV is gone.

I don't know the command for replacing a leg myself, but you will (would) have to create a new PV, add it to the VG, and then rebuild the missing image before re-attaching it to the mirror.
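(In current LVM the usual sequence is to add a new PV and let lvconvert rebuild the missing image; device and VG/LV names below are assumptions, not from the actual system:)

```shell
# Prepare the replacement disk's partition and add it to the VG
pvcreate /dev/sdc3
vgextend vg /dev/sdc3

# Rebuild the missing RAID1 image onto free space in the VG
lvconvert --repair vg/home

# Alternatively, while the failing PV is still in the VG:
# lvconvert --replace /dev/sdb3 vg/home
```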

You can always manually change the RAID 1 device to a regular linear target using the configuration file (backup), adding the VISIBLE flag and ignoring the metadata subvolume on that disk.

But, vgcfgrestore has to work for that.
Posts: 80
Joined: 2015-02-04 08:54

Re: LVM raid1

Postby w2vy » 2018-02-08 00:53

LVM a toy... oh boy... I thought it was more mature...

maybe I need to use mdadm for all...

vgreduce --removemissing --force vg

fixed it...


Re: LVM raid1

Postby dryden » 2018-02-08 07:23

Well they just introduce new features that are like toys and then much later down the road they put in place appropriate protections.

The problem is that Debian 8 is still at LVM 2.02.111 or so, and Stretch is at 2.02.168 I think, which is much better, but certain protections only became available in 2.02.171, etc.

Debian and Ubuntu just run too far behind and are lax in upgrading LVM but LVM follows a "release pretty toys first, worry about making it work right later" model.

The Stretch version *ought* to be quite reasonable.

So I don't know why this thing didn't work, maybe vgreduce IS the proper way to go about things? It seems counterintuitive.

I have never had a RAID 1 system fail to boot or fail to get activated in Ubuntu 16.04 (LVM 2.02.133).

And most of the time I was running with a missing leg ;-).

In fact when I loop-backed a DD image of that missing leg it automatically synced it, which is a bit annoying, but LVM will auto-sync any leg you re-attach.

To me LVM RAID just feels more shabby than mdadm even though it is more convenient and nicer.

Re: LVM raid1

Postby w2vy » 2018-02-08 11:04

Good points; the stability of Debian is exactly why it is still running on my home server.
We get that stability by not pushing anything near the bleeding edge...

Staying away from the bleeding edge is a double-edged sword...

In Ubuntu what I tried might have worked... but then I'd have a higher risk of new bugs too...

I started my server on Debian (a long time ago, 10-15 yrs), upgraded it to Ubuntu, and got bitten
by a bug (some processes would spin out of control; again, a long time ago), so
I switched this server back to Debian... been rock solid ever since...

well until last week... I call this a self inflicted wound...


Re: LVM raid1

Postby dryden » 2018-02-08 11:32

The problem is that LVM is documented in such a way that you *think* it will work because there are no warnings anywhere.

For instance there is a cache "splitcache" feature that will undo the "convert to cache" feature.

But what they didn't tell you is that it was originally intended only as a debugging feature: when you re-attach the split cache it won't check whether it's actually still synced, and it won't clean the cache; it will just attach it, completely corrupting the cached volume.

It behaved this way until LVM 2.02.171.

So this is a RELEASE of a product with debugging features marketed as fully-grown features.

Then when it bites you, they say "Yeah, but you need to upgrade to the latest and greatest because we just fixed that 2 days ago."

That's why I call it a toy system: they take no responsibility for their releases.

Re: LVM raid1

Postby w2vy » 2018-02-09 11:05

Thanks for that explanation... Clearly not being delivered as a production-ready product.

replacement disk arrives today... switching my /home over to mdadm

lvm was overkill; 2 TB will likely last me 10+ years (if I ever outgrow it)

as I approach 60... I could be dead in 20 yrs LOL
