[solved] Installing Debian to a Raid 5 setup

Ask for help with issues regarding installation of the Debian OS.
scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

[solved] Installing Debian to a Raid 5 setup

#1 Post by scythefwd »

OK... I know what I'm doing isn't the most basic setup, but this should be 100% doable.

I'm going to be rebuilding my system. I have 3x 1 TB NVMe drives that I want to put in RAID 5. I also have a 250 GB SSD that I'll be using as a boot drive (mounted as /boot).
I will be using Debian 11.5, the latest ISO from the site. I'll be enabling non-free after install, but that's only for my GPU drivers in Mesa...

I know I can't install GRUB to the RAID because it loads before the kernel does, and the RAID drivers are in the kernel. Hence my 250 GB SSD being used as my boot drive.

So from the installer, how do I create the RAID group and file system? Is there a specific FS that is better for this use case than others? (I'm looking at Btrfs right now... is there a drawback to this?)

I know I'm highly unlikely to be able to use the HW RAID controller on my mobo for the install... so I'm looking at software RAID.

I've already got home backed up, and pretty much nothing else is difficult to re-create/install. I'm a simple guy (and my downloads for the oddball packages that aren't in the repos, like Visual Studio Code, are already saved off).

I'm sure there are some gotchas doing this... so any advice you can give me on this before I dive in would be appreciated.

https://help.ggcircuit.com/knowledge/de ... deprecated - I have found this, but it says it's deprecated so I'm not sure it's accurate enough to use now...

Thanks,
Alan
Last edited by scythefwd on 2022-10-10 07:36, edited 1 time in total.

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Installing Debian to a Raid 5 setup

#2 Post by p.H »

scythefwd wrote: 2022-10-02 23:10 I know I can't install GRUB to the RAID because it loads before the kernel does, and the RAID drivers are in the kernel. Hence my 250 GB SSD being used as my boot drive.
GRUB has drivers for Linux software RAID. The only part which cannot be in software RAID is the EFI partition (needed for UEFI boot only) or the BIOS boot partition (for legacy boot on GPT).
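For illustration, each NVMe drive could be laid out along these lines (a sketch assuming UEFI boot and GPT; device names and sizes are examples):

Code: Select all

# Repeated on each of the three NVMe drives:
sgdisk -n 1:0:+512M -t 1:ef00 /dev/nvme0n1   # EFI System Partition (cannot be in RAID)
sgdisk -n 2:0:0     -t 2:fd00 /dev/nvme0n1   # Linux RAID member for the RAID 5 array
# GRUB's mdraid modules can then read the kernel and initramfs from the array itself.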

scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

Re: Installing Debian to a Raid 5 setup

#3 Post by scythefwd »

Thanks... EFI partition is easy enough. I still plan on having my SATA SSD be the boot drive. The difference in boot times between my NVMe and my SATA SSDs is small enough that it's a non-issue... This makes me rather happy. I'll let y'all know how it goes... hopefully I'll be up and running Sunday. Saturday is my attempt at it.

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Installing Debian to a Raid 5 setup

#4 Post by p.H »

scythefwd wrote: 2022-10-03 15:22 I still plan on having my SATA SSD be the boot drive
... and be a single point of failure.

CwF
Global Moderator
Posts: 2636
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 192 times

Re: Installing Debian to a Raid 5 setup

#5 Post by CwF »

p.H wrote: 2022-10-03 17:44
scythefwd wrote: 2022-10-03 15:22 I still plan on having my SATA SSD be the boot drive
... and be a single point of failure.
...and convenient to have a few $20 duplicates.

Reliable, cheap and easy is better than robust, expensive and complex.

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Installing Debian to a Raid 5 setup

#6 Post by p.H »

Not so convenient when you care about downtime. If you don't... why then use RAID at all?

An extra SSD for boot is more expensive than no extra SSD.

CwF
Global Moderator
Posts: 2636
Joined: 2018-06-20 15:16
Location: Colorado
Has thanked: 41 times
Been thanked: 192 times

Re: Installing Debian to a Raid 5 setup

#7 Post by CwF »

Well, I suppose it's in how you use the 'boot disc'; in any case, the presence of storage array(s) isn't really related.
It's amazing how long you can go without knowing the boot disc has an issue when the structure of the system doesn't touch it much. This single point doesn't affect uptime unless you load everything on it.
More than one way to slice and dice.

scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

Re: Installing Debian to a Raid 5 setup

#8 Post by scythefwd »

p.H wrote: 2022-10-03 19:11 Not so convenient when you care about downtime. If you don't... why then use RAID at all?

An extra SSD for boot is more expensive than no extra SSD.
To simply learn the technology and for a little speed boost. Because I can. I don't need to justify the use of RAID, honestly... The system has the ability, I have the hardware to do it, and I have chosen to do it... that is all that matters.

Who says I DON'T have another SSD I can swap in if my boot drive fails? I just rattled off what is currently installed (and honestly, not even all of it; I have SATA spinning-rust drives in there as well). Having multiple boot drives pointing to the same mount points is... overkill, especially since the drive itself will be pretty much unused, so it would have to come down to controller failure vs. write limits exceeded... Hell, for all it matters, the EFI partition can be backed up to a USB stick or an SD card and booted from that; no need for a spare SSD just for it. Honestly, I feel like using a 250 GB SSD just for boot is pretty wasteful too, but it is what it is. It's sitting unused as it is.

I don't care about downtime... it's a personal machine I'm using just to learn the different tech. I'll be RAIDing up my HDDs as well, but only once I have the system up and running, using mdadm.

scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

Re: Installing Debian to a Raid 5 setup

#9 Post by scythefwd »

CwF wrote: 2022-10-03 23:58 This single point doesn't affect uptime unless you load everything on it.
More than one way to slice and dice.
If this disk is only used for the EFI partition... it's got zero relevance for uptime, and the only downtime is if it fails at a reboot. Backing that partition up to a USB stick via dd (a few GB for EFI should be MORE than enough) would make that a moot point... the drive dies, and boom... put in the stick and keep going. How often is the EFI partition messed with by the system? It's basically a pointer to GRUB, right? But I do have a 500 GB spinning-rust drive I could use as a backup for the whole disk, as well as a 256 GB NVMe sitting on a USB adapter, so options upon options.
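For what it's worth, that kind of backup is a one-liner; a sketch below, with example device names (double-check them with lsblk before running any dd):

Code: Select all

# Assumed layout: the ESP is /dev/sda1 and the stick's first partition is /dev/sdX1.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT      # verify devices first!
sudo dd if=/dev/sda1 of=/dev/sdX1 bs=4M status=progress conv=fsync
# The stick can then be picked from the firmware boot menu if the SSD ever dies.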

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Installing Debian to a Raid 5 setup

#10 Post by p.H »

scythefwd wrote: 2022-10-04 00:04 To simply learn the technology
Then setting up boot redundancy with the internal disks will teach you even more.
Setting up multiple EFI partitions on internal disks would be more effective than using a USB stick.
scythefwd wrote: 2022-10-04 00:09 How often is the EFI partition messed with by the system?
Each time grub* or shim* packages are upgraded.
scythefwd wrote: 2022-10-04 00:09 It's basically a pointer to GRUB, right?
No, it contains part of GRUB (the EFI core image). Actually almost the whole of GRUB when installed for Secure Boot (the default).
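To make that concrete, on a typical Debian UEFI install with Secure Boot the ESP ends up holding something like the listing below (exact contents vary by release; inspect your own with ls):

Code: Select all

ls -R /boot/efi/EFI
# EFI/debian/shimx64.efi   <- signed first-stage loader (Secure Boot)
# EFI/debian/grubx64.efi   <- GRUB EFI binary
# EFI/debian/grub.cfg      <- small stub that chains to /boot/grub/grub.cfg
# EFI/debian/mmx64.efi, fbx64.efi, BOOTX64.CSV  <- shim helpers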

scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

Re: Installing Debian to a Raid 5 setup

#11 Post by scythefwd »

So it's possible to have 3 EFI partitions on the RAIDed disks, just before the RAIDed partitions... and keep a copy of the EFI files on each... one fails and it just boots off the next one in order. Fair enough. Then just have the SW RAID on the second partition of each drive. I had thought about that, but wasn't particularly worried about doing it that way. It's not really any different, setup-wise, from having it on the SSD, except for the extra step of keeping it maintained across all 3 drives. 6 of 1, half a dozen of the other. Each method has its ups and downs...
Setting up multiple EFI partitions on internal disks would be more effective than using a USB stick.
I don't think it'd be as effective as you imply. It will mask a failed NVMe disk, or a failing one, since a boot failure would be a very handy notification that the drive is no longer valid and it's time to replace it and rebuild the RAID. With the EFI and boot being on all 3, two could fail before I have serious issues (unless the RAID 5 would fail due to 1 disk being bad... at which point the RAID 5 setup is a failed implementation).

Having it on its own separate drive... I know immediately that the drive has failed, and my clone of it is ready to go (like I said, I have 256 GB USB sticks... actually it's an NVMe on a USB converter, and it's actually faster than my internal SATA SSD). I'm not particularly fussed either way... I still have to figure out automated disk health checks...
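For the record, if I do go the multiple-ESP route, the firmware entries for each copy can be registered something like this (a sketch; disk and partition numbers are examples):

Code: Select all

# One UEFI boot entry per ESP copy; the firmware walks BootOrder if the first disk is gone.
sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "debian (nvme0)" -l '\EFI\debian\shimx64.efi'
sudo efibootmgr -c -d /dev/nvme1n1 -p 1 -L "debian (nvme1)" -l '\EFI\debian\shimx64.efi'
sudo efibootmgr -c -d /dev/nvme2n1 -p 1 -L "debian (nvme2)" -l '\EFI\debian\shimx64.efi'
sudo efibootmgr -v   # check the resulting entries and BootOrder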

scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

Re: Installing Debian to a Raid 5 setup

#12 Post by scythefwd »

Welp, that went pretty well...

I have the EFI and my home directory sitting on my lil 250 GB drive. I've used dd to clone it to my NVMe USB stick and tested that as a boot device as a backup. I do have to make sure I'm not leaving it unplugged for a year, but it's likely I'm just gonna leave it slapped into the top of the system; I'm too lazy to climb under the desk to retrieve it lol.

3 disks, in RAID 5. GNOME Disks is showing a 9 GB/s read on it... which I think is wrong. KDiskMark is showing 500 MB/s, which is about 1/6th the speed I was seeing before, and I suspect it's also having issues reading things correctly. All I know is that it's copying multi-GB DVDs around the filesystem in seconds, and I can live with that.

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Installing Debian to a Raid 5 setup

#13 Post by p.H »

scythefwd wrote: 2022-10-04 19:48 I don't think it'd be as effective as you imply. It will mask a failed NVMe disk
Isn't the very purpose of RAID to mask the effects of a drive failure?
scythefwd wrote: 2022-10-04 19:48 or a failing one, since a boot failure would be a very handy notification that the drive is no longer valid
Are you kidding? If you really want the boot to fail when a drive fails, then do not use RAID. Or use RAID 0; it will be very "effective" at doing that too.
On the other hand if you just want to be warned of a drive failure without affecting the system, there are more efficient ways such as mdadm monitoring.
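On Debian, a minimal sketch of that monitoring (the mail address is an example; the mdadm package already ships a monitor service):

Code: Select all

# /etc/mdadm/mdadm.conf -- the monitor daemon mails this address on array events
MAILADDR root
# Manual checks / test notification:
sudo mdadm --detail /dev/md0                       # array state, failed/active members
sudo mdadm --monitor --scan --oneshot --test       # send a test event to verify delivery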

scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

Re: Installing Debian to a Raid 5 setup

#14 Post by scythefwd »

Isn't the very purpose of RAID to mask the effects of a drive failure?
Umm... no, or RAID 0 would never exist. It was originally about creating extra space with cheap disks. There are performance gains in this case that actually let me exceed the PCIe x4 link I'm using... The RAID 5 was so that if a drive dies, I can replace it... I'm already looking into mdadm monitoring... and checking the disks' SMART info daily.

Effectively, what I did is keep a clone... which will cause a whole... let me carry the one... 30-second difference in recovering from that failed boot drive.
On the other hand if you just want to be warned of a drive failure without affecting the system, there are more efficient ways such as mdadm monitoring.
I am using those as well, but I split home and EFI off because my main system is harder to back up as a full bootable system without just cloning all the drives. On the other hand, the EFI / home setup I'm using on the SATA SSD is easily small enough to keep a clone of around... just clone it once a month. That will probably happen via a cronjob.
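For reference, that kind of monthly clone job could look like the sketch below (device names and schedule are examples, and the source drive should be idle when it runs):

Code: Select all

# /etc/cron.d/clone-bootdrive -- hypothetical clone at 03:00 on the 1st of each month
# Source: 250 GB SATA SSD (sda); target: NVMe-over-USB drive (sdb). Verify with lsblk!
0 3 1 * * root /usr/bin/dd if=/dev/sda of=/dev/sdb bs=4M conv=fsync status=none && /usr/bin/logger "boot drive clone finished"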

Now I understand you wouldn't have done it the way I did... but you've not actually explained what benefits your way has. Personally, I don't care one whit about the system's redundancy... I literally just rebuilt it for shiggles to try something else out. Some people don't rely on their desktop to be 100% rock stable... or even present. You seem to have failed to grasp that my priorities aren't in line with yours, and you seem to be taking a bit of exception to this. To each their own, but honestly, I don't see any advantages of what you have suggested over what I have done.

You offered up that not putting the SSD in is cheaper, but if I already have the hardware, the cost is the same whether it's sitting on the shelf or in as my boot drive. Copying the EFI partition over to the beginning of each of the RAIDed drives still means I need to automate (script out) the cloning of the one I'm using to the ones not used. Yeah, it allows me to add an entry into the GRUB environment to enable booting from any of them in order... but that has the same effect as me having a cloned version of my SSD plugged into the USB. Maybe it's just time for you to accept we're not gonna see eye to eye on this and how I set up MY system... and stop. You're not contributing. You've not even explained why your way is superior... the closest you've gotten was "Setting up redundant EFI partitions will be more efficient".

Maybe I'm missing something, but a clone is a clone. I've not seen any mention of methods for actually keeping the three EFI partitions synced up...
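(The simplest thing I can think of is a little script along these lines; partition names are examples, and only the one ESP actually mounted at /boot/efi is the live copy.)

Code: Select all

#!/bin/sh
# sync-esp.sh -- hypothetical helper: copy the live ESP (/boot/efi) onto the other two copies.
set -e
MNT=/mnt/esp-copy
mkdir -p "$MNT"
for part in /dev/nvme1n1p1 /dev/nvme2n1p1; do
    mount "$part" "$MNT"
    # FAT-friendly flags: no ownership/permissions, 2-second timestamp granularity
    rsync -rt --delete --modify-window=1 /boot/efi/ "$MNT"/
    umount "$MNT"
done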

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Installing Debian to a Raid 5 setup

#15 Post by p.H »

scythefwd wrote: 2022-10-06 22:37 Umm... no, or RAID 0 would never exist. It was originally about creating extra space with cheap disks.
RAID 0 is an exception. RAID was originally about creating redundancy (the R in RAID). You do not need RAID to aggregate multiple disks' space; LVM is perfect for this and much more flexible.
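For comparison, a sketch of the LVM approach to pooling the same partitions (device names are examples; this gives no redundancy):

Code: Select all

pvcreate /dev/nvme0n1p2 /dev/nvme1n1p2 /dev/nvme2n1p2
vgcreate vg_data /dev/nvme0n1p2 /dev/nvme1n1p2 /dev/nvme2n1p2
lvcreate -n lv_data -l 100%FREE vg_data
mkfs.ext4 /dev/vg_data/lv_data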
scythefwd wrote: 2022-10-06 22:37 I don't care one whit about the system's redundancy.
Again, why then are you playing with RAID (other than RAID 0 or linear)? You wrote that you wanted to learn, but if you do not use RAID for what it is designed for, then you are not going to learn.

scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

Re: Installing Debian to a Raid 5 setup

#16 Post by scythefwd »

I learned how to install it on a RAID. I learned how to create a RAID in Linux. That's what I learned...

I won't touch LVM in a corporate environment... Then again, I'm not using software RAID in a corporate environment either; I'm using SANs and letting their storage pools handle the redundancy vs. doing it at the VM level... This was literally a for-shiggles, "hey, topic I know how to do elsewhere, let's figure it out on Linux too" thing.

I'm not using RAID for what it was intended for... no, I really am. I'm using it for its redundancy. The difference is I don't depend on that redundancy, so if it fails... it's not a major setback for me... at max, 2 hours of time to recover. This system is disposable... so it's the perfect platform to be playing with in regards to drive configuration, scripting, etc... As such, I don't treat it like a production environment.

I don't understand the "if you don't need it for production, don't do it" stance your comments imply. I'm going to learn it because I can... I'm not using RAID as it's intended... AND? I should remain ignorant of the subject because I don't actually need it? I don't get where you're coming from... but you do you... have a day. Don't particularly care what type of day... but I'm done with the conversation. I'm not learning anything from it...

scythefwd
Posts: 47
Joined: 2022-03-13 23:05
Been thanked: 2 times

Re: Installing Debian to a Raid 5 setup

#17 Post by scythefwd »

One last followup...

Btrfs was only giving me about 500 MB/s read and write, confirmed with dd, KDiskMark, and fio. I knew it was experimental, but that is horrible. GNOME Disks benchmarks the raw device rather than going through the filesystem... hence the difference.
Rebuilt it with ext4. I'm seeing a genuine read of 8179 MB/s and a write of 1405 MB/s. Writes are much slower than I expected... but still quick enough. Read is insane, and it cut my boot time down by about 30 seconds. It used to take me 45 seconds from UEFI splash screen to login prompt; now it takes me about 15 seconds.
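For anyone who wants to run the same kind of check, a generic fio sequential test looks something like this (target path, size, and queue depth are just examples):

Code: Select all

# Sequential read and write against a file on the mounted array
fio --name=seqread  --filename=/mnt/raid/fio.test --rw=read  --bs=1M --size=8G --direct=1 --iodepth=16 --ioengine=libaio
fio --name=seqwrite --filename=/mnt/raid/fio.test --rw=write --bs=1M --size=8G --direct=1 --iodepth=16 --ioengine=libaio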
I did reserve 5 GB on each of the drives like p.H suggested and have my EFI partition cloned to each... Actually a hair more effort (2 clone jobs vs. 1) than doing it via USB stick. Either I'm missing something or only one of the EFI partitions gets mounted (whichever was booted from), so dpkg-reconfigure grub-efi-amd64 still only sets things up on that one drive and I have to manually clone it to the others.
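One way I might automate that (untested sketch; the script path is hypothetical and refers to a sync script like the one earlier in the thread) is an apt hook, since grub/shim upgrades are what actually touch the ESP:

Code: Select all

# /etc/apt/apt.conf.d/99-sync-esp -- re-sync the spare ESP copies after every dpkg run
DPkg::Post-Invoke { "/usr/local/sbin/sync-esp.sh || true"; };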
Boot from all three EFI partitions tested.

I have installed smartmontools and am checking SMART health every hour via a cronjob. I have the test result, available spare, and the time the test ran being output in my Conky so I don't have to run the checks manually. Since this requires sudo / root, I have the cron running as root and then making the files it outputs 755. Honestly it should be 555, but I'm not sure if that would prevent me from overwriting the file every cycle. I have the files and script running from a spinning-rust drive so rewriting the same file over and over isn't messing with my wear leveling.
Definitely need to re-write the script... I just have it throwing commands one after another right now vs. running a foreach loop or a switch... either would make for a cleaner script. I'll probably play with that sometime during the week. I'm only writing this because I'm killing time until my cron job runs again so I can watch my display on my desktop update lol.
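Something like this is where I'm headed (rough sketch; the device list and output path are examples, not my actual script):

Code: Select all

#!/bin/sh
# smart-check.sh -- hypothetical hourly SMART dump for Conky to read
OUT=/mnt/rust/smart
mkdir -p "$OUT"
for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/sda; do
    name=$(basename "$dev")
    {
        date
        smartctl -H "$dev"                                       # overall health verdict
        smartctl -A "$dev" | grep -i 'available spare' || true   # NVMe spare capacity
    } > "$OUT/$name.txt"
    chmod 644 "$OUT/$name.txt"                                   # readable by Conky
done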

The install was only partially similar to the guide I posted in my first post.

I had to manually partition: create the partition layout, create the software RAID, and then select the partitions. I'm not using a volume group or LVM currently. If I set those up after setting up the partitions I wanted RAIDed, it said it would destroy them, and if I set up a volume group first, it wouldn't let me create partitions on the VG. Fluke, me not doing it right, or an error in the way the installer works... it's not enough of an issue for me to care, honestly. Is it 100% correct? I doubt it. Is it working as I expected it to? Yes. I'll live with it. I'm not completely ignorant on volume groups... I use them on my PowerVaults... though the UI is very different and they've got it dummy-proofed. Learned a bit doing this... It was fun.
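For anyone doing the same thing after installation instead of from the installer, the equivalent commands are roughly these (run as root; device names are examples):

Code: Select all

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/nvme0n1p2 /dev/nvme1n1p2 /dev/nvme2n1p2
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array definition
update-initramfs -u                              # so the initramfs can assemble it at boot
cat /proc/mdstat                                 # watch the initial sync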
