Debian Stretch testing on Raid0 problem

Ask for help with issues regarding the installation of the Debian OS.
GeckoS
Posts: 11
Joined: 2017-04-20 15:50

Debian Stretch testing on Raid0 problem

#1 Post by GeckoS »

Hello Everyone! My first post here.

I have a problem installing Debian Stretch (testing) on an Asus Maximus IX Hero Z270 motherboard with 2 * Samsung 960 Pro in RAID0: the installation process can't see the RAID volume. I tried adding the ' dmraid=true ' option, but it didn't help. I'm not sure it was needed in the first place, though.

If I set Boot menu -> Launch CSM (Compatibility Support Module) to Enabled in the UEFI BIOS, the Intel(R) Rapid Storage Technology entry disappears from the Advanced menu (that is where I can create the RAID volume in Intel RST). However, this time I can see the two Samsung 960 Pros as separate drives during the installation process.

Does anyone have any idea?

Thanks in advance.

User avatar
phenest
Posts: 1702
Joined: 2010-03-09 09:38
Location: The Matrix

Re: Debian Stretch testing on Raid0 problem

#2 Post by phenest »

Does your RAID controller require a driver? Otherwise it will just be used as a SATA controller.
ASRock H77 Pro4-M i7 3770K - 32GB RAM - Pioneer BDR-209D

GeckoS
Posts: 11
Joined: 2017-04-20 15:50

Re: Debian Stretch testing on Raid0 problem

#3 Post by GeckoS »

Thank you for your answer.

I wasn't aware it worked that way. In the manual, under Storage, I found 'Intel Z270 Chipset with RAID support', so does my installation need a driver for the Z270?
https://www.asus.com/us/ROG-Republic-Of ... fications/

steve_v
df -h | grep > 20TiB
Posts: 1400
Joined: 2012-10-06 05:31
Location: /dev/chair
Has thanked: 79 times
Been thanked: 175 times

Re: Debian Stretch testing on Raid0 problem

#4 Post by steve_v »

GeckoS wrote:I found Intel Z270 Chipset with RAID support so does my installation need driver for Z270?
https://www.asus.com/us/ROG-Republic-Of ... fications/
As far as I can tell, that's garden variety FakeRAID. There's no real hardware RAID controller and all the work is done by the BIOS and driver - i.e. the host CPU.
If you have some burning desire to use the BIOS RAID setup, you might want to look here. But I see you've been there, and honestly, mdraid is a much better idea.
Otherwise, turn off the RAID stuff in the BIOS so Debian can see the individual disks, then set up software (md)RAID on them. Software RAID is more flexible, and as you don't have a dedicated RAID processor, just as fast.
Even Intel says mdraid is the way to go on GNU/Linux; the only reason motherboards ship a RAID BIOS is that Windoze software RAID is utterly hopeless.
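
Something like this would do it once the installer (or a live system) can see the bare drives - just a rough sketch, the device names are examples rather than anything specific to your box:

Code: Select all

# create a 2-disk RAID0 array from one partition on each drive
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
# put a filesystem on it
mkfs.ext4 /dev/md0
# record the array so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

The Debian installer's partitioner can do the same thing from its "Configure software RAID" option, so you shouldn't need to drop to a shell at all.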

Aside: are you absolutely sure you want RAID0? That's striping, so if you lose one disk you lose all your data... You're sacrificing storage reliability for speed.
Do you really need 2x the write performance of an already fast SSD? I hope you have a robust backup strategy in place...
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.

GeckoS
Posts: 11
Joined: 2017-04-20 15:50

Re: Debian Stretch testing on Raid0 problem

#5 Post by GeckoS »

Thank you steve_v for this helpful information. Now I'm not sure it was a good idea to buy two smaller-capacity M.2 drives instead of one bigger one. I must consider which way to go - use software RAID, or replace them with one bigger drive, as I think I can still go that way. I've been using fakeRAID for some time with good results, which is why I'm in favor of this solution even if I'm sacrificing the safety of my data.
I have another option: write to the people working on Debian development and ask them to include drivers for the kind of hardware I have, as it will become more and more popular.

steve_v
df -h | grep > 20TiB
Posts: 1400
Joined: 2012-10-06 05:31
Location: /dev/chair
Has thanked: 79 times
Been thanked: 175 times

Re: Debian Stretch testing on Raid0 problem

#6 Post by steve_v »

GeckoS wrote:Thank you steve_v for this helpful information. Now I'm not sure it was a good idea to buy two smaller-capacity M.2 drives instead of one bigger one. I must consider which way to go - use software RAID, or replace them with one bigger drive, as I think I can still go that way.
If you do decide against RAID0, you can always just mount the second drive wherever you need the capacity, or use another RAID layout that improves reliability. No real need to replace them either way.
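
For the first option it's just a filesystem and an fstab entry - a rough sketch with example device names and mount point, adjust to taste:

Code: Select all

mkfs.ext4 /dev/nvme1n1p1      # example partition on the second drive
blkid /dev/nvme1n1p1          # note its UUID
mkdir /mnt/data               # or wherever you want the extra space
# then add a line like this to /etc/fstab:
# UUID=<uuid-from-blkid>  /mnt/data  ext4  defaults  0  2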
GeckoS wrote:I've been using fakeRAID for some time with good results, which is why I'm in favor of this solution even if I'm sacrificing the safety of my data.
If you're comfortable with RAID0, that's cool.
I only brought it up because I have encountered several people who believed that RAID0 is just a way to combine two drives - without realising that striping drives means that losing either one is total data loss.
GeckoS wrote:I have another option: write to the people working on Debian development and ask them to include drivers for the kind of hardware I have, as it will become more and more popular.
Why? I don't really see a need to support the BIOS fakeraid when the drives work fine with mdraid as plain old SATA devices. The only real functional difference between bios raid and mdraid is the interface for setting it up.
Why would you want to have your array tied to a particular motherboard chipset/bios when mdraid can do anything the bios raid can? (except work in Windoze of course).
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.

GeckoS
Posts: 11
Joined: 2017-04-20 15:50

Re: Debian Stretch testing on Raid0 problem

#7 Post by GeckoS »

I thought software RAID was much slower than fakeRAID, but I found this page https://delightlylinux.wordpress.com/20 ... is-faster/ where benchmarks show read and write speeds are roughly the same. There are also more advantages than just speed. I think I'll give it a shot today.
Thank you for your posts. They were very helpful in understanding a few things.
Maybe I'll do my own tests: 1 * non-RAID SSD vs 2 * SSD in mdadm RAID0.

GeckoS
Posts: 11
Joined: 2017-04-20 15:50

Re: Debian Stretch testing on Raid0 problem

#8 Post by GeckoS »

So I ended up with software RAID. BTW these NVMe disks are insane! Previously I had 2 * SATA II in RAID0 (BIOS fakeRAID), so I can really feel the difference now. I did some tests in the 'Gnome Disks' tool with 1 * NVMe non-RAID, but now I can't run benchmarks in that tool for comparison without unmounting a partition, which is not possible on my running system because all partitions ('/', swap and '/home') are currently in use.
I had some problems during installation with installing GRUB (to the EFI partition) because I tried to do it on a small RAID partition (about 600 MB). When I created a small non-RAID EFI partition, everything went smoothly. Below is the current layout of my disks:

w@sphinx:~$ lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 477G 0 disk
├─nvme0n1p1 259:2 0 571M 0 part /boot/efi
├─nvme0n1p2 259:3 0 14G 0 part
│ └─md0 9:0 0 27,9G 0 raid0 /
├─nvme0n1p3 259:4 0 2,8G 0 part
│ └─md1 9:1 0 5,6G 0 raid0 [SWAP]
└─nvme0n1p4 259:5 0 413G 0 part
└─md2 9:2 0 825,9G 0 raid0 /home
nvme1n1 259:1 0 477G 0 disk
├─nvme1n1p1 259:6 0 571M 0 part
├─nvme1n1p2 259:7 0 14G 0 part
│ └─md0 9:0 0 27,9G 0 raid0 /
├─nvme1n1p3 259:8 0 2,8G 0 part
│ └─md1 9:1 0 5,6G 0 raid0 [SWAP]
└─nvme1n1p4 259:9 0 413G 0 part
└─md2 9:2 0 825,9G 0 raid0 /home

'nvme0n1p1' and 'nvme1n1p1' are plain partitions; the first is the EFI partition and the second is unused. The second one wasn't strictly necessary, but I wanted both disks to have identical layouts.

steve_v
df -h | grep > 20TiB
Posts: 1400
Joined: 2012-10-06 05:31
Location: /dev/chair
Has thanked: 79 times
Been thanked: 175 times

Re: Debian Stretch testing on Raid0 problem

#9 Post by steve_v »

Just quietly, you don't need to RAID0 your swap. Simply add two (or more) swap partitions with equal priority and the kernel will stripe across them automatically. There's probably no harm in it, but it's unnecessary complexity.
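
Something like this in /etc/fstab is all it takes - a sketch with example partition names; the matching 'pri=' values are what make the kernel spread swap across both:

Code: Select all

# two independent swap partitions, same priority -> the kernel round-robins between them
/dev/nvme0n1p3  none  swap  sw,pri=10  0  0
/dev/nvme1n1p3  none  swap  sw,pri=10  0  0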

For your boot / EFI partition you'll need something the BIOS can understand. That means (as you have discovered) a plain partition... Or RAID1 (but not RAID0). This works fine for /boot, but I haven't tried it with EFI.
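
If you do want redundancy there, the usual trick is RAID1 with the metadata at the end of the members, so the firmware and bootloader still see what looks like a plain partition. A rough sketch with example devices (and, again, untested for an EFI system partition):

Code: Select all

# metadata 1.0 sits at the end of each member, so the start of the partition
# still looks like an ordinary filesystem to the firmware
mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=1.0 /dev/nvme0n1p1 /dev/nvme1n1p1
mkfs.ext4 /dev/md3        # fine for /boot; an ESP would need mkfs.vfat -F 32 instead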

And yeah, gnome-disks won't do a read-write benchmark on a mounted filesystem, for obvious reasons. There are plenty of tools that can though.
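
fio is one such tool - it benchmarks against an ordinary file, so nothing has to be unmounted. A rough sketch (from the 'fio' package; path and sizes are just examples):

Code: Select all

# sequential read, then sequential write, on the mounted /home, bypassing the page cache
fio --name=seqread  --filename=/home/fio.test --rw=read  --bs=1M --size=4G --direct=1 --ioengine=libaio --iodepth=32
fio --name=seqwrite --filename=/home/fio.test --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio --iodepth=32
rm /home/fio.test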
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.

GeckoS
Posts: 11
Joined: 2017-04-20 15:50

Re: Debian Stretch testing on Raid0 problem

#10 Post by GeckoS »

It's good to know that about SWAP - I didn't know that.

I'll have to look for some other benchmarking tools then. However, when I was copying a 60 GB file from the single non-RAID NVMe to itself, Caja (I think) was showing 1.9-2.0 GB/s. Now with md RAID I get a continuous 2.1 GB/s. That's not the big improvement I was expecting. Maybe it's some kind of limitation of the SSD's controller; if so, then maybe fakeRAID would have been a better choice.

xoxo
Posts: 2
Joined: 2017-10-06 22:46

Re: Debian Stretch testing on Raid0 problem

#11 Post by xoxo »

I have the same problem with a Gigabyte AERO 14 v7. When I select the Intel RST option in the BIOS (instead of AHCI) and create a RAID0 volume over the two NVMe drives, the Debian installer does not recognize the drives. If I select the AHCI option, the installer shows both drives.

Documentation on Intel RST: https://www.intel.com.au/content/dam/ww ... -paper.pdf

I am not sure if I understand the document correctly. If not please feel free to correct me.

Intel recommends MD RAID, but in a different context. They recommend MD RAID over DM RAID, yet with the RST option enabled in the BIOS. So it would be incorrect to say that RST with MD RAID provides no benefit over AHCI with MD RAID. Quoting from the document:
The primary benefit of using Intel RST is the presence of an Intel RST option ROM, so the system can boot directly from any Intel RST RAID volume type instead of creating a dedicated partition or using a RAID superblock partition to store the bootloader.
I also read on the web that RST provides benefits in terms of power management and a different way of queuing data. However, I was not able to find any documentation on this, so I'm not sure if it's true.

Intel RST is also probably the only option for dual boot with Windows.

Here are my conclusions.
1. I am not clear on whether Intel recommends RST with MD RAID or AHCI with MD RAID. It seems they recommend RST with MD RAID in the document, but in that case the drives do not show up in Debian during the install.

2. If the recommendation is to use AHCI with MD RAID, should we create the MD RAID container for metadata and then the MD volumes, as described in the document? (A rough sketch of that is below.)
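
For what it's worth, the container-then-volume pattern from the document looks roughly like this with mdadm - a sketch only, with example device names, and whether the NVMe drives are even usable as members in RST mode is exactly the open question here:

Code: Select all

# create an IMSM (Intel RST metadata) container over the two drives
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# then create the actual RAID0 volume inside that container
mdadm --create /dev/md/vol0 --level=0 --raid-devices=2 /dev/md/imsm0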

xoxo
Posts: 2
Joined: 2017-10-06 22:46

Re: Debian Stretch testing on Raid0 problem

#12 Post by xoxo »

In addition to my previous post: it seems that mdadm can see the Intel RST platform during the install; however, my NVMe SSD drives (Samsung EVO 960 500GB) are not detected.

Code: Select all

mdadm --detail-platform
Platform : Intel(R) Matrix Storage Manager
Version : 15.17.0.3054
RAID Levels : raid0 raid1 raid10 raid5
Chunk Sizes : 4k 8k 16k 32k 64k 128k
2TB volumes: supported
2TB disks: supported
Max Disks : 11
Max Volumes : 2 per array, 4 per controller
I/O Controller : /sys/devices/pci0000:00/0000:00:17.0 (SATA)
Port0 : -no device attached -
Port1 : -no device attached -
Port2 : -no device attached -
Port3 : -no device attached -
Port4 : -no device attached -
Port5 : -no device attached -
Port6 : -no device attached -
...
Port15: -no device attached -

I found a similar bug logged for Red Hat: https://bugzilla.redhat.com/show_bug.cgi?id=1405321. It seems this might be a BIOS issue. I will try to open a case with Gigabyte and see what they say.

So, in summary, I believe that Linux MD RAID supports Intel RST and its volumes configured in the BIOS, as per the document in my previous post. However, the drives not being detected could be related to BIOS bugs (i.e. the hardware manufacturer).
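
Before pinning it all on the BIOS, a couple of quick checks from the installer shell might narrow it down - purely illustrative, and the first one assumes nvme-cli is available:

Code: Select all

nvme list                        # does the kernel see the NVMe namespaces at all?
lsblk -d -o NAME,MODEL,SIZE      # block devices as the installer sees them
dmesg | grep -i -e nvme -e vmd   # driver messages; RST mode can hide NVMe drives from the standard driver

If the drives don't show up even at that level, that would fit the theory that RAID/RST mode remaps them away from the normal NVMe path.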

GeckoS
Posts: 11
Joined: 2017-04-20 15:50

Re: Debian Stretch testing on Raid0 problem

#13 Post by GeckoS »

Hi xoxo!

I can't give much input on this topic right now, but I can share a few things I've come across over the last few months. I generally switched to Arch because I sometimes game in a VM with Windows and need more recent versions of qemu, virt-manager and libvirt, and Arch ships quite recent versions of those. Of course I could compile the binaries myself under Debian, but I'm not so good at that. I still think Debian is a great distro.

I once ran some tests on my second machine with two ADATA SU800 128 GB SSDs in RAID0 with mdadm, with IRST disabled in the UEFI. I tested speeds with Gnome Disks (the gnome-disk-utility package) and got about 1 GB/s for both reads and writes with a 1 GB sample size. I know it's not the best testing utility and results from other software may differ quite a bit, but the speeds were good - that's why I think mdadm does a good job independently of IRST.

I also tested the two NVMes in my main PC in RAID0 with mdadm, again with IRST disabled. All my partitions are aligned to 4096 bytes. For chunk=512KB I got these results:

- Gnome Disks: reads: 6.7 GB/s, writes: 1.2 GB/s (1 GB sample)
- dd (I found these methods here: https://askubuntu.com/questions/87035/h ... erformance ):
$ time sh -c "dd if=/dev/zero of=testfile bs=1G count=1 && sync"   - gives writes ~ 1.3 GB/s

# echo 3 > /proc/sys/vm/drop_caches   (run as root)
$ time sh -c "dd if=testfile of=/dev/null bs=1G count=1 && sync"   - gives reads ~ 3.55 GB/s

In my opinion these speeds should be roughly doubled for reads and quadrupled for writes (a single Samsung 960 Pro 512GB can do about 3.5 GB/s reads and 2.1 GB/s writes), but I think NVMe technology is still too new, or maybe the NVMe drivers for Linux just aren't good enough yet.

I'm not sure now (50/50, as I've tried many setups on my PCs over the past few months), but I think I did get IRST and the NVMes working under Arch (with dmraid, not mdadm). One thing I am sure of is that I had to stop the mdadm service to make dmraid work properly and discover IRST's RAID0. However, I can't test any of this now because I don't want to destroy my current system. Also, I don't think I ever got IRST and mdadm working together, as otherwise I would probably be using that now.

I hope this can help you somehow.
