I successfully set up two 1TB hard drives as a RAID 1 array with mdadm; the member partitions were /dev/sda1 and /dev/sdb1. Later I added two 2TB hard drives to build a second, separate RAID 1 array, and after that mdadm failed to assemble my original 1TB array. As far as I can tell, the kernel reassigned the /dev/sda and /dev/sdb names to the two 2TB drives, so mdadm no longer found the devices listed in /etc/mdadm/mdadm.conf. I worked around it by editing /etc/mdadm/mdadm.conf to point at the drives' new names, but that doesn't feel like a stable fix: what if I boot with yet another drive attached and the assignments shuffle again? Am I misunderstanding something?
Here is the current output of lsblk:
Code:
$ lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    1   1.8T  0 disk
sdb         8:16   1   1.8T  0 disk
sdc         8:32   0 119.2G  0 disk
├─sdc1      8:33   0  22.2G  0 part  /
├─sdc2      8:34   0     1K  0 part
├─sdc5      8:37   0   7.7G  0 part  /var
├─sdc6      8:38   0     2G  0 part  [SWAP]
├─sdc7      8:39   0   1.4G  0 part  /tmp
└─sdc8      8:40   0    86G  0 part  /home
sdd         8:48   0 931.5G  0 disk
└─sdd1      8:49   0 931.5G  0 part
  └─md127   9:127  0 931.4G  0 raid1 /RAIDStorage
sde         8:64   0 931.5G  0 disk
└─sde1      8:65   0 931.5G  0 part
  └─md127   9:127  0 931.4G  0 raid1 /RAIDStorage
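For reference, a name-independent view of the same members can be pulled like this (a sketch only; these need root and the actual disks, and the /dev/sdd1 and /dev/sde1 paths are taken from the lsblk output above):

```shell
# RAID-member UUIDs survive /dev/sdX renames; blkid shows them.
blkid /dev/sdd1 /dev/sde1

# udev also keeps stable per-drive symlinks that don't shuffle
# between boots the way sda/sdb/sdc letters can:
ls -l /dev/disk/by-id/
```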
And here is my current /etc/mdadm/mdadm.conf:

Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE /dev/sdb1 /dev/sdc1 <- THIS WAS MY ORIGINAL SETUP
DEVICE /dev/sdd1 /dev/sde1
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
# This configuration was auto-generated on Thu, 30 Aug 2018 22:10:04 -0400 by mkconf
ARRAY /dev/md/rdrive metadata=1.2 name=freyland:rdrive UUID=90ba462f:989bf4dc:acbc60c2:ceb104d2
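For comparison, here is a sketch of a rename-proof version of the file (assuming mdadm's standard behavior, not tested on this machine): let mdadm scan every partition and match arrays by UUID alone, so the DEVICE line never has to track kernel names. The UUID below is the one already present in the existing ARRAY line.

```shell
# /etc/mdadm/mdadm.conf -- UUID-based sketch
# Scan all partitions in /proc/partitions instead of naming devices:
DEVICE partitions
HOMEHOST <system>
MAILADDR root
# mdadm matches this array by UUID, regardless of /dev/sdX letters:
ARRAY /dev/md/rdrive metadata=1.2 name=freyland:rdrive UUID=90ba462f:989bf4dc:acbc60c2:ceb104d2

# ARRAY lines for all currently assembled arrays can be regenerated
# with:   mdadm --detail --scan
# On Debian/Ubuntu, refresh the initramfs afterwards so the copy of
# mdadm.conf baked into it is updated:   update-initramfs -u
```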