I'm encountering an issue with my custom-built NAS running Debian 8, and I hope someone here is geeky enough to have a clue...
The NAS has a RAID5 array managed with mdadm (no spare drive). I had a 6 x 3 TB disk array, but space was getting too tight, so I added another 3 TB disk.
I did as usual (I started this RAID with 4 HDDs and have already added 2 more) and typed these commands:
sgdisk -R=/dev/sdi /dev/sdb    # copy sdb's partition table onto the new disk
sgdisk -G /dev/sdi             # give the copy fresh disk and partition GUIDs
mdadm --manage /dev/md0 --add /dev/sdi1    # add the new partition to the array
mdadm --grow --raid-devices=7 --backup-file=/root/grow_md0.bkp /dev/md0    # reshape from 6 to 7 devices
Everything went as usual, and after about 30 hours the last job (the long one, the reshape) finished. Then:
xfs_growfs /dev/md0            # grow the filesystem to fill the array
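For reference, the reshape progress can be followed while it runs; nothing exotic, just /proc/mdstat (the 60-second interval here is arbitrary):
# /proc/mdstat shows a progress bar, speed and ETA during the reshape
watch -n 60 cat /proc/mdstat
# the kernel's rebuild speed limits can be read (and raised) via sysctl
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max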
But now when I check the volume, I still see the 14 TB I had before instead of the 17-ish TB I should get.
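To spell out the arithmetic (usable RAID5 space is (n - 1) times the per-device size, one disk's worth going to parity):
# old array: (6 - 1) * 3 TB = 15 TB, i.e. ~13.6 TiB -- the "14T" df still reports
# new array: (7 - 1) * 3 TB = 18 TB, i.e. ~16.4 TiB -- what I expect to see
# cross-check against mdadm's own numbers: Used Dev Size (in KiB) * 6 data disks
echo $(( 2930263552 * 6 ))   # 17581581312 KiB, exactly the Array Size reported below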
After some thinking, the only thing I did differently this time was a wrong command: I first typed
mdadm --grow --raid-devices=6 --backup-file=/root/grow_md0.bkp /dev/md0
instead of
mdadm --grow --raid-devices=7 --backup-file=/root/grow_md0.bkp /dev/md0
but I got an error message and then typed the right one.
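For what it's worth, here is a quick way to confirm that the reshape itself completed (standard mdadm/procfs queries; the grep patterns are just the fields I care about):
# no "reshape = x% ..." progress line left in /proc/mdstat means it is done
grep reshape /proc/mdstat
# mdadm should report 7 raid devices and the grown array size
mdadm --detail /dev/md0 | grep -E 'Raid Devices|Array Size|State'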
Here are some logs:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[5] sdg1[9] sdh1[8] sdi1[7] sde1[4] sdd1[2] sdc1[6]
17581581312 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
unused devices: <none>
# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Oct 19 03:15:33 2012
Raid Level : raid5
Array Size : 17581581312 (16767.10 GiB 18003.54 GB)
Used Dev Size : 2930263552 (2794.52 GiB 3000.59 GB)
Raid Devices : 7
Total Devices : 7
Persistence : Superblock is persistent
Update Time : Fri Jan 3 15:41:29 2020
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : Melkor-Server:0
UUID : c1335b52:217970e8:407df77c:199a66ca
Events : 38463
Number Major Minor RaidDevice State
5 8 17 0 active sync /dev/sdb1
6 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 65 3 active sync /dev/sde1
7 8 129 4 active sync /dev/sdi1
8 8 113 5 active sync /dev/sdh1
9 8 97 6 active sync /dev/sdg1
It would be a great New Year's gift for me to have my array regain its right size, so thanks in advance!
EDIT:
It seems that the array is in fact the right size, but the filesystem is not (even after xfs_growfs). Here are some more logs:
# parted -l
Model: Linux Software RAID Array (md)
Disk /dev/md0: 18.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 18.0TB 18.0TB xfs
[...]
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 14T 14T 110G 100% /media/NAS
[...]
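One last thought: if I read the xfs_growfs man page correctly, it operates on a mounted filesystem and expects the mount point rather than the block device (at least on the xfsprogs version shipped with Debian 8, if I'm not mistaken), so maybe what I actually need is:
# grow the XFS filesystem to fill the enlarged md device; the argument is the mount point
xfs_growfs /media/NAS
# then df should finally report ~16-17T
df -h /media/NAS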