Clean, degraded raid

Linux Kernel, Network, and Services configuration.
CXdur
Posts: 1
Joined: 2019-01-11 01:32

Clean, degraded raid

#1 Post by CXdur »

Hey guys!

I'm having some issues with the RAID devices on a dedicated server I'm renting.

Here is the output of cat /proc/mdstat:

Code: Select all

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md3 : active raid1 sdb3[1] sda3[0]
      1932506048 blocks [2/2] [UU]
      bitmap: 3/15 pages [12KB], 65536KB chunk

md2 : active raid1 sdb2[1] sda2[0]
      20478912 blocks [3/2] [UU_]

unused devices: <none>
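
(For reference, the [3/2] [UU_] on the md2 line means the array expects three member devices but only two are active. A generic way to see the same thing in more detail, nothing specific to this host, is:)

Code: Select all

# Show the expected member count and current state of md2
mdadm --detail /dev/md2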
And this is an e-mail I received 20 hours ago, after I had tried to solve the problem myself (sdb2 was part of md2):

Code: Select all

A DegradedArray event had been detected on md device /dev/md2.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md3 : active raid1 sdb3[2] sda3[0]
      1932506048 blocks [2/1] [U_]
      [==>..................] recovery = 13.2% (256509184/1932506048) finish=161.8min speed=172552K/sec
      bitmap: 7/15 pages [28KB], 65536KB chunk

md2 : active raid1 sdb2[1] sda2[0]
These are the RAID device options reported for /dev/md2:

Code: Select all

Device file             /dev/md2
UUID                    cb678fae:76c936f2:a4d2adc2:26fd5302
RAID level              RAID1 (Mirrored)
Filesystem status       For mounting on /
Usable size             20478912 blocks (19.53 GB)
Persistent superblock?  Yes
Chunk size              Default
RAID errors             1 disks have failed
RAID status             active, degraded
Partitions in RAID      SATA device A partition 2
                        SATA device B partition 2
We notified the hosting company we rent the server from, and they scheduled an intervention, which is now complete. They didn't give us any information about the status of the drives, but since they didn't replace any hardware, I assume the problem lies in the configuration rather than the hardware.
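
To sanity-check that assumption, it may be worth looking at the kernel log and the drives' SMART status (generic commands; smartctl comes from the smartmontools package and may need installing first):

Code: Select all

# Recent kernel messages mentioning md2 or either drive
dmesg | grep -iE 'md2|sd[ab]' | tail -n 50

# Quick SMART health check of both physical disks
smartctl -H /dev/sda
smartctl -H /dev/sdb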

Is there any way I can rebuild the array and fix this issue? Here is the output of mdadm --examine on /dev/sda2:

Code: Select all

root@ns326730:/home/sondre# mdadm --examine  /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : cb678fae:76c936f2:a4d2adc2:26fd5302
  Creation Time : Wed Feb 14 22:50:07 2018
     Raid Level : raid1
  Used Dev Size : 20478912 (19.53 GiB 20.97 GB)
     Array Size : 20478912 (19.53 GiB 20.97 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 2

    Update Time : Fri Jan 11 03:46:31 2019
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 6f237178 - correct
         Events : 71172


      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2

   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
   2     2       0        0        2      faulty removed
Any advice on how I can resolve this issue would be really appreciated!

EDIT:

lsblk output

Code: Select all


root@ns326730:/# lsblk
NAME    MAJ:MIN RM    SIZE RO TYPE  MOUNTPOINT
sdb       8:16   0    1.8T  0 disk
├─sdb4    8:20   0    511M  0 part  [SWAP]
├─sdb2    8:18   0   19.5G  0 part
│ └─md2   9:2    0   19.5G  0 raid1 /
├─sdb3    8:19   0    1.8T  0 part
│ └─md3   9:3    0    1.8T  0 raid1 /home
└─sdb1    8:17   0 1004.5K  0 part
sda       8:0    0    1.8T  0 disk
├─sda4    8:4    0    511M  0 part  [SWAP]
├─sda2    8:2    0   19.5G  0 part
│ └─md2   9:2    0   19.5G  0 raid1 /
├─sda3    8:3    0    1.8T  0 part
│ └─md3   9:3    0    1.8T  0 raid1 /home
└─sda1    8:1    0 1004.5K  0 part
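
lsblk only shows two physical disks (sda and sdb). It may also help to check how the arrays are declared in the mdadm configuration (standard Debian location, assuming it hasn't been moved):

Code: Select all

# How the arrays are declared on this Debian install
grep ARRAY /etc/mdadm/mdadm.conf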

p.H
Global Moderator
Posts: 3049
Joined: 2017-09-17 07:12
Has thanked: 5 times
Been thanked: 132 times

Re: Clean, degraded raid

#2 Post by p.H »

md2 expects to have 3 members, but there are only 2 disks. Is this server supposed to have 3 disks?
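
If the answer is no and this machine is only meant to have two disks, one common way to bring md2 back to a clean state is to reduce the number of member slots it expects. Roughly (a generic mdadm sketch, verify against your own setup before running):

Code: Select all

# Confirm how many member slots md2 currently expects
mdadm --detail /dev/md2

# If only two disks are intended, shrink the expected member count to 2
mdadm --grow /dev/md2 --raid-devices=2

# Verify the result
cat /proc/mdstat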
