How do I reactivate my MDADM RAID5 array?



I've just moved house, which involved dismantling my server and re-connecting it. Since doing so, one of my MDADM RAID5 arrays is showing up as inactive:

root@mserver:/tmp# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md1 : active raid5 sdc1[1] sdh1[2] sdg1[0]
      3907023872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : inactive sdd1[0](S) sdf1[3](S) sde1[2](S) sdb1[1](S)
      3907039744 blocks

unused devices: <none>

From what I can see, it has found all the disks but for some reason doesn't want to use them.

So what do the (S) labels mean, and how can I tell MDADM to start using the array again?

[Edit] I've just tried stopping and assembling the array with -v:

root@mserver:~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0

root@mserver:~# mdadm --assemble --scan -v
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: added /dev/sdd1 to /dev/md0 as 0 (possibly out of date)
mdadm: added /dev/sdb1 to /dev/md0 as 1 (possibly out of date)
mdadm: added /dev/sdf1 to /dev/md0 as 3 (possibly out of date)
mdadm: added /dev/sde1 to /dev/md0 as 2
mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.

..and cat /proc/mdstat looks no different.

[Edit2] Not sure if it helps, but this is the result of examining each disk:

root@mserver:~# mdadm --examine /dev/sdb1

/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2f331560:fc85feff:5457a8c1:6e047c67 (local to host mserver)
  Creation Time : Sun Feb  1 20:53:39 2009
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sat Apr 20 13:22:27 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 6c8f71a3 - correct
         Events : 955190

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       8       33        3      active sync   /dev/sdc1

root@mserver:~# mdadm --examine /dev/sdd1

/dev/sdd1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2f331560:fc85feff:5457a8c1:6e047c67 (local to host mserver)
  Creation Time : Sun Feb  1 20:53:39 2009
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 2
Preferred Minor : 0

    Update Time : Sat Apr 20 18:37:23 2013
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 6c812869 - correct
         Events : 955205

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8      113        0      active sync   /dev/sdh1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       0        0        3      faulty removed

root@mserver:~# mdadm --examine /dev/sde1

/dev/sde1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2f331560:fc85feff:5457a8c1:6e047c67 (local to host mserver)
  Creation Time : Sun Feb  1 20:53:39 2009
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 2
Preferred Minor : 0

    Update Time : Sun Apr 21 14:00:43 2013
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 6c90cc70 - correct
         Events : 955219

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       97        2      active sync   /dev/sdg1

   0     0       0        0        0      removed
   1     1       0        0        1      faulty removed
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       0        0        3      faulty removed

root@mserver:~# mdadm --examine /dev/sdf1

/dev/sdf1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2f331560:fc85feff:5457a8c1:6e047c67 (local to host mserver)
  Creation Time : Sun Feb  1 20:53:39 2009
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sat Apr 20 13:22:27 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 6c8f71b7 - correct
         Events : 955190

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       33        3      active sync   /dev/sdc1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       8       33        3      active sync   /dev/sdc1

I have some notes which suggest the drives were originally assembled as follows:

md0 : active raid5 sdb1[1] sdc1[3] sdh1[0] sdg1[2]
      2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

[Edit3]

Looking through the logs, it appears the following happened (based on the Update Time in the --examine results):

  1. sdb and sdf were knocked out some time after 13:22 on the 20th
  2. sdd was knocked out some time after 18:37 on the 20th
  3. the server was shut down some time after 14:00 on the 21st

Given that (apparently) two disks went down at the same time, I think it should be reasonably safe to assume the array wasn't written to after that point(?), so it should be relatively safe to force it to re-instate in the correct order? What's the safest command to do that with, and is there a way to do it without writing any changes?
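
Editor's note: for comparing the superblocks without touching anything, mdadm --examine is purely read-only. A minimal sketch (device names taken from this post) that pulls out just the fields that matter here:

# Read-only: --examine never writes, so this is safe to repeat as often as needed
for d in /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Update Time|Events|State :'
done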

Answers:


28

The S labels mean the disks are regarded as "spare". You should try stopping and re-starting the array:

  mdadm --stop /dev/md0
  mdadm --assemble --scan

to re-assemble the array. If that doesn't work, you may need to update your mdadm.conf; see, for example, this question for details on how to do that.
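
Editor's note: if mdadm.conf does need updating, the usual approach on Debian/Ubuntu-style systems looks roughly like the sketch below; the config path and the update-initramfs step are assumptions, not something this answer spells out:

mdadm --examine --scan                           # preview the ARRAY lines, writes nothing
mdadm --examine --scan >> /etc/mdadm/mdadm.conf  # append them to the config
update-initramfs -u                              # Debian/Ubuntu: refresh initramfs so the array is found at boot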


Tried that (and added -v to see what was going on) but all the disks which should be added got responses along these lines: mdadm: /dev/sdb1 is busy - skipping
Jon Cage

Just stop md0 and re-assemble the array
krizna 2013

Tried that too - still no luck (see my edit)
Jon Cage

2
OK, it seems like it thinks the RAID wasn't shut down correctly. If you're sure it wasn't, try -R and -f. If that fails as well, re-create the array using mdadm --create /dev/md0 --assume-clean <original create options> /dev/sd[dbfe]1. Be warned: all of these options may destroy your data.
Stefan Seidel 2013
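
Editor's note: as a last resort only, the re-create Stefan mentions would look roughly like the sketch below, with the parameters filled in from the --examine output above (metadata 0.90, level 5, 4 devices, 64K chunk, left-symmetric) and the slot order as reported in this post. The device order is an assumption; getting any of it wrong can destroy the data, so double-check everything before running anything like this:

# LAST RESORT -- only if forced assembly fails; verify every value first
mdadm --create /dev/md0 --assume-clean \
      --metadata=0.90 --level=5 --raid-devices=4 \
      --chunk=64 --layout=left-symmetric \
      /dev/sdd1 /dev/sdb1 /dev/sde1 /dev/sdf1   # slots 0,1,2,3 per --examine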

3
Well, I went with mdadm --assemble --scan --force, which did the trick. The array is back up and running and I have access to my data :)
Jon Cage 2013
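
Editor's note: after a forced assemble it is sensible to verify before mounting read-write. A small sketch (the filesystem check assumes the array holds a single ext-style filesystem, which the post does not state):

mdadm --detail /dev/md0   # confirm all four members are active/sync
fsck -n /dev/md0          # -n: check only, make no changes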

9

This question is a bit old, but the answer might help someone facing a similar situation. Looking at the event counts from the mdadm --examine output you've provided, they seem close enough (955190 for sdb1 and sdf1, 955219 for sde1, and 955205 for sdd1). If the difference is below 40-50, that's OK, and in that case the recommended course of action is to assemble your array manually, forcing mdadm to accept the drives despite the event count difference:

Stop the array:

mdadm --stop /dev/md0

Then try to re-assemble the array manually:

mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1

Check the status of the array and verify that the drive list/structure is OK (the bottom of the command output shows which drive is in which state and at which position in the array):

mdadm --detail /dev/md0

If the structure is OK, check the rebuilding progress:

cat /proc/mdstat
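
Editor's note: if a resync does start, something like the line below keeps an eye on it (watch is assumed to be available):

watch -n 5 cat /proc/mdstat   # refresh the rebuild progress every 5 seconds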

0

You can activate RAID md0 with the following command:

mdadm -A /dev/md0

and this command to update the mdadm.conf file:

mdadm --examine --scan >> /etc/mdadm/mdadm.conf
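
Editor's note: appending with >> blindly can leave duplicate or stale ARRAY lines, so it may be safer to review first; a small sketch:

mdadm --examine --scan               # print the ARRAY lines it would add, writes nothing
grep '^ARRAY' /etc/mdadm/mdadm.conf  # compare against what is already configured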