[Question Title]: HDFS fsck command shows health as corrupt for '/'
[Posted]: 2017-02-14 09:39:08
[Question Description]:

I have installed an open-source Hadoop 2.7.3 cluster (2 masters + 3 slaves) on AWS EC2 instances, and I am integrating it with Kafka Connect.

The cluster was set up last month, and the Kafka Connect setup was completed last week. Since then we have been able to land Kafka topic records on our HDFS and perform all kinds of operations on them.

Since yesterday afternoon I have started getting the errors below. When I copy a new file from local to the cluster, it appears and opens fine, but after a while the same kind of IOException starts showing up again:

17/02/14 07:57:55 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
17/02/14 07:57:55 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
17/02/14 07:57:55 WARN hdfs.DFSClient: DFS chooseDataNode: got # 1 IOException, will wait for 499.3472970548959 msec.
17/02/14 07:57:55 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
17/02/14 07:57:55 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
17/02/14 07:57:55 WARN hdfs.DFSClient: DFS chooseDataNode: got # 2 IOException, will wait for 4988.873277172643 msec.
17/02/14 07:58:00 INFO hdfs.DFSClient: No node available for BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
17/02/14 07:58:00 INFO hdfs.DFSClient: Could not obtain BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: java.io.IOException: No live nodes contain block BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking nodes = [], ignoredNodes = null No live nodes contain current block Block locations: Dead nodes: . Will get new block locations from namenode and retry...
17/02/14 07:58:00 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 8598.311122824263 msec.
17/02/14 07:58:09 WARN hdfs.DFSClient: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
17/02/14 07:58:09 WARN hdfs.DFSClient: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log No live nodes contain current block Block locations: Dead nodes: . Throwing a BlockMissingException
17/02/14 07:58:09 WARN hdfs.DFSClient: DFS Read
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log
        at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
        at java.io.DataInputStream.read(DataInputStream.java:100)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
        at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:107)
        at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:102)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
cat: Could not obtain block: BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 file=/test/inputdata/derby.log

When I run hdfs fsck /, I get:

Total size:    667782677 B
 Total dirs:    406
 Total files:   44485
 Total symlinks:                0
 Total blocks (validated):      43767 (avg. block size 15257 B)
  ********************************
  UNDER MIN REPL'D BLOCKS:      43766 (99.99772 %)
  dfs.namenode.replication.min: 1
  CORRUPT FILES:        43766
  MISSING BLOCKS:       43766
  MISSING SIZE:         667781648 B
  CORRUPT BLOCKS:       43766
  ********************************
 Minimally replicated blocks:   1 (0.0022848265 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     6.8544796E-5
 Corrupt blocks:                43766
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
 Number of racks:               1
FSCK ended at Tue Feb 14 07:59:10 UTC 2017 in 932 milliseconds


The filesystem under path '/' is CORRUPT

Which means that somehow all of my files have become corrupted.

I want to recover my HDFS and fix its corrupt health. I would also like to understand how such a problem can appear all of a sudden, and how to prevent it in the future.
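
For reference, the affected files can be enumerated with standard fsck options; a minimal sketch (/ and /test/inputdata/derby.log are the paths from the output above):

    # List every file that has a missing or corrupt block, one per line:
    hdfs fsck / -list-corruptfileblocks

    # Show the block IDs and expected datanode locations for a single file,
    # e.g. the derby.log reported in the client log above:
    hdfs fsck /test/inputdata/derby.log -files -blocks -locations

    # Check which datanodes the namenode currently considers live:
    hdfs dfsadmin -report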

[Comments]:

  • Did you modify the dfs.datanode.data.dir property on the running cluster, or delete the corresponding directory?
  • I changed the value of dfs.datanode.data.dir only after all the blocks had already been lost. My hadoop.tmp.dir is set to /opt/data, and I changed dfs.datanode.data.dir to /opt/data/dfs/data. But I only did that after I found that my blocks were already corrupted. This problem has happened twice in the last month, and I want to know why.
  • I found on the internet that the properties dfs.datanode.scan.period.hours and dfs.block.scanner.volume.bytes.per.second can be modified to prevent the HDFS block scan. I have set dfs.datanode.scan.period.hours to -1 and dfs.block.scanner.volume.bytes.per.second to 0 to disable block scanning (see the sketch after this list). But I am not sure this will help me. The link says that dfs.datanode.scan.period.hours defaults to 504 hours, i.e. 3 weeks, meaning a block scan happens every 504 hours. I remember the last corruption was about three weeks ago, so I modified these values.
  • But I am not sure this is a foolproof fix for my problem. Moreover, I could not find anywhere why such a problem occurs or how to prevent it.
  • Yes, I did. When the prompt asked "Do you want to reformat the filesystem at /opt/data/dfs/data", I chose 'N'.
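
For reference, a hedged sketch of what the scanner change described above looks like; hdfs getconf reads back the effective value, and the 504-hour (3-week) default matches the Hadoop 2.7 documentation:

    # Read back the effective scanner settings from the configuration:
    hdfs getconf -confKey dfs.datanode.scan.period.hours
    hdfs getconf -confKey dfs.block.scanner.volume.bytes.per.second

    # The change amounts to these hdfs-site.xml properties on each datanode:
    #   <property>
    #     <name>dfs.datanode.scan.period.hours</name>
    #     <value>-1</value>   <!-- negative value disables periodic block scans -->
    #   </property>
    #   <property>
    #     <name>dfs.block.scanner.volume.bytes.per.second</name>
    #     <value>0</value>    <!-- 0 disables the volume scanner entirely -->
    #   </property>

Note that the block scanner only detects corruption; disabling it does not by itself cause or prevent blocks from being lost.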

Tags: hadoop hdfs


[Solution 1]:

The entire filesystem (43,766 blocks) being marked corrupt is most likely due to the dfs.datanode.data.dir folder(s) being deleted outright, or their value being changed in hdfs-site.xml. Whenever you do that, make sure the Namenode is also formatted and restarted; a sketch of that sequence follows.
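
A minimal sketch of the format-and-restart sequence, assuming the stock 2.7.3 sbin scripts are on the PATH; note that formatting wipes all Namenode metadata, so it is only appropriate when the blocks are already gone for good:

    # Stop HDFS across the cluster (run on the namenode host):
    stop-dfs.sh

    # Reformat the namenode; this discards the fsimage/edits metadata,
    # and with it every previously known block reference:
    hdfs namenode -format

    # Bring HDFS back up; datanodes register against the fresh namespace:
    start-dfs.sh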

If you don't, the Namenode will still hold the block information and expect those blocks to be available on the Datanode(s). The scenario posted in the question looks like exactly that.

The blocks cannot be recovered if the data directories were deleted. If only the value of the dfs.datanode.data.dir property was changed in hdfs-site.xml and the Namenode has not been formatted, then restoring the old value in hdfs-site.xml will help.
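
A rough sketch of that rollback (the /opt/data paths come from the comments above, so treat them as examples): revert the property on every datanode to whichever directory still contains the block files, then restart only the datanode daemon so it re-reports its blocks:

    # In hdfs-site.xml on each datanode, restore the previous value, e.g.:
    #   <property>
    #     <name>dfs.datanode.data.dir</name>
    #     <value>/opt/data/dfs/data</value>  <!-- the dir that holds the blocks -->
    #   </property>

    # Then restart just the datanode daemon on each slave:
    hadoop-daemon.sh stop datanode
    hadoop-daemon.sh start datanode

    # Finally, re-run fsck to confirm the blocks are visible again:
    hdfs fsck /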

[Discussion]:
