Hadoop HA: After Starting a CDH 5.15.1 Hadoop Cluster, Both NameNodes Are in Standby Mode
Author: 尹正杰
Copyright notice: This is an original work. Reproduction without permission is prohibited; violators will be held legally responsible.
When Friday comes around, I imagine everyone is in a great mood, right? I certainly was: only one hour left before the end of the workday. During that one hour, though, my emotions went through quite the roller coaster. Not because of the weekend, but because, as a big-data operations engineer, I have to handle all kinds of problems that come up in the company's big-data ecosystem.
And sure enough, I ran into a bizarre problem that took me nearly an hour to resolve. A bit embarrassing, honestly. If you hit the same problem, don't panic! Congratulations: you only need about one minute to read through my troubleshooting process here, and then probably less than ten minutes to fix the problem yourself, because I have already filled in this pit for you.
1. The Environment That Led to the Pitfall
1>. Yesterday at around 15:30 I tuned the operating system, changing a large batch of kernel parameters, listed below. (If you find anything unreasonable about these parameters, please leave a comment and correct me.)
net.ipv6.conf.all.disable_ipv6 = 1
net.core.rmem_default = 256960
net.core.rmem_max = 513920
net.core.wmem_default = 256960
net.core.wmem_max = 513920
net.core.netdev_max_backlog = 2000
net.core.somaxconn = 2048
net.core.optmem_max = 81920
net.ipv4.tcp_mem = 131072 262144 524288
net.ipv4.tcp_rmem = 8760 256960 4088000
net.ipv4.tcp_wmem = 8760 256960 4088000
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 2048
vm.dirty_ratio = 80
vm.dirty_background_ratio = 5
vm.swappiness = 1
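For reference, here is a minimal sketch of how one might apply and verify parameters like these, assuming they were appended to /etc/sysctl.conf (the exact file and rollout procedure on my cluster may differ):

    # Reload /etc/sysctl.conf so the new values take effect
    sysctl -p

    # Spot-check the live value of a single key
    sysctl net.ipv4.tcp_tw_recycle

    # List every keepalive-related key currently in effect
    sysctl -a 2>/dev/null | grep tcp_keepalive

    # If a change needs to be backed out quickly, write the old value directly
    # (and also remove the line from /etc/sysctl.conf so the rollback survives a reboot)
    sysctl -w net.ipv4.tcp_tw_recycle=0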
2>. Developers reported that Spark and MapReduce jobs were running painfully slowly
The developers said that, compared with before, the same jobs that used to finish in under five minutes were now taking twenty minutes, and some jobs could not even be submitted, failing over and over with errors. Oh boy. I muttered to myself that this round of parameter tuning had gone rather badly: instead of making the cluster run better, I had made it run worse. The developers also sent me the relevant log records, shown below:
[root@rsync115 against_cheating_extract]# yarn logs -applicationId application_1542888832576_2698 > application_1542888832576_2698
18/11/23 15:48:02 INFO client.RMProxy: Connecting to ResourceManager at calculation101.aggrx/10.1.1.101:8032
18/11/23 15:49:06 WARN hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.1.3.115:48474 remote=/10.1.1.119:50010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2305)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:430)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:890)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:768)
        at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:377)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:666)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:904)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:963)
        at java.io.DataInputStream.readFully(DataInputStream.java:195)
        at java.io.DataInputStream.readLong(DataInputStream.java:416)
        at org.apache.hadoop.io.file.tfile.BCFile$Reader.<init>(BCFile.java:626)
        at org.apache.hadoop.io.file.tfile.TFile$Reader.<init>(TFile.java:804)
        at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.<init>(AggregatedLogFormat.java:488)
        at org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:173)
        at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:133)
        at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:186)
18/11/23 15:49:06 WARN hdfs.DFSClient: Failed to connect to /10.1.1.119:50010 for block BP-1071333876-10.1.1.101-1538100459813:blk_1095494105_21758975, add to deadNodes and continue.
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.1.3.115:48474 remote=/10.1.1.119:50010]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
        at java.io.FilterInputStream.read(FilterInputStream.java:83)
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2305)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:430)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:890)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:768)
        at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:377)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:666)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:904)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:963)
        at java.io.DataInputStream.readFully(DataInputStream.java:195)
        at java.io.DataInputStream.readLong(DataInputStream.java:416)
        at org.apache.hadoop.io.file.tfile.BCFile$Reader.<init>(BCFile.java:626)
        at org.apache.hadoop.io.file.tfile.TFile$Reader.<init>(TFile.java:804)
        at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.<init>(AggregatedLogFormat.java:488)
        at org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAllContainersLogs(LogCLIHelpers.java:173)
        at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:133)
        at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:186)
18/11/23 15:49:06 INFO hdfs.DFSClient: Successfully connected to /10.1.1.115:50010 for BP-1071333876-10.1.1.101-1538100459813:blk_1095494105_21758975
[root@rsync115 against_cheating_extract]# yarn logs -applicationId application_1542888832576_2698 > application_1542888832576_2698
18/11/23 15:49:24 INFO client.RMProxy: Connecting to ResourceManager at calculation101.aggrx/10.1.1.101:8032
[root@rsync115 against_cheating_extract]#
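The timeouts above point at the DataNode on 10.1.1.119:50010, and the title of this post is about both NameNodes ending up in standby, so two quick checks suggest themselves. A minimal sketch follows; note that nn1 and nn2 are hypothetical NameNode IDs, so substitute the values from dfs.ha.namenodes.<nameservice> in your hdfs-site.xml:

    # Can this client actually reach the DataNode transfer port that timed out?
    nc -zv -w 5 10.1.1.119 50010

    # Check the HA state of each NameNode; if both print "standby",
    # no NameNode is serving the namespace and jobs will stall or fail.
    # "nn1" and "nn2" are hypothetical IDs taken from dfs.ha.namenodes.*
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2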