Some problems I ran into recently while running Hadoop
D" ["username"]=> NULL ["tagsname"]=> string(20) "django|sql|数据库" ["tagsid"]=> NULL ["catesname"]=> string(0) "" ["catesid"]=> NULL ["createtime"]=> string(10) "1681201981" ["_id"]=> string(9) "308660870" } [6]=> array(10) { ["id"]=> string(9) "308660871" ["text"]=> string(37) "Express怎么实现定时发送邮件" ["intro"]=> string(432) "今天小编给大家分享一下Express怎么实现定时发送邮件的相关知识点,内容详细,逻辑清晰,相信大部分人都还太了解这方面的知识,所以分享这篇文章给大家参考一下,希望大家阅读完这篇文章后有所收获,下面我们一起来了解一下吧。 在开发中我们有时候需要每隔 一段时间发送一次电子邮件,或者在某个特定的时间进行发" ["username"]=> NULL ["tagsname"]=> string(7) "express" ["tagsid"]=> NULL ["catesname"]=> string(0) "" ["catesid"]=> NULL ["createtime"]=> string(10) "1681201981" ["_id"]=> string(9) "308660871" } [7]=> array(10) { ["id"]=> string(9) "308660869" ["text"]=> string(29) "mysql运维------分库分表" ["intro"]=> string(412) "1. 介绍 问题分析: 随着互联网以及移动互联网的发展,应用系统的数据量也是成指数式增长,若采用单数据库进行数据存储,存在以下性能瓶颈: IO瓶颈:热点数据太多,数据库缓存不足,产生大量磁盘IO,效率较低。请求数据太多,带宽不够,网络IO瓶颈。CPU瓶颈:排序、分组、连接查询、聚合统计等SQL会耗费" ["username"]=> string(13) "qds1401744017" ["tagsname"]=> string(5) "mysql" ["tagsid"]=> string(7) "["237"]" ["catesname"]=> string(0) "" ["catesid"]=> string(2) "[]" ["createtime"]=> string(10) "1681200304" ["_id"]=> string(9) "308660869" } [8]=> array(10) { ["id"]=> string(9) "308660868" ["text"]=> string(41) "ASP.NET Core - 缓存之内存缓存(下)" ["intro"]=> string(292) "话接上篇 [ASP.NET Core - 缓存之内存缓存(上)],所以这里的目录从 2.4 开始。 2.4 MemoryCacheEntryOptions MemoryCacheEntryOptions 是内存缓存配置类,可以通过它配置缓存相关的策略。除了上面讲到的过期时间,我们还能够设置下面这些" ["username"]=> string(6) "wewant" ["tagsname"]=> string(12) "asp.net core" ["tagsid"]=> string(7) "["179"]" ["catesname"]=> string(25) "APS.NET Core 系列总结" ["catesid"]=> string(9) "["15288"]" ["createtime"]=> string(10) "1681200302" ["_id"]=> string(9) "308660868" } [9]=> array(10) { ["id"]=> string(9) "308660867" ["text"]=> string(9) "SPI协议" ["intro"]=> string(334) "SPI协议是由摩托罗拉公司提出的通讯协议(Serial Peripheral Interface),即串行外设接口。广泛用在ADC、LCD等设备与MCU间,要求通讯速率较高的场合。区分它与I2C协议差异以及FLASH存储器与EEPROM存储器的区别。下面我们分别对SPI协议的物理层及协议层进行讲解。" ["username"]=> string(8) "Kaelthas" ["tagsname"]=> string(5) "STM32" ["tagsid"]=> string(8) "["1311"]" ["catesname"]=> string(5) "STM32" ["catesid"]=> string(8) "["1139"]" ["createtime"]=> string(10) "1681199702" ["_id"]=> string(9) "308660867" } } ["count"]=> int(5621682) } 最近跑hadoop遇到的一些问题 - 爱码网

1.

[#|2013-09-16T18:19:02.663+0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23364;_ThreadName=Thread-2;|2013-09-16 18:19:02,663 WARN  DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File: xxx could only be replicated to 0 nodes, instead of 1
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
         at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.Subject.doAs(Subject.java:396)
         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

         at org.apache.hadoop.ipc.Client.call(Client.java:1070)
         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
         at $Proxy140.addBlock(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
         at $Proxy140.addBlock(Unknown Source)
         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
 |#]
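In Hadoop 1.x, "could only be replicated to 0 nodes, instead of 1" generally means the NameNode could not find a single live DataNode with free space to place the block on (all DataNodes down, full, or excluded). Below is a minimal sketch, assuming the Hadoop 1.x client jars are on the classpath and a hypothetical NameNode address hdfs://namenode:9000, that asks the NameNode which DataNodes it currently sees and how much space each has left:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class DatanodeCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; substitute the cluster's real fs.default.name.
            conf.set("fs.default.name", "hdfs://namenode:9000");

            FileSystem fs = FileSystem.get(conf);
            DistributedFileSystem dfs = (DistributedFileSystem) fs;

            // The NameNode's view of the cluster: if this list is empty, or every node
            // reports (nearly) zero remaining space, writes fail exactly as in the log above.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.printf("%s remaining=%d MB%n",
                        dn.getName(), dn.getRemaining() / (1024L * 1024L));
            }
            fs.close();
        }
    }

If no DataNode shows up here, the problem is on the cluster side (DataNodes dead or unable to register with the NameNode), not in the writing application.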

 

 

2.

[#|2013-09-16T18:19:02.664+0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23367;_ThreadName=Thread-2;|2013-09-16 18:19:02,664 WARN  Error Recovery for block null bad datanode[0] nodes == null
 |#]

[#|2013-09-16T18:19:02.664+0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23367;_ThreadName=Thread-2;|2013-09-16 18:19:02,664 WARN  Could not get block locations. Source file "xxx" - Aborting...
 |#]

 

3.

[#|2013-09-16T18:20:13.134+0800|SEVERE|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=23467;_ThreadName=Thread-2;|java.util.concurrent.TimeoutException: Timed out (5000 milliseconds) waiting for operation while connected to xxx:11211
         at net.rubyeye.xmemcached.XMemcachedClient.latchWait(XMemcachedClient.java:2617)
         at net.rubyeye.xmemcached.XMemcachedClient.fetch0(XMemcachedClient.java:617)
         at net.rubyeye.xmemcached.XMemcachedClient.get0(XMemcachedClient.java:1030)
         at net.rubyeye.xmemcached.XMemcachedClient.gets(XMemcachedClient.java:1043)
         at net.rubyeye.xmemcached.XMemcachedClient.gets(XMemcachedClient.java:1065)
         at net.rubyeye.xmemcached.XMemcachedClient.gets(XMemcachedClient.java:1054)
         at com.haierpip.service.impl.MemcachedServiceImpl.gets(MemcachedServiceImpl.java:104)
         at com.haierpip.timer.UpdateImgQueueMapJob.updateList(UpdateImgQueueMapJob.java:41)
         at com.haierpip.timer.UpdateImgQueueMapJob.executeInternal(UpdateImgQueueMapJob.java:26)
         at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:273)
         at org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean$MethodInvokingJob.executeInternal(MethodInvokingJobDetailFactoryBean.java:311)
         at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:113)
         at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
         at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
|#]
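The 5000 ms figure in the TimeoutException appears to match xmemcached's default operation timeout, so the gets() call in the Quartz job waited the full default period without an answer from the memcached server at xxx:11211. Below is a minimal sketch, assuming xmemcached 1.x and a hypothetical server address and key, showing how to give the operation more headroom by raising the client-wide timeout or passing one per call; if it still times out, the server or the network is the real problem:

    import java.util.concurrent.TimeoutException;

    import net.rubyeye.xmemcached.GetsResponse;
    import net.rubyeye.xmemcached.MemcachedClient;
    import net.rubyeye.xmemcached.MemcachedClientBuilder;
    import net.rubyeye.xmemcached.XMemcachedClientBuilder;
    import net.rubyeye.xmemcached.utils.AddrUtil;

    public class MemcachedTimeoutSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical address; substitute the real xxx:11211 host.
            MemcachedClientBuilder builder =
                    new XMemcachedClientBuilder(AddrUtil.getAddresses("memcached-host:11211"));
            MemcachedClient client = builder.build();

            // Raise the default operation timeout (milliseconds) for all calls...
            client.setOpTimeout(10000);
            try {
                // ...or pass an explicit timeout to the one call that keeps timing out
                // ("some-key" is a hypothetical key).
                GetsResponse<Object> resp = client.gets("some-key", 10000);
                System.out.println(resp == null ? "cache miss" : resp.getValue());
            } catch (TimeoutException e) {
                // Still timing out at 10 s: look at the memcached server / network, not the client.
                e.printStackTrace();
            } finally {
                client.shutdown();
            }
        }
    }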

 

4.

[#|2013-09-16T18:05:12.983+0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=1595;_ThreadName=Thread-2;|2013-09-16 18:05:12,983 INFO  Exception in createBlockOutputStream 10.255.254.6:50010 java.net.ConnectException: Connection timed out
 |#]

[#|2013-09-16T18:05:12.983+0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=1595;_ThreadName=Thread-2;|2013-09-16 18:05:12,983 INFO  Abandoning block blk_2971499765308711932_1671045
 |#]

[#|2013-09-16T18:05:12.984+0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=1595;_ThreadName=Thread-2;|2013-09-16 18:05:12,984 INFO  Excluding datanode 10.255.254.6:50010
 |#]
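The sequence above, a connect timeout in createBlockOutputStream followed by "Abandoning block" and "Excluding datanode", means the HDFS client running inside GlassFish could not open a connection to the DataNode's data-transfer port (50010) on 10.255.254.6. The client writes block data directly to DataNodes, so a firewall or routing problem between the application server and that host produces exactly this pattern even when the NameNode itself is reachable. A quick reachability check from the same machine, as a plain socket connect (the 5-second timeout is an arbitrary choice):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class DatanodePortCheck {
        public static void main(String[] args) throws Exception {
            // DataNode address taken from the log above.
            Socket socket = new Socket();
            try {
                socket.connect(new InetSocketAddress("10.255.254.6", 50010), 5000);
                System.out.println("10.255.254.6:50010 is reachable from this host.");
            } catch (Exception e) {
                System.out.println("Cannot reach 10.255.254.6:50010: " + e);
            } finally {
                socket.close();
            }
        }
    }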

 

5.

2013-09-16 17:56:04,034 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block report is due, and been waiting for it for 708 seconds...

6.

[#|2013-09-16T18:05:40.699+0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=1571;_ThreadName=Thread-2;|2013-09-16 18:05:40,699 INFO  Exception in createBlockOutputStream xxx:50010 java.io.IOException: Bad connect ack with firstBadLink as xxx.6:50010
 |#]

7. The Hadoop 50070 web page

WARNING : There are about 146 missing blocks. Please check the log or run fsck.
