【Question Title】: Java: Hazelcast: java.io.EOFException: Cannot read 4 bytes!
【Posted】: 2015-10-25 22:38:31
【Problem Description】:

For my web application I have two instances defined in the hazelcast xml. When I start one server it comes up fine, but when I start the second server I get the following error:

SEVERE: [192.168.1.32]:5701 [dev] [3.5] java.io.EOFException: Cannot read 4 bytes!
com.hazelcast.nio.serialization.HazelcastSerializationException: java.io.EOFException: Cannot read 4 bytes!
    at com.hazelcast.nio.serialization.SerializationServiceImpl.handleException(SerializationServiceImpl.java:380)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:282)
    at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:200)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:294)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.processPacket(OperationThread.java:142)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.process(OperationThread.java:115)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.doRun(OperationThread.java:101)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.run(OperationThread.java:76)
Caused by: java.io.EOFException: Cannot read 4 bytes!
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.checkAvailable(ByteArrayObjectDataInput.java:543)
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readInt(ByteArrayObjectDataInput.java:255)
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readInt(ByteArrayObjectDataInput.java:249)
    at com.hazelcast.cluster.impl.ConfigCheck.readData(ConfigCheck.java:217)
    at com.hazelcast.cluster.impl.JoinMessage.readData(JoinMessage.java:80)
    at com.hazelcast.cluster.impl.operations.MasterDiscoveryOperation.readInternal(MasterDiscoveryOperation.java:46)
    at com.hazelcast.spi.Operation.readData(Operation.java:451)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:111)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
    at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:41)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
    ... 6 more

Can someone help me? I can't find anything on this :(

Here is my hazelcast xml:

<group>
    <name>dev</name>
    <password>dev-pass</password>
</group>
<management-center>http://localhost:8080/mancenter</management-center>
<network>
    <port>5701</port>
    <outbound-ports>
        <ports>0</ports>
    </outbound-ports>
    <join>
        <multicast>
            <multicast-group>224.2.2.3</multicast-group>
            <multicast-port>54327</multicast-port>
        </multicast>
        <tcp-ip>
            <member>192.168.1.67</member>
            <member>192.168.1.75</member>
        </tcp-ip>
        <aws>
            <access-key>my-access-key</access-key>
            <secret-key>my-secret-key</secret-key>
            <region>us-west-1</region>
            <host-header>ec2.amazonaws.com</host-header>
            <security-group-name>hazelcast-sg</security-group-name>
            <tag-key>type</tag-key>
            <tag-value>hz-nodes</tag-value>
        </aws>
    </join>
    <interfaces>
        <interface>10.10.1.*</interface>
    </interfaces>
    <symmetric-encryption>
        <algorithm>PBEWithMD5AndDES</algorithm>
        <salt>thesalt</salt>
        <password>thepass</password>
        <iteration-count>19</iteration-count>
    </symmetric-encryption>
</network>
<executor-service name="default">
    <pool-size>16</pool-size>
    <queue-capacity>0</queue-capacity>
</executor-service>
<queue name="default">
    <max-size>0</max-size>
    <backup-count>1</backup-count>

    <!--
        Number of async backups. 0 means no backup.
    -->
    <async-backup-count>0</async-backup-count>

    <empty-queue-ttl>-1</empty-queue-ttl>
</queue>
 <map name="persistent.*">
    <!--
       Data type that will be used for storing recordMap.
       Possible values:
       BINARY (default): keys and values will be stored as binary data
       OBJECT : values will be stored in their object forms
       NATIVE : values will be stored in non-heap region of JVM
    -->
    <in-memory-format>BINARY</in-memory-format>

    <!--
        Number of backups. If 1 is set as the backup-count for example,
        then all entries of the map will be copied to another JVM for
        fail-safety. 0 means no backup.
    -->
    <backup-count>1</backup-count>
    <!--
        Number of async backups. 0 means no backup.
    -->
    <async-backup-count>0</async-backup-count>
    <!--
        Maximum number of seconds for each entry to stay in the map. Entries that are
        older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
        will get automatically evicted from the map.
        Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <time-to-live-seconds>0</time-to-live-seconds>
    <!--
        Maximum number of seconds for each entry to stay idle in the map. Entries that are
        idle(not touched) for more than <max-idle-seconds> will get
        automatically evicted from the map. Entry is touched if get, put or containsKey is called.
        Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <max-idle-seconds>0</max-idle-seconds>
    <!--
        Valid values are:
        NONE (no eviction),
        LRU (Least Recently Used),
        LFU (Least Frequently Used).
        NONE is the default.
    -->
    <eviction-policy>NONE</eviction-policy>
    <!--
        Maximum size of the map. When max size is reached,
        map is evicted based on the policy defined.
        Any integer between 0 and Integer.MAX_VALUE. 0 means
        Integer.MAX_VALUE. Default is 0.
    -->
    <max-size policy="PER_NODE">0</max-size>
    <!--
        When max. size is reached, specified percentage of
        the map will be evicted. Any integer between 0 and 100.
        If 25 is set for example, 25% of the entries will
        get evicted.
    -->
    <eviction-percentage>25</eviction-percentage>
    <!--
        Minimum time in milliseconds which should pass before checking
        if a partition of this map is evictable or not.
        Default value is 100 millis.
    -->
    <min-eviction-check-millis>100</min-eviction-check-millis>
    <!--
        While recovering from split-brain (network partitioning),
        map entries in the small cluster will merge into the bigger cluster
        based on the policy set here. When an entry merge into the
        cluster, there might an existing entry with the same key already.
        Values of these entries might be different for that same key.
        Which value should be set for the key? Conflict is resolved by
        the policy set here. Default policy is PutIfAbsentMapMergePolicy

        There are built-in merge policies such as
        com.hazelcast.map.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
        com.hazelcast.map.merge.PutIfAbsentMapMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
        com.hazelcast.map.merge.HigherHitsMapMergePolicy ; entry with the higher hits wins.
        com.hazelcast.map.merge.LatestUpdateMapMergePolicy ; entry with the latest update wins.
    -->
  <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>
     <map-store enabled="true">
        <factory-class-name>com.adeptia.indigo.services.hazelcast.PersistentMapStoreFactory</factory-class-name>
        <write-delay-seconds>0</write-delay-seconds>
    </map-store>

</map>

<multimap name="default">
    <backup-count>1</backup-count>
    <value-collection-type>SET</value-collection-type>
</multimap>

<list name="default">
    <backup-count>1</backup-count>
</list>

<set name="default">
    <backup-count>1</backup-count>
</set>

<jobtracker name="default">
    <max-thread-size>0</max-thread-size>
    <!-- Queue size 0 means number of partitions * 2 -->
    <queue-size>0</queue-size>
    <retry-count>0</retry-count>
    <chunk-size>1000</chunk-size>
    <communicate-stats>true</communicate-stats>
    <topology-changed-strategy>CANCEL_RUNNING_OPERATION</topology-changed-strategy>
</jobtracker>

<semaphore name="default">
    <initial-permits>0</initial-permits>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
</semaphore>

<reliable-topic name="default">
    <read-batch-size>10</read-batch-size>
    <topic-overload-policy>BLOCK</topic-overload-policy>
    <statistics-enabled>true</statistics-enabled>
</reliable-topic>

<ringbuffer name="default">
    <capacity>10000</capacity>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
    <time-to-live-seconds>30</time-to-live-seconds>
    <in-memory-format>BINARY</in-memory-format>
</ringbuffer>

<serialization>
    <portable-version>0</portable-version>
</serialization>

<services enable-defaults="true"/>

【Question Discussion】:

    Tags: java serialization hazelcast distributed-caching eofexception


    【Solution 1】:

    I had the same problem. I was trying to store the following data structure into Hazelcast using Portable (Row and Cell are separate Portable implementations):

    Row { Cell { 'name' : 'cell_0_0', 'value' : 'cell_value_0_0' }, Cell { 'name' : 'cell_0_1', 'value' : 1 } }, ...

    The problem was that for the first cell Hazelcast stored the field named 'value' with the field type UTF, but while storing the second cell it looked up the already registered field definition for 'value', which was UTF. So the field was recorded as UTF rather than int, and when the stored Portable was later read from the map, readUTF was called on data that did not match that type. That is what caused my exception: the stored field value and the stored field type no longer corresponded to each other.
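
    For illustration, here is a minimal sketch (the Cell class, its field names and IDs are hypothetical, reconstructed from the description above) of the kind of Portable implementation that produces such a field-type conflict:

        import com.hazelcast.nio.serialization.Portable;
        import com.hazelcast.nio.serialization.PortableReader;
        import com.hazelcast.nio.serialization.PortableWriter;

        import java.io.IOException;

        // Hypothetical Cell: the 'value' field is written with a type that depends on
        // the runtime value (UTF for the first cell, int for the second). Hazelcast
        // keeps one class definition per factoryId/classId/version, so the type
        // recorded for 'value' no longer matches what is actually stored, and reading
        // the entry back fails with a HazelcastSerializationException / EOFException.
        public class Cell implements Portable {

            static final int FACTORY_ID = 1;
            static final int CLASS_ID = 2;

            private String name;
            private Object value;   // sometimes a String, sometimes an Integer

            public Cell() {
            }

            public Cell(String name, Object value) {
                this.name = name;
                this.value = value;
            }

            @Override
            public int getFactoryId() {
                return FACTORY_ID;
            }

            @Override
            public int getClassId() {
                return CLASS_ID;
            }

            @Override
            public void writePortable(PortableWriter writer) throws IOException {
                writer.writeUTF("name", name);
                if (value instanceof Integer) {
                    writer.writeInt("value", (Integer) value);       // second cell: int
                } else {
                    writer.writeUTF("value", String.valueOf(value)); // first cell: UTF
                }
            }

            @Override
            public void readPortable(PortableReader reader) throws IOException {
                name = reader.readUTF("name");
                value = reader.readUTF("value"); // breaks once 'value' was stored as an int
            }
        }

    Keeping the field type of 'value' fixed (for example always writing it as UTF, or using two separate fields such as 'stringValue' and 'intValue') avoids the conflict.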

    EDIT: In your case, after the second instance is started, the stored objects are exchanged between the members and of course deserialized there. Maybe the problem lies at that point.
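
    A hedged reproduction sketch of that scenario (the demo class, map name and factory registration are assumptions, not taken from the question): putting two such cells and reading them back is enough to hit the exception, and the same deserialization happens when entries migrate to a newly started member.

        import com.hazelcast.config.Config;
        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.HazelcastInstance;
        import com.hazelcast.core.IMap;
        import com.hazelcast.nio.serialization.Portable;
        import com.hazelcast.nio.serialization.PortableFactory;

        public class CellMismatchDemo {

            public static void main(String[] args) {
                Config config = new Config();
                // Register a factory for the hypothetical Cell class from the sketch above.
                config.getSerializationConfig().addPortableFactory(Cell.FACTORY_ID,
                        new PortableFactory() {
                            @Override
                            public Portable create(int classId) {
                                return classId == Cell.CLASS_ID ? new Cell() : null;
                            }
                        });

                HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
                IMap<String, Cell> cells = hz.getMap("persistent.cells");

                cells.put("c0", new Cell("cell_0_0", "cell_value_0_0")); // 'value' recorded as UTF
                cells.put("c1", new Cell("cell_0_1", 1));                // same field now written as int

                // Depending on the Hazelcast version, either the second put or this get fails
                // with a HazelcastSerializationException such as "Cannot read 4 bytes!".
                System.out.println(cells.get("c1"));
            }
        }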

    【Discussion】:
