
Java: Hazelcast: java.io.EOFException: Cannot read 4 bytes

Stack Overflow user
Asked 2015-08-03 17:59:55
Answers: 1 · Views: 2.8K · Followers: 0 · Votes: 2

For my web application I have two instances defined with the hazelcast xml below. When I start one server it starts up normally, but when I start the second server I get the following error:

2015-07-31 18:08:49 SEVERE: [192.168.1.32]:5701 [dev]
com.hazelcast.nio.serialization.HazelcastSerializationException: java.io.EOFException: Cannot read 4 bytes
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
    at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:200)
    at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:294)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.processPacket(OperationThread.java:142)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.process(OperationThread.java:115)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.doRun(OperationThread.java:101)
    at com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.run(OperationThread.java:76)
Caused by: java.io.EOFException: Cannot read 4 bytes
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.checkAvailable(ByteArrayObjectDataInput.java:543)
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readInt(ByteArrayObjectDataInput.java:255)
    at com.hazelcast.nio.serialization.ByteArrayObjectDataInput.readInt(ByteArrayObjectDataInput.java:249)
    at com.hazelcast.cluster.impl.ConfigCheck.readData(...)
    at com.hazelcast.cluster.impl.JoinMessage.readData(JoinMessage.java:80)
    at com.hazelcast.cluster.impl.operations.MasterDiscoveryOperation.readInternal(MasterDiscoveryOperation.java:46)
    at com.hazelcast.spi.Operation.readData(Operation.java:451)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:111)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
    at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:41)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
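For context, "Cannot read 4 bytes" means the deserializer tried to read a 4-byte int past the end of the buffer it received (the `readInt`/`checkAvailable` frames above). The JDK-only snippet below reproduces that failure mode; it is an illustration, not Hazelcast code:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class EofDemo {
    // Reading a 4-byte int from a buffer holding fewer than 4 bytes fails
    // the same way Hazelcast's ByteArrayObjectDataInput.readInt does.
    public static String readIntFrom(byte[] buf) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf))) {
            return "read " + in.readInt();
        } catch (EOFException e) {
            return "EOFException";
        } catch (IOException e) {
            return "IOException";
        }
    }

    public static void main(String[] args) {
        System.out.println(readIntFrom(new byte[]{1, 2, 3, 4})); // prints "read 16909060"
        System.out.println(readIntFrom(new byte[]{1, 2}));       // prints "EOFException"
    }
}
```

So the bytes on the wire end earlier than the reader's expectations, which usually points to the two sides disagreeing about the serialized format of the object being exchanged.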

Can anyone help me? I couldn't find anything on this.

Here is my hazelcast xml:

<!--
    The default Hazelcast configuration. This is used when no hazelcast.xml is present.
-->
<group>
    <name>dev</name>
    <password>dev-pass</password>
</group>
<management-center enabled="false">http://localhost:8080/mancenter</management-center>
<network>
    <port auto-increment="true" port-count="100">5701</port>
    <outbound-ports>
        <ports>0</ports>
    </outbound-ports>
    <join>
        <multicast enabled="true">
            <multicast-group>224.2.2.3</multicast-group>
            <multicast-port>54327</multicast-port>
        </multicast>
        <tcp-ip enabled="false">
            <member>192.168.1.67</member>
            <member>192.168.1.75</member>
        </tcp-ip>
        <aws enabled="false">
            <access-key>my-access-key</access-key>
            <secret-key>my-secret-key</secret-key>
            <region>us-west-1</region>
            <host-header>ec2.amazonaws.com</host-header>
            <security-group-name>hazelcast-sg</security-group-name>
            <tag-key>type</tag-key>
            <tag-value>hz-nodes</tag-value>
        </aws>
    </join>
    <interfaces enabled="false">
        <interface>10.10.1.*</interface>
    </interfaces>
    <symmetric-encryption enabled="false">
        <algorithm>PBEWithMD5AndDES</algorithm>
        <salt>thesalt</salt>
        <password>thepass</password>
        <iteration-count>19</iteration-count>
    </symmetric-encryption>
</network>
<executor-service name="default">
    <pool-size>16</pool-size>
    <queue-capacity>0</queue-capacity>
</executor-service>
<queue name="default">
    <max-size>0</max-size>
    <backup-count>1</backup-count>

    <!--
        Number of async backups. 0 means no backup.
    -->
    <async-backup-count>0</async-backup-count>

    <empty-queue-ttl>-1</empty-queue-ttl>
</queue>
<map name="persistent.*">
    <!--
       Data type that will be used for storing recordMap.
       Possible values:
       BINARY (default): keys and values will be stored as binary data
       OBJECT : values will be stored in their object forms
       NATIVE : values will be stored in non-heap region of JVM
    -->
    <in-memory-format>BINARY</in-memory-format>

    <!--
        Number of backups. If 1 is set as the backup-count for example,
        then all entries of the map will be copied to another JVM for
        fail-safety. 0 means no backup.
    -->
    <backup-count>1</backup-count>
    <!--
        Number of async backups. 0 means no backup.
    -->
    <async-backup-count>0</async-backup-count>
    <!--
        Maximum number of seconds for each entry to stay in the map. Entries that are
        older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
        will get automatically evicted from the map.
        Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <time-to-live-seconds>0</time-to-live-seconds>
    <!--
        Maximum number of seconds for each entry to stay idle in the map. Entries that are
        idle(not touched) for more than <max-idle-seconds> will get
        automatically evicted from the map. Entry is touched if get, put or containsKey is called.
        Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <max-idle-seconds>0</max-idle-seconds>
    <!--
        Valid values are:
        NONE (no eviction),
        LRU (Least Recently Used),
        LFU (Least Frequently Used).
        NONE is the default.
    -->
    <eviction-policy>NONE</eviction-policy>
    <!--
        Maximum size of the map. When max size is reached,
        map is evicted based on the policy defined.
        Any integer between 0 and Integer.MAX_VALUE. 0 means
        Integer.MAX_VALUE. Default is 0.
    -->
    <max-size policy="PER_NODE">0</max-size>
    <!--
        When max. size is reached, specified percentage of
        the map will be evicted. Any integer between 0 and 100.
        If 25 is set for example, 25% of the entries will
        get evicted.
    -->
    <eviction-percentage>25</eviction-percentage>
    <!--
        Minimum time in milliseconds which should pass before checking
        if a partition of this map is evictable or not.
        Default value is 100 millis.
    -->
    <min-eviction-check-millis>100</min-eviction-check-millis>
    <!--
        While recovering from split-brain (network partitioning),
        map entries in the small cluster will merge into the bigger cluster
        based on the policy set here. When an entry merges into the
        cluster, an entry with the same key may already exist, and the
        values of the two entries may differ. Which value should be kept
        for the key? The conflict is resolved by the policy set here.
        The default policy is PutIfAbsentMapMergePolicy.

        There are built-in merge policies such as
        com.hazelcast.map.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
        com.hazelcast.map.merge.PutIfAbsentMapMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
        com.hazelcast.map.merge.HigherHitsMapMergePolicy ; entry with the higher hits wins.
        com.hazelcast.map.merge.LatestUpdateMapMergePolicy ; entry with the latest update wins.
    -->
    <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>
    <map-store enabled="true">
        <factory-class-name>com.adeptia.indigo.services.hazelcast.PersistentMapStoreFactory</factory-class-name>
        <write-delay-seconds>0</write-delay-seconds>
    </map-store>

</map>

<multimap name="default">
    <backup-count>1</backup-count>
    <value-collection-type>SET</value-collection-type>
</multimap>

<list name="default">
    <backup-count>1</backup-count>
</list>

<set name="default">
    <backup-count>1</backup-count>
</set>

<jobtracker name="default">
    <max-thread-size>0</max-thread-size>
    <!-- Queue size 0 means number of partitions * 2 -->
    <queue-size>0</queue-size>
    <retry-count>0</retry-count>
    <chunk-size>1000</chunk-size>
    <communicate-stats>true</communicate-stats>
    <topology-changed-strategy>CANCEL_RUNNING_OPERATION</topology-changed-strategy>
</jobtracker>

<semaphore name="default">
    <initial-permits>0</initial-permits>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
</semaphore>

<reliable-topic name="default">
    <read-batch-size>10</read-batch-size>
    <topic-overload-policy>BLOCK</topic-overload-policy>
    <statistics-enabled>true</statistics-enabled>
</reliable-topic>

<ringbuffer name="default">
    <capacity>10000</capacity>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
    <time-to-live-seconds>30</time-to-live-seconds>
    <in-memory-format>BINARY</in-memory-format>
</ringbuffer>

<serialization>
    <portable-version>0</portable-version>
</serialization>

<services enable-defaults="true"/>


1 Answer

Stack Overflow user

Answered 2017-06-27 08:51:04

I had the same problem. I was trying to store the following data structure into Hazelcast using Portable serialization (Row and Cell are separate Portables):

Row { Cell { 'name': 'cell_0', 'value': 'cell_value_0' }, Cell { 'name': 'cell_1', 'value': 1 } }

The problem is that for the first cell, Hazelcast stores the field type UTF for the field named 'value'. When the second cell is stored, Hazelcast retrieves the already-stored field definition for 'value', which says UTF. So the field type used is UTF rather than Int, and readUTF was applied when reading the stored Portable back from the map. That caused the exception, because the stored field value and the stored field type did not match.

Edit: In your case, stored objects are exchanged (and of course read) after the second instance starts. Perhaps the problem lies at that point.
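The type clash described here can be sketched without Hazelcast at all. The registry below is a hypothetical stand-in for Hazelcast's cached Portable class definition, not the real API: the first writer fixes a field's type, and later writes of a differently typed value under the same field name are read back through that stale definition.

```java
import java.util.HashMap;
import java.util.Map;

public class PortableFieldConflictDemo {
    // Stand-in for Hazelcast's cached class definition: field name -> field type.
    static final Map<String, String> cachedFieldTypes = new HashMap<>();

    // Register the type used for a field; only the first registration sticks,
    // mirroring how a Portable class definition is created once and reused.
    static String registerField(String name, String type) {
        return cachedFieldTypes.computeIfAbsent(name, k -> type);
    }

    public static void main(String[] args) {
        // First cell writes 'value' as a string -> the definition says UTF.
        System.out.println(registerField("value", "UTF")); // prints "UTF"
        // Second cell writes 'value' as an int, but the cached definition wins:
        // readUTF() is then applied to int bytes, causing the EOFException.
        System.out.println(registerField("value", "INT")); // prints "UTF"
    }
}
```

Under that diagnosis, the fix is to keep each Portable field's type stable, for example by always writing 'value' with writeUTF, or by giving differently typed cells distinct class IDs.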

Votes: 0
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/31793785
