
Why does Cassandra fail with an OutOfMemoryError during SSTable compaction?

Stack Overflow user
Asked 2014-08-08 08:30:24
1 answer · 5.6K views · 0 followers · Score 1

Hi, maybe this is a silly question, but I couldn't find an answer via Google.

So here's what I have:

  • java 1.7
  • Cassandra 1.2.8 running on a single node with -Xmx1G and -Xms1G, with no changes to the yaml file

I created the following test column family:

CREATE COLUMN FAMILY TEST_HUGE_SF
    WITH comparator = UTF8Type
    AND key_validation_class=UTF8Type;

Then I tried to insert rows into this column family, using the astyanax library to access Cassandra:

final long START = 1;
final long MAX_ROWS_COUNT = 1000000000; // 1 billion

Keyspace keyspace = AstyanaxProvider.getAstyanaxContext().getClient();

ColumnFamily<String, String> cf = new ColumnFamily<>(
    "TEST_HUGE_SF",
    StringSerializer.get(),   // row key serializer
    StringSerializer.get());  // column name serializer

MutationBatch mb = keyspace.prepareMutationBatch()
        .withRetryPolicy(new BoundedExponentialBackoff(250, 5000, 20));
for (long i = START; i < MAX_ROWS_COUNT; i++) {
    long t = i % 1000;
    if (t == 0) {
        // flush the batch every 1000 rows, then start a fresh one
        System.out.println("pushed: " + i);
        mb.execute();
        Thread.sleep(1);
        mb = keyspace.prepareMutationBatch()
                .withRetryPolicy(new BoundedExponentialBackoff(250, 5000, 20));
    }

    ColumnListMutation<String> clm = mb.withRow(cf, String.format("row_%012d", i));
    clm.putColumn("col1", i);
    clm.putColumn("col2", t);
}
mb.execute();

So, as you can see from the code, I try to insert 1 billion rows, each containing two columns, and each column holding a simple long value.

After about 122 million rows had been inserted, Cassandra crashed with an OutOfMemoryError. The log shows:

 INFO [CompactionExecutor:1571] 2014-08-08 08:31:45,334 CompactionTask.java (line 263) Compacted 4 sstables to [\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2941,].  865 252 169 bytes to 901 723 715 (~104% of original) in 922 963ms = 0,931728MB/s.  26 753 257 total rows, 26 753 257 unique.  Row merge counts were {1:26753257, 2:0, 3:0, 4:0, }
 INFO [CompactionExecutor:1571] 2014-08-08 08:31:45,337 CompactionTask.java (line 106) Compacting [SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2069-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-629-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2941-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-1328-Data.db')]
ERROR [CompactionExecutor:1571] 2014-08-08 08:31:46,167 CassandraDaemon.java (line 132) Exception in thread Thread[CompactionExecutor:1571,1,main]
java.lang.OutOfMemoryError
    at sun.misc.Unsafe.allocateMemory(Native Method)
    at org.apache.cassandra.io.util.Memory.<init>(Memory.java:52)
    at org.apache.cassandra.io.util.Memory.allocate(Memory.java:60)
    at org.apache.cassandra.utils.obs.OffHeapBitSet.<init>(OffHeapBitSet.java:40)
    at org.apache.cassandra.utils.FilterFactory.createFilter(FilterFactory.java:143)
    at org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:137)
    at org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:126)
    at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:445)
    at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:92)
    at org.apache.cassandra.db.ColumnFamilyStore.createCompactionWriter(ColumnFamilyStore.java:1958)
    at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:144)
    at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
    at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:59)
    at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:62)
    at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
 INFO [CompactionExecutor:1570] 2014-08-08 08:31:46,994 CompactionTask.java (line 263) Compacted 4 sstables to [\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-3213,].  34 773 524 bytes to 35 375 883 (~101% of original) in 44 162ms = 0,763939MB/s.  1 151 482 total rows, 1 151 482 unique.  Row merge counts were {1:1151482, 2:0, 3:0, 4:0, }
 INFO [CompactionExecutor:1570] 2014-08-08 08:31:47,105 CompactionTask.java (line 106) Compacting [SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2069-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-629-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-2941-Data.db'), SSTableReader(path='\var\lib\cassandra\data\cyodaTest1\TEST_HUGE_SF\cyodaTest1-TEST_HUGE_SF-ib-1328-Data.db')]
ERROR [CompactionExecutor:1570] 2014-08-08 08:31:47,110 CassandraDaemon.java (line 132) Exception in thread Thread[CompactionExecutor:1570,1,main]
java.lang.OutOfMemoryError
    at sun.misc.Unsafe.allocateMemory(Native Method)
    at org.apache.cassandra.io.util.Memory.<init>(Memory.java:52)
    at org.apache.cassandra.io.util.Memory.allocate(Memory.java:60)
    at org.apache.cassandra.utils.obs.OffHeapBitSet.<init>(OffHeapBitSet.java:40)
    at org.apache.cassandra.utils.FilterFactory.createFilter(FilterFactory.java:143)
    at org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:137)
    at org.apache.cassandra.utils.FilterFactory.getFilter(FilterFactory.java:126)
    at org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.<init>(SSTableWriter.java:445)
    at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:92)
    at org.apache.cassandra.db.ColumnFamilyStore.createCompactionWriter(ColumnFamilyStore.java:1958)
    at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:144)
    at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
    at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:59)
    at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:62)
    at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)

As far as I can see, Cassandra crashed during SSTable compaction.

Does this mean that to handle more rows, Cassandra needs more heap space?

I expected that a lack of heap space would only degrade performance. Can someone explain why my expectation is wrong?
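For context on where the memory went: the stack traces show the failure inside OffHeapBitSet, the per-SSTable bloom filter that Cassandra allocates with Unsafe.allocateMemory, i.e. native memory outside the 1G heap. A back-of-the-envelope sizing sketch (the 1% false-positive rate and the row count below are assumptions for illustration, not values taken from the logs):

```java
// Rough estimate of the off-heap memory one SSTable bloom filter needs.
// NOTE: bloom_filter_fp_chance = 0.01 and the 122M-row count are assumed
// for illustration; they are not read from the logs above.
public class BloomFilterEstimate {

    // Standard bloom filter sizing: m/n = -ln(p) / (ln 2)^2 bits per element
    static double bitsPerElement(double p) {
        return -Math.log(p) / (Math.log(2) * Math.log(2));
    }

    public static void main(String[] args) {
        long rows = 122_000_000L;        // roughly the rows inserted before the crash
        double p = 0.01;                 // assumed false-positive target
        double bits = bitsPerElement(p); // ~9.6 bits per key
        long bytes = (long) (rows * bits / 8);
        // For 122M keys this lands around 140 MB of native memory for a
        // single filter - a large chunk next to a 1G heap on the same box.
        System.out.printf("%.1f bits/key -> %d MB off-heap%n",
                bits, bytes / (1024 * 1024));
    }
}
```

If this estimate is in the right ballpark, every compaction that opens a new SSTable writer requests a filter of this size up front, which would be consistent with the allocation failing once native memory runs low.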


1 Answer

Stack Overflow user

Answered 2014-09-04 18:14:59

As others have noted - a 1 GB heap is very small. With Cassandra 2.0, you can consult this tuning guide for more information: c.html

Another consideration is how garbage collection is being handled. In the cassandra log directory there should also be GC logs indicating how frequent and how long the collections are. You can monitor them in real time using jvisualvm, if you like.
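As a sketch, enabling those GC logs on Java 7 usually comes down to standard HotSpot flags like the following (Cassandra 1.2 ships similar lines, typically commented out, in conf/cassandra-env.sh; the log path here is an assumption, adjust it to your install layout):

```shell
# Illustrative GC logging options for the Cassandra JVM (Java 7 HotSpot).
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
```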

Score 1
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/25199114
