Using top:

8260 root 20 0 5163m 4.7g **133m** S 144.6 30.5 2496:46 java

In most cases, %CPU is >170. I am trying to track down this issue. I suspect GC or flushing is the culprit. Here is what appears to be jstat -gccause output:
S0 S1 E O P YGC YGCT FGC FGCT GCT LGCC GCC
0.00 16.73 74.74 29.33 59.91 27819 407.186 206 10.729 417.914 Allocation Failure No GC
0.00 16.73 99.57 29.33 59.91 27820 407.186 206 10.729 417.914 Allocation Failure Allocation Failure

Also, from the Cassandra logs, flushes with the same segment ID and memtable replay position are happening far too often:
INFO [SlabPoolCleaner] 2015-01-20 13:55:48,515 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 112838010 (11%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1587] 2015-01-20 13:55:48,516 Memtable.java:325 - Writing Memtable-bid_list@2003093066(23761503 serialized bytes, 211002 ops, 11%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1587] 2015-01-20 13:55:49,251 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3965-Data.db (4144688 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25289038)
INFO [SlabPoolCleaner] 2015-01-20 13:56:23,429 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 104056985 (10%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1589] 2015-01-20 13:56:23,429 Memtable.java:325 - Writing Memtable-bid_list@1124683519(21909522 serialized bytes, 194778 ops, 10%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1589] 2015-01-20 13:56:24,130 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3967-Data.db (3830733 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25350445)
INFO [SlabPoolCleaner] 2015-01-20 13:56:55,493 ColumnFamilyStore.java:840 - Enqueuing flush of bid_list: 95807739 (9%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1590] 2015-01-20 13:56:55,494 Memtable.java:325 - Writing Memtable-bid_list@473510037(20170635 serialized bytes, 179514 ops, 9%/0% of on/off-heap limit)
INFO [MemtableFlushWriter:1590] 2015-01-20 13:56:56,151 Memtable.java:364 - Completed flushing /root/Cassandra/apache-cassandra-2.1.2/bin/./../data/data/bigdspace/bid_list-27b59f109fa211e498559b0947587867/bigdspace-bid_list-ka-3968-Data.db (3531752 bytes) for commitlog position ReplayPosition(segmentId=1421647511710, position=25373052)

Any help or suggestions would be great. I have also disabled durable writes on the keyspace. Thanks.
UPDATE: Just found that after restarting all the nodes, YGC is still kicking in on one of the servers even though nothing is going on (data dumping has been stopped, etc.).
Posted on 2015-01-22 12:37:05
What type of compaction are you using, size-tiered or leveled? If you are using leveled compaction, can you switch to size-tiered? It looks like you have too many compactions going on. Increasing the sstable size for leveled compaction may also help.
sstable_size_in_mb (default: 160 MB) — the target size for SSTables that use the leveled compaction strategy. Although SSTable sizes should be less than or equal to sstable_size_in_mb, it is possible for a larger SSTable to exist during compaction. This happens when data for a given partition key is exceptionally large; the data is not split into two SSTables.
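As a sketch of raising the target size under leveled compaction (the keyspace/table names come from the logs above; the 320 MB value is purely illustrative):

```cql
-- Cassandra 2.1 map-based syntax; 320 MB is an example value,
-- double the 160 MB default mentioned above.
ALTER TABLE bigdspace.bid_list
WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'sstable_size_in_mb': '320'
};
```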
If you are using size-tiered compaction, increase the number of SSTables that must accumulate before a minor compaction kicks in. This is set when the table is created, so it can be changed with an ALTER statement. For example:
ALTER TABLE users WITH compaction_strategy_class = 'SizeTieredCompactionStrategy' AND min_compaction_threshold = 6;

This compacts only after 6 SSTables have been created.
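On Cassandra 2.1 (the version shown in the logs), the same change is expressed with the map-based compaction option instead; a sketch, assuming the asker's bigdspace.bid_list table:

```cql
-- Equivalent to the statement above, in CQL 3 / Cassandra 2.1 syntax.
ALTER TABLE bigdspace.bid_list
WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'min_threshold': '6'
};
```

The change can be verified afterwards with DESCRIBE TABLE bigdspace.bid_list in cqlsh.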
https://stackoverflow.com/questions/28054209