I'm running into a problem with Cassandra: whenever I try to start it, I get a "too many open files" error.
I have already raised the file descriptor limit to 1000000, but I still get the same error.
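One thing worth double-checking at this point (a minimal sketch; the /proc paths are Linux-specific) is that `ulimit -n` only reports the soft limit of the current shell, while a daemon started by init/systemd or another user can be running with a completely different limit:

```shell
# 'ulimit -n' reports the soft limit of the *current shell* only; a
# daemon started under another user or by systemd may see a different value.
ulimit -n

# The authoritative per-process values live in /proc/<pid>/limits.
# Here we inspect our own shell ($$) as a stand-in for the Cassandra PID.
grep 'Max open files' /proc/$$/limits
```

To check the daemon itself, substitute the Cassandra PID for `$$` in the second command.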
UPDATE
I looked at the debug log: at startup, Cassandra opens a large number of sstables. Here is the log:
DEBUG [SSTableBatchOpen:3] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-188190-big (49 bytes)
DEBUG [SSTableBatchOpen:1] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-159987-big (45 bytes)
DEBUG [SSTableBatchOpen:3] 2017-06-20 11:03:40,635 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-198208-big (49 bytes)
DEBUG [SSTableBatchOpen:1] 2017-06-20 11:03:40,636 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-184041-big
DEBUG [SSTableBatchOpen:2] 2017-06-20 11:03:40,636 SSTableReader.java:479 - Opening /cassandra/cass/data/system/IndexInfo-9f5c6374d48532299a0a5094af9ad1e3/mc-177001-big (48 bytes)
(…many more interleaved SSTableBatchOpen lines omitted…)
And then the system log shows:
ERROR [SSTableBatchOpen:1] 2017-06-19 19:08:40,175 CassandraDaemon.java:205 - Exception in thread Thread[SSTableBatchOpen:1,5,main]
java.lang.RuntimeException: java.io.FileNotFoundException: /cassandra/cass/data/crownit/activitylog-60fcc250bc7211e6995a87b62bcc4eac/.controller_idx/mc-1033-big-CompressionInfo.db (Too many open files)
at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:127) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:91) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:125) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.complete(CompressedSegmentedFile.java:132) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:177) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.util.SegmentedFile$Builder.buildData(SegmentedFile.java:193) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:745) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:706) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:492) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:375) ~[apache-cassandra-3.0.9.jar:3.0.9]
at org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:534) ~[apache-cassandra-3.0.9.jar:3.0.9]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_101]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.io.FileNotFoundException: /cassandra/cass/data/crownit/activitylog-60fcc250bc7211e6995a87b62bcc4eac/.controller_idx/mc-1033-big-CompressionInfo.db (Too many open files)
at java.io.FileInputStream.open0(Native Method) ~[na:1.8.0_101]
at java.io.FileInputStream.open(FileInputStream.java:195) ~[na:1.8.0_101]
at java.io.FileInputStream.<init>(FileInputStream.java:138) ~[na:1.8.0_101]
at java.io.FileInputStream.<init>(FileInputStream.java:93) ~[na:1.8.0_101]
at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:100) ~[apache-cassandra-3.0.9.jar:3.0.9]
... 15 common frames omitted
ERROR [SSTableBatchOpen:1] 2017-06-19 19:08:40,177 JVMStabilityInspector.java:140 - JVM state determined to be unstable. Exiting forcefully due to:
java.io.FileNotFoundException: /cassandra/cass/data/crownit/activitylog-60fcc250bc7211e6995a87b62bcc4eac/.controller_idx/mc-1033-big-CompressionInfo.db (Too many open files)

Posted on 2017-06-20 16:04:04
Since I can't comment on the previous answer, here's a small hint:
How do you start Cassandra, and how was it installed? Your ulimit change may not apply to the user that Cassandra actually runs as (double-check with ls -l in the data directory to see which user owns those files). With the Debian package, Cassandra runs as the user cassandra, and the limits are set like this:
cassandra01:/etc$ cat security/limits.d/cassandra.conf
# Provided by the cassandra package
cassandra - memlock unlimited
cassandra - nofile 100000
cassandra - as unlimited
cassandra - nproc 8096
cassandra01:/etc$

How many sstables are in your data directory?
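One rough way to count them is to count the *-Data.db files, since each sstable has exactly one. This is a sketch on a throwaway directory; in practice, point DATA_DIR at your real data directory (e.g. /cassandra/cass/data from the logs above):

```shell
# Each sstable has one *-Data.db component, so counting those files
# approximates the number of sstables Cassandra must open at startup.
# Demo on a throwaway directory; substitute your data_file_directories path.
DATA_DIR=$(mktemp -d)
touch "$DATA_DIR/mc-1-big-Data.db" "$DATA_DIR/mc-2-big-Data.db"
find "$DATA_DIR" -name '*-Data.db' | wc -l   # -> 2
```

Keep in mind that each sstable is actually made up of several component files (Data.db, Index.db, CompressionInfo.db, and so on), and each component costs a file descriptor once opened, so the descriptor count can be several times the sstable count.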
Try to find out how many files were open right before the crash, for example with:
lsof -n | grep java

Source: https://stackoverflow.com/questions/44632198
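As a lighter-weight alternative to lsof (a sketch; this relies on the Linux /proc filesystem), the open descriptors of a single process can be counted directly. Here our own shell stands in for the Cassandra PID:

```shell
# Each entry in /proc/<pid>/fd is one open file descriptor of that
# process; counting them avoids scanning the whole system like lsof does.
# $$ (this shell's PID) stands in for the real Cassandra PID here.
ls /proc/$$/fd | wc -l
```

Run this in a loop while Cassandra is starting up to watch the descriptor count climb toward the limit.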