I am setting
ulimit -c unlimited. In the C++ program, we are doing:
struct rlimit corelimit;
if (getrlimit(RLIMIT_CORE, &corelimit) != 0) {
return -1;
}
corelimit.rlim_cur = RLIM_INFINITY;
corelimit.rlim_max = RLIM_INFINITY;
if (setrlimit(RLIMIT_CORE, &corelimit) != 0) {
return -1;
}
However, whenever the program crashes, the core dump it generates is truncated:
BFD: Warning: /mnt/coredump/core.6685.1325912972 is truncated: expected core file size >= 1136525312, found: 638976.
What could be the problem?
We are using Ubuntu 10.04.3 LTS:
Linux ip-<ip> 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 18:09:30 UTC 2011 x86_64 GNU/Linux
Here is my /etc/security/limits.conf:
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - an user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
# - NOTE: group and wildcard limits are not applied to root.
# To apply a limit to the root user, <domain> must be
# the literal username root.
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
# - chroot - change root to directory (Debian-specific)
#
#<domain> <type> <item> <value>
#
#* soft core 0
#root hard core 100000
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
# ftp - chroot /ftp
#@student - maxlogins 4
#for all users
* hard nofile 16384
* soft nofile 9000
More details:
I am compiling with the gcc optimization flag -O3, and I set the thread stack size to 0.5 MB.
Posted on 2012-04-16 21:05:24
As I recall, there is a hard limit, which can be set by the administrator, and a soft limit, which can be set by the user. If the soft limit is set larger than the hard limit, the hard limit value takes effect. I am not sure whether this applies to every shell; I only know it from bash.
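In bash the two limits can be inspected and changed separately; a quick sketch (behavior as described above, for the core-file-size resource):

```shell
# Show the soft (-S) and hard (-H) limits for the core file size
ulimit -Sc
ulimit -Hc

# A non-root user may raise the soft limit only up to the hard limit,
# and may lower the hard limit but never raise it again in the same shell.
ulimit -Sc unlimited 2>/dev/null || echo "soft limit capped by hard limit"
```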
Posted on 2014-07-11 00:14:40
I ran into the same problem with truncated core files.
Further investigation revealed that ulimit -f (the file size limit, RLIMIT_FSIZE) also affects core files, so check that this limit is also unlimited or suitably high. I saw this on Linux kernel 3.2.0 / Debian wheezy.
Posted on 2020-01-16 15:08:56
If you are using coredumpctl, one possible solution is to edit /etc/systemd/coredump.conf and increase ProcessSizeMax and ExternalSizeMax:
[Coredump]
#Storage=external
#Compress=yes
ProcessSizeMax=20G
ExternalSizeMax=20G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
https://stackoverflow.com/questions/8768719