I am trying to convert a Linux core dump of a Java process into a heap dump file so that I can analyze it with Eclipse MAT. Following this blog post, adapted for the newer OpenJDK 12, I created a core dump and then ran jhsdb jmap to convert the dump to HPROF format:
>sudo gcore -o dump 24934
[New LWP 24971]
...
[New LWP 17921]
warning: Could not load shared library symbols for /tmp/jffi4106753050390578111.so.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f94c7e9e98d in pthread_join (threadid=140276994615040, thread_return=0x7ffc716d47a8) at pthread_join.c:90
90 pthread_join.c: No such file or directory.
warning: target file /proc/24934/cmdline contained unexpected null characters
warning: Memory read failed for corefile section, 1048576 bytes at 0x7f93756a6000.
warning: Memory read failed for corefile section, 1048576 bytes at 0x7f9379bec000.
...
warning: Memory read failed for corefile section, 1048576 bytes at 0x7f94c82dd000.
Saved corefile dump.24934
> ls -sh dump.24934
22G dump.24934
> /usr/lib/jvm/zulu-12-amd64/bin/jhsdb jmap --exe /usr/lib/jvm/zulu-12-amd64/bin/java --core dump.24934 --binaryheap --dumpfile jmap-dump.24934
Attaching to core dump.24934 from executable /usr/lib/jvm/zulu-12-amd64/bin/java, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 12.0.1+12
null
> ls -sh jmap-dump.24934
3.3M jmap-dump.24934
The core dump is 22 GB, while the heap dump is only 3 MB, so the jhsdb jmap command most likely failed to process the entire core dump. In addition, Eclipse MAT cannot open the heap dump file and reports the following message: The HPROF parser encountered a violation of the HPROF specification that it could not safely handle. This could be due to file truncation or a bug in the JVM.
Posted on 2021-11-07 18:02:21
Alex
There are two possibilities here.
First, gcore is a convenience script shipped with gdb. I can see warnings in its output about trouble loading shared library symbols, so gdb may have produced a corrupt core file in the first place. You can try loading the core file back into gdb and see whether it can parse it.
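One way to run that check is to open the core file in gdb batch mode with the same java binary that produced it; this is just a sketch reusing the paths from the question, and errors at this stage would point to a corrupt core rather than a jhsdb problem:

```shell
# Load the core dump alongside the java binary that produced it and run
# two sanity checks; failures here suggest gdb wrote a corrupt core file.
gdb --batch \
    -ex "info threads" \
    -ex "info proc mappings" \
    /usr/lib/jvm/zulu-12-amd64/bin/java dump.24934
```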
Second, jhsdb parses the core file itself. You can set the environment variable LIBSAPROC_DEBUG=1 to get its trace output, which will help you see where the parsing goes wrong.
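Enabling the trace is just a matter of prefixing the same jhsdb invocation from the question with the variable:

```shell
# LIBSAPROC_DEBUG=1 makes the serviceability agent's native library
# print diagnostics to stderr while it parses the core file.
LIBSAPROC_DEBUG=1 /usr/lib/jvm/zulu-12-amd64/bin/jhsdb jmap \
    --exe /usr/lib/jvm/zulu-12-amd64/bin/java \
    --core dump.24934 \
    --binaryheap --dumpfile jmap-dump.24934
```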
Why not just use jmap -dump to dump the Java heap directly? That skips the core dump file entirely.
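Against the live PID from the question that would be `jmap -dump:live,format=b,file=heap.hprof 24934`. If you can touch the application itself, the same kind of HPROF dump can also be triggered in-process through the HotSpotDiagnosticMXBean; a minimal sketch (the class and file names here are made up for illustration):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        // The platform MXBean exposes HotSpot's heap-dump operation.
        HotSpotDiagnosticMXBean diag =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        File out = new File("self-heap.hprof");
        out.delete(); // dumpHeap refuses to overwrite an existing file

        // true = dump only live objects (forces a GC first),
        // analogous to jmap's "live" option.
        diag.dumpHeap(out.getPath(), true);

        System.out.println("dump size: " + out.length() + " bytes");
    }
}
```

The resulting self-heap.hprof opens in Eclipse MAT like any jmap-produced dump.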
https://stackoverflow.com/questions/60003893