I have 650K documents that I want to add to a mongo collection, and I don't know why this is turning out to be so painful.
I'm creating a js file containing the documents and running it with mongosh.
One. I can add the documents with insertOne. It's completely reliable, and it takes many, many, many hours.
Two. I can create a single insertMany call with 1000 documents, send it to mongosh, and it works.
But. If I put the rest of the documents into 1000-document buckets and send them all to mongosh, it crashes. See below.
I could cut the batches down to fewer than 1000 documents, say 500 or 200. I could split them into chunks, put the chunks into separate files, and write a script that calls mongosh on each file. Or there are, I'm sure, other hoops I could set up and jump through. But what on earth is going on? Why should I have to do any of this? Life is too short.
I have plenty of RAM in this machine. Why do applications make it so hard to let them use the RAM I put in the box for them? Urg.
The crash:
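The split-into-files workaround described above can be sketched in a few lines of shell. The filenames, the chunk size of 500, and the synthetic stand-in data are all placeholders, not taken from the post; the mongosh loop at the end is shown commented out because it needs real credentials:

```shell
# Stand-in for the real 650K-statement script (one insert per line):
seq 10000 | sed 's/.*/db.stats.insertOne({n: &})/' > docs.jsa

# Cut it into 500-line pieces named part_aaa, part_aab, ...
# (-a 3 widens the suffix so we don't run out of names on large inputs):
split -l 500 -a 3 docs.jsa part_

ls part_* | wc -l   # 10000 / 500 = 20 pieces

# Each piece is then small enough for mongosh's heap; e.g.:
# for f in part_*; do
#   mongosh "mongodb+srv://HOST/jdi-stats" --username api-reader --file "$f"
# done
```

Each mongosh invocation gets a fresh process, so no single run has to hold the whole script in V8's heap, which is what the crash log below shows failing.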
$ time mongosh "mongodb+srv://HOST/jdi-stats" --username api-reader --password ZEKRET --file 04.jsa
Current Mongosh Log ID: 632d399bf13a696d49897a8b
Connecting to: mongodb+srv://<credentials>@HOST/jdi-stats?appName=mongosh+1.5.4
Using MongoDB: 4.4.15
Using Mongosh: 1.5.4
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
Loading file: 04.jsa
<--- Last few GCs --->
[117290:0xa6c8ef0] 27356 ms: Scavenge 4031.3 (4107.0) -> 4030.0 (4127.7) MB, 8.0 / 0.0 ms (average mu = 0.271, current mu = 0.221) allocation failure
[117290:0xa6c8ef0] 27383 ms: Scavenge 4043.6 (4127.7) -> 4039.9 (4129.2) MB, 16.4 / 0.0 ms (average mu = 0.271, current mu = 0.221) allocation failure
[117290:0xa6c8ef0] 28794 ms: Mark-sweep 4045.6 (4129.2) -> 4042.5 (4143.5) MB, 1405.5 / 0.0 ms (average mu = 0.191, current mu = 0.065) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xb7ca10 node::Abort() [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
2: 0xa86201 node::FatalError(char const*, char const*) [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
3: 0xd6bede v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
4: 0xd6c257 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
5: 0xf23605 [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
6: 0xf240e6 [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
7: 0xf3260e [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
8: 0xf33050 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
9: 0xf35fae v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
10: 0xef777a v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
11: 0x1273006 v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
12: 0x165b5d9 [mongosh mongodb+srv://<credentials>@HOST/jdi-stats]
Aborted (core dumped)
real 1m46.195s
user 0m55.926s
sys 0m3.521s
Posted 2022-09-23 18:13:27
"I don't know why this is turning out to be so painful."
I can tell you. You're using the wrong tool for the job. There's an old saying that a screw driven in with a hammer holds better than a nail driven in with a screwdriver.
Try mongoimport, which is written in Go and built for importing large amounts of data as JSON (and, with less flexibility, CSV/TSV).
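A sketch of what that invocation could look like, assuming the documents are first exported as newline-delimited JSON (the host, database, collection, and filename below are placeholders carried over from the question, not a tested command):

```shell
# One JSON document per line (NDJSON) is mongoimport's default input format.
mongoimport --uri="mongodb+srv://api-reader:ZEKRET@HOST/jdi-stats" \
            --collection=stats \
            --file=docs.ndjson
```

Because mongoimport streams the file instead of evaluating it as one giant JavaScript program, it sidesteps the V8 heap limit that mongosh hit above. Note this means converting the .jsa script (JavaScript insert calls) into plain JSON documents first.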
https://stackoverflow.com/questions/73831066