I’ve got a challenge with incremental GC:
We were using a trie to store small, fine-grained data. After storing 700,000 records, the memory footprint was 1.5 GB. Then the issue arose: we continued writing data to the canister (no delete operations), about 100,000 more records, and the memory growth rate increased significantly, hitting the 4 GB limit within a few hours. During this period it seems the GC cannot keep up with the rate of writes, so a large amount of garbage accumulates. Am I understanding this correctly?
If I use the compacting GC instead, 800,000 records take up only 1.7 GB of memory.
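For context, the collector is chosen at compile time via a moc flag. A sketch of how I'm switching collectors, assuming dfx passes build args through `defaults.build.args` in dfx.json (adjust to your setup):

```json
{
  "defaults": {
    "build": {
      "args": "--compacting-gc"
    }
  }
}
```

Replacing `--compacting-gc` with `--incremental-gc` reproduces the problematic configuration.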
I’m testing this with moc 0.9.7; has this improved in version 0.10.3?
What is the best way to deal with this? Would it help to write an empty update method and call it repeatedly, so the collector gets more opportunities to run between writes?
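To make the idea concrete, here is a minimal sketch of what I mean (the actor and method names are hypothetical; the assumption is that each update message execution gives the incremental collector another slice of time to make progress):

```motoko
actor GcNudger {
  // Intentionally empty update method: calling it does no work itself,
  // but each call is a message execution during which the runtime
  // can schedule a GC increment.
  public func nudgeGc() : async () {};
};
```

The plan would be to call `nudgeGc` in a loop (or on a timer) from off-chain whenever write traffic is heavy, but I’m not sure whether this actually helps the incremental GC catch up or just burns cycles.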