Mini BigMap library

Hi everyone,
Since the BigMap library is late to the party, I decided to build a mini BigMap library: an auto-scaling multi-canister storage solution.

Here’s the code: GitHub - gabrielnic/motoko-cdn: Motoko cdn/storage solution. ie: mini big-map
Demo: https://b2r3f-wiaaa-aaaae-aaaxa-cai.ic0.app/

Note this is an MVP and there's still work to do, e.g. inter-canister query calls.

From README:

The core architecture relies on actor classes, which allow canisters to be created on the fly. The container actor stores an array of all the canisters it has created, together with each canister's size.
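
A minimal sketch of this pattern, assuming a separate Bucket.mo file declaring actor class Bucket() (the names Bucket, canisters and newBucket are illustrative, not the library's actual identifiers):

import Buffer "mo:base/Buffer";
import Cycles "mo:base/ExperimentalCycles";
import Bucket "./Bucket"; // a file declaring `actor class Bucket() { ... }`

actor Container {
  // all buckets created so far, together with their last observed size in bytes
  let canisters = Buffer.Buffer<(Bucket.Bucket, Nat)>(0);

  func newBucket() : async Bucket.Bucket {
    Cycles.add(1_000_000_000_000); // fund the new canister (newer compilers use Cycles.add<system>)
    let b = await Bucket.Bucket(); // instantiating the actor class creates a canister on the fly
    canisters.add((b, 0));
    b
  };
};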

To get a canister's memory size I'm using rts_memory_size: it reports the current size of the Wasm memory array (mostly the same as the canister memory size). In Wasm this can only grow, never shrink, even if most of it is unused. However, unused/unmodified canister memory ought to have no cost on the IC.
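
Reading it from Motoko is a one-liner via the Prim module; a minimal sketch (memSize is just a name I made up for the example):

import Prim "mo:⛔"; // "mo:prim" on older compilers

actor {
  // current size of this canister's Wasm memory array, in bytes
  public query func memSize() : async Nat {
    Prim.rts_memory_size()
  };
};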

You can also use the IC management canister interface to get a canister's memory, i.e. IC.status (canister_status), but the rts_memory_size and IC.status numbers are not the same: the implementation does some more work before and after you call rts_memory_size (deserializing arguments and serializing results), which may affect the memory size after you've read it. So you can use either, but from my tests rts_memory_size is closer to reality.
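
For comparison, the management-canister route looks roughly like this (a sketch: only the reply fields used here are declared, bucketMemory is an illustrative name, and the caller must be a controller of the target canister):

actor {
  // minimal binding to the IC management canister; the real canister_status reply has more fields
  type Management = actor {
    canister_status : { canister_id : Principal } -> async { memory_size : Nat; cycles : Nat };
  };
  let IC : Management = actor "aaaaa-aa";

  public func bucketMemory(bucket : Principal) : async Nat {
    let status = await IC.canister_status({ canister_id = bucket });
    status.memory_size
  };
};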

Due to the way the garbage collector works in Motoko (it takes about half of the canister memory, leaving a bit less than 2GB usable) and based on my tests, I set a threshold of 2GB: private let threshold = 2147483648; // ~2GB. Once the threshold is reached, the container spawns a new canister for storage. Note: you only pay for what you consume.
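
One way to wire up that check, sketched on the bucket side here for brevity (isFull is an illustrative name; the library may keep the threshold and the check in the container instead):

import Prim "mo:⛔";

actor class Bucket() {
  let threshold : Nat = 2_147_483_648; // ~2GB

  // the container can ask each bucket whether it has crossed the threshold
  // and spawn a fresh storage canister before the next upload if so
  public query func isFull() : async Bool {
    Prim.rts_memory_size() > threshold
  };
};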

Using the IC management canister I can then update the new canister's settings (compute_allocation = ?5; memory_allocation = ?4294967296;) and set the controllers to the wallet canister and the container canister.
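
A sketch of that call against the IC management canister (configureBucket is an illustrative name; the settings record follows the management canister's update_settings interface):

actor {
  type Management = actor {
    update_settings : {
      canister_id : Principal;
      settings : {
        controllers : ?[Principal];
        compute_allocation : ?Nat;
        memory_allocation : ?Nat;
        freezing_threshold : ?Nat;
      };
    } -> async ();
  };
  let IC : Management = actor "aaaaa-aa";

  // give the new bucket a small compute allocation, a 4GB memory allocation,
  // and hand control to the wallet canister and the container itself
  func configureBucket(bucket : Principal, wallet : Principal, container : Principal) : async () {
    await IC.update_settings({
      canister_id = bucket;
      settings = {
        controllers = ?[wallet, container];
        compute_allocation = ?5;
        memory_allocation = ?4294967296; // 4GB
        freezing_threshold = null;
      };
    });
  };
};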

Frontend: you can upload any file of these types: jpeg, gif, jpg, png, svg, avi, aac, mp4, wav, mp3, but you can update the frontend's getFileExtension to allow or remove types.

Files are split into chunks of 500 KB and uploaded into an available bucket.
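
A rough sketch of what a bucket's storage API could look like under that scheme (putChunk/getChunk and the key format are illustrative, not the library's actual interface):

import Text "mo:base/Text";
import HashMap "mo:base/HashMap";

actor class Bucket() {
  // chunks keyed by "<fileId>/<chunkIndex>"; the frontend slices each file
  // into ~500 KB pieces and uploads them one call at a time
  let chunks = HashMap.HashMap<Text, Blob>(0, Text.equal, Text.hash);

  public func putChunk(chunkId : Text, data : Blob) : async () {
    chunks.put(chunkId, data);
  };

  public query func getChunk(chunkId : Text) : async ?Blob {
    chunks.get(chunkId)
  };
};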

Any improvements/comments are welcome.
Many thanks to @claudio

Absolutely awesome. This is going to require more than one pot of coffee :sunglasses: Looking forward to contributing to this worthy effort!

I am happy to report that a 7MB Pets.Com Sock Puppet png was successfully uploaded on mobile and downloaded on desktop.

Brilliant work Gabriel!!

Very cool, works like a charm! Thank you for putting the effort into making this

This is fantastic! I just used it to upload a gif and it worked as advertised. Very cool.

Very nice!

A small aside from something I noticed during a quick glance at the code: an expression like

do? { f(x)! }

is the same as simply writing

f(x)

Similarly, any use of do? where the only occurrence of ! is at the very end is redundant.
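
For contrast, do? does pull its weight when several option-producing steps are chained, since the whole block becomes null as soon as any ! hits null; a small made-up example:

import Nat "mo:base/Nat";

module {
  // null if either string fails to parse as a Nat
  public func addParsed(a : Text, b : Text) : ?Nat {
    do? { Nat.fromText(a)! + Nat.fromText(b)! }
  };
}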

Thank you all :slight_smile: Hope this will be helpful to others.

Yeah initially I wanted to return an array of values or an empty array but then discovered that’s not how it works. I will update my code. Thank you.

If you want to speed up the getters (without support for nested queries), as an experiment, why not try replacing each getter on Container with two query calls: the first to get the bucket from the container, and the second to query the bucket itself. I expect two queries will still be much faster than one nested update call.
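
A sketch of the container-side half of that idea (getBucketFor and the index are illustrative; the frontend would then issue the second query, e.g. a getChunk, directly against the returned principal):

actor Container {
  // illustrative: a simple fileId -> bucket index kept in the container's own state
  stable var index : [(Text, Principal)] = [];

  public query func getBucketFor(fileId : Text) : async ?Principal {
    for ((id, p) in index.vals()) {
      if (id == fileId) return ?p;
    };
    null
  };
};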

This won’t lead to any improvement for canisters getting data, but should speed up any frontend getting data.

Curious to see what the performance results are, or any reason this should not work.

Even better would be if the queries were certified, but that’s a lot more challenging…

Hi @claudio

Sorry for the late response, it went straight into the upgrade backlog.

Indeed, I've separated the logic a bit, so now each file's info is kept in the container and I can make use of query calls.

Next stop: inter-canister query calls, but I'm pretty sure that's not going to happen soon (a few months, maybe?).

Any news on the new garbage collector?

Thanks

Yeah, I wouldn't hold my breath for inter-canister query calls.

A compacting garbage collector that makes more memory available to the heap, and better GC scheduling in general (don't GC on every message, only after reasonable growth), have been out for a while and can be enabled via a moc command-line option in dfx.json. See the discussion here:
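
For reference, one common way to enable it, assuming a dfx version that forwards extra moc arguments through the build defaults in dfx.json:

"defaults": {
  "build": {
    "packtool": "",
    "args": "--compacting-gc"
  }
}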

Hey, I'm wondering: is this library still maintained, and is work ongoing on the “roadmap” specified in its README? :slight_smile: Or is there some other library based on this one that should be used instead? Thanks a lot for the contribution!

Hey, I've been slowly working on a new architecture but it's still WIP. It will come out with the latest dfx. If you have any suggestions/code improvements, feel free to open a PR :slight_smile:
Thanks

I am guessing this can only be tested live on the ICP blockchain, as it uses the IC management canister for canister_status, update_settings, etc.?

Erm, no, you can test those locally as well with the IC management canister.
