Increased Canister Smart Contract Memory

I’m no expert on this, but that matches how I read it.

1 Like

You can use more than 4GiB for the canister’s stable memory. The heap is still limited to 4GiB, because our current Wasm runtime only supports 32-bit native memory (there’s work underway to allow 64-bit memory, though).

2 Likes

And what should I do to upgrade a canister to use more than 4 GiB of stable memory? The same --memory-allocation 0?

1 Like

You shouldn’t have to do anything, really. By default, canisters use “best-effort” memory allocation when they are created (i.e. if you don’t specify anything with the memory-allocation option). As long as you use the 64-bit stable memory APIs (available through both Motoko and the Rust CDK) in your canister, you should be able to use more than 4GiB of stable memory.
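
For illustration, here’s a hedged sketch using the Rust CDK’s 64-bit stable memory API (the stable64_* functions in ic_cdk::api::stable, as named around the time of this thread; later CDK releases renamed them) to grow stable memory past the 4GiB mark and write above it:

use ic_cdk::api::stable::{stable64_grow, stable64_read, stable64_size, stable64_write};

const WASM_PAGE_BYTES: u64 = 64 * 1024;

fn write_beyond_4gib() {
    // Grow stable memory to ~5 GiB, past the 32-bit boundary.
    let target_pages = (5 * 1024 * 1024 * 1024) / WASM_PAGE_BYTES;
    let current = stable64_size();
    if current < target_pages {
        stable64_grow(target_pages - current).expect("out of stable memory");
    }

    // Write and read back at an offset above 4 GiB.
    let offset: u64 = 4 * 1024 * 1024 * 1024 + 42;
    stable64_write(offset, b"hello");
    let mut buf = [0u8; 5];
    stable64_read(offset, &mut buf);
    assert_eq!(&buf, b"hello");
}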

2 Likes

The State Manager runs on the Execution Layer and stores state on the SSD. I think it is very important to adopt 64-bit Wasm to increase capacity. However, “How it works - Internet Computer Execution Layer” says: “The replicated state that can be held by a single subnet is not bounded by the available RAM in the node machines, but rather by the available SSD storage.” In the short term, increasing memory is a very good approach, but I believe the SSD should be utilized, because lower server costs lead to cheaper cycles. There may be a way to store large amounts of data without much memory. I would be glad to hear your opinion, and please let me know if my understanding is incorrect.

6 Likes

When declaring variables as stable in Motoko, can 32 GiB of stable memory be used? From which version of the IC is this supported?

The Motoko devs should be able to answer this better, but my understanding is that Motoko stable variables are currently held on the Wasm heap and only transferred to stable memory during upgrades. This would mean they cannot use the full stable memory size yet.

cc @claudio

Hey everybody, a quick update on the stable memory limits: DFINITY will propose in the next replica version to further increase the stable memory limit to 64GiB (from the current 48GiB). Our testing shows that this works without issue, and in practice we’ve seen e.g. the Bitcoin canister use > 40GiB of stable memory for some months now without problems, which makes us confident we can increase further.

An increase to 64GiB would also give the Bitcoin canister more headroom to store the entire UTXO set, which has been growing rapidly in recent months.

12 Likes

New to stable structures in Rust. A couple of questions, @Manu:

For a BoundedStorable impl’s MAX_SIZE, does it mean the value will always take up that many bytes in memory? For example, I use a StableBTreeMap<String, Vec<String>>. The Vec<String> can grow over time; is it good practice to set a very large MAX_SIZE for the Vec?

Continuing the StableBTreeMap<String, Vec<String>> example above, can I use a StableVec within a StableBTreeMap, e.g. StableBTreeMap<String, StableVec<String>>? If yes, how do I initialize it in thread_local! with the MEMORY_MANAGER? Would appreciate an example.

Thanks

1 Like

This seems to be working for us up to about 2MB chunks. The error may have been in my other code, as upgrades were not working for some reason when I added a 2GB BTree. Everything below that worked flawlessly.

Yes, that is correct.

No, I would advise against that since, as mentioned in your earlier question, the BTree does allocate the maximum number of bytes that you specify.

In general, if you have a situation where you want to store one-to-many data, and that data can be large, then instead of representing it as:

StableBTreeMap<Key, Vec<Value>>

I recommend representing it as:

StableBTreeMap<(Key, Value), ()>

In this case, the Key and Value are composed together to form a composite key. Whenever you want to retrieve all the values for a Key, you can use the StableBTreeMap’s range method (see the sketch below). See this tutorial, which contains an example of how to use and scan composite keys.

Stable structures cannot be nested at the moment, so I’d suggest the composite key approach above.
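
To make this concrete, here’s a hedged sketch of the composite key approach, including the thread_local!/MEMORY_MANAGER initialization asked about earlier. The names (ITEMS, items_of) and the (u64, u64) key are illustrative, and it assumes a stable-structures version that implements the storable traits for tuples of bounded types:

use std::cell::RefCell;

use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, StableBTreeMap};

type Memory = VirtualMemory<DefaultMemoryImpl>;

thread_local! {
    static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
        RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

    // (user_id, item_id) composite key; the unit value carries no data.
    static ITEMS: RefCell<StableBTreeMap<(u64, u64), (), Memory>> =
        RefCell::new(StableBTreeMap::init(
            MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(0))),
        ));
}

// Retrieve all item ids belonging to user_id with a single range scan.
fn items_of(user_id: u64) -> Vec<u64> {
    ITEMS.with(|map| {
        map.borrow()
            .range((user_id, u64::MIN)..=(user_id, u64::MAX))
            .map(|((_, item_id), ())| item_id)
            .collect()
    })
}

All the information lives in the key; the range scan over (user_id, …) plays the role that indexing into a Vec<Value> would otherwise play.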

1 Like

Thanks @ielashi

Stable structures cannot be nested at the moment

Will this be supported in the future? Or is it not technically possible?

In theory it can be done, but initializing stable structures incurs some overhead because each structure is stored in its own virtual address space (an intentional decision, as it makes the library much safer to use), so I don’t expect it to be added to our roadmap at the moment.

However, we are currently exploring removing the MAX_SIZE constraint from values, so you would then be able to declare things like StableBTreeMap<Key, Vec<Value>>. Even then, I’d only use this representation if you know your vector won’t grow too large, because we’d need to serialize/deserialize the entire vector on each access.
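
To see where that serialization cost comes from, here’s an illustrative wrapper (not part of the library) that stores a whole Vec as a single bounded value; every read and write round-trips the entire vector:

use std::borrow::Cow;

use candid::{decode_one, encode_one};
use ic_stable_structures::{BoundedStorable, Storable};

struct ValueList(Vec<String>);

impl Storable for ValueList {
    fn to_bytes(&self) -> Cow<[u8]> {
        Cow::Owned(encode_one(&self.0).unwrap()) // serializes the entire vector
    }

    fn from_bytes(bytes: Cow<[u8]>) -> Self {
        Self(decode_one(bytes.as_ref()).unwrap()) // deserializes the entire vector
    }
}

impl BoundedStorable for ValueList {
    // Must be over-provisioned for the largest vector ever stored.
    const MAX_SIZE: u32 = 64 * 1024;
    const IS_FIXED_SIZE: bool = false;
}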

1 Like

Oh please do! I am exploring StableBTreeMap for the very first time today, and this limitation/error is literally the first problem I’m facing right now.

I use a Principal as the key, and BoundedStorable currently isn’t implemented for candid::Principal.

error[E0277]: the trait bound `candid::Principal: BoundedStorable` is not satisfied
  --> src/stabletest_backend/src/lib.rs:34:39
   |
34 |     static CONTROLLERS_STATE: RefCell<ControllersState> = MEMORY_MANAGER.with(|memory_manager|
   |                                       ^^^^^^^^^^^^^^^^ the trait `BoundedStorable` is not implemented for `candid::Principal`

In addition, I use entities that contain blobs, so I’m not sure what value I should set for MAX_SIZE, since it depends on what the users save.

error[E0277]: the trait bound `MyEntity: BoundedStorable` is not satisfied
  --> src/stabletest_backend/src/lib.rs:34:39
   |
34 |     static CONTROLLERS_STATE: RefCell<ControllersState> = MEMORY_MANAGER.with(|memory_manager|
   |                                       ^^^^^^^^^^^^^^^^ the trait `BoundedStorable` is not implemented for `MyEntity`
1 Like

Regarding my issue with Principal above, it can be solved by wrapping it. In case it helps anyone else:

use candid::CandidType;
use serde::Deserialize;
use candid::Principal;

use candid::{decode_one, encode_one};
use ic_stable_structures::{BoundedStorable, Storable};
use std::borrow::Cow;

#[derive(CandidType, Deserialize, Clone, PartialOrd, Ord, Eq, PartialEq)]
pub struct MyPrincipal(Principal);

impl Storable for MyPrincipal {
    fn to_bytes(&self) -> Cow<[u8]> {
        Cow::Owned(encode_one(self).unwrap())
    }

    fn from_bytes(bytes: Cow<[u8]>) -> Self {
        decode_one(&bytes).unwrap()
    }
}

impl BoundedStorable for MyPrincipal {
    const MAX_SIZE: u32 = 29;
    const IS_FIXED_SIZE: bool = false;
}

@peterparker Using candid for serializing and deserializing is overkill. It’s also not going to work, because candid adds some magic bytes, which will make the principal exceed the 29-byte max size you specified.

Try something like this (untested):

impl Storable for MyPrincipal {
    fn to_bytes(&self) -> Cow<[u8]> {
        Cow::Borrowed(self.0.as_slice())
    }

    fn from_bytes(bytes: Cow<[u8]>) -> Self {
        Self(Principal::from_slice(bytes.as_ref()))
    }
}
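
As a quick sanity check of the slice-based impl (roundtrip is a hypothetical helper; a principal’s binary form is at most 29 bytes, so the earlier MAX_SIZE = 29 bound holds with no serialization overhead):

fn roundtrip(p: Principal) {
    let wrapped = MyPrincipal(p);
    let bytes = wrapped.to_bytes();
    // Raw principal bytes, no candid header, so the 29-byte bound is safe.
    assert!(bytes.len() <= 29);
    assert_eq!(MyPrincipal::from_bytes(bytes.clone()).0, wrapped.0);
}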

Thanks Islam. I don’t get any compilation issues and I’m able to add and get values, but if I deploy changes with dfx deploy, I lose the state. So there’s probably something incorrect in my sample; not sure if it is linked to this. I’m trying to debug.

If I use MAX_SIZE for StableBTreeMap<Key, Vec<Value>> in the current way, will it still serialize/deserialize the entire vector on each read?

From the comments on this piece of code from the blog, it seems the answer is yes, but I want to confirm. Thanks!

impl<K, V, M> BTreeMap<K, V, M>
where
  K: BoundedStorable + Ord + Clone,
  V: BoundedStorable,
  M: Memory,
{
    /// Adds a new entry to the map.
    /// Complexity: O(log(N) * K::MAX_SIZE + V::MAX_SIZE).
    pub fn insert(&mut self, key: K, value: V) -> Option<V>;

    /// Returns the value associated with the specified key.
    /// Complexity: O(log(N) * K::MAX_SIZE + V::MAX_SIZE).
    pub fn get(&self, key: &K) -> Option<V>;

    /// Removes an entry from the map.
    /// Complexity: O(log(N) * K::MAX_SIZE + V::MAX_SIZE).
    pub fn remove(&mut self, key: &K) -> Option<V>;

    /// Returns an iterator over the entries in the specified key range.
    pub fn range(&self, range: impl RangeBounds<K>) -> impl Iterator<Item = (K, V)>;

    /// Returns the number of entries in the map.
    /// Complexity: O(1).
    pub fn len(&self) -> usize;
}

Found my issue: it’s because I used the stable structures together with existing data stored on the heap, without explicitly taking care of the serialization in pre/post_upgrade.

I had to land on the quickstart example to understand that the two conflict when upgrading, and to ultimately figure out what was happening.

Would it be worth adding a note to the README, @ielashi?

1 Like

Yes, that is correct, so I’d only favor this approach if you expect the vectors to be small or medium-sized. For large datasets, I recommend the composite key approach; we use it in the Bitcoin canister to store the list of UTXOs of each Bitcoin address.

You wouldn’t get a compilation issue, but what I would expect to happen is that if you try to insert a 29-byte principal into the stable BTreeMap, it will cause a panic, because the candid serialization adds its own overhead and makes the serialized length exceed 29 bytes. Does that make sense?

IIUC, you were using stable structures but kept serializing/deserializing the heap data without using the memory manager? Contributions to the docs are more than welcome :slight_smile:
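
For reference, here’s a hedged sketch of the pattern from the quickstart example: reserve one MemoryId for the serialized heap state so it never overlaps the regions the MemoryManager hands out to stable structures. HEAP_STATE and the length-prefix layout are illustrative choices, not library API:

use std::cell::RefCell;

use ic_cdk_macros::{post_upgrade, pre_upgrade};
use ic_stable_structures::memory_manager::{MemoryId, MemoryManager};
use ic_stable_structures::{DefaultMemoryImpl, Memory};

const WASM_PAGE_BYTES: u64 = 64 * 1024;
// Region 0 is reserved for serialized heap state; stable structures
// live in other MemoryIds and survive upgrades untouched.
const UPGRADES: MemoryId = MemoryId::new(0);

thread_local! {
    static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
        RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

    // Hypothetical heap state; anything candid-serializable works.
    static HEAP_STATE: RefCell<Vec<String>> = RefCell::new(Vec::new());
}

#[pre_upgrade]
fn pre_upgrade() {
    let bytes = HEAP_STATE.with(|s| candid::encode_one(&*s.borrow()).unwrap());
    let memory = MEMORY_MANAGER.with(|m| m.borrow().get(UPGRADES));
    // Grow the region to fit a 4-byte length prefix plus the payload.
    let needed_pages = (4 + bytes.len() as u64 + WASM_PAGE_BYTES - 1) / WASM_PAGE_BYTES;
    if memory.size() < needed_pages {
        memory.grow(needed_pages - memory.size());
    }
    memory.write(0, &(bytes.len() as u32).to_le_bytes());
    memory.write(4, &bytes);
}

#[post_upgrade]
fn post_upgrade() {
    let memory = MEMORY_MANAGER.with(|m| m.borrow().get(UPGRADES));
    let mut len_buf = [0u8; 4];
    memory.read(0, &mut len_buf);
    let mut bytes = vec![0u8; u32::from_le_bytes(len_buf) as usize];
    memory.read(4, &mut bytes);
    HEAP_STATE.with(|s| *s.borrow_mut() = candid::decode_one(&bytes).unwrap());
}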