Error storing in IC stable structures

First of all, I tried to save short content data to my backend and it was fine, but when the content gets bigger I can't save anymore. Could this have happened because I previously migrated some fields, like adding a new field to or removing a field from the struct? I think that's the cause, because locally I ran dfx deploy backend --mode=reinstall and the error went away. But on the IC I don't want to lose my data.

[Canister bkyz2-fmaaa-aaaaa-qaaaq-cai] Panicked at 'Attempting to allocate an already allocated chunk.', /Users/ahmed/.cargo/registry/src/index.crates.io-6f17d22bba15001f/ic-stable-structures-0.6.5/src/btreemap/allocator.rs:166:9


My code:

#[derive(Clone, Debug, Deserialize, CandidType)]
pub struct ContentNode {
    pub id: ContentId,
    pub parent: Option<ContentId>,
    pub _type: String,
    pub value: String,
    pub text: String,
    pub language: String,
    pub indent: u64,
    pub data: Option<ContentData>,
    pub listStyleType: String,
    pub listStart: u64,
    #[serde(default)]
    pub children: Vec<ContentId>,
}

pub type ContentTree = Vec<ContentNode>;
#[derive(Clone, Debug, Deserialize, CandidType)]
pub struct ContentNodeVec {
    pub contents: HashMap<FileId, ContentTree>,
}


thread_local! {
    static FILE_CONTENTS: RefCell<StableBTreeMap<String, ContentNodeVec, Memory>> = RefCell::new(
        StableBTreeMap::init(
            MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(5))),
        )
    );
}



pub fn update_file_contents(file_id: FileId, content_nodes: ContentTree) {
    FILE_CONTENTS.with(|file_contents| {
        let mut contents = file_contents.borrow_mut();
        let mut content_map = HashMap::new();
        content_map.insert(file_id.clone(), content_nodes);
        contents.insert(
            file_id,
            ContentNodeVec {
                contents: content_map,
            },
        );
    });
}


impl Storable for ContentNodeVec {
    fn to_bytes(&self) -> Cow<[u8]> {
        Cow::Owned(Encode!(self).unwrap())
    }

    fn from_bytes(bytes: Cow<[u8]>) -> Self {
        // Note: silently falling back to an empty map on decode failure
        // can hide data corruption or schema-migration errors.
        Decode!(bytes.as_ref(), Self).unwrap_or_else(|_| ContentNodeVec {
            contents: HashMap::new(),
        })
    }

    const BOUND: Bound = Bound::Unbounded;
}

I believe the ContentNodeVec needs to be a bounded type. Storing a HashMap in your StableBTreeMap is probably not a good idea.

If you haven’t read it I highly recommend this blog post:

The BTreeMap stable structure is an associative container that can hold any bounded storable type. The map must know the sizes of the keys and values because it allocates nodes from a pool of fixed-size tree nodes.

Have you considered compiling your canister as WASI, with the support of wasi2ic? Then you could store your files using standard file system functions.


Hi, thank you for helping again.

Do you mean:

use ic_stable_structures::{BoundedStorable,Storable};

impl BoundedStorable for ContentNodeVec {?

Panicked at 'Attempting to allocate an already allocated chunk.',

From the stable-structures source code:

        // Get the next available chunk.
        let chunk_addr = self.free_list_head;
        let mut chunk = ChunkHeader::load(chunk_addr, &self.memory);

        // The available chunk must not be allocated.
        assert!(
            !chunk.allocated,
            "Attempting to allocate an already allocated chunk."
        );

This indicates that a data chunk is present in the free list while still being marked as allocated. This suggests a corruption in the allocator’s internal state. Data recovery in such a situation may be challenging…

No, I meant, it is generally not a good idea to store an item with unknown size (ContentNodeVec) as the value of a StableBTreeMap. Correct me if I am wrong @berestovskyy.


I just added const BOUND: Bound = Bound::Bounded { max_size: 9999999, is_fixed_size: false }; but I'm still not getting any results.

I think you should consider rethinking the storage strategy and mimic that of a regular file system.

Separate the metadata structure from the actual blocks of a file.

Metadata structure

  • Stored as a stable BTreeMap<filename, metadata>
  • Contains file name etc
  • Contains a block index, a fixed size vector with pointers to the file blocks

This works well if you have a fixed maximum file size that is not huge. If the number of file blocks exceeds what you have allocated for in the metadata structure, you would need to be able to link to another metadata structure. More advanced block indexes could be used as well, b-trees etc.

File blocks

  • Stored as a stable Vec<file_block> where each block is 4KB or something
  • On save, a file is divided up into blocks and saved, metadata structure updated with the pointers (indexes in the file block vector)
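A minimal, std-only sketch of this layout (hypothetical names; plain `BTreeMap` and `Vec` stand in for the stable `BTreeMap<filename, metadata>` and the stable `Vec<file_block>`, and `BLOCK_SIZE`/`MAX_BLOCKS` are assumed values):

```rust
use std::collections::BTreeMap;

const BLOCK_SIZE: usize = 4096; // 4 KB blocks, as suggested above
const MAX_BLOCKS: usize = 256;  // fixed-size block index => max file size 1 MB

// Metadata has a fixed upper bound on its size, so it could be stored
// as a *bounded* value in a StableBTreeMap<filename, metadata>.
#[derive(Clone, Debug)]
struct FileMetadata {
    size: u64,
    // Indexes into the block vector; length is capped at MAX_BLOCKS.
    block_index: Vec<u64>,
}

// Stand-ins for the stable structures:
//   BTreeMap ~ StableBTreeMap<String, FileMetadata, Memory>
//   Vec      ~ StableVec<FileBlock, Memory>
#[derive(Default)]
struct FileSystem {
    metadata: BTreeMap<String, FileMetadata>,
    blocks: Vec<Vec<u8>>, // each entry is one block of at most BLOCK_SIZE bytes
}

impl FileSystem {
    fn save(&mut self, name: &str, data: &[u8]) -> Result<(), String> {
        let mut block_index = Vec::new();
        for chunk in data.chunks(BLOCK_SIZE) {
            if block_index.len() >= MAX_BLOCKS {
                return Err("file too large for fixed block index".into());
            }
            // Append the block and remember where it landed.
            block_index.push(self.blocks.len() as u64);
            self.blocks.push(chunk.to_vec());
        }
        self.metadata.insert(
            name.to_string(),
            FileMetadata { size: data.len() as u64, block_index },
        );
        Ok(())
    }

    fn load(&self, name: &str) -> Option<Vec<u8>> {
        let meta = self.metadata.get(name)?;
        let mut out = Vec::with_capacity(meta.size as usize);
        for &i in &meta.block_index {
            out.extend_from_slice(&self.blocks[i as usize]);
        }
        Some(out)
    }
}
```

Because every stored value (metadata entry or single block) now has a known maximum size, the allocator never has to handle arbitrarily large values.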

If I remember correctly, that limitation was lifted. It now stores values that are too big as a linked list under the hood, as far as I've read.

Still curious to know what caused the possible stable structure data corruption. @AliSci if you manage to make this reproducible, please post the steps here.

For now, the only way forward I could think of would be to migrate the stored values to another memory in a scheduled timer, moving X values at a time.
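As a rough illustration of that batched migration idea (hypothetical names; plain `BTreeMap`s stand in for the two stable memories, and on the IC each call would be driven by an `ic_cdk_timers` timer so one batch fits in one execution):

```rust
use std::collections::BTreeMap;

const BATCH_SIZE: usize = 100; // "X values at a time"

// Move at most BATCH_SIZE entries from `old` to `new`.
// Returns true once `old` is empty, i.e. the migration is finished
// and the timer can be cancelled.
fn migrate_batch(
    old: &mut BTreeMap<String, Vec<u8>>,
    new: &mut BTreeMap<String, Vec<u8>>,
) -> bool {
    // Collect the keys first so we don't mutate while iterating.
    let keys: Vec<String> = old.keys().take(BATCH_SIZE).cloned().collect();
    for k in keys {
        if let Some(v) = old.remove(&k) {
            new.insert(k, v);
        }
    }
    old.is_empty()
}
```

The same shape works with two `StableBTreeMap`s on different `MemoryId`s; the key point is that each timer tick touches only a bounded number of entries.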

Any idea where you read that?

The docs and the library both reference the tutorial by Roman, which states:

@kristofer can you give me a code example please? Do you mean
making small blocks of the actual data? So the issue is just a matter of size?



    thread_local! {
        static FILE_CONTENTS: RefCell<StableBTreeMap<String, ContentNodeVec, Memory>> = RefCell::new(
            StableBTreeMap::init(
                MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(5))),
            )
        );
    }


#[derive(Clone, Debug, Deserialize, CandidType)]
struct FileBlock {
    index: u32,
    data: Vec<u8>,
}
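Roughly, yes. Here is a hedged, std-only sketch of splitting a file into blocks like the `FileBlock` you posted and reassembling it (`BLOCK_SIZE` and the helper names are assumptions, not from the library):

```rust
#[derive(Clone, Debug, PartialEq)]
struct FileBlock {
    index: u32,
    data: Vec<u8>,
}

const BLOCK_SIZE: usize = 4096; // assumed 4 KB block size

// Split a file's bytes into numbered blocks of at most BLOCK_SIZE bytes.
fn to_blocks(bytes: &[u8]) -> Vec<FileBlock> {
    bytes
        .chunks(BLOCK_SIZE)
        .enumerate()
        .map(|(i, chunk)| FileBlock {
            index: i as u32,
            data: chunk.to_vec(),
        })
        .collect()
}

// Reassemble the original bytes; blocks may arrive in any order.
fn from_blocks(mut blocks: Vec<FileBlock>) -> Vec<u8> {
    blocks.sort_by_key(|b| b.index);
    blocks.into_iter().flat_map(|b| b.data).collect()
}
```

Each `FileBlock` then has a small, known maximum size, which is what makes it a good fit for a bounded `Storable` impl.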


This error indicates that there’s some kind of significant corruption to the StableBTreeMap. Did you ever change the bounds of your key/value structs? Unfortunately you cannot change the bounds of structs once you start storing them in a StableBTreeMap, and doing so could corrupt your data.


I did change the bound, but not for the content vector. The other structures still work fine, but the content vector doesn't work anymore.

How can I make sure this never happens in the future?

How can I get a better error message instead of that opaque panic?

Also, how do I reset only one of my statics? I want to pick this one and empty it.

@ielashi if we put the upgrade issues aside, is the storage layout as described by @AliSci ok?

Looks like FILE_CONTENTS can be simplified to store a Vec<ContentNode> instead of a ContentNodeVec with a HashMap containing the Vec<ContentNode>.

PS: We need to update the docs to better describe the behaviour of Unbounded.

I can't do RefCell<StableBTreeMap<String, Vec<Anything>, Memory>> because Storable is not implemented for Vec.

Moving from bounded to unbounded is fine and is supported, but what I meant is changing the max_size - that would cause corruption in your data. If all you did was change the size to Unbounded then this, in theory, shouldn’t cause any issues.

I’d echo your statement that the schema can be made simpler and more efficient. It’s not exactly clear to me what the key of FILE_CONTENTS is. Is that the file name? Also, are there many file IDs for a single file?


Yeah, sorry.

pub struct File {
    pub contents: Vec<ContentNode>
}

impl Storable for File {
  //...
}

This has nothing to do with your issue, it seems, though.


The key is the user ID. For each user there is a list of file contents; I don't think that matters right now.

I updated my question: "first of all, I tried to save short content data to my backend and it was fine."

Does const BOUND: Bound = Bound::Unbounded; mean I can store unlimited data?