Nice! I’ll have a play around with it shortly.
If we have maps where the keys and values are fixed size would they also benefit from using v2 or should they remain on v1?
As far as beta testing is concerned, you won’t observe much of a difference. When V2 is ready and we make a production release, developers will not actually choose between V2 and V1 - that’ll be abstracted away. BTreeMaps would all automatically get upgraded to V2 under the hood.
Which of the main options discussed in the OP were selected for the beta? I don’t see the example showing how to define any of the traits.
We opted for solution #2, which was the favored solution in this thread. All the examples have already been updated with the new API. Here’s an example of how to implement Storable for a type: https://github.com/dfinity/stable-structures/blob/main/examples/src/custom_types_example/src/lib.rs
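For readers skimming the thread, the general shape is a byte-serialization round trip. The following is a self-contained sketch using a locally defined stand-in trait: the real Storable trait lives in the ic-stable-structures crate and also carries a BOUND constant describing the maximum encoded size (see the linked example for the authoritative version). The UserId type and its little-endian encoding are made up for illustration.

```rust
use std::borrow::Cow;

// Stand-in for the Storable trait; the real one in ic-stable-structures
// also has a `BOUND` constant describing the maximum encoded size.
trait Storable {
    fn to_bytes(&self) -> Cow<[u8]>;
    fn from_bytes(bytes: Cow<[u8]>) -> Self;
}

// A hypothetical fixed-size type: encode the u64 as 8 little-endian bytes.
struct UserId(u64);

impl Storable for UserId {
    fn to_bytes(&self) -> Cow<[u8]> {
        Cow::Owned(self.0.to_le_bytes().to_vec())
    }

    fn from_bytes(bytes: Cow<[u8]>) -> Self {
        let mut buf = [0u8; 8];
        buf.copy_from_slice(&bytes);
        UserId(u64::from_le_bytes(buf))
    }
}
```

The key property is that from_bytes(to_bytes(x)) reproduces x exactly; for variable-size types the real crate lets you declare Bound::Unbounded instead of a fixed size.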
I see it’s all on main. This is extremely exciting! I’m hoping to get a chance to dive in soon; actually, I was just in the middle of reimplementing stable structures in Azle.
So far so good, got our first set of tests passing with the unbounded types
Islam, any news on support for this feature from the Motoko team at DFINITY? @claudio, is this on the Motoko team’s radar? Any estimate of when it would make it into the code base?
This is great news, great job fellows! I am just hoping it works in Motoko too.
Hey Joseph, I’m unaware of this specific change being ported to Motoko. AFAIK there is a Motoko port of StableBTreeMap that’s been done by the community and is not owned by the Motoko team. I do know that the Motoko team are working on a memory manager to allow giving separate virtual memories to a type, which is a necessary building block for supporting something like stable structures there.
We’ve just released Motoko 0.10.0, which contains a new library, Region.mo, for declaring isolated subregions of IC stable memory, similar to the isolated Memories of stable structures but with a little more integration into Motoko’s type system.
We don’t have anything equivalent to StableBTree yet, but @matthewhammer is investigating adapting the work of @sardariuss (a Motoko port of the original Rust StableBTree, AFAIU) to use Region.mo for better encapsulation.
Good news, @ielashi and @claudio. So we don’t have the Rust-equivalent functionality yet, but there is work happening. I am glad; StableBTreeMap has been a lifesaver, and it makes the whole upgrade process far easier.
@claudio I will be looking forward to Motoko 0.10, and I am hopeful Regions will enable a close equivalent to Rust’s stable memory features, including, one day, the bounded-size updates.
Dragginz is now using this; it was an easy integration. Will let you know if we have any problems. Thanks again!
When trying to implement this with version 0.6.0-beta.0, I ran into an issue where:
- WHITELISTS data isn’t persistent between canister upgrades
- WHITELIST_ID is persistent between canister upgrades

Anything known that could cause this / what am I doing wrong?
// Imports added for context; they assume the ic-stable-structures crate.
// MAX_PAGES is defined elsewhere in the project.
use std::cell::RefCell;

use ic_stable_structures::memory_manager::{MemoryId, MemoryManager, VirtualMemory};
use ic_stable_structures::{DefaultMemoryImpl, RestrictedMemory, StableBTreeMap, StableCell};

type Memory = VirtualMemory<DefaultMemoryImpl>;
type RMemory = RestrictedMemory<DefaultMemoryImpl>;

thread_local! {
    static MEMORY_MANAGER: RefCell<MemoryManager<DefaultMemoryImpl>> =
        RefCell::new(MemoryManager::init(DefaultMemoryImpl::default()));

    pub static WHITELIST_ID: RefCell<StableCell<u64, RMemory>> = RefCell::new(
        StableCell::init(
            RMemory::new(DefaultMemoryImpl::default(), 0..MAX_PAGES),
            0,
        )
        .expect("failed"),
    );

    pub static WHITELISTS: RefCell<StableBTreeMap<u64, Whitelist, Memory>> = RefCell::new(
        StableBTreeMap::init_v2(
            MEMORY_MANAGER.with(|m| m.borrow().get(MemoryId::new(0))),
        ),
    );
}
The Whitelist struct:
#[derive(CandidType, Deserialize, Clone, Debug)]
pub struct Whitelist {
    pub id: u64,
    pub name: String,
    pub color: String,
    pub owner: Principal,
    pub whitelist: Vec<Principal>,
}

impl Storable for Whitelist {
    fn to_bytes(&self) -> std::borrow::Cow<[u8]> {
        Cow::Owned(Encode!(self).unwrap())
    }

    fn from_bytes(bytes: std::borrow::Cow<[u8]>) -> Self {
        Decode!(bytes.as_ref(), Self).unwrap()
    }

    const BOUND: Bound = Bound::Unbounded;
}
The problem lies in how you’re structuring stable memory. You’re initializing:
- MEMORY_MANAGER, giving it the entire stable memory.
- WHITELIST_ID, also giving it the entire stable memory (a restricted memory from 0..MAX_PAGES is practically all the stable memory that there is).

This clash is why you’re seeing only WHITELIST_ID being persisted, while the other structures are not. I recommend that you keep it simple and use the memory manager for everything. Another approach would be to give WHITELIST_ID an RMemory with the range 0..1 and the MEMORY_MANAGER an RMemory with the range 1..MAX_PAGES.

And, do you need to store the WHITELIST_ID separately? Isn’t that information already stored in WHITELISTS?
Thanks, that makes sense. I assumed it was a memory allocation for that specific storage.
The reason I keep a separate WHITELIST_ID is that I assume it is cheaper to increment this id than to search for the highest value of whitelist.id.
You can use the last_key_value method on the BTreeMap. That’s an efficient call.
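For illustration, here is that pattern with std’s BTreeMap, whose last_key_value method has the same shape as the one mentioned above (the next_id helper name is hypothetical):

```rust
use std::collections::BTreeMap;

// Derive the next id from the highest existing key instead of keeping a
// separate counter. On an ordered map, last_key_value is an O(log n) lookup,
// not a scan of all entries.
fn next_id(map: &BTreeMap<u64, String>) -> u64 {
    map.last_key_value().map(|(k, _)| k + 1).unwrap_or(0)
}
```

This removes the need to keep a WHITELIST_ID cell consistent with the map across upgrades.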
Hello @ielashi,
I tried out the latest release today, v0.6.0-beta.1, and encountered the exact same issue as the one I discovered when testing an earlier branch of the v2 version.
When attempting to upload a 10 MB file (an image) to my canister, the upload process appears to work correctly. However, I’m unable to deserialize the data through http_request, and I receive the following error:
Replica Error (5): “IC0522: Canister ajuq4-ruaaa-aaaaa-qaaga-cai exceeded the instruction limit for single message execution.”
It’s possible that the issue is on my end, or there may be no issue at all and this is a limitation of the IC. Or it might simply be dumb of me to think that I can serialize and deserialize 10 MB in one go.
To help diagnose, I’ve prepared a sample repository along with detailed instructions. I’ve made an effort to streamline the code, eliminating any unrelated components. However, please be aware that due to the presence of upload and HTTP endpoints, there is still some code in the repository. Tried my best.
Please let me know if you have any questions or require further information.
https://github.com/peterpeterparker/stable_structure_execution_limit
Once again, I apologize in advance if the issue turns out to be on my side.
P.S.: In the real-life solution I’ll have to limit chunks to 2 MB given the ingress max size; still, I thought this example was interesting.
Today I incorporated an additional stable tree map into my actual implementation and refactored to restrict the chunk size to two megabytes (PR here). The result was successful: I ran a quick test uploading four 10-megabyte images, followed by a 200-kilobyte image, and everything worked perfectly.
So when it comes to my project, the issue I shared yesterday isn’t a blocker. I would still be interested to know whether my feedback/question was totally stupid or a bit interesting, though.
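A minimal sketch of the chunking idea, with the 2 MiB size taken from the posts above; the function names and the (index, bytes) layout are assumptions for illustration. Each chunk would then be stored as its own map entry, so no single read or write approaches the per-message limits:

```rust
// Split an asset into 2 MiB chunks keyed by index. Each (index, bytes) pair
// becomes its own entry, keeping every message well under the ingress limit.
const CHUNK_SIZE: usize = 2 * 1024 * 1024;

fn into_chunks(data: &[u8]) -> Vec<(u32, Vec<u8>)> {
    data.chunks(CHUNK_SIZE)
        .enumerate()
        .map(|(i, c)| (i as u32, c.to_vec()))
        .collect()
}

// Concatenate chunks back into the original asset, assuming they are
// already ordered by index.
fn reassemble(chunks: &[(u32, Vec<u8>)]) -> Vec<u8> {
    chunks.iter().flat_map(|(_, c)| c.iter().copied()).collect()
}
```

With this layout, serving a range of an asset only touches the chunks that overlap it, rather than deserializing the whole 10 MB value in one call.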
Hey @peterparker,
Thanks for taking the time to put together the example. I wrote a quick benchmark to see if the BTreeMap was indeed the issue. In this benchmark, I measure how many instructions it takes to read four 10 MiB assets.
#[query]
pub fn btreemap_get_10mib() -> BenchResult {
    let mut btree: BTreeMap<String, Vec<u8>, _> = BTreeMap::new(DefaultMemoryImpl::default());

    // 4 assets, each is 10MiB.
    let entries = vec![
        ("some/path/asset1".to_string(), vec![1; 10 * 1024 * 1024]),
        ("some/path/asset2".to_string(), vec![2; 10 * 1024 * 1024]),
        ("some/path/asset3".to_string(), vec![3; 10 * 1024 * 1024]),
        ("some/path/asset4".to_string(), vec![4; 10 * 1024 * 1024]),
    ];

    // Insert the assets into the map.
    for (key, value) in entries.iter() {
        btree.insert(key.clone(), value.clone());
    }

    // Benchmark retrieving all assets from the map.
    benchmark(|| {
        for (key, value) in entries.into_iter() {
            assert_eq!(btree.get(&key), Some(value));
        }
    })
}
Our internal benchmarks showed that reading all four 10MiB assets took ~1.5 billion instructions, which is well below the instruction limit. In the example you shared, reading a single asset exceeded the instruction limit. The example code is quite complex, so it seems that the reason the instruction limit is hit is due to other code in the example, not the BTreeMap itself.
Thanks for double-checking and for the benchmark!
Cool trick for generating large mock objects; definitely going into my toolbelt. Noob me would probably have read 10 MB files from disk.