So, this is what I’ve developed today.
- As I said in my previous post, I implemented Unbounded types. PR #522. Basically, I replaced BoundedStorable:

impl BoundedStorable for PageView {
    const MAX_SIZE: u32 = PAGE_VIEW_MAX_SIZE as u32;
    const IS_FIXED_SIZE: bool = false;
}

With Unbounded:

const BOUND: Bound = Bound::Unbounded;
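For context, here is a minimal sketch of what the whole trait implementation looks like with the new API, assuming ic-stable-structures 0.6+ and Ciborium (the href field is just a stand-in for the real ones):

use std::borrow::Cow;

use ic_stable_structures::storable::{Bound, Storable};
use serde::{Deserialize, Serialize};

// Toy stand-in for the real PageView.
#[derive(Serialize, Deserialize)]
pub struct PageView {
    pub href: String,
}

// Since ic-stable-structures 0.6, BoundedStorable is gone and the bound
// is declared as an associated const directly on Storable.
impl Storable for PageView {
    fn to_bytes(&self) -> Cow<[u8]> {
        let mut buf = vec![];
        ciborium::ser::into_writer(self, &mut buf).expect("Failed to serialize PageView");
        Cow::Owned(buf)
    }

    fn from_bytes(bytes: Cow<[u8]>) -> Self {
        ciborium::de::from_reader(&*bytes).expect("Failed to deserialize PageView")
    }

    const BOUND: Bound = Bound::Unbounded;
}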
- While the type is now declared unbounded, in reality the data remain bounded, because I couldn’t just replace the custom deserializer with a generic one (Ciborium) due to the way I had serialized my data, or at least, that’s how I understand the error I reported this morning. So, in another PR #523, I implemented a fallback deserialization for backward compatibility.
Basically the following:
fn from_bytes(bytes: Cow<[u8]>) -> Self {
    from_reader(&*bytes).unwrap_or_else(|_| deserialize_bounded_page_view(bytes))
}
In other words: deserialize with Ciborium; if that does not work, fall back to the custom byte-length deserializer. This has the downside of making deserialization more costly and less efficient for existing data but, given the use case, I think it’s an acceptable tradeoff, and also kind of safer than adding some magic number to decide which deserialization to perform.
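To illustrate the fallback pattern in isolation, here is a self-contained sketch with a toy type (the struct and the 4-byte legacy layout are made up for the example; the real code uses PageView and deserialize_bounded_page_view):

use std::borrow::Cow;

#[derive(serde::Serialize, serde::Deserialize, Debug, PartialEq)]
struct Demo {
    n: u32,
}

// Hypothetical legacy decoder: pretend old entries were stored as
// 4 raw little-endian bytes.
fn legacy_decode(bytes: &[u8]) -> Demo {
    let mut buf = [0u8; 4];
    buf.copy_from_slice(&bytes[..4]);
    Demo { n: u32::from_le_bytes(buf) }
}

fn from_bytes(bytes: Cow<[u8]>) -> Demo {
    // Try CBOR first; on failure, fall back to the legacy fixed layout.
    ciborium::de::from_reader(&*bytes).unwrap_or_else(|_| legacy_decode(&bytes))
}

fn main() {
    // New-format entry, CBOR-encoded.
    let mut cbor = Vec::new();
    ciborium::ser::into_writer(&Demo { n: 7 }, &mut cbor).unwrap();
    assert_eq!(from_bytes(Cow::Owned(cbor)), Demo { n: 7 });

    // Old-format entry, raw little-endian bytes: CBOR decoding fails
    // (the payload is not a map), so the legacy decoder kicks in.
    assert_eq!(from_bytes(Cow::Owned(7u32.to_le_bytes().to_vec())), Demo { n: 7 });
}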
- Once I finished with the above, I realized that I might face some issues after an upgrade if a canister tries to update existing Bounded data. I fear that in such a case the data would either be duplicated (given that the key serialization was modified) or would somehow break, because I’m not sure what happens when I update a non-fixed entry that was previously saved with a fixed size.
Long story short, I did the following in another PR #524:
a. I reverted my keys to remain bounded, which is fine given that they are less likely to evolve in the future.
So, for the keys (and only the keys), I replaced:
const BOUND: Bound = Bound::Unbounded;
With:
const BOUND: Bound = Bound::Bounded {
    max_size: ANALYTIC_SATELLITE_KEY_MAX_SIZE as u32,
    is_fixed_size: false,
};
Then, for the data, I created an enum:
#[derive(CandidType, Serialize, Deserialize, Clone, Debug)]
pub enum MemoryAllocation {
    Unbounded,
    Bounded,
}
and extended my struct with the information as optional:
pub memory_allocation: Option<MemoryAllocation>,
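For context, the extended struct might look roughly like this (a sketch; href stands in for the real fields):

use candid::CandidType;
use serde::{Deserialize, Serialize};

#[derive(CandidType, Serialize, Deserialize, Clone, Debug)]
pub struct PageView {
    pub href: String,
    // Optional, so entries that were CBOR-serialized before this field
    // existed still deserialize: serde maps a missing Option field to None.
    pub memory_allocation: Option<MemoryAllocation>,
}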
That way, when I manually deserialize the old data, I set the value to bounded:
pub fn deserialize_bounded_page_view(bytes: Cow<[u8]>) -> PageView {
    ...
    PageView {
        ...
        memory_allocation: Some(MemoryAllocation::Bounded),
    }
}
I did not change the serializer - i.e., I don’t save “Bounded” in memory - so the fixed size remains the size that is actually written.
In my store, I implemented a bit of logic to reuse or assign an allocation:
let memory_allocation: Option<MemoryAllocation> = match current_page_view.clone() {
    None => Some(MemoryAllocation::Unbounded),
    Some(current_page_view) => current_page_view.memory_allocation,
};
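The same decision can also be expressed without cloning the whole entry just to read one field, e.g. as a small helper (hypothetical, not in the actual codebase):

// Hypothetical helper: borrow the existing entry instead of cloning it.
fn resolve_memory_allocation(
    current_page_view: &Option<PageView>,
) -> Option<MemoryAllocation> {
    match current_page_view {
        // No existing entry: new data gets the new unbounded (Ciborium) layout.
        None => Some(MemoryAllocation::Unbounded),
        // Existing entry: keep whatever layout it was saved with.
        Some(current) => current.memory_allocation.clone(),
    }
}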
Finally, in my implementation, I used that information to select either the custom serialization or the generic one.
impl Storable for PageView {
    fn to_bytes(&self) -> Cow<[u8]> {
        match self.memory_allocation {
            Some(MemoryAllocation::Bounded) => serialize_bounded_page_view(self),
            _ => serialize_to_bytes(self),
        }
    }

    // from_bytes (with the fallback shown above) and BOUND complete the impl.
}
Following this pattern, I can add new optional fields to my struct in the future.
I wrote some PicJS / PocketIC tests to assert the upgrade path; so far everything seems fine.
However, if anybody is reading these lines and notices anything incorrect, please scream!!!