Awarded: ICDev.org Bounty #36 - Signing Tree and DER Encoding - Motoko $10,000

Maybe, but if only canisters are involved, why bother with certification? If you get a message from canister A, it will be from canister A, all in the trusted environment of the IC.

Or put differently: inter-canister messages are already automatically certified by the IC.

The scenario I considered here was when you want to use a time-shifted certification without having to involve the other canister again.

For example, if I have a canister that lets me record non-revocable data, I could produce a certificate of this and give it to someone. They could present it later and have a different canister trust it, since it was certified.

like a check, but for data.

Yes, that should be possible. A minor wrinkle is that the certificate will have to be fetched via a query call (not an update call, not an inter-canister call), so you need some external tooling for that. But once you have the certificate you can of course keep it, store it in a canister, and validate it in a canister (the validation is just some code to run; there is no difference whether you run it outside or inside the IC).
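To make the validation step concrete, here is a rough illustration (not the library's code) of how the root hash of a possibly pruned hash tree can be reconstructed in Python. The domain separators follow the hashing scheme in the IC interface specification; the tuple encoding of the tree is my own assumption for this sketch. A full validator would additionally check the signature on the certificate against the certified root, which is omitted here.

```python
import hashlib

def domain_sep(s: str) -> bytes:
    # One length byte followed by the separator text, per the IC spec.
    b = s.encode()
    return bytes([len(b)]) + b

def reconstruct(t) -> bytes:
    """Recompute the root hash of a hash tree.

    Trees are encoded here (my own convention) as nested tuples:
      ("empty",), ("fork", left, right), ("labeled", label, sub),
      ("leaf", data), ("pruned", hash32)
    """
    kind = t[0]
    h = hashlib.sha256()
    if kind == "empty":
        h.update(domain_sep("ic-hashtree-empty"))
    elif kind == "fork":
        h.update(domain_sep("ic-hashtree-fork"))
        h.update(reconstruct(t[1]))
        h.update(reconstruct(t[2]))
    elif kind == "labeled":
        h.update(domain_sep("ic-hashtree-labeled"))
        h.update(t[1])
        h.update(reconstruct(t[2]))
    elif kind == "leaf":
        h.update(domain_sep("ic-hashtree-leaf"))
        h.update(t[1])
    elif kind == "pruned":
        # Pruned branches carry their precomputed hash directly.
        return t[1]
    return h.digest()
```

Because pruned branches carry their precomputed hash, the holder of a certificate can verify just the parts they care about without seeing the whole tree.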

Thanks, @nomeata
I’m not sure, but the certificate is temporary. I wanted to link it with the assignment of roles to canisters (instead of tokens); that would turn out to be an elegant solution. Details in Bounty #62.

I really appreciate this library. It was extremely needed. I just wanted to make sure I was understanding this correctly to properly use it, as my understanding of Merkle trees is, well, subpar. :smiling_face_with_tear:

So is the Merkle tree itself not in stable memory, therefore limiting this data structure to storing 4 GB of info or less? But somehow I can upgrade my canister and the lookup function still retrieves my data properly…?

And to further this question: you use CertifiedData to store the hash of the entire Merkle tree structure, as in the pic below. So the limit wouldn’t actually be the amount of data but the cycle limit for hashing the entire tree, which would be less than 4 GB, right?

Correct, although depending on how you use it, you are only storing hashes in its leaves (e.g. for the HTTP certification use case), and these hashes can of course come from much larger data.

Good question! The treeHash is actually cached, so this function call is cheap. Whenever you modify the tree, only a few hashes are updated – if the tree is nicely balanced, O(log(n)) hashes need to be recomputed.
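To illustrate the caching idea (a toy model, not the library's actual data structure), here is a binary Merkle tree in Python that caches every internal hash. Updating a leaf rehashes only the nodes on the root-to-leaf path, so the cost is O(log n) hash computations, and reading the root costs nothing:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

class MerkleList:
    """Toy binary Merkle tree over a power-of-two list of leaves.

    Every internal hash is cached in `nodes`; an update recomputes
    only the root-to-leaf path, i.e. O(log n) hashes.
    """
    def __init__(self, leaves):
        n = len(leaves)
        assert n & (n - 1) == 0, "power of two, for simplicity"
        self.n = n
        # nodes[1] is the root; leaf i lives at index n + i
        self.nodes = [b""] * (2 * n)
        for i, leaf in enumerate(leaves):
            self.nodes[n + i] = h(leaf)
        for i in range(n - 1, 0, -1):
            self.nodes[i] = h(self.nodes[2 * i] + self.nodes[2 * i + 1])
        self.hash_calls = 0  # counts hashing done by update() only

    def root(self) -> bytes:
        return self.nodes[1]  # cached; no hashing needed

    def update(self, i: int, leaf: bytes):
        i += self.n
        self.nodes[i] = h(leaf)
        self.hash_calls += 1
        while i > 1:
            i //= 2
            self.nodes[i] = h(self.nodes[2 * i] + self.nodes[2 * i + 1])
            self.hash_calls += 1
```

With 1024 leaves, a single update performs 11 hash computations (one for the leaf plus ten internal levels), not 1024.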

But yes, as always on the IC there is a risk of running out of cycles, and I always suggest stress-testing your canisters against (more than) the expected load to see whether you are on the safe side.

But okay, so if you’re storing only the hash of the raw data, returning the data in a query won’t be guaranteed to be correct, right? Only verifiable? Let me give an example.

Let’s say I had a list of token balances in a HashMap<Principal, Nat>, not as a Merkle tree. Call it

balance_hashmap : HashMap<Principal, Nat>

And I certify each balance every time I update a user’s balance by doing

ct.put([Principal.toBlob(<principalX>)], hash(BlobFromNat(<balanceX>)))

Then if I have a query function

public query func get_balance(principal : Principal) : async (Nat, CertificationHeader) {
  // pseudocode: the names aren’t exactly correct
  let header = ct.certification_header([Principal.toBlob(principal)]); // the witness and certification header
  let balance = balance_hashmap.get(principal);
  // return some combination of the two above
  (balance, header)
}

My issue is that there is still no guarantee that the balance is returned correctly. A node can still return an improper balance; the only difference now is that we can verify that it is wrong, right? Having the cert header only lets the caller of get_balance verify that the returned balance is correct after receiving the function output. Or is my example improper in usage?
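That understanding is right: certification lets the caller detect a wrong answer, it does not prevent one. As a minimal sketch (hypothetical names, and assuming the certified leaf is the SHA-256 of the returned value, as in the example above), the client-side check for the leaf boils down to:

```python
import hashlib

def verify_leaf(witness_leaf_hash: bytes, returned_balance: bytes) -> bool:
    # The client recomputes the hash of the value the node returned and
    # compares it with the leaf hash carried in the witness. A malicious
    # node can still send a wrong balance, but this check will then fail.
    return hashlib.sha256(returned_balance).digest() == witness_leaf_hash
```

A real client would additionally reconstruct the witness root, compare it against the certified data in the certificate, and check the certificate's signature; this snippet only shows the leaf comparison that makes tampering detectable.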

Also, how is it possible to be limited to 4 GB (or whatever the non-stable memory limit is) but also be stable across upgrades (as I don’t seem to need to do anything with the CertifiedTree in pre/post-upgrade)? Doesn’t make sense to me. I ran a test just populating a bunch of 3000-byte arrays and verified the memory limit is definitely not the larger 32-64 GB of stable memory.

It is only stable in the Motoko sense (stable var). A better description would be “managed”. The Motoko runtime pushes stable vars to stable memory for upgrades and then loads them back in post-upgrade.

Yeah, but stable in Motoko is still quite large, i.e. at least 15 GB, no?
I ran the test below with the certified tree.
Basically, what it does is store 3000 bytes as the value and two 35-byte arrays as the key in the Merkle tree of the library Joachim wrote. Let’s round up to 4000 bytes per key-value pair.

Then I looped through it about 400k times using a JS script.

(image: the exact number of counts)

I can still populate it, but when I run an upgrade I get the memory error below. I have no pre/post-upgrade functions either.

If you do the math:
(4000 bytes + let’s say 2000 bytes of padding for the Merkle tree structure) × 500,000 counts (rounded up) = 6000 × 5e5 = 3e9 bytes = 3 GB. Even for Motoko stable memory that’s not even close to the limit. Maybe I’m missing something in my understanding; I would appreciate any clarification or correction. I know it’s not going to be the 32 GB or 64 GB limit (whichever it currently is), but 3 GB and a memory error doesn’t seem correct.

Stable memory is 64 GB, but managed stable is maxed at 4 GB, I think. Different things. Very confusing.

If you want 64 GB you need to use @sardariuss’s library. You could retrofit what @nomeata did here in stable memory, but I’d imagine your rebalancing calculation might get out of hand.

Ohhh, that clears things up. Appreciate it. But okay, so if using @sardariuss’s implementation, I’d have to implement the hash computation part for each insertion or deletion then. Hmm.

Yes, but you could stick that hash into the managed-memory tree. That way you get 4 GB of tree space and 64 GB of storage space. You’d have an extra step to add the base of your witness, but I think that should work.
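A toy model of that split (hypothetical names; the real thing would use the two libraries discussed above, and the certified side would produce an actual witness rather than just storing hashes):

```python
import hashlib

class TwoTierStore:
    """Toy model of the split described above: large values go into
    bulk storage (standing in for the 64 GB stable structure), while
    only their 32-byte hashes enter the certified tree (standing in
    for the 4 GB managed one)."""
    def __init__(self):
        self.bulk = {}       # key -> full value (large, uncertified)
        self.certified = {}  # key -> sha256(value) (small, witnessed)

    def put(self, key: bytes, value: bytes):
        self.bulk[key] = value
        self.certified[key] = hashlib.sha256(value).digest()

    def get_with_proof(self, key: bytes):
        # A real canister would return a witness for this leaf, too.
        return self.bulk[key], self.certified[key]
```

The certified side grows only by 32 bytes per entry regardless of value size, which is what makes the 4 GB tree enough to certify 64 GB of data.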
