Enable Canisters to Hold ICP

Understood. If it’s really that easy to locate a specific memory location and read it across 2/3 of the nodes at the same time then that would be good to know. I still imagine there are a lot of other moving parts that would make the attack more complex, but this is good information.

1 Like

Hi @JensGroth! Any updates on whether a proposal is coming this week? Thank you!

6 Likes

Yes, the plan is to make the proposal this week, we’re currently testing the upgrade on our testnet.

7 Likes

@JensGroth does this week's proposal also come with a new, complete Candid interface for the ledger? Or is that at a later time?

1 Like

Yes, the ledger will be getting its new interface to support this upgrade, and you can see a preview of it in this draft PR of an example that calls it: Draft: a canister that transfers funds using ICP Ledger API by roman-kashitsyn · Pull Request #123 · dfinity/examples · GitHub

5 Likes

Thanks for the glimpse! I wonder if we will regret not putting the error variant in an opt, which is necessary if you want to extend the set of variant tags (e.g. errors) later in a backwards-compatible way. Is it too late to change that?

Yuck. Although, I just noticed that with the latest iteration of subtyping on Candid values, the idea of wrapping variants in opt to make them extensible doesn't work: with the subtype check at the level of types, such an extended opt type will always decode to null, even if a known tag is observed. I will bring this up in the Candid repo.

4 Likes

Looks great! I see it's a draft. Where/when are the Candid methods to read the ledger and find specific transactions? For a canister to accept ICP, it will also want to be able to confirm that a specific sender sent it. Will that also be with this proposal?

2 Likes

Hi levi,

The version that we’ll release will not have a method to fetch blocks – we are still working on that. However, one can already do the kind of check you suggest by encoding in the destination subaccount all the information you’d want to be associated with the transaction. The subaccount would need to be agreed with the sender somehow, but then checking the balance of the corresponding address (which should be non-zero if the correct transfer was made) should ensure that all is as expected: the balance should be the expected amount and the subaccount info confirms the parameters.
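Concretely, the subaccount-encoding check above can be sketched in Python. This is a minimal illustration with a hypothetical derivation scheme (SHA-256 over the payer and an order id, yielding the 32 bytes of a subaccount); the actual derivation just needs to be something the sender and receiver agree on:

```python
import hashlib

def derive_subaccount(payer: str, order_id: str) -> bytes:
    """Derive a 32-byte subaccount that encodes who is paying and for what,
    so that a balance on this subaccount confirms the expected transfer.
    (Hypothetical scheme: any derivation agreed with the sender works.)"""
    return hashlib.sha256(f"{payer}|{order_id}".encode()).digest()

def payment_received(balance_of, payer: str, order_id: str, expected_e8s: int) -> bool:
    """Check that the derived subaccount holds at least the expected amount."""
    return balance_of(derive_subaccount(payer, order_id)) >= expected_e8s

# Stub ledger standing in for a balance query: one transfer of 1 ICP
# (amounts in e8s, 1 ICP = 100_000_000 e8s).
ledger = {derive_subaccount("alice", "order-42"): 100_000_000}
balance_of = lambda sub: ledger.get(sub, 0)
print(payment_received(balance_of, "alice", "order-42", 100_000_000))  # True
print(payment_received(balance_of, "alice", "order-99", 100_000_000))  # False
```

In a real canister, `balance_of` would be a call to the ledger's balance-query method for the account derived from the canister's principal and this subaccount.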

Of course, once we have block fetching there will be an additional means of checking: looking at the transaction rather than at the destination address. I hope this helps.

1 Like

Hey, yes that is a good solution, gratitude. I will want to make sure that the sub-account balance is zero (or some amount that I keep track of) before I give the user the sub-account address to send to. Not the most sustainable, but it works for now.

For the people out there who are making canisters that want to analyze and track the ledger, can you give us a timeframe for the fetch of the blocks? Are we looking at a couple of weeks? Or after the Bitcoin and sns stuff next year? So we know how to plan our canisters.

One more question on the ledger, i don’t see a notify method in the draft, are the old methods (send and then notify) still going to work for the creation of a canister through the ledger? Are there plans for a new method(s) to create a canister through the ledger? And what is the timeframe we are looking at with that?

2 Likes

The sub-account based check is a workable solution that can complement other approaches where one fetches and inspects blocks.

The fetch_blocks method won't be available in two weeks; venturing a guess, perhaps by the end of December. I'd expect it to be available before the SNS & BTC integration work.

We plan to remove notify completely (that's why it's not spec'ed out). The current thinking is that we will have the user/canister notify the cycles minting canister about the transfer. The notify method will still be available (but not documented/added to the Candid interface) until we make this change. I can't give an estimate for that, though.

1 Like

My mistake, you are right, the sub-account feature is great and will work great. Thanks for those specifics and for the solution.

2 Likes

A couple high-level questions:

  • Can you explain why a user/canister would want to notify the cycles minting canister about a transfer? Why do cycles come into play when transferring ICP between accounts?
  • Just to confirm: if notify is removed, how would a recipient canister learn about an incoming transfer? In Ethereum, ether-receiving contracts typically implement a receive function, which is automatically executed. Will something like that be available?
  • Why does the ledger canister even have a hashed blockchain of transactions? Is it for additional security? I’m not sure why the underlying security guarantees of the IC aren’t enough here.

A couple low-level questions about the interface leaked by @kpeacock:

  • Why is the fee a field in the TransferArgs instead of being an implementation detail?
  • Is Address the same as AccountIdentifier inside ledger.did? Why not make SubAccount a field inside one of those instead of making it an opaque blob?

Thanks! The ICP ledger canister interface is important because it also serves as inspiration for token canister interfaces. I hope they can learn from one another.

1 Like
  • The cycles minting canister is not notified of all transfers: it is only notified of transfers of ICP that should be converted to cycles. That is: if I want to get cycles (to either create a canister or top up a canister), I make a ledger transfer to a specific account of the CMC and then ask the ledger to notify the CMC. The CMC burns the ICPs, mints cycles, and either creates a canister or tops up a canister.

  • The sender of the ICPs would have to call the recipient directly to let it know about the incoming transfer. Above I outlined two ways in which the recipient can confirm the transfer was “as it should be”.

  • As you hint, the guarantees are sufficient for all replicated execution but not for unreplicated execution. For example, Rosetta nodes fetch the blocks of the ledger via query calls. Instead of (somehow) verifying that each individual block they got was correct (e.g. by making repeated queries to different replicas) one can get all blocks of the ledger, verify that chaining is correct and get a certificate (digital signature) on the hash of the last block in the chain.

  • The intention of the fee field is to indicate a max fee the caller is willing to pay for the transfer – this is in a world where fees for transfers are dynamic (e.g. to throttle calls in case of overload). We're not there yet, but it is part of the thinking.

  • They’re related but not the same: the Address is the AccountIdentifier prepended with a CRC32 checksum.
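To make the max-fee idea concrete, here is a toy Python model of the semantics described above. The e8s unit (1 ICP = 10^8 e8s) and the flat 10_000 e8s transfer fee match the ledger's current conventions; the dynamic-fee check itself, and the error names, are the speculative part:

```python
LEDGER_FEE_E8S = 10_000  # current flat transfer fee (0.0001 ICP)

def apply_transfer(balance_e8s: int, amount_e8s: int, max_fee_e8s: int,
                   current_fee_e8s: int = LEDGER_FEE_E8S) -> int:
    """Illustrative max-fee semantics: the transfer succeeds only if the fee
    the caller declared covers whatever the ledger currently charges.
    Returns the sender's remaining balance."""
    if current_fee_e8s > max_fee_e8s:
        raise ValueError("BadFee: declared max fee below current ledger fee")
    if amount_e8s + current_fee_e8s > balance_e8s:
        raise ValueError("InsufficientFunds")
    return balance_e8s - amount_e8s - current_fee_e8s

print(apply_transfer(1_000_000, 500_000, 10_000))  # 490000
```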
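The Address/AccountIdentifier relationship can be illustrated directly. A sketch assuming a 4-byte big-endian CRC-32 prefix (the exact byte order is an assumption here, not taken from the thread):

```python
import zlib

def address_from_account_identifier(account_id: bytes) -> bytes:
    """Prepend a big-endian CRC-32 checksum of the account identifier,
    per the relationship described above."""
    return zlib.crc32(account_id).to_bytes(4, "big") + account_id

def account_identifier_from_address(address: bytes) -> bytes:
    """Verify and strip the 4-byte checksum, recovering the AccountIdentifier."""
    checksum, account_id = address[:4], address[4:]
    if zlib.crc32(account_id).to_bytes(4, "big") != checksum:
        raise ValueError("invalid address checksum")
    return account_id

acc = bytes(28)  # a dummy 28-byte account identifier
addr = address_from_account_identifier(acc)
print(account_identifier_from_address(addr) == acc)  # True
```

The checksum lets clients catch typos in a copied address before any transfer is attempted.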

3 Likes
  • The sender of the ICPs would have to call the recipient directly to let it know about the incoming transfer. Above I outlined two ways in which the recipient can confirm the transfer was “as it should be”.

What about the use case where a user or canister wants to pay for a service using ICP, and the service should be performed in the same atomic transaction as the ICP transfer? I think this is a common use case in Ethereum, hence the receive callback.

  • As you hint, the guarantees are sufficient for all replicated execution but not for unreplicated execution. For example, Rosetta nodes fetch the blocks of the ledger via query calls. Instead of (somehow) verifying that each individual block they got was correct (e.g. by making repeated queries to different replicas) one can get all blocks of the ledger, verify that chaining is correct and get a certificate (digital signature) on the hash of the last block in the chain.

Interesting, this seems really important. It means that token canisters will also need to implement some kind of hashed blockchain, or only support update (not query) calls for all functions, if they want maximum security.
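The verification described in the quoted answer (re-hash all fetched blocks, then compare against the certified tip) can be sketched as follows. A Python toy model, with SHA-256 chaining and a made-up all-zero genesis hash standing in for the ledger's actual block format:

```python
import hashlib

def block_hash(parent_hash: bytes, payload: bytes) -> bytes:
    # Each block commits to its parent's hash, chaining the ledger together.
    return hashlib.sha256(parent_hash + payload).digest()

def verify_chain(payloads, certified_tip_hash: bytes) -> bool:
    """Re-hash every block fetched via (untrusted) query calls and compare
    the final hash to the certified tip: one certificate on the tip
    implicitly authenticates all blocks."""
    h = b"\x00" * 32  # illustrative genesis parent hash
    for payload in payloads:
        h = block_hash(h, payload)
    return h == certified_tip_hash

# A Rosetta-style client fetches the payloads by query call and obtains the
# certified tip hash separately, then checks:
blocks = [b"tx1", b"tx2", b"tx3"]
tip = b"\x00" * 32
for b in blocks:
    tip = block_hash(tip, b)
print(verify_chain(blocks, tip))                      # True
print(verify_chain([b"tx1", b"evil", b"tx3"], tip))   # False
```

Any tampering with a fetched block changes every hash downstream of it, so the comparison at the tip fails.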

1 Like
  • It is difficult to ensure atomicity when multiple canisters are involved in a transaction: for example, one of the canisters involved, say the service, may have its input queue full, so calls may fail. Some retry logic would be needed, or some two-phase commit involving the ledger would have to be in place. We did have some discussions regarding two-phase commit protocols, but they are not at the top of the agenda right now.
    I should highlight that having the ledger as part of a multi-canister call tree (e.g. what we get now using notify) does not solve the problem: to keep complexity low and state manageable, we do not implement retry logic in the ledger for notify calls that failed, and it's up to the caller to “re-notify” the recipient.

  • Very good and important point! In general, any data that is exposed via unreplicated query calls should be considered untrusted. What we use for the ledger is just an instance of certified variables: the ledger is maintained as a hashed chain and we set the hash of the tip as a certified variable. The IC will then return (if asked) a certificate which can be used to verify that whatever was hashed is genuine, i.e. comes from the IC. One can imagine other hashing schemes (e.g. a Merkle tree balanced one way or another) depending on the application.
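As a sketch of the "other hashing schemes" remark, here is a toy balanced Merkle tree in Python whose root, rather than a chain tip, would be the value set as the certified variable (the duplicate-last-node padding is one common convention, chosen here for illustration):

```python
import hashlib

def merkle_root(leaves) -> bytes:
    """Compute the root of a balanced binary Merkle tree over the leaves.
    Odd levels are padded by duplicating the last node (illustrative choice)."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
```

Compared to a plain hash chain, a Merkle root allows short per-entry membership proofs, at the cost of a slightly more involved update path.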

6 Likes

I’m terribly sorry to disappoint and for having been too optimistic in my ETA. We hit roadblocks when testing the upgrade and will have to delay the proposal. I cannot give an ETA right now, we are treating it as a high priority but it may take a few weeks. I will keep you posted and give a progress update by the end of next week.

7 Likes

You are unbelievable, keep working. :smiley_cat:

1 Like


A canister that can't hold ICP is like an Ethereum smart contract that can't hold ETH. This is unreasonable.
If you don't finish it ASAP, the whole ecosystem will continue to be insulated from financial applications.

4 Likes

While the inability to hold ICP is a big disappointment, we should also recognize that the team is trying to do its best under the circumstances.

However, because this feature was deemed ready to go in a few days and is now being pushed back by several weeks, I would ask the obvious question.

How did we think that it was ready to go in a few days, and what made us decide that there was a fairly severe issue that could not be remediated in days but would take weeks?

This question is pretty important to answer from a community standpoint, so that we minimize this kind of possibility in the future, as well as provide surety to teams whose success absolutely depends on this feature.

I look forward to the update to be provided by @JensGroth this week, and hopefully it will include answers to the questions posed.

7 Likes