ICRC-1 Token Standard Ledger

Another potential flow could be this one, combining both approvals and notifications:
ICRC1-a

The benefits of it are:

  1. If the service canister is down, the transfer isn't immediately executed, so the client still owns the tokens and can revoke the approval.
  2. If the service canister can't process the payment due to errors, or the service being purchased is no longer available, e.g. an NFT bought by someone else, there is no need to handle refund logic; the service just denies the transfer.
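The two benefits above can be sketched as a small simulation. All names here (`Ledger`, `approve`, `transfer_from`, `notify`) are illustrative stand-ins, not the actual ICRC-1 API:

```python
# Minimal simulation of the approve + notify flow described above.
# Names are illustrative, not the real ICRC-1 interface.

class Ledger:
    def __init__(self):
        self.balances = {}
        self.allowances = {}  # (owner, spender) -> amount

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def revoke(self, owner, spender):
        self.allowances.pop((owner, spender), None)

    def transfer_from(self, spender, owner, to, amount):
        if self.allowances.get((owner, spender), 0) < amount:
            raise ValueError("insufficient allowance")
        if self.balances.get(owner, 0) < amount:
            raise ValueError("insufficient funds")
        self.allowances[(owner, spender)] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

class Service:
    def __init__(self, ledger, name):
        self.ledger = ledger
        self.name = name
        self.available = True  # e.g. an NFT that may get bought by someone else

    def notify(self, buyer, amount):
        if not self.available:
            return "denied"  # benefit 2: nothing moved, nothing to refund
        self.ledger.transfer_from(self.name, buyer, self.name, amount)
        return "paid"

ledger = Ledger()
ledger.balances["alice"] = 100
shop = Service(ledger, "shop")

# Benefit 1: the service is down, so alice just revokes; she never lost custody.
ledger.approve("alice", "shop", 40)
ledger.revoke("alice", "shop")
assert ledger.balances["alice"] == 100

# Benefit 2: the item is gone, so the service denies; no refund logic needed.
ledger.approve("alice", "shop", 40)
shop.available = False
assert shop.notify("alice", 40) == "denied"
assert ledger.balances["alice"] == 100
```

The key property: until `transfer_from` actually runs, the tokens never leave the client's account, so every failure mode resolves by simply not moving them.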

case: ONE and TWO are canisters
It turns out it's not that hard to do with the existing mechanics.
Note: when ONE does "initiate", it is an untrusted inter-canister call, so ONE isn't waiting for a response. Instead, the Receiver (trusted custodian) calls ONE later.

Now at that point ONE and TWO can call the Receiver and move their tokens from RCA1 and RCA2 to wherever they want.
Or we can do it before sending "success". For that to happen, both "notify" calls need to provide target account identifiers. (I will post that version next)
(Notice in all the sequence diagrams how useful account identifiers are; one canister owning a lot of them opens up a lot of possibilities.)


And there we have the whole solution.
Claim 1: everything DeFi needs is an atomic swap. Transfer functions are basically atomic swaps where one of the parties gets nothing.
Claim 2: that exact flow solves everything DeFi needs, and does it with the existing token flow the ICP Ledger has used for a long time. No changes needed.

Can anyone see problems with this?


actor "ONE (Initiator)" as ONE
actor TWO

participant ICP
participant BTC

participant "Receiver (Custodian)" as Receiver
ONE->ICP:transfer tokens to receiver controlled account (RCA1)
ICP-->ONE:transfer successful
ONE->Receiver:notify
Receiver->ICP:query balance of RCA1
ICP-->Receiver:balance
Receiver-->ONE:success
ONE->TWO:initiate

TWO->Receiver: query to get confirmation ONE has done their part of the deal
Receiver-->TWO: Yes, proceed

TWO->BTC:transfer tokens to receiver controlled account (RCA2)
BTC-->TWO:transfer successful
TWO->Receiver:notify
Receiver->BTC:query balance of RCA2
BTC-->Receiver:balance

note over Receiver: Swap internally the owners of RCA1 and RCA2. From now on swap is done
note over Receiver: From now on, if async calls fail, ONE and TWO can manually request their tokens

Receiver->ICP: transfer tokens to final account 1
ICP-->Receiver: success
Receiver->BTC: transfer tokens to final account 2 
BTC-->Receiver: success

Receiver->ONE: success
Receiver->TWO: success
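The Receiver's core logic from the diagram can be sketched as follows: each party funds its receiver-controlled account (RCA), and once both legs are confirmed, the ownership of the two RCAs is swapped in a single state update, which is the atomic step. Names and shapes are illustrative:

```python
# Sketch of the custodian logic from the sequence diagram above.
# The swap itself is one state mutation: both sides flip, or neither does.

class Receiver:
    def __init__(self):
        self.owner_of = {}  # rca -> principal currently allowed to withdraw
        self.funded = set()

    def notify(self, party, rca):
        # in the real flow this is where the Receiver queries the ledger
        # for the RCA balance before marking the leg as funded
        self.owner_of[rca] = party
        self.funded.add(rca)

    def try_swap(self, rca1, rca2):
        if {rca1, rca2} <= self.funded:
            # the atomic step: swap owners of the two accounts
            self.owner_of[rca1], self.owner_of[rca2] = (
                self.owner_of[rca2], self.owner_of[rca1])
            return True
        return False

    def refund(self, party, rca):
        # before the swap, a party can always take its own tokens back
        return self.owner_of.get(rca) == party

r = Receiver()
r.notify("ONE", "RCA1")
assert r.try_swap("RCA1", "RCA2") is False  # TWO hasn't funded yet
assert r.refund("ONE", "RCA1")              # ONE can still back out
r.notify("TWO", "RCA2")
assert r.try_swap("RCA1", "RCA2") is True
assert r.owner_of["RCA1"] == "TWO" and r.owner_of["RCA2"] == "ONE"
```

Everything after the swap (moving tokens to the final accounts) is best-effort cleanup; if those async calls fail, each party can withdraw from the RCA it now owns.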

Motoko FTW


If the service goes down after the first transfer, you'd lose the money; at the same time, the service also has to handle cases where the TWO party backs out of the deal, and refund ONE.


Thanks for challenging it.
I have shown the successful flow; if things start erroring, I would have to make 50-60 sequence diagrams to show how it works, but I am pretty sure there won't be a problem. The reason is: if the Receiver goes down (I am assuming this means the connection fell apart, not that the Receiver has gone malicious),
ONE can call the refund function and take its tokens back.
If TWO backs down, ONE gets a refund as well.
ONE has all its tokens in RCA1, and the Receiver can easily return everything.
RCA1 can be accountIdentifier.fromPrincipal(Receiver, ONE)
The Receiver doesn't have to store RCA1 → ONE pairs in memory,
because it can check whether accountIdentifier.fromPrincipal(Caller, request.subaccount) equals RCA1.
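This stateless refund check can be sketched like so. The hash construction below is illustrative, not the real AccountIdentifier derivation; the point is only that RCA1 is recomputable from (Receiver, caller), so no mapping has to be stored:

```python
import hashlib

# Sketch of the stateless refund check: RCA1 is derived deterministically
# from (Receiver, ONE), so the Receiver recomputes it from the caller
# instead of storing an RCA -> owner map. The hash scheme is illustrative.

def account_identifier(principal: str, subaccount: str) -> str:
    return hashlib.sha224(f"{principal}/{subaccount}".encode()).hexdigest()

RECEIVER = "receiver-canister"

def can_refund(caller: str, claimed_rca: str) -> bool:
    # recompute instead of looking up stored state
    return account_identifier(RECEIVER, caller) == claimed_rca

rca1 = account_identifier(RECEIVER, "one-principal")
assert can_refund("one-principal", rca1)   # only ONE can reclaim RCA1
assert not can_refund("mallory", rca1)     # anyone else fails the check
```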

Your challenge did not disprove the claim. (*rewrote it for easier readability)

I am assuming this means the connection fell apart, not that Receiver has gone malicious

It could be that it became malicious, ran out of funds, or became unresponsive due to errors. There is also the possibility that the refund logic is flawed.

Having to manually implement refund logic right now is a downside. Maybe in the future it will be abstracted away by libraries/services, but at the moment it has to be coded from scratch, which means you have to be extra careful because an error could mean losing people's money.

  • Malicious code: not a problem in our current domain. We can have ICP, BTC, and the Receiver be canisters controlled by the NNS if we can't put trust in anyone except it. You can have malicious code in all of them; you can have the BTC canister lying about sending BTC at the end. We can have a report system inside this which gathers cryptographic evidence in case of such problems.
  • Canisters running out of cycles is again a trust problem. Otherwise, anyone can put in an auto-refueling system.

Your challenges did not disprove the claim

The ICP and BTC canisters will be handled by the NNS and their code will 99.9% be safe and non-malicious, but you can't expect every service canister's code to be verified. The entire IC is under NNS control, but I doubt we can make a proposal and expect it to pass every time a couple of ICP are stuck in a canister. The effort required to verify what happened is too much, so the community most likely won't care.


Sure. In our mint governance dapp at the ORIGYN Governance Dashboard, to stake you 1. send OGY to the governance canister, 2. claim them, 3. stake them.

From a usability standpoint, if 1 fails due to a network issue, we can just try again. If 2 fails, then the user has stranded tokens on the server and we have to build some search code to look at the ledger and 're-find' the user's unburned transaction. If 3 fails, then we have deposited tokens but they aren't staked, and we have to expose a direct deposit function (now we have two ways of doing the same thing, which confuses users).
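One way to tame that 3-step flow is to journal the last completed step per user, so a plain retry resumes from where it failed instead of stranding tokens or needing a second deposit path. A minimal sketch, with hypothetical names:

```python
# Sketch: record per-user progress through send -> claim -> stake so a retry
# resumes rather than restarts. Step names and structure are hypothetical.

STEPS = ["sent", "claimed", "staked"]

class StakingFlow:
    def __init__(self):
        self.progress = {}  # user -> index of last completed step

    def run(self, user, fail_at=None):
        start = self.progress.get(user, -1) + 1
        for i in range(start, len(STEPS)):
            if fail_at == i:  # simulate a network failure at step i
                return STEPS[i - 1] if i else None  # interrupted
            self.progress[user] = i
        return STEPS[self.progress[user]]

flow = StakingFlow()
# first attempt dies between claim and stake (step 3 in the post's numbering)
assert flow.run("alice", fail_at=2) == "claimed"
# a plain retry resumes from the journaled progress; no special-case code
assert flow.run("alice") == "staked"
```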

When we have a commit_approve this will be a bit easier because the tokens won’t actually move until we’re ready for them.

It is true that you can’t trust canisters easily. They have to be owned by NNS or another reputable DAO. That is the plan and what DAOs do, decentralize governance to gain trust.

So the first way where each developer deploys their own canisters looks like this:

The problem you describe is real; it just requires a bigger change I am not sure everyone will be up for. The solution above uses canisters anyone can deploy (*1). Devs can also add custom code to these Ledger canisters, which I currently don't know whether it's beneficial or not compared to a solution where Ledger canisters are all the same and customization is done with configuration. So far I haven't found something that can't be done.

Now it may seem like we were plotting towards this moment, when I am going to advertise :slight_smile: But the other solution is what the (open-source) Anvil infrastructure is actually doing, and I have been working on it for 9 months now. (I started with NFTs because they are less dangerous to start with, but it also has FTs.) There is a gate controlled by a DAO: all tokens are created and configured through it, and the DAO makes sure everything is executed as planned; even the tokens' creators can't mess with some token properties. This DAO could be the NNS, which could result in everyone adopting it. (Note, this does not mean there can be only one of these; there can be more NNS-controlled gates/mints.) Or if the NNS doesn't want to take Anvil from me and govern it, it could be our own ANV token, but adoption will be local. Or every ecosystem can have one of these multi-token mints, because they are not sure one will work for all, because of (*1).
Personally, I prefer working, finishing it, and handing the keys to the kingdom to the NNS for a reward, rather than getting VC funding, selling a new token, and the IC having another token we have to decentralize, rooted in the foundation of an ecosystem. That would result in many ecosystems splitting developers. Developers working to exit to the NNS should be hundreds of times cheaper and more secure for the IC ecosystem and ICP token-holders.

The DAO-controlled multi-token mints look like this. (This uses the current Anvil multi-canister, multi-subnet token, which solves the issue where one subnet is loaded, and the 750-concurrent-users threshold for single-canister tokens you mentioned on Discord. But the multi-canister token is a whole other thing down the rabbit hole; the diagram will look similar with single-canister tokens.)


We discussed doing notify with a one-shot call that would be secure. I think it was moved away from because of complexity, but I think it should be an extension. Maybe I'll work up the notify extension and see if we can't get it included.


We postponed notify just because a change at the replica level, either one-shot calls or named callbacks, is needed in order to properly support notify. We will definitely talk about it in the future as an extension.


Thanks!

I recall that we had similar issues in the NNS Dapp and CMC.

I think the argument here is that a complex multi-step action is much more likely to succeed if it runs on the IC and not on the client. So requiring the client to make a transfer followed by a notification is riskier than requiring the client to approve and then notify.

The approve/transferFrom flow is probably not the only way to address the unreliability of the client. I wonder if the notify flow (which we agreed we want to have as an extension once named callbacks are available) can address the issue as well: in this case, a single transferAndNotify call would make the transfer and notify the governance, so even if the client fails, the governance won't have trouble learning about the transaction.
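The idea can be sketched as follows: the ledger itself delivers the notification after the transfer, so a flaky client can't leave the service unaware of the payment. All names and shapes here are hypothetical:

```python
# Sketch of the single-call transferAndNotify idea from the paragraph above.
# The ledger, not the client, notifies the receiving service.

class GovernanceService:
    def __init__(self):
        self.seen = []

    def on_notify(self, sender, amount):
        # the service learns about the payment even if the client dies
        self.seen.append((sender, amount))

class Ledger:
    def __init__(self):
        self.balances = {"alice": 100}

    def transfer_and_notify(self, sender, service, service_account, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[service_account] = (
            self.balances.get(service_account, 0) + amount)
        # transfer and notification happen in one call to the ledger
        service.on_notify(sender, amount)

svc = GovernanceService()
led = Ledger()
led.transfer_and_notify("alice", svc, "governance", 30)
assert led.balances["alice"] == 70
assert svc.seen == [("alice", 30)]
```

The real version needs replica support (one-shot calls or named callbacks) so the ledger can notify without holding an open call context; that is exactly the dependency discussed above.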

Not sure they can be called extensions; to me they look like completely new standards, considering blackholed ledger and service canisters using the first version won't be able to use the new features.
If we keep adding features to a standard that make the old ones obsolete, in the long run it will add needless complexity for services that need to support all payment flows.

Something like this?


I have had "hanging" tokens in between calls too (because of some interruption). Maybe it was something like your problem. So far I have solved it by saving an id (sometimes not needed) in browser localStorage. Then a function (run on browser focus) checks whether there are such unresolved ids and resumes the requests. That works so far, but it looks like a hack.
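The hack described above looks roughly like this, with a plain dict standing in for the browser's localStorage. A pending request id is journaled before the call and cleared after it resolves, so a handler running on browser focus can find and resume interrupted requests:

```python
# Sketch of the localStorage pending-id journal. The dict is a stand-in
# for window.localStorage; the request ids and key scheme are illustrative.

local_storage = {}  # stand-in for window.localStorage

def start_request(req_id):
    local_storage[f"pending:{req_id}"] = "in-flight"

def finish_request(req_id):
    local_storage.pop(f"pending:{req_id}", None)

def on_focus(retry):
    """Resume every journaled request; clear the ones that succeed."""
    for key in list(local_storage):
        if key.startswith("pending:"):
            req_id = key.split(":", 1)[1]
            if retry(req_id):
                finish_request(req_id)

start_request("tx-42")  # ...and the tab crashes before finish_request runs

resumed = []
on_focus(lambda rid: resumed.append(rid) or True)  # retry always succeeds here
assert resumed == ["tx-42"]
assert "pending:tx-42" not in local_storage
```

It works, but the journal lives in one browser profile, which is exactly why moving this state into a canister (as proposed below in the thread) is attractive.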

So let us try to fix that. The default way of doing things (at least what first comes to mind) is that when a user authenticates in a dapp with Internet Identity, the dapp is given a public key and a PrincipalId.
From there we have been directly transforming the PrincipalId + subaccount into an AccountIdentifier,
then directly calling the Ledger.
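For reference, the PrincipalId + subaccount → AccountIdentifier step on the ICP ledger is, as I understand it, a domain-separated SHA-224 hash with a CRC32 checksum prepended: `CRC32(h) || h` where `h = SHA-224("\x0Aaccount-id" || principal || 32-byte subaccount)`. A sketch (the raw principal bytes below are a placeholder, not real principal encoding):

```python
import hashlib
import zlib

# Sketch of AccountIdentifier derivation as used by the ICP ledger:
# checksum || SHA-224(domain separator || principal || subaccount).
# The principal bytes here are placeholders, not real principal encoding.

def account_identifier(principal: bytes, subaccount: bytes = b"") -> str:
    sub = subaccount.rjust(32, b"\x00")  # subaccounts are 32 bytes, zero-padded
    h = hashlib.sha224(b"\x0Aaccount-id" + principal + sub).digest()
    return (zlib.crc32(h).to_bytes(4, "big") + h).hex()

default_acc = account_identifier(b"example-principal")
sub_acc = account_identifier(b"example-principal", b"\x01")
assert default_acc != sub_acc  # each subaccount is a distinct account
assert len(bytes.fromhex(default_acc)) == 32  # 4-byte checksum + 28-byte hash
```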

I have been thinking for a while: what if we don't do it directly, but put a canister owned by the dapp in the middle, called a Wallet? Then the AccountIdentifier we show in the dapp (holding the user's personal tokens) is actually
AccountIdentifier.fromPrincipal(Wallet, UserPrincipal)
and not
AccountIdentifier.fromPrincipal(UserPrincipal, null)

Now that a canister is holding it, communication can perhaps be more secure: we can save things into memory and resume if we need to, and the Wallet can be called by other canisters, while a browser can't. It can proxy all calls or only selected ones.
So in the diagram below, ONE is a canister, the Wallet. TWO is, let's say, a staking canister.
The idea is that we are swapping ICP for ICP_LP (staked or in a liquidity pool).

With an on-chain wallet, a good bit of this can be automated away from the user. :slight_smile: It is on the roadmap for Origyn, and ICDevs has a bounty out on it. Hopefully we'll have a starter on-chain wallet soon.

The model I had in mind is XMPP: it has a core messaging part and hundreds of extensions (XEPs). You can send messages around securely without any extensions, so it’s not terribly hard to implement a client (I used to use a client implemented in Emacs lisp), but extensions provide a lot of quality-of-life improvements to the protocol.

The extensions won’t supersede existing functions, they will extend the capabilities. Some extensions I have in mind:

  1. transferAndNotify and notify. They don’t make transfer obsolete in any way.
  2. Specification for standard block encoding and certification. This is crucial for implementing a Rosetta node.
  3. Specification for executing transactions pre-signed by the sender, the flow that Jordan proposed in one of the working group meetings.

Not all ledgers need to implement all the features. Services like DEXes can inspect the capabilities of a ledger and use a more efficient flow if it’s supported or fall back to the core API otherwise.
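The capability-inspection idea can be sketched as follows. The method name mirrors ICRC-1's `icrc1_supported_standards` endpoint, but the shapes and extension names here are illustrative:

```python
# Sketch of capability negotiation: a service asks the ledger which
# extensions it supports and picks the best flow, falling back to the
# core API otherwise. Extension names are illustrative.

class BasicLedger:
    def supported_standards(self):
        return ["ICRC-1"]

class NotifyLedger(BasicLedger):
    def supported_standards(self):
        return ["ICRC-1", "transfer-and-notify"]

def choose_flow(ledger):
    caps = ledger.supported_standards()
    if "transfer-and-notify" in caps:
        return "transferAndNotify"   # more efficient single-call flow
    return "transfer+client-notify"  # core fallback every ledger supports

assert choose_flow(BasicLedger()) == "transfer+client-notify"
assert choose_flow(NotifyLedger()) == "transferAndNotify"
```

This is the XMPP/XEP pattern in miniature: the core works everywhere, and extensions are discovered at runtime rather than assumed.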

I firmly believe that it's far more important to have an evolution path than a perfect artifact. The IC is going to evolve; the core teams will ship exciting new features that can change the way we think about canister communication. No matter what we standardize now, there will be better ways of solving problems.

Side note on blackholed canisters (that’s my opinion, not necessarily reflecting the views of my employer): I think the concept of immutability is overrated:

  1. All interesting software is buggy; I can’t imagine how we could trust a system that cannot be fixed if a major flaw is discovered.

  2. In a Turing-complete environment, immutability doesn’t mean much. Code is data. Are you sure the canister doesn’t have a little stack machine interpreter somewhere and a hidden endpoint that allows the authors to upload a script? Or maybe it has a switch that forces the canister to forward all calls to another (not blackholed) canister?
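A toy illustration of point 2: the program below never changes, yet its behavior does, because it embeds a tiny interpreter and an upload endpoint. Everything here is deliberately contrived:

```python
# Toy example of "code is data": an 'immutable' canister whose code is
# fixed but whose behavior the authors can still change via a hidden
# script-upload endpoint. eval is used purely for illustration; never
# do this in real code.

class BlackholedCanister:
    def __init__(self):
        self._script = "0"  # hidden, author-updatable "data"

    def upload(self, script):
        # the hidden endpoint: nothing about the canister's code changes
        self._script = script

    def handle(self, x):
        # a tiny interpreter evaluating the stored script against x
        return eval(self._script, {"x": x})

c = BlackholedCanister()
assert c.handle(5) == 0      # original behavior
c.upload("x * 2")            # behavior changes; the code did not
assert c.handle(5) == 10
```

So "the Wasm module hash never changed" does not, by itself, prove the canister's behavior is fixed; auditing what the code can do matters more than whether it can be upgraded.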

Yes, something like that (I assume the “success” after notify should go to the ledger, not to the client).


Which means the more flows we add, the more code DEXes and services have to write, test, and maintain.

True, but a ledger isn't that complex. Software running on airplanes is formally verified; we could do the same with the ledger code and provide everyone with a bug-free implementation.

Verified builds can be used for open-source code. Immutability is a must for a ledger; without it there is no point in using a blockchain. I know DAOs can be used, but it's not easy to make sure the allocations and distribution are done correctly; it requires effort on the devs' end and trust from the user, which is something we are trying to take out of the equation.
If you ask me, one of the IC's biggest flaws right now is precisely the NNS, due to how tokens were distributed; even big entities with hundreds of experts can make mistakes. With all the recent events surrounding Luna and Solend, I'm more skeptical about DAO governance than ever, especially when there is no Proof of Personhood.

Actually both the ledger and the user, now that I think about it: the user needs a confirmation to know the purchase went through, and the ledger needs it so it doesn't keep waiting for a response.

I know it's very tempting to fix an API once and for all, but there are very few successful examples of this approach. The most efficient and convenient APIs are the ones that evolve with the industry. The core of the C standard library is the same as it was 30 years ago; it's also one of the worst examples of API design I'm aware of.

I doubt that all of it is verified; most likely, it's merely certified by authorities. Maybe the OS kernel is formally verified if they use the seL4 kernel, which is only a few thousand lines of C code, yet required hundreds of thousands of lines of spec and engineer-decades of development effort.

As a person who patched subtle security holes in a ledger holding millions in assets, I politely disagree.

To verify a build, you need to inspect all the source code, the source code of all dependencies, the source code of the build, and the source code of all the build dependencies (what if the last step of the build script is “throw away the build artifact and replace it with another binary”?). Nothing stops you from reviewing the full JS code of applications you use on the web, but I don’t know anyone who does that.

I agree it's not easy, but at least we should try to do our best with the knowledge we have at the time; we already know the current standard will be made obsolete by new tech.

Management said there are currently no resources to work on named callbacks and it'd take too much; it would have been nice to know a rough estimate and maybe reconsider some other features to prioritize named callbacks over. Tokenization is one of the key components of Web3, and as the name suggests, a token standard is the foundation of it; as proof, both the SNS and ckBTC are blocked by this pillar. Considering its importance, it would seem reasonable to make it the absolute priority. Dfinity should focus as much as possible on the protocol instead of the application layer; ckBTC and the SNS could have been worked on by the community, and the time spent on them could have been used elsewhere.

Now my question is: is it worth creating future debt for services and headaches for ledger devs for a standard that seems to lack community support? At the very least, Dfinity should get a concrete understanding of who will and who won't support it, and then evaluate whether it's really worth releasing the standard as is.

I stand corrected on my claim about ledger complexity then, but I’m still firmly convinced a ledger should ideally be as immutable as possible.