Thoughts on the token standard

type Receiver = {
  #account : AccountIdentifier;
  #principal : Principal;
  #canister : {
    principal : Principal;
    before_transfer_callback : Callback -> bool;
    after_transfer_callback : Callback;
  };
};

A token transfer can invoke before_transfer_callback: if it returns true, the transfer executes; if it returns false, the transfer stops.
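
For illustration, here is a minimal Motoko sketch of how a transfer might drive these callbacks. The TransferArgs shape, the callback signatures, and the helper itself are assumptions, not part of any agreed standard:

// Sketch only: all names below are assumptions for illustration.
type TransferArgs = { from : Principal; to : Principal; amount : Nat };
type BeforeCallback = shared TransferArgs -> async Bool;
type AfterCallback = shared TransferArgs -> async ();

// Assumed to live inside the token actor:
func transferToCanister(before : BeforeCallback, after : AfterCallback, args : TransferArgs) : async Bool {
  // Ask the receiving canister whether it accepts the transfer.
  if (not (await before(args))) {
    return false; // callback returned false: stop the transfer
  };
  // ... apply the balance changes here ...
  await after(args); // tell the receiver the transfer happened
  true
};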

I really like your idea of using pub/sub instead of approvals. I’ve always found approvals somewhat confusing and not really secure (you cannot easily check how many approvals were given). However, there is an issue with the pub/sub model that needs to be addressed before we can use it.

Imagine a malicious actor who would like to exploit the pub/sub model. It is possible to write a canister that creates circular pub/sub calls, similar to a re-entrancy attack. I think we need to establish an additional authorization step for public pub/sub, so that it is possible to control which contracts are actually subscribed to notifications.

Such control is also required if we want any control over our canister’s cycle usage: the cost of a publish is directly proportional to the number of subscribers, so more subscribers means higher execution cost.
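
A hedged Motoko sketch of what such an authorization step could look like (the allow-list policy and every name here are assumptions):

import Array "mo:base/Array";

// Sketch only: the allow-list policy and all names are assumptions.
actor Token {
  stable var authorized : [Principal] = []; // maintained by the token's controllers
  stable var subscribers : [Principal] = [];

  public shared(msg) func subscribe() : async Bool {
    // Reject callers that were not explicitly authorized, so the subscriber
    // list (and therefore the cost of every publish) stays under control.
    let ok = Array.find<Principal>(authorized, func(p : Principal) : Bool { p == msg.caller }) != null;
    if (ok) { subscribers := Array.append<Principal>(subscribers, [msg.caller]) };
    ok
  };
}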

1 Like

Before we set token standards, I think we should come to agreement on a cycles transfer standard first. I’ve started working on such a proposal:

Internet Computer Cycles Common Initiative (github.com)

The common cycles interface proposal by quintolet · Pull Request #1

1 Like

Hey folks,

This thread has been a great source of inspiration and I’ve learned a lot following your code and examples. Clearly there’s a lot of community effort going on… many different wallet implementations, many different tokens coming out, NFTs, etc. It’s super exciting to see.

I think the discussion of a standard remains very important, so I decided to start working on a draft post which I hope to distribute more widely. But before that I wanted to share it with this forum and get some feedback and thoughts. The way I see it, a token standard belongs to the community - the Foundation’s role is to help facilitate the conversation, and that’s what the post hopes to encourage. Let me know if I’ve left something out, if things need more clarification, or if I should try to dive deeper on any specific points.

The IC Token Standard Discussion

The Internet Computer developer community has been having ongoing discussions about defining a token standard fit for purpose for the IC blockchain. While other blockchain ecosystems have demonstrated a clear product/market fit for tokens, the IC provides a new paradigm for blockchain computation, and as such there is a strong desire to build a native token standard that can in time scale to the demands of millions of users.

This document will attempt to catalog some of the existing discussions around a standard, highlight key considerations, and generally serve as a resource for the early reference implementations of tokens on the Internet Computer.

All credit is due to the members of the IC developer community. While it would be too difficult to exhaustively list everyone’s contributions, we attempt to recognize all of those who have contributed to the conversation. Special thanks to: senior.joinu, ICVF, hackape, stephenandrews, harrison, geokos, Hazel, dostro, paulyoung, dmd, skilesare, claudio, jzxchiang, Jessica, wang, Ori, flyq, PaulLiu, witter, stopak, quinto, …

Introduction

What is a Token?

A token is a type of digital asset that is native to a blockchain ledger. ICP is an example of a token, and serves as a utility token for the Internet Computer blockchain.

Given that blockchains provide general purpose execution environments, developers use tokens as a foundational building block for building their decentralized applications — not only for bootstrapping funding, but also for community engagement and decentralized control of the project.

Why a Standard?

The topic of token standards has a storied history going all the way back to the days of colored coins on Bitcoin. With the advent of Ethereum smart contracts, the token standard discussion really gained prominence because for the first time a general purpose scripting environment was available to developers.

Arguably the most successful token standard is known as ERC-20, which was the initial catalyst for broad token interoperability on the Ethereum blockchain. The standard defines an interface for basic functionality to transfer tokens, as well as standard token metadata such as balances, token name, symbol, etc.

Tokens are generally considered to be primitives for a blockchain’s community. They act as coordinating mechanisms for projects, and enable many add-on ecosystem services such as decentralized exchanges, lending platforms, marketplaces, launchpads, DAOs, and so forth. A token standard interface allows any token that implements the interface to be re-used by other tools and platforms such as exchanges and wallets.

Design Considerations

PubSub

Most popular token implementations that pre-date the Internet Computer were generally designed for single-threaded execution environments such as the Ethereum Virtual Machine. Given the sequential nature of such blockchains, these tokens expose rather simplistic interfaces and rely on the blockchain’s native consensus mechanism (generally Proof of Work) to order transactions, execute state transitions, and produce blocks.

Because the Internet Computer provides a truly distributed compute environment as compared to other blockchain systems, developers can find more expressive ways to develop their software architectures. PubSub (or “notifications,” “subscriptions,” “topics,” or “events”) is a common pattern that reduces complexity and creates code that is simpler and easier to extend.

Forum user senior.joinu provides an example PubSub interface (here named subscribe):

fn subscribe(
  from: Option<Option<Principal>>,
  to: Option<Option<Principal>>,
  callback: String
);

The semantics of this style of subscription are that it adds a callback to the subscription list. From then on, whenever there is a transfer from → to, the callback method is called asynchronously, without awaiting.
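
In Motoko, “called asynchronously without awaiting” maps naturally onto one-way shared functions (shared functions returning () rather than async ()). A sketch, with all names assumed:

// Sketch: `on_transfer` is an assumed one-way hook exposed by each subscriber.
type Subscriber = actor {
  // One-way shared function (returns (), not async ()): the publisher does not
  // wait for a reply, so a slow subscriber cannot block the transfer.
  on_transfer : shared (from : Principal, to : Principal, amount : Nat) -> ();
};

// Assumed to run inside the token actor right after a successful transfer.
func publish(subscribers : [Subscriber], from : Principal, to : Principal, amount : Nat) {
  for (s in subscribers.vals()) {
    s.on_transfer(from, to, amount); // fire-and-forget: no await
  };
};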

Whether PubSub should be considered a standard way of implementing tokens on the Internet Computer is currently under discussion, and if so, what the best way to execute on this pattern is (naming conventions, extensibility, etc.). It is worth noting that the ICP ledger canister implements some methods in the PubSub pattern.

Atomicity

The canister messaging model creates important differences compared to the Ethereum/EVM messaging model when it comes to the atomicity of transactions.

In the EVM (and other similar blockchains), if there is an exception when processing a transaction, the entire call stack of that transaction is rolled back. That is to say, when an Ethereum transaction is being processed, it has a global lock on the entire state of the EVM and can call in to other smart contracts in order to execute some complex logic. If the transaction is successful, then state transitions are applied to all contracts that were invoked. If the transaction is unsuccessful, then the entire call stack is rolled back as if the transaction had never happened in the first place.

Within the canister messaging model, no such rollback guarantees are provided. If an exception occurs within a canister, it only rolls back that canister’s state, not the entire state of the environment.

This difference means that token implementations on the Internet Computer need to consider designing for atomic, cross-canister transactions within the application logic, whereas token implementations on Ethereum get this atomic behavior “for free” (which is why such use cases as Flash Loans are popular within the Ethereum ecosystem).
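
To make the difference concrete, here is a hedged Motoko sketch (the token interface, the placeholder canister id, and the helper stubs are all assumptions). On the EVM, a revert at the end would undo the inner transfer; on the IC it does not:

import Principal "mo:base/Principal";

// Sketch (all names assumed): a marketplace canister paying a seller.
actor Marketplace {
  let token = actor "aaaaa-aa" : actor { // placeholder id for some token canister
    transfer : shared (to : Principal, amount : Nat) -> async Bool;
  };

  func sellerOf(item : Nat) : Principal = Principal.fromText("aaaaa-aa"); // stub
  func deliver(item : Nat, to : Principal) {};                            // stub

  public shared(msg) func buyItem(item : Nat, price : Nat) : async () {
    // When this await resumes, the token canister has already committed its state.
    let paid = await token.transfer(sellerOf(item), price);
    // If anything below traps, only THIS canister's state rolls back;
    // the transfer above is NOT undone (unlike an EVM revert).
    assert paid;
    deliver(item, msg.caller);
  };
}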

Any token standard on the Internet Computer should leave room to enable cross-canister transactions.

Extensibility

In their simplest form, tokens are used for value transfer: sending the value of a digital asset from A to B. But tokens can also have much richer functionality. Indeed, functionality that is taken for granted elsewhere may need to be built explicitly here. For instance, developers on the Internet Computer who want to maintain the entire ledger of a given token’s transaction history need to implement this functionality themselves, whereas developers on other blockchains such as Ethereum simply get it for free as part of the platform’s native functionality.

From a standards perspective, it would be useful to agree on what the most basic form of a token API would look like, and what sorts of extension mechanisms could be implemented on top of that API. Some potential token extensions may be (h/t to Hazel & Toniq Labs):

  • Burning
  • History
  • Allowances
  • Batching
  • Extended Metadata
  • Fees
  • Etc.

The fact that a token can be extended (and upgraded!) is a compelling and novel addition to the blockchain landscape that is not easily replicable on other blockchain environments.

Scalability

In other blockchain ecosystems, tokens are as scalable as their underlying systems. Users who are willing to pay larger gas fees are generally given priority in their transactions (edge cases involving concepts such as Miner Extractable Value are outside of the scope of this document). As the token’s transaction history expands, the underlying blockchain hosts the token state at no additional fee to the users, but at the cost of expanding the global state of the system and therefore limiting scalability (proposals regarding State Fees are outside the scope of this document) of the system.

Canisters in the Internet Computer are given no such “free lunch.” If canisters need to maintain their entire transaction history, then it is the responsibility of the application to implement that functionality. In the case of tokens, this can be achieved by implementing the functionality directly into the canister logic or via extensions, as previously discussed.

The design of the ICP ledger canister includes a mechanism for scaling transaction ledger storage beyond the limit of a single canister. The mechanism is implemented by maintaining many archival node canisters. As the current “tip” of the ledger approaches the limit of a single canister, a new canister is created as the tip and the existing canister is added to the collection of archives.
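
A hedged Motoko sketch of that spill pattern (the names, the Tx shape, and the creation step are assumptions; the actual ICP ledger mechanism differs in its details):

import Buffer "mo:base/Buffer";

// Sketch only; assumed to live inside the ledger canister.
type Tx = { from : Text; to : Text; amount : Nat };
type Archive = actor { store : shared [Tx] -> async () };

func spillIfFull(
  tip : Buffer.Buffer<Tx>,             // the live "tip" of the ledger
  archives : Buffer.Buffer<Archive>,   // full, frozen archive canisters
  createArchive : () -> async Archive, // assumed helper that spawns a fresh archive
  max : Nat
) : async () {
  if (tip.size() >= max) {
    let archive = await createArchive();
    await archive.store(Buffer.toArray(tip)); // hand the full tip to the archive
    archives.add(archive);
    tip.clear(); // start a fresh tip
  };
};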

Immutability (Read-Only Code)

Because canisters can be upgraded over time, there exists a possibility that the API of a canister, as well as its underlying implementation, may change at any time. This is different from other blockchain environments, where contracts are immutable upon deployment and upgrade paths require re-deployment and migration of state.

As a result, there exists the potential for malicious token implementers to deploy a canister that seems benign at first but then upgrade the canister at some later date to some implementation that is not to the end users’ expectations, or which manages to steal funds or do other harm to consumers of the canister.

This is an edge case unique to the Internet Computer, and as such a token standard on the IC will require mitigation to prevent malicious token implementers from changing APIs in a manner that causes harm to consumers of the token.

One potential workaround is to imagine the existence of an open internet service that provides a “verified source” type of functionality for canisters. The well-known blockchain explorer Etherscan has a “contracts with verified source code” feature that is an example of this pattern. If such an open internet service existed, then token contracts could be certified by it, and any token that changed its interface would need to recertify or risk being delisted from the “accepted” use registry. Alternatively, token contracts may adopt tools like blackhole in order to make the canister public and immutable.

In any event, it is worthwhile for the community to have a robust conversation about this feature of the Internet Computer and how it differs from other blockchain environments.

Rosetta API Integration

Rosetta is an open standard designed to simplify blockchain deployment and interaction. Many exchanges around the world, such as Coinbase, use the Rosetta API and expect blockchain projects to implement the API as a part of an integration and onboarding process.

The design of the ICP ledger canister implements the Rosetta API, and so there already exists a token canister with Rosetta API integration in the Internet Computer ecosystem.

It is worth noting that simply implementing the Rosetta API does not guarantee that a token will be listed on any given centralized exchange platform. However, having a well tested, off-the-shelf implementation of the Rosetta API for token canisters may be a boon for the token ecosystem, as tokens may be more readily supported by third-party tools and platforms.

Other Considerations

Rust vs Motoko (vs other Wasm-compatible languages)

The two primary programming languages for development on the Internet Computer (as of Summer 2021) are Motoko and Rust. While Motoko has been developed specifically for the Internet Computer, Rust is also a popular choice due to its robust community and extensive collection of libraries.

In an ideal world, the two programming languages would offer near parity in terms of performance characteristics, and choice would ultimately come down to developer preference. In practicality, there may be tradeoffs between the two languages. Further benchmarking may be required to fully understand the performance characteristics of tokens implemented with various languages.

Principal Identifiers vs Ledger Account Identifiers

Unfortunately, the ICP ledger canister uses a different cryptographic scheme for its account ids than the Internet Computer proper uses for its principal identifiers. The reason for these two different schemes is mostly historic (the keys that were used in the seed round were secp256k1 keys — as a result, they needed to be supported by the ledger canister at genesis).

Through the development phases of the Internet Computer, it was decided that Ed25519 would be used as the main signature scheme for the IC; this made sense as an isolated decision but unfortunately created the current conflict.

There is no clear way to unify these schemes in the near future: the roadmap for doing so involves many components, and there simply isn’t enough bandwidth on the Foundation’s roadmap to prioritize such an effort.

Since most industry standards follow secp256k1 (including hardware wallets), perhaps it is a vote in favor of moving toward that direction for canister development in general.

Security Considerations

This section remains a big to-do. An exhaustive list of security considerations for IC style tokens needs further exploration from the community and from security experts. A few high level topics to consider include:

  • Re-entrancy
  • Double spend
  • Canister upgrade rug pulls

Appendix I - Existing Implementations

Appendix II - Existing Proposals

Appendix III - Existing Forum Discussions

16 Likes

Thanks for putting this together, really great stuff! Here are my thoughts on this:

PubSub: Events is the standard nomenclature from other blockchains, so it is likely the ideal nomenclature for this. Subscriptions might be used for streaming payments for apps in the future, and the mixup might get confusing. Notifications and topics require educating people on the differences with the Internet Computer.

Atomicity: We’ll need a standard two-phase commit structure for tokens.

Extensibility: EXT-token is a great starting point, maybe we should all focus our efforts on standardising and auditing this.

Security: My biggest concerns that would need to be solved before deploying high value tokens are

  • Immutable token canisters. We require a ‘blackhole’ on the issuance/balances canister to be verified, though perhaps token extensions can remain upgradeable.

  • A 2-phase commit scheme seems required for cross-canister transfers, and it likely also requires a threshold of canisters to approve before releasing. Maybe it could be configurable, similar to how Bitcoin exchanges configure how many confirmations are needed before approving withdrawals.

  • MEV is a bit of a black box for me, as data center and node operations are private. It seems this would be much easier to do here than on traditional blockchains, since there would be essentially no cost for data centers to do so.

  • Canister balance attack. How do we prevent tokens from being frozen by spam (intentional or not) on the balance canister or some other critical canister?

I’ll update this post as other thoughts come to mind on it.

4 Likes

Great discussion. Here are my thoughts on this:

Rules of Token Standard Design

ERC20 was the first token standard in the blockchain world, and it has been fully verified and recognized. Therefore, when designing the [Dfinity Fungible Token Standard], it is necessary to refer to the existing ERC20 standard.

At the same time, the formulation of [Dfinity Fungible Token Standard] should meet the following goals:

  1. Improving the ERC20 standard
  2. Being suitable for Dfinity

Improve ERC20 standard

How to improve ERC20

ERC20 was created in the early days of Ethereum. As the ETH ecosystem developed, developers found that ERC20 was not perfect, so they designed the ERC223/ERC677/ERC777 standards in an attempt to improve it. We will refer to these standards and develop one that combines their advantages.

  1. ERC223 tries to solve the problem that if the recipient of an ERC20 transfer is a contract that does not support receiving the token, the transferred tokens are lost (similar to sending tokens to a black-hole address). Solution details: fallback processing; the recipient (contract) must implement the tokenFallback method so the sender can determine whether the recipient supports the token
  2. ERC677 adds transferAndCall, which executes a transfer and a call at the same time, solving a problem similar to ERC223’s
  3. ERC777 uses send as the standard sending method, controls whether a transfer is accepted through hook functions, and uses operator authorization to replace the approve method as the delegated-transfer solution

With reference to the above standards, we have the following considerations:

  1. ERC677 and ERC223 solve similar problems, so only one of them needs to be kept
  2. ERC777 send vs. ERC20 transfer: both realize the transfer, so which should be kept? ERC20 transfer contains no logic besides the transfer itself; ERC777 send performs the transfer and, in addition:
  • during the transfer, if the sender implements the tokensToSend hook function, it is called before the transfer to accept or reject it
  • during the transfer, if the receiver implements the tokensReceived hook function, it is called after the transfer to accept or reject it

ERC777 implements capabilities that ERC20 does not have, allowing the sender/receiver to control whether a transfer is accepted, so the ERC777 send semantics seem more reasonable. Since ERC20 is more popular, the ERC777 scheme is adopted, but transfer is kept as the method name because it is easier for ERC20 users to accept.

The implementation of ERC777 relies on the ERC1820 registry contract to register the sender/receiver hook functions, so hooks can be registered whether the sender and receiver are ordinary addresses or contract addresses. (This topic is discussed again in the [Suitable for Dfinity] section below.)

  3. The receiver hook of ERC777 realizes a function similar to ERC677, so adopting ERC777 also covers ERC677’s functionality
  4. ERC777 operator authorization vs. the ERC20 approve scheme: ERC777’s operator scheme does not limit the authorized allowance, so its management granularity is coarser. ERC20 approve can cover the needs of ERC777’s authorization scheme while also bounding the allowance, so approve seems the more reasonable choice: it controls the credit available to each spender and achieves finer-grained management than ERC777
  5. ERC777 gives the token a default precision of 18 and supports setting a minimum step unit (granularity) for token amounts
  • supporting different precisions suits the needs of different scenarios, so keeping decimals seems the more reasonable choice
  • ERC777 reverts non-granular integer operations, which would increase the frequency of failed user calls, so this design is abandoned

Improved standards

Based on the above considerations, the improved draft standard is as follows:

service: {
  name: () -> (text) query;
  symbol: () -> (text) query;
  decimals: () -> (nat64) query;
  totalSupply: () -> (nat64) query;

  balanceOf: (owner: principal) -> (nat64) query;
  allowance: (owner: principal, spender: principal) -> (nat64) query;
  approve: (spender: principal, value: nat64) -> (bool);
  transferFrom: (sender: principal, receiver: principal, value: nat64) -> (bool);
  send: (receiver: principal, value: nat64, args: opt vec nat8) -> (bool);
}

Suitable for Dfinity

Problems to be solved

The design of the token standard should fully consider the differences between Dfinity and Ethereum, and clarify the problems to be solved:

  1. No atomicity similar to EVM cross-contract calls
  • Conclusion: the interface needs to be refactored
  2. No built-in EVENT support
  • Problem: historical content such as transaction records needs separate storage
  • Consideration: on the forum there are two ideas (PubSub/Notify). When a token is transferred, Notify informs the recipient, which can fill in for the missing EVENT. When the token recipient is not a canister and therefore cannot be notified, it is necessary to support querying transaction records. A token has no sufficient reason to implement PubSub just to satisfy third parties not involved in the actual operations
  • Conclusion: Notify is the better way; querying transaction history should be supported
  3. Built-in storage support, which can hold more data content
  • Problem: the current storage limit is 4 GB, which can store a lot of content cheaply, but storage expansion needs to be considered
  • Consideration: tx history should be stored separately to avoid the storage limit; built-in storage lets the token store more self-describing information
  • Conclusion: store transaction history separately; the token implements self-description
  4. Calling a contract does not require the caller to pay gas fees (the contract publisher provides the gas fees in the contract)
  • Problem: the cost of DDoS attacks that call the contract needs to be considered
  • Conclusion: charging logic should be designed into the token
  5. There are two different identities on Dfinity: Internet Identity (II for short) and Principal ID
  • Problem: which identity to use is an important question for the token standard
  • Consideration: Dfinity’s II is an implementation of DID, although DID is based on Principal ID
  • Conclusion: the token standard needs to be compatible with different identities, to meet the needs of different identity scenarios
  6. No black-hole address
  • Question: if there is a need to destroy tokens, how should that be handled?
  • Conclusion: a burn interface should be designed into the token standard
  7. approve/transferFrom (approve is the pairing function for transferFrom): keep or remove?
  • Question: whether approve/transferFrom should be removed is controversial in the forum discussion
  • Consideration: approve/transferFrom appears in ERC20 mainly because:

With Ethereum’s native ETH, you can call a smart contract function and send ETH to the contract at the same time, using payable. But because an ERC20 token is itself a smart contract, it is not possible to send tokens directly to a smart contract while calling one of its functions; therefore, the ERC20 standard allows smart contracts to transfer tokens on behalf of the user, using the transferFrom() function. For this, users first need to allow the smart contract to transfer those tokens on their behalf.
However, in Ethereum’s DEX and lending scenarios, approve is often accompanied by the simultaneous operation of two tokens. Approve avoids the repeated-payment problem that plain transfers can bring, so it is a good complement to transfer.

  • Conclusion: approve/transferFrom should be supported
  8. TransferAndCall vs. receiver Notify
  • Problem: which option is more suitable
  • Consideration:

Notify can meet basic notification needs. Although it cannot offer the same flexibility, it is sufficient for the transfer scenario.

TransferAndCall provides better flexibility, but it requires the transfer caller to fully know the target method and parameters of the call, which most transfer scenarios do not need.

  • Conclusion: support both at the same time, integrated into the transfer function (see the sketch after this list):

If the user specifies the call (target method and parameters), only the call is executed, and the notification is not;

If the user does not specify the call, only Notify is executed;

The token standard should execute Notify first, and then execute the call;

  9. approveAndCall vs. transferAndCall
  • Problem: some developers support approveAndCall, so we compare it with transferAndCall. Because of problem 1 (atomicity), approveAndCall and transferAndCall are both non-atomic sequences of operations, and in essence there is no difference between them
  • Consideration: in some scenarios, multiple tokens need to be transferred at the same time; transferAndCall cannot meet such needs. After approve, the final call can execute transferFrom to pay with multiple tokens at once
  • Conclusion: support both approveAndCall and transferAndCall, to meet the flexible needs of more scenarios
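
Referring back to point 8, here is a hedged Motoko sketch of the dispatch rule (the hook name onTokenReceived, the CallData shape, and the raw-call mechanism are assumptions):

import IC "mo:base/ExperimentalInternetComputer";
import Principal "mo:base/Principal";

type CallData = { method : Text; args : Blob };

// Sketch only; assumed to run inside the token actor after balances have moved.
func afterTransfer(receiver : Principal, amount : Nat, calldata : ?CallData) : async () {
  switch (calldata) {
    case (?cd) {
      // transferAndCall: execute the caller-specified call; Notify is skipped.
      ignore await IC.call(receiver, cd.method, cd.args);
    };
    case (null) {
      // No calldata: fall back to the receiver's Notify hook (assumed name).
      let r = actor (Principal.toText(receiver)) : actor {
        onTokenReceived : shared Nat -> async ();
      };
      await r.onTokenReceived(amount);
    };
  };
};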

What does Dfinity Fungible Token Standard need to achieve?

  1. Interface self-description

Dfinity needs a common contract-interface registration/query service similar to ERC1820.

Dfinity currently does not have such a service and, because of the economic considerations in [Problems to be solved], nobody wants to build one.

Dfinity can solve the problem ERC1820 solves through the Dfinity Self-Describing Standard.

  2. Information self-description

Etherscan, MyEtherWallet, imToken, TokenPocket, and dapps all want more information about an ERC20 token than the standard provides, such as its logo, introduction, white paper, social media, official website, and contact information. Each place that needs this information has to maintain it independently, so the information ends up inconsistent. The design of the [Dfinity Fungible Token Standard] should solve this problem.

Based on the above problems and requirements, combined with the ERC standard formed in the previous step, the following draft standard is formulated:

type ApproveResult = variant { Ok : opt text; Err : text };
type BurnResult = variant { Ok; Err : text };
type CallData = record { method : text; args : vec nat8 };
type Fee = record { lowest : nat; rate : nat32 };
type KeyValuePair = record { k : text; v : text };
type MetaData = record {
  fee : Fee;
  decimals : nat8;
  name : text;
  total_supply : nat;
  symbol : text;
};
type TokenHolder = variant { Account : text; Principal : principal };
type TransferResult = variant {
  Ok : record { nat; opt vec text };
  Err : text;
};
service : {
  // Return all of the meta data of a token.
  meta: () -> (MetaData) query;

  // Return all of the extend data of a token.
  // Extend data show more information about the token
  // supported keys:
  // OFFICIAL_SITE
  // MEDIUM
  // OFFICIAL_EMAIL
  // DESCRIPTION
  // BLOG
  // REDDIT
  // SLACK
  // FACEBOOK
  // TWITTER
  // GITHUB
  // TELEGRAM
  // WECHAT
  // LINKEDIN
  // DISCORD
  // WHITE_PAPER
  extend: () -> (vec KeyValuePair) query;

  // Return token logo picture
  logo : () -> (vec nat8) query;

  // Returns the account balance of another account with address owner.
  balanceOf: (holder: text) -> (nat) query;

  // Returns the amount which spender is still allowed to withdraw from owner.
  allowance:(owner: text, spender: text)->(nat) query;

  // Allows spender to withdraw from your account multiple times, up to the value amount. If this function is called again it overwrites the current allowance with value.
  // If calldata is not empty, approveAndCall will be executed.
  approve: (fromSubAccount: opt vec nat8, spender: text, value: nat, calldata: opt CallData) -> (ApproveResult);
  // Transfers value amount of tokens from [address from] to [address to].
  // The transferFrom method is used for a withdraw workflow, allowing canister
  // to transfer tokens on your behalf.
  transferFrom: (spenderSubAccount: opt vec nat8, from: text, to: text, value: nat) -> (TransferResult);

  // Calls the receiver's Notify hook function if it exists.
  // Transfers of value 0 will be rejected.
  // Generates an AccountIdentifier based on the caller's Principal and
  // the provided SubAccount, and then attempts to transfer amount from the
  // generated AccountIdentifier to recipient, returning the outcome as a TransferResult.
  // recipient can be an AccountIdentifier, a Principal (which then transfers to the default subaccount),
  // or a canister (where a callback is triggered).
  // Providing calldata means transferAndCall.
  transfer: (fromSubAccount: opt vec nat8, to: text, value: nat, calldata: opt CallData) -> (TransferResult);

  // Destroys `amount` tokens from `account`, reducing the total supply.
  burn: (fromSubAccount: opt vec nat8, amount: nat) -> (BurnResult);


  // Returns whether the canister supports an interface, for example: supportedInterface("balanceOf:(text)->(nat)")
  // Implements the [Dfinity Self Describing Standard](https://github.com/Deland-Labs/dfinity-self-describing-standard)
  supportedInterface : (text) -> (bool) query;
}

Here is a rust-based implementation example

9 Likes

Thinking about atomicity

Atomicity is a very important matter. In the traditional distributed development environment, there are two solutions:

  1. Distributed transactions, similar to 2-phase commit (or 3-phase commit)
  2. Sagas

Distributed transactions require each participant to support 2-phase commit; sagas have no such requirement. Because of this, sagas reduce the complexity for each individual canister implementer and let consistency be achieved independently of any single canister.

So I think sagas are the better consistency solution.
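
A hedged Motoko sketch of a saga-style swap (tokenA, tokenB, and self are assumed to be in scope; real code would also need to handle failure of the compensating call itself):

// Saga sketch (assumed names): each committed step has a compensating action
// that runs if a later step fails. No two-phase commit support is required
// from the token canisters themselves.
public shared(msg) func swap(amountA : Nat, amountB : Nat) : async Bool {
  // Step 1: pull the caller's tokenA into this canister.
  if (not (await tokenA.transferFrom(msg.caller, self, amountA))) {
    return false; // nothing committed yet, nothing to compensate
  };
  // Step 2: pay out tokenB. On failure, compensate step 1.
  if (not (await tokenB.transfer(msg.caller, amountB))) {
    // Compensating action; real code must also handle this call failing.
    ignore await tokenA.transfer(msg.caller, amountA);
    return false;
  };
  true
};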

5 Likes

Why is fee designed like this?

type Fee = record { lowest : nat; rate : nat32 };

Dfinity should consider the cost of DDoS attacks. Fee design does not appear in the ERC20 standard, but on Dfinity it is necessary.
The fee design should first consider a minimum fee for each update operation, to prevent DDoS attacks; some services may also charge by rate. The two fee logics are combined into [lowest + rate] to support different scenarios.

  1. Only a minimum charge x is required:
    fee = record { lowest = x; rate = 0 }

  2. Charged at rate y%, with a minimum charge x:
    fee = record { lowest = x; rate = y% }
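
For illustration, here is a minimal Motoko sketch of evaluating such a fee; the rate scale (which nat32 value means 100%) is an assumption, since the draft does not pin one down:

import Nat32 "mo:base/Nat32";

// Assumed scale: rate is parts per 100_000_000, i.e. 100_000_000 = 100%.
let RATE_DIV : Nat = 100_000_000;

func calcFee(value : Nat, lowest : Nat, rate : Nat32) : Nat {
  let rated = value * Nat32.toNat(rate) / RATE_DIV;
  // never charge below the minimum
  if (rated > lowest) { rated } else { lowest }
};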

2 Likes

Why do we need approve?

Approve can remove the possibility of repeated payments. Most payment scenarios are followed by further operations: shopping is followed by order processing, trading by exchange operations. In these scenarios, approve is better than transfer:

With approve, the actual charge happens via transferFrom when the next specific operation is performed, whereas with transfer the transfer must complete before the next operation; if the user transfers multiple times, that may lead to repeated payments. Approving x means only x can ever be transferred, which eliminates repeated payment.

At the same time, many innovations were born on top of approve, such as Superfluid. Dfinity needs the approve interface to open a window to innovations coming from Ethereum.

2 Likes

We have made some efforts to implement token canister templates.
About 3: currently we have implemented both built-in tx storage and separate-canister tx storage; ultimately I think we need an auto-scaling storage solution for tx history.

About 4: fee logic is indeed needed; paying a fixed amount of tokens for each update call is reasonable.

About 5: I think you mean account id and principal id; here is a picture explaining the difference. The principal id is the unique, native identity on the IC, and we chose to use principals in our implementation.

About 6: I think aaaaa-aa can be used as the black-hole address; it’s the IC management canister id, not an actual canister, just an abstraction over some system-level APIs.

5 Likes

Thanks for your feedback, @ccyanxyz.
When I designed this token standard, your code was one of my references; thank you for your and your team’s work.

About 3: I agree with you about auto-scaling storage. I chose sudograph as the separate-canister tx storage (sudograph provides richer query support; thanks for the work of the sudograph team @lastmjs).

we need an auto-scaling storage solution for tx history

Yes, a fixed fee can meet the needs of most tokens.
A common fee model is a fixed fee or a rate;
type Fee = record { lowest : nat; rate : nat32 };
covers both kinds of need.

About 4: fee logic is indeed needed; paying a fixed amount of tokens for each update call is reasonable.

Yes, I mean account id and principal id. Before designing the token standard, I saw this picture.
I don’t know which is best, and nobody can give a perfect answer, so being compatible with both may be the better choice.

About 5: I think you mean account id and principal id

Yes, I considered this address, but can developers call this address to perform operations? I did not find a clear answer, so I gave up on this choice. Burn has a similar implementation in ERC20, which is a good choice.

About 6: I think aaaaa-aa can be used as the black-hole address; it’s the IC management canister id, not an actual canister, just an abstraction over some system-level APIs.

5 Likes

Just added a token canister template with auto-scaling transaction history storage. It hasn’t been thoroughly tested yet, just for reference: ic-token/motoko/auto-scale-storage at main · dfinance-tech/ic-token · GitHub. Feedback welcome.

4 Likes

For example, consider this scenario: my canister calls the token function transferFrom(), and at that point my canister fails. How do I know whether my call succeeded? Therefore, the standard should provide the ID corresponding to the transaction before the transfer is sent.
Using the transaction ID, we can query the transaction details afterwards. In addition, it is necessary to provide a transaction(index: nat64) -> record query interface, and a current-index (similar to an ETH nonce) index(cid: principal) -> nat64 query interface.

First of all, Dfinity is not ETH, which means that your ETH experience cannot be copied 100% to Dfinity.

Please learn about the atomicity of dfinity from here

Secondly, a canister’s current maximum storage is 4 GB, so a production environment should store transaction history separately. The token standard implemented by Deland uses separate storage by default:

Let me try to understand your question:
Scenario: you call transferFrom in your own canister; after calling transferFrom, you deliberately set a trap that makes your canister’s call fail.
Question: I can’t tell whether you want to know if the transferFrom call succeeded, or if your own canister method was called successfully.
My answer:
Once you call transferFrom and the returned result contains a TransactionID, your call succeeded. If it was unsuccessful, an error message is returned.

Even if exceptions occur while executing other logic in your canister, transferFrom will not be rolled back because of them; in that case, the transferFrom is still successful.

If you want to know whether a transferFrom succeeded, or to obtain the details of the transferFrom transaction later, you can do so in the following way in the token standard implemented by Deland:

  1. Get the token’s external storage canister id: tokenGraphql : () -> (principal) query;

  2. Get your tx details through a sudograph query: graphql_query : (text, text) -> (text) query;

For example: dfx canister call graphql graphql_query '("query { readTx(search:{ txid:{eq:\"your transaction id\"} }) { id,txid,txtype,from,to,value,fee,timestamp} }", "{}")'

1 Like

Why choose sudograph as the separate-canister tx storage?

Sudograph provides richer query support.

You can learn more from the Sudograph book.

3 Likes

let result = await canister.transferFrom(from, to, amount);
let b = 100/0; // or another exception occurs here... I'll never get the transaction ID

Should a function be provided, hashID : (from, to, amount) -> txid, where this txid equals the ID returned by transferFrom()?

The above code should then be written as follows:

let txID = hashID(from, to, amount);
writeToRecord(txID); // record the transaction id
let result : Bool = await canister.transferFrom(from, to, amount);
let b = 100/0; // an exception occurs here, but I already have the txID

Then, through the query interface, I can determine whether my transaction succeeded.

As far as I know, the answer is no.

Should a function be provided, hashID : (from, to, amount) -> txid, where this txid equals the ID returned by transferFrom()?

You can do it like this:

let transferResult = canister.transferFrom(from, to, amount).await;
match transferResult {
    TransferResult::Ok(txid, inner_errors_opt) => { writeToRecord(txid); },
    _ => {}
};
let b = 100/0; // an exception occurs here, but you already got the txid

The above code should then be written as follows:
let txID = hashID(from, to, amount);
writeToRecord(txID); // record the transaction id
let result : Bool = await canister.transferFrom(from, to, amount);
let b = 100/0; // an exception occurs here, but I already have the txID
Then, through the query interface, I can determine whether my transaction succeeded.

"let b = 100/0;" was just an example. What I want to express is: might my own canister suddenly lose its connection to the other canister, or be stopped because it runs out of cycles? If so, the following code never gets a chance to execute:

match transferResult {
    TransferResult::Ok(txid, inner_errors_opt) => { writeToRecord(txid); },
    _ => {}
};
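
One hedged way to address this concern (sketched in Motoko, with assumed helper names): record the pending transfer before the inter-canister call. State changes made before an await are committed when the call is sent, so the record survives even if the canister traps or is stopped afterwards, and a reconciliation pass can later query the token’s transaction history to settle it:

// Sketch (assumed names): write-ahead record, then call, then settle.
let pendingId = logPending(from, to, amount); // committed once the call below is sent
let result = await token.transferFrom(from, to, amount);
switch (result) {
  case (#Ok(txid, _)) { markSettled(pendingId, txid) };
  case (#Err(e)) { markFailed(pendingId, e) };
};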

Let’s replace these Web2 social platforms with IC Native equivalents:

  // Return all of the extend data of a token.
  // Extend data show more information about the token
  // supported keys:
  // OFFICIAL_SITE
  // OFFICIAL_EMAIL
  // DESCRIPTION
  // BLOG
  // DSCVR
  // OPENCHAT
  // DISTRIKT
  // WEACT
  // NUANCE
  // ETC…
  // GITHUB
  // DISCORD
  // WHITE_PAPER

Good idea, thanks for your suggestion.
I will add the IC Native equivalents.