Non Fungible Token (NFT) Standard [Community Consideration]

Hey, could an administrator pin the featured git repositories? Much appreciated.

1 Like

imo, token standards were created so that the blockchain would have something on it, besides its native coin, for people to spend their money on. Creating something exciting/hyped for people to FOMO into has been around since before I got into crypto (Jan '14). The current 'cutting edge' of NFT standards is the addition of an attributes field and the creation of optional royalties.
If ERCs were very mature standards, I'd see value in emulating them. But, as senior.joinu has done a great job of pointing out, Eth and ICP diverge almost immediately after the word 'token'. For instance, what if I want to create a DNS system in which the NFT owned shapes the response given? ICP lends itself well to this; Ethereum does not.

1 Like

Hey @rossberg, @claudio, etc…what chance is there of function overloading on the IC/Motoko? Right now we have a collision on transfer between Departure Labs and EXT. It sure would be nice if we could have overloading and support both based on the Candid type that comes in. If not overloading, then some way to pre-parse a Candid input and transform it? Just a thought…it would solve a good bit of stress about standards.

1 Like

You mean overloading for canister methods? I don’t think that can work. The IC can only distinguish by name, so it would have to become a single method from the IC/Candid perspective that does something different based on the argument type. Pretending these are two methods inside Motoko would be a leaky abstraction at best, since it could not really separate them (consider function references). It would also be super-hairy and likely impossible to reconcile with Candid’s interface evolution subtyping.

I’m not sure I follow how a method name clash can even happen in Motoko, given that there has to be a single place where an actor and its methods are defined. Can you elaborate?

My guess is that you are using a single actor to implement two separate interfaces that happen to have two methods with a common name but different signature?

In that case, overloading wouldn’t be a general solution either, even ignoring everything else, because the clashing methods may happen to have similar types.

I’d suggest that actor interfaces defined to be implemented by other actors should take care of properly namespacing method names, e.g., by some “owner” prefix. We might want to think about establishing some conventions for that.

My guess is that you are using a single actor to implement two separate interfaces that happen to have two methods with a common name but different signature?

Exactly. Departure Labs has transfer(to : Principal, id : Text) and EXT has transfer : shared (request : TransferRequest) → async TransferResponse, where TransferRequest and TransferResponse are:

public type TransferRequest = {
  from : User;
  to : User;
  token : TokenIdentifier;
  amount : Balance;
  memo : Memo;
  notify : Bool;
  subaccount : ?SubAccount;
};

public type TransferResponse = Result.Result<Balance, {
  #Unauthorized : AccountIdentifier;
  #InsufficientBalance;
  #Rejected; // Rejected by canister
  #InvalidToken : TokenIdentifier;
  #CannotNotify : AccountIdentifier;
  #Other : Text;
}>;

So if I want to write an NFT canister that supports both, I currently can't. I'd love for the community to come to some resolution, and extensibility with a prefix/postfix would be a great solution.

If every interface on the IC were namespaced it would fix a lot. It would also be kind of ugly.

transfer_com_ext and transfer_com_departure

That is going to look pretty ugly inline in code.

Another option is to get everyone to agree to always use one interface function that takes a standard variant type:

public shared(msg) func _interface(command : VariantType) : async Result<VariantType, Text> {
     switch (command) {
          case (#Class(val)) {
               let commandName = VariantHelper.findProperty("command");
               switch (commandName) {
                    case (null) { return #err("command required") };
                    case (?commandName) {
                         if (commandName == "transfer") {
                              let namespace = VariantHelper.findProperty("namespace");
                              switch (namespace) {
                                   case (null) {
                                        // default
                                        let result = transfer_ext(VariantHelper.unwrap(VariantHelper.findProperty("parameters"), EXTTypeDefHelper));
                                        return VariantHelper.wrap(result, EXTTypeDefHelper);
                                   };
                                   case (?namespace) {
                                        if (namespace == "com_ext_nonfungible") {
                                             let result = transfer_ext(VariantHelper.unwrap(VariantHelper.findProperty("parameters"), EXTTypeDefHelper));
                                             return VariantHelper.wrap(result, EXTTypeDefHelper);
                                        } else if (namespace == "com_departure_nonfungible") {
                                             let result = transfer_departure(VariantHelper.unwrap(VariantHelper.findProperty("parameters"), DepartureTypeDefHelper));
                                             return VariantHelper.wrap(result, DepartureTypeDefHelper);
                                        };
                                   };
                              };
                         }; // else other commands
                    };
               };
          };
     };
};

To me, the above seems extraordinarily clumsy.

Could Candid have a namespace that is applied behind the scenes, so the code stays clean but the namespaces are appended to the function names?

Something like:

type DepartureNFT = actor {
transfer : (Principal, Text) → Bool;
} :: com_departure_nft;

type EXTNFT = actor {
transfer : (Principal, Text) → Bool;
} :: com_ext_nft;

and then in motoko actor:

public shared(msg) func transfer::com_departure_nft(to : Principal, id : Text) : async Bool { /* code */ };
public shared(msg) func transfer::com_ext_nft(request : TransferRequest) : async TransferResponse { /* code */ };

Behind the scenes, both Motoko and Candid would append or prepend the namespaces to the function names? Maybe that is not much better; I'm not a language designer.

In that case, overloading wouldn’t be a general solution either, even ignoring everything else, because the clashing methods may happen to have similar types.

In that instance, wouldn't you be doing the same thing anyway? I guess if one standard had a publish(Text) that was supposed to send an event and another had a publish(Text) that was supposed to post a blog post, you might have that issue. But in the context of a token, two standards probably mean the same thing by transfer. Maybe not…in any case, if you need the context of what 'kind' of call was intended, maybe some kind of automagical namespacing would help?

1 Like

What areas of overlap would give you the biggest uplift? Likely can accommodate.

I think EXT and Departure Labs should put their heads together and come up with a standard…invite others…maybe the standard should go beyond just tokens…that fixes this glitch. We need a design pattern we can push and teach to new devs. It would be good to do it while we have 20 NFT projects, before we have 200, and before the first person blackholes their canister and can't upgrade.

7 Likes

I always have a liability concern about using canisters as a smart contract engine. Maybe I am wrong, but as I understand it, a canister's controller can always change the actor, and a canister needs cycles to stay live.
By contrast, a smart contract on Ethereum can never be changed or taken offline. If that's true, maybe the NFT standard should consider technical methods to address this.

Yeah, but you only need it for interfaces that others are supposed to match. I’d use a prefix convention that is simple enough:

ext__transfer    departure__transfer

Theoretically, yes, but would writing a::f in actual code be any less ugly than a_f? And it'd be introducing quite a bit of extra machinery for all parties involved (Candid, Motoko, all other language bindings).

I wouldn’t necessarily assume that. More importantly, we shouldn’t be satisfied with a solution that would be limited to the specific needs of tokens.

I have seen horribly overengineered solutions to the namespacing problem. Ultimately, they just push the need for agreeing on a good naming convention elsewhere.

3 Likes

The NFT ledger canister must then be deployed on a system subnet (like the ICP ledger canister on the system subnet).

Some transaction fees should go to the ledger to keep the canister live.

I like this line of thought.

There's an alternative approach to solving composability, determinism, and uniqueness, which comes from embracing the idea that all tokens are non-fungible.

I have perhaps a very long post to write about semi-fungibility, but its summary is: "most tokens are neither fungible nor non-fungible; they instead occupy a point on the spectrum of fungibility."

For example, USD is perfectly fungible until you need to deposit more than 10k into a bank at one time. At that point its effective fungibility is limited because the U.S. government wants to prevent money laundering and terrorist financing.

On the other hand, even perfectly distinguishable items can have fungible properties. E.g. the famous rai stones on the island of Yap. These stones were so large they couldn't be physically transferred from one owner to another. Ownership was instead tracked through community consensus.

In other words, I think the emphasis on fungibility is overstated. Instead, what’s interesting is the degree of fungibility and the price inference mechanisms to make fungibility possible.

My claim is that everything becomes perfectly fungible, including NFTs, as long as you have a sufficiently subscribed price inference mechanism. I think this is non-controversial, but it doesn’t seem like what it offers has been metabolized.

To give a quick example: let's say there's an NFT-generating canister that outputs an image with 4 pixels, each one of 4 colors. The probability of a red pixel is 4/10, of blue 3/10, of yellow 2/10, and of green 1/10. An image with all green pixels would have a low (0.01%) probability of occurring; it's the least probable image. Let's say an all-green NFT is generated. Will it be highly desired or not?
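
The arithmetic in this example is easy to check with a quick Python sketch (the probability table is taken straight from the setup above; the function name is just illustrative):

```python
# Probability sketch for the 4-pixel NFT example above.
# Pixel colors are drawn independently: red 4/10, blue 3/10, yellow 2/10, green 1/10.

P = {"red": 0.4, "blue": 0.3, "yellow": 0.2, "green": 0.1}

def image_probability(pixels):
    """Probability of a specific 4-pixel image, assuming independent draws."""
    prob = 1.0
    for color in pixels:
        prob *= P[color]
    return prob

all_green = image_probability(["green"] * 4)   # 0.1^4 = 0.0001 -> 0.01%
all_blue  = image_probability(["blue"] * 4)    # 0.3^4 = 0.0081 -> 0.81%

print(f"all green: {all_green:.4%}")   # the least probable image
print(f"all blue:  {all_blue:.4%}")    # still rare, but 81x more common than all green
```

So the all-blue image is 81 times more common than the all-green one, which is what makes the "market prefers blue anyway" scenario below interesting.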

Right now, the default price inference mechanism that we reach for is scarcity. The rarer the item, the more highly we value it. But what if instead the market had a strange preference for the color blue (as it should, blue is better. HODL BLEU becomes the mantra.) A completely blue image wouldn’t be all that uncommon, still quite rare, but not as rare as the all green. However, the preference for blue would still drive up the overall price. Maybe this is what Vitalik means about memes: if each person believes that every other person values blue above all else, then blue would have more value. The value is driven (in part) by the expected fungibility of the asset once you own it. Fungibility is about subscribing to a price inference mechanism.

So, memes are a mechanism to drive prices. As people subscribe to memes, the assets that represent those memes can take on value. I know this is basic, but it seems like lots of people prefer to think about NFTs and fungible tokens as if they’re different things. In reality, fungible tokens are just a class of nfts, backed by some set of memes that make price inference easy.

E.g. a government that says your dollar can be exchanged for a new one, no matter how tattered your dollar may be, has introduced a mechanism that makes price inference easy.

Notice that one way to make an NFT more accepted as a currency is to make more of them. You can either collateralize an asset with tokens, or have many similar assets, each with one token. The reason a singular and unique nft performs worse than Bored Apes is because Bored Apes is actually a fungible token. You know that others know how to value it. This is also why Bored Apes would be less successful if it were perfectly random, perfect randomness defies parsimonious valuation.

My bet is that if more people bought this idea we'd have tokens that do more interesting things. In the next post, this claim will also allow us to unify nfts and fts into a single implementation.

See this post for another example of a non-fungible, yet still “fungible” token.

6 Likes

On the Internet Computer, given that storage is quite cheap, we can have many effectively-fungible, still technically non-fungible tokens. These kinds of tokens will each have different data associated with them (non-fungible), but they'll still be easily compared (fungible).

Our question is: is it possible to make a fungible token standard that inherits from the non-fungible one?

A ‘fungible’ NFT (which I’ll hereon just call “nft”) would basically work like this:

  1. On mint, Alice receives an nft whose data is just a number “balance” that represents the starting tokens in circulation
  2. On send, Alice intends to send x tokens to Bob. Alice’s nft burns x tokens in order to create a new nft with a starting balance of x, then Alice’s nft transfers ownership of the new nft to Bob.
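
The mint/send mechanics above can be sketched in a few lines of Python; the class and method names here are hypothetical, purely to illustrate the burn-then-mint bookkeeping (the real thing would be a canister):

```python
# Minimal sketch of a "fungible nft": each nft's only data is a balance.
# Names (Nft, send) are illustrative, not an actual canister interface.

class Nft:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

    def send(self, amount, recipient):
        """Burn `amount` from this nft, then mint a new nft of that size for the recipient."""
        if amount > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= amount          # burn x tokens from Alice's nft...
        return Nft(recipient, amount)   # ...and mint a new nft with balance x for Bob

# Usage: Alice mints 500 tokens, then sends 200 to Bob.
alice_nft = Nft("alice", 500)
bob_nft = alice_nft.send(200, "bob")
assert alice_nft.balance == 300 and bob_nft.balance == 200
# Total supply is conserved: 300 + 200 == 500
```

Note that the burn happens before the mint, so a failure between the two steps can only destroy value, never create it.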

What’s cool about this approach is that these are atomic operations. We can use that attribute to solve the DEX problems that the token standard conversation is talking about.

To do so, introduce 2 new functions and a new rule:

New “share” function: To share an nft, let's create a function share(quantity, target). When called, it creates a new nft that mints the desired quantity of tokens and shares ownership of this new nft with the target address (after share, the token is co-owned by both the original nft owner and the target address). Unlike send, it doesn't burn tokens before minting; you'll see why next.

New transaction rule that the nft respects: If the nft has more than one owner (i.e., it is shared), it cannot call the send method. It can only share and revoke ownership. This makes nfts with multiple owners valueless, at least until revoke is called.

New “revoke” function: To revoke ownership of an nft (which you'll want to do so that the address you shared it with can spend it), let's create the function revoke(). In order to revoke ownership, the calling nft must burn the number of tokens specified as the starting quantity when share(quantity, target) was called.

How about an example?

Let’s say Alice has an nft called Token A with a starting balance of 500, she wants to use a decentralized exchange (DEX) to swap 200 Token As for Token B. Here’s what happens:

  1. Alice calls the method share(200, DEX) on the Token A nft that she holds. This prompts Token A to create a new Token A nft that mints 200 new tokens, let’s call it Token A_1. Alice’s Token A still has 500 tokens, and Token A_1 has 200. However, Token A_1 is now co-owned with the DEX canister and therefore valueless because it can’t be spent, so the total quantity of tokens in circulation is equivalent to before calling share.
  2. The DEX canister calculates the number of tokens to request of Token B, let’s call this value q where q = exchange rate * 200
  3. The DEX canister now requests that Token B also shares its tokens with DEX. Token B calls share(q, DEX), following the same process as Token A: split the nft token, share ownership of the newly created nft which we’ll call Token B_1.
  4. DEX now shares ownership of Token B_1 with Token A, and shares Token A_1 with Token B. The swap has happened, but at this point, Tokens A_1 and B_1 are still valueless since they have more than one owner. (in fact, they each have 3 owners: DEX, Token A, and Token B)
  5. Upon receipt of Token B_1, Alice’s Token A revokes ownership of Token A_1, burning 200 tokens to do so.
  6. Upon receipt of Token A_1, Token B revokes ownership of Token B_1, burning q tokens to do so.
  7. DEX still co-owns both nfts right now, so they’re still worthless. DEX now has to revoke ownership. DEX first ensures that Token A and B have already revoked ownership of A_1 and B_1 respectively. Upon confirmation, DEX revokes its ownership of Token B_1 and Token A_1 finally making them usable. The swap is complete.
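
Under the stated rules, the whole swap can be sketched in Python. All names here (Nft, share, revoke, the "dex" identity) are hypothetical illustrations of the rules above, not an actual canister interface; sharing with "Token B" is modeled as sharing with Token B's owner:

```python
# Sketch of the share/revoke rules and the DEX swap described above.
# In reality these would be inter-canister calls; here they are plain method calls.

class Nft:
    def __init__(self, owners, balance):
        self.owners = set(owners)   # more than one owner => shared => unspendable
        self.balance = balance
        self.shares = {}            # child nft -> quantity minted when shared

    def spendable(self):
        return len(self.owners) == 1

    def share(self, quantity, target):
        """Mint a new nft of `quantity`, co-owned by us and `target` (no burn yet)."""
        child = Nft(self.owners | {target}, quantity)
        self.shares[child] = quantity
        return child

    def revoke(self, child):
        """Give up co-ownership of `child`, burning the quantity minted at share time."""
        self.balance -= self.shares.pop(child)   # the burn backs the minted tokens
        child.owners -= self.owners

# The swap: Alice holds Token A (500) and wants to trade 200 A for q B.
a = Nft({"alice"}, 500)
b = Nft({"bob"}, 1000)
a1 = a.share(200, "dex")     # step 1: A_1 exists but is shared, hence valueless
q = 100                      # step 2: dex computes q = rate * 200 (rate assumed 0.5)
b1 = b.share(q, "dex")       # step 3
a1.owners |= b.owners        # step 4: dex shares A_1 with Token B's owner...
b1.owners |= a.owners        #         ...and B_1 with Token A's owner
a.revoke(a1)                 # step 5: Alice's Token A burns 200
b.revoke(b1)                 # step 6: Token B burns q
a1.owners -= {"dex"}         # step 7: dex revokes last (no burn; it minted nothing)
b1.owners -= {"dex"}
assert a1.spendable() and b1.spendable()
assert a.balance == 300 and b1.balance == 100   # Alice: 300 A left, plus 100 B via B_1
```

Interrupting the sketch anywhere before step 7 leaves A_1 and B_1 with multiple owners, i.e. worthless, which is the "no rollback needed" property.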

It's elegant because the process can fail at any point as if nothing had happened. There's no need to roll back any transfers. You can run out of cycles at any time or be as malicious as you want; you'll only complete the exchange if everything went right.

3 Likes

I know these have lots of grammatical and formatting errors. I tried to go back and edit but I keep getting 403 errors. Perhaps this is because the posts are so long. You’ll have to put up with the mistakes until I can figure out the issue :confused:

Figured out the issue, I was using uBlock Origin and that seemed to be triggering some security stuff on dfinity.org's side. Nope, that wasn't it. You now just get to read this in four posts instead of two. They're worth it though, I promise :wink:

[Note: This was originally included in my first post, but had to be moved so I could edit the original without a 403 error]

Let me give one more example so that you can get the flavor of how nfts can become fungible with the right price inference mechanism. Let’s say there’s a wallet + nft scheme such that when the nft is transferred it checks to see what its “special value” is. If the special value is prime, the nft thereafter can’t be transferred to another account (it becomes worth 0). To find that special value we first number each nft by its order of creation:
0
1
2
etc.

Then, each transaction is numbered. The special value is transaction number + creation number. If it’s prime you can’t transfer it. The prime tokens die.

In this market, if I sent you token #3, and my payment to you was the 1018th transaction in the network’s life then, bummer, your newly received token would be worth nothing (because 1018 + 3 = 1021 which is prime)!
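
The transfer rule is easy to sketch; is_prime and transferable are hypothetical helper names, with primality done by simple trial division:

```python
# Sketch of the "special value" rule: a token dies if
# (transaction number + creation number) is prime.

def is_prime(n):
    """Trial division; fine for numbers of this magnitude."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def transferable(creation_number, transaction_number):
    """A transfer succeeds only if the special value is composite."""
    return not is_prime(creation_number + transaction_number)

# The example above: token #3 received as the 1018th transaction dies,
# because 1018 + 3 = 1021 is prime.
print(transferable(3, 1018))   # False
```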

In principle, this system is still "fungible" because you can (at a certain computational expense) estimate the likelihood that each token's special value will be prime. E.g. larger-numbered tokens will have a higher probability of being safe, since the density of primes falls off as roughly 1/log(n). Similarly, tokens will have higher value if they are transacted later in the network's life (i.e. they become more fungible as the transaction count increases, for the same reason). Classes of tokens will change in value depending on the timing. E.g. if someone held tokens 10 000, 10 001 and 10 002, then as the market approached 21 398 transactions the value of their tokens would go up and stay high until the market got closer to 21 466 transactions. This is because tokens 10 000 through 10 002 have no prime special values for the transactions from 21 398 through 21 466.
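
As a check on that last claim: for tokens 10 000 through 10 002 and transactions 21 398 through 21 466, the special values run from 31 398 through 31 468, all of which fall inside the prime gap between 31 397 and 31 469 (again using a simple trial-division is_prime as a hypothetical helper):

```python
# Verifying the safe window: every token/transaction sum in it is composite.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

specials = {tok + txn
            for tok in range(10_000, 10_003)       # tokens 10 000 .. 10 002
            for txn in range(21_398, 21_467)}      # transactions 21 398 .. 21 466
assert not any(is_prime(s) for s in specials)      # 31 398 .. 31 468: no primes
assert is_prime(31_397) and is_prime(31_469)       # the gap's endpoints
print(f"{min(specials)}..{max(specials)}: safe window")
```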

The fungibility of these tokens would be a function of the price inference mechanism you put in place. If evaluated manually by each human for each transaction, you can bet that it would be considered highly non-fungible. But, you could create a second layer to this network that computationally priced each token, rolling up all the uncertainties of the value of a wallet into a single “fungible” number that told you how much you’d have to transfer for the other person to get the value they care about.

This is a proof, of sorts, that fungibility is created by the price inference mechanism.

1 Like

[Note: This was originally included in my second post, but had to be moved so I could edit the original without a 403 error]

In Summary

  1. Fungibility is defined by your price inference mechanism, not by your token standard.
  2. All “fungible” tokens can be implemented as types of non-fungible tokens, where the data portion of the non-fungible token tracks the current “balance” and follows certain ownership and burn rules.
  3. Treating fungible tokens as a type of non-fungible token gives us advantages over classic fungible tokens. For example, it makes it possible to implement a decentralized exchange without ever needing rollback.

Finally, and perhaps most importantly, this approach can likely dramatically simplify the token standard that each wallet needs to support. The nft itself will carry the logic and permissions, and wallets can call simple handles like share, transfer, and balance.

I'd like to explore implementing this standard at some point, but I'm still learning Rust. If anyone else finds anything interesting about the idea of unifying nfts and fts into a single standard, feel free to steal, riff, rip, and repurpose.

An open question is how much additional cost overhead in terms of storage and compute this approach would incur.

Also, please let me know the ways in which I’m wrong.

This is actually really deep. I’m still trying to fully understand it:

  1. DEX now shares ownership of Token B_1 with Token A, and shares Token A_1 with Token B.

I thought in order to share an NFT, the caller has to specify an amount to "lock up" and split the NFT? Does the DEX canister do that here?

  1. DEX still co-owns both nfts right now, so they’re still worthless. DEX now has to revoke ownership.

Like the above, doesn’t the DEX canister have to burn tokens to revoke ownership? Or maybe NFTs only get split or burned when the caller of share and revoke is the first owner, but not for subsequent owners?

1 Like

Hi @jzxchiang, I was hoping you would see this. Excited to hear your thoughts :slight_smile:

share() could infer from its arguments whether it's a split share or just a co-own share. If called as share(addr), it's clearly just trying to co-own with addr.

Alternatively, share(0, addr) could be taken to mean that you’re sharing ownership without duplicating any tokens.

Less elegantly, but equivalently and perhaps more pragmatically, you could have two functions: splitShare(quantity, address) and share(address).

Yeah I think you have to require the original owner to burn to revoke. That’s the only thing that makes sense if you want to preserve fungibility.

2 Likes