DAB (https://dab.ooo) solves the multiple NFT interface issue. DAB wraps each standard to allow for one common interface.
I thought DAB was intended to be a temporary solution?
Hey, could an administrator pin the featured git repositories? Much appreciated.
Imo, token standards were created so that the blockchain would have something on it, besides its native coin, for people to spend their money on. Creating something exciting/hyped for people to FOMO into has been around since before I got into crypto (Jan '14). The current "cutting-edge" of standards for NFTs is the addition of an attributes field and the creation of optional royalties.
If ERCs were very mature standards, I'd see value in emulating them. But, as senior.joinu has done a great job of pointing out, Eth and ICP diverge almost right after the word "token". For instance, what if I want to create a DNS system in which the NFT owned shapes the response given? ICP lends itself well to this, but Ethereum does not.
Hey @rossberg, @claudio, etc. ... what chance is there of function overloading on the IC/Motoko? Right now we have a collision on transfer for Departure Labs and EXT. It sure would be nice if we could have overloading and support both based on the candid type that comes in. If not overloading, then some kind of way to pre-parse a candid input and transform it? Just a thought ... it would solve a good bit of stressing about standards.
You mean overloading for canister methods? I don't think that can work. The IC can only distinguish by name, so it would have to become a single method from the IC/Candid perspective that does something different based on the argument type. Pretending these are two methods inside Motoko would be a leaky abstraction at best, since it could not really separate them (consider function references). It would also be super-hairy and likely impossible to reconcile with Candid's interface evolution subtyping.
I'm not sure I follow how a method name clash can even happen in Motoko, given that there has to be a single place where an actor and its methods are defined. Can you elaborate?
My guess is that you are using a single actor to implement two separate interfaces that happen to have two methods with a common name but different signature?
In that case, overloading wouldn't be a general solution either, even ignoring everything else, because the clashing methods may happen to have similar types.
I'd suggest that actor interfaces defined to be implemented by other actors should take care of properly namespacing method names, e.g., by some "owner" prefix. We might want to think about establishing some conventions for that.
My guess is that you are using a single actor to implement two separate interfaces that happen to have two methods with a common name but different signature?
Exactly. Departure Labs has transfer(to : Principal, id : Text) and EXT has transfer : shared (request : TransferRequest) -> async TransferResponse;, where TransferRequest and TransferResponse are:
public type TransferRequest = {
  from : User;
  to : User;
  token : TokenIdentifier;
  amount : Balance;
  memo : Memo;
  notify : Bool;
  subaccount : ?SubAccount;
};
public type TransferResponse = Result.Result<Balance, {
  #Unauthorized : AccountIdentifier;
  #InsufficientBalance;
  #Rejected; // Rejected by canister
  #InvalidToken : TokenIdentifier;
  #CannotNotify : AccountIdentifier;
  #Other : Text;
}>;
So if I want to write an NFT canister to support both, I currently can't. I'd love for the community to come to some resolution, and extensibility with a prefix/postfix would be a great solution.
If every interface on the IC were namespaced, it would fix a lot. It would also be kind of ugly:
transfer_com_ext and transfer_com_departure
That is going to look pretty ugly inline in code.
Another option is to get everyone to agree to always use one interface function that takes a standard variant type:
public shared(msg) func _interface(command : VariantType) : async Result<VariantType, Text> {
  switch (command) {
    case (#Class(val)) {
      switch (VariantHelper.findProperty(val, "command")) {
        case (null) { return #err("command required") };
        case (?cmd) {
          if (cmd == "transfer") {
            switch (VariantHelper.findProperty(val, "namespace")) {
              case (null) {
                // default to the EXT interface
                let result = transfer_ext(VariantHelper.unwrap(VariantHelper.findProperty(val, "parameters"), EXTTypeDefHelper));
                return VariantHelper.wrap(result, EXTTypeDefHelper);
              };
              case (?namespace) {
                if (namespace == "com_ext_nonfungible") {
                  let result = transfer_ext(VariantHelper.unwrap(VariantHelper.findProperty(val, "parameters"), EXTTypeDefHelper));
                  return VariantHelper.wrap(result, EXTTypeDefHelper);
                } else if (namespace == "com_departure_nonfungible") {
                  let result = transfer_departure(VariantHelper.unwrap(VariantHelper.findProperty(val, "parameters"), DepartureTypeDefHelper));
                  return VariantHelper.wrap(result, DepartureTypeDefHelper);
                };
              };
            };
          }; // else other commands
        };
      };
    };
  };
  return #err("unhandled command");
};
To me, the above seems extraordinarily clumsy.
Could Candid have a namespace that is applied behind the scenes, so the code stays clean but the namespaces are appended to the function names?
Something like:
type DepartureNFT = actor {
  transfer : (Principal, Text) -> Bool;
} :: com_departure_nft;
type EXTNFT = actor {
  transfer : (TransferRequest) -> async TransferResponse;
} :: com_ext_nft;
and then in the Motoko actor:
public shared(msg) func transfer::com_departure_nft(to : Principal, id : Text) : async Bool { /* code */ };
public shared(msg) func transfer::com_ext_nft(request : TransferRequest) : async TransferResponse { /* code */ };
Behind the scenes, both Motoko and Candid would be appending or prepending the namespaces to the function calls? Maybe that is not much better, and I'm not a language designer.
In that case, overloading wouldnāt be a general solution either, even ignoring everything else, because the clashing methods may happen to have similar types.
In that instance, wouldn't you be doing the same thing anyway? I guess if one had a publish(Text) that was supposed to send an event and one had a publish(Text) that was supposed to post a blog post, you might have that issue. But in the context of a token, two standards probably mean the same thing by transfer. Maybe not ... in any case, if you need the context of what "kind" of call intention you had ... maybe some kind of automagical namespacing ... would help?
What areas of overlap would give you the biggest uplift? We can likely accommodate them.
I think ext and departure labs should put their heads together and come up with a standard ... invite others ... maybe the standard should go beyond just tokens ... that fixes this glitch. We need a design pattern we can push and teach to new devs. Would be good to do it while we have 20 NFT projects, before we have 200, and before the first person blackholes their canister and can't upgrade.
I always have a liability concern about the canister as a smart contract engine. Maybe I am wrong, but as I understand it, a canister's controller can always change the actor, and a canister needs cycles to stay live.
By contrast, a smart contract on Ethereum can never be changed or taken offline. If that's true, maybe the NFT standard should consider some technical methods to address this.
Yeah, but you only need it for interfaces that others are supposed to match. I'd use a prefix convention that is simple enough:
ext__transfer and departure__transfer
Theoretically, yes, but would writing a::f in actual code be any less ugly than a_f? And it'd be introducing quite a bit of extra machinery for all parties involved (Candid, Motoko, all other language bindings).
I wouldn't necessarily assume that. More importantly, we shouldn't be satisfied with a solution that would be limited to the specific needs of tokens.
I have seen horribly overengineered solutions to the namespacing problem. Ultimately, they just push the need for agreeing on a good naming convention elsewhere.
The NFT ledger canister must then be deployed on the system subnet (like the ICP ledger canister), and some transaction fees should go to the ledger to keep the canister live.
I like this line of thought.
There's an alternative to this approach to solving compressibility, determinism, and uniqueness, which comes from embracing the idea that all tokens are non-fungible.
I have perhaps a very long post to write about semi-fungibility, but its summary is: "most tokens are neither fungible nor non-fungible; they instead occupy a point on the spectrum of fungibility."
For example, USD is perfectly fungible until you need to deposit more than 10k into a bank at one time. At that point its effective fungibility is limited because the U.S. government wants to prevent money laundering and terrorist financing.
On the other hand, even perfectly distinguishable items can have fungible properties. E.g. the famous rai stones on the island of Yap. These stones were so large they couldn't be physically moved from one owner to another. Ownership was instead tracked through community consensus.
In other words, I think the emphasis on fungibility is overstated. Instead, what's interesting is the degree of fungibility and the price inference mechanisms to make fungibility possible.
My claim is that everything becomes perfectly fungible, including NFTs, as long as you have a sufficiently subscribed price inference mechanism. I think this is non-controversial, but it doesn't seem like what it offers has been metabolized.
To give a quick example: let's say there's an NFT-generating canister that outputs an image with 4 pixels that are each 1 of 4 colors. The probability of a red pixel is 4/10, of a blue is 3/10, of a yellow is 2/10, and of a green is 1/10. An NFT image with all green pixels would have a low (0.01%) probability of occurring; it's the least probable image. Let's say an all-green NFT is generated. Will it be highly desired or not?
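As a quick sanity check on that arithmetic, here's how the image probabilities could be computed (a Python sketch; the generator itself is hypothetical):

```python
# Pixel color distribution from the example: each of the 4 pixels is drawn
# independently with these probabilities.
PIXEL_PROBS = {"red": 0.4, "blue": 0.3, "yellow": 0.2, "green": 0.1}

def image_probability(pixels):
    """Probability of generating this exact sequence of pixel colors."""
    p = 1.0
    for color in pixels:
        p *= PIXEL_PROBS[color]
    return p

all_green = image_probability(["green"] * 4)  # 0.1^4 = 0.0001, i.e. 0.01%
all_blue = image_probability(["blue"] * 4)    # 0.3^4 = 0.0081, i.e. 0.81%
print(f"all green: {all_green:.2%}, all blue: {all_blue:.2%}")
```

The all-blue image is 81 times more likely than the all-green one, which is what makes the example below interesting: a strong enough market preference for blue could still outweigh scarcity.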
Right now, the default price inference mechanism that we reach for is scarcity. The rarer the item, the more highly we value it. But what if instead the market had a strange preference for the color blue (as it should, blue is better. HODL BLEU becomes the mantra.) A completely blue image wouldn't be all that uncommon, still quite rare, but not as rare as the all green. However, the preference for blue would still drive up the overall price. Maybe this is what Vitalik means about memes: if each person believes that every other person values blue above all else, then blue would have more value. The value is driven (in part) by the expected fungibility of the asset once you own it. Fungibility is about subscribing to a price inference mechanism.
So, memes are a mechanism to drive prices. As people subscribe to memes, the assets that represent those memes can take on value. I know this is basic, but it seems like lots of people prefer to think about NFTs and fungible tokens as if they're different things. In reality, fungible tokens are just a class of nfts, backed by some set of memes that make price inference easy.
E.g. a government that says your dollar can be exchanged for a new one, no matter how tattered your dollar may be, has introduced a mechanism that makes price inference easy.
Notice that one way to make an NFT more accepted as a currency is to make more of them. You can either collateralize an asset with tokens, or have many similar assets, each with one token. The reason a singular and unique nft performs worse than Bored Apes is because Bored Apes is actually a fungible token. You know that others know how to value it. This is also why Bored Apes would be less successful if it were perfectly random; perfect randomness defies parsimonious valuation.
My bet is that if more people bought this idea we'd have tokens that do more interesting things. In the next post, this claim will also allow us to unify nfts and fts into a single implementation.
See this post for another example of a non-fungible, yet still "fungible" token.
On the Internet Computer, given that storage is quite cheap, we can have many effectively-fungible, still technically non-fungible tokens. These kinds of tokens will each have different data associated with them (non-fungible), but they'll still be easily compared (fungible).
Our question is: is it possible to make a fungible token standard that inherits from the non-fungible one?
A "fungible" NFT (which I'll hereon just call "nft") would basically work like this:
- On mint, Alice receives an nft whose data is just a number, "balance", that represents the starting tokens in circulation.
- On send, Alice intends to send x tokens to Bob. Alice's nft burns x tokens in order to create a new nft with a starting balance of x, then Alice's nft transfers ownership of the new nft to Bob.
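The mint/send rule above can be sketched in a few lines of Python (the class and method names are my own, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class Nft:
    owner: str
    balance: int  # the nft's only data: the tokens it currently backs

    def send(self, x: int, to: str) -> "Nft":
        """Burn x tokens here, mint a new nft with balance x, hand it to `to`."""
        if x > self.balance:
            raise ValueError("insufficient balance")
        self.balance -= x                # burn from the sender's nft
        return Nft(owner=to, balance=x)  # mint + transfer in one step

# On mint, Alice's nft carries the whole starting circulation.
alice_nft = Nft(owner="alice", balance=500)
bob_nft = alice_nft.send(200, "bob")
# Supply is conserved: 300 (alice) + 200 (bob) == 500
```

The burn and the mint happen inside one call, which is why the operation is atomic: there is never a moment where the 200 tokens exist in two places.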
What's cool about this approach is that these are atomic operations. We can use that attribute to solve the DEX problems that the token standard conversation is talking about.
To do so, introduce 2 new functions and a new rule:
- New "share" function: To share an nft, let's create a function share(quantity, target). When called, it creates a new nft that mints the desired quantity of tokens, and shares ownership of this new nft with the target address (after share, the token is co-owned by both the original nft owner and the target address). Unlike send, it doesn't burn tokens before minting; you'll see why next.
- New transaction rule that the nft respects: If the nft has more than one owner (it is shared), it cannot call the send method. It can only share and revoke ownership. This makes it so that nfts with multiple owners are valueless, at least until revoke is called.
- New "revoke" function: To revoke ownership of an nft (which you'll want to do so that the address you shared it with can spend it), let's create the function revoke(). In order to revoke ownership, the calling nft must burn the amount of tokens specified as the starting quantity when share(quantity, target) was called.
How about an example?
Let's say Alice has an nft called Token A with a starting balance of 500, and she wants to use a decentralized exchange (DEX) to swap 200 Token As for Token B. Here's what happens:
- Alice calls the method share(200, DEX) on the Token A nft that she holds. This prompts Token A to create a new Token A nft that mints 200 new tokens; let's call it Token A_1. Alice's Token A still has 500 tokens, and Token A_1 has 200. However, Token A_1 is now co-owned with the DEX canister and therefore valueless because it can't be spent, so the total quantity of tokens in circulation is equivalent to before calling share.
- The DEX canister calculates the number of tokens to request of Token B; let's call this value q, where q = exchange rate * 200.
- The DEX canister now requests that Token B also shares its tokens with DEX. Token B calls share(q, DEX), following the same process as Token A: split the nft token, share ownership of the newly created nft, which we'll call Token B_1.
- DEX now shares ownership of Token B_1 with Token A, and shares Token A_1 with Token B. The swap has happened, but at this point, Tokens A_1 and B_1 are still valueless since they have more than one owner. (In fact, they each have 3 owners: DEX, Token A, and Token B.)
- Upon receipt of Token B_1, Alice's Token A revokes ownership of Token A_1, burning 200 tokens to do so.
- Upon receipt of Token A_1, Token B revokes ownership of Token B_1, burning q tokens to do so.
- DEX still co-owns both nfts right now, so they're still worthless. DEX now has to revoke ownership. DEX first ensures that Token A and B have already revoked ownership of A_1 and B_1 respectively. Upon confirmation, DEX revokes its ownership of Token B_1 and Token A_1, finally making them usable. The swap is complete.
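The whole swap can be simulated end to end. Below is a hypothetical Python model of it; all names and the ownership bookkeeping are my own, and it glosses over the actual inter-canister calls:

```python
class Nft:
    def __init__(self, name, owners, balance, parent=None, share_qty=0):
        self.name = name
        self.owners = set(owners)   # spendable only when there is one owner
        self.balance = balance
        self.parent = parent        # nft that created us via share()
        self.share_qty = share_qty  # quantity the parent burns on revoke

    def spendable(self):
        return len(self.owners) == 1

    def share(self, quantity, target):
        """Mint a new co-owned nft; nothing is burned yet."""
        return Nft(f"{self.name}_1", {self.name, target}, quantity,
                   parent=self, share_qty=quantity)

    def revoke(self, child):
        """Drop our claim on a child nft, burning the shared quantity."""
        self.balance -= child.share_qty   # the burn that pays for the mint
        child.owners.discard(self.name)

# Alice swaps 200 Token A for q Token B through a DEX.
token_a = Nft("A", {"alice"}, 500)
token_b = Nft("B", {"bob"}, 1000)
q = 400  # exchange rate * 200, made up for the example

a1 = token_a.share(200, "DEX")          # step 1: A_1 is co-owned, valueless
b1 = token_b.share(q, "DEX")            # step 3: Token B does the same
a1.owners.add("B"); b1.owners.add("A")  # step 4: DEX shares each with the other side

token_a.revoke(a1)                      # step 5: burn 200, drop claim on A_1
token_b.revoke(b1)                      # step 6: burn q, drop claim on B_1
a1.owners.discard("DEX")                # step 7: DEX revokes last (no burn,
b1.owners.discard("DEX")                #         DEX holds no tokens itself)
```

If the process stops anywhere before the final revokes, the co-owned nfts stay valueless and the burned/minted quantities cancel out, which is the "no rollback needed" property.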
It's elegant because the process can fail at any point as if nothing happened. No need to roll back any transfers. You can run out of cycles at any time or be as malicious as you want; you'll only complete the exchange if it went right.
I know these have lots of grammatical and formatting errors. I tried to go back and edit, but I keep getting 403 errors. Perhaps this is because the posts are so long. You'll have to put up with the mistakes until I can figure out the issue.
Figured out the issue, I was using uBlock Origin and that seemed to be triggering some security stuff on dfinity.org's side. Nope, that wasn't it. You now just get to read this in four posts instead of 2. They're worth it though, I promise.
[Note: This was originally included in my first post, but had to be moved so I could edit the original without a 403 error]
Let me give one more example so that you can get the flavor of how nfts can become fungible with the right price inference mechanism. Let's say there's a wallet + nft scheme such that when the nft is transferred, it checks to see what its "special value" is. If the special value is prime, the nft thereafter can't be transferred to another account (it becomes worth 0). To find that special value, we first number each nft by its order of creation: 0, 1, 2, etc. Then, each transaction is numbered. The special value is transaction number + creation number. If it's prime, you can't transfer it. The prime tokens die.
In this market, if I sent you token #3, and my payment to you was the 1018th transaction in the network's life, then, bummer, your newly received token would be worth nothing (because 1018 + 3 = 1021, which is prime)!
In principle, this system is still "fungible" because you can (at a certain computational expense) estimate the likelihood that each token's special value will be prime. E.g. larger-numbered tokens will have a higher probability of being safe, since the density of primes near n is approximately 1/log(n), which falls as n grows. Similarly, tokens will have higher value if they are transacted later in the network's life (i.e. they become more fungible as the transaction count increases, for the same 1/log(n) reason). Classes of tokens will change in value depending on the timing. E.g. if someone held tokens 10 000, 10 001 and 10 002, then as the market approached 21 398 transactions, the value of their tokens would go up and stay high until the market got closer to 21 466 transactions. This is because tokens 10 000 through 10 002 have no prime special values for the transactions from 21 398 through 21 466.
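Both number-theoretic claims in this example can be checked mechanically (a Python sketch; is_prime is plain trial division, and the safe window rests on the known prime gap between 31 397 and 31 469):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check, fine for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The earlier example: token #3 spent as the network's 1018th transaction.
special = 1018 + 3
print(special, is_prime(special))  # 1021 True -> the received token dies

# Tokens 10 000..10 002 are "safe" for transactions 21 398..21 466: every
# special value then lands in the prime gap between 31 397 and 31 469.
safe = all(not is_prime(txn + tok)
           for txn in range(21_398, 21_467)
           for tok in range(10_000, 10_003))
print(safe)  # True
```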
The fungibility of these tokens would be a function of the price inference mechanism you put in place. If evaluated manually by each human for each transaction, you can bet that it would be considered highly non-fungible. But, you could create a second layer to this network that computationally priced each token, rolling up all the uncertainties of the value of a wallet into a single "fungible" number that told you how much you'd have to transfer for the other person to get the value they care about.
This is a proof, of sorts, that fungibility is created by the price inference mechanism.
[Note: This was originally included in my second post, but had to be moved so I could edit the original without a 403 error]
In Summary
- Fungibility is defined by your price inference mechanism, not by your token standard.
- All "fungible" tokens can be implemented as types of non-fungible tokens, where the data portion of the non-fungible token tracks the current "balance" and follows certain ownership and burn rules.
- Treating fungible tokens as a type of non-fungible token gives us advantages over classic fungible tokens. For example, it makes it possible to implement a decentralized exchange without ever needing rollback.
Finally, and perhaps most importantly, this approach can likely dramatically simplify the token standard that each wallet needs to support. The nft itself will carry the logic and permissions, and wallets can call simple handles like share, transfer, and balance.
I'd like to at some point explore implementing this standard, but I'm still learning Rust. If anyone else finds anything interesting about the idea of unifying nfts and fts into a single standard, feel free to steal, riff, rip, and repurpose.
An open question is how much additional cost overhead in terms of storage and compute this approach would incur.
Also, please let me know the ways in which Iām wrong.