Call for Participation: NFT Token Standard Working Group [Status Updated]

I get the idea, but doesn’t this largely overlap with the minting and burning concepts? (With minting you would set the initial metadata).

I think concepts like minting, updating (metadata), and burning could be seen as CRUD operations on the tokens, which are probably things you'd want to standardize in relation to one another.

The current standard, meanwhile, is mainly focused on consumers of the ledger (e.g. wallets and marketplaces) rather than on, say, minting tooling.

Considering the low cost of creating an NFT collection on the IC, including its metadata and even its assets, standardizing this seems like a logical next step. It would make a large choice of tooling available that is preferably interoperable, so that users, for example, aren't tied to the minting tools of a given marketplace.


Thanks, @sea-snake, for your response!

To some extent I definitely agree with you: there is overlap with the creation/destruction of NFTs, and all of this amounts to CRUD operations.

But I had the feeling that an update is something we might want to specify at the API level because many use cases might want to use it. The update sits somewhere in the middle between minting tooling and the consumers of the ledger, because an update is an aftermarket operation, so to speak.

An update could be as simple as replacing the current metadata with new metadata. Having this surfaced at the API level helps create a standard way to change this information that applications can rely on. I felt it is different from mint and burn, as those can be much more diverse and dependent on the use case, and are thus harder to standardize cleanly. But a generic update is something that would work for everyone, and specific ledgers could then also implement more specific update methods in addition.
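To make the idea concrete, here is a purely hypothetical Motoko sketch; neither the method name nor the argument shape is taken from the draft, and Value stands for the generic metadata value type:

    // Hypothetical sketch only: a generic, use-case-agnostic metadata update.
    // The name icrc7_replace_metadata and this argument shape are invented for
    // illustration and are not part of ICRC-7.
    public type ReplaceMetadataArg = {
      token_id : Nat;
      new_metadata : [(Text, Value)]; // full replacement of the current metadata
    };
    // icrc7_replace_metadata : ([ReplaceMetadataArg]) -> async [Bool];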

I need to think about it more in light of your response. Maybe it's also something to keep for discussion in the upcoming meeting.

We have the following functions in our reference implementation.

We use Candy as the Metadata storage under the hood because:

  1. With a Class we can set certain properties of an item as immutable and provide guarantees to the user.
  2. The Class/Maps are held internally as Maps, so we can search through them more easily and do not have to iterate over the whole collection. (A short example of such a Class follows below.)
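For illustration, a Candy Class mixing mutable and immutable properties looks roughly like this (same shape as the CandyShared literals in the test further down; the property names and values here are made up):

    // Illustrative only: a Candy #Class with one immutable and one mutable property.
    // Per the Candy library, a property is { name : Text; value : CandyShared; immutable : Bool }.
    let metadata : CandyShared = #Class([
      { name = "symbol"; value = #Text("XNFT"); immutable = true },                   // guaranteed to never change
      { name = "url"; value = #Text("https://example.com/nft/1"); immutable = false } // updatable after mint
    ]);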

In this code, set_nft is used to completely overwrite the object. update_nft uses the Candy.Properties package (borrowed from @quint), which allows you to apply an update graph to a Class so that you can make deep updates to a nested Class while immutable properties are honored.

When we return icrc7_token_metadata, we dump all the Classes to standard Maps according to the Value schema (and other data types as well; you can see the mapping at the end). This loses some context for the user, but the application can choose to add another endpoint that provides the object as a fully annotated Candy value.

///hard-sets an NFT's metadata; for incremental updates use update_nft
    public func set_nft(request: [SetNFTRequest], environment : ?Environment) : [Bool]{
      
      //todo: Security at this layer?
      //todo: where to handle minting and setting data

      let results = Vec.new<Bool>();
      label proc for(thisItem in request.vals()){

        //does it currently exist?
        switch(Map.get<Nat, CandyTypes.Candy>(state.nfts, Map.nhash, thisItem.token_id)){
          case(null){};
          case(?val){
            //this nft is being updated and we need to de-index it.
            switch(get_token_owner_canonical(thisItem.token_id)){
              case(#err(_)){};
              case(#ok(val)) ignore unindex_owner(thisItem.token_id, val);
            };
            
          };
        };


        ignore Map.put<Nat, CandyTypes.Candy>(state.nfts, Map.nhash, thisItem.token_id, thisItem.metadata);
        Vec.add(results, true);

        D.print("about to check canonical owner" # debug_show(thisItem));
        switch(get_token_owner_canonical(thisItem.token_id)){
          case(#ok(owner)){
             D.print("about to index owner" # debug_show(thisItem));
            ignore index_owner(thisItem.token_id, owner);
          };
          case(_){};
        };
      };
      return Vec.toArray(results);
    };

    ///updates an NFT's metadata incrementally; only #Class values are supported
    public func update_nft(request: [UpdateNFTRequest], environment : ?Environment) : Result.Result<[Bool], Text>{
      
      //todo: Security at this layer?
      //todo: where to handle minting and setting data

      let results = Vec.new<Bool>();
      label proc for(thisItem in request.vals()){

        //does it currently exist?
        switch(Map.get<Nat, CandyTypes.Candy>(state.nfts, Map.nhash, thisItem.token_id)){
          case(null){};
          case(?val){
            var owner_found : ?Account = null;
            //this nft is being updated and we need to de-index it.
            switch(get_token_owner_canonical(thisItem.token_id)){
              case(#err(_)){};
              case(#ok(val)){
                //do any of the updates affect the owner
                for(thisUpdate in thisItem.updates.vals()){
                  if(thisUpdate.name == token_property_owner_account){
                    owner_found := ?val;
                  };
                };
                
              };
            };

            switch(val){
              case(#Class(props)){
                let updatedObject = switch(CandyProperties.updateProperties(props, thisItem.updates)){
                  case(#ok(val)) val;
                  case(#err(err)) {
                    Vec.add(results, false);
                    continue proc;
                  };
                };

                switch(owner_found){
                  case(?val){
                    ignore unindex_owner(thisItem.token_id, val);
                  };
                  case(null){};
                };

                ignore Map.put<Nat, CandyTypes.Candy>(state.nfts, Map.nhash, thisItem.token_id, #Class(updatedObject));
                Vec.add(results, true);

                switch(owner_found){
                  case(?val){
                    D.print("about to check canonical owner" # debug_show(thisItem));
                    switch(get_token_owner_canonical(thisItem.token_id)){
                      case(#ok(owner)){
                        D.print("about to index owner" # debug_show(thisItem));
                        ignore index_owner(thisItem.token_id, owner);
                      };
                      case(_){};
                    };
                  };
                  case(null){};
                };
              };
              case(_) return #err("Only Class types supported by update");
            };
          };
        };

        
      };
      return #ok(Vec.toArray(results));
    };

Converting the Internally Stored Candy to Value:

///converts a CandyShared value to the reduced ValueShared set used in many places like ICRC-3. Some types are not recoverable.
  public func CandySharedToValue(x: CandyShared) : ValueShared {
    switch(x){
      case(#Text(x)) #Text(x);
      case(#Map(x)) {
        let buf = Buffer.Buffer<(Text, ValueShared)>(1);
        for(thisItem in x.vals()){
          buf.add((thisItem.0, CandySharedToValue(thisItem.1)));
        };
        #Map(Buffer.toArray(buf));
      };
      case(#Class(x)) {
        let buf = Buffer.Buffer<(Text, ValueShared)>(1);
        for(thisItem in x.vals()){
          buf.add((thisItem.name, CandySharedToValue(thisItem.value)));
        };
        #Map(Buffer.toArray(buf));
      };
      case(#Int(x)) #Int(x);
      case(#Int8(x)) #Int(Int8.toInt(x));
      case(#Int16(x)) #Int(Int16.toInt(x));
      case(#Int32(x)) #Int(Int32.toInt(x));
      case(#Int64(x)) #Int(Int64.toInt(x));
      case(#Ints(x)){
         #Array(Array.map<Int,ValueShared>(x, func(x: Int) : ValueShared { #Int(x)}));
      };
      case(#Nat(x)) #Nat(x);
      case(#Nat8(x)) #Nat(Nat8.toNat(x));
      case(#Nat16(x)) #Nat(Nat16.toNat(x));
      case(#Nat32(x)) #Nat(Nat32.toNat(x));
      case(#Nat64(x)) #Nat(Nat64.toNat(x));
      case(#Nats(x)){
         #Array(Array.map<Nat,ValueShared>(x, func(x: Nat) : ValueShared { #Nat(x)}));
      };
      case(#Bytes(x)){
         #Blob(Blob.fromArray(x));
      };
      case(#Array(x)) {
        #Array(Array.map<CandyShared, ValueShared>(x, CandySharedToValue));
      };
      case(#Blob(x)) #Blob(x);
      case(#Bool(x)) #Blob(Blob.fromArray([if(x==true){1 : Nat8} else {0: Nat8}]));
      case(#Float(x)){#Text(Float.format(#exact, x))};
      case(#Floats(x)){
        #Array(Array.map<Float,ValueShared>(x, func(x: Float) : ValueShared { CandySharedToValue(#Float(x))}));
      };
      case(#Option(x)){ //empty array is null
        switch(x){
          case(null) #Array([]);
          case(?x) #Array([CandySharedToValue(x)]);
        };
      };
      case(#Principal(x)){
        #Blob(Principal.toBlob(x));
      };
      case(#Set(x)) {
        #Array(Array.map<CandyShared,ValueShared>(x, func(x: CandyShared) : ValueShared { CandySharedToValue(x)}));
      };
      case(#ValueMap(x)) {
        #Array(Array.map<(CandyShared,CandyShared),ValueShared>(x, func(x: (CandyShared,CandyShared)) : ValueShared { #Array([CandySharedToValue(x.0), CandySharedToValue(x.1)])}));
      };
      //case(_){assert(false);/*unreachable*/#Nat(0);};
    };
  };

Usage from our test:

test("Update immutable and non-immutable NFT properties", func() {
  //Arrange: Set up the ICRC7 instance and required parameters
  let icrc7 = ICRC7.ICRC7(?icrc7_migration_state, testCanister, base_environment);
  let token_id = 12;  // Assuming a token ID for testing
  let initialMetadata = #Class([
    {immutable=false; name=ICRC7.token_property_owner_account; value = #Map([(ICRC7.token_property_owner_principal,#Blob(Principal.toBlob(testOwner)))]);},
    {name="test"; value=#Text("initialTestValue"); immutable = false},
    {name="test3"; value=#Text("immutableTestValue"); immutable = true}
  ]);  // Define the initial metadata for testing


  let targetMetadata = #Class([
    {immutable=false; name=ICRC7.token_property_owner_account; value = #Map([(ICRC7.token_property_owner_principal,#Blob(Principal.toBlob(testOwner)))]);},
    {name="test"; value=#Text("updatedTestValue"); immutable = false},
    {name="test3"; value=#Text("immutableTestValue"); immutable = true}
  ]);  // Define the expected metadata after the update

  let updateNonImmutable = {name="test"; mode=#Set(#Text("updatedTestValue"))};  // Define an update for the non-immutable property (should succeed)
  let updateImmutable = {name="test3"; mode=#Set(#Text("updatedImmutableTestValue"));};  // Define an update for the immutable property (should fail)
  
  let mintedNftMetadata = CandyTypesLib.unshare(initialMetadata);
  let nft = icrc7.set_nft([{token_id=token_id;metadata=mintedNftMetadata;}], ?base_environment);

  // Act and Assert: Attempt to update the immutable and non-immutable properties
  let #ok(resultNonImmutableUpdate) = icrc7.update_nft([{token_id=token_id;updates=[updateNonImmutable];}], ?base_environment) else return assert(false);

  D.print("resultNonImmutableUpdate" # debug_show(resultNonImmutableUpdate));

  assert(
    // Ensure the update for the non-immutable property succeeds and returns true
    resultNonImmutableUpdate[0] == true
  );

  let #ok(resultImmutableUpdate) = icrc7.update_nft([{token_id=token_id;updates=[updateImmutable];}], ?base_environment) else return assert(false);

  D.print("resultImmutableUpdate" # debug_show(resultImmutableUpdate));

  assert(
    // Ensure the update for the immutable property fails and returns false
    resultImmutableUpdate[0] == false
  );

  // Assert: Check if the updated metadata matches the expectation
  let ?retrievedMetadata = icrc7.get_token_info(token_id) else return assert(false);
  assert(
    // Ensure the updated metadata matches the target: the non-immutable change applied, the immutable property unchanged
    CandyTypesLib.eq(CandyTypesLib.unshare(targetMetadata), retrievedMetadata)
  );
});

The next meeting of the WG is taking place tomorrow, November 21, 2023, 17:00–18:00 UTC+1.

Current ICRC-7 draft

The goal is to address the comments on the current draft that have come in since the recent meeting. See the forum discussion above for the topics to cover.

Hope to see you all there to bring ICRC-7 over the finish line!


Hi

Thought I'd post my interface design proposal here in case I can't make it to the meeting. Anyway, I'm satisfied with the current interface standard except for the parts I redesigned below: naming follows ICRC-1, the batch transfer pattern follows ICRC-4, and scanning (from, take, next) follows CanScale's RBTree scan interface. I have also removed all collection-level method interfaces by allowing the caller to set token_ids to an empty array when they want to approve/revoke everything they own; for transfer operations, however, the token_ids array cannot be empty. Let me know what you think. (A rough sketch of the argument shape follows the list below.)

  1. Token getters

  2. Approval argument type, return type, and method interfaces

  3. Revoke argument type, return type, and method interfaces

  4. Transfer argument type, return type, and method interfaces
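As a purely hypothetical illustration of the empty-array convention described above (the type and field names below are invented for this post and are not taken from the draft; Account is the usual ICRC-1 account record):

    // Hypothetical sketch of the proposed batch approval argument.
    // token_ids = [] would mean "approve everything the caller owns" under this proposal.
    public type ApproveArg = {
      from_subaccount : ?Blob;
      spender : Account;
      token_ids : [Nat]; // empty = all owned tokens (approve/revoke only; not allowed for transfers)
      expires_at : ?Nat64;
      memo : ?Blob;
      created_at_time : ?Nat64;
    };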

I don’t think the added confusion resulting from having two different behaviors behind a single method is worth it compared to having two explicit methods with two clearly defined behaviors :confused:

Previously we had a combined method with a variant argument to choose between collection or token ids, but this only added complexity, so we opted to split it into two methods to simplify things and make the behavior more explicit and contained within each individual method.

Also, a collection approval is very different from e.g. approving all tokens at once, since a collection approval is not per token: it is also valid for tokens that the user might receive in the future.
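In storage terms the difference can be sketched like this (all names here are invented; only the indexing idea matters):

    // Hedged sketch: the two approval kinds live in different indexes.
    // A token approval is stored per token id, while a collection approval is
    // stored per owner only, so it automatically covers tokens acquired later.
    public type ApprovalInfo = { spender : Account; expires_at : ?Nat64 };
    // token_approvals      : Map<Nat /*token_id*/, [ApprovalInfo]>
    // collection_approvals : Map<Account /*owner*/, [ApprovalInfo]>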


Ah, I see. Then we will need the collection methods… for which I don't need to propose anything, since they're already good as they are.

I tried jumping on the meeting, but no one was on. Hopefully we can wrap up the standard soon. My one issue was with updating the response of a couple of the queries to allow a [Nat, opt Account] instead of just [Nat, Account], so that we don't have to trap the whole query for one missing token id. (I believe there is already a fix comment on the line.)

We’re all on :sweat_smile:
I’ll DM you a link.

Dear colleagues!

Here is a link to the shared Google drive where slides and other material for the Working Group is hosted: NFT working group - Google Drive

You can find last week's slide deck and the decisions there as well: ICRC-7 WG Meeting 2023-11-21 - Google Slides

I have addressed the comments from last week's WG meeting. The standard has been split into ICRC-7 and ICRC-30 (the next free number).

ICRC-7, ICRC-7 diff to last week’s meeting
ICRC-30

Furthermore, as ICRC-3 has received its final polishing touches, I took the liberty of already drafting the ICRC-3-compliant block schemas for both ICRC-7 and ICRC-30.

Feedback welcome!

Discussion point for 7mint: since we want transaction logs to be recomputable and verifiable, should mint have something in it pertaining to the hash of the metadata/NFT content? We kind of kicked mint and burn down the road, as there was a discussion about on-chain/off-chain assets and metadata that we agreed to resolve later. Since the IC can compute over and mutate content, things get a bit more complicated.

Looks like icrc7_max_revoke_approvals still needs to make the move to ICRC-30.

`icrc30:max_approvals_per_token_or_collection` of type `nat` (optional): 
The maximum number of active approvals this ledger implementation allows per token or 
per principal for the collection. When present, should be the same as the result of the..

@dieter.sommer @sea-snake @benji This is stressing me out during implementation. Everything goes by account, but this says principal… I'm debating keeping a separate index around that tracks approvals by principal. But say you have a wallet-service canister of some kind that holds tokens for lots of different people at different subaccounts: limiting by principal would limit this use case.

The counterargument is that an attacker could just fill up your approvals with unlimited subaccounts until all other approvals are cleaned out of memory.

Also… having the number be the same for the token_id: Approvals index and the owner_principal: Approvals index seems odd.

At the moment I’m going to do it by Account as opposed to principal unless you all have some pushback.


ICRC-7 has tid for the token id and ICRC-30 has token_id.

For ICRC-30… do we need separate ops for collection and token? Aren't they the same, except that token_id is optional for collection approvals?

For ICRC-30 there is a situation where the canister may revoke approvals if the max is reached. In this case, since the memo will be added by the ledger, should it be possible to put it at the top level? Tagging @mariop, as I understand they do something like this with the ledgers. I was tagging these with the following for tracking, but I'm putting it in tx and not at the top.

Vec.add(trx, ("memo", #Blob(Text.encodeUtf8("icrc30_system_clean"))));

It is possible to put the memo at the top level, similar to what we do with the fee. It's not mandatory to follow this best practice, but it's nice to have.
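Concretely, a ledger-set memo would then be emitted next to the other top-level block fields instead of inside tx; a sketch, reusing the Vec-based block building from the post above (block and expected_fee are placeholders):

    // Sketch: ledger-set memo at the block's top level, like the fee (not inside "tx").
    Vec.add(block, ("fee", #Nat(expected_fee)));
    Vec.add(block, ("memo", #Blob(Text.encodeUtf8("icrc30_system_clean"))));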

Hi, may I know why we are returning the token ids to the caller instead of returning the token metadata itself directly?
Is it because of the strengths/weaknesses of the IC (in terms of I/O, cycles, latency, etc.)?

Also, let's say I have this data structure and it's changing rapidly:

[
{ key: '',  val: [...] },
{ key: '',  val: [...] },
{ key: '',  val: [...] },
{ key: '',  val: [...] },
{ key: '',  val: [...] },
...
]

… and if I want to do rapid polling on the IC against this array/map to check for changes, would the practice used by ICRC-7 above be more suitable, where I return the array keys first and the caller then queries the val of each key in separate calls?

One more thing: the pagination (token_ids: [Nat] or prev+take) is another form of batching, right? Why does ICRC-7 use batching instead of singular calls for each token_id? Is it to reduce the latency/cost of the calls?

tokens_of_with_metadata might be a nice endpoint, and maybe we should consider it for a future ICRC, but the thought was that you could call (1) tokens_of and then (2) tokens([owned_NFTs]) and get the info you need. It is more composable, and queries are pretty quick, so hopefully it doesn't slow things down too much.
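A hedged sketch of that two-call composition from another canister; the method names follow the draft, but the exact signatures and the simplified Value type here are my assumptions:

    // Sketch only: compose tokens_of + token metadata lookups client-side.
    type Account = { owner : Principal; subaccount : ?Blob };
    type Value = { #Nat : Nat; #Int : Int; #Text : Text; #Blob : Blob }; // simplified

    func owned_metadata(
      ledger : actor {
        icrc7_tokens_of : (Account, ?Nat, ?Nat) -> async [Nat];
        icrc7_token_metadata : ([Nat]) -> async [?[(Text, Value)]];
      },
      acct : Account
    ) : async [?[(Text, Value)]] {
      let owned = await ledger.icrc7_tokens_of(acct, null, null); // 1: the caller's token ids
      return await ledger.icrc7_token_metadata(owned);            // 2: metadata for those ids
    };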

where I return the array keys first and the caller then queries the val of each key in separate calls

A tokens_filter endpoint where you can provide a list of the keys you want was considered. I don't remember why it was dropped, but probably for simplicity. Again, this would be a great add-on for a future ICRC. You figure out a lot when you try to actually implement one of these, so I'd imagine we'll learn quite a bit as real-world use cases are deployed.

One more thing: the pagination (token_ids: [Nat] or prev+take) is another form of batching, right? Why does ICRC-7 use batching instead of singular calls for each token_id? Is it to reduce the latency/cost of the calls?

The IC can only return about 2 MB per call. The places where you see prev, take are calls where we could envision a collection growing to a size where the results would need to be paginated.
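For example, a client could page through a large collection like this (a sketch, assuming an icrc7_tokens query of shape (prev : opt nat, take : opt nat) -> vec nat as in the draft, and a ledger actor reference as in the sketch above):

    // Sketch: page through all token ids with the prev/take pattern (inside an async context).
    var prev : ?Nat = null;
    let take : Nat = 1_000;
    label paging loop {
      let page = await ledger.icrc7_tokens(prev, ?take);
      // ...process this page of token ids...
      if (page.size() < take) break paging;  // a short page means we've reached the end
      prev := ?page[page.size() - 1];        // resume after the last id seen
    };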


Excellent point; the actual NFT content is in no way reflected in the block as currently defined. Would a representation-independent hash of the Value element containing the metadata be enough here? I would assume that any metadata change in future standards would then also include the metadata hash in the created block.

Yes, fixing it in the upcoming iteration!

Right, I had chosen tid to be shorter in the first standard, but then thought that token_id is more in line with best practices from ICRC-1 etc., yet hadn't changed tid. Fixing it.

This has been discussed and the group has decided to split it into two methods as they are less convoluted in their spec.