Call for Participation: NFT Token Standard Working Group [Status Updated]

Full Batch API semantics

In last week’s NFT standards WG meeting, the group discussed the batch interface again, because this is the current work item of the Ledger and Tokenization WG (ICRC-4: batch transfers for fungible tokens). The outcome of the discussion was that we should align the NFT batch API with the upcoming ICRC-4 batch API. The discussion has gone in the following direction:

  • The group thinks that it is beneficial to generalize the currently constrained ICRC-7 (and ICRC-37, formerly ICRC-30) batch APIs to be full batch. The current API is rather a “bulk” API in the sense that the same transfer (same from, to, memo, etc.) is performed on multiple token ids; the token id is the only thing that may change. This is nice in itself and has been seen as one of the big use cases: moving multiple NFTs together in a batch, or bulk, operation. But the constraint seems unnecessary (a sketch contrasting the two shapes follows this list).
    • The current constrained bulk-style interface of ICRC-7 / -37 would not be compatible with ICRC-4. This would be a big drawback for a series of token standards and would hamper adoption.
    • The interface is harder to implement, e.g., tracking past transaction batches for deduplication requires additional code beyond that needed for handling individual transactions.
    • The interface is less powerful and does not allow important use cases such as batch transfers by the creator of a collection to many different customers, which is itself a very important use case for an NFT standard.
    • A clear advantage of the current API is that, for the bulk transfers it supports, it has a reduced message size, as the to, memo, and timestamp fields need not be repeated for each element of the bulk operation. That is the only obvious advantage of this API.
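
For illustration, here is a rough Candid sketch of the two argument shapes. The type and field names are illustrative only and not taken verbatim from the current drafts; Account is assumed to be the usual ICRC-1 account record of owner and subaccount.

// Bulk style (roughly the currently constrained shape): one set of
// to / memo / timestamp fields applied to a whole vector of token ids.
type BulkTransferArgs = record {
    subaccount : opt blob;
    to : Account;
    token_ids : vec nat; // the only per-item information
    memo : opt blob;
    created_at_time : opt nat64;
};

// Full batch style (the generalization discussed here): each element of the
// batch carries its own recipient, memo, and timestamp, as in the TransferArg
// records of the proposals below.
type FullBatchTransferArgs = vec record {
    subaccount : opt blob;
    to : Account;
    token_id : nat;
    memo : opt blob;
    created_at_time : opt nat64;
};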

Given that the working group intends to switch to a more generic batch interface that is not constrained to the same token recipient as the current one, the groups (both ICRC-1 WG and ICRC-7 WG) must decide which approach to use. Both standards should be aligned with respect to this. The difference is mainly in how error responses are handled, either (Option 1) through a top-level error response in case of an error related to the whole batch as in the current ICRC-4 and ICRC-7 proposals, or (Option 2) using a “flat” response structure comprising only responses for the contained transactions, but no top-level error response in case of a batch error.

The proposals below have been aligned with the current ICRC-4 proposal as much as possible, including the naming of the records.

Option 1: Top-level error response

The main change here w.r.t. the current API is to move the from subaccount, to, memo, and created_at_time fields into the individual transfer argument instead of having them at the batch level. This also implies that some errors move from the top level to the item level.

  • The hierarchical error modeling has the advantage of more structure, i.e., it is very explicit when a batch-level error happens.
  • However, it also means that in case of a top-level error, none of the transfers may be executed.
  • Processing a batch requires more thorough checks than Option 2, as certain things need to be assured before processing starts, e.g., that there is enough space in the response message for all responses (a response must be sent for every element in the request, so processing must not start unless enough response space can be guaranteed).
  • Even if the implementation performs thorough checks, things can still go wrong while processing the batch elements, and it may then be hard to return a response for all elements. Returning a response for each request element is a strong guarantee.
  • This option needs to repeat the request data in the response to be able to associate responses with requests, which wastes space. (An example response illustrating this is shown after the Candid below.)
type TransferArg = record {
    subaccount : opt blob; // the subaccount of the caller (used to identify the spender)
    to : Account;
    token_id : nat;
    memo : opt blob;
    created_at_time : opt nat64;
};

type TransferError = variant {
    TooOld;
    CreatedInFuture : record { ledger_time: nat64 };
    InvalidRecipient;
    NonExistingTokenId;
    Unauthorized;
    Duplicate : record { duplicate_of : nat };
    GenericError : record { error_code : nat; message : text };
};

type TransferBatchError = variant {
    TooManyRequests : record { limit : nat };
    GenericError : record { error_code : nat; message : text };
};

type TransferBatchResult = variant {
    Ok : vec record {
        transfer : TransferArg; // do we need this? is token_id sufficient? (it would be slightly constraining that one token cannot appear in two transfers in one batch, e.g., move to a specific merchant subaccount first, then transfer to the recipient)
        transfer_result : variant {
            Ok : nat; // Transaction index for successful transfer
            Err : TransferError
        };
    };
    Err : TransferBatchError
};

icrc7_transfer : (vec TransferArg) -> (TransferBatchResult);
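
For concreteness, here is what a hypothetical Option 1 response could look like for a two-element batch in which the first transfer succeeds and the second fails, written as Candid textual values. The principal, token ids, transaction index, and error choices are placeholders.

variant {
    Ok = vec {
        record {
            transfer = record {
                subaccount = null;
                to = record { owner = principal "aaaaa-aa"; subaccount = null };
                token_id = 42 : nat;
                memo = null;
                created_at_time = null;
            };
            transfer_result = variant { Ok = 1001 : nat }; // transaction index
        };
        record {
            transfer = record {
                subaccount = null;
                to = record { owner = principal "aaaaa-aa"; subaccount = null };
                token_id = 43 : nat;
                memo = null;
                created_at_time = null;
            };
            transfer_result = variant { Err = variant { Unauthorized } };
        };
    };
}

// A batch-level failure instead returns a single top-level error and implies
// that none of the transfers were executed:
variant { Err = variant { TooManyRequests = record { limit = 100 : nat } } }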

Option 2: Flat response structure without top-level error response

This option has a flat response containing only a vector of variants, each variant being a transaction index in the success case and a per-item error otherwise. The per-item error can be either a batch error, in case processing was interrupted while handling this item, or a regular per-item error, differentiated by its type. (This has less structure than Option 1.) The i-th element of the response corresponds to the i-th element of the request, so ordering is crucial. This also means that the request data does not need to be repeated in the response, which saves quite some space compared to Option 1. The response must contain a contiguous list of items or nulls up to the point e at which processing stopped and may leave out responses for the suffix of request items following item e. (An example response is given after the Candid below.)

  • Less nice than Option 1 in terms of structure w.r.t. batch errors. A batch-level error that occurs while processing item e results in a corresponding error at that position and no responses for the items after it, i.e., a prefix of responses instead of all responses. This may greatly reduce implementation complexity, as it relaxes the strong requirement that every request item gets a response item.
  • Although the API hints that processing must be done in the sequence of the elements, the implementation is free to parallelize. It must only be ensured that, if a batch-level error happens, there is a final element with index e containing the batch error, no subsequent elements with indices larger than e, and possibly null values for elements up to e for which processing has not been attempted. All other elements 0 <= j < e have a success or error response.
  • Avoids exhaustive checks for certain issues, e.g., response size, before processing starts. Thus, it is (much) simpler to implement, as processing can stop at any time with an error and return a prefix of all responses to the caller instead of a response for every request element.
  • Deviates from the principle we set up that every request element requires a response element. But this also simplifies implementation.
  • Results in the “simplest” API of all because of less nesting.
type TransferArg = record {
    subaccount : opt blob; // the subaccount of the caller (used to identify the spender)
    to : Account;
    token_id : nat;
    memo : opt blob;
    created_at_time : opt nat64;
};

// both batch-level and item-level errors are contained here
type TransferError = variant {
    // batch errors
    TooManyRequests : record { limit : nat };
    GenericBatchError : record { error_code : nat; message : text };
    // token errors
    TooOld;
    CreatedInFuture : record { ledger_time: nat64 };
    InvalidRecipient;
    NonExistingTokenId;
    Unauthorized;
    Duplicate : record { duplicate_of : nat };
    GenericError : record { error_code : nat; message : text };
};

type TransferBatchResult = vec opt record {
    transfer_result : variant {
        Ok : nat; // Transaction index for successful transfer
        Err : TransferError
    };
};

icrc7_transfer : (vec TransferArg) -> (TransferBatchResult);

Correction: added opt for the record in the response type TransferBatchResult, that’s how it was meant (Austin’s Option 2.1 below refers to this option without this opt)
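
To make the prefix semantics concrete, here is a hypothetical Option 2 response (as Candid textual values) for a five-element request where a batch-level error stopped processing at item 3. The transaction index, error choices, and error message are placeholders.

vec {
    opt record { transfer_result = variant { Ok = 1001 : nat } };                // item 0: success, transaction index
    opt record { transfer_result = variant { Err = variant { Unauthorized } } }; // item 1: regular per-item error
    null;                                                                        // item 2: not attempted (e.g., due to parallel processing)
    opt record { transfer_result = variant { Err = variant {                     // item 3: batch-level error; processing stopped here
        GenericBatchError = record { error_code = 1 : nat; message = "batch aborted" }
    } } };
    // item 4: no response element at all; the response vector is shorter than the request
}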

Variation of Option 2

A different variant of Option 2 would also contain the transfer argument in each response element and not rely on the ordering of the response. As for Option 1, not all parameters may be required. It is not obvious what the advantage of this would be: it requires duplicating the request information and thus consumes some space.

type TransferBatchResult = vec record {
    transfer : TransferArg; // do we need this? is token_id sufficient? (it would be slightly constraining that one token cannot appear in two transfers in one batch, e.g., move to a specific merchant subaccount first, then transfer to the recipient)
    transfer_result : variant {
        Ok : nat; // Transaction index for successful transfer
        Err : TransferError
    };
};

Discussion

Moving forward with full batch semantics requires us to decide how to define the API, i.e., which style to use. Options 1 and 2 are both viable ways of achieving a more general batch API, with different API styles and advantages.

Note that all bulk / batch update calls will be changed accordingly in case we decide to move forward.

Moving to full batch semantics for ICRC-7 / -37 also requires discussing what this means for keeping transfer and transfer_from as separate methods, as that constrains a single batch to contain either only transfers of tokens the caller owns or only transfer_from operations. This is not a large constraint, but it needs to be discussed as part of this move. It seems that leaving the two transfer methods separate is fine and does not cause major issues when going for full batch APIs (a sketch of the two separate methods is given below).
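
For reference, here is a rough sketch of the two methods kept separate under full batch semantics. The TransferFromArg shape and the reuse of TransferBatchResult are assumptions made here for illustration, not part of the current drafts.

type TransferFromArg = record {
    spender_subaccount : opt blob; // subaccount of the caller acting as spender
    from : Account;
    to : Account;
    token_id : nat;
    memo : opt blob;
    created_at_time : opt nat64;
};

icrc7_transfer : (vec TransferArg) -> (TransferBatchResult);
icrc37_transfer_from : (vec TransferFromArg) -> (TransferBatchResult);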

This discussion is equally relevant to the ICRC-4 Batch API for fungible tokens as it is for ICRC-7 and ICRC-37 (formerly ICRC-30).