That’s great news.
Do you know if the WG has plans to discuss marketplace standards in the near future?
It is on the agenda. Actually, it is also a bit on my todo list. What are your priorities?
Personally, I would consider it to be a high priority if we want to grow the ecosystem.
Right now, as you know, most NFT projects are bound to their FE marketplace provider (Toniq, Yumi, etc.). This is because the marketplace address is baked into the canister code at deployment. Independent NFT marketplaces like DGDG are forced to pay a special fee to the controlling marketplace in order to sell those NFTs on their site.
After working on several EXT-based NFT collections I realized that this was not necessary. A canister interface could just as easily allow an FE to set their fee and payment address without baking it in. That’s why for the Gen2 PokedBots I added a market_lock() method that allows any registered marketplace address to be passed as an argument to the canister. Now, DGDG can sell Gen2 bots and not have their profits taxed by Toniq. Being able to set their own marketplace fee also allows for competition between FE providers.
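To make the idea concrete, here is a minimal Motoko sketch of that pattern; the record fields, the registration check, and the method body are illustrative guesses, not the actual PokedBots interface:

import Array "mo:base/Array";

actor {
  // Hypothetical listing record: the marketplace principal and its fee are
  // supplied per listing instead of being baked in at deployment.
  public type Listing = {
    tokenIndex : Nat32;
    price : Nat64;            // asking price in e8s
    marketplace : Principal;  // FE provider that receives the fee
    feeBasisPoints : Nat;     // marketplace fee chosen by that FE
  };

  stable var listings : [Listing] = [];

  // Any registered marketplace can lock a token for sale under its own terms.
  public shared ({ caller }) func market_lock(listing : Listing) : async () {
    assert (caller == listing.marketplace);
    listings := Array.append<Listing>(listings, [listing]);
  };
}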
Those are my reasons for asking. I hope this doesn’t come across as demanding some action from the WG. I appreciate the time you all put into this effort and respect your decisions on what takes priority.
It doesn’t sound demanding at all.
Those concerns were what was behind the origyn_nft standard to begin with. We put the marketplace inside the NFT collection with standardized market functions to create a balance between marketplaces, which tend to accrue too much power in the classic paradigm, and creators, who regularly get routed around through crypto shenanigans. Askers can supply a broker code and bidders can provide a broker code, and the broker fee (set by the creator at mint) is split between those two principals. This creates a bit of game theory where a creator has to balance the fee they charge (and how that will affect holders) against what fee will be attractive to marketplaces (otherwise, why would they promote and feature your collection?).
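As a purely illustrative Motoko sketch of that split (the names and the 50/50 rule are my own, not the actual origyn_nft implementation):

func splitBrokerFee(
  salePrice : Nat,
  brokerFeeBps : Nat,       // broker fee set by the creator at mint, in basis points
  askBroker : ?Principal,   // broker code supplied with the ask
  bidBroker : ?Principal    // broker code supplied with the bid
) : [(Principal, Nat)] {
  let fee = salePrice * brokerFeeBps / 10_000;
  switch (askBroker, bidBroker) {
    case (?a, ?b) { [(a, fee / 2), (b, fee - fee / 2)] };  // split between both brokers
    case (?a, null) { [(a, fee)] };
    case (null, ?b) { [(b, fee)] };
    case (null, null) { [] };                              // no broker, no fee paid out
  }
};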
A few months ago I took a stab at passing the origyn_nft standard through the ICRC7/3 lens and came up with the following: ICRC/ICRCs/ICRC-8/icrc8.mo at patch-1 · skilesare/ICRC · GitHub
It is on my todo list to revisit because both 7 and 3 have changed a lot and I want 8 to be compatible and have good symmetry with the work that has been done. In general, my order of operations on my todo list is:
Fortunately, most of that code is already written (except for certification v2) and once the interface is defined it is just a matter of rearranging the entry and exit points.
And if you just need some form of all of that today, the origyn_nft standard is sitting there and should, in time, support all of those standards as they emerge. The hurdle is getting Toniq, Yumi/Yuku (they already support the standard with the gold and opie collections), and DGDG to implement the standard.
In last week’s NFT standards WG meeting the group discussed the batch interface again, because this is the current work item of the Ledger and Tokenization WG (ICRC-4 Batch transfers for fungible tokens). The outcome of the discussions was that we should align the NFT batch API with the upcoming ICRC-4 batch API. The current discussions have gone in the following direction:
The to, memo, and timestamp fields need not be repeated for each element in the bulk operation; that is the only obvious advantage of this API. Given that the working group intends to switch to a more generic batch interface that is not constrained to the same token recipient as the current one, the groups (both the ICRC-1 WG and the ICRC-7 WG) must decide which approach to use; both standards should be aligned with respect to this. The difference is mainly in how error responses are handled: either (Option 1) through a top-level error response in case of an error related to the whole batch, as in the current ICRC-4 and ICRC-7 proposals, or (Option 2) using a “flat” response structure comprising only responses for the contained transactions, but no top-level error response in case of a batch error.
The below proposals have been aligned with the current ICRC-4 proposal as much as possible, including the naming of the records.
Option 1
The main change here w.r.t. the current API is to move the from subaccount, to, memo, and created_at_time fields into the individual transfer arg instead of having them at the batch level. This also implies that some errors move from the top level to the item level.
type TransferArg = record {
subaccount: opt blob; // the subaccount of the caller (used to identify the spender)
to : Account;
token_id: nat;
memo : opt blob;
created_at_time : opt nat64;
};
type TransferError = variant {
TooOld;
CreatedInFuture : record { ledger_time: nat64 };
InvalidRecipient;
NonExistingTokenId;
Unauthorized;
Duplicate : record { duplicate_of : nat };
GenericError : record { error_code : nat; message : text };
};
type TransferBatchError = variant {
TooManyRequests : record { limit : nat };
GenericError : record { error_code : nat; message : text };
};
type TransferBatchResult = variant {
Ok : vec record {
transfer : TransferArg; // do we need this? is token_id sufficient? (it would be slightly constraining that one token cannot be in two transfers in one batch, e.g., move to a specific merchant subaccount first, then transfer to recipient)
transfer_result : variant {
Ok : nat; // Transaction index for successful transfer
Err : TransferError
};
};
Err : TransferBatchError
};
icrc7_transfer : (vec TransferArg) -> TransferBatchResult;
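For illustration, this is roughly how a client written in Motoko would consume the Option 1 shape (the nft actor reference and the surrounding update context are assumed); batch-level errors surface at the top level, per-item errors inside the Ok vector:

switch (await nft.icrc7_transfer(args)) {
  case (#Err(#TooManyRequests { limit })) {
    // the whole batch was rejected; resubmit in chunks of at most `limit`
  };
  case (#Err(#GenericError e)) {
    // the whole batch failed for another reason
  };
  case (#Ok(items)) {
    for (item in items.vals()) {
      switch (item.transfer_result) {
        case (#Ok txIndex) { /* this transfer succeeded */ };
        case (#Err err) { /* only this transfer failed */ };
      };
    };
  };
};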
Option 2
This option has a flat response only containing a vector of variants, each variant being a transaction index in the success case and a per-item error otherwise. The per-item error can be a batch error in case processing was interrupted while processing this item, or a regular per-item error, differentiated by its type. (This has less structure than Option 1.) The i-th item in the response corresponds to the i-th item in the request, so ordering is crucial. This also means that the request data does not need to be repeated in the response, which saves quite some space compared to Option 1. The response must contain a contiguous list of items or nulls up to a point e where processing was stopped and may leave out responses for the suffix of the request items following item e.
A batch error at item e results in an according error in this place and no responses for the items after this item, i.e., a prefix of responses instead of all responses. This may greatly simplify implementation complexity, as it relaxes the strong assumption of every request item getting a response item. Such a response contains the element at index e with the batch error, no subsequent elements with indices larger than e, and possibly null values for elements up to e for which processing has not been attempted. All other elements 0 <= j < e have a success or error response.
type TransferArg = record {
subaccount: opt blob; // the subaccount of the caller (used to identify the spender)
to : Account;
token_id: nat;
memo : opt blob;
created_at_time : opt nat64;
};
// both batch-level and item-level errors are contained here
type TransferError = variant {
// batch errors
TooManyRequests : record { limit : nat };
GenericBatchError : record { error_code : nat; message : text };
// token errors
TooOld;
CreatedInFuture : record { ledger_time: nat64 };
InvalidRecipient;
NonExistingTokenId;
Unauthorized;
Duplicate : record { duplicate_of : nat };
GenericError : record { error_code : nat; message : text };
};
type TransferBatchResult = vec opt record {
transfer_result : variant {
Ok : nat; // Transaction index for successful transfer
Err : TransferError
};
};
icrc7_transfer : (vec TransferArg) -> TransferBatchResult;
Correction: added opt for the record in the response type TransferBatchResult, that’s how it was meant. (Austin’s Option 2.1 below refers to this option without this opt.)
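To make the prefix semantics concrete, here is a purely hypothetical response in Motoko notation for a batch of five transfers: items 0 and 1 were processed, item 2 was not attempted (null), processing stopped at item 3 with a batch error, and item 4 gets no element at all:

type TransferError = {
  #Unauthorized;
  #GenericBatchError : { error_code : Nat; message : Text };
  // remaining cases from the draft omitted for brevity
};
type ItemResult = { transfer_result : { #Ok : Nat; #Err : TransferError } };

let response : [?ItemResult] = [
  ?{ transfer_result = #Ok(42) },              // item 0: succeeded, transaction index 42
  ?{ transfer_result = #Err(#Unauthorized) },  // item 1: regular per-item error
  null,                                        // item 2: not attempted
  ?{ transfer_result = #Err(#GenericBatchError({ error_code = 1; message = "interrupted" })) }  // item 3: batch error, processing stopped here
];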
Variation of Option 2
A different variant of Option 2 would also contain the transfer argument in the Ok part and would not rely on the ordering of the response. As for Option 1, not all parameters may be required. It is not obvious what the advantage of this would be; it would require duplicating the request information and thus consume some space.
type TransferBatchResult = vec record {
transfer : TransferArg; // do we need this? is token_id sufficient? (it would be slightly constraining that one token cannot be in two transfers in one batch, e.g., move to a specific merchant subaccount first, then transfer to recipient)
transfer_result : variant {
Ok : nat; // Transaction index for successful transfer
Err : TransferError
};
};
A move forward with adopting full batch semantics requires us to make a decision on how to define the API, i.e., which style to use. Options 1 and 2 are both viable for achieving a more general batch API, with different API styles and advantages.
Note that all bulk / batch update calls will be changed accordingly in case we decide to move forward.
Moving to full batch semantics for ICRC-7 / -37 also requires discussing what this would mean for keeping transfer and transfer_from split, as this constrains a batch to contain either only transfer operations for tokens one owns or only transfer_from operations. This is not a large constraint, but it needs to be discussed as part of this move. It seems that leaving the two transfer methods separate is fine and does not cause major issues when going for full batch APIs.
This discussion is equally relevant to the ICRC-4 Batch API for fungible tokens as it is for ICRC-7 and ICRC-37 (formerly ICRC-30).
The above post lists the reasonable API options for batch APIs so we can move forward with full batch semantics, compared to the more constrained bulk semantics we have now.
My personal preference is one of the variations of Option 2 for the following reasons.
I also think that Option 2 works best.
Regarding the need to include the request with the result in the response: it constrains the implementation space such that you cannot optimize the processing of transactions in scenarios where parallelization might improve performance.
Silly Example:
An ICRC4 token employs a system where when a user burns a number of tokens, they get issued an NFT out of one of three different NFT Collections in Different ICRC7 Canisters.
If amount % 3 == 0 they get a Warrior NFT
If amount % 3 == 1 they get a Wizard NFT
If amount % 3 == 2 they get a Thief NFT.
If a user submits 100 burn transactions of different values, it would make sense to collate the burns such that one icrc7_transfer is called to each canister, for a total of three calls (using our batch by default).
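A rough Motoko sketch of that collation, which would live inside an update method of the ICRC-4 token canister (the canister references, the burn record, and the mintArgFor helper are all hypothetical; Buffer is mo:base/Buffer):

let warriorArgs = Buffer.Buffer<TransferArg>(0);
let wizardArgs = Buffer.Buffer<TransferArg>(0);
let thiefArgs = Buffer.Buffer<TransferArg>(0);

for (burn in burns.vals()) {
  let arg = mintArgFor(burn);  // hypothetical helper building the TransferArg
  switch (burn.amount % 3) {
    case 0 { warriorArgs.add(arg) };
    case 1 { wizardArgs.add(arg) };
    case _ { thiefArgs.add(arg) };
  };
};

// Three calls total; issuing them before awaiting lets them run concurrently.
let w = warriorNft.icrc7_transfer(Buffer.toArray(warriorArgs));
let z = wizardNft.icrc7_transfer(Buffer.toArray(wizardArgs));
let t = thiefNft.icrc7_transfer(Buffer.toArray(thiefArgs));
let results = (await w, await z, await t);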
If we go with option 2.1 we would not be able to do this. We would need to process them in order in case we hit a batch error after one of them.
The alternatives would be to either do 2.2 or to allow a null at each position which means that “this was not handled”. (I think we touched on this on the call).
So this would give us an Option 2.3 that has the following (note the opt after vec):
type TransferBatchResult = vec opt record {
transfer_result : variant {
Ok : nat; // Transaction index for successful transfer
Err : TransferError
};
};
This is a contrived example, but I thought it was illustrative enough to provide for discussion. We obviously can’t provide for every possible implementation in the universe, but we should consider whether this one is close enough to plausible that we want to support it. It would be fine to make it a limitation, but we should note the limitation as such so that developers don’t get halfway down an implementation like this only to discover the problem for themselves.
The answer is probably “we are ok excluding these use cases for simplicity.”
@pramitgaha has a valid question about implementing the standard in Rust. Does anyone have any ideas?
I responded. We need more input from him in order to be able to discuss in the WG. See my comment on this in the other topic.
The Option 2.3 you give, quoted below, is quite neat.
My understanding is that this option, which allows for null responses with the semantics that the request at the corresponding index has not been handled, would allow for handling your contrived example with concurrent processing of multiple or all request items in the batch. This is nice! Yes, we briefly touched on this option in the call as well.
(This is the option I meant to express with my Option 2.1 above, but I missed the crucial opt keyword originally.)
This option does feel like it’s clean and simple and saves space in addition.
Hello, I’ve replied in the other thread, can you check?
ICRC-30 is now ICRC-37. The reason is that it did not stick to a naming convention initially.
In the ICRC-4 discussions in the Ledger and Tokenization Working Group we have made decisions on the batch API and think that it is crucial to align ICRC-7 and ICRC-37 with the batch API standards. I changed ICRC-7 and ICRC-37 accordingly. See the links for the proposals. We would like to discuss those in the upcoming WG meeting on Tuesday, Feb 6, 2024 and then move forward to voting.
It is clear to us that the current API has been implemented already and that it will take a bit of an effort to bring it to the new one. We think most changes should be rather on the surface and not too invasive.
The proposal is a dramatically simplified API that is easier to implement and more expressive as it is a general batch API rather than a bulk API as before. The (rather small) tradeoff is that operations on a list of token ids that used to have the same recipient and other parameters before now need to repeat those parameters. We think it is worth the additional bandwidth used.
Links
@skilesare, @sea-snake, @kayicp, @benji, @cryptoschindler
@ all
Please have a look and let us know what you think!
Looks very good, the batch semantics have been explained really well and in detail.
I agree with @sea-snake, very well explained batch semantics.
I left some comments in the respective PRs for ICRC7 and ICRC37
Thank you for the valuable comments, they have been addressed!
For the upcoming meeting tomorrow, Jan 30, 2024, 17:00 Swiss time, we would like to propose the following agenda:
Links
The Working Group has adopted the most recent proposals to ICRC-7 and ICRC-37 and considers both drafts to be ready for voting by the WG.
Please vote on the items through the following GitHub issues:
Please vote, as usual, within a week’s timeframe.
I had a thought on 37 and I think I’ve talked myself out of it, but I wanted to voice it just in case. As I implement ICRC2 elsewhere, I think there was a small mistake in that the icrc2_allowance call should also have returned the current balance of the account. Currently, I have to make two calls to confirm both that I’ve been approved AND that the user actually has that many tokens. (Two calls instead of one…not a huge mistake…but annoying in implementation.)
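For reference, the two calls in question look roughly like this in Motoko (the icrc2_allowance and icrc1_balance_of signatures are from the ICRC-1/-2 standards; the ledger, user, and self bindings are assumed):

let allowance = await ledger.icrc2_allowance({
  account = { owner = user; subaccount = null };
  spender = { owner = self; subaccount = null };
});
let balance = await ledger.icrc1_balance_of({ owner = user; subaccount = null });
// Only the smaller of the two can actually be pulled with icrc2_transfer_from.
let spendable = if (balance < allowance.allowance) balance else allowance.allowance;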
Now I think we are OK on 37 here because I think we said that you have to own the token to approve it? If not there might be some benefit to looking at if we should add actual balance info to the response for checking allowances. I’m only talking about token-level approvals here.
The collection level is different and maybe we’d want to return tokens_of in that response to save some time? But then we get into pagination of pagination. Probably simpler here to require two calls.
I’m going to vote for the current proposal assuming my thinking above is correct. Someone slap me if we should reconsider.