If my max update batch size is 10, but my input batch size is 100, should my response batch size…
a) be the full 100 (with the last 90 filled with nulls, meaning “unprocessed”), or…
b) be truncated to the first 10 only, or…
c) be truncated to the first 11, with the 11th element being #GenericBatchError(“max batch size is 10”)?
Also, what about query batch methods? What should their response batch size be if the max query batch size is violated?
The first general part of the spec tries to explain this concept.
Update:
Either the first 10, or the first 10 followed by 90 nulls (both are considered the same; truncating trailing nulls is treated as an optimization).
Keep in mind, it’s also valid to pick any 10 items out of the 100 inputs and return nulls for the other 90. In most cases this won’t really make sense, but it could occur with ledgers that rely on async calls to other canisters.
Alternatively, you can return the generic batch error as the first and only element; this mostly only makes sense if your ledger is atomic (there’s an optional metadata field to indicate this).
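To make the valid update response shapes concrete, here is a minimal Rust sketch. The `MAX_UPDATE_BATCH` constant, the `process` helper, and the exact error type are hypothetical placeholders (not from the spec), and Candid’s `opt` is modeled with `Option`:

```rust
const MAX_UPDATE_BATCH: usize = 10; // assumed ledger limit

#[derive(Debug, Clone)]
enum BatchError {
    GenericBatchError { message: String },
}

// Hypothetical per-item result, e.g. Ok(block_index).
type ItemResult = Result<u64, BatchError>;

// Shape (b): process the first MAX_UPDATE_BATCH inputs and truncate
// the response; trailing nulls are simply omitted as an optimization.
fn handle_update_batch_truncated(inputs: &[u64]) -> Vec<Option<ItemResult>> {
    inputs
        .iter()
        .take(MAX_UPDATE_BATCH)
        .map(|&amount| Some(Ok(process(amount))))
        .collect()
}

// Shape (a): same semantics, but pad with `None` ("unprocessed")
// up to the full input length. Both shapes are considered equivalent.
fn handle_update_batch_padded(inputs: &[u64]) -> Vec<Option<ItemResult>> {
    let mut out = handle_update_batch_truncated(inputs);
    out.resize(inputs.len(), None);
    out
}

// Atomic variant: reject the oversized batch with a single generic
// error as the first and only element of the response.
fn handle_update_batch_atomic(inputs: &[u64]) -> Vec<Option<ItemResult>> {
    if inputs.len() > MAX_UPDATE_BATCH {
        return vec![Some(Err(BatchError::GenericBatchError {
            message: format!("max batch size is {}", MAX_UPDATE_BATCH),
        }))];
    }
    inputs.iter().map(|&amount| Some(Ok(process(amount)))).collect()
}

// Placeholder: apply one item and return its block index.
fn process(_amount: u64) -> u64 {
    0
}
```

Under this reading, a client treats a `None`/null slot as “unprocessed”, so the truncated and padded shapes carry exactly the same information.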
Query:
Since queries are read-only, the inputs should be processed and returned in sequential order, which means the first 10 only. Null does NOT mean “unprocessed” here, so do not add trailing nulls: a client would incorrectly interpret those null values as actual response values of the given method.
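For contrast, here is the same kind of sketch for a query batch, again with a hypothetical `MAX_QUERY_BATCH` constant and `lookup_balance` helper. Because `None` can be a legitimate answer for a query (e.g. “no such record”), padding the tail with nulls would be ambiguous, so the response is simply cut off at the limit:

```rust
const MAX_QUERY_BATCH: usize = 10; // assumed ledger limit

// Queries are read-only, so the first MAX_QUERY_BATCH inputs are
// answered strictly in order and the response is truncated there.
// Here `None` is a real answer ("no such record"), NOT a marker for
// "unprocessed"; this is exactly why trailing nulls would mislead
// clients.
fn handle_query_batch(inputs: &[u64]) -> Vec<Option<u64>> {
    inputs
        .iter()
        .take(MAX_QUERY_BATCH)
        .map(|&account| lookup_balance(account))
        .collect()
}

// Hypothetical read-only lookup; `None` means the record doesn't exist.
fn lookup_balance(account: u64) -> Option<u64> {
    if account % 2 == 0 { Some(100) } else { None } // placeholder data
}
```

A client can then detect truncation purely from the response length (fewer answers than inputs) rather than from null markers.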