The issue is that in Candid we pass a single principal as the parameter, so the method does not return a list. In the frontend we need the account data of many users so we can filter for users with hf < 1, and fetching them one by one is taking a long time even though these are query calls.
The current limits on query execution are as follows:
There’s a limit of 4 queries total per node
There’s an additional limit of 2 queries per canister per node.
With the above in mind, if you are hitting one single canister, then you cannot hope for much more improvement by increasing the number of batches beyond 2. Going from 1 to 2 queries, you should be able to execute truly in parallel; if you use 6-7 batches as you say, then I expect a backlog of queries builds up for this canister. If you keep doing this continuously, it could lead to bigger delays for some queries while they wait for processing in the queue.
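For what it's worth, the per-canister limit can be respected client-side with a small concurrency cap instead of firing 6-7 batches at once. A minimal TypeScript sketch (`QueryLimiter` and the commented-out actor call are illustrative names, not a real agent API):

```typescript
// Cap in-flight queries to one canister at 2, matching the
// per-canister-per-node limit mentioned above.
type Task<T> = () => Promise<T>;

class QueryLimiter {
  private active = 0;
  private waiting: (() => void)[] = [];
  constructor(private readonly maxConcurrent: number) {}

  async run<T>(task: Task<T>): Promise<T> {
    // Wait for a free slot; re-check after every wake-up.
    while (this.active >= this.maxConcurrent) {
      await new Promise<void>((resolve) => this.waiting.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.waiting.shift()?.(); // wake the next waiter, if any
    }
  }
}

// Usage: never more than 2 queries to the same canister in flight.
const limiter = new QueryLimiter(2);
// batches.map((b) => limiter.run(() => actor.get_batch(b)));
```

This keeps the queue on the client, where you control it, rather than in the canister's query scheduler.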
That said, I would also not expect delays in the order of minutes. If you’re willing to share the canister id we can try to look into some metrics from our side to see if we can identify some issue.
Regarding recommendations on how to optimize, you can try to scale out to more canisters. I can't give more concrete advice without seeing more details about how the calls are made and how the batches are created and sent. But as I said, I would not really expect delays in the order of minutes; that indeed looks a bit suspicious to me.
Fetching and processing are two different things. In what way are you processing during a query? Those would be temporary state and memory changes that get discarded and have to be recalculated each time. Ideally, for speed, you should pre-process and have the data ready to go so you can just dump it. That should be fast, and a query can return up to 2 MB of data at a time, so when you get a slot it all goes in one "page read". If you have more than 2 MB of data, then it makes sense to paginate.
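The pagination suggestion can be sketched from the client side. Assuming a hypothetical paged query like `get_files_page(offset, limit)` that returns at most one ~2 MB page of pre-processed records, draining it is just a loop:

```typescript
// `fetchPage` stands in for a paged actor query such as
// get_files_page(offset, limit) -- a hypothetical method, not a real API.
type FetchPage<T> = (offset: number, limit: number) => Promise<T[]>;

async function fetchAll<T>(
  fetchPage: FetchPage<T>,
  pageSize = 100,
): Promise<T[]> {
  const all: T[] = [];
  for (let offset = 0; ; offset += pageSize) {
    const page = await fetchPage(offset, pageSize);
    all.push(...page);
    if (page.length < pageSize) break; // last (possibly short) page
  }
  return all;
}
```

The page size would be tuned so a full page stays under the 2 MB reply limit.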
Maybe related, but I find myself creating batch query functions again and again to address this. For example, for the method get_file_by_id(file_id : Nat), which fetches metadata for one file at a time, I've also created get_file_by_id_batch(file_ids : [Nat]) to query multiple at the same time. It would be nice if there were a generic canister library feature or ICP specification feature to submit multiple calls in one ICP request, instead of having to implement all these custom _batch functions. Here is the full example:
public query (msg) func get_file_by_id(file_id : Nat) : async ?DiodeFileSystem.File {
  assert_membership(msg.caller);
  DiodeFileSystem.get_file_by_id(file_system, file_id);
};

// braindead call we have to add because ICP has no native query batch
public query (msg) func get_file_by_id_batch(file_ids : [Nat]) : async [?DiodeFileSystem.File] {
  assert_membership(msg.caller);
  Array.map<Nat, ?DiodeFileSystem.File>(file_ids, func(file_id : Nat) : ?DiodeFileSystem.File {
    DiodeFileSystem.get_file_by_id(file_system, file_id);
  });
};
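On the caller side, a batch endpoint like the one above still has to respect the 2 MB reply limit, so large ID lists need to be chunked. A hypothetical TypeScript helper (`batchCall` stands in for something like `actor.get_file_by_id_batch`; the names are assumptions):

```typescript
// Split an array into fixed-size chunks.
function chunk<T>(xs: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < xs.length; i += size) out.push(xs.slice(i, i + size));
  return out;
}

// Fan the chunks out concurrently and reassemble results in order.
async function getFilesBatched<F>(
  fileIds: number[],
  batchCall: (ids: number[]) => Promise<(F | null)[]>,
  batchSize = 50,
): Promise<(F | null)[]> {
  const parts = await Promise.all(chunk(fileIds, batchSize).map(batchCall));
  return parts.flat();
}
```

`Promise.all` preserves chunk order, so the flattened result lines up with the input IDs.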
One function handles all cases: pass everything in as an array and return everything as an array. Yes, it is a few more characters to type (brackets), but it is far more flexible.
I think this is a fair point: ICP API design should generally prefer returning multiple values from functions.
That said, it doesn't solve the wider issue of batching multiple calls. To expand on the use case here: we have a single zone canister for every Diode zone (think Slack workspace). So when a user starts the app locally, it needs to reach out to the canister and sync all kinds of information:
- Getting the current canister version
- Reading new messages
- Loading new file & directory listings
- Potentially downloading files (in folders tracked by the user)
- Reading zone metadata + settings
- Reading user lists and user roles
So we are dealing with a pretty large API interface with many different functions, many of which need to be called immediately on app startup. It would be great to be able to call functions in a batch.
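In the absence of protocol-level batching, the startup sync can at least issue its independent queries concurrently rather than sequentially. A sketch, where the actor interface and method names mirror the list above but are assumptions, not a real canister interface:

```typescript
// Hypothetical zone canister interface for the startup sync.
interface ZoneActor {
  get_version(): Promise<string>;
  get_messages_since(seq: number): Promise<string[]>;
  list_files(): Promise<string[]>;
  get_zone_metadata(): Promise<Record<string, string>>;
  list_users(): Promise<string[]>;
}

// Fire all independent startup queries at once and await them together.
async function syncOnStartup(actor: ZoneActor, lastSeq: number) {
  const [version, messages, files, metadata, users] = await Promise.all([
    actor.get_version(),
    actor.get_messages_since(lastSeq),
    actor.list_files(),
    actor.get_zone_metadata(),
    actor.list_users(),
  ]);
  return { version, messages, files, metadata, users };
}
```

This still costs one request per call, but the latencies overlap instead of adding up (subject to the per-canister query limits discussed above).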
Public API function signature ergonomics do not justify the protocol-level changes that would need to be made to support it. A better option is to have a state object and return it, accessing members via Candid. Depending on your source, you have probably already encapsulated state in a record; just return it and have the caller deal with parsing it. You can control access based on the caller's principal for admin rights if needed. It is much cheaper to return one large blob than to query 10 different subsets of the same blob, due to the per-call network overhead.
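The "one state record" approach can be sketched as follows; the `ZoneState` fields and `getState` query are hypothetical stand-ins for whatever record the canister already encapsulates:

```typescript
// Hypothetical shape of the state record returned by a single query.
interface ZoneState {
  version: string;
  settings: Record<string, string>;
  users: string[];
  roles: Record<string, string>;
}

// One round trip fetches everything; the caller picks out the
// subsets it needs instead of issuing ten separate queries.
async function loadZone(getState: () => Promise<ZoneState>) {
  const state = await getState();
  const { users, roles } = state; // caller-side "parsing" of the blob
  return { memberCount: users.length, roles };
}
```

The trade-off is that every caller pays for the full record even when it only needs one field, which is fine as long as the record stays well under the 2 MB reply limit.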
Flexible typing is built into Candid. Create a value as a variant and switch on the variant tag to dispatch based on type. Look at how ICRC-3 handles Values as an example of flexible typing.
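To illustrate the variant-dispatch idea, here is a simplified sketch of an ICRC-3-style `Value` modeled as a TypeScript discriminated union (the real ICRC-3 `Value` also has a `Map` case and different field names; this is a reduced assumption for illustration):

```typescript
// A Candid-variant-like value: the `kind` tag drives dispatch.
type Value =
  | { kind: "Nat"; value: bigint }
  | { kind: "Text"; value: string }
  | { kind: "Blob"; value: Uint8Array }
  | { kind: "Array"; value: Value[] };

// Exhaustive switch on the tag, recursing into nested values.
function render(v: Value): string {
  switch (v.kind) {
    case "Nat":
      return v.value.toString();
    case "Text":
      return JSON.stringify(v.value);
    case "Blob":
      return "0x" + Array.from(v.value)
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    case "Array":
      return `[${v.value.map(render).join(", ")}]`;
  }
}
```

The same pattern works in Motoko with a `switch` over a variant type, which is how ICRC-3 implementations walk their generic block values.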