Right, sorry for the confusion. I was deep into my own thoughts and conflated your problems with mine for a bit there
So, I think there are two steps to your scenario here:
- how to prevent creating two instances of a create_canister async call
- what to do after, in the event one fails
The answer to the first question would still be “use an enum with a payload” IMO. You could have it so that you check if there is any proposal in ProposalsStateTracking::InProgress(id), and only call execute_proposal if there are no “in progress” tasks. This would solve your <5 canisters condition, as I believe one of the threads would change the state first, and the second one would “see” this change when it gets its turn.
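A minimal sketch of that guard, assuming a hypothetical `ProposalState` enum and queue (the names besides `InProgress(id)` are illustrative, not an existing API):

```rust
// Enum-with-payload guard: refuse to start a second execution while any
// proposal is still InProgress. All names here are illustrative.
#[derive(Debug, Clone, PartialEq)]
enum ProposalState {
    InProgress(u64), // payload: the id of the proposal being executed
    Executed(u64),
    Rejected(u64),
}

struct ProposalQueue {
    states: Vec<ProposalState>,
}

impl ProposalQueue {
    // Only begin execute_proposal-style work if nothing is in flight.
    fn try_execute(&mut self, id: u64) -> Result<(), &'static str> {
        if self
            .states
            .iter()
            .any(|s| matches!(s, ProposalState::InProgress(_)))
        {
            return Err("another proposal is already in progress");
        }
        self.states.push(ProposalState::InProgress(id));
        Ok(())
    }
}

fn main() {
    let mut q = ProposalQueue { states: vec![] };
    assert!(q.try_execute(1).is_ok());
    // A second caller arriving before the first finishes gets turned away.
    assert!(q.try_execute(2).is_err());
    println!("guard works");
}
```

Because the canister executes each message atomically, whichever message runs this check first flips the state, and the second one sees `InProgress` when its turn comes.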
As for what happens afterwards in your hypothetical, I'm not clear on what should happen if, say, the first call fails but the second one was already "rejected" because the first was "in progress". You'd need a way to either re-issue the update call, or piggyback on other update calls to re-check your proposal queue and pick up the "next" proposal in line. Another advantage of enums is that the "second" proposal from the example above could sit in a "Waiting" state and be processed next, if that's what your business logic needs: first do everything you can for the proposal that got executed, and only once it reaches a "this is 100% rejected" state do you move on to the next one in the queue.
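That "promote the next Waiting proposal" step could look something like this (again, the enum and function names are my own sketch, not a real API):

```rust
// Sketch: once the in-flight proposal is definitively rejected, mark it
// Rejected and move the first Waiting proposal into InProgress.
#[derive(Debug, PartialEq)]
enum ProposalState {
    Waiting(u64),
    InProgress(u64),
    Rejected(u64),
}

fn on_definitely_rejected(queue: &mut Vec<ProposalState>) {
    // Mark the current in-flight proposal as rejected...
    if let Some(slot) = queue
        .iter_mut()
        .find(|s| matches!(s, ProposalState::InProgress(_)))
    {
        if let ProposalState::InProgress(id) = *slot {
            *slot = ProposalState::Rejected(id);
        }
    }
    // ...then promote the first Waiting proposal, if any.
    if let Some(slot) = queue
        .iter_mut()
        .find(|s| matches!(s, ProposalState::Waiting(_)))
    {
        if let ProposalState::Waiting(id) = *slot {
            *slot = ProposalState::InProgress(id);
        }
    }
}

fn main() {
    let mut queue = vec![ProposalState::InProgress(1), ProposalState::Waiting(2)];
    on_definitely_rejected(&mut queue);
    assert_eq!(
        queue,
        vec![ProposalState::Rejected(1), ProposalState::InProgress(2)]
    );
    println!("proposal 2 promoted");
}
```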
tl;dr: use enums rather than booleans to track the changes in your "state machine"; they can be more descriptive of your business logic, and they can help you catch unwanted, hard-to-reason-about bugs earlier.
As for the other thing that I mentioned in the long code paste, I think that’s a problem worth thinking about, but I’ll wait for confirmation from someone that understands this better than I do. It might be that I totally misunderstood something and it’s not a valid concern.
edit: And I found the original source for the "beware of state changes between await calls" advice: it's this post by @nomeata, which I took to mean what I tried explaining in my first reply.
Canisters process messages atomically (and roll back upon certain error conditions), but not complete calls. This makes programming with inter-canister calls error-prone. Possible common sources for bugs, vulnerabilities or simply unexpected behavior are:
- Reading global state before issuing an inter-canister call, and assuming it still holds when the call comes back.
- Changing global state before issuing an inter-canister call, changing it again in the response handler, but assuming nothing else changes the state in between (reentrancy).
- Changing global state before issuing an inter-canister call, and not handling failures correctly, e.g. when the code handling the callback rolls back.
If you find such patterns in your code, you should analyze whether a malicious party can trigger them, and assess the severity of that effect.
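To make the first pitfall concrete, here's a synchronous simulation of the "stale snapshot across an await" bug, where the closure stands in for another message executing while the inter-canister call is in flight (the `Canister` struct and method are invented for illustration):

```rust
// Simulation of reading state before an inter-canister call and assuming
// it still holds afterwards. The `interleaved` closure plays the role of
// another message that runs while the call is awaited.
struct Canister {
    balance: u64,
}

impl Canister {
    // BUGGY pattern: snapshot state, "await", then write back from the
    // stale snapshot, clobbering whatever happened in between.
    fn withdraw_buggy(&mut self, amount: u64, interleaved: impl FnOnce(&mut Canister)) {
        let snapshot = self.balance; // read global state before the call
        interleaved(self);           // another message runs during the await
        self.balance = snapshot - amount; // stale write loses the update
    }
}

fn main() {
    let mut c = Canister { balance: 100 };
    // A deposit of 50 lands while the withdraw's call is in flight...
    c.withdraw_buggy(30, |c| c.balance += 50);
    // ...and is silently lost: 100 + 50 - 30 should be 120, but we get 70.
    assert_eq!(c.balance, 70);
    println!("deposit lost to a stale snapshot");
}
```

The fix is the same idea as above: re-read (or re-validate) the state in the callback instead of trusting the pre-call snapshot, or encode "a call is in flight" in an enum state so interleaved messages can be handled deliberately.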