Deterministic Time Slicing

@ulan @domwoe thank you both. I agree even a small section would be great.


@ulan Are there any plans to continue raising the DTS limit past 4X?

Now that canisters can hold up to 400GB, running map-reduce-style computations over that larger data will require either more rounds of consensus or more complicated logic to chunk up the computations (a more difficult developer experience).
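To make "chunking up the computations" concrete, here is a minimal, hedged sketch of the pattern: fold a map-reduce over fixed-size chunks so that each chunk could run in its own message (on the IC, e.g. via a timer or a self-call). The function and parameter names (`map_reduce_in_chunks`, `chunk_size`) are illustrative, not IC APIs.

```python
def map_reduce_in_chunks(data, chunk_size, map_fn, reduce_fn, initial):
    """Fold map_fn over data in fixed-size chunks, combining partial
    results with reduce_fn; each chunk would run in its own message
    so no single message exceeds the instruction limit."""
    acc = initial
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        for mapped in (map_fn(x) for x in chunk):
            acc = reduce_fn(acc, mapped)
    return acc

# Example: sum of squares over "large" data, 3 items per simulated message.
total = map_reduce_in_chunks(range(10), 3, lambda x: x * x, lambda a, b: a + b, 0)
print(total)  # 285
```

The bookkeeping to carry `acc` and the resume offset between messages is exactly the extra developer burden a higher DTS limit would remove.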

I’d also imagine that the DeAI initiatives would be interested in having this limit raised.


@ulan Are there any plans to continue raising the DTS limit past 4X?

Yeah, we are planning to increase the limit to 8X-10X (40B - 50B instructions). Hopefully, it will happen in 1-2 months. Going beyond that is possible, but would require a strong use case because we are getting close to the hard limit of 500 DTS rounds.
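For readers keeping score, the quoted figures imply a baseline: if 8X-10X corresponds to 40B-50B instructions, then 1X is about 5B instructions per message (an inference from this post, not an official constant). A quick back-of-the-envelope check:

```python
# Assumption: 8X-10X = 40B-50B instructions implies 1X ~= 5B instructions.
BASELINE_1X = 5_000_000_000  # instructions per "1X" (inferred, not official)

current_limit = 4 * BASELINE_1X    # today's 4X limit
planned_low = 8 * BASELINE_1X      # planned lower bound
planned_high = 10 * BASELINE_1X    # planned upper bound

print(f"current 4X limit: {current_limit:,}")            # 20,000,000,000
print(f"planned 8X-10X:   {planned_low:,} - {planned_high:,}")
```

So the planned change would roughly double or 2.5x the instruction budget of a single message, while the 500-DTS-round ceiling mentioned above bounds how far this can be pushed.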


I hope we can just keep pushing it as far as possible. We hit this limit on a regular basis, and I'm pretty sure we're not the only ones. For ICP to be a platform for general-purpose computation, we can't have these kinds of limits.

This is one of the key limits I discuss with people as they contemplate deploying to ICP.


Does the hard limit come from checkpointing?

Really looking forward to this. The backend is constantly trying to get around this limit.