Make the local development experience similar to the mainnet

Summary

Local development with dfx differs quite a bit from how things actually work on the mainnet. The proposal is to make the local dfx replica as close to the real thing as possible, to provide a better development experience.

Note

I only elaborate here on the missing cycles consumption mechanics on the local replica, and I encourage the community to extend this topic with any additions they consider important. I remember there were also proposals to incorporate critical mainnet canisters into the local replica - feel free to elaborate on that topic as well.

1 Cycles on dfx are ignored completely

Problem A - cycles consumption estimation

There is no way for me to estimate how many cycles my application consumes, except to deploy it to the mainnet, which is neither cheap nor always easy - especially in scenarios with multiple dynamically deployed canisters (since I need to write additional logic to reuse those canisters), which is both expensive and sometimes tricky.

Cycles usage estimation is critical functionality IMO, since it gives us a quick way to check whether the chosen algorithm performs well or whether there are still optimizations to be done - especially for people who come from other platforms and don’t know the IC’s economics yet.
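For context, the only workaround today is roughly the sketch below: expose the canister’s own cycle balance through a query, then sample it before and after a workload on the mainnet and take the difference. This is a minimal Rust/ic-cdk illustration, not an official API - the method names (`do_work`, `cycle_balance`) are made up for the example, and on the local replica the delta is currently meaningless because cycles are never charged.

```rust
// Minimal sketch: estimate a call's cycle cost on the mainnet by sampling the
// canister's own balance around a workload. Method names are hypothetical.
use ic_cdk_macros::{query, update};

/// Hypothetical workload whose cost we want to estimate.
#[update]
fn do_work(input: Vec<u8>) -> u64 {
    // ... the code under test ...
    input.iter().map(|b| *b as u64).sum()
}

/// Expose the current cycle balance so a script (or the controller, via
/// `dfx canister status`) can read it before and after calling `do_work`.
#[query]
fn cycle_balance() -> u64 {
    ic_cdk::api::canister_balance()
}
```

The difference between two `cycle_balance` readings around a call gives a rough per-call estimate (with idle and storage consumption included as noise), but it only means anything once the canister is on the mainnet - which is exactly the problem.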

Problem B - cycles consumption limit per message

There is no way to check message boundaries in terms of cycles consumption per message, except to deploy the canister to the mainnet and perform extensive testing. For those who don’t know, there are two kinds of limits on each message you send to a canister: a message size limit (~2 MB) and a cycles consumption limit (the actual value doesn’t mean much, since there is no way for you to check it locally right now).

While the first limit is checked locally (though the number actually differs between the mainnet and the local replica - the latter is about 3.5 MB on dfx 0.8), the second limit is ignored completely, which leads to unexpected bugs in production. This problem is not hard to solve, but the solution means limiting the size of the payload you send from the frontend in one go, which can turn into serious work to make sure you always send only as much data as the canister can process at once.
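The workaround usually ends up looking something like the sketch below: the frontend splits the payload into chunks that stay well under the ingress size limit (which also bounds the work done per message), and the canister buffers them until a final commit call. This is a hypothetical Rust/ic-cdk illustration - `append_chunk`, `commit`, and the suggested chunk size are assumptions, not an existing interface.

```rust
// Hypothetical chunked-upload endpoints that keep each ingress message below
// the ~2 MB size limit and bound the work done per message.
use std::cell::RefCell;
use ic_cdk_macros::update;

thread_local! {
    // Buffer for chunks received so far (single-uploader sketch; a real
    // implementation would key this by caller or upload session).
    static BUFFER: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

/// Receive one chunk of the payload. The frontend is responsible for keeping
/// each chunk comfortably below the message size limit (e.g. ~1.9 MB).
#[update]
fn append_chunk(chunk: Vec<u8>) -> u64 {
    BUFFER.with(|b| {
        let mut b = b.borrow_mut();
        b.extend_from_slice(&chunk);
        b.len() as u64
    })
}

/// Process the assembled payload once all chunks have arrived. If processing
/// itself is heavy, this is also where you would split the work across
/// multiple messages to stay under the per-message cycles limit.
#[update]
fn commit() -> u64 {
    BUFFER.with(|b| {
        let data = b.borrow_mut().split_off(0);
        // ... process `data` ...
        data.len() as u64
    })
}
```

The annoying part is exactly this: every frontend that might send large or expensive payloads has to grow this kind of chunking machinery, even though nothing on the local replica ever forces you to write it.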

Problem C - you don’t need a wallet canister to deploy locally

We recently did an experiment with on-the-fly canister deployment straight from the frontend code. Locally everything worked fine - call provisional_create_canister_with_cycles() (which only works locally; in production you would call create_canister()) and you’re done. You don’t need a wallet canister for that.

But in reality this is not how it works. To deploy a canister you have to hold some cycles, but your keypair-based principal can’t hold them; you need a canister to do that for you. It could be a wallet canister or any other canister that pays for your create_canister call. The point is that this is what we should have seen at the very beginning of our experiment, locally - but we only saw it on the mainnet.
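On the mainnet this ends up looking roughly like the sketch below: a canister you control (a cycles wallet or your own “factory” canister) calls the management canister’s create_canister and attaches the cycles that pay for the new canister. This is a hedged illustration using ic-cdk’s call_with_payment; the argument and result records are trimmed to the fields used here, and the attached amount is just a placeholder.

```rust
// Sketch: a canister (e.g. a factory or a cycles wallet) creates a new
// canister on the mainnet by calling the management canister and attaching
// the cycles that pay for it.
use candid::{CandidType, Deserialize, Principal};
use ic_cdk::api::call::call_with_payment;
use ic_cdk_macros::update;

// Trimmed-down argument/result records for the management canister's
// create_canister method (only the fields used here).
#[derive(CandidType)]
struct CanisterSettings {
    controllers: Option<Vec<Principal>>,
    compute_allocation: Option<candid::Nat>,
    memory_allocation: Option<candid::Nat>,
    freezing_threshold: Option<candid::Nat>,
}

#[derive(CandidType)]
struct CreateCanisterArg {
    settings: Option<CanisterSettings>,
}

#[derive(CandidType, Deserialize)]
struct CreateCanisterResult {
    canister_id: Principal,
}

/// Create a new, empty canister, paid for with this canister's own cycles.
/// The attached amount is a placeholder; it must at least cover the canister
/// creation fee plus whatever the new canister needs to stay unfrozen.
#[update]
async fn spawn_canister() -> Principal {
    let arg = CreateCanisterArg { settings: None };
    let (result,): (CreateCanisterResult,) = call_with_payment(
        Principal::management_canister(),
        "create_canister",
        (arg,),
        200_000_000_000, // cycles attached to the call (placeholder amount)
    )
    .await
    .expect("create_canister failed");
    result.canister_id
}
```

Locally, provisional_create_canister_with_cycles hides all of this, which is why the missing wallet only shows up once you hit the mainnet.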

Solution

Implement all cycles-related mechanics on the local replica, with the ability to switch them off on demand.

12 Likes

This makes sense to me. Of course, this is easy for me to say since I would have to do none of the work. The SDK team are the ones actually doing the hard work, and they are the real experts.

Let me ping the SDK team…

6 Likes

Your work is also of great value, Diego :wink:

1 Like

+1000. Cycle management simulation would be huge. I know that mainnet is ‘cheap’… but it isn’t that cheap when you start doing things that threaten the cycle limit. Couple this with the fact that people can find your canisters and start pinging them with data when you’re just trying to test something, and a testnet becomes really, really desirable. Being able to simulate locally would be a big step in the right direction.

It doesn’t even have to be perfect, but being able to run a process locally that overruns the cycle limit by a factor of 500 is not a good developer experience when you’re just learning: you get to the point where you push to mainnet and your world devolves into a fiery armageddon of code patching, pseudo-logging, and praying that you can identify the rogue loop before the world discovers what tripe your dev skills are. Not that that has ever happened to me. But I could see it happening.

9 Likes

I would really like to start benchmarking cycle consumption; this would be great to have.

4 Likes

This is becoming more painful. I can’t easily run benchmarks to test how my code affects cycle usage. When people ask me, I can’t really answer their questions about how cycle costs work. I’m also starting to run into issues with running out of cycles locally when performing tests, and in those cases I would actually just like to turn cycles off.

My biggest use case is benchmarking Azle. I want to be able to see exactly how many cycles similar functions are consuming when a canister is written in TypeScript, Rust, and Motoko.
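One way to make such comparisons reproducible, without burning real cycles, is instruction counting: wrap the code under test between two reads of the performance counter and report the delta from each implementation. Below is a rough Rust sketch; it assumes an ic-cdk and replica version that expose the ic0.performance_counter system API, the `bench_sum` workload is made up, and instruction counts are only a proxy for execution cycles (they ignore storage and ingress costs).

```rust
// Rough benchmarking sketch: report how many WebAssembly instructions a
// workload executes, as a proxy for its execution-cycle cost. Assumes a
// replica/ic-cdk version exposing ic0.performance_counter.
use ic_cdk_macros::update;

/// Hypothetical benchmark endpoint: run the workload and return the number of
/// instructions it consumed. Write the equivalent in Motoko / Azle to compare.
#[update]
fn bench_sum(n: u64) -> u64 {
    let start = ic_cdk::api::performance_counter(0);

    // The code under test.
    let mut acc: u64 = 0;
    for i in 0..n {
        acc = acc.wrapping_add(i);
    }
    // Keep `acc` observable so the loop isn't optimized away entirely.
    std::hint::black_box(acc);

    ic_cdk::api::performance_counter(0) - start
}
```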

Can we make this a higher priority? Cycle measurements should be easy to obtain locally, and we should be able to customize our cycle environment; the current situation is making local development more difficult.

8 Likes

I feel your pain. I’ve been thinking about an ICDevs bounty for this… trying to get someone to do the work and document the results for the community. The thing that keeps holding me back is that this stuff is so tucked away in a back corner, and feels a bit ‘tacked on’ and ‘loosey goosey’, that the next code push to the replicas could mess up the numbers. ETH has a specific opcode-to-gas table that can be relied on and, if it changes, it goes through governance. SOME of the DFINITY cycle costs are like this (ingress messages), but many others, I think, just run the clock and then do some math to estimate the cost. I don’t know enough about the isolation pattern on the machines to know how reliable that is and what the possible variance could be. Especially if you are running on pjljw. :grimacing:

3 Likes

Yes, I will escalate this. I have been asked by the R&D team to help collate and propose prioritized pain points for local development (especially for testing DeFi apps once ICP is held in canisters). Is there anything else you guys see? A very obvious one is “have a ledger canister running locally that you can test against”.

4 Likes

Instead of putting a lot of work into having the local dev environment match the mainnet, wouldn’t it make more sense to launch an actual testnet? Having some servers/nodes dedicated to a testnet, funded by the foundation, sounds faster to market and smaller in dev scope IMO than locally duplicating the ledger canister, replicas, HTTP proxies, etc…

I realize the devops wouldn’t “simply” be cloning everything and letting it rip, but I do believe it would be less work overall. If resources become a problem, we could work with a scheduling system. Devs seem pretty evenly split across US/EU/AU, so that should help…

3 Likes

To be fully transparent, I have been asked to turn both the local and the testnet options into concrete proposals for the team… as folks inside the org have different ideas of what the scope would be for each. Some people think one is much easier than the other; others think we need both. I have been tasked with talking to the team who would work on the local version, as well as the teams who may end up maintaining a testnet. Possible options include anything from the foundation doing it, to using grants, to working with external folks to support their efforts.

All this to say: if I sounded opinionated that “local dev” is all that matters, then I mischaracterized my intent. I do think it matters… but I am agnostic as to “first steps”, as I have only been asked (today) to propose something.

1 Like

Totally fair. I was just throwing my 2c into the discussion. I’ve seen a lot of posts asking for help installing the II canister locally. I think things will get more complicated the more systems and canisters one needs to install locally.

On the other hand, working on creating a testnet could also mean first cleaning up a lot of stuff so that it can be maintained, which in turn would make it easier to self-deploy. Only DFINITY can say for sure which is faster, more optimal (capex/opex in both cost and human hours), etc…

1 Like

And I am glad you did! I hope the coldness of forum posting and my quick responses did not make it seem like I was not appreciative. I write many messages across many platforms, so I can be needlessly factual and curt.

5 Likes

Things are pretty opaque at the moment when interacting with the underlying system, especially when using Motoko. It would be great if we could flip a switch and see logs that correlate calls with their input parameters and the amount of cycles used. I keep hearing that they don’t want to promote system-level things into the language, but sometimes it is nice for your code (diagnostics and logging) to know wtf is going on with your system. If we don’t want to give Motoko access to a call ID or the cycles used, then Motoko at least needs some handle it can use to correlate its state with the subsystem, so the correlation can be done in an external system.

I would also like to be able to turn off consensus latency locally. I haven’t had much success getting the artificial delay to disappear.

1 Like

Basically I would like to be able to customize cycles and consensus locally. Sometimes I would like to turn off cycle limits and consensus latency entirely; other times I would like those to match production as closely as possible. Sometimes I just want to run many tests as quickly as possible with no limits and little latency; other times I want to run benchmarks and measure cycles as close to production as possible.

I’m on dfx 0.7.1 and --no-artificial-delay doesn’t seem to do anything.

Apparently that flag was removed in later dfx versions, but based on what you said, I guess there is still a consensus latency.

1 Like

Hmm, I’m on 0.8.4, and consensus is pretty fast. Honestly, I don’t see why someone would want to make it faster.

Automated tests are one reason. There are multiple ways to go about testing, but one is to test very close to production, yet still locally, by running tests from within a canister.

I can imagine wanting to experiment and test with different subnet replication factors as well. A single-node subnet, which could be useful for real-time gaming and other low-latency applications, would be good to be able to test locally.

We should be able to turn the delay on and off, and if it’s going to be emulated locally, it would be nice to set the emulation based on the replication factor. It’s strange to have a default delay set.

1 Like