Only checked the docs, but this .yaml config is sick! Big congrats
Allowing the CLI to sync canisters at this level of precision is very interesting for professional developers who are optimizing and working on a team.
Think it’s a significant and worthwhile improvement. Keep it up!
Decoupling the CLI tool from the toolchains used to build the canisters - e.g. updating your CLI doesn’t force you to update your Motoko version; you can do that when you are ready.
“Recipes” allow you to share build patterns within your team or with the community instead of the common build patterns being hard-coded into the CLI itself. You can write your own or use the ones published by the foundation.
There is a concept of an “environment”: a logical set of canisters on a network. This lets you deploy the same canisters to different environments and manage their settings independently.
You can build the canisters once and deploy the same compiled code to different environments. Canister IDs don’t need to be hard-coded into your canister code; instead, the CLI automatically injects canister environment variables at deployment time: “Build once, deploy many times.”
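As an illustration of that last point, here is a small TypeScript sketch of resolving an injected ID at runtime; the CANISTER_ID_<NAME> naming scheme is my assumption, not a documented icp-cli convention:

```typescript
// Sketch of "build once, deploy many times": look up a canister ID injected
// at deployment time instead of hard-coding it in source. The
// CANISTER_ID_<NAME> naming scheme here is a placeholder assumption.
function resolveCanisterId(
  env: Record<string, string | undefined>,
  canisterName: string,
): string {
  const key = `CANISTER_ID_${canisterName.toUpperCase()}`;
  const id = env[key];
  if (!id) {
    throw new Error(`Missing ${key}; was this code deployed through the CLI?`);
  }
  return id;
}
```

Because the lookup happens at deployment/run time, the same compiled artifact works in any environment that injects the variable.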
I’ve been playing around with the new icp-cli and I have to say, the work you’ve done is fantastic. The tool feels incredibly well-architected and the recipe-based approach is a huge step forward for modularity in the ecosystem—it’s refreshing to see such a clean and intuitive interface.
As I’m exploring the recipe system, I have a specific use case I’d love to get your thoughts on. I’m the author of ic-reactor, a library for React integration and state management. I’m interested in creating a recipe that helps users automate their frontend setup by generating the necessary “reactor” configuration files (like .ts or .js configs) based on the canister’s build artifacts.
From what I’ve seen in the dfinity/icp-cli-recipes repo, the current flow is very focused on preparing the WASM through build steps. My question is: Is there a recommended pattern for recipes to perform “side-effect” actions like generating local project files?
Specifically, once a backend canister is built and its Candid interface or Canister ID is resolved, I’d like to trigger a script that writes a configuration file back into the project’s source tree.
Is this something that should live within a build step, or is it better suited for a different part of the manifest?
Do you envision recipes being able to interact with the local filesystem to “scaffold” or update configs as part of the standard workflow?
I’d love to hear the team’s vision for this, as it would be a game-changer for library authors looking to provide “zero-config” setups for their users.
Yes, currently we force the output of every recipe to be a wasm. I think the closest thing we already have to your situation is the asset canister recipe. If ic-reactor is agnostic to the other parts of the build, then the current setup likely does not work well for you.
Partially speculating, since I’m not that great at frontend: could it be that icp-cli is the wrong place to hook into? Creating a bundler plugin (is that what it’s even called? I’m talking about Vite or webpack stuff here…) that is ‘only’ part of the frontend build may work better in this case, since your tooling seems very tied to React.
We are moving toward not hard-coding canister IDs: we explicitly don’t require canister IDs to be known at build time, so builds are no longer network-dependent. Our asset canister now serves canister IDs in cookies returned with HTML files, and we plan to use that mode of canister ID injection going forward.
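For illustration, reading such a cookie client-side could look roughly like this; the cookie name is a placeholder assumption, not the actual name the asset canister uses:

```typescript
// Sketch: extract a canister ID from a cookie header string. The cookie
// name "canister_id" is a placeholder assumption.
function canisterIdFromCookies(
  cookieHeader: string,
  cookieName = "canister_id",
): string | undefined {
  for (const part of cookieHeader.split(";")) {
    const [name, ...rest] = part.trim().split("=");
    if (name === cookieName) return rest.join("=");
  }
  return undefined;
}

// In a browser this would be called as canisterIdFromCookies(document.cookie).
```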
I’ll bring your points up with the team, thanks a lot for your input! Hopefully I’ll be able to provide some more concrete answers afterwards.
Thank you for the thoughtful response! That context is incredibly helpful, especially the shift toward runtime canister ID injection via cookies—I fully support moving away from the brittle experience of managing .env files and hard-coded IDs.
Regarding the “side-effects” and the CLI’s role, I’ve been looking into the @icp-sdk/bindgen library used in your examples. I noticed that instead of a dfx generate-style command, the icp-cli ecosystem seems to favor a Vite plugin (from @icp-sdk/bindgen/plugins/vite) to handle the Candid-to-TypeScript generation.
This seems to confirm your point that the bundler is the right place to hook in. I could certainly architect ic-reactor to follow this exact pattern:
A Bundler Plugin: Instead of the CLI generating the “reactor” config, I can provide a Vite/Webpack plugin that monitors the Candid artifacts produced by the icp-cli build.
Runtime IDs: Since you mentioned the asset canister will serve IDs via cookies, I can update the ic-reactor core to automatically look for these credentials at runtime, making the generated config files even simpler and truly “network agnostic.”
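To make the bundler-plugin idea concrete, here is a minimal Vite-style sketch; the paths and the stub generator are assumptions, and a real plugin would call into the actual binding generator:

```typescript
import { readFile, writeFile } from "node:fs/promises";

// Turn Candid source into a (stub) config module. A real implementation
// would invoke a binding generator; this just captures the interface text.
function generateReactorConfig(candid: string, source: string): string {
  return `// generated from ${source}\nexport const candidSource = ${JSON.stringify(candid)};\n`;
}

// Minimal Vite-style plugin sketch: regenerate the config when the build
// starts. The .did path and output path are assumptions, not a convention
// icp-cli guarantees.
function reactorPlugin(didPath: string, outPath: string) {
  return {
    name: "ic-reactor", // Vite identifies plugins by this name
    async buildStart() {
      const candid = await readFile(didPath, "utf8");
      await writeFile(outPath, generateReactorConfig(candid, didPath));
    },
  };
}
```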
Two quick follow-up questions for the team:
Standardized Artifact Discovery: For bundler plugins to work seamlessly, is there a recommended standard path where icp-cli will consistently output .did files for backend canisters? In the examples, I see them in ../../backend/dist/, but knowing if this will be a stable convention would help me build a “zero-config” plugin.
Recipe Output Types: If a recipe is strictly “frontend-tooling” (like a reactor generator), would you ever consider a “Metadata” or “Config” output type for recipes, or is the intention to keep the CLI’s recipe engine purely for WASM/Canister logic?
I’m really excited about the cookie-based ID injection—that alone solves 90% of the friction library authors face!
Thanks @raymondk for the call today—really helpful!
I’ve been building @ic-reactor/cli, which generates React hooks from Candid interfaces (using @icp-sdk/bindgen under the hood). Here’s what users end up with:
The recipes are meant to be translated into build and sync steps. build will produce the wasm and possibly some assets to be synced after the wasm is updated in the canister.
In this case, if the bindings are not generated, the build is likely to fail. I assume from your examples that you need the counter.did.ts and hooks/*.ts files for the code to compile.
What you might want to do is create a recipe that runs both the binding generation and the compilation + bundling.
Also, if you want to create a recipe, I would suggest creating it in your own repo; you could release it from the same repository. In the type field of a recipe you can use a URL that points to your .hbs template.
# example: downloading a published recipe template
curl -L https://github.com/dfinity/icp-cli-recipes/releases/download/rust-latest/recipe.hbs
We’re excited to share that v0.1.0-beta.3 of the icp-cli is now live!
This release brings several improvements to the developer experience. My personal highlight? Interactive call argument building with candid assist. It makes crafting canister calls significantly faster and less error-prone.
You can grab the latest version using Homebrew or the standalone installer:
Via Homebrew:
brew install dfinity/tap/icp-cli
Via Shell Script Installer:
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/dfinity/icp-cli/releases/download/v0.1.0-beta.3/icp-cli-installer.sh | sh
We want your feedback!
Please give it a spin and let us know what you think. If you have detailed suggestions or feedback you’d like to discuss in person, please reach out - we’re happy to jump on a call!
The backend recipe owns both the wasm and the frontend bindings
The .did file is guaranteed to exist when the script runs
Frontend just needs to import from the generated hooks—no extra build steps
Would this pattern work? Or is the build.steps list strictly for producing wasm artifacts?
If scripts after the wasm build are supported, I could even create a recipe template that wraps any backend canister and adds the hook generation automatically:
This pattern does work. Technically, the build steps are intended to produce artifacts only, but there’s nothing stopping you from producing arbitrary side effects.
Given the current setup I think this is an OK workaround, but from a composability perspective it is not great: depending on the frontend language/framework/etc., you will need a different backend recipe. Your idea of a recipe template goes in the right direction on this concern.
Just tried it out with a toy project. Holy smokes, this is much easier to use. The CLI surface area is very intuitive. Nice work. My main ask would be to add linux support (maybe it’s already there and I didn’t see it in the docs.) All of my agents (and agents in general) tend to run linux, so this feels like an important unlock. (Silly me, it works great on linux!) I’ll try it out with something beyond a toy soon.
It already works on Linux, but installation is not great yet. If you use Homebrew, it should just work with the same install instructions as on macOS. We’re working on standard package manager support.