Introducing the LLM Canister: Deploy AI agents with a few lines of code

The experimental non-replicated outcalls feature will make the off-chain worker example redundant. With it, you can call your preferred Web2 LLM API directly from a canister, giving you the freedom to choose any model you want. (Note: the call is not replicated, so there is no consensus or verification that the results were not tampered with.)
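To make the idea concrete: under the hood, such an outcall is just an ordinary HTTPS POST to the model provider. Here is a minimal Python sketch of the request shape a canister would send; the endpoint URL and JSON field names are assumptions modeled on common chat-completion APIs, not a confirmed interface.

```python
import json

# Hypothetical sketch of the payload for a non-replicated outcall to a
# Web2 LLM API. The endpoint and field names are assumptions, not a
# documented interface.
def build_llm_request(model: str, prompt: str) -> dict:
    return {
        "url": "https://api.example.com/v1/chat/completions",  # hypothetical endpoint
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_llm_request("my-model", "Hello!")
print(req["method"])  # POST
```

Because the call is not replicated, whatever comes back in the response body is accepted as-is; nothing on-chain attests that the provider returned it unmodified.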
