Hi, could someone confirm whether this answer about LLMs is correct? It can be summarized as: "The model doesn't run directly in a canister; instead, HTTPS outcalls are used to reach an off-chain service." I'd like to start experimenting with building different types of machine learning models and topologies, and I'd like to know whether it's possible to run compiled models directly in a canister, given constraints such as the per-call instruction limit. Thanks.
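
For context, here is a minimal sketch of what "running a model directly in a canister" could look like in Rust with the ic-cdk crate. The weights and the `predict` function are hypothetical stand-ins for a tiny linear model, not any particular library's API; the point is that inference executes entirely inside the canister's Wasm, so each call must stay within the per-call instruction limit.

```rust
// Minimal sketch, assuming the ic-cdk crate. WEIGHTS, BIAS, and `predict`
// are hypothetical placeholders for a small compiled model.
use ic_cdk::query;

// Tiny model parameters baked directly into the canister's Wasm binary.
const WEIGHTS: [f32; 4] = [0.5, -1.2, 0.3, 0.8];
const BIAS: f32 = 0.1;

#[query]
fn predict(features: Vec<f32>) -> f32 {
    // A dot product like this is cheap; a real network's cost scales with
    // parameter count, which is what runs into the instruction limit per call.
    features
        .iter()
        .zip(WEIGHTS.iter())
        .map(|(x, w)| x * w)
        .sum::<f32>()
        + BIAS
}
```

A sketch like this suggests why small models can plausibly run fully on-chain, whereas large LLMs are typically served off-chain and reached via HTTPS outcalls, as the quoted answer describes.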