Hi, could someone confirm whether this answer about LLMs is correct? It can be summarized as: "The model doesn't run directly in a canister, but via HTTPS outcalls to an off-chain service." I'd like to start experimenting with building different types of machine learning models and topologies, and I'd like to know whether it's possible to run compiled models directly in a canister, given the limitations on the number of instructions per call, etc. Thanks.
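To make the instruction-limit concern concrete, here is a rough back-of-envelope sketch. All constants are assumptions for illustration, not official ICP figures: the per-message instruction cap and the Wasm-instructions-per-FLOP ratio vary by subnet configuration and implementation, so treat the numbers as order-of-magnitude only.

```python
# Back-of-envelope: how many tokens could a compiled model generate
# within one canister update call's instruction budget?
# All constants below are ASSUMPTIONS for illustration.

def tokens_per_call(n_params: float,
                    instruction_limit: float = 40e9,   # assumed per-call cap
                    instr_per_flop: float = 1.0) -> float:
    """Estimate tokens generable in a single update call.

    n_params: model parameter count
    instruction_limit: hypothetical per-message instruction budget
    instr_per_flop: hypothetical Wasm instructions per FLOP
    """
    flops_per_token = 2 * n_params  # ~2 FLOPs per parameter per token
    return instruction_limit / (flops_per_token * instr_per_flop)

# Under these assumptions, a 7B-parameter model (~14e9 FLOPs/token)
# fits only a few tokens per call, while a tiny 15M-parameter model
# (llama2.c scale) fits on the order of a thousand.
print(tokens_per_call(7e9))   # a few tokens per call
print(tokens_per_call(15e6))  # ~1.3e3 tokens per call
```

This is why the small llama2.c demo runs on-chain while 7B/13B-class models are usually served off-chain and reached through HTTPS outcalls: the per-call budget, not just memory, is the binding constraint.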