Hi, could someone confirm whether this answer about LLMs is correct? It can be summarized as: "The model doesn't run directly in a canister; the canister reaches an off-chain service through HTTPS outcalls." I'd like to start experimenting with building different types of machine learning models and topologies, and I'd like to know whether it's possible to run compiled models directly in a canister, given constraints such as the per-call instruction limit. Thanks.
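For context, here is a minimal sketch in Rust of the off-chain pattern that answer describes: the canister does no inference itself and only forwards the prompt via an HTTPS outcall. The endpoint URL, payload shape, and cycle amount are illustrative assumptions, not a real API.

```rust
use ic_cdk::api::management_canister::http_request::{
    http_request, CanisterHttpRequestArgument, HttpHeader, HttpMethod,
};

/// Ask a hypothetical off-chain inference service for a completion.
/// The model never runs inside the canister; only the HTTP round trip does.
#[ic_cdk::update]
async fn ask(prompt: String) -> String {
    let request = CanisterHttpRequestArgument {
        // Hypothetical endpoint, for illustration only.
        url: "https://example-inference.host/v1/complete".to_string(),
        method: HttpMethod::POST,
        headers: vec![HttpHeader {
            name: "Content-Type".to_string(),
            value: "application/json".to_string(),
        }],
        body: Some(format!(r#"{{"prompt":{:?}}}"#, prompt).into_bytes()),
        max_response_bytes: Some(2_000_000),
        // A transform function is normally needed so all replicas agree
        // on the response; omitted here to keep the sketch short.
        transform: None,
    };

    // Cycles attached to pay for the outcall; the right amount depends on
    // request and response size, so this figure is a placeholder.
    match http_request(request, 50_000_000_000).await {
        Ok((response,)) => String::from_utf8_lossy(&response.body).into_owned(),
        Err((code, msg)) => format!("outcall failed: {:?} {}", code, msg),
    }
}
```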