Llama.cpp on the Internet Computer

This thread discusses llama.cpp on the Internet Computer.

A project funded by a DFINITY grant: ICGPT V2

The first functioning version is now MIT-licensed open source: GitHub - onicai/llama_cpp_canister

Current status:

  • On a Mac, you can build, deploy, and upload the LLM, and then call an endpoint to have it generate tokens (see the sketch after this list).
  • It only works for a small model, because it does not yet make use of the recent IC advancements (SIMD, improved float handling, etc.).
  • The README of the GitHub repo contains a list of TODOs.
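For orientation, here is a rough sketch of that Mac workflow. It assumes the icpp-pro toolchain and uses the canister and endpoint names that appear later in this thread; the authoritative steps, including the model-upload script, are in the repo README.

    # Sketch only -- consult the repo README for the exact commands
    icpp build-wasm        # compile the C++ canister code to wasm (assumes icpp-pro)
    dfx deploy             # deploy to a local replica, or to the IC with --network ic
    # Upload the .gguf model file into the canister (the repo provides a script for this step)
    # Then call the canister to generate tokens, e.g.:
    dfx canister call llama_cpp run_update \
        '(record { args = vec {"-p"; "Once upon a time"; "-n"; "20"} })'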

It’s been a journey, but a pre-release of ICGPT with a llama.cpp backend is now live on the IC.

  • The deployed llama.cpp model is Qwen 2.5 - 0.5B - Q8_0 - Instruct
  • You can watch a small video here
  • You can try it out at https://icgpt.icpp.world/
  • A 0.5B model with q8_0 quantization fits fine in a 32-bit canister.
  • However, because of the instruction limit, generation requires multiple update calls, so it takes about 2 minutes to get the answer to the question shown below.
  • We did not do any load testing, so it will be interesting to see how it holds up when multiple users try it out at the same time.
  • The UI is still primitive. It is the same one that was developed for the tiny story-teller LLM; improving it is on the to-do list.

:tada: ICGPT V2 - final milestone reached :tada:

(I also posted this on X)

The grant work is now completed and here is a video summarizing what I created. I want to thank @dfinity for the opportunity & the support.

I also want to thank the #ICP community for the enthusiasm as I shared progress over the past months, and for the testing some of you did with the early releases.

Some of you even donated cycles that will keep the Qwen2.5 canister up & running for several months. You are the best. :slightly_smiling_face:

You can try it out at: https://icgpt.icpp.world

I am very happy with the outcome of this project, and there are big plans to build on top of this foundation. More on this later. But first, some time to celebrate this milestone :champagne::champagne:

YouTube Video

Worked better this time :muscle:

Congrats, super cool! I hope this gets the use and attention it deserves.

BTW, I'm playing with a Solana project, https://github.com/ai16z/eliza/, which can connect to remote or local LLMs.

Does this have API endpoints so that we could add it as one of the remote models? I can only pidgin-code, so it's hard for me to evaluate how it works.

@superduper,
Thanks for your feedback and for pointing out the eliza project. I will check it out.

About the API:

  • There are two endpoints, new_chat & run_update. These are Candid-based canister endpoints, and the links bring you to the Candid service definition. The README describes how to call them using dfx. This API mirrors the llama.cpp command-line interface, which makes it really easy to test things locally and then use the same arguments when calling the canister (see the sketch after this list).

  • We're looking at creating another API that follows the OpenAI completions standard.
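As a concrete illustration, calling those two endpoints with dfx could look roughly like this. This is a sketch: the run_update arguments follow the pattern shown later in this thread, while the new_chat record fields are an assumption here, so check the README for the exact Candid signatures.

    # Start a fresh chat (the args shown for new_chat are assumed -- see the README)
    dfx canister call llama_cpp new_chat \
        '(record { args = vec {"--prompt-cache"; "prompt.cache"} })'

    # Generate tokens using the familiar llama.cpp CLI arguments
    dfx canister call llama_cpp run_update \
        '(record { args = vec {"-p"; "What is the Internet Computer?"; "-n"; "512"} })'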

Congratulations on the final milestones! Must have been some journey. :grinning:

We just got done with our first milestone of the LLM Marketplace and will be getting into an interesting phase. In this phase I was planning to explore TinyLlama (1.1B parameters) and train it for a niche task.

But looking at your post I'm having second thoughts, primarily because your model is smaller than TinyLlama and quantized, and despite that it still hits the instruction limit. I will experiment and share what I learn.

I'm trying to brainstorm ideas for a tiny task it could be trained on while working within the instruction limit.

Would appreciate your thoughts around it.

Cheers!

Hi @roger,

The 1.1B TinyLlama will not fit.

I recommend you select a 0.5B-parameter model, like the Qwen 2.5 model I am using, and try to fine-tune that one.

Yes, I have been contemplating using Qwen and also exploring a few other lightweight models like SmolLM and DistilGPT2.

While researching these, I'm also trying to settle on a fun use case.

Thank you for the suggestion.

We completed the update to the latest llama.cpp version (sha 615212).

Please start fresh by following the instructions at GitHub - onicai/llama_cpp_canister: llama.cpp for the Internet Computer

This update allows you to run many new LLM architectures, including the 1.5B-parameter DeepSeek model that attracted a lot of attention with this X post.

The main limiting factor for running the larger LLMs is the instruction limit. If a model can generate at least 1 token, you can use it, because we generate tokens via multiple update calls. (See the README in the repo for details.)

Latency is of course high, which hopefully will improve with further ICP protocol and hardware updates, but we believe it is already possible to build useful, targeted AI agents with their LLM running on-chain. It requires some smart prompt engineering, and this is an area where we are focusing our efforts.

To assist with prompt engineering, a Python notebook, prompt-design.ipynb, is included in the repository; it lets you run your prompts against the original llama.cpp compiled for your native system.

A few notes on the testing we did with DeepSeek.

We tested this DeepSeek model, available on Hugging Face: unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF.

The model card on Hugging Face shows this llama.cpp command:

./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf \
    --cache-type-k q8_0 \
    --threads 16 \
    --prompt '<|User|>What is 1+1?<|Assistant|>' \
    -no-cnv

Our initial tests confirmed that the parameter --cache-type-k q8_0 is important to get a good answer from the Q2_K quantized model.

To call the canister, you would use something like this:

dfx canister call llama_cpp run_update '(record { args = vec {"--cache-type-k"; "q8_0"; "--prompt-cache"; "prompt.cache"; "--prompt-cache-all"; "-sp"; "-p"; "<|User|>What is 1+1?<|Assistant|>"; "-n"; "512" } })'

(No need to pass -no-cnv, because that is a default for llama_cpp_canister)

You can generate 2 tokens per update call, so you configure the LLM with this call (details are in the README of the repo):

dfx canister call llama_cpp set_max_tokens '(record { max_tokens_query = 2 : nat64; max_tokens_update = 2 : nat64 })'

And that’s really it to get going with this model.
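To tie the pieces of this thread together, here is a hedged sketch of how the multiple update calls could be scripted from the command line. The endpoint names, record shapes, and flags are taken from the examples above; the fixed-count loop, and the assumption that repeating the identical run_update call continues generation via the prompt cache, should be verified against the repo README.

    # Sketch only -- the README documents the exact start/continue/termination protocol

    # 1. Keep each update call under the instruction limit (2 tokens per call)
    dfx canister call llama_cpp set_max_tokens \
        '(record { max_tokens_query = 2 : nat64; max_tokens_update = 2 : nat64 })'

    # 2. Start a fresh chat (the new_chat args shown here are assumed)
    dfx canister call llama_cpp new_chat \
        '(record { args = vec {"--prompt-cache"; "prompt.cache"} })'

    # 3. Repeat the update call; at 2 tokens per call, 50 calls yields roughly 100 tokens
    for i in $(seq 1 50); do
        dfx canister call llama_cpp run_update \
            '(record { args = vec {"--cache-type-k"; "q8_0"; "--prompt-cache"; "prompt.cache"; "--prompt-cache-all"; "-sp"; "-p"; "<|User|>What is 1+1?<|Assistant|>"; "-n"; "512" } })'
    done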