Working with a Decentralized LLM

Greetings everyone,

Our team is working on a project that relies heavily on a language model. However, canister storage limits prevent us from hosting a billion-parameter model on-chain, so we instead download the model directly into the user’s browser. Unfortunately, this approach introduces two significant issues:

  • Downloading the model to the user’s browser is slow, taking several minutes to complete.

  • Users must enable WebGPU in their browser, and WebGPU is still an experimental feature in most browsers.

Together, these issues make for a suboptimal user experience.
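One common way to soften the download problem is to cache the model weights on first visit so returning users skip the multi-minute wait. Here is a minimal, hedged sketch of the idea; in a real browser this would sit on top of the Cache API or IndexedDB, but the function and parameter names below are purely illustrative, and `store` is any Map-like object so the sketch runs anywhere:

```javascript
// Illustrative sketch: download each model shard at most once.
// `store` is a Map-like cache (in a browser: Cache API / IndexedDB wrapper);
// `fetchFn` is whatever performs the actual network download.
async function fetchShardCached(url, store, fetchFn) {
  if (store.has(url)) {
    return store.get(url); // cache hit: no network round trip
  }
  const bytes = await fetchFn(url); // cache miss: download once
  store.set(url, bytes); // persist for subsequent visits
  return bytes;
}
```

This does not shorten the first download, but it turns the several-minute wait into a one-time cost per device rather than a per-session one.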

In light of these challenges, we are looking for ways to work around these issues without compromising the project. Your insights and suggestions would be greatly appreciated.
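For the WebGPU issue, it helps to detect support up front and show a clear fallback instead of a hard failure. Below is a minimal sketch using the standard WebGPU entry points (`navigator.gpu` and `requestAdapter()`); the function names themselves are illustrative, not from any specific library:

```javascript
// Illustrative sketch: check whether WebGPU is usable before loading the model.
// `nav` is the browser's `navigator` object (passed in to keep this testable).
function hasWebGpuApi(nav) {
  // The API surface exists; it may still be behind a flag or lack a real GPU.
  return !!nav && typeof nav.gpu === "object" && nav.gpu !== null;
}

async function canUseWebGpu(nav) {
  if (!hasWebGpuApi(nav)) return false;
  // requestAdapter() resolves to null when no suitable GPU is available.
  const adapter = await nav.gpu.requestAdapter();
  return adapter !== null;
}
```

In a page you would call `canUseWebGpu(navigator)` on load and, when it returns false, show instructions or route requests to a non-browser model instead of starting a download that cannot run.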


Welcome @Dunsin-cyber
There are a few forum topics here covering AI/ML-related development on the Internet Computer. Try a search for AI or GPU and you will find them.
The most useful thread to start with would be the recently convened Decentralised AI Working Group (DeAI WG); all are welcome! Technical Working Group DeAI

The WG has an active IC Dev Discord channel where members of the WG and any other interested parties can chat: Discord

The WG holds a weekly voice meeting on Discord on Thursdays; the next one is about 12 hours from now. Information about it is in the Discord channel.

The questions and limitations you raised are active topics of discussion right now, so come join in!


Alright, thank you @icarus