Technical Working Group DeAI

Stable memory has a capacity of 400 GiB per canister.

Wasm heap memory, per canister: 4 GiB
Wasm stable memory, per canister: 400 GiB

It’s possible to upload a single 400 GiB file using ic-oss. It uses concurrent uploads via ReadableStream, so the upload speed may be faster.


I guess the question is how fast the software can access that memory. The heap is in RAM, so it’s fast. Is stable memory loaded on demand from disk?

Running an AI model still requires loading the model file from stable memory into heap memory, so it is still subject to the current 4 GiB limit of Wasm32.
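
For illustration, here is a minimal Rust sketch of that copy step, assuming a recent ic-cdk where the stable-memory API is 64-bit (`stable_read(offset: u64, buf: &mut [u8])`, `stable_size() -> u64` in pages); the `load_model_bytes` helper and the constants are illustrative, not a reference implementation:

```rust
use ic_cdk::api::stable::{stable_read, stable_size};

/// Stable memory is addressed in 64 KiB pages.
const WASM_PAGE_SIZE: u64 = 64 * 1024;

/// Illustrative helper: copy `len` bytes of a model blob that was previously
/// written to stable memory at `offset` into a heap-allocated buffer.
/// Because the buffer lives on the Wasm heap, `len` is bounded by the
/// ~4 GiB Wasm32 heap limit even though stable memory itself can be far larger.
fn load_model_bytes(offset: u64, len: u64) -> Vec<u8> {
    assert!(
        offset.saturating_add(len) <= stable_size() * WASM_PAGE_SIZE,
        "read past the end of stable memory"
    );
    let mut buf = vec![0u8; len as usize];
    stable_read(offset, &mut buf);
    buf
}
```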

Our use case is deploying a vector database whose index size is over 70 GiB :+1:

I’m using tokio’s stream_iter for parallelization, but it’s possible that your implementation is faster.
If you have the opportunity to upload tens of GiB, let’s compare!
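
For reference, a minimal sketch of the concurrent-chunk idea in Rust, assuming a hypothetical `upload_chunk` that sends one chunk to the canister (e.g. via an ic-agent update call); it uses `futures::stream::iter` with `buffer_unordered` to keep a bounded number of uploads in flight, which is roughly the same pattern as the tokio stream approach:

```rust
use futures::stream::{self, StreamExt, TryStreamExt};

/// Hypothetical uploader: sends one chunk (index + bytes) to the canister,
/// e.g. via an ic-agent update call. Stubbed out here.
async fn upload_chunk(index: usize, chunk: Vec<u8>) -> Result<(), String> {
    let _ = (index, chunk);
    Ok(())
}

/// Upload a large blob as fixed-size chunks, with at most `concurrency`
/// chunk uploads in flight at any time.
async fn upload_file(data: Vec<u8>, chunk_size: usize, concurrency: usize) -> Result<(), String> {
    stream::iter(data.chunks(chunk_size).map(|c| c.to_vec()).enumerate())
        .map(|(i, chunk)| upload_chunk(i, chunk))
        .buffer_unordered(concurrency) // keep up to `concurrency` uploads in flight
        .try_collect::<Vec<_>>()
        .await?;
    Ok(())
}
```

Whether this actually beats a sequential upload depends on the chunk size and how much concurrency the network tolerates.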


Hi everyone, thank you for yesterday’s call (2024.07.25). This is the generated summary (short version, please find the long version here):
DFINITY is expanding its AI team by hiring an AI engineer specializing in GPU optimizations and deep learning libraries to work alongside the current team member, Islam, who has extensive experience in the Internet Computer (IC) and AI. The DeAI group discussed networking to find suitable candidates and expressed enthusiasm for the new role. The team is transitioning with the help of Ulan, planning milestones for GPU and CPU optimizations on IC, and working on various demos such as face recognition and GPT. Members shared interests in decentralized AI and gaming applications. Tim presented Hypercerts from Protocol Labs, which use semi-fungible certificates to incentivize and validate impact in projects like public goods and environmental remediation. The group discussed future collaborations, emphasizing decentralized AI development and community-driven efforts. Action items include sharing the job opening, continuing demo development, and exploring Hypercerts’ applications in AI projects.

Special thanks to Tim for sharing Hypercerts with us 💪

Links shared during the call:


Hi everyone, thank you for yesterday’s call (2024.08.01) and special thanks to Krish for sharing his work. This is the generated summary (short version, please find the long version here):

In the latest DeAI call, the success of the recent Web3 AI hackathon was highlighted; it saw high-quality submissions focused on AI development using web technologies. Winning projects, such as DeDa (a decentralized data marketplace), Clanopedia, and ICMLPL (which integrates the Flashlight deep learning library), will continue as developer grantees. The community is encouraged to participate in weekly calls for knowledge exchange, utilize the decentralized AI channel on Discord, and explore support programs like the Olympus accelerator for further growth and investment opportunities.

Links shared during the call:


Hi everyone, thank you for today’s call (2024.08.08). Special thanks to @icarus for sharing his latest research on viable GPU hardware options for the IC! This is the generated summary (short version, please find the long version here):

During today’s DeAI Working Group call for the Internet Computer, the primary focus was on discussing hardware options for optimizing AI workloads. The group explored various GPUs and AI accelerator cards, considering factors such as cost, power efficiency, and suitability for both inference and training. The discussion highlighted the potential of Tenstorrent’s Wormhole cards, which offer a balance of performance and affordability, and emphasized the importance of an open-source software stack for flexibility and problem-solving. There was also mention of ongoing efforts to integrate GPU functionalities into the IC’s host functions and the potential need for customized hardware solutions to support AI-driven tasks effectively.

Links shared during the call:


Hello! How can I join the next call?

Hi there, we meet each Thursday in the ICP Developer Community Discord. We use the voice channel for the call:

Looking forward to having you :+1:

Hi everyone, thank you for today’s call (2024.08.15). This is the generated summary (short version, please find the long version here):

During the DeAI Working Group call, a presentation was given on Clanopedia, a decentralized, verifiable data source built during the Web3 AI hackathon, focusing on creating a collective knowledge base using a vector database architecture stored on the Internet Computer Protocol. The discussion then shifted to documentation updates, with an extension of the deadline for contributions, and the team explored ongoing work on AI transformers for specific use cases, emphasizing the importance of community collaboration. The call also featured an in-depth discussion on AI regulation, particularly the European Union’s AI Act, its implications for high-risk AI applications, and the evolving landscape of global AI governance. The session wrapped up with reflections on the need for future workshops, code walkthroughs, and regular updates on AI regulations, highlighting the importance of staying informed on legal developments as AI technology advances.

Links shared during the call:

Please find Tim’s slides on AI regulation in the repo’s folder for today’s meeting too: DeAIWorkingGroupInternetComputer/WorkingGroupMeetings/2024.08.15/AI Law Regulation Compliance - ICP DeA WG Briefing 1_1.pdf at main · DeAIWorkingGroupInternetComputer/DeAIWorkingGroupInternetComputer · GitHub

Special thanks to @tinybird for the demo on Clanopedia and to Tim for sharing his expertise on AI regulations 💪


Hi everyone, thank you for today’s call (2024.08.22). This is the generated summary (short version, please find the long version here):
The DeAI Working Group for the Internet Computer discussed several key topics, including the one-year anniversary of running LLMs on the ICP, promotional activities, and the potential for organizing an AI symposium. The meeting also reviewed various AI projects on the Internet Computer, covering different programming languages and frameworks, such as TypeScript, C++, Python, and Rust, and discussed potential integrations with Awesome ICP and decentralized AI documentation. The group emphasized the importance of sharing and curating resources, training data, and ensuring comprehensive and up-to-date documentation for AI on ICP. Future discussions will focus on expanding these resources and fostering collaboration.

Links shared during the call:


Hi everyone, thank you for today’s call (2024.09.05). This is the generated summary (short version, please find the long version here):

In today’s DeAI Working Group call for the Internet Computer, discussions centered on AI development with a focus on large language models (LLMs), vector databases, and decentralized AI applications. Key insights included the development of personal AI systems using secure, tamper-proof vector databases on the blockchain, offering users control over their own models and data while avoiding the risks of centralized data collection. The group emphasized optimizing LLMs like LLaMA for specific tasks such as email search and educational tools. They also explored the potential for a decentralized marketplace for AI models, enabling secure monetization of AI contributions on the ICP platform, with a long-term vision similar to Hugging Face but enhanced by blockchain features.

Links shared during the call:


Hi everyone, thank you for today’s call (2024.09.26). This is the generated summary (short version, please find the long version here):

Today’s DeAI Working Group call for the Internet Computer focused on potential Delta V integrations, new participant introductions, and plans to publish a DeAI manifesto on ICP with an emphasis on accessibility, self-sovereignty, and AI safety. The group also explored on-chain vector databases, LLM scaling challenges, and standards for LLM inference. Future discussions will address cost efficiency, instruction limits, and the environmental impact of decentralized AI.

Links shared during the call:


Hi everyone, thank you for today’s call (2024.10.03). This is the generated summary (short version, please find the long version here):

In today’s DeAI Working Group call, participants discussed the challenges of implementing shared stable storage for canisters to avoid re-uploading large AI models across subnets. The conversation also focused on the serialization of tensors, exploring the need for standardized formats such as ONNX or MLIR for data sharing between canisters. The group also touched on the future potential of using GPU subnets for AI computation, dealing with issues like determinism and instruction limits, and the possibility of creating decentralized systems to rent out and protect training data for AI models.

Links shared during the call:


Has anyone faced a similar issue while working on deep learning, or does anyone have a sample for dividing work across multiple messages?
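
Something along these lines is what I mean by dividing the work: process a bounded batch per message and reschedule the rest, so no single message hits the instruction limit. A rough sketch assuming ic-cdk-timers (the queue, `BATCH`, and `process_item` are placeholders, not a working training loop):

```rust
use std::cell::RefCell;
use std::time::Duration;

/// Placeholder for one unit of work, e.g. a training example or a tensor shard.
type WorkItem = Vec<f32>;

thread_local! {
    // Illustrative in-memory work queue; real code might keep this in stable structures.
    static QUEUE: RefCell<Vec<WorkItem>> = RefCell::new(Vec::new());
}

/// Placeholder for a bounded slice of the deep-learning computation.
fn process_item(_item: &WorkItem) {}

/// Maximum items handled per message (tune against the instruction limit).
const BATCH: usize = 100;

/// Process at most `BATCH` items now, then schedule a fresh message
/// (a zero-delay timer) for whatever remains.
fn process_some() {
    QUEUE.with(|q| {
        let mut q = q.borrow_mut();
        let n = BATCH.min(q.len());
        for item in q.drain(..n) {
            process_item(&item);
        }
        if !q.is_empty() {
            // Continue in a new message instead of looping here.
            ic_cdk_timers::set_timer(Duration::ZERO, process_some);
        }
    });
}

#[ic_cdk::update]
fn start_processing() {
    ic_cdk_timers::set_timer(Duration::ZERO, process_some);
}
```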
