Hi everyone, thank you for today’s call (2025.11.06). Special thanks to @icarus for sharing his expertise! This is the generated summary (short version; please find the long version here): The WG discussed two strong open small-LLM contenders (IBM Granite 4 and ETH Apertus) and new TEEs running on AMD SEV-SNP. Hardware advances (Zen 5 EPYC Gen 3 nodes) could bring roughly 50% faster performance, but protocol-set instruction limits, not memory, remain the bottleneck for running larger models on-chain. The focus remains on hybrid architectures (canister logic plus adjacent AI services) for practical DeAI growth.
Links shared during the call:
- IBM Granite 4.0: Hyper-efficient, High Performance Hybrid Models for Enterprise
- IBM Granite 4.0 Tiny Preview: A sneak peek at the next generation of Granite models
- IBM becomes first major open-source AI model developer to earn ISO 42001 certification
- Granite 4.0 Nano: Just how small can you go?
- Apertus: a fully open, transparent, multilingual language model | ETH Zurich
- https://forum.dfinity.org/t/first-sev-snp-enabled-node-deployed/59499?u=icarus
- https://dashboard.internetcomputer.org/network/nodes/hckfw-kshvv-mzhew-h6isd-5q2wd-lde6w-uipmv-7gbuz-53wqz-cz5kx-oqe
- https://dashboard.internetcomputer.org/proposal/136408
- https://www.esafety.gov.au/industry/basic-online-safety-expectations