Do IC on-chain nodes have GPUs for training and inference for DeAI? If not, are there any plans?

I'm curious about it. Is parallel GPU compute necessary for DeAI?
I have briefly checked the IC AI documentation.

It would be great if on-chain GPUs were already accessible!

I found that using a Web2 GPT API from a frontend canister is not very difficult to implement.

There is a good example of this.
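To illustrate the approach of calling a Web2 GPT API from frontend code, here is a minimal sketch. It assumes an OpenAI-style chat-completions endpoint; the model name, endpoint URL, and helper names are my own illustrative choices, not from any official example. Note that shipping an API key in frontend code is insecure and shown here only for brevity:

```typescript
// Shape of a chat message in an OpenAI-style API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON request body for a chat-completion call.
// The default model name is an assumption; substitute your provider's model.
function buildChatRequest(messages: ChatMessage[], model = "gpt-4o-mini") {
  return { model, messages };
}

// Send the request from the browser. Frontend canister assets run
// client-side, so a plain fetch works. Do NOT embed a real API key in
// deployed frontend code; proxy it through a backend instead.
async function askGpt(apiKey: string, prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(
      buildChatRequest([{ role: "user", content: prompt }])
    ),
  });
  const data = await res.json();
  // Return the first completion's text.
  return data.choices[0].message.content;
}
```

Since the call happens off-chain in the user's browser, none of this touches on-chain compute, which is why it is comparatively easy to implement.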

Is it necessary to build fully on-chain AI products?

AI is quite trendy these days. Is anyone else interested in this?

Dom talked about on-chain AI computation capability.

At around 22:00, he explains some canister capabilities.

But did I miss the part about GPUs on chain?

At 24:38 he mentions Wasm VM memory: wasm32 has 4 GB of memory, and an upgrade to wasm64 is planned in a few months.

But I have a question:
Does this on-chain memory have performance similar to the GDDR6 memory of an NVIDIA B200 GPU?
And how does the IC replica handle parallel, computation-heavy tasks the way a B200 GPU does?

You can check out DeAI projects on ICP via the DeAI on ICP Working Group GitHub and the Internet Computer Ecosystem Page - AI.

You will see several on-chain LLMs already on ICP, most notably ICPP-LLM.

Research on and onboarding of GPU-based nodes are ongoing.