Add GPU/TPU machines to the ICP node ecosystem for AI, ML, and LLM model training and execution. These AI-nodes would be a specialized type of ICP node machine, typically GPU/TPU servers and any future AI-optimized computing hardware. DFINITY can define the machine requirements for AI-nodes. These AI-nodes could then be on-boarded just like regular computing nodes, except that they would live in specialized AI-subnets. I am sure regular ICP nodes can support LLMs, but having GPUs/TPUs for AI workloads would give ICP the edge. Once these AI-nodes are on-boarded, users could train LLMs on them.
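
To make the idea a bit more concrete, here is a minimal sketch of what the onboarding/classification step could look like: a machine spec is checked against a hardware bar that DFINITY would define, and qualifying machines get routed to an AI-subnet instead of a regular application subnet. Everything here is hypothetical (the names `AiNodeSpec`, `MIN_AI_SPEC`, `SubnetKind`, and the threshold numbers are illustrative assumptions, not real IC-OS or registry types).

```rust
// Hypothetical sketch, not part of any existing ICP/DFINITY codebase:
// classify a newly on-boarded node as a regular application-subnet node
// or an AI-subnet node based on its hardware spec.

#[derive(Debug)]
struct AiNodeSpec {
    gpu_count: u32,        // number of GPUs/TPUs on the machine
    gpu_memory_gb: u32,    // memory per accelerator, in GB
    system_memory_gb: u32, // host RAM, in GB
    nvme_storage_tb: u32,  // local NVMe storage, in TB
}

#[derive(Debug, PartialEq)]
enum SubnetKind {
    /// Existing general-purpose application subnet.
    Application,
    /// Proposed specialized subnet of GPU/TPU "AI-nodes".
    Ai,
}

/// Illustrative minimum hardware bar DFINITY could define for AI-nodes.
/// The actual requirements would come from a governance proposal.
const MIN_AI_SPEC: AiNodeSpec = AiNodeSpec {
    gpu_count: 4,
    gpu_memory_gb: 80,
    system_memory_gb: 512,
    nvme_storage_tb: 30,
};

/// Decide which kind of subnet a node belongs to. Regular onboarding stays
/// unchanged; machines meeting the AI bar are routed to an AI-subnet.
fn classify_node(spec: &AiNodeSpec) -> SubnetKind {
    let meets_ai_bar = spec.gpu_count >= MIN_AI_SPEC.gpu_count
        && spec.gpu_memory_gb >= MIN_AI_SPEC.gpu_memory_gb
        && spec.system_memory_gb >= MIN_AI_SPEC.system_memory_gb
        && spec.nvme_storage_tb >= MIN_AI_SPEC.nvme_storage_tb;
    if meets_ai_bar {
        SubnetKind::Ai
    } else {
        SubnetKind::Application
    }
}

fn main() {
    // A GPU server offered by a node provider.
    let gpu_server = AiNodeSpec {
        gpu_count: 8,
        gpu_memory_gb: 80,
        system_memory_gb: 1024,
        nvme_storage_tb: 60,
    };
    // A current-generation CPU-only replica machine.
    let cpu_server = AiNodeSpec {
        gpu_count: 0,
        gpu_memory_gb: 0,
        system_memory_gb: 512,
        nvme_storage_tb: 30,
    };

    assert_eq!(classify_node(&gpu_server), SubnetKind::Ai);
    assert_eq!(classify_node(&cpu_server), SubnetKind::Application);
    println!("gpu_server -> {:?}", classify_node(&gpu_server));
    println!("cpu_server -> {:?}", classify_node(&cpu_server));
}
```

The point of the sketch is only that the existing onboarding flow would not need to change much: the same node-provider process applies, with one extra classification step deciding whether the machine joins a regular subnet or an AI-subnet.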