Ideas on how to optimise AI capabilities on ICP

The Problem: AI run as smart contracts on ICP is currently too limited, which prevents its application to real-world use cases.

Maybe Solution: Separate the AI into subnets, where computation is split across different spaces (smart contracts/canisters) connected via smart contracts. Lots of simple, small AI models can be linked through smart contract channels. Input data is filtered through many small specialised AIs, each of which sends its output to the relevant next step of computation hosted in a different space (smart contract/canister).

This lets the AI's computation run in different areas, each optimised for its task, and it reduces the computational dependency on a single, central AI model.

So input data is received, and a smart contract (blue in my diagram) reads it and chooses where to send each part for optimal computation across the individual specialised AI subnets. Data can be split up again within subnets if needed, and some subnets may need to combine their outputs into a shared output. Once all subnets have computed an output, the results can be collected and formatted into a final group output.
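
A minimal sketch of what that routing step could look like as a Rust canister, assuming each specialised model lives in its own canister and exposes a hypothetical `infer` method (the method name, payload types, and passing the specialist Principals in as an argument are all illustrative assumptions, not real ICP APIs beyond `ic_cdk::call`):

```rust
use candid::Principal;

#[ic_cdk::update]
async fn classify(input: Vec<u8>, specialists: Vec<Principal>) -> Vec<String> {
    let mut outputs = Vec::new();
    for canister in specialists {
        // Inter-canister call: each specialist is assumed to expose a
        // hypothetical `infer(blob) -> text` method.
        let (result,): (String,) = ic_cdk::call(canister, "infer", (input.clone(),))
            .await
            .expect("specialist call failed");
        outputs.push(result);
    }
    // Collect every subnet's output into one group result.
    outputs
}
```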

Share the computation, baby.

This can help AI in lots of ways. For example, you can combine different types of AI models, like speech-to-text, image classification, text analysis etc. (with a video/audio recording as the input data). Or you can optimise single-task neural networks like image classifiers: each layer of an image classifier can run in its own smart contract/canister (space), and one layer's outputs can be sent to the next layer as inputs via an on-chain smart contract. Maybe we can combine this idea to help INN?
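
A rough sketch of one such "layer" canister, assuming each layer is installed knowing the Principal of the next layer; the `forward`/`set_next_layer` method names, the `Vec<f32>` activation encoding, and the ReLU placeholder are all made up for illustration:

```rust
use candid::Principal;
use std::cell::RefCell;

thread_local! {
    // Principal of the next layer's canister; None means this is the last layer.
    static NEXT_LAYER: RefCell<Option<Principal>> = RefCell::new(None);
}

#[ic_cdk::update]
fn set_next_layer(next: Principal) {
    NEXT_LAYER.with(|n| *n.borrow_mut() = Some(next));
}

#[ic_cdk::update]
async fn forward(activations: Vec<f32>) -> Vec<f32> {
    // Placeholder for this layer's real computation (matmul + bias + ReLU, etc.).
    let output: Vec<f32> = activations.iter().map(|x| x.max(0.0)).collect();

    match NEXT_LAYER.with(|n| n.borrow().clone()) {
        // Send this layer's output to the next layer canister as its input.
        Some(next) => {
            let (final_out,): (Vec<f32>,) = ic_cdk::call(next, "forward", (output,))
                .await
                .expect("next-layer call failed");
            final_out
        }
        // Last layer: this output is the final result.
        None => output,
    }
}
```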

Another idea is to split an image-classifying AI into a collective formed from hyper-specialised AIs, for example separate image classification models built for human faces, animal types, weather types, plant types, text/handwriting reading etc. A general image classification AI can then be the collective of these specialised subnets. This should all be open source, so people can choose which AI models they want and build their own tailored collective AI from the relevant subnets.
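
That tailored collective could be as simple as a canister storing which specialist model canisters a user has opted into. A hypothetical sketch (all names invented):

```rust
use candid::Principal;
use std::cell::RefCell;

thread_local! {
    // The specialist model canisters this user has opted into.
    static MY_COLLECTIVE: RefCell<Vec<Principal>> = RefCell::new(Vec::new());
}

#[ic_cdk::update]
fn add_specialist(model: Principal) {
    // e.g. the "plant classifier" or "handwriting reader" canister.
    MY_COLLECTIVE.with(|c| c.borrow_mut().push(model));
}

#[ic_cdk::query]
fn list_specialists() -> Vec<Principal> {
    MY_COLLECTIVE.with(|c| c.borrow().clone())
}
```

A router like the earlier sketch could then fan input out over `list_specialists()`.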

Specialised AIs as interoperable subnets make for customisable AI. This has many use cases and lets users build AI aligned as closely as possible to their needs, whilst keeping the security of AI run as a smart contract.

TLDR: AI computation can be separated for optimal computation. The AI stays on-chain, but is split into smaller, simpler tasks to manage the limitations of AI on ICP. The smaller outputs can then be grouped and fed through more layers of AI to build collective outputs.

PS. I am not a developer and cannot code, so my knowledge has many gaps. All criticism is welcome and appreciated.


Firstly, to be fair, I think onchain AI on ICP is phenomenal, true novelty, and it keeps getting better. Recent improvements, including deterministic onchain floating-point calculation, are pretty nuts. Dfinity is a blockchain wizard workshop, and elevates the entire software space. :ok_hand: :pray:

But it's an interesting idea. I have thought about a similar topic, but more generally (across the software space): how do we achieve infinite computation by combining smart contracts together, all onchain? It's a broad and visionary topic with lots of nuance as well.

Basically, I'm seeing smart canister fractals :star_struck:, where within one canister you can have many dApp microagents (the Super dApp idea), but you can also combine many smart contracts together (distributing compute logically) to optimize and lessen the barrier of scale (especially for some niches). In web2, this is called map-reduce. In web3.0, and for us, maybe it can be called a super-agent, or smth. “As above, so below”: onchain agents everywhere :grinning:
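
As a loose sketch of that map-reduce analogy on ICP, assuming worker canisters that each expose a hypothetical `map_shard` method returning a numeric partial result:

```rust
use candid::Principal;
use futures::future::join_all;

#[ic_cdk::update]
async fn map_reduce(shards: Vec<Vec<u8>>, workers: Vec<Principal>) -> u64 {
    // Map: one shard per worker canister, all calls in flight concurrently.
    let calls = shards
        .into_iter()
        .zip(workers)
        .map(|(shard, worker)| ic_cdk::call::<_, (u64,)>(worker, "map_shard", (shard,)));
    let partials = join_all(calls).await;

    // Reduce: fold the partial results into a single output.
    partials.into_iter().filter_map(Result::ok).map(|(n,)| n).sum()
}
```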


Thanks for responding.

Maybe I was too harsh about AI on ICP. I made the statement after seeing someone get a Llama 3 8B LLM running on ICP, but I think it cost about 16,666x more than ChatGPT to use, which made me want to look for solutions to improve AI/computation on ICP.

Really love ur ideas + diagrams. Smart canister fractals are a cool asf concept.

The Super dApp article left me with some questions/ideas. P.S. AMAZING ARTICLE BTW, thx for sending it on.

Can we build a new type of canister that shares a memory ledger with other canisters?

Can canisters not be nodes in their own simple blockchain, where they tx outputs to act as inputs for further computation in different canisters?

Maybe if they have a memory ledger where each canister's computation output is updated, then they don't need to make an inter-canister call, as the canisters have a merged section of memory where outputs can be used as inputs. Can canisters even be built like this? idk.

[diagram: one canister sharing its output with another canister through a shared memory ledger]

The above depicts a way one canister can share output with another canister through a shared memory ledger. What if there was a way to have a group shared memory ledger across 10 canisters or more? That way they could share data in more complex patterns/orders for varied use cases, where the chain/order of computation can change.

This relates back to this question: can canisters act as nodes in their own simple blockchain, where they tx outputs to act as inputs for further computation in different canisters? I.e. a shared ledger that can share simple data (outputs) between canisters without making an inter-canister call.
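
Since canisters on ICP don't share memory today, the closest approximation might be a dedicated "ledger" canister that group members write outputs to and read inputs from (under the hood this is still inter-canister calls, just centralised in one place). A hypothetical sketch, with all method names invented:

```rust
use candid::Principal;
use std::cell::RefCell;
use std::collections::BTreeMap;

thread_local! {
    // Latest output posted by each member canister of the group.
    static LEDGER: RefCell<BTreeMap<Principal, Vec<u8>>> = RefCell::new(BTreeMap::new());
}

#[ic_cdk::update]
fn post_output(data: Vec<u8>) {
    // The caller's Principal identifies which node produced this output.
    let producer = ic_cdk::caller();
    LEDGER.with(|l| l.borrow_mut().insert(producer, data));
}

#[ic_cdk::query]
fn read_output(producer: Principal) -> Option<Vec<u8>> {
    LEDGER.with(|l| l.borrow().get(&producer).cloned())
}
```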

I'm not a developer and have limited knowledge of ICP and how it all works technically, but these are just some of my thoughts.

All feedback welcome

ps. we should deffo call them super agents lol

Extra:

These sub-canisters within canisters make me think about ecosystems in life, and how you can always zoom in closer and see a more detailed ledger of energy exchange: i.e. looking at the universe, zooming to a planet, zooming to a rainforest, zooming to a small river, zooming to a plant in the river, zooming to the chemical exchanges within the plant, etc. I suppose what I'm trying to say is that we should build ICP to mimic systems in nature: ecosystems involve energy exchange (a tx), and there are infinite layers within layers. Loads of smaller txs forming a bigger tx, which is itself a small tx in a much bigger tx, and so on.

Natural processes and systems follow the same pattern. A plant has specialised areas (sub-canisters) that deal with different tx types (water, light, sugar etc.), and it also has a shared ledger where the sub-canisters communicate their output state, so the next system can use that as an input, helping the plant run as a collective. Businesses follow this system as well (a manufacturing line, etc.).

Random idea:

Also, nature uses the mycelium network almost as an organic blockchain to receive, send and share node (plant life/roots) data. For example, the network will know when a plant is low on a certain mineral and will then tx the required mineral from the other nodes. Could canisters do the same thing with cycles, since cycles are like life energy to a canister?
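
Cycles can in fact be moved between canisters. A sketch of a "hub" canister topping up a low member, using the management canister's `deposit_cycles` call from ic-cdk; the threshold check, top-up amount, and `request_top_up` method name are assumptions:

```rust
use candid::Principal;
use ic_cdk::api::management_canister::main::{deposit_cycles, CanisterIdRecord};

const TOP_UP: u128 = 1_000_000_000_000; // ~1T cycles, an arbitrary amount

#[ic_cdk::update]
async fn request_top_up(low_canister: Principal) {
    // Only share if this hub itself has cycles to spare.
    if ic_cdk::api::canister_balance128() > 2 * TOP_UP {
        // Move cycles from this canister's balance to the low node.
        deposit_cycles(CanisterIdRecord { canister_id: low_canister }, TOP_UP)
            .await
            .expect("deposit_cycles failed");
    }
}
```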


Just saw this.

‘Canisters communicate with one another by RPC’

Maybe we need to build a new type of canister that doesn't need to request and respond; instead, it just sends its outputs to a ledger, which is like a mini blockchain between all the canisters (nodes). That way they don't need to request and wait for consensus every time data needs to be shared; it just self-updates in their shared memory ledger.

Then, after the shared memory ledger fills up, it runs an inter-canister call to free up its shared memory, and the info can be stored in a digital warehouse (another canister designed for storing past txs between canister nodes).
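
A sketch of that push-style pattern using ic-cdk's one-way `notify` (no response is awaited); the ledger and warehouse canisters and their method names are hypothetical:

```rust
use candid::Principal;
use ic_cdk::api::call::notify;

#[ic_cdk::update]
fn publish_output(ledger: Principal, output: Vec<u8>) {
    // Fire-and-forget: no reply is awaited, so this canister is not
    // blocked on the ledger canister responding.
    notify(ledger, "post_output", (output,)).expect("notify failed");
}

// On the ledger side, old entries can be flushed to the "digital
// warehouse" canister to free up the shared memory ledger.
#[ic_cdk::update]
async fn archive(warehouse: Principal, old_entries: Vec<Vec<u8>>) {
    let _: () = ic_cdk::call(warehouse, "store", (old_entries,))
        .await
        .expect("archive call failed");
}
```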