I haven't coded much in Rust on the IC. If the canister is able to load the models from Rust, please let me know lol
Sure, I will post the result here
Hi everyone, I have tested the pre-trained deep learning model named "albert2" on the IC.
I added an ONNX model to the canister project folder and used the Path lib to read it.
Then I added the tokenizer.json file for performing natural language tasks.
Here is my Cargo.toml:
```toml
[dependencies]
candid = "0.7.14"
ic-cdk = "0.5.2"
ic-cdk-macros = "0.5.2"
tract-onnx = "0.19.2"
ndarray = "0.15.6"
tokenizers = "0.13.2"
```
- tract-onnx: a library that can load models built with PyTorch, TensorFlow, or JAX (exported to ONNX format).
- ndarray: a library that can perform multi-dimensional tensor computation.
```toml
[target.wasm32-unknown-unknown.dependencies]
getrandom = { version = "0.2.6", features = ["js"] }
```
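To make the "read the model with the Path lib" step concrete, here is a minimal, hedged sketch of loading serialized model bytes from disk. The function name and the file path are illustrative, not from the original post; note that a deployed canister has no filesystem at runtime, so in practice the model bytes are usually embedded at compile time (e.g. with `include_bytes!`) or uploaded via an update call.

```rust
use std::path::Path;

// Read the serialized ONNX model from disk. On the IC itself there is no
// filesystem at runtime, so in a deployed canister the model bytes are
// typically embedded with include_bytes! or uploaded via an update call.
// The function name and paths here are illustrative.
fn load_model_bytes(path: &Path) -> std::io::Result<Vec<u8>> {
    std::fs::read(path)
}

fn main() {
    // Write a stand-in file so the example is self-contained; a real
    // project would point this at its exported .onnx file.
    std::fs::write("model_stub.onnx", b"not a real onnx model").unwrap();
    let bytes = load_model_bytes(Path::new("model_stub.onnx")).unwrap();
    println!("loaded {} bytes", bytes.len()); // prints "loaded 21 bytes"
}
```

The bytes would then be handed to tract-onnx to build a runnable model.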
The errors came up in the following areas:
- random dependencies: when you deploy canisters that need randomness, you must add `getrandom = { version = "0.2.6", features = ["js"] }` to the wasm target dependencies.
- header file error: this is what I ran into here:
In short, the first experiment failed.
But I already have a road map for the next experiment, so I will continue to post my experiment results here.
If anyone has ideas, please post them here and let me know, thx.
BTW, all code was written in Rust.
Following your experiment. I think this is a domain the foundation should definitely invest in.
Do you have a GitHub repo link?
As far as we have seen in practice, the IC currently has a limitation on updating variable state, because each round of consensus ends up growing the memory heap. Meanwhile, I am not so sure how a canister handles the time-out problem during execution. If training takes a while and computation is ongoing, how does the canister manage the call?
We have DTS now, which helps a bit but is still limited. What you can do is a repeated self-`await`, because every `await` is a commit point and starts a new message (and therefore a fresh execution limit). If you could e.g. perform one epoch of training in a single message, then you can just `await` after every epoch and you should be fine.
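The "one epoch per message" idea can be sketched with the canister machinery stripped out: keep resumable training state, and make each step a bounded unit of work. This is a stdlib-only illustration; all names (`Trainer`, `run_one_epoch`) are hypothetical, and in a real canister the loop would be driven by repeated self-calls with an `await` between epochs.

```rust
// Resumable training state: each call to run_one_epoch does a bounded
// amount of work that must fit within a single message's instruction
// limit. The loss update is a stand-in for real training work.
struct Trainer {
    epoch: u32,
    total_epochs: u32,
    loss: f64,
}

impl Trainer {
    fn new(total_epochs: u32) -> Self {
        Trainer { epoch: 0, total_epochs, loss: 1.0 }
    }

    // One bounded unit of work (one "epoch").
    fn run_one_epoch(&mut self) {
        self.loss *= 0.5; // placeholder for actual gradient steps
        self.epoch += 1;
    }

    fn finished(&self) -> bool {
        self.epoch >= self.total_epochs
    }
}

fn main() {
    let mut t = Trainer::new(3);
    // In a canister this loop would be repeated self-calls, one epoch
    // per message, with an await (commit point) between iterations.
    while !t.finished() {
        t.run_one_epoch();
    }
    println!("epochs={} loss={}", t.epoch, t.loss);
}
```

The key design point is that all progress lives in the state struct, so execution can stop and resume at any epoch boundary.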
No, not training, just upload your model to IC for inference.
Sorry, I deleted it, but I will upload a new repo later.
I am also very interested in exploring the inference option using C++ compiled to wasm.
Hi all, I have a question: for my experiment I need a basic Rust random library that can generate different kinds of distributions, but most Rust random libraries cannot currently be compiled for the IC. Can anyone recommend a simple random library? We could build a basic IC random library on top of it.
Here is the code from the open-storage repo:
```rust
pub struct CanisterEnv {
    rng: StdRng,
}

impl CanisterEnv {
    pub fn new() -> Self {
        CanisterEnv {
            // Seed the PRNG with the current time.
            //
            // This is safe since all replicas are guaranteed to see the same result of
            // timestamp::now() and it isn't easily predictable from the outside.
            rng: {
                let now_millis = time::now_nanos();
                let mut seed = [0u8; 32];
                seed[..8].copy_from_slice(&now_millis.to_be_bytes());
                seed[8..16].copy_from_slice(&now_millis.to_be_bytes());
                seed[16..24].copy_from_slice(&now_millis.to_be_bytes());
                seed[24..32].copy_from_slice(&now_millis.to_be_bytes());
                StdRng::from_seed(seed)
            },
        }
    }
}

impl Environment for CanisterEnv {
    fn now(&self) -> TimestampMillis {
        time::now_millis()
    }

    fn caller(&self) -> Principal {
        ic_cdk::caller()
    }

    fn canister_id(&self) -> CanisterId {
        ic_cdk::id()
    }

    fn random_u32(&mut self) -> u32 {
        self.rng.next_u32()
    }

    fn cycles_balance(&self) -> Cycles {
        ic_cdk::api::canister_balance().into()
    }
}
```
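The seeding trick in the code above simply tiles the 8-byte big-endian timestamp four times to fill a 32-byte PRNG seed. Here is a stdlib-only sketch of just that step; the constant passed in below is a made-up value standing in for `time::now_nanos()`:

```rust
// Fill a 32-byte PRNG seed by repeating an 8-byte timestamp, as in the
// CanisterEnv code above. chunks_exact_mut(8) walks the seed in 8-byte
// blocks, so every block ends up holding the same timestamp bytes.
fn seed_from_timestamp(now_nanos: u64) -> [u8; 32] {
    let bytes = now_nanos.to_be_bytes(); // 8 bytes, big-endian
    let mut seed = [0u8; 32];
    for chunk in seed.chunks_exact_mut(8) {
        chunk.copy_from_slice(&bytes);
    }
    seed
}

fn main() {
    // Hypothetical timestamp; a canister would use its consensus time.
    let seed = seed_from_timestamp(1_700_000_000_000_000_000);
    // Every 8-byte block of the seed is identical.
    assert!(seed.chunks_exact(8).all(|c| c == &seed[..8]));
    println!("seed[0..8] = {:?}", &seed[..8]);
}
```

Note that such a seed has only 64 bits of entropy (and is derived from consensus time), which is why the thread's later suggestion to use `raw_rand` is the better source of randomness.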
Hi @cymqqqq, to get a random value try calling `raw_rand` in `ic_cdk::api::management_canister::main`.
Hi there, I'm glad to tell you guys that I have successfully deployed a word2vec canister model on the IC; it's my second machine learning experiment. I will post the GitHub repo link later.
Though it’s a toy version, I will continue to update it.
congrats, fantastic work!
It's a toy-version machine learning canister, so anyone can deploy and test it.
BTW, it's a rough version.
But I will continue to build a tensor library that can perform machine learning algorithms written in Rust, like KNN, simple neural networks, etc. I will post the link later.
So everything started from the first code.
I hope the ecosystem of machine learning in IC will be more advanced in 2023.
Jesus Christ… that can't be real. I am currently very busy, but I will try your code ASAP.
If inference (no training) is possible on the IC, that would be very mind-blowing, because it means we can start to gently say goodbye to AWS Lambda and SageMaker :wave: tbh I don't believe it's happening now.
Hi, I understand your points; right now I am just writing simple machine learning code to run.
If we want to perform more machine learning computation on the IC, there is a lot of work to do; the first step is to write a basic tensor library, which is what I am doing now.
Right now it's impossible to run a large-scale machine learning model on the IC.
Yes, it's possible in theory, as follows:
- transform pre-trained deep learning models into .wasm format (like ONNX for the .onnx format, Keras for the .h5 format, and the others),
- put the wasm file inside a canister and deploy it on the IC.
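The second step assumes a standard canister project. For reference, a minimal `dfx.json` for a Rust canister might look like the fragment below; the canister name, package, and candid path are placeholders, not from the original post:

```json
{
  "canisters": {
    "ml_inference": {
      "type": "rust",
      "package": "ml_inference",
      "candid": "src/ml_inference/ml_inference.did"
    }
  }
}
```

With that in place, `dfx deploy` builds the wasm (model weights embedded or uploaded separately) and installs it on the IC.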
Now that someone got LLaMA and Whisper inferencing running on a Macbook M1, I’m hoping that these models can somehow be ported to the IC more easily.