I was experimenting with a simple Motoko program to do some benchmarking of the IC and see what it's capable of:
import Array "mo:base/Array";
import Iter "mo:base/Iter";

actor {
  var nums : [[Int]] = [];

  public query func size() : async Int {
    var sum = 0;
    for (x in nums.vals()) {
      sum += x.size();
    };
    return sum;
  };

  public query func sum() : async Int {
    var sum : Int = 0;
    for (x in nums.vals()) {
      for (y in x.vals()) {
        sum += y;
      };
    };
    return sum;
  };

  public func append(size : Int) {
    // TODO: Could pre-alloc this if I could figure out how to cast
    // Int to Nat or make IDL parse a Nat.
    var new : [Int] = [];
    // Note: Iter.range is inclusive on both ends, so this appends size + 1 elements.
    for (i in Iter.range(0, size)) {
      new := Array.append<Int>(new, [i]);
    };
    nums := Array.append<[Int]>(nums, [new]);
  };
};
I found that if I ran a bunch of calls to append in parallel, I would push my local replica into a bad state where it would no longer reply to any calls (incidentally, the dfx client doesn't seem to have a default timeout, which is a bit annoying). Force-restarting my local replica would let it recover.
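For reference, the parallel load was roughly this kind of loop (a reconstruction, not my exact script; the canister name `bench` is a placeholder for whatever is in your dfx.json):

```shell
# Hypothetical reconstruction of the parallel-load loop.
# `bench` is a placeholder canister name.
parallel_append() {
  local calls=$1 size=$2
  for _ in $(seq "$calls"); do
    dfx canister call bench append "($size)" &   # fire calls concurrently
  done
  wait  # block until every background call returns (or hangs)
}
# e.g. parallel_append 10 1000
```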
Eventually, after doing this long enough, I seem to have borked my local replica, and it just spams this message when I start it:
Jan 10 05:52:00.405 WARN s:fscpm-uiaaa-aaaaa-aaaap-yai/n:fhkdv-npwzk-k6xia-pkinl-pzgfn-tifm2-stjfg-2udsp-v5ejo-zx4ud-nqe/ic_consensus/block_maker Cannot propose block as the locally available validation context is smaller than the parent validation context (locally available=ValidationContext { registry_version: 1, certified_height: 4104, time: Time(1610257920403552000) }, parent context=ValidationContext { registry_version: 1, certified_height: 4157, time: Time(1610256351441464000) })
Now any calls I make with the dfx CLI return:
Could not reach the server
I assume I can just wipe my local replica (how do I do that?), but I thought I'd bring it up as a bug.
Thanks Richard. The local replica state is local to each project: there's a .dfx directory created when you first run dfx start within the project directory.
You can run dfx start --clean or manually delete this .dfx directory to reset the state for that project only.
Any idea why I'm struggling to make parallel calls? I'm hoping that the dfx CLI is just doing something non-thread-safe in my local project and it's not an issue with the replica itself.
Any suggestions on the best way to bulk load data? The consensus latency is fairly high, and the cycle limit caps how much data I can load per call, so I'm trying to figure out how to get a few GiBs of data in there quickly without waiting on a long series of synchronous consensus round trips.
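To put rough numbers on that: if each update call can carry at most P bytes of payload (I haven't pinned down the exact per-message limit, so P here is an assumption), loading D bytes takes ceil(D / P) calls. A quick back-of-envelope helper:

```shell
# Back-of-envelope: how many update calls to load `data_bytes`
# at `per_call_bytes` of payload per call (ceiling division).
calls_needed() {
  local data_bytes=$1 per_call_bytes=$2
  echo $(( (data_bytes + per_call_bytes - 1) / per_call_bytes ))
}
# e.g. 2 GiB at a hypothetical 2 MiB per call:
#   calls_needed $((2 * 1024**3)) $((2 * 1024**2))   # -> 1024
```

Even at ~1000 calls, multi-second consensus latency per sequential call adds up fast, which is the motivation for batching or parallelizing.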
@Ori It seems really easy to break my local replica and get it into a bad state. Is this representative of how full nodes will behave? All it takes is making 10 concurrent update calls for about a minute before the whole thing becomes unresponsive.
So the bash script works fine for me, with 10 iterations. I haven't tried the JS version, but it seems you are making 10 parallel calls for each append, that's 10^100 calls…
Hmm, so interestingly, I tried deploying to the Sodium IC and that seems to work much better. My script did not break the Sodium IC, and I now get an error about my canister exceeding its memory allocation. I should be nowhere near the 4 GiB limit, but I assume the Sodium test net has an artificially low limit since no one is paying for it?
~ $ dfx canister install --help
dfx-canister-install
Deploys compiled code as a canister on the Internet Computer

USAGE:
    dfx canister install [FLAGS] [OPTIONS] [canister-name]

ARGS:
    <canister-name>    Specifies the canister name to deploy. You must specify either canister
                       name or the --all option

FLAGS:
        --all           Deploys all canisters configured in the project dfx.json files
        --async-call    Specifies not to wait for the result of the call to be returned by polling
                        the replica. Instead return a response ID
    -h, --help          Prints help information
    -V, --version       Prints version information

OPTIONS:
        --argument <argument>    Specifies the argument to pass to the method
        --argument-type <argument-type>
            Specifies the data type for the argument when making the call using an argument
            [possible values: idl, raw]
    -c, --compute-allocation <compute-allocation>
            Specifies the canister's compute allocation. This should be a percent in the range
            [0..100]
        --memory-allocation <memory-allocation>
            Specifies how much memory the canister is allowed to use in total. This should be a
            value in the range [0..256 TB]
    -m, --mode <mode>
            Specifies the type of deployment. You can set the canister deployment modes to install,
            reinstall, or upgrade [default: install] [possible values: install, reinstall, upgrade]
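Given those flags, one way to try raising the limit is to reinstall with an explicit allocation. This is only a sketch: the canister name `bench` and the `4GB` value are placeholders (the help above doesn't show the exact value syntax --memory-allocation accepts), and note that --mode reinstall discards the canister's existing state.

```shell
# Sketch: reinstall a canister with an explicit memory allocation.
# NOTE: --mode reinstall discards the canister's existing state.
reinstall_with_memory() {
  local canister=$1 allocation=$2   # e.g. reinstall_with_memory bench 4GB
  dfx canister install "$canister" --mode reinstall --memory-allocation "$allocation"
}
```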