Canic 0.18.x Architecture: Externalized Wasm, Pipeline-Driven Deployment, and Clean Control/Data Separation

Overview

The current system represents a deliberate architectural shift away from embedded payloads and implicit bootstrap behavior toward a fully externalized, pipeline-driven deployment model. The key principle is:

Deployable Wasm is no longer owned by canisters; it is injected via the deployment pipeline.

This change eliminates hidden state, reduces binary size, and establishes a clear separation between control plane (root) and data plane (wasm_store).


Core Architecture

1. Root Canister (Control Plane)

The root canister is responsible for:

  • Bootstrap sequencing

  • Publication binding lifecycle (active / detached / retired)

  • Store selection and rollover

  • Coordination of install and upgrade workflows

  • Admin and pipeline entrypoints

Critically:

Root no longer embeds or stores deployable Wasm payloads.

It only:

  • Orchestrates

  • Validates

  • Routes data


2. WasmStore Canister (Data Plane)

The wasm_store is a pure storage and serving layer:

  • Stores approved release manifests

  • Stores chunk metadata

  • Stores raw Wasm chunks

  • Handles store-local GC

It:

  • Starts empty

  • Gains data only via publication

  • Has no bootstrap knowledge

  • Has no lifecycle authority

WasmStore is a passive data container.

Example pipeline staging output during bootstrap:

Including bootstrap exception role 'wasm_store' ahead of the config-defined release set
Submitting bootstrap role first so root can leave bootstrap wait state
Submitting role 'wasm_store' to root for wasm_store publication (710.75 KiB, 1 chunks)
Submitting role 'app' to root for wasm_store publication (647.19 KiB, 1 chunks)
Submitting role 'minimal' to root for wasm_store publication (647.19 KiB, 1 chunks)
Submitting role 'scale' to root for wasm_store publication (647.21 KiB, 1 chunks)
Submitting role 'scale_hub' to root for wasm_store publication (663.26 KiB, 1 chunks)
Submitting role 'shard' to root for wasm_store publication (647.24 KiB, 1 chunks)
Submitting role 'shard_hub' to root for wasm_store publication (691.72 KiB, 1 chunks)
Submitting role 'test' to root for wasm_store publication (663.12 KiB, 1 chunks)
Submitting role 'user_hub' to root for wasm_store publication (690.06 KiB, 1 chunks)
Submitting role 'user_shard' to root for wasm_store publication (675.71 KiB, 1 chunks)
Resuming root bootstrap after full release set staging
Waiting for root to reach READY

3. Deployment Pipeline (New First-Class Actor)

The system now relies on an external deployment pipeline to:

  • Stage bootstrap payloads

  • Publish versioned releases

  • Drive initial system readiness

This replaces all previous implicit embedding or self-seeding behavior.


Bootstrap Flow

Bootstrap is now explicit, staged, and externally driven.

Steps

  1. Pipeline stages bootstrap manifest + chunks into root

    • via:

      • canic_template_stage_manifest_admin

      • canic_template_prepare_admin

      • canic_template_publish_chunk_admin

  2. Root waits for bootstrap readiness

    • bootstrap will not proceed until data is present
  3. Pipeline triggers bootstrap continuation

    • canic_wasm_store_bootstrap_resume_root_admin
  4. Root creates wasm_store

    • if not already present
  5. Root publishes staged data into the store

    • via:

      • canic_template_publish_to_current_store_admin
  6. Root switches to store-backed operation

    • imports catalog

    • proceeds with normal install flow
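The gating in steps 2 and 3 can be modelled as a small state machine. The sketch below is illustrative Rust, not the actual Canic types; it only demonstrates the no-fallback property described next: resume fails unless data has been staged.

```rust
// Minimal model of the staged, externally driven bootstrap.
// All names are illustrative assumptions, not real Canic types.

#[derive(Debug, PartialEq)]
enum BootstrapPhase {
    WaitingForStage,
    Staged,
    Ready,
}

struct RootBootstrap {
    phase: BootstrapPhase,
    staged_chunks: usize,
}

impl RootBootstrap {
    fn new() -> Self {
        Self { phase: BootstrapPhase::WaitingForStage, staged_chunks: 0 }
    }

    // Pipeline stages a manifest chunk into root (step 1).
    fn stage_chunk(&mut self) {
        self.staged_chunks += 1;
        self.phase = BootstrapPhase::Staged;
    }

    // Pipeline triggers continuation (step 3); refuses to run on empty state.
    fn resume(&mut self) -> Result<(), &'static str> {
        if self.staged_chunks == 0 {
            return Err("bootstrap cannot proceed without staged data");
        }
        // Steps 4-6 (create store, publish, switch over) would run here.
        self.phase = BootstrapPhase::Ready;
        Ok(())
    }
}

fn main() {
    let mut root = RootBootstrap::new();
    assert!(root.resume().is_err()); // no fallback: staging is mandatory
    root.stage_chunk();
    assert!(root.resume().is_ok());
    println!("phase = {:?}", root.phase);
}
```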


Key Property

There is no fallback. Bootstrap cannot proceed without staged data.

This guarantees:

  • determinism

  • auditability

  • no hidden dependencies


Publication Lifecycle

The system tracks store usage using a root-owned lifecycle model:

State Fields

  • active_binding

  • detached_binding

  • retired_binding

  • generation

  • changed_at

  • retired_at

Lifecycle

Active → Detached → Retired → Deleted (external)

Semantics

State      Meaning
Active     Current publication target
Detached   Previous store, still valid
Retired    No longer used, pending cleanup
Deleted    Removed externally (not canonical state)

Invariants (Enforced in Storage Layer)

  • Bindings are pairwise distinct

  • retired_at exists iff retired_binding exists

  • generation increments exactly once per mutation

  • Active binding is optional (0 or 1)


Workflow Guarantees

  • Detached/retired stores are not selectable

  • Rollover is blocked if lifecycle slots are occupied

  • Retirement is blocked until cleanup completes

  • Deletion is blocked while referenced
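The state fields, invariants, and rollover guarantee above can be sketched as a plain Rust struct with an explicit check. The field names mirror those listed above, but the methods (`check_invariants`, `detach_active`) are illustrative assumptions, not the real storage-layer API:

```rust
// Sketch of the root-owned publication lifecycle state. Bindings are kept
// as strings here; the real system presumably uses canister principals.

#[derive(Default, Debug)]
struct PublicationState {
    active_binding: Option<String>,
    detached_binding: Option<String>,
    retired_binding: Option<String>,
    generation: u64,
    changed_at: u64,
    retired_at: Option<u64>,
}

impl PublicationState {
    fn check_invariants(&self) -> Result<(), &'static str> {
        // Bindings are pairwise distinct.
        let set: Vec<&String> =
            [&self.active_binding, &self.detached_binding, &self.retired_binding]
                .into_iter()
                .filter_map(|b| b.as_ref())
                .collect();
        for i in 0..set.len() {
            for j in (i + 1)..set.len() {
                if set[i] == set[j] {
                    return Err("bindings must be pairwise distinct");
                }
            }
        }
        // retired_at exists iff retired_binding exists.
        if self.retired_at.is_some() != self.retired_binding.is_some() {
            return Err("retired_at must track retired_binding");
        }
        Ok(())
    }

    // Active -> Detached transition; generation increments exactly once,
    // and rollover is blocked while the detached slot is occupied.
    fn detach_active(&mut self, now: u64) -> Result<(), &'static str> {
        if self.detached_binding.is_some() {
            return Err("rollover blocked: detached slot occupied");
        }
        let active = self.active_binding.take().ok_or("no active binding")?;
        self.detached_binding = Some(active);
        self.generation += 1;
        self.changed_at = now;
        self.check_invariants()
    }
}

fn main() {
    let mut st = PublicationState::default();
    st.active_binding = Some("store-a".to_string());
    st.detach_active(42).unwrap();
    assert_eq!(st.generation, 1);
    println!("{:?}", st);
}
```

Keeping the invariant check in one place, called after every mutation, is what lets the storage layer (rather than callers) enforce the guarantees.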


Template System (Transport Layer)

The template system provides shared infrastructure:

  • Manifest definitions

  • Chunking and hashing

  • Install resolution

  • Publication workflows

It is intentionally generic and reused across:

  • Bootstrap staging (root-local)

  • Store-backed publication
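A minimal sketch of the chunking side, assuming fixed-size chunks and a simple stand-in checksum (the real system presumably uses a cryptographic hash over the module); all type and field names here are illustrative:

```rust
// Illustrative manifest + chunk metadata split: the manifest describes the
// whole payload, chunk metadata describes each piece.

#[derive(Debug)]
struct ChunkMeta {
    index: u32,
    offset: usize,
    len: usize,
}

#[derive(Debug)]
struct Manifest {
    total_len: usize,
    checksum: u64,
    chunks: Vec<ChunkMeta>,
}

// Stand-in checksum (NOT the real hashing scheme).
fn checksum(bytes: &[u8]) -> u64 {
    bytes.iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

fn build_manifest(payload: &[u8], chunk_size: usize) -> (Manifest, Vec<Vec<u8>>) {
    let mut metas = Vec::new();
    let mut chunks = Vec::new();
    for (i, c) in payload.chunks(chunk_size).enumerate() {
        metas.push(ChunkMeta { index: i as u32, offset: i * chunk_size, len: c.len() });
        chunks.push(c.to_vec());
    }
    let manifest = Manifest {
        total_len: payload.len(),
        checksum: checksum(payload),
        chunks: metas,
    };
    (manifest, chunks)
}

fn main() {
    let payload = vec![7u8; 2_500];
    let (manifest, chunks) = build_manifest(&payload, 1_000);
    assert_eq!(manifest.chunks.len(), 3);
    assert_eq!(chunks.iter().map(|c| c.len()).sum::<usize>(), manifest.total_len);
    println!("{} chunks, {} bytes", manifest.chunks.len(), manifest.total_len);
}
```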


Install Semantics

There are now two distinct install paths:

1. Bootstrap Install (Special Case)

  • Source: root-local staged data

  • Used only for wasm_store bootstrap

  • Inline install allowed

2. Normal Install

  • Source: wasm_store

  • Chunk-based

  • Hash-verified

  • Used for all other roles
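The chunk-based, hash-verified path can be sketched as reassembly plus verification before install. The checksum below is a stand-in for whatever hash the real system uses, and the function name is hypothetical:

```rust
// Reassemble chunks fetched from the store and refuse to install anything
// that does not match the manifest's expected length and checksum.

// Stand-in checksum (NOT the real hashing scheme).
fn checksum(bytes: &[u8]) -> u64 {
    bytes.iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

fn assemble_and_verify(
    chunks: &[Vec<u8>],
    expected_len: usize,
    expected_checksum: u64,
) -> Result<Vec<u8>, &'static str> {
    let module: Vec<u8> = chunks.concat();
    if module.len() != expected_len {
        return Err("length mismatch: refusing install");
    }
    if checksum(&module) != expected_checksum {
        return Err("hash mismatch: refusing install");
    }
    Ok(module)
}

fn main() {
    let wasm = vec![1u8, 2, 3, 4, 5, 6];
    let chunks: Vec<Vec<u8>> = wasm.chunks(4).map(|c| c.to_vec()).collect();
    assert!(assemble_and_verify(&chunks, wasm.len(), checksum(&wasm)).is_ok());
    assert!(assemble_and_verify(&chunks, wasm.len(), 0).is_err());
    println!("verified {} bytes", wasm.len());
}
```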


Data Ownership Model

Component   Owns
Root        Lifecycle, bindings, orchestration
WasmStore   Payload bytes, metadata, GC
Pipeline    All deployable input data

Key Boundary

Root no longer owns any deployable Wasm bytes.


Removal of Embedded Payloads

Previous system:

  • Root embedded wasm_store Wasm

  • WasmStore embedded release payloads

  • Implicit bootstrap

Current system:

  • No embedded payloads anywhere

  • No self-seeding stores

  • No fallback paths


Benefits

  • ~64% reduction in root Wasm size

  • deterministic bootstrap

  • no hidden state

  • easier upgrades

  • simpler reasoning model


Candid Interface Handling

Key Decision

Candid is a build-time artifact, not a runtime feature.

Behavior

  • export_candid!() is debug-only

  • release Wasm contains no candid export

  • .did files are:

    • generated in debug OR

    • treated as canonical source files
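The debug-only export can be expressed as a cfg-gated macro invocation at the end of a canister crate. This is a pattern fragment assuming ic-cdk's export_candid! macro, not the exact Canic source:

```rust
// Tail of a canister's lib.rs (fragment): compiled only in debug builds,
// so release Wasm carries no Candid export.
#[cfg(debug_assertions)]
ic_cdk::export_candid!();
```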


Result

  • smaller binaries

  • no runtime metadata

  • clean separation between interface and execution


Build System

Previous Problems

  • multiple Cargo builds per canister

  • environment-variable divergence

  • duplicate debug builds

  • stale .dfx artifacts


Current Model

  • single workspace Wasm build per profile

  • guarded by:

    • lock file

    • timestamp stamp

  • subsequent calls:

    • copy existing artifacts

    • no recompilation


Behavior

  • dfx still invokes per canister

  • compilation happens once

  • extraction uses same debug artifact


System Invariants

A correct system must satisfy:

  • No embedded deployable payloads

  • wasm_store starts empty

  • bootstrap requires staged input

  • only wasm_store uses inline install

  • all other installs are chunked

  • detached/retired stores are not selectable

  • release builds contain no candid export

  • .did exists independently of release Wasm


Conclusion

This architecture represents a shift from:

implicit, embedded, self-seeding system

to:

explicit, externalized, pipeline-driven system

Key properties:

  • No hidden state

  • Clear ownership boundaries

  • Deterministic behavior

  • Smaller binaries

  • Scalable lifecycle management

At this point, the system is no longer transitional; it is a clean, extensible foundation for:

  • store lifecycle and GC

  • rollout strategies

  • multi-store evolution

  • high-throughput deployment pipelines


If extended correctly, this design avoids the common failure mode of systems like this:

gradually reintroducing implicit state and coupling

and instead stays strictly explicit, observable, and controllable.

Example: inspecting the current store state from root:

$ dfx canister call root canic_wasm_store_overview
(
  variant {
    Ok = record {
      stores = vec {
        record {
          gc = record {
            prepared_at = null;
            changed_at = 0 : nat64;
            mode = variant { Normal };
            runs_completed = 0 : nat32;
            completed_at = null;
            started_at = null;
          };
          pid = principal "vt46d-j7777-77774-qaagq-cai";
          remaining_payload_bytes = 33_883_964 : nat64;
          payload_size = "5.83 MiB";
          templates = vec {
            record { template_id = "embedded:app"; versions = 1 : nat16 };
            record { template_id = "embedded:minimal"; versions = 1 : nat16 };
            record { template_id = "embedded:scale"; versions = 1 : nat16 };
            record { template_id = "embedded:scale_hub"; versions = 1 : nat16 };
            record { template_id = "embedded:shard"; versions = 1 : nat16 };
            record { template_id = "embedded:shard_hub"; versions = 1 : nat16 };
            record { template_id = "embedded:test"; versions = 1 : nat16 };
            record { template_id = "embedded:user_hub"; versions = 1 : nat16 };
            record { template_id = "embedded:user_shard"; versions = 1 : nat16 };
          };
          headroom_bytes = opt (4_000_000 : nat64);
          remaining_payload_size = "32.31 MiB";
          max_store_bytes = 40_000_000 : nat64;
          created_at = 1_774_799_052 : nat64;
          max_template_versions_per_template = null;
          max_templates = null;
          within_headroom = false;
          binding = "vt46d-j7777-77774-qaagq-cai";
          release_count = 9 : nat32;
          headroom_size = opt "3.81 MiB";
          payload_bytes = 6_116_036 : nat64;
          max_store_size = "38.15 MiB";
          template_count = 9 : nat32;
          publication_slot = null;
        };
      };
      publication = record {
        retired_binding = null;
        changed_at = 0 : nat64;
        generation = 0 : nat64;
        retired_at = 0 : nat64;
        active_binding = null;
        detached_binding = null;
      };
    }
  },
)
