Complete: ICDevs.org Bounty #20 - QuickStart Dapp - Scaling With Canisters - 200 ICP, 100 ICP, 50 ICP - Multiple winners

It’s with great pleasure that I present the fruits of weeks of labor: Scaled Storage.

Introduction

Before starting this bounty, I knew I wanted to build a general solution that could easily be added to any Rust project.

While working on the bounty, the following goals drove most decisions I made:

  1. Simplicity
  2. Ease of use
  3. Storage efficiency
  4. Working as an imported library

Scaled Storage is a generic distributed hash table tailored for the Internet Computer. It can scale to a potentially unlimited number of canisters, with a worst case of one inter-canister call and usually a best case of zero (I’ll go into this later). I’ve currently tested it with 10 canisters. The client side never needs prior knowledge of all the canisters holding data; it only needs the canister id of any one of them.

Features

  1. Imported as a Rust library
  2. Scales up or down depending on developer-defined behaviour.
  3. “Quine”-style canister replication: all canisters are functionally identical and run the same code.
  4. Housekeeping operations (migrations, scaling up and down) are abstracted away.
  5. There isn’t a “primary”, “index” or “secondary” canister; any request can be handled by any canister.
  6. Tries to reduce inter-canister calls.

Developer UX

  1. The developer imports the scaled_storage library.
  2. Copy-pastes a few predefined scaled_storage operations.
  3. Uploads the canister’s WASM using ss_uploader.

How It Works
Scaled Storage uses a consistent hashing algorithm to map keys to their appropriate canisters.

When a key is accessed through the exposed functions, the result is either the value itself or the id of the canister where it is located; in the latter case, the request is simply forwarded to that canister. Here’s a code excerpt:

match canister.with_data_mut(key.clone(), |data| data.clone()) {
    NodeResult::NodeId(canister_id) => {
        // forward the request to canister_id or return it to the client side
    }
    NodeResult::Result(result) => {
        // return the result to the client side
    }
}

Scaled Storage provides the following aptly named functions:

  1. with_data_mut
  2. with_upsert_data_mut
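To make the lookup-or-forward pattern concrete, here is a minimal sketch of an update endpoint built around it. Everything below other than the pattern itself is an assumption: NodeResult is redefined locally so the snippet is self-contained (here its Result variant carries an Option to model missing keys), and lookup_locally stands in for the real with_data_mut call on the node’s state.

use candid::Principal;
use ic_cdk_macros::update;

// Mirrors the NodeResult enum used in the excerpt above; redefined here only
// so the sketch compiles on its own. Use the library's own type in real code.
enum NodeResult<T> {
    NodeId(Principal),
    Result(Option<T>),
}

// Stand-in for `canister.with_data_mut(key, |data| data.clone())`; in a real
// canister this would consult the Scaled Storage node state.
fn lookup_locally(_key: &str) -> NodeResult<String> {
    NodeResult::Result(None)
}

// Hypothetical endpoint showing the "answer locally or forward" pattern.
#[update]
async fn get_value(key: String) -> Option<String> {
    match lookup_locally(&key) {
        // The key hashes to another canister: forward the same call there
        // (or return `canister_id` so the client can retry it directly).
        NodeResult::NodeId(canister_id) => {
            let (value,): (Option<String>,) =
                ic_cdk::call(canister_id, "get_value", (key,)).await.ok()?;
            value
        }
        // The key belongs to this canister: return its value.
        NodeResult::Result(value) => value,
    }
}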

Upgrades & Migrations
When nodes are added or deleted, the changes are broadcast to all other nodes.

pub enum CanisterManagerEvent {
    NodeCreated(Principal),
    NodeDeleted(Principal),
    Migrate(MigrateArgs),
}

These nodes then update their node list and hash function. The affected values that need to be migrated are converted to a generic byte array and sent to the relevant node.

#[derive(CandidType, Deserialize, Debug, Clone)]
pub struct MigrateArgs {
    #[serde(with = "serde_bytes")]
    data: Vec<u8>,
}

///candid file
type migrate_args = record {
    data: blob;
};
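The post doesn’t show how the data blob above is produced; as an illustration only, here is one way the moving key/value pairs could be packed into and unpacked from such a byte array using candid encoding. The MigratedChunk type and the use of String keys and values are hypothetical; the crate’s actual wire format may differ.

use candid::{CandidType, Decode, Deserialize, Encode};

// Hypothetical container for the entries that move to another node.
#[derive(CandidType, Deserialize)]
struct MigratedChunk {
    entries: Vec<(String, String)>,
}

// Encode the moving entries into the `data: blob` field of migrate_args.
fn pack(entries: Vec<(String, String)>) -> Vec<u8> {
    Encode!(&MigratedChunk { entries }).expect("candid encoding failed")
}

// The receiving node decodes the blob and inserts the entries locally.
fn unpack(data: &[u8]) -> Vec<(String, String)> {
    Decode!(data, MigratedChunk)
        .expect("candid decoding failed")
        .entries
}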

The consistent hashing algorithm in question is Anchor Hash, which has “high key lookup rates, and a low memory footprint” according to the authors of its paper.

This particular algorithm guarantees that given N nodes and K keys, the hash function distributes roughly K/N keys to each node, achieving uniformity. It also reduces the number of inter-canister calls necessary by:

  1. Only needing to migrate values to a newly created canister (instead of having to migrate values between existing nodes).
  2. Only needing to migrate values from a deleted canister, without disrupting the location of other keys.
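As a rough illustration of the uniformity property, the snippet below buckets K keys over N nodes with an ordinary hash and counts the result; each bucket ends up holding roughly K/N keys. Note this uses a plain hash-and-modulo scheme purely for illustration, not Anchor Hash itself, which additionally minimises key movement when the number of nodes changes.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn main() {
    let n_nodes = 10;
    let n_keys = 100_000;
    let mut counts = vec![0u32; n_nodes];

    for key in 0..n_keys {
        let mut hasher = DefaultHasher::new();
        format!("key-{key}").hash(&mut hasher);
        let bucket = (hasher.finish() % n_nodes as u64) as usize;
        counts[bucket] += 1;
    }

    // Each of the 10 buckets holds roughly 100_000 / 10 = 10_000 keys.
    println!("{counts:?}");
}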

Architecture

Scaled Storage can be described as a linked-list of nodes with each node having the following:

  1. The previous node id
  2. The next node id
  3. A consistent-hashing function mapping keys to a canister
  4. A LIFO list of all nodes.
  5. The underlying data
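A node’s state can therefore be pictured roughly as follows. The struct below is purely illustrative: the field names and the AnchorHashRing placeholder are not the crate’s actual definitions.

use std::collections::HashMap;
use candid::Principal;

// Placeholder for the consistent-hashing state; the real implementation wraps
// an Anchor Hash instance that maps keys to canisters.
struct AnchorHashRing;

impl AnchorHashRing {
    fn canister_for(&self, _key: &str) -> Principal {
        Principal::anonymous()
    }
}

// Illustrative shape of a Scaled Storage node, mirroring the list above.
struct Node<Data> {
    previous: Option<Principal>, // previous node in the linked list
    next: Option<Principal>,     // next node; this node is its sole controller
    hash_ring: AnchorHashRing,   // consistent-hashing function over all nodes
    all_nodes: Vec<Principal>,   // LIFO list of every node in the cluster
    data: HashMap<String, Data>, // the slice of data owned by this node
}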

I chose this linked list structure over an “index node” → “secondary node” structure for the following reasons:

  1. Canisters/Nodes have exactly the same code and behaviour.
  2. Minimal responsibilities - each node is only responsible for its subsequent node and serves as its sole controller (with the developer being the controller of the first node). This reduces the surface area for accidents and gives each node a sole scapegoat.
  3. It reduces memory fragmentation - new canisters are only created after prior ones are full, and canisters cannot be removed in an arbitrary order; a canister can only be removed once all of its subsequent nodes have been removed.
  4. It reduces inter-canister calls - there’s no need to check the memory usage of all canisters; when the last canister wants to upgrade, it can simply assume all other canisters are full.

Issues and Solutions

  1. Large binary from in-lining wasm via include_bytes!:
    Wasm code for creating new canisters is instead loaded into heap memory at runtime via an init_wasm operation. I’ve created wasm_uploader for this.
  2. Payload size too large for the init_wasm operation:
    The wasm binary is chunked and sent one chunk at a time (a sketch of such an endpoint follows the candid definition below).
"init_wasm":(wasm_init_args)->(bool);

type wasm_init_args = record {
    position: nat8;
    wasm_chunk: blob;
};
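A rough sketch of how a chunked init_wasm endpoint matching that candid signature could accumulate the binary in heap memory; the buffer handling below is illustrative and omits the ordering and completion checks a real implementation would need.

use std::cell::RefCell;
use candid::{CandidType, Deserialize};
use ic_cdk_macros::update;

#[derive(CandidType, Deserialize)]
struct WasmInitArgs {
    position: u8,        // chunk index (nat8 in the candid file)
    wasm_chunk: Vec<u8>, // the chunk itself (blob)
}

thread_local! {
    // Buffer that accumulates the uploaded wasm chunks in heap memory.
    static WASM_BUFFER: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

#[update]
fn init_wasm(args: WasmInitArgs) -> bool {
    WASM_BUFFER.with(|buffer| {
        let mut buffer = buffer.borrow_mut();
        if args.position == 0 {
            // The first chunk restarts the upload.
            buffer.clear();
        }
        buffer.extend_from_slice(&args.wasm_chunk);
    });
    true
}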

Performance

  1. Update operations take at most two calls: one made by the end user, plus a potential inter-canister call if the hash function points to another canister.
  2. Query operations take at most three calls, the additional call being made by the end user.
  3. Since any canister can handle an operation, results could include the id of the canister that actually holds the data, to be stored/cached on the front-end. Subsequent operations could then be sent directly to that canister using the cached id; a hypothetical response shape is sketched below.
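For example, an endpoint could return a response like the following (the OperationResponse type is purely hypothetical, not part of the library):

use candid::{CandidType, Principal};

// Alongside the value, return the canister that actually owns the key so the
// front-end can cache it and call that canister directly next time.
#[derive(CandidType)]
struct OperationResponse {
    value: Option<String>,
    owner_canister: Principal,
}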

Tests and examples

  1. Scaled Storage Simple Scaling Test
    This demonstrates the scaling capabilities of Scaled Storage. The default test uses 10 canisters storing simple string types.
  2. IC Snippets
    This is a more complex example, demonstrating storage of more advanced data structures and authentication to prevent unauthorised access:
        ...
        .with_upsert_data_mut(snippet_key.page_id(), |page| {
            match page.get_snippet(&snippet_key) {
                Some(snippet) => {
                    let owner = snippet.owner;
                    if owner != ic::caller() {
                        return Err("Auth Error");
                    }
                    ...
                }
                ...
            }
        });

Future updates and issues:

  1. Consistent hashing algorithm change

Although Anchor Hash prevents unnecessary data migrations, it distributes data equally amongst nodes. Typically this is a desirable property, but for Scaled Storage it means “filled” nodes still have new data inserted into them (although the probability of that decreases as the number of nodes increases).
A better hash function would have the following properties:

a. Given N nodes, it only distributes keys to node N.
b. When the number of nodes increases from N to N+1, only keys in node N have to be migrated to node N+1.
c. When the number of nodes decreases from N to N-1, only keys in node N are migrated to node N-1.

This would limit the number of canisters involved in any migration to one: either the prior node (downgrades) or the next node (upgrades).

  2. A rewrite implemented as a finite state machine: easier to read, less bug-prone code since state transitions can be validated at compile time, and easier rollback operations when a state change errors.
  3. Macros to reduce the amount of boilerplate to write.
  4. Downscaling hasn’t been implemented yet.
  5. More operations on the underlying data beyond inserting and getting values, for example map-reduce operations that require data from every node.
  6. Authenticate housekeeping requests between nodes. Currently it is trivial for a bad actor to pose as a newly created node and receive migrated data.
  7. Query calls cannot currently be forwarded; the first demo’s query forwarding only works locally, pending inter-canister query calls. In the other demos, the canister id is returned to the client instead.
  8. Cycles sent to one canister should be automatically distributed amongst all canisters.
  9. Scaled Storage uses heartbeat; it may be possible to avoid this and instead “hitchhike” on other requests, reducing cycle consumption.

Summary
There’s still significant work to be done, but the foundation for a generic, easy-to-use scaling solution has been created. Thanks for reading.

  1. Scaled Storage repo: Github
  2. Scaled Storage library crate: Crates
  3. Scaled Storage uploader crate: Crates
  4. Canister for the simple scaling test: qjikg-riaaa-aaaag-aaf7a-cai