Motoko Wishlist 2025

The Motoko ecosystem needs specific language features and libraries to become production-ready for serious IC development. While most projects currently default to Rust due to ecosystem maturity, Motoko has strong fundamentals that could make it the preferred choice with the right improvements.

This thread is for specific requests for language features or Motoko libraries that are lacking. The goal is to create a comprehensive reference that can guide development priorities and community library efforts. Keep it focused on concrete technical needs - broader discussion about Motoko’s future and ecosystem strategy should go in the Discussion - What’s Missing for Motoko Adoption? thread.

What to post here:

  • Language features you need for your IC projects
  • Missing libraries that would unlock your use cases
  • Specific gaps preventing you from choosing Motoko

Be specific about what you need and why. This creates a reference for what the community should prioritize building.

What NOT to post here:

  • General discussion or commentary about Motoko’s future
  • Broad ecosystem concerns or strategic questions
  • Debates about priorities or approaches
  • Meta-discussion about the language or community

Categories

  • Language Features: Syntax improvements, type system enhancements, control flow
  • Standard Library: Core data structures, algorithms, utilities
  • Third-party Libraries: HTTP clients, JSON processing, cryptography, testing frameworks
  • IC-Specific Integration: Canister lifecycle management, inter-canister calls, upgrade patterns, cycle management

Previous Context

6 Likes

Here are mine from the previous wishlist linked:

Language Features - Type Reflection / Custom Data Structure Deserialization

Problem: Can’t deserialize to a custom data structure without manual parsing. There’s no way to introspect types at runtime or automatically deserialize JSON/other formats into custom types.

This makes working with external APIs and data formats very verbose - you have to manually parse every field and handle type conversions.

Potential solution: Some form of type reflection or derive macros that can automatically generate serialization/deserialization code for custom types.

Language Features - Error Propagation

The biggest pain point I run into is deciding whether to stop evaluation and return an #error or continue evaluating. My code is full of this pattern:

let value = switch(doSomething(...)) {
  case (#error(e)) return #error(e);
  case (#ok(v)) v;
};

Potential solution: Have a built-in Result<T, E> like Rust and handle propagation like null propagation with do ? {}:

let result : Result<T, E> = do E {
  let value1 : T = doSomething(...)*;
  let value2 : T = doSomething(...)*;
  value2;
};

Language Features - Subtyping with Pattern Matching

This is one I didn’t expect with structural typing. I run into this issue regularly and know others have as well.

When I have a SuperType that adds additional functionality on top of an existing type, I want to handle the supertype’s extra cases and ALL of the subtype’s cases. Given this example:

public type SubType = {
  #one;
  #two;
};

public type SuperType = {
  #one; // all of SubType's cases...
  #two;
  #three; // ...plus the new one
};

switch (superType) {
  case (#three) processThree();
  case (#two) processSubType(#two);
  case (#one) processSubType(#one);
};

I want it to be less redundant like:

switch (superType) {
  case (#three) processThree();
  // Remaining cases have to be the subtype
  case (subType) processSubType(subType);
};

Reference: Variant subset matching

Language Features - String Interpolation

It’s annoying to write out strings with # and no interpolation:

let value = "This is some " # someToTextFunc(v) # " text that im writing and took me " # Nat.toText(x) # " seconds to come up with";

Potential solution: String interpolation syntax:

let value = $"This is some {someToTextFunc(v)} text that im writing and took me {Nat.toText(x)} seconds to come up with";

I don’t have a good solution for making stringification better because structural typing makes it hard to know how to format values. An option is to make anything that is not Text default to whatever debug_show does, but that might be dangerous and make it easy to make mistakes.

Language Features - Better Type Inference for Inline Functions

A lot of the time I just want to do something like a simple map:

let v : [Nat] = [1, 2, 3];
Array.map(v, func(x) = x + 1);

But the code above doesn’t work, because “cannot infer type of variable” is an error. So the parameter/return types have to be given either on the function or on the Array.map call. This becomes more and more of a problem with longer type names and more parameters. It seems like the compiler should have enough information to infer them, but it isn’t able to.

The func(...) = ... syntax helps vs a normal func but it’s not quite there yet.
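For comparison, here is a version that compiles today (a sketch, assuming mo:base): supplying the type arguments explicitly on Array.map lets the lambda’s parameter go unannotated.

```motoko
import Array "mo:base/Array";

let v : [Nat] = [1, 2, 3];
// Type arguments on map itself, so func(x) needs no annotation.
let w = Array.map<Nat, Nat>(v, func(x) = x + 1); // [2, 3, 4]
```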

5 Likes

Actor modules that I can include, with the ability to override public shared functions and queries with my own versions (or add code with a super call).

I.e.:

import actor "mo:icrc1-mo/token";

// The above adds icrc1_transfer etc. to my actor, as well as any stable
// vars, classes, etc. Collisions may be a big deal here.

public query({caller}) override func icrc1_balance(args : Account) : async Nat {
  // private balances
  if (args.owner == caller) super.call(args)
  else 0;
};
3 Likes

Protobuf support. I’m sort of confused why ledger blocks are protobuf encoded, and yet Motoko doesn’t have strong support for decoding those blocks (unless things have changed since How to get protobuf messsages from ic? - Developers - Internet Computer Developer Forum).

1 Like

Language Features – Simplify Importing Types

Currently, importing types from modules requires repetitive aliasing like this:

    type Map<K, V> = Map.Map<K, V>;
    type Set<V> = Set.Set<V>;
    type Result<T, E> = Result.Result<T, E>;
    type Time = Time.Time;
    type Vector<A> = Vector.Vector<A>;

Potential solution:

  • Allow e.g. import Result from "mo:base/Result" to implicitly expose types with the same name, in this case Result.Result as Result.
  • Allow importing types explicitly. E.g. import { type Result } from "mo:base/Result" would bring Result.Result into scope as Result.
5 Likes

Hey Martin! You’re in luck :slight_smile: Unless we find a showstopper bug during reviewing (unlikely) this should land soon!

5 Likes

The most notorious security issue I see is a cycle-drain attack: actor authors take a Text, Nat, Int, or Blob parameter, and we currently have no way to say that those values must be bounded. That leaves open the possibility of people sending you 2MB of params over and over, without any kind of cycle-saving quick bail.

We need something like the following.

public shared func test(yourHash : Blob with _.size() == 32) : async () {}

I know that this is probably better handled at the Candid level, but Motoko would need a way to communicate that to the Candid compiler/parser. In the meantime, Motoko could make itself much more secure by providing something (even if it is just a max-bytes bound).
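Until something like that exists, a hand-written guard is the closest workaround (a sketch, not a language feature): check the size before any further work, so oversized input bails out as cheaply as possible. Note the Candid decoding of the argument still happens before the guard runs.

```motoko
public shared func test(yourHash : Blob) : async () {
  // Manual bound check: bail out early on oversized input.
  if (yourHash.size() != 32) return;
  // ... expensive processing only after the guard ...
};
```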

2 Likes

Here you can use:

  let #ok(value) = res else return #error(e);
  // code that uses `value`
1 Like

Isn’t there system func inspect for that purpose?

Only from the boundary nodes and for ingress messages. From inside the IC it would be an expensive attack because you’d have to pay for your own outgoing messages, but it would still cause problems for the target canister and burn a bunch of their cycles.

Fair, but it’s still a bit clunky. All over my code, for most of my projects, I have this pattern where I just want to bubble up my error or continue with my ok result without being so redundant.

Something like the Rust ‘?’ operator would be amazing.

Any data input with an unknown length (lists, strings, etc.) would be prone to this, not just Blob, as far as I understand.

To protect against decoding unnecessary large candid encoded arguments, maybe something like either:

  • a request argument size limit in bytes, similar to http outcall limits
  • lazy decoding arguments instead of the whole thing at once

One of those might be more of a solution, but I’m not enough of an expert on Motoko/Candid in relation to cycle consumption to give any definitive input in this regard :sweat_smile:

Wait, how do you get access to e?

I just copied from your code. I can ask the same question about your code snippet. There shouldn’t be a difference between using your switch or let-else as here.

Sending an xnet message costs the sender 260k cycles. Processing an update message costs the victim 5M cycles. That’s a pretty good ratio of 20x in favor of the attacker, and that’s just with empty calls (no bytes). It is not clear that adding bytes would make it better for the attacker: it costs the attacker 1k cycles per byte. Do you think the Candid decoding performed by the Motoko runtime, which you can’t prevent, will spend more than 1k instructions per byte to decode? If the answer is no, then the attacker will favor empty calls for a cycle-drain attack. (In fact, as long as the Candid decoding spends less than 20k cycles per byte, the attacker should favor empty calls because they give him the better ratio.)

Ahh…fair point. I guess my answer is “I don’t know”. We should test it!

My suspicion is that allocating a 2MB blob on the heap costs “a lot”. :grimacing:

I guess the magic ratio is does it cost less or more than 1,000 instructions per byte. Probably less?

The hidden question is: are people taking the time (and cycles) to validate the length? I’ve seen folks take an incoming text and just pass it right into an Array.append loop, assuming it’s nothing but a principal or something small. With something like that, you could end up burning a couple of rounds’ worth of cycles with DTS.

I’ve talked with the Motoko team about it. With let-else, it doesn’t expose the else variant’s value. You can just return ‘res’, but that comes with the restriction that the ok values are the same.

1 Like

Ok, I see now what you mean. I probably never bubble up errors in my own code but instead create a new error variant on each level with a name that has a meaning from that level.

For what you want to do there is Result.mapOk, but that’s probably still too clunky for you, because you have to inline a function, which leads to indentation. To avoid indentation you can write a dedicated function that processes the value and does not deal with errors at all. Depending on the situation, sometimes this can just be awkwardly clunky, but sometimes it can also be clearer to separate the ok-case value processing from the error handling. I’m thinking of code like this:

import Result "mo:base/Result";

type ErrType = Text; // error type that is bubbled up
type Ok1 = Nat; // downstream return type
type Ok0 = Nat32; // upstream return type

func downstream() : Result.Result<Ok1, ErrType> = #ok(1);

// clean function for value processing
func f_(v : Ok1) : Ok0 {
  // happy case processing here
  0
};

// wrapper with boilerplate code for error handling
func f() : Result.Result<Ok0, ErrType> {
  let res = downstream(); 
  Result.mapOk(res, f_);
};

Now, do you also want ErrType to be a variant, and do you want to introduce new error variants on the upper level, i.e. you have two ErrTypes, one a supertype of the other?

Note that if you don’t have an ok return value on the upper level, i.e. your happy-case processing simply consumes Ok1, then it would look like this:

func f_(v : Ok1) : () {
  // happy case processing here
};

func f() : Result.Result<(), ErrType> {
  let res = #ok(1);
  Result.mapOk(res, f_);
};
1 Like

Wish there was no type error here. It works with A, so it should work with B as well. Both A and B are shared, since they go in and out of async functions. In other words: if A is allowed, then B should be allowed too.


Perhaps: test<shared A, shared B>(..

Currently, because of this, we can’t develop libraries around generic async functions, like Promise.all, Promise.race, withRetries, etc.

1 Like

Yes, this doesn’t work. In one of our functions, we had to get #ok(..) from a few other functions, and if any of them returned an error, cancel all state changes and return the error. Since try/catch doesn’t work with synchronous traps, we had to make these functions first return #ok(intent) objects, and if all are ok, then call commit(intent) for each, doing the actual state changes. This works, but not if one of these functions depends on state changes done in a previous one.
If it isn’t too expensive in terms of instructions and speed (small chance of that, but worth asking), this would be nice:

transaction some:
...change state...
if (..) rollback transaction some;

We can already do this with trap, but that reverts the state changes of the whole async func.
It can be developed in the application layer, it just won’t be very easy. Redux has a module that stores undo patches for every change, and rolling back the state just applies them n steps back; the whole state isn’t stored, to increase performance. It would probably be best implemented at the language level.
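The intent pattern described above can be sketched in plain Motoko (all names here are hypothetical illustrations, not a real API): each step only validates and returns a closure that performs the state change, and the closures are run only once every step has succeeded.

```motoko
import Result "mo:base/Result";

var balance : Nat = 100; // hypothetical example state

// An intent is a deferred state change: validate first, commit later.
type Intent = () -> ();

func prepareWithdraw(amount : Nat) : Result.Result<Intent, Text> {
  if (amount > balance) return #err("insufficient balance");
  #ok(func() { balance -= amount });
};

func withdrawTwice(a : Nat, b : Nat) : Result.Result<(), Text> {
  let i1 = switch (prepareWithdraw(a)) {
    case (#err(e)) return #err(e);
    case (#ok(i)) i;
  };
  let i2 = switch (prepareWithdraw(b)) {
    case (#err(e)) return #err(e);
    case (#ok(i)) i;
  };
  // All validations passed; only now apply the state changes.
  i1();
  i2();
  #ok;
};
```

As the post notes, this breaks down when a later validation depends on state an earlier intent would have committed.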