Improving Motoko's programmability

Middleware, framework, and toolkit developers will use a language's advanced features, just as they do in other languages.
A language that wants wide adoption needs to lower the barrier for application developers through rich frameworks and toolkits, and building those requires advanced features such as reflection.
Example: a tool package needs to wrap an external canister interface.
(1) With reflection, this only takes a few functions. Without it, the wrapper has to be written function by function.
(2) If the external canister interface is upgraded, a tool package written with reflection adapts to the change well (only the interface file needs updating); otherwise the tool package has to be upgraded at the same time.
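
For concreteness, here is a rough sketch of that function-by-function wrapping (the Ledger interface and the canister id are made up for illustration):

actor Wrapper {
  // Assumed (made-up) interface of the external canister being wrapped.
  type Ledger = actor {
    balance_of : shared (Principal) -> async Nat;
    transfer : shared (Principal, Nat) -> async Bool;
  };

  let ledger : Ledger = actor ("ryjl3-tyaaa-aaaaa-aaaba-cai"); // placeholder id

  // Every forwarded method is written out by hand; any change to the external
  // interface means editing and redeploying this wrapper as well.
  public shared func transfer(to : Principal, amount : Nat) : async Bool {
    await ledger.transfer(to, amount)
  };

  public shared func balanceOf(owner : Principal) : async Nat {
    await ledger.balance_of(owner)
  };
};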

Candid encoding/decoding is also a necessity. For reference, Solidity prioritized shipping its equivalent (ABI encoding/decoding), and it also exposes an assembly interface.

Exposing Candid to/from Blob serialization is not difficult and largely already supported by the compiler, but not exposed. It would be easy to add as a dedicated construct (similar to debug_show) but harder to surface as a library.
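
For illustration only, such a construct might look roughly like this (to_candid / from_candid are placeholder names, not a committed design):

actor Example {
  type Account = { owner : Principal; amount : Nat };

  public func roundTrip(a : Account) : async ?Account {
    // Hypothetical: serialize a shared value to a Candid-encoded Blob...
    let encoded : Blob = to_candid (a);
    // ...and decode it back, returning null instead of trapping on failure.
    let decoded : ?Account = from_candid (encoded);
    decoded
  };
};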

I would be very reluctant to add reflection to the language as it breaks all sorts of properties of the language itself and introduces much overhead.

It sounds like you are mostly interested in reflecting on candid interfaces, not all Motoko features. That seems like a more reasonable ask and I can see the applications for that. However, even though canisters currently can expose their textual interfaces using a (hidden) text based query (that’s how icrocks obtains them) we’d probably want something with more structure than just text for a programmatic interface.

I can imagine raw shared methods, e.g. public raw func endpoint(bulk : Blob), and a dynamic analysis of the bulk ingress message. To do it in a nicely type-safe way (pattern-matching fashion), one would need GADTs, I suppose… (to emulate dependent pairs)

Yes, reflection can improve programmability, but it is not required. I agree that there is a trade-off.

Exposing Candid to/from Blob serialization would solve a lot of problems. Hopefully it will be supported soon.

I'm thinking the msg in public shared (msg) func ... already represents the sender's message. We currently expose msg.caller, but we could also expose msg.args, for example. This would fit in with inspect_message.

Something like:

type WithdrawArgs = { amount : Nat; to_account : AccountId };
public shared query (msg) func inspect_message() {
  switch (try_decode<WithdrawArgs>(msg.args)) {
    case (#ok(args)) { /* do something with args */ };
    case (#err(err)) { throw(err) };
  }
}

So try_decode would be the magic system function and it must always take a type argument. What do you think?

This application scenario is very common, because there are times when you cannot rely on off-chain code for encoding and decoding.

For example, the wallet_call function of the cycles wallet canister: without Candid encoding and decoding in Motoko, it can only act as a proxy and cannot run any business logic around the call (doing something with the arguments, or with the return value).

I agree that this is the way to support polymorphism without upgrading the interface, because if a smart contract changes its interface, that affects composability and immutability.

Multiple use cases show that Candid encoding and decoding is necessary.

@claudio Is call_raw live in a public release yet?

I've felt in the past that I wished I had some reflection available, probably mostly when trying to write generic helper functions that operate over some limited set of types that I want to handle in specific ways.

This is all likely based on my bad practices from JavaScript and some old .NET habits. I know they are bad habits, but it is still frustrating to have a language that claims to be general-purpose but doesn't let me do some things.

I wrote the candy library (which really bastardized the language - candy_library/lib.mo at main · aramakme/candy_library · GitHub) because I needed a way to store JSON-style dynamic data structures. I know that I shouldn't use them in general, but I also need them in specific circumstances, especially when trying to plan for future extensibility without accidentally blowing away my data store on an upgrade because I added a variant in the wrong order. With the library I'm able to reflect on the data coming in and out of my functions. Maybe a refactoring of the library by a better programmer could make it useful for addressing the situations where you need reflection without needing to change the language.

This is mostly a collection of thoughts while I’m in a place where I’m having a bit of trouble concentrating, but I thought I’d throw the thoughts out there for discussion.

I have found that I need to write custom serialization/deserialization functions for every entity type if I want a generic storage solution for backups. Coming from .NET, this extra code is burdensome. If it's all in the name of security and/or the necessary optimization of the IC, then I am very content with the challenge, as it's tiny compared to the challenges faced by the Dfinity team. However, if the inconvenience is a matter of language maturity, then I'll wait patiently for future versions.
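
For example, today each entity type needs a hand-written pair of functions roughly like this (the Profile type is invented for illustration):

module {
  public type Profile = { id : Nat; name : Text; tags : [Text] };

  // One pair like this per entity type, kept in sync with the type by hand.
  public func toStable(p : Profile) : (Nat, Text, [Text]) {
    (p.id, p.name, p.tags)
  };

  public func fromStable(s : (Nat, Text, [Text])) : Profile {
    { id = s.0; name = s.1; tags = s.2 }
  };
}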

Adding the serialization primitive isn't hard, but it is not easily exposed as a function, because it needs to be variadic and to have special typing rules (all arguments must be shared), so it is better as a new language construct with dedicated typing rules.

Indeed, @nomeata proposed and implemented something similar with a trapping deserialization (not an opt-returning one) here:

What we currently have is a pair of overloaded intrinsics (prims) that are hard (but not impossible) to access unless you are a compiler writer.

Regardless, I don't think your approach would quite work, because Motoko would still attempt to deserialize the blob at type () (for the function argument) and fail before you even enter the function. Maybe it could work if you typed the arguments as type Any (not ()), but then the message payload would need to be a single argument too, IIRC. Of course we could hack it to work, but I'd like to avoid hacks as much as possible.

For the particular application of inspect_message, I’ve actually got some strawman proposals sketched here:

None are super attractive though.

It’ll be out with dfx 0.9.1, which is currently in internal beta testing. Hopefully next week.

If you are brave, you can pull down dfx 0.9.1-beta.0 (IIRC)

I appreciate what you are saying, but, for the record, one can implement wallet_call without extending Motoko with serialization primitives; all you need is the call_raw functionality that is coming with dfx 0.9.1.
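
For illustration (this is not the code linked below), a rough sketch of such a proxy built on the raw call primitive that base exposes in ExperimentalInternetComputer; cycle forwarding and error handling are omitted:

import IC "mo:base/ExperimentalInternetComputer";

actor Proxy {
  // Forward an already-encoded Candid argument blob to an arbitrary canister
  // method and return the raw reply; the proxy never decodes either blob.
  public shared func forward(canister : Principal, method : Text, args : Blob) : async Blob {
    await IC.call(canister, method, args)
  };
};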

See here:

Candid serialization/deserialization plus call_raw can help programmers solve a lot of problems; it basically achieves "whatever Rust can do, Motoko can also do".
I think that is worth introducing additional language rules for, if that is what it takes.

I think we may just need a CBOR encoder/decoder together with call_raw. But then again you may need the schema to decide. I don't know much about CBOR. Maybe we need both a Candid parser and a CBOR parser?

I came up with another case where I need some Candid/CBOR love inside of Motoko.

I have an on-chain wallet. It provides multi-sig functionality. Users can call a function via call(principal, function, data) and a proposal is created. I'd like to support showing the incoming data as readable text. The problem is that I don't want to have to upgrade my canister each time a new service comes online. I'd like to let a service provider, user, or app give me the Candid interface, and then use it to parse the incoming blob. For example:

This is a call to “send_dfx” on the canister zzzzz with value:

SendArgs = {
  to : "kdkfkdjfdj";
  fee : 20000;
  memo : 1;
  from_subaccount : null;
  created_at_time : null;
  amount : 100000000;
}

I can't do this right now because all I'll have is a Blob. But even if I had the Candid definition, I couldn't do it, because I don't have a CBOR → Candid conversion library.

I’m not asking to coerce the blob to an unknown type or anything, I just want a library that lets me do it if I want. Maybe I want to construct a known type, convert it to a binary representation, hash it, and keep an eye out for that particular function signature in the future.

I feel like I can come up with a lot of reasons why I’d want some kind of reflection/conversion. Maybe they aren’t good reasons.

How would reflection help here? I think you’d need a CBOR parser.

Again, this sounds like you’d need a CBOR codec instead of reflection.

I think you’re suggesting that reflection could be used in conjunction with some codec to generically provide implementations for those types, rather than writing them manually.

I still think that’s more of a convenience though, and otherwise the encoding/decoding is doable today if someone were to write the relevant codecs.

Yes on both counts. For this application it is more of a parsing issue. But it would be cool if there were a native interface and data type to do this, since it is such an integral part of how everything works.

Check it! ICDevs.org - Bounty #18 - CBOR and Candid Motoko Parser - $3,000+ - #30 by skilesare. h/t @Gekctek