Completed: ICDevs.org - Bounty #19 - CBOR and Candid Motoko Parser - $3,000+

Yes! Those would be great…. I have another bounty that could use some of this as well. As soon as I get it approved, I'll let you know.

One interesting use case is to have a canister produce the CBOR that needs to be relayed to the network from a t-ECDSA-signed message. Once we have HTTP requests, a canister would be able to send signed messages to the replica endpoint itself.

I should have something usable within the next few days. I'm almost done with basic CBOR reading right now, but I'm still working through a few issues, like Float creation from bytes and a good serializer pattern.
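For reference, the byte-level part of the Float issue is language-agnostic: CBOR (RFC 8949) stores a 64-bit float as the initial byte 0xFB followed by the IEEE 754 bits in big-endian order. A rough sketch of the idea in Python (illustrative only; the Motoko library has to assemble the Float from raw bytes itself):

```python
import struct

def decode_cbor_float64(data: bytes) -> float:
    """Decode a CBOR double: initial byte 0xFB (major type 7,
    additional info 27) followed by 8 big-endian IEEE 754 bytes."""
    if len(data) < 9 or data[0] != 0xFB:
        raise ValueError("not a CBOR float64")
    (value,) = struct.unpack(">d", data[1:9])  # '>d' = big-endian double
    return value

# 1.5 encodes as fb 3f f8 00 00 00 00 00 00
print(decode_cbor_float64(bytes.fromhex("fb3ff8000000000000")))  # 1.5
```

CBOR also allows half- and single-precision floats (initial bytes 0xF9 and 0xFA), which is part of what makes a general decoder fiddly.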

It's fun to work in a language that has almost no libraries; you get to write everything from scratch. It's the wild west of coding right now, lol.

I was hoping to have something ready for review this weekend, but my wife is going into labor with our first child, so I doubt I'll have time, lol.

Ha! See you in a couple of months. :grinning: Enjoy this moment and congrats!

Had a few moments between nap times (mine and his) and was able to get the basic functionality done.

I don't know how this bounty process works, but this would probably be a good time for an initial review. I have implemented bytes -> CBOR object and CBOR object -> bytes, but there is no custom type serialization; it's all a manual process.
I have put basic information in the README along with some TODOs.

I'm curious if anyone has thoughts on how to implement the custom serialization. My background is .NET, and that was much different from this. I don't see any concept of reflection, so I'm trying to figure out the best way forward.

We’ve discussed this before at Improving Motoko's programmability - #14 by skilesare

I really wish it were native to the Motoko runtime, as it would make tasks like this much easier. This library will get us a long way there. It is going to be integral to some of the upcoming bounties as well.

// TODO is there a way to convert a Nat to an Int directly?

A Nat is a subtype of Int, so you don't have to convert it. :slight_smile:

Any ideas for a good integration test? Are we at the point where we could encode something like a balance request and check the generated CBOR against the CBOR in a network request from agent.js?

Or better yet… maybe we could just grab one of those and decode it. Maybe I'll set up a Motoko playground later and try it out.
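For what it's worth, decoding a captured request doesn't need the full spec; here's a rough sketch of the recursive idea in Python (the example bytes below are made up, but the framing follows RFC 8949, and agent.js wraps its envelopes in the self-describing tag 55799, bytes d9 d9 f7):

```python
def decode_item(data: bytes, i: int = 0):
    """Decode one CBOR item starting at offset i; return (value, next offset).
    Handles only the subset needed here: uints, text, maps, and tags."""
    ib = data[i]
    major, info = ib >> 5, ib & 0x1F
    if info < 24:                      # argument packed into the initial byte
        arg, i = info, i + 1
    elif info in (24, 25, 26, 27):     # 1/2/4/8-byte big-endian argument
        size = 1 << (info - 24)
        arg = int.from_bytes(data[i + 1:i + 1 + size], "big")
        i += 1 + size
    else:
        raise ValueError("unsupported additional info")
    if major == 0:                     # unsigned integer
        return arg, i
    if major == 3:                     # text string of `arg` bytes
        return data[i:i + arg].decode("utf-8"), i + arg
    if major == 5:                     # map of `arg` key/value pairs
        m = {}
        for _ in range(arg):
            k, i = decode_item(data, i)
            v, i = decode_item(data, i)
            m[k] = v
        return m, i
    if major == 6:                     # tag (e.g. 55799): skip, decode payload
        return decode_item(data, i)
    raise ValueError(f"unsupported major type {major}")

# Self-described envelope around {"method_name": "balance"} (illustrative)
raw = bytes.fromhex("d9d9f7") + bytes.fromhex("a16b6d6574686f645f6e616d656762616c616e6365")
print(decode_item(raw)[0])  # {'method_name': 'balance'}
```

A real captured request would have more fields, but the same loop walks them.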

I love this structural typing concept in Motoko, but it's taking a minute for my mind to adjust. I keep thinking in terms of inheritance and casting.

Yes and no. The way to do it right now requires a manual step of converting Motoko types/classes into a 'CBOR value', which is just a variant over all the CBOR major types. Once it's in 'CBOR value' form, it can encode/decode just fine.
My next step is to remove that manual process, so I'll take a look at the candy_library you linked and see what I can do.
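In case a sketch helps picture it, the 'CBOR value' variant plus encoder pattern looks roughly like this in Python (the tag names and the small subset of major types here are mine, not the library's):

```python
def encode_head(major: int, arg: int) -> bytes:
    """Encode a CBOR initial byte plus its length/value argument."""
    if arg < 24:
        return bytes([(major << 5) | arg])
    for info, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
        if arg < (1 << (8 * size)):
            return bytes([(major << 5) | info]) + arg.to_bytes(size, "big")
    raise ValueError("argument too large")

def encode(value) -> bytes:
    """Encode a ('tag', payload) variant, mirroring the manual Motoko
    step of building a major-type variant before serializing."""
    tag, payload = value
    if tag == "uint":                  # major type 0
        return encode_head(0, payload)
    if tag == "text":                  # major type 3
        data = payload.encode("utf-8")
        return encode_head(3, len(data)) + data
    if tag == "array":                 # major type 4
        return encode_head(4, len(payload)) + b"".join(encode(v) for v in payload)
    raise ValueError(f"unsupported: {tag}")

print(encode(("array", [("uint", 1), ("text", "hi")])).hex())  # 8201626869
```

The pain point is exactly the manual wrapping: without reflection, something has to hand-build `("array", [...])` from each application type.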

I think you might find this helpful: my attempt at writing a CBOR library. I didn't complete it, but I was able to write methods to encode most of the candid_library types to CBOR, except for Float.

Nice, ty.
Yeah, floats were a pain.

@skilesare
Alright, I think I need clarification on a few things, because I'm confusing myself:

  1. Is the CBOR encoding a custom thing vs the HTTPS interface?
    From what I understand, the CBOR is encoded/decoded in the boundary nodes and doesn't touch the canister itself. From the tests/examples I have seen for call_raw, the encoding is just raw Candid, not CBOR. Is there going to be CBOR inside the Candid in blob form?

  2. Looking at the candy_library, I get what is happening: it allows you to look at the properties/metadata of the data itself. But what I don't get is how it goes from a Motoko value to a candy value. It seems like it's just a manual process, similar to what I'm doing with CBOR values, where it's just a variant storing Motoko types with some metadata. Am I understanding it correctly, or am I missing something?

  3. How exactly do we want the code to look when converting to and from CBOR in the call_raw method? I see in the description that we want to be able to inspect what's inside or build CBOR based on a definition, but I'm having trouble picturing it. For "construct a payload for call_raw using an uploaded candid definition", what does the definition look like? Is it a *.did file? If so, is the idea to parse that file, build some kind of descriptor data for it, then use that for validation and creation of Candid models? That's also where I'm confused, because I'm not sure what that has to do with CBOR; it seems like a Candid thing.

Any help would be great, because I'm going in circles.
I think some pseudocode of the call_raw function showing what you expect/want would be helpful as well.
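For what it's worth, on question 1 there is one thing I can check empirically: raw Candid always starts with the magic bytes 'DIDL' (per the Candid spec), while the agents' CBOR envelopes start with the self-describing tag 55799, bytes d9 d9 f7 (per RFC 8949). So a captured blob can at least be classified by prefix; a quick Python sketch of that check:

```python
CANDID_MAGIC = b"DIDL"                 # Candid binary header
CBOR_SELF_DESCRIBE = b"\xd9\xd9\xf7"   # CBOR tag 55799 (RFC 8949)

def classify_payload(data: bytes) -> str:
    """Guess whether a captured blob is raw Candid (what call_raw sees)
    or a CBOR envelope (what travels over the HTTPS interface)."""
    if data.startswith(CANDID_MAGIC):
        return "candid"
    if data.startswith(CBOR_SELF_DESCRIBE):
        return "cbor-envelope"
    return "unknown"

print(classify_payload(b"DIDL\x00\x00"))      # candid
print(classify_payload(b"\xd9\xd9\xf7\xa0"))  # cbor-envelope
```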

Starting the Candid portion in a different project.

It will work similarly to how I have the CBOR one: I will take the Blob and create a candy_library-like model holding the Candid value/type metadata.

Sweet. I'll try to get to your questions later today… sorry for the delay.

No worries. Baby duties are occupying most of my time, and the Candid work is ongoing.

I have been pulled down a rabbit hole this week.
I needed a better system for encoding/decoding numbers, along with floats of different precisions, so I have moved all the work I had already done for those, and much more, to

This will be a referenced project in both the CBOR and Candid encoders.
I have it set up for the CBOR project, and now I'm going back to the Candid project. It should be much easier since I have already implemented LEB128 encoding in the numbers project.
Dealing with 1's and 0's in long bit streams and bytes, along with different endiannesses, can really destroy one's mind.
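For anyone following along, unsigned LEB128 is little-endian base-128: 7 payload bits per byte, with the high bit marking continuation. A minimal sketch of the idea in Python (the Motoko version in the numbers project differs in detail, and Candid additionally needs the signed variant, SLEB128):

```python
def leb128_encode(n: int) -> bytes:
    """Encode a non-negative int as unsigned LEB128."""
    out = bytearray()
    while True:
        byte = n & 0x7F            # low 7 bits
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # final byte: high bit clear
            return bytes(out)

def leb128_decode(data: bytes) -> int:
    """Decode an unsigned LEB128 value from the start of data."""
    result = shift = 0
    for b in data:
        result |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            break
    return result

print(leb128_encode(300).hex())  # ac02
```

The classic example: 300 is 0b100101100, which splits into 0101100 and 10, giving bytes 0xAC (continuation bit set) and 0x02.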

You are a rock star! Fun things down those rabbit holes. :grinning:

Just wanted to give an update:
I'm close on the Candid project.
I have done all the encoding except for func/service, plus a couple of TODOs and some polish, but it's in a good state with all my tests passing.
I still need to do the decoding from bytes, but that shouldn't take as long now that I have the structure.

Any updates on my questions above?

Actually, the decoder was much easier than I expected, and it's in a similar situation to the encoder: mostly done, but it needs some edge cases handled, some polish, and a few broken tests fixed.

I guess this would be a good time to figure out whether the library is structured the way we need it and to handle any initial reviews.
How is the review process going to work?