If it is not feasible to implement some functionality in Motoko, is it possible to create a Motoko interface that calls an existing library?

@skilesare @paulyoung. It seems that a library that executes a large computation should divide that computation into chunks. Ok, I get the idea. So Dfinity has probably already thought about this and may have a “best practice” implementation pattern. I would imagine such a pattern would actually be implemented in some core package provided by Dfinity, given its close connection to the details of the protocol specification. Does this package or implementation pattern exist? If it does not, should we propose and implement this core package first?
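To make the question concrete, the kind of chunked pattern I have in mind might look like this in Motoko (a sketch only; all names here are invented for illustration, and the chunk size would need tuning against the real execution limits):

```motoko
actor ChunkedSum {
  // Hypothetical example: summing a large array in bounded slices so that
  // no single message risks exceeding the per-round execution limit.
  stable var data : [Nat] = [];
  stable var index : Nat = 0;
  stable var acc : Nat = 0;

  let chunkSize : Nat = 1_000;

  public func load(xs : [Nat]) : async () {
    data := xs;
    index := 0;
    acc := 0;
  };

  // Callers invoke step() repeatedly until it returns #done.
  public func step() : async { #done : Nat; #more } {
    var i = 0;
    while (i < chunkSize and index < data.size()) {
      acc += data[index];
      index += 1;
      i += 1;
    };
    if (index >= data.size()) { #done(acc) } else { #more }
  };
};
```

The point is that all intermediate state lives in stable variables, so the work survives across messages and upgrades, and each `step()` call stays well under the round limit.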


Can you also expose it as a module instead of an actor and just reference it locally to include in your actor build?

When you write module, do you mean a Motoko package?

Yes exactly, that’s what I mean.

This is also the suggestion from @skilesare, and it seems like the right way. However, the discussion has now moved to the problem of how to implement a Motoko package whose methods require a lot of computation and need to be divided into chunks to prevent errors at the end of a consensus round. I suggested/asked about the need for a standardized implementation pattern for modules doing large computations. That was the last issue along that line…

I have written pipelinify.mo (GitHub - skilesare/pipelinify.mo: Move data chunks between canisters) to do this. It allows for sequential or parallel computation. There are a few tests, but they are lacking, and the documentation is even more sparse. I’m happy to answer questions about it and would love pull requests. It also allows for streaming workloads to other canisters and pushing/pulling data back and forth.

@skilesare, thank you for sharing your project. As it is now, it would be a very significant effort to reverse engineer your code to figure out how it works (motivation of design decisions and their implementation), how to use it and whether it is at all relevant to the case of implementing a linear algebra library.

In my opinion, the following documentation would help: 1) the problem this library is trying to solve, 2) the design decisions you took to address this problem, 3) comments on the functions connecting the code with those design decisions, and 4) finally, a description of how to implement a library that uses your helper library.

@paulyoung, the problem of executing a Motoko function beyond the consensus round period is probably one you have already addressed in some forum post and/or internally in some of your daily development tasks. Could you please advise us on how to proceed?

My team will try to create a couple of ML and advanced algebra modules for Motoko from their Python counterparts later this year. But for the present, my question is: if JS works well in combination with Motoko, and JS has libraries like TensorFlow that can be used, wouldn’t such a combination be helpful for now?

@Harsh That is great. Which “ML and algebra” methods have the highest priority for you, and why? It would also be interesting to know your plans for solving the problem we are discussing in this post: how to deal with functions whose execution time is longer than the consensus period?

Concerning using JS, I guess it depends on your application: if it is fine for your application to execute JavaScript in the browser, then do it. A different case is developing a backend service that is always processing data (e.g. oracles, bots, etc.). In that second case, Rust and Motoko are, for the time being, the only supported options (I heard TypeScript is coming soon but is not yet fully supported).

Yes, it desperately needs some documentation. It is on my list of todos.

In the meantime:

The simplest implementation is in the tests on pipelinify.mo/_pipelinifyTest-Processor.mo at a34056ba1109060a92d67802027f8aaa9c67aae3 · skilesare/pipelinify.mo · GitHub

The consumer test file also shows how to push and pull data between canisters.


Thank you. I will study it.

@skilesare, something I did not understand until now: it looks like you are assuming that the Motoko module (i.e. my linear algebra library) is supposed to be an independent canister, and all calls to the new library will be canister calls. Is this correct?

You could build it that way, but the pipelinify module supports local serialized and parallel processing.

When you call .process you pass in a process config and you have a number of options:

```motoko
public type DataConfig = {
    #dataIncluded : {
        data: [AddressedChunk]; //data if small enough to fit in the message
    };
    #local : Nat;
    #pull : {
        sourceActor: ?DataSource;
        sourceIdentifier: ?Hash.Hash;
        mode : { #pull; #pullQuery; };
        totalChunks: ?Nat32;
        data: ?[AddressedChunk];
    };
};
```

`#dataIncluded` means that the data is in the request.
`#local(id)` means that you’ve stored the data somewhere in your canister, and the system will call the function you configure for `getLocalWorkspace` on the initialization interface to get a handle on the data.
You would only use push and pull if you were calling the library from another canister.
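For illustration, constructing these configs from the caller’s side might look like this (a sketch based on the type above; `myChunks` is assumed to be an `[AddressedChunk]` you already built, and the surrounding process-request fields are omitted):

```motoko
// Hypothetical: the data is small enough to travel in the message itself.
let includedConfig : DataConfig = #dataIncluded({
  data = myChunks; // [AddressedChunk]
});

// Hypothetical: the data is already staged in this canister under id 42;
// pipelinify would fetch it via your configured getLocalWorkspace.
let localConfig : DataConfig = #local(42);
```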


I’m afraid not. At least not yet.

I am interested in solutions that abstract this problem away from us as developers though.


@paulyoung @skilesare, today I learned about the system “heartbeat” function and I wondered whether this periodic system signal is related to the “consensus period”. If it were, it would be possible to create a generic method to interrupt/save/recover the state of long computations. Does that make sense, or am I confused?

Yes. I have on my todo list to add heartbeat to pipelinify.mo (GitHub - skilesare/pipelinify.mo: Move data chunks between canisters). Heartbeat is expensive though, so an external crank turner will likely save you some money.

The library supports you just calling process(process_id) until you get a #done back.
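So a driver can be as simple as a loop. Sketched as a Motoko caller (the `process` signature here is assumed from the description above, not copied from the library):

```motoko
// Hypothetical remote interface for the processing canister.
type Processor = actor {
  process : (Nat) -> async { #done; #more };
};

// Keep turning the crank until the pipeline reports it is finished.
func runToCompletion(p : Processor, processId : Nat) : async () {
  label crank loop {
    switch (await p.process(processId)) {
      case (#done) { break crank };
      case (#more) {}; // more work remains; call again
    };
  };
};
```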

Does this mean that calling a “heartbeat” method consumes many more cycles than calling a normal canister method?

What is an “external crank turner”?

It is a function called every second, I think. So even if you just check a flag, you are using cycles.
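For reference, a heartbeat handler in Motoko looks roughly like this (a sketch; the job names are invented). Even the flag check in the no-work case runs on every round, which is why it keeps consuming cycles:

```motoko
actor Worker {
  stable var jobPending : Bool = false;

  // One bounded slice of the long-running job (body omitted here).
  func doChunk() : async () { /* ... */ };

  // Runs automatically about once per round. Even when jobPending is
  // false, this invocation itself costs cycles every time it fires.
  system func heartbeat() : async () {
    if (jobPending) { await doChunk(); };
  };
};
```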

Just an external process. It can be another canister, a web browser waiting for work to be done, or an AWS instance in charge of processing requests.

@skilesare Why is the cost in cycles of an implementation based on “heartbeat” much higher than that of an implementation based on an external process? Aren’t both just update calls?
