Calling C/C++ from Motoko. Howto?

I want to reuse an existing C/C++ library that is too complex to be reimplemented in Motoko, and I wonder what the way to do that is. I thought about creating a C++ canister that exposes a query method for each C++ function and an update method for each class method that modifies the object state. Could you please tell me how to proceed?

There is the experimental tool canpack which can do that for Rust, but I don’t know how hard it would be to get it running for C/C++.

If I want to create an app that has the following requirements.

  1. 2 canisters where canister 1 is written in Motoko and canister 2 is written in C++
  2. I want canister 1 to be able to instantiate canister 2 and be able to call its methods

Would that be a possible solution? Do you see any problem with it or any way to improve it to achieve my goal?

If you’re talking separate canisters, there’s nothing special you need to do about it. Just make sure that they can serialize / deserialize each other’s messages. This is just like a Python backend server talking to a C++ backend server.

The difficulty is in putting C++ and Motoko code inside the same canister. But it would seem this is not what you’re looking for.
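
For the two-canister route, the Motoko side only needs a typed actor reference; Candid does the (de)serialization on both ends. A minimal sketch, assuming the C++ canister exposes a hypothetical `add` query endpoint (name and signature are placeholders, not a real API):

```motoko
// Canister 1 (Motoko) calling canister 2 (C++) over Candid.
// The endpoint name and signature below are assumptions.
actor Canister1 {
  // Declared interface of the remote C++ canister.
  type Canister2 = actor {
    add : query (Nat, Nat) -> async Nat;
  };

  public func callOut(canister2Id : Text) : async Nat {
    // Build an actor reference from the canister id at runtime.
    let c2 = actor (canister2Id) : Canister2;
    await c2.add(2, 3)
  };
};
```

As long as the C++ canister implements the same Candid signature, the languages on either side don't matter.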

Yes, ideally I would want to put Motoko and C++ code inside the same canister. Is that possible?

Technically yes, but that’s way out of my area of expertise. Look into canpack (linked above). But unless it’s easy to do, or you really cannot do without having both Motoko and C++ code in the same canister, I would recommend having two canisters running side by side, or just rewriting the smaller chunk of code in the other language.


This isn’t very useful information in the short term, but longer term, this will be possible with the wasm component model. @rvanasa ran an experiment a few months back where he successfully composed a wasm module with Rust and Motoko components.


Is that the canpack project that @Severin mentioned above, or is it a different project? Is there a repository for this “wasm component model”?

There was no code published because it was run as an internal experiment. It did show us that adoption of the wasm component model can give us access to capabilities like this, though. We will adopt the wasm component model once it is more mature.


Hi @ildefons ,

The best way is to just create a C++ canister using icpp-pro, and create query or update endpoints that you then call from your Motoko canister.

The C++ canisters have full support for the Candid interface, and it will not be difficult to do this. I am more than happy to assist you with this task.

What library are you interested in using ?


Hi @icpp , thank you very much for your offer to help. And yes, icpp-pro seems the best option so far to solve my problem. I am interested in wrapping a C++ linear algebra library so I have access to matrix decomposition and matrix inverse functionality. This would open a lot of possibilities to extend Motokolearn with plenty of classic machine learning and optimization methods. There are a few LA libraries and I am still trying to decide which one.
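
For instance, the Motoko side of such a wrapper might look roughly like this; the `mat_inverse` endpoint name, the flat row-major matrix encoding, and the signature are all hypothetical, just to illustrate the shape of the Candid boundary:

```motoko
// Hypothetical interface of a C++ (icpp-pro) linear algebra canister.
actor MotokolearnClient {
  type LinAlg = actor {
    // Takes a flattened n-by-n matrix in row-major order plus its
    // dimension n, and returns the flattened inverse.
    mat_inverse : query ([Float], Nat) -> async [Float];
  };

  public func inverse(linalgId : Text, m : [Float], n : Nat) : async [Float] {
    let linalg = actor (linalgId) : LinAlg;
    await linalg.mat_inverse(m, n)
  };
};
```

Whatever LA library ends up inside the C++ canister, only this declared interface would be visible from Motoko.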



That concept of a wasm component model is very interesting.

How does it work? Are multiple wasm files that come from different languages combined into a single deployable wasm, so we can avoid the overhead of having to call another canister?

A great use case for that would also be the combination of a Python wasm with a C/C++ wasm. That is so common in the engineering world: code everything in Python, but then optimize it with C/C++ when performance matters. Instead of doing it the traditional way of exposing the C/C++ code via special bindings, perhaps it can be handled more nicely using the wasm component model?


@Severin @dfx-json @free Recently I received a rejection answer from my Dfinity grant application to extend Motokolearn with a linear algebra library. The plan was to wrap a C++ LA library within a canister. The basis of the rejection is “The main reason is that it’s unfortunately not feasible to solve linear equations via inter canister calls to a C++ canister. We would need a native Motoko implementation”. I want to reformulate my application but I don’t understand what is not feasible. Could you please help me understand the problem?

Forwarded your question to the growth team. This is not something I know much about.


I’m not sure who told you that but I think it is wrong and/or right depending on the structure of the library.

Does the equation solving take a long time to process, i.e., over a second or so? If so, then having a Motoko version is likely necessary. You’d need to do the calculation in rounds so that it can span multiple blocks (do a chunk of work, await, do a chunk of work, await… each await commits state and lets the calculation run across more rounds than deterministic time slicing provides).
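
A minimal Motoko sketch of that pattern, with illustrative names and chunk sizes (the self-call via `await` is what commits state and lets the work span rounds):

```motoko
actor ChunkedSolver {
  // Progress survives across messages (and upgrades, being stable).
  stable var progress : Nat = 0;
  let total : Nat = 1_000_000;
  let chunkSize : Nat = 10_000;

  // One slice of the long-running computation.
  public func step() : async () {
    // ... do `chunkSize` iterations of the solver here ...
    progress += chunkSize;
  };

  public func runAll() : async () {
    while (progress < total) {
      // Self-call: commits state and yields, so the whole
      // computation spans multiple rounds instead of one.
      await step();
    };
  };
};
```
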

If it is a quick calculation, then a C++ setup similar to @rvanasa’s canpack setup should work just fine if your canisters are deployed on the same subnet. @icpp should be able to give you some advice on how to set up and call a C++ library from a canister.


@skilesare , @ildefons ,

I have a couple questions about Motoko+Rust:

  1. Is it possible to call Rust directly from Motoko? (I thought this was possible, but am not sure…)
  2. Is it possible to wrap a Rust package into a mops package?

If this is possible, it should be possible to do the same for C++. So, instead of inter-canister calls, the C++ code will be linked into the same wasm and called directly from Motoko.

Similar to the way we’re already linking a Rust static library into every C++ canister: the wasi2ic library.


Please see GitHub - dfinity/canpack: Package multiple libraries into one ICP canister. I’d imagine it would be possible to swap out the Rust config on the Rust side with a C++ wrapper, but @rvanasa would be able to explain if there are any gotchas; I think only a small shim is necessary. You are basically doing an inter-canister call, but if it is deployed on the same subnet, Motoko basically schedules it, and as long as the calculation is small enough you will likely get it executed in the same round, so it will ‘feel’ like they are on the same canister. A Bob’ed-up subnet might work a bit differently, but with the caching and scheduling operations of late, I’d think that these utility canisters that get used often and have very little memory would get prioritized by that layer and execute pretty quickly.


@icpp If creating a C++ canister exposing query calls is a limitation because some calls could require longer than a few seconds, would it be possible to expose some of these methods as update calls and then do chunked work as @skilesare suggests? Is it possible to do that with icpp-pro?

@ildefons
Yes, I already do all the LLM work with a sequence of update calls.
Does that answer your question?
