Announcing Motoko Server

Hey all, I wanted to share a new tool I’ve been working on!

Available at https://mops.one/server, Server is a library for handling HTTP requests dynamically, with syntax inspired by the popular Express.js framework for Node.

This is all made possible by an additional library, https://mops.one/certified-cache, which handles storing the responses and certifying them so that they can be queried later.

The library makes it easy to register handlers for dynamic GET, POST, PUT, and DELETE requests in your Motoko canister. The first time any uncached request is hit, it gets upgraded to an update call, but any cached request can be served as a query.
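
To give a rough flavor of the API, registering a handler looks something like this (a simplified sketch rather than a copy-paste from the package, so the constructor arguments and the exact fields res.send accepts may differ; see the docs on mops for the real setup):

  import Text "mo:base/Text";
  import Server "mo:server";

  actor {
    // Constructor arguments shown here are illustrative; see the package docs for the exact setup.
    var server = Server.Server({ serializedEntries = ([], [], []) });

    // Express-style registration of a dynamic GET handler.
    server.get("/greet", func(req : Server.Request, res : Server.ResponseClass) : Server.Response {
      res.send({
        status_code = 200;
        headers = [("Content-Type", "text/plain")];
        body = Text.encodeUtf8("Hello from a Motoko canister!");
        streaming_strategy = null;
        cache_strategy = #default; // let certified-cache store and certify this response
      });
    });
  };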

Finally, server is in its early stages. I plan to add more features, particularly around making static assets more convenient to work with. Try it out and request some features!

To get a sense of what is possible with Server right now, try visiting https://qg33c-4aaaa-aaaab-qaica-cai.ic0.app/

It's a rebuild of the dfx new starter project, refactored to run on a single Motoko canister and with no JavaScript agent, just an HTTP `POST` request. Code for the demo lives in examples/http_greet in the krpeacock/server repository on GitHub.

Have fun!

29 Likes

Nice interface.
Can we make HTTPS requests to custom domains?
I was also wondering: can't canisters somehow be on the other end of SSL, so we don't have to check whether responses are certified? I suppose if that could be done, then nodes and boundary nodes would be considered man-in-the-middle.

1 Like

The request gets upgraded on the proxy… but how does the service worker know that what was returned was actually an update? Do you flag the response and re-query?

Awesome work!

1 Like

It's managed by the HTTP gateway (I originally wrote boundary node here): if a query returns upgrade = ?true, the gateway will retry the request as an update call and then return the result. That call is handled by the agent, and the service worker is configured to accept the returned data.
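
For reference, here's roughly what that mechanism looks like from the canister's side, with the types heavily simplified (the real HttpRequest/HttpResponse records in the interface spec have more fields):

  import Text "mo:base/Text";

  actor {
    type HttpRequest = { url : Text; method : Text; headers : [(Text, Text)]; body : Blob };
    type HttpResponse = {
      status_code : Nat16;
      headers : [(Text, Text)];
      body : Blob;
      upgrade : ?Bool; // the flag the HTTP gateway checks
    };

    // Query path: cheap, but here we ask the gateway to retry as an update call.
    public query func http_request(_req : HttpRequest) : async HttpResponse {
      return {
        status_code = 200;
        headers = [];
        body = Text.encodeUtf8("");
        upgrade = ?true;
      };
    };

    // Update path: does the real work; the response is certified by consensus.
    public func http_request_update(_req : HttpRequest) : async HttpResponse {
      return {
        status_code = 200;
        headers = [("Content-Type", "text/plain")];
        body = Text.encodeUtf8("handled as an update call");
        upgrade = null;
      };
    };
  };

(The server library handles this for you: a cached path just returns the certified response from the query instead of setting upgrade.)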

1 Like

Does the service worker actually verify the response in this case?

If it is an update call, then the service worker doesn’t have to do anything. The agent takes care of validating the result/signature of an update call.

What agent? The client side doesn’t know anything about the update call that has been made by the boundary node. Not sure if I’m somehow on the wrong track.

Yeah… this is where I was a bit confused. My understanding of the flow is:

sequenceDiagram
  participant ServiceWorker  
  participant Browser
  participant BoundaryNode
  participant Canister

  Browser->>+BoundaryNode: Send query
  BoundaryNode->>+Canister: Send query using http_request
  Canister->>+BoundaryNode: Request upgrade using headers
  BoundaryNode->>+Canister: Resubmit as an update call using http_request_update
  Canister->>+BoundaryNode: Perform update and return http response
  Canister->>+Canister: Can't provide an updated cert (update calls can't certify their own new state within the same call)
  BoundaryNode->>-Browser: Relay result (without certification header)
  Browser->>+ServiceWorker: ?

I don’t think that’s entirely correct. I just recently created this diagram for a presentation that shows the flow of a query call (so not quite the same thing that you’re talking about):

I'll try to get someone to explain how it works for the http_request_update flow, but in general an update call doesn't need an updated cert because the response is already certified by consensus.

3 Likes

I think the issue here is: how does the service worker "know" that it was upgraded? I'm guessing maybe the boundary node is adding a header or something. There's likely a line of code we can point to in order to understand this better.

Short answer: there's a property in the candid interface for canisters that serve HTTP requests that the service worker checks; it's documented in the Internet Computer interface spec.

A side note on terminology: the boundary node does not perform this logic; the HTTP Gateway Protocol does. The HTTP Gateway Protocol can be implemented by the service worker or by ICX Proxy. If you navigate to https://nns.ic0.app/ then you're using the service worker; if you navigate to https://nns.raw.ic0.app/ then you're using ICX Proxy.

Longer answer: the particular feature we're discussing here is called an "upgrade to update call", and it's detailed in the HTTP Gateway section of the Internet Computer interface spec.

You guys are on the right track anyway, but here’s my summary of the flow:

  • Browser makes a standard HTTP request
  • HTTP Gateway converts this request into an Internet Computer query call
  • HTTP Gateway sends the query call request using agent-js to a canister’s http_request method via the boundary node (the boundary node handles the routing to the replica)
  • Canister responds with the upgrade property set to true
  • HTTP Gateway repeats the original request as an update call using agent-js, and again this is routed by the boundary node
  • agent-js polls the IC with read_state requests and, once it receives a response, returns it to the HTTP Gateway
  • Since this response is present in the state tree (this is what the read_state request checks), it has been signed by consensus and we can trust the entire response

6 Likes

Ah, in the case of a non-raw request, the upgrade to an update call is done by the service worker, not the boundary node. Thanks!

1 Like

This is something that has been explored before, but unfortunately it’s extremely expensive and slow. This is an oversimplification, but you would essentially need to create a threshold SSL handshake that would involve every replica on the subnet.

1 Like

Would the vetKeys tech be good for this? We have this bounty in the freezer until we're able to get reliably secure decryption on the canister, and I think there's a direct plug-in for @kpeacock's server system that would keep data private from boundary nodes.

As an aside…this server infrastructure makes this other “freezer” bounty much more doable:

It would be super cool for new devs if they could point at a Swagger file and have the 'server' file rendered with hooks that make it super easy to implement the details of each service.

1 Like

Hi! Is it possible to call an actor function (async/await) from inside a handler? Something like:

  server.get("/greet", func(req : Request, res : ResponseClass) : Response {
    await otherActor.someFunction(); // this is what I'd like to do
    // ...
  });

It seems impossible to return an async Response from the server.
Thank you

1 Like

It should be possible since we do run it in an update, but it would take some redesigning around the API to make it happen. I’d need to think about how to pull it off

Hi @kpeacock, thank you for this library, I think it's really useful! I wanted to combine it with the Motoko proxy you wrote (GitHub: krpeacock/motoko-outcalls-proxy, a simple example of Motoko outcalls), but realized it would also need async requests. If you have a design in mind and some pointers for me, I could see if I can help and contribute work on the async requests. Cheers!

2 Likes

When there is a cache miss, the library already runs the request as an async update call, so it should be valid. You could see if just modifying the server.get interface to be async works, or maybe we could add a new getAsync handler that gets around the Motoko constraints.
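
Something very rough like this is what I have in mind; nothing below exists in the library yet, the names are placeholders, and the types are heavily simplified:

  import Array "mo:base/Array";
  import Text "mo:base/Text";

  actor {
    type Request = { url : Text; method : Text };
    type Response = { status_code : Nat16; headers : [(Text, Text)]; body : Blob };
    type AsyncHttpFunction = Request -> async Response;

    // Hypothetical registry of async GET handlers, keyed by path.
    var asyncGetHandlers : [(Text, AsyncHttpFunction)] = [];

    // A getAsync registration alongside the existing synchronous server.get.
    func getAsync(path : Text, handler : AsyncHttpFunction) {
      asyncGetHandlers := Array.append(asyncGetHandlers, [(path, handler)]);
    };

    getAsync("/greet-async", func(_req : Request) : async Response {
      // An inter-canister call could be awaited here.
      return {
        status_code = 200;
        headers = [("Content-Type", "text/plain")];
        body = Text.encodeUtf8("hello, async!");
      };
    });

    // http_request_update is already an update call, so it can await the handler;
    // the cost is that this dispatch path has to become async all the way down.
    public func http_request_update(req : Request) : async Response {
      for ((path, handler) in asyncGetHandlers.vals()) {
        if (path == req.url) { return await handler(req) };
      };
      return { status_code = 404; headers = []; body = Text.encodeUtf8("not found") };
    };
  };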

1 Like

cool, thanks! I’ll work on this and update you.

1 Like

Hi @kpeacock, I implemented a first, very simplistic approach; you can see the WIP here: Add initial async server code by patnorris · Pull Request #1 · patnorris/server (GitHub). I couldn't really come up with a good way to add the async functionality to the existing functions without breaking compatibility (that is, without making http_request and http_request_update and the functions they call async).

Some ideas I had were: changing the HttpFunction type so that both regular and async functions are covered (something along the lines of type HttpFunction = (Request) -> Response or (Request) -> async Response), or "hiding" the async behavior in registerRequestWithHandler by just awaiting each call (regular or async), but I don't think Motoko supports that.
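
To make that first idea concrete, here's a rough sketch of what I mean (all names are placeholders, and Request/Response stand for the library's existing types):

  // One registry entry that can hold either kind of handler.
  type HttpFunction = Request -> Response;
  type AsyncHttpFunction = Request -> async Response;

  type Handler = {
    #sync : HttpFunction;
    #async_ : AsyncHttpFunction; // `async` is a keyword, hence the trailing underscore
  };

  // Dispatch still ends up async overall, because the #async_ arm has to await.
  func dispatch(handler : Handler, req : Request) : async Response {
    switch (handler) {
      case (#sync(f)) { f(req) };
      case (#async_(f)) { await f(req) };
    }
  };

The downside is the same one my PR runs into: anything that calls dispatch has to become async too, which is exactly the compatibility break I was trying to avoid.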

Not sure how helpful this simplistic approach really is, but I'm happy to incorporate any feedback you have. What would need to change for it to be useful to the server library?