Proposal to Adopt the Namespaced Interfaces Pattern as a Best Practice for IC Developers

@skilesare it is already possible today to expose a single transfer public method, candid-decode the raw call-argument bytes to see which types were passed, and respond according to those types. You can fork the Rust CDK to make the arg_data_raw function (and the function for returning raw bytes) public, then do the candid encoding manually.

I can give two real-world examples. Example one shows the problem with similar but divergent services attempting to share a namespace. Example two shows how accidental overlap impedes the advancement of services on the IC.

Example one: Origyn has been trying to get the OGY token listed on a DEX for the last three months. They have a pipeline of loaded buyers and sellers who want to participate in trading on the DEX and further distributing the token. The OGY token was built on the ICP ledger tech. At the time they deployed the ledger, its public API did not have a .transfer function. So when Sonic launched their DEX and required that OGY implement the DIP20 standard, Origyn looked at adding that to the ICP ledger. Thankfully there was no collision at the time, so Origyn went about attempting this. External geopolitical issues slowed the progress, but they were finally ready to roll it out, and when they merged the latest ICP ledger they discovered that .transfer had been added to the public API. The .transfer in the DIP20 standard takes (principal, amount), while the ledger's takes (TransferArgs). They now cannot have both in the OGY ledger. Thousands of dollars of work have been wasted, and significant value and interest (millions of dollars) remain on the sidelines until the issue is resolved.

The current next-best solution is for Origyn to create another canister that implements DIP20 and has “God” status in our ledger. This second canister can move tokens from one account to any other account. It also keeps track of allowances and enables a transferFrom workflow (which is a terrible and exploitable flow, but what DIP20 and Sonic require). This solution also slows the user experience because we now have to wait for three rounds of consensus to do a transferFrom. It has second-order consequences as well: the Origyn NFT project handles payments and was distinguishing between tokens by using canister ID as part of our primary key. Now the OGY token will have two canisters that are both legitimate methods of transferring tokens. More work gets more complicated simply because we can’t just add the DIP20 endpoints to the main ledger and keep track of allowances in that ledger.

Example two: The Origyn NFT project is attempting to combine a number of different features into an NFT canister. One feature is transferring NFTs from one owner to another. The EXT NFT standard has a .transfer function that moves an NFT from one owner to another. The Origyn NFT may also hold a collection-based fungible token for governance. If it implements the DIP20 standard or the ledger standard, that .transfer function will be impossible to implement because the NFT .transfer function is in the way. We could host the token on another canister, but this breaks all kinds of interoperability goals that Origyn has. Origyn wants a super standard that has both NFT functions and token functions in it. This super standard should be interoperable with all kinds of tools (NFT marketplaces) that only care about the NFT part and all kinds of tools (wallets, DEXes) that only care about the fungible governance token part, and we want one canister to handle both. With .transfer_ext_nft, .transfer_ext_fungible, .transfer_dip20, and .transfer_ledger we can support multiple marketplaces, wallets, and DEXes even if each only supports one of the ledger functions. With .transfer, .transfer, .transfer, and .transfer, Origyn will have to pick and choose, making our lives more complicated.


Is that available in Motoko?

If I could tell a function its parameters were “any” and then pattern match by type, this would work, as long as I could return “any”.

Could you help me understand why you prefer the above to the following?

service {
  transfer : (variant { extNft : … }) -> …
}
service {
  transfer : (variant { extFungible : … }) -> …
}
service {
  transfer : (variant { dip20 : … }) -> …
}
service {
  transfer : (variant { ledger : … }) -> …
}
type Interface = {
  #extNft : …;
  #extFungible : …;
  #dip20 : …;
  #ledger : …;
  …
};

func transfer(interface : Interface) : … {
  switch interface {
    case (#extNft …) …;
    case (#extFungible …) …;
    case (#dip20 …) …;
    case (#ledger …) …;
    …
  };
};

(Edit: transfer would probably need to return a variant as well to account for different return types)
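To make this concrete, here is a minimal runnable sketch of the variant approach, trimmed to two tags; the payload and return types are placeholders I am inventing for illustration, not the real standards' types:

actor VariantToken {
  public type Interface = {
    #extNft : { token : Nat; to : Principal };
    #dip20 : { to : Principal; amount : Nat };
  };
  // Per the edit above, the return type is a variant as well.
  public type Response = {
    #extNft : Bool;
    #dip20 : { #ok : Nat; #err : Text };
  };

  public func transfer(interface : Interface) : async Response {
    switch (interface) {
      case (#extNft _args) { #extNft(true) };   // ...NFT transfer logic would go here...
      case (#dip20 _args) { #dip20(#ok(0)) };   // ...fungible transfer logic would go here...
    }
  };
};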

Because if a user uses a wallet written last month, I want it to be compatible with the service I write next November, even if the dev who wrote the wallet gets hit by a bus and the wallet is never updated.

I guess if every function took a variant and returned a variant then subtyping would take care of this, but the upgrade headache of marshaling your data from one data structure to the modified one seems a bit frustrating. Adding an endpoint with the namespaced function seems easier.

You could swap interface namespacing for parameter:return namespacing, but if you end up with one standard having #token{canister:Principal} and another having #token{rootcanister:Principal; dip20:principal}, then you are right back in the same place.


I, for one, dislike seeing characters that commonly appear in names also used as namespace separators. All it takes is for someone to not notice that their backronym, chosen to be a recognizable word, is also a totally reasonable word to use in a non-namespaced function. I like ::, personally, but regardless, I think having the separator not be _ is a good idea. A CDK must already have the facility to represent such functions; the existing usability issue, if one exists, can be fixed in an update.

Let me expand on what I mean:

type DIP20 = record {
  transfer: func (Principal, Nat) -> (TxReceipt);
  ...
};

type SNSLedger = record {
  transfer: func (TransferArgs) -> (TransferResponse);
  ...
};

service : {
  interfaceDIP20: () -> (DIP20);
  interfaceSNSLedger: () -> (SNSLedger);
  ...
}

It does add one level of indirection (only when initially set up to talk to such a canister, not for every call), but there are many benefits, not least of which is that this is already supported on the IC and requires no extra change at the system level.


I think there is something fairly interesting here, but I may be missing it. This looks like you are defining an Interface that returns function signatures. The remote tool would need to ask for the Interface first and then map the interface to its functions.

I love this. I need to wrap my brain around how this would work… I think third-party tools would still need to change how they do things: never assume a standard, and instead ask for the Interface?

A couple of concerns: maybe this only works for 1:1 translations… Also, Motoko’s lack of a typeof() modifier might make parsing this challenging. We need that candid library.

Motoko’s lack of a typeof() modifier might make parsing this challenging

As with using any third-party canister service, you’ll need the did file, which should have all the types required for each interface.

I quickly tried this in Motoko:

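// first canister (named "hello")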
actor Hello {
  public type Hello = {
    hello: shared (Text) -> async Text;
  };
  public func greet(name : Text) : async Text {
    return "Hello, " # name # "!";
  };
  public func interface() : async Hello {
    { hello = greet }
  };
};
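
// second canister (named "test"), in a separate file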
import Hello "canister:hello";

actor Test {
  public func test() : async Text {
    let hello = await Hello.interface();
    await hello.hello("world");
  };
};

Seems to work just fine:

$ ~/bin/dfx canister call test test
("Hello, world!")

Can the Interface be a query?


Sure, I don’t see why not.

I’m also guessing that these need to be a known quantity? Each canister would need to match all or nothing so that the calling canisters get their types right? I’m a bit confused about subtyping. Example:

Canister A:

type DIP20 = record {
  transfer: func (Principal, Nat) -> (TxReceipt);
  ...
};

type SNSLedger = record {
  transfer: func (TransferArgs) -> (TransferResponse);
  ...
};

service : {
  interfaceDIP20: () -> (DIP20);
  interfaceSNSLedger: () -> (SNSLedger);
  ...
}

Canister B:

type DIP20 = record {
  transfer: func (Principal, Nat) -> (TxReceipt);
  ...
};

type EXT = record {
  transfer: func (AccountID, Amount) -> (TransferResponse);
  ...
};

service : {
  interfaceDIP20: () -> (DIP20);
  interfaceEXT: () -> (EXT);
  ...
}

This would break things, right? If I had a did file that expected:

service : {
  interfaceDIP20: () -> (DIP20);
  interfaceEXT: () -> (EXT);
  interfaceSNSLedger: () -> (SNSLedger);
  ...
}

It would want them all? Maybe we put null in front?

service : {
  interfaceDIP20: ?() -> (DIP20);
  interfaceEXT: ?() -> (EXT);
  interfaceSNSLedger: ?() -> (SNSLedger);
  ...
}


In your examples, canister A only has interfaceDIP20 and interfaceSNSLedger, and canister B only has interfaceDIP20 and interfaceEXT, so I would expect a slightly different did file for each.

Are you thinking of something like iterating through a list of canisters and being able to call all of them depending on their interfaces? In that case, A and B can be put into the same list that only has interfaceDIP20.

Also, in Motoko it is totally fine to give different types to actor("...") in different places using type annotations. In Rust it is even more flexible, since actor types are not statically checked.
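For example, a minimal sketch of viewing the same canister through two different actor type annotations; the interface types here are invented placeholders, not the real standards' signatures:

actor Client {
  public type DIP20 = { transfer : shared (Principal, Nat) -> async Nat };
  public type SNSLedger = { transfer : shared (Blob, Nat) -> async Nat };

  public func demo(canisterId : Text) : async () {
    // The same canister id annotated with two different actor types:
    let asDip20 = actor (canisterId) : actor { interfaceDIP20 : shared () -> async DIP20 };
    let asLedger = actor (canisterId) : actor { interfaceSNSLedger : shared () -> async SNSLedger };
    let dip20 = await asDip20.interfaceDIP20();
    let ledger = await asLedger.interfaceSNSLedger();
    // ...from here, call dip20.transfer / ledger.transfer as needed...
  };
};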

I will play with this a bit and see if I can’t get a working example across the problem space. I’d much prefer an addition rather than a modification if we are going to push something forward. Thanks for pointing this out!


A couple of questions on this one. First, for @PaulLiu: do you have any idea what this would look like in Rust?

Secondly, for @kpeacock: how would the agent handle a response from this function? How would I call the resulting function that was returned by the interface? For example, if I called the interfaceDIP20 above and got back the DIP20 object, what would the JavaScript look like that called the transfer function?

@nicopoggi, how would Plug handle this? Would it know what function was being called and let it pass through?

I’m thinking that this might be the ideal answer, but not until we have inter-canister queries.

@ulan do you know what the current timeline for this is?

If I call this and get back an interface, any idea how we would certify it? An update call is fine (and that is the default if a canister is calling it), but if a client calls it as a query, how do we certify it? It isn’t data. How would I sign it? Sign the candid? We are back to needing a candid library or reflection over types in Motoko. @nomeata @claudio

My current thinking is that, as a best practice, you should never call a canister that implements a standard directly. Always call a __supports query first, which returns the available interfaces, each namespaced by the namespace declared in its standard:


type DIP20 = record {
  transfer: func (Principal, Nat) -> (TxReceipt);
  ...
};

type SNSLedger = record {
  transfer: func (TransferArgs) -> (TransferResponse);
  ...
};

type PeopleRegistry = record {
  isMember: func (Text) -> (Bool);
  register: func (Principal, Text) -> (Bool);
  ...
};

public query func __supports() : {
  list: [Text]; // list of available interfaces
  iDIP20: ?() -> (DIP20); // optional, so that if it is missing from the candid response it won’t throw an error
  iSNSLedger: ?() -> (SNSLedger);
  iPeopleRegistry: ?() -> (PeopleRegistry);
  ... // any number of implemented standard interfaces
};
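A minimal Motoko sketch of the canister side of this pattern, assuming an invented TxReceipt type and a single supported interface (the real standards' signatures would differ):

actor Token {
  public type TxReceipt = { #ok : Nat; #err : Text };

  // The namespaced interface record this canister hands out.
  public type DIP20 = {
    transfer : shared (Principal, Nat) -> async TxReceipt;
  };

  // Namespaced entry point; real ledger logic elided.
  public func transfer_dip20(to : Principal, amount : Nat) : async TxReceipt {
    #ok(0)
  };

  // Callers hit this query first, then call through the returned interface.
  public query func __supports() : async {
    list : [Text];
    iDIP20 : ?DIP20;
  } {
    {
      list = ["dip20"];
      iDIP20 = ?{ transfer = transfer_dip20 };
    }
  };
};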

I’m going to be a dissenter :smile:

To say that we all have a global namespace that we must share is not the most accurate characterization of our development paradigm. Interfaces and function names are scoped to canisters.

Collisions only happen when multiple standards/protocols are being implemented in the same canister.

It seems that if we had a tidier standards landscape, we wouldn’t have the problem of ten different standards (which all try to do the same thing) competing for the same transfer function namespace. In-canister standards composition seems to be the impetus here, and that would be less of a concern given “one clear standard.”

I do think that namespacing makes sense when you’re working at a smaller project level, where you know that your code a) will live alongside an existing standard, b) isn’t meant for mass adoption, c) is experimental in nature, etc.

There are certainly examples of protocols conflicting with each other where they probably shouldn’t. Recently a marketplace protocol was released which conflicts with the dominant marketplace protocol of Entrepot. This does feel to me like a possible failure of the new protocol’s design to consider the broader landscape, but it’s also solvable via wrapping.

On the note of composing standards within a canister, for example a fungible and a non-fungible token, there’s precedent from Ethereum for this type of functionality within a single standard: ERC1155.

In summary, namespacing makes sense sometimes but I would suggest that it’s misused as an approach to the problem of fragmented token standards. We should not seek a way to make a plethora of standards that all do the same thing composable with each other, we should seek to unify them. A unified, authoritative token standard would not need to be namespaced.

I’m guilty of not reading this entire thread, but I know there are some very important considerations in here, such as the fact that we do not have the hash addressing of functions that Ethereum does. That is one less tool in our belt and may have important consequences that I have not considered. I would suggest that backwards-incompatible changes to a token standard should be very rare, and can be handled via wrapping and migration to new contracts. Not everything needs to be done via canister upgrade (which should reduce the lift that standards interoperability is responsible for).


Thanks for engaging!

More details coming soon, but we are exploring making wrapping obsolete. Wrapping breaks the ability to enforce your code, and we’re trying to solve that.

In summary, namespacing makes sense sometimes but I would suggest that it’s misused as an approach to the problem of fragmented token standards. We should not seek a way to make a plethora of standards that all do the same thing composable with each other, we should seek to unify them. A unified, authoritative token standard would not need to be namespaced.

I’d agree, but the realist in me sees that we have two houses of the legislature in the US and you have to be able to lobby both. In other words, humans are strange, and I don’t know that we’re going to be able to get everyone on the same page. While we wait, progress is stalling.

I’d argue that having unique names is a net gain for everyone. If your standard wins, it is because it was the best, not because it was the first to claim transfer. It doesn’t preclude us from getting to a single unified standard; it just lets us try some different things.

We already have mutually exclusive standards. The SNS ledger’s public methods are incompatible with DIP20’s public methods because DIP20 wants transaction history by principal ID, while the SNS ledger purposely masks the principal in an account ID.

I would suggest that backwards-incompatible changes to a token standard should be very rare, and can be handled via wrapping and migration to new contracts.

To be clear, no standard would have to do away with its current interface; it would just add a uniquely namespaced function as well. So DIP20 could keep transfer(principal, amount) for backward compatibility and add transfer_dip20(principal, amount) for future compatibility. They could call the same code.
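For instance, a sketch of both entry points delegating to shared logic; TxReceipt and the internal doTransfer helper are placeholders I am inventing:

actor Token {
  public type TxReceipt = { #ok : Nat; #err : Text };

  // Shared ledger logic, called by both public entry points.
  func doTransfer(to : Principal, amount : Nat) : TxReceipt {
    // ...actual balance bookkeeping would go here...
    #ok(amount)
  };

  // Legacy, collision-prone name kept for backward compatibility:
  public func transfer(to : Principal, amount : Nat) : async TxReceipt {
    doTransfer(to, amount)
  };

  // Uniquely namespaced alias for forward compatibility:
  public func transfer_dip20(to : Principal, amount : Nat) : async TxReceipt {
    doTransfer(to, amount)
  };
};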


The scheme token.std_name.method(....) looks elegant; the downside is that it requires refactoring the did file and the interface code, which is invasive.
We use the scheme described in Discussion on the compatibility of different token standards, which is compatible with the DRC20 and DIP20 standards.

The timeline is vague at this point. We should be able to propose the options for voting relatively soon (weeks) and then the actual implementation depends on the outcome of voting.

Unfortunately, this approach is very painful to use in Rust at the moment.