DeFi Working Group

Working Group Goals:

  • Simplify the creation of DeFi services.
  • Enhance interoperability and collaboration among canisters across various owners by setting ICRC standards and offering libraries that are easy to use, secure, and reliable.
  • Increase awareness of DeFi on ICP by bringing ICP DeFi projects to relevant platforms like Dexscreener.

Key Performance Indicators (KPIs):

  1. A rise in the quantity of DeFi services.
  2. An increase in the interconnectivity volume among these services.

Coordination:

Meetings will take place in the ICP Developer Community Discord.
The first one is today at 4pm CET.

Our first meeting agenda:

  • Intro
  • Overview ICRC-45 [Live DEX Data]
  • Overview ICRC-47 [DEX History]
  • Overview ICRC-55 [DeFi Vector]
  • Float or Nat? Determining the appropriate data types for:
    • Amounts
    • Rates
    • Volume
    • Market depth
  • Representation Strategies for Volume
    • Options for depicting trading volume:
      • In USD
      • In XDR
      • With a single token
      • With two tokens
  • Volume Calculation Methodologies
    • Methods for accurately calculating volume:
      • Utilizing a rolling 24-hour window for both tokens
  • Market Depth Representation
    • Techniques for accurately depicting market depth information.
  • Demo of DeVeFi ICRC Ledger Client library and how it works.
  • Should we use the same polling (read/write) pattern in other ICRCs?

WG repo: GitHub - Neutrinomic/wg_defi

7 Likes

Link to the event. It will be inside a channel in the ICP Developer Community Discord.

3 Likes

Thank you @infu for starting this. I won’t be able to join today, but I’d like to propose adding two additional goals:

  • Increase awareness of DeFi on ICP by bringing ICP DeFi projects to relevant platforms like Dexscreener et al.
  • Besides standards, what are the building blocks that are missing?
4 Likes

Great idea. I think building a ccxt module https://www.npmjs.com/package/ccxt
and making IC DEXes work in there would also bring a lot of applications like https://hummingbot.org/ to the ecosystem.

4 Likes

Some notes from the first meeting:

Float: on the IC, Motoko, Candid, and JS (browsers & Node) all use the same representation → double-precision (64-bit) floating-point numbers in IEEE 754 representation.
They are easier to use for our purposes, and if we store them in browsers their footprint will be 5-10 times smaller, since BigInts need to become strings when serialized and stored in state. Not to mention that carrying their decimals around and dividing them all the time is cumbersome.
Floating-point operations are generally faster because modern CPUs are designed with hardware support for floating-point arithmetic, which allows these operations to be highly optimized and executed rapidly.

@afat raised an interesting concern about using float. There is no Motoko hash function for them.

One way to do it is to convert to Text and then hash. But there needs to be a function working exactly the same way in other CDKs.

"2.342_342_300_000_000_1e+38" : Text

It’s probably better if we can somehow extract the sign bit, exponent, and significand and hash those (I can’t see any functions for these either). Or just have a hash function built into Motoko? @claudio

Float representation

A double-precision floating-point number is divided into three parts:

  1. Sign bit: 1 bit
  • Determines if the number is positive (0) or negative (1).
  2. Exponent: 11 bits
  • Used to calculate the power of 2 by which the significand (or mantissa) is multiplied. The exponent is stored in a biased form, meaning that a bias is added to the actual exponent to get the stored exponent value. For double precision, the bias is 1023, so an exponent is stored as the actual exponent plus 1023. For example, an exponent of 10 is stored as 10 + 1023 = 1033, which is then represented in binary.
  3. Significand (or mantissa, or fraction): 52 bits
  • Represents the precision bits of the number. It does not store the leading 1 (known as the “implicit bit”) because, for normalized numbers, this bit is always 1 and is assumed rather than stored. This part represents the fraction (following the binary point) of the number in binary. The actual value represented is 1.fraction for normalized numbers, where “fraction” is the binary fraction represented by the 52 bits.
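For illustration, here is a minimal TypeScript sketch (standard DataView APIs, no libraries; the function name is just for this example) of pulling those three fields out of a 64-bit float. Hashing these fields, or simply the raw 64 bits, would behave identically in any CDK that reads the same bit pattern.

```typescript
// Extract sign, biased exponent, and significand from an IEEE 754 double.
function floatBits(x: number): { sign: bigint; exponent: bigint; significand: bigint } {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);                    // write the 8 bytes (big-endian by default)
  const bits = view.getBigUint64(0);        // read them back as one unsigned 64-bit integer
  return {
    sign: bits >> 63n,                      // 1 bit
    exponent: (bits >> 52n) & 0x7ffn,       // 11 bits, biased by 1023
    significand: bits & 0xfffffffffffffn,   // 52 bits, implicit leading 1 not stored
  };
}

// Example: 0.15625 = 1.01 (binary) × 2^-3 → sign 0, exponent -3 + 1023 = 1020,
// significand 0x4000000000000.
console.log(floatBits(0.15625));
```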

1 Like

Wait, I thought it was Financial Accounting 101 to never use floats for monetary calculations or storing balances? The post is lacking a bit of context, but since the topic was DeFi, my alarm bells are going off


2 Likes

Thanks for the feedback! It’s not for storing balances, but for rates (token1/token2). All the CEX APIs return floats (ICRC-45 Live DEX Data - #12 by infu). They are not even using 1/8 of the capacity, and they round the numbers. I just wanted to present all the questions to the WG.
The output was:
Amounts - Nat
Rates - Float
Volume - (Nat, Nat) or USD
Market depth - (Nat, Float) - (Amount, Rate)
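For reference, a rough TypeScript mirror of that output (illustrative aliases only, not taken from any ICRC draft):

```typescript
type Amount = bigint;              // Nat
type Rate = number;                // Float (IEEE 754 double)
type Volume = [Amount, Amount];    // (Nat, Nat); alternatively a USD figure
type DepthLevel = [Amount, Rate];  // market depth entry: (Amount, Rate)
```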

Do you think it’s a bad idea to use floats for rates?

It is a question nonetheless. Half of the DEXes use floats for some values and the other half uses Nat.

Sorry for the ‘lack of context’. Linking all the forum posts related to this so far:
https://forum.dfinity.org/t/icrc-45-live-dex-data/26417
https://forum.dfinity.org/t/icrc-47-dex-history/26477
https://forum.dfinity.org/t/icrc-55-defi-vectors/27209
https://forum.dfinity.org/t/devefi-ledger-icrc-ledger-client-library/27274

2 Likes

Let’s take this rate for example (ex: Bitcoin / very low-value coin)

0.000000000000000000000000000000000000000000123456789123456789123456789123456789
IEEE 754 float will be 1.234567891234568e-43
It will fit in 64 bits.

The Nat representation (amount : nat, decimals:nat8) will be something like
(12345678912345678912345678912345678900000000000000000000000000000000000000000000000000000000000000000, 100) - 100 or something else, we have to eventually limit the precision.

if our rate is 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000123456789123456789123456789123456789
IEEE 754 float will be 1.234567891234568e-231, still 64 bits.

The Nat representation (amount : nat, decimals:nat8) will be something like
(123456789123456789123456789123456789000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 231)

Redux - IMO the most common and best library for web state management - states:

A BigInt is not a serializable value with JSON (@lastmjs maybe has some thoughts on this), JSON being the most common serialization format in JavaScript. A float is serializable.

If we use BigInt with the casual libraries most devs would use, we will end up storing strings, or even worse for JS - arrays.
In cases where our web app has to store tens of thousands of rates in the browser’s memory and we use common libraries, our memory use with Nats will be a lot bigger. Not to mention calculations will be slower and we will have to convert things back and forth. Our contracts’ memory storing these numbers will also suffer - a lot less than JS, but still enough to be noticeable.
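A minimal sketch of that serialization point, using plain JSON.stringify; the replacer shown at the end is the usual workaround, not something prescribed by Redux or any particular library:

```typescript
const rateAsFloat = { rate: 1.234567891234568e-43 };
JSON.stringify(rateAsFloat);        // '{"rate":1.234567891234568e-43}'  (works out of the box)

const rateAsBigInt = { amount: 123456789123456789123456789n, decimals: 43 };
// JSON.stringify(rateAsBigInt);    // throws TypeError: Do not know how to serialize a BigInt

// Usual workaround: turn BigInts into strings, which is what ends up being stored.
JSON.stringify(rateAsBigInt, (_key, value) =>
  typeof value === "bigint" ? value.toString() : value
);
// '{"amount":"123456789123456789123456789","decimals":43}'
```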

Another solution would be to return (123456789123456789123456789123456789, 450) - adaptive decimals for every number. But there are no libraries right now that make these calculations easier. Still a js problem.

A third one - return the amounts without dividing them (Amount, Decimals, Amount, Decimals). This still has a bigger footprint and will require additional calculations.

So, worst-case scenario, we will be using 50 times more browser memory, and calculations will be - my guess is - 20 times slower.

The whole of crypto is based on probabilities with a small chance of failing; even the most stable coin price - Bitcoin - moves ±10% a day, and all exchanges report ±0.05% differences between each other. Do we really need to make it hard for web developers and Motoko developers just to gain precision after 17 digits, when even after 4 digits it is irrelevant?
I thought we were building web3, not accounting3. Feel free to change my mind!

That said, if we are wrong and you point us in the right direction, I think the whole DeFi community of builders will be eternally grateful. It doesn’t seem like anyone has figured it out.

2 Likes

For exact monetary values like e.g. balances you’d avoid floats due to floating-point precision issues; very small fractions of funds would basically end up disappearing if you stored these as floats in e.g. JS.
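A quick sketch of what that disappearing looks like in JS (the e8s amounts are illustrative):

```typescript
// Above 2^53 a 64-bit float can no longer represent every whole e8s,
// so adding a single e8s is silently lost:
const balance = 2 ** 53;                // 9_007_199_254_740_992 e8s, roughly 90M ICP
console.log(balance + 1 === balance);   // true: the extra e8s vanished

// The same amount as a BigInt keeps every unit:
const exact = 2n ** 53n;
console.log(exact + 1n === exact);      // false: nothing is lost
```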

The precision issues with floats are rounding issues, e.g. 0.2 suddenly being 0.1999999999, which is not a problem for an exchange rate, since that’s more than precise enough to indicate how many tokens B your tokens A are worth.

With an actual exchange of tokens A to tokens B you’d get a bigint return value for a given bigint input value, since transfers of tokens are in e8s (ICP), satoshis (Bitcoin), etc.

It’s only a problem when your balance suddenly is 0.200000001 instead of 0.2 because that would magically create 0.0000001 out of thin air. Which is why balances are not in floats.

So yeah, it’s not a problem to use floats for an exchange rate and indeed common practice.

2 Likes

Thanks for your contribution to this topic!

In what scenario does that occur?

1 Like

You can’t exactly represent certain fractions in binary format. See more at https://0.30000000000000004.com
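The classic example from that page, runnable in any JS console:

```typescript
console.log(0.1 + 0.2);           // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);   // false
// Harmless when displaying an exchange rate; balances stay as whole-number units instead.
```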

2 Likes

Thanks. Right!
So do we care about that inaccuracy when it comes to rates?

1 Like

The rates are fine as floats.

Also, multiplying a Nat amount of tokens by a float rate of 0.1999999999999 or 0.2 almost never makes any difference, because the output is also interpreted as a whole-number Nat of tokens, and the difference in such a calculation would be somewhere in the decimals that aren’t returned in a Nat in the first place. In theory you could sometimes be, for example, 0.000000001 ICP off (1 e8s), but that’s unavoidable in any math with floats due to their limited precision.
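For example (numbers chosen for illustration):

```typescript
// 1 ICP = 100_000_000 e8s, multiplied by a rate and floored back to whole e8s:
Math.floor(100_000_000 * 0.2);             // 20_000_000 e8s
Math.floor(100_000_000 * 0.1999999999999); // 19_999_999 e8s, i.e. at most 1 e8s off
```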

The same applies to floats that are created within a calculation of non-floats, e.g. (a / b) * c, where a and b are non-float values but the division produces a float that then gets multiplied by c. So if a DEX does not return the float (= a / b) but instead returns a and b as separate values, it doesn’t make a difference in the end, since you’d just end up producing the same imprecise float yourself in your calculation.

In practice, when you as an exchange work with floats for a fee, you’d calculate the fee amount first and then deduct it from the total amount, instead of multiplying the total amount by (1 - fee float) to get the remaining amount. That way you don’t have to worry about the remaining amount plus the fee amount not being equal to the total amount.
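A minimal sketch of that pattern (function and variable names are hypothetical):

```typescript
// Fee rate is a float; amounts stay as whole-number e8s.
function splitFee(totalE8s: bigint, feeRate: number): { fee: bigint; remaining: bigint } {
  // Compute the fee from the total and round it down to whole units.
  // (For very large totals you would avoid the Number() conversion; kept simple here.)
  const fee = BigInt(Math.floor(Number(totalE8s) * feeRate));
  // Derive the remainder by subtraction, so fee + remaining always equals the total.
  const remaining = totalE8s - fee;
  return { fee, remaining };
}

const { fee, remaining } = splitFee(100_000_000n, 0.003); // 1 ICP, 0.3% fee
console.log(fee + remaining === 100_000_000n);            // true by construction
```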

3 Likes

No we don’t, and given the current state of our other hash functions, that’s probably a good thing.

We also don’t offer a primitive to extract the bits of a float. You could work around this by writing the float to a region and reading it back as a Nat64 or blob, or (less recommended) by encoding to a Candid blob and making sure to extract the relevant bytes (probably the last eight, but you’d need to check).

But we should just add the primitive I guess.

3 Likes

In the WG session tomorrow (March 19), I will present an idea of user-paid messages that we are exploring at DFINITY. This is more of a temperature check with the community to see if there is interest in the regular gas model [in addition to the existing reverse gas model]. The goal is to collect the actual use cases and to see if there are many potential users.

5 Likes

Thanks all for joining today’s session. Here are the slides.

Summary:

  • Two community members expressed concrete interest and would use the feature in their applications if it were available.
  • Multiple people thought the feature would be useful for launchpads, inscriptions, DEXes, and other applications, both for boosting messages and for protecting against cycle-draining attacks.
  • A feature to query the average fee in the last few blocks would be useful.
  • Consider a feature to allow spending cycles from someone else’s (sponsor’s) balance if the sponsor signs the approval. This would allow an off-chain service for users to boost their messages without talking to the cycles ledger canister and without maintaining their cycle balance on each subnet.

If I missed anything important, please add it in the comments.

5 Likes

We are interested in feedback on whether you’d leverage this feature in your applications, and whether you have any concerns.

@ICPSwap @sea-snake @simpson @cakemaker1 @memecake @OrangeDonut @mzibara @dostro @witter @bob11

2 Likes

We should definitely have a regular gas model. Self-sovereignty is difficult without it.

We’d use it for NFID Vaults.

2 Likes

I have a ton of questions; sorry to have missed the presentation.

  1. This seems to be mostly talking about Ingress messages. Is that true? So the boundary nodes are the bottleneck and they will need to order the messages according to fee? Does this then carry through to the replica? Are they also ordering by the extra fee?

  2. This leads me to a deeper question that maybe I should have asked a long time ago, but the current assumptions make it irrelevant and I always assumed it would come up eventually as we moved to decentralized boundary nodes and/or nodes running custom replicas. We currently don’t have to worry (much) about MEV on the IC, but should we start worrying about it? If a boundary node can order by fee, what is to keep it from ordering by whatever metric it wants, including inspecting the incoming message and sliding a message in front of it that front-runs a trade? Is there anything currently keeping node providers from running a forked replica with their own custom code on it? If we do implement this, do we perhaps get a handhold to start doing some kind of slashing if a message is misordered?

  3. If this is just an ingress issue, then what about inter-canister calls? Does this system need to exist there? I actually expect most of the traffic on the IC to eventually be inter-canister. Building an application is already hard enough. With the upcoming addition of non-guaranteed message delivery, we’re going to have more failure conditions to handle for inter-canister calls, and this seems like another addition. It is a lot easier for a user to decide if they want to boost their message than for a hard-coded canister to do so.

  4. Who gets this extra fee? Is it always cycles? Is it burned? Can we pay in ICP or ckBTC? If so how will the node calculate the exchange rate? If the end application gets the fee then that seems like a fairly cool feature. Do they have to update their code to accept it or will it just get deposited automatically?

  5. If it is inter-canister, should we be looking at an application-level solution instead of adding to the replica? (I have a utility that I’ve been working on that seems to do something like this, and it is going to get frustrating if every problem we run into keeps migrating down into the replica after someone burns a couple thousand hours of dev time; see some of the comments on today’s ‘enable canister to hold neurons’ thread. I’m not saying this is one of those times, but I’m concerned it might be. Moving to the replica might be the right answer, but it still sucks if you’ve built something that becomes irrelevant.)

  6. If everyone moves to canister-based wallets does this get easier?

I can probably think of 10 or so more questions, but I’ll stop there and try to get up to speed before I ask any more.

2 Likes

This feature will enable ICP developers to build stand-alone immutable applications that live forever. With the current cycles model, it is much harder as you always have the risk that the canister will be frozen and uninstalled.

My main question is: even in the event that you do charge users (as a form of gas/network payment), it still doesn’t seem like you cover all of the costs, because storage is charged over time and usage may not cover the storage costs.

So the real solution here (from my perspective) would be to couple a one-time storage payment with user-supplied gas fees. This is essentially the ETH model: pay a one-time fee to upload your smart contract to the network (it could be very expensive), then not worry about storage costs anymore, and then charge users enough in fees to cover all compute plus the additional storage from their transactions. Now you have a self-sufficient system that is not relying on anyone else to survive. Meaning:

  1. If no one interacts for 6 months, it doesn’t matter. All storage has already been paid for.
  2. If tons of people interact, it doesn’t matter. Users cover their costs.
  3. If nobody ever thinks about topping up the canister, it doesn’t matter, because the canister is self-sufficient at that point.

If we could get one-time paid storage + user-paid gas (essentially copying the ETH model), developers could more easily build immutable, decentralized, self-sustaining applications.

6 Likes