Threshold ECDSA Signatures

We have in fact done an initial analysis of the tokenomics and we feel that this will not be a major problem. Based on the cost of paying node providers, we would have to charge on the order of a few cents per signature to not be inflationary. Indeed, if we assume a throughput of 1 sig/sec, 35 nodes in a dedicated signing subnet, and $2000 per month per node, we would need to charge about 3 cents per sig to break even. This is generally in line with what we initially plan to charge.
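
For reference, the arithmetic behind the 3-cent figure works out as follows (a back-of-the-envelope sketch in Python, not an official pricing model):

```python
# Quick sanity check of the break-even figure above (a sketch, not an
# official pricing model).
nodes = 35                # nodes in the dedicated signing subnet
cost_per_node = 2_000     # USD per node per month
sigs_per_second = 1       # assumed throughput

seconds_per_month = 30 * 24 * 3600            # 2,592,000
monthly_cost = nodes * cost_per_node          # 70,000 USD
sigs_per_month = sigs_per_second * seconds_per_month

print(f"break-even fee: ${monthly_cost / sigs_per_month:.3f} per signature")
# -> break-even fee: $0.027 per signature, i.e. roughly 3 cents
```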

You are right that 100% utilization is not realistic. Assuming the IC and this feature really take off, we can fine-tune things to aim for something realistic, like 25-30% utilization, and still have some spare capacity for surges. Over time, we will fine-tune the performance and price, as well as build out several signing subnets with load balancing (“horizontal scaling”). As long as everyone is on board with the idea that the price per sig is on the order of cents (rather than, say, hundredths of a cent), this all seems reasonable, at least to me.
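
To make the utilization point concrete: dividing the full-load break-even fee by the target utilization gives the fee needed to break even at that load (the same back-of-the-envelope arithmetic as above):

```python
# Break-even fee as a function of subnet utilization, using the
# ~$0.027/sig full-load figure from the sketch above.
full_load_fee = 0.027  # USD per signature at 100% utilization

for utilization in (1.00, 0.30, 0.25):
    print(f"{utilization:.0%} utilization -> ${full_load_fee / utilization:.3f}/sig")
# 100% -> $0.027, 30% -> $0.090, 25% -> $0.108: still "on the order of cents"
```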

It does eventually need to be cheap to sign these things. 3 cents sounds cheap until you’re a service provider signing hundreds of thousands of Polygon transactions a day in a gasless model.

Understood. But I don’t think the economics of this can be changed much, so dapp developers will have to plan accordingly.

From what I can see, as of today and excluding all voting rewards, 2,777,790 ICP has been paid to node providers while a total of only 92,279 ICP has been burned, and the gap in favor of the providers is constantly increasing. So I don’t really understand the ‘not to be inflationary’ statement. Am I missing something? Is this an actual issue to be resolved, if possible? Any plans?

Usage of the network is still not high enough to create deflation.

The way I see it, the more computing you need, the more nodes you need, and thus more inflation as well. Taking the current figures, burning covers only about 3% of minting (92,279 / 2,777,790), i.e. the network is effectively at 3% of the usage it would need to break even. On top of that, we all know usage cannot reach 100%.
But if the network is only used at 3%, why add more nodes now?
I would really be happy to see a plan or some expectations from Dfinity on this, because for me the numbers don’t work for deflation and never will, not even close. @Kyle_Langham is the expert on numbers, but I haven’t seen any study on this.
Show me evidence, if there is any.
I really hope I am totally wrong, but I cannot see how I could be.
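
For concreteness, the 3% figure works out as follows, using the numbers quoted in this thread:

```python
# The burn/mint ratio implied by the figures quoted above.
icp_minted_for_node_providers = 2_777_790
icp_burned = 92_279

ratio = icp_burned / icp_minted_for_node_providers
print(f"burn covers {ratio:.1%} of node-provider rewards")
# -> about 3.3%; burn would need to grow ~30x (at constant rewards) to break even
```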

I only meant that the threshold ECDSA feature would not make inflation any worse, not that it would do anything by itself to make it any better (unless we charge huge fees for ECDSA sigs so that they would subsidize other costs).

Personally, I would like to see higher fees for the ECDSA feature, simply because it is a market differentiator. Devs should pay more for such a feature. Or, to put it in the context of the reverse-gas model, devs should build dapps that use this feature in ways novel enough to grow their user base.

I don’t think that we should use fees from ECDSA to “subsidize” other features.

I disagree; it should be as cheap as possible, as long as the tokenomics allow for deflation under realistic usage. That way devs can use it more easily and for more use cases, bringing more dApps and users to the ecosystem, which in turn will drive more cycle usage.

Early node providers were paid significant amounts just after launch, I believe as a result of providing nodes prior to launch and as a reflection of the increased risk at that time. Recently, the rewards paid to node providers have been much lower than last year.

I don’t remember where I documented this a year ago (perhaps Twitter), but my memory is that if the network operated at 100% capacity, it would result in a 7x-10x burn-to-mint ratio.

It’s also possible that the NNS decides to charge more for computations in the future to increase the deflationary pressures. I imagine that wouldn’t be considered until there’s more growth, however.

I have 2 concerns:

  • Subnets are quite small right now, so the 7-10x ratio isn’t optimistic only because it assumes 100% usage at all times, but also because it assumes the average subnet consists of just 13 nodes.

  • Subnets can currently handle 300 GB of state, which means that at $5/GB/yr it would only take $1,500/yr to occupy a subnet and make it unavailable to anyone else, effectively wasting the subnet’s capacity and making it inflationary. Has Dfinity thought about such a scenario?

This calculates only storage, not computation, right?

That’s right, but there is no guarantee that computation will be done on that data, or at all. The space might be used for simple storage or as a means to attack the IC.

Currently we have ~35 subnets, so an attacker could spend $52,500/yr ($1,500 × the number of subnets) and waste almost the entire computational capacity of the IC. I say almost because already-existing dApps would still work, though they might encounter some issues, e.g. the inability to spawn new canisters. On top of that, it would make it impossible for the system to become deflationary.
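
Spelling out the arithmetic behind these two figures:

```python
# Cost to fill one subnet's state, and to do so across the whole network,
# using the figures quoted in this thread.
state_capacity_gb = 300    # current per-subnet state limit
storage_price = 5          # USD per GB per year
subnet_count = 35          # approximate number of subnets today

cost_per_subnet = state_capacity_gb * storage_price   # 1,500 USD/yr
network_cost = cost_per_subnet * subnet_count         # 52,500 USD/yr
print(f"${cost_per_subnet:,}/yr per subnet, ${network_cost:,}/yr network-wide")
```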

I think there is, but it’s getting harder and harder given how much performance we have already squeezed out of the protocol.
I don’t remember the exact size of the testnet that was used for this, but if I recall correctly, it was rather close to 30 nodes, so smaller than the subnet we want to launch on, but not by much.

Yes, that’s possible if there’s demand. The feature can scale out horizontally by adding more signing subnets for the same key. The governance for all of this is already part of the initial design and the NNS proposals, but some parts, such as the deterministic load balancing, would still need to be implemented to make this work.
The idea of the load balancing is to determine deterministically, for each signing request, which signing subnet the request should be sent to. All signing subnets would be listed in the registry.
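
As an illustration of what such deterministic routing could look like (a hypothetical sketch, not the actual design; the function and subnet names are made up): hash the request ID and reduce it over the registry’s list of signing subnets, so every replica independently computes the same target without coordination.

```python
import hashlib

# Hypothetical sketch of deterministic load balancing, not the actual
# IC implementation: every replica hashes the request ID and maps it
# onto the registry's list of signing subnets, so all replicas agree
# on the target subnet without any coordination.
def route_signing_request(request_id: bytes, signing_subnets: list[str]) -> str:
    digest = hashlib.sha256(request_id).digest()
    index = int.from_bytes(digest, "big") % len(signing_subnets)
    return signing_subnets[index]

# Example with three (made-up) signing subnets registered for the same key.
subnets = ["signing-subnet-a", "signing-subnet-b", "signing-subnet-c"]
print(route_signing_request(b"request-42", subnets))
```

A real scheme would presumably also need to handle adding or removing signing subnets gracefully (e.g. via rendezvous hashing), since a plain modulo reshuffles most requests whenever the subnet list changes.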

Can a single subnet only handle a single signing request at a time?
I’m not a cryptography expert, but intuitively it seems like these requests should be able to run in parallel.

What’s the biggest bottleneck: consensus, or the complexity of the cryptography involved? If the latter, would it be possible in the future to optimize by running specialized hardware for ECDSA subnets?

Also, considering that performance degrades with node count, has Dfinity considered implementing a system similar to the one described in the original whitepaper, where only a subset of nodes, chosen via VRF each block, reaches consensus?
