Enhancing Network Decentralization - Proposals for Node Provider Standards

Definitely an interesting analogy, as few people buy a home outright. Similarly, we could run into an issue where the people with the right kind of expertise, background and/or contributions to the ecosystem are not often enough those who have the capital to become Node Providers.

It could be good to have a means of attracting the most favourable candidates with ideas like this →

Very good question!

Assuming you would only have two node providers in one data center, then yes. The target topology would only allow a single data center in any given subnet.

However, assume that you have two node providers A & B, each having one node in data center 1 and one in data center 2, sharing a rack in both of them. Then a node of provider A in data center 1 and a node of provider B in data center 2 could be allocated to the same subnet, although both A & B might have physical access to both nodes.
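To make the scenario concrete, here is a hypothetical sketch of how such a subnet would pass the formal per-subnet limits even though both providers share racks. The data structures and check are illustrative only, not the actual registry logic:

```python
# Illustrative sketch: per-subnet limits of one node per data center and
# one per provider do not rule out two operators who share racks across
# multiple data centers. Names and structure are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    provider: str      # node provider
    data_center: str   # data center hosting the node

def subnet_allowed(nodes, max_per_dc=1, max_per_provider=1):
    """Check the per-subnet limits the target topology imposes."""
    dcs, providers = {}, {}
    for n in nodes:
        dcs[n.data_center] = dcs.get(n.data_center, 0) + 1
        providers[n.provider] = providers.get(n.provider, 0) + 1
    return all(c <= max_per_dc for c in dcs.values()) and \
           all(c <= max_per_provider for c in providers.values())

# Providers A and B each have a node in DC1 and DC2, sharing racks in both.
candidate = [Node("A", "DC1"), Node("B", "DC2")]

# The subnet passes the formal limits (one node per DC, one per provider),
# even though A and B may both have physical access to both machines.
print(subnet_allowed(candidate))  # True
```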

2 Likes

Great point! Thanks for clarifying, that makes sense 🙂

1 Like

I’ve been thinking on this a little more, and I think this would actually kill 2 (or maybe even 3) birds with one stone (if the maturity ownership limitation is addressed).

  1. NF neurons are controlled by the NNS.
  2. So is the D-QUORUM neuron (DQ).
  3. In this potential staking/slashing implementation for Node Providers, so would these special NP neurons.

In all 3 cases a missing feature is for the maturity to be allocated to individual(s) who are not the formal neuron controller (given that the controller is the NNS itself). There’s no reason for maturity to accumulate in an NNS neuron that’s owned by the NNS itself.

If instead this maturity automatically flowed to the neurons that are followed by that NNS-controlled neuron, numerous problems would be solved at once (NF, DQ, NP).

Example NP Neuron Setup

Note that the only thing that really represents a new feature are the orange lines (maturity flowing to followee neurons rather than accumulating in the neuron itself, if it’s an NNS-controlled neuron). Everything else is already supported.
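A tiny sketch of what that new feature (the orange lines) could look like. This is a hypothetical mechanism, not existing NNS code, and the equal split among followees is my own assumption:

```python
# Sketch (NOT the actual NNS implementation) of the proposed "maturity
# cascade": maturity earned by an NNS-controlled neuron flows to its
# followee neurons instead of accumulating in the neuron itself.
# The equal pro-rata split is an assumption for illustration.

def cascade_maturity(earned_maturity, followees):
    """Split earned maturity equally among followee neuron IDs."""
    if not followees:
        return {}
    share = earned_maturity / len(followees)
    return {neuron_id: share for neuron_id in followees}

# An NNS-controlled NP neuron following three reviewer neurons:
payout = cascade_maturity(90.0, ["reviewer-1", "reviewer-2", "reviewer-3"])
print(payout)  # each followee receives 30.0
```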

This would support NP neuron staking, while simultaneously making on-chain NNS reviewer rewards possible (using D-QUORUM), as well as addressing limitations with NF neurons.

cc @lara

1 Like

It should be pointed out that the D-QUORUM neuron is not actually controlled by the NNS…it is controlled by the Followees assigned to the Neuron Management proposal topic. There is no code that performs the intended functions that Alex would like to implement for NNS control. In fact, this functionality is intentionally forbidden in the code. There are numerous forum threads that discuss the features he would like to implement and in all cases DFINITY has recommended that he create this functionality in a separate canister.

The reason that the NNS is currently assigned as the controller of the D-QUORUM neuron is that when you spawn a neuron from neuron maturity using the command line, you can assign a controller. Alex created this neuron and assigned the NNS governance canister as the controller. Before taking this action, he assigned himself as a Followee for the Neuron Management proposal topic on the parent neuron so the child neuron would inherit the same Followees. He was then able to set other people as Followees for this topic too.

Controlling a neuron via the Neuron Management proposal topic is a feature that is built into every NNS neuron, but very few people know about it because it is only accessible via the command line. These Neuron Management Followees can control all aspects of the neuron except disbursing ICP. That means they have the ability to set Followees for every topic, trigger a manual vote on every proposal, set hotkeys, change the dissolve delay and state, etc.

So the truth is that these Followees control the D-QUORUM neuron in the ways that are built into every NNS neuron, while the NNS is listed as a controller in name only, because there is intentionally no functionality that enables the NNS to actually control a neuron in this way.
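To illustrate the division of capabilities described above, here is a toy model. The operation names are my own shorthand, not the actual NNS governance API:

```python
# Illustrative model of the built-in split described above: Neuron
# Management followees can manage almost everything on a neuron except
# disbursing ICP, which remains a controller-only operation.
# Operation names are assumptions, not the real governance interface.

FOLLOWEE_ALLOWED = {
    "set_followees", "vote", "set_hotkeys",
    "set_dissolve_delay", "start_dissolving", "stop_dissolving",
}
CONTROLLER_ONLY = {"disburse"}

def followee_may(operation):
    """Can a Neuron Management followee perform this operation?"""
    return operation in FOLLOWEE_ALLOWED and operation not in CONTROLLER_ONLY

print(followee_may("vote"))      # True
print(followee_may("disburse"))  # False
```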

Creating methods that give the NNS direct control of a neuron’s Followees in this way would be a slippery slope that fundamentally changes NNS governance. Personally, I’m getting tired of always feeling the need to raise a red flag every time I see Alex bring up this topic. He alludes to capabilities that don’t actually exist, or to support from DFINITY on an idea where they have not actually indicated support. I would greatly appreciate it if DFINITY (aka @lara or @bjoernek) would offer an assessment of Alex’s ideas in a way that gives concrete feedback on whether or not the NNS will ever be used to elect Followees for individual neurons, or whether the NNS will ever change the maturity distribution mechanism so that it is no longer assigned to the actual controller of the neuron. Alex uses language that makes it sound like his ideas are a foregone conclusion based on his read of DFINITY feedback so far. If he is right, then I’d like to know so I can adjust my responses accordingly. If he is wrong, then I would appreciate it if you would tell him in no uncertain terms.

For more information about the ideas Alex has raised regarding D-QUORUM neuron control by the NNS, you can start with these threads:
D-QUORUM Stake & Disbursals
Known Neuron Proposal: D-QUORUM

1 Like

If we are going to limit data centers to 1 node provider only, then would it be worth considering allowing each node provider to have nodes in only 1 or 2 data center(s)? Wouldn’t this inherently help avoid confusion, maximize the target topology goals, and simultaneously impose a limit to the number of nodes that can be owned by any node provider to equal roughly the max number of subnets?

You write that there is a risk that someone might be able to access a server in a data center if it is located in the same rack as other servers. How do you imagine this?

  • This is a criminal offense.

  • No one knows, and cannot know, exactly what is on this server.

  • When working with a personal server, it is removed from the rack if long-term maintenance is needed. If the procedure is short, there is always a data center employee nearby, plus cameras. It is impossible to do anything on another server, even if someone somehow finds out that it is connected to ICP. But how can one know that this is my server? How can this be done?

  • All information on the server is encrypted. How can someone cause harm if we don’t have access?

I have done a lot of work to find data centers, sign contracts with each one, and solve the various breakdowns and bugs that arise on the data center side. Would we lose decentralization by consolidating now, while adding a bunch of problems for those who would have to rearrange their nodes and ship them across different countries? I do not fully understand how much money we would save by cutting payments to node providers, or whether this would affect the token price. I have been holding ICP for 3 years and my average price is $6.50, so this question worries me a lot. Are these efforts directed in the right direction?

1 Like

From personal experience, that doesn’t seem to be a deterrent to these bad actors in the network.

1 Like

The NNS is verifiably the controller of the neuron (the permanent, formal controller that can never be changed). Note that this is the same for NF neurons, and that’s why you see statements like this in the documentation.

neurons created for NF participants are controlled by the NNS

Also note that two prominent DFINITY team members are on the D-QUORUM co-founding committee, which involved back-and-forth discussion before D-QUORUM was set up. The founding committee also includes members of Synapse, members of CodeGov, members of CO.DELTA, and dev team engineers from prominent projects including WaterNeuron, Toolkit, and TAGGR; the list goes on.

This statement should be balanced against the actual D-QUORUM goal and co-founder commitment →

The NNS can boot the Neuron Management followees whenever needed, and this will be welcomed.


Back to the Point - Slashable NP Neurons…

My point is that nothing needs to be done by the NNS to support the D-QUORUM mission. Simply by facilitating slashable NNS-controlled neurons (with maturity cascade), elected proposal reviewers will automatically begin receiving D-QUORUM maturity. Then we can get on with the business of increasing the D-QUORUM neuron stake, for which there are numerous plans.

I understand that you’re hesitant about this idea Wenzel. D-QUORUM is about supporting competition in the proposal reviewer community. This is sorely lacking, which serves existing proposal reviewers well. Rest assured that this initiative is designed with best intentions for the network as a whole.

1 Like

I am a Gen2 node provider and would also like to provide my feedback on the proposed changes:

  • Regarding any new disclosure requirements or KYC procedures, I see no issues and I am sure we can come to a conclusion that addresses the actual concerns that have been raised. A process can always be strengthened, especially when scenarios or risks have been identified that weren’t thought about at the outset, and I have no issue with that.
  • As regards making people buy more nodes to get to 10, consolidating node providers who are in the same data center etc., I do not see the necessity of that. It was already pointed out that some of these situations exist because of how Gen2 node rewards work with the reduction factor. All these node providers made proposals at the time to set up nodes in a certain way which were accepted by the community, including Dfinity. Maybe node rewards need to be structured differently going forward to not incentivise people to set things up in that way. But I see no reason to make such radical changes right now when we can do a lot of other things through disclosures etc. to address the actual risks that have been identified.
  • As regards staking 50% of node proceeds, I don’t think this takes into account the realities of the business model of being a node provider and I would not be supportive. Several node providers already also chimed in and I agree with them. Gen1 node providers were allowed to finish their 4 year term and they have just been given the rules for the 2 year extension.
    • Gen2 node providers onboarded under the then mandate of geographic decentralisation and based their investments on the parameters that were stipulated at the time. Yes, there is risk with every investment, and there already is significant risk in any case given node providers are paid at the 30-day moving average and the nature of the investment in general. However, factoring in such radical changes to the terms of being a node provider is not something anybody would have been able to anticipate.
    • @bjoernek implied that node providers would then be staking the 50% of node provider rewards that is attributable to profit. However, this is not an accurate assessment of how the node provider business works. Firstly, the monthly reward value varies based on the 30-day moving average price, and there have been many months where spot has been below the 30-day moving average. Data center expenses have to be paid regardless.
    • Secondly, assuming a 50% profit margin is quite a stretch in my opinion and experience. Malith already shared insights on costs in some of the geographies where Gen2 node providers operate, I can confirm that his numbers are spot on. Yes, node rewards somewhat take into account different costs across geographies, but the “total cost over 4 years” benchmark consisted of a CAPEX and OPEX benchmark, and from what I recall mostly estimates of varying OPEX were factored in. However, a lot of Gen2 node providers incurred a lot of additional costs in terms of shipping and import taxes. Also, anyone not in the US or Europe will know that after their 4 year term, there would be not much chance of selling their servers for any other purpose. Shipping, export, import etc. is simply prohibitive and there is no viable local market. I for one factored in full depreciation over the 4-year period.
    • Thirdly, none of this takes into account that most node providers have to pay taxes, which is another huge expense.
    • Of course all node providers, including myself, made a business model and decided at the time that it was worth it. However the key assumption was that the agreed terms would stay for the 4-year period and not be changed arbitrarily. If there had been a 50% staking requirement, the business model would have been fundamentally different.
    • As a node provider you already face various risks: data center costs go up each year - mine have increased 15% from last year - while node rewards do not, ICP price can hurt your return when spot is below 30-day moving average, servers fail and need to be replaced etc. I signed 4-year contracts with data centers as well which include quite hefty early termination clauses. But these were all identifiable risks that I could “price in” when I made the decision to invest in node providing. But the risk of Dfinity arbitrarily changing the fundamental terms during the 4-year period? No, that is not something any of us would have been able to anticipate or price in in any way.
    • I would highly recommend not setting this sort of precedent, and instead continuing under the current rewards model until Gen2 node providers have finished their 4-year term and Gen1 node providers have finished their 2-year extension, respectively. For any new node providers, new terms can of course be set. This course of action would be a lot more professional and is also standard business practice.
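The 30-day moving average risk mentioned in the points above can be illustrated with a small sketch (Python, made-up figures): when the spot price drops below the moving average, the ICP received is immediately worth less than the nominal reward.

```python
# Illustrative sketch (made-up numbers) of the pricing risk described
# above: rewards are converted to ICP at the 30-day moving average price,
# so when spot falls below that average, the ICP received is immediately
# worth less than the nominal reward at payout time.

def icp_paid(reward_usd, daily_prices):
    """ICP minted for a USD-denominated reward at the 30-day average price."""
    window = daily_prices[-30:]
    avg_30d = sum(window) / len(window)
    return reward_usd / avg_30d

prices = [10.0] * 29 + [7.0]     # spot drops sharply on payout day
icp = icp_paid(2000.0, prices)   # minted at the 30-day average (~9.90)
spot_value = icp * prices[-1]    # value if sold immediately at spot

print(round(icp, 2), round(spot_value, 2))
```

Data center bills are owed in fiat regardless, which is why the gap between nominal rewards and realizable spot value matters.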
10 Likes

And once again, the only ones trolling on an otherwise constructive forum topic are you guys.

What if the % that’s allocated to the NP stake could be chosen by the Node Provider, so they could opt out? There would of course need to be an incentive to stake (and benefits to staking more).

I think it’s a very popular and sensible opinion that, in an ideal world, Node Providers should be able to demonstrate significant PoS (proper proof of stake, in ICP, in a way that can be slashed by the NNS as a counterbalance for malicious behaviour).

Just chucking an idea out there, what if there were a ‘you pay in we pay in’ scheme. When Node Providers stake into a slashable neuron that represents them as a Node Provider, maybe the NNS could mint a matching amount into that stake (a stake which has the maximum dissolve delay, 8 years). Given there’s the threat of the NNS slashing the stake (if wrong doing is evident), it seems reasonable for the NNS to help incentivise and bootstrap this stake. After all, we’re talking about securing the infrastructure that the NNS itself runs on.
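A rough sketch of the matching idea above, with all mechanics assumed for illustration (a 1:1 match and proportional slashing are my own choices, not an existing NNS feature):

```python
# Hypothetical sketch of the "you pay in, we pay in" scheme: when an NP
# stakes into a slashable neuron, the NNS mints a matching amount into
# the same stake, locked at the maximum (8-year) dissolve delay.
# Nothing here exists in the NNS today; it is illustration only.

MAX_DISSOLVE_DELAY_SECONDS = 8 * 365 * 24 * 60 * 60  # 8 years

class NpStakeNeuron:
    def __init__(self):
        self.np_stake = 0.0       # ICP paid in by the node provider
        self.matched_stake = 0.0  # ICP minted by the NNS as a match
        self.dissolve_delay = MAX_DISSOLVE_DELAY_SECONDS

    def stake(self, amount_icp):
        self.np_stake += amount_icp
        self.matched_stake += amount_icp  # 1:1 NNS match (assumption)

    def slash(self, fraction):
        """NNS slashes the whole stake on proven misbehaviour."""
        self.np_stake *= 1 - fraction
        self.matched_stake *= 1 - fraction

    @property
    def total(self):
        return self.np_stake + self.matched_stake

n = NpStakeNeuron()
n.stake(1000.0)
print(n.total)   # 2000.0 — the NP's skin in the game is doubled
n.slash(0.5)
print(n.total)   # 1000.0 — both halves are slashed together
```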

This could be a limited time scheme, covering only the period that NPs have already planned and accounted for. At the end of that period, there should be a means of highly staked NPs benefitting from their commitment (such as by being eligible to participate in higher security subnets where the rewards could be higher).

Just thinking out loud and chucking this out there for discussion. At the end of the day we need PoS, and I think we should be trying to find ways to make this work while accounting for legitimate concerns.

2 Likes

We have a node provider working group, and the renewed discussions regarding the current Gen1 and Gen2 node providers are very concerning. After extensive discussions and alignment earlier this year, a 2-year commercial arrangement was agreed upon and implemented. Now, it appears that some community members, backed by the foundation, are looking to reopen the discussions regarding the terms, despite the fact that node providers have already committed to commercial agreements with data centers based on them. This creates uncertainty and raises questions around the stability and credibility of the Internet Computer Network.

  • Data center/service provider rules – Many existing node providers, particularly in Asia, have long-term contracts in place. The introduction of new requirements could render continued operations impractical and economically unviable. Had these been clear from the start, providers may have made different choices. A fair approach would be to allow current participants to fulfil their existing 2-year terms under the original assumptions, with a reasonable transition period going forward.

  • 50% staking of node rewards – This could significantly impact the financial sustainability of some providers, especially when considering hardware failures, repair costs, and tax obligations (which in some jurisdictions can be as high as 50% on node rewards). Such a change could result in providers exiting the ecosystem and offloading hardware at a loss—outcomes that are not beneficial to the network’s long-term health.

  • Current incentives already work - The system already motivates node providers to keep nodes healthy. Let’s not forget that.

  • Some of the new criteria and added complexity will also make the foundation less nimble, potentially slowing it down just when it needs to scale faster. Let’s not discount the potential for growth in the name of scrutiny.

  • An increasing number of node providers are losing faith in the process and are prepared to call it a day. We are close to a silent quitting by a number of very relevant node providers, simply because it is tiring to have this rather unproductive discussion once again.
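The 50% staking concern raised above can be made concrete with a quick back-of-the-envelope calculation (hypothetical numbers): in a jurisdiction taxing the full reward at 50%, staking another 50% leaves nothing liquid for operating costs.

```python
# Worked example (illustrative figures) of the liquidity concern: with
# 50% of rewards staked and a jurisdiction taxing the full reward at
# 50%, nothing liquid remains to cover data center bills and hardware.

monthly_reward = 10_000.0          # USD-equivalent node rewards
staked = monthly_reward * 0.50     # locked under the proposed requirement
tax = monthly_reward * 0.50        # tax can be owed on the full reward
liquid = monthly_reward - staked - tax

print(liquid)  # 0.0 — operating costs would have to come from elsewhere
```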

The most reasonable and sensible approach would be:
Grandfather the current Gen1 and Gen2 agreements until their respective ends. That creates stability.

All discussions we have now should relate to new node providers and to the time when the Gen1 and Gen2 agreements come to an end. Anything else would be unproductive and would impact the credibility of Dfinity and the Internet Computer Network.

2 Likes

Are we going to blackmail the foundation again, Stefan?

1 Like

Hi @Lorimer, thank you for your ideas. I think the ability for an NP to opt out (presumably without penalty) and to choose a % with certain incentives attached could be quite reasonable, in particular for some sort of transition period or as a test case/proof of concept of this model. Having subnet-specific requirements and rewards might also be a feasible option, under the assumption - I presume - that there would be enough nodes available for said subnet that meet the requirements. My point being that presumably one would need a certain minimum participation of nodes that meet these requirements from day 1.

2 Likes

Good question! The existing target topology already addresses decentralization of data center providers/owners. In particular, it only allows the same data center provider to appear once on any single subnet.

2 Likes

That’s a valid point. Until now, I primarily looked at node clusters in terms of UBOs and family connections. However, you’re right; we could also take into account maintenance providers and colocation arrangements when determining clusters.

Side note regarding colocation: it is mainly problematic when it spans multiple data centers. Colocation within a single data center is already addressed by the current target topology, which imposes data center limits. (So this should ideally be taken into account when using colocation for clustering.)
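One simple way to sketch the clustering idea (illustrative only; the linking criteria and the union-find approach are my own choices, not an existing implementation):

```python
# Sketch of clustering node providers by shared linking attributes
# (UBO, family ties, maintenance provider, or colocation spanning
# data centers). Union-find keeps it simple; all data is made up.

from collections import defaultdict

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def cluster_providers(links):
    """links: list of (provider_a, provider_b, reason) tuples."""
    parent = {}
    for a, b, _ in links:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        parent[find(parent, a)] = find(parent, b)
    clusters = defaultdict(set)
    for p in parent:
        clusters[find(parent, p)].add(p)
    return [sorted(c) for c in clusters.values()]

links = [
    ("NP-A", "NP-B", "same maintenance provider"),
    ("NP-B", "NP-C", "colocated across DC1 and DC2"),
    ("NP-D", "NP-E", "shared UBO"),
]
print(cluster_providers(links))
# → [['NP-A', 'NP-B', 'NP-C'], ['NP-D', 'NP-E']]
```

A cluster computed this way could then be treated as a single entity when applying subnet allocation limits.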

2 Likes

So the topology would consider, for example, br1 and mm1 as the same data center? Just curious and confirming that this is how it works. I would have assumed that so far the topology would still have considered br1 and mm1 as different data centers.

1 Like

Technically, they are treated as different data centers. However, there is a separate characteristic called “data center provider” (in addition to node provider, data center & country). The current limits for node provider, data center, and data center provider are all set to 1. So br1 and mm1 would not appear in the same subnet if they belong to the same data center provider. I hope this clarifies things!
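As a sketch of how those limits interact (field names and data are illustrative, not the actual registry schema):

```python
# Illustrative check of the per-subnet limits described above: node
# provider, data center, and data center provider may each appear at
# most once per subnet. Data and field names are made up.

from collections import Counter

LIMITS = {"node_provider": 1, "data_center": 1, "data_center_provider": 1}

def violates_limits(nodes):
    """Return the characteristics whose per-subnet limit is exceeded."""
    violations = []
    for attr, limit in LIMITS.items():
        counts = Counter(node[attr] for node in nodes)
        if counts and max(counts.values()) > limit:
            violations.append(attr)
    return violations

# br1 and mm1 are distinct data centers run by the same DC provider:
subnet = [
    {"node_provider": "NP-1", "data_center": "br1", "data_center_provider": "X"},
    {"node_provider": "NP-2", "data_center": "mm1", "data_center_provider": "X"},
]
print(violates_limits(subnet))  # ['data_center_provider']
```

So even though br1 and mm1 count as different data centers, the shared data center provider keeps them out of the same subnet.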

2 Likes

Thank you for your feedback @162DC

With respect to grandfathering all existing agreements for node providers, here is my view:

  1. It is important to agree on the desired state of node provider standards so that everyone knows what we are aiming for.
  2. Certain changes may take more time to implement, so sensible transition periods should be considered (as mentioned in the original proposal). Each standard/aspect may need its own timeline and approach.
  3. For the most urgent topics—such as identifying clusters and using that information in subnet allocation—we should start implementing both tactical and strategic solutions now.

So in summary: Maintaining the complete status quo purely for stability’s sake does not seem advisable. We should address urgent matters without delay and, for less critical areas, put in place a measured transition plan.

2 Likes