Context
As a prerequisite to support performance-based node rewards, the system for calculating and recording node provider rewards is being improved to support increased auditability. This will enable node providers to verify rewards without manual assistance from DFINITY, and will allow the larger community to retain confidence in the process that funds the operation of the Internet Computer.
Background
Node Provider Rewards are the ICP tokens minted to compensate Node Providers for running the machines that run canisters on the Internet Computer.
The rewards are calculated from data and settings in the Registry, Governance, and the Cycles Minting Canister, based on the nodes that are in service. For reference, here is the existing explanation of Node Provider Remuneration.
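At a high level, the calculation can be sketched as follows: a rewards table maps each (region, node type) pair to a monthly rate denominated in XDR, the rates for a provider's in-service nodes are summed, and the total is converted to ICP using the conversion rate maintained by the Cycles Minting Canister. This is a simplified illustration under assumed data shapes, not the actual Governance canister implementation; all names and rates below are hypothetical.

```python
# Simplified sketch of the reward derivation described above.
# All structures, names, and rates are hypothetical illustrations;
# the real calculation lives in the NNS Governance canister.

def monthly_rewards_e8s(nodes, rewards_table, xdr_permyriad_per_icp):
    """Compute one provider's monthly rewards in ICP e8s (1 ICP = 10**8 e8s).

    nodes: list of (region, node_type) pairs in service for the provider.
    rewards_table: dict mapping (region, node_type) -> monthly rate
        in ten-thousandths of an XDR (permyriad).
    xdr_permyriad_per_icp: conversion rate from the Cycles Minting Canister.
    """
    total_xdr_permyriad = 0
    for region, node_type in nodes:
        # A missing table entry is exactly the kind of defect that makes
        # rewards wrong; here we fail loudly instead of paying zero silently.
        rate = rewards_table.get((region, node_type))
        if rate is None:
            raise ValueError(f"no rewards table entry for {region}/{node_type}")
        total_xdr_permyriad += rate
    # Convert the XDR total to ICP e8s using the provided conversion rate.
    return total_xdr_permyriad * 10**8 // xdr_permyriad_per_icp
```

Note how the calculation depends on three separate data sources at once, which is what makes auditing it without on-chain records so laborious.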
Previously, the data was not stored beyond the latest rewards minted, and no contextual data was stored.
Recently, governance began storing this contextual information along with the rewards that were minted, and began storing each reward event in stable storage.
Problem
Node Provider rewards are opaque.
A myriad of supporting data is needed to calculate them, and that data was not (until recently) stored on-chain. As a result, node providers who wanted to audit their rewards had to ask DFINITY for assistance, a process that required a lot of manual verification using records collected by DFINITY’s internal teams.
In a decentralized system, every participant should have this information and the tools needed to make sense of it so that no single party needs to be trusted.
The current system cannot readily scale.
There are currently six different NNS proposal types that can affect Node Provider Rewards:
- AddOrRemoveNodeProvider
- UpdateNodeRewardsTable
- AddOrRemoveDataCenters
- UpdateNodeOperatorConfig
- AssignNoid (short for Assign Node Operator ID)
- RemoveNodeOperators
Rewards are also affected by the registry’s do_update_node_operator_config_directly method, which is called directly by node operators.
This complexity makes it difficult to know what should have happened (auditing) and what will happen if a particular proposal is adopted (prediction).
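One way to tame this fan-out is for an auditing tool to track which reward-relevant data each proposal type touches, so that after any proposal it knows which records to re-verify. The mapping below is a hypothetical illustration of that idea, not an authoritative description of each proposal's effects:

```python
# Hypothetical mapping from the proposal types listed above to the
# reward-relevant data they modify. Illustrative only; consult the
# Governance and Registry canisters for the authoritative behavior.
PROPOSAL_EFFECTS = {
    "AddOrRemoveNodeProvider": "node provider records",
    "UpdateNodeRewardsTable": "per-region/node-type reward rates",
    "AddOrRemoveDataCenters": "data center records (including region)",
    "UpdateNodeOperatorConfig": "node operator configuration",
    "AssignNoid": "node operator IDs",
    "RemoveNodeOperators": "node operator records",
    # Not a proposal, but the same class of effect:
    "do_update_node_operator_config_directly": "node operator configuration",
}

def affected_data(action):
    """Name the reward-relevant data an action touches, if any."""
    return PROPOSAL_EFFECTS.get(action, "no direct effect on rewards")
```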
Auditing Difficulties
As of now, it is still difficult to reason about rewards, due to the complexity of the algorithm and its myriad inputs.
In order to check the correctness of rewards, one must have all of the records pertaining to node providers, node operators, data centers, and the rewards table. For example, forgetting to add a region entry to the rewards table would cause incorrect rewards.
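The failure mode just described, a record that is absent or dangling, can be caught mechanically by checking preconditions across the record sets before rewards are computed. The sketch below assumes simplified record shapes (the real records live in the Registry and Governance), and all names are hypothetical:

```python
def audit_preconditions(node_operators, data_centers, rewards_table):
    """Return a list of defects that would lead to missing or wrong rewards.

    node_operators: dict of operator_id -> {"dc_id": ..., "node_type": ...}
    data_centers: dict of dc_id -> {"region": ...}
    rewards_table: set of (region, node_type) pairs that have a defined rate.
    (Hypothetical, simplified shapes for illustration.)
    """
    defects = []
    for op_id, op in node_operators.items():
        dc = data_centers.get(op["dc_id"])
        if dc is None:
            # Dangling reference: the invariant "data center exists" was
            # never enforced when the operator record was created.
            defects.append(f"{op_id}: references unknown data center {op['dc_id']}")
            continue
        if (dc["region"], op["node_type"]) not in rewards_table:
            # The forgotten-region-entry case described above.
            defects.append(
                f"{op_id}: no rewards table entry for "
                f"{dc['region']}/{op['node_type']}"
            )
    return defects
```

An empty result means the records are mutually consistent; each defect pinpoints which record set needs a correcting proposal.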
Prediction Difficulties
Prediction difficulties mirror auditing difficulties, but with regard to the effects of proposals not yet made.
When Registry data is updated through proposals, it is hard to see how those changes will affect future node provider rewards.
This is exacerbated by the fact that certain Registry invariants are not enforced (e.g. that a data center must exist before a node operator can reference it).
A common mistake in proposals is confusing similar 2-character country codes. For example, a proposal might specify SL (Sierra Leone) when the node will actually run in Slovakia (SK). The result of such human errors is under- or over-compensation.
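Mistakes like SL vs. SK are exactly the kind a validation step could catch by cross-checking the submitted code against the country the submitter intended. Below is a hypothetical check against ISO 3166-1 alpha-2 names; only a few codes are listed for illustration:

```python
# Hypothetical validation of a proposal's 2-character country code.
# Only a handful of ISO 3166-1 alpha-2 codes are listed for illustration.
ISO_3166 = {
    "SK": "Slovakia",
    "SL": "Sierra Leone",
    "SI": "Slovenia",
    "CH": "Switzerland",
}

def check_country_code(code, intended_country):
    """Return an error message if `code` does not denote `intended_country`,
    or None if the code checks out."""
    actual = ISO_3166.get(code)
    if actual is None:
        return f"unknown country code {code!r}"
    if actual.lower() != intended_country.lower():
        # The SL-vs-SK class of mistake: a valid code for the wrong country.
        return f"{code} means {actual}, not {intended_country}"
    return None
```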
Together, these issues make it very difficult to be sure that all the preconditions for correct rewards have been met when a new node operator in a new data center is added for an existing node provider.
Solution Outline
Recently, we released the first step in this process, which is storing the rewards, along with their supporting data, in stable storage so that they can be audited.
An API will soon be available to expose past rewards along with that supporting data.
We will also investigate enforcing Registry invariants to reject invalid proposals, and providing node providers with feedback about defects in their proposals and how to correct them.
We will investigate how the rewards formula and implementation can be simplified so that it is easier to understand.
We plan to retain rewards with supporting data on-chain for a minimum of two years for auditing purposes.
Finally, we will evaluate tooling options that make it possible to see how proposals would affect rewards, and to test new reward calculation methods against old data to verify the results.
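Conceptually, the second part of that tooling is a backtest: replay the reward events now kept in stable storage through a candidate formula and report where it disagrees with what was actually minted. A minimal sketch, assuming events are stored as (inputs, recorded rewards) pairs:

```python
def backtest(reward_events, candidate_formula):
    """Replay historical reward events through a candidate formula.

    reward_events: list of (inputs, recorded_rewards) pairs, as retained
        in stable storage (hypothetical shape).
    candidate_formula: callable mapping inputs -> rewards.
    Returns the events where the candidate disagrees with history,
    as (inputs, recorded, recomputed) triples.
    """
    mismatches = []
    for inputs, recorded in reward_events:
        recomputed = candidate_formula(inputs)
        if recomputed != recorded:
            mismatches.append((inputs, recorded, recomputed))
    return mismatches
```

An empty mismatch list means the candidate formula reproduces history exactly; any entries show precisely which inputs would be compensated differently.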