I recently came across an analysis of node provider compensation versus network usage that raised interesting points about potential inefficiencies in the current system. According to the analysis, nodes are operating far below their storage capacities while node provider rewards appear high compared to cycles burned.
For example, it states that subnets currently have a per-subnet state limit of roughly 300 GB, even though individual nodes support up to 30 TB of storage. Additionally, August node provider rewards were approximately 583k ICP, while only ~5k ICP was burned via cycles.
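For my own sanity-checking, here is a quick back-of-envelope calculation of the ratios those figures imply. The numbers are the analysis's approximate values, not authoritative data, and the variable names are mine:

```python
# Approximate figures quoted from the analysis (not authoritative data).
subnet_state_limit_gb = 300   # ~300 GB per-subnet state limit
node_storage_tb = 30          # up to 30 TB of storage per node

rewards_icp = 583_000         # ~August node provider rewards
burned_icp = 5_000            # ~ICP burned via cycles in August

# Fraction of one node's raw storage that a single subnet's state could fill
storage_utilization = subnet_state_limit_gb / (node_storage_tb * 1000)
print(f"Storage utilization: {storage_utilization:.1%}")  # 1.0%

# ICP minted as rewards per ICP burned via cycles
rewards_to_burn = rewards_icp / burned_icp
print(f"Rewards-to-burn ratio: {rewards_to_burn:.0f}x")   # 117x
```

If I have these ratios right, a single subnet's state cap uses about 1% of a node's advertised storage, and rewards exceed burns by two orders of magnitude, which is what prompted my questions below.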
I’m sure there are good reasons for the current subnet storage limits and incentive structures. As the network matures, how are these parameters and balances evaluated and optimized? Are there plans to incrementally raise subnet storage limits as usage increases?
My goal is to better understand the rationale behind these protocol-level design decisions. I think we all want to see the Internet Computer succeed, which means continually improving efficiency and utility alongside decentralization and security. I'd appreciate any clarification on how to interpret the data presented, and on the roadmap for ongoing optimization. Please let me know if I'm missing important context.