I don’t think being “near” maximum storage capacity is, on its own, the real problem. The problem is that once you consider the multitude of inefficiencies, a compounding effect begins to occur.
The issue starts with how nodes are provisioned: they’re currently limited to using 300 GB of their ~30 TB capacity. Since raising the topic, I’ve heard a 700 GB limit is under discussion, though I’m still exploring this further. That 300 GB translates to 1/100th of what these nodes are capable of, although I understand another small fraction of the storage is used for the software that runs the nodes, replication of chain state, etc.
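As a back-of-the-envelope sanity check on the per-node ratio above (the 300 GB, ~30 TB, and 700 GB figures are the ones cited in this post; the variable names are just for illustration):

```python
# Per-node storage utilization, using the figures from the post.
node_capacity_gb = 30_000   # ~30 TB of raw capacity per node
node_limit_gb = 300         # current per-node storage limit
ratio = node_limit_gb / node_capacity_gb
print(f"Per-node utilization: {ratio:.0%}")  # → 1%, i.e. 1/100th

proposed_limit_gb = 700     # limit reportedly under discussion
print(f"With a 700 GB limit: {proposed_limit_gb / node_capacity_gb:.1%}")  # → 2.3%
```

Even the proposed 700 GB limit would leave per-node utilization in the low single digits.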
This 100x inefficiency is then compounded by the fact that we’re only utilizing 32.8% (3.55 TiB of 10.8 TiB) of the self-imposed network state limit, while continuously onboarding new nodes.
I recognize it takes time to get these nodes operational, but there are 660 nodes waiting to be added to subnets at this very moment. That means we’re sitting at ~70% unused capacity while more nodes are waiting to be added to subnets than the entire network currently contains.
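The ~70% figure follows directly from the utilization numbers quoted above; a quick sketch of the arithmetic (using the post’s 3.55 TiB and 10.8 TiB figures):

```python
# Network-state utilization vs. the self-imposed limit, per the post's figures.
used_tib = 3.55    # state currently in use
limit_tib = 10.8   # self-imposed network state limit
utilization = used_tib / limit_tib
print(f"Utilization: {utilization:.1%}")      # → 32.9% (32.8% truncated)
print(f"Headroom:    {1 - utilization:.1%}")  # → 67.1%, i.e. roughly 70% unused
```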
These inefficiencies then compound into the burn-rate-to-node-provider-reward ratio: we’re burning ~5,000 ICP a month via cycles, while minting ~600,000 ICP a month to compensate node providers. That translates to burning only ~0.8% of what is paid out to node providers.
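The ~0.8% figure is just the ratio of the two monthly amounts cited above:

```python
# Burn-to-mint ratio, using the post's approximate monthly figures.
burned_icp_per_month = 5_000    # ~ICP burned via cycles each month
minted_icp_per_month = 600_000  # ~ICP minted for node provider rewards each month
ratio = burned_icp_per_month / minted_icp_per_month
print(f"Burned as a share of minted: {ratio:.2%}")  # → 0.83%
```

In other words, for every ICP minted to reward node providers, less than one hundredth of an ICP is currently burned.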