I think this approach to calculating the price per GB is a bit flawed, as it doesn't take into account a crucial detail: node providers are paid a fixed monthly rate for their service. The rewards vary by country and, in the future, by hardware specs too. For simplicity's sake, let's assume all nodes are paid the lowest rate, 873 XDR/month (about $1,111 at the current exchange rate), and that subnets have 1 TB of state, as that is the long-term plan AFAIK.
If the IC keeps charging $5/GB/year, deflation might never be possible. With this pricing it would only cost $5,000/year to occupy an entire subnet's state and block new canisters from using it, compared to the roughly $14k/month (about $173k/year) minted as rewards for the NPs in that subnet. Computation MIGHT make up for the difference, but not necessarily.
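A back-of-the-envelope sketch of that comparison (the node count, reward tier, exchange rate, and 1 TB state size are the assumptions stated above, not official figures):

```python
# Rough per-subnet math under the assumptions above:
# 13 nodes, lowest reward tier (~$1,111/node/month), 1 TB state, $5/GB/year storage.
NODE_REWARD_USD_PER_MONTH = 1_111      # 873 XDR at the assumed exchange rate
REPLICATION_FACTOR = 13                # nodes per subnet
SUBNET_STATE_GB = 1_000                # 1 TB long-term target
STORAGE_PRICE_USD_PER_GB_YEAR = 5

storage_revenue_per_year = SUBNET_STATE_GB * STORAGE_PRICE_USD_PER_GB_YEAR
np_rewards_per_year = NODE_REWARD_USD_PER_MONTH * 12 * REPLICATION_FACTOR

print(f"Storage revenue per subnet: ${storage_revenue_per_year:,}/year")  # $5,000/year
print(f"NP rewards per subnet:      ${np_rewards_per_year:,}/year")       # $173,316/year
```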
Bad actors could use this as an attack vector to degrade the network's performance and profitability and to hinder existing dApps, e.g. dApps that need to create new canisters.
Currently we have ~35 subnets, so an attacker could spend $52,500/year ($5 × 300 GB × number of subnets) and waste almost all of the IC's computational capacity; even with 1 TB state subnets it would only cost $175k/year, not much for such a disruptive attack.
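For illustration, the same attack-cost numbers as a quick script (the subnet count and state sizes are the assumptions above):

```python
# Hypothetical attack-cost estimate: fill the state of every subnet at $5/GB/year,
# with either today's ~300 GB limit or a future 1 TB limit per subnet.
NUM_SUBNETS = 35
STORAGE_PRICE_USD_PER_GB_YEAR = 5

for state_gb in (300, 1_000):
    cost = STORAGE_PRICE_USD_PER_GB_YEAR * state_gb * NUM_SUBNETS
    print(f"Filling {state_gb} GB on all {NUM_SUBNETS} subnets: ${cost:,}/year")
# -> $52,500/year at 300 GB, $175,000/year at 1 TB
```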
If we want to mitigate this, storage costs should be based on the worst-case scenario: a subnet whose entire state is used but on which computation is never executed. This means the formula for the $/GB/year cost on a given subnet should be: (single node's yearly rewards × replication factor) / GB of state. Even in the most optimistic scenario, where all nodes are paid the lowest rate and future state increases are accounted for, we get ($13,332 × 13) / 1,000 = ~$173/GB/year.
This is obviously too high for devs, and it assumes 13-node subnets remain the norm! The cost would only increase as subnets grow, and even with single-node subnets a GB would need to cost about $13/year to prevent this scenario.
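Here is that break-even formula as a small sketch (again assuming the lowest reward tier and 1 TB of state per subnet):

```python
# Proposed break-even storage price:
# (single node's yearly rewards * replication factor) / GB of state.
NODE_REWARD_USD_PER_YEAR = 1_111 * 12   # lowest tier, assumed
SUBNET_STATE_GB = 1_000                 # 1 TB long-term target

def break_even_price_per_gb_year(replication_factor: int) -> float:
    """Minimum $/GB/year so a fully occupied but idle subnet still covers its NP rewards."""
    return NODE_REWARD_USD_PER_YEAR * replication_factor / SUBNET_STATE_GB

print(break_even_price_per_gb_year(13))  # ~$173/GB/year for a 13-node subnet
print(break_even_price_per_gb_year(1))   # ~$13/GB/year even for a single-node subnet
```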
I'm not advocating for price increases by any means; I would like to see them lowered. But if network profitability and deflation are the long-term goal, something has to be done. Looking away and hoping computation might make up for this flaw could eventually hurt the IC badly.
I don't have any concrete proposals for this issue, but IMO DFINITY should stop treating all data the same way. Static data and data that has to be processed could be handled with different mechanisms, e.g. files could be stored with erasure coding instead of full replication.