I did some additional thinking and found that if we assume risk scales with the square root of time, rather than linearly with time, the calculation looks quite different.
Using square root scaling, the risk increase from 6 months to 8 years is calculated as follows:
sqrt(0.5) ≈ 0.71
sqrt(8) ≈ 2.83
2.83 / 0.71 ≈ 4
So under square root risk scaling, the 8 year neuron carries only 4 times the risk of the 6 month neuron, rather than 16 times under linear scaling. The reward for the 8 year neuron remains about 1.9x higher.
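Here’s a minimal sketch of that arithmetic (Python) in case anyone wants to plug in other lock-up periods; the only inputs are the 0.5 and 8 year dissolve delays:

```python
from math import sqrt

SHORT_YEARS = 0.5   # 6-month dissolve delay
LONG_YEARS = 8.0    # 8-year dissolve delay

linear_risk_ratio = LONG_YEARS / SHORT_YEARS            # risk proportional to time: 16x
sqrt_risk_ratio = sqrt(LONG_YEARS) / sqrt(SHORT_YEARS)  # risk proportional to sqrt(time): 4x

print(f"linear risk scaling:      {linear_risk_ratio:.0f}x")  # 16x
print(f"square-root risk scaling: {sqrt_risk_ratio:.0f}x")    # 4x
```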
While that looks a lot better, it’s still a significant gap. But there are other factors we can include that shrink the disparity further, for example the age bonus and daily compounding.
With the max age bonus after 4 years and rewards compounded daily, a 6 month neuron gives 11.64% APY (not APR, since we’re compounding) and an 8 year neuron gives 23.59%. The square root risk increase remains at 4, while the reward ratio rises to ≈ 2.03x, up from 1.9x.
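Here’s a sketch of the compounding step, using only the APYs quoted above. The simple rates it backs out (roughly 11.0% and 21.2%) are derived numbers for illustration, not official NNS parameters:

```python
APY_6_MONTH = 0.1164   # quoted above for a 6 month neuron with max age bonus
APY_8_YEAR = 0.2359    # quoted above for an 8 year neuron with max age bonus
PERIODS = 365          # daily compounding

def apr_from_apy(apy: float, periods: int = PERIODS) -> float:
    """Back out the simple annual rate that compounds to `apy` over `periods` steps."""
    return periods * ((1 + apy) ** (1 / periods) - 1)

apr_short = apr_from_apy(APY_6_MONTH)   # ≈ 11.0% simple rate
apr_long = apr_from_apy(APY_8_YEAR)     # ≈ 21.2% simple rate

print(f"simple-rate (APR) ratio: {apr_long / apr_short:.2f}x")      # ≈ 1.92x
print(f"compounded (APY) ratio:  {APY_8_YEAR / APY_6_MONTH:.2f}x")  # ≈ 2.03x
```

The daily compounding is what nudges the reward ratio from roughly 1.9x up to about 2.03x.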
There are other things to consider that are difficult to put an exact number on, like the value of voting power, i.e. the ability to control changes to the network. Also keep in mind that if investors do expect the price to increase over time (why would they invest otherwise?), the compounded returns are much greater with a higher APY, which could close the gap even further (rough sketch below). Can anyone think of other factors we can include in this analysis?
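To put a rough number on that compounding point: if both APYs held constant for the full 8 years and every reward were restaked (a big simplification of how neuron rewards actually accrue), the cumulative gap is wider than the single-year ratio:

```python
APY_6_MONTH = 0.1164
APY_8_YEAR = 0.2359
YEARS = 8

# Token growth assuming each rate holds for 8 years and all rewards are restaked.
growth_short = (1 + APY_6_MONTH) ** YEARS   # ≈ 2.41x the original stake
growth_long = (1 + APY_8_YEAR) ** YEARS     # ≈ 5.44x the original stake

print(f"6 month neuron after {YEARS} years: {growth_short:.2f}x stake")
print(f"8 year neuron after {YEARS} years:  {growth_long:.2f}x stake")
print(f"cumulative ratio: {growth_long / growth_short:.2f}x")   # ≈ 2.26x, vs 2.03x per year
```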