Why I sold my ICP for a massive loss

First, I do think ICP has the potential to be a top-5 coin by market cap. However, being financially exposed to this project has pushed me to understand it better, and in doing so I've come to see why it's flawed and why I should put my money elsewhere.

I’ll begin with what’s good about the Internet Computer, in my opinion:

  1. Asynchronous smart contract communication
    This is what distinguishes ICP from other networks: it’s a completely unique take on what smart contracts can be, and it opens up possibilities that aren’t available on networks hosting a single monolithic state machine.

  2. (High) node hardware requirements
    Another unique(ish) idea that I like: the internet runs on servers, not desktop computers, and having high node requirements opens up more possibilities for applications.

In my opinion these are the two key selling points for the IC over other networks. However, sadly, they are not implemented in a way that is congruent with the crypto ethos. I’ll explain why.

The network is neither decentralized nor scalable

What do I mean when I propose the network is not decentralized? I’m not talking about the number of nodes or node providers; I’m referring to the central point of control that is the NNS and the role it plays. Democracy is not decentralization. The NNS has control over literally every aspect of the network, and without it, the network cannot function. This is in contrast to governance in other crypto projects, where users vote to make changes to very specifically designed aspects of a project that simply cannot be automated otherwise.

The NNS has control over the entire topology of the network. Subnets are not created or load balanced at the protocol level, with new ones spawning when all others are at max capacity; it is simply DFINITY picking from a pool of nodes to spawn a new subnet. Again, there are plans for DFINITY to step back and let the enthusiasts(?) manage these things, but again, democracy is not decentralization.

This leads into why I stated that asynchronous smart contracts are not well implemented. This thread goes into more detail as to why the subnet model hinders this core aspect of the IC’s design. TL;DR? Subnets only make sense for synchronous state machines; putting asynchronously communicating canisters together in a centrally controlled and designed subnet means that developers are limited in the kinds of network topology that will be running their app. This is in contrast to Avalanche’s subnet model, where the dapp developer has control over the subnet’s creation, its rules for gas, and whether node joining is permissioned or permissionless.

The second point is that the network is not scalable. You might think this is untrue based on DFINITY’s marketing, but think about what scalability is: the ability to increase and decrease throughput as needed. With node rewards on the IC, nodes are minted a fixed dollar value of ICP regardless of the number of transactions they process. This means the network requires constant growth, because node rewards debase ICP more and more the bigger the network becomes. The network cannot scale down from a point of massive expansion without breaking into a death spiral, as noted by some people on this forum. It also means that if the network expands and then contracts, the NNS would have to delete various subnets and remove nodes, along with the canisters they host, from the network to prevent a death spiral. This is a fragile system.
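To make the debasement mechanic concrete, here is a back-of-the-envelope sketch with made-up numbers (not actual IC reward figures). The point is simply that when rewards are denominated in dollars but paid by minting ICP, a falling token price means more ICP is issued for the same hardware:

```python
# Illustrative sketch with hypothetical numbers, not actual IC reward figures:
# node rewards are denominated in dollars but paid out by minting ICP,
# so a falling token price means more ICP minted for the same node count.

MONTHLY_REWARD_USD_PER_NODE = 1_500  # hypothetical fixed dollar reward per node
NODE_COUNT = 1_000                   # hypothetical network size

def icp_minted_per_month(icp_price_usd: float) -> float:
    """Total ICP minted for node rewards at a given token price."""
    return NODE_COUNT * MONTHLY_REWARD_USD_PER_NODE / icp_price_usd

for price in (10.0, 5.0, 2.5):
    print(f"ICP at ${price:>5.2f}: {icp_minted_per_month(price):>12,.0f} ICP minted/month")

# Halving the price doubles the ICP issued for the same node count --
# the feedback loop described above as a potential death spiral.
```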

This is in contrast to every other blockchain network, which can expand and contract dynamically depending on demand. In my opinion that is one of the most impressive aspects of blockchain networks: they are anti-fragile (at least the good ones are).

My final point is that DFINITY views the IC as a commercial product and not a sovereign network like the Ethereum Foundation views Ethereum. This is of course my most subjective opinion, but I do feel strongly that this is the case.

19 Likes

Fair enough. I do want to understand one thing about your meaning of decentralization. Won’t the number of node providers increase with time?

2 Likes

Actually, the plan from the engineering side is to (eventually) implement dynamic load balancing. But in order for that to work at all, you need to be able to migrate canisters. Which is why we’re working on subnet splitting and will then move on to subnet merging (because we’ve determined this to be the simplest way of implementing canister migration).

It is not a straightforward proposition, though (imagine moving running processes with open sockets from one VM to another, if you will; while being able to verify that no tampering has occurred in the process). Which is why it will take some time to complete. We do have an almost working implementation for the subnet splitting (including message rerouting), we just need to figure out the orchestration, including the verification part.
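For readers trying to picture the verification problem, here is a toy sketch of the invariant a split has to satisfy: the state handed to the two child subnets, taken together, must be exactly the parent's state, with nothing lost, duplicated, or altered. The data model and hashing below are purely illustrative assumptions; the actual IC uses certified state trees and a far more involved orchestration:

```python
# Toy model of the "nothing lost, nothing tampered with" check for a subnet split.
# State is modelled as a canister-id -> bytes mapping; real verification on the IC
# works over certified state, not plain dict hashing.
import hashlib
import json

def state_digest(state: dict[str, bytes]) -> str:
    """Deterministic digest of a canister-id -> state mapping."""
    blob = json.dumps({k: v.hex() for k, v in sorted(state.items())}).encode()
    return hashlib.sha256(blob).hexdigest()

def verify_split(parent: dict[str, bytes],
                 child_a: dict[str, bytes],
                 child_b: dict[str, bytes]) -> bool:
    no_overlap = child_a.keys().isdisjoint(child_b.keys())
    recombined = {**child_a, **child_b}
    return no_overlap and state_digest(recombined) == state_digest(parent)

parent = {"canister-1": b"\x01", "canister-2": b"\x02", "canister-3": b"\x03"}
child_a = {"canister-1": b"\x01"}
child_b = {"canister-2": b"\x02", "canister-3": b"\x03"}
print(verify_split(parent, child_a, child_b))  # True: the split preserved all state
```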

The plan is that eventually one should be able to decide what sort of properties the subnet they want their canister to run on should have. From something as simple as “I want my canister to run on a fiduciary subnet” (a subnet with higher replication and tamper resistance) to “a subnet consisting of EU replicas only” or “low compute, high storage”. Automation (a la Kubernetes) should be able to then take care of which canister runs where, ensuring all these constraints are respected.
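As a toy illustration of what constraint-based placement could look like, here is a minimal sketch. The subnet attributes, field names, and matching rule are hypothetical and not the actual NNS or registry data model; it only shows the "declare properties, let automation pick the subnet" idea:

```python
# Minimal sketch of constraint-based canister placement, assuming hypothetical
# subnet attributes. Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Subnet:
    id: str
    replication: int       # number of replicas
    jurisdiction: str      # e.g. "EU", "global"
    storage_heavy: bool

@dataclass
class PlacementRequest:
    min_replication: int = 0
    jurisdiction: str | None = None
    storage_heavy: bool | None = None

def matching_subnets(subnets: list[Subnet], req: PlacementRequest) -> list[Subnet]:
    """Return the subnets that satisfy every constraint the developer specified."""
    return [
        s for s in subnets
        if s.replication >= req.min_replication
        and (req.jurisdiction is None or s.jurisdiction == req.jurisdiction)
        and (req.storage_heavy is None or s.storage_heavy == req.storage_heavy)
    ]

subnets = [
    Subnet("fiduciary-1", replication=28, jurisdiction="global", storage_heavy=False),
    Subnet("eu-app-1", replication=13, jurisdiction="EU", storage_heavy=True),
]
print(matching_subnets(subnets, PlacementRequest(min_replication=20)))  # fiduciary-1
print(matching_subnets(subnets, PlacementRequest(jurisdiction="EU")))   # eu-app-1
```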

The other concept I’ve heard discussed (within and without DFINITY) is subnet rental. If you plan on building a large dapp, or a small one with very specific requirements that can’t be met by simply specifying constraints, you should be able to say “put me together a subnet consisting of these nodes and give me sole control over what runs on it”.

None of the above involve any sort of democracy. Nor would it be feasible for the NNS to directly and manually manage anything more than a few dozen subnets and a few hundred nodes.

All of the above, though, requires at the very least solid support for canister migration; a subnet/canister management system similar to Kubernetes; and a much more elaborate cost model than simply “cost per replica”. And it’s not that anyone has anything against any of the above (as far as I know), it’s just that all of it needs to be built.

17 Likes

The problem is, as @Kurtis0108 said, the network isn’t really capable of scaling based on actual usage. It can grow, but downsizing is much more iffy, as it would mean kicking providers out of the network after they’ve invested lots of money and time to get approved, which in turn might disincentivize potential new providers from running nodes on the IC. As more subnet types are created, the network’s topology gets more fragmented and the scaling issue gets worse.

5 Likes

I am not going to get involved in an argument regarding whether the network can scale down (AFAICT most blockchains can’t even scale; and for those that can, what would scaling down mean? reducing the number of validators on a side-chain down to single digits?). Or whether the tokenomics appear to require continuous network growth to avoid inflation. Or whether inflation needs to be avoided at all costs.

But I will point out that from a technical POV, most of the subnet constraints that one would be able to specify would refer to stuff like network topology (e.g. high or low number of replicas; high block size and low block rate; all replicas under some legal jurisdiction; or in a single data center, so as to achieve the lowest latency possible; etc.). It would not be about specific hardware configurations, as in “high CPU, low storage”, or the other way around. And the latter could, for the most part, be achieved by having e.g. a high CPU, low storage replica share the same hardware with a low CPU, high storage replica.

While one may argue about intangible aspects such as whether the tokenomics allow for the network to scale down; or what would providers do in such a situation (e.g. sell their hardware at a discount?); both these aspects and the technical ones can be addressed AFAICT. None of it is set in stone.

There is nothing in the protocol that says node providers must be rewarded according to such and such a model. It’s just the way things stand now. And that’s a combination of what was required to pull in node providers to launch the network; the current limitations of the implementation (e.g. no real load balancing support); and some amount of inertia from all parties involved.

12 Likes

@free
You raised some good points and I read everything you wrote, but they are very granular, so instead of replying to individual points I’ll reply with what would make me reinvest in ICP.

This isn’t necessarily what I want; it’s what I think would beat ICP if someone else built it.

Get rid of the NNS as something that controls the network, by making the protocol much more minimalistic.

The only role for the network should be to connect developers who want to pay for replicated computation (or unreplicated) with nodes who want to run computation for money. If you take a step back and think about this, you realize this kind of network can be built with as little as a messaging system and a global currency; it doesn’t require an intermediary layer of subnets, a protocol determining how nodes ought to be organized, or governance.

The users and nodes should determine the topology, such that the network itself reflects the incentives inside it. What do I mean by this? Smart contracts are essentially replicated compute jobs; developers ought to be able to bid ICP for nodes to replicate their smart contract according to the spec they configure and the amount of ICP they post. This kind of system is a free market, the most decentralized, scalable, anti-fragile system humans know of, and I mean that quite literally.

Put this in contrast to ICP, which more closely resembles central planning reminiscent of communism. This kind of system does not work at large scale and has inefficiency and fragility baked into it.
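To make the bidding model described above more concrete, here is a toy sketch of matching developer bids to node offers. The spec fields, prices, and matching rule are hypothetical, not any existing protocol; the point is that topology emerges from prices rather than a central planner:

```python
# Toy sketch of a market for replicated compute; all fields and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Bid:
    developer: str
    replicas_wanted: int
    icp_per_replica_month: float  # what the developer is willing to pay per replica

@dataclass
class NodeOffer:
    node: str
    ask_icp_per_month: float      # what the node wants to earn

def match(bid: Bid, offers: list[NodeOffer]) -> list[NodeOffer]:
    """Fill the bid with the cheapest nodes whose ask is at or below the bid price."""
    acceptable = sorted(
        (o for o in offers if o.ask_icp_per_month <= bid.icp_per_replica_month),
        key=lambda o: o.ask_icp_per_month,
    )
    return acceptable[: bid.replicas_wanted]

offers = [NodeOffer("node-a", 40.0), NodeOffer("node-b", 55.0), NodeOffer("node-c", 35.0)]
print(match(Bid("dev-1", replicas_wanted=2, icp_per_replica_month=50.0), offers))
# -> node-c and node-a: the cheapest nodes willing to take the job win it.
```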

EDIT/Addendum: A federated system would also work well. Instead of the example I gave, where users bid for compute jobs, a federated system would mean nodes themselves associating with each other and determining the structure of the network, forming federations to which developers can deploy smart contracts if they agree to the federation’s rules/requirements.

4 Likes

How would the nodes coordinate what computation to replicate without a consensus protocol?

1 Like

@JaMarco
I should’ve written that as “as long as you have a system where nodes and users can communicate arbitrary data and transfer tokens, you don’t need to prescribe the compute environment (subnets) the way the current system does”

Communication and token transfer obviously need to go through a consensus protocol to work

1 Like

How does that scale without subnets? This one network will have to process all dev and user activity; what do you envision the throughput would be?

1 Like

Subnets are one way to scale it, but subnets do not need to exist at the protocol level to do so. My point was to demonstrate what is absolutely necessary at the protocol level versus what is prescribed by the network. I believe any system that aims to be maximally extensible will win over one that tries to be maximally useful, because one cannot predict what people will find useful in the future. ICP and DFINITY prescribe far too much at the protocol level, such that I believe it’ll be beaten out by more extensible solutions if another team builds them.

4 Likes

Depending on demand of what?

1 Like

This thread explains what he means: instead of having fixed subnets, the protocol decides which nodes will run the canisters based on dev-specified parameters (location, replication, etc.).

3 Likes

@Zane
And that’s just one other way to do asynchronous smart contracts:

EDIT/Addendum: A federated system would also work well. Instead of the example I gave, where users bid for compute jobs, a federated system would mean nodes themselves associating with each other and determining the structure of the network, forming federations to which developers can deploy smart contracts if they agree to the federation’s rules/requirements.

(From my previous post in this thread)

And none of these systems would even need to be at the protocol level if it were designed from the ground up to be maximally extensible for everyone. You could have the current subnet model, as well as the two models I have proposed, operating on the same network, all created by developers or DFINITY at the application level. Instead they chose to do more work to create an overall less useful system.

1 Like

We aren’t talking about scaling throughput but scaling the network, as in the actual hardware running it. Other chains have a much simpler and more fluid model that is known beforehand, so the miners/validators know what they’re getting into: there is a fixed issuance plus network fees, and if both aren’t enough, node owners can either keep their hardware on at a loss as an investment or back out.

On the IC, providers are promised and expect fixed monthly rewards. That by itself creates many potential issues, such as the continuous growth needed to avoid inflation, as you mentioned, but it also raises the question of what should be done with those nodes when, in extreme cases, network usage spikes and is then drastically reduced, e.g. after a bull market.
Sure, there is nothing that stops the NNS from changing how the current system works to solve such scenarios were they to arise one day, but:

  1. They have barely, if at all, been discussed, which doesn’t inspire confidence.
  2. They’d most likely result in tokenomics changes or kicking providers out, and then we’re back to my initial point of potential new providers being disincentivized from running nodes on the IC.

On their own these constraints might not be a problem, but once you start mixing them together (e.g. low-replication subnets with a low block rate and nodes running in country X), it becomes trickier to make sure the protocol’s computational capacity is constantly being used as cost-efficiently as possible, and things like running a canister on an arbitrary number of nodes won’t be feasible without subnet rental, which is extremely costly. The subnet model forces every possible permutation of those options to be its own subnet, with nodes dedicated only to it, so a tradeoff has to be made between customizability and cost efficiency.
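As a back-of-the-envelope illustration of that fragmentation (the option counts below are made up, not actual IC subnet types), even a handful of property values multiplies into a lot of dedicated subnets:

```python
# Hypothetical option counts: every combination of subnet properties would need
# its own dedicated subnet, each with its own fixed set of nodes.
replication_levels = [13, 28, 40]
block_profiles = ["high-rate", "high-size"]
jurisdictions = ["global", "EU", "US", "CH"]

dedicated_subnets = len(replication_levels) * len(block_profiles) * len(jurisdictions)
print(dedicated_subnets)  # 24 distinct subnets before a single canister is deployed
```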

3 Likes

It is true and scary. The fact that certain fundamental things can be changed simply by voting can potentially be a showstopper for new node providers who are willing to contribute.

4 Likes

What do you consider acceptable criteria for what constitutes decentralization?

1 Like

I am struggling to understand alternatives to the NNS that would allow for a protocol as complex as the IC to be seamlessly upgraded across all hardware providers.

That’s what’s so beautiful about the IC: it brings together hardware providers and weaves them into the protocol. AWS and other clouds work the same way; they unilaterally control the provisioning of their own platform across all of their hardware… well, or at least they give developers the option to spin these up.

I would love to brainstorm how to remove the centralization risks of the NNS while not losing its most important benefits.

13 Likes

No, what’s really scary is the alternative of trusting a dictator to be benevolent.

5 Likes

Decentralized systems are flat; there is no top-down control, even though they technically allow centralized systems inside them. A system that presides over the entire network (the NNS) cannot ever be decentralized, regardless of whether it’s democratic or despotic.

2 Likes

I am struggling to understand alternatives to the NNS that would allow for a protocol as complex as the IC to be seamlessly upgraded across all hardware providers.

I like simplicity, not complexity; complexity just means there’s more stuff that can go wrong, especially in dynamic systems. If the NNS were confined to software upgrades, that would be a much more bulletproof system, but that’s not DFINITY’s goal.

2 Likes