In defense of Dom and the Dfinity team - There is a future for the Internet Computer

This is completely spot on. They have been delivering - and delays happen when you’re trying to get a project on its feet. (As I said, years, not months).

I’d still love to disagree with you on the ticker. I think it can be well known - and it’s not as important an issue as many people think. But from personal experience, I’ve gotten much better responses from others by talking about what the network can do and the projects built on it.

That was more of a joke - like everyone else, I was frustrated about the SNS-1 release. I really wanted to learn and take part in the governance mechanism. If you do want to send me a token, shoot me a DM - I’m not planning on selling.

DFINITY is trying to build the tech from the bottom up, so we’re still focused on the tech and getting things right. People don’t realize how this tech can annihilate oracles, and how the ETH network will become less relevant once it is fully integrated with the IC. Imagine doing ETH projects with no gas fees! I commend them for standing up to all the flak they’ve taken from all sides - from price action, to attacks on Dom, to delays and bugs. People don’t really realize that this isn’t just a crypto project. It’s bigger than that. And congrats to them for staying on course.

7 Likes

Since it hasn’t been mentioned yet, Blocks is an online low-code editor which can be used to develop Motoko canisters with a drag-and-drop interface (similar to Unreal Engine Blueprint).

This might be handy for learning Motoko, since you can deploy to Motoko Playground and view the corresponding source code for your project.
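
For anyone curious what the source code behind such a project looks like, here is a minimal, hypothetical Motoko canister (a hand-written counter sketch, not actual Blocks output) of the kind you could paste into Motoko Playground and deploy:

```motoko
// A hypothetical hand-written counter canister (not actual Blocks output).
actor Counter {
  // `stable` keeps the value across canister upgrades.
  stable var count : Nat = 0;

  // Update call: modifies state and goes through consensus.
  public func increment() : async Nat {
    count += 1;
    count
  };

  // Query call: read-only and answered by a single replica, so it returns quickly.
  public query func get() : async Nat {
    count
  };
};
```

Even a small example like this shows the split between update calls (which mutate state) and query calls (which are read-only), which is useful to understand before reaching for a visual editor.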

+1. This is a great no-code option for developing a front-end webpage on the Internet Computer.

9 Likes

ICME is the fastest way to get an on-chain blog or NFT store up. You can also migrate your project later on if you decide you want to code yourself.

#3 I think everything useful needs to be made into modules that can be push-to-deploy like this.

3 Likes

In terms of foundational tech, I’m a big believer in the direction DFINITY took when building the IC (permissioned subnets governed by a DAO), but the only time I feel doubt is when I look at zero-knowledge proofs and all the rapid progress going on there.

I wonder if (efficient) proof-based verifiable computation will one day replace the “replicated state machine” approach that the IC uses… In other words, will we still be doing consensus 20 years from now by making a bunch of nodes run the exact same computation? @dominicwilliams - I’d appreciate your take on this.

7 Likes

From what I have seen, constructing a zero-knowledge proof for the trace in a general-purpose computation model like that of Wasm is many millions of times more expensive than performing the actual computation. And it’s not clear how that can be improved in fundamental ways that would shave off 6 orders of magnitude. So I wouldn’t bet on this becoming a viable alternative to consensus, except for extremely limited use cases.
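
To put six orders of magnitude into concrete terms (a back-of-the-envelope illustration using the figure above, not a benchmark):

$$
t_{\text{prove}} \approx 10^{6}\cdot t_{\text{exec}},\qquad
t_{\text{exec}} = 1\ \text{s}\ \Rightarrow\ t_{\text{prove}} \approx 10^{6}\ \text{s} \approx 11.6\ \text{days}.
$$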

7 Likes

I think for the next decade zk-based solutions will be limited to specific use cases. For general computation, the main innovations will be around consensus - e.g., Avax supposedly has lower time complexity to achieve finality than classical Nakamoto consensus - and around execution: [2212.04895] CryptoConcurrency: (Almost) Consensusless Asset Transfer with Shared Accounts

Then there are some projects like Nillion, which claim to have invented a new approach to MPC without the need for inter-node communication, but they are at an early stage and their tech isn’t peer-reviewed, so it’s too soon to get hyped.

3 Likes

Avalanche consensus has significant tradeoffs and isn’t inherently better than other algorithms.

So do you only think DFINITY’s design for ICP is good? When did you see that ICP has any consensus?

I am not sure I follow, can you please explain?

I did some research, and I think you may be right about the overhead for provers… generating an encoded transcript or proof (not necessarily zero-knowledge) of a computation seems to be roughly 10,000x more expensive than natively computing it (as of today).

But there may be situations where this overhead is acceptable. For example, if the same computation is being run on different inputs, e.g. the same neural network being run on different input examples at inference time. Then, the initial cost of generating the proof can be amortized across lots of runs, to the point where this setup may make more sense than replication-based consensus.

1 Like

But aren’t other developers allowed to send pull requests to the IC repos? I saw these are all public repos, so anyone can join the effort, like with the Bitcoin and Ethereum code that’s maintained by many different people. It seems just too early to judge, and trust me, the size of their team is not too large for a project of such massive complexity with so many components. As a FAANG engineer myself, I’d rate their productivity as very high; I’ve worked in departments and teams where 70 to 150 people work on some VPN or app-store-related features, and those are not nearly as complex as the IC. This team produces some sick results if you ask me :man_shrugging:, based on my glance at the repos and the IC ecosystem… just IMHO, really.

But this project is in its first baby steps, as it seems, and it’s very early to judge. They also make obvious mistakes with branding/partnerships/community management (or the lack of it :smile:), but it’s hard to say whether that’s bad engineering or just lack of time, rushing to release, lack of security/defense features, skipping thorough testing, and taking shortcuts in favor of faster releases, etc.

3 Likes

Interesting take on how you see the project, thanks for sharing. Regarding usability, I slightly disagree, because they are an infra product and not user-facing; it’s the dApp developers who should take care of the UX and accessibility aspects. It’s a bit like how the Linux kernel is complicated but the Ubuntu desktop UI is nice: the kernel and drivers are not “user friendly” because they’re not aimed at end users but at other engineers. The IC should care as much as possible about developers’ ease of use of the libraries and tools they release, that’s for sure, and keep the documentation up to date, but the rest is up to those who decide to build on top of the infra.

There are not many ‘neat’ dapps or wallets for the IC, but DFINITY is not the entity responsible for that. They definitely should incentivize the ecosystem with a good support/feedback loop and participate with grants where possible, if the budget allows, but they also need partners who will build quality products using the IC. It’s something they can do, but it might also be too much to ask at this stage; I don’t know how well funded and staffed all their teams are, and maybe the marketing/branding/community teams underperform, what can we do :sweat_smile:. Still, the project is head and shoulders above everything else being built in crypto, as it seems. The baseline is good; they just need to push it through with extra help from professionals. I hope good partners/projects will join efforts with DFINITY to bring forward the good stuff that can be built.

Since last week, new information has surfaced.

  • People are getting sued by DFINITY for governance proposals.
  • DFINITY killed a proposal at the last minute, stating that it doesn’t lead to code changes, so they voted against it, even though DFINITY has voted on multiple motion proposals submitted by ICPMN and DFINITY insiders that didn’t lead to any code changes.
  • DFINITY is actively censoring the ecosystem and its participants with its influence and web2 employees.

Probably one of the worst blockchains to contribute to or build on right now if you’re planning to build for the long term. The platform risk associated with proof-of-stake on-chain governance → NNS is easily palpable to experienced builders.

Also, considering how they robbed the passive investors with the tokenomics proposals by ICPMN, it is clear that DFINITY and its employees are the biggest hurdle for developers in the ecosystem and for decentralization.

A blockchain without decentralization is a joke, especially if it claims to be the future of the open internet. A fork is inevitable on ICP, and these people will do everything in their power to increase the price of the token and control the ecosystem regardless of the long-term outcomes.

99% of ICP right now is trustmebro.exe, and it is being abused like web2.

Aren’t zk proofs needed exactly because of the limitations of other blockchains (their slow/ineffective/expensive approach to computation/consensus)? If ETH worked as fast as dApps need it to be useful/usable, there wouldn’t be a need for zk rollups. It’s a workaround for a limitation, nothing more, as I see it. If there’s no limitation present, there’s no need for the workarounds.

1 Like

What is ICPMN, and why “robbed”? You might be exaggerating; as long as a smart investor does not sell anything at a loss and is willing to hodl, I don’t see how he can be robbed. And selling anything related to the IC at such an early stage is just weird behavior.

The platform risk associated with proof-of-stake on-chain governance → NNS is easily palpable to experienced builders.

Aren’t all proof-of-stake chains the same? Isn’t governance only used to guide the prioritization of development efforts and staking/unstaking/rewards stuff?

2 Likes

Could you post links? Sued for what? How? The people who can make proposals are just anonymous ICP wallets, aren’t they? Why would they get sued? It doesn’t make sense, but please share what you mean.

2 Likes

That seems too low for Wasm. The factor very much depends on the generality and level of abstraction of the language/machine. You’ll have to do a lot of busy work for Wasm’s linear memory, for example. And ZK proofs tend to be significantly more expensive as well.

The proof is about a specific trace of a computation on a specific input. You cannot reuse the proof, unless you are repeatedly running the same program on the same input.

1 Like

The proof is about a specific trace of a computation on a specific input. You cannot reuse the proof, unless you are repeatedly running the same program on the same input.

Hmm, you may be right that the proof is not technically reused. But there is a technique called batching (link, link), which is used precisely in situations where the same computation needs to be performed over different inputs. Its primary advantage seems to be amortizing setup costs for the verifier, although there appears to be some performance improvement for the prover as well.
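
As a rough way to see the amortization argument (illustrative symbols only, not tied to any specific proving system): write $s$ for the one-time setup cost and $p$ for the per-instance cost; batching $n$ instances then turns

$$
\underbrace{n\,(s + p)}_{n\ \text{separate proofs}}
\quad\text{into}\quad
\underbrace{s + n\,p}_{\text{one shared setup}},
\qquad
\frac{s + n\,p}{n}\ \xrightarrow{\ n\to\infty\ }\ p ,
$$

so the shared setup cost per instance vanishes as the batch grows, which matches the observation that the main win is on the setup/verifier side.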

Lots of misconceptions here about ZKP and zkWasm. The mindset and data people find online are usually based on older proving systems…

Currently there are very fast schemes with linear-time provers. Folding schemes like SuperNova and Protostar are among these. They also no longer require a trusted setup.

If you are going to spend 5 minutes running a program, it would ideally take you 5 minutes or less to run the execution trace through a ZKP proving system and generate a proof (not 10,000x as long). Verifier time, of course, is much, much less (seconds to verify). For ML use cases where the resulting model or weights are high value, ZKP-based verifiable computation is ideal; this is the emerging field of zkML.
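
For a rough sense of what a “linear-time prover” buys (a simplified characterization, not a claim about any specific implementation): older FFT-based SNARK provers scale roughly as $T \log T$ in the trace length $T$, with large constants, while folding-style schemes aim for prover work linear in $T$ with a small constant, which is what makes the “5 minutes of execution, about 5 minutes of proving” goal plausible:

$$
t_{\text{prove}}^{\text{older}} \sim c_{1}\, T\log T \ \ (c_{1}\ \text{large}),
\qquad
t_{\text{prove}}^{\text{folding}} \sim c_{2}\, T \ \ (c_{2}\ \text{small, ideally}\ \lesssim 1\ \text{relative to native execution}).
$$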

2 Likes