Any “investor”, no matter the size of their investment, is probably not “dumping” their ICP. A more likely scenario is that their ICP is staked, and therefore locked and unable to be sold; 54.5% of the total supply is staked. A proper investor does not buy and sell on speculation: those are traders. Investors have their ICP staked for eight years, have read Dom's 20-year roadmap in its entirety, and surmise that they will never have to sell their investment, because of the proceeds their original investment will produce in the future by spawning new neurons, if that is what they choose to do.
Somewhat unrelated, but is there a beginner’s guide to threshold ECDSA that you recommend? Like a blog post, a textbook, or even a paper? I’m curious how it works under the hood, and why it’s such a challenging endeavor.
I think this is the paper https://eprint.iacr.org/2022/506.pdf
The webpage Threshold ECDSA | Internet Computer Home, together with its links, is a good introduction to tECDSA.
There is another paper as well; however, it is far from beginner level.
@Benjamin_Loison has pointed out further, more beginner-friendly resources.
Hope that helps!
I’m becoming increasingly concerned with the security of the Bitcoin Integration because of the node operator collusion attack vector of the tECDSA subnets, please see here: Threshold ECDSA Signatures - #202 by lastmjs
I’m not entirely decided, but personally I don’t think I’d trust much of my BTC on the IC at this point, and having millions of dollars of people’s funds on the IC would make me feel very uneasy.
Can we hope that the narrow jurisdictions in which node operators currently operate expose them to competent judicial systems, making collusion of the kind you mention untenable? Because the collusion will be verifiably exposed, right?
We need to solve the node ownership problem. If it takes the NNS strategically placing specific nodes in the Bitcoin subnet to ensure as many unique providers as possible are part of it, then so be it. If there are 34 nodes, the goal should be 34 different node owners (probably not possible).
In my opinion, for the Internet Computer to reach its goal, the decentralization of node providers has to be implemented differently, and mean something different, than on other networks. To me, decentralization of node providership is about no single entity owning the majority of nodes or having the ability to censor applications or transactions. It is not about whether providers are KYCed or have to comply with local regulations. For enterprise use cases, the Internet Computer has to provide the same level of security and privacy, and that starts with providers being known and under contract. We need to stop pretending that we are ETH but scalable.
Hasn’t Chainlink already solved this? Their oracle providers are KYCed and everyone in the industry trusts their off-chain data feeds for their dapps. How do they do it?
You (we) can hope. “But if wishes were horses, beggars would ride.”
Hope is most disastrous in these situations. Everything that can go wrong, will go wrong.
What law prohibits the existence and execution of complex mathematical functions? Wouldn’t an advanced adversary take the position that this ability to collude is the raison d’être for him or her to exist as a node provider? What would a contract for a provider look like? That the node provider must never merge with another node provider?
Consider this for a change: if the ETH Merge goes wrong (or a parallel POW fork takes over), who would you sue if you were an active staker?
I like @lastmjs 's term: progressively decentralizing. Currently you can sign through test tECDSA. Of course you want to move in and out relatively quickly. Similarly, the amount of risk you take will be a factor of (a) the potential loss and (b) the amount of time you are in the integration net.
Therefore the dapps that will thrive in this ecosystem are quick get-in, quick get-out.
As we begin to harden, the amount of potential loss and amount of time can increase. In the meantime, we should have clear warnings about what the limitations of the current platform are.
Is LayerZero a real competitive threat to ICP’s native integrations? How should we think of one vs the other?
I don’t think so. LayerZero relies on oracles/Chainlink to function, and it can’t do Bitcoin transactions.
For people building canisters that create Bitcoin transactions, here is an alternative library you can use:
We have made some simple patches to bitcoins-rs and have been using this library to build the BTC functionality in Spinner.
The main advantage over bitcoin-rs is that bitcoins-rs does not depend on secp256k1, a C library that is challenging to compile and link in Wasm.
Just to throw it out there in case somebody may find it useful!
Just to let everyone know, there is a small but noteworthy change to the Bitcoin interface spec:
bitcoin_get_current_fee_percentiles returns fee percentiles expressed in millisatoshi/vbyte (and not millisatoshi/byte as it said before).
For readers who are not so familiar with Bitcoin, relative fees in Bitcoin transactions are properly measured in sat/WU (satoshi per weight unit) or, equivalently, sat/vbyte (satoshi per virtual byte) ever since the segregated witness protocol upgrade in 2017.
This small change in the interface spec makes it explicit that the Bitcoin integration also adheres to this standard by expressing fees in the correct unit.
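To make the unit concrete, here is a small sketch (not the actual canister API; the function names are invented for illustration) showing how a rate in millisatoshi/vbyte translates into an absolute fee, using the BIP 141 definition of virtual size, vsize = ceil(weight / 4):

```rust
/// Virtual size in vbytes, rounded up, per BIP 141: vsize = ceil(weight / 4).
fn vsize_from_weight(weight: u64) -> u64 {
    (weight + 3) / 4
}

/// Absolute fee in satoshi for a fee rate given in millisatoshi/vbyte,
/// as returned by bitcoin_get_current_fee_percentiles.
fn fee_sats(millisat_per_vbyte: u64, weight: u64) -> u64 {
    millisat_per_vbyte * vsize_from_weight(weight) / 1000
}

fn main() {
    // A typical 1-input, 2-output P2WPKH transaction weighs about 561 WU,
    // i.e. 141 vbytes.
    assert_eq!(vsize_from_weight(561), 141);
    // At a percentile rate of 2000 millisatoshi/vbyte (2 sat/vbyte),
    // such a transaction pays 282 sats in fees.
    assert_eq!(fee_sats(2000, 561), 282);
}
```

The millisatoshi granularity exists because realistic fee rates are often fractional in sat/vbyte, so integer millisatoshi avoids floating point in the interface.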
If there are any questions about the interface or this change, please let me know!
- We believe my team could do this integration faster next year.
If you can make it faster than Dfinity’s team, be our guest…
Hi, is the Dfinity team making progress on the Bitcoin integration difficulties?
As many of you know, a good amount of bitcoin is lost because it is sent to an address that doesn’t exist or an address that doesn’t accept bitcoin. Is it possible to write a smart contract that identifies a bitcoin transaction going to the wrong address and creates an address to receive it, instead of it being burned? I thought of this when I read this snippet of the Bitcoin integration page: “Internet Computer smart contracts to create bitcoin addresses and directly send and receive bitcoin”
We have invested some time on an aspect of the implementation that will help accelerate development in the future: we evaluated and made progress on moving the Bitcoin canister implementation from the replica to a Wasm canister. The move looks promising and helps us get rid of technical debt.
After some time of silence in terms of progress updates from our side, let me give you details on what has been happening in recent weeks.
We have been evaluating and progressing a change to the implementation architecture of the BTC functionality: implementing the BTC canister entirely in canister space as a Wasm canister, instead of in the replica as in the architecture used until now. So far, the approach looks extremely promising, and it is highly likely that we will switch the implementation to this architecture, unless something unexpected hits us.
There have been a couple of reasons for proceeding with the evaluation of this architectural change:
In the replica implementation, we could not make use of the features a Wasm canister has to offer. The most relevant ones include canister scheduling, deterministic time slicing, and the upcoming HTTP outcalls. This list will likely grow as we add additional features to the Internet Computer.
The replica implementation is tightly coupled with the replica and thus slows us down: changes to the Bitcoin functionality require touching more components, which means code reviews from other teams, longer release cycles, etc. Every change requires more overhead from the Bitcoin team as well as the other teams involved.
The replica implementation requires additional functionality to be implemented, such as specific handling of the new stable memory regions the Bitcoin feature requires, which need to be part of replicated state. The Bitcoin Mainnet implementation would create (some) additional effort in this domain even after the Bitcoin Testnet implementation has been finished, as it would require further memory regions to store the Bitcoin Mainnet state on the IC.
As the replica implementation cannot trap, it results in higher code complexity than the canister implementation: all kinds of errors must be handled rigorously, whereas in canister space they can be handled by a trap. Also, a bug in the replica risks crashing a subnet, which is not the case in the canister implementation.
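To illustrate the difference, here is a hypothetical sketch (function and variable names invented; this is not the actual canister code): the same lookup written replica-style, where every error must be surfaced explicitly to the caller, versus canister-style, where an impossible state may simply trap and the message's state changes are rolled back.

```rust
use std::collections::HashMap;

// Replica-style: the error path must be modeled explicitly, and every
// caller up the stack is forced to propagate or handle it.
fn utxo_value_checked(utxos: &HashMap<String, u64>, outpoint: &str) -> Result<u64, String> {
    utxos
        .get(outpoint)
        .copied()
        .ok_or_else(|| format!("unknown outpoint: {outpoint}"))
}

// Canister-style: an invariant violation panics (traps); the happy path
// stays simple, and on a trap the canister's state change is discarded
// rather than crashing the subnet.
fn utxo_value_or_trap(utxos: &HashMap<String, u64>, outpoint: &str) -> u64 {
    *utxos
        .get(outpoint)
        .unwrap_or_else(|| panic!("unknown outpoint: {outpoint}"))
}

fn main() {
    let mut utxos = HashMap::new();
    utxos.insert("txid:0".to_string(), 50_000u64);

    assert_eq!(utxo_value_checked(&utxos, "txid:0"), Ok(50_000));
    assert!(utxo_value_checked(&utxos, "txid:1").is_err());
    assert_eq!(utxo_value_or_trap(&utxos, "txid:0"), 50_000);
}
```

Multiplied over every fallible operation in the Bitcoin state machine, dropping the explicit error plumbing is a substantial reduction in code complexity.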
The canonical question you may ask now is: why did we not do the implementation as a Wasm canister right from the beginning? This is a very good question. A main reason is that we did not have enough stable memory per canister to hold the Bitcoin Mainnet UTXO set, so a pure Wasm canister implementation was simply not a viable option at the time. In addition, we did not yet have features such as deterministic time slicing, and it was not clear when that feature would land in the IC canister execution environment. Overall, the assessment at the time clearly pointed to a replica-based architecture, so that is what we implemented for the Beta release.
This is not as big a change as it sounds: it is essentially switching from a Rust implementation compiled into the replica and running as machine code to a Rust implementation running in our Wasm virtual machine and benefitting from the various canister features. As both architectures are based on a Rust implementation, the vast majority of the source code, in particular all the crucial algorithms, remains unchanged; only the “plumbing” to move the implementation into a canister needed to be done, and that has progressed far.
We have spent some engineering time assessing and progressing the feasibility of the Wasm canister implementation, and, as mentioned earlier, it looks very promising. Unless unexpected problems appear, we will switch to this architecture. Following the above reasoning, this will considerably accelerate future development of the feature towards the mainnet release, and particularly its longer-term maintenance.
This architectural change is an investment now that will soon be amortized, as it will help accelerate work on this feature from here on, for the reasons mentioned above.
Many thanks to @ielashi, the main engineer currently working on this feature, for his ongoing efforts in facilitating this transition! This step has only been possible thanks to the sustained effort Islam has put into engineering this feature!
This is great news!!! I get the explanation now, but at the time I was befuddled as to why we’d want to have that complexity in the replica.