Direct Integration with Bitcoin

Anyone have any insight on how ICs BTC/ETH integrations are different/better/worse than these cross-chain solutions:

ThorChain: https://docs.thorchain.org/
Algorand State Proofs: “Algorand State Proofs: Powering Blockchain Interoperability” by Noah Grossman (Medium, Mar 2022)
Chainlink CCIP: Cross-Chain Interoperability Protocol (CCIP), Chainlink
Cosmos IBC: IBC Protocol, Tendermint
LayerZero: “LayerZero Labs Raises $135 Million to Create Omnichain Crypto Networks” (Business Wire)
Axelar Network: https://axelar.network/

2 Likes

Differences with Thorchain are explained here: https://www.reddit.com/r/dfinity/comments/tr6wic/what_is_the_difference_between_canister_ecdsa_and/?utm_medium=android_app&utm_source=share

3 Likes

I have an idea for how to get the Bitcoin integration onto the big stage:

1 Like

@dieter.sommer

I’ve started converting a few of the Motoko class-based data structures to stable data structures in the repos of this org. My grant project pretty much requires that all of my data structures be stable. I have “build a stable BTree” in my roadmap backlog, but to save time I just converted the RBTree library to a stable RBTree and am using that (it works great for where my project is at now, but a BTree will definitely scale better as the number of records grows).

I’d be happy to review/port your stable BTree library to Motoko, but one of the specific features I’m looking for in a BTree library is that it’s structured in such a way that it is easy to split based on the order of the BTree (for partitioning data and scaling out as the BTree grows toward the canister limit).

2 Likes

@jzxchiang

Can you elaborate on this? I’m under the impression that, stable or not, all Motoko canisters will run into this 4 GB limit. The IC still uses the canister as the encapsulated storage/smart-contract building block within a subnet. Without a specific data schema that lets you know which canister a piece of data lives on in this 300 GB “subnet/canister/whatever you want to call it”, performing CRUD operations on such a large data structure would be incredibly inefficient in terms of cycles and performance, since you’d have to search the canisters one by one.

@dieter.sommer

I may have missed this (the thread is quite long), but I’m curious exactly how all of the data related to this Bitcoin integration is stored on the IC (in multiple canisters, or is the abstraction of a canister being stretched just for this integration?). In addition to a description, do you have any diagrams showing the architecture and per-canister storage?

This is awesome! Thanks for sharing the repository for your stable data structures! I am sure this will help many others who need to use stable memory extensively.

Ours is not yet open source, I think, but will be soon.

Just thinking whether it would make sense to start a forum topic on stable data structures to advertise to the community what we have and get further requirements and participation. What do you think?

4 Likes

The full UTXO set is stored in the “BTC Canister”, which is implemented as part of the replica. It’s called a canister because it is exposed like a management canister. We use the new StableBTreeMap to store the UTXO set directly in stable memory. Since it is implemented as part of the replica, there are no limits on memory use, i.e., we could potentially use all of the replica’s replicated state.
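To make the “directly in stable memory” point concrete, here is a minimal Rust sketch of a map whose entries live only in a flat byte buffer (standing in for stable memory), so lookups decode bytes on the fly instead of deserializing everything into heap objects. This is not the actual StableBTreeMap code: the append-only flat layout, the `FlatMap` name, and the fixed 16-byte entries are illustrative simplifications of what a real B-tree over stable memory does.

```rust
use std::convert::TryInto;

// 8-byte key + 8-byte value, encoded at a fixed offset per entry.
const ENTRY_SIZE: usize = 16;

// Simulated stable memory: a flat byte array. In a replica, reads and
// writes would go through the stable-memory system API instead; a Vec<u8>
// stands in here so the layout logic is self-contained.
struct FlatMap {
    mem: Vec<u8>,
    len: usize,
}

impl FlatMap {
    fn new() -> Self {
        FlatMap { mem: Vec::new(), len: 0 }
    }

    // Append a (key, value) entry, encoding both as little-endian u64
    // directly into the backing memory -- no heap-side object is kept.
    fn insert(&mut self, key: u64, value: u64) {
        self.mem.extend_from_slice(&key.to_le_bytes());
        self.mem.extend_from_slice(&value.to_le_bytes());
        self.len += 1;
    }

    // Linear scan over raw bytes; a real StableBTreeMap would instead walk
    // B-tree nodes stored in stable memory for O(log n) lookups.
    fn get(&self, key: u64) -> Option<u64> {
        for i in 0..self.len {
            let off = i * ENTRY_SIZE;
            let k = u64::from_le_bytes(self.mem[off..off + 8].try_into().unwrap());
            if k == key {
                let v = u64::from_le_bytes(self.mem[off + 8..off + 16].try_into().unwrap());
                return Some(v);
            }
        }
        None
    }
}
```

The design point is that the authoritative data never exists as heap objects, so there is nothing to serialize or restore across an upgrade.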

There are some talks around about the design of Bitcoin integration, other materials will be made open as well, but I think we don’t have the slide set used in the talks public yet. Something to be done on our side…

As we use Wasm32, canister heaps are 32-bit and thus limited to 4 GB. However, stable memory addressing has been upgraded to 64 bits, meaning that, in theory, a single canister could address all replicated state. In practice, there is currently an 8 GB limit on stable memory per canister, but this limit is rather artificial and could be raised. The reason for it is to be conservative in the beginning, raise the limit over time, and possibly lift it entirely at some point. But increasing or lifting the 8 GB limit requires further testing in order to have sufficient assurance that things work as intended with large stable memory allocations per canister.
Stable memory is separate from the heap you use; it does not count toward your heap.
Does that answer your question?
And AFAIK we are already at 350 GB of replicated state per subnet. :slight_smile: This will keep growing with improvements we make in the IC protocol stack.
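As a back-of-envelope check on these numbers (the only assumption beyond this post is the 64 KiB Wasm page size from the WebAssembly spec; the 4 GB heap and 8 GB stable-memory figures are the limits quoted above):

```rust
// Wasm memory is grown in 64 KiB pages (per the WebAssembly spec), and the
// stable-memory system API reports sizes in pages.
const PAGE_SIZE: u64 = 64 * 1024;
const GIB: u64 = 1 << 30;

// Number of 64 KiB pages needed to hold `bytes`, rounded up.
fn pages_for(bytes: u64) -> u64 {
    (bytes + PAGE_SIZE - 1) / PAGE_SIZE
}

// The 4 GiB Wasm32 heap corresponds to 65_536 pages,
// and the current 8 GiB stable-memory cap to 131_072 pages.
fn wasm32_heap_pages() -> u64 {
    pages_for(4 * GIB)
}

fn stable_cap_pages() -> u64 {
    pages_for(8 * GIB)
}
```

So a canister that maxes out both today could hold roughly 12 GiB of state, with the stable-memory share expected to grow as the cap is raised.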

3 Likes

I think that’s a great idea (and would limit the discussion in this topic to just Bitcoin integration :sweat_smile:).

I definitely had a fair amount of questions regarding stability that Andreas Rossberg answered in this post, and I know it’s a major point of concern for developers.

One of the main issues I see here is not a lack of stable data structures themselves, but a lack of understanding of what makes those data structures stable. Additionally, many of the motoko-base library data structures, such as HashMap.mo, were released as classes (to aid programming-style familiarity and adoption), but didn’t include any documentation or assurances about their stability in the library or the code itself, or information on how to make them stable, which I think created some of this confusion in the first place.

Once developers understand stability on the IC, it’s not that difficult to turn an unstable data structure into a stable one (I’m speaking purely from the perspective of a Motoko developer, so I can’t speak for the Rust developer’s experience with stability). So I’m not sure that a single forum topic would help any more than expanding the developer documentation.
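For readers coming from Rust rather than Motoko, the transformation has the same shape: take a structure that holds private heap state (the analogue of a Motoko class, which cannot be declared `stable`), and give it a snapshot form made of plain data that can survive an upgrade. The `Registry` type and its method names below are hypothetical, purely to illustrate the pattern:

```rust
use std::collections::HashMap;

// A heap-based store, analogous to Motoko's class-based HashMap:
// convenient to use, but not directly persistable across upgrades.
struct Registry {
    entries: HashMap<String, u64>,
}

impl Registry {
    fn new() -> Self {
        Registry { entries: HashMap::new() }
    }

    fn put(&mut self, k: &str, v: u64) {
        self.entries.insert(k.to_string(), v);
    }

    fn get(&self, k: &str) -> Option<u64> {
        self.entries.get(k).copied()
    }

    // Snapshot into a plain vector of pairs -- a shape that *can* be
    // persisted (the analogue of writing to a Motoko stable var in
    // the pre-upgrade hook).
    fn to_stable(&self) -> Vec<(String, u64)> {
        self.entries.iter().map(|(k, v)| (k.clone(), *v)).collect()
    }

    // Rebuild the heap structure from the snapshot after the upgrade
    // (the analogue of the post-upgrade hook).
    fn from_stable(snapshot: Vec<(String, u64)>) -> Self {
        Registry { entries: snapshot.into_iter().collect() }
    }
}
```

The cost of this style is that the whole structure is converted on every upgrade, which is exactly the tradeoff discussed later in this thread against structures that live in stable memory permanently.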

Specifically, I would add the following content to this section https://smartcontracts.org/docs/language-guide/upgrades.html

  • Provide a link to https://smartcontracts.org/docs/language-guide/language-manual.html#stability, with some concrete language examples (in both Motoko and Rust), including some do’s and don’ts that showcase transforming an unstable data structure into a stable one (i.e. transforming a class with instance variables and methods into a stable object and functions)
  • Tie in information about how to use developer tooling (i.e. dfx, etc.) to safely check upgrade compatibility, and (feature request) perhaps even show which variables will remain stable, along with any changes (like a git diff) to those stable variables

Maybe at the very least we move this discussion to a separate forum post. I definitely think asking the community about what type of functionality they’d want out of a BTree library would be beneficial.

4 Likes

I wonder, if we had 64-bit Wasm heap some time in the future, would developers still want to operate on stable data structures or just keep everything in the heap and move in and out of stable memory in the pre- and post-upgrade hooks (i.e., use stable memory just for upgrades)?

I think it would really depend on the cycle costs of one versus the other. Personally, I like operating directly on stable data structures and keeping the canister upgrade process as simple as possible, but if it costs me 2x as many cycles I’d probably be less enthusiastic.

Do you have an opinion on this? Just thinking whether it would make sense to further pursue work on stable structures, particularly for Motoko, e.g., as part of the grants program.

Yes! I know that I sorely need stable data structures, and I’m sure other developers need them too.

And I’m referring to a different kind of stable data structure than the kind @icme is working on. IIUC @icme is porting object-oriented data structures (like Motoko’s HashMap) to a functional form that can be stored in a Motoko stable variable using the stable keyword.

That is different from the stable data structure that @dieter.sommer and his team are building, which directly stores data in stable memory without any serialization/deserialization (which is implicitly done when using stable variables).

Just thinking whether it would make sense to start a forum topic on stable data structures to advertise to the community what we have and get further requirements and participation. What do you think?

I think this is a great idea. I suspect quite a few in the community will have thoughts on this. In some sense, this is an interesting alternative to BigMap: instead of scaling out using canisters, we can scale up using stable memory.

Relying on ExperimentalStableMemory in Motoko isn’t ideal, as @claudio explains here.

In fact, another benefit of these types of stable data structures is that they can abstract over ExperimentalStableMemory, so that if we one day want to replace that library with a more fine-grained library that provides better memory isolation we can do so without breaking stable data structure users.
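A minimal Rust sketch of that abstraction: a narrow `Memory` trait that a stable data structure is written against, with a swappable backend. The trait and the `VecMemory` test backend are illustrative names, not an existing IC API:

```rust
// The narrow interface a stable data structure depends on. Swapping the
// backend (ExperimentalStableMemory today, a finer-grained, better-isolated
// library later) then requires no changes to the data structure's users.
trait Memory {
    fn size(&self) -> usize;
    fn read(&self, offset: usize, buf: &mut [u8]);
    fn write(&mut self, offset: usize, data: &[u8]);
}

// An in-process backend for tests; a canister backend would forward these
// calls to the system's stable-memory API instead.
struct VecMemory(Vec<u8>);

impl Memory for VecMemory {
    fn size(&self) -> usize {
        self.0.len()
    }

    fn read(&self, offset: usize, buf: &mut [u8]) {
        buf.copy_from_slice(&self.0[offset..offset + buf.len()]);
    }

    fn write(&mut self, offset: usize, data: &[u8]) {
        // Grow the backing store on demand, mimicking growable stable memory.
        if self.0.len() < offset + data.len() {
            self.0.resize(offset + data.len(), 0);
        }
        self.0[offset..offset + data.len()].copy_from_slice(data);
    }
}
```

A data structure generic over `M: Memory` can then be unit-tested against `VecMemory` and deployed against the real stable-memory backend unchanged.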

1 Like

Thanks for the clarification. You are correct that I was originally referring to porting these data structures to a form where they can be stored in a Motoko stable variable via stable. I misunderstood the path @dieter.sommer and his team were taking, and now understand this is being done in order to use the canister’s full stable memory (currently up to ~8 GB).

A few follow up questions for @dieter.sommer.

  • Why did the team choose to store the data directly in stable memory throughout its lifecycle, given the performance implications? Keeping it on the heap and serializing/deserializing only at upgrade time would seem much more efficient. Does the quote below apply differently to Rust than to Motoko in terms of the performance tradeoffs of permanently storing this data in stable memory?
  • Why not attempt to scale out the UTXO set storage horizontally instead of vertically?

  • If the Bitcoin Integration team has chosen to scale vertically, what does this mean in terms of the IC roadmap for scaling (i.e. BigMap pushed back), and should application developers abandon horizontal scaling and follow suit by scaling vertically?

1 Like
  • Why not attempt to scale out the UTXO set storage horizontally instead of vertically?
  • If the Bitcoin Integration team has chosen to scale vertically, what does this mean in terms of the IC roadmap for scaling (i.e. BigMap pushed back), and should application developers abandon horizontal scaling and follow suit by scaling vertically?

I have the same questions! (I like the way you frame vertical vs horizontal scaling here.)

3 Likes

How does ICs integration method compare to Algorand State Proofs? https://www.reddit.com/r/CryptoCurrency/comments/t5wcvx/algorand_state_proofs_are_here_this_is_huge/

2 Likes

My current strong opinion is that it’s preferable to work with a large heap, so that special stable data structures don’t have to be created.

3 Likes

Hello, are there any updates on the progress with the bitcoin integration?

6 Likes

Hello, are there any updates on the progress with the bitcoin integration?

1 Like

Hello, are there any updates on the progress with the bitcoin integration?

Good question. I have pinged the team to post an update.

4 Likes

Dear community!

It has been some time since the last update on the Bitcoin feature. Thus, let me give you an update on the progress of the Bitcoin integration.

There has been excellent progress on the implementation front for this feature. The team has advanced the implementation of the StableBTreeMap, its integration into the replica code, and fixes to the Adapter, so that we were able to sync an IC testnet with the Bitcoin testnet. :tada: So memory management with the StableBTreeMap, one of the large remaining items, is largely working. We have observed some glitches, but nothing we consider too major.

The next steps are to iron out the issues we have observed during these runs on the testnet and perform further runs in test environments to gain sufficient confidence that everything works as intended. Then (this will be soon!) we will be ready for a first deployment on an IC mainnet subnet, which we will observe and test for some time. Once the system has been confirmed to work as intended, we will open the feature up to a public audience for testing (and for developing against the API).

As a side task, another team is working on the SDK integration of the feature, i.e., managing the Bitcoin Adapter process from the SDK, including its configuration. This is crucial work that helps get the feature fully completed and allows people to conveniently implement canisters against the BTC Canister API.

16 Likes

:tada: :tada: :tada:

Wooo! Awesome job!

4 Likes

When will all parts of this feature be done? In Q2, Q3, or Q4?

1 Like