I’ve noticed dscvr and dsocial don’t actually store post content on ICP; they use IPFS and Arweave respectively.
It seems terrible to me that the IC requires nodes costing thousands of dollars, with multi-terabyte storage, in a subnet of only 13 nodes, and it still seems inferior to hosting on Arweave and IPFS.
It seems like the IC is going to be beaten out by more decentralized solutions, as its monolithic design seemingly hasn’t yielded any benefits over them.
We’re building out a multi-terabyte, DynamoDB-style solution for our upcoming game, Dragginz. Every single part of this game (database, assets, logic) is on the IC.
I think dscvr and dsocial were a bit too early to take advantage of all the progress the IC team has made since they launched, but I wouldn’t be surprised to see them move back when a few of the issues have been ironed out.
Can you clarify a little more about how you are scaling it? The biggest constraint, to me, is that the IC currently does not support inter-canister query calls, so how do you scale to terabytes of data without waiting multiple seconds for a simple query?
We’re not using the “auto-scale” feature of CanDB just yet because it would make our system overly complex. Instead, we split the data tables into different microservices and look up the target canister by taking the ID modulo the number of canisters.
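The modulo lookup described above can be sketched in a few lines. This is an illustrative sketch, not Dragginz’s actual code; the canister IDs and the bucket count are placeholders:

```python
# Placeholder canister IDs for illustration only.
BUCKET_CANISTERS = [
    "aaaaa-aa",
    "bbbbb-bb",
    "ccccc-cc",
]

def canister_for(record_id: int) -> str:
    """Route a record to a fixed bucket canister by modulo of its ID."""
    return BUCKET_CANISTERS[record_id % len(BUCKET_CANISTERS)]
```

The appeal of this scheme is that the mapping is stateless and deterministic: any frontend or canister can compute the target without a directory lookup, at the cost of an expensive re-shard if the bucket count ever changes.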
Certainly not the same scale as these amazing projects above, but my personal project Papyrs does already use the IC as file storage. If you upload images, they are saved in a custom asset canister.
Speaking of scalability, each of my users gets a personal smart contract, i.e. canisters are generated on the fly for users. So in that sense, I think there are ways to make the architecture scalable.
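The canister-per-user pattern amounts to a registry that lazily provisions a canister on first access. A minimal sketch, assuming a `create_canister()` stand-in for the real IC management-canister call (the names here are hypothetical, not Papyrs’s API):

```python
# Registry mapping a user principal to their personal canister ID.
user_canisters: dict[str, str] = {}
_counter = 0

def create_canister() -> str:
    """Stand-in for spawning a new canister on the fly via the management canister."""
    global _counter
    _counter += 1
    return f"canister-{_counter}"

def canister_for_user(principal: str) -> str:
    """Return the user's personal canister, creating it on first access."""
    if principal not in user_canisters:
        user_canisters[principal] = create_canister()
    return user_canisters[principal]
```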
I think the comment about Arweave is a bit derisive, though I suppose that is expected for a decentralized storage network. Why ridicule Arweave, when its only shortcoming is that it can’t go as far as hosting? I would like to know the difference between the IC and Arweave.
Here at Distrikt we have been effectively fully on chain from the start. To achieve this we use two different approaches, since we need both horizontal and vertical scalability.
Our assets are stored using a BigMap fork that was built by DFINITY, and it works fine as long as you do no maintenance (moving data around when adding a new bucket, for example), because that requires too many instructions to process. So we use a fixed number of bucket canisters and offload/re-upload all the assets when we want to add more buckets.
As of now, all our backend data is in one canister, but we are currently working on our new backend infrastructure, which will split our components into canister pools (vertical scaling); those pools will auto-scale over time (horizontal scaling) by spawning clones of themselves (state excluded) in order to store new data.
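The pool idea above can be sketched as follows. This is an illustrative toy, not Distrikt’s implementation: each inner list stands in for one canister’s storage, and the capacity is kept tiny so the scaling is visible:

```python
POOL_CAPACITY = 2  # tiny capacity so the example scales visibly

class CanisterPool:
    def __init__(self) -> None:
        # Each inner list represents one canister's records; start with one canister.
        self.canisters: list[list[str]] = [[]]

    def store(self, record: str) -> int:
        """Store a record; when the newest canister is full, spawn an empty clone."""
        if len(self.canisters[-1]) >= POOL_CAPACITY:
            self.canisters.append([])  # clone of the logic, state excluded
        self.canisters[-1].append(record)
        return len(self.canisters) - 1  # index of the canister that took the record
```

Usage: storing a third record into a pool with capacity 2 triggers a new clone, so the pool grows without moving any existing data, which matches the “state excluded” constraint described above.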
Projects on the IC need to advertise themselves accurately. You can’t claim to be fully on chain if you are using IPFS or even Arweave. The biggest advantage of the Internet Computer is the ability to run fully decentralized applications, preferably entirely on chain on the IC. If not, you might as well use Lens Protocol for socials.