IC Wiki Maintenance

Ok sorry to hear about the long hours, but you are working for the company that will dictate the future.

A wiki is designed to be a collaborative project. If that is not what is intended, don’t use a wiki.

A large quantity of information was outdated, missing or incorrectly named resulting in the list of Node Providers not being valid. It was essentially an opaque mess, which is understandable if the wiki maintenance was left to one or two people. That is not how a wiki should operate.

There was also no consistency. Bear in mind your audience here. You have high functioning autists and logically focused people. If you wish to share information (which I highly recommend given the IC’s tenet on transparency) it should be in an organised, consistent manner.

I do not want to tell you how to do your job, but if you want to use the old Hitchhiker's planning-regulations excuse to defend how some node providers were onboarded, you need to make sure that all the information is available to the community!

“But the plans were on display…”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’”

6 Likes

Indeed this is the right approach to take. We can have discussions if any sections warrant adding to the wiki pages of node providers. Such sections could then be added if agreed and each node provider can update those sections themselves. This can be part of the wider discussions that are currently taking place.

We are also considering an adjustment of some sort that would enable the community to add things to the wiki without moderators having to approve them, as that is part of the challenge. We could be legally liable for approving details that aren’t true, which means we would have to somehow verify every detail added, which isn’t really possible. For the moment, then, we have to limit additions to what the NPs themselves add.

Please note that this is not meant to discourage you from your research at all. It is merely due to the limitations and legal considerations of the wiki.

2 Likes

Are there any teams working on a decentralised wiki? Internally or externally?

2 Likes

Happy to add a bounty for that as it would be useful for many projects, Dragginz included.

Not the simplest thing to do, but having a wiki that can never be taken down and has absolute clarity of who edited what and when certainly has its appeal.

1 Like

There is a mature open-source Node.js-based project, Wiki.js, here: https://js.wiki/
It might be possible to deploy this using the Azle CDK with SQLite as the backend database, as SQLite has been proven to run in an IC canister as well (to some level of performance).

How difficult this would be to get working, and how performant and reliable it would be as a production-grade wiki site, is the hard part; I am not an IC dev, so this could be shot down quickly.

1 Like

We have done https://2g62z-zaaaa-aaaao-qj6ta-cai.icp0.io/ and it’s open source: GitHub - internet-computer/node-providers. The goal is to move the wiki there; we have already uploaded all the PDFs related to node providers to the canister.

4 Likes

Thank you Wenzel. I have no idea who Snoopy is. The only thing I can tell through the admin side of IP addresses is that he/she is about 9 time zones away from me. Trust me… I haven’t got the time to juggle multiple forum identities. I’m in operations here, and my job is to keep certain things running, secure, and accurate… and that is my top priority. Any assistance I give in the forums is farther down the priority list, since I’m not one of the main admins in here.

Gotta get back to work.

2 Likes

Interesting, could you point me to an example? I didn’t know this was the case.

Your goal is to move the wiki there

1 Like

There have been multiple efforts to get SQLite working on the IC to provide an “embedded” SQL-compliant database engine in a canister dapp.

A forum search for sqlite will return a whole history of discussion around the idea, the need for it, and actual implementations. The best starting point is reading this 2021 post

Important work has since been done by @sgaflv with his stable-fs implementation, a high-IO-throughput in-canister virtual filesystem backed by stable memory. Performance testing of an SQLite db file on stable-fs is shown at the end of the head post here

More recently @lastmjs has posted about Demergent Labs’ plans to support SQLite and possibly PGlite (a wasm-compiled branch of PostgreSQL that runs in-browser) in an upcoming release of their Azle CDK, once Wasm module support is added to it.

PGlite would be the ideal in-canister database engine IMHO, although the TCP connection protocol support would need to be replaced with an ICP-based transport for inter-canister query comms (or done without altogether). The maturity, extensible datatype support, and ease of implementing extensions to the pg engine make a compelling package. My team have used PostgreSQL in production web2 application operations since 2003 and I have never once regretted that decision. It is robust, flexible, and incredibly well supported by its open-source community.

I know in theory we “don’t need traditional databases” on the IC, a position usually argued based on having persistent canister state for code+data and orthogonal memory support. SQL databases are also sometimes seen as a “legacy technology”. But relational databases remain the dominant technology for transactional enterprise data management in the commercial world. Likewise, the relational data model and relational calculus remain a crucial abstraction over the data object references we use to implement data structures in most modern programming languages. The problems with searching and updating ever-larger data structures using tree and graph models are what inspired E.F. Codd to design and propose the relational model of data management back in 1970. There is a very good article covering this history and the context of why this development was so important here: Important Papers: Codd and the Relational Model

3 Likes

Thanks for the references! I’ll catch up on the discussions when I get a chance. I fully agree that a performant in-memory relational store (living inside a canister, and subject to consensus) that can be interfaced with ANSI SQL is super valuable.

SQLite has been supported through SQL.js for a while now: Databases - The Azle Book

Once we release Azle 1.0 and can focus on experimental features again, I would love to work on PGlite.

1 Like

We don’t need SQLite, we have stable structures.

1 Like

But SQLite provides a declarative query language to interface with your data, a query planner, data integrity constraints etc. I think this sort of thing is very much needed for building complex data driven dapps, isn’t it?

1 Like

Thanks @lastmjs. What’s the performance of sql.js like relative to standard SQLite?

Rust already has all the tools needed to interact with data structures inside a thread_local RefCell with dynamic borrow checking at runtime. SQLite, inside a canister, would just be a wrapper around those things. Although it’s true that SQLite in a real computer is really useful!
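For readers less familiar with the pattern, a minimal stdlib-only sketch of the thread_local RefCell approach mentioned above (the map and accessors are invented for illustration; canister state is typically held this way because canister code runs single-threaded):

```rust
use std::cell::RefCell;
use std::collections::BTreeMap;

thread_local! {
    // Canister-style global state: interior mutability via RefCell,
    // with borrow rules checked dynamically at runtime.
    static USERS: RefCell<BTreeMap<u64, String>> = RefCell::new(BTreeMap::new());
}

fn add_user(id: u64, name: &str) {
    USERS.with(|users| {
        users.borrow_mut().insert(id, name.to_string());
    });
}

fn get_user(id: u64) -> Option<String> {
    USERS.with(|users| users.borrow().get(&id).cloned())
}

fn main() {
    add_user(1, "alice");
    println!("{:?}", get_user(1)); // prints Some("alice")
}
```

An SQL layer in a canister would ultimately sit on top of state held like this; the trade-off discussed above is whether the declarative query interface is worth the wrapper.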

1 Like