Where is the information from the IC API coming from?

Hello,

I’ve been playing around with the governance canister and noticed that there is a limit on the number of ballots stored per neuron (i.e. votes cast by a neuron).
The governance canister only stores the 100 most recently cast votes, most likely due to memory limitations.

At the same time, you can make a request to the IC API, for instance for this proposal:

https://ic-api.internetcomputer.org/api/v3/proposals/42010

This API allows one to query the known neurons that cast their vote on any proposal.
I’m wondering where this information comes from and where it is stored?
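
For reference, here is a minimal sketch of querying that endpoint (TypeScript, Node 18+ with the global fetch; only the URL is taken from above, I’m not assuming anything about the response shape):

```ts
// Minimal sketch: fetch one proposal from the public dashboard API and
// dump whatever JSON it returns.
const url = "https://ic-api.internetcomputer.org/api/v3/proposals/42010";

fetch(url)
  .then((res) => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  })
  .then((proposal) => console.log(JSON.stringify(proposal, null, 2)))
  .catch(console.error);
```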

More generally, I’ve found it pretty difficult to find official information about this API and would appreciate any additional info. :slight_smile:


Hi @Seb, thanks for this question!

The governance canister only stores the 100 most recently cast votes, most likely due to memory limitations.

This is correct. Note that the ballots are also stored in the proposals, but the same applies there: they are not kept forever, for memory reasons. One vision we have for the future is to store the proposals in an on-chain archive (similar to how the ledger keeps blocks forever).

For now, if you want to keep the ballots, you can regularly query the governance canister and store the information off-chain. This is what the API you referenced does: it stores this information in a persistent relational database.
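
As a rough sketch of that polling approach (TypeScript, Node 18+; I’m using the dashboard endpoint from the question as a stand-in for querying governance directly with an agent, and a timestamped JSON file as a stand-in for a real database):

```ts
import { writeFile } from "node:fs/promises";

// Hypothetical off-chain archiver: snapshot one proposal's data every hour,
// before the governance canister drops older ballots.
const PROPOSAL_ID = 42010;
const ENDPOINT = `https://ic-api.internetcomputer.org/api/v3/proposals/${PROPOSAL_ID}`;

async function snapshot(): Promise<void> {
  const res = await fetch(ENDPOINT);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // A real setup would upsert into a relational database; a timestamped
  // JSON file keeps this sketch self-contained.
  const path = `proposal-${PROPOSAL_ID}-${Date.now()}.json`;
  await writeFile(path, JSON.stringify(data, null, 2));
  console.log(`stored snapshot at ${path}`);
}

// Poll hourly; tune the interval to how quickly the data you need ages out.
snapshot().catch(console.error);
setInterval(() => snapshot().catch(console.error), 60 * 60 * 1000);
```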

I’ll ping the team working on this to see if there is more public information available. If you have additional concrete questions, please feel free to ask here! Hope that helps!


Wow, thank you for the transparency @lara

It’s a good idea for DFINITY products, like the proposals in this case, to be wholly on chain. That way, shortcomings and limitations in the chain’s capabilities can be flagged much earlier by DFINITY engineers working on DFINITY-owned products, as opposed to third-party devs discovering them while working on their own products and then reporting them and working with the core DFINITY engineers to find solutions to these problems.

The reason is that these DFINITY engineers are more closely situated (both physically and organizationally) to the core protocol engineers, so they can surface these capabilities, especially ones related to scalability and developer experience, as they encounter them and help prioritize them on the roadmap.

I believe the term used for this is “dogfooding”. Unless this happens, the internal teams won’t truly appreciate the frustrations that third-party developers run into while using and interacting with the IC and its ecosystem of tools and APIs.

The above is meant with the utmost respect and from someone who only wishes for the IC to succeed. :slight_smile:

Thanks for the feedback. Just to make sure I understand: are you suggesting that most things should be on chain because then the DFINITY engineers “dogfood” it more?

I am not sure I fully understand why “dogfooding” and “on-chain” are directly related: we can also “dogfood” use cases that require off-chain interaction or build things on-chain but never use them.

But overall I agree that it is a good idea if we can also try out our own products.
I also agree that it can be a good idea to keep information on chain, though I would maybe emphasise other reasons for this, such as having a verified history of the proposals on-chain that is easily accessible.


Yes, anything and everything that’s possible on chain should be on chain.

Apart from the verifiable history aspect that you pointed out, there are more benefits that I see:

  • Archival of all data on chain is a use case that other products building on the IC would also need to tackle. This will probably require a complex multi-canister architecture that not every product has the capacity/capability to figure out on their own. For example, storage of all historical token transactions is something most products would want to build, but it’s a difficult thing to build. If DFINITY engineers try building it first, they’ll undoubtedly run into scalability/usability/dev-experience issues that they can effectively communicate to the core protocol devs, which will lead to improvements that benefit smaller products/teams.
  • Apart from the core protocol improvements, there’s a varied list of tools/SDKs/CDKs/extensions/CLI tools that will be used to build these out. Shortcomings and limitations in these will also surface when internal product teams use them to build, and they too will get improvements and enhancements. Currently these tools rely on feedback from external teams/grant participants/forum members as the source of improvement proposals.

That is what I meant by “dogfooding” in this context.

I’m probably trying to explain a vision and falling short putting it into words :slight_smile:

Thanks for the added explanation. I think I now understand!

Just one extra piece of information:

For example, storage of all historical token transactions is something most products would want to build.

We are already doing this in so-called archive canisters that are spawned by the ledger canister to store all old blocks with their transactions. I know your statement was meant to be more general, but in case you are also looking for a solution to this specific problem, it may help to look at how this is done in the ledger! Hope this helps, and please let us know if you have any questions about this concrete solution too!
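
If it helps, here is a minimal sketch of seeing that hand-off in action (TypeScript with @dfinity/agent, against the ICP ledger’s public query_blocks interface; the Candid below is trimmed to just the fields read here, so the ledger’s .did file remains the source of truth):

```ts
import { Actor, HttpAgent } from "@dfinity/agent";

// ICP ledger canister id on mainnet.
const LEDGER_ID = "ryjl3-tyaaa-aaaaa-aaaba-cai";

// Trimmed Candid: only the fields read below are declared. The real
// QueryBlocksResponse carries more (blocks, certificate, and a query
// callback per archived range); Candid's record subtyping lets a client
// ignore fields it does not declare.
const idlFactory = ({ IDL }: any) =>
  IDL.Service({
    query_blocks: IDL.Func(
      [IDL.Record({ start: IDL.Nat64, length: IDL.Nat64 })],
      [
        IDL.Record({
          chain_length: IDL.Nat64,
          first_block_index: IDL.Nat64,
          // Ranges that now live in archive canisters, not the ledger.
          archived_blocks: IDL.Vec(
            IDL.Record({ start: IDL.Nat64, length: IDL.Nat64 })
          ),
        }),
      ],
      ["query"]
    ),
  });

async function main() {
  const agent = new HttpAgent({ host: "https://ic0.app" });
  const ledger: any = Actor.createActor(idlFactory, {
    agent,
    canisterId: LEDGER_ID,
  });
  // Ask for the earliest blocks: the ledger no longer holds them itself,
  // so the response points at archive canisters instead.
  const res = await ledger.query_blocks({ start: 0n, length: 10n });
  console.log("chain length:", res.chain_length);
  for (const range of res.archived_blocks) {
    console.log(
      `blocks ${range.start}..${range.start + range.length - 1n} are archived`
    );
  }
}

main().catch(console.error);
```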


Some additional information that the team working on the dashboard kindly provided:

The dashboard API has a swagger definition available at Public Dashboard API. The swagger page lists all available API endpoints and inputs (optional and required).

Hope this can provide the additional information that you are looking for!
