Sorry, @sat. We were basically rattling the cages to get someone to listen, whilst at the same time digging deeper with our research. We had no idea where this would all lead; my god, it's crazy! You may notice that we have not publicly called anyone out or continued to post information for some time. There is a reason for that.
We actually have a lot of evidence for everything that we have said. It hasn't been a witch hunt or anything like that, and I definitely have better things I would rather be doing. Our goal is to ensure the IC has a strong, solid base to build upon, and we do not wish to harm it.
We cannot post that evidence here because it is now going directly to the foundation, who we are talking to, and they are taking appropriate action. If there is anything in particular that you think should not be on the forums that I or @borovan posted, please feel free to remove it.
To everyone else, sorry if you aren't satisfied with that, but I cannot and will not publicly post things that could be used to harm Dfinity/the IC. You will just have to wait.
Our goal is not to "control the IC"; that goes against everything we believe the IC is capable of. It is a real chance to actually build a meritocracy, to have trustless, incorruptible contracts between people, to make crazy inheritance plans for your future generations with whatever rules you want that will be carried out no matter what; the options are limitless. We are not going to sit by quietly and watch all that potential be destroyed.
Please note this post was submitted about 8 hours ago, but it is another example of a post that was held up for moderator approval. While I don't mind needing a moderator to approve my post, it would be nice to know what triggered the need for moderator review. Moderator approval is a new feature, and I can't tell yet why or how it was implemented. My goal is to stay aligned with the forum rules so a moderator isn't needed.
I’m not sure why you felt the need to insert some personal attacks in this response, but you make some great points. It is a deviation from the original intent of this forum topic, but it’s worth further discussion.
You have done a really good job of raising your concerns. Unfortunately, your tactics are often presented in an ad hominem and accusatory tone. I believe that is why you have felt like you are being ignored. Just look at the thread that you linked above. "SYBILing nodes! :scream: Exploiting IC Network… Community Attention Required!" is a thread that starts off by accusing node providers of nefarious intent in the title itself. The evidence that you provide of this nefarious intent is extremely weak… node providers with the number 23 in their forum username, or accusing husbands/wives and/or business partners of colluding because they chose to use different business entities in different locations. The tone is not one in which any sane person would be interested in engaging.
I would argue that when you started that thread, it was already recognized that improvements needed to be made to the node provider selection and onboarding policies. The need for these improvements was recognized because community members such as yourself, CodeGov, and Aviate Labs started getting paid through the Grants for Voting Neurons program to look at proposals in the Subnet Management, Participant Management, and Node Admin topics through the lens of a reviewer. It became clear that the work processes for how to create the proposals, and what to look for in the reviews, needed to be improved. Yet there are existing rules in place that have been deemed sufficient for years. It takes time to evolve and improve these work processes, and as more people start paying attention, these policies will get better. I've stated on more than one occasion that Adam has made a positive contribution in the node provider space by accelerating the need for defining and implementing these improvements. I am very appreciative of the influence he has been able to exert in this area, but I still wish he would do it in a more professional and civilized manner.
I agree that you have provided some of the best reviews of SNS launches. It's not enough. We need more people doing it, but it is real work and, unfortunately, there is no incentive for people to do anything more than a simple yes or no vote. It's a great example of getting what we pay for. It's also a proposal topic where it would make sense for the whales of the ecosystem to actually fund people and/or organizations to perform more due diligence on SNS projects.
Ideally, reviewers would be aware of every new SNS project and start their investigations the moment it is announced. That would include a thorough review of white papers, forum posts, the SNS organization, the YAML file, the canister code, etc. The reviews would be written before the proposal hits the NNS so they can be posted within 12 hours. The same reviewers would perform this work for every SNS project. Hence, specific reviewers could build a reputation for their work and gain a following, and investors could decide whom they want to trust in their evaluations.
This level of commitment is far beyond anything anyone has demonstrated so far, but I believe it is critically needed, and it needs to be provided by many reviewers. It will never happen if the NNS doesn't fund this type of work.
Yes, sure. But that's a connection you found later, not one Alex had when he started the thread.
As explained by many people, there are legitimate and allowed connections between spouses and business partners, and it can be expected that transactions flow between them for various reasons.
The node providers most widely connected to a Coinbase address were from the AT1, TP1, SJ1, and FM1 data centers, because those pre-genesis node providers had a completely different onboarding model and special contracts that expired last November. The node machine owner (DFINITY) was different from the data center contract owner, who was different from the node provider. Remuneration went to one person, and they had to pay all the interested parties. This too was explained to you already by people who are aware of the history.
If you found other connections based on wallet addresses, then I assume they haven't been presented publicly. If that's the case, then it won't be possible for others to review them.
Sidenote: this post didn't get held up for moderator approval. Yay!
Of course you would, you’d be missing the mark though.
This is first and foremost about Node Admin and Participant Management. I’m not a formal/elected reviewer on those topics, and I don’t think those who are were the driving force behind recognising the need for an overhaul. Just read this thread. If you read all the way down to @Thyassa’s post you’ll see it’s been a constant battle to get others to take this seriously. If nothing else, this indicates the need for a re-election.
I’ll remind you that nothing was happening with regard to these concerns back then. It took almost a month of silence after the post below (while I was working on pulling CO.DELTA together) before something changed.
What changed? The community started shouting. It was needed; this is the point that you keep missing. You've actually been very obstructive throughout this process, and I expect that's why you've received a lot of flak. Here's an example.
On a related note, it's now been 3 weeks since I wrote this post, and nobody has objected, so I'm planning to implement these changes to my tooling this week. I'll cross-post any proposals rejected on that basis in that thread for visibility.
That’s great, thanks Wenzel. Starting… now.
In the meantime, I'd like to bump Donna's post, which does a much better job of putting things in context.
I didn't miss it, @catpirate32. The thing is that making moderation actions public has potential downsides, which is why the DEFAULT behavior on Discourse is NOT to make this information (moderation actions) public. I have no idea if it even can be made public; it is visible to other moderators, but it IS NOT public. I don't have admin privileges on the forum, so I can't check.
Now, since we seem to have started sharing LLM views on the forum, let me share what ChatGPT o3-mini says when I input the most relevant parts of this thread as a prompt:
Below are some recommendations that balance automation, moderator efficiency, and community trust while mitigating the risks:
1. Evaluate and Pilot the Proposed Automation
- Pilot Phase: Introduce the Discourse AI triage feature in a limited, test-oriented mode. Monitor its accuracy and impact on both spam/toxicity reduction and false positives.
- Metrics & Feedback: Establish clear quantitative and qualitative measures (e.g., reduction in moderation load, accuracy rate of classifications, community satisfaction surveys) to assess its performance before wider implementation.
- Human Oversight: Maintain a layer of human review to evaluate automated decisions. Automated post approval should serve as a filter rather than an outright gatekeeper.
2. Engage Stakeholders and Build Consensus
- Moderator Input: Include moderators and experienced community members in both planning and review. Their insights will help balance efficiency with sensitivity, ensuring that legitimate posts are not inadvertently censored.
- Community Dialogue: Organize open discussions or feedback sessions to explain the rationale behind using an automated tool. This transparency helps reduce misconceptions about “weaponized moderation” and promotes trust.
- Transparent Policies: Clearly communicate the rules, processes, and criteria used by both automated systems and human moderators. Consider involving community representatives in policy design to reflect diverse perspectives.
3. Transparency in Moderation Practices
- Flag Visibility: The suggestion to make flags public should be carefully evaluated. Public flagging may increase accountability but also risks exposing flags to abuse or retaliation.
- Risk Consideration: Public flags might discourage community members from flagging borderline cases, which could reduce the effectiveness of moderation.
- Alternative Approach: Consider anonymized or aggregated reports instead of exposing individual flag data. This maintains transparency while protecting user identity and reducing the risk of targeted attacks.
- Communication Strategy: Explain why moderation decisions are made, without compromising sensitive details. This helps counter the narrative that moderation is arbitrary or politically motivated.
4. Adopt Best Practices for Automated Moderation
- Regular Audits: Regularly review the automated system’s performance, including false positive/negative rates, and adjust thresholds as necessary.
- User Appeals: Establish a clear and accessible process for users to appeal or seek clarification on flagged or removed content. This process not only protects user rights but also enhances the system’s fairness over time.
- Iterative Improvements: Use pilot feedback to improve the system. Data-driven adjustments should guide changes, ensuring that the tool evolves alongside community norms and emerging patterns in forum behavior.
5. Mitigate Communication and Trust Risks
- Clear Messaging: Avoid inflammatory language on any side. Reinforce that the goal is to improve overall community health, not to silence dissent.
- Training for Moderators: Provide training sessions on how to work with automated tools and on best practices in conflict resolution. Well-prepared moderators can better manage situations that might otherwise escalate into public criticism.
@catpirate32, the most relevant part is (3) above: "…but also risks exposing flags to abuse or retaliation." I don't think the risk vs. gain is worth it. If we had a community that does not retaliate and troll, I'd be more open to it. If it were technically possible, of course.
Well then I disagree with your opinion and proposal for moderation/automation.
Lack of transparency + AI = more censorship. You are advocating for a lack of transparency and for automating censorship.
I was asking you to show us who has been flagging and censoring the posts here. Based on my observations, I can only assume the existing moderation on this forum is biased. Lack of transparency degrades trust, and that leads to centralized governance.
On the contrary, my friend. My preference would be to keep the AI prompt fully open, so completely transparent, and to allow community members to suggest or even make changes to the prompt to create a better forum for all of us, something that ALL of us will enjoy and benefit from.
So instead of endorsing a culture of blaming other community members and moderators, as some here have recommended, I'm suggesting we endorse a transparent and blameless culture.
There should be very clear rules and guidelines on what is allowed and endorsed and what is not, and on what fosters collaboration and friendly discussion and what does not.
This could be a good starting point for the rules: FAQ - Internet Computer Developer Forum. It could be given to the LLM/AI as the set of rules to follow.
The only thing I was arguing for here is introducing an automated mechanism for enforcing the rules and guidelines, rather than having humans do this. That's all. Based on the feedback, it seems the majority is supportive of a pilot. We can then re-evaluate in a few weeks.
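To make the idea concrete, here is a rough sketch in Python of the kind of pilot I'm describing. Everything in it is hypothetical (the function names, rule ids, and prompt are mine, not any real Discourse API); the point is only the shape: a fully public prompt, a confidence threshold, and humans keeping the final say.

```python
# A minimal sketch of an LLM triage pilot (all names are hypothetical).
# The model acts as a filter, not a gatekeeper: a "hold" only routes the
# post to the normal human moderation queue; nothing is removed automatically.

from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"          # publish immediately
    HOLD_FOR_HUMAN = "hold"      # send to the existing moderator queue

@dataclass
class TriageResult:
    verdict: Verdict
    rule_violated: str | None    # id of the published rule that was matched
    confidence: float            # self-reported confidence, 0.0 to 1.0

# The prompt would be fully public, as suggested above, so anyone can audit
# it or propose changes to it.
TRIAGE_PROMPT = """You are a forum triage assistant. Judge the post ONLY
against these published rules:
{rules}
Answer with: approve|hold, the rule id (or none), and a confidence 0-1."""

def call_llm(prompt: str, post_text: str) -> TriageResult:
    """Stand-in for the real model call; here, a trivial keyword check."""
    if "flame war" in post_text.lower():
        return TriageResult(Verdict.HOLD_FOR_HUMAN, "rule-civility", 0.95)
    return TriageResult(Verdict.APPROVE, None, 0.99)

def triage(post_text: str, rules: str, min_confidence: float = 0.9) -> Verdict:
    result = call_llm(TRIAGE_PROMPT.format(rules=rules), post_text)
    # Low-confidence "hold" verdicts default to approval, so a shaky model
    # can never suppress a post; confident holds still get a human decision.
    if result.verdict is Verdict.HOLD_FOR_HUMAN and result.confidence >= min_confidence:
        return Verdict.HOLD_FOR_HUMAN
    return Verdict.APPROVE

print(triage("Let's start a flame war!", rules="rule-civility: be kind"))
# -> Verdict.HOLD_FOR_HUMAN (queued for a moderator, not deleted)
```

The key design choice is that the automation can only add a post to the human queue; it can never remove anything on its own.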
Has the foundation ever considered making better use of dapps like TAGGR? It has a pay-to-post model and configurable features for managing who has sufficient trust privileges for posting on specific topics (including the age of the account, etc.).
Moderation is also managed in a decentralised way.
@catpirate34, nope, that wasn't me but some other moderator…
You actually did nothing wrong to me so there was no need for me to suspend your account. I try not to waste time doing unnecessary things. I do end up doing a number of other pointless things though, so I’m far from perfect.
FYI, you can get MUCH better results and far fewer problems on the forum if you:
- stop posting inflammatory messages on the forum; you don't want to make yet another moderator (me) angry at you, like you did with the other ones.
Again, @catpirate34, since you don't seem to get it: you did something to get some moderator(s) angry, and that wasn't me. Please correct your behavior (the above steps would work) or face the anger of one more person.
I do want this forum to be a place where people come to learn, not to read messages driven by emotion. Please move to other platforms if you want to have flame wars.
Last warning from my side, @catpirate34, and everyone else sh*tting on the forum.
What is UP with the forum hiding all my replies? Is no discourse allowed on the forum here? Is this how we are helping to improve what is supposed to be the future uncensored and "unstoppable" internet of value? Geez.
It seems that lots of messages are getting flagged, and I didn't see anything wrong with most of them. May I know what the criteria are and who is doing it? Is it automatic?