TLDR: I’ve built a reputation-tracking tool that any developer, web2 or web3, can use to track user reputation in their app, platform, or community. I need ~50 volunteers to stress-test it, so I can get insight into cycle consumption and vulnerabilities to exploitation.
Hello everyone, I’m the founder of Solutio (https://solutio.one/), and in my free time I built a reputation-tracking tool that I want to use for Solutio to handle token distribution to users who actually help grow the platform. But I need to test it, see how it performs, and get help trying to break it. I need volunteers to log in, try voting, try cheating, and let me know about their experiences.
Who is it for?
The tool works by users and platforms voting on each other, and it can be used in many ways:
- make sure giveaways go to real users
- track buyer/seller reputations on eBay/Shopify-like platforms
- build web3 social media in which users gain reputation in the topics they post about. Imagine an app where a developer has more influence in topics about the programming languages he regularly posts in, while an environmentalist has more influence in topics related to his expertise. It can even be used for web3 blogging platforms.
How does it work?
- Users vote on each other
- Votes carry reputation
- Votes from users with higher reputation carry more weight
- Anyone can create reputation #tags, and user reputation is exclusive to that tag.
- There’s a minimum reputation users must reach for their votes to start “counting”, which keeps bots and fake accounts at bay and makes the system extremely resistant to them.
- If a user’s reputation changes, his future votes AND his past votes update to reflect the change.
- This means that if a bad actor upvotes fake accounts or bots and is caught, then as users downvote his account, the votes he made also start losing power.
- Votes decay over time
- Each reputation #tag can set its own rules for vote decay over time, minimum thresholds for being trusted, and the minimum number of users needed to end the reputation bootstrapping phase
- During bootstrapping everyone’s votes count; after bootstrapping is over, only the votes from trusted users count
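To make the mechanics above concrete, here is a minimal sketch of how a single account’s reputation within one #tag could be computed. All names, types, and parameters here are illustrative assumptions of mine, not the actual implementation:

```typescript
// Illustrative sketch of the voting rules described above.
// All names and parameters are hypothetical, not the real code.

interface Vote {
  voter: string;      // anonymous account id
  target: string;     // account being voted on
  value: 1 | -1;      // upvote or downvote
  castAt: number;     // timestamp in ms
}

interface TagConfig {
  minTrustedReputation: number; // votes from below this carry no weight
  decayHalfLifeMs: number;      // a vote's weight halves every half-life
  bootstrapping: boolean;       // during bootstrap, every vote counts
}

// Reputation of one account within a single #tag.
function reputation(
  account: string,
  votes: Vote[],
  repOf: (voter: string) => number, // CURRENT reputation of each voter
  cfg: TagConfig,
  now: number,
): number {
  let total = 0;
  for (const v of votes) {
    if (v.target !== account) continue;
    const voterRep = repOf(v.voter);
    // Outside bootstrapping, only trusted voters count.
    if (!cfg.bootstrapping && voterRep < cfg.minTrustedReputation) continue;
    // Weight scales with the voter's CURRENT reputation, so past votes
    // are effectively re-weighted whenever that reputation changes.
    const decay = Math.pow(0.5, (now - v.castAt) / cfg.decayHalfLifeMs);
    total += v.value * voterRep * decay;
  }
  return Math.max(0, total);
}
```

Because the weight is read from the voter’s *current* reputation at calculation time, downvoting a bad actor automatically weakens every vote he ever cast, which is the cascading effect described above.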
- Backend is built using Juno, which means I can set the database to “private” after the tests, at which point not even I will have access to private user information.
- I could even allow users to create #tags in which users can’t see which account cast which vote (accounts are already anonymous unless the user reveals who he is, but this would be an additional level of anonymity).
- I can allow users to create “personas” for each community, so users can keep their work and private-life reputations apart. Gain reputation at #onlyfans without it leaking over to your #icp reputation.

What makes it different?
- it is completely anonymous: real proof of human without KYC
- any app can use it however it wishes, under its own rules. Embed it into your app in any way that makes sense to you. Maybe you vote on users each time they buy, maybe users vote on each other automatically after successful transactions, or maybe it is just a ‘like’ button that carries a vote in the #tag a topic is about. Use it however you like.
- completely free, and completely open source (though once cycle costs get too high, I might offer big accounts a way to pay for their cycles). This is NOT a for-profit app. I built it to benefit the ICP community.
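As a rough idea of what embedding could look like, here is a hypothetical wiring of an app event to a reputation vote. The `castVote` function and its payload shape are invented for illustration; the satellite’s actual Candid interface may look quite different:

```typescript
// Hypothetical integration sketch: an app event that casts a
// reputation vote. Names and payload shape are made up; the real
// satellite's Candid endpoints may differ.

type TagVote = { tag: string; target: string; value: 1 | -1 };

// Stand-in for the network call to the satellite: here we just
// queue votes locally so the sketch is self-contained.
const voteQueue: TagVote[] = [];

async function castVote(vote: TagVote): Promise<void> {
  // In a real app this would call the satellite's Candid endpoint.
  voteQueue.push(vote);
}

// Example: automatically upvote the seller after a successful purchase.
async function onPurchaseComplete(_buyer: string, seller: string) {
  await castVote({ tag: "shop", target: seller, value: 1 });
}
```

The same pattern would cover the other embeddings mentioned above: a ‘like’ button would simply call `castVote` with the #tag of the topic being liked.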
How can you help?
I built it, and now I need a group of people to help stress-test it and try to exploit it. I want answers (and questions) about:
- How many cycles does it use? I heavily optimized the code, but the fact that old votes’ weights change according to the voter’s current reputation means a lot of recalculation over time. I used a mix of caching, partial updates, and calculating as users are queried, but I’m still curious to see how intensive it can get.
- How easy is it to exploit? Users usually find a way, but I am hoping this community-based reputation system makes it hard for bad actors to gain reputation without ever being caught. Whenever a bot is found, users can see who upvoted it and follow the trail to downvote the accounts that are creating the mess.
- What kind of features would app developers want for this? And is anyone interested in integrating? This is a Juno satellite with Candid endpoints, meaning anyone will be able to use it in their app.
So if you’re interested, please let me know. No technical knowledge is required; it is a very easy web interface that anyone can use.
How cool would it be if an ICP app were the first to crack proof-of-human without KYC: completely anonymous and bot resistant?
If you want to help spread the message, here’s the tweet where I’m asking for volunteers:
Sounds interesting. When I was more active on TAGGR I remember there being a lot of bot and/or sybil accounts (they were called farmers, as they were upvoting themselves to farm the TAGGR rewards distribution).
How do you tackle this sort of scenario? What’s to stop a user giving themself a high reputation via other accounts, and creating their own inner circle of accounts that bootstrap themselves with high reputation? How would genuine users spot fake accounts of this sort, and how would they outvote accounts that already have high reputation?
Great question! That’s exactly what I’m trying to prevent with my approach.
- There’s a minimum reputation users must reach for their votes to start “counting” (keeps bots and fake accounts at bay)
- If a user’s reputation changes, his future votes AND his past votes update to reflect the change.
- This means that if a bad actor upvotes fake accounts or bots and is caught, then as users downvote his account, the votes he made also start losing power.
So in practice: bots can participate and vote, but since their reputation is 0, any votes they cast will not be included in the calculations. They can vote 1000 times, and it will have the same result: 0 reputation gained.
Users must reach a minimum reputation threshold for their votes to start being included in calculations, and they can only gain reputation by being voted on by other users in good reputation standing.
“But what happens if a user pretends to be honest, and starts upvoting bots after he has gained enough reputation?”
- If you see a bot, you can check his profile and see everyone who voted on him. You can then downvote those accounts.
- If a user loses reputation, all the votes he has cast, past and future, also lose power. So by downvoting a single bad actor, you can take down a whole net of bots he has spawned.
- And each community can set its own threshold, which decides how much reputation a user must reach for their votes to start counting.
So in practice it works like this:
- you create a new reputation #tag and set the threshold to 100 users
- until that community reaches 100 users, everyone’s votes count.
- once 100 users are reached, the #tag automatically restricts itself, and from there on, only the votes from reputable users count.
- new users need to be upvoted by existing users to reach trusted level.
A bit like Karma in reddit communities, except you can’t farm karma in the #memes tag and use that to post in the #ICP tag. Each community has its own reputation.
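The bootstrap rule above could be sketched like this; types, names, and numbers are illustrative assumptions, not the real implementation:

```typescript
// Minimal sketch of the bootstrapping rule described above.
// Names and numbers are illustrative, not the actual code.

interface Tag {
  name: string;
  bootstrapUserTarget: number; // e.g. 100 users
  users: Set<string>;          // accounts that joined this #tag
}

function isBootstrapping(tag: Tag): boolean {
  return tag.users.size < tag.bootstrapUserTarget;
}

// A vote counts if the tag is still bootstrapping, or if the voter
// has already crossed the tag's own trust threshold.
function voteCounts(tag: Tag, voterRep: number, trustThreshold: number): boolean {
  return isBootstrapping(tag) || voterRep >= trustThreshold;
}
```

Once the user target is hit, the condition flips permanently: new accounts can still vote, but their votes only start counting after existing trusted users have upvoted them past the threshold.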
Does that answer your question? And more importantly: do you think it will work in practice? It will depend a lot on how vigilant the community is. I hope that “moderators” take on that role and, by reporting bad actors, get upvoted themselves, giving them more power to downvote bad actors and spammers.
Hey there!
I don’t want to discourage you - please, continue to develop wonderful stuff and don’t let anyone doubt your ideas.
But just as a piece of advice.
The most important reputation metric is money. Everything else gets abused and re-sold to scammers easily on any platform (we recently started to encounter p2p merchants on Binance and Bybit selling their high-reputation accounts to scammers, who then use those accounts to launder money via unsuspecting traders). So it is better to focus on on-chain activity, staking, and the like: something that implies you possess some value in your pockets.
The UI looks great, though!
I think almost anything that can be devised can also be gamed if the incentives are there, but that’s not to say it isn’t valuable to make gaming the system harder. I agree that what you’ve built looks great, and it aims to solve a very prevalent and important problem.
Perhaps you could consider adding how much stake is backing the reputation of each user (or take stake into account when establishing reputation), and integrating with DecideAI for facial recognition would also be cool (another signal that could contribute to reputation).
Perhaps what you’re building could eventually become a one-stop shop for a host of different measures and deterrents, and provide a comprehensive dashboard for all of this sort of thing. Keep up the good work!
I also think the sooner you can try and put this to work somewhere, and gain some real feedback from real users, the better. @aligatorr89, could you imagine TAGGR ever making use of something like this?
I can only think of using DecideAI for recognition. For other domain actions, comments and VP have to be considered; definitely not voting on “who is human”.