Apologies for the late response.
I agree with @willguest, another review group layer falls prey to the same issue. As you said yourself, it could continue ad infinitum: who reviews the reviewers who review the reviewers, and how do we make sure the rewards we give actually reward quality reviews? I do think your initial statement holds a lot of weight, however. The true end goal of any of these concepts is that proposals are reviewed accurately without the need for too many layers or mechanics; we definitely need that. So, with that in mind:
How about this: during the voting phase, each neuron also leaves a comment explaining its decision, and after voting is done each neuron rates all the other neurons' comments. Kind of like Yelp reviews, but with randomization. This can also build a neuron's trust status so it gets chosen more often, and since neurons are chosen randomly it would be hard to game the system. I could see that solving your infinite-review dilemma. Within the same group, we are now not only reviewing the proposal for accuracy, but at the end we also review each neuron's decisions for accuracy, using the comments to identify bots or individuals who aren't really participating. They get pushed down the totem pole and the good neurons go up. You don't know who is who, so you won't go out of your way to help a stranger who won't help you. Social dynamics for the win.
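To make the mechanic concrete, here's a rough sketch of how trust-weighted random selection plus post-vote peer rating could fit together. Everything here is illustrative: the `Neuron` class, the 0-5 rating scale, and the update rule are my own assumptions, not anything from the actual NNS.

```python
import random

class Neuron:
    def __init__(self, neuron_id, trust=1.0):
        self.neuron_id = neuron_id
        self.trust = trust  # higher trust -> chosen more often

def select_reviewers(neurons, k, rng=random):
    """Pick k distinct reviewers at random, weighted by trust.
    Selection stays random even for high-trust neurons, which is
    what makes the system hard to game."""
    chosen = []
    pool = list(neurons)
    for _ in range(min(k, len(pool))):
        weights = [n.trust for n in pool]
        pick = rng.choices(pool, weights=weights, k=1)[0]
        chosen.append(pick)
        pool.remove(pick)
    return chosen

def apply_peer_ratings(neuron, ratings, lr=0.1):
    """After voting, the other reviewers rate this neuron's comment
    (assumed scale 0-5). Trust drifts toward the average rating:
    bot-like or low-effort comments push a neuron down the totem
    pole, thoughtful ones push it up. Trust is floored so a neuron
    is never fully excluded from random selection."""
    if ratings:
        avg = sum(ratings) / len(ratings)
        neuron.trust = max(0.1, neuron.trust + lr * (avg - 2.5))
    return neuron.trust
```

So a neuron that consistently posts comments rated 5/5 climbs from trust 1.0 toward higher selection odds, while one rated 0/5 sinks toward the floor, all without any reviewer knowing whose comment they rated.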
@Zane I also really like the blind voting idea. It's so simple yet so powerful.
Now combine everything we have so far and, holy shit, it gets very complicated, but I have no doubt it would stop spam proposals, or at the very least heavily mitigate them lol.