Strengthening Namada Through Clearer Delegation Criteria

Summary

We propose that the Namada Foundation provide additional transparency and clarity around its delegation program selection process. While we appreciate the intention and effort behind the Round 2 Delegation Program, we believe further detail and structure would benefit the validator community, promote ethical stewardship, and build long-term trust in Namada’s governance.

Status: Draft

Motivation

The current delegation program blog post outlines high-level criteria but lacks detailed insight into the scoring methodology, rationale behind decisions, or a chance to receive validator-specific feedback. As participants who wish to align with Namada’s mission and grow alongside the network, we seek more clarity to better understand how to contribute meaningfully and position ourselves for future participation.

Proposal

We request that the Namada Foundation consider implementing the following improvements, either retroactively for Round 2 or proactively for future rounds:

1. Publish a Transparent Scoring Rubric

Provide a clear, weighted breakdown of how delegation decisions are made across categories such as:

  • Technical performance
  • Governance participation
  • Community contributions
  • Decentralization/Privacy impact

2. Validator Feedback Option

Offer brief individualized or cohort-based feedback on why specific validators were or were not selected for delegation.

3. Appeals or Clarification Form

Enable a lightweight process for validators to request clarification, correct misunderstandings, or appeal a decision in future rounds.

4. Establish a Delegation Advisory Group

Consider forming a rotating community council to advise on or co-review future delegation rounds, or at minimum to provide more insight into who is involved in the process. This would strengthen legitimacy and incorporate diverse perspectives.

Conclusion

We believe these proposed enhancements will create a more open, fair, and resilient delegation process that aligns with Namada’s mission and long-term success. We appreciate the Foundation’s continued efforts and welcome the opportunity to co-create a stronger validator ecosystem together.

Thanks @ChainflowPOS for these considerations!

This discussion may be helpful for us (Luminara), since we also want to do some delegations very soon. As an aside, we intend to be sensitive to operators who may have fallen through the cracks of the recent Anoma Foundation delegation selection.

Transparency vs bikeshedding

Personally, I think that too much transparency can create practical problems, like a) bikeshedding over evaluations (arguments leading to misaligned focus and a breakdown in cohesion, i.e. it could make our community toxic), b) gaming of the evaluations, which can be hard to codify in a meaningful way, and c) expanding the work involved in evaluating every period. If the majority of our community is like "yeah, this seems about right," imo the delegations have been done well. But there should be room for incremental improvements so the process yields better outcomes.

Bottom-up feedback

I think it would be helpful / informative for the entity offering delegations to highlight ideal examples of recipients, because it signals what it is that the entity values and helps the future set of candidates update their efforts accordingly. It could also be helpful for the candidate to make their case for why they should have been selected, since this could help the AF improve their process and ideally not have as many misses next round. It would also help Luminara to see the missed candidates, so we can help retain high-value operators.

Perhaps the AF could make a lightweight process for this, but it may be simpler to have a forum post that a passed-over operator could publicly reply to and, e.g., give the top 3 reasons they expected to be offered a delegation. This could be helpful feedback for the AF, and also an input that others (like us) could use when considering how to delegate.

An advisory group could be ++interesting! It would be great to get a few different perspectives and some insight to help broaden impact and reduce misses. It would be great to find the lightest-weight form of this that integrates into our process without completely exposing it. We want to be judged on the overall results/outcomes of our decisions, not invite a public discordant discussion about each of the micro-decisions within the process.

Impact vs cost of process

I love that there’s this kind of considered feedback, criticism is ++important. Also, perfect is the enemy of good, and imo it’s better to have a “good enough” process than to miss bigger opportunities or have burn-out perfecting it.

Luminara will strive to keep our process as lean as possible, relative to how impactful we assess delegations to be. On impact vs cost of process, we want to be as economical as possible, minimize unrest, avoid pushing good operators to exit, and maintain focus on our primary mission.

@ChainflowPOS fwiw i think that this post fits better in Staking & Validators or General (since PGF Ops is really for how the on-chain community resources are allocated, whereas the Anoma Foundation are not delegating from these resources)

pls lmk if i have your consent to relocate this topic

I very much appreciated this motivation statement. I also personally agree with some of the suggestions that Gavin brought up here, specifically giving candidates the chance to state why they should’ve been selected. However, let me point out that this is strictly different from an ‘appeal’.
More tangibly, we will do our best to get back to any questions or comments that have been sent, e.g. via email, in a more timely manner. Thanks a lot for your patience so far!

You have my consent to relocate this topic.

Totally hear you on the risks of over-engineering the process. We're explicitly trying to avoid the trap that grinds DAO operations to a halt for months or indefinitely. Burnout is real, and no one wants to get stuck in a feedback loop that slows down actual progress.

I also agree that we don’t need full transparency to improve outcomes. Even small steps like highlighting “ideal recipient” examples or qualitatively signaling which criteria matter most (e.g., uptime > infra decentralization > social media) could go a long way in helping operators orient without turning it into a gameable system.

Really liked the bottom-up idea of allowing passed-over validators to share why they felt they were a fit. It creates a lightweight feedback loop without adding more work to AF. I’d be happy to kick that off if helpful. Maybe we could pin a thread or agree on a quick format (e.g., “3 reasons you thought you aligned” + optional links)?

And love that you're open to some form of validator advisory group. I agree, it doesn't need to expose every micro-decision. Even a small sanity-check group (3-5 validators or mods, rotating every x months) could help flag edge cases or accidental misses before decisions go out.

Appreciate the open convo, this kind of iterative back-and-forth is helpful.