Nominations: outstanding Shielded Expedition submissions & contributions

Amidst the challenges, there were participants, submissions, and contributions that we (Knowable) think really shone during the Shielded Expedition, and we’d love to recognize them beyond the official set of Shielded Expedition winners :high_brightness: Will you help us?

This is a call-out for Shielded Expedition nominations for outstanding submissions in three categories and a bonus fourth category for contributions to the Shielded Expedition itself :mega:

  1. Explorers
  2. Interfaces
  3. Shielded apps
  4. Shielded Expedition support: a) tools b) social support

Please give us a) the category, b) the URL, and c) a brief note on what you particularly liked; optionally, d) mention how it was helpful.

1. Expansive Explorers :telescope:

There were many explorer submissions, and a number of explorer apps were uniquely noteworthy for their outstanding user experience and their advanced features.

Here are some totally nebular explorer submissions that were out of this world :earth_asia: :milky_way:

In particular, check out the governance proposal features, like being able to see stats about how different voters voted. For example, one explorer lets us see the biggest validators that didn’t vote, and also lets us see all IBC assets. Another lets us see when epochs started and ended, and predicts how much time is left before the next epoch. One of the earliest featured explorers in the Shielded Expedition was one we depended upon regularly. Another lets us sort voting stats on governance proposals and sort validators by status, and yet another lets us see geographical distribution statistics.

There were so many explorers that there must be more outstanding apps worth showcasing. Did you see any you particularly liked or appreciated using? Please reply below!

2. Interstellar Interfaces :comet:

We’re going to need some ambitious wallet interface builders for our community to begin the Namada mission. Were there any interfaces in the Shielded Expedition that stood out to you? Please reply below!

3. Shimmering Shielded apps :sparkles:

It was incredibly difficult to make a shielded app during the Shielded Expedition because the Namada SDK was not yet complete :open_mouth: However, several participants persisted and still managed to build some impressive apps. Did any catch your attention? Here are some that stood out to @spork.Knowable:

hkeydesign’s app (currently offline) was the closest to the original vision of what we would want from a shielded action: it was atomic and didn’t use a trusted back-end. :sparkles: Mandragora’s app appeared to have a working Sentinel integration. :open_mouth: EmberStake’s app had data protection considerations with single-use addresses :raised_hands:

These and other apps may not currently be live. We’d love to see them working and to share them with the world. Please let us know if there are others that deserve recognition!

4. Supernoval Support :superhero:

There were all kinds of contributions that didn’t fit into Shielded Expedition ROID point categories that were helpful, interesting, and exciting.

  • In-channel support :people_hugging: ZEN, pretoro, spidey, LiveR, Daniel, amadison79, Rigorous
    • helping competitors? a big ask, and some did it without being asked :heart:
  • Similarity Tool by Rigorous tpknam1qqp0w8f9fg2yxz7nx8mvarld2ufuh2k2dz44ms8gxhacsgce39t5cd4kmgt
    • this tool was critical to identifying Sybil clusters, enabling us to remove 394 accounts :boom:
  • A tool by Kintsugi Nodes tpknam1qr4m5m2eu9zc24fzswe2vjwpywlysqewnn704rj6hlgtw948ah5269al0ls
    • it was available early in the Shielded Expedition and provided Nebb scoring when the Nebb was broken!
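For the curious, a tool like the Similarity Tool above can be sketched in a few lines: compare account metadata pairwise and group accounts whose features overlap heavily. This is purely illustrative — the feature names, threshold, and clustering approach are assumptions, and the actual tool by Rigorous may work very differently.

```python
# Hypothetical sketch of similarity-based Sybil clustering.
# Feature sets and threshold are illustrative assumptions only.
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two feature sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0


def sybil_clusters(accounts: dict[str, set], threshold: float = 0.5) -> list[set]:
    """Union-find grouping of accounts whose metadata overlaps heavily."""
    parent = {name: name for name in accounts}

    def find(x: str) -> str:
        # Path-halving union-find lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(accounts, 2):
        if jaccard(accounts[a], accounts[b]) >= threshold:
            parent[find(a)] = find(b)

    groups: dict[str, set] = {}
    for name in accounts:
        groups.setdefault(find(name), set()).add(name)
    # Only multi-member groups are suspicious clusters.
    return [g for g in groups.values() if len(g) > 1]


accounts = {
    "pilot-a": {"ip:1.2.3.4", "email:x@example.com", "moniker:nodestar"},
    "pilot-b": {"ip:1.2.3.4", "email:x@example.com", "moniker:nodestar2"},
    "pilot-c": {"ip:9.9.9.9", "email:y@example.com", "moniker:solo"},
}
print(sybil_clusters(accounts))  # one cluster containing pilot-a and pilot-b
```

With shared IP and email, pilot-a and pilot-b exceed the similarity threshold and land in one cluster, while pilot-c stays unflagged.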

Please reply below if there was a person who provided a similar calibre of support, or if there was a tool that someone provided that was particularly helpful throughout the Shielded Expedition :pray:

Beyond nominations

Beyond these nominations, we (Knowable) committed to some bounties during the Shielded Expedition.

  • March 1: Oneplus (2500 NAM)
    • discovered & reported what was preventing certain S Class submissions, i.e. those that contained IP addresses, saving us considerable time
  • March 8: KrEwEdk0 (5000 NAM)
    • provided a much-needed snapshot to restart the SE and a lot of other kinds of support
  • March 16: phychain (2500 NAM)

We’d also like to recognize the many GitHub issues and email reports that identified a wide range of issues :raised_hands: We are proposing that a bounty be awarded for these helpful and amazing contributions :tada:

Special recognition for our Uptime Titans: validator operators that persisted against all odds to maintain uptime. We’ve identified 30 operators who had at least 99% uptime, and another 14 who had at least 95% (but below 99%) uptime :clap:

These two uptime missions had to be removed from the Shielded Expedition because such a limited set of participants were able to compete for these missions, and we think it’s still important to recognize these operators and their heroic efforts :muscle:

Uptime titans :mechanical_arm:

95% uptime - 14 validators

99% uptime - 30 validators


When checking, we discovered that our address is not listed in the uptime results, although our uptime was 96.98% during the operation of the testnet.

HEX: 8A398C8BEDB8DB1A6876F168B623BDD988EEB279
Address: tnam1qx0pnh3k5f7xvv407wfqteqk8jg52k5s0527lkt3
Public key: tpknam1qqa0etypknw6jq64vczdth7z53yqjg9nq4djq9jppu87wjqhny0rcyufapm

Check, please.
Thank you


This is NOT an ‘outstanding submission’; this is part of the B category of the Shielded Expedition (Namada Shielded Expedition WANTED Asteroids, ROIDs Point System and Rankings | Blog - Namada).

-The SE ended on April 11th and today is May 16th; it is not at all fair to entirely remove the uptime category, and this has a huge effect on the final ranking

-Validators who, for 2.5 months, had many sleepless nights and helped to coordinate hardforks, upgrades, etc. are penalized, and their prizes are instead given to those creating and voting on nonstop spam proposals, since it seems the governance category will be kept!

-Changing the rules of the SE, especially making such major changes, over a month after the SE ended is absolutely unfair. Removing the whole B category would be one thing, but removing all the ROIDs from those who managed to get over 99% uptime while keeping the ROIDs for those voting on nonstop useless spam proposals: for this I honestly have no words. The only solutions here are to keep uptime, for integrity and to uphold the very values of Namada, or to also remove governance, for fairness; otherwise you risk losing most of the long-term contributors and the community @cwgoes @awa @adrian


I am having a hard time accepting that. We had to go the extra mile a couple of times (!) to achieve that uptime, and now we are getting penalized? It took us a lot of effort to get there, and now this will negatively affect our ranking and contribution points.

This is not what I signed up for. Rewarding spam proposals and penalizing high-quality uptime… Sad to see; I’m disappointed. It gives me a bad feeling that the team would remove an entire category after the SE concluded. I hope you can reevaluate the decision with Adrian and the rest of the team. Thanks.

EDIT: Additionally, I do not believe that a limited set of participants is a valid argument. Even if only a single person had achieved that uptime, the points should fully belong to them. Otherwise, what exactly was the purpose of this game? Everyone played according to the rules. The rules were clear. Do not break them. The people who outperformed should be rewarded accordingly, up to the full upside potential.


from ‘se-100’ channel

Daniel | Keplr - Today PM 6:28 (KST)

This decision may make it just a game to kill time rather than a verification of one’s abilities as a validator.

  1. I invested a lot of time and hundreds of dollars in costs, and I think the same goes for many other validators. How many times did the chain halt? After a chain outage we couldn’t run the monitoring app, so we had to wait and react overnight whenever the chain halted, just for this testnet. And what do you think of the effort put into maintaining uptime even under network load due to frequent spamming? If uptime wasn’t important, there would be no need to put in this much effort. Wasn’t the Shielded Expedition an event to select validators with skills and experience? What was the Shielded Expedition for?

  2. Off topic, but many validators, including myself, missed many proposals because the faucet did not work properly at the beginning of the competition. Is that acceptable? Is this a fair and right decision?

Daniel | Keplr - Today PM 7:10 (KST)

They shouldn’t have removed the uptime tasks. Uptime should have been calculated individually, based on when each validator was in the active set rather than over all blocks of the chain, and jailed validators should have had points deducted differentially depending on whether they had downtime or a double sign (with deductions excluded when the network was restarted with no announcement). In addition, points should be awarded only to validators who participated for more than x epochs.
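The scheme proposed above can be sketched in a few lines: measure uptime only over the window in which a validator was in the active set, then apply different penalties for downtime jailing versus double-sign jailing. All field names and penalty values here are illustrative assumptions, not Namada’s actual scoring.

```python
# Hypothetical sketch of per-validator uptime scoring as proposed above.
# Penalty values and record fields are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ValidatorRecord:
    joined_height: int           # first block in the active set
    left_height: int             # last block in the active set (inclusive)
    signed_blocks: int           # blocks signed while in the active set
    jailed_for_downtime: int     # times jailed for missing blocks
    jailed_for_doublesign: int   # times jailed for double-signing


def uptime_score(v: ValidatorRecord,
                 downtime_penalty: float = 0.01,
                 doublesign_penalty: float = 0.10) -> float:
    """Uptime over the validator's own active window, minus differential
    penalties for the two kinds of jailing, clamped to [0, 1]."""
    window = v.left_height - v.joined_height + 1
    if window <= 0:
        return 0.0
    raw = v.signed_blocks / window
    raw -= v.jailed_for_downtime * downtime_penalty
    raw -= v.jailed_for_doublesign * doublesign_penalty
    return max(0.0, min(1.0, raw))


# A post-genesis validator that joined late but signed almost everything
# still scores highly, because it is judged only over its own window:
late_joiner = ValidatorRecord(joined_height=20_000, left_height=100_000,
                              signed_blocks=79_500,
                              jailed_for_downtime=0, jailed_for_doublesign=0)
print(f"{uptime_score(late_joiner):.2%}")  # above 99% despite joining late
```

Judging each validator over its own active window is exactly what would avoid the post-genesis undercounting discussed further down this thread.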


The most concerning part is the claim that only 2 post-genesis validators could achieve either 99% or 95% uptime and that they are ‘outliers’. The truth is that there is actually a bug in counting uptime for post-genesis validators. When removing the first 2 epochs, during which post-genesis validators couldn’t yet join, the indexer data shows 13 post-genesis validators with over 95% or 99% uptime, meaning they were not outliers; many post-genesis validators actually achieved the uptime. So the claim that ‘such a limited set of participants were able to compete’ for the uptime is 100% inaccurate and false information.

Completely agree. It means our effort is worthless, and it will have a huge impact on the final results. We also helped to relaunch the network after a halt by providing a public node with the needed peers, and I think it was a great stress test for all validators to keep the network going. I must say the team also did a great job in all the emergency situations, and somehow 44 validators managed to keep good uptime, but in the end it doesn’t even matter. I’m very sad about this situation: it means most participants of the SE WIN from it, while the guys who did well LOSE.


The 44 presented in this forum post is actually not correct: these 44 include only 2 post-genesis validators because there is a bug in counting uptime for post-genesis validators. However, when counting correctly with the indexer and removing the first two epochs, when post-genesis validators couldn’t join, a total of 13 post-genesis validators achieved the uptime, meaning that 44 + 11 = 55 validators achieved the 95% or 99% uptime.


I think my wallet should also be nominated for best shielded app.


We operate as validators in various protocols, always with seriousness and commitment. However, it is regrettable to see for the first time decisions that create uncertainties. Many of these decisions are being made after the end of the testnet.

I participated in all the planned activities and was approved in each one of them. However, after the recent “team decision,” the “Building protocol and cryptography improvements” and “Finding Security Vulnerabilities” activities were removed, benefiting only a few selected partners. It seems that decisions are being influenced by those who make the most noise.

Additionally, just when we thought the process had come to an end, new changes regarding Uptime have emerged.

These decisions are difficult to understand, as I do not see the same consistent effort to remove users participating as both pilots and crew members, or to remove spam proposals. How can this be considered valid?

At least the testnet served to alert us to the need to invest our time in projects that truly take us seriously.

There were around 250 active-set validators in the SE; 55 validators is around 20% of them achieving the uptime tasks. It is not 0.2% or 0.02%, it is 20%. How can you claim that 20% of validators achieving uptime means ‘such a limited set of participants were able to compete for these missions’?

For the governance missions there were 62.5 billion ROIDs, the same as for the uptime missions. Achieving the governance missions gave around 1 billion ROIDs, meaning that only around 60 validators achieved the governance tasks. But that is 60 out of ~10,000 pilots, since all pilots could vote on governance. So 0.6% of the ~10,000-pilot set achieved governance, versus 20% (55 validators, including 13 post-genesis) of the ~250-validator set achieving uptime. 20% is 33 times larger than 0.6%. How can Gavin claim on the forum that he removed the uptime missions because ‘such a limited set of participants were able to compete for these missions’ when 33x more achieved uptime than governance, relative to the set of participants in each mission?
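The arithmetic above can be checked in a few lines. All inputs are the figures quoted in this thread, not official data; note that the unrounded rates (22% vs 0.625%) give a ratio near 35x, which the post rounds to ~33x by using 20% and 0.6%.

```python
# Reproducing the participation-rate comparison made in the post above.
# All inputs are the thread's own figures, not official Namada data.

GOV_POOL_ROIDS = 62.5e9        # ROIDs allocated to governance missions
ROIDS_PER_GOV_FINISHER = 1e9   # approx. ROIDs earned per governance finisher
PILOTS = 10_000                # approx. pilots eligible to vote on governance
ACTIVE_SET = 250               # approx. active-set validators in the SE
UPTIME_FINISHERS = 55          # 44 reported + 11 corrected post-genesis

gov_finishers = GOV_POOL_ROIDS / ROIDS_PER_GOV_FINISHER  # 62.5, "around 60"
gov_rate = gov_finishers / PILOTS                        # 0.00625 -> ~0.6%
uptime_rate = UPTIME_FINISHERS / ACTIVE_SET              # 0.22 -> ~20% as rounded
ratio = uptime_rate / gov_rate                           # 35.2 exact; ~33x with
                                                         # the rounded 20% / 0.6%
print(f"governance rate: {gov_rate:.2%}, uptime rate: {uptime_rate:.0%}, "
      f"ratio: {ratio:.1f}x")
```

Either way the ratio is well over an order of magnitude, which is the substance of the complaint.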


Thx for nominating our explorer!

Dear Gavin,

Thank you for your update.

I’d like to address a few key points regarding the recent decision to remove uptime.

Validator Participation
Labeling the participation of 44 validators as “limited” seems inaccurate. While it is true that the testnet faced challenges, it was not impossible for validators to complete their tasks, as evidenced by the number who did. Therefore, the removal of these incentives seems unwarranted, especially since they were promised from the outset.

High Uptime Incentives
The incentivization for high uptime led many validators to remain vigilant around the clock, often at the expense of other responsibilities, to maintain the testnet. This incentivization prompted immediate and dedicated responses to network issues. Removing these incentives post-factum undermines the efforts of those who persevered through these challenges. It seems counterproductive to reward those who may have exhibited negligence or mistakes, rather than those who committed to achieving these points. Again, making a node sign 95-99% was not impossible.

Silent Restart Bug
Instances of the silent restart bug causing jailing appear limited, though maybe it’s more widespread than I have found. Even if it was, proper monitoring should be a basic requirement for any validator, and such issues would not stay silent with appropriate monitoring systems in place. With proper monitoring and alerts, this would not happen. With a validator run with redundancy in mind, this would not happen.

SRE-1: Infra is having an issue
BOSS: Did you fix it?
SRE-1: Yup! We run redundant services and I was promptly notified of the issue by our monitoring.
BOSS: Awesome job, you shall be paid handsomely for our excellent uptime as promised.

SRE-2: So all our infra was down all weekend.
BOSS: How could you let our core services be down for so long?
SRE-2: It was a silent bug…
BOSS: Okay, what about monitoring or alerting to tell you it was down?
SRE-2: I don’t have that setup…
BOSS: What about redundant services or backups?
SRE-2: I don’t have that setup…
BOSS: Oh okay, I’m deducting SRE-1’s pay for this. Sorry about the trouble.

SRE-1: But…


Exactly. And it is not just about not having monitoring, alerts, or redundant services/backups. Some of the validators who didn’t achieve the uptime task, apart from lacking all these setups, seem to have been running on very low-spec, unreliable cloud servers without even knowing the difference between a bare-metal server and a cloud VPS.

Your TL;DR couldn’t be better: validators without proper setups who didn’t achieve the uptime are rewarded, while those who achieved the uptime task with proper setups are penalized.

Yes, but it is even more inaccurate still. In the forum post Gavin presented 44 validators achieving the uptime task, including 2 post-genesis validators, but this is incorrect. When uptime is counted correctly for all post-genesis validators, a total of 13 post-genesis validators achieved uptime, not only 2, so in total 55 validators achieved the uptime task, around 20% of the active set, which is far from a ‘limited set’.