-The SE ended on April 11th and today is May 16th. Entirely removing the uptime category this late is not fair at all, and it has a huge effect on the final ranking.
-Validators who, for 2.5 months, had many sleepless nights and helped to coordinate hardforks, upgrades, etc. are penalized, and their prizes are instead given to those creating and voting on nonstop spam proposals, since it seems the governance category will be kept!
-Changing the rules of the SE, especially such major changes, over a month after the SE ended is absolutely unfair. If only the B category had been removed that would be one thing, but removing all the ROIDs from those who managed to get over 99% uptime while keeping the ROIDs for those voting on nonstop useless spam proposals, for this I honestly have no words. The only fair solutions are to keep uptime, for integrity and for upholding the very values of Namada, or to also remove governance for fairness; otherwise you risk losing most of the long-term contributors and the community. @cwgoes @awa @adrian
I am having a hard time accepting that. We had to go the extra mile a couple of times (!) to achieve that uptime and now we are getting penalized? It took us a lot of effort to get there, and now this will negatively affect our ranking and contribution points.
This is not what I signed up for. Rewarding spam proposals and penalizing high-quality uptime… Sad to see - disappointed. It gives me a bad feeling that the team removes an entire category after the SE concluded. I hope you can reevaluate the decision with Adrian and the rest of the team. Thanks.
EDIT: Additionally, I do not believe that a limited set of participants is an argument. Even if only a single person achieved that uptime, the points should fully belong to them. Otherwise, what exactly was the purpose of this game? Everyone played according to the rules. The rules were clear. Do not break them. The people who outperformed should be rewarded according to the full upside potential.
This decision risks turning the SE into a game to kill time rather than a way of verifying one's abilities as a validator.
I invested a lot of time and hundreds of dollars in costs, and I think the same goes for many other validators. How many times has the chain halted? After the chain outage, we couldn't run the monitoring app, so we had to stay up and react overnight whenever the chain halted, just for this testnet. And what about the effort put into maintaining uptime even under network load caused by the frequent spamming? If uptime wasn't important, there would be no need to put in this much effort. Wasn't the Shielded Expedition an event to select validators with skills and experience? What was the Shielded Expedition for?
Off topic: many validators, including myself, missed many proposals because the faucet did not work properly at the beginning of the competition. Is that just acceptable? Is this a fair and right decision?
Daniel | Keplr - 7:10 PM (KST):
They shouldn't have removed the uptime tasks. Uptime should have been calculated individually, based on when each validator was in the active set rather than over all blocks of the chain, and jailed validators should have been penalized differently depending on whether they were jailed for downtime or for double-signing (with no point deduction when the network was restarted without any announcement). In addition, points should only be awarded to validators who participated for more than x epochs.
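To make that proposal concrete, here is a minimal sketch of that kind of per-validator scoring, in Python. The inputs (signed/active block sets, jail events, excluded restart windows), the penalty weights, and the epoch threshold are all hypothetical placeholders, not the SE's actual scoring rules.

```python
# Hypothetical sketch of per-validator uptime scoring as proposed above.
# Assumed inputs per validator (none of these names come from the SE tooling):
#   active_blocks    - block heights where the validator was in the active set
#   signed_blocks    - block heights the validator actually signed
#   jail_events      - list of (reason, height), reason in {"downtime", "double_sign"}
#   excluded_blocks  - heights around unannounced restarts, not counted at all
#   epochs_active    - how many epochs the validator participated in

MIN_EPOCHS = 10          # the "x epochs" threshold from the proposal; value made up
DOWNTIME_PENALTY = 0.1   # illustrative weights only
DOUBLE_SIGN_PENALTY = 0.5

def uptime_score(active_blocks, signed_blocks, jail_events,
                 excluded_blocks, epochs_active):
    # Only score validators that participated long enough.
    if epochs_active < MIN_EPOCHS:
        return 0.0

    # Measure uptime only over blocks where the validator was in the active set,
    # minus blocks excluded because of unannounced network restarts.
    countable = set(active_blocks) - set(excluded_blocks)
    if not countable:
        return 0.0
    uptime = len(countable & set(signed_blocks)) / len(countable)

    # Differentiate jailing: double-signing is penalized harder than downtime.
    penalty = sum(DOUBLE_SIGN_PENALTY if reason == "double_sign" else DOWNTIME_PENALTY
                  for reason, _height in jail_events)

    return max(0.0, uptime - penalty)
```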
The most concerning part is the claim that only 2 postgen validators achieved either 99% or 95% uptime and that they are 'outliers'. The truth is that there is a bug in how uptime is counted for postgen validators. When the first 2 epochs, during which postgen validators could not yet join, are removed, the indexer data shows 13 postgen validators with over 95% or 99% uptime, meaning they were not outliers; many postgen validators actually achieved the uptime. So claiming that 'such a limited set of participants were able to compete for the uptime' is 100% inaccurate and false information.
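For reference, the recount described here amounts to a few lines of code: drop the first two epochs, then compute signed/expected blocks per validator. The row format below is a made-up placeholder, not the actual indexer schema.

```python
# Illustrative recount of post-genesis uptime, skipping the first two epochs
# (when post-genesis validators could not yet be in the active set).
# `indexer_rows` is a hypothetical list of per-(validator, epoch) dicts:
#   {"validator": addr, "epoch": n, "signed": int, "expected": int}

SKIPPED_EPOCHS = {0, 1}

def postgen_uptime(indexer_rows):
    totals = {}  # validator -> [signed, expected]
    for row in indexer_rows:
        if row["epoch"] in SKIPPED_EPOCHS:
            continue
        signed, expected = totals.setdefault(row["validator"], [0, 0])
        totals[row["validator"]] = [signed + row["signed"], expected + row["expected"]]
    return {v: s / e for v, (s, e) in totals.items() if e > 0}

# Validators clearing the 95% threshold once the first two epochs are dropped:
# over_95 = [v for v, u in postgen_uptime(rows).items() if u >= 0.95]
```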
Completely agree. It means our effort is worthless, and it will have a huge impact on the final results. We also helped to relaunch the network after the halt by providing a public node with the needed peers, and I think it was a great stress test for all validators to keep the network going. I must say that the Citadel.one team also did a great job in all the emergency situations, and somehow 44 validators managed to keep good uptime, but in the end it doesn't even matter. Very sad about this situation. It means most participants of the SE win from this situation, and the people who did well lose.
The 44 presented in this forum post is actually not correct; these 44 include only 2 postgen validators because of a bug in counting uptime for postgen validators. When counting correctly with the indexer and removing the first two epochs, during which postgen validators couldn't join, a total of 13 postgen validators achieved the uptime. Since 2 of them are already among the 44, that makes 44 + 11 = 55 validators who achieved 95% or 99% uptime.
We operate as validators on various protocols, always with seriousness and commitment. However, it is regrettable to see, for the first time, decisions that create uncertainty. Many of these decisions are being made after the end of the testnet.
I participated in all the planned activities and was approved in each one of them. However, after the recent "team decision," the "Building protocol and cryptography improvements" and "Finding Security Vulnerabilities" activities were removed, benefiting only a few selected partners. It seems the decisions are being influenced by those who make the most noise.
Additionally, just when we thought the process had come to an end, new changes regarding Uptime have emerged.
These decisions are difficult to understand, as I do not see the same consistent effort to remove spam proposals or the users participating as pilots and crew members. How can this be considered valid?
At least the testnet served to alert us to the need to invest our time in projects that truly take us seriously.
There were around 250 active-set validators in the SE, and 55 validators is around 20% of that set achieving the uptime tasks. It is not 0.2% or 0.02%; it is 20%. How can you claim that 20% of validators achieving uptime is "such a limited set of participants were able to compete for these missions"?
For the governance missions there were 62.5 billion ROIDs, the same as for the uptime missions. Achieving the governance missions gives around 1 billion ROIDs, meaning only around 60 achieved the governance tasks, and those 60 are out of ~10,000 pilots, since all pilots could vote on governance. So roughly 0.6% of the ~10,000 pilots achieved governance, versus 20% (55 validators, including 13 post-genesis) of the ~250 active-set validators achieving uptime. 20% is about 33 times larger than 0.6%. How can Gavin claim on the forum that he removed the uptime missions because "such a limited set of participants were able to compete for these missions" when, relative to the set of eligible participants for each mission, roughly 33x more achieved uptime than governance?
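Spelled out as a quick back-of-the-envelope calculation (the figures are the approximations used in this thread, not official numbers):

```python
# Governance: ~62.5B ROIDs total, ~1B ROIDs per participant who completed the missions.
governance_achievers = 62.5e9 / 1e9      # ~62, "around 60"
pilots = 10_000                          # everyone could vote on governance
governance_rate = governance_achievers / pilots     # ~0.6%

# Uptime: 44 counted in the forum post + 11 additional post-genesis = 55.
uptime_achievers = 55
active_validators = 250                  # only the active set could compete on uptime
uptime_rate = uptime_achievers / active_validators   # ~22%, "around 20%"

print(f"{uptime_rate / governance_rate:.0f}x")  # ~35x raw; ~33x using the rounded 20% / 0.6%
```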
I’d like to address a few key points regarding the recent decision to remove uptime.
Validator Participation
Labeling the participation of 44 validators as "limited" seems inaccurate. While it is true that the testnet faced challenges, it was not impossible for validators to complete their tasks, as evidenced by the number who did. Therefore, removing these incentives seems unwarranted, especially since they were promised from the outset.
High Uptime Incentives
The incentive for high uptime led many validators to remain vigilant around the clock, often at the expense of other responsibilities, to maintain the testnet. It prompted immediate and dedicated responses to network issues. Removing these incentives after the fact undermines the efforts of those who persevered through these challenges. It seems counterproductive to reward those who exhibited negligence or made mistakes rather than those who committed to earning these points. Again, making a node sign 95-99% of blocks was not impossible.
Silent Restart Bug
Instances of the silent restart bug causing jailing appear limited, though perhaps it was more widespread than I have found. Even if it was, proper monitoring should be a basic requirement for any validator, and such issues would not stay silent with appropriate monitoring systems in place. With proper monitoring and alerting, this would not happen; with a validator run with redundancy in mind, this would not happen. (A minimal example of the kind of check I mean is sketched after the TLDR below.)
TLDR:
SRE-1: Infra is having an issue
BOSS: Did you fix it?
SRE-1: Yup! We run redundant services and I was promptly notified of the issue by our monitoring.
BOSS: Awesome job, you shall be paid handsomely for our excellent uptime as promised.
SRE-2: So all our infra was down all weekend.
BOSS: How could you let our core services be down for so long?
SRE-2: It was a silent bug…
BOSS: Okay, what about monitoring or alerting to tell you it was down?
SRE-2: I don’t have that setup…
BOSS: What about redundant services or backups?
SRE-2: I don’t have that setup…
BOSS: Oh okay, I’m deducting SRE-1’s pay for this. Sorry about the trouble.
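To be clear about what "proper monitoring" means here, a minimal sketch of a check that would catch a silent stall, assuming the node exposes the standard CometBFT RPC on the default port 26657; the alert webhook URL is a placeholder, not a real endpoint.

```python
# Minimal "is my node still advancing?" check; run from cron or a loop.
# Assumes the node exposes the standard CometBFT RPC (default port 26657);
# the alert webhook URL is a placeholder, not a real endpoint.
import datetime
import json
import urllib.request

RPC_URL = "http://localhost:26657/status"
ALERT_WEBHOOK = "https://example.com/alert"   # e.g. a Slack/Telegram webhook
MAX_LAG_SECONDS = 60

def parse_rfc3339(ts: str) -> datetime.datetime:
    # CometBFT timestamps use nanoseconds and a trailing 'Z'; trim them to what
    # datetime.fromisoformat accepts.
    ts = ts.rstrip("Z")
    if "." in ts:
        head, frac = ts.split(".")
        ts = f"{head}.{frac[:6]}"
    return datetime.datetime.fromisoformat(ts).replace(tzinfo=datetime.timezone.utc)

def check_node():
    with urllib.request.urlopen(RPC_URL, timeout=10) as resp:
        sync = json.load(resp)["result"]["sync_info"]
    lag = (datetime.datetime.now(datetime.timezone.utc)
           - parse_rfc3339(sync["latest_block_time"])).total_seconds()
    if sync["catching_up"] or lag > MAX_LAG_SECONDS:
        body = json.dumps({"text": f"node stalled: last block {lag:.0f}s ago"}).encode()
        req = urllib.request.Request(ALERT_WEBHOOK, data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    check_node()
```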
Exactly. And it is not just about lacking monitoring, alerts, or redundant services/backups. Some of the validators who didn't achieve the uptime task, apart from lacking all of these setups, appear to have been running on very low-spec, unreliable cloud servers, without even knowing the difference between a bare-metal server and a cloud VPS.
Your TLDR couldn't be better. Validators without proper setups who didn't achieve the uptime are rewarded, while those who achieved the uptime task with proper setups are penalized.
Yes, but it is even more inaccurate than that. In the forum post Gavin presented 44 validators as achieving the uptime task, including 2 post-genesis validators, but this is incorrect. When uptime is counted correctly for all post-genesis validators, a total of 13 post-genesis validators achieved the uptime, not only 2, so in total 55 validators achieved the uptime task, around 20% of the active set, which is far from a "limited set".