DISCUSS: Network Issues w/ Shade Airdrop - 2/21/22

Excellent explanation @Stefan_DomeriumLabs regarding the rationale behind it, and a shout-out to @baedrik for bringing this up in the first place. secretSauce also supports a temporary lowering of the maximum gas per block.


Whispernode supports. +1 on the great explanations as well!


Hello all - Order of Secrets supports lowering the maximum gas per block.

We fully support sharing peers and will be adding to the list ourselves asap. We understand that the list has not grown as quickly as hoped before now. If there is reluctance to share peers with the whole world, perhaps setting up sub-lists allocated to validators who agree to keep them private would encourage more sharing?

Either way, we think that splitting a (sizeable) list of peers into batches allocated to different validators is a sensible way to ensure they don’t all get tapped out. It seems that quality definitely trumps quantity, and everyone benefits in the end. Distributing evenly across rankings might work as a fair method of allocating the peers (e.g. Batch 1 gets ranks 1, 11, 21, 31, 41, 51, 61; Batch 2 gets 2, 12, 22, 32, 42, 52, 62; and so on into 7 batches, each circulating a peer list among its members). Rankings change over time, of course, but this would be a reasonably simple method to get it off the ground.
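For illustration, the rank-based batching described above can be sketched with a simple modulo assignment. The function name and batch count are just placeholders for the idea, not a finished scheme (the example ranks in the post use a step of 10, so that is what is reproduced here):

```python
from collections import defaultdict

def assign_batches(ranks, num_batches):
    """Assign each validator rank to a batch by modulo, so each batch
    draws evenly from the top, middle, and bottom of the rankings."""
    batches = defaultdict(list)
    for rank in ranks:
        batches[(rank - 1) % num_batches + 1].append(rank)
    return dict(batches)

# 70 ranked validators, a step of 10 as in the post's example
batches = assign_batches(range(1, 71), num_batches=10)
print(batches[1])  # [1, 11, 21, 31, 41, 51, 61]
print(batches[2])  # [2, 12, 22, 32, 42, 52, 62]
```

Each batch then shares one private peer list among its members, so no single public peer gets hammered by everyone at once.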

We’d also like to express our appreciation for the efforts that so many people have made to deal with a challenging couple of days on the network. Special thanks to @dylanschultzie, @jamama2354 and @pmuecke and of course to @anon60841010 for throttling the flow of claims while we get through this!


The peer list is maintained publicly here.


Our experience has been that because the shared peer list (the same one @mohammedpatla has just posted) is quite short, and lots of people connect to those peers, they were actually getting tapped out. Also, during the heavy volumes yesterday, we tried adding the whole list and it made things worse, not better, because connections were timing out - so quality is (arguably) better than quantity, at least for public peers.


Yeah, just to second. Jamama (myself) and the RC DAO validator support lowering the max gas per block. We also definitely see the need for high-quality peers. Partitioning them into smaller groups would be great, since sharing the same peers across the board seems to have some negative amplifying effects when nodes start to drop. I’m also for controlled stress testing of the network by adjusting the front-end bottleneck. It might help us identify other pinch points and optimizations we can make ahead of other launches or periods of high network activity.


I’d like to make a suggestion, forgive me if it doesn’t make sense as I am not a Secret Network expert.

Validators should NOT be fielding queries. Keep their system resources focused on writing blocks to the chain, running contracts, etc. The read layer should be a separate series of nodes, kind of like “read-only slaves,” to use a MySQL term. This allows the two node types to scale independently as needed, without one affecting the other.

This is the design pattern that THORChain has been using and it’s been great!
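The read/write split suggested above could be sketched as a thin RPC router in front of the two node pools. The node addresses are placeholders, and the exact partition of methods into "read" vs. "write" is an assumption for illustration (the method names themselves are standard Tendermint RPC endpoints):

```python
import itertools

# Hypothetical backends -- addresses are placeholders, not real nodes.
READ_NODES = ["http://query-node-1:26657", "http://query-node-2:26657"]
VALIDATOR_RPC = "http://validator:26657"

# RPC methods that only read state can be served by read-only nodes;
# anything that submits a tx goes to the validator's RPC.
READ_METHODS = {"abci_query", "block", "status", "tx_search"}

_round_robin = itertools.cycle(READ_NODES)

def route(method):
    """Pick a backend for an RPC call: round-robin across read nodes
    for queries, the validator only for tx broadcasts."""
    if method in READ_METHODS:
        return next(_round_robin)
    return VALIDATOR_RPC

print(route("abci_query"))        # a read node
print(route("broadcast_tx_sync")) # the validator
```

The point of the design is that the query pool can be scaled horizontally during load spikes without touching consensus nodes at all.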


Suggestion before making suggestions:

First, the good news: the lowered gas limit appears to have had the stabilizing effect we were hoping for.
[Screenshot: block-time chart, Screen Shot 2022-03-03 at 4.10.22 AM]

The beginning of the flat-lining at the end corresponds exactly with the parameter change going into effect.

But I think 4 mill might end up being too low. I forgot that the cost of a compute store tx did not go down with Supernova like the cost of a compute execute tx did, so contracts that use permits, and especially ones that build off of SNIP-20s and SNIP-721s, are likely to require more than 4 mill gas to store. SNIP-721 in particular is pretty large to begin with, although projects could shave off functions they don’t need, like Reveal, SetMinters, and functions that are redundant and only there for strict CW-721 compliance. (They’d definitely get it under 4 mill by removing permits, but I don’t think that is a viable option.)

But depending on how complicated their use case is, they might be adding even more than what they are shaving. So I’m wondering if we actually need to set the limit at 5 mill. Another alternative would be to apply similar gas savings to compute store txs as was done with other txs, but that might not be feasible or worth the work if we think a bump up to 5 mill would still give us noticeable block-time stability (although likely somewhat higher times than what we are currently seeing). Unfortunately it’s hard to guess whether any current projects need even more than 5 mill to store, so it might be worth trying to get some input from them too.


As an FYI for the community, Secret DreamScape’s largest gas usage is 300,000 and NFT minting is 1M for a box of 10 mints

Those are executes. This is in reference to storing the contract code (tx compute store). Executes underwent drastic gas reductions with supernova, but the tx to store your contract was not made cheaper (I think even just storing the counter template contract is over 1 million gas)

Welp. Less than 12 hours later and this has already been proven to be the case. We’ve already seen a team have to delay their launch this morning because the gas limit is too low now to store a contract. I did some digging and found 108 (out of 286) compute store txs that had a “gas used” of over 4 million, with 4 (including a couple of alter contracts) over 5 million. I think submitting a proposal to increase the gas limit to 6 million would be a good compromise, but I’m open to other suggestions.
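The kind of digging described above can be sketched as a simple filter over store-code txs. The data shape here is a hypothetical stand-in for whatever the chain query actually returns; only the counting logic is the point:

```python
def count_over_limit(txs, limit):
    """Count store-code txs whose gas_used exceeds a proposed block
    gas limit -- these could no longer fit in any single block."""
    return sum(1 for tx in txs if tx["gas_used"] > limit)

# Toy data standing in for real chain queries (values are invented)
store_txs = [{"gas_used": g}
             for g in (3_900_000, 4_200_000, 5_300_000, 2_100_000)]

print(count_over_limit(store_txs, 4_000_000))  # 2
print(count_over_limit(store_txs, 5_000_000))  # 1
```

Run against the real 286 historical store txs, this is what produced the 108-over-4-mill and 4-over-5-mill figures quoted above.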

Hmmm, if only 4 are over 5 million, I could even see raising it to just 5 mill and having teams that need to store larger contracts notify us when they will be ready. Then we create a proposal to raise it to what they need (with a buffer, obviously), and create another proposal 3 or 4 days later to lower it back to 5. That way they have a 3- or 4-day window to do their single store tx, and we just inform teams so that no launches happen while the network is vulnerable with the higher limit.


I would have been fine with 5m as well, but the proposal has already been submitted for 6m: https://secretnodes.com/secret/chains/secret-4/governance/proposals/80. I don’t think the extra 1m will make THAT much of a difference, but I’m fine with admitting that this is 100% speculation.

Yeah, it really is a trial and error process to zero in on what is too high. Since it is such a simple thing to change, though, trial and error doesn’t really hurt other than the 7-day wait times


It’s hard to guess how much difference there is between 5 mill and 6 mill, but that is right around the area where we start seeing a large number of nodes struggle. So that 1 mill increase could be significant if it lands right at the max capabilities of a large number of nodes.


Yeah it’s difficult to predict, if 6M is too high we can just step down to 5M or 5.5M.

Has anyone looked at a potential correlation between blocktime, gas in the block and number of validators missing the block? Could help us narrow down what we should be aiming for.

I might spend a little time today/tomorrow adding a “total” row to the blocks page on secretnodes.com so we can see total gas for each block


(Clicked reply on the wrong message)

True, an app that is launching can rate limit itself, but the right gas limit is a rate limit that the network is able to enforce. So like if two apps had launch windows at the same time, even though they both might rate limit as much as they think is acceptable for their product, the network can still crumble if the gas limit is too high. But with the right gas limit, it can handle both apps even if they aren’t team players
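The "network-enforced rate limit" point can be made concrete with back-of-envelope arithmetic. The gas figures here are illustrative only, not measured claim costs:

```python
def max_claims_per_block(block_gas_limit, gas_per_claim):
    """A block gas limit caps claim throughput no matter how apps
    self-throttle: at most this many claim txs fit in one block,
    even if two launches overlap."""
    return block_gas_limit // gas_per_claim

# Illustrative figures only -- not measured network values
print(max_claims_per_block(4_000_000, 130_000))  # 30
print(max_claims_per_block(6_000_000, 130_000))  # 46
```

Two overlapping launches share this same per-block budget, which is why a well-chosen limit protects the chain even when neither app is being a team player.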
