If you or others decide not to enter long-term pricing then sure, but that's a choice you're not forced into. The long-term pricing is closer to half that number.
I’ve used the same equipment since 2020 without running into that issue, but if you rent a server and it no longer works for your use case, you can escape the long-term deal unless the provider offers a suitable replacement. If you disagree then I accept that, but nothing anyone says changes the fact that the old hardware will prevent us from scaling. We can decide to remain limited by smaller enclaves, or we can move to larger ones; when some validators fall out of the set, more of the VP will go to the remaining validators, increasing their income. My costs per node are already really low, so I’m not incentivized by personal gain of a couple hundred dollars a month. It’s just a hard truth of the matter that the old machines will always be a bottleneck.
Again, I’m not saying that old hardware isn’t a hindrance. Nor have I vocalized any opinion on whether a forced upgrade should happen or not. I’m reserving my opinion until I see data on how much of a difference the next-gen hardware actually makes in the number of txs per block, because old machines are not the ONLY bottleneck.
The only intent of my original post was to clarify the actual costs to validators, because they were not clear before.
Whatever your ultimate stance is, I think it’s fine if we disagree. I’m not convinced people will choose to exclude older machines, but I did think the discussion should happen. As for this not being the only bottleneck, that’s true; however, I’ve cited sources saying it is among the largest bottlenecks of SGX. Any other improvement that increases throughput would also increase the number of paging operations and threads spun up on the older machines.
I disagree: why not instead parallelize the calculations and add ZKPs to verify that the enclave calculation was done correctly?
At that point you could have a classic validator set, like any other Cosmos SDK based chain, to verify ZKPs, and add a zkTEE layer for the computation.
Instead of every TEE validator computing every tx, each tx could be assigned to a single validator, which creates a ZKP for it.
This is a cool idea, but throughput is still limited by the hardware of whichever node does the computation. It’s certainly helpful, but it doesn’t help us expand the gas limit. Short to medium term we can do all sorts of cool stuff, but where it will shine is when people aren’t running validators on toasters.
I agree, but splitting would decouple security from the need to run SGX devices, which can help increase decentralization and reduce validator costs.
Oh, if that’s the idea then I agree with that. Freshscrts suggested it years ago, Cashmaney suggested it, I suggested it, and maybe a few others. The idea is to create an SGX runner of sorts that can run separately from the consensus nodes, with the consensus nodes just connecting to it. We would need to update the wasm engine to be based on RPCs. Here’s how Cashmaney phrased it:
"Ideally we’d move to an engine that’s based on rpcs (grpc or whatever) so that you would be able to separate the sgx runner from the chain itself. "
You still want to make sure the runner has sufficient enclave memory, though; we wouldn’t want those limited by old equipment either.