> until we have size-independent new block propagation
I don't really believe that is possible. I'll argue why below. To be clear, this is not an argument against increasing the block size, only against using the assumption of size-independent propagation.
There are several significant improvements likely possible to various aspects of block propagation, but I don't believe you can make any part completely size-independent. Perhaps the remaining size-dependent terms in the total propagation time vanish compared to the link latencies for 1 MB blocks, but there will be some block size at which that is no longer the case, and we need to know where that threshold lies (a rough sketch of such an estimate follows the list below).
* You can't assume that every transaction is pre-relayed and pre-validated. This can happen due to non-uniform relay policies (different codebases, and future things like size-limited mempools), double-spend attempts, and transactions generated before a block has had time to propagate. You've previously argued for a policy of not including too-recent transactions, but that requires a bound on network diameter, and if these late transactions are profitable, it has exactly the same problem as size-dependent propagation time: it makes larger blocks non-proportionally more economical for larger pool groups.
  * This results in extra bandwidth usage for efficient relay protocols and, if discrepancy estimation mispredicts the size of the IBLT or error-correction data needed, extra roundtrips.
  * Signature validation for unrelayed transactions will be needed at block relay time.
  * Database lookups for the inputs of unrelayed transactions cannot be cached in advance.
* Block validation with 100% known and pre-validated transactions is not constant time, due to updates that need to be made to the UTXO set (and future ideas like UTXO commitments would make this effect an order of magnitude worse).
* More efficient relay protocols also have higher CPU cost for encoding/decoding.
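To make "where that threshold lies" concrete, here is a back-of-envelope sketch (in Python) of per-hop relay time that folds in the size-dependent terms above. Every constant is an assumed placeholder, not a measurement, so only the shape of the result is meaningful, not the numbers:

LATENCY_S = 0.05            # assumed one-way link latency per hop
BANDWIDTH_BPS = 12.5e6      # assumed ~100 Mbit/s link, in bytes per second
AVG_TX_BYTES = 500          # assumed average transaction size
MISSING_TX_FRACTION = 0.02  # assumed share of txs not already relayed/validated
SIG_VALIDATE_S = 0.0001     # assumed signature-validation cost per missing tx
UTXO_UPDATE_S = 0.000001    # assumed UTXO-set update cost per tx, known or not

def per_hop_relay_time(block_bytes):
    """Very rough per-hop time for an 'efficient' relay protocol."""
    txs = block_bytes / AVG_TX_BYTES
    missing = txs * MISSING_TX_FRACTION
    transmit = missing * AVG_TX_BYTES / BANDWIDTH_BPS  # only missing txs sent in full
    validate = missing * SIG_VALIDATE_S                # late signature validation
    utxo = txs * UTXO_UPDATE_S                         # UTXO updates for every tx
    return LATENCY_S + transmit + validate + utxo

for mb in (1, 8, 32, 128):
    size = mb * 1_000_000
    total = per_hop_relay_time(size)
    print(f"{mb:>4} MB: {total * 1000:7.1f} ms/hop, "
          f"{1 - LATENCY_S / total:6.1%} of it size-dependent")

With these made-up constants the size-dependent part is already noticeable at 1 MB and dominates well before 32 MB; the real question is what the constants actually are on today's network, which is exactly the measurement I'm saying we lack.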
Again, none of this is a reason why the block size can't increase. As hardware with higher bandwidth, faster disk/RAM access times, and faster CPUs becomes available, we should be able to have larger blocks with the same propagation profile that smaller blocks have on earlier technology.
But we should know how technology scales with larger blocks, and I don't believe we do, apart from microbenchmarks in laboratory conditions.
--
Pieter
Amid all the flames on this list, several ideas were raised that did not get much attention. I hereby resubmit these ideas for consideration and discussion.
- Perhaps the hard block size limit should be a function of the actual block sizes over some trailing sampling period. For example, take the median block size among the most recent 2016 blocks and multiply it by 1.5. This allows Bitcoin to scale up gradually and organically, rather than having human beings guessing at what is an appropriate limit.
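A minimal sketch (in Python) of that rule, assuming block sizes are given in bytes; the names and the example value are illustrative, not code from any implementation:

from statistics import median

WINDOW = 2016      # trailing sampling period, in blocks
MULTIPLIER = 1.5

def adaptive_block_size_limit(recent_block_sizes):
    """Hard limit = 1.5 x the median size of the last 2016 blocks."""
    window = list(recent_block_sizes)[-WINDOW:]
    return int(MULTIPLIER * median(window))

# Illustrative only: if the median recent block were 400,000 bytes,
# the limit for the next period would be 600,000 bytes.
print(adaptive_block_size_limit([400_000] * WINDOW))  # -> 600000

The 2016-block window and the 1.5 multiplier are just the example's constants; they are the knobs that would need discussion.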