When blocks are found faster or slower than the 10-minute target, hashing difficulty is raised or lowered dynamically to restore the balance. This self-correcting mechanism has spared us endless discussion and kept things in equilibrium.
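For comparison, here's a minimal sketch of that existing retarget rule (simplified; the real client has a well-known off-by-one in the window and a maximum-target cap, both omitted here):

```python
# Simplified sketch of Bitcoin's existing difficulty retarget.
# Every 2016 blocks the proof-of-work target is rescaled by how far the
# actual block times drifted from the ideal 10-minute spacing, and the
# correction is clamped to a factor of 4 per period.

RETARGET_INTERVAL = 2016                        # blocks between adjustments
TARGET_SPACING = 600                            # seconds (10 minutes)
EXPECTED_TIMESPAN = RETARGET_INTERVAL * TARGET_SPACING

def retarget(old_target: int, actual_timespan: int) -> int:
    """Return the new target (a higher target means lower difficulty)."""
    clamped = max(EXPECTED_TIMESPAN // 4,
                  min(actual_timespan, EXPECTED_TIMESPAN * 4))
    return old_target * clamped // EXPECTED_TIMESPAN
```

The point is that the rule is mechanical: nobody argues about what the right difficulty should be, the chain measures recent history and corrects.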
Just as you can't assume how much hashpower there will be to find the next blocks, why can't we have a function that adapts to the transactional volume on the blockchain, one that lets the acceptable maximum block size grow or shrink? We don't put caps on processing power, so why should we put a date-based cap on transactional volume per block? You can't predict the future, but you can look at what's happened recently to correct these limits.
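To make that concrete, here is the shape of function I have in mind. This is a rough sketch, not a proposal: the window length, headroom multiplier, and floor are all numbers I picked out of thin air for illustration.

```python
# Hypothetical adaptive cap: every block, derive the next maximum block
# size from the median size of a recent rolling window, the same way
# difficulty is derived from recent block times. All constants below are
# placeholder assumptions, not proposed values.

HARD_FLOOR = 1_000_000    # never shrink below the current 1 MB limit
WINDOW = 144              # look back roughly one day of blocks
HEADROOM = 2.0            # allow up to 2x the recent median

def next_max_block_size(recent_sizes: list[int]) -> int:
    """Compute the size cap for the next block from observed sizes."""
    window = sorted(recent_sizes[-WINDOW:])
    median = window[len(window) // 2]
    return max(HARD_FLOOR, int(median * HEADROOM))
```

Using a median rather than a mean means a single stuffed block can't drag the cap around, and recomputing every block over a rolling window keeps the response fast without sudden jumps.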
Such a function/filter should be able to recognize real, sustained growth in transactional volume and adjust the maximum accepted block size to allow for the organic growth that will come from real activity: distributed marketplaces, decentralized Bitcoin-based services (and all the things the community dreams about and might be building already), truly decentralized technological breakthroughs that genuinely need to use the blockchain. (Going the off-chain way only leads to centralization and personal/corporate agendas, which to me goes against the Bitcoin ethos.)
It should adapt fast enough that we don't have episodes where people wait anywhere from four hours to days for transactions to make it into the blockchain and be confirmed. I believe proposals along the lines of "every 100,000 blocks" are out of touch with reality; the block size needs to adapt the same way difficulty already adapts to growth or loss of hashing power.
I'm not a statistician or mathematician, but I'm sure that if we propose the parameters a realistic block size should take into account, one that reflects the needs of the Bitcoin network's users, there's plenty of crypto/statistics/mathematics brainpower here to come up with such a filtering function.
Things that could be considered (a toy sketch combining a few of these follows the list):
- median number of transactions per block (over a 6-to-12-hour window, so the limit can respond to a real shopping spree, for instance, or a huge pop band/artist deciding to sell concert tickets on Bitcoin)
- median fees offered per transaction (could this help detect spammers?)
- median block sizes
- median size per transaction
- number of new addresses signing transactions versus addresses we've already seen on the blockchain before (are spammers creating lots of new addresses to shuffle the same outputs around? Is there an efficient way to estimate the likelihood that a transaction is spam? Bayes? No clue, I'm no mathematician)
- median time between an address receiving an output and sending it on to another address?
- more things I've no knowledge of, since I'm not familiar with the details, but that would immediately come to mind to the experts.
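And here, as promised above, is a toy sketch of how a few of those signals might be pulled out of a rolling window of blocks. The Block/Tx shapes and the spam ratio are hypothetical placeholders of mine, standing in for whatever the statisticians would actually design:

```python
# Toy signal extraction over a rolling window of recent blocks.
# Block/Tx are hypothetical stand-ins, not any real client's structures.
from dataclasses import dataclass
from statistics import median

@dataclass
class Tx:
    size: int           # bytes
    fee: int            # satoshis
    new_addresses: int  # signing addresses never seen on-chain before
    addresses: int      # total signing addresses

@dataclass
class Block:
    size: int
    txs: list[Tx]

def signals(window: list[Block]) -> dict:
    txs = [tx for b in window for tx in b.txs]
    return {
        "median_txs_per_block": median(len(b.txs) for b in window),
        "median_fee_per_tx": median(tx.fee for tx in txs),
        "median_block_size": median(b.size for b in window),
        "median_tx_size": median(tx.size for tx in txs),
        # Crude spam proxy: share of never-seen-before addresses signing.
        "new_address_ratio": sum(t.new_addresses for t in txs)
                             / max(1, sum(t.addresses for t in txs)),
    }
```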
Mining centralization is already happening due to its competitive nature; we don't complain or try to force hashing limits, and we shouldn't force storage limits either. There will be no shortage of blockchain mirrors, and those interested in running full nodes will surely find a way to do so.
Angel