Note: many of these ideas are neither my own nor really all that new, but it seems we’ve given up too easily in the past on actually moving forward on them, despite their critical importance.
——
1) A fee market never really got created, so we don’t really know how transaction fees would work in practice.
The only way to see how fees would work in practice is to have scarcity. If the network is still not sufficiently mature to handle actual resource limits securely, the safest way to do this is to artificially impose limits. Some economists might bicker about the problems with production quotas and whatnot…but how else are we to solve the real, non-trivial engineering problems without risking system collapse? The eventual goal would be to remove these artificial limits once we’re confident that the economic incentives are properly aligned to maintain security. We’re still quite far from this goal, though, and it would be irresponsible, IMHO, to insist on letting the system hit its real limits.
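To make this concrete, here’s a toy sketch of how a hard cap turns fees into a market price: a miner fills the limited space by fee rate, and the lowest rate that still gets in is effectively the clearing price. This is purely illustrative Python…not Bitcoin Core’s actual selection logic…and all the sizes and fees are made up.

```python
# Toy fee market: with a hard cap on block space, a miner greedily fills
# the block by fee rate. The lowest fee rate that still gets in acts as
# the market-clearing price. All numbers are made up for illustration.

MAX_BLOCK_SIZE = 800  # artificially small cap (bytes) so something is priced out

def build_block_template(mempool):
    """Select transactions by descending fee rate until the cap is hit."""
    selected, used = [], 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"], reverse=True):
        if used + tx["size"] <= MAX_BLOCK_SIZE:
            selected.append(tx)
            used += tx["size"]
    return selected

mempool = [
    {"txid": "a", "size": 250, "fee": 5000},  # 20 sat/byte
    {"txid": "b", "size": 500, "fee": 2500},  #  5 sat/byte
    {"txid": "c", "size": 400, "fee":  400},  #  1 sat/byte -> priced out
]
block = build_block_template(mempool)
clearing = min(tx["fee"] / tx["size"] for tx in block)
print([tx["txid"] for tx in block], f"clearing rate: {clearing:.1f} sat/byte")
```

Without a binding cap, every transaction with any nonzero fee gets in and there’s no clearing price at all…which is precisely why we’ve never observed how a real fee market behaves.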
2) It turns out the vast majority of validation nodes have little if anything to do with mining - validators do not get compensated…the cost of validation is externalized to the entire network.
3) Miners don’t even properly validate blocks. And the bigger the blocks get, the greater the propensity to skip this step. Oops!
Issues (2) and (3) are inextricably related so I’ll cover both together.
The obvious problem here is that as long as checking a validator’s work costs just as much as doing the validation oneself, there’s really little we can do to achieve any proper division of labor. Requiring, at the very least, random spot checks might be a start. Perhaps some clever use of SNARKs might eventually be secure and practical.
It might also be possible to directly pay validators for satisfying random checks or providing SNARKs. If only we could trustlessly and securely outsource this work, we’d make tremendous progress.
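As a sketch of the kind of random check I have in mind: a challenger picks a transaction at random and demands a Merkle branch connecting it to the block header’s commitment, releasing a small payment only if the branch verifies. This is hypothetical, illustrative Python…the payment step in particular is just a placeholder.

```python
import hashlib
import random

def h(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_and_branches(leaves):
    """Compute the Merkle root plus an authentication path for every leaf."""
    branches = [[] for _ in leaves]
    pos = list(range(len(leaves)))  # each leaf's index at the current level
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        for i in range(len(branches)):
            p = pos[i]
            branches[i].append((p % 2, level[p ^ 1]))  # (is right child?, sibling)
            pos[i] = p // 2
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
    return level[0], branches

def verify_branch(leaf, branch, root):
    """Recompute the root from a leaf and its sibling path."""
    node = leaf
    for is_right, sibling in branch:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# The validator claims to hold the fully validated block.
txs = [f"tx-{i}".encode() for i in range(8)]
leaves = [h(tx) for tx in txs]
root, branches = merkle_root_and_branches(leaves)  # root committed in the header

# Challenger: demand the branch for a random transaction; pay only on success.
i = random.randrange(len(txs))
if verify_branch(leaves[i], branches[i], root):  # branch supplied by the validator
    print(f"tx {i} checks out against the header commitment; release payment")
```

Note the obvious limitation: this only proves the responder *stores* the block data, not that it actually executed the scripts. A real scheme would have to challenge something only full validation can answer…which is exactly where the hard open problems are.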
Of all the issues I’ve listed, these are perhaps the ones for which practical solutions seem most tentative at present.
4) A satisfactory mechanism never materialized for thin clients to obtain reasonably secure, short proofs for their transactions.
The first part of the solution to this issue is the use of better data structures. Satoshi’s SPV can prove that transactions are included in blocks…and that outputs are spent. But it has no mechanism for proving that a given transaction is *not* included in any block…or that some particular output remains unspent. The structures to which we’re committing are extremely inefficient for querying some of the most important things required for validation…i.e. whether an output exists and whether it is spent.
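One family of structures that could help: commit to the UTXO set as a Merkle tree over sorted keys, so that absence is provable by revealing the two adjacent committed keys that bracket the query. Here’s a minimal sketch…again illustrative Python with made-up key encodings, and with the Merkle branches for the two bracketing leaves omitted for brevity.

```python
import hashlib
from bisect import bisect_left

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Plain Merkle root over the (already sorted) leaf hashes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Commit to the UTXO set sorted by outpoint key.
utxo_keys = sorted([b"outpoint:03", b"outpoint:07", b"outpoint:42"])
root = merkle_root([h(k) for k in utxo_keys])

def absence_witness(query: bytes):
    """Return the adjacent committed keys bracketing a key NOT in the set.

    Because the leaves are committed in sorted order, showing two *adjacent*
    leaves with left < query < right proves the query key is absent. A full
    proof would also include the Merkle branches for both leaves.
    """
    i = bisect_left(utxo_keys, query)
    assert i == len(utxo_keys) or utxo_keys[i] != query, "key is present"
    left = utxo_keys[i - 1] if i > 0 else None
    right = utxo_keys[i] if i < len(utxo_keys) else None
    return left, right

print(absence_witness(b"outpoint:10"))  # (b'outpoint:07', b'outpoint:42')
```

The same trick answers the unspentness question if the committed set is the set of unspent outputs: a membership proof shows an output is unspent, while a bracketing pair shows it is spent or never existed.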
The second part is shifting the responsibility for constructing proofs onto the parties who already have the greatest incentive to store the necessary data, which in turn allows everyone else to prune efficiently. Outsourceability of proof construction would also be highly desirable.
——
If we want to be able to raise the block size limit…or perhaps get rid of it altogether, I would suggest we start by addressing these specific issues and work toward practical solutions. Since raising the block size limit is already a hard-forking consensus rule change, at least the need for a hard fork isn’t what’s stopping us.
- Eric
I agree that the historical reasons are irrelevant from an engineering perspective. But they still set a context for the discussion…and might help shed some light on the motivations of some of the participants. It’s also good to know these things to counter arguments that start with “But Satoshi said that…”
What’s critically important to note is that several of the assumptions being made at the time this limit was decided have turned out to be wrong…and that these other issues should probably be of greater concern and more highly prioritized in any discussion considering the merits of deploying potentially incompatible consensus rule changes. It seems that if these other issues were fixed, perhaps no block size limit would be required at all (as was originally hoped).
- Eric
Does it matter even in the slightest why the block size limit was put in place? It does not. Bitcoin is a decentralized payment network, and the relationship between utility (block size) and decentralization is empirical. Why the 1MB limit was put in place at the time might be a historically interesting question, but it bears little relevance to the present engineering issues.