From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 28 May 2015 10:34:32 -0700
From: Pieter Wuille <pieter.wuille@gmail.com>
To: Gavin Andresen
Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
References: <16096345.A1MpJQQkRW@crushinator>

> until we have size-independent new block propagation

I don't really believe that is possible. I'll argue why below. To be clear, this is not an argument against increasing the block size, only against using the assumption of size-independent propagation.

There are several significant improvements likely possible to various aspects of block propagation, but I don't believe you can make any part completely size-independent. Perhaps the remaining aspects result in terms in the total time that vanish compared to the link latencies for 1 MB blocks, but there will be some block sizes for which this is no longer the case, and we need to know where that point is (a rough sketch follows the list below).

* You can't assume that every transaction is pre-relayed and pre-validated. This can happen due to non-uniform relay policies (different codebases, and future things like size-limited mempools), double spend attempts, and transactions generated before a block had time to propagate. You've previously argued for a policy of not including too-recent transactions, but that requires a bound on network diameter, and if these late transactions are profitable, it has exactly the same problem as making larger blocks non-proportionally more economic for larger pool groups (if propagation time is size-dependent).
  * This results in extra bandwidth usage for efficient relay protocols, and if discrepancy estimation mispredicts the size of the IBLT or error-correction data needed, extra roundtrips.
  * Signature validation for unrelayed transactions will be needed at block relay time.
  * Database lookups for the inputs of unrelayed transactions cannot be cached in advance.

* Block validation with 100% known and pre-validated transactions is not constant time, due to updates that need to be made to the UTXO set (and future ideas like UTXO commitments would make this effect an order of magnitude worse).

* More efficient relay protocols also have higher CPU cost for encoding/decoding.
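To make that concrete, here is a toy back-of-the-envelope model (only a sketch: the latency, bandwidth, unseen-transaction fraction, and per-transaction cost below are made-up placeholder numbers, not measurements of any real relay protocol):

# Toy model: per-hop relay time = link latency + size/bandwidth
#                                 + (transactions not already seen) * per-tx cost.
# All constants are illustrative assumptions, not measurements.

LINK_LATENCY_S      = 0.5          # assumed fixed per-hop latency (seconds)
BANDWIDTH_B_PER_S   = 100e6 / 8    # assumed 100 Mbit/s link, in bytes/second
UNSEEN_FRACTION     = 0.05         # assumed share of txs not pre-relayed/pre-validated
PER_TX_COST_S       = 0.0005       # assumed validation + DB lookup cost per such tx
AVG_TX_BYTES        = 500

def relay_time(block_bytes: float) -> float:
    """Very rough per-hop propagation time for a block of the given size."""
    unseen_txs = UNSEEN_FRACTION * block_bytes / AVG_TX_BYTES
    return (LINK_LATENCY_S
            + block_bytes / BANDWIDTH_B_PER_S
            + unseen_txs * PER_TX_COST_S)

if __name__ == "__main__":
    for mb in (1, 8, 20, 100):
        total = relay_time(mb * 1_000_000)
        size_dependent = total - LINK_LATENCY_S
        print(f"{mb:>4} MB: {total:6.2f} s, {size_dependent / total:5.1%} size-dependent")

With these made-up numbers the size-dependent terms are a modest share of the total at 1 MB but the large majority by 20 MB; where that crossover actually sits is exactly the kind of thing that needs measuring rather than assuming.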
Again, none of this is a reason why the block size can't increase. If availability of hardware with higher bandwidth, faster disk/RAM access times, and faster CPUs increases, we should be able to have larger blocks with the same propagation profile as smaller blocks with earlier technology.

But we should know how technology scales with larger blocks, and I don't believe we do, apart from microbenchmarks in laboratory conditions.

--
Pieter

On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock <bip@mattwhitlock.name> wrote:

> Between all the flames on this list, several ideas were raised that did not get much attention. I hereby resubmit these ideas for consideration and discussion.
>
> - Perhaps the hard block size limit should be a function of the actual block sizes over some trailing sampling period. For example, take the median block size among the most recent 2016 blocks and multiply it by 1.5. This allows Bitcoin to scale up gradually and organically, rather than having human beings guessing at what is an appropriate limit.

A lot of people like this idea, or something like it. It is nice and simple, which is really important for consensus-critical code.

With this rule in place, I believe there would be more "fee pressure" (miners would be creating smaller blocks) today. I created a couple of histograms of block sizes to infer what policy miners are ACTUALLY following today with respect to block size:

Last 1,000 blocks: http://bitcoincore.org/~gavin/sizes_last1000.html

Notice a big spike at 750K -- the default size for Bitcoin Core.

This graph might be misleading, because transaction volume or fees might not be high enough over the last few days to fill blocks to whatever limit miners are willing to mine.

So I graphed a time when (according to statoshi.info) there WERE a lot of transactions waiting to be confirmed: http://bitcoincore.org/~gavin/sizes_357511.html

That might also be misleading, because it is possible there were a lot of transactions waiting to be confirmed because miners who choose to create small blocks got lucky and found more blocks than normal. In fact, it looks like that is what happened: more smaller-than-normal blocks were found, and the memory pool backed up.

So: what if we had a dynamic maximum size limit based on recent history?

The average block size is about 400K, so a 1.5x rule would make the max block size 600K; miners would definitely be squeezing out transactions / putting pressure to increase transaction fees. Even a 2x rule (implying 800K max blocks) would, today, be squeezing out transactions / putting pressure to increase fees.

Using a median size instead of an average means the size can increase or decrease more quickly. For example, imagine the rule is "median of last 2016 blocks" and 49% of miners are producing 0-size blocks and 51% are producing max-size blocks. The median is max-size, so the 51% have total control over making blocks bigger. Swap the roles, and the median is min-size.

Because of that, I think using an average is better -- it means the max size will change (up or down) more slowly.

I also think 2016 blocks is too long, because transaction volumes change quicker than that. An average over 144 blocks (last 24 hours) would be better able to handle increased transaction volume around major holidays, and would also be able to react more quickly if an economically irrational attacker attempted to flood the network with fee-paying transactions.

So my straw-man proposal would be: max size 2x average size over last 144 blocks, calculated at every block.
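For concreteness, here is a small sketch of how the candidate rules in this thread could be computed (illustrative Python only, not consensus code; sizes are in bytes, and the windows and multipliers are the ones discussed above):

from statistics import mean, median
from typing import Sequence

def max_size_median_rule(recent_sizes: Sequence[int],
                         window: int = 2016,
                         multiplier: float = 1.5) -> int:
    """Matt's suggestion: 1.5x the median block size over the last 2016 blocks."""
    return int(multiplier * median(recent_sizes[-window:]))

def max_size_average_rule(recent_sizes: Sequence[int],
                          window: int = 144,
                          multiplier: float = 2.0) -> int:
    """Straw-man rule above: 2x the average size over the last 144 blocks,
    recalculated at every block."""
    return int(multiplier * mean(recent_sizes[-window:]))

if __name__ == "__main__":
    # Why a median is easier to ratchet: 51% of blocks at max size and 49%
    # empty already puts the median at the max, while the average moves slowly.
    cur_max = 1_000_000
    sizes = [0] * 49 + [cur_max] * 51
    print(int(median(sizes)))            # 1000000 -- jumps straight to the max
    print(int(mean(sizes)))              # 510000  -- moves much more gradually

    # Rough numbers from the discussion: ~400K average blocks today.
    print(max_size_median_rule([400_000] * 2016))   # 600000 (the 1.5x case)
    print(max_size_average_rule([400_000] * 144))   # 800000 (the 2x case)

This only sketches the limit calculation itself; an actual consensus rule would also have to pin down exactly which historical sizes count and how rounding is done.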
There are a couple of other changes I'd pair with that consensus change:

+ Make the default mining policy for Bitcoin Core neutral -- have its target block size be the average size, so miners that don't care will "go along with the people who do care."

+ Use something like Greg's formula for size instead of bytes-on-the-wire, to discourage bloating the UTXO set.

---------

When I've proposed (privately, to the other core committers) some dynamic algorithm, the objection has been "but that gives miners complete control over the max block size."

I think that worry is unjustified right now -- certainly, until we have size-independent new block propagation there is an incentive for miners to keep their blocks small, and we see miners creating small blocks even when there are fee-paying transactions waiting to be confirmed.

I don't even think it will be a problem if/when we do have size-independent new block propagation, because I think the combination of the random timing of block-finding plus a dynamic limit as described above will create a healthy system.

If I'm wrong, then it seems to me the miners will have a very strong incentive to, collectively, impose whatever rules are necessary (maybe a soft-fork to put a hard cap on block size) to make the system healthy again.

--
--
Gavin Andresen

------------------------------------------------------------------------------
_______________________________________________
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development
