From: Washington Sanchez <washington.sanchez@gmail.com>
Date: Wed, 9 Sep 2015 23:10:43 +1000
To: Gavin Andresen
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Dynamic limit to the block size - BIP draft discussion

Errata + clarity (in bold):

>    - So in my proposal, if 2000+ *blocks* have a size >= 60% *of the
>      current limit*, this is an indication that real transaction volume
>      has increased and we're approaching a time where blocks could be
>      filled to capacity without an increase. The block size increase,
>      10%, is triggered.
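To make the trigger concrete, here is a minimal Python sketch of the rule
as restated above. It is illustrative only: the function and constant
names are mine, not from the BIP draft, and a real implementation would
read the block sizes from the node's own chain state.

EVALUATION_PERIOD = 4032   # blocks, ~4 weeks at ~144 blocks/day
THRESHOLD_BLOCKS = 2000    # ~50% of the evaluation period
CAPACITY_LEVEL = 0.60      # fraction of the current size limit
INCREASE_FACTOR = 1.10     # 10% increase when the trigger fires

def next_block_size_limit(block_sizes, current_limit):
    """block_sizes: byte sizes of the last EVALUATION_PERIOD blocks."""
    assert len(block_sizes) == EVALUATION_PERIOD
    # Count blocks that are at least 60% of the current limit.
    full_enough = sum(1 for size in block_sizes
                      if size >= CAPACITY_LEVEL * current_limit)
    if full_enough >= THRESHOLD_BLOCKS:
        return int(current_limit * INCREASE_FACTOR)
    return current_limit

# Example (hypothetical numbers): with a 1 MB limit, 2100 of the last
# 4032 blocks at >= 600 kB would raise the limit to 1.1 MB for the next
# evaluation period.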
On Wed, Sep 9, 2015 at 9:11 AM, Washington Sanchez <
washington.sanchez@gmail.com> wrote:

>> If you want me to take your proposal seriously, you need to justify
>> why 60% full is a good answer
>
> Sure thing Gavin.
>
> "If you want blocks to be at least 60% full..."
>
> First off, I do not want blocks to be at least 60% full, so let me try
> and explain where I got this number from.
>
>    - The idea of this parameter is to set a *triggering level* for an
>      increase in the block size.
>    - The triggering level is the point where a reasonable medium-term
>      trend can be observed. That trend is an increase in the transaction
>      volume that, left unchecked, would fill up blocks.
>    - Determining the appropriate triggering level is difficult, and it
>      consists of 3 parameters:
>      1. Evaluation period
>         - *The period of time over which you check whether the
>           conditions to trigger a block size increase hold true.*
>         - Ideally you want an increase to occur in response to a real
>           increase of transaction volume from the market, and not some
>           short-term spam attack.
>         - Too short, and spam attacks can be used to trigger multiple
>           increases (at least early on). Too long, and the block size
>           doesn't increase fast enough to meet transaction demand.
>         - I selected a period of *4032 blocks*.
>      2. Capacity
>         - *The capacity level that a majority of blocks would need to
>           demonstrate in order to trigger a block size increase.*
>         - The capacity level, in tandem with the evaluation period and
>           threshold level, needs to reflect an underlying trend towards
>           filling blocks.
>         - If the capacity level is too low, block size increases can be
>           triggered prematurely. If the capacity level is too high, the
>           network could be unnecessarily jammed with transactions before
>           an increase can kick in.
>         - I selected a capacity level of *60%*.
>      3. Threshold
>         - *The number of blocks during the evaluation period that must
>           be above the capacity level in order to trigger a block size
>           increase.*
>         - If blocks are getting larger than 60% full over a 4032-block
>           period, how many reflect a market-driven increase in
>           transaction volume?
>         - If the threshold is too low, increases could be triggered
>           artificially or prematurely. If the threshold is too high, it
>           becomes easier for 1-2 mining pools to prevent any increases
>           in the block size, or the block size doesn't respond fast
>           enough to a real increase in transaction volume.
>         - I selected a threshold of *2000 blocks, or ~50%*.
>    - So in my proposal, if 2000+ nodes have a block size >= 60%, this
>      is an indication that real transaction volume has increased and
>      we're approaching a time where blocks could be filled to capacity
>      without an increase. The block size increase, 10%, is triggered.
>
> A centralized decision, presumably by Satoshi, was made on the
> parameters that alter the target difficulty, rather than attempting to
> forecast hash rates based on his CPU power. He allowed the system to
> scale to a level where real market demand would take it. I believe the
> same approach should be replicated for the block size. The trick, of
> course, is settling on the right variables, and I hope this proposal is
> a good way to do that.
>
> *Some additional calculations*
>
> Block sizes for each year are *theoretical maximums* if ALL trigger
> points are activated in my proposal (unlikely, but anyway). These
> calculations assume zero transactions are taken off-chain by third-party
> processors or the LN, and no efficiency improvements.
>
>    - 2015
>      - 1 MB/block
>      - 2 tps (conservative factor, also carried on below)
>      - 0.17 million tx/day
>    - 2016
>      - 3.45 MB/block
>      - 7 tps
>      - 0.6 million tx/day
>    - 2017
>      - 12 MB/block
>      - 24 tps
>      - 2 million tx/day
>    - 2018
>      - 41 MB/block
>      - 82 tps
>      - 7 million tx/day
>    - 2019
>      - 142 MB/block
>      - 284 tps
>      - 25 million tx/day
>    - 2020
>      - 490 MB/block
>      - 980 tps
>      - 85 million tx/day
>
> By way of comparison, Alipay (the payment processor for the Alibaba
> Group's ecosystem) processes 30 million escrow transactions per day.
> This gives us at least 4-5 years to reach the present-day transaction
> processing capacity of 1 corporation... in reality it will take a little
> longer, as I doubt all block size triggers will be activated. This also
> gives us at least 4-5 years to develop efficiency improvements within
> the protocol, develop the LN to take many of these transactions
> off-chain, and for network infrastructure to be significantly improved
> (and anything else this ecosystem can come up with).
>
> (let me know if any of these calculations are off)
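For reference, the quoted yearly maximums can be reproduced (to rounding)
with a short Python sketch. It assumes roughly 13 evaluation periods of
4032 blocks per year (at ~144 blocks/day), a 10% increase at every
trigger point, and the 2 tps per MB factor quoted above; these constants
are assumptions restated from the email, not code from the draft.

PERIODS_PER_YEAR = 13   # 4032-block periods, ~28 days each at 144 blocks/day
TPS_PER_MB = 2          # the conservative throughput factor quoted above
SECONDS_PER_DAY = 86400

size_mb = 1.0           # starting point: 1 MB blocks in 2015
for year in range(2015, 2021):
    tps = size_mb * TPS_PER_MB
    tx_per_day_millions = tps * SECONDS_PER_DAY / 1e6
    print("%d: %6.2f MB/block, %6.1f tps, %5.2f million tx/day"
          % (year, size_mb, tps, tx_per_day_millions))
    size_mb *= 1.10 ** PERIODS_PER_YEAR  # 10% per period, compounded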
-- 
-------------------------------------------
*Dr Washington Y. Sanchez*
Co-founder, OB1
Core developer of OpenBazaar
@drwasho