From: Eric Lombrozo
Date: Fri, 18 Sep 2015 10:28:08 -0700
To: Dave Scotese, Dave Scotese via bitcoin-dev, Mark Friedenbach
Cc: Bitcoin development mailing list <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Scaling Bitcoin conference micro-report

To be quite frank, I'm a little disappointed we've fallen back on arguing over numbers pulled out of a hat rather than discussing far more fundamental issues such as the dev process generally, consensus building, and our basic understanding of what Bitcoin really is, its strengths and weaknesses, where it shows most promise, and communicating a more unified vision to the industry and the public.

On September 18, 2015 10:10:08 AM PDT, Dave Scotese via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
"But if a metric were chosen that addressed my concerns (worst case propagation and validation time), then I could be in favor of an initial bump that allowed a larger number of typical transactions in a block."

+1.  A ratio is much more valuable than a simple metric.  It is clearly difficult to identify a reasonable limit on block size, but the ratio between any one of several possible metrics and the bytes in a block would work well and may already have a reasonably well-understood expected range.

I like BTCDaysDestroyed (BTCDD) best.  If it is time-consuming to compute, then it need only be computed for blocks that exceed the average size of the largest 200 or so blocks in the previous difficulty period.  To exceed that limit, a miner would have to ensure that the block has enough BTCDD per byte.  "Enough" could be hardcoded in each release, or, if it's simple enough, the ratio as computed over all the blocks in the previous difficulty period could be used as the lower limit.
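
A minimal illustrative sketch of this BTCDD-per-byte idea (the block and transaction attributes, helper names, and threshold parameter are assumptions made for exposition, not code from any client or proposal):

# Hypothetical Python sketch of the BTCDD-per-byte rule described above.
# Block/tx attributes, names, and thresholds are illustrative assumptions.

def btc_days_destroyed(block):
    """Sum over all spent coins of (value in BTC) x (age of the coin in days)."""
    total = 0.0
    for tx in block.transactions:
        for coin in tx.spent_coins:
            age_days = (block.time - coin.created_time) / 86400.0
            total += coin.value_btc * age_days
    return total

def soft_size_limit(prev_period_blocks, n=200):
    """Average size of the largest ~200 blocks in the previous difficulty period."""
    largest = sorted(b.size_bytes for b in prev_period_blocks)[-n:]
    return sum(largest) / len(largest)

def block_within_rule(block, prev_period_blocks, min_btcdd_per_byte):
    """Small blocks pass unconditionally; larger ones must destroy enough
    coin-days per byte to justify the extra size."""
    if block.size_bytes <= soft_size_limit(prev_period_blocks):
        return True
    return btc_days_destroyed(block) / block.size_bytes >= min_btcdd_per_byte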

notplato

On Thu, Sep 17, 2015 at 10:55 PM, Mark Friedenbach via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
Correction of a correction, in-line:

On Wed, Sep 16, 2015 at 5:51 PM, Matt Corallo via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>> - Many interested or at least willing to accept a "short term bump", a
>> hard fork to modify block size limit regime to be cost-based via
>> "net-utxo" rather than a simple static hard limit.  2-4-8 and 17%/year
>> were debated and seemed "in range" with what might work as a short term
>> bump - net after applying the new cost metric.
>
> I would be careful to point out that hard numbers were deliberately NOT
> discussed. Though some general things were thrown out, they were not
> extensively discussed nor agreed to. I personally think 2-4 is "in
> range", though 8 maybe not so much. Of course it depends on exactly how
> the non-blocksize limit accounting/adjusting is done.
>
> Still, the "greatest common denominator" agreement did not seem to be
> agreeing to an increase which continues over time, but which instead
> limits itself to a set, smooth increase for X time and then requires a
> second hardfork if there is agreement on a need for more blocksize at
> that point.

Perhaps it is accurate to say that there wasn't consensus at all except that (1) we think we can work together on resolving this impasse (yay!), and (2) it is conceivable that changing from block size to some other metric might provide the basis for a compromise on near-term numbers.

As an example, I do not think the net-UTXO metric provides any benefit with respect to scalability, and in some ways makes the situation worse (even though it helpfully solves an unrelated problem of spammy dust outputs). But there are other possible metrics and I maintain hope that data will show the benefit of another metric or other metrics combined with net-UTXO in a way that will allow us to reach consensus.
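
For concreteness, the "net-utxo" idea referenced here charges a block for the net growth it causes in the UTXO set rather than only for its raw byte size. A rough, purely illustrative sketch, with a made-up weight rather than anything from an actual proposal:

# Illustrative-only sketch of a net-UTXO style cost metric.  The weight and
# helper structure are assumptions for exposition, not an actual BIP.

UTXO_WEIGHT = 4  # assumed extra cost per net new unspent output created

def net_utxo_cost(block):
    """Charge the block its serialized size plus a penalty (or grant a discount)
    for the net number of UTXOs it adds to (or removes from) the set."""
    created = sum(len(tx.outputs) for tx in block.transactions)
    spent = sum(len(tx.inputs) for tx in block.transactions)
    return block.size_bytes + UTXO_WEIGHT * (created - spent)

def block_within_limit(block, cost_limit):
    # A consolidating block (spends many old outputs, creates few) is cheaper
    # under this metric, while one minting lots of dusty outputs pays more.
    return net_utxo_cost(block) <= cost_limit

Under such a metric a block that creates many small outputs costs more than one of the same byte size that consolidates them, which is why it helps with dust even though it does little for propagation or validation cost.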

As a further example, I also am quite concerned about 2-4-8MB with either block size or net-UTXO as the base metric. As you say, it depends on how the non-blocksize limit accounting/adjusting is done... But if a metric were chosen that addressed my concerns (worst case propagation and validation time), then I could be in favor of an initial bump that allowed a larger number of typical transactions in a block.

But where I really need to disagree is on the requirement for a 2nd hard fork. I will go on record as being definitively against this. While being conservative with respect to exponentials, I would very much like to make sure that there is a long-term growth curve as part of any proposal. I am willing to accept a hard-fork if the adopted plan is too conservative, but I do not want to be kicking the can down the road to a scheduled 2nd hard fork that absolutely must occur. That, I feel, could be a more dangerous outcome than an exponential that outlasts conservative historical trends.
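
To put rough numbers on the growth shapes being debated (the 1 MB starting point and the horizons are assumptions for illustration only):

# Back-of-the-envelope comparison of the growth schedules mentioned in the
# thread; the starting size and the horizons are illustrative only.

base_mb = 1.0
annual_growth = 0.17  # the 17%/year figure floated at the workshop

for years in (2, 4, 8, 16, 20):
    print(f"{years:>2} years at 17%/yr: {base_mb * (1 + annual_growth) ** years:6.2f} MB")

# A "2-4-8" style doubling every ~2 years corresponds to roughly
# 2**(1/2) - 1, i.e. about 41% per year, for as long as the schedule runs.
print(f"doubling every 2 years ~= {2 ** 0.5 - 1:.0%}/year")

At 17% per year the limit takes roughly 4.4 years to double, noticeably slower than a 2-4-8 schedule while that schedule is in effect, which is the sense in which a smooth exponential can be conservative and still avoid a scheduled second fork.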

I commend Jeff for writing a Chatham House rules summary of the outcome of some hallway conversations that occurred. On the whole I think his summary does represent the majority view of the opinions expressed by core developers at the workshop. I will caution, though, that on nearly every issue there were those who expressed disagreement but did not fight the issue, and those who said nothing and left their opinions unpolled. Nevertheless this summary is informative, as it feeds forward into the design of proposals that will be made prior to the Hong Kong workshop in December, in order that they have a higher likelihood of success.

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
--
I like to provide some work at no charge to prove my value. Do you need a techie?
I own Litmocracy and Meme Racing (in alpha).
I'm the webmaster for The Voluntaryist, which now accepts Bitcoin.
I also code for The Dollar Vigilante.
"He ought to find it more profitable to play by the rules" - Satoshi Nakamoto

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.