From: Angel Leon
Date: Mon, 17 Aug 2015 08:38:06 -0400
To: Rodney Morris
Cc: Bitcoin Dev
Subject: Re: [bitcoin-dev] Dynamically Controlled Bitcoin Block Size Max Cap

I've been sharing a similar solution for the past 2 weeks. I think 2016 blocks is too long to wait; we should instead look at the mean block size during the last 60-120 minutes and avert any crisis caused by transactional spikes that could well come from organic use of the network (Madonna sells her next tour tickets on Bitcoin, the OpenBazaar network starts working as imagined, XYZ startup really kicks ass and succeeds in a couple of major cities with a major PR push).

Pseudo code in Python:
https://gist.github.com/gubatron/143e431ee01158f27db4

My idea stems from a simple scalability metric that affects real users and the desire to use Bitcoin: waiting times to get your transactions confirmed on the blockchain. Anything past 45 minutes to 1 hour should be unacceptable.

Initially I wanted to measure the mean time for transactions to go from being sent by the user (initial broadcast into mempools) until being effectively confirmed on the blockchain, say for 2 blocks (15-20 minutes being acceptable).

When blocks get full, people start waiting unacceptable times for their transactions to come through if they don't adjust their fees.
The idea is to avoid that situation at all costs and keep the network churning to the extent of its capabilities, without pretending a certain size will be right at some point in time. Nobody can predict the future, and nobody can predict real organic usage peaks on an open financial network; not all sustained spikes will come from spammers, they will come from real-world use as more and more people think of great uses for Bitcoin.

I presented this idea of measuring the mean wait time for transactions and I was told there's no way to reliably measure such a number: there's no consensus while transactions are still in the mempool, and wait times could be manipulated. Such an idea would also have to include new timestamp fields on the transactions, or include the median wait time in the block header (too complex, additional storage costs).

This is an iteration on the next thing I believe we can all agree is 100% accurately measured: block size. Full blocks are the reason many transactions end up waiting in the mempool, so we should be able to use the mean size of recent blocks to determine whether there's a legitimate need to increase or reduce the maximum block size.

The idea is simple. If blocks are starting to get full past a certain threshold, we double the block size limit starting with the next block. If blocks remain within a healthy bound, transaction wait times should be as expected for everyone on the network. If blocks are not getting that full and the mean goes below a certain threshold, we halve the maximum block size allowed until we reach the level we need. Similar to what we do with hashing difficulty, it's something you can't predict, so no fixed or predicted limits should be established.
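For illustration, here is a minimal Python sketch of the adjustment rule described above. The thresholds, the 6-block window (roughly 60 minutes at one block per 10 minutes), and the 1 MB floor are illustrative assumptions, not values taken from the gist.

```python
# Minimal sketch of the dynamic max-cap rule described above.
# NOTE: thresholds, window size, and the 1 MB floor are assumed values
# for illustration, not parameters from the linked gist.

MB = 1000 * 1000
FLOOR_CAP = 1 * MB          # assumed lower bound for the cap
GROW_THRESHOLD = 0.90       # mean utilization above this: blocks are "getting full"
SHRINK_THRESHOLD = 0.40     # mean utilization below this: blocks are "not that full"
WINDOW = 6                  # ~60 minutes of blocks at one block per 10 minutes


def next_max_block_size(recent_block_sizes, current_cap):
    """Return the max block size to enforce starting with the next block.

    recent_block_sizes: sizes (in bytes) of the last WINDOW blocks.
    current_cap: the maximum block size currently in force.
    """
    mean_size = sum(recent_block_sizes) / len(recent_block_sizes)
    utilization = mean_size / current_cap

    if utilization > GROW_THRESHOLD:
        # Blocks are filling up: double the cap for the next block.
        return current_cap * 2
    if utilization < SHRINK_THRESHOLD:
        # Sustained low usage: halve the cap, but never below the floor.
        return max(current_cap // 2, FLOOR_CAP)
    # Within the healthy band: leave the cap unchanged.
    return current_cap


# Example: six recent blocks averaging ~96% full under a 1 MB cap -> cap doubles.
if __name__ == "__main__":
    recent = [980_000, 940_000, 960_000, 970_000, 990_000, 950_000]
    print(next_max_block_size(recent, 1 * MB))  # -> 2000000
```

As with difficulty retargeting, the rule only reacts to what was actually observed in recent blocks; it never commits to a size picked in advance.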