From: Aaron Voisine <voisine@gmail.com>
Date: Fri, 8 May 2015 15:43:14 -0700
To: Mark Friedenbach
Cc: Bitcoin Development <bitcoin-development@lists.sourceforge.net>
Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB step function

This is a clever way to tie block size to fees.

I would just like to point out, though, that it is still fundamentally using
hard block size limits to enforce scarcity. Transactions with below-market
fees will hang in limbo for days and then fail, instead of failing
immediately by not propagating, or seeing degraded, long confirmation times
followed by eventual success.

Aaron Voisine
co-founder and CEO
breadwallet.com

On Fri, May 8, 2015 at 1:33 PM, Mark Friedenbach wrote:

> It is my professional opinion that raising the block size by merely
> adjusting a constant, without any sort of feedback mechanism, would be a
> dangerous and foolhardy thing to do. We are custodians of a multi-billion
> dollar asset, and it falls upon us to weigh the consequences of our own
> actions against the combined value of the entire bitcoin ecosystem.
> Ideally we would take no action for which we are not absolutely certain
> of the ramifications, given the information that can be made available
> to us.
> But of course that is not always possible: there are unknown-unknowns,
> time pressures, and known-unknowns where the information has too high a
> marginal cost. So where certainty is unobtainable, we must instead hedge
> against unwanted outcomes.
>
> The proposal to raise the block size now by redefining a constant carries
> with it risks associated with infrastructure scaling, centralization
> pressures, and delaying the necessary development of a constraint-based
> fee economy. It also simply kicks the can down the road in settling these
> issues, because a larger but realistic hard limit must still exist,
> meaning a future hard fork may still be required.
>
> But whatever new hard limit is chosen, there is also a real possibility
> that it may be too high. The standard response is that imposing a lower
> block size limit is a soft-fork change, which miners could make with a
> minimal amount of coordination. This is however undermined by the
> unfortunate reality that so many mining operations are absentee-run
> businesses, or run by individuals without a strong background in bitcoin
> protocol policy, or with interests that are not well aligned with other
> users or holders of bitcoin. We cannot rely on miners being vigilant
> about issues as they develop, or able to respond in the appropriate
> fashion that someone with full domain knowledge and an objective
> perspective would.
>
> The alternative, then, is to have some sort of dynamic block size limit
> controller, and ideally one which applies a cost to raising the block
> size in some way that preserves the decentralization and/or long-term
> stability features that we care about. I will now describe one such
> proposal (see the sketch after these bullet points):
>
> * For each block, the miner is allowed to select a different difficulty
> (nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
> and this miner-selected difficulty is used for the proof-of-work check.
> In addition to adjusting the hashcash target, selecting a different
> difficulty also raises or lowers the maximum block size for that block
> by a function of the difference in difficulty. So increasing the
> difficulty of the block by an additional 25% raises the block limit for
> that block from 100% of the current limit to 125%, and lowering the
> difficulty by 10% would also lower the maximum block size for that block
> from 100% to 90% of the current limit. For simplicity I will assume a
> linear identity transform as the function, but a quadratic or other
> function with compounding marginal cost may be preferred.
>
> * The default maximum block size limit is then adjusted at regular
> intervals. For simplicity I will assume an adjustment at the end of each
> 2016-block interval, at the same time that difficulty is adjusted, but
> there is no reason these have to be aligned. The adjustment algorithm
> itself is either the selection of the median, or perhaps some sort of
> weighted average that respects the "middle majority." There would of
> course be limits on how quickly the block size limit can be adjusted in
> any one period, just as there are min/max limits on the difficulty
> adjustment.
>
> * To prevent perverse mining incentives, the original difficulty without
> adjustment is used in the aggregate work calculations for selecting the
> most-work chain, and the allowable miner-selected adjustment to
> difficulty would have to be tightly constrained.
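To make the mechanics concrete, here is a minimal Python sketch of the
per-block size/difficulty trade and the periodic retarget described in the
bullet points above. The +/- 25% bound, the linear transform, and the
2016-block median come from the text; the function names, the retarget
clamp factor, and the use of floating-point difficulty (instead of the real
nBits compact encoding) are illustrative assumptions, not a specification.

    # Illustrative sketch of the flex-cap mechanism described above.
    from statistics import median

    FLEX_BOUND = 0.25          # miner may move difficulty +/- 25%
    RETARGET_INTERVAL = 2016   # blocks per limit adjustment
    MAX_RETARGET_STEP = 4.0    # assumed clamp per period, as with difficulty

    def max_size_for_block(current_limit, expected_diff, chosen_diff):
        # Per-block limit scales linearly with the chosen difficulty:
        # 125% difficulty buys a 125% limit, 90% shrinks it to 90%.
        ratio = chosen_diff / expected_diff
        if not (1 - FLEX_BOUND) <= ratio <= (1 + FLEX_BOUND):
            raise ValueError("difficulty outside the allowed flex range")
        return current_limit * ratio

    def retarget_limit(current_limit, effective_limits):
        # New default limit from the median of the period's
        # miner-selected limits, clamped to a maximum step.
        assert len(effective_limits) == RETARGET_INTERVAL
        proposed = median(effective_limits)
        return min(max(proposed, current_limit / MAX_RETARGET_STEP),
                   current_limit * MAX_RETARGET_STEP)

    def break_even_fees(subsidy, expected_diff, chosen_diff):
        # Raising difficulty by a factor r cuts the expected rate of
        # finding blocks by 1/r, so the larger block must carry at
        # least (r - 1) * subsidy in extra fees to be rational.
        r = chosen_diff / expected_diff
        return max(0.0, (r - 1.0) * subsidy)

    # Example: with a 25 BTC subsidy, a 125%-difficulty, 125%-size
    # block only pays if it adds at least 6.25 BTC in fees.
    print(break_even_fees(25.0, 1.0, 1.25))  # 6.25

The break-even calculation is just the rational-miner argument from the
next paragraph made explicit: the extra work must be paid for by extra fees.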
> These rules create an incentive environment where raising the block size
> has a real cost associated with it: a more difficult hashcash target for
> the same subsidy reward. For rational miners that cost must be
> counter-balanced by additional fees provided in the larger block. This
> allows block size to increase, but only within the confines of a
> self-supporting fee economy.
>
> When the subsidy goes away, or is reduced to an insignificant fraction
> of the block reward, this incentive structure goes away. Hopefully by
> that time we will have sufficient information to set a hard block size
> maximum via soft fork. But in the meantime, the block size limit
> controller constrains the maximum allowed block size to be within a
> range supported by fees on the network, providing an emergency relief
> valve that we can be assured will only be used at significant cost.
>
> Mark Friedenbach
>
> * There have, over time, been various discussions on the bitcointalk
> forums about dynamically adjusting block size limits. The true origin of
> the idea is unclear at this time (citations would be appreciated!), but
> a form of it was implemented in Bytecoin / Monero using subsidy burning
> to increase the block size. That approach has various limitations. These
> were corrected in Greg Maxwell's suggestion to adjust the difficulty/nBits
> field directly, which also has the added benefit of providing an
> incentive for bidirectional movement during the subsidy period. The
> description in this email and any errors are my own.
>
> On Fri, May 8, 2015 at 12:20 AM, Matt Whitlock wrote:
>
>> Amid all the flames on this list, several ideas were raised that did
>> not get much attention. I hereby resubmit these ideas for consideration
>> and discussion.
>>
>> - Perhaps the hard block size limit should be a function of the actual
>> block sizes over some trailing sampling period. For example, take the
>> median block size among the most recent 2016 blocks and multiply it by
>> 1.5. This allows Bitcoin to scale up gradually and organically, rather
>> than having human beings guess at an appropriate limit.
>>
>> - Perhaps the hard block size limit should be determined by a vote of
>> the miners. Each miner could embed a desired block size limit in the
>> coinbase transactions of the blocks it publishes. The effective hard
>> block size limit would be the size having the greatest number of votes
>> within a sliding window of the most recent blocks.
>>
>> - Perhaps the hard block size limit should be a function of block-chain
>> length, so that it can scale up smoothly rather than jumping immediately
>> to 20 MB. This function could be linear (anticipating a breakdown of
>> Moore's Law) or quadratic.
>>
>> I would be in support of any of the above, but I do not support Mike
>> Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the
>> road without actually solving the problem, and it does so in a
>> controversial (step-function) way.
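For comparison with the difficulty-coupled controller above, here is a
minimal Python sketch of the first two of Matt's alternatives: the
trailing-median rule and coinbase voting. The 1.5 multiplier and the
2016-block window come from his description; the function names and the
tie-breaking rule in the vote tally are illustrative assumptions.

    # Illustrative sketches of two of the alternatives quoted above.
    from statistics import median
    from collections import Counter

    WINDOW = 2016  # trailing sampling period, in blocks

    def limit_from_trailing_median(recent_block_sizes):
        # Hard limit = 1.5 * median block size over the trailing window.
        return 1.5 * median(recent_block_sizes[-WINDOW:])

    def limit_from_coinbase_votes(recent_votes):
        # Hard limit = the size limit with the most votes in a sliding
        # window of recent blocks, each vote embedded in a coinbase tx.
        # Ties go to the smaller limit (an assumption; the text does
        # not say how to break them).
        counts = Counter(recent_votes[-WINDOW:])
        return max(counts.items(), key=lambda kv: (kv[1], -kv[0]))[0]

    # Example: a window dominated by ~400 kB blocks yields a 600 kB limit.
    sizes = [400_000] * 1200 + [900_000] * 816
    print(limit_from_trailing_median(sizes))  # 600000.0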