From: gabe appleton <gappleto97@gmail.com>
To: Thy Shizzle
Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
Date: Mon, 25 May 2015 22:41:28 -0400
Subject: Re: [Bitcoin-development] No Bitcoin For You
But don't you see the same trade-off in the end there? You're still propagating the same amount of data over the same amount of time, so unless I misunderstand, the costs of such a move should be approximately the same, just in different areas. The risks as I understand them are as follows:

20MB:
  1. Longer per-block propagation (eventually)
  2. Longer processing time (eventually)
  3. Longer sync time

1 Minute:
  1. Weaker individual confirmations (approx. equal per confirmation*time)
  2. Higher orphan rate (immediately)
  3. Longer sync time

That risk set makes me want a middle-ground approach: something where the immediate consequences aren't all that strong, and where we have some idea of what to do in the future. Is there any chance we can get decent network simulations at various configurations (5MB/4min, etc.)? Perhaps re-appropriate the testnet?
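To put rough numbers behind that trade-off, here is a back-of-the-envelope sketch. The 250-byte average transaction size and the 15-second block propagation delay are assumptions I've picked for illustration (and I'm holding propagation constant across block sizes, which is generous to 20MB blocks):

```python
import math

AVG_TX_BYTES = 250    # assumed average transaction size (illustrative)
PROP_DELAY_S = 15.0   # assumed time for a block to reach most of the network

def throughput_tps(block_mb, interval_min):
    """Transactions per second for a given block size and interval."""
    return (block_mb * 1_000_000 / AVG_TX_BYTES) / (interval_min * 60)

def orphan_rate(interval_min, prop_delay_s=PROP_DELAY_S):
    """Rough stale-block probability: the chance another block is found
    while the previous one is still propagating (Poisson block arrivals)."""
    return 1 - math.exp(-prop_delay_s / (interval_min * 60))

for block_mb, interval_min in [(1, 10), (20, 10), (1, 1), (5, 4)]:
    print(f"{block_mb}MB/{interval_min}min: "
          f"{throughput_tps(block_mb, interval_min):6.1f} tx/s, "
          f"~{orphan_rate(interval_min):.1%} orphan rate")
```

Under those assumptions, 20MB/10min and 1MB/1min buy roughly the same throughput multiple, but the shorter interval pays for it immediately in orphan rate, which is the point of the list above. A 5MB/4min middle ground sits between the two on both axes.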
On Mon, May 25, 2015 at 10:30 PM, Thy Shizzle wrote:
> Nah don't make blocks 20MB, then you are slowing down block propagation
> and blowing out conf times as a result. Just decrease the time it takes to
> make a 1MB block, then you still see the same propagation times today and
> just increase the transaction throughput.
> ------------------------------
> From: Jim Phillips
> Sent: 26/05/2015 12:27 PM
> To: Mike Hearn
> Cc: Bitcoin Dev
> Subject: Re: [Bitcoin-development] No Bitcoin For You
>
>
> On Mon, May 25, 2015 at 1:36 PM, Mike Hearn wrote:
>
> This meme about datacenter-sized nodes has to die. The Bitcoin wiki is
> down right now, but I showed years ago that you could keep up with VISA on
> a single well-specced server with today's technology. Only people living in
> a dreamworld think that Bitcoin might actually have to match that level of
> transaction demand with today's hardware. As noted previously, "too many
> users" is simply not a problem Bitcoin has ... and may never have!
>
> ... And will certainly NEVER have if we can't solve the capacity problem
> SOON.
>
> In a former life, I was a capacity planner for Bank of America's
> mid-range server group. We had one hard and fast rule: when you are
> typically exceeding 75% of capacity on a given metric, it's time to expand
> capacity. Period. You don't do silly things like adjusting the business
> model to disincentivize use. Unless there's some flaw in the system and
> it's leaking resources, if usage has increased to the point where you are
> at or near the limits of capacity, you expand capacity. It's as simple as
> that, and I've found that same rule fits quite well in a number of systems.
>
> In Bitcoin, we're not leaking resources. There's no flaw. The system is
> performing as intended. Usage is increasing because it works so well, and
> there is huge potential for future growth as we identify more uses and
> attract more users.
> There might be a few technical things we can do to
> reduce consumption, but the metric we're concerned with right now is how
> many transactions we can fit in a block. We've broken through the 75%
> marker and are regularly bumping up against the 100% limit.
>
> It is time to stop debating this and take action to expand capacity. The
> only questions that should remain are how much capacity do we add, and how
> soon can we do it. Given that most existing computer systems and networks
> can easily handle 20MB blocks every 10 minutes, and given that that will
> increase capacity 20-fold, I can't think of a single reason why we can't go
> to 20MB as soon as humanly possible. And in a few years, when the average
> block size is over 15MB, we bump it up again to as high as we can go then
> without pushing typical computers or networks beyond their capacity. We can
> worry about ways to slow down growth without affecting the usefulness of
> Bitcoin as we get closer to the hard technical limits on our capacity.
>
> And you know what else? If miners need higher fees to accommodate the
> costs of bigger blocks, they can configure their nodes to only mine
> transactions with higher fees. Let the miners decide how to charge enough
> to pay for their costs. We don't need to cripple the network just for them.
>
> --
> *James G. Phillips IV*
>
> *"Don't bunt. Aim out of the ball park. Aim for the company of immortals."
> -- David Ogilvy*
>
> *This message was created with 100% recycled electrons. Please think
> twice before printing.*
> _______________________________________________
> Bitcoin-development mailing list
> Bitcoin-development@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bitcoin-development
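P.S. Jim's 75% capacity-planning rule quoted above is easy to state mechanically, which is part of its appeal. A minimal sketch (the recent block sizes below are made-up sample data, and the 1MB limit is the current consensus cap):

```python
BLOCK_LIMIT_MB = 1.0     # current consensus block size cap
EXPAND_THRESHOLD = 0.75  # the 75% capacity-planning rule of thumb

def should_expand(recent_block_sizes_mb, limit_mb=BLOCK_LIMIT_MB):
    """Flag when typical utilization exceeds 75% of capacity."""
    avg = sum(recent_block_sizes_mb) / len(recent_block_sizes_mb)
    return avg / limit_mb > EXPAND_THRESHOLD

# Hypothetical sample of recent block sizes (MB), not real chain data:
recent = [0.72, 0.95, 0.81, 1.00, 0.64, 0.99]
avg_util = sum(recent) / len(recent) / BLOCK_LIMIT_MB
print(f"average utilization: {avg_util:.0%}, expand: {should_expand(recent)}")
```

The open question in this thread is only the second input: whether "expand" means raising the size cap, shortening the interval, or some mix like 5MB/4min.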