* [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: t. khan @ 2016-12-05 15:27 UTC (permalink / raw)
To: bitcoin-dev
BIP Proposal - Managing Bitcoin’s block size the same way we do difficulty
(aka Block75)
The every two-week adjustment of difficulty has proven to be a reasonably
effective and predictable way of managing how quickly blocks are mined.
Bitcoin needs a reasonably effective and predictable way of managing the
maximum block size.
It’s clear at this point that human beings should not be involved in the
determination of max block size, just as they’re not involved in deciding
the difficulty.
Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
passing the decision to miners/pool operators, the max block size should be
adjusted every two weeks (2016 blocks) using a system similar to how
difficulty is calculated.
Put another way: let’s stop thinking about what the max block size should
be and start thinking about how full we want the average block to be
regardless of size. Over the last year, we’ve had averages of 75% or
higher, so aiming for 75% full seems reasonable, hence naming this concept
‘Block75’.
The target capacity over 2016 blocks would be 75%. If the last 2016 blocks
are more than 75% full, add the difference to the max block size. Like this:
MAX_BLOCK_BASE_SIZE = 1000000
TARGET_CAPACITY = 750000
AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
TARGET_CAPACITY
To check if a block is valid: block size ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
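As a minimal sketch in Python (illustrative only; the function names, the
plain arithmetic mean, and the handling of the floor are assumptions, not
actual consensus code — all sizes in bytes):

    import statistics

    MAX_BLOCK_BASE_SIZE = 1000000   # current limit; stays as the floor
    TARGET_CAPACITY = 750000        # 75% of the base size

    def block75_max_size(last_2016_block_sizes):
        """Max block size for the next 2016-block period."""
        average = statistics.mean(last_2016_block_sizes)
        average_over_cap = average - TARGET_CAPACITY
        # The existing 1,000,000-byte limit remains the minimum max size.
        return max(MAX_BLOCK_BASE_SIZE,
                   MAX_BLOCK_BASE_SIZE + average_over_cap)

    def is_valid_size(block_size, last_2016_block_sizes):
        return block_size <= block75_max_size(last_2016_block_sizes)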
For example, if the last 2016 blocks are 85% full (average block is 850
KB), add 10% to the max block size. The new max block size would be 1,100
KB until the next 2016 blocks are mined, then reset and recalculate. The
1,000,000 byte limit that exists currently would remain, but would
effectively be the minimum max block size.
Another two weeks go by, and the last 2016 blocks are again 85% full, but
now that means they average 935 KB out of the 1,100 KB max block size. This
is 93.5% of the 1,000,000-byte limit, so 18.5% would be added to it, making
the new max block size 1,185 KB.
Another two weeks pass. This time, the average block is 1,050 KB. The new
max block size works out to 1,300 KB (blocks were 105% full; minus the 75%
capacity target, that leaves 30% to add to the max block size).
Repeat every 2016 blocks, forever.
If Block75 had been applied at the difficulty adjustment on November 18th,
the max block size would have been 1,080 KB, as the average block during
that period was 83% full, so 8% is added to the 1,000 KB limit. The current
size, after the December 2nd adjustment, would be 1,150 KB.
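Those worked examples can be checked with a couple of lines (sizes in KB
for readability; the 830 KB figure is simply 83% of 1,000 KB):

    def new_max_kb(average_kb):
        return max(1000, 1000 + (average_kb - 750))

    print(new_max_kb(850))    # 85% of 1,000 KB full     -> 1100
    print(new_max_kb(935))    # 85% of 1,100 KB full     -> 1185
    print(new_max_kb(1050))   # 105% of the 1,000 KB base -> 1300
    print(new_max_kb(830))    # 83% full (November 18th)  -> 1080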
Block75 would allow the max block size to grow (or shrink) in response to
transaction volume, and it does so predictably, reasonably quickly, and in a
way that prevents wild swings in block size or transaction fees. It
attempts to keep blocks at 75% total capacity over each two-week period,
the same way difficulty tries to keep blocks mined every ten minutes. It
also keeps blocks as small as possible.
Thoughts?
-t.k.
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: s7r @ 2016-12-10 10:44 UTC (permalink / raw)
To: bitcoin-dev
t. khan via bitcoin-dev wrote:
> BIP Proposal - Managing Bitcoin’s block size the same way we do
> difficulty (aka Block75)
>
> The every two-week adjustment of difficulty has proven to be a
> reasonably effective and predictable way of managing how quickly blocks
> are mined. Bitcoin needs a reasonably effective and predictable way of
> managing the maximum block size.
>
> It’s clear at this point that human beings should not be involved in the
> determination of max block size, just as they’re not involved in
> deciding the difficulty.
>
> Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
> passing the decision to miners/pool operators, the max block size should
> be adjusted every two weeks (2016 blocks) using a system similar to how
> difficulty is calculated.
>
> Put another way: let’s stop thinking about what the max block size
> should be and start thinking about how full we want the average block to
> be regardless of size. Over the last year, we’ve had averages of 75% or
> higher, so aiming for 75% full seems reasonable, hence naming this
> concept ‘Block75’.
>
> The target capacity over 2016 blocks would be 75%. If the last 2016
> blocks are more than 75% full, add the difference to the max block size.
> Like this:
>
> MAX_BLOCK_BASE_SIZE = 1000000
> TARGET_CAPACITY = 750000
> AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> TARGET_CAPACITY
>
> To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
>
> For example, if the last 2016 blocks are 85% full (average block is 850
> KB), add 10% to the max block size. The new max block size would be
> 1,100 KB until the next 2016 blocks are mined, then reset and
> recalculate. The 1,000,000 byte limit that exists currently would
> remain, but would effectively be the minimum max block size.
>
> Another two weeks goes by, the last 2016 blocks are again 85% full, but
> now that means they average 935 KB out of the 1,100 KB max block size.
> This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> that to make the new max block size of 1,185 KB.
>
> Another two weeks passes. This time, the average block is 1,050 KB. The
> new max block size is calculated to 1,300 KB (as blocks were 105% full,
> minus the 75% capacity target, so 30% added to max block size).
>
> Repeat every 2016 blocks, forever.
>
> If Block75 had been applied at the difficulty adjustment on November
> 18th, the max block size would have been 1,080KB, as the average block
> during that period was 83% full, so 8% is added to the 1,000KB limit.
> The current size, after the December 2nd adjustment would be 1,150K.
>
> Block75 would allow the max block size to grow (or shrink) in response
> to transaction volume, and does so predictably, reasonably quickly, and
> in a method that prevents wild swings in block size or transaction fees.
> It attempts to keep blocks at 75% total capacity over each two week
> period, the same way difficulty tries to keep blocks mined every ten
> minutes. It also keeps blocks as small as possible.
>
> Thoughts?
>
> -t.k.
>
I like the idea. It is good wrt growing the max block size automatically
without human action, but the main problem (or question) is not how to
grow this number, it is what number the network can handle, considering
both miners and users. While disk space requirements might not be a big
problem, block propagation time is. The time required for a block to
propagate in the network (or at least to all the miners) is directly
dependent on its size. If blocks take too much time to propagate in the
network, the orphan rate will increase in unpredictable ways. For
example, if the internet speed in China is worse than in Europe, and
miners in China have more than 50% of the hashing power, blocks mined by
European miners might get orphaned.
The system as described can also be gamed by filling the network with
transactions. Miners have a monetary interest to include as many
transactions as possible in a block in order to collect the fees.
Regardless of how you think about it, there has to be a maximum block
size that the network will allow as a consensus rule. Increasing it
dynamically based on transaction volume will reach a point where the
number gets big enough that it breaks things. Bitcoin, because of its
fundamental design, can scale by using offchain solutions.
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: Hampus Sjöberg @ 2016-12-10 12:05 UTC (permalink / raw)
To: s7r, Bitcoin Protocol Discussion
> While disk space requirements might not be a big problem, block
> propagation time is
Is block propagation time really still a problem? Compact blocks and FIBRE
should help here.
> Bitcoin, because its fundamental design, can scale by using offchain
> solutions.
I agree.
However, I believe that on-chain scaling will be needed regardless of which
off-chain solution gains popularity.
2016-12-10 11:44 GMT+01:00 s7r via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org>:
> t. khan via bitcoin-dev wrote:
> > BIP Proposal - Managing Bitcoin’s block size the same way we do
> > difficulty (aka Block75)
> >
> > The every two-week adjustment of difficulty has proven to be a
> > reasonably effective and predictable way of managing how quickly blocks
> > are mined. Bitcoin needs a reasonably effective and predictable way of
> > managing the maximum block size.
> >
> > It’s clear at this point that human beings should not be involved in the
> > determination of max block size, just as they’re not involved in
> > deciding the difficulty.
> >
> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
> > passing the decision to miners/pool operators, the max block size should
> > be adjusted every two weeks (2016 blocks) using a system similar to how
> > difficulty is calculated.
> >
> > Put another way: let’s stop thinking about what the max block size
> > should be and start thinking about how full we want the average block to
> > be regardless of size. Over the last year, we’ve had averages of 75% or
> > higher, so aiming for 75% full seems reasonable, hence naming this
> > concept ‘Block75’.
> >
> > The target capacity over 2016 blocks would be 75%. If the last 2016
> > blocks are more than 75% full, add the difference to the max block size.
> > Like this:
> >
> > MAX_BLOCK_BASE_SIZE = 1000000
> > TARGET_CAPACITY = 750000
> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> > TARGET_CAPACITY
> >
> > To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
> >
> > For example, if the last 2016 blocks are 85% full (average block is 850
> > KB), add 10% to the max block size. The new max block size would be
> > 1,100 KB until the next 2016 blocks are mined, then reset and
> > recalculate. The 1,000,000 byte limit that exists currently would
> > remain, but would effectively be the minimum max block size.
> >
> > Another two weeks goes by, the last 2016 blocks are again 85% full, but
> > now that means they average 935 KB out of the 1,100 KB max block size.
> > This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> > that to make the new max block size of 1,185 KB.
> >
> > Another two weeks passes. This time, the average block is 1,050 KB. The
> > new max block size is calculated to 1,300 KB (as blocks were 105% full,
> > minus the 75% capacity target, so 30% added to max block size).
> >
> > Repeat every 2016 blocks, forever.
> >
> > If Block75 had been applied at the difficulty adjustment on November
> > 18th, the max block size would have been 1,080KB, as the average block
> > during that period was 83% full, so 8% is added to the 1,000KB limit.
> > The current size, after the December 2nd adjustment would be 1,150K.
> >
> > Block75 would allow the max block size to grow (or shrink) in response
> > to transaction volume, and does so predictably, reasonably quickly, and
> > in a method that prevents wild swings in block size or transaction fees.
> > It attempts to keep blocks at 75% total capacity over each two week
> > period, the same way difficulty tries to keep blocks mined every ten
> > minutes. It also keeps blocks as small as possible.
> >
> > Thoughts?
> >
> > -t.k.
> >
>
> I like the idea. It is good wrt growing the max. block size
> automatically without human action, but the main problem (or question)
> is not how to grow this number, it is what number can the network
> handle, considering both miners and users. While disk space requirements
> might not be a big problem, block propagation time is. The time required
> for a block to propagate in the network (or at least to all the miners)
> is directly dependent of its size. If blocks take too much time to
> propagate in the network, the orphan rate will increase in unpredictable
> ways. For example if the internet speed in China is worse than in
> Europe, and miners in China have more than 50% of the hashing power,
> blocks mined by European miners might get orphaned.
>
> The system as described can also be gamed, by filling the network with
> transactions. Miners have the monetary interest to include as many
> transactions as possible in a block in order to collect the fees.
> Regardless how you think about it, there has to be a maximum block size
> that the network will allow as a consensus rule. Increasing it
> dynamically based on transaction volume will reach a point where the
> number got big enough that it broke things. Bitcoin, because its
> fundamental design, can scale by using offchain solutions.
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: Bram Cohen @ 2016-12-10 23:12 UTC (permalink / raw)
To: t. khan, Bitcoin Protocol Discussion
On Mon, Dec 5, 2016 at 7:27 AM, t. khan via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Put another way: let’s stop thinking about what the max block size should
> be and start thinking about how full we want the average block to be
> regardless of size. Over the last year, we’ve had averages of 75% or
> higher, so aiming for 75% full seems reasonable, hence naming this concept
> ‘Block75’.
>
That's effectively making the blocksize limit completely uncapped and only
preventing spikes, and even in the case of spikes it doesn't differentiate
between 'real' traffic and low-value spam attacks. It suffers from the same
fundamental problems as Bitcoin Unlimited: in the end there are no
transaction fees, and inevitably some miners will want to impose some cap
on block size for practical purposes, resulting in a fork.
Difficulty adjustment works because there's a clear goal of having a
certain rate of making new blocks. Without a target to aim for, automatic
adjustment makes no sense.
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: t. khan @ 2016-12-11 0:26 UTC (permalink / raw)
To: s7r, Bitcoin Protocol Discussion
Miners 'gaming' the Block75 system -
There is no financial incentive for miners to attempt to game the Block75
system. Even if it were attempted and assuming the goal was to create
bigger blocks, the maximum possible increase would be 25% over the previous
block size. And, that size would only last for two weeks before readjusting
down. It would cost them more in transaction fees to stuff the network than
they could ever make up. To game the system, they'd have to game it forever
with no possibility of profit.
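As a rough illustration of that cost claim (toy numbers; the 50 sat/byte
fee rate is purely a hypothetical assumption, and this only counts stuffing
transactions that end up in other miners' blocks):

    # Extra transaction bytes needed to push the 2016-block average
    # from 750 KB up to 1,000 KB, and what that costs in fees.
    BLOCKS_PER_PERIOD = 2016
    extra_bytes_per_block = 250_000
    fee_rate_sat_per_byte = 50            # hypothetical prevailing fee rate

    extra_bytes = BLOCKS_PER_PERIOD * extra_bytes_per_block   # ~504 MB
    cost_btc = extra_bytes * fee_rate_sat_per_byte / 100_000_000
    print(cost_btc)   # ~252 BTC in fees per two-week period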
Blocks would get too big -
Eventually, blocks would get too big, but only if bandwidth stopped
increasing and the cost of disk space stopped decreasing. Otherwise, the
incremental adjustments made by Block75 (especially in combination with
SegWit) wouldn't break anyone's connection or result in significantly more
orphaned blocks.
The frequent and small adjustments made by Block75 have the added benefit
of being more easily adapted to, both psychologically and technologically,
with regards to miners/node operators.
-t.k
On Sat, Dec 10, 2016 at 5:44 AM, s7r via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> t. khan via bitcoin-dev wrote:
> > BIP Proposal - Managing Bitcoin’s block size the same way we do
> > difficulty (aka Block75)
> >
> > The every two-week adjustment of difficulty has proven to be a
> > reasonably effective and predictable way of managing how quickly blocks
> > are mined. Bitcoin needs a reasonably effective and predictable way of
> > managing the maximum block size.
> >
> > It’s clear at this point that human beings should not be involved in the
> > determination of max block size, just as they’re not involved in
> > deciding the difficulty.
> >
> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
> > passing the decision to miners/pool operators, the max block size should
> > be adjusted every two weeks (2016 blocks) using a system similar to how
> > difficulty is calculated.
> >
> > Put another way: let’s stop thinking about what the max block size
> > should be and start thinking about how full we want the average block to
> > be regardless of size. Over the last year, we’ve had averages of 75% or
> > higher, so aiming for 75% full seems reasonable, hence naming this
> > concept ‘Block75’.
> >
> > The target capacity over 2016 blocks would be 75%. If the last 2016
> > blocks are more than 75% full, add the difference to the max block size.
> > Like this:
> >
> > MAX_BLOCK_BASE_SIZE = 1000000
> > TARGET_CAPACITY = 750000
> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> > TARGET_CAPACITY
> >
> > To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
> >
> > For example, if the last 2016 blocks are 85% full (average block is 850
> > KB), add 10% to the max block size. The new max block size would be
> > 1,100 KB until the next 2016 blocks are mined, then reset and
> > recalculate. The 1,000,000 byte limit that exists currently would
> > remain, but would effectively be the minimum max block size.
> >
> > Another two weeks goes by, the last 2016 blocks are again 85% full, but
> > now that means they average 935 KB out of the 1,100 KB max block size.
> > This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> > that to make the new max block size of 1,185 KB.
> >
> > Another two weeks passes. This time, the average block is 1,050 KB. The
> > new max block size is calculated to 1,300 KB (as blocks were 105% full,
> > minus the 75% capacity target, so 30% added to max block size).
> >
> > Repeat every 2016 blocks, forever.
> >
> > If Block75 had been applied at the difficulty adjustment on November
> > 18th, the max block size would have been 1,080KB, as the average block
> > during that period was 83% full, so 8% is added to the 1,000KB limit.
> > The current size, after the December 2nd adjustment would be 1,150K.
> >
> > Block75 would allow the max block size to grow (or shrink) in response
> > to transaction volume, and does so predictably, reasonably quickly, and
> > in a method that prevents wild swings in block size or transaction fees.
> > It attempts to keep blocks at 75% total capacity over each two week
> > period, the same way difficulty tries to keep blocks mined every ten
> > minutes. It also keeps blocks as small as possible.
> >
> > Thoughts?
> >
> > -t.k.
> >
>
> I like the idea. It is good wrt growing the max. block size
> automatically without human action, but the main problem (or question)
> is not how to grow this number, it is what number can the network
> handle, considering both miners and users. While disk space requirements
> might not be a big problem, block propagation time is. The time required
> for a block to propagate in the network (or at least to all the miners)
> is directly dependent of its size. If blocks take too much time to
> propagate in the network, the orphan rate will increase in unpredictable
> ways. For example if the internet speed in China is worse than in
> Europe, and miners in China have more than 50% of the hashing power,
> blocks mined by European miners might get orphaned.
>
> The system as described can also be gamed, by filling the network with
> transactions. Miners have the monetary interest to include as many
> transactions as possible in a block in order to collect the fees.
> Regardless how you think about it, there has to be a maximum block size
> that the network will allow as a consensus rule. Increasing it
> dynamically based on transaction volume will reach a point where the
> number got big enough that it broke things. Bitcoin, because its
> fundamental design, can scale by using offchain solutions.
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: James Hilliard @ 2016-12-11 0:40 UTC (permalink / raw)
To: t. khan, Bitcoin Protocol Discussion
Miners in general are naturally incentivized to always mine max-size
blocks to maximize transaction fees, simply because there is very
little marginal cost to including extra transactions (there will always
be a transaction backlog of some sort available to mine, since demand
for block space is effectively unbounded as fees approach 0, and they
can even mine their own transactions without any fees). This proposal
would almost certainly cause runaway block size growth and encourage
much more miner centralization.
On Sat, Dec 10, 2016 at 6:26 PM, t. khan via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> Miners 'gaming' the Block75 system -
> There is no financial incentive for miners to attempt to game the Block75
> system. Even if it were attempted and assuming the goal was to create bigger
> blocks, the maximum possible increase would be 25% over the previous block
> size. And, that size would only last for two weeks before readjusting down.
> It would cost them more in transaction fees to stuff the network than they
> could ever make up. To game the system, they'd have to game it forever with
> no possibility of profit.
>
> Blocks would get too big -
> Eventually, blocks would get too big, but only if bandwidth stopped
> increasing and the cost of disk space stopped decreasing. Otherwise, the
> incremental adjustments made by Block75 (especially in combination with
> SegWit) wouldn't break anyone's connection or result in significantly more
> orphaned blocks.
>
> The frequent and small adjustments made by Block75 have the added benefit of
> being more easily adapted to, both psychologically and technologically, with
> regards to miners/node operators.
>
> -t.k
>
> On Sat, Dec 10, 2016 at 5:44 AM, s7r via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> t. khan via bitcoin-dev wrote:
>> > BIP Proposal - Managing Bitcoin’s block size the same way we do
>> > difficulty (aka Block75)
>> >
>> > The every two-week adjustment of difficulty has proven to be a
>> > reasonably effective and predictable way of managing how quickly blocks
>> > are mined. Bitcoin needs a reasonably effective and predictable way of
>> > managing the maximum block size.
>> >
>> > It’s clear at this point that human beings should not be involved in the
>> > determination of max block size, just as they’re not involved in
>> > deciding the difficulty.
>> >
>> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
>> > passing the decision to miners/pool operators, the max block size should
>> > be adjusted every two weeks (2016 blocks) using a system similar to how
>> > difficulty is calculated.
>> >
>> > Put another way: let’s stop thinking about what the max block size
>> > should be and start thinking about how full we want the average block to
>> > be regardless of size. Over the last year, we’ve had averages of 75% or
>> > higher, so aiming for 75% full seems reasonable, hence naming this
>> > concept ‘Block75’.
>> >
>> > The target capacity over 2016 blocks would be 75%. If the last 2016
>> > blocks are more than 75% full, add the difference to the max block size.
>> > Like this:
>> >
>> > MAX_BLOCK_BASE_SIZE = 1000000
>> > TARGET_CAPACITY = 750000
>> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
>> > TARGET_CAPACITY
>> >
>> > To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
>> >
>> > For example, if the last 2016 blocks are 85% full (average block is 850
>> > KB), add 10% to the max block size. The new max block size would be
>> > 1,100 KB until the next 2016 blocks are mined, then reset and
>> > recalculate. The 1,000,000 byte limit that exists currently would
>> > remain, but would effectively be the minimum max block size.
>> >
>> > Another two weeks goes by, the last 2016 blocks are again 85% full, but
>> > now that means they average 935 KB out of the 1,100 KB max block size.
>> > This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
>> > that to make the new max block size of 1,185 KB.
>> >
>> > Another two weeks passes. This time, the average block is 1,050 KB. The
>> > new max block size is calculated to 1,300 KB (as blocks were 105% full,
>> > minus the 75% capacity target, so 30% added to max block size).
>> >
>> > Repeat every 2016 blocks, forever.
>> >
>> > If Block75 had been applied at the difficulty adjustment on November
>> > 18th, the max block size would have been 1,080KB, as the average block
>> > during that period was 83% full, so 8% is added to the 1,000KB limit.
>> > The current size, after the December 2nd adjustment would be 1,150K.
>> >
>> > Block75 would allow the max block size to grow (or shrink) in response
>> > to transaction volume, and does so predictably, reasonably quickly, and
>> > in a method that prevents wild swings in block size or transaction fees.
>> > It attempts to keep blocks at 75% total capacity over each two week
>> > period, the same way difficulty tries to keep blocks mined every ten
>> > minutes. It also keeps blocks as small as possible.
>> >
>> > Thoughts?
>> >
>> > -t.k.
>> >
>>
>> I like the idea. It is good wrt growing the max. block size
>> automatically without human action, but the main problem (or question)
>> is not how to grow this number, it is what number can the network
>> handle, considering both miners and users. While disk space requirements
>> might not be a big problem, block propagation time is. The time required
>> for a block to propagate in the network (or at least to all the miners)
>> is directly dependent of its size. If blocks take too much time to
>> propagate in the network, the orphan rate will increase in unpredictable
>> ways. For example if the internet speed in China is worse than in
>> Europe, and miners in China have more than 50% of the hashing power,
>> blocks mined by European miners might get orphaned.
>>
>> The system as described can also be gamed, by filling the network with
>> transactions. Miners have the monetary interest to include as many
>> transactions as possible in a block in order to collect the fees.
>> Regardless how you think about it, there has to be a maximum block size
>> that the network will allow as a consensus rule. Increasing it
>> dynamically based on transaction volume will reach a point where the
>> number got big enough that it broke things. Bitcoin, because its
>> fundamental design, can scale by using offchain solutions.
>>
>>
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: t. khan @ 2016-12-11 0:52 UTC (permalink / raw)
To: Bram Cohen; +Cc: Bitcoin Protocol Discussion
Agreed, the clear goal of 10 minutes per block is why the difficulty
adjustment works well. Blocks averaging 75% full is the clear goal of the
described method. That's the target to attempt.
Under Block75, there will still be full blocks. There will still be
transaction fees and a fee market. The fees will be lower than they are
now, of course.
Hardcoding a cap will inevitably become a roadblock (again), and we'll be
back in the same position as we are now. Permanent solutions are preferred.
On Sat, Dec 10, 2016 at 6:12 PM, Bram Cohen <bram@bittorrent.com> wrote:
> On Mon, Dec 5, 2016 at 7:27 AM, t. khan via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>> Put another way: let’s stop thinking about what the max block size should
>> be and start thinking about how full we want the average block to be
>> regardless of size. Over the last year, we’ve had averages of 75% or
>> higher, so aiming for 75% full seems reasonable, hence naming this concept
>> ‘Block75’.
>>
>
> That's effectively making the blocksize limit completely uncapped and only
> preventing spikes, and even in the case of spikes it doesn't differentiate
> between 'real' traffic and low value spam attacks. It suffers from the same
> fundamental problems as bitcoin unlimited: There are in the end no
> transaction fees, and inevitably some miners will want to impose some cap
> on block size for practical purposes, resulting in a fork.
>
> Difficulty adjustment works because there's a clear goal of having a
> certain rate of making new blocks. Without a target to attempt automatic
> adjustment makes no sense.
>
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: Bram Cohen @ 2016-12-11 1:07 UTC (permalink / raw)
To: James Hilliard, Bitcoin Protocol Discussion
Miners individually have an incentive to include every transaction they can
when they mine a block, but they also sometimes have an incentive to
collectively cooperate to reduce throughput to make more money as a group.
Under schemes where limits can be adjusted, both possibilities must be taken
into account.
On Sat, Dec 10, 2016 at 4:40 PM, James Hilliard via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Miners in general are naturally incentivized to always mine max size
> blocks to maximize transaction fees simply because there is very
> little marginal cost to including extra transactions(there will always
> be a transaction backlog of some sort available to mine since demand
> for block space is effectively unbounded as fees approach 0 and they
> can even mine their own transactions without any fees). This proposal
> would almost certainly cause runaway block size growth and encourage
> much more miner centralization.
>
> On Sat, Dec 10, 2016 at 6:26 PM, t. khan via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org> wrote:
> > Miners 'gaming' the Block75 system -
> > There is no financial incentive for miners to attempt to game the Block75
> > system. Even if it were attempted and assuming the goal was to create
> bigger
> > blocks, the maximum possible increase would be 25% over the previous
> block
> > size. And, that size would only last for two weeks before readjusting
> down.
> > It would cost them more in transaction fees to stuff the network than
> they
> > could ever make up. To game the system, they'd have to game it forever
> with
> > no possibility of profit.
> >
> > Blocks would get too big -
> > Eventually, blocks would get too big, but only if bandwidth stopped
> > increasing and the cost of disk space stopped decreasing. Otherwise, the
> > incremental adjustments made by Block75 (especially in combination with
> > SegWit) wouldn't break anyone's connection or result in significantly
> more
> > orphaned blocks.
> >
> > The frequent and small adjustments made by Block75 have the added
> benefit of
> > being more easily adapted to, both psychologically and technologically,
> with
> > regards to miners/node operators.
> >
> > -t.k
> >
> > On Sat, Dec 10, 2016 at 5:44 AM, s7r via bitcoin-dev
> > <bitcoin-dev@lists.linuxfoundation.org> wrote:
> >>
> >> t. khan via bitcoin-dev wrote:
> >> > BIP Proposal - Managing Bitcoin’s block size the same way we do
> >> > difficulty (aka Block75)
> >> >
> >> > The every two-week adjustment of difficulty has proven to be a
> >> > reasonably effective and predictable way of managing how quickly
> blocks
> >> > are mined. Bitcoin needs a reasonably effective and predictable way of
> >> > managing the maximum block size.
> >> >
> >> > It’s clear at this point that human beings should not be involved in
> the
> >> > determination of max block size, just as they’re not involved in
> >> > deciding the difficulty.
> >> >
> >> > Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.)
> or
> >> > passing the decision to miners/pool operators, the max block size
> should
> >> > be adjusted every two weeks (2016 blocks) using a system similar to
> how
> >> > difficulty is calculated.
> >> >
> >> > Put another way: let’s stop thinking about what the max block size
> >> > should be and start thinking about how full we want the average block
> to
> >> > be regardless of size. Over the last year, we’ve had averages of 75%
> or
> >> > higher, so aiming for 75% full seems reasonable, hence naming this
> >> > concept ‘Block75’.
> >> >
> >> > The target capacity over 2016 blocks would be 75%. If the last 2016
> >> > blocks are more than 75% full, add the difference to the max block
> size.
> >> > Like this:
> >> >
> >> > MAX_BLOCK_BASE_SIZE = 1000000
> >> > TARGET_CAPACITY = 750000
> >> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> >> > TARGET_CAPACITY
> >> >
> >> > To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE +
> AVERAGE_OVER_CAP)
> >> >
> >> > For example, if the last 2016 blocks are 85% full (average block is
> 850
> >> > KB), add 10% to the max block size. The new max block size would be
> >> > 1,100 KB until the next 2016 blocks are mined, then reset and
> >> > recalculate. The 1,000,000 byte limit that exists currently would
> >> > remain, but would effectively be the minimum max block size.
> >> >
> >> > Another two weeks goes by, the last 2016 blocks are again 85% full,
> but
> >> > now that means they average 935 KB out of the 1,100 KB max block size.
> >> > This is 93.5% of the 1,000,000 byte limit, so 18.5% would be added to
> >> > that to make the new max block size of 1,185 KB.
> >> >
> >> > Another two weeks passes. This time, the average block is 1,050 KB.
> The
> >> > new max block size is calculated to 1,300 KB (as blocks were 105%
> full,
> >> > minus the 75% capacity target, so 30% added to max block size).
> >> >
> >> > Repeat every 2016 blocks, forever.
> >> >
> >> > If Block75 had been applied at the difficulty adjustment on November
> >> > 18th, the max block size would have been 1,080KB, as the average block
> >> > during that period was 83% full, so 8% is added to the 1,000KB limit.
> >> > The current size, after the December 2nd adjustment would be 1,150K.
> >> >
> >> > Block75 would allow the max block size to grow (or shrink) in response
> >> > to transaction volume, and does so predictably, reasonably quickly,
> and
> >> > in a method that prevents wild swings in block size or transaction
> fees.
> >> > It attempts to keep blocks at 75% total capacity over each two week
> >> > period, the same way difficulty tries to keep blocks mined every ten
> >> > minutes. It also keeps blocks as small as possible.
> >> >
> >> > Thoughts?
> >> >
> >> > -t.k.
> >> >
> >>
> >> I like the idea. It is good wrt growing the max. block size
> >> automatically without human action, but the main problem (or question)
> >> is not how to grow this number, it is what number can the network
> >> handle, considering both miners and users. While disk space requirements
> >> might not be a big problem, block propagation time is. The time required
> >> for a block to propagate in the network (or at least to all the miners)
> >> is directly dependent of its size. If blocks take too much time to
> >> propagate in the network, the orphan rate will increase in unpredictable
> >> ways. For example if the internet speed in China is worse than in
> >> Europe, and miners in China have more than 50% of the hashing power,
> >> blocks mined by European miners might get orphaned.
> >>
> >> The system as described can also be gamed, by filling the network with
> >> transactions. Miners have the monetary interest to include as many
> >> transactions as possible in a block in order to collect the fees.
> >> Regardless how you think about it, there has to be a maximum block size
> >> that the network will allow as a consensus rule. Increasing it
> >> dynamically based on transaction volume will reach a point where the
> >> number got big enough that it broke things. Bitcoin, because its
> >> fundamental design, can scale by using offchain solutions.
> >>
> >>
> >> _______________________________________________
> >> bitcoin-dev mailing list
> >> bitcoin-dev@lists.linuxfoundation.org
> >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> >>
> >
> >
> > _______________________________________________
> > bitcoin-dev mailing list
> > bitcoin-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> >
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: s7r @ 2016-12-11 17:11 UTC (permalink / raw)
To: t. khan, Bitcoin Protocol Discussion
t. khan wrote:
> Miners 'gaming' the Block75 system -
> There is no financial incentive for miners to attempt to game the
> Block75 system. Even if it were attempted and assuming the goal was to
> create bigger blocks, the maximum possible increase would be 25% over
> the previous block size. And, that size would only last for two weeks
> before readjusting down. It would cost them more in transaction fees to
> stuff the network than they could ever make up. To game the system,
> they'd have to game it forever with no possibility of profit.
>
This is an incentive if a few miners agree to create a large conglomerate
that will ultimately control the network.
You miss something obvious that makes this attack actually free of cost.
Nothing will "cost them more in transaction fees". A miner can create
thousands of transactions paying to himself, not broadcast them to the
network, but hold them and include them in the blocks he mines. The fees
are collected by him because the transactions are included in a block
that he mined, and the remaining amount is in another wallet of the same
person. Repeat this continuously to fill blocks.
> Blocks would get too big -
> Eventually, blocks would get too big, but only if bandwidth stopped
> increasing and the cost of disk space stopped decreasing. Otherwise, the
> incremental adjustments made by Block75 (especially in combination with
> SegWit) wouldn't break anyone's connection or result in significantly
> more orphaned blocks.
>
The topology and bandwidth speed / hash rate of the network cannot be
controlled - if we make assumptions about these, it might have terrible
consequences.
Even if we take into consideration that bandwidth will only grow and disk
space will only cost less (which is not something we can safely assume,
by the way), the hard-limit max block size cannot grow to an unlimited
value (even if the growth happens over time). There is also a validation
cost in time for each block; for the health of the network, any node
should be able to download _and_ validate a block before the next block
gets mined.
You said in another post that a permanent solution is preferred, rather
than kicking the can down the road. I fully agree, as well as many
others reading this list, but the permanent solution doesn't necessarily
have to be increasing the max block size dynamically.
If you think about it the other way around, dynamically growing the max
block size is also kicking the can down the road ... just without having
to touch it and get dust on the boot ;)
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: t. khan @ 2016-12-11 19:55 UTC (permalink / raw)
To: s7r, Bitcoin Protocol Discussion
On Sun, Dec 11, 2016 at 12:11 PM, s7r <s7r@sky-ip.org> wrote:
>
> This is an incentive, if few miners agree to create a large conglomerate
> that will ultimately control the network.
>
> You miss something obvious that makes this attack actually free of cost.
> Nothing will "cost them more in transaction fees". A miner can create
> thousands of transactions paying to himself, and not broadcast them to
> the network, but hold them and include them in the blocks he mines. The
> fees are collected by him because transactions are included in a block
> that he mined and the left amount is in another wallet of the same
> person. Repeat this continuously to fill blocks.
>
No, that wasn't overlooked. Miners could indeed stuff their own blocks for
free, but they can't stuff blocks mined by others for free.
In the hypothetical scenario where there is a single mining pool which
mines most (if not all) of the blocks, we would have much larger problems
than their ability to raise the max block size gradually. Even if they were
able to fill 100% of the blocks for an entire year, the max block size for
that 2016-block period would be 7.25 MB (not accounting for SegWit). After
the whole year they would have made no extra profit vs. doing nothing. And
as soon as they stopped this scheme, block size would spring back to its
natural level.
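A quick back-of-the-envelope check of that figure (toy Python, assuming
every block in every retarget period is completely full and ignoring
SegWit):

    # Worst case: every block 100% full, ~26 retarget periods per year.
    max_kb = 1000
    for period in range(1, 27):
        average_kb = max_kb                  # blocks completely full
        max_kb = 1000 + (average_kb - 750)   # grows a flat 250 KB per period
        print(period, max_kb)
    # After ~25-26 periods the cap sits around 7,250-7,500 KB,
    # i.e. roughly the 7.25 MB figure above.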
The good news is, this scenario has never happened and even when we've come
remotely close (when ASICs first shipped), the situation was temporary. The
odds of this happening in the future and persisting long enough to have any
major effect with Block75 are very close to zero.
> Topology and bandwidth speed / hash rate of the network cannot be
> controlled - if we make assumptions about these it might have terrible
> consequences.
>
> Even if we take in consideration that bandwidth will only grow and disk
> space will only cost less (which is not something we can safely assume,
> by the way) the hard limit max. block size cannot grow to unlimited
> value (even if the growth happens over time). There is also a validation
> cost in time for each block, for the health of the network any node
> should be able to download _and_ validate a block, before next block
> gets mined.
>
> You said in another post that a permanent solution is preferred, rather
> than kicking the can down the road. I fully agree, as well as many
> others reading this list, but the permanent solution doesn't necessarily
> have to be increasing the max block size dynamically.
>
Increasing *and* decreasing max block size dynamically. Block75 is
self-correcting, whereas any solution with hardcoded limits can't correct
without human intervention and would rely on our ability to predict the
future (which, as you pointed out, we can't do). Therefore, any solution
that's not dynamic cannot be permanent.
Additionally, the frequent and gradual changes in max block size would
allow us to see any consequences well in advance (years probably).
> If you think about it the other way around, dynamically growing the max
> block size is also kicking the can down the road ... just without having
> to touch it and get dust on the boot ;)
Not having to touch it again = permanent solution. ;)
It would be helpful if some others would run the numbers on how Block75
would adjust the block size over time:
new max block size = 1,000 KB + (average block size over last 2016 blocks -
750 KB)
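For instance, a toy projection under the simplifying (and unrealistic)
assumption that blocks stay a fixed fraction full of whatever the current
cap is:

    def project(fill, periods=52):        # 52 periods is roughly two years
        max_kb = 1000.0
        for _ in range(periods):
            average_kb = fill * max_kb
            max_kb = max(1000.0, 1000 + (average_kb - 750))
        return max_kb

    for fill in (0.75, 0.85, 0.95, 1.00):
        print(fill, round(project(fill)))
    # 0.75 -> stays at the 1,000 KB floor; 0.85 -> converges to ~1,667 KB;
    # 0.95 -> heads toward ~5,000 KB; 1.00 -> grows without bound,
    #         250 KB per period.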
-t.k.
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: James Hilliard @ 2016-12-11 20:31 UTC (permalink / raw)
To: t. khan, Bitcoin Protocol Discussion
What's most likely to happen is that miners will max out the blocks they
mine simply to try and get as many transaction fees as possible, like
they are doing right now (there will be a backlog of transactions at
any block size). Having the block size double every year would likely
cause major problems, and this proposal seems to allow over a 7x
increase.
The main problem with this proposal, I think, is that users effectively
have no way to stop the miners from increasing the block size
continuously.
On Sun, Dec 11, 2016 at 1:55 PM, t. khan via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> On Sun, Dec 11, 2016 at 12:11 PM, s7r <s7r@sky-ip.org> wrote:
>>
>>
>> This is an incentive, if few miners agree to create a large conglomerate
>> that will ultimately control the network.
>>
>> You miss something obvious that makes this attack actually free of cost.
>> Nothing will "cost them more in transaction fees". A miner can create
>> thousands of transactions paying to himself, and not broadcast them to
>> the network, but hold them and include them in the blocks he mines. The
>> fees are collected by him because transactions are included in a block
>> that he mined and the left amount is in another wallet of the same
>> person. Repeat this continuously to fill blocks.
>
>
> No, that wasn't overlooked. Miners could indeed stuff their own blocks for
> free, but they can't stuff blocks mined by others for free.
>
> In the hypothetical scenario where there is a single mining pool which mines
> most (if not all) of the blocks, we would have much larger problems than
> their ability to raise the max block size gradually. Even if they were able
> to fill 100% of the blocks for an entire year, the max block size for that
> 2016 block period would be 7.25MB (not accounting for SegWit). After the
> whole year they would have made no extra profit vs doing nothing. And as
> soon as they stopped this scheme, block size would spring back to it's
> natural level.
>
> The good news is, this scenario has never happened and even when we've come
> remotely close (when ASICs first shipped), the situation was temporary. The
> odds of this happening in the future and persisting long enough to have any
> major effect with Block75 are very close to zero.
>
>>
>> Topology and bandwidth speed / hash rate of the network cannot be
>> controlled - if we make assumptions about these it might have terrible
>> consequences.
>>
>> Even if we take in consideration that bandwidth will only grow and disk
>> space will only cost less (which is not something we can safely assume,
>> by the way) the hard limit max. block size cannot grow to unlimited
>> value (even if the growth happens over time). There is also a validation
>> cost in time for each block, for the health of the network any node
>> should be able to download _and_ validate a block, before next block
>> gets mined.
>>
>> You said in another post that a permanent solution is preferred, rather
>> than kicking the can down the road. I fully agree, as well as many
>> others reading this list, but the permanent solution doesn't necessarily
>> have to be increasing the max block size dynamically.
>
>
> Increasing *and* decreasing max block size dynamically. Block75 is
> self-correcting, whereas any solution with hardcoded limits can't correct
> without human intervention and would rely on our ability to predict the
> future (which as you pointed out, we can't do). Therefore, any solution
> that's not dynamic cannot be permanent.
>
> Additionally, the frequent and gradual changes in max block size would allow
> us to see any consequences well in advance (years probably).
>
>>
>> If you think about it the other way around, dynamically growing the max
>> block size is also kicking the can down the road ... just without having
>> to touch it and get dust on the boot ;)
>
>
> Not having to touch it again = permanent solution. ;)
>
> It would be helpful if some others would run the numbers on how Block75
> would adjust the block size over time:
>
> new max block size = 1000kb + (average block size over last 2016 blocks -
> 750kb)
>
> -t.k.
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: Andrew Johnson @ 2016-12-11 20:38 UTC (permalink / raw)
To: s7r, Bitcoin Dev
"You miss something obvious that makes this attack actually free of cost.
Nothing will "cost them more in transaction fees". A miner can create
thousands of transactions paying to himself, and not broadcast them to
the network, but hold them and include them in the blocks he mines. The
fees are collected by him because transactions are included in a block
that he mined and the left amount is in another wallet of the same
person. Repeat this continuously to fill blocks."
This is easily detectable as long as the network isn't heavily
partitioned (which is an assumption we make today in order for transaction
propagation to work reliably, as well as for xThin and CompactBlocks to
work effectively to reduce block transmission time). Other miners would
have an incentive to intentionally orphan blocks that contained a large
number of transactions that their nodes were unaware of.
I don't think this sort of attack would last long. Even later, when
subsidies are drastically reduced, you would still lose out on significant
genuine fee revenue if your orphan rate increased even 10% (one out of ten
of your poison blocks intentionally orphaned by another miner).
On Dec 11, 2016 11:12 AM, "s7r via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
t. khan wrote:
> Miners 'gaming' the Block75 system -
> There is no financial incentive for miners to attempt to game the
> Block75 system. Even if it were attempted and assuming the goal was to
> create bigger blocks, the maximum possible increase would be 25% over
> the previous block size. And, that size would only last for two weeks
> before readjusting down. It would cost them more in transaction fees to
> stuff the network than they could ever make up. To game the system,
> they'd have to game it forever with no possibility of profit.
>
This is an incentive, if few miners agree to create a large conglomerate
that will ultimately control the network.
You miss something obvious that makes this attack actually free of cost.
Nothing will "cost them more in transaction fees". A miner can create
thousands of transactions paying to himself, and not broadcast them to
the network, but hold them and include them in the blocks he mines. The
fees are collected by him because transactions are included in a block
that he mined and the left amount is in another wallet of the same
person. Repeat this continuously to fill blocks.
> Blocks would get too big -
> Eventually, blocks would get too big, but only if bandwidth stopped
> increasing and the cost of disk space stopped decreasing. Otherwise, the
> incremental adjustments made by Block75 (especially in combination with
> SegWit) wouldn't break anyone's connection or result in significantly
> more orphaned blocks.
>
Topology and bandwidth speed / hash rate of the network cannot be
controlled - if we make assumptions about these it might have terrible
consequences.
Even if we take in consideration that bandwidth will only grow and disk
space will only cost less (which is not something we can safely assume,
by the way) the hard limit max. block size cannot grow to unlimited
value (even if the growth happens over time). There is also a validation
cost in time for each block, for the health of the network any node
should be able to download _and_ validate a block, before next block
gets mined.
You said in another post that a permanent solution is preferred, rather
than kicking the can down the road. I fully agree, as well as many
others reading this list, but the permanent solution doesn't necessarily
have to be increasing the max block size dynamically.
If you think about it the other way around, dynamically growing the max
block size is also kicking the can down the road ... just without having
to touch it and get dust on the boot ;)
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
From: t. khan @ 2016-12-11 21:40 UTC (permalink / raw)
To: James Hilliard; +Cc: Bitcoin Protocol Discussion
On Sun, Dec 11, 2016 at 3:31 PM, James Hilliard <james.hilliard1@gmail.com>
wrote:
> What's most likely to happen is miners will max out the blocks they
> mine simply to try and get as many transaction fees as possible like
> they are doing right now(there will be a backlog of transactions at
> any block size). Having the block size double every year would likely
> cause major problems and this proposal allows over a 7x increase it
> seems.
Block75 is not exponential scaling. It's true that the maximum theoretical
increase in the first year would be 7x, but the next year would be a max of
2x, the next could only increase by 50%, and so on.
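A tiny check of those ratios (again assuming the worst case of every block
being 100% full, with 26 retarget periods per year):

    # Worst-case cap at the end of each year, in KB: 1000, 7500, 14000, 20500
    caps = [1000 + 250 * 26 * year for year in range(4)]
    for prev, cur in zip(caps, caps[1:]):
        print(round(cur / prev, 2))   # ~7.5x, then ~1.87x, then ~1.46x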
However, to reach the max in the first year: 1) ALL blocks would have to be
100% full and 2) transactions would have to increase at the same rate. We'd
have to be doing 2.1 million transactions a day within a year to make that
happen, and would therefore need blocks to be that big.
Realistically, max block size will grow (and shrink) at a much slower rate
... even more so with SegWit.
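For reference, a small sketch (Python) of where those yearly ceilings come
from, using the adjustment rule from the original post and assuming the
worst case where every block is 100% full; each 2016-block period can then
add at most 250 KB:
# Worst-case Block75 growth: every block exactly at the current max,
# readjusted every 2016 blocks (roughly 26 periods per year).
BASE = 1_000_000              # MAX_BLOCK_BASE_SIZE
TARGET = 750_000              # TARGET_CAPACITY (75%)
PERIODS_PER_YEAR = 26
max_size = BASE
yearly = []
for year in range(3):
    for _ in range(PERIODS_PER_YEAR):
        average = max_size                    # all blocks 100% full
        max_size = BASE + (average - TARGET)  # next period's cap (+250 KB)
    yearly.append(max_size)
print(yearly)                 # [7500000, 14000000, 20500000]
print(yearly[0] / BASE)       # ~7.5x in year one
print(yearly[1] / yearly[0])  # ~1.9x in year two
print(yearly[2] / yearly[1])  # ~1.5x in year three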
> The main problem with this proposal I think is that users effectively
> have no way to stop the miners from increasing block size
> continuously.
Yes they could, simply by not sending transactions. Users don't care at all
about block size. They just want their transactions to be fast and
relatively cheap.
-t.k.
[-- Attachment #2: Type: text/html, Size: 2041 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-11 21:40 ` t. khan
@ 2016-12-11 21:53 ` Bram Cohen
2016-12-11 21:55 ` James Hilliard
1 sibling, 0 replies; 24+ messages in thread
From: Bram Cohen @ 2016-12-11 21:53 UTC (permalink / raw)
To: t. khan, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 677 bytes --]
On Sun, Dec 11, 2016 at 1:40 PM, t. khan via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Block75 is not exponential scaling. It's true the max theoretical increase
> in the first year would be 7x, but the next year would be a max of 2x, and
> the next could only increase by 50% and so on.
>
With those limits there's very little reason not to simply have a fixed
schedule. Blocks are likely to all be full in the future anyway, with a
real fee market, and the idea that miners will hold back on block sizes
out of worry about propagation delay is a myth; even if it were true, it
would strongly favor collective pooling, which would be a very bad thing.
[-- Attachment #2: Type: text/html, Size: 1101 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-11 21:40 ` t. khan
2016-12-11 21:53 ` Bram Cohen
@ 2016-12-11 21:55 ` James Hilliard
2016-12-11 22:30 ` t. khan
1 sibling, 1 reply; 24+ messages in thread
From: James Hilliard @ 2016-12-11 21:55 UTC (permalink / raw)
To: t. khan; +Cc: Bitcoin Protocol Discussion
I think the main thing you're missing is that there will always be
transactions available to mine, simply because demand for blockspace is
effectively unbounded as fees approach 0. Nodes generally have a
static mempool size and a dynamic minrelaytxfee nowadays, so as
transactions get mined, lower-fee transactions get accepted into the
mempool. An individual opting not to send a transaction would not make
the blocks smaller, simply because there will always be other
transactions available (it would really only have an effect on the
transaction fees needed to get mined).
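A toy sketch of that mechanism (Python; this is not Bitcoin Core's actual
code, just the general shape of a size-capped mempool whose effective
minimum feerate floats as space fills and frees):
import heapq
# Toy size-capped mempool: when full, the lowest-feerate transactions are
# evicted and the eviction point becomes a floating feerate floor.
class ToyMempool:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.txs = []           # min-heap of (feerate, size)
        self.min_feerate = 0.0  # floats up as low-fee txs are pushed out
    def add(self, feerate, size):
        if feerate < self.min_feerate:
            return False        # rejected: below the current floor
        heapq.heappush(self.txs, (feerate, size))
        self.used += size
        while self.used > self.max_bytes:
            evicted_rate, evicted_size = heapq.heappop(self.txs)
            self.used -= evicted_size
            self.min_feerate = max(self.min_feerate, evicted_rate)
        return True
    def mine_block(self, block_bytes):
        # A block takes the highest-feerate transactions; the freed space
        # lets the floor relax (a toy stand-in for the real fee decay), so
        # lower-fee transactions that were priced out can enter again.
        taken = 0
        kept = []
        for feerate, size in sorted(self.txs, reverse=True):
            if taken + size <= block_bytes:
                taken += size
            else:
                kept.append((feerate, size))
        self.txs = kept
        heapq.heapify(self.txs)
        self.used -= taken
        self.min_feerate = 0.0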
On Sun, Dec 11, 2016 at 3:40 PM, t. khan <teekhan42@gmail.com> wrote:
>
> On Sun, Dec 11, 2016 at 3:31 PM, James Hilliard <james.hilliard1@gmail.com>
> wrote:
>>
>> What's most likely to happen is miners will max out the blocks they
>> mine simply to try and get as many transaction fees as possible like
>> they are doing right now(there will be a backlog of transactions at
>> any block size). Having the block size double every year would likely
>> cause major problems and this proposal allows over a 7x increase it
>> seems.
>
>
> Block75 is not exponential scaling. It's true the max theoretical increase
> in the first year would be 7x, but the next year would be a max of 2x, and
> the next could only increase by 50% and so on.
>
> However, to reach the max in the first year: 1) ALL blocks would have to be
> 100% full and 2) transactions would have to increase at the same rate. We'd
> have to be doing 2.1 million transactions a day within a year to make that
> happen, and would therefore need blocks to be that big.
>
> Realistically, max block size will grow (and shrink) at a much slower rate
> ... even more so with SegWit.
>
>>
>> The main problem with this proposal I think is that users effectively
>>
>> have no way to stop the miners from increasing block size
>> continuously.
>
>
> Yes they could, simply by not sending transactions. Users don't care at all
> about block size. They just want their transactions to be fast and
> relatively cheap.
>
> -t.k.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-11 21:55 ` James Hilliard
@ 2016-12-11 22:30 ` t. khan
0 siblings, 0 replies; 24+ messages in thread
From: t. khan @ 2016-12-11 22:30 UTC (permalink / raw)
To: James Hilliard; +Cc: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2676 bytes --]
The assumption you're making is incorrect. There is not an infinite number
of low-fee transactions.
Yes, the average fee will go down compared to today with Block75, but this
will balance itself between demand and the minimum fee miners are willing
to accept (not zero).
For example, add 200kb to today's max block size. How does that affect fees?
(200kb would likely be the first increase if Block75 activated today)
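As a quick sketch of that first adjustment (Python; the 950 KB average
below is an assumption chosen to match the ~200 KB figure, not a measured
number):
BASE = 1_000_000                           # MAX_BLOCK_BASE_SIZE
TARGET = 750_000                           # TARGET_CAPACITY
average_block = 950_000                    # assumed recent average
new_max = BASE + (average_block - TARGET)
print(new_max, new_max - BASE)             # 1200000 bytes, +200000 bytes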
-t.k.
On Sun, Dec 11, 2016 at 4:55 PM, James Hilliard <james.hilliard1@gmail.com>
wrote:
> I think the main thing you're missing is that there will always be
> transactions available to mine simply because demand for blockspace is
> effectively unbounded as fees approach 0. Nodes generally have a
> static mempool size and dynamic minrelaytxfee nowadays so as
> transactions get mined lower fee transactions get accepted into the
> mempool. An individual opting to not send a transaction would not make
> the blocks smaller simply because there will always be other
> transactions available(it would really only have an effect on the
> transaction fees needed to get mined).
>
> On Sun, Dec 11, 2016 at 3:40 PM, t. khan <teekhan42@gmail.com> wrote:
> >
> > On Sun, Dec 11, 2016 at 3:31 PM, James Hilliard <
> james.hilliard1@gmail.com>
> > wrote:
> >>
> >> What's most likely to happen is miners will max out the blocks they
> >> mine simply to try and get as many transaction fees as possible like
> >> they are doing right now(there will be a backlog of transactions at
> >> any block size). Having the block size double every year would likely
> >> cause major problems and this proposal allows over a 7x increase it
> >> seems.
> >
> >
> > Block75 is not exponential scaling. It's true the max theoretical
> increase
> > in the first year would be 7x, but the next year would be a max of 2x,
> and
> > the next could only increase by 50% and so on.
> >
> > However, to reach the max in the first year: 1) ALL blocks would have to
> be
> > 100% full and 2) transactions would have to increase at the same rate.
> We'd
> > have to be doing 2.1 million transactions a day within a year to make
> that
> > happen, and would therefore need blocks to be that big.
> >
> > Realistically, max block size will grow (and shrink) at a much slower
> rate
> > ... even more so with SegWit.
> >
> >>
> >> The main problem with this proposal I think is that users effectively
> >>
> >> have no way to stop the miners from increasing block size
> >> continuously.
> >
> >
> > Yes they could, simply by not sending transactions. Users don't care at
> all
> > about block size. They just want their transactions to be fast and
> > relatively cheap.
> >
> > -t.k.
>
[-- Attachment #2: Type: text/html, Size: 3502 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-11 20:38 ` Andrew Johnson
@ 2016-12-11 23:22 ` s7r
2016-12-18 21:53 ` James MacWhyte
0 siblings, 1 reply; 24+ messages in thread
From: s7r @ 2016-12-11 23:22 UTC (permalink / raw)
To: Andrew Johnson, Bitcoin Dev
[-- Attachment #1.1: Type: text/plain, Size: 2530 bytes --]
Andrew Johnson wrote:
> "You miss something obvious that makes this attack actually free of cost.
> Nothing will "cost them more in transaction fees". A miner can create
> thousands of transactions paying to himself, and not broadcast them to
> the network, but hold them and include them in the blocks he mines. The
> fees are collected by him because transactions are included in a block
> that he mined and the left amount is in another wallet of the same
> person. Repeat this continuously to fill blocks."
>
> This is easily detectable as long as the network isn't heavily
> partitioned(which is an assumption we make today in order for
> transaction propagation to work reliably as well as for xThin and
> CompactBlocks to work effectively to reduce block transmission time).
> Other miners would have an incentive to intentionally orphan blocks that
> contained a large number of transactions that their nodes were unaware of.
>
> I don't think this sort of attack would last long. Even later when
> subsidies are drastically reduced, you would still lose out on
> significant genuine fee revenue if your orphan rate increased even
> 10%(one out of ten of your poison blocks intentionally orphaned by
> another miner).
>
I disagree.
I didn't say this is impossible to detect, but it is hard to act against
it. One miner orphaning the block intentionally is very unlikely if that
miner acts rationally. It would only make sense if 51% of the hash rate
would intentionally orphan it. Otherwise the miner who intentionally
orphans a valid block, let's say block X, has to continue to mine one in
its place on top of block X-1, and by the time he finds one:
a) his block X' is rejected by other miners because they already have a
valid block X on top of which they have already started to mine;
b) block X+1 was already found and broadcast, so the miner who
orphaned X intentionally is on the shorter chain, ignored by the network.
So, one miner cannot do anything about it. Even a pool cannot do
anything about it, because the loss is greater. You need 51% of the hash
rate to intentionally orphan it, and all the miners forming that 51% need
to be colluding and know for sure that everyone will intentionally orphan
the said block; otherwise there's a huge risk of loss for whoever does it.
Nobody would gamble on doing this (I am not sure gambling is the right
word, since the loss is 100% certain here). But we are not discussing 51%
attacks, because those are a different topic.
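To put a rough number on that risk, here is a sketch (Python) using the
catch-up probability from the whitepaper's gambler's-ruin analysis, treating
the miner who deliberately orphans block X as starting one block behind; it
ignores fees and timing details, so it is only an approximation:
# Chance that a miner with hashrate share q ever overtakes the rest of
# the network after deliberately orphaning block X (one block behind).
# Whitepaper catch-up probability: (q/p)^z for q < p, else 1.
def catch_up_probability(q, z=1):
    p = 1.0 - q
    if q >= p:
        return 1.0   # a hashrate majority can always catch up eventually
    return (q / p) ** z
for q in (0.10, 0.25, 0.40, 0.51):
    print(q, round(catch_up_probability(q), 3))
# 0.1 -> 0.111, 0.25 -> 0.333, 0.4 -> 0.667, 0.51 -> 1.0:
# a minority miner most likely ends up on the losing chain, as argued above.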
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-11 23:22 ` s7r
@ 2016-12-18 21:53 ` James MacWhyte
2016-12-19 1:42 ` Tom Harding
0 siblings, 1 reply; 24+ messages in thread
From: James MacWhyte @ 2016-12-18 21:53 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 4293 bytes --]
Hi All,
I'm coming late to the party. I like the Block75 proposal.
Multiple people have said miners would/could stuff blocks with insincere
transactions to increase the block size, but it was never adequately
explained what they would gain from this. If there aren't enough legitimate
transactions to fill up the block, where do you plan to earn extra income
once the block is bigger?
Miners would be incentivized to include as many legitimate transactions as
possible, but if propagation time is as big an issue as some of you have
said it is, miners would also be incentivized to keep their blocks small
enough to propagate. So why not give them the choice? Once the block size
gets too big to propagate effectively, miners would be naturally
incentivized to limit how much data they put in each block, finding the
perfect balance.
In my opinion, none of the downsides presented so far have been a good
argument. Risk of a 51% attack is not unique to this proposal, saying "we
could also do that with hardcoded limits" doesn't actually point out any
problem with this proposal, and miners already have the ability to add or
withhold transactions from their blocks.
We trust our miners to serve us by acting in their own best interests, and
this proposal simply gives them more options for doing that. If anyone can
make a strong argument against it that would earn top marks in a high
school debate class, I'd love to hear it!
James
On Sun, Dec 11, 2016 at 3:23 PM s7r via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Andrew Johnson wrote:
> > "You miss something obvious that makes this attack actually free of cost.
> > Nothing will "cost them more in transaction fees". A miner can create
> > thousands of transactions paying to himself, and not broadcast them to
> > the network, but hold them and include them in the blocks he mines. The
> > fees are collected by him because transactions are included in a block
> > that he mined and the left amount is in another wallet of the same
> > person. Repeat this continuously to fill blocks."
> >
> > This is easily detectable as long as the network isn't heavily
> > partitioned(which is an assumption we make today in order for
> > transaction propagation to work reliably as well as for xThin and
> > CompactBlocks to work effectively to reduce block transmission time).
> > Other miners would have an incentive to intentionally orphan blocks that
> > contained a large number of transactions that their nodes were unaware
> of.
> >
> > I don't think this sort of attack would last long. Even later when
> > subsidies are drastically reduced, you would still lose out on
> > significant genuine fee revenue if your orphan rate increased even
> > 10%(one out of ten of your poison blocks intentionally orphaned by
> > another miner).
> >
>
> I disagree.
>
> I didn't say this is impossible to detect, but it is hard to act against
> it. One miner orphaning the block intentionally is very unlikely if that
> miner acts rationally. It would only make sense if 51% of the hash rate
> would intentionally orphan it. Otherwise the miner who intentionally
> orphans a valid block, let's say block X, has to continue to mine one in
> its place on top of block X-1, and by the time he finds one:
>
> a) his block X' is rejected by other miners because they already have a
> valid block X on top of which they already started to mine;
>
> b) block X+1 was already found and broadcasted, so the miner who
> orphaned X intentionally is on the shorter chain ignored by the network.
>
> So, one miner cannot do anything about it. Even a pool cannot do
> anything about it, because the loss is greater. You need 51% of the hash
> rate to intentionally orphan it, and all the miners forming 51% need to
> be colluding and know for sure that every one will intentionally orphan
> the said block, otherwise there's a huge risk of loss for who does it.
> Nobody would gamble to do this (I am not sure if gambling is the right
> word, since the loss is 100% sure here). But, we are not discussing 51%
> attacks because those are a different topic.
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 6134 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-18 21:53 ` James MacWhyte
@ 2016-12-19 1:42 ` Tom Harding
0 siblings, 0 replies; 24+ messages in thread
From: Tom Harding @ 2016-12-19 1:42 UTC (permalink / raw)
To: bitcoin-dev
James,
I share your conviction that miners are the natural gatekeepers of the
maximum block size.
The trouble I see with Block75 is that linear growth won't work forever.
Also, because it reads actual block sizes rather than miners' preferred max
blocksize, this proposal is sensitive to randomness in block timing and tx
rate, and so it incentivizes miners to manipulate their block content
unnaturally, in either the up or down direction, to influence the
calculation.
The EB/AD scheme of Bitcoin Unlimited recognizes implementation of the
max blocksize by miners, who publish their preferred max blocksize. But
it expects forks of unpredictable (probably short) length as network
behavior evolves.
BIP100, which also recognizes miner implementation of the max blocksize
but adds a change-support threshold and, like Block75, defines the timing
of max blocksize increases, looks superior to me.
On 12/18/2016 1:53 PM, James MacWhyte via bitcoin-dev wrote:
> Hi All,
>
> I'm coming late to the party. I like the Block75 proposal.
>
> Multiple people have said miners would/could stuff blocks with
> insincere transactions to increase the block size, but it was never
> adequately explained what they would gain from this. If there aren't
> enough legitimate transactions to fill up the block, where do you plan
> to earn extra income once the block is bigger?
>
> Miners would be incentivized to include as many legitimate
> transactions as possible, but if propagation time is as big an issue
> as some of you have said it is, miners would also be incentivized to
> keep their blocks small enough to propagate. So why not give them the
> choice? Once the block size gets too big to propagate effectively,
> miners would be naturally incentivized to limit how much data they put
> in each block, finding the perfect balance.
>
> In my opinion, none of the downsides presented so far have been a good
> argument. Risk of a 51% attack is not unique to this proposal, saying
> "we could also do that with hardcoded limits" doesn't actually point
> out any problem with this proposal, and miners already have the
> ability to add or withhold transactions from their blocks.
>
> We trust our miners to serve us by acting in their own best interests,
> and this proposal simply gives them more options for doing that. If
> anyone can make a strong argument against that would earn top marks in
> a high school debate class, I'd love to hear it!
>
> James
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-11 3:17 ` Daniele Pinna
2016-12-11 5:29 ` Eric Voskuil
@ 2016-12-11 9:21 ` Adam Back
1 sibling, 0 replies; 24+ messages in thread
From: Adam Back @ 2016-12-11 9:21 UTC (permalink / raw)
To: Bitcoin Dev, Daniele Pinna
[-- Attachment #1: Type: text/plain, Size: 5037 bytes --]
Well I think empirical game-theory observed on the network involves more
types of strategy than honest vs dishonest. At least 4, maybe 5 types of
strategy and I would argue lumping the strategies together results in
incorrect game theory conclusions and predictions.
A) altruistic players (protocol following by principle to be good network
citizens, will forgo incremental profits to aid network health) eg aim to
decentralize hashrate, will mine stuck transactions for free, run pools
with zero fee, put more effort into custom spam filtering, tend to be power
users, or long term invested etc.
B) honest players (protocol following but non-altruistic, or just
lazy/asleep and running default software, but still leaving some dishonest
profit untaken). E.g. reject spy mining, but take no charitable actions,
and will not retaliate in kind to semi-honest zero-sum attacks that reduce
their profits.
C) semi-honest (will violate the protocol if their attack can be plausibly
deniable or argued to be not hugely damaging to network security). E.g. spy
mining, centralised pools increasing other miners' orphan rates.
D) rational players (will violate the protocol for profit: will not overtly
steal from users via double spends, but anything short of that, even actions
that particularly disadvantage other miners or result in centralisation, is
treated as fair game), e.g. selfish mining. Would increase the block size by
filling it with pay-to-self transactions, if that increased orphans for
others.
E) dishonest players (aka hyper-rational: will actually steal from users
probabilistically if possible, not as worried about detection). Eg double
spend and probabilistic double spends (against onchain gambling games).
Would DDoS competing pools.
In part the strategies depend on investment horizon; it is long-term
rational for altruistic behavior to forgo incremental short-term profit to
improve user experience. Buying votes in an "ends justify the means"
mentality is hyper-rational, though fortunately most network players are
not dishonest. The so-called meta-incentive (unwillingness to risk hurting
bitcoin due to intended long-term hodling of coins or ASICs) can also
explain the bias towards honest or altruistic strategies.
Renting too much hashrate is risky, as it can sidestep the meta-incentive
and increase rational or dishonest strategies.
In particular, regarding differentiating this from a 51% attack: so long
as more than 50% are semi-honest, honest or altruistic, it won't happen. It
would seem that more than 66-75% actually are, because we have not seen
selfish mining on the network. Though I think conveniently slow block
publication by some players in the 60% spy-mining semi-honest cartel was
seen for a while; the claim has been that it was short-lived and due to a
technical issue.
It would be interesting to try to categorise and estimate the network %
engaging in each strategy. I think the information is mostly known.
Adam
On Dec 11, 2016 03:22, "Daniele Pinna via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> How is the adverse scenario you describe different from a plain old 51%
> attack? Each proposed protocol change where 51% or more of the network
> can potentially game the rules and break the system should be considered
> just as acceptable/unacceptable as another.
>
> There comes a point where some form of basic honesty must be assumed on
> behalf of participants benefiting from the system working properly and
> reliably.
>
> Afterall, what magic line of code prohibits all miners from simultaneously
> turning all their equipment off... just because?
>
> Maybe this 'one':
>
> "As long as a majority of CPU power is controlled by nodes that are not
> cooperating to attack the network, they'll generate the longest chain and
> outpace attackers. The network itself requires minimal structure."
>
> Is there such a thing as an unrecognizable 51% attack? One where the
> remaining 49% get dragged in against their will?
>
> Daniele
>
> On Dec 10, 2016 6:39 PM, "Pieter Wuille" <pieter.wuille@gmail.com> wrote:
>
>> On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> We have models for estimating the probability that a block is orphaned
>>> given average network bandwidth and block size.
>>>
>>> The question is, do we have objective measures of these two quantities?
>>> Couldn't we target an orphan_rate < max_rate?
>>>
>>
>> Models can predict orphan rate given block size and network/hashrate
>> topology, but you can't control the topology (and things like FIBRE hide
>> the effect of block size on this as well). The result is that if you're
>> purely optimizing for minimal orphan rate, you can end up with a single
>> (conglomerate of) pools producing all the blocks. Such a setup has no
>> propagation delay at all, and as a result can always achieve 0 orphans.
>>
>> Cheers,
>>
>> --
>> Pieter
>>
>>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
[-- Attachment #2: Type: text/html, Size: 7011 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-11 3:17 ` Daniele Pinna
@ 2016-12-11 5:29 ` Eric Voskuil
2016-12-11 9:21 ` Adam Back
1 sibling, 0 replies; 24+ messages in thread
From: Eric Voskuil @ 2016-12-11 5:29 UTC (permalink / raw)
To: Daniele Pinna, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 3018 bytes --]
The presumption of the mining aspect of the Bitcoin security model is that the mining majority is a broadly distributed set of independent people, not one person who controls a majority of the hash power.
You seem to have overlooked a qualifier in your Satoshi quote: "...by nodes that are not cooperating to attack the network". A single miner with majority hash power is of course cooperating with himself. At that point the question of whether he is attacking the network is moot, it's his network.
I believe that Pieter's point is that a system optimized for orphan rate may in effect be optimized for a single entity providing all double spend protection. That works directly against the central principle of Bitcoin security. The security of the money is a function of the number of independent miners and sellers.
e
> On Dec 10, 2016, at 7:17 PM, Daniele Pinna via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> How is the adverse scenario you describe different from a plain old 51% attack? Each proposed protocol change where 51% or more of the network can potentially game the rules and break the system should be considered just as acceptable/unacceptable as another.
>
> There comes a point where some form of basic honesty must be assumed on behalf of participants benefiting from the system working properly and reliably.
>
> Afterall, what magic line of code prohibits all miners from simultaneously turning all their equipment off... just because?
>
> Maybe this 'one':
>
> "As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure."
>
> Is there such a thing as an unrecognizable 51% attack? One where the remaining 49% get dragged in against their will?
>
> Daniele
>
>> On Dec 10, 2016 6:39 PM, "Pieter Wuille" <pieter.wuille@gmail.com> wrote:
>>> On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>>> We have models for estimating the probability that a block is orphaned given average network bandwidth and block size.
>>>
>>> The question is, do we have objective measures of these two quantities? Couldn't we target an orphan_rate < max_rate?
>>
>> Models can predict orphan rate given block size and network/hashrate topology, but you can't control the topology (and things like FIBRE hide the effect of block size on this as well). The result is that if you're purely optimizing for minimal orphan rate, you can end up with a single (conglomerate of) pools producing all the blocks. Such a setup has no propagation delay at all, and as a result can always achieve 0 orphans.
>>
>> Cheers,
>>
>> --
>> Pieter
>>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[-- Attachment #2: Type: text/html, Size: 4955 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-10 17:39 ` Pieter Wuille
@ 2016-12-11 3:17 ` Daniele Pinna
2016-12-11 5:29 ` Eric Voskuil
2016-12-11 9:21 ` Adam Back
0 siblings, 2 replies; 24+ messages in thread
From: Daniele Pinna @ 2016-12-11 3:17 UTC (permalink / raw)
To: Pieter Wuille; +Cc: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 1826 bytes --]
How is the adverse scenario you describe different from a plain old 51%
attack? Each proposed protocol change where 51% or more of the network
can potentially game the rules and break the system should be considered
just as acceptable/unacceptable as another.
There comes a point where some form of basic honesty must be assumed on
behalf of participants benefiting from the system working properly and
reliably.
Afterall, what magic line of code prohibits all miners from simultaneously
turning all their equipment off... just because?
Maybe this 'one':
"As long as a majority of CPU power is controlled by nodes that are not
cooperating to attack the network, they'll generate the longest chain and
outpace attackers. The network itself requires minimal structure."
Is there such a thing as an unrecognizable 51% attack? One where the
remaining 49% get dragged in against their will?
Daniele
On Dec 10, 2016 6:39 PM, "Pieter Wuille" <pieter.wuille@gmail.com> wrote:
> On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> We have models for estimating the probability that a block is orphaned
>> given average network bandwidth and block size.
>>
>> The question is, do we have objective measures of these two quantities?
>> Couldn't we target an orphan_rate < max_rate?
>>
>
> Models can predict orphan rate given block size and network/hashrate
> topology, but you can't control the topology (and things like FIBRE hide
> the effect of block size on this as well). The result is that if you're
> purely optimizing for minimal orphan rate, you can end up with a single
> (conglomerate of) pools producing all the blocks. Such a setup has no
> propagation delay at all, and as a result can always achieve 0 orphans.
>
> Cheers,
>
> --
> Pieter
>
>
[-- Attachment #2: Type: text/html, Size: 3124 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
2016-12-10 12:23 ` Daniele Pinna
@ 2016-12-10 17:39 ` Pieter Wuille
2016-12-11 3:17 ` Daniele Pinna
0 siblings, 1 reply; 24+ messages in thread
From: Pieter Wuille @ 2016-12-10 17:39 UTC (permalink / raw)
To: Daniele Pinna, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 811 bytes --]
On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> We have models for estimating the probability that a block is orphaned
> given average network bandwidth and block size.
>
> The question is, do we have objective measures of these two quantities?
> Couldn't we target an orphan_rate < max_rate?
>
Models can predict orphan rate given block size and network/hashrate
topology, but you can't control the topology (and things like FIBRE hide
the effect of block size on this as well). The result is that if you're
purely optimizing for minimal orphan rate, you can end up with a single
(conglomerate of) pools producing all the blocks. Such a setup has no
propagation delay at all, and as a result can always achieve 0 orphans.
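For readers who want the shape of such a model, here is a minimal sketch
(Python; the bandwidth and latency figures are assumptions, purely for
illustration):
import math
# Rough orphan-rate model: a block risks being orphaned when a competing
# block is found during its propagation delay. With exponential inter-block
# times (mean 600 s): P(orphan) ~ 1 - exp(-delay / 600), and delay grows
# with block size divided by effective bandwidth.
BLOCK_INTERVAL_S = 600.0
def propagation_delay_s(block_bytes, bandwidth_bytes_per_s, base_latency_s=2.0):
    return base_latency_s + block_bytes / bandwidth_bytes_per_s
def orphan_probability(block_bytes, bandwidth_bytes_per_s):
    delay = propagation_delay_s(block_bytes, bandwidth_bytes_per_s)
    return 1.0 - math.exp(-delay / BLOCK_INTERVAL_S)
for size in (1_000_000, 2_000_000, 8_000_000):
    print(size, round(orphan_probability(size, 1_000_000), 4))
# A single pool building on top of its own blocks has effectively zero
# delay, hence zero orphans, which is the degenerate optimum noted above.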
Cheers,
--
Pieter
[-- Attachment #2: Type: text/html, Size: 1547 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
[not found] ` <CAEgR2PEVwt_shiqwGjK6dPscRUTHayis0PaQO5Dj_fVEGGgaCQ@mail.gmail.com>
@ 2016-12-10 12:23 ` Daniele Pinna
2016-12-10 17:39 ` Pieter Wuille
0 siblings, 1 reply; 24+ messages in thread
From: Daniele Pinna @ 2016-12-10 12:23 UTC (permalink / raw)
To: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 9936 bytes --]
We have models for estimating the probability that a block is orphaned
given average network bandwidth and block size.
The question is, do we have objective measures of these two quantities?
Couldn't we target an orphan_rate < max_rate?
[-- Attachment #2: Type: text/html, Size: 13257 bytes --]
^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread, other threads:[~2016-12-19 1:42 UTC | newest]
Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-05 15:27 [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75) t. khan
2016-12-10 10:44 ` s7r
2016-12-10 12:05 ` Hampus Sjöberg
2016-12-11 0:26 ` t. khan
2016-12-11 0:40 ` James Hilliard
2016-12-11 1:07 ` Bram Cohen
2016-12-11 17:11 ` s7r
2016-12-11 19:55 ` t. khan
2016-12-11 20:31 ` James Hilliard
2016-12-11 21:40 ` t. khan
2016-12-11 21:53 ` Bram Cohen
2016-12-11 21:55 ` James Hilliard
2016-12-11 22:30 ` t. khan
2016-12-11 20:38 ` Andrew Johnson
2016-12-11 23:22 ` s7r
2016-12-18 21:53 ` James MacWhyte
2016-12-19 1:42 ` Tom Harding
2016-12-10 23:12 ` Bram Cohen
2016-12-11 0:52 ` t. khan
[not found] <CAEgR2PEMPo3veqJat7OAps1DzTSNFJmJiRbkFgYKvYfxqdbUiw@mail.gmail.com>
[not found] ` <CAEgR2PELB1_s+o0Bj4Kj9vS27eoqP7gV_VS_6QHQtTUAOnMORg@mail.gmail.com>
[not found] ` <CAEgR2PFpGWxngq=fKGi7CC_d+=5YWzWwbEEsQNEifCuHAAPAHw@mail.gmail.com>
[not found] ` <CAEgR2PHnrsdaBiDgywvE9amK8_yPE_hBo0yYOYwUk4T8n7wnAQ@mail.gmail.com>
[not found] ` <CAEgR2PEgPkRe76hW0Jj7_Z1EdmmNTpTAOKGm_of2dG=XXUOtnA@mail.gmail.com>
[not found] ` <CAEgR2PHew+fcJWnAt+t8umcwKu4TkshH=AFJ-8MeYysud2MkBQ@mail.gmail.com>
[not found] ` <CAEgR2PEVwt_shiqwGjK6dPscRUTHayis0PaQO5Dj_fVEGGgaCQ@mail.gmail.com>
2016-12-10 12:23 ` Daniele Pinna
2016-12-10 17:39 ` Pieter Wuille
2016-12-11 3:17 ` Daniele Pinna
2016-12-11 5:29 ` Eric Voskuil
2016-12-11 9:21 ` Adam Back