* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-07 22:02 Matt Corallo
@ 2015-05-07 23:24 ` Joseph Poon
2015-05-08 0:05 ` Peter Todd
` (3 subsequent siblings)
4 siblings, 0 replies; 31+ messages in thread
From: Joseph Poon @ 2015-05-07 23:24 UTC (permalink / raw)
To: Matt Corallo; +Cc: Bitcoin Dev
Hi Matt,
I agree that starting discussion on how to approach this problem is
necessary and it's difficult taking positions without details on what is
being discussed.
A simple hard 20-megabyte increase will likely create perverse
incentives; perhaps a method can exist with some safer transition. I
think ultimately, the underlying tension in this discussion is about
the relative power of miners. Any blocksize increase will increase the
influence of miners, and it is about understanding the tradeoffs of
each possible approach.
On Thu, May 07, 2015 at 10:02:09PM +0000, Matt Corallo wrote:
> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system. If we're just building Bitcoin
> to work in five years, great, but if we want it all to keep working as
> subsidy drops significantly, I'd like a better answer than "we'll deal
> with it when we get there" or "it will happen, all the predictions based
> on people's behavior today say so" (which are hopefully invalid thanks
> to the previous point). Ideally, I'd love to see some real free pressure
> already on the network starting to develop when we commit to hardforking
> in a year. Not just full blocks with some fees because wallets are
> including far greater fees than they really need to, but software which
> properly handles fees across the ecosystem, smart fee increases when
> transactions arent confirming (eg replace-by-fee, which could be limited
> to increase-in-fees-only for those worried about double-spends).
I think the long-term fee incentive structure needs to be significantly
more granular. We've all seen miners and pools take the path of least
resistance; often they just do whatever the community tells them to
blindly. While this status quo can change in the future, I think
designing sane defaults is a good path for any possible transition.
It seems especially reasonable to maintain fee pressure for normal
transactions during a hard-fork transition. It's possible to do so using
some kind of soft-cap structure. Building in a default soft-cap of 1
megabyte for some far future scheduled fork would seem like a sane thing
to do for bitcoin-core.
It also seems viable to be far more aggressive. What's your (and the
community's) opinion on some kind of coinbase voting protocol for
soft-cap enforcement? It's possible to write messages into the coinbase
for an enforceable soft-cap that orphans out any block which violates
these rules. It seems safest for the transition to have the first
hardforked block be above 1MB, but to have subsequent blocks default to
an enforced 1MB soft-cap. If miners agree to go above this, they must
vote in their coinbase to do so.
There's a separate discussion about this starting on:
CAE-z3OXnjayLUeHBU0hdwU5pKrJ6fpj7YPtGBMQ7hKXG3Sj6hw@mail.gmail.com
I think defaulting to some kind of mechanism that reads the coinbase is
a good idea; left alone, miners may not do so. That way, it's possible
to have your cake and eat it too: fee pressure will still exist, while
block sizes can increase (provided it's in the miners' greater interest
to do so).
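To make the idea concrete, here is a rough sketch of what such a rule
could look like (illustrative only: the vote tag, window, threshold and
the 2 MB raised cap are invented for the example and are not a concrete
proposal for bitcoin-core):

    # Hypothetical soft-cap vote check -- a sketch, not a real consensus rule.
    DEFAULT_SOFT_CAP = 1_000_000   # 1 MB default soft-cap after the hard fork
    RAISED_SOFT_CAP  = 2_000_000   # cap miners may vote to allow (invented figure)
    VOTE_TAG         = b"/CAP2M/"  # made-up coinbase marker meaning "allow bigger blocks"
    VOTE_WINDOW      = 1000        # look back over the last 1000 blocks
    VOTE_THRESHOLD   = 750         # require 750 of those coinbases to carry the tag

    def soft_cap_for(prev_coinbase_scripts):
        """Return the soft-cap to enforce, given the coinbase scripts of the
        previous VOTE_WINDOW blocks."""
        votes = sum(1 for script in prev_coinbase_scripts[-VOTE_WINDOW:]
                    if VOTE_TAG in script)
        return RAISED_SOFT_CAP if votes >= VOTE_THRESHOLD else DEFAULT_SOFT_CAP

    def block_violates_soft_cap(block_size, prev_coinbase_scripts):
        """Miners enforcing the rule would orphan (refuse to build on) any
        block larger than the currently voted soft-cap."""
        return block_size > soft_cap_for(prev_coinbase_scripts)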
The Lightning Network's security model in the long-term may rely on a
multi-tier soft-cap, but I'm not sure. If 2nd order systemic miner
incentives were not a concern, a system which has an enforced soft-cap
and permits breaching that soft-cap with some agreed upon much higher
fee would work best. LN works without this, but it seems to be more
secure if some kind of miner consensus rule is reached regarding
prioritizing behavior of 2nd-layer consensus states.
No matter how it's done, certain aspects of the security model of
something like Lightning are reliant upon having block-space
availability for transactions to enter the blockchain in a timely
manner (since "deprecated" channel states become valid again after some
agreed-upon block-time).
I think pretty much everyone agrees that the 1MB block cap will
eventually be a problem. While people may disagree with when that will
be and how it'll play out, I think we're all in agreement that
discussion about it is a good idea, especially when it comes to
resolving blocking concerns.
Starting a discussion on how a hypothetical blocksize increase will
occur, and on the necessary blocking/want-to-have features and
tradeoffs, seems to be a great way to approach this problem. The needs
of the Lightning Network may be best met by being able to prioritize a
large mass of timeout transactions at once (when a well-connected node
stops communicating).
--
Joseph Poon
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-07 22:02 Matt Corallo
2015-05-07 23:24 ` Joseph Poon
@ 2015-05-08 0:05 ` Peter Todd
2015-05-08 6:33 ` Arkady
2015-05-08 10:03 ` Mike Hearn
` (2 subsequent siblings)
4 siblings, 1 reply; 31+ messages in thread
From: Peter Todd @ 2015-05-08 0:05 UTC (permalink / raw)
To: Matt Corallo; +Cc: Bitcoin Dev
On Thu, May 07, 2015 at 10:02:09PM +0000, Matt Corallo wrote:
> OK, so lets do that. I've seen a lot of "I'm not entirely comfortable
> with committing to this right now, but think we should eventually", but
> not much "I'd be comfortable with committing to this when I see X". In
> the interest of ignoring debate and pushing people towards a consensus
> at all costs, ( ;) ) I'm gonna go ahead and suggest we talk about the
> second.
>
> Personally, there are several things that worry me significantly about
> committing to a blocksize increase, which I'd like to see resolved
> before I'd consider supporting a blocksize increase commitment.
>
> * Though there are many proposals floating around which could
> significantly decrease block propagation latency, none of them are
> implemented today. I'd expect to see these not only implemented but
> being used in production (though I dont particularly care about them
> being all that stable). I'd want to see measurements of how they perform
> both in production and in the face of high packet loss (eg across the
> GFW or in the case of small/moderate DoS). In addition, I'd expect to
> see analysis of how these systems perform in the worst-case, not just
> packet-loss-wise, but in the face of miners attempting to break the system.
It's really important that we remember that we're building security
software: it *must* hold up well even in the face of attack. That means
we need to figure out how it can be attacked, what the cost/profits of
such attacks are, and if the holes can be patched. Just testing the
software with simulated loads is insufficient.
Also, re: breaking, don't forget that this may not be a malicious act.
For instance, someone can send contradictory transactions to different
parts of the network simultaneously to prevent mempool consistency -
there's no easy way to fix this. There are also cases where miners have
different policies than others, e.g. version disagreements, commercial
contracts for tx mining, etc.
Finally, remember that it's not in miners' incentives in many situations
for their blocks to propagate to more than ~30% of the hashing power.(1)
Personally, I'm really skeptical that we'll ever find a block
propagation latency reduction technique that successfully meets all the
above criteria without changing the consensus algorithm itself.
* How do we ensure miners don't cheat and stop validating blocks fully
before building on them? This is a significant moral hazard with larger
blocks if fees don't become significant, and can lead to dangerous
forks. Also, think of the incentives: Why would a miner ever switch from
the longest chain, even if they don't actually have the blocks to back
it up?
* We need a clear understanding of how we expect new full nodes, pruned
or not, to sync up to the blockchain. Obviously 20MB blocks
significantly increases the time and data required to sync. Are we
planning on simply giving up on full validation and trusting others for
copies of UTXO sets? Are we going to rely on UTXO commitments? What
happens if the UTXO set size itself increases greatly?
> * I'd very much like to see someone working on better scaling
> technology, both in terms of development and in terms of getting
> traction in the marketplace. I know StrawPay is working on development,
> though its not obvious to me how far they are from their website, but I
> dont know of any commitments by large players (either SPV wallets,
> centralized wallet services, payment processors, or any others) to
> support such a system (to be fair, its probably too early for such
> players to commit to anything, since anything doesnt exist in public).
A good start would be for those players to commit to the general
principles of these systems; if they can't commit, they should explain why.
For instance I'd be very interested in knowing if services like Coinbase
see legal issues with adopting technologies such as payment channels
between hosted wallet providers, payment processors, etc. I certainly
wouldn't be surprised if they see doing anything not on-blockchain as a
source of legal uncertainty - based on discussions I've had with
regulatory types in this space it sounds like there's a reasonable
chance protocol details such as requiring that transactions happen on a
public blockchain will be "baked into" regulatory requirements.
> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system. If we're just building Bitcoin
> to work in five years, great, but if we want it all to keep working as
> subsidy drops significantly, I'd like a better answer than "we'll deal
> with it when we get there" or "it will happen, all the predictions based
> on people's behavior today say so" (which are hopefully invalid thanks
> to the previous point). Ideally, I'd love to see some real free pressure
> already on the network starting to develop when we commit to hardforking
> in a year.
Agreed.
> Not just full blocks with some fees because wallets are
> including far greater fees than they really need to, but software which
> properly handles fees across the ecosystem, smart fee increases when
> transactions arent confirming (eg replace-by-fee, which could be limited
> to increase-in-fees-only for those worried about double-spends).
FWIW I've got some funding to implement first-seen-safe replace-by-fee.
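(For readers unfamiliar with the term: the rough idea of "first-seen-safe"
is that a conflicting transaction is only accepted if it still pays every
output of the transaction it replaces and adds fee, so recipients cannot
be made worse off. The sketch below is only a loose reading of that idea
in Python; the actual policy rules may differ, and the field names are
illustrative.)

    from collections import defaultdict

    def outputs_by_script(tx):
        """Total value paid to each scriptPubKey (field names are illustrative)."""
        totals = defaultdict(int)
        for out in tx.outputs:
            totals[out.script_pubkey] += out.value
        return totals

    def fss_rbf_acceptable(old_tx, new_tx):
        """Accept new_tx as a replacement for old_tx only if it is
        'first-seen-safe': every output of old_tx is still paid at least as
        much, and the replacement pays a strictly higher fee."""
        old_totals = outputs_by_script(old_tx)
        new_totals = outputs_by_script(new_tx)
        for script, value in old_totals.items():
            if new_totals.get(script, 0) < value:
                return False
        return new_tx.fee > old_tx.fee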
1) http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg03200.html
--
'peter'[:-1]@petertodd.org
00000000000000000fe0a96ac84aeb2e4e5c246e947cd8e759bd5fb158a16caf
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-08 0:05 ` Peter Todd
@ 2015-05-08 6:33 ` Arkady
0 siblings, 0 replies; 31+ messages in thread
From: Arkady @ 2015-05-08 6:33 UTC (permalink / raw)
To: Bitcoin Dev
--[remove this line and above]--
On Thu, 7 May 2015, Gregory Maxwell wrote:
> Date: Thu, 7 May 2015 00:37:54 +0000
> From: Gregory Maxwell <gmaxwell@gmail.com>
> To: Matt Corallo <bitcoin-list@bluematt.me>
> Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net>
> Subject: Re: [Bitcoin-development] Block Size Increase
>
> Thanks Matt; I was actually really confused by this sudden push with
> not a word here or on Github--so much so that I responded on Reddit to
> people pointing to commits in Gavin's personal repository saying they
> were reading too much into it.
I saw this. I was also pointing this out to the people who were asking
me. A commit to a personal repository does not at first seem more than
experimental. sipa commits weird/neat things to private branches all
the time, after all.
> to share behavior. In the case of mining, we're trying to optimize the
> social good of POW security. (But the analogy applies in other ways
> too:
About the only argument IMO in favour of block size increases is to
assume that making more room in a block will make it attractive to use
for more people at some point in the future: increasing transaction
velocity, increasing economy size, increasing value overall.
> increases to the chain size are largely an externality; miners enjoy the
> benefits, everyone else takes the costs--either in reduced security or
> higher node operating costs.)
Who else but miners and pool operators will run full nodes when full
nodes are being shut down because they are too large and unwieldy to
maintain? It is already so that casual users refuse to run full nodes.
This fact is indisputable. The only question remaining is, "Do we care?"
Arguments against users who feel that the dataset is too large to run a
full node, full-time, start from a premise that these users are a static
and irrelevant fraction. Is this even true? "Do we care?" I do. I will
shortly only be able to run half the nodes I currently do thanks to the
growth of the blockchain at its current rate.
> One potential argument is that maybe miners would be _regulated_ to
> behave correctly. But this would require undermining the openness of the
> system--where anyone can mine anonymously--in order to enforce behavior,
> and that same enforcement mechanism would leave a political lever to
> impose additional rules that violate the extra properties of the system.
I would refuse to mine under such a regulated regime; moreover, I would
enjoy forking away from this, and, I suspect, the only miners who remain
would be those whose ultimate motivations do not coincide with the users.
That is, the set of miners who are users, and the set of users who are
miners, would be wholly non-intersecting.
> So far the mining ecosystem has become incredibly centralized over
> time.
This is unfortunate but true.
> of the regular contributors to Bitcoin Core do. Many participants
> have never mined or only did back in 2010/2011... we've basically
> ignored the mining ecosystem, and this has had devastating effects,
> causing a latent undermining of the security model: hacking a dozen or
> so computers--operated under totally unknown and probably not strong
> security policies--could compromise the network at least at the tip...
The explicit form of the block dictated by the reference client, and
agreed to by the people who were sold on bitcoin near the beginning
(myself included), was the notion that the rules were static; that the
nature of transaction foundations and the subsidies would not be
altered. Here we have a hardfork being contemplated which is not only
controversial, but does not even address some of the highest-utility
and most-requested features in peoples' hardfork wishlists.
The fact that mining has effectively been centralized directly implies
that the destabilizing changes some well-heeled (and thus theoretically
capable, at least) people have explicitly begun planning--forking the
blockchain--will have an unknown and completely unforeseen combined
effect.
We can pretend that, "If merchants and miners and exchanges go along, then
who else matters," but the reality is that the value in bitcoin exists
because *people* use it for real transactions: Not miners, whose profits
are parasitically fractionally based on the quality and strength of the
bitcoin economy as a whole; not exchanges who lubricate transactions in
service to the economy; not even today's merchants whose primary means of
accepting bitcoin seems to be to convert them instantly to fiat and not
participate meaningfully in the economy at all; not enriched felons; but
actual users themselves.
> Rightfully we should be regarding this as an emergency, and probably
> should have been since 2011.
There are two ways to look at it, assuming that the blocksize change
increases bitcoin's value to people after all: mining centralization
will be corrected, or mining centralization will not be corrected.
I would argue that rapidly increasing profitability at this point will
exacerbate the mining centralization problem, in much the same way as
when people were throwing money at, and unknowingly funding, the massive
frauds of the current cabals when bitcoin's exchange-driven rise to
$1200 was first realized.
Thus, even if the premise were true, what will a blocksize increase
achieve, given that mining centralization itself is a bigger systemic
risk?
> Hardfork changes should only be made if they're almost completely
> uncontroversial--where virtually everyone can look at the available
> data
> and say "yea, that isn't undermining my property rights or future use
> of Bitcoin; it's no big deal".
The recent "revelation" that there are masses of paid trolls on popular
forum sites like reddit who supposedly don't even know who is hiring
them,
and the anger of more vociferous commenters in general, does not
invalidate
the relevance of every non-"industry" voice. I think elevating the
discussion away from the users does the system and the development
process
as a whole quite an injustice.
> I'm curious as to what discussions people have seen; e.g., are people
> even here aware of these concerns? Are you aware of things like the
> hashcash mediated dynamic blocksize limiting?
I have seen most of these; or the ideas seem obvious based on their
names.
> About proposals like lightning network (instant transactions and
> massive
> scale, in exchange for some short term DOS risk if a counterparty opts
> out)? Do people (other than Mike Hearn; I guess) think a future where
> everyone depends on a small number of "Google scale" node operations
> for
> the system is actually okay? (I think not, and if so we're never going
> to
> agree--but it can be helpful to understand when a disagreement is
> ideological).
It is not okay. If the current mining cabals continue to exist, and
flourish, and the developers make major changes that ignore this glaring
elephant, then the decentralized promise of bitcoin will be put more at
risk.
signmessage 1DdcrjT9Yqb6U58wVMA2e7untFbz2rmZd4
"49786791f4d0a260689867ccdfb2cc5b8460984e335504444ade113d2768505c"
G6NPl7Wklo9lcdgeVI2H2pexzgqD0KPHhI/wAe32DBm8m59Qf31j5d4tsx5drcql/8wPeIb0QGarr/o4VIOLLGE=
--[remove this line and below]--
HHsTfiZ/S7+GNYRwws+QyAr+6/MgDz0Jyntl7CAvjhdfzbnwPorybQUXxRw3CE4DgYgAy1zLanE8H/5NK+l3UlE=
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-07 22:02 Matt Corallo
2015-05-07 23:24 ` Joseph Poon
2015-05-08 0:05 ` Peter Todd
@ 2015-05-08 10:03 ` Mike Hearn
2015-05-08 16:37 ` Peter Todd
2015-05-29 22:36 ` Gavin Andresen
2015-05-29 23:42 ` Chun Wang
4 siblings, 1 reply; 31+ messages in thread
From: Mike Hearn @ 2015-05-08 10:03 UTC (permalink / raw)
To: Matt Corallo; +Cc: Bitcoin Dev
>
> * Though there are many proposals floating around which could
> significantly decrease block propagation latency, none of them are
> implemented today.
With a 20mb cap, miners still have the option of the soft limit.
I would actually be quite surprised if there were no point along the road
from 1mb to 20mb where miners felt a need to throttle their block sizes
artificially, for the exact reason you point out: propagation delays.
But we don't *need* to have fancy protocol upgrades implemented right now.
All we need is to demolish one bottleneck (the hard cap) so we can then
move on and demolish the next one (whatever that is, probably faster
propagation). Scaling is a series of walls we punch through as we encounter
them. One down, onto the next. We don't have to tackle them all
simultaneously.
FWIW I don't think the GFW just triggers packet loss, these days. It's
blocked port 8333 entirely.
> * I'd very much like to see someone working on better scaling
> technology ... I know StrawPay is working on development,
>
So this request is already satisfied, isn't it? As you point out, expecting
more at this stage in development is unreasonable; there's nothing for
anyone to experiment with or commit to.
They have code here, by the way:
https://github.com/strawpay
You can find their fork of MultiBit HD, their implementation library, etc.
They've contributed patches and improvements to the payment channels code
we wrote.
> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system.
>
What are your thoughts on using assurance contracts to fund network
security?
I don't *know* if hashing assurance contracts (HACs) will work. But I don't
know they won't work either. And right now I'm pretty sure that plain old
fee pressure won't work. Demand doesn't outstrip supply forever - people
find substitutes.
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-08 10:03 ` Mike Hearn
@ 2015-05-08 16:37 ` Peter Todd
2015-05-08 19:47 ` Tier Nolan
0 siblings, 1 reply; 31+ messages in thread
From: Peter Todd @ 2015-05-08 16:37 UTC (permalink / raw)
To: Mike Hearn; +Cc: Bitcoin Dev
On Fri, May 08, 2015 at 12:03:04PM +0200, Mike Hearn wrote:
> >
> > * Though there are many proposals floating around which could
> > significantly decrease block propagation latency, none of them are
> > implemented today.
>
>
> With a 20mb cap, miners still have the option of the soft limit.
The soft-limit is there so miners themselves produce smaller blocks; the
soft-limit does not prevent other miners from producing larger blocks.
As we're talking about ways that other miners can use 20MB blocks to
harm the competition, talking about the soft-limit is irrelevant.
Similarly, as security engineers we must plan for the worst case; as
we've seen from your campaigns to raise the soft-limit(1), even at a
time when the vast majority of transaction volume was from one user
(SatoshiDice), soft-limits are an extremely weak form of control.
For the purposes of discussing blocksize increase requirements we can
stop talking about the soft-limit.
1) https://bitcointalk.org/index.php?topic=149668.0
--
'peter'[:-1]@petertodd.org
000000000000000009344ba165781ee352f93d657c8b098c8e518e6011753e59
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-08 16:37 ` Peter Todd
@ 2015-05-08 19:47 ` Tier Nolan
2015-05-09 3:08 ` Peter Todd
0 siblings, 1 reply; 31+ messages in thread
From: Tier Nolan @ 2015-05-08 19:47 UTC (permalink / raw)
Cc: Bitcoin Dev
On Fri, May 8, 2015 at 5:37 PM, Peter Todd <pete@petertodd.org> wrote:
> The soft-limit is there so miners themselves produce smaller blocks; the
> soft-limit does not prevent other miners from producing larger blocks.
>
I wonder if having a "miner" flag would be good for the network.
Clients for general users and merchants would have a less strict rule than
the rule for miners. Miners who don't set their miner flag might get
orphaned off the chain.
For example, the limits could be set up as follows:
Clients: 20MB
Miners: 4MB
When in "miner mode", the client would reject 4MB blocks and wouldn't build
on them. The reference client might even track the miner and the non-miner
chain tip.
Miners would refuse to build on 5MB blocks, but merchants and general users
would accept them.
This allows the miners to soft fork the limit at some point in the future.
If 75% of miners decided to up the limit to 8MB, then all merchants and the
general users would accept the new blocks. It could follow the standard
soft fork rules.
This is a more general version of the system where miners are allowed to
vote on the block size (subject to a higher limit).
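To make the two-tier rule concrete, here is a minimal sketch (my
illustration only, using the example figures above and a hypothetical
"miner" flag; not a patch):

    # Illustrative two-tier block size rule with a hypothetical "miner" flag.
    CLIENT_MAX_BLOCK_SIZE = 20_000_000   # looser limit everyone enforces
    MINER_MAX_BLOCK_SIZE  =  4_000_000   # stricter limit miners hold each other to

    def block_acceptable(block_size, miner_mode):
        """Clients accept anything up to the client limit; a node in miner
        mode additionally refuses to accept or build on larger blocks."""
        limit = MINER_MAX_BLOCK_SIZE if miner_mode else CLIENT_MAX_BLOCK_SIZE
        return block_size <= limit

    # A 5MB block is fine for merchants and general users, but a node in
    # miner mode would not build on it.
    assert block_acceptable(5_000_000, miner_mode=False)
    assert not block_acceptable(5_000_000, miner_mode=True)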
A similar system is where clients track all header trees. Your wallet
could warn you that there is an invalid tree that has > 75% of the hashing
power and you might want to upgrade.
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-08 19:47 ` Tier Nolan
@ 2015-05-09 3:08 ` Peter Todd
2015-05-16 4:39 ` Stephen
2015-05-16 11:25 ` Tier Nolan
0 siblings, 2 replies; 31+ messages in thread
From: Peter Todd @ 2015-05-09 3:08 UTC (permalink / raw)
To: Tier Nolan; +Cc: Bitcoin Dev
On Fri, May 08, 2015 at 08:47:52PM +0100, Tier Nolan wrote:
> On Fri, May 8, 2015 at 5:37 PM, Peter Todd <pete@petertodd.org> wrote:
>
> > The soft-limit is there so miners themselves produce smaller blocks; the
> > soft-limit does not prevent other miners from producing larger blocks.
> >
>
> I wonder if having a "miner" flag would be good for the network.
Makes it trivial to find miners and DoS attack them - a huge risk to the
network as a whole, as well as the miners.
Right now pools already get DoSed all the time through their work
submission systems; getting DoS attacked via their nodes as well would
be a disaster.
> When in "miner mode", the client would reject 4MB blocks and wouldn't build
> on them. The reference client might even track the miner and the non-miner
> chain tip.
>
> Miners would refuse to build on 5MB blocks, but merchants and general users
> would accept them.
That'd be an excellent way to double-spend merchants, significantly
increasing the chance that the double-spend would succeed as you only
have to get sufficient hashing power to get the lucky blocks; you don't
need enough hashing power to *also* ensure those blocks don't become the
longest chain, removing the need to sybil attack your target.
--
'peter'[:-1]@petertodd.org
000000000000000004bd67400df7577a30e6f509b6bd82633efeabe6395eb65a
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-09 3:08 ` Peter Todd
@ 2015-05-16 4:39 ` Stephen
2015-05-16 11:29 ` Tier Nolan
2015-05-16 11:25 ` Tier Nolan
1 sibling, 1 reply; 31+ messages in thread
From: Stephen @ 2015-05-16 4:39 UTC (permalink / raw)
To: Peter Todd; +Cc: Bitcoin Dev
Comments inline:
> On May 8, 2015, at 11:08 PM, Peter Todd <pete@petertodd.org> wrote:
>
> Makes it trivial to find miners and DoS attack them - a huge risk to the
> network as a whole, as well as the miners.
>
> Right now pools already get DoSed all the time through their work
> submission systems; getting DoS attacked via their nodes as well would
> be a disaster.
It seems that using a -miner flag to follow rules about smaller blocks would only reveal miner nodes if one sent the node a solved block that was valid in every way except the block size. While not impossible, I wouldn't call this trivial, as it still requires wasting an entire block's worth of energy.
>> When in "miner mode", the client would reject 4MB blocks and wouldn't build
>> on them. The reference client might even track the miner and the non-miner
>> chain tip.
>>
>> Miners would refuse to build on 5MB blocks, but merchants and general users
>> would accept them.
>
> That'd be an excellent way to double-spend merchants, significantly
> increasing the chance that the double-spend would succeed as you only
> have to get sufficient hashing power to get the lucky blocks; you don't
> need enough hashing power to *also* ensure those blocks don't become the
> longest chain, removing the need to sybil attack your target.
>
I think this could be mitigated by counting confirmations differently. We should think of confirmations as only coming from blocks following the miners' more strict rule set. So if a merchant were to see payment for the first time in a block that met their own size restrictions but not the miners', then they would simply count it as unconfirmed.
If they get deep enough in the chain, though, the client should probably count them as being confirmed anyway, even if they don't meet the client nodes' expectation of the miners' block size limit. This happening probably just means that the client has not updated their software (or -minermaxblocksize configuration, depending on how it is implemented) in a long time.
I actually like Tier's suggestion quite a bit. I think we could have the default client limit set to some higher number, and have miners agree out of band on the latest block size limit. Or maybe even build in a way to vote into the blockchain.
Best,
Stephen
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-16 4:39 ` Stephen
@ 2015-05-16 11:29 ` Tier Nolan
0 siblings, 0 replies; 31+ messages in thread
From: Tier Nolan @ 2015-05-16 11:29 UTC (permalink / raw)
Cc: Bitcoin Dev
On Sat, May 16, 2015 at 5:39 AM, Stephen <stephencalebmorse@gmail.com>
wrote:
> I think this could be mitigated by counting confirmations differently. We
> should think of confirmations as only coming from blocks following the
> miners' more strict rule set. So if a merchant were to see payment for the
> first time in a block that met their own size restrictions but not the
> miners', then they would simply count it as unconfirmed.
>
In effect, there is a confirm penalty for less strict blocks. Confirms =
max(miner_confirms, merchant_confirms - 3, 0)
Merchants who don't upgrade end up having to wait longer to hit
confirmations.
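In code, that counting rule might look something like this (a sketch of
the idea only; "miner_valid" is an invented flag meaning the block also
met the stricter miner-side rules):

    def effective_confirms(confirming_blocks):
        """confirming_blocks: the block containing the transaction plus the
        blocks built on top of it, each flagged with whether it met the
        stricter miner rules. Implements
        confirms = max(miner_confirms, merchant_confirms - 3, 0)."""
        merchant_confirms = len(confirming_blocks)
        miner_confirms = sum(1 for b in confirming_blocks if b.miner_valid)
        return max(miner_confirms, merchant_confirms - 3, 0)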
> If they get deep enough in the chain, though, the client should probably
> count them as being confirmed anyway, even if they don't meet the client
> nodes' expectation of the miners' block size limit. This happening probably
> just means that the client has not updated their software (or
> -minermaxblocksize configuration, depending on how it is implemented) in a
> long time.
>
That is a good idea. Any parameters that have miner/merchant differences
should be modifiable (but only upwards) in the command line.
"Why are my transactions taking longer to confirm?"
"There was a soft fork to make the block size larger and your client is
being careful. You need to add "minermaxblocksize=4MB" to your
bitcoin.conf file."
Hah, it could be called a "semi-hard fork"?
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-09 3:08 ` Peter Todd
2015-05-16 4:39 ` Stephen
@ 2015-05-16 11:25 ` Tier Nolan
1 sibling, 0 replies; 31+ messages in thread
From: Tier Nolan @ 2015-05-16 11:25 UTC (permalink / raw)
Cc: Bitcoin Dev
On Sat, May 9, 2015 at 4:08 AM, Peter Todd <pete@petertodd.org> wrote:
> > I wonder if having a "miner" flag would be good for the network.
>
> Makes it trivial to find miners and DoS attack them - a huge risk to the
> network as a whole, as well as the miners.
>
To mitigate against this, two chaintips could be tracked: the miner tip
and the client tip.
Miners would build on the miner tip. When performing client services,
like wallets, they would use the client tip.
The client would act exactly the same as any node; the only change would
be that it gives out mining work based on the miner tip.
If the two tips end up significantly forking, there would be a warning
to the miner, and perhaps the node would eventually refuse to give out
new work. That would happen when there was a miner-level hard fork.
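A minimal sketch of that dual-tip bookkeeping (purely illustrative; the
field names and the divergence threshold are invented for the example):

    # Illustrative dual-chaintip tracking for a mining node.
    MAX_TIP_DIVERGENCE = 6   # invented threshold: stop handing out work past this

    class DualTipNode:
        def __init__(self):
            self.client_tip = None   # best chain under the looser client rules
            self.miner_tip = None    # best chain under the stricter miner rules

        def on_new_block(self, block):
            # Any client-valid block may extend the client tip...
            if self.client_tip is None or block.work > self.client_tip.work:
                self.client_tip = block
            # ...but only miner-valid blocks may extend the miner tip.
            if block.miner_valid and (self.miner_tip is None
                                      or block.work > self.miner_tip.work):
                self.miner_tip = block

        def get_work_template(self):
            # Wallet/RPC services follow the client tip; block templates for
            # hashers are built on the miner tip.
            if (self.client_tip and self.miner_tip and
                    self.client_tip.height - self.miner_tip.height > MAX_TIP_DIVERGENCE):
                raise RuntimeError("miner and client tips have diverged; "
                                   "refusing to hand out new work")
            return self.miner_tip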
> That'd be an excellent way to double-spend merchants, significantly
> increasing the chance that the double-spend would succeed as you only
> have to get sufficient hashing power to get the lucky blocks; you don't
> need enough hashing power to *also* ensure those blocks don't become the
> longest chain, removing the need to sybil attack your target.
>
To launch that attack, you need to produce fake blocks. That is
expensive.
Stephen's suggestion to wait more than one block before counting a
transaction as confirmed would also help mitigate this.
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-07 22:02 Matt Corallo
` (2 preceding siblings ...)
2015-05-08 10:03 ` Mike Hearn
@ 2015-05-29 22:36 ` Gavin Andresen
2015-05-29 23:25 ` Matt Corallo
2015-05-29 23:42 ` Chun Wang
4 siblings, 1 reply; 31+ messages in thread
From: Gavin Andresen @ 2015-05-29 22:36 UTC (permalink / raw)
To: Matt Corallo; +Cc: Bitcoin Dev
Matt brought this up on Twitter; I have no idea why I didn't respond weeks
ago (busy writing blog posts, probably):
On Thu, May 7, 2015 at 6:02 PM, Matt Corallo <bitcoin-list@bluematt.me>
wrote:
>
>
> * Though there are many proposals floating around which could
> significantly decrease block propagation latency, none of them are
> implemented today.
If block propagation isn't fixed, then miners have a strong incentive to
create smaller blocks.
So the max block size is irrelevant; it won't get hit.
> In addition, I'd expect to
> see analysis of how these systems perform in the worst-case, not just
> packet-loss-wise, but in the face of miners attempting to break the system.
>
See http://gavinandresen.ninja/are-bigger-blocks-better-for-bigger-miners
for analysis of "but that means bigger miners can get an advantage"
argument.
Executive summary: if little miners are stupid and produce huge blocks,
then yes, big miners have an advantage.
But they're not, so they won't.
Until the block reward goes away, and assuming transaction fees become an
important source of revenue for miners.
I think it is too early to worry about that; see:
http://gavinandresen.ninja/when-the-block-reward-goes-away
> * I'd very much like to see someone working on better scaling
> technology, both in terms of development and in terms of getting
> traction in the marketplace.
Ok. What does this have to do with the max block size?
Are you arguing that work won't happen if the max block size increases?
> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system.
Again, see http://gavinandresen.ninja/when-the-block-reward-goes-away for
what I think about that.
--
--
Gavin Andresen
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-29 22:36 ` Gavin Andresen
@ 2015-05-29 23:25 ` Matt Corallo
[not found] ` <CABsx9T3__mHZ_kseRg-w-x2=8v78QJLhe+BWPezv+hpbFCufpw@mail.gmail.com>
0 siblings, 1 reply; 31+ messages in thread
From: Matt Corallo @ 2015-05-29 23:25 UTC (permalink / raw)
To: Gavin Andresen; +Cc: Bitcoin Dev
On 05/29/15 22:36, Gavin Andresen wrote:
> Matt brought this up on Twitter, I have no idea why I didn't respond
> weeks ago (busy writing blog posts, probably):
>
> On Thu, May 7, 2015 at 6:02 PM, Matt Corallo <bitcoin-list@bluematt.me
> <mailto:bitcoin-list@bluematt.me>> wrote:
>
>
>
> * Though there are many proposals floating around which could
> significantly decrease block propagation latency, none of them are
> implemented today.
>
>
> If block propagation isn't fixed, then mines have a strong incentive to
> create smaller blocks.
>
> So the max block size is irrelevant, it won't get hit.
Sadly, this is very far from the whole story. The issue of miners
optimizing for returns has come up several times during this
discussion, and, sadly, geographically colocated miners who are
optimizing for returns with a free-floating blocksize will optimize
away 50% of the network!
>
> In addition, I'd expect to
> see analysis of how these systems perform in the worst-case, not just
> packet-loss-wise, but in the face of miners attempting to break the
> system.
>
>
> See http://gavinandresen.ninja/are-bigger-blocks-better-for-bigger-miners for
> analysis of "but that means bigger miners can get an advantage" argument.
>
> Executive summary: if little miners are stupid and produce huge blocks,
> then yes, big miners have an advantage.
I'll talk about transaction fees in a second, but there are several
problems with this already. As pointed out in the original mail, the GFW
has already been known to interfere with Bitcoin P2P traffic. So now by
"little" miners, you mean any miner who is not located in mainland
China? What's worse, the disadvantage is symmetric - little miners are at
a disadvantage when *anyone* mines a bigger block, and miners don't even
have to be "evil" for this to happen - just optimize for profits.
> But they're not, so they won't.
I don't know what you're referring to with this. Are you claiming little
miners today optimize for relay times and have good visibility into the
Bitcoin network and calculate an optimal block size based on this (or
would with a 20MB block size)?
> Until the block reward goes away, and assuming transaction fees become
> an important source of revenue for miners.
> I think it is too early to worry about that; see:
>
> http://gavinandresen.ninja/when-the-block-reward-goes-away
You don't make any points here with which I can argue, but let me respond
with the reason /I/ think it is a problem worth thinking a little bit
about... If we increase the blocksize sufficiently such that transaction
fees are not the way in which miners make their money, then either
miners are not being funded (ie hashpower has to drop to very little),
or the only people mining/funding miners are large orgs who are
"running" Bitcoin (ie the web wallets, payment processors, big
merchants, and exchanges of the world). Sadly, this is no longer a
decentralized Bitcoin and is, in fact, pretty much how the banking world
works today.
I'm not sure who, if anyone, claims Bitcoin is novel or interesting for
any reason other than its decentralization properties, and, in the world
which you are apparently proposing, the "natural" course of things is to
very strongly centralize.
> * I'd very much like to see someone working on better scaling
> technology, both in terms of development and in terms of getting
> traction in the marketplace.
>
>
> Ok. What does this have to do with the max block size?
>
> Are you arguing that work won't happen if the max block size increases?
Yes, I am arguing that by increasing the blocksize the incentives to
actually make Bitcoin scale go away. Even if amazing technologies get
built, no one will have any reason to use them.
> * I'd like to see some better conclusions to the discussion around
>
> long-term incentives within the system.
>
>
> Again, see http://gavinandresen.ninja/when-the-block-reward-goes-away
> for what I think about that.
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-07 22:02 Matt Corallo
` (3 preceding siblings ...)
2015-05-29 22:36 ` Gavin Andresen
@ 2015-05-29 23:42 ` Chun Wang
2015-05-30 13:57 ` Gavin Andresen
2015-05-31 7:05 ` Peter Todd
4 siblings, 2 replies; 31+ messages in thread
From: Chun Wang @ 2015-05-29 23:42 UTC (permalink / raw)
To: Bitcoin Dev
Hello. I am from F2Pool. We are currently mining the biggest blocks on
the network. So far the top 100 biggest bitcoin blocks are all from us.
We do support bigger blocks, and sooner rather than later. But we
cannot handle 20 MB blocks right now. I know most blocks would not be
20 MB overnight. But if even a small fraction of blocks is more than
10 MB, it could dramatically increase our orphan rate, which is
effectively a higher cost to miners. Bad miners could attack us and the
network with artificially big blocks. As you know, the other Chinese
pools, AntPool and BW, produce ASIC chips and mine mostly with their
own machines. They do not care about a few percent of extra orphans as
much as we do. They would continue their zero-fee policy. We would be
the biggest loser. As the exchanges have taught us, zero fees are not
healthy for the network. We would also have to redevelop our block
broadcast logic. Server bandwidth is a lot more expensive in China,
and the Internet is slow. Currently China has more than 50% of the
mining power; if the block size increases, I bet European and American
pools could suffer more than us. We think the max block size should be
increased, but it must be increased smoothly: 2 MB first, then after
one or two years 4 MB, then 8 MB, and so on. Thanks.
On Fri, May 8, 2015 at 6:02 AM, Matt Corallo <bitcoin-list@bluematt.me> wrote:
> OK, so lets do that. I've seen a lot of "I'm not entirely comfortable
> with committing to this right now, but think we should eventually", but
> not much "I'd be comfortable with committing to this when I see X". In
> the interest of ignoring debate and pushing people towards a consensus
> at all costs, ( ;) ) I'm gonna go ahead and suggest we talk about the
> second.
>
> Personally, there are several things that worry me significantly about
> committing to a blocksize increase, which I'd like to see resolved
> before I'd consider supporting a blocksize increase commitment.
>
> * Though there are many proposals floating around which could
> significantly decrease block propagation latency, none of them are
> implemented today. I'd expect to see these not only implemented but
> being used in production (though I dont particularly care about them
> being all that stable). I'd want to see measurements of how they perform
> both in production and in the face of high packet loss (eg across the
> GFW or in the case of small/moderate DoS). In addition, I'd expect to
> see analysis of how these systems perform in the worst-case, not just
> packet-loss-wise, but in the face of miners attempting to break the system.
>
> * I'd very much like to see someone working on better scaling
> technology, both in terms of development and in terms of getting
> traction in the marketplace. I know StrawPay is working on development,
> though its not obvious to me how far they are from their website, but I
> dont know of any commitments by large players (either SPV wallets,
> centralized wallet services, payment processors, or any others) to
> support such a system (to be fair, its probably too early for such
> players to commit to anything, since anything doesnt exist in public).
>
> * I'd like to see some better conclusions to the discussion around
> long-term incentives within the system. If we're just building Bitcoin
> to work in five years, great, but if we want it all to keep working as
> subsidy drops significantly, I'd like a better answer than "we'll deal
> with it when we get there" or "it will happen, all the predictions based
> on people's behavior today say so" (which are hopefully invalid thanks
> to the previous point). Ideally, I'd love to see some real free pressure
> already on the network starting to develop when we commit to hardforking
> in a year. Not just full blocks with some fees because wallets are
> including far greater fees than they really need to, but software which
> properly handles fees across the ecosystem, smart fee increases when
> transactions arent confirming (eg replace-by-fee, which could be limited
> to increase-in-fees-only for those worried about double-spends).
>
> I probably forgot one or two and certainly dont want to back myself into
> a corner on committing to something here, but those are a few things I
> see today as big blockers on larger blocks.
>
> Luckily, people have been making progress on building the software
> needed in all of the above for a while now, but I think they're all
> very, very immature today.
>
> On 05/07/15 19:13, Jeff Garzik wrote:
>> On Thu, May 7, 2015 at 3:03 PM, Matt Corallo <bitcoin-list@bluematt.me
>> <mailto:bitcoin-list@bluematt.me>> wrote:
> -snip-
>>> If, instead, there had been an intro on the list as "I think we should
>>> do the blocksize increase soon, what do people think?", the response
>>> could likely have focused much more around creating a specific list of
>>> things we should do before we (the technical community) think we are
>>> prepared for a blocksize increase.
>>
>> Agreed, but that is water under the bridge at this point. You - rightly
>> - opened the topic here and now we're discussing it.
>>
>> Mike and Gavin are due the benefit of doubt because making a change to a
>> leaderless automaton powered by leaderless open source software is
>> breaking new ground. I don't focus so much on how we got to this point,
>> but rather, where we go from here.
>
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-29 23:42 ` Chun Wang
@ 2015-05-30 13:57 ` Gavin Andresen
2015-05-30 14:08 ` Pindar Wong
2015-05-30 22:05 ` Alex Mizrahi
2015-05-31 7:05 ` Peter Todd
1 sibling, 2 replies; 31+ messages in thread
From: Gavin Andresen @ 2015-05-30 13:57 UTC (permalink / raw)
To: Chun Wang; +Cc: Bitcoin Dev
On Fri, May 29, 2015 at 7:42 PM, Chun Wang <1240902@gmail.com> wrote:
> Hello. I am from F2Pool. We are currently mining the biggest blocks on
> the network.
Thanks for giving your opinion!
> Bad miners could attack us and the network with artificial
> big blocks.
How?
I ran some simulations, and I could not find a network topology where a big
miner producing big blocks could cause a loss of profit to another miner
(big or small) producing smaller blocks:
http://gavinandresen.ninja/are-bigger-blocks-better-for-bigger-miners
(the 0.3% advantage I DID find was for the situation where EVERYBODY was
producing big blocks).
> We think
> the max block size should be increased, but must be increased
> smoothly, 2 MB first, and then after one or two years 4 MB, then 8 MB,
> and so on. Thanks.
Why 2 MB? You said that server bandwidth is much more expensive in
China; what would be the difference in your bandwidth costs between 2MB
blocks and 20MB blocks?
--
--
Gavin Andresen
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-30 13:57 ` Gavin Andresen
@ 2015-05-30 14:08 ` Pindar Wong
2015-05-30 22:05 ` Alex Mizrahi
1 sibling, 0 replies; 31+ messages in thread
From: Pindar Wong @ 2015-05-30 14:08 UTC (permalink / raw)
To: Gavin Andresen; +Cc: Bitcoin Dev
On Sat, May 30, 2015 at 9:57 PM, Gavin Andresen <gavinandresen@gmail.com>
wrote:
> On Fri, May 29, 2015 at 7:42 PM, Chun Wang <1240902@gmail.com> wrote:
>
>> Hello. I am from F2Pool. We are currently mining the biggest blocks on
>> the network.
>
>
> Thanks for giving your opinion!
>
>
>
>> Bad miners could attack us and the network with artificial
>> big blocks.
>
>
> How?
>
> I ran some simulations, and I could not find a network topology where a
> big miner producing big blocks could cause a loss of profit to another
> miner (big or small) producing smaller blocks:
>
> http://gavinandresen.ninja/are-bigger-blocks-better-for-bigger-miners
>
> (the 0.3% advantage I DID find was for the situation where EVERYBODY was
> producing big blocks).
>
>
>> We think
>> the max block size should be increased, but must be increased
>> smoothly, 2 MB first, and then after one or two years 4 MB, then 8 MB,
>> and so on. Thanks.
>
>
> Why 2 MB ? You said that server bandwidth is much more expensive in
> China; what would be the difference in your bandwidth costs between 2MB
> blocks and 20MB blocks?
>
Perhaps we should arrange to run some more 'simulations' with miners from
China and elsewhere?
Let me know if there's interest in doing so.
p.
>
>
> --
> --
> Gavin Andresen
>
>
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-30 13:57 ` Gavin Andresen
2015-05-30 14:08 ` Pindar Wong
@ 2015-05-30 22:05 ` Alex Mizrahi
2015-05-30 23:16 ` Brian Hoffman
2015-05-31 5:05 ` gb
1 sibling, 2 replies; 31+ messages in thread
From: Alex Mizrahi @ 2015-05-30 22:05 UTC (permalink / raw)
To: Bitcoin Dev
> Why 2 MB ?
>
Why 20 MB? Do you anticipate 20x transaction count growth in 2016?
Why not grow it by 1 MB per year?
This is a safer option; I don't think that anybody claims that 2 MB blocks
will be a problem.
And in 10 years, when we get to 10 MB, we'll have more evidence as to
whether the network can handle 10 MB blocks.
So this might be a solution which would satisfy both sides:
* people who are concerned about block size growth will have an
opportunity to stop it before it grows too much (e.g. with a soft fork),
* while people who want bigger blocks will get an equivalent of 25% per
year growth within the first 10 years, which isn't bad, is it?
So far I haven't heard any valid arguments against linear growth.
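To make the proposal concrete, a linear schedule is essentially a
one-line consensus rule. The sketch below is just an illustration of the
arithmetic; the activation height and the blocks-per-year figure are
assumptions for the example, not part of any proposal:

    # Illustrative linear block size schedule: +1 MB per year from a 1 MB base.
    MB = 1_000_000
    BLOCKS_PER_YEAR = 52_560      # ~10-minute blocks; assumption for the sketch
    ACTIVATION_HEIGHT = 400_000   # made-up activation height

    def max_block_size(height):
        """1 MB before activation, then +1 MB for every full year afterwards."""
        if height < ACTIVATION_HEIGHT:
            return 1 * MB
        years_elapsed = (height - ACTIVATION_HEIGHT) // BLOCKS_PER_YEAR
        return (1 + years_elapsed) * MB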
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-30 22:05 ` Alex Mizrahi
@ 2015-05-30 23:16 ` Brian Hoffman
2015-05-31 0:13 ` Alex Mizrahi
2015-05-31 5:05 ` gb
1 sibling, 1 reply; 31+ messages in thread
From: Brian Hoffman @ 2015-05-30 23:16 UTC (permalink / raw)
To: Alex Mizrahi; +Cc: Bitcoin Dev
> Why 20 MB? Do you anticipate 20x transaction count growth in 2016?
Do you anticipate linear growth?
> On May 30, 2015, at 6:05 PM, Alex Mizrahi <alex.mizrahi@gmail.com> wrote:
>
>
>> Why 2 MB ?
>
> Why 20 MB? Do you anticipate 20x transaction count growth in 2016?
>
> Why not grow it by 1 MB per year?
> This is a safer option, I don't think that anybody claims that 2 MB blocks will be a problem.
> And in 10 years when we get to 10 MB we'll get more evidence as to whether network can handle 10 MB blocks.
>
> So this might be a solution which would satisfy both sides:
> * people who are concerned about block size growth will have an opportunity to stop it before it grows too much (e.g. with a soft fork),
> * while people who want bigger blocks will get an equivalent of 25% per year growth within the first 10 years, which isn't bad, is it?
>
> So far I haven't heard any valid arguments against linear growth.
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-30 23:16 ` Brian Hoffman
@ 2015-05-31 0:13 ` Alex Mizrahi
0 siblings, 0 replies; 31+ messages in thread
From: Alex Mizrahi @ 2015-05-31 0:13 UTC (permalink / raw)
To: Bitcoin Dev
> Why 20 MB? Do you anticipate 20x transaction count growth in 2016?
>
> Do you anticipate linear growth?
>
It's safe to say that absolutely nobody can predict the actual growth with
any degree of accuracy.
I believe that linear growth compares very favorably to other alternatives:
1. Exponential growth: Linear growth is better at modelling diminishing
returns; that is, the risk that it grows too much is much smaller. At the
same time, it initially grows faster than reasonable exponential models.
E.g. linear growth gives year-over-year relative growth of
100%, 50%, 33%, 25%, ..., 10%, while an exponential schedule which gives
roughly the same result in 10 years is a constant 25%, 25%, ..., 25%
(the arithmetic is checked in the short sketch after this list).
On the same scale, the exponential schedule starts slower than we want
(1.25 MB will be too little for 2016, as we already see fully filled
1 MB blocks) but goes a bit too fast in the long term; it's highly
unlikely we'll see bandwidth growing 10x every 10 years indefinitely.
2. Single step increase: an obvious advantage of linear growth here is
that it gives us time to adapt to new realities, and time to change
something if there are unwanted effects. A single step is not a
long-term solution, while slow-but-steady growth might be.
3. Adaptive solutions (e.g. a limit which depends on the last N blocks,
or something of that nature):
The problem with them is that they are rather complex, and also:
3.1. prone to manipulation: somebody might try to push the limit up if
it will favor him in the future;
3.2. the possibility of a positive feedback loop;
3.3. the possibility of unhealthy game-theoretic dynamics.
The main problem is that we do not understand the game-theoretic aspects
of bitcoin mining in the presence of various real-world factors such as
block propagation delays. Thus we can't design a proper adaptive
solution.
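The relative-growth comparison in point 1 is easy to check numerically
(a throwaway Python snippet, not part of any proposal):

    # Year-over-year relative growth: +1 MB/year linear vs constant 25%/year.
    linear = [1 + y for y in range(11)]            # 1, 2, ..., 11 MB
    exponential = [1.25 ** y for y in range(11)]   # 1, 1.25, ..., ~9.3 MB

    for y in range(1, 11):
        lin_growth = linear[y] / linear[y - 1] - 1            # 100%, 50%, ..., 10%
        exp_growth = exponential[y] / exponential[y - 1] - 1  # always 25%
        print(f"year {y}: linear +{lin_growth:.0%}, exponential +{exp_growth:.0%}")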
There is no perfect solution to this problem as we cannot predict the
future and our understanding is limited.
But among the 5 alternatives (linear, exponential, single step, adaptive,
no limit), linear seems to be the best option at this point as it's both
quite safe and doesn't stunt growth too much.
> bitcoin is really really small right now, any sign of real adoption could
> make it grow 100x or even more in a matter of weeks.
This is certainly possible, but the thing is:
1) this can't be predicted;
2) this will be a serious problem for many bitcoind installations;
3) it's not necessarily a healthy thing; perhaps it will grow 100x in a
matter of weeks, and then go to zero in a matter of weeks as well.
So I don't think that sudden growth spurts are something we should take
into account at the planning stage. If anything, we'd like to prevent them
from happening; slow growth is usually better.
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-30 22:05 ` Alex Mizrahi
2015-05-30 23:16 ` Brian Hoffman
@ 2015-05-31 5:05 ` gb
1 sibling, 0 replies; 31+ messages in thread
From: gb @ 2015-05-31 5:05 UTC (permalink / raw)
To: Alex Mizrahi; +Cc: Bitcoin Dev
Linear growth is indeed the 'simplest' model for growth, so it removes
concerns about the complexity of the growth model. It seems like it
might be a safe compromise between exponential growth and zero growth,
and it buys some time to observe the longer-term behaviour of the
network at scale.
A simple linear-growth 'hard' technical limit could also be used in
conjunction with the simple periodic soft dynamic limit adjustment
(e.g. 1.5x of a moving average) discussed recently, so that the
combination provides for growth, with fee pressure, up until if/when
the technical hard limit is hit. And if we keep hitting the hard limit,
that signals a market demand for ancillary layers to be built out,
something that has been missing until now.
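As a sketch of how the two mechanisms might combine (my own illustration
of the arithmetic; the 1.5x factor is taken from the discussion above,
everything else is invented):

    # Illustrative combination of a linear hard limit and a dynamic soft limit.
    MB = 1_000_000
    BLOCKS_PER_YEAR = 52_560   # assumption for the sketch

    def hard_limit(height, activation_height=400_000):
        """Linear hard cap: 1 MB plus 1 MB per year since activation."""
        years = max(0, (height - activation_height) // BLOCKS_PER_YEAR)
        return (1 + years) * MB

    def soft_limit(recent_block_sizes, multiplier=1.5):
        """Dynamic soft cap: 1.5x a moving average of recent block sizes."""
        return multiplier * sum(recent_block_sizes) / len(recent_block_sizes)

    def effective_limit(height, recent_block_sizes):
        """Block sizes can grow with demand (soft limit) but never past the
        hard cap, preserving some fee pressure along the way."""
        return min(hard_limit(height), soft_limit(recent_block_sizes))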
On Sun, 2015-05-31 at 01:05 +0300, Alex Mizrahi wrote:
>
>
> Why 2 MB ?
>
>
> Why 20 MB? Do you anticipate 20x transaction count growth in 2016?
>
>
> Why not grow it by 1 MB per year?
> This is a safer option, I don't think that anybody claims that 2 MB
> blocks will be a problem.
> And in 10 years when we get to 10 MB we'll get more evidence as to
> whether network can handle 10 MB blocks.
>
>
> So this might be a solution which would satisfy both sides:
> * people who are concerned about block size growth will have an
> opportunity to stop it before it grows too much (e.g. with a soft
> fork),
> * while people who want bigger blocks will get an equivalent of 25%
> per year growth within the first 10 years, which isn't bad, is it?
>
>
> So far I haven't heard any valid arguments against linear growth.
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-29 23:42 ` Chun Wang
2015-05-30 13:57 ` Gavin Andresen
@ 2015-05-31 7:05 ` Peter Todd
2015-05-31 12:51 ` Gavin Andresen
1 sibling, 1 reply; 31+ messages in thread
From: Peter Todd @ 2015-05-31 7:05 UTC (permalink / raw)
To: Chun Wang; +Cc: Bitcoin Dev
On Sat, May 30, 2015 at 07:42:16AM +0800, Chun Wang wrote:
> Hello. I am from F2Pool. We are currently mining the biggest blocks on
> the network. So far the top 100 biggest bitcoin blocks are all from us.
> We do support bigger blocks, and sooner rather than later. But we
> cannot handle 20 MB blocks right now. I know most blocks would not be
> 20 MB overnight. But if even a small fraction of blocks is more than
> 10 MB, it could dramatically increase our orphan rate, which is
> effectively a higher cost to miners. Bad miners could attack us and the
> network with artificially big blocks. As you know, the other Chinese
> pools, AntPool and BW, produce ASIC chips and mine mostly with their
> own machines. They do not care about a few percent of extra orphans as
> much as we do. They would continue their zero-fee policy. We would be
> the biggest loser. As the exchanges have taught us, zero fees are not
> healthy for the network. We would also have to redevelop our block
> broadcast logic. Server bandwidth is a lot more expensive in China,
> and the Internet is slow. Currently China has more than 50% of the
> mining power; if the block size increases, I bet European and American
> pools could suffer more than us. We think the max block size should be
> increased, but it must be increased smoothly: 2 MB first, then after
> one or two years 4 MB, then 8 MB, and so on. Thanks.
Great to hear from you!
Yeah, I'm pretty surprised myself that Gavin never accepted the
compromises offered by others in this space for a slow growth solution,
rather than starting with over an order of magnitude blocksize increase.
This is particularly surprising when his own calculations - after
correcting an arithmetic error - came up with 8MB blocks rather than
20MB.
Something important to note in Gavin Andresen's analyses of this issue
is that he's using quite optimistic scenarios for how nodes are
connected to each other. For instance, assuming that connections between
miners are direct is a very optimistic assumption that depends on a
permissive, unregulated environment where miners co-operate with each
other - obviously that's easily subject to change! Better block
broadcasting logic helps this in the "co-operation" case, but there's
not much it can do in the worst-case.
Unrelated: feel free to contact me directly if you have any questions
re: the BIP66 upgrade; I hear you guys were planning on upgrading your
mining nodes soon.
--
'peter'[:-1]@petertodd.org
00000000000000000db932d1cbd04a29d8e55989eda3f096d3ab8e8d95eb28e9
* Re: [Bitcoin-development] Block Size Increase Requirements
2015-05-31 7:05 ` Peter Todd
@ 2015-05-31 12:51 ` Gavin Andresen
0 siblings, 0 replies; 31+ messages in thread
From: Gavin Andresen @ 2015-05-31 12:51 UTC (permalink / raw)
To: Peter Todd; +Cc: Bitcoin Dev
On Sun, May 31, 2015 at 3:05 AM, Peter Todd <pete@petertodd.org> wrote:
> Yeah, I'm pretty surprised myself that Gavin never accepted the
> compromises offered by others in this space for a slow growth solution
>
What compromise? I haven't seen a specific proposal that could be turned
into a pull request.
> Something important to note in Gavin Andresen's analyses of this issue
> is that he's using quite optimistic scenarios for how nodes are
> connected to each other.
NO I AM NOT.
I simulated a variety of connectivities; see the .cfg files at
https://github.com/gavinandresen/bitcoin_miningsim
The results I give in the "are bigger blocks better" blog post are for
WORST CASE connectivity (one dominant big miner, multiple little miners,
big miner connects to only 30% of little miners, but all the little miners
connected directly to each other).
> For instance, assuming that connections between
> miners are direct is a very optimistic assumption
Again, I did not simulate all miners directly connected to each other.
I will note that miners are VERY HIGHLY connected today. It is in their
best interest to be highly connected to each other.
> that depends on a
> permissive, unregulated, environment where miners co-operate with each
> other - obviously that's easily subject to change!
Really? How is that easily subject to change? If it is easily subject to
change, do bigger blocks have any effect? Why are 1MB blocks not subject to
change?
I talk about "what if your government bans Bitcoin entirely" here:
http://gavinandresen.ninja/big-blocks-and-tor
... and the issues are essentially the same, independent of block size.
--
--
Gavin Andresen