* [bitcoin-dev] Capacity increases for the Bitcoin system.
@ 2015-12-07 22:02 Gregory Maxwell
  2015-12-07 22:54 ` Bryan Bishop
  ` (3 more replies)
  0 siblings, 4 replies; 56+ messages in thread

From: Gregory Maxwell @ 2015-12-07 22:02 UTC (permalink / raw)
To: Bitcoin Dev

The Scaling Bitcoin Workshop in HK is just wrapping up. Many fascinating proposals were presented. I think this would be a good time to share my view of the near term arc for capacity increases in the Bitcoin system. I believe we’re in a fantastic place right now and that the community is ready to deliver on a clear forward path with a shared vision that addresses the needs of the system while upholding its values.

I think it’s important to first clearly express some of the relevant principles that I think should guide the ongoing development of the Bitcoin system.

Bitcoin is P2P electronic cash that is valuable over legacy systems because of the monetary autonomy it brings to its users through decentralization. Bitcoin seeks to address the root problem with conventional currency: all the trust that's required to make it work -- not that justified trust is a bad thing, but trust makes systems brittle, opaque, and costly to operate. Trust failures result in systemic collapses, trust curation creates inequality and monopoly lock-in, and naturally arising trust choke-points can be abused to deny access to due process. Through the use of cryptographic proof and decentralized networks Bitcoin minimizes and replaces these trust costs.

With the available technology, there are fundamental trade-offs between scale and decentralization. If the system is too costly people will be forced to trust third parties rather than independently enforcing the system's rules. If the Bitcoin blockchain’s resource usage, relative to the available technology, is too great, Bitcoin loses its competitive advantages compared to legacy systems because validation will be too costly (pricing out many users), forcing trust back into the system. If capacity is too low and our methods of transacting too inefficient, access to the chain for dispute resolution will be too costly, again pushing trust back into the system.

Since Bitcoin is an electronic cash, it _isn't_ a generic database; the demand for cheap highly-replicated perpetual storage is unbounded, and Bitcoin cannot and will not satisfy that demand for non-ecash (non-Bitcoin) usage, and there is no shame in that. Fortunately, Bitcoin can interoperate with other systems that address other applications, and--with luck and hard work--the Bitcoin system can and will satisfy the world's demand for electronic cash.

Fortunately, a lot of great technology is in the works that makes navigating the trade-offs easier.

First up: after several years in the making, Bitcoin Core has recently merged libsecp256k1, which results in a huge increase in signature validation performance. Combined with other recent work we're now getting ConnectTip performance 7x higher in 0.12 than in prior versions. This has been a long time coming, and without its anticipation and earlier work such as headers-first I probably would have been arguing for a block size decrease last year. This improvement in the state of the art for widely available production Bitcoin software sets a stage for some capacity increases while still catching up on our decentralization deficit. This shifts the bottlenecks off of CPU and more strongly onto propagation latency and bandwidth.
Versionbits (BIP9) is approaching maturity and will allow the Bitcoin network to have multiple in-flight soft-forks. Up until now we’ve had to completely serialize soft-fork work, and also had no real way to handle a soft-fork that was merged in core but rejected by the network. All that is solved in BIP9, which should allow us to pick up the pace of improvements in the network. It looks like versionbits will be ready for use in the next soft-fork performed on the network.

The next thing is that, at Scaling Bitcoin Hong Kong, Pieter Wuille presented on bringing Segregated Witness to Bitcoin. What is proposed is a _soft-fork_ that increases Bitcoin's scalability and capacity by reorganizing data in blocks to handle the signatures separately, and in doing so takes them outside the scope of the current blocksize limit. The particular proposal amounts to a 4MB blocksize increase at worst. The separation allows new security models, such as skipping downloading data you're not going to check, and improved performance for lite clients (especially ones with high privacy). The proposal also includes fraud proofs, which make violations of the Bitcoin system provable with a compact proof. This completes the vision of "alerts" described in the "Simplified Payment Verification" section of the Bitcoin whitepaper, and would make it possible for lite clients to enforce all the rules of the system (under a new strong assumption that they're not partitioned from someone who would generate the proofs). The design has numerous other features, like making further enhancements safer and eliminating signature malleability problems. If widely used this proposal gives a 2x capacity increase (more if multisig is widely used), but most importantly it makes that additional capacity--and future capacity beyond it--safer by increasing efficiency and allowing more trade-offs (in particular, you can use much less bandwidth in exchange for a strong non-partitioning assumption).

There is a working implementation (though it doesn't yet have the fraud proofs) at https://github.com/sipa/bitcoin/commits/segwit

(Pieter's talk is at:
 transcript: http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/segregated-witness-and-its-impact-on-scalability/
 slides: https://prezi.com/lyghixkrguao/segregated-witness-and-deploying-it-for-bitcoin/
 video: https://www.youtube.com/watch?v=fst1IK_mrng#t=36m )

I had good success deploying an earlier (hard-fork) version of segwit in the Elements Alpha sidechain; the soft-fork segwit now proposed is a second-generation design. And I think it's quite reasonable to get this deployed in a relatively short time frame. The segwit design calls for a future bitcoinj compatible hardfork to further increase its efficiency--but it's not necessary to reap most of the benefits, and that means it can happen on its own schedule and in a non-contentious manner.

Going beyond segwit, there has been some considerable activity brewing around more efficient block relay. There is a collection of proposals, some stemming from a p2pool-inspired informal sketch of mine and some independently invented, called "weak blocks", "thin blocks" or "soft blocks". These proposals build on top of efficient relay techniques (like the relay network protocol or IBLT) and move virtually all the transmission time of a block to before the block is found, eliminating size from the orphan race calculation. We already desperately need this at the current block sizes.
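[Editor's note: a back-of-envelope model of the orphan race referred to above, as a minimal Python sketch. It assumes Poisson block arrivals with a 600-second mean, so the chance a competing block appears during the t seconds a block spends propagating is roughly 1 - e^(-t/600); moving transmission ahead of the block being found drives t toward zero, which is exactly what takes size out of the race.]

    import math

    def orphan_risk(propagation_seconds, mean_interval=600.0):
        # Probability a competing block is found while ours propagates,
        # assuming block discovery is a Poisson process (simplified model).
        return 1.0 - math.exp(-propagation_seconds / mean_interval)

    for t in (1, 5, 15, 30):
        print(f"{t:>2}s propagation -> ~{orphan_risk(t):.1%} orphan risk")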
These have not yet been implemented, but fortunately the path appears clear. I've seen at least one more or less complete specification, and I expect to see things running using this in a few months. This tool will remove propagation latency from being a problem in the absence of strategic behavior by miners. Better understanding miners' behavior when they act strategically is an open question.

Concurrently, there is a lot of activity ongoing related to “non-bandwidth” scaling mechanisms. Non-bandwidth scaling mechanisms are tools like transaction cut-through and bidirectional payment channels which increase Bitcoin’s capacity and speed using clever smart contracts rather than increased bandwidth. Critically, these approaches strike right at the heart of the capacity vs autonomy trade-off, and may allow us to achieve very high capacity and very high decentralization.

CLTV (BIP65), deployed a month ago and now active on the network, is very useful for these techniques (essential for making hold-up refunds work); CSV (BIP68 / BIP112) is in the pipeline for merge in core and making good progress (and will likely be ready ahead of segwit).

Further Bitcoin protocol improvements for non-bandwidth scaling are in the works: many of these proposals really want anti-malleability fixes (which would be provided by segwit), and there are checksig flag improvements already tendered and more being worked on, which would be much easier to deploy with segwit. I expect that within six months we could have considerably more features ready for deployment to enable these techniques. Even without them I believe we’ll be in an acceptable position with respect to capacity in the near term, but it’s important to enable them for the future.

(http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/overview-of-bips-necessary-for-lightning is a relevant talk for some of the wanted network features for Lightning, a bidirectional payment channel proposal which many parties are working on right now; other non-bandwidth improvements discussed in the past include transaction cut-through, which I consider a must-read for the basic intuition about how transaction capacity can be greater than blockchain capacity: https://bitcointalk.org/index.php?topic=281848.0 , though there are many others.)

Further out, there are several proposals related to flex caps or incentive-aligned dynamic block size controls based on allowing miners to produce larger blocks at some cost. These proposals help preserve the alignment of incentives between miners and general node operators, and prevent defection between the miners from undermining the fee market behavior that will eventually fund security. I think that right now capacity is high enough and the needed capacity is low enough that we don't immediately need these proposals, but they will be critically important long term. I'm planning to help out and drive towards a more concrete direction out of these proposals in the following months.

(Relevant talks include http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/a-flexible-limit-trading-subsidy-for-larger-blocks/ )

Finally--at some point the capacity increases from the above may not be enough. Delivery on relay improvements, segwit fraud proofs, dynamic block size controls, and other advances in technology will reduce the risk and therefore controversy around moderate block size increase proposals (such as 2/4/8 rescaled to respect segwit's increase).
Bitcoin will be able to move forward with these increases when improvements and understanding render their risks widely acceptable relative to the risks of not deploying them. In Bitcoin Core we should keep patches ready to implement them as the need and the will arise, to keep basic software engineering from being the limiting factor.

Our recent and current progress has well positioned the Bitcoin ecosystem to handle its current capacity needs. I think the above sets out some clear, achievable milestones to continue to advance the art in Bitcoin capacity while putting us in a good position for further improvement and evolution.

TL;DR: I propose we work immediately towards the segwit 4MB block soft-fork, which increases capacity and scalability; recent speedups and incoming relay improvements make segwit a reasonable risk. BIP9 and segwit will also make further improvements easier and faster to deploy. We’ll continue to set the stage for non-bandwidth-increase-based scaling, while building additional tools that would make bandwidth increases safer long term. Further work will prepare Bitcoin for further increases, which will become possible when justified, while also providing the groundwork to make them justifiable.

Thanks for your time,

^ permalink raw reply	[flat|nested] 56+ messages in thread
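[Editor's note: the "4MB at worst" and "2x" figures above can be reproduced with a small sketch, assuming the base + witness/4 accounting that Anthony Towns describes later in this thread; the witness fractions below are illustrative, not measured.]

    LIMIT = 1_000_000  # the 1MB limit, applied to base + witness/4

    def max_total_bytes(witness_fraction):
        # Total block bytes achievable when witness_fraction of every
        # transaction's bytes are witness data (discounted 4x).
        cost_per_byte = (1 - witness_fraction) + witness_fraction / 4.0
        return LIMIT / cost_per_byte

    print(max_total_bytes(0.0))  # 1.0MB: legacy transactions, no witnesses
    print(max_total_bytes(0.6))  # ~1.8MB: a plausible p2pkh-heavy mix
    print(max_total_bytes(1.0))  # 4.0MB: pathological all-witness block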
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-07 22:02 [bitcoin-dev] Capacity increases for the Bitcoin system Gregory Maxwell
@ 2015-12-07 22:54 ` Bryan Bishop
  2015-12-08  2:42 ` Anthony Towns
  ` (2 subsequent siblings)
  3 siblings, 0 replies; 56+ messages in thread

From: Bryan Bishop @ 2015-12-07 22:54 UTC (permalink / raw)
To: Bitcoin Dev, Gregory Maxwell, Bryan Bishop

On Mon, Dec 7, 2015 at 4:02 PM, Gregory Maxwell wrote:
> The Scaling Bitcoin Workshop in HK is just wrapping up. Many fascinating proposals were presented. I think this would be a good time to share my view of the near term arc for capacity increases in the Bitcoin system. I believe we’re in a fantastic place right now and that the community is ready to deliver on a clear forward path with a shared vision that addresses the needs of the system while upholding its values.

ACK.

One of the interesting take-aways from the workshops for me has been that there is a large discrepancy between what developers are doing and what's more widely known. When I was doing initial research and work for my keynote at the Montreal conference ( http://diyhpl.us/~bryan/irc/bitcoin/scalingbitcoin-review.pdf -- an attempt at being exhaustive, prior to seeing the workshop proposals ), what I was most surprised by was the discrepancy between what we think is being talked about versus what has been emphasized or socially processed (lots of proposals appear in text, but review efforts are sometimes "hidden" in corners of github pull request comments, for example). As another example, the libsecp256k1 testing work reached a level unseen except perhaps in the aerospace industry, but these sorts of details are not apparent if you are reading bitcoin-dev archives.

It is very hard to listen to all ideas and find great ideas. Sometimes, our time can be almost completely exhausted by evaluating inefficient proposals, so it's not surprising that rough consensus building could take time. I suspect we will see consensus moving in positive directions around the proposals you have highlighted. When Satoshi originally released the Bitcoin whitepaper, practically everyone-- somehow with the exception of Hal Finney-- didn't look, because the costs of evaluating cryptographic system proposals are so high and everyone was jaded and burned out from the past umpteen decades. (I have IRC logs from January 10th 2009 where I immediately dismissed Bitcoin after I had seen its announcement on the p2pfoundation mailing list; perhaps in retrospect I should not let family tragedy so greatly impact my evaluation of proposals...). It's hard to evaluate these proposals. Sometimes it may feel like random proposals are review-resistant, or designed to burn our time up. But I think this is more reflective of the simple fact that consensus takes effort, and it's hard work, and this is to be expected in this sort of system design.

Your email contains a good summary of recent scaling progress and of efforts presented at the Hong Kong workshop. I like summaries. I have previously recommended making more summaries and posting them to the mailing list. In general, it would be good if developers were to write summaries of recent work and efforts and post them to the bitcoin-dev mailing list. BIP drafts are excellent. Long-term proposals are excellent. Short-term coordination happens over IRC, and that makes sense to me. But I would point out that many of the developments even from, say, the Montreal workshop were notably absent from the mailing list.
Unless someone was paying close attention, they wouldn't have noticed some of those efforts which, in some cases, haven't been mentioned since. I suspect most of this is a matter of attention, review and keeping track of loose ends, which can be admittedly difficult. Short (or even long) summaries in emails are helpful because they increase the ability of the community to coordinate and figure out what's going on. Often I will write an email that summarizes some content simply because I estimate that I am going to forget the details in the near future, and if I am going to forget them then it seems likely that others might.... This creates a broad base of proposals and content to build from when we're doing development work in the future, making for a much richer community as a consequence.

The contributions from the scalingbitcoin.org workshops are a welcome addition, and the proposal outlined in the above email contains a good summary of recent progress. We need more of this sort of synthesis; we're richer for it. I am excitedly looking forward to the impending onslaught of Bitcoin progress.

- Bryan
http://heybryan.org/
1 512 203 0507

^ permalink raw reply	[flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-07 22:02 [bitcoin-dev] Capacity increases for the Bitcoin system Gregory Maxwell
  2015-12-07 22:54 ` Bryan Bishop
@ 2015-12-08  2:42 ` Anthony Towns
  2015-12-08  4:58   ` Anthony Towns
  2015-12-08 11:07 ` Wladimir J. van der Laan
  2015-12-08 23:48 ` Jonathan Toomim
  3 siblings, 1 reply; 56+ messages in thread

From: Anthony Towns @ 2015-12-08 2:42 UTC (permalink / raw)
To: bitcoin-dev

On Mon, Dec 07, 2015 at 10:02:17PM +0000, Gregory Maxwell via bitcoin-dev wrote:
> ... bringing Segregated Witness to Bitcoin.
> The particular proposal amounts to a 4MB blocksize increase at worst.

Bit ambiguous what "worst" means here; lots of people would say the smallest increase is the worst option. :)

By my count, P2PKH transactions get a 2x space saving with segwit [0], while 2-of-2 multisig P2SH transactions (and hence most of the on-chain lightning transactions) get a 3x space saving [1]. An on-chain HTLC (for a cross-chain atomic swap, eg) would also get a 3x space saving [2]. The most extreme lightning transactions (uncooperative close with bonus anonymity) could get a 6x saving, but would probably run into SIGOP limits [3].

> If widely used this proposal gives a 2x capacity increase (more if multisig is widely used),

So I think it's fair to say that on its own it gives up to a 2x increase for ordinary pay to public key transactions, and a 3x increase for 2/2 multisig and (on-chain) lightning transactions (which would mean lightning could scale to ~20M users with 1MB block sizes, based on the estimates from Tadge Dryja's talk). More complicated smart contracts (even just 3 of 5 multisig) presumably benefit even more from this, which seems like an interesting approach to (part of) jgarzik's "Fidelity problem".

Averaging those numbers as a 2.5x improvement means that combining segwit with other proposals would allow you to derate them by a factor of 2.5, giving:

  BIP-100: maximum of 12.8MB
  BIP-101: 3.2MB in 2016, 6.4MB in 2018, 12.8MB in 2020, 25.6MB in 2022..
  2-4-8:   800kB in 2016, 1.6MB in 2018, 3.2MB in 2020
  BIP-103: 400kB in 2016, 470kB in 2018, 650kB in 2020, 1MB in 2023...

(ie, if BIP-103 had been the "perfect" approach, then post segwit, it would make sense to put non-consensus soft-limits back in place for quite a while)

> TL;DR: I propose we work immediately towards the segwit 4MB block soft-fork which increases capacity and scalability, and recent speedups and incoming relay improvements make segwit a reasonable risk.

I guess segwit effectively introduces two additional dimensions for working out how to optimally pack transactions into a block -- there's the existing constraints on block bytes (<=1MB) and sigops (<=20k), but there are probably additional constraints on witness bytes (<=3MB) and there *could* be a different constraint for sigops in witnesses (<=3*20k? <=4*20k?) compared to sigops in the block, while remaining a soft-fork.

It could also be an opportunity to combine the constraints, ie (segwit_bytes + 50*segwit_sigs < 6M), which would make it easier to avoid attacks where people try sending transactions with lots of sigops in very few bytes, filling up blocks by sigops but only paying fees proportional to their byte count.

Hmm, after a quick look, I'm not sure if the current segwit branch actually accounts for sigops in segregated witnesses? If it does, afaics it simply applies the existing 20k limit to the total, which seems too low to me?
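[Editor's note: a sketch of the combined constraint aj floats above, using his example coefficients (segwit_bytes + 50*segwit_sigs < 6M); the transactions are made up. The point is that a sigop-stuffed transaction pays for its sigops, not just its bytes.]

    def combined_cost(tx_bytes, sigops, sigop_weight=50):
        # One scalar cost per transaction instead of separate limits.
        return tx_bytes + sigop_weight * sigops

    def block_fits(txs, limit=6_000_000):
        return sum(combined_cost(b, s) for b, s in txs) <= limit

    print(combined_cost(300, 0))   # 300: a plain 300-byte transaction
    print(combined_cost(300, 80))  # 4300: same bytes, stuffed with sigops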
Having segwit with the current 1MB limit on the traditional block contents, plus an additional 3MB for witness data, seems like it would also give a somewhat gradual increase in transaction volume from the current 1x rate to an eventual 2x or 3x rate as wallet software upgrades to support segregated witness transactions. So if problems were found when block+witness data hit 1.5MB, there'd still be time to roll out fixes before it got to 1.8MB or 2MB or 3MB. ie this further reduces the risk compared to a single step increase to 2x capacity.

BTW, it's never been quite clear to me what the risks are precisely. Here are some:

- sometime soon, blockchain supply can't meet demand
  + I've never worked out how you'd tell if this is the case; there's potentially infinite demand if everything is free, so at one level it's trivially true, but that's not helpful.
  + Presumably if this were happening in a way that "matters", fees would rise precipitously. Perhaps median fees of $2 USD/kB would indicate this is happening? If so, it's not here yet and seems like it's still a ways off.
  + If it were happening, then, presumably, people would become less optimistic about bitcoin and the price of BTC would drop/not rise, but that seems pretty hard to interpret.

- it becomes harder to build on blocks found by other miners, encouraging mining centralisation (which then makes censorship easier, and fungibility harder) or forcing trust between miners (eg SPV mining empty blocks)
  + latency/bandwidth limitations mean miners can't get block information quickly enough (mitigated by weak blocks and IBLT)
  + blocks can't be verified quickly enough (due to too many crypto ops per block, or because the UTXO set can't be kept in RAM) (mitigated by libsecp256k1 improvements, ..?)
  + constructing a new block to mine takes too long

- it becomes harder to maintain a validating, but non-mining node, which in turn makes non-validating nodes harder to run safely (ie, Sybil attacks become easier)
  + increased CPU to verify bigger/more complicated blocks (can't keep up on a raspberry pi)
  + increased storage (60GB of blockchain might mean it won't fit on your laptop)
  + increased bandwidth
  + increased initial sync time (delayed reward = less likely to bother)

Cheers,
aj

[0] AIUI, segwit would make the "in block" transactions look like:

      * (4) version
      * (1) input count
      * for each input:
        - (32) tx hash
        - (4) txout index
        - (1) script length = 0
        - (4) sequence number
      * (1) output count
      * for each output:
        - (8) value
        - (1) script length = 34
        - (34) <33 byte push>
      * (4) locktime

    So about 10+41i+43o bytes (with the other information being external to the block and the 1MB limit, but committed to via the coinbase). A standard pay to public key hash would have a 25 byte output script instead of 34 bytes, but also 105 bytes of input script, so about 10+146i+34o bytes. Over enough transactions, inputs and outputs are about equal in number, so that's 10+84o versus 10+180o, so a factor of 2x-2.14x in the usual case.

[1] With a P2SH to a 2-of-2 multisig address, the output script would be 23 bytes, and the input script would be a 71B redeem script, plus two signatures and an OP_0 for about 215B, totalling 10+256i+32o. Again treating i=o over the long term, that's 10+84o versus 10+288o, so that's a 3.2x-3.4x improvement. 2-of-2 multisig payment would cover the normal case for on-chain lightning channel transactions, ie where both sides are able to cooperatively close the channel.
[2] A basic HTLC, ie "pay to A if they know the preimage for X, or pay to B after a timeout of T", done by P2SH has about 98B of redeem script and either ~105B of signature or ~72B of signature, for a total of 203B or 170B of input script. So that comes to 10+244i+32o or 10+211i+32o. Segwit gives an improvement of 3x-3.3x or 2.7x-2.9x there.

[3] A lightning-style HTLC, which adds a third option of ", or pay to B if A was trying to cheat", adds an extra 25 bytes or so to the redeem script, changing those numbers to 10+270i+32o and 10+236i+32o, and an improvement of 3.3x-3.6x or 2.9x-3.2x. A lightning-style HTLC that also uses ecc private keys as the secret preimages to be revealed [4] might use an additional ~260 bytes of redeem script / script signature, which would make the worst case numbers be 10+530i+32o, so 10+562o versus 10+84o, which would be a 6x-6.7x improvement. But those particular scripts would be constrained by consensus sigop limits before they filled up much more than a quarter of a block in a segwit/1MB world anyway.

[4] http://lists.linuxfoundation.org/pipermail/lightning-dev/2015-November/000344.html

^ permalink raw reply	[flat|nested] 56+ messages in thread
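[Editor's note: the 2x/3x estimates in this message follow directly from the byte-count formulas in footnotes [0] and [1]; a minimal Python sketch that evaluates them (the formulas are aj's, the code is not):]

    def size(const, per_input, per_output, i, o):
        return const + per_input * i + per_output * o

    for n in (1, 10, 1000):  # n inputs and n outputs (long-run i ~= o)
        now_p2pkh = size(10, 146, 34, n, n)  # footnote [0], current p2pkh
        sw_p2pkh  = size(10,  41, 43, n, n)  # footnote [0], segwit base bytes
        now_msig  = size(10, 256, 32, n, n)  # footnote [1], current 2-of-2 p2sh
        sw_msig   = size(10,  41, 43, n, n)  # footnote [1], segwit base bytes
        print(f"n={n}: p2pkh {now_p2pkh/sw_p2pkh:.2f}x,"
              f" 2-of-2 {now_msig/sw_msig:.2f}x")
    # ratios tend to 180/84 ~= 2.14x and 288/84 ~= 3.43x as n grows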
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08  2:42 ` Anthony Towns
@ 2015-12-08  4:58   ` Anthony Towns
  2015-12-08  5:21     ` Gregory Maxwell
  2016-01-18 12:02     ` Anthony Towns
  0 siblings, 2 replies; 56+ messages in thread

From: Anthony Towns @ 2015-12-08 4:58 UTC (permalink / raw)
To: bitcoin-dev

> On Mon, Dec 07, 2015 at 10:02:17PM +0000, Gregory Maxwell wrote:
> > If widely used this proposal gives a 2x capacity increase (more if multisig is widely used),

So from IRC, this doesn't seem quite right -- capacity is constrained as

  base_size + witness_size/4 <= 1MB

rather than

  base_size <= 1MB and base_size + witness_size <= 4MB

or similar. So if you have a 500B transaction and move 250B into the witness, you're still using up 250B+250B/4 of the 1MB limit, rather than just 250B of the 1MB limit.

In particular, if you use as many p2pkh transactions as possible, you'd have 800kB of base data plus 800kB of witness data, and for a block filled with 2-of-2 multisig p2sh transactions, you'd hit the limit at 670kB of base data and 1.33MB of witness data. That would be 1.6MB and 2MB of total actual data if you hit the limits with real transactions, so it's more like a 1.8x increase for real transactions afaics, even with substantial use of multisig addresses.

The 4MB consensus limit could only be hit by having a single trivial transaction using as little base data as possible, then a single huge 4MB witness. So people trying to abuse the system have 4x the blocksize for 1 block's worth of fees, while people using it as intended only get 1.6x or 2x the blocksize... That seems kinda backwards.

Having a cost function rather than separate limits does make it easier to build blocks (approximately) optimally, though (ie, just divide the fee by (base_bytes+witness_bytes/4) and sort). Are there any other benefits?

But afaics, you could just have fixed consensus limits and use the cost function for building blocks, though? ie sort txs by fee divided by [B + S*50 + W/3] (where B is base bytes, S is sigops and W is witness bytes), then just fill up the block until one of the three limits (1MB base, 20k sigops, 3MB witness) is hit?

(Doing a hard fork to make *all* the limits -- base data, witness data, and sigop count -- part of a single cost function might be a win; I'm just not seeing the gain in forcing witness data to trade off against block data when filling blocks is already a 2D knapsack problem.)

Cheers,
aj

^ permalink raw reply	[flat|nested] 56+ messages in thread
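[Editor's note: a sketch of the two block-building strategies contrasted above -- rank by fee over a single cost function, then fill greedily against the separate consensus limits. Constants are aj's examples; the mempool entries are invented, and a real template builder would be considerably more careful.]

    def rank_by_cost(mempool):
        # Single scalar cost: B + 50*S + W/3, as suggested above.
        def cost(tx):
            return tx["base"] + 50 * tx["sigops"] + tx["witness"] / 3.0
        return sorted(mempool, key=lambda tx: tx["fee"] / cost(tx),
                      reverse=True)

    def fill_block(ordered, base_max=1_000_000, sigop_max=20_000,
                   witness_max=3_000_000):
        # Greedy fill: stop adding a tx if it would breach any limit.
        block, b, s, w = [], 0, 0, 0
        for tx in ordered:
            if (b + tx["base"] <= base_max and s + tx["sigops"] <= sigop_max
                    and w + tx["witness"] <= witness_max):
                block.append(tx)
                b += tx["base"]; s += tx["sigops"]; w += tx["witness"]
        return block

    mempool = [
        {"fee": 10_000, "base": 250, "witness": 110, "sigops": 2},
        {"fee":  4_000, "base": 150, "witness": 4_000, "sigops": 40},
    ]
    print(len(fill_block(rank_by_cost(mempool))))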
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08  4:58 ` Anthony Towns
@ 2015-12-08  5:21 ` Gregory Maxwell
  2015-12-08  6:54   ` Anthony Towns
  2016-01-18 12:02 ` Anthony Towns
  1 sibling, 1 reply; 56+ messages in thread

From: Gregory Maxwell @ 2015-12-08 5:21 UTC (permalink / raw)
To: Anthony Towns; +Cc: Bitcoin Dev

On Tue, Dec 8, 2015 at 4:58 AM, Anthony Towns via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> Having a cost function rather than separate limits does make it easier to build blocks (approximately) optimally, though (ie, just divide the fee by (base_bytes+witness_bytes/4) and sort). Are there any other benefits?

Actually being able to compute fees for your transaction: if there are multiple limits that are "at play", then what you need to pay would depend on the entire set of other candidate transactions, which is unknown to you.

Avoiding the need for a fancy solver in the miner is also virtuous, because requiring software complexity there can make for centralization advantages or divert development/maintenance cycles in open source software off to other ends...

The multidimensional optimization is harder to accommodate in improved relay schemes; this is the same problem as "build blocks", but much more critical, both because of the need for consistency and the frequency with which you do it.

These don't, however, apply all that strongly if only one limit is likely to be the limiting limit... though I am unsure about counting on that; after all, if the other limits wouldn't be limiting, why have them?

> That seems kinda backwards.

It can seem that way, but all limiting schemes have pathological cases where someone runs up against the limit in the most costly way. Keep in mind that casual pathological behavior can be suppressed via IsStandard-like rules without baking them into consensus, so long as the candidate attacker isn't miners themselves. Doing so where possible can help avoid cases like the current sigops limiting, which is just ... pretty broken.

^ permalink raw reply	[flat|nested] 56+ messages in thread
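[Editor's note: a tiny illustration of the fee-computation point above, with invented numbers: the same transaction's competitiveness differs entirely depending on which limit happens to bind, and a wallet can't know in advance which market it is bidding in.]

    tx = {"fee": 5_000, "bytes": 500, "sigops": 20}
    print(tx["fee"] / tx["bytes"])   # 10.0 per byte: its rank if bytes bind
    print(tx["fee"] / tx["sigops"])  # 250.0 per sigop: its rank if sigops bind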
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08  5:21 ` Gregory Maxwell
@ 2015-12-08  6:54 ` Anthony Towns
  0 siblings, 0 replies; 56+ messages in thread

From: Anthony Towns @ 2015-12-08 6:54 UTC (permalink / raw)
To: bitcoin-dev

On Tue, Dec 08, 2015 at 05:21:18AM +0000, Gregory Maxwell via bitcoin-dev wrote:
> On Tue, Dec 8, 2015 at 4:58 AM, Anthony Towns via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> > Having a cost function rather than separate limits does make it easier to build blocks (approximately) optimally, though (ie, just divide the fee by (base_bytes+witness_bytes/4) and sort). Are there any other benefits?
> Actually being able to compute fees for your transaction: if there are multiple limits that are "at play", then what you need to pay would depend on the entire set of other candidate transactions, which is unknown to you.

Isn't that solvable in the short term, if miners just agree to order transactions via a cost function, without enforcing it at consensus level, until a later hard fork that can also change the existing limits to enforce that balance? ie, going from

  (1MB base + 3MB witness + 20k sigops)

with segwit initially, to something like

  (B + W + 200*U + 40*S < 5e6)

where B is base bytes, W is witness bytes, U is number of UTXOs added (or removed) and S is number of sigops, or whatever factors actually make sense.

I guess segwit does allow soft-forking in more sigops immediately -- segwit transactions only add sigops into the segregated witness, which doesn't get counted for existing consensus. So it would be possible to take the opposite approach, and make the rule immediately be something like:

  50*S < 1M
  B + W/4 + 25*S' < 1M

(where S is sigops in base data, and S' is sigops in witness) and just rely on S trending to zero (or soft-fork in a requirement that non-segregated-witness transactions have fewer than B/50 sigops), so that there's only one (linear) equation to optimise when deciding fees or creating a block. (I don't see how you could safely set the coefficient for S' too much smaller, though.)

B+W/4+25*S' for a 2-in/2-out p2pkh would still be 178+206/4+25*2=280 though, which would allow 3570 transactions per block, versus 2700 now, which would only be a 32% increase...

> These don't, however, apply all that strongly if only one limit is likely to be the limiting limit... though I am unsure about counting on that; after all, if the other limits wouldn't be limiting, why have them?

Sure, but, at least for now, there are already two limits that are being hit. Having one is *much* better than two, but I don't think two is a lot better than three? (Also, the ratio between the parameters doesn't necessarily seem like a constant; it's not clear to me that hardcoding a formula with a single limit is actually better than hardcoding separate limits, and letting miners/the market work out coefficients that match the sort of contracts that are actually being used.)

> > That seems kinda backwards.
> It can seem that way, but all limiting schemes have pathological cases where someone runs up against the limit in the most costly way. Keep in mind that casual pathological behavior can be suppressed via IsStandard-like rules without baking them into consensus, so long as the candidate attacker isn't miners themselves. Doing so where possible can help avoid cases like the current sigops limiting, which is just ... pretty broken.
Sure; it just seems to be halving the increase in block space (60% versus 100% extra for p2pkh, 100% versus 200% for 2/2 multisig p2sh) for what doesn't actually look like that much of a benefit in fee comparisons?

I mean, as far as I'm concerned, segwit is great even if it doesn't buy any improvement in transactions/block, so even a 1% gain is brilliant. I'd just rather have the 100%-200% gain I was expecting. :)

Cheers,
aj

^ permalink raw reply	[flat|nested] 56+ messages in thread
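[Editor's note: checking the arithmetic in the message above, under aj's assumed single-equation cost B + W/4 + 25*S' and his 2-in/2-out p2pkh byte counts:]

    B, W, S = 178, 206, 2           # base bytes, witness bytes, witness sigops
    cost = B + W / 4 + 25 * S       # 279.5, "280" in the text
    legacy = 10 + 2 * 146 + 2 * 34  # 370 bytes for the same tx today
    print(1_000_000 / cost)         # ~3578 tx/block (aj: 3570)
    print(1_000_000 / legacy)       # ~2703 tx/block (aj: 2700), so ~32% more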
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08  4:58 ` Anthony Towns
  2015-12-08  5:21 ` Gregory Maxwell
@ 2016-01-18 12:02 ` Anthony Towns
  2016-01-22  9:46   ` Anthony Towns
  1 sibling, 1 reply; 56+ messages in thread

From: Anthony Towns @ 2016-01-18 12:02 UTC (permalink / raw)
To: bitcoin-dev

TLDR: 1.7MB effective block size is a better estimate than 1.6MB for p2pkh with segwit. 2MB for 2/2 multisig still seems accurate. Additional post-segwit soft-forked script improvements can improve the effective block size for p2pkh txns from 1.7MB to 1.9MB, and for 2/2 multisig from 2MB to 2.5MB/3MB.

(To the best of my knowledge, anyway; if I've made a mistake in my maths or assumptions, corrections appreciated)

On Tue, Dec 08, 2015 at 02:58:03PM +1000, Anthony Towns via bitcoin-dev wrote:
> So from IRC, this doesn't seem quite right -- capacity is constrained as base_size + witness_size/4 <= 1MB
..
> That would be 1.6MB and 2MB of total actual data if you hit the limits with real transactions, so it's more like a 1.8x increase for real transactions afaics, even with substantial use of multisig addresses.

I think these numbers are slightly mistaken -- I was only aware of version 1 segwit scripts at the time, and assumed 256 bit hashes would be used for all segwit transactions; however, version 0 segwit txns would be more efficient for p2pkh, with the same security as bitcoin currently has (which seems fine).

Also, segwit will make two additional soft-fork improvements possible that would have a positive effect on transactions per block without requiring more data per block: ecdsa public key recovery (more space efficient for *both* multisig and p2pkh) and schnorr signatures (more space efficient multisig), which might also improve things. I don't know how soon they're planned to be worked on post segwit's roll out; basic Schnorr signatures are in the Elements sidechain, but I don't think key recovery has been implemented anywhere? (Actually, I guess they could both be done already via softforking OP_NOP opcodes, though segwit makes them slightly cleaner.)

Anyhoo, here are some revised figures, working explained in the footnotes. If I've made mistakes, corrections appreciated, of course.
p2pkh:
  now:             10+146i+34o [0]
  segwit:          10+41i+36o + 0.25*105*i [1]
  ecdsa recovery:  10+41i+33o + 0.25*71*i [2]
  80-bit schnorr:  10+41i+33o + 0.25*71*i (same as ecdsa recovery imo [3])
  128-bit schnorr: 10+41i+43o + 0.25*106*i [4]

(128-bit schnorr provides a not very useful increase in security here)

2-of-2 multisig:
  now:             10+254i+32o [5]
  segwit:          10+43i+43o + 0.25*213*i [6]
  ecdsa recovery:  10+43i+43o + 0.25*187*i [7]
  80-bit schnorr:  10+41i+33o + 0.25*71*i (same as p2pkh)
  128-bit schnorr: 10+41i+43o + 0.25*106*i (same as p2pkh)

(segwit, ecdsa recovery and 128-bit schnorr all provide a beneficial security increase here, as per the "Time to worry about 80-bit collision attacks" thread; 80-bit schnorr provides the same security as current p2sh multisig)

Using the same assumptions as in the previous mail, ie that over the long term the number of inputs is about the same as the number of outputs, these simplify to:

           p2pkh     2-of-2 msig
  now      10+180i   10+286i
  segwit   10+104i   10+140i
  recov    10+92i    10+133i
  sch80    10+92i    10+92i
  sch128   10+111i   10+111i

Translating "now" to 100%, the scaling factors (i=1, i->inf) work out to be:

           p2pkh      2-of-2 msig
  now      100%       100%
  segwit   166%-173%  197%-204%
  recov    186%-195%  207%-215%
  sch80    186%-195%  290%-310%
  sch128   157%-162%  244%-257%

So 170% for p2pkh (rather than my original estimate of 160%) and 200% for multisig (same as my original estimate), which can rise via further soft-forks up to 190% for p2pkh and 250% or 300% for 2-of-2 multisig (depending on whether you want additional security for 2/2 multisig beyond what's currently available).

(I'm assuming people are mostly interested in the number of transactions per block (or tx/second or tx/day); if miners are worried about the actual data per block (which affects orphan rates) implied by the above, but don't want to work it out themselves, I could do the maths for that too pretty easily. Let me know.)

If a 2MB hard fork is done first, then the 1/4 discount for segwit could mean up to 8MB of total data per block -- from what I understand this is currently infeasible; so I presume that segwit on top of a hardfork and prior to IBLT/weak blocks would need to have a smaller discount or no discount applied, so as to ensure total data per block remains at 4MB or less. With no discount for witness data (ie, no "accounting tricks") those figures look like:

           p2pkh      2-of-2 msig
  now      100%       100%
  segwit   99%        95%
  recov    122%-124%  104%
  sch80    122%-124%  191%-198%
  sch128   94%-95%    148%-150%

That is, without discounting, segwit comes at a slight cost in transactions per block, and additional soft forks will only result in a 25% gain for p2pkh (via key recovery) and 50%-100% for 2-of-2 multisig (through the use of schnorr sigs and key recovery, and depending on whether you want 128 bits of security rather than 80 bits).

(So without the discounting factor, with a 2MB block size, 2MB per block with segwit and key recovery gives you 25% more p2pkh transactions than just 2MB per block now; while segwit and schnorr signatures give you 50%-100% more 2/2 multisig transactions in the same 2MB. Likewise with 1MB or 4MB and no discounting. Discounting has the indirect benefit of providing a monetary incentive to limit UTXO sizes, however.)

(2 of 3 multisig for escrow payments would probably be interesting to work out too; I think ecdsa key recovery combined with 1/4 discounting would provide a substantial improvement there.
I don't think Schnorr helps at all for that case, unfortunately; and it's probably too small scale for merkle-ised abstract syntax trees to do any good either.)

A caveat: I'm only counting the script data from witnesses here; but it's possible that additional metadata (such as a length for each witness signature, or the value of the input, or even some/all of the merkle hashes) should also be accounted for. I don't think any of them need to be accounted for in segwit as proposed, but I'm not sure. And it might well be different for a hardforked segwit; there I have no idea at all. I don't think a byte or two for length would make much difference, at least.

Cheers,
aj

[0] 10 bytes for version (4), input count (1), output count (1) and locktime (4); 146 bytes per input consisting of tx hash (32), txout index (4), script length (1), scriptsig (signature and pubkey = 105), and sequence number (4); 34 bytes per output consisting of value (8), script length (1) and scriptpubkey (DUP HASH160 PUSH20 EQVERIFY CHECKSIG = 25).

[1] Same as [0], except two extra bytes per output script (segwit push and segwit version byte), and moving the 105 bytes of signature script directly into the segregated witness.

[2] Allowing ECDSA recovery requires an additional soft-fork post segwit to change the CHECKSIG operation; this requires bumping the segwit script version to 2 or higher and possibly using a different opcode, but allows the scriptsig to just be the 70 byte signature, without also including the 33 byte pubkey. The 33 byte pubkey is automatically calculated from the signature, and verified against the hash provided in the scriptpubkey to maintain security, with a scriptpubkey like [PUSH (20 byte pubkey hash) CHECKSIG_RECOVER] (22 bytes versus 25 bytes), and a scriptsig like [PUSH (70 byte sig)] (71 bytes versus 105 bytes).

[3] libsecp256k1 has a function to recover a pubkey from a schnorr signature, so I'm assuming that means pubkey recovery with schnorr is possible :) -- I haven't actually verified the maths. https://github.com/bitcoin/secp256k1/blob/master/include/secp256k1_schnorr.h

[4] The witness scriptpubkey is limited to 32 bytes (plus push op and version byte for a total of 34 bytes), so 128 bit security requires version 1 segwit, and p2sh-style construction. Hence: 10 bytes (version, input and output counts, and locktime); 41 base bytes per input (tx hash, tx index, script length, and sequence number); 106 witness bytes per input (sig (70 bytes) plus witness script (PUSH schnorr merged pubkey (32 bytes) plus CHECKSCHNORR), plus PUSH ops); and 43 bytes per output (value, script length, and 34 bytes for the v1-style witness script).

[5] Per input is (32 bytes tx hash, 4 bytes tx index, 4 bytes nsequence, 1 byte scriptsig length, 143 bytes for the actual signature (2x70 for the sigs, 3 bytes for OP_0 and two OP_PUSHes), and 70 bytes for the redeemscript (2 pub pub 2 OP_CHECKMULTISIG)) for 254 bytes; per output is (8 bytes value, 1 byte length, 23 bytes for HASH160 [20 byte hash] OP_EQUAL) for 32 bytes.

[6] Per input is (34 bytes tx hash, 4 bytes tx index, 4 bytes nsequence, 1 byte scriptsig length) for 43 bytes in the base block and (143 bytes for the actual signature, plus 70 bytes for the redeemscript) for 213 bytes of witness data; per output is (8 bytes value, 1 byte length, and 34 bytes for version 1 segwit scriptpubkey) for 43 bytes.
[7] Same as [6], but with key recovery on a MULTISIG op: rather than 33 bytes per pubkey, this could be reduced to a 20 byte pubkey hash per pubkey, for a saving of 26 bytes of witness data.

^ permalink raw reply	[flat|nested] 56+ messages in thread
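[Editor's note: the "Setting i=o" table above can be reproduced mechanically from the per-input/per-output formulas, applying the 1/4 witness discount; a sketch (the email rounds the per-pair costs up, e.g. 103.25 -> 104):]

    # (const, base_in, base_out, witness_per_input), from the formula table.
    schemes = {
        "now p2pkh":    (10, 146, 34,   0),
        "segwit p2pkh": (10,  41, 36, 105),
        "now 2of2":     (10, 254, 32,   0),
        "segwit 2of2":  (10,  43, 43, 213),
    }
    for name, (c, bi, bo, w) in schemes.items():
        per_pair = bi + bo + 0.25 * w  # cost per matched input+output pair
        print(f"{name:13s} {c}+{per_pair:g}i")
    # -> 180, 103.25 (~104), 286, 139.25 (~140), matching the table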
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2016-01-18 12:02 ` Anthony Towns
@ 2016-01-22  9:46 ` Anthony Towns
  0 siblings, 0 replies; 56+ messages in thread

From: Anthony Towns @ 2016-01-22 9:46 UTC (permalink / raw)
To: bitcoin-dev

On Mon, Jan 18, 2016 at 10:02:51PM +1000, Anthony Towns via bitcoin-dev wrote:
> I think these numbers are slightly mistaken -- I was only aware of version 1 segwit scripts at the time, and assumed 256 bit hashes would be used for all segwit transactions; however, version 0 segwit txns would be more efficient for p2pkh, with the same security as bitcoin currently has (which seems fine).

Latest segwit code just has the version 0 witness format, and treats a 32 byte push as the sha256 of a script, and a 20 byte push as the hash of the pubkey. Also, the witness scriptPubKey format uses "OP_0 [hash]" to push the version and hash to the script separately, rather than "[0x00 script]" or "[0x01 hash]" (this change allows segwit transactions to be encoded backwards compatibly as a p2sh payment).

> p2pkh:
>   segwit: 10+41i+36o + 0.25*105*i [1]
> [0] 10 bytes for version (4), input count (1), output count (1) and locktime (4); 146 bytes per input consisting of tx hash (32), txout index (4), script length (1), scriptsig (signature and pubkey = 105), and sequence number (4); 34 bytes per output consisting of value (8), script length (1) and scriptpubkey (DUP HASH160 PUSH20 EQVERIFY CHECKSIG = 25).
> [1] Same as [0], except two extra bytes per output script (segwit push and segwit version byte), and moving the 105 bytes of signature script directly into the segregated witness

So this change means segwit p2pkh needs 31 bytes per output, not 36 bytes (value and length stay the same; the scriptpubkey becomes "OP_0 PUSH20" for 22 bytes instead of 25+2 bytes). This gives another couple of percent gain, so:

  segwit: 10+41i+31o + 0.25*105*i [1]

Setting i=o makes:

>          p2pkh    2-of-2 msig
> now      10+180i  10+286i
> segwit   10+104i  10+140i

become:

  segwit   10+99i   10+140i

and therefore,

>          p2pkh      2-of-2 msig
> now      100%       100%
> segwit   166%-173%  197%-204%

becomes:

  segwit   174%-181%  197%-204%

Constantly creeping up! Pretty nice.

Also, p2pkh with segwit-via-p2sh is probably interesting; those numbers work out as:

  segwit:  10+41i+31o + 0.25*105*i (for comparison)
  segp2sh: 10+60i+32o + 0.25*105*i [0]
           -> 10+119i -> 147%-151%

So that still looks like a reasonable improvement even if (eg) in the short term merchants are the only ones that upgrade, and customers just use non-segwit-aware wallets with a p2sh address that's only redeemable by a segwit-aware wallet.

Cheers,
aj

[0] 10 bytes standard. For each input, tx hash (32) plus index (4), script length (1) and scriptsig which is a push of the standard segwit pubscript (22+1), totalling 60, and witness data is the same as for normal segwit (105). Each output is standard p2sh, which is value (8), length (1) and script (23) for a total of 32.

^ permalink raw reply	[flat|nested] 56+ messages in thread
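[Editor's note: the segwit-via-p2sh estimate above, re-derived from footnote [0] of this message; the ~1% differences from aj's 147%-151% come only from his rounding of per-pair costs to integers.]

    def cost(i, o, base_in, base_out, witness_in):
        # In-block cost with the 1/4 witness discount.
        return 10 + base_in * i + base_out * o + 0.25 * witness_in * i

    for i in (1, 1000):  # i inputs and i outputs
        now    = 10 + 146 * i + 34 * i    # legacy p2pkh
        native = cost(i, i, 41, 31, 105)  # native v0 segwit p2pkh
        p2sh   = cost(i, i, 60, 32, 105)  # segwit wrapped in p2sh
        print(f"i={i}: native {now/native:.0%}, via p2sh {now/p2sh:.0%}")
    # -> native ~176%-183%, via p2sh ~148%-152% (aj: 174%-181%, 147%-151%)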
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-07 22:02 [bitcoin-dev] Capacity increases for the Bitcoin system Gregory Maxwell
  2015-12-07 22:54 ` Bryan Bishop
  2015-12-08  2:42 ` Anthony Towns
@ 2015-12-08 11:07 ` Wladimir J. van der Laan
  2015-12-08 11:14   ` Jorge Timón
  [not found]   ` <CAPWm=eUomq6SBC0ky0WSs5=_G942vigm4RmgYuq0O-yJ-vqC2A@mail.gmail.com>
  2015-12-08 23:48 ` Jonathan Toomim
  3 siblings, 2 replies; 56+ messages in thread

From: Wladimir J. van der Laan @ 2015-12-08 11:07 UTC (permalink / raw)
To: Gregory Maxwell; +Cc: Bitcoin Dev

On Mon, Dec 07, 2015 at 10:02:17PM +0000, Gregory Maxwell via bitcoin-dev wrote:
> The Scaling Bitcoin Workshop in HK is just wrapping up. Many fascinating proposals were presented. I think this would be a good time to share my view of the near term arc for capacity increases in the Bitcoin system. I believe we’re in a fantastic place right now and that the community is ready to deliver on a clear forward path with a shared vision that addresses the needs of the system while upholding its values.

Thanks for writing this up. Putting the progress, ongoing work and plans related to scaling in context, in one place, was badly needed.

> TL;DR: I propose we work immediately towards the segwit 4MB block soft-fork which increases capacity and scalability, and recent speedups and incoming relay improvements make segwit a reasonable risk. BIP9 and segwit will also make further improvements easier and faster to deploy. We’ll continue to set the stage for non-bandwidth-increase-based scaling, while building additional tools that would make bandwidth increases safer long term. Further work will prepare Bitcoin for further increases, which will become possible when justified, while also providing the groundwork to make them justifiable.

Sounds good to me.

There are multiple ways to get involved in ongoing work, where the community can help to make this happen sooner:

- Review the versionbits BIP: https://github.com/bitcoin/bips/blob/master/bip-0009.mediawiki
  - Compare and test with the implementation: https://github.com/bitcoin/bitcoin/pull/6816

- Review the CSV BIPs (BIP68: https://github.com/bitcoin/bips/blob/master/bip-0068.mediawiki / BIP112: https://github.com/bitcoin/bips/blob/master/bip-0112.mediawiki)
  - Compare and test the implementations:
    https://github.com/bitcoin/bitcoin/pull/6564 BIP-112: Mempool-only CHECKSEQUENCEVERIFY
    https://github.com/bitcoin/bitcoin/pull/6312 BIP-68: Mempool-only sequence number constraint verification
    https://github.com/bitcoin/bitcoin/pull/7184 [WIP] Implement SequenceLocks functions for BIP 68

- The Segwit BIP is being written, but has not yet been published.
  - Gregory linked to an implementation, but as he mentions it is not completely finished yet. ETA for a Segwit testnet is later this month; then you can test as well.

Wladimir

^ permalink raw reply	[flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08 11:07 ` Wladimir J. van der Laan
@ 2015-12-08 11:14 ` Jorge Timón
  2015-12-08 15:12   ` Gavin Andresen
  [not found] ` <CAPWm=eUomq6SBC0ky0WSs5=_G942vigm4RmgYuq0O-yJ-vqC2A@mail.gmail.com>
  1 sibling, 1 reply; 56+ messages in thread

From: Jorge Timón @ 2015-12-08 11:14 UTC (permalink / raw)
To: Wladimir; +Cc: Bitcoin Dev

On Dec 8, 2015 7:08 PM, "Wladimir J. van der Laan via bitcoin-dev" <bitcoin-dev@lists.linuxfoundation.org> wrote:
> - Gregory linked to an implementation, but as he mentions it is not completely finished yet. ETA for a Segwit testnet is later this month; then you can test as well.

Testnet4 ?

^ permalink raw reply	[flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08 11:14 ` Jorge Timón
@ 2015-12-08 15:12 ` Gavin Andresen
  2015-12-08 15:55   ` Justus Ranvier
  ` (2 more replies)
  0 siblings, 3 replies; 56+ messages in thread

From: Gavin Andresen @ 2015-12-08 15:12 UTC (permalink / raw)
To: Bitcoin Dev

Thanks for laying out a road-map, Greg.

I'll need to think about it some more, but just a couple of initial reactions:

Why segwitness as a soft fork? Stuffing the segwitness merkle tree in the coinbase is messy and will just complicate consensus-critical code (as opposed to making the right side of the merkle tree in block.version=5 blocks the segwitness data). It will also make any segwitness fraud proofs significantly larger (merkle path versus merkle path to coinbase transaction, plus ENTIRE coinbase transaction, which might be quite large, plus merkle path up to root).

We also need to fix the O(n^2) sighash problem as an additional BIP for ANY blocksize increase. That also argues for a hard fork-- it is much easier to fix it correctly and simplify the consensus code than to continue to apply band-aid fixes on top of something fundamentally broken.

Segwitness will require a hard or soft-fork rollout, then a significant fraction of the transaction-producing wallets to upgrade and start supporting segwitness-style transactions. I think it will be much quicker than the P2SH rollout, because the biggest transaction producers have a strong motivation to lower their fees, and it won't require a new type of bitcoin address to fund wallets. But it still feels like it'll be six months to a year at the earliest before any relief from the current problems we're seeing from blocks filling up.

Segwitness will make the current bottleneck (block propagation) a little worse in the short term, because of the extra fraud-proof data. Benefits well worth the costs.

------------------

I think a barrier to quickly getting consensus might be a fundamental difference of opinion on this: "Even without them I believe we’ll be in an acceptable position with respect to capacity in the near term"

The heaviest users of the Bitcoin network (businesses who generate tens of thousands of transactions per day on behalf of their customers) would strongly disagree; the current state of affairs is NOT acceptable to them.

--
--
Gavin Andresen

^ permalink raw reply	[flat|nested] 56+ messages in thread
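[Editor's note: a sketch of the O(n^2) sighash problem mentioned above. Under the legacy signature-hashing scheme, each input's signature commits to its own modified copy of the whole transaction, so a transaction whose size grows with its input count hashes roughly n^2 bytes in total. The per-input byte figure is illustrative; the quadratic shape is the point.]

    import hashlib

    def legacy_sighash_bytes(n_inputs, bytes_per_input=150):
        # One full-transaction hashing pass per input signature.
        tx_size = n_inputs * bytes_per_input
        total = 0
        for _ in range(n_inputs):
            hashlib.sha256(b"\x00" * tx_size)  # stand-in for the tx copy
            total += tx_size
        return total

    for n in (100, 1000):
        print(n, legacy_sighash_bytes(n))  # 10x the inputs, ~100x the hashing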
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08 15:12 ` Gavin Andresen
@ 2015-12-08 15:55 ` Justus Ranvier
  2015-12-08 17:41   ` Mark Friedenbach
  ` (2 more replies)
  2015-12-08 23:59 ` Gregory Maxwell
  2015-12-09 14:51 ` Chris
  2 siblings, 3 replies; 56+ messages in thread

From: Justus Ranvier @ 2015-12-08 15:55 UTC (permalink / raw)
To: bitcoin-dev

On 12/08/2015 09:12 AM, Gavin Andresen via bitcoin-dev wrote:
> Stuffing the segwitness merkle tree in the coinbase

If such a change is going to be deployed via a soft fork instead of a hard fork, then the coinbase is the worst place to put the segwitness merkle root.

Instead, put it in the first output of the generation transaction as an OP_RETURN script.

This is a better pattern because coinbase space is limited while output space is not. The next time there's a good reason to tie another merkle tree to a block, that proposal can be designated for the second output of the generation transaction.

^ permalink raw reply	[flat|nested] 56+ messages in thread
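[Editor's note: for concreteness, a sketch of the kind of output described above -- a zero-value output whose script is OP_RETURN followed by a 32-byte commitment. The opcode values are Bitcoin's (0x6a = OP_RETURN); the commitment layout itself is hypothetical.]

    import struct

    def commitment_output(root32: bytes) -> bytes:
        assert len(root32) == 32
        script = b"\x6a\x20" + root32   # OP_RETURN <push 32 bytes> <root>
        return (struct.pack("<q", 0)    # value: 0 satoshis
                + bytes([len(script)])  # script length (34)
                + script)

    print(commitment_output(b"\x11" * 32).hex())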
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08 15:55 ` Justus Ranvier
@ 2015-12-08 17:41 ` Mark Friedenbach
  2015-12-08 18:43   ` Justus Ranvier
  2015-12-08 19:08   ` Tier Nolan
  1 sibling, 2 replies; 56+ messages in thread

From: Mark Friedenbach @ 2015-12-08 17:41 UTC (permalink / raw)
To: Justus Ranvier; +Cc: Bitcoin Dev

A far better place than the generation transaction (which I assume means the coinbase transaction?) is the last transaction in the block. That allows you to save, on average, half of the hashes in the Merkle tree.

On Tue, Dec 8, 2015 at 11:55 PM, Justus Ranvier via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> On 12/08/2015 09:12 AM, Gavin Andresen via bitcoin-dev wrote:
> > Stuffing the segwitness merkle tree in the coinbase
>
> If such a change is going to be deployed via a soft fork instead of a hard fork, then the coinbase is the worst place to put the segwitness merkle root.
>
> Instead, put it in the first output of the generation transaction as an OP_RETURN script.
>
> This is a better pattern because coinbase space is limited while output space is not. The next time there's a good reason to tie another merkle tree to a block, that proposal can be designated for the second output of the generation transaction.

^ permalink raw reply	[flat|nested] 56+ messages in thread
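[Editor's note: a sketch of why the last slot is cheap. In Bitcoin's merkle tree an unpaired node is hashed with a copy of itself, so at any level where the rightmost node is the odd one out, its "sibling" is itself and costs nothing to prove. Counting the distinct sibling hashes actually needed for the last leaf:]

    import math

    def last_leaf_proof_hashes(n_leaves):
        idx, n, needed = n_leaves - 1, n_leaves, 0
        while n > 1:
            if (idx ^ 1) < n:   # a real sibling exists at this level,
                needed += 1     # so one hash must be transmitted
            idx //= 2           # otherwise the sibling is a free duplicate
            n = (n + 1) // 2
        return needed

    for n in (5, 12, 513, 1000, 4096):
        print(n, last_leaf_proof_hashes(n), math.ceil(math.log2(n)))
    # e.g. 5 leaves: 1 hash vs 3 generic; 513 leaves: 1 vs 10; a power of
    # two gives no saving -- the saving varies with the count.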
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08 17:41 ` Mark Friedenbach
@ 2015-12-08 18:43 ` Justus Ranvier
  0 siblings, 0 replies; 56+ messages in thread

From: Justus Ranvier @ 2015-12-08 18:43 UTC (permalink / raw)
To: Mark Friedenbach; +Cc: Bitcoin Dev

On 12/08/2015 11:41 AM, Mark Friedenbach wrote:
> A far better place than the generation transaction (which I assume means coinbase transaction?) is the last transaction in the block. That allows you to save, on average, half of the hashes in the Merkle tree.

I don't care what color that bikeshed is painted. In whatever transaction it is placed, the hash should be on the output side. That way is more future-proof, since it does not crowd out other hashes which might be equally valuable to commit someday.

^ permalink raw reply	[flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.
  2015-12-08 17:41 ` Mark Friedenbach
  2015-12-08 18:43 ` Justus Ranvier
@ 2015-12-08 19:08 ` Tier Nolan
  1 sibling, 0 replies; 56+ messages in thread

From: Tier Nolan @ 2015-12-08 19:08 UTC (permalink / raw)
Cc: Bitcoin Dev

On Tue, Dec 8, 2015 at 5:41 PM, Mark Friedenbach via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> A far better place than the generation transaction (which I assume means coinbase transaction?) is the last transaction in the block. That allows you to save, on average, half of the hashes in the Merkle tree.

This trick can be improved by only using certain tx counts. If the number of transactions is limited to a power of 2 (other than the extra transactions), then you get a path of length zero. The number of non-zero bits in the tx count determines how many digests are required.

https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki

This gets the benefit of a soft-fork, while also keeping the proof lengths small. The linked bip has a 105 byte overhead for the path.

The cost is that only certain transaction counts are allowed. In the worst case, 12.5% of transactions would have to be left in the memory pool. This means around 7% of transactions would be delayed until the next block. Blank transactions (or just transactions with low latency requirements) could be used to increase the count so that it is raised to one of the valid numbers. Managing the UTXO set to ensure that there is at least one output that pays to OP_TRUE is also a hassle.

^ permalink raw reply	[flat|nested] 56+ messages in thread
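[Editor's note: a toy model of the path-length claim above, assuming a merkle construction where an unpaired node is promoted to the next level rather than duplicated; exact lengths depend on the construction in the linked bip draft, but the trend -- fewer set bits in the count, shorter path for the final leaf -- is visible.]

    def aux_proof_len(n):
        # Merkle path length for the LAST of n leaves, in a tree that
        # promotes an unpaired node instead of duplicating it.
        size, pos, proof = n, n - 1, 0
        while size > 1:
            if size % 2 == 1 and pos == size - 1:
                pos = size // 2  # carried up unpaired: no hash needed
            else:
                proof += 1       # a sibling hash is needed at this level
                pos //= 2
            size = (size + 1) // 2
        return proof

    for n in (4, 5, 8, 9, 12, 16, 17):
        print(f"n={n:>2} ({n:b}): {aux_proof_len(n)} digests")
    # e.g. n=17 (10001) needs 1 digest; n=16 (10000) needs 4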
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 15:55 ` Justus Ranvier 2015-12-08 17:41 ` Mark Friedenbach @ 2015-12-08 19:31 ` Gregory Maxwell 2015-12-08 23:40 ` Jonathan Toomim 2 siblings, 0 replies; 56+ messages in thread From: Gregory Maxwell @ 2015-12-08 19:31 UTC (permalink / raw) To: Justus Ranvier; +Cc: Bitcoin Dev On Tue, Dec 8, 2015 at 3:55 PM, Justus Ranvier via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote: > Instead, put it in the first output of the generation transaction as an > OP_RETURN script. Pieter was originally putting it in a different location; so it's no big deal to do so. But there exists deployed mining hardware that imposes constraints on the coinbase outputs, unfortunately. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 15:55 ` Justus Ranvier 2015-12-08 17:41 ` Mark Friedenbach 2015-12-08 19:31 ` Gregory Maxwell @ 2015-12-08 23:40 ` Jonathan Toomim 2015-12-08 23:48 ` Luke Dashjr 2015-12-08 23:50 ` Jorge Timón 2 siblings, 2 replies; 56+ messages in thread From: Jonathan Toomim @ 2015-12-08 23:40 UTC (permalink / raw) To: Justus Ranvier; +Cc: bitcoin-dev [-- Attachment #1: Type: text/plain, Size: 1292 bytes --] Agree. This data does not belong in the coinbase. That space is for miners to use, not devs. I also think that a hard fork is better for SegWit, as it reduces the size of fraud proofs considerably, makes the whole design more elegant and less kludgey, and is safer for clients who do not upgrade in a timely fashion. I don't like the idea that SegWit would invalidate the security assumptions of non-upgraded clients (including SPV wallets). I think that for these clients, no data is better than invalid data. Better to force them to upgrade by cutting them off the network than to let them think they're validating transactions when they're not. On Dec 8, 2015, at 11:55 PM, Justus Ranvier via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote: > If such a change is going to be deployed via a soft fork instead of a > hard fork, then the coinbase is the worst place to put the segwitness > merkle root. > > Instead, put it in the first output of the generation transaction as an > OP_RETURN script. > > This is a better pattern because coinbase space is limited while output > space is not. The next time there's a good reason to tie another merkle > tree to a block, that proposal can be designated for the second output > of the generation transaction. [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 496 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 23:40 ` Jonathan Toomim @ 2015-12-08 23:48 ` Luke Dashjr 2015-12-09 0:54 ` Jonathan Toomim 2015-12-08 23:50 ` Jorge Timón 1 sibling, 1 reply; 56+ messages in thread From: Luke Dashjr @ 2015-12-08 23:48 UTC (permalink / raw) To: bitcoin-dev, Jonathan Toomim On Tuesday, December 08, 2015 11:40:42 PM Jonathan Toomim via bitcoin-dev wrote: > Agree. This data does not belong in the coinbase. That space is for miners > to use, not devs. This has never been guaranteed, nor are softforks a "dev action" in the first place. > I also think that a hard fork is better for SegWit, as it reduces the size > of fraud proofs considerably, makes the whole design more elegant and less > kludgey, and is safer for clients who do not upgrade in a timely fashion. How about we pursue the SegWit softfork, and at the same time* work on a hardfork which will simplify the proofs and reduce the kludgeyness of merge-mining in general? Then, if the hardfork is ready before the softfork, they can both go together, but if not, we aren't stuck delaying the improvements of SegWit until the hardfork is completed. * I have been in fact working on such a proposal for a while now, since before SegWit. > I don't like the idea that SegWit would invalidate the security > assumptions of non-upgraded clients (including SPV wallets). I think that > for these clients, no data is better than invalid data. Better to force > them to upgrade by cutting them off the network than to let them think > they're validating transactions when they're not. There isn't an option for "no data", as non-upgraded nodes in a hardfork are left completely vulnerable to attacking miners, even at much lower hashrate than the 51% attack risk. So the alternatives are: - hardfork: complete loss of all security for the old nodes - softfork: degraded security for old nodes Luke ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 23:48 ` Luke Dashjr @ 2015-12-09 0:54 ` Jonathan Toomim 0 siblings, 0 replies; 56+ messages in thread From: Jonathan Toomim @ 2015-12-09 0:54 UTC (permalink / raw) To: Bitcoin Dev [-- Attachment #1.1: Type: text/plain, Size: 662 bytes --] On Dec 9, 2015, at 7:48 AM, Luke Dashjr <luke@dashjr.org> wrote: > How about we pursue the SegWit softfork, and at the same time* work on a > hardfork which will simplify the proofs and reduce the kludgeyness of merge-mining in general? Then, if the hardfork is ready before the softfork, they > can both go together, but if not, we aren't stuck delaying the improvements of > SegWit until the hardfork is completed. So that all our code that parses the blockchain needs to be able to find the segwit data in both places? That doesn't really sound like an improvement to me. Why not just do it as a hard fork? They're really not that hard to do. [-- Attachment #1.2: Type: text/html, Size: 4602 bytes --] [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 496 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 23:40 ` Jonathan Toomim 2015-12-08 23:48 ` Luke Dashjr @ 2015-12-08 23:50 ` Jorge Timón 2015-12-09 0:56 ` Jonathan Toomim 1 sibling, 1 reply; 56+ messages in thread From: Jorge Timón @ 2015-12-08 23:50 UTC (permalink / raw) To: Jonathan Toomim; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1892 bytes --] On Dec 9, 2015 7:41 AM, "Jonathan Toomim via bitcoin-dev" < bitcoin-dev@lists.linuxfoundation.org> wrote: > I also think that a hard fork is better for SegWit, as it reduces the size of fraud proofs considerably, makes the whole design more elegant and less kludgey, and is safer for clients who do not upgrade in a timely fashion. I agree, although I disagree with the last reason. > I don't like the idea that SegWit would invalidate the security assumptions of non-upgraded clients (including SPV wallets). I think that for these clients, no data is better than invalid data. Better to force them to upgrade by cutting them off the network than to let them think they're validating transactions when they're not. I don't understand. SPV nodes won't think they are validating transactions with the new version unless they adapt to the new format. They will be simply unable to receive payments using the new format if it is a softfork (although as said I agree with making it a hardfork on the simpler design and smaller fraud proofs grounds alone). > > On Dec 8, 2015, at 11:55 PM, Justus Ranvier via bitcoin-dev < bitcoin-dev@lists.linuxfoundation.org> wrote: > > > If such a change is going to be deployed via a soft fork instead of a > > hard fork, then the coinbase is the worst place to put the segwitness > > merkle root. > > > > Instead, put it in the first output of the generation transaction as an > > OP_RETURN script. > > > > This is a better pattern because coinbase space is limited while output > > space is not. The next time there's a good reason to tie another merkle > > tree to a block, that proposal can be designated for the second output > > of the generation transaction. > > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > [-- Attachment #2: Type: text/html, Size: 2509 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 23:50 ` Jorge Timón @ 2015-12-09 0:56 ` Jonathan Toomim 0 siblings, 0 replies; 56+ messages in thread From: Jonathan Toomim @ 2015-12-09 0:56 UTC (permalink / raw) To: Jorge Timón; +Cc: Bitcoin Dev [-- Attachment #1.1: Type: text/plain, Size: 555 bytes --] On Dec 9, 2015, at 7:50 AM, Jorge Timón <jtimon@jtimon.cc> wrote: > I don't understand. SPV nodes won't think they are validating transactions with the new version unless they adapt to the new format. They will be simply unable to receive payments using the new format if it is a softfork (although as said I agree with making it a hardfork on the simpler design and smaller fraud proofs grounds alone). > Okay, I might just not understand how a segwit payment would look to current software yet. I'll add learning about that to my to-do list... [-- Attachment #1.2: Type: text/html, Size: 1224 bytes --] [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 496 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 15:12 ` Gavin Andresen 2015-12-08 15:55 ` Justus Ranvier @ 2015-12-08 23:59 ` Gregory Maxwell 2015-12-09 0:58 ` Jorge Timón 2015-12-09 1:09 ` Gavin Andresen 2015-12-09 14:51 ` Chris 2 siblings, 2 replies; 56+ messages in thread From: Gregory Maxwell @ 2015-12-08 23:59 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev On Tue, Dec 8, 2015 at 3:12 PM, Gavin Andresen via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote: > Why segwitness as a soft fork? Stuffing the segwitness merkle tree in the > coinbase is messy and will just complicate consensus-critical code (as > opposed to making the right side of the merkle tree in block.version=5 > blocks the segwitness data). It's nearly complexity-costless to put it in the coinbase transaction. Exploring the costs is one of the reasons why this was implemented first. We already have consensus critical enforcement there, the height, which has almost never been problematic. (A popular block explorer recently misimplemented the var-int decode and suffered an outage). And most but not all prior commitment proposals have suggested the same or similar. The exact location is not that critical, however, and we do have several soft-fork compatible options. > It will also make any segwitness fraud proofs significantly larger (merkle > path versus merkle path to coinbase transactions, plus ENTIRE coinbase > transaction, which might be quite large, plus merkle path up to root). Yes, it will make them larger by log2() the number of transactions in a block which is-- say-- 448 bytes. With the coinbase transaction that's another couple kilobytes; I think this is negligible. From a risk reduction perspective, I think it is much preferable to perform the primary change in a backwards compatible manner, and pick up the data reorganization in a hardfork if anyone even cares. I think that's generally a nice cadence to split up risks that way; and avoid controversy. > We also need to fix the O(n^2) sighash problem as an additional BIP for ANY > blocksize increase. The witness data is never an input to sighash, so no, I don't agree that this holds for "any" increase. > Segwitness will make the current bottleneck (block propagation) a little > worse in the short term, because of the extra fraud-proof data. Benefits > well worth the costs. The fraud proof data is deterministic, full nodes could skip sending it between each other, if anyone cared; but the overhead is pretty tiny in any case. > I think a barrier to quickly getting consensus might be a fundamental > difference of opinion on this: > "Even without them I believe we’ll be in an acceptable position with > respect to capacity in the near term" > > The heaviest users of the Bitcoin network (businesses who generate tens of > thousands of transactions per day on behalf of their customers) would > strongly disagree; the current state of affairs is NOT acceptable to them. My message lays out a plan for several different complementary capacity advances; it's not referring to the current situation-- though the current capacity situation is no emergency. I believe it already reflects the emerging consensus in the Bitcoin Core project; in terms of the overall approach and philosophy, if not every specific technical detail. It's not a forever plan, but a pragmatic one that understands that the future is uncertain no matter what we do; one that trusts that we'll respond to whatever contingencies surprise us on the road to success. 
^ permalink raw reply [flat|nested] 56+ messages in thread
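One way to unpack the 448-byte figure in the message above (the arithmetic is inferred from the stated numbers, not spelled out in the post): a Merkle branch costs 32 bytes per tree level, so 448 bytes corresponds to 448 / 32 = 14 levels, i.e. a tree of up to 2^14 = 16384 transactions, on the order of a full block's transaction count.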
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 23:59 ` Gregory Maxwell @ 2015-12-09 0:58 ` Jorge Timón 2015-12-09 1:02 ` Jorge Timón 2015-12-09 1:09 ` Gavin Andresen 1 sibling, 1 reply; 56+ messages in thread From: Jorge Timón @ 2015-12-09 0:58 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev On Wed, Dec 9, 2015 at 12:59 AM, Gregory Maxwell via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote: > On Tue, Dec 8, 2015 at 3:12 PM, Gavin Andresen via bitcoin-dev > <bitcoin-dev@lists.linuxfoundation.org> wrote: > We already have consensus critical enforcement there, the height, > which has almost never been problematic. (A popular block explorer > recently misimplemented the var-int decode and suffered an outage). It would also be a nice opportunity to move the height to a more accessible place. For example, CBlockHeader::hashMerkleRoot (and CBlockIndex's) could be replaced with a hash of the following struct:

struct hashRootStruct
{
  uint256 hashMerkleRoot;
  uint256 hashWitnessesRoot;
  int32_t nHeight;
}

> From a risk reduction perspective, I think it is much preferable to > perform the primary change in a backwards compatible manner, and pick > up the data reorganization in a hardfork if anyone even cares. But then all wallet developers will need to adapt their software twice. Why introduce technical debt for no good reason? > I think that's generally a nice cadence to split up risks that way; and > avoid controversy. Uncontroversial hardforks can also be deployed with small risks as described in BIP99. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 0:58 ` Jorge Timón @ 2015-12-09 1:02 ` Jorge Timón 0 siblings, 0 replies; 56+ messages in thread From: Jorge Timón @ 2015-12-09 1:02 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev On Wed, Dec 9, 2015 at 1:58 AM, Jorge Timón <jtimon@jtimon.cc> wrote:

> struct hashRootStruct
> {
>   uint256 hashMerkleRoot;
>   uint256 hashWitnessesRoot;
>   int32_t nHeight;
> }

Or better, for forward compatibility (we may want to include more things apart from nHeight and hashWitnessesRoot in the future):

struct hashRootStruct
{
  uint256 hashMerkleRoot;
  uint256 hashWitnessesRoot;
  uint256 hashextendedHeader;
}

For example, we may want to choose to add an extra nonce there. ^ permalink raw reply [flat|nested] 56+ messages in thread
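A sketch of how a composite commitment root along the lines of this struct might be computed follows; the field order and the use of double-SHA256 over the raw 96-byte concatenation are assumptions made for illustration, not part of any specification in the thread.

    // Hypothetical: derive the header commitment by hashing the three
    // 32-byte members in order with double-SHA256 (OpenSSL used here
    // only to keep the sketch self-contained).
    #include <openssl/sha.h>
    #include <algorithm>
    #include <array>

    using uint256 = std::array<unsigned char, 32>;

    struct hashRootStruct {
        uint256 hashMerkleRoot;
        uint256 hashWitnessesRoot;
        uint256 hashextendedHeader;
    };

    uint256 CommitmentRoot(const hashRootStruct& s) {
        unsigned char buf[96], once[32];
        std::copy(s.hashMerkleRoot.begin(), s.hashMerkleRoot.end(), buf);
        std::copy(s.hashWitnessesRoot.begin(), s.hashWitnessesRoot.end(), buf + 32);
        std::copy(s.hashextendedHeader.begin(), s.hashextendedHeader.end(), buf + 64);
        SHA256(buf, 96, once);        // first pass
        uint256 out;
        SHA256(once, 32, out.data()); // second pass: double-SHA256
        return out;
    }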
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-08 23:59 ` Gregory Maxwell 2015-12-09 0:58 ` Jorge Timón @ 2015-12-09 1:09 ` Gavin Andresen 2015-12-09 1:31 ` Gregory Maxwell 1 sibling, 1 reply; 56+ messages in thread From: Gavin Andresen @ 2015-12-09 1:09 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1484 bytes --] On Tue, Dec 8, 2015 at 6:59 PM, Gregory Maxwell <greg@xiph.org> wrote: > > We also need to fix the O(n^2) sighash problem as an additional BIP for > ANY > > blocksize increase. > > The witness data is never an input to sighash, so no, I don't agree > that this holds for "any" increase. > Here's the attack: Create a 1-megabyte transaction, with all of its inputs spending segwitness-spending SIGHASH_ALL inputs. Because the segwitness inputs are smaller in the block, you can fit more of them into 1 megabyte. Each will hash very close to one megabyte of data. That will be O(n^2) worse than the worst case of a 1-megabyte transaction with signatures in the scriptSigs. Did I misunderstand something or miss something about the 1-mb transaction data and 3-mb segwitness data proposal that would make this attack not possible? RE: fraud proof data being deterministic: yes, I see, the data can be computed instead of broadcast with the block. RE: emerging consensus of Core: I think it is a huge mistake not to "design for success" (see http://gavinandresen.ninja/designing-for-success ). I think it is a huge mistake to pile on technical debt in consensus-critical code. I think we should be working harder to make things simpler, not more complex, whenever possible. And I think there are pretty big self-inflicted current problems because worries about theoretical future problems have prevented us from coming to consensus on simple solutions. -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 2607 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
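Rough numbers behind the claimed attack (illustrative arithmetic, not from the post): with the legacy sighash, each of n SIGHASH_ALL inputs re-hashes nearly the full transaction, so total hashing is about n times the transaction size. A 1 MB transaction packing roughly 25,000 minimal ~41-byte inputs would hash on the order of 25 GB. Since the number of inputs itself scales with the transaction size, doubling the size roughly quadruples the hashing: n signatures, each over O(n) bytes, hence O(n^2).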
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 1:09 ` Gavin Andresen @ 2015-12-09 1:31 ` Gregory Maxwell 2015-12-09 4:44 ` Ryan Butler 2015-12-09 4:51 ` Anthony Towns 0 siblings, 2 replies; 56+ messages in thread From: Gregory Maxwell @ 2015-12-09 1:31 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev On Wed, Dec 9, 2015 at 1:09 AM, Gavin Andresen <gavinandresen@gmail.com> wrote: > Create a 1-megabyte transaction, with all of its inputs spending > segwitness-spending SIGHASH_ALL inputs. > > Because the segwitness inputs are smaller in the block, you can fit more of > them into 1 megabyte. Each will hash very close to one megabyte of data. Witness size comes out of the 1MB at a factor of 0.25. It is not possible to make a block which has signatures with the full 1MB of data under the sighash while also having signatures externally. So every byte moved into the witness and thus only counted as 25% comes out of the data being hashed and is hashed nInputs (*checksigs) less times. > I think it is a huge mistake not to "design for success" (see > http://gavinandresen.ninja/designing-for-success ). We are designing for success; including the success of being able to adapt and cope with uncertainty-- which is the most critical kind of success we can have in a world where nothing is or can be predictable. > I think it is a huge mistake to pile on technical debt in consensus-critical > code. I think we should be working harder to make things simpler, not more > complex, whenever possible. I agree, but nothing I have advocated creates significant technical debt. It is also a bad engineering practice to combine functional changes (especially ones with poorly understood system wide consequences and low user autonomy) with structural tidying. > And I think there are pretty big self-inflicted current problems because > worries about theoretical future problems have prevented us from coming to > consensus on simple solutions. That isn't my perspective. I believe we've suffered delays because of a strong desire to be inclusive and hear out all ideas, and not forestall market adoption, even for ideas that eschewed pragmatism and tried to build for forever in a single step and which in our heart of hearts we knew were not the right path today. It's time to move past that and get back on track with the progress we can make and have been making, in terms of capacity as well as many other areas. I think that is designing for success. ^ permalink raw reply [flat|nested] 56+ messages in thread
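To unpack the 0.25 factor with concrete numbers (an illustrative reading of the accounting described above, not a quotation of the proposal): if block cost is counted as base_size + witness_size/4 <= 1,000,000 bytes, then a transaction that keeps nearly 1 MB of base data under the sighash leaves essentially no budget for witness signatures, and conversely every signature byte moved into the witness shrinks the base data that each remaining signature must hash. The two ingredients of the quadratic blowup, many checksigs and much data under each hash, cannot both be maximized at once.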
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 1:31 ` Gregory Maxwell @ 2015-12-09 4:44 ` Ryan Butler 2015-12-09 6:29 ` Gregory Maxwell 1 sibling, 1 reply; 56+ messages in thread From: Ryan Butler @ 2015-12-09 4:44 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 3873 bytes --] >I agree, but nothing I have advocated creates significant technical >debt. It is also a bad engineering practice to combine functional >changes (especially ones with poorly understood system wide >consequences and low user autonomy) with structural tidying. I don't think I would classify placing things in consensus critical code when it doesn't need to be as "structural tidying". Gavin said "pile on" which you took as implying "a lot", he can correct me, but I believe he meant "add to". > (especially ones with poorly understood system wide consequences and low user autonomy) This implies that you have no confidence in the unit tests and functional testing around Bitcoin and should not be a reason to avoid refactoring. It's more a reason to increase testing so that you will have confidence when you refactor. Also I don't think Martin Fowler would agree with you... "Refactoring should be done in conjunction with adding new features." "Always leave the code better than when you found it." "Often you start working on adding new functionality and you realize the existing structures don't play well with what you're about to do. In this situation it usually pays to begin by refactoring the existing code into the shape you now know is the right shape for what you're about to do." -Martin Fowler On Tue, Dec 8, 2015 at 7:31 PM, Gregory Maxwell via bitcoin-dev < bitcoin-dev@lists.linuxfoundation.org> wrote: > On Wed, Dec 9, 2015 at 1:09 AM, Gavin Andresen <gavinandresen@gmail.com> > wrote: > > Create a 1-megabyte transaction, with all of its inputs spending > > segwitness-spending SIGHASH_ALL inputs. > > > > Because the segwitness inputs are smaller in the block, you can fit more > of > > them into 1 megabyte. Each will hash very close to one megabyte of data. > > Witness size comes out of the 1MB at a factor of 0.25. It is not > possible to make a block which has signatures with the full 1MB of > data under the sighash while also having signatures externally. So > every byte moved into the witness and thus only counted as 25% comes > out of the data being hashed and is hashed nInputs (*checksigs) less > times. > > > I think it is a huge mistake not to "design for success" (see > > http://gavinandresen.ninja/designing-for-success ). > > We are designing for success; including the success of being able to > adapt and cope with uncertainty-- which is the most critical kind of > success we can have in a world where nothing is or can be > predictable. > > > I think it is a huge mistake to pile on technical debt in > consensus-critical > > code. I think we should be working harder to make things simpler, not > more > > complex, whenever possible. > > I agree, but nothing I have advocated creates significant technical > debt. It is also a bad engineering practice to combine functional > changes (especially ones with poorly understood system wide > consequences and low user autonomy) with structural tidying. > > > And I think there are pretty big self-inflicted current problems because > > worries about theoretical future problems have prevented us from coming > to > > consensus on simple solutions. 
> > That isn't my perspective. I believe we've suffered delays because of > a strong desire to be inclusive and hear out all ideas, and not > forestall market adoption, even for ideas that eschewed pragmatism and > tried to build for forever in a single step and which in our heart of > hearts we knew were not the right path today. It's time to move past > that and get back on track with the progress we can make and have been > making, in terms of capacity as well as many other areas. I think that > is designing for success. > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > [-- Attachment #2: Type: text/html, Size: 7397 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 4:44 ` Ryan Butler @ 2015-12-09 6:29 ` Gregory Maxwell 2015-12-09 6:36 ` Ryan Butler ` (2 more replies) 0 siblings, 3 replies; 56+ messages in thread From: Gregory Maxwell @ 2015-12-09 6:29 UTC (permalink / raw) To: Ryan Butler; +Cc: Bitcoin Dev On Wed, Dec 9, 2015 at 4:44 AM, Ryan Butler <rryananizer@gmail.com> wrote: >>I agree, but nothing I have advocated creates significant technical >>debt. It is also a bad engineering practice to combine functional >>changes (especially ones with poorly understood system wide >>consequences and low user autonomy) with structural tidying. > > I don't think I would classify placing things in consensus critical code > when it doesn't need to be as "structural tidying". Gavin said "pile on" > which you took as implying "a lot", he can correct me, but I believe he > meant "add to". Nothing being discussed would move something from consensus critical code to not consensus critical. What was being discussed was the location of the witness commitment; which is consensus critical regardless of where it is placed. Should it be placed in an available location which is compatible with the existing network, or should the block hashing data structure immediately be changed in an incompatible way to accommodate it in order to satisfy an aesthetic sense of purity and to make fraud proofs somewhat smaller? I argue that the size difference in the fraud proofs is not interesting, the disruption to the network in an incompatible upgrade is interesting; and that if it really were desirable, reorganization to move the commitment point could be done as part of a separate change that changes only the location of things (and/or other trivial adjustments); and that proceeding in this fashion would minimize disruption and risk... by making the incompatible changes that will force network wide software updates be as small and as simple as possible. >> (especially ones with poorly understood system wide consequences and low >> user autonomy) > > This implies that you have no confidence in the unit tests and functional > testing around Bitcoin and should not be a reason to avoid refactoring. > It's more a reason to increase testing so that you will have confidence when > you refactor. I am speaking from our engineering experience in a public, world-wide, multi-vendor, multi-version, inter-operable, distributed system which is constantly changing and in production contains private code, unknown and assorted hardware, mixtures of versions, unreliable networks, undisclosed usage patterns, and more sources of complex behavior than can be counted-- including complex economic incentives and malicious participants. Even if we knew the complete spectrum of possible states for the system the combinatoric explosion makes complete testing infeasible. Though testing is essential, one cannot "unit test" away all the risks related to deploying a new behavior in the network. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 6:29 ` Gregory Maxwell @ 2015-12-09 6:36 ` Ryan Butler 2015-12-09 6:59 ` Mark Friedenbach 2015-12-09 7:54 ` Jorge Timón 2 siblings, 0 replies; 56+ messages in thread From: Ryan Butler @ 2015-12-09 6:36 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 2929 bytes --] I see, thanks for clearing that up, I misread what Gavin stated. On Wed, Dec 9, 2015 at 12:29 AM, Gregory Maxwell <greg@xiph.org> wrote: > On Wed, Dec 9, 2015 at 4:44 AM, Ryan Butler <rryananizer@gmail.com> wrote: > >>I agree, but nothing I have advocated creates significant technical > >>debt. It is also a bad engineering practice to combine functional > >>changes (especially ones with poorly understood system wide > >>consequences and low user autonomy) with structural tidying. > > > > I don't think I would classify placing things in consensus critical code > > when it doesn't need to be as "structural tidying". Gavin said "pile on" > > which you took as implying "a lot", he can correct me, but I believe he > > meant "add to". > > Nothing being discussed would move something from consensus critical > code to not consensus critical. > > What was being discussed was the location of the witness commitment; > which is consensus critical regardless of where it is placed. Should > it be placed in an available location which is compatible with the > existing network, or should the block hashing data structure > immediately be changed in an incompatible way to accommodate it in > order to satisfy an aesthetic sense of purity and to make fraud proofs > somewhat smaller? > > I argue that the size difference in the fraud proofs is not > interesting, the disruption to the network in an incompatible upgrade > is interesting; and that if it really were desirable, reorganization to > move the commitment point could be done as part of a separate change > that changes only the location of things (and/or other trivial > adjustments); and that proceeding in this fashion would minimize > disruption and risk... by making the incompatible changes that will > force network wide software updates be as small and as simple as > possible. > > >> (especially ones with poorly understood system wide consequences and low > >> user autonomy) > > > > This implies that you have no confidence in the unit tests and > functional > > testing around Bitcoin and should not be a reason to avoid refactoring. > > It's more a reason to increase testing so that you will have confidence > when > > you refactor. > > I am speaking from our engineering experience in a public, > world-wide, multi-vendor, multi-version, inter-operable, distributed > system which is constantly changing and in production contains private > code, unknown and assorted hardware, mixtures of versions, unreliable > networks, undisclosed usage patterns, and more sources of complex > behavior than can be counted-- including complex economic incentives > and malicious participants. > > Even if we knew the complete spectrum of possible states for the > system the combinatoric explosion makes complete testing infeasible. > > Though testing is essential, one cannot "unit test" away all the risks > related to deploying a new behavior in the network. > [-- Attachment #2: Type: text/html, Size: 3600 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 6:29 ` Gregory Maxwell 2015-12-09 6:36 ` Ryan Butler @ 2015-12-09 6:59 ` Mark Friedenbach 2015-12-09 7:17 ` Gregory Maxwell 2015-12-09 7:54 ` Jorge Timón 2 siblings, 1 reply; 56+ messages in thread From: Mark Friedenbach @ 2015-12-09 6:59 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 3715 bytes --] Greg, if you have actual data showing that putting the commitment in the last transaction would be disruptive, and how disruptive, that would be appreciated. Of the mining hardware I have looked at, none of it cared at all what transactions other than the coinbase are. You need to provide a path to the coinbase for extranonce rolling, but the witness commitment wouldn't need to be updated. I'm sorry but it's not clear how this would be an incompatible upgrade, disruptive to anything other than the transaction selection code. Maybe I'm missing something? I'm not familiar with all the hardware or pooling setups out there. On Wed, Dec 9, 2015 at 2:29 PM, Gregory Maxwell via bitcoin-dev < bitcoin-dev@lists.linuxfoundation.org> wrote: > On Wed, Dec 9, 2015 at 4:44 AM, Ryan Butler <rryananizer@gmail.com> wrote: > >>I agree, but nothing I have advocated creates significant technical > >>debt. It is also a bad engineering practice to combine functional > >>changes (especially ones with poorly understood system wide > >>consequences and low user autonomy) with structural tidying. > > > > I don't think I would classify placing things in consensus critical code > > when it doesn't need to be as "structural tidying". Gavin said "pile on" > > which you took as implying "a lot", he can correct me, but I believe he > > meant "add to". > > Nothing being discussed would move something from consensus critical > code to not consensus critical. > > What was being discussed was the location of the witness commitment; > which is consensus critical regardless of where it is placed. Should > it be placed in an available location which is compatible with the > existing network, or should the block hashing data structure > immediately be changed in an incompatible way to accommodate it in > order to satisfy an aesthetic sense of purity and to make fraud proofs > somewhat smaller? > > I argue that the size difference in the fraud proofs is not > interesting, the disruption to the network in an incompatible upgrade > is interesting; and that if it really were desirable, reorganization to > move the commitment point could be done as part of a separate change > that changes only the location of things (and/or other trivial > adjustments); and that proceeding in this fashion would minimize > disruption and risk... by making the incompatible changes that will > force network wide software updates be as small and as simple as > possible. > > >> (especially ones with poorly understood system wide consequences and low > >> user autonomy) > > > > This implies that you have no confidence in the unit tests and > functional > > testing around Bitcoin and should not be a reason to avoid refactoring. > > It's more a reason to increase testing so that you will have confidence > when > > you refactor. 
> > I am speaking from our engineering experience in a public, > world-wide, multi-vendor, multi-version, inter-operable, distributed > system which is constantly changing and in production contains private > code, unknown and assorted hardware, mixtures of versions, unreliable > networks, undisclosed usage patterns, and more sources of complex > behavior than can be counted-- including complex economic incentives > and malicious participants. > > Even if we knew the complete spectrum of possible states for the > system the combinatoric explosion makes complete testing infeasible. > > Though testing is essential, one cannot "unit test" away all the risks > related to deploying a new behavior in the network. > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > [-- Attachment #2: Type: text/html, Size: 4679 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 6:59 ` Mark Friedenbach @ 2015-12-09 7:17 ` Gregory Maxwell 0 siblings, 0 replies; 56+ messages in thread From: Gregory Maxwell @ 2015-12-09 7:17 UTC (permalink / raw) To: Mark Friedenbach; +Cc: Bitcoin Dev On Wed, Dec 9, 2015 at 6:59 AM, Mark Friedenbach <mark@friedenbach.org> wrote: > Greg, if you have actual data showing that putting the commitment in the > last transaction would be disruptive, and how disruptive, that would be > appreciated. Of the mining hardware I have looked at, none of it cared at > all what transactions other than the coinbase are. You need to provide a > path to the coinbase for extranonce rolling, but the witness commitment > wouldn't need to be updated. > > I'm sorry but it's not clear how this would be an incompatible upgrade, > disruptive to anything other than the transaction selection code. Maybe I'm > missing something? I'm not familiar with all the hardware or pooling setups > out there. I didn't comment on the transaction output. I have commented on coinbase outputs and on a hard-fork. Using an output in the last transaction would break the assumption that you can truncate a block and still have a valid block. This is used by some mining setups currently, because GBT does not generate the coinbase transaction and so cannot know its size; and you may have to drop the last transaction(s) to make room for it. That a block can be truncated and still result in a valid block also seems like a useful property to me. If the input for that transaction is supposed to be generated from a coinbase output some blocks earlier, then this may again run into hardware output constraints in coinbase transactions. (But it may be better since it wouldn't matter which output created it.). This could likely be escaped by creating a zero value output only once and just rolling it forward. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 6:29 ` Gregory Maxwell 2015-12-09 6:36 ` Ryan Butler 2015-12-09 6:59 ` Mark Friedenbach @ 2015-12-09 7:54 ` Jorge Timón 2015-12-09 8:03 ` Gregory Maxwell 2 siblings, 1 reply; 56+ messages in thread From: Jorge Timón @ 2015-12-09 7:54 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev On Wed, Dec 9, 2015 at 7:29 AM, Gregory Maxwell via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote: > What was being discussed was the location of the witness commitment; > which is consensus critical regardless of where it is placed. Should > it be placed in an available location which is compatible with the > existing network, or should the block hashing data structure > immediately be changed in an incompatible way to accommodate it in > order to satisfy an aesthetic sense of purity and to make fraud proofs > somewhat smaller? From this question one could think that when you said "we can do the cleanup hardfork later" earlier you didn't really mean it. And that you will oppose that hardfork later just like you are opposing it now. As said I disagree that making a softfork first and then moving the commitment is less disruptive (because people will need to adapt their software twice), but if the intention is to never do the second part then of course I agree it would be less disruptive. How long after the softfork would you like to do the hardfork? 1 year after the softfork? 2 years? never? ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 7:54 ` Jorge Timón @ 2015-12-09 8:03 ` Gregory Maxwell 2015-12-09 8:46 ` Mark Friedenbach ` (2 more replies) 0 siblings, 3 replies; 56+ messages in thread From: Gregory Maxwell @ 2015-12-09 8:03 UTC (permalink / raw) To: Jorge Timón; +Cc: Bitcoin Dev On Wed, Dec 9, 2015 at 7:54 AM, Jorge Timón <jtimon@jtimon.cc> wrote: > From this question one could think that when you said "we can do the > cleanup hardfork later" earlier you didn't really mean it. And that > you will oppose that hardfork later just like you are opposing > it now. > As said I disagree that making a softfork first and then moving the > commitment is less disruptive (because people will need to adapt their > software twice), but if the intention is to never do the second part > then of course I agree it would be less disruptive. > How long after the softfork would you like to do the hardfork? > 1 year after the softfork? 2 years? never? I think it would be logical to do as part of a hardfork that moved commitments generally; e.g. a better position for merged mining (such a hardfork was suggested in 2010 as something that could be done if merged mining was used), room for commitments to additional block back-references for compact SPV proofs, and/or UTXO set commitments. Part of the reason to not do it now is that the requirements for the other things that would be there are not yet well defined. For these other applications, the additional overhead is actually fairly meaningful; unlike the fraud proofs. ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 8:03 ` Gregory Maxwell @ 2015-12-09 8:46 ` Mark Friedenbach 2015-12-09 11:08 ` Jorge Timón 2015-12-09 16:40 ` Gavin Andresen 2 siblings, 0 replies; 56+ messages in thread From: Mark Friedenbach @ 2015-12-09 8:46 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 2573 bytes --] My apologies for the apparent miscommunication earlier. It is of interest to me that the soft-fork be done which is necessary to put a commitment in the most efficient spot possible, in part because that commitment could be used for other data such as the merged mining auxiliary blocks, which are very sensitive to proof size. Perhaps we have a different view of how the commitment transaction would be generated. Just as GBT doesn't create the coinbase, it was my expectation that it wouldn't generate the commitment transaction either -- but generation of the commitment would be easy, requiring either the coinbase txid 100 blocks back, or the commitment txid of the prior transaction (note this impacts SPV mining). The truncation shouldn't be an issue because the commitment txn would not be part of the list of transactions selected by GBT, and in any case the truncation would change the witness data which changes the commitment. On Wed, Dec 9, 2015 at 4:03 PM, Gregory Maxwell via bitcoin-dev < bitcoin-dev@lists.linuxfoundation.org> wrote: > On Wed, Dec 9, 2015 at 7:54 AM, Jorge Timón <jtimon@jtimon.cc> wrote: > > From this question one could think that when you said "we can do the > > cleanup hardfork later" earlier you didn't really meant it. And that > > you will oppose to that hardfork later just like you are opposing to > > it now. > > As said I disagree that making a softfork first and then move the > > commitment is less disruptive (because people will need to adapt their > > software twice), but if the intention is to never do the second part > > then of course I agree it would be less disruptive. > > How long after the softfork would you like to do the hardfork? > > 1 year after the softfork? 2 years? never? > > I think it would be logical to do as part of a hardfork that moved > commitments generally; e.g. a better position for merged mining (such > a hardfork was suggested in 2010 as something that could be done if > merged mining was used), room for commitments to additional block > back-references for compact SPV proofs, and/or UTXO set commitments. > Part of the reason to not do it now is that the requirements for the > other things that would be there are not yet well defined. For these > other applications, the additional overhead is actually fairly > meaningful; unlike the fraud proofs. > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev@lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > [-- Attachment #2: Type: text/html, Size: 3235 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 8:03 ` Gregory Maxwell 2015-12-09 8:46 ` Mark Friedenbach @ 2015-12-09 11:08 ` Jorge Timón 2015-12-09 16:40 ` Gavin Andresen 2 siblings, 0 replies; 56+ messages in thread From: Jorge Timón @ 2015-12-09 11:08 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1396 bytes --] Fair enough. On Dec 9, 2015 4:03 PM, "Gregory Maxwell" <greg@xiph.org> wrote: > On Wed, Dec 9, 2015 at 7:54 AM, Jorge Timón <jtimon@jtimon.cc> wrote: > > From this question one could think that when you said "we can do the > > cleanup hardfork later" earlier you didn't really meant it. And that > > you will oppose to that hardfork later just like you are opposing to > > it now. > > As said I disagree that making a softfork first and then move the > > commitment is less disruptive (because people will need to adapt their > > software twice), but if the intention is to never do the second part > > then of course I agree it would be less disruptive. > > How long after the softfork would you like to do the hardfork? > > 1 year after the softfork? 2 years? never? > > I think it would be logical to do as part of a hardfork that moved > commitments generally; e.g. a better position for merged mining (such > a hardfork was suggested in 2010 as something that could be done if > merged mining was used), room for commitments to additional block > back-references for compact SPV proofs, and/or UTXO set commitments. > Part of the reason to not do it now is that the requirements for the > other things that would be there are not yet well defined. For these > other applications, the additional overhead is actually fairly > meaningful; unlike the fraud proofs. > [-- Attachment #2: Type: text/html, Size: 1721 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 8:03 ` Gregory Maxwell 2015-12-09 8:46 ` Mark Friedenbach 2015-12-09 11:08 ` Jorge Timón @ 2015-12-09 16:40 ` Gavin Andresen 2015-12-11 16:18 ` Jorge Timón 2 siblings, 1 reply; 56+ messages in thread From: Gavin Andresen @ 2015-12-09 16:40 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1510 bytes --] On Wed, Dec 9, 2015 at 3:03 AM, Gregory Maxwell via bitcoin-dev < bitcoin-dev@lists.linuxfoundation.org> wrote: > I think it would be logical to do as part of a hardfork that moved > commitments generally; e.g. a better position for merged mining (such > a hardfork was suggested in 2010 as something that could be done if > merged mining was used), room for commitments to additional block > back-references for compact SPV proofs, and/or UTXO set commitments. > Part of the reason to not do it now is that the requirements for the > other things that would be there are not yet well defined. For these > other applications, the additional overhead is actually fairly > meaningful; unlike the fraud proofs. > So just design ahead for those future uses. Make the merkle tree:

            root_in_block_header
           /                    \
    tx_data_root            other_root
                           /          \
            segwitness_root      reserved_for_future_use_root

... where reserved_for_future_use is zero until some future block version (or perhaps better, is just chosen arbitrarily by the miner and sent along with the block data until some future block version). That would minimize future disruption of any code that produced or consumed merkle proofs of the transaction data or segwitness data, especially if the reserved_for_future_use_root is allowed to be any arbitrary 256-bit value and not a constant that would get hard-coded into segwitness-proof-checking code. -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 2509 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-09 16:40 ` Gavin Andresen @ 2015-12-11 16:18 ` Jorge Timón 2015-12-11 16:43 ` Gavin Andresen 0 siblings, 1 reply; 56+ messages in thread From: Jorge Timón @ 2015-12-11 16:18 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 2780 bytes --] On Dec 9, 2015 5:40 PM, "Gavin Andresen" <gavinandresen@gmail.com> wrote: > > On Wed, Dec 9, 2015 at 3:03 AM, Gregory Maxwell via bitcoin-dev < bitcoin-dev@lists.linuxfoundation.org> wrote: >> >> I think it would be logical to do as part of a hardfork that moved >> commitments generally; e.g. a better position for merged mining (such >> a hardfork was suggested in 2010 as something that could be done if >> merged mining was used), room for commitments to additional block >> back-references for compact SPV proofs, and/or UTXO set commitments. >> Part of the reason to not do it now is that the requirements for the >> other things that would be there are not yet well defined. For these >> other applications, the additional overhead is actually fairly >> meaningful; unlike the fraud proofs. > > So just design ahead for those future uses. Make the merkle tree:
>
>             root_in_block_header
>            /                    \
>     tx_data_root            other_root
>                            /          \
>             segwitness_root      reserved_for_future_use_root

This is basically what I meant by

struct hashRootStruct
{
  uint256 hashMerkleRoot;
  uint256 hashWitnessesRoot;
  uint256 hashextendedHeader;
}

but my design doesn't calculate other_root as it appears in your tree (is not necessary). Since dropping the requirement of bip34 (height in coinbase) is also a hardfork (and a trivial one), I suggested moving it at the same time. But thinking more about it, since BIP34 also elegantly solves BIP30, I would keep the height in the coinbase (even if we move it to the extended header tree as well for convenience). That should be able to include future consensus-enforced commitments (extra back-refs for compact proofs, txo/utxo commitments, etc) or non-consensus data (merged mining data, miner-published data). Greg Maxwell suggested moving those later and I answered fair enough. But thinking more about it, if the extra commitments field is extensible, we don't need to move anything now, and therefore we don't need those designs (extra back-refs for compact proofs, txo/utxo commitments, etc) to be ready to deploy a hardfork segregated witness: you just need to make sure that your format is extensible via softfork in the future. I'm therefore back to the "let's better deploy segregated witness as a hardfork" position. The change required to the softfork segregated witness implementation would be relatively small. Another option would be to deploy both parts (sw and the movement from the coinbase to the extra header) at the same time but with different activation conditions, for example: - For sw: deploy as soon as possible with bip9. - For the hardfork coinbase to extra header movement: 1 year grace + bip9 for later miner upgrade confirmation. [-- Attachment #2: Type: text/html, Size: 3387 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-11 16:18 ` Jorge Timón @ 2015-12-11 16:43 ` Gavin Andresen 2015-12-12 5:13 ` digitsu 0 siblings, 1 reply; 56+ messages in thread From: Gavin Andresen @ 2015-12-11 16:43 UTC (permalink / raw) To: Jorge Timón, Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 840 bytes --] On Fri, Dec 11, 2015 at 11:18 AM, Jorge Timón <jtimon@jtimon.cc> wrote: > This is basically what I meant by
>
> struct hashRootStruct
> {
>   uint256 hashMerkleRoot;
>   uint256 hashWitnessesRoot;
>   uint256 hashextendedHeader;
> }
>
> but my design doesn't calculate other_root as it appears in your tree (is
> not necessary).
>
It is necessary to maintain compatibility with SPV nodes/wallets. Any code that just checks merkle paths up into the block header would have to change if the structure of the merkle tree changed to be three-headed at the top. If it remains a binary tree, then it doesn't need to change at all-- the code that produces the merkle paths will just send a path that is one step deeper. Plus, it's just weird to have a merkle tree that isn't a binary tree..... -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 1506 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
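To illustrate the point that a binary tree keeps SPV verification unchanged, here is a sketch of generic branch folding, reusing the illustrative Hash and DoubleSha256 helpers sketched earlier in this thread (assumptions, not Bitcoin Core code); a tree that is one level deeper only adds one loop iteration, with no structural change to verifiers.

    // Sketch: fold a Merkle branch up to a root. 'index' is the leaf's
    // position in the tree; its low bit selects which side the sibling
    // hash goes on at each level.
    #include <cstdint>
    #include <vector>

    Hash FoldBranch(Hash node, const std::vector<Hash>& branch, uint32_t index) {
        for (const Hash& sibling : branch) {
            node = (index & 1) ? DoubleSha256(sibling, node)
                               : DoubleSha256(node, sibling);
            index >>= 1;
        }
        return node; // caller compares this against the committed root
    }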
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-11 16:43 ` Gavin Andresen @ 2015-12-12 5:13 ` digitsu 2015-12-12 15:18 ` Mark Friedenbach 0 siblings, 1 reply; 56+ messages in thread From: digitsu @ 2015-12-12 5:13 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1367 bytes --] If this means essentially that a soft fork deployment of SegWit will require SPV wallet servers to change their logic (or risk not being able to send payments) then it does seem to me that a hard fork to deploy this non controversial change is not only cleaner (on the data structure side) but safer in terms of the potential to affect the user experience. — Regards, On Sat, Dec 12, 2015 at 1:43 AM, Gavin Andresen via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote: > On Fri, Dec 11, 2015 at 11:18 AM, Jorge Timón <jtimon@jtimon.cc> wrote: >> This is basically what I meant by
>>
>> struct hashRootStruct
>> {
>>   uint256 hashMerkleRoot;
>>   uint256 hashWitnessesRoot;
>>   uint256 hashextendedHeader;
>> }
>>
>> but my design doesn't calculate other_root as it appears in your tree (is
>> not necessary).
>>
>> It is necessary to maintain compatibility with SPV nodes/wallets. > Any code that just checks merkle paths up into the block header would have > to change if the structure of the merkle tree changed to be three-headed at > the top. > If it remains a binary tree, then it doesn't need to change at all-- the > code that produces the merkle paths will just send a path that is one step > deeper. > Plus, it's just weird to have a merkle tree that isn't a binary tree..... > -- > -- > Gavin Andresen [-- Attachment #2: Type: text/html, Size: 2346 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-12 5:13 ` digitsu @ 2015-12-12 15:18 ` Mark Friedenbach 2015-12-14 11:21 ` Jonathan Toomim 0 siblings, 1 reply; 56+ messages in thread From: Mark Friedenbach @ 2015-12-12 15:18 UTC (permalink / raw) To: digitsu; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1892 bytes --] A segwit supporting server would be required to support relaying segwit transactions, although a non-segwit server could at least inform a wallet of segwit txns observed, even if it doesn't relay all information necessary to validate. Non segwit servers and wallets would continue operations as if nothing had occurred. If this means essentially that a soft fork deployment of SegWit will require SPV wallet servers to change their logic (or risk not being able to send payments) then it does seem to me that a hard fork to deploy this non controversial change is not only cleaner (on the data structure side) but safer in terms of the potential to affect the user experience. — Regards, On Sat, Dec 12, 2015 at 1:43 AM, Gavin Andresen via bitcoin-dev < bitcoin-dev@lists.linuxfoundation.org> wrote: > On Fri, Dec 11, 2015 at 11:18 AM, Jorge Timón <jtimon@jtimon.cc> wrote: >> This is basically what I meant by
>>
>> struct hashRootStruct
>> {
>>   uint256 hashMerkleRoot;
>>   uint256 hashWitnessesRoot;
>>   uint256 hashextendedHeader;
>> }
>>
>> but my design doesn't calculate other_root as it appears in your tree (is
>> not necessary).
>>
>> It is necessary to maintain compatibility with SPV nodes/wallets. > > Any code that just checks merkle paths up into the block header would have > to change if the structure of the merkle tree changed to be three-headed at > the top. > > If it remains a binary tree, then it doesn't need to change at all-- the > code that produces the merkle paths will just send a path that is one step > deeper. > > Plus, it's just weird to have a merkle tree that isn't a binary tree..... > > -- > -- > Gavin Andresen > _______________________________________________ bitcoin-dev mailing list bitcoin-dev@lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev [-- Attachment #2: Type: text/html, Size: 3172 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system. 2015-12-12 15:18 ` Mark Friedenbach @ 2015-12-14 11:21 ` Jonathan Toomim 2015-12-14 12:44 ` Adam Back 0 siblings, 1 reply; 56+ messages in thread From: Jonathan Toomim @ 2015-12-14 11:21 UTC (permalink / raw) To: Bitcoin Dev [-- Attachment #1.1: Type: text/plain, Size: 2378 bytes --] This means that a server supporting SW might only hear of the tx data and not get the signature data for some transactions, depending on how the relay rules worked (e.g. if the SW peers had higher minrelaytxfee settings than the legacy peers). This would complicate fast block relay code like IBLTs, since we now have to check to see that the recipient has both the tx data and the witness/sig data. The same issue might happen with block relay if we do SW as a soft fork. A SW node might see a block inv from a legacy node first, and might start downloading the block from that node. This block would then be marked as in-flight, and the witness data might not get downloaded. This shouldn't be too hard to fix by creating an inv for the witness data as a separate object, so that a node could download the block from e.g. Peer 1 and the segwit data from Peer 2. Of course, the code would be simpler if we did this as a hard fork and we could rely on everyone on the segwit fork supporting the segwit data. Although maybe we want to write the interfaces in a way that supports some nodes not downloading the segwit data anyway, just because not every node will want that data. I haven't had time to read sipa's code yet. I apologize for talking out of a position of ignorance. For anyone who has, do you feel like sharing how it deals with these network relay issues? By the way, since this thread is really about SegWit and not about any other mechanism for increasing Bitcoin capacity, perhaps we should rename it accordingly? On Dec 12, 2015, at 11:18 PM, Mark Friedenbach via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote: > A segwit supporting server would be required to support relaying segwit transactions, although a non-segwit server could at least inform a wallet of segwit txns observed, even if it doesn't relay all information necessary to validate. > > Non segwit servers and wallets would continue operations as if nothing had occurred. > > If this means essentially that a soft fork deployment of SegWit will require SPV wallet servers to change their logic (or risk not being able to send payments) then it does seem to me that a hard fork to deploy this non controversial change is not only cleaner (on the data structure side) but safer in terms of the potential to affect the user experience. > > > — Regards, [-- Attachment #1.2: Type: text/html, Size: 3165 bytes --] [-- Attachment #2: Message signed with OpenPGP using GPGMail --] [-- Type: application/pgp-signature, Size: 496 bytes --] ^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Adam Back @ 2015-12-14 12:44 UTC
To: Jonathan Toomim; +Cc: Bitcoin Dev

I think someone, maybe Pieter, commented on this relay issue that it
would likely be very transitory: judging from previous deployment
experience, most of the network would upgrade fairly quickly in practice.
In any case, there is a huge excess of connectivity and capacity in the
p2p network relative to what is needed to keep a network of various
versions connected and to support SPV client load (SPV load is quite low
relative to capacity; even one respectable node can support a large
number of SPV clients). So two classes of network node and connectivity
wouldn't be a problem in practice even if the split did persist; also,
the higher-capacity, better-run nodes are more likely to upgrade, since
they tend to have more clued-in power-user, miner, pool or company
operators.

Maybe someone with more detailed knowledge could clarify further.

Adam

On 14 December 2015 at 19:21, Jonathan Toomim via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> This means that a server supporting SW might only hear of the tx data
> and not get the signature data for some transactions, depending on how
> the relay rules worked (e.g. if the SW peers had higher minrelaytxfee
> settings than the legacy peers). This would complicate fast block relay
> code like IBLTs, since we now have to check that the recipient has both
> the tx data and the witness/sig data.
> [...]
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Anthony Towns @ 2015-12-09 4:51 UTC
To: bitcoin-dev

On Wed, Dec 09, 2015 at 01:31:51AM +0000, Gregory Maxwell via bitcoin-dev wrote:
> On Wed, Dec 9, 2015 at 1:09 AM, Gavin Andresen <gavinandresen@gmail.com> wrote:
>> Create a 1-megabyte transaction, with all of its inputs spending
>> segwitness-spending SIGHASH_ALL inputs. Because the segwitness inputs
>> are smaller in the block, you can fit more of them into 1 megabyte.
>> Each will hash very close to one megabyte of data.
>
> Witness size comes out of the 1MB at a factor of 0.25. It is not
> possible to make a block which has signatures with the full 1MB of
> data under the sighash while also having signatures externally. So
> every byte moved into the witness and thus only counted as 25% comes
> out of the data being hashed, and is hashed nInputs (*checksigs) fewer
> times.

So the worst-case script I can come up with is:

    <pubkey> 1 0 {2OVER CHECKSIG ADD CODESEP} OP_EQUAL

which (if I didn't mess it up) would give you a redeem script of about
36B plus 4B per sigop, redeemable via a single signature that's valid for
precisely one of the checksigs. Maxing out 20k sigops gives 80kB of
redeemscript in that case; so you could have to hash 19.9GB of data to
fully verify the script with current bitcoin rules.

Segwit with the 75% factor and the same sigop limit would make that very
slightly worse -- it'd up the hashed data by maybe 1MB in total. Without
a sigop limit at all it'd be severely worse, of course -- you could fit
almost 500k sigops in 2MB of witness data, leaving 500kB of base data,
for a total of 250GB of data to hash to verify your 3MB block...

Segwit without the 75% factor, but with a 3MB witness-data limit, makes
that up to three times worse (750k sigops in 3MB of witness data, with
1MB of base data, for 750GB of data to hash), but with any reasonable
sigop limit, afaics it's pretty much the same.

However I think you could add some fairly straightforward (maybe
soft-forking) optimisations to just rule out that sort of (deliberate)
abuse; eg disallowing more than a dozen sigops per input, or just
failing checksigs with the same key in a single input, maybe.

So maybe that's not sufficiently realistic? I think the only realistic
transactions that would cause lots of sigs and hashing are ones that
have lots of inputs that each require a signature or two, which might
happen if a miner is cleaning up dust.

In that case, your 1MB transaction is a single output with a bunch of
41B inputs. If you have 10k such inputs, that's only 410kB. If each
input is a legitimate 2-of-2 multisig, that's about 210 bytes of witness
data per input, or 2.1MB, leaving 475kB of base data free, which matches
up. 20k sigops by 475kB of data is 9.5GB of hashing.

Switching from 2-of-2 multisig to just a single public key would prevent
you from hitting the sigop limit; I think you could hit 14900 signatures
with about 626kB of base data and 1488kB of witness data, for about
9.3GB of hashed data. That's a factor of 2x improvement over the
deliberately malicious exploit case above, but it's /only/ a factor of
2x.

I think Rusty's calculation http://rusty.ozlabs.org/?p=522 was that the
worst case for now is hashing about 406kB, 3300 times, for 1.34GB of
hashed data [0]. So that's still almost a factor of 4 or 5 worse than
what's possible now?

Unless I messed up the maths somewhere?

Cheers,
aj

[0] Though I'm not sure that's correct? Seems like with a 1MB
    transaction with i inputs, each with s bytes of scriptsig, you're
    hashing (1MB - s*i) bytes per input, and the scriptsig for a p2pkh
    should only be about 105B, not 180B. So maximising
    i*(1MB - s*i) = 1e6*i - 105*i^2 gives i = 1e6/210, so 4762 inputs,
    hashing 500kB of data each time, for about 2.4GB of hashed data
    total.
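The headline numbers in this message all come from the same simple
product; here is a quick Python check of the arithmetic (my own aid,
assuming the sizes aj uses above):

    # Footnote [0]: a 1 MB transaction with i inputs, each with s bytes
    # of scriptSig, hashes about (1e6 - s*i) bytes per signature check.
    def total_hashed(i: int, s: int, txsize: int = 1_000_000) -> int:
        return i * (txsize - s * i)

    # p2pkh with ~105-byte scriptSigs: the quadratic peaks at
    # i = 1e6 / (2*105) ~= 4762 inputs.
    best_i = max(range(1, 1_000_000 // 105),
                 key=lambda i: total_hashed(i, 105))
    print(best_i)                           # 4762 inputs
    print(total_hashed(best_i, 105) / 1e9)  # ~2.38 GB hashed in total

    # No-sigop-limit segwit worst case: ~500k sigops, each hashing the
    # full 500 kB of base data.
    print(500_000 * 500_000 / 1e9)          # 250.0 GB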
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Chris @ 2015-12-09 14:51 UTC
To: bitcoin-dev

On 12/08/2015 10:12 AM, Gavin Andresen via bitcoin-dev wrote:
> Why segwitness as a soft fork? Stuffing the segwitness merkle tree in
> the coinbase is messy and will just complicate consensus-critical code
> (as opposed to making the right side of the merkle tree in
> block.version=5 blocks the segwitness data).

Agreed. I thought the rule was no contentious hard forks. It seems hardly
anyone opposes this change, and there seems to be widespread agreement
that the hard-fork version would be much cleaner.
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Pieter Wuille @ 2015-12-21 4:33 UTC
To: Bitcoin Dev; +Cc: Gregory Maxwell

On Tue, Dec 8, 2015 at 6:07 AM, Wladimir J. van der Laan wrote:
> On Mon, Dec 07, 2015 at 10:02:17PM +0000, Gregory Maxwell via bitcoin-dev wrote:
>> TL;DR: I propose we work immediately towards the segwit 4MB block
>> soft-fork which increases capacity and scalability, and recent speedups
>> and incoming relay improvements make segwit a reasonable risk. BIP9
>> and segwit will also make further improvements easier and faster to
>> deploy. We'll continue to set the stage for non-bandwidth-increase-based
>> scaling, while building additional tools that would make bandwidth
>> increases safer long term. Further work will prepare Bitcoin for further
>> increases, which will become possible when justified, while also
>> providing the groundwork to make them justifiable.
>
> Sounds good to me.

Better late than never, let me comment on why I believe pursuing this
plan is important.

For months, the block size debate, and the apparent need for agreement
on a hardfork, has distracted from needed engineering work, fed the
external impression that nothing is being done, and generally created a
toxic environment to work in. It has affected my own productivity and
health, and I do not think I am alone.

I believe that soft-fork segwit can help us out of this deadlock and get
us going again. It does not require the pervasive assumption that the
entire world will simultaneously switch to new consensus rules like a
hardfork does, while at the same time it can:

* Give a short-term capacity bump
* Show the world that scalability is being worked on
* Actually improve scalability (as opposed to just scale) by reducing
  bandwidth/storage and indirectly improving the effectiveness of
  systems like Lightning
* Solve several unrelated problems at the same time (fraud proofs,
  script extensibility, malleability, ...)

So I'd like to ask the community that we work towards this plan, as it
allows us to make progress without being forced to make a possibly
divisive choice for one hardfork or another yet.

--
Pieter
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Justus Ranvier @ 2015-12-21 4:42 UTC
To: bitcoin-dev

On 12/20/2015 10:33 PM, Pieter Wuille via bitcoin-dev wrote:
> Solve several unrelated problems at the same time (fraud proofs,
> script extensibility, malleability, ...)

By "solve" do you mean "actually implement", or do you mean "make future
implementation theoretically possible"?

In other words, would a deployment of SW involve the creation of a new
network message for relaying fraud proofs, a specification that SPV
wallet developers can use to validate these messages and so know when to
ignore the highest (but invalid) PoW chain, and the ability to
automatically generate and broadcast these proofs in Bitcoin Core?
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Alex Morcos @ 2015-12-21 4:44 UTC
To: Pieter Wuille; +Cc: Bitcoin Dev, Gregory Maxwell

I'm also strongly in favor of moving forward with this plan. A couple of
points:

1) There has been too much confusion in looking at segwit as an
alternative way to increase the block size, and I think that is
incorrect. It should not be drawn into the block size debate, as it
brings many needed improvements and tools we'd want even if no one were
worried about block size now.

2) The full capacity increase plan Greg lays out makes it clear that we
can accomplish a tremendous amount without a contentious hard fork at
this point.

3) Let's stop arguing endlessly and actually do work that will benefit
everyone.

On Sun, Dec 20, 2015 at 11:33 PM, Pieter Wuille via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> Better late than never, let me comment on why I believe pursuing this
> plan is important.
> [...]
> So I'd like to ask the community that we work towards this plan, as it
> allows us to make progress without being forced to make a possibly
> divisive choice for one hardfork or another yet.
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Mark Friedenbach @ 2015-12-21 4:50 UTC
To: Pieter Wuille; +Cc: Bitcoin Dev, Gregory Maxwell

I am fully in support of the plan laid out in "Capacity increases for
the bitcoin system".

This plan provides real benefit to the ecosystem in solving a number of
longstanding problems in bitcoin. It improves the scalability of bitcoin
considerably.

Furthermore, it is time that we stop bikeshedding, start implementing,
and move forward, lest we lose more developers to the toxic atmosphere
this hard-fork debacle has created.

On Mon, Dec 21, 2015 at 12:33 PM, Pieter Wuille via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> Better late than never, let me comment on why I believe pursuing this
> plan is important.
> [...]
> So I'd like to ask the community that we work towards this plan, as it
> allows us to make progress without being forced to make a possibly
> divisive choice for one hardfork or another yet.
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Douglas Roark @ 2015-12-21 5:29 UTC
To: bitcoin-dev

On 2015/12/20 20:50, Mark Friedenbach via bitcoin-dev wrote:
> I am fully in support of the plan laid out in "Capacity increases for
> the bitcoin system".
>
> This plan provides real benefit to the ecosystem in solving a number
> of longstanding problems in bitcoin. It improves the scalability of
> bitcoin considerably.
>
> Furthermore, it is time that we stop bikeshedding, start implementing,
> and move forward, lest we lose more developers to the toxic atmosphere
> this hard-fork debacle has created.

Another +1 here. While I'd still like to see some sort of short-term
bump happen this year - good points have been raised about SegWit uptake
by wallet devs, for one thing - I really do think this is one of the
last pieces of the puzzle that'll make Bitcoin reasonably stable and
robust. If people have legitimate concerns, that's great, and they
should be addressed. I just worry that more navel-gazing and
bikeshedding will play into the hands of those with less than noble
intentions. That, and due to the somewhat complicated nature of SegWit,
it may take time to get skeptical miners and wallet devs on-board.

While we're talking about capacity increases, I'd like to reiterate that
I do think there should be some sort of short-term bump (Jeff's BIP 102
or his "BIP 202" variant, Dr. Back's 2/4/8 proposal ("BIP 248"), etc.),
hopefully chosen by this summer so that everybody can start to prepare.
I believe the KISS theory will work best. I talked to a couple of miners
at Scaling Bitcoin. It was obvious they generally prefer simple
solutions. (For that matter, if I put my miner's cap on, I prefer simple
solutions too!)

The research presented at Scaling Bitcoin regarding block size formulas
was quite interesting and worthy of discussion. The research was also,
IMO, nowhere near ready for consensus. Work and discussions on that
front should certainly continue and push for a more permanent (final?)
block size solution. I just think that, barring some extraordinary
solution that hasn't been widely discussed yet, a permanent solution
isn't feasible right now. A temporary bump isn't ideal. It's just the
only thing I've seen that strikes me as having any real shot at
consensus.

--
Douglas Roark
Cryptocurrency, network security, travel, and art.
https://onename.com/droark
joroark@vt.edu
PGP key ID: 26623924
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Btc Drak @ 2015-12-21 5:21 UTC
To: Pieter Wuille; +Cc: Bitcoin Dev, Gregory Maxwell

On Mon, Dec 21, 2015 at 4:33 AM, Pieter Wuille via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> Better late than never, let me comment on why I believe pursuing this
> plan is important.
> [...]
> So I'd like to ask the community that we work towards this plan, as it
> allows us to make progress without being forced to make a possibly
> divisive choice for one hardfork or another yet.

Thank you for saying this. I also think the plan is solid and delivers
multiple benefits without being contentious. The wins are so numerous,
it's frankly a no-brainer.

I guess the next step for segwit is a BIP and deployment on a testnet?
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Anthony Towns @ 2015-12-21 8:07 UTC
To: bitcoin-dev

On Mon, Dec 21, 2015 at 05:21:55AM +0000, Btc Drak via bitcoin-dev wrote:
> On Mon, Dec 21, 2015 at 4:33 AM, Pieter Wuille via bitcoin-dev wrote:
>> So I'd like to ask the community that we work towards this plan, as
>> it allows us to make progress without being forced to make a possibly
>> divisive choice for one hardfork or another yet.
> Thank you for saying this. I also think the plan is solid and delivers
> multiple benefits without being contentious.

+1's are off-topic, but... +1.

My impression is that each of libsecp256k1, versionbits, segregated
witness, IBLT, weak blocks, and OP_CSV has been demonstrated to be a
significant improvement that is implementable and doesn't introduce any
new attacks or risks [0]. There's some freaking awesome engineering
that's gone into all of those.

> I guess the next step for segwit is a BIP and deployment on a testnet?

I think the following proposed features are as yet missing from Pieter's
segwit branch, and I'm guessing patches for them would be appreciated
(see the sketch after this message for the first of these):

 - enforcing the proposed base + witness/4 < 1MB calculation
 - applying limits to sigops seen in witness signatures

I guess there might be other things that still need to be implemented as
well (and presumably bugs, of course)? I think I'm convinced that the
proposed plan is the best approach (as opposed to separate base < 1MB
and witness < 3MB limits, or doing it as a hard fork, or without
committing to a merkle head for the witnesses, etc.), though.

jl2012 already pointed to a draft segwit BIP in another thread, repeated
here though:

https://github.com/jl2012/bips/blob/segwit/bip-segwit.mediawiki

Cheers,
aj (hoping that was enough content after the +1 to not get modded ;)

[0] I'm still not persuaded that even a small increase in blocksize
    doesn't introduce unacceptable risks (frankly, I'm not entirely
    persuaded the *current* limits don't carry unacceptable risk), and
    that frustrates me no end. But I guess (even after six months of
    reading arguments about it!) I'm equally unpersuaded that the
    intense desire for more blocksize is anything other than
    fear/uncertainty/doubt mixed with a desire for transactions to be
    effectively free, rather than costing even a few cents each... So,
    personally, since the above doesn't really resolve that quandary
    for me, it doesn't really resolve the blocksize debate for me
    either. YMMV.
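A minimal sketch of the first missing piece aj lists, the
base + witness/4 accounting, written in Python with assumed names (the
actual branch is C++; this illustrates the rule, not its code):

    MAX_VIRTUAL_SIZE = 1_000_000  # 1 MB

    def virtual_size(base_bytes: int, witness_bytes: int) -> float:
        # Witness bytes are discounted to a quarter: four bytes of
        # witness data cost the same as one byte of base data.
        return base_bytes + witness_bytes / 4

    def block_within_limit(base_bytes: int, witness_bytes: int) -> bool:
        return virtual_size(base_bytes, witness_bytes) <= MAX_VIRTUAL_SIZE

    # A block that is nearly all witness data can reach ~4 MB raw while
    # still passing the 1 MB virtual limit -- the "4MB at worst" case.
    assert block_within_limit(0, 4_000_000)
    assert not block_within_limit(1_000_000, 4)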
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Jorge Timón @ 2015-12-21 9:56 UTC
To: Anthony Towns; +Cc: Bitcoin Dev

To clarify, although I have defended the deployment of segwit as a
hardfork, I have no strong opinion on whether to do that, or to do it as
a softfork first and then do a hardfork to move things out of the
coinbase to a better place. I do have a strong opinion against never
doing the later hardfork, though.

I would have supported segwit for Bitcoin even if it had only been
possible as a hardfork, but there's a softfork version, and that will
hopefully accelerate its deployment.

Since the plan seems to be to do a softfork first and a hardfork moving
the witness tree (and probably more things) outside of the coinbase
later, I support the plan for segwit deployment. In fact, the plan is
very exciting to me.
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Jonathan Toomim @ 2015-12-08 23:48 UTC
To: Gregory Maxwell; +Cc: Bitcoin Dev

On Dec 8, 2015, at 6:02 AM, Gregory Maxwell via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> The particular proposal amounts to a 4MB blocksize increase at worst.

I understood that SegWit would allow about 1.75 MB of data in the
average case while also allowing up to 4 MB of data in the worst case.
This means that the mining and block distribution network would need a
larger safety factor to deal with worst-case situations, right? If you
want to make sure that nothing goes wrong when everything is at its
worst, you need to size your network pipes to handle 4 MB in a timely
(DoS-resistant) fashion, but you'd normally only be able to use 1.75 MB
of it.

It seems to me that it would be safer to use a 3 MB limit, and that way
you'd also be able to use 3 MB of actual transactions. As an accounting
trick to bypass the 1 MB limit, SegWit sounds like it might make things
less well accounted for.
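For reference, the 1.75 MB and 4 MB figures both fall out of the same
discount arithmetic; a quick sketch (the ~57% typical witness share is
an assumed figure chosen to reproduce the 1.75 MB estimate, and real
traffic varies):

    LIMIT = 1_000_000  # virtual bytes: base + witness/4 <= 1 MB

    def raw_block_size(witness_share: float) -> float:
        # Solve total*(1 - w) + total*w/4 = LIMIT for the raw total,
        # where w is the fraction of block bytes that are witness data.
        return LIMIT / ((1 - witness_share) + witness_share / 4)

    print(raw_block_size(0.57) / 1e6)  # ~1.75 MB, typical transaction mix
    print(raw_block_size(1.00) / 1e6)  # 4.0 MB, all-witness worst case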
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Gregory Maxwell @ 2015-12-09 0:23 UTC
To: Jonathan Toomim; +Cc: Bitcoin Dev

On Tue, Dec 8, 2015 at 11:48 PM, Jonathan Toomim <j@toom.im> wrote:
> I understood that SegWit would allow about 1.75 MB of data in the
> average case while also allowing up to 4 MB of data in the worst case.
> This means that the mining and block distribution network would need a
> larger safety factor to deal with worst-case situations, right? If you
> want to make sure

By contrast, it does not reduce the safety factor for the UTXO set at
all, which most hold as a much greater concern in general; and that
isn't something you can say for a block size increase.

With respect to the witness safety factor: it's only needed in the case
of strategic or malicious behavior by miners -- both concerns which
several people promoting large block size increases have not only
disregarded but portrayed as unrealistic fear-mongering. Are you
concerned about it?

In any case, the other improvements described in my post give me reason
to believe that the risks created by that possibility will be
addressable.
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Jonathan Toomim @ 2015-12-09 0:40 UTC
To: Gregory Maxwell; +Cc: Bitcoin Dev

On Dec 9, 2015, at 8:09 AM, Gregory Maxwell <gmaxwell@gmail.com> wrote:
> On Tue, Dec 8, 2015 at 11:48 PM, Jonathan Toomim <j@toom.im> wrote:
>
> By contrast, it does not reduce the safety factor for the UTXO set at
> all, which most hold as a much greater concern in general;

I don't agree that "most" hold UTXO as a much greater concern in
general. I think that it's a concern that has been addressed less, which
means it is a more unsolved concern. But it is not currently a
bottleneck on block size. Miners can afford way more RAM than 1 GB, and
non-mining full nodes don't need to store the UTXO set in memory. I
think that at the moment, block propagation time is the bottleneck, not
UTXO size. It confuses me that SegWit is being pushed as a short-term
fix to the capacity issue when it does not address the short-term
bottleneck at all.

> and that isn't something you can say for a block size increase.

True. I'd really like to see a grand unified cost metric that includes
UTXO expansion. In the meantime, I think miners can use a bit more RAM.

> With respect to the witness safety factor: it's only needed in the
> case of strategic or malicious behavior by miners -- both concerns
> which several people promoting large block size increases have not
> only disregarded but portrayed as unrealistic fear-mongering. Are you
> concerned about it?

Some. Much less than e.g. Peter Todd, for example, but when other people
see something as a concern that I don't, I try to pay attention to it. I
expect Peter wouldn't like the safety factor issue, and I'm surprised he
didn't bring it up. Even if I didn't care about adversarial conditions,
it would still interest me to pay attention to the safety factor for
political reasons, as it would make subsequent blocksize increases much
more difficult. Conspiracy theorists might have a field day with that
one...

> In any case, the other improvements described in my post give me
> reason to believe that the risks created by that possibility will be
> addressable.

I'll take a look and try to see which of the worst-case concerns can and
cannot be addressed by those improvements.
* Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

From: Daniele Pinna @ 2015-12-09 12:28 UTC
To: Bitcoin Dev

If SegWit were implemented as a hardfork, could the entire blockchain be
reorganized starting from the Genesis block to free up historical space?
end of thread, other threads: [~2016-01-22 9:46 UTC | newest]

Thread overview: 56+ messages

2015-12-07 22:02 [bitcoin-dev] Capacity increases for the Bitcoin system Gregory Maxwell
2015-12-07 22:54 ` Bryan Bishop
2015-12-08  2:42 ` Anthony Towns
2015-12-08  4:58 ` Anthony Towns
2015-12-08  5:21 ` Gregory Maxwell
2015-12-08  6:54 ` Anthony Towns
2016-01-18 12:02 ` Anthony Towns
2016-01-22  9:46 ` Anthony Towns
2015-12-08 11:07 ` Wladimir J. van der Laan
2015-12-08 11:14 ` Jorge Timón
2015-12-08 15:12 ` Gavin Andresen
2015-12-08 15:55 ` Justus Ranvier
2015-12-08 17:41 ` Mark Friedenbach
2015-12-08 18:43 ` Justus Ranvier
2015-12-08 19:08 ` Tier Nolan
2015-12-08 19:31 ` Gregory Maxwell
2015-12-08 23:40 ` Jonathan Toomim
2015-12-08 23:48 ` Luke Dashjr
2015-12-09  0:54 ` Jonathan Toomim
2015-12-08 23:50 ` Jorge Timón
2015-12-09  0:56 ` Jonathan Toomim
2015-12-08 23:59 ` Gregory Maxwell
2015-12-09  0:58 ` Jorge Timón
2015-12-09  1:02 ` Jorge Timón
2015-12-09  1:09 ` Gavin Andresen
2015-12-09  1:31 ` Gregory Maxwell
2015-12-09  4:44 ` Ryan Butler
2015-12-09  6:29 ` Gregory Maxwell
2015-12-09  6:36 ` Ryan Butler
2015-12-09  6:59 ` Mark Friedenbach
2015-12-09  7:17 ` Gregory Maxwell
2015-12-09  7:54 ` Jorge Timón
2015-12-09  8:03 ` Gregory Maxwell
2015-12-09  8:46 ` Mark Friedenbach
2015-12-09 11:08 ` Jorge Timón
2015-12-09 16:40 ` Gavin Andresen
2015-12-11 16:18 ` Jorge Timón
2015-12-11 16:43 ` Gavin Andresen
2015-12-12  5:13 ` digitsu
2015-12-12 15:18 ` Mark Friedenbach
2015-12-14 11:21 ` Jonathan Toomim
2015-12-14 12:44 ` Adam Back
2015-12-09  4:51 ` Anthony Towns
2015-12-09 14:51 ` Chris
[not found] ` <CAPWm=eUomq6SBC0ky0WSs5=_G942vigm4RmgYuq0O-yJ-vqC2A@mail.gmail.com>
[not found] ` <CAPg+sBig9O5+he0PWhTkX5iin14QLz5+eCCu6KfwU=DxntKYtg@mail.gmail.com>
2015-12-21  4:33 ` Pieter Wuille
2015-12-21  4:42 ` Justus Ranvier
2015-12-21  4:44 ` Alex Morcos
2015-12-21  4:50 ` Mark Friedenbach
2015-12-21  5:29 ` Douglas Roark
2015-12-21  5:21 ` Btc Drak
2015-12-21  8:07 ` Anthony Towns
2015-12-21  9:56 ` Jorge Timón
2015-12-08 23:48 ` Jonathan Toomim
2015-12-09  0:23 ` Gregory Maxwell
[not found] ` <CAAS2fgRP8bLWZoKR9-iJS-2RKTGQQ9NG-LpAfa2BOdcR=GuB_A@mail.gmail.com>
2015-12-09  0:40 ` Jonathan Toomim
2015-12-09 12:28 Daniele Pinna