* [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-04  0:38 UTC
To: bitcoin-dev

Hi everyone,

We would like to share a paper for broad discussion titled "Bitcoin Chain Width Expansion Denial-of-Service Attacks". From the abstract:

  The attacks leverage unprotected resources for a denial-of-service by
  filling the disk and exhausting the CPU with unnecessary header and
  block data. This forces the node to halt operation. The attack
  difficulty ranges from difficult to easy. There are currently only
  limited guards against some of the attacks, and they require
  checkpoints to be enabled. This paper describes a solution that does
  not require enabling or maintaining checkpoints and provides improved
  security.

This is especially relevant as the checkpoints in Bitcoin Core have not been maintained or updated since mid-2014. Bitcoin Core has implemented headers-first synchronization since 2014, which provides the base for further improvements upon that design.

The paper is available at:
https://bcoin.io/papers/bitcoin-chain-expansion.pdf

The proposed solution has been implemented in Bcoin and is available at:
https://github.com/bcoin-org/bcoin/tree/chain-expansion

Best,
Braydon Fuller
* Re: [bitcoin-dev] Chain width expansion
From: David A. Harding @ 2019-10-04  8:20 UTC
To: Braydon Fuller, Bitcoin Protocol Discussion

On Thu, Oct 03, 2019 at 05:38:36PM -0700, Braydon Fuller via bitcoin-dev wrote:
> This paper describes a solution [to DoS attacks] that does not
> require enabling or maintaining checkpoints and provides improved security.
> [...]
> The paper is available at:
> https://bcoin.io/papers/bitcoin-chain-expansion.pdf

Hi Braydon,

Thank you for researching this important issue. An alternative solution proposed some time ago (I believe originally by Gregory Maxwell) was a soft fork to raise the minimum difficulty. You can find discussion of it in various old IRC conversations[1,2] as well as in related changes to Bitcoin Core such as PR #9053 adding a minimum chain work[3] and the assumed-valid change added in Bitcoin Core 0.14.0[4].

[1] http://www.erisian.com.au/meetbot/bitcoin-core-dev/2016/bitcoin-core-dev.2016-10-27-19.01.log.html#l-121
[2] http://www.erisian.com.au/meetbot/bitcoin-core-dev/2017/bitcoin-core-dev.2017-03-02-19.01.log.html#l-57
[3] https://github.com/bitcoin/bitcoin/pull/9053/commits/fd46136dfaf68a7046cf7b8693824d73ac6b1caf
[4] https://bitcoincore.org/en/2017/03/08/release-0.14.0/#assumed-valid-blocks

The solutions proposed in sections 4.2 and 4.3 of your paper have the advantage of not requiring any consensus changes. However, I find it hard to analyze the full consequences of the throttling solution in 4.3 and the pruning solution in 4.2. If we assume a node is on the most-PoW valid chain and that a huge fork is unlikely, it seems fine. But I worry that the mechanisms could also be used to keep a node that synced to a long-but-lower-PoW chain on that false chain (or other false chain) indefinitely, even if it had connections to honest peers that tried to tell it about the most-PoW chain.

For example, with your maximum throttle of 5 seconds between `getheaders` requests and the `headers` P2P message maximum of 2,000 headers per instance, it would take about half an hour to get a full chain's worth of headers. If a peer was disconnected before sending enough headers to establish that it was on the most-PoW chain, your pruning solution would delete whatever progress was made, forcing the next peer to start from genesis and taking it at least half an hour too. On frequently-suspended laptops or poor connections, it's possible a node could be operational for a long time before it kept the same connection open for half an hour. All that time, it would be on a dishonest chain.

By comparison, I find it easy to analyze the effect of raising the minimum difficulty. It is a change to the consensus rules, so it's something we should be careful about, but it's the kind of basically-one-line change that I expect should be easy for a large number of people to review directly. Assuming the choice of a new minimum (and at what point in the chain to use it) is sane, I think it would be easy to get acceptance, and I think it would further be easy to increase it again every five years or so as overall hashrate increases.

-Dave
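As a rough check of the half-hour figure above (a sketch only, assuming roughly 600,000 mainnet headers at the time and that every `getheaders` round trip is delayed by the full 5-second maximum):

    # Back-of-the-envelope sync time under the proposed maximum throttle.
    headers_total = 600_000        # approximate mainnet height in late 2019 (assumption)
    headers_per_msg = 2_000        # protocol maximum per `headers` message
    max_delay_s = 5                # proposed maximum delay between `getheaders` requests

    round_trips = headers_total / headers_per_msg    # 300 messages
    worst_case_s = round_trips * max_delay_s         # 1,500 seconds
    print(worst_case_s / 60)                         # 25.0 minutes, i.e. about half an hour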
* Re: [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-04 19:51 UTC
To: David A. Harding, Bitcoin Protocol Discussion

On 10/4/19 1:20 AM, David A. Harding wrote:
> On Thu, Oct 03, 2019 at 05:38:36PM -0700, Braydon Fuller via bitcoin-dev wrote:
>> This paper describes a solution [to DoS attacks] that does not
>> require enabling or maintaining checkpoints and provides improved security.
>> [...]
>> The paper is available at:
>> https://bcoin.io/papers/bitcoin-chain-expansion.pdf
> [..] But I worry that the mechanisms could also be used to keep a node that
> synced to a long-but-lower-PoW chain on that false chain (or other false
> chain) indefinitely even if it had connections to honest peers that
> tried to tell it about the most-PoW chain.

Here is an example: An attacker eclipses a target node during the initial block download; all of the target's outgoing peers are the attacker. The attacker has a low-work chain that is sent to the target. The total chainwork for the low-work chain is 0x09104210421039 at a height of 593,975. The target is now in the state of having a fully validated low-work dishonest chain.

The target node then connects to an honest peer and learns about the honest chain. The chainwork of the honest chain is 0x085b67d9e07a751e53679d68 at a height of 593,975. The first 69,500 headers of the honest chain would have a delay; however, the remaining 524,475 would not be delayed. Given a maximum of 5 seconds, this would be a total delay of only 157 seconds.
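The exact per-request delay depends on the chainwork-based formula in the paper, but an upper bound on the figure above can be sketched from the message sizes alone (assuming 2,000 headers per `headers` message):

    # Upper bound on the delay for the 69,500 throttled headers.
    delayed_headers = 69_500
    headers_per_msg = 2_000
    max_delay_s = 5

    rounds = -(-delayed_headers // headers_per_msg)   # 35 getheaders round trips (ceiling division)
    upper_bound_s = rounds * max_delay_s              # 175 seconds
    print(upper_bound_s)

The quoted total of 157 seconds is consistent with this 175-second ceiling.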
* Re: [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-11 21:24 UTC
To: David A. Harding, Bitcoin Protocol Discussion

On 10/4/19 1:20 AM, David A. Harding wrote:
> On Thu, Oct 03, 2019 at 05:38:36PM -0700, Braydon Fuller via bitcoin-dev wrote:
>> This paper describes a solution [to DoS attacks] that does not
>> require enabling or maintaining checkpoints and provides improved security.
>> [...]
>> The paper is available at:
>> https://bcoin.io/papers/bitcoin-chain-expansion.pdf
>
> Hi Braydon,
>
> Thank you for researching this important issue. An alternative solution
> proposed some time ago (I believe originally by Gregory Maxwell) was a
> soft fork to raise the minimum difficulty. You can find discussion of
> it in various old IRC conversations[1,2] as well as in related changes
> to Bitcoin Core such as PR #9053 adding a minimum chain work[3] and the
> assumed-valid change added in Bitcoin Core 0.14.0[4].
>
> [1] http://www.erisian.com.au/meetbot/bitcoin-core-dev/2016/bitcoin-core-dev.2016-10-27-19.01.log.html#l-121
> [2] http://www.erisian.com.au/meetbot/bitcoin-core-dev/2017/bitcoin-core-dev.2017-03-02-19.01.log.html#l-57
> [3] https://github.com/bitcoin/bitcoin/pull/9053/commits/fd46136dfaf68a7046cf7b8693824d73ac6b1caf
> [4] https://bitcoincore.org/en/2017/03/08/release-0.14.0/#assumed-valid-blocks

Okay, here is an overview of what I have found for the minimum difficulty proposal:

It describes a new consensus rule to not fork prior to, or accept headers below, a minimum difficulty once the best chain work known at the software's release time has been achieved. This would be instead of the rule to not fork before the last checkpoint, as checkpoints are removed.

It has an advantage over the existing checkpoint solution in that it does not require checkpoints to be enabled. This is not a surprise, as the proposal was to remove checkpoints entirely. It would increase the cost of the attack without checkpoints: long header chains would need to be built at this minimum difficulty, instead of at the current lowest difficulty of the genesis block. The exact cost of that is not yet calculated.

There are a few caveats with the approach mentioned: nodes are vulnerable if the initial loader peer is the attacker, it could leave minority hashpower without an ability to softfork away during a contentious hardfork, and it requires periodic consensus changes to maintain:

- Nodes are vulnerable during the initial sync when joining the network until the minimum chainwork is achieved. This is possible if the loader peer is the attacker. To mitigate this there would need to be a minimum chainwork defined based on the current chainwork. However, such a minimum could also be used to prevent nodes from joining the network, as it rejects rather than throttles.

- A contentious hardfork could leave a minority hashpower without an ability to softfork away without agreeing on a hardfork. This was the reason why the proposed minimum difficulty was about 10 devices instead of 10,000.

- It's technically a consensus change each time the minimum difficulty or best chainwork is updated. It is a similar consensus change to maintaining the last checkpoint, as it's used to prevent forking prior to the last checkpoint.

I think the solution proposed in the Bitcoin Chain Width Expansion paper solves those issues by limiting chain width and throttling based on chainwork, instead of rejecting blocks based on the minimum difficulty.
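A minimal sketch of the minimum-difficulty rule as described above (illustrative only; the constant values, helper names, and chain interface are assumptions, not Bitcoin Core code):

    # Reject low-difficulty headers once the chain has reached the chain work
    # that was known when the software was released.
    MINIMUM_CHAIN_WORK = 0x085b67d9e07a751e53679d68   # example value taken from this thread
    MINIMUM_DIFFICULTY_BITS = 0x1b00ffff              # arbitrary placeholder, well above genesis difficulty

    def accept_header(header, chain):
        if chain.tip_chainwork() >= MINIMUM_CHAIN_WORK:
            # Once the known chain work is achieved, refuse headers mined
            # below the hard-coded minimum difficulty (a larger target means
            # lower difficulty).
            if header.target() > target_from_bits(MINIMUM_DIFFICULTY_BITS):
                return False
        return chain.connect(header)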
* Re: [bitcoin-dev] Chain width expansion
From: Tier Nolan @ 2019-10-04 23:31 UTC
To: Braydon Fuller via bitcoin-dev

Are you assuming no network protocol changes?

At root, the requirement is that peers can prove their total chain POW.

Since each block has the height in the coinbase, a peer can send a short proof of height for a disconnected header and could assert the POW for that header.

Each peer could send the N strongest headers (lowest digest/most POW) for their main chain and prove the height of each one. The total chain work can be estimated as N times the POW for the lowest in the list.

This is an interesting property of how POW works. The 10th best POW block will have about 10% of the total POW.

The N blocks would be spread along the chain, and the peer could ask for all headers between any 2 of them and check the difference in claimed POW. If dishonesty is discovered, the peer can be banned and all info from that peer wiped.

You can apply the rule hierarchically. The honest peers would have a much higher POW chain. You could ask the peer to give you the N strongest headers between 2 headers that they gave for their best chain. You can check that their height is between the two limits. The peer would effectively be proving their total POW recursively.

This would require a new set of messages so you can request info about the best chain. It also has the nice feature that it allows you to see if multiple peers are on the same chain, since they will have the same best blocks.

The most elegant would be something like using SNARKs to directly prove that your chain tip has a particular POW. The download would go tip to genesis, unlike now when it is in the other direction.

------------------------------------------------------------------------

In regard to your proposal, I think the key is to limit things by peer, rather than globally.

The limit to header width should be split between peers. If you have N outgoing peers, they get 1/N of your header download resources each. You store the current best/most-POW header chain and at least one alternative chain per outgoing peer.

You could still prune old chains based on POW, but the best chain and the current chain for each outgoing peer should not be pruned.

The security assumption is that a node is connected to at least one honest node. If you split resources between all peers, then it prevents the dishonest nodes from flooding and wiping out the progress for the honest peer.

- Message Limiting -

I have the same objection here. The message limiting should be per peer. An honest peer who has just been connected to shouldn't suffer a penalty. Your point that it is only a few minutes anyway may make this point moot though.
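A minimal sketch of the work estimate described above (an illustration of the statistical idea only; the helper name and inputs are assumptions, not an existing protocol message):

    # Estimate a peer's total chain work from the N strongest (lowest-digest)
    # headers it claims for its chain.  On average, the Nth-lowest digest over
    # the whole chain covers about N/(total hashes) of the digest space, so:
    #     total hashes ~= N * 2**256 / (Nth lowest digest)
    def estimate_total_hashes(strongest_digests, n=10):
        digests = sorted(strongest_digests)[:n]   # N lowest digests claimed by the peer
        nth_lowest = digests[-1]
        return n * (1 << 256) // nth_lowest

This is the same estimator written out explicitly later in the thread. The estimate is noisy for small N, which is one reason to combine it with the hierarchical, recursive checks described above.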
* Re: [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-10 16:16 UTC
To: bitcoin-dev

On 10/4/19 4:31 PM, Tier Nolan via bitcoin-dev wrote:
> [..] At root, the requirement is that peers can prove their total chain POW. [...]

Indeed, it's currently necessary to receive all of the chain headers to determine that. It would be interesting to have a succinct chainwork proof for all cases, chainwork being the sum of the total proof-of-work in a chain. Such proofs currently only require a few headers for common cases, and the other cases can be identified.

> In regard to your proposal, I think the key is to limit things by peer, rather than globally. [...]

Yeah, there should be enough width available for every active connection; only one chain of headers is requested at a time per peer.

Peer-based limiting is susceptible to Sybil attacks; a peer could broadcast a few low-work header chains, reconnect, and repeat ad nauseam.

The delay for the next set of headers is based on the chainwork of the last received headers from the peer. The peer could change identity and run into the same limit. The unrequested header rate is tracked per peer.

A header chain with more chainwork will be requested at a faster rate than a header chain with less chainwork. The chainwork is compared to the current fully validated chain. Honest peers with more chainwork will have a time advantage over dishonest peers with less chainwork.

For example, let's assume a case where the initial chain of headers was dishonest and with low chainwork. The initial block download retrieves the header chain from a single loader peer first. Once recent time is reached, header chains are downloaded from all outgoing peers. A single honest peer will have an advantage over many dishonest peers. Thus, as you have mentioned, there is a security assumption that there is at least one connected honest node.
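One possible shape of the chainwork-based throttle described above, consistent with the behavior discussed in this thread (headers ahead of the validated chain's work are not delayed; others are delayed up to a 5-second maximum). This is only an illustrative sketch under those assumptions; the actual formula is defined by the paper and the Bcoin branch:

    MAX_DELAY_S = 5.0

    def next_getheaders_delay(received_chainwork, validated_chainwork):
        # Headers that already exceed our fully validated chain's work are
        # requested immediately.
        if received_chainwork > validated_chainwork:
            return 0.0
        # Otherwise scale the delay by how far the peer's claimed work lags
        # behind the validated chain, capped at the maximum.
        deficit = 1.0 - (received_chainwork / validated_chainwork)
        return MAX_DELAY_S * deficit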
* Re: [bitcoin-dev] Chain width expansion
From: Tier Nolan @ 2019-10-12 16:27 UTC
To: Braydon Fuller via bitcoin-dev

On Thu, Oct 10, 2019 at 5:20 PM Braydon Fuller via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> It would be interesting to have a succinct chainwork proof
> for all cases. Chainwork being a sum of the total proof-of-work in a
> chain. Such proofs currently only require a few headers for common cases
> and the other cases can be identified.

I wonder if a "seed" based system would be useful.

A seed is defined as a header with a very low digest.

When a new peer connects, you ask him to send you the header with the lowest digest on his main chain.

Chains ending at the strongest seeds are kept preferentially when discarding chains.

This requires a way to download chains backwards, which the protocol doesn't support at the moment.

The most-chain-work chain is overwhelmingly likely to contain the header with the strongest digest. This means that the honest peer's chain would be kept preferentially.

It also means that a node that is synced to the main chain can easily discard noise from dishonest peers. Before downloading, they could ask the peer to provide a header with at least 1% of the POW of the best header on the main chain starting at the fork point. If they can't, then their fork probably has less POW than the main chain.

> A peer could
> broadcast a few low-work header chains, reconnect, and repeat ad nauseam.

I meant connected peer rather than peer. If a peer disconnects and then reconnects as a new peer, then their allocation of bandwidth/RAM resets to zero.

Each peer would be allocated a certain bandwidth per minute for headers, as in a token bucket system. New peers would start with empty buckets.

If an active (outgoing) peer is building on a header chain, then that chain is preferentially kept. Essentially, the last chain that each outgoing peer built on may not be discarded.

In retrospect, that works out as the same as throttling peer download, just with a different method for throttling.

In your system, peers who extend the best chain don't get throttled, but the other peers do (with a gradual transition). This could be accomplished by adding 80 bytes into the peer's bucket if it extends the main chain.

> For example, let's assume a case where the initial chain of headers was
> dishonest and with low chainwork. The initial block download retrieves
> the header chain from a single loader peer first. Once recent time is
> reached, header chains are downloaded from all outgoing peers.

The key is that it must not be possible to prevent a single honest peer from making progress by flooding with other peers and getting the honest peer's chain discarded.

I think parallel downloading would be better than focusing on one peer initially. Otherwise, a dishonest peer can slowly send their headers to prevent moving to parallel mode.

Each connected peer is given a bandwidth and RAM allowance. If a connected peer forks off their own chain before reaching current time, then the fork is just discarded. The RAM allowance would be sufficient to hold one header per minute since genesis.

The header chains are relatively small (50MB), so it is not unreasonable to expect the honest peer to send the entire chain in one go.

I wonder if there is a formula that gives the minimum chain work required to have a particular chain length by now. One minute per header would mean that the difficulty would increase every adjustment, so it couldn't be maintained without an exponentially rising total chain work.

On Sat, Oct 12, 2019 at 2:41 AM Braydon Fuller via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> - Nodes are vulnerable during the initial sync when joining the
> network until the minimum chainwork is achieved.

Nodes should stay "headers-only" until they have hit the threshold.

It isn't really any different from a checkpoint anyway. "Download headers until you hit this header" is about the same as "download headers until you hit this chain work."

It would be different if header chains were downloaded from the final checkpoint backwards. You would start at a final checkpoint and work backwards. Each ancestor header is committed to by the final checkpoint, so it would not be possible for a dishonest peer to fool the node during IBD.

> This is possible if the
> loader peer is the attacker. To mitigate this there would need to be a
> minimum chainwork defined based on the current chainwork. However, such
> a minimum could also be used to prevent nodes from joining the network,
> as it rejects rather than throttles.

I think mixing two different concepts makes this problem more complex than needed.

It looks like they are aiming for hard-coding

A) "The main chain has at least C chainwork"
B) "All blocks after A is satisfied have at least X POW"

To me, this is equivalent to a checkpoint, without it being called a checkpoint.

The point about excluding checkpoints is that it means that (in theory) two clients can't end up on incompatible forks due to different checkpoints.

The "checkpoint" is replaced by a statement by the dev team that

"There exists at least one valid chain with C chainwork"

which is equivalent to

"The longest valid chain has at least C chainwork"

Two clients making those statements can't cause a permanent incompatibility. If they pick a different C, then eventually, once the main chain has more than the larger chain work, they will agree again. Checkpoints don't automatically heal.

Adding in a minimum POW requirement could break the requirement for that to happen. Just because B was met on the original main chain, a fork isn't required to meet it.

> - It's technically a consensus change each time the minimum difficulty
> or best chainwork is updated. It is a similar consensus change to
> maintaining the last checkpoint, as it's used to prevent forking prior
> to the last checkpoint.

I agree on the min difficulty being a consensus change.

The minimum chain work is just the devs making a true statement and then using it to optimize things.
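A minimal sketch of the per-peer token-bucket idea suggested above (illustrative only; the refill rate, the per-header top-up for extending the best chain, and the class name are assumptions drawn from this message, not an existing implementation):

    import time

    HEADER_SIZE = 80                 # bytes per block header
    REFILL_PER_MIN = 20 * 1024       # assumed per-peer header budget per minute

    class HeaderBucket:
        def __init__(self):
            self.tokens = 0.0        # new peers start with an empty bucket
            self.last = time.monotonic()

        def refill(self):
            now = time.monotonic()
            self.tokens += (now - self.last) / 60.0 * REFILL_PER_MIN
            self.last = now

        def allow_headers(self, count, extends_best_chain):
            self.refill()
            if extends_best_chain:
                # 80 bytes credited per header that extends the best chain,
                # so such peers are effectively unthrottled.
                self.tokens += count * HEADER_SIZE
            cost = count * HEADER_SIZE
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False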
* Re: [bitcoin-dev] Chain width expansion
From: Joachim Strömbergson @ 2019-10-12 17:56 UTC
To: Tier Nolan, Bitcoin Protocol Discussion

I like the backwards syncing idea. First you provide proof of your best block height via coinbase, then sync backwards. It solves lots of related problems. You know how much you can expect from the given peer.

On a different note, one of the problems that I haven't seen mentioned here yet is the timewarp attack. It is relevant to some of the proposed solutions. It should be possible, IIRC, for a malicious node to generate a much longer chain with a super-slow timestamp increase (~5 blocks in 1 second) without increasing difficulty (i.e. staying at min. diff.). This could produce a chain that is ~2,500 times longer than the main chain without having multiple branches.

I also agree that there is no big difference between hash checkpoints and "min. diff. checkpoints".

Joachim

------- Original Message -------
On Saturday, October 12, 2019 4:27 PM, Tier Nolan via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:

> [...]
>
> I wonder if a "seed" based system would be useful.
>
> A seed is defined as a header with a very low digest.
>
> When a new peer connects, you ask him to send you the header with the
> lowest digest on his main chain.
>
> [...]
* Re: [bitcoin-dev] Chain width expansion
From: Tier Nolan @ 2019-10-12 20:46 UTC
To: Joachim Strömbergson
Cc: Bitcoin Protocol Discussion

On Sat, Oct 12, 2019 at 6:56 PM Joachim Strömbergson <joachimstr@protonmail.com> wrote:
> I like the backwards syncing idea. First you provide proof of your best
> block height via coinbase, then sync backwards. It solves lots of related
> problems. You know how much you can expect from the given peer.

It shows you which nodes are on the same chain too. If you have 8 peers and you ask the 8 of them for their 8 best, then they should all agree on most of them.

You can then ask each of the 8 to start sending you headers backwards from one of the 8 seeds. They will all roughly split the chain into 8 equal pieces, though the split will be based on work rather than header height.

If there is disagreement, you can give priority to the node(s) with the lowest headers until they have completed their download.

It requires a network protocol change to allow reverse block downloads though (and messages to indicate lowest headers, etc.).

> On a different note, one of the problems that I haven't seen mentioned here
> yet is the timewarp attack. It is relevant to some of the proposed
> solutions. It should be possible, IIRC, for a malicious node to generate
> a much longer chain with a super-slow timestamp increase (~5 blocks in 1
> second) without increasing difficulty (i.e. staying at min. diff.). This
> could produce a chain that is ~2,500 times longer than the main chain
> without having multiple branches.

That is a good point. It answers my question about a formula for the maximum number of blocks.

5 * 60 * 60 * 24 * 365 = 157,680,000

That's around 158 million blocks per year at that rate. I assume the 5-per-second limit is that the timestamp must be greater than the median of the last 11, rather than greater or equal?

The timewarp bug can be fixed by a basic soft fork. You just need to limit the maximum difference between the timestamp of the first header in a period and the last header in the previous period.

An alternative would be to soft fork in a maximum block rate. In addition to the current rules, you could limit it to a maximum of 1 block every 2 minutes. That rule shouldn't activate normally.

block.height <= (block.timestamp - genesis.timestamp) / (2 mins)

It could have some weird incentives if it actually activated though. Miners would have to shut down mining if they were finding blocks too fast.
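A minimal sketch of the alternative maximum-block-rate rule given above (illustrative only; the function and constant names are assumptions):

    GENESIS_TIMESTAMP = 1231006505   # mainnet genesis block time
    TWO_MINUTES = 120

    def block_rate_ok(height, timestamp):
        # Soft-fork rule: no more than one block per two minutes on average
        # since genesis.  Under normal operation this never binds, because
        # blocks arrive roughly every ten minutes.
        return height <= (timestamp - GENESIS_TIMESTAMP) // TWO_MINUTES

With honest 10-minute spacing the right-hand side grows five times faster than the height, so the rule would only bite on chains that claim far more blocks than elapsed time allows, such as a timewarped minimum-difficulty chain.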
* Re: [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-16 19:07 UTC
To: Tier Nolan, Bitcoin Protocol Discussion

On 10/12/19 1:46 PM, Tier Nolan via bitcoin-dev wrote:
> On Sat, Oct 12, 2019 at 6:56 PM Joachim Strömbergson <joachimstr@protonmail.com> wrote:
>> On a different note, one of the problems that I haven't seen mentioned here
>> yet is the timewarp attack. It is relevant to some of the proposed
>> solutions. It should be possible, IIRC, for a malicious node to generate
>> a much longer chain with a super-slow timestamp increase (~5 blocks in 1
>> second) without increasing difficulty (i.e. staying at min. diff.). This
>> could produce a chain that is ~2,500 times longer than the main chain
>> without having multiple branches.
>
> [..]
>
> The timewarp bug can be fixed by a basic soft fork. You just need to limit
> the maximum difference between the timestamp of the first header in a
> period and the last header in the previous period.

Yeah, that makes sense as it corrects the off-by-one error. I think this solution has been included in the draft proposal "The Great Consensus Cleanup". It would need to be effective not only for the main chain but also for any future forked chain.
* Re: [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-15  0:42 UTC
To: bitcoin-dev

On 10/12/19 10:56 AM, Joachim Strömbergson via bitcoin-dev wrote:
> [...] First you provide proof of your best block height via coinbase [...]

So I don't think you can use the height in the coinbase for that purpose, as it's not possible to validate it without the previous headers. That's common for more than just the height.

> [...] to generate a much longer chain with a super-slow timestamp increase
> (~5 blocks in 1 second) without increasing difficulty (i.e. staying at
> min. diff.). [...]

In that case, it would take about 7 minutes of block time for the next retarget period, every 2,016 blocks, and the difficulty would adjust. The difficulty would adjust in that case as if 2 weeks of blocks had been mined in 7 minutes. For the difficulty to remain the same, the time between blocks needs to be 10 minutes.
* Re: [bitcoin-dev] Chain width expansion
From: Joachim Strömbergson @ 2019-10-15  7:20 UTC
To: Braydon Fuller, Bitcoin Protocol Discussion

>> [...] First you provide proof of your best block height via coinbase [...]
>
> So I don't think you can use the height in the coinbase for that
> purpose, as it's not possible to validate it without the previous
> headers. That's common for more than just the height.

You can fake it of course, but it means you are spending money on PoW and I can't see a viable attack here. You can commit to either a lower height than the actual one or a higher one, if you are malicious. If it is lower, then your chain will look weaker with some heuristic based on the PoW of the tip and the chain length, so you are not going to do that. It thus seems it only makes sense to commit to a higher-than-actual height.

OK, this could convince me to be interested in your chain over others. So let's say the real chain is 600k blocks, you claim 1M blocks, and you prove a 1M-height block to me. So I can ask you for 1) 2,000 headers from the tip down, AND 2) another proof of height for the block at 1M-2000. To be able to provide that you need to have such a chain and you can reuse any real subchain from mainnet. It's still possible that you can deliver that, but not for high difficulty.

Now if the timewarp attack is blocked, you will have a pretty hard time creating such a chain that would look strongest in cumulative work. I don't have actual numbers though, so I can't say this would mitigate everything.

>> [...] to generate a much longer chain with a super-slow timestamp increase
>> (~5 blocks in 1 second) without increasing difficulty (i.e. staying at
>> min. diff.). [...]
>
> In that case, it would take about 7 minutes of block time for
> the next retarget period, every 2,016 blocks, and the difficulty would
> adjust. The difficulty would adjust in that case as if 2 weeks of blocks
> had been mined in 7 minutes. For the difficulty to remain the same, the
> time between blocks needs to be 10 minutes.

This calculation does not apply under the timewarp attack. You can fake the timestamps of all blocks except for those relevant to the retarget calculation. Those are only the first and the last block in the 2016-block window.
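A minimal sketch of the spot check described above (illustrative only; `get_headers_backwards`, `prove_height_via_coinbase`, and `connects_backwards` are hypothetical, since the current P2P protocol has no backwards header download or height-proof messages):

    HEADERS_PER_STEP = 2_000

    def spot_check_claimed_height(peer, claimed_tip, claimed_height):
        # 1) 2,000 headers from the claimed tip downwards, which must connect.
        headers = peer.get_headers_backwards(claimed_tip, HEADERS_PER_STEP)
        if not connects_backwards(claimed_tip, headers):
            return False
        # 2) A coinbase-based proof of height for the header 2,000 back,
        #    so a tall claim cannot be satisfied by headers alone.
        return peer.prove_height_via_coinbase(headers[-1],
                                              claimed_height - HEADERS_PER_STEP)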
* Re: [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-15  8:12 UTC
To: Joachim Strömbergson, Bitcoin Protocol Discussion

On 10/15/19 12:20 AM, Joachim Strömbergson wrote:
>>> [...] to generate a much longer chain with a super-slow timestamp increase
>>> (~5 blocks in 1 second) without increasing difficulty (i.e. staying at
>>> min. diff.). [...]
>>
>> In that case, it would take about 7 minutes of block time for
>> the next retarget period, every 2,016 blocks, and the difficulty would
>> adjust. The difficulty would adjust in that case as if 2 weeks of blocks
>> had been mined in 7 minutes. For the difficulty to remain the same, the
>> time between blocks needs to be 10 minutes.
>
> This calculation does not apply under the timewarp attack. You can fake
> the timestamps of all blocks except for those relevant to the retarget
> calculation. Those are only the first and the last block in the
> 2016-block window.

This must be in reference to the non-overlapping difficulty calculation and the off-by-one bug?
* Re: [bitcoin-dev] Chain width expansion
From: Joachim Strömbergson @ 2019-10-15 15:50 UTC
To: Braydon Fuller
Cc: Bitcoin Protocol Discussion

>>>> [...] to generate a much longer chain with a super-slow timestamp increase
>>>> (~5 blocks in 1 second) without increasing difficulty (i.e. staying at
>>>> min. diff.). [...]
>>>
>>> In that case, it would take about 7 minutes of block time for
>>> the next retarget period, every 2,016 blocks, and the difficulty would
>>> adjust. The difficulty would adjust in that case as if 2 weeks of blocks
>>> had been mined in 7 minutes. For the difficulty to remain the same, the
>>> time between blocks needs to be 10 minutes.
>>
>> This calculation does not apply under the timewarp attack. You can fake
>> the timestamps of all blocks except for those relevant to the retarget
>> calculation. Those are only the first and the last block in the
>> 2016-block window.
>
> This must be in reference to the non-overlapping difficulty calculation
> and the off-by-one bug?

Indeed.
* Re: [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-16 19:25 UTC
To: Joachim Strömbergson
Cc: Bitcoin Protocol Discussion

On 10/15/19 8:50 AM, Joachim Strömbergson wrote:
>>>>> [...] to generate a much longer chain with a super-slow timestamp increase
>>>>> (~5 blocks in 1 second) without increasing difficulty (i.e. staying at
>>>>> min. diff.). [...]
>>>>
>>>> In that case, it would take about 7 minutes of block time for
>>>> the next retarget period, every 2,016 blocks, and the difficulty would
>>>> adjust. The difficulty would adjust in that case as if 2 weeks of blocks
>>>> had been mined in 7 minutes. For the difficulty to remain the same, the
>>>> time between blocks needs to be 10 minutes.
>>>
>>> This calculation does not apply under the timewarp attack. You can fake
>>> the timestamps of all blocks except for those relevant to the retarget
>>> calculation. Those are only the first and the last block in the
>>> 2016-block window.
>>
>> This must be in reference to the non-overlapping difficulty calculation
>> and the off-by-one bug?
>
> Indeed.

Yeah, limiting the width of the chain would not be effective unless the timewarp off-by-one bug is resolved, since the height can be extended instead. Rate limiting based on chainwork would slow down a timewarped low-work header chain. There would be a maximum rate at which the headers could be sent; it would be around 32 KB/s. It would take about a month to send 100 GB.
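The rate and duration above can be sanity-checked with the numbers already in this thread (2,000 headers of 80 bytes per message, at most one message every 5 seconds):

    # Maximum throttled header rate and time to deliver 100 GB of headers.
    headers_per_msg = 2_000
    header_size = 80                      # bytes
    min_interval_s = 5

    rate = headers_per_msg * header_size / min_interval_s   # 32,000 bytes/s, ~32 KB/s
    seconds = 100e9 / rate                                   # ~3.1 million seconds
    print(seconds / 86_400)                                  # ~36 days, about a month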
* Re: [bitcoin-dev] Chain width expansion
From: Tier Nolan @ 2019-10-15 18:30 UTC
To: Braydon Fuller via bitcoin-dev

On Tue, Oct 15, 2019 at 7:29 AM Braydon Fuller via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> So I don't think you can use the height in the coinbase for that
> purpose, as it's not possible to validate it without the previous
> headers. That's common for more than just the height.

It is a property of blockchains that the lowest digest for a chain represents the total chainwork.

Estimated total hash count = N * (2^256) / (Nth lowest (i.e. strongest) digest over all headers)

To produce a fake set of 10 headers that gives a higher work estimate than the main chain would require around the same effort as went into the main chain in the first place. You might as well completely build an alternative chain.

Working backwards from one of those headers, you have to follow the actual chain back to genesis.
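As a rough worked example of the estimator (using the ~0x085b67d9e07a751e53679d68 chainwork quoted earlier in the thread, roughly 2^91 expected hashes): a chain with that much work would be expected to have its 10th-strongest header digest near 10 * 2^256 / 2^91, which is about 2^168, and plugging that back in gives

    Estimated total hash count = 10 * 2^256 / 2^168 = 10 * 2^88, roughly 2^91

so a handful of strongest headers recovers the right order of magnitude, though an estimate from only 10 samples is naturally noisy.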
* Re: [bitcoin-dev] Chain width expansion
From: Braydon Fuller @ 2019-10-15  0:38 UTC
To: bitcoin-dev

On 10/12/19 9:27 AM, Tier Nolan via bitcoin-dev wrote:
> [...]
>
> I think parallel downloading would be better than focusing on one peer
> initially. Otherwise, a dishonest peer can slowly send their headers to
> prevent moving to parallel mode.
>
> [...]

As implemented, there is a timeout for that loader peer based on the amount of time it should take to request all the headers. The time period is defined as a base time plus the number of expected headers times an expected amount of time per header. For example, the timeout would be 25 minutes with a base time of 15 minutes, 1 millisecond per header, and an expected 600,000 headers.
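The example timeout works out as follows (a restatement of the formula above, using the same numbers):

    base_time = 15 * 60                 # 900 seconds
    per_header = 0.001                  # 1 millisecond per expected header
    expected_headers = 600_000

    timeout = base_time + per_header * expected_headers   # 900 + 600 = 1,500 seconds
    print(timeout / 60)                                    # 25 minutes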
Thread overview: 17+ messages

2019-10-04  0:38 [bitcoin-dev] Chain width expansion Braydon Fuller
2019-10-04  8:20 ` David A. Harding
2019-10-04 19:51 ` Braydon Fuller
2019-10-11 21:24 ` Braydon Fuller
2019-10-04 23:31 ` Tier Nolan
2019-10-10 16:16 ` Braydon Fuller
2019-10-12 16:27 ` Tier Nolan
2019-10-12 17:56 ` Joachim Strömbergson
2019-10-12 20:46 ` Tier Nolan
2019-10-16 19:07 ` Braydon Fuller
2019-10-15  0:42 ` Braydon Fuller
2019-10-15  7:20 ` Joachim Strömbergson
2019-10-15  8:12 ` Braydon Fuller
2019-10-15 15:50 ` Joachim Strömbergson
2019-10-16 19:25 ` Braydon Fuller
2019-10-15 18:30 ` Tier Nolan
2019-10-15  0:38 ` Braydon Fuller