* [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Matt Whitlock
Date: 2015-05-08 7:20 UTC
To: bitcoin-development

Between all the flames on this list, several ideas were raised that did not get much attention. I hereby resubmit these ideas for consideration and discussion.

- Perhaps the hard block size limit should be a function of the actual block sizes over some trailing sampling period. For example, take the median block size among the most recent 2016 blocks and multiply it by 1.5. This allows Bitcoin to scale up gradually and organically, rather than having human beings guess at an appropriate limit. (A sketch of this rule appears after this message.)

- Perhaps the hard block size limit should be determined by a vote of the miners. Each miner could embed a desired block size limit in the coinbase transactions of the blocks it publishes. The effective hard block size limit would be the size having the greatest number of votes within a sliding window of the most recent blocks.

- Perhaps the hard block size limit should be a function of block-chain length, so that it can scale up smoothly rather than jumping immediately to 20 MB. This function could be linear (anticipating a breakdown of Moore's Law) or quadratic.

I would support any of the above, but I do not support Mike Hearn's proposed jump to 20 MB. Hearn's proposal kicks the can down the road without actually solving the problem, and it does so in a controversial (step-function) way.
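As a concrete reading of the first proposal, here is a minimal Python sketch. The 2016-block window and the 1.5 multiplier are the values named in the message; the function name and the scaffolding around them are illustrative assumptions.

    import statistics

    TRAILING_WINDOW = 2016   # most recent blocks to sample (one difficulty period)
    GROWTH_FACTOR = 1.5      # multiplier applied to the trailing median

    def hard_limit_from_history(recent_block_sizes):
        """Hard block size limit as a function of actual block sizes:
        the median of the trailing window, multiplied by 1.5."""
        window = recent_block_sizes[-TRAILING_WINDOW:]
        return int(statistics.median(window) * GROWTH_FACTOR)

    # If recent blocks hover around 400,000 bytes, the limit becomes 600,000:
    assert hard_limit_from_history([400_000] * 2016) == 600_000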
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Mike Hearn
Date: 2015-05-08 10:15 UTC
To: Matt Whitlock; Cc: Bitcoin Dev

There are certainly arguments to be made for and against all of these proposals. The fixed 20 MB cap isn't actually my proposal at all; it is from Gavin. I am supporting it because anything is better than nothing.

Gavin originally proposed that the block size be a function of time. That got dropped, I suppose to make the process of getting consensus easier. It is "the simplest thing that can possibly work".

I would like to see the process of chain forking become less traumatic. I remember Gavin, Jeff, and I once considering (on stage at a conference?) that maybe there should be a scheduled fork every year, so people know when to expect them. If everything goes well, I see no reason why 20 MB would be the limit forever.
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Clément Elbaz
Date: 2015-05-08 10:30 UTC
To: Matt Whitlock, bitcoin-development

Matt: I think proposals #1 and #3 are a lot better than #2, and #1 is my favorite.

I see two problems with proposal #2. The first is that, as we see in democracies, there is often a mismatch between people's conscious votes and those same people's behavior. Relying on an intentional vote made consciously by miners choosing a configuration value can lead to distorted results if their actual behavior doesn't correlate with their vote (e.g., they all vote for a small block size because it is the default configuration of their software, and then they fill it completely all the time and everything crashes).

The second problem with proposal #2 is that if Gavin and Mike are right, there is simply no time to gather a meaningful number of votes in the coinbases after the fork but before the Bitcoin scalability crash.

I like proposal #1 because the "vote" is made using already-available data. There is also no possible mismatch between behavior and vote: as a miner, you vote by choosing to create a big (or small) block, and your actions reflect your vote. It is simple and straightforward.

My feeling on proposal #3 is that it mixes apples and oranges a little, but I may not be seeing all the implications.

On Fri, 8 May 2015 at 09:21, Matt Whitlock <bip@mattwhitlock.name> wrote:
> Between all the flames on this list, several ideas were raised that did
> not get much attention. [...]
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Joel Joonatan Kaartinen
Date: 2015-05-08 12:32 UTC
To: Matt Whitlock; Cc: Bitcoin Development

Matt,

It seems you missed my suggestion about basing the maximum block size on the bitcoin days destroyed by the transactions included in the block. I think it has potential both for scaling and for keeping up constant fee pressure. If tuned properly, it should both stop spamming and increase the block size maximum when there are a lot of real transactions waiting for inclusion.

- Joel

On Fri, May 8, 2015 at 1:30 PM, Clément Elbaz <clem.ds@gmail.com> wrote:
> Matt: I think proposals #1 and #3 are a lot better than #2, and #1 is my
> favorite. [...]
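Kaartinen's message leaves the tuning open. A minimal sketch of one possible reading, assuming a linear mapping from a block's total bitcoin days destroyed to its size allowance; the constants, names, and the linear form itself are illustrative assumptions, not from the thread.

    SECONDS_PER_DAY = 86_400
    BASE_LIMIT = 1_000_000    # today's 1 MB limit, in bytes
    BYTES_PER_BDD = 100       # illustrative: extra bytes allowed per bitcoin-day destroyed
    HARD_CAP = 20_000_000     # illustrative safety ceiling

    def bitcoin_days_destroyed(inputs, block_time):
        """Sum of (value in BTC) * (age in days) over a transaction's inputs.
        Each input is a (value_btc, creation_time) pair, times in seconds."""
        return sum(value * (block_time - created) / SECONDS_PER_DAY
                   for value, created in inputs)

    def max_block_size(per_tx_inputs, block_time):
        """The block's size allowance grows with the total bitcoin days
        destroyed by the transactions it includes."""
        total_bdd = sum(bitcoin_days_destroyed(inputs, block_time)
                        for inputs in per_tx_inputs)
        return min(BASE_LIMIT + int(total_bdd * BYTES_PER_BDD), HARD_CAP)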
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Matt Whitlock
Date: 2015-05-08 12:48 UTC
To: Joel Joonatan Kaartinen; Cc: Bitcoin Development

On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
> It seems you missed my suggestion about basing the maximum block size on
> the bitcoin days destroyed by the transactions included in the block. [...]

I saw it. I apologize for not including it in my list. I should have, for the sake of discussion, even though I have a problem with it.

My problem with it is that "bitcoin days destroyed" is not a measure of demand for space in the block chain. In the distant future, when Bitcoin is the predominant global currency, bitcoins will have such high velocity that the number of bitcoin days destroyed in each block will be much lower than at present. Does this mean the block size limit should be lower in the future than it is now? Clearly that would be incorrect.

Perhaps I am misunderstanding your proposal. Could you describe it more explicitly?
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Matt Whitlock
Date: 2015-05-08 13:24 UTC
To: Joel Joonatan Kaartinen; Cc: Bitcoin Development

On Friday, 8 May 2015, at 8:48 am, Matt Whitlock wrote:
> My problem with it is that "bitcoin days destroyed" is not a measure of
> demand for space in the block chain. [...]

I feel a need to point out something that may be obvious to some but not to others: the cumulative total of "bitcoin days destroyed" since the genesis block is bounded by the cumulative total of "bitcoin days created" since the genesis block. (You can't destroy something that hasn't yet been created.) After all coins have been mined, bitcoin days will be created at a rate of 21M bitcoin days per day, so in the long run bitcoin days cannot be destroyed at a rate exceeding 21M per day. This upper bound on the destruction rate holds regardless of Bitcoin adoption and growth in demand for space in the block chain.

Even ignoring the fact that "bitcoin days destroyed" is bounded whereas demand for block-chain space is not, we'd still have to answer the question of whether the rate of bitcoin days destroyed is a good estimator of demand for block-chain space. Why would it be? Suppose some day Satoshi moves his 1M coins to a new address. Would this huge destruction of bitcoin days imply anything about future demand for space in the block chain? No.
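A quick arithmetic check of the two rates discussed above. The six-year holding period for the hypothetical Satoshi spend is an assumed figure, used only for illustration.

    TOTAL_COINS = 21_000_000        # eventual supply
    CREATION_RATE = TOTAL_COINS     # bitcoin days created per day once fully mined

    # One idle fortune can still dwarf the steady state: 1M coins held
    # for six years destroy ~2.19 billion bitcoin days in a single spend.
    satoshi_spike = 1_000_000 * 6 * 365
    print(satoshi_spike / CREATION_RATE)   # ~104: over 100 days' worth of
                                           # creation, destroyed at once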
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Gavin Andresen
Date: 2015-05-08 12:48 UTC
To: Joel Joonatan Kaartinen; Cc: Bitcoin Development

I like the bitcoin-days-destroyed idea. I like lots of the ideas that have been presented here, on the bitcointalk forums, etc.

It is easy to make a proposal; it is hard to wade through all of the proposals. I'm going to balance that equation by completely ignoring any proposal that isn't accompanied by code that implements it (with appropriate tests).

However, I'm not the bottleneck-- you need to get the attention of the other committers and convince THEM:

a) something should be done "now-ish"
b) your idea is good

We are stuck on (a) right now, I think.

On Fri, May 8, 2015 at 8:32 AM, Joel Joonatan Kaartinen <joel.kaartinen@gmail.com> wrote:
> It seems you missed my suggestion about basing the maximum block size on
> the bitcoin days destroyed by the transactions included in the block. [...]

--
Gavin Andresen
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Peter Todd
Date: 2015-05-08 16:51 UTC
To: Joel Joonatan Kaartinen; Cc: Bitcoin Development

On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
> It seems you missed my suggestion about basing the maximum block size on
> the bitcoin days destroyed by the transactions included in the block. [...]

The problem with gating block creation on bitcoin days destroyed is that there is a strong potential to give big mining pools a huge advantage: they can contract with large Bitcoin owners and buy dummy transactions with large numbers of bitcoin days destroyed on demand, whenever they need more days destroyed to create larger blocks. Similarly, with appropriate SIGHASH flags, such contracting can be done by modifying *existing* transactions on demand.

Ultimately, bitcoin days destroyed just becomes a very complex version of transaction fees, and it is already well known that gating block size on total transaction fees doesn't work.

--
'peter'[:-1]@petertodd.org
00000000000000000f53e2d214685abf15b6d62d32453a03b0d472e374e10e94
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Joel Joonatan Kaartinen
Date: 2015-05-08 22:36 UTC
To: Peter Todd; Cc: Bitcoin Development

Such a contract is a possibility, but why would big owners give an exclusive right to such pools? It seems to me it would make sense to offer those to any miner, as long as they get paid a little for it, especially when it is as simple as offering an incomplete transaction with the appropriate SIGHASH flags.

Part of the reason I like this idea is that it would allow stakeholders a degree of influence on how large the fees are. At least on the surface, the incentives look pretty well matched: stakeholders have an incentive not to let fees drop too low, so the network continues to be usable, and an incentive not to raise them too high, because that would push users into other systems. Also, there would be competition between stakeholders, which should keep fees reasonable.

I think this would at least be preferable to the "let the miner decide" model.

- Joel

On Fri, May 8, 2015 at 7:51 PM, Peter Todd <pete@petertodd.org> wrote:
> The problem with gating block creation on bitcoin days destroyed is that
> there is a strong potential to give big mining pools a huge advantage [...]
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Peter Todd
Date: 2015-05-09 18:30 UTC
To: Joel Joonatan Kaartinen; Cc: Bitcoin Development

On Sat, May 09, 2015 at 01:36:56AM +0300, Joel Joonatan Kaartinen wrote:
> Such a contract is a possibility, but why would big owners give an
> exclusive right to such pools? [...]

Like many things, the fact that they need to negotiate the right at all is a *huge* barrier to smaller mining operations, as well as being an attractive point of control for regulators.

> Part of the reason I like this idea is that it would allow stakeholders
> a degree of influence on how large the fees are. [...]

If you want to allow stakeholders influence, you should look into John Dillon's proof-of-stake blocksize voting scheme:

http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg02323.html

--
'peter'[:-1]@petertodd.org
00000000000000000e7980aab9c096c46e7f34c43a661c5cb2ea71525ebb8af7
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Alex Mizrahi
Date: 2015-05-08 15:57 UTC
To: Bitcoin Dev

Adaptive schedules, i.e. those where the block size limit depends not only on block height but on other parameters as well, are surely attractive in the sense that the system can adapt to actual use, but they also open the possibility of manipulation.

E.g. one mining company might try to bankrupt the others by making mining unprofitable. To do that, it would accept transactions with ridiculously low fees (e.g. 1 satoshi per transaction). Of course, it would suffer losses itself, but it might be able to survive that if it has access to financial resources (e.g. companies backed by banks would have an advantage). Once competitors close down their mining operations, it can drive fees upward.

So if you don't want to open room for manipulation (which is very hard to analyze), it is better to have a block size hard limit which depends only on block height. On top of that there might be a soft limit which is enforced by a majority of miners. (A sketch of the hard/soft distinction follows below.)
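The hard/soft distinction Mizrahi draws can be made concrete with a small sketch; the height schedule and the particular limit values here are invented for illustration.

    def hard_limit(height):
        """Consensus rule: a function of block height only, so every node
        derives the same value (the schedule here is invented)."""
        return 1_000_000 if height < 400_000 else 20_000_000

    SOFT_LIMIT = 750_000   # local miner policy; each miner picks its own

    def block_valid(size, height):
        # Checked by every full node; violations are rejected outright.
        return size <= hard_limit(height)

    def miner_will_build(size, height):
        # Checked only by the miner assembling the block.
        return size <= min(SOFT_LIMIT, hard_limit(height))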
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Bryan Bishop
Date: 2015-05-08 16:55 UTC
To: Matt Whitlock; Cc: Bitcoin Dev

On Fri, May 8, 2015 at 2:20 AM, Matt Whitlock <bip@mattwhitlock.name> wrote:
> - Perhaps the hard block size limit should be a function of the actual
> block sizes over some trailing sampling period. For example, take the
> median block size among the most recent 2016 blocks and multiply it by 1.5.
> This allows Bitcoin to scale up gradually and organically, rather than
> having human beings guess at an appropriate limit.

Block contents can be ground out much faster than proof-of-work hashing. There is a significant runaway effect here, and it also works gradually, as a miner probabilistically mines large blocks that get averaged into that 2016-block median computation. At best, this proposal would be a slower way of pushing out the miners and network participants that can't handle 100 GB blocks immediately. As block sizes increase, low-end hardware participants have to fall off the network because they no longer meet the minimum performance requirements.

Adjustment might also become severely mismatched with general economic trends in data storage device development, availability, or even current market saturation of said storage devices.

With the assistance of transaction stuffing or grinding, that 2016-block median metric can be gamed to increase faster than other participants can keep up with or, perhaps worse, in a way that was unintended by developers yet known to be a failure mode. These are just some issues to keep in mind and consider.

- Bryan
http://heybryan.org/
1 512 203 0507
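Bishop's runaway concern is easy to quantify against the median-times-1.5 rule from the original post. A sketch under the worst-case assumption that every miner stuffs every block to the limit:

    def simulate_stuffing(periods, start_median, growth=1.5, fill=1.0):
        """Yield the limit per 2016-block period when miners stuff blocks:
        if every block is mined full, the trailing median equals the limit,
        so the limit compounds by the growth factor each period."""
        median = start_median
        for _ in range(periods):
            limit = growth * median
            median = fill * limit
            yield limit

    print([round(x / 1e6, 2) for x in simulate_stuffing(5, 1_000_000)])
    # [1.5, 2.25, 3.38, 5.06, 7.59] MB: exponential growth under full stuffing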
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Mark Friedenbach
Date: 2015-05-08 20:33 UTC
To: Matt Whitlock, Bitcoin Development

It is my professional opinion that raising the block size by merely adjusting a constant, without any sort of feedback mechanism, would be a dangerous and foolhardy thing to do. We are custodians of a multi-billion dollar asset, and it falls upon us to weigh the consequences of our own actions against the combined value of the entire bitcoin ecosystem. Ideally we would take no action for which we are not absolutely certain of the ramifications, with the information that can be made available to us. But of course that is not always possible: there are unknown-unknowns, time pressures, and known-unknowns where information has too high a marginal cost. So where certainty is unobtainable, we must instead hedge against unwanted outcomes.

The proposal to raise the block size now by redefining a constant carries with it risk associated with infrastructure scaling, centralization pressures, and delaying the necessary development of a constraint-based fee economy. It also simply kicks the can down the road in settling these issues, because a larger but realistic hard limit must still exist, meaning a future hard fork may still be required.

But whatever new hard limit is chosen, there is also a real possibility that it may be too high. The standard response is that imposing a lower block size limit is a soft-fork change, which miners could make with a minimal amount of coordination. This is however undermined by the unfortunate reality that so many mining operations are absentee-run businesses, or run by individuals without a strong background in bitcoin protocol policy, or with interests which are not well aligned with other users or holders of bitcoin. We cannot rely on miners being vigilant about issues as they develop, or able to respond in the fashion that someone with full domain knowledge and an objective perspective would.

The alternative then is to have some sort of dynamic block size limit controller, ideally one which applies a cost to raising the block size in some way that preserves the decentralization and/or long-term stability features that we care about. I will now describe one such proposal:

* For each block, the miner is allowed to select a different difficulty (nBits) within a certain range, e.g. +/- 25% of the expected difficulty, and this miner-selected difficulty is used for the proof-of-work check. In addition to adjusting the hashcash target, selecting a different difficulty also raises or lowers the maximum block size for that block by a function of the difference in difficulty. So increasing the difficulty of the block by an additional 25% raises the block limit for that block from 100% of the current limit to 125%, and lowering the difficulty by 10% would lower the maximum block size for that block from 100% to 90% of the current limit.
For simplicity I will assume a linear identity transform as the function, but a quadratic or other function with compounding marginal cost may be preferred.

* The default maximum block size limit is then adjusted at regular intervals. For simplicity I will assume an adjustment at the end of each 2016-block interval, at the same time that difficulty is adjusted, but there is no reason these have to be aligned. The adjustment algorithm itself is either the selection of the median, or perhaps some sort of weighted average that respects the "middle majority." There would of course be limits on how quickly the block size limit can be adjusted in any one period, just as there are min/max limits on the difficulty adjustment.

* To prevent perverse mining incentives, the original difficulty without adjustment is used in the aggregate work calculations for selecting the most-work chain, and the allowable miner-selected adjustment to difficulty would have to be tightly constrained.

These rules create an incentive environment where raising the block size has a real cost associated with it: a more difficult hashcash target for the same subsidy reward. For rational miners that cost must be counter-balanced by additional fees provided in the larger block. This allows block size to increase, but only within the confines of a self-supporting fee economy.

When the subsidy goes away or is reduced to an insignificant fraction of the block reward, this incentive structure goes away. Hopefully at that time we would have sufficient information to soft-fork set a hard block size maximum. But in the mean time, the block size limit controller constrains the maximum allowed block size to be within a range supported by fees on the network, providing an emergency relief valve that we can be assured will only be used at significant cost.

Mark Friedenbach

* There have over time been various discussions on the bitcointalk forums about dynamically adjusting block size limits. The true origin of the idea is unclear at this time (citations would be appreciated!), but a form of it was implemented in Bytecoin / Monero using subsidy burning to increase the block size. That approach has various limitations. These were corrected in Greg Maxwell's suggestion to adjust the difficulty/nBits field directly, which also has the added benefit of providing incentive for bidirectional movement during the subsidy period. The description in this email and any errors are my own.

On Fri, May 8, 2015 at 12:20 AM, Matt Whitlock <bip@mattwhitlock.name> wrote:
> Between all the flames on this list, several ideas were raised that did
> not get much attention. [...]
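A minimal sketch of the flexcap rule as described, using the linear identity transform Friedenbach assumes for simplicity; the base limit and the +/- 25% band follow his example, while everything else is illustrative:

    BASE_LIMIT = 1_000_000   # default maximum block size this period, bytes
    MAX_ADJUST = 0.25        # miner may move its own difficulty by +/- 25%

    def flexcap_limit(expected_difficulty, chosen_difficulty):
        """Raising your own proof-of-work difficulty by X% buys X% more
        block space; lowering it shrinks the block accordingly.
        (Chain selection would still use the unadjusted difficulty.)"""
        delta = chosen_difficulty / expected_difficulty - 1.0
        if not -MAX_ADJUST <= delta <= MAX_ADJUST:
            raise ValueError("difficulty outside the allowed band")
        return int(BASE_LIMIT * (1.0 + delta))

    assert flexcap_limit(100.0, 125.0) == 1_250_000   # +25% work -> 125% size
    assert flexcap_limit(100.0,  90.0) ==   900_000   # -10% work ->  90% size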
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Aaron Voisine
Date: 2015-05-08 22:43 UTC
To: Mark Friedenbach; Cc: Bitcoin Development

This is a clever way to tie block size to fees.

I would just like to point out, though, that it still fundamentally uses hard block size limits to enforce scarcity. Transactions with below-market fees will hang in limbo for days and fail, instead of failing immediately by not propagating, or seeing degraded, long confirmation times followed by eventual success.

Aaron Voisine
co-founder and CEO
breadwallet.com

On Fri, May 8, 2015 at 1:33 PM, Mark Friedenbach <mark@friedenbach.org> wrote:
> It is my professional opinion that raising the block size by merely
> adjusting a constant, without any sort of feedback mechanism, would be a
> dangerous and foolhardy thing to do. [...]
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Mark Friedenbach
Date: 2015-05-08 22:45 UTC
To: Aaron Voisine; Cc: Bitcoin Development

On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine <voisine@gmail.com> wrote:
> This is a clever way to tie block size to fees.
>
> I would just like to point out, though, that it still fundamentally uses
> hard block size limits to enforce scarcity. Transactions with below-market
> fees will hang in limbo for days and fail, instead of failing immediately
> by not propagating, or seeing degraded, long confirmation times followed
> by eventual success.

There are already solutions to this which are waiting to be deployed as default policy in bitcoind, and which need to be implemented in other clients: replace-by-fee and child-pays-for-parent.
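For the child-pays-for-parent half of that answer, the economics reduce to a package fee rate. A sketch of that calculation, not of bitcoind's actual implementation; the units (satoshis, bytes) and the 40 sat/byte target are assumptions:

    def package_feerate(parent_fee, parent_size, child_fee, child_size):
        """A miner wanting the child's fee must also include the parent,
        so the pair is evaluated by its combined fee rate."""
        return (parent_fee + child_fee) / (parent_size + child_size)

    # A stuck 250-byte parent paying 250 sat can be rescued by a 200-byte
    # child whose fee lifts the package to a 40 sat/byte target:
    parent_fee, parent_size, child_size = 250, 250, 200
    child_fee = 40 * (parent_size + child_size) - parent_fee   # 17,750 sat
    assert package_feerate(parent_fee, parent_size, child_fee, child_size) == 40.0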
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Aaron Voisine
Date: 2015-05-08 23:15 UTC
To: Mark Friedenbach; Cc: Bitcoin Development

That's fair, and we've implemented child-pays-for-parent for spending unconfirmed inputs in breadwallet. But what should the behavior be when those options aren't understood, implemented, or used?

My argument is that the less risky, more conservative default fallback behavior should be either non-propagation or delayed confirmation, which is generally what we have now, until we hit the block size limit. We still have lots of safe, non-controversial, easy-to-experiment-with options for adding fee pressure, causing users to economize on block space without resorting to dropping transactions after a prolonged delay.

Aaron Voisine
co-founder and CEO
breadwallet.com

On Fri, May 8, 2015 at 3:45 PM, Mark Friedenbach <mark@friedenbach.org> wrote:
> There are already solutions to this which are waiting to be deployed as
> default policy in bitcoind, and which need to be implemented in other
> clients: replace-by-fee and child-pays-for-parent.
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Mark Friedenbach
Date: 2015-05-08 23:58 UTC
To: Aaron Voisine; Cc: Bitcoin Development

In a fee-dominated future, replace-by-fee is not an opt-in feature. When you create a transaction, the wallet presents a range of fees that it expects you might pay. It then signs copies of the transaction with spaced fees from this interval and broadcasts the lowest-fee version first. In the user interface, the transaction is shown with its transacted amount and the approved fee range. All of the inputs used are placed on hold until the transaction gets a confirmation. As time goes by and it looks like the transaction is not getting accepted, successively higher-fee versions are released. You can opt out and send a no-fee or base-fee-only transaction, but that should not be the default. (A sketch of this escalation loop follows below.)

On the receiving end, local policy controls how much fee should be spent trying to obtain confirmations before alerting the user, if there are fees available in the hot wallet to do this. The receiving wallet then adds its own fees via a spend if it thinks insufficient fees were provided to get a confirmation. Again, this should all be automated, so long as there is a hot wallet on the receiving end.

Is this more complicated than now, when blocks are not full and clients generally don't have to worry about their transactions eventually confirming? Yes, it is significantly more complicated. But such complication is unavoidable. It is a simple fact that the block size cannot increase enough to cover every single use by every single person in the world, so there is no getting around the reality that we will have to transition into an economy where at least one side has to pay up for a transaction to get confirmed at all. We are going to have to deal with this issue whether it is now at 1 MB or later at 20 MB. And frankly, it will be much easier to do now.

On Fri, May 8, 2015 at 4:15 PM, Aaron Voisine <voisine@gmail.com> wrote:
> That's fair, and we've implemented child-pays-for-parent for spending
> unconfirmed inputs in breadwallet. But what should the behavior be when
> those options aren't understood, implemented, or used? [...]
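The escalation behavior described above, as a sketch. Every callable is passed in as a parameter, since the signing, broadcast, and confirmation plumbing are all assumptions here, and the step count and wait time are arbitrary:

    import time

    def rbf_escalate(sign_with_fee, broadcast, is_confirmed, fee_range,
                     steps=5, wait_seconds=600):
        """Pre-sign replacements at evenly spaced fees across the approved
        range, release the cheapest first, and bump while unconfirmed."""
        low, high = fee_range
        fees = [low + (high - low) * i / (steps - 1) for i in range(steps)]
        versions = [sign_with_fee(fee) for fee in fees]  # all signed up front
        for tx in versions:
            broadcast(tx)
            time.sleep(wait_seconds)   # give this version a chance to confirm
            if is_confirmed(tx):
                return tx
        return versions[-1]            # highest-fee version is the last resort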
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
From: Gregory Maxwell
Date: 2015-05-09 3:36 UTC
To: Mark Friedenbach; Cc: Bitcoin Development

On Fri, May 8, 2015 at 8:33 PM, Mark Friedenbach <mark@friedenbach.org> wrote:
> These rules create an incentive environment where raising the block size
> has a real cost associated with it: a more difficult hashcash target for
> the same subsidy reward. [...]

Though I'm a fan of this class of techniques(*) and think using something in this space is strictly superior to not doing so, and I think it makes larger sizes safer long term, I do not think it adequately obviates the need for a hard upper limit, for two reasons:

(1) For software engineering and operational reasons it is very difficult to develop, test for, or provision for something without knowing its limits. There would in fact be hard limits on real deployments, but they'd be opaque to their operators, and you could easily imagine the network forking by surprise as hosts crossed those limits.

(2) At best this approach mitigates the collective action problem between miners around fees; it does not correct the incentive misalignment between miners and everyone else (miners can afford huge node costs because they have income, but the full-node-using users who need to exist in plenty to keep miners honest do not), nor the centralization pressures (N miners can reduce their storage/bandwidth/CPU costs N-fold by centralizing). A dynamic limit can be combined with a hard upper limit to be at least no worse than a hard upper limit alone with respect to those two points.

Another related point which has been tendered before but seems to have been ignored is that changing how the size limit is computed can help better align incentives and thus reduce risk. E.g. a major cost to the network is the UTXO impact of transactions, but since the limit is blind to UTXO impact, a miner that substantially factored UTXO impact into its fee calculations would gain less income; and without a fee impact, users have little reason to optimize their UTXO behavior. This can be corrected by augmenting the "size" used for limit calculations. An example would be

  tx_size = MAX(real_size >> 1, real_size + 4*utxo_created_size - 3*utxo_consumed_size)

The reason for the MAX is so that a block which cleaned up a bunch of big UTXOs could not break software by being super large; the utxo_consumed term basically lets you credit your fees by cleaning the UTXO set, but since you get less credit than you cost, the pressure should be downward, though not hugely so.
The 1/2, 4, and 3 I regard as parameters which I don't have very strong opinions on, and which could be set based on observations in the network today (e.g. adjusted so that a normal cleaning transaction can hit the minimum size). One way to think about this is that it makes every output you create "prepay" the transaction fees needed to spend it, by shifting "space" from the current block to a future block. The fact that the prepayment is not perfectly efficient reduces the incentive for miners to create lots of extra outputs when they have room left in their block, in order to store space to use later (an issue that is potentially less of a concern with a dynamic size limit). With the right parameters there would never be such a thing as a dust output (one which costs more to spend than it's worth).

(Likewise, the sigops limit should be counted correctly and turned into size augmentation (for the sigops that actually get run by the transactions), which would greatly simplify selection rules: maximize income within a single scalar limit.)

(*) I believe my currently favored formulation of the general dynamic control idea is this: each miner expresses in their coinbase a preferred size between some minimum (e.g. 500k) and the miner's effective maximum; the actual block size can be up to the effective maximum even if the preference is lower (you're not forced to make a smaller block because you stated you wished the limit were lower). There is a computed maximum, which is the 33rd percentile of the last 2016 coinbase preferences minus computed_max/52 bytes (rounding up to 1), or 500k if that is larger. The effective maximum is X bytes more, where X is on the range [0, computed_maximum], i.e. the miner can at most double the size of their block. If X > 0, then the miner must also reach a target of F(X/computed_maximum) times the bits-difficulty, with F(x) = x^2 + 1, so the maximum penalty is 2, with a quadratic shape; for a given mempool there will be some value that maximizes expected income. (Obviously all implemented with precise fixed-point arithmetic.)

The percentile is intended to give the 33% of miners who prefer the smallest sizes a veto on increases (unless a majority chooses to soft-fork them out). The minus computed_max/52 provides an incentive to slowly shrink the maximum if it's too large; x/52 would halve the size in one year if miners were doing the lowest-difficulty mining. The parameters 500k/33rd, minus computed_max/52 bytes, and F(x) I have less strong opinions about, and I would love to hear reasoned arguments for particular parameters.
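Maxwell's two formulas, transcribed directly; only the parameter names are expanded for readability, and whether the UTXO terms are counts or serialized sizes is left open here, as it is in the thread:

    def adjusted_tx_size(real_size, utxo_created_size, utxo_consumed_size):
        """tx_size = MAX(real_size >> 1,
                         real_size + 4*utxo_created_size - 3*utxo_consumed_size).
        The MAX floor keeps a UTXO-cleaning block from becoming super large;
        the 4-vs-3 asymmetry means outputs imperfectly prepay the space
        needed to spend them later."""
        return max(real_size >> 1,
                   real_size + 4 * utxo_created_size - 3 * utxo_consumed_size)

    def stretch_penalty(x, computed_maximum):
        """F(X/computed_maximum) with F(x) = x^2 + 1: the multiplier applied
        to the bits-difficulty when a block is x bytes over the computed
        maximum, for 0 <= x <= computed_maximum."""
        r = x / computed_maximum
        return r * r + 1.0   # no stretch costs nothing extra; doubling costs 2x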
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-09 3:36 ` Gregory Maxwell @ 2015-05-09 11:58 ` Gavin Andresen 2015-05-09 13:49 ` Tier Nolan 2015-05-10 17:36 ` Owen Gunden ` (2 subsequent siblings) 3 siblings, 1 reply; 69+ messages in thread From: Gavin Andresen @ 2015-05-09 11:58 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Development [-- Attachment #1: Type: text/plain, Size: 1515 bytes --] RE: fixing sigop counting, and building in UTXO cost: great idea! One of the problems with this debate is it is easy for great ideas to get lost in all the noise. RE: a hard upper limit, with a dynamic limit under it: I like that idea. Can we drill down on the hard upper limit? There are lots of people who want a very high upper limit, right now (all the big Bitcoin companies, and anybody who thinks as-rapid-as-possible growth now is the best path to long-term success). This is the "it is OK if you have to run full nodes in a data center" camp. There are also lots of people who want an upper limit low enough that they can continue to run Bitcoin on the hardware and Internet connection that they have (or are concerned about centralization, so want to make sure OTHER people can continue to run....). Is there an upper limit "we" can choose to make both sets of people mostly happy? I've proposed "must be inexpensive enough that a 'hobbyist' can afford to run a full node" ... Is the limit chosen once, now, via hard-fork, or should we expect multiple hard-forks to change it "when necessary"? The economics change every time the block reward halves, which makes me think that might be a good time to adjust the hard upper limit. If we have a hard upper limit and a lower dynamic limit, perhaps the hard upper limit could be adjusted (up or down) at each halving, based on where the dynamic limit sits at the time.... RE: the lower dynamic limit algorithm: I REALLY like that idea. -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 1901 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-09 11:58 ` Gavin Andresen @ 2015-05-09 13:49 ` Tier Nolan 0 siblings, 0 replies; 69+ messages in thread From: Tier Nolan @ 2015-05-09 13:49 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Development [-- Attachment #1: Type: text/plain, Size: 1697 bytes --] On Sat, May 9, 2015 at 12:58 PM, Gavin Andresen <gavinandresen@gmail.com> wrote: > RE: fixing sigop counting, and building in UTXO cost: great idea! One of > the problems with this debate is it is easy for great ideas to get lost in all > the noise. > If the UTXO set cost is built in, UTXO database entries are suddenly worth something, in addition to the bitcoin held in that entry. A user's client might display how many they own. When sending money to a merchant, the user might demand the merchant indicate a slot to pay to. The user could send an ANYONE_CAN_PAY partial transaction. The transaction would guarantee that the user has at least as many UTXOs as before. Discussing the possibility of doing this creates an incentive to bloat the UTXO set right now, since UTXOs would be valuable in the future. The objective would be to make them valuable enough to encourage conservation, but not so valuable that the UTXO contains more value than the bitcoins in the output. Gmaxwell's suggested "tx_size = MAX( real_size >> 1, real_size + 4*utxo_created_size - 3*utxo_consumed_size)" for a 250-byte transaction with 1 input and 2 outputs has very little effect. real_size + 4 * (2) - 3 * 1 = 255 That gives a 2% size penalty for adding an extra UTXO. I doubt that is enough to change behavior. The UTXO set growth could be limited directly. A block would be invalid if it increases the number of UTXO entries above the charted path. > RE: a hard upper limit, with a dynamic limit under it: If the block size is greater than 32MB, then an update to how blocks are broadcast is needed, so that could be a reasonable hard upper limit (or maybe 31MB, or just the 20MB already suggested). [-- Attachment #2: Type: text/html, Size: 2432 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-09 3:36 ` Gregory Maxwell 2015-05-09 11:58 ` Gavin Andresen @ 2015-05-10 17:36 ` Owen Gunden 2015-05-10 18:10 ` Mark Friedenbach 2015-05-10 21:21 ` Gavin Andresen 2015-05-13 10:43 ` Tier Nolan 3 siblings, 1 reply; 69+ messages in thread From: Owen Gunden @ 2015-05-10 17:36 UTC (permalink / raw) To: bitcoin-development On 05/08/2015 11:36 PM, Gregory Maxwell wrote: > Another related point which has been tendered before but seems to have > been ignored is that changing how the size limit is computed can help > better align incentives and thus reduce risk. E.g. a major cost to the > network is the UTXO impact of transactions, but since the limit is blind > to UTXO impact, a miner that substantially factored UTXO impact into its > fee calculations would gain less income; and without fee impact, users have > little reason to optimize their UTXO behavior. Along the lines of aligning incentives with a diversity of costs to a variety of network participants, I am curious about reactions to Justus' general approach: http://bitcoinism.liberty.me/2015/02/09/economic-fallacies-and-the-block-size-limit-part-2-price-discovery/ I realize it relies on pie-in-the-sky ideas like micropayment channels, but I wonder if it's a worthy long-term ideal direction for this stuff. ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-10 17:36 ` Owen Gunden @ 2015-05-10 18:10 ` Mark Friedenbach 0 siblings, 0 replies; 69+ messages in thread From: Mark Friedenbach @ 2015-05-10 18:10 UTC (permalink / raw) To: Owen Gunden; +Cc: Bitcoin Development [-- Attachment #1: Type: text/plain, Size: 1838 bytes --] Micropayment channels are not pie-in-the-sky proposals. They work today on Bitcoin as it is deployed, without any changes. People just need to start using them. On May 10, 2015 11:03, "Owen Gunden" <ogunden@phauna.org> wrote: > On 05/08/2015 11:36 PM, Gregory Maxwell wrote: > > Another related point which has been tendered before but seems to have > > been ignored is that changing how the size limit is computed can help > > better align incentives and thus reduce risk. E.g. a major cost to the > > network is the UTXO impact of transactions, but since the limit is blind > > to UTXO impact, a miner that substantially factored UTXO impact into its > > fee calculations would gain less income; and without fee impact, users have > > little reason to optimize their UTXO behavior. > > Along the lines of aligning incentives with a diversity of costs to a > variety of network participants, I am curious about reactions to Justus' > general approach: > > > http://bitcoinism.liberty.me/2015/02/09/economic-fallacies-and-the-block-size-limit-part-2-price-discovery/ > > I realize it relies on pie-in-the-sky ideas like micropayment channels, > but I wonder if it's a worthy long-term ideal direction for this stuff. > [-- Attachment #2: Type: text/html, Size: 2575 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-09 3:36 ` Gregory Maxwell @ 2015-05-10 21:21 ` Gavin Andresen 2015-05-10 21:33 ` Gregory Maxwell 2015-05-10 21:56 ` Rob Golding 2015-05-13 10:43 ` Tier Nolan 3 siblings, 2 replies; 69+ messages in thread From: Gavin Andresen @ 2015-05-10 21:21 UTC (permalink / raw) To: Gregory Maxwell; +Cc: Bitcoin Development [-- Attachment #1: Type: text/plain, Size: 4100 bytes --] Let me make sure I understand this proposal: On Fri, May 8, 2015 at 11:36 PM, Gregory Maxwell <gmaxwell@gmail.com> wrote: > (*) I believe my currently favored formulation of the general dynamic control > idea is that each miner expresses in their coinbase a preferred size > between some minimum (e.g. 500k) and the miner's effective-maximum; > the actual block size can be up to the effective maximum even if the > preference is lower (you're not forced to make a lower block because you > stated you wished the limit were lower). There is a computed maximum > which is the 33rd percentile of the last 2016 coinbase preferences > minus computed_max/52 (rounding up to 1) bytes-- or 500k if that's > larger. The effective maximum is X bytes more, where X is on the range > [0, computed_maximum]; e.g. the miner can double the size of their > block at most. If X > 0, then the miners must also reach a target > F(X/computed_maximum) times the bits-difficulty, with F(x) = x^2+1 --- > so the maximum penalty is 2, with a quadratic shape; for a given mempool > there will be some value that maximizes expected income. (Obviously all > implemented with precise fixed-point arithmetic.) The percentile is > intended to give the preferences of the 33% least-preferring miners a > veto on increases (unless a majority chooses to soft-fork them out). The > minus computed_max/52 provides an incentive to slowly shrink the maximum > if it's too large-- shrinking by computed_max/52 each period would halve > the size in one year if miners were doing the lowest-difficulty mining. > The parameters 500k/33rd, -computed_max/52 bytes, and F(x) I have less > strong opinions about; and would love to hear reasoned arguments for > particular parameters. > I'm going to try to figure out how much transaction fee a transaction would have to pay to bribe a miner to include it. Greg, please let me know if I've misinterpreted the proposed algorithm. And everybody, please let me know if I'm making a bone-headed mistake in how I'm computing anything: Let's say miners are expressing a desire for 600,000-byte blocks in their coinbases. computed_max = 600,000 - 600,000/52 = 588,462 bytes. --> this is about 23 average-size (500-byte) transactions less than 600,000. effective_max = 1,176,923 Let's say I want to maintain the status quo at 600,000 bytes; how much penalty do I have? ((600,000-588,462)/588,462)^2 + 1 = 1.00038 How much will that cost me? The network is hashing at 310 PetaHash/sec right now. It takes 600 seconds to find a block, so 186,000 PH per block. 186,000 * 0.00038 = 70 extra PH If it takes 186,000 PH to find a block, and a block is worth 25.13 BTC (reward plus fees), that 70 PH costs: (25.13 BTC/block / 186,000 PH/block) * 70 PH = 0.00945 BTC or at $240 / BTC: $2.27 ... so the average transaction fee will have to be about ten cents ($2.27 spread across 23 average-sized transactions) for miners to decide to stay at 600K blocks.
If they fill up 588,462 bytes and don't have some ten-cent-fee transactions left, they should express a desire to create a 588,462-byte block and mine with no penalty. Is that too much? Not enough? Average transaction fees today are about 3 cents per transaction. I created a spreadsheet playing with the parameters: https://docs.google.com/spreadsheets/d/1zYZfb44Uns8ai0KnoQ-LixDwdhqO5iTI3ZRcihQXlgk/edit?usp=sharing "We" could tweak the constants or function to get a transaction fee we think is reasonable... but we really shouldn't be deciding whether transaction fees are too high, too low, or just right. After thinking about this for a while, I think any algorithm that ties difficulty to block size is just a complicated way of dictating minimum fees. As for some other dynamic algorithm: OK with me. How do we get consensus on what the best algorithm is? I'm OK with any "don't grow too quickly, give some reasonable-percentage minority of miners the ability to block further increases." Also relevant here: "The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design." - Friedrich August von Hayek -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 5914 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
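The arithmetic above can be checked mechanically; a quick Python reproduction follows (constants exactly as stated in the post; the small differences in the final BTC and dollar figures come from the post rounding the intermediate down to 70 PH):

    pref = 600_000                        # miners' expressed preference, bytes
    comp_max = pref - pref // 52          # 588,462 bytes
    penalty = ((pref - comp_max) / comp_max) ** 2   # ~0.00038 extra work
    ph_per_block = 310 * 600              # 186,000 PH at 310 PH/s, 600 s/block
    extra_ph = ph_per_block * penalty     # ~70 extra PH
    btc_cost = 25.13 / ph_per_block * extra_ph      # ~0.0095 BTC
    usd_per_tx = btc_cost * 240 / 23      # ~$0.10 spread across 23 transactions
    print(comp_max, extra_ph, btc_cost, usd_per_tx)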
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-10 21:21 ` Gavin Andresen @ 2015-05-10 21:33 ` Gregory Maxwell 2015-05-10 21:56 ` Rob Golding 1 sibling, 0 replies; 69+ messages in thread From: Gregory Maxwell @ 2015-05-10 21:33 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Development On Sun, May 10, 2015 at 9:21 PM, Gavin Andresen <gavinandresen@gmail.com> wrote: > a while, I think any algorithm that ties difficulty to block size is just a > complicated way of dictating minimum fees. That's not the long-term effect or the motivation-- what you're seeing is that the subsidy gets in the way here. Consider how the procedure behaves with subsidy being negligible compared to fees. What it accomplishes in that case is that it incentivizes increasing the size until the marginal "value" to miners of the transaction-data being left out is not enormously smaller than the "value" of the data in the block on average. Value in quotes because it's blind to the "fees" the transaction claims. With a large subsidy, the marginal value of the first byte in the block is HUGE; and so that pushes up the average-- and creates the "base fee effect" that you're looking at. It's not that anyone is picking a fee there, it's that someone picked the subsidy there. :) As the subsidy goes down, the only thing fees are relative to is fees. An earlier version of the proposal took subsidy out of the picture completely by increasing it linearly with the increased difficulty; but that creates additional complexity both to implement and to explain to people (e.g. that the setup doesn't change the supply of coins); ... I suppose without it that starting disadvantage parameter (the offset that reduces the size if you're indifferent) needs to be much smaller, unfortunately. ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-10 21:21 ` Gavin Andresen 2015-05-10 21:33 ` Gregory Maxwell @ 2015-05-10 21:56 ` Rob Golding 1 sibling, 0 replies; 69+ messages in thread From: Rob Golding @ 2015-05-10 21:56 UTC (permalink / raw) To: Bitcoin Development > How much will that cost me? > The network is hashing at 310 PetaHash/sec right now. > It takes 600 seconds to find a block, so 186,000 PH per block. > 186,000 * 0.00038 = 70 extra PH > > If it takes 186,000 PH to find a block, and a block is worth 25.13 BTC > (reward plus fees), that 70 PH costs: > (25.13 BTC/block / 186,000 PH/block) * 70 PH = 0.00945 BTC > or at $240 / BTC: $2.27 > > ... so the average transaction fee will have to be about ten cents ($2.27 > spread across 23 average-sized transactions) for miners to decide to > stay at 600K blocks Surely that's an *extra* $2.27, as you've already included 0.13 BTC ($31.20) in fees in the calculation? Rob ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-09 3:36 ` Gregory Maxwell ` (2 preceding siblings ...) 2015-05-10 21:21 ` Gavin Andresen @ 2015-05-13 10:43 ` Tier Nolan 2015-05-16 0:22 ` Rusty Russell 3 siblings, 1 reply; 69+ messages in thread From: Tier Nolan @ 2015-05-13 10:43 UTC (permalink / raw) Cc: Bitcoin Development [-- Attachment #1: Type: text/plain, Size: 820 bytes --] On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell <gmaxwell@gmail.com> wrote: > An example would > be tx_size = MAX( real_size >> 1, real_size + 4*utxo_created_size - > 3*utxo_consumed_size). This could be implemented as a soft fork too. * 1MB hard size limit * 900kB soft limit S = block size U = UTXO_adjusted_size = S + 4 * outputs - 3 * inputs A block is valid if S < 1MB and U < 1MB A 250-byte transaction with 2 inputs and 2 outputs would have an adjusted size of 252 bytes. The memory pool could be sorted by fee per adjusted_size. Coin selection could be adjusted so it tries to have at least 2 inputs when creating transactions, unless the input is worth more than a threshold (say 0.001 BTC). This is a pretty weak incentive, especially if the block size is increased. Maybe it will cause a "nudge". [-- Attachment #2: Type: text/html, Size: 1336 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
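A minimal sketch of that soft-fork rule as stated (names are illustrative, and per-block totals for outputs created and inputs consumed are assumed to be available to the validator):

    HARD_LIMIT = 1_000_000  # 1MB hard size limit

    def utxo_adjusted_size(size, outputs_created, inputs_consumed):
        return size + 4 * outputs_created - 3 * inputs_consumed

    def block_valid(size, outputs_created, inputs_consumed):
        # Both the raw size S and the adjusted size U must stay under 1MB.
        u = utxo_adjusted_size(size, outputs_created, inputs_consumed)
        return size < HARD_LIMIT and u < HARD_LIMIT

    # The example above: 250 bytes, 2 inputs, 2 outputs -> 252 bytes adjusted.
    assert utxo_adjusted_size(250, 2, 2) == 252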
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-13 10:43 ` Tier Nolan @ 2015-05-16 0:22 ` Rusty Russell 2015-05-16 11:09 ` Tier Nolan 0 siblings, 1 reply; 69+ messages in thread From: Rusty Russell @ 2015-05-16 0:22 UTC (permalink / raw) To: Tier Nolan; +Cc: Bitcoin Development Tier Nolan <tier.nolan@gmail.com> writes: > On Sat, May 9, 2015 at 4:36 AM, Gregory Maxwell <gmaxwell@gmail.com> wrote: > >> An example would >> be tx_size = MAX( real_size >> 1, real_size + 4*utxo_created_size - >> 3*utxo_consumed_size). > > > This could be implemented as a soft fork too. > > * 1MB hard size limit > * 900kB soft limit I like this too. Some tweaks: 1) Nomenclature: call tx_size "tx_cost" and real_size "tx_bytes"? 2) If we have a reasonable hard *byte* limit, I don't think that we need the MAX(). In fact, it's probably OK to go negative. 3) ... or maybe not, if any consumed UTXO was generated before the soft fork (reducing Tier's perverse incentive). 4) How do we measure UTXO size? There are some constant-ish things in there (e.g. txid as key, height, outnum, amount). Maybe just add 32 to scriptlen? 5) Add a CHECKSIG cost. Naively, since we allow 20,000 CHECKSIGs and 1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted correctly, unlike now). This last one implies that the initial cost limit would be 2M, but in practice probably somewhere in the middle. tx_cost = 50*num_CHECKSIG + tx_bytes + 4*utxo_created_size - 3*utxo_consumed_size > A 250-byte transaction with 2 inputs and 2 outputs would have an adjusted > size of 252 bytes. Now cost == 352. Cheers, Rusty. ^ permalink raw reply [flat|nested] 69+ messages in thread
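As a quick sanity check of the formula and the worked example, a sketch (this follows Tier's per-entry reading of utxo_created_size/utxo_consumed_size; Rusty's point 4 reads "size" as closer to scriptlen plus a constant, which would give different numbers):

    def tx_cost(tx_bytes, num_checksig, utxo_created_size, utxo_consumed_size):
        return (50 * num_checksig
                + tx_bytes
                + 4 * utxo_created_size
                - 3 * utxo_consumed_size)

    # 250-byte tx, 2 inputs (one CHECKSIG each), 2 outputs:
    # 100 + 250 + 8 - 6 == 352, matching "Now cost == 352".
    assert tx_cost(250, 2, 2, 2) == 352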
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-16 0:22 ` Rusty Russell @ 2015-05-16 11:09 ` Tier Nolan 2015-05-18 1:42 ` Rusty Russell 0 siblings, 1 reply; 69+ messages in thread From: Tier Nolan @ 2015-05-16 11:09 UTC (permalink / raw) Cc: Bitcoin Development [-- Attachment #1: Type: text/plain, Size: 2115 bytes --] On Sat, May 16, 2015 at 1:22 AM, Rusty Russell <rusty@rustcorp.com.au> wrote: > Some tweaks: > > 1) Nomenclature: call tx_size "tx_cost" and real_size "tx_bytes"? > Fair enough. > > 2) If we have a reasonable hard *byte* limit, I don't think that we need > the MAX(). In fact, it's probably OK to go negative. > I agree, we want people to compress the UTXO space, and a transaction with 100 inputs and one output is great. It may have a privacy problem, though. > > 3) ... or maybe not, if any consumed UTXO was generated before the soft > fork (reducing Tier's perverse incentive). > The incentive problem can be fixed by excluding UTXOs from blocks before a certain count. UTXOs in blocks before 375,000 don't count. > > 4) How do we measure UTXO size? There are some constant-ish things in > there (e.g. txid as key, height, outnum, amount). Maybe just add 32 > to scriptlen? > They can be stored as a fixed digest. That can be any size, depending on security requirements. Gmaxwell's cost proposal is 3-4 bytes per UTXO change. It isn't 4*UTXO.size - 3*UTXO.size. It is only a small nudge. With only 10% of the block space to play with, it can't be massive. This requires that transactions include scriptPubKey information when broadcasting them. > > 5) Add a CHECKSIG cost. Naively, since we allow 20,000 CHECKSIGs and > 1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted > correctly, unlike now). > > This last one implies that the initial cost limit would be 2M, but in > practice probably somewhere in the middle. > > tx_cost = 50*num_CHECKSIG > + tx_bytes > + 4*utxo_created_size > - 3*utxo_consumed_size > > > A 250-byte transaction with 2 inputs and 2 outputs would have an adjusted > > size of 252 bytes. > > Now cost == 352. > That is too large a cost for a 10% block change. It could be included in the block size hard fork though. I think having one combined "cost" for transactions is good. It means far fewer spread-out transaction checks. The code for the cost formula would be in one place. [-- Attachment #2: Type: text/html, Size: 3347 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-16 11:09 ` Tier Nolan @ 2015-05-18 1:42 ` Rusty Russell 2015-05-19 8:59 ` Tier Nolan 0 siblings, 1 reply; 69+ messages in thread From: Rusty Russell @ 2015-05-18 1:42 UTC (permalink / raw) To: Tier Nolan; +Cc: Bitcoin Development Tier Nolan <tier.nolan@gmail.com> writes: > On Sat, May 16, 2015 at 1:22 AM, Rusty Russell <rusty@rustcorp.com.au> > wrote: >> 3) ... or maybe not, if any consumed UTXO was generated before the soft >> fork (reducing Tier's perverse incentive). > > The incentive problem can be fixed by excluding UTXOs from blocks before a > certain count. > > UTXOs in blocks before 375,000 don't count. OK. Be nice if these were cleaned up, but I guess it's a sunk cost. >> 4) How do we measure UTXO size? There are some constant-ish things in >> there (e.g. txid as key, height, outnum, amount). Maybe just add 32 >> to scriptlen? >> > > They can be stored as a fixed digest. That can be any size, depending on > security requirements. > > Gmaxwell's cost proposal is 3-4 bytes per UTXO change. It isn't > 4*UTXO.size - 3*UTXO.size He said "utxo_created_size" not "utxo_created" so I assumed scriptlen? > It is only a small nudge. With only 10% of the block space to play with it > can't be massive. But you made that number up? The soft cap and hard byte limit are different beasts, so there's no need for soft cost cap < hard byte limit. > This requires that transactions include scriptPubKey information when > broadcasting them. Brilliant! I completely missed that possibility... >> 5) Add a CHECKSIG cost. Naively, since we allow 20,000 CHECKSIGs and >> 1MB blocks, that implies a cost of 50 bytes per CHECKSIG (but counted >> correctly, unlike now). >> >> This last one implies that the initial cost limit would be 2M, but in >> practice probably somewhere in the middle. >> >> tx_cost = 50*num_CHECKSIG >> + tx_bytes >> + 4*utxo_created_size >> - 3*utxo_consumed_size >> >> > A 250-byte transaction with 2 inputs and 2 outputs would have an adjusted >> > size of 252 bytes. >> >> Now cost == 352. > > That is too large a cost for a 10% block change. It could be included in > the block size hard fork though. I don't think so. Again, you're mixing units. > I think having one combined "cost" for > transactions is good. It means far fewer spread-out transaction checks. > The code for the cost formula would be in one place. Agreed! Unfortunately there'll always be two, because we really do want a hard byte limit: it's total tx bytes that brings most of the concerns about centralization. But ideally it'll be so rarely hit that it can be ~ ignored (and certainly not optimized for). Cheers, Rusty. ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-18 1:42 ` Rusty Russell @ 2015-05-19 8:59 ` Tier Nolan 0 siblings, 0 replies; 69+ messages in thread From: Tier Nolan @ 2015-05-19 8:59 UTC (permalink / raw) Cc: Bitcoin Development [-- Attachment #1: Type: text/plain, Size: 2732 bytes --] On Mon, May 18, 2015 at 2:42 AM, Rusty Russell <rusty@rustcorp.com.au> wrote: > OK. Be nice if these were cleaned up, but I guess it's a sunk cost. > Yeah. On the plus side, as people spend their money, old UTXOs would be used up and then they would be included in the cost function. It is only people who are storing their money long term that wouldn't. They are unlikely to have consumed their UTXOs anyway, unless miners started paying for UTXOs. We could make it a range. UTXOs from below 355,000 and above 375,000 are included. That can create incentive problems for the next similar change; I think a future threshold is better. > He said "utxo_created_size" not "utxo_created" so I assumed scriptlen? > Maybe I misread. > But you made that number up? The soft cap and hard byte limit are > different beasts, so there's no need for soft cost cap < hard byte > limit. > I was thinking about it being a soft fork. If it was combined with the 20MB limit change, then it could be anything. I made a suggestion somewhere (here or forums, not sure) that transactions should be allowed to store bytes. For example, a new opcode could be added, <byte_count> OP_LOCK_BYTES. This makes the transaction seem <byte_count> larger. However, when spending the UTXO, that transaction counts as <byte_count> smaller, even against the hard cap. This would be useful for channels. If channels were 100-1000X the blockchain volume and someone caused lots of channels to close, there might not be enough space for all the close-channel transactions. Some people might be able to get their refund transactions included in the blockchain because the timeout expires. If transactions could store enough space to be spent, then a mass channel close would cause some very large blocks, but then they would have to be followed by lots of tiny blocks. The block limit would be an average, not fixed per block. There would be 3 limits (sketched below): Absolute hard limit (max bytes no matter what): 100MB Hard limit (max bytes after stored bytes offset): 30MB Soft limit (max bytes equivalents): 10MB Blocks larger than ~32MB require a new network protocol, which makes the hard fork even "harder". The protocol change could be "messages can now be 150MB max" though, so maybe not so complex. > > This requires that transactions include scriptPubKey information when > > broadcasting them. > > Brilliant! I completely missed that possibility... > I have written a BIP about it. It is still in the draft stage. I had a look into writing up the code for the protocol change. https://github.com/TierNolan/bips/blob/extended_transactions/bip-etx.mediawiki https://github.com/TierNolan/bips/blob/extended_transactions/bip-etx-fork.mediawiki [-- Attachment #2: Type: text/html, Size: 4244 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
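A sketch of that three-limit scheme, assuming the hypothetical OP_LOCK_BYTES accounting described above (the constants come straight from the post; all names are illustrative, and how "stored" bytes would actually be tracked is left open):

    ABSOLUTE_HARD = 100_000_000  # max bytes no matter what
    HARD = 30_000_000            # max bytes after the stored-bytes offset
    SOFT = 10_000_000            # max byte-equivalents ("cost")

    def block_ok(raw_bytes, stored_bytes_released, cost_equivalents):
        # stored_bytes_released: bytes previously "locked" via OP_LOCK_BYTES
        # in outputs that this block's transactions now spend.
        if raw_bytes > ABSOLUTE_HARD:
            return False
        if raw_bytes - stored_bytes_released > HARD:
            return False
        return cost_equivalents <= SOFT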
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-08 20:33 ` Mark Friedenbach 2015-05-08 22:43 ` Aaron Voisine 2015-05-09 3:36 ` Gregory Maxwell @ 2015-05-10 21:48 ` Thomas Voegtlin 2015-05-10 22:31 ` Mark Friedenbach 2 siblings, 1 reply; 69+ messages in thread From: Thomas Voegtlin @ 2015-05-10 21:48 UTC (permalink / raw) To: bitcoin-development On 08/05/2015 22:33, Mark Friedenbach wrote: > * For each block, the miner is allowed to select a different difficulty > (nBits) within a certain range, e.g. +/- 25% of the expected difficulty, > and this miner-selected difficulty is used for the proof of work check. In > addition to adjusting the hashcash target, selecting a different difficulty > also raises or lowers the maximum block size for that block by a function > of the difference in difficulty. So increasing the difficulty of the block > by an additional 25% raises the block limit for that block from 100% of the > current limit to 125%, and lowering the difficulty by 10% would also lower > the maximum block size for that block from 100% to 90% of the current > limit. For simplicity I will assume a linear identity transform as the > function, but a quadratic or other function with compounding marginal cost > may be preferred. > Sorry but I fail to see how a linear identity transform between block size and difficulty would work. The miner's reward for finding a block is the sum of subsidy and fees: R = S + F The probability that the miner will find a block over a time interval is inversely proportional to the difficulty D: P = K / D where K is a constant that depends on the miner's hashrate. The expected reward of the miner is: E = P * R Consider that the miner chooses a new difficulty: D' = D(1 + x). With a linear identity transform between block size and difficulty, the miner will be allowed to collect fees from a block of size: S'=S(1+x) In the best case, collected fees will be proportional to block size: F' = F(1+x) Thus we get: E' = P' * R' = K/(D(1+x)) * (S + F(1+x)) E' = E - x/(1+x) * S * K / D So with this linear identity transform, increasing block size never increases the miner's gain. As long as the subsidy exists, the best strategy for miners is to reduce block size (i.e. to choose x<0). ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-10 21:48 ` Thomas Voegtlin @ 2015-05-10 22:31 ` Mark Friedenbach 2015-05-10 23:11 ` Thomas Voegtlin 0 siblings, 1 reply; 69+ messages in thread From: Mark Friedenbach @ 2015-05-10 22:31 UTC (permalink / raw) To: Thomas Voegtlin; +Cc: Bitcoin Development [-- Attachment #1: Type: text/plain, Size: 3633 bytes --] I'm on my phone today so I'm somewhat constrained in my reply, but the key takeaway is that the proposal is a mechanism for miners to trade subsidy for the increased fees of a larger block. Necessarily it only makes sense to do so when the marginal fee per KB exceeds the subsidy per KB. It correspondingly makes sense to use a smaller block size if fees are less than subsidy, but note that fees are not uniform, and as the block shrinks the marginal fee rate goes up. Limits on both the relative and absolute amount a miner can trade subsidy for block size prevent incentive edge cases as well as prevent a sharp shock to the current fee-poor economy (by disallowing adjustment below 1MB). Also the identity transform was used only for didactic purposes. I fully expect there to be other, more interesting functions to use. On May 10, 2015 3:03 PM, "Thomas Voegtlin" <thomasv@electrum.org> wrote: > On 08/05/2015 22:33, Mark Friedenbach wrote: > > > * For each block, the miner is allowed to select a different difficulty > > (nBits) within a certain range, e.g. +/- 25% of the expected difficulty, > > and this miner-selected difficulty is used for the proof of work check. In > > addition to adjusting the hashcash target, selecting a different difficulty > > also raises or lowers the maximum block size for that block by a function > > of the difference in difficulty. So increasing the difficulty of the block > > by an additional 25% raises the block limit for that block from 100% of the > > current limit to 125%, and lowering the difficulty by 10% would also lower > > the maximum block size for that block from 100% to 90% of the current > > limit. For simplicity I will assume a linear identity transform as the > > function, but a quadratic or other function with compounding marginal cost > > may be preferred. > > Sorry but I fail to see how a linear identity transform between block > size and difficulty would work. > > The miner's reward for finding a block is the sum of subsidy and fees: > > R = S + F > > The probability that the miner will find a block over a time interval is > inversely proportional to the difficulty D: > > P = K / D > > where K is a constant that depends on the miner's hashrate. The expected > reward of the miner is: > > E = P * R > > Consider that the miner chooses a new difficulty: > > D' = D(1 + x). > > With a linear identity transform between block size and difficulty, the > miner will be allowed to collect fees from a block of size: S'=S(1+x) > > In the best case, collected fees will be proportional to block size: > > F' = F(1+x) > > Thus we get: > > E' = P' * R' = K/(D(1+x)) * (S + F(1+x)) > > E' = E - x/(1+x) * S * K / D > > So with this linear identity transform, increasing block size never > increases the miner's gain. As long as the subsidy exists, the best > strategy for miners is to reduce block size (i.e. to choose x<0).
[-- Attachment #2: Type: text/html, Size: 4357 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-10 22:31 ` Mark Friedenbach @ 2015-05-10 23:11 ` Thomas Voegtlin 0 siblings, 0 replies; 69+ messages in thread From: Thomas Voegtlin @ 2015-05-10 23:11 UTC (permalink / raw) To: Mark Friedenbach; +Cc: Bitcoin Development On 11/05/2015 00:31, Mark Friedenbach wrote: > I'm on my phone today so I'm somewhat constrained in my reply, but the key > takeaway is that the proposal is a mechanism for miners to trade subsidy > for the increased fees of a larger block. Necessarily it only makes sense > to do so when the marginal fee per KB exceeds the subsidy per KB. It > correspondingly makes sense to use a smaller block size if fees are less > than subsidy, but note that fees are not uniform, and as the block shrinks > the marginal fee rate goes up. > Oh I see, you expect the sign of dE/dx to change depending on whether fees exceed the subsidy. This is possible, but instead of the linear identity, you have to increase the block size twice as fast as the difficulty. In that case we would get (using the notations of my previous email): D' = D(1+x) F' = F(1+2x) and thus: E' - E = x/(1+x)P(F-S) The presence of the (F-S) factor means that the sign reversal occurs when fees exceed subsidy. > Limits on both the relative and absolute amount a miner can trade subsidy > for block size prevent incentive edge cases as well as prevent a sharp > shock to the current fee-poor economy (by disallowing adjustment below 1MB). > > Also the identity transform was used only for didactic purposes. I fully > expect there to be other, more interesting functions to use. ^ permalink raw reply [flat|nested] 69+ messages in thread
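Both results are easy to verify numerically; a small sketch (the values K = D = 1 and subsidy S = 25 are illustrative) showing that the linear transform always loses expected reward for x > 0, while the double-speed transform gains exactly when F > S:

    def expected(K, D, S, F):
        return (K / D) * (S + F)     # E = P * R with P = K/D

    K, D, S = 1.0, 1.0, 25.0
    for F in (5.0, 50.0):            # fees below, then above, the subsidy
        base = expected(K, D, S, F)
        x = 0.1
        linear = expected(K, D * (1 + x), S, F * (1 + x)) - base
        double = expected(K, D * (1 + x), S, F * (1 + 2 * x)) - base
        print(F, round(linear, 3), round(double, 3))
    # F=5:  linear -2.273, double -1.818  (both negative: fees < subsidy)
    # F=50: linear -2.273, double +2.273  (sign flips once F > S)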
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-08 7:20 [Bitcoin-development] Proposed alternatives to the 20MB step function Matt Whitlock ` (4 preceding siblings ...) 2015-05-08 20:33 ` Mark Friedenbach @ 2015-05-28 15:53 ` Gavin Andresen 2015-05-28 17:05 ` Mike Hearn ` (3 more replies) 5 siblings, 4 replies; 69+ messages in thread From: Gavin Andresen @ 2015-05-28 15:53 UTC (permalink / raw) To: Matt Whitlock; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 4468 bytes --] On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock <bip@mattwhitlock.name> wrote: > Between all the flames on this list, several ideas were raised that did > not get much attention. I hereby resubmit these ideas for consideration and > discussion. > > - Perhaps the hard block size limit should be a function of the actual > block sizes over some trailing sampling period. For example, take the > median block size among the most recent 2016 blocks and multiply it by 1.5. > This allows Bitcoin to scale up gradually and organically, rather than > having human beings guessing at what is an appropriate limit. > A lot of people like this idea, or something like it. It is nice and simple, which is really important for consensus-critical code. With this rule in place, I believe there would be more "fee pressure" (miners would be creating smaller blocks) today. I created a couple of histograms of block sizes to infer what policy miners are ACTUALLY following today with respect to block size: Last 1,000 blocks: http://bitcoincore.org/~gavin/sizes_last1000.html Notice a big spike at 750K -- the default size for Bitcoin Core. This graph might be misleading, because transaction volume or fees might not be high enough over the last few days to fill blocks to whatever limit miners are willing to mine. So I graphed a time when (according to statoshi.info) there WERE a lot of transactions waiting to be confirmed: http://bitcoincore.org/~gavin/sizes_357511.html That might also be misleading, because it is possible there were a lot of transactions waiting to be confirmed because miners who choose to create small blocks got lucky and found more blocks than normal. In fact, it looks like that is what happened: more smaller-than-normal blocks were found, and the memory pool backed up. So: what if we had a dynamic maximum size limit based on recent history? The average block size is about 400K, so a 1.5x rule would make the max block size 600K; miners would definitely be squeezing out transactions / putting pressure to increase transaction fees. Even a 2x rule (implying 800K max blocks) would, today, be squeezing out transactions / putting pressure to increase fees. Using a median size instead of an average means the size can increase or decrease more quickly. For example, imagine the rule is "median of last 2016 blocks" and 49% of miners are producing 0-size blocks and 51% are producing max-size blocks. The median is max-size, so the 51% have total control over making blocks bigger. Swap the roles, and the median is min-size. Because of that, I think using an average is better-- it means the max size will change (up or down) more slowly. I also think 2016 blocks is too long, because transaction volumes change quicker than that. An average over 144 blocks (last 24 hours) would be better able to handle increased transaction volume around major holidays, and would also be able to react more quickly if an economically irrational attacker attempted to flood the network with fee-paying transactions. 
So my straw-man proposal would be: max size 2x average size over last 144 blocks, calculated at every block. There are a couple of other changes I'd pair with that consensus change: + Make the default mining policy for Bitcoin Core neutral-- have its target block size be the average size, so miners that don't care will "go along with the people who do care." + Use something like Greg's formula for size instead of bytes-on-the-wire, to discourage bloating the UTXO set. --------- When I've proposed (privately, to the other core committers) some dynamic algorithm the objection has been "but that gives miners complete control over the max block size." I think that worry is unjustified right now-- certainly, until we have size-independent new block propagation there is an incentive for miners to keep their blocks small, and we see miners creating small blocks even when there are fee-paying transactions waiting to be confirmed. I don't even think it will be a problem if/when we do have size-independent new block propagation, because I think the combination of the random timing of block-finding plus a dynamic limit as described above will create a healthy system. If I'm wrong, then it seems to me the miners will have a very strong incentive to, collectively, impose whatever rules are necessary (maybe a soft-fork to put a hard cap on block size) to make the system healthy again. -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 5686 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
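The straw-man rule itself fits in a few lines; a minimal sketch (names illustrative):

    def max_block_size(recent_sizes):
        # Max size = 2x the average size of the last 144 blocks (~24 hours),
        # recalculated at every block.
        assert len(recent_sizes) == 144
        return 2 * sum(recent_sizes) // 144

    # At today's ~400K average the cap would be ~800K:
    print(max_block_size([400_000] * 144))  # -> 800000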
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 15:53 ` Gavin Andresen @ 2015-05-28 17:05 ` Mike Hearn 2015-05-28 17:19 ` Gavin Andresen 2015-05-28 17:14 ` Thomas Voegtlin ` (2 subsequent siblings) 3 siblings, 1 reply; 69+ messages in thread From: Mike Hearn @ 2015-05-28 17:05 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 656 bytes --] > > Even a 2x rule (implying 800K max blocks) would, today, be squeezing out > transactions / putting pressure to increase fees ..... > > So my straw-man proposal would be: max size 2x average size over last 144 > blocks, calculated at every block. > Isn't that a step backwards, then? I see no reason for fee pressure to exist at the moment. All it's doing is turning away users for no purpose: mining isn't supported by fees, and the tiny fees we use right now seem to be good enough to stop penny flooding. Why not set the max size to be 20x the average size? Why 2x, given you just pointed out that'd result in blocks shrinking rather than growing? [-- Attachment #2: Type: text/html, Size: 1031 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 17:05 ` Mike Hearn @ 2015-05-28 17:19 ` Gavin Andresen 2015-05-28 17:34 ` Mike Hearn ` (2 more replies) 0 siblings, 3 replies; 69+ messages in thread From: Gavin Andresen @ 2015-05-28 17:19 UTC (permalink / raw) To: Mike Hearn; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1259 bytes --] On Thu, May 28, 2015 at 1:05 PM, Mike Hearn <mike@plan99.net> wrote: > Isn't that a step backwards, then? I see no reason for fee pressure to >> exist at the moment. All it's doing is turning away users for no purpose: >> mining isn't supported by fees, and the tiny fees we use right now seem to >> be good enough to stop penny flooding. >> > > Why not set the max size to be 20x the average size? Why 2x, given you > just pointed out that'd result in blocks shrinking rather than growing? > Twenty is scary. And two is a very neutral number: if 50% of hashpower want the max size to grow as fast as possible and 50% are dead-set opposed to any increase in max size, then half produce blocks 2 times as big, half produce empty blocks, and the max size doesn't change. If it was 20, then a small minority of miners could force a max size increase. (If it is less than 2, then a minority of miners can force the block size down.) As for whether there "should" be fee pressure now or not: I have no opinion, besides "we should make block propagation faster so there is no technical reason for miners to produce tiny blocks." I don't think we developers should be deciding things like whether or not fees are too high, too low, ..... -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 2084 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
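The "two is neutral" argument is easy to check with a toy calculation (a sketch; the fractions and sizes are illustrative):

    def next_cap(multiplier, cap, frac_mining_full):
        # frac_mining_full of hashpower mines at the current cap,
        # the rest mine empty blocks.
        return multiplier * (frac_mining_full * cap)

    print(next_cap(2, 800_000, 0.5))   # -> 800000.0: cap unchanged
    print(next_cap(20, 800_000, 0.1))  # -> 1600000.0: a 10% minority
                                       #    doubles the cap each period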
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 17:19 ` Gavin Andresen @ 2015-05-28 17:34 ` Mike Hearn 2015-05-28 18:23 ` Gavin Andresen 2015-05-28 17:39 ` [Bitcoin-development] Proposed alternatives to the 20MB stepfunction Raystonn . 2015-05-28 17:50 ` [Bitcoin-development] Proposed alternatives to the 20MB step function Peter Todd 2 siblings, 1 reply; 69+ messages in thread From: Mike Hearn @ 2015-05-28 17:34 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1411 bytes --] > > Twenty is scary. > To whom? The only justification for the max size is DoS attacks, right? Back when Bitcoin had an average block size of 10kb, the max block size was 100x the average. Things worked fine, nobody was scared. The max block size is really a limit set by hardware capability, which is something that's difficult to measure in software. I think I preferred your original formula that guesstimated based on previous trends to one that just tries to follow some average. As noted, many miners just accept the defaults. With your proposed change their target would effectively *drop* from 1MB to 800kB today, which seems crazy. That's the exact opposite of what is needed right now. I am very skeptical about this idea. > I don't think we developers should be deciding things like whether or not > fees are too high, too low, > Miners can already attempt to apply fee pressure by just not mining transactions that they feel don't pay enough. Some sort of auto-cartel that attempts to restrict supply based on everyone looking at everyone else feels overly complex and prone to strange situations: it looks a lot like some kind of Mexican standoff to me. Additionally, the justification for the block size limit was DoS by someone mining "troll blocks". It was never meant to be about fee pressure. Resource management inside Bitcoin Core is certainly something to be handled by developers. [-- Attachment #2: Type: text/html, Size: 2129 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 17:34 ` Mike Hearn @ 2015-05-28 18:23 ` Gavin Andresen 2015-05-29 11:26 ` Mike Hearn 0 siblings, 1 reply; 69+ messages in thread From: Gavin Andresen @ 2015-05-28 18:23 UTC (permalink / raw) To: Mike Hearn; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 574 bytes --] On Thu, May 28, 2015 at 1:34 PM, Mike Hearn <mike@plan99.net> wrote: > As noted, many miners just accept the defaults. With your proposed change >> their target would effectively *drop* from 1MB to 800kB today, which >> seems crazy. That's the exact opposite of what is needed right now. >> > > I am very skeptical about this idea. > By the time a hard fork can happen, I expect average block size will be above 500K. Would you support a rule that was "larger of 1MB or 2x average size"? That is strictly better than the situation we're in today. -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 1510 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 18:23 ` Gavin Andresen @ 2015-05-29 11:26 ` Mike Hearn 2015-05-29 11:42 ` Tier Nolan 0 siblings, 1 reply; 69+ messages in thread From: Mike Hearn @ 2015-05-29 11:26 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 977 bytes --] > > By the time a hard fork can happen, I expect average block size will be > above 500K. > Yes, possibly. > Would you support a rule that was "larger of 1MB or 2x average size"? > That is strictly better than the situation we're in today. > It is, but only by a trivial amount - hitting the limit is still very likely. I don't want to see this issue come up over and over again. Ideally never. We shouldn't be artificially throttling organic growth of the network, especially not by accident. IMO it's not even clear there needs to be a size limit at all. Currently the 32mb message cap imposes one anyway, but miners can always just discourage blocks over some particular size if they want to. But I can get behind a 20mb limit (or 20mb+N) as it represents a reasonable compromise: the limit still exists, it's far below VISA capacity etc, but it should also free up enough space that everyone can get back to what we *should* be focusing on, which is user growth! [-- Attachment #2: Type: text/html, Size: 1611 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 11:26 ` Mike Hearn @ 2015-05-29 11:42 ` Tier Nolan 2015-05-29 11:57 ` Mike Hearn 0 siblings, 1 reply; 69+ messages in thread From: Tier Nolan @ 2015-05-29 11:42 UTC (permalink / raw) Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 754 bytes --] On Fri, May 29, 2015 at 12:26 PM, Mike Hearn <mike@plan99.net> wrote: > IMO it's not even clear there needs to be a size limit at all. Currently > the 32mb message cap imposes one anyway > If the plan is a fix once and for all, then that should be changed too. It could be set so that it is at least some multiple of the max block size allowed. Alternatively, the merkle block message already incorporates the required functionality. Send - headers message (with 1 header) - merkleblock messages (max 1MB per message) The transactions for each merkleblock could be sent directly before each merkleblock, as is currently the case. That system can send a block of any size. It would require a change to the processing of any merkleblocks received. [-- Attachment #2: Type: text/html, Size: 1217 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 11:42 ` Tier Nolan @ 2015-05-29 11:57 ` Mike Hearn 2015-05-29 12:39 ` Gavin Andresen 0 siblings, 1 reply; 69+ messages in thread From: Mike Hearn @ 2015-05-29 11:57 UTC (permalink / raw) To: Tier Nolan; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 876 bytes --] > > If the plan is a fix once and for all, then that should be changed too. > It could be set so that it is at least some multiple of the max block size > allowed. > Well, but RAM is not infinite :-) Effectively what these caps are doing is setting the minimum hardware requirements for running a Bitcoin node. That's OK by me - I don't think we are actually going to exhaust the hardware abilities of any reasonable computer any time soon, but still, having the software recognise the finite nature of a computing machine doesn't seem unwise. > That system can send a block of any size. It would require a change to > the processing of any merkleblocks received. > Not "any" size because, again, the remote node must buffer things up and have the transaction data actually in memory in order to digest it. But a much larger size, yes. However, that's a bigger change. [-- Attachment #2: Type: text/html, Size: 1503 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 11:57 ` Mike Hearn @ 2015-05-29 12:39 ` Gavin Andresen 2015-05-29 14:00 ` insecurity ` (3 more replies) 0 siblings, 4 replies; 69+ messages in thread From: Gavin Andresen @ 2015-05-29 12:39 UTC (permalink / raw) To: Mike Hearn; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1596 bytes --] What do other people think? If we can't come to an agreement soon, then I'll ask for help reviewing/submitting patches to Mike's Bitcoin XT project that implement a big increase now that grows over time, so we may never have to go through all this rancor and debate again. I'll then ask for help lobbying the merchant services and exchanges and hosted wallet companies and other bitcoind-using-infrastructure companies (and anybody who agrees with me that we need bigger blocks sooner rather than later) to run Bitcoin XT instead of Bitcoin Core, and state that they are running it. We'll be able to see uptake on the network by monitoring client versions. Perhaps by the time that happens there will be consensus bigger blocks are needed sooner rather than later; if so, great! The early deployment will just serve as early testing, and all of the software already deployed will be ready for bigger blocks. But if there is still no consensus among developers but the "bigger blocks now" movement is successful, I'll ask for help getting big miners to do the same, and use the soft-fork block version voting mechanism to (hopefully) get a majority and then a super-majority willing to produce bigger blocks. The purpose of that process is to prove to any doubters that they'd better start supporting bigger blocks or they'll be left behind, and to give them a chance to upgrade before that happens. Because if we can't come to consensus here, the ultimate authority for determining consensus is what code the majority of merchants and exchanges and miners are running. -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 1917 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 12:39 ` Gavin Andresen @ 2015-05-29 14:00 ` insecurity 2015-05-29 14:15 ` Braun Brelin 2015-05-29 14:09 ` Tier Nolan ` (2 subsequent siblings) 3 siblings, 1 reply; 69+ messages in thread From: insecurity @ 2015-05-29 14:00 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev Are you really that pig-headed that you are going to try and blow up the entire system just to get your way? A bunch of ignorant redditors do not make consensus, mercifully. On 2015-05-29 12:39, Gavin Andresen wrote: > What do other people think? > > If we can't come to an agreement soon, then I'll ask for help > reviewing/submitting patches to Mike's Bitcoin XT project that > implement a big increase now that grows over time, so we may never have > to go through all this rancor and debate again. > > I'll then ask for help lobbying the merchant services and exchanges > and hosted wallet companies and other bitcoind-using-infrastructure > companies (and anybody who agrees with me that we need bigger blocks > sooner rather than later) to run Bitcoin XT instead of Bitcoin Core, > and state that they are running it. We'll be able to see uptake on the > network by monitoring client versions. > > Perhaps by the time that happens there will be consensus bigger blocks > are needed sooner rather than later; if so, great! The early > deployment will just serve as early testing, and all of the software > already deployed will be ready for bigger blocks. > > But if there is still no consensus among developers but the "bigger > blocks now" movement is successful, I'll ask for help getting big > miners to do the same, and use the soft-fork block version voting > mechanism to (hopefully) get a majority and then a super-majority > willing to produce bigger blocks. The purpose of that process is to > prove to any doubters that they'd better start supporting bigger > blocks or they'll be left behind, and to give them a chance to upgrade > before that happens. > > Because if we can't come to consensus here, the ultimate authority for > determining consensus is what code the majority of merchants and > exchanges and miners are running. > > -- > > -- > Gavin Andresen ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 14:00 ` insecurity @ 2015-05-29 14:15 ` Braun Brelin 0 siblings, 0 replies; 69+ messages in thread From: Braun Brelin @ 2015-05-29 14:15 UTC (permalink / raw) To: insecurity; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 3015 bytes --] How is this being pig-headed? In my opinion, this is leadership. If *something* isn't implemented soon, the network is going to have some real problems, right at the time when adoption is starting to accelerate. I've been seeing nothing but navel-gazing and circlejerks on this issue for weeks now. Gavin or Mike or someone at some point needs to step up and say "follow me". Braun Brelin On Fri, May 29, 2015 at 5:00 PM, <insecurity@national.shitposting.agency> wrote: > Are you really that pig-headed that you are going to try and blow up the > entire system just to get your way? A bunch of ignorant redditors do not > make consensus, mercifully. > > > On 2015-05-29 12:39, Gavin Andresen wrote: > > What do other people think? > > > > If we can't come to an agreement soon, then I'll ask for help > > reviewing/submitting patches to Mike's Bitcoin XT project that > > implement a big increase now that grows over time, so we may never have > > to go through all this rancor and debate again. > > > > I'll then ask for help lobbying the merchant services and exchanges > > and hosted wallet companies and other bitcoind-using-infrastructure > > companies (and anybody who agrees with me that we need bigger blocks > > sooner rather than later) to run Bitcoin XT instead of Bitcoin Core, > > and state that they are running it. We'll be able to see uptake on the > > network by monitoring client versions. > > > > Perhaps by the time that happens there will be consensus bigger blocks > > are needed sooner rather than later; if so, great! The early > > deployment will just serve as early testing, and all of the software > > already deployed will be ready for bigger blocks. > > > > But if there is still no consensus among developers but the "bigger > > blocks now" movement is successful, I'll ask for help getting big > > miners to do the same, and use the soft-fork block version voting > > mechanism to (hopefully) get a majority and then a super-majority > > willing to produce bigger blocks. The purpose of that process is to > > prove to any doubters that they'd better start supporting bigger > > blocks or they'll be left behind, and to give them a chance to upgrade > > before that happens. > > > > Because if we can't come to consensus here, the ultimate authority for > > determining consensus is what code the majority of merchants and > > exchanges and miners are running. > > > > -- > > > > -- > > Gavin Andresen [-- Attachment #2: Type: text/html, Size: 4094 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 12:39 ` Gavin Andresen 2015-05-29 14:00 ` insecurity @ 2015-05-29 14:09 ` Tier Nolan 2015-05-29 14:20 ` Gavin Andresen ` (2 more replies) 2015-05-29 17:53 ` [Bitcoin-development] Proposed alternatives to the 20MB step function Admin Istrator 2015-05-29 18:47 ` Bryan Cheng 3 siblings, 3 replies; 69+ messages in thread From: Tier Nolan @ 2015-05-29 14:09 UTC (permalink / raw) Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 919 bytes --] On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen <gavinandresen@gmail.com> wrote: > But if there is still no consensus among developers but the "bigger blocks > now" movement is successful, I'll ask for help getting big miners to do the > same, and use the soft-fork block version voting mechanism to (hopefully) > get a majority and then a super-majority willing to produce bigger blocks. > The purpose of that process is to prove to any doubters that they'd better > start supporting bigger blocks or they'll be left behind, and to give them > a chance to upgrade before that happens. > How do you define that the movement is successful? > Because if we can't come to consensus here, the ultimate authority for > determining consensus is what code the majority of merchants and exchanges > and miners are running. > The measure is miner consensus. How do you intend to measure exchange/merchant acceptance? [-- Attachment #2: Type: text/html, Size: 1515 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 14:09 ` Tier Nolan @ 2015-05-29 14:20 ` Gavin Andresen 2015-05-29 14:22 ` Mike Hearn 2015-05-29 14:21 ` Mike Hearn 2015-05-29 14:22 ` Tier Nolan 2 siblings, 1 reply; 69+ messages in thread From: Gavin Andresen @ 2015-05-29 14:20 UTC (permalink / raw) To: Tier Nolan; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 405 bytes --] On Fri, May 29, 2015 at 10:09 AM, Tier Nolan <tier.nolan@gmail.com> wrote: > How do you intend to measure exchange/merchant acceptance? > Public statements saying "we're running software that is ready for bigger blocks." And looking at the version (aka user-agent) strings of publicly reachable nodes on the network. (e.g. see the count at https://getaddr.bitnodes.io/nodes/ ) -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 1066 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
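Counting deployment by user-agent, as Gavin suggests, is a simple tally once the strings of reachable nodes have been collected (for example by a crawler such as bitnodes). A sketch with made-up input data:

    from collections import Counter

    # In practice user_agents would come from a network crawl;
    # these values are invented for illustration.
    user_agents = ["/Satoshi:0.10.2/", "/Satoshi:0.10.2/", "/Bitcoin XT:0.10.2/"]

    counts = Counter(ua.strip("/").split(":")[0] for ua in user_agents)
    total = sum(counts.values())
    for client, n in counts.most_common():
        print("%s: %d nodes (%.1f%%)" % (client, n, 100.0 * n / total))

As the next message notes, such a count misses any client that reports another client's user-agent string.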
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 14:20 ` Gavin Andresen @ 2015-05-29 14:22 ` Mike Hearn 0 siblings, 0 replies; 69+ messages in thread From: Mike Hearn @ 2015-05-29 14:22 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 554 bytes --] > > And looking at the version (aka user-agent) strings of publicly reachable > nodes on the network. > (e.g. see the count at https://getaddr.bitnodes.io/nodes/ ) > Yeah, though FYI Luke informed me last week that I somehow managed to take out the change to the user-agent string in Bitcoin XT; presumably I made a mistake during a rebase of the rebranding change. So the actual number of XT nodes is a bit higher than counting user-agent strings would suggest. I sort of neglected XT lately. If we go ahead with this, then I'll fix things like this. [-- Attachment #2: Type: text/html, Size: 965 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 14:09 ` Tier Nolan 2015-05-29 14:20 ` Gavin Andresen @ 2015-05-29 14:21 ` Mike Hearn 2015-05-29 14:22 ` Tier Nolan 2 siblings, 0 replies; 69+ messages in thread From: Mike Hearn @ 2015-05-29 14:21 UTC (permalink / raw) To: Tier Nolan; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 717 bytes --] > > The measure is miner consensus. How do you intend to measure > exchange/merchant acceptance? > Asking them. In fact, we already have. I have been talking to well known people and CEOs in the Bitcoin community for some time now. *All* of them support bigger blocks, this includes: - Every wallet developer I have asked (other than Bitcoin Core) - So far, every payment processor and every exchange company I know Gavin has also been talking to people about this. There's a feeling on this list that there's no consensus, or that Gavin and myself are on the wrong side of it. I'd put it differently - there's very strong consensus out in the wider community and this list is something of an aberration. [-- Attachment #2: Type: text/html, Size: 1135 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 14:09 ` Tier Nolan 2015-05-29 14:20 ` Gavin Andresen 2015-05-29 14:21 ` Mike Hearn @ 2015-05-29 14:22 ` Tier Nolan 2015-05-29 16:39 ` [Bitcoin-development] Proposed alternatives to the 20MB stepfunction Raystonn . 2 siblings, 1 reply; 69+ messages in thread From: Tier Nolan @ 2015-05-29 14:22 UTC (permalink / raw) Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 2006 bytes --] On Fri, May 29, 2015 at 3:09 PM, Tier Nolan <tier.nolan@gmail.com> wrote: > > > On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen <gavinandresen@gmail.com> > wrote: > >> But if there is still no consensus among developers but the "bigger >> blocks now" movement is successful, I'll ask for help getting big miners to >> do the same, and use the soft-fork block version voting mechanism to >> (hopefully) get a majority and then a super-majority willing to produce >> bigger blocks. The purpose of that process is to prove to any doubters that >> they'd better start supporting bigger blocks or they'll be left behind, and >> to give them a chance to upgrade before that happens. >> > > How do you define that the movement is successful? > Sorry again, I keep auto-sending from gmail when trying to delete. In theory, using the "nuclear option", the block size can be increased via soft fork. Version 4 blocks would contain the hash of a valid extended block in the coinbase. <block height> <32 byte extended hash> To send coins to the extended block, you send them to some template. OP_P2SH_EXTENDED <scriptPubKey hash> OP_TRUE This transaction can be spent by anyone (under the current rules). The soft fork would lock the transaction output unless it transferred money from the extended block. To unlock the transaction output, you need to include the txid of transaction(s) in the extended block and signature(s) in the scriptSig. The transaction output can be spent in the extended block using P2SH against the scriptPubKey hash. This means that people can choose to move their money to the extended block. It might have lower security than leaving it in the root chain. The extended chain could use the updated script language too. This is obviously more complex than just increasing the size though, but it could be a fallback option if no consensus is reached. It has the advantage of giving people a choice. They can move their money to the extended chain or not, as they wish. [-- Attachment #2: Type: text/html, Size: 2960 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
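A sketch of the two artifacts Tier describes: the coinbase commitment and the anyone-can-spend transfer template. OP_P2SH_EXTENDED is the hypothetical opcode from his proposal; encoding it as a redefined OP_NOP (0xb0 below) and the exact serialization of the commitment are my assumptions, since soft-forked opcodes are conventionally added that way:

    import hashlib

    def coinbase_commitment(height, ext_block_bytes):
        # <block height> <32 byte extended hash>, embedded in the coinbase.
        ext_hash = hashlib.sha256(hashlib.sha256(ext_block_bytes).digest()).digest()
        return height.to_bytes(4, "little") + ext_hash

    def transfer_template(script_pubkey):
        # OP_P2SH_EXTENDED <scriptPubKey hash> OP_TRUE.  Old nodes see
        # NOP / push / OP_TRUE, i.e. anyone-can-spend; upgraded nodes
        # additionally require a matching transfer from the extended block.
        # HASH160 as in P2SH; ripemd160 needs OpenSSL support in hashlib.
        spk_hash = hashlib.new("ripemd160", hashlib.sha256(script_pubkey).digest()).digest()
        return b"\xb0" + bytes([len(spk_hash)]) + spk_hash + b"\x51"  # 0x51 = OP_TRUE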
* Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction 2015-05-29 14:22 ` Tier Nolan @ 2015-05-29 16:39 ` Raystonn . 2015-05-29 18:28 ` Tier Nolan 0 siblings, 1 reply; 69+ messages in thread From: Raystonn . @ 2015-05-29 16:39 UTC (permalink / raw) To: Tier Nolan; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 3592 bytes --] Regarding Tier’s proposal: The lower security you mention for extended blocks would delay, possibly forever, the larger maximum block size that we want for the entire network. That doesn’t sound like an optimal solution. Regarding consensus for a larger maximum block size, what we are seeing on this list is typical of what we see in the U.S. Congress. Support for changes by the stakeholders (support for bills by the citizens as a whole) has become irrelevant to the probability of these changes being adopted. Lobbyists have all the sway in getting their policies enacted. In our case, I would bet on some lobbying of core developers by wealthy miners. Someone recently proposed that secret ballots could help eliminate the power of lobbyists in Congress. Nobody invests in that which cannot be confirmed. Secret ballots mean the vote you are buying cannot be confirmed. Perhaps this will work for Bitcoin Core as well. From: Tier Nolan Sent: Friday, May 29, 2015 7:22 AM Cc: Bitcoin Dev Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction On Fri, May 29, 2015 at 3:09 PM, Tier Nolan <tier.nolan@gmail.com> wrote: On Fri, May 29, 2015 at 1:39 PM, Gavin Andresen <gavinandresen@gmail.com> wrote: But if there is still no consensus among developers but the "bigger blocks now" movement is successful, I'll ask for help getting big miners to do the same, and use the soft-fork block version voting mechanism to (hopefully) get a majority and then a super-majority willing to produce bigger blocks. The purpose of that process is to prove to any doubters that they'd better start supporting bigger blocks or they'll be left behind, and to give them a chance to upgrade before that happens. How do you define that the movement is successful? Sorry again, I keep auto-sending from gmail when trying to delete. In theory, using the "nuclear option", the block size can be increased via soft fork. Version 4 blocks would contain the hash of a valid extended block in the coinbase. <block height> <32 byte extended hash> To send coins to the extended block, you send them to some template. OP_P2SH_EXTENDED <scriptPubKey hash> OP_TRUE This transaction can be spent by anyone (under the current rules). The soft fork would lock the transaction output unless it transferred money from the extended block. To unlock the transaction output, you need to include the txid of transaction(s) in the extended block and signature(s) in the scriptSig. The transaction output can be spent in the extended block using P2SH against the scriptPubKey hash. This means that people can choose to move their money to the extended block. It might have lower security than leaving it in the root chain. The extended chain could use the updated script language too. This is obviously more complex than just increasing the size though, but it could be a fallback option if no consensus is reached. It has the advantage of giving people a choice. They can move their money to the extended chain or not, as they wish. 
------------------------------------------------------------------------------ _______________________________________________ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development [-- Attachment #2: Type: text/html, Size: 5572 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction 2015-05-29 16:39 ` [Bitcoin-development] Proposed alternatives to the 20MB stepfunction Raystonn . @ 2015-05-29 18:28 ` Tier Nolan 0 siblings, 0 replies; 69+ messages in thread From: Tier Nolan @ 2015-05-29 18:28 UTC (permalink / raw) Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1134 bytes --] On Fri, May 29, 2015 at 5:39 PM, Raystonn . <raystonn@hotmail.com> wrote: > Regarding Tier’s proposal: The lower security you mention for extended > blocks would delay, possibly forever, the larger maximum block size > that we want for the entire network. That doesn’t sound like an optimal > solution. > I don't think so. The lower security is the potential centralisation risk. If you have your money in the "root" chain, then you can watch it. You can probably also watch it in a 20MB chain. Full nodes would still verify the entire block (root + extended). It is a "nuclear option", since you can make any changes you want to the rules for the extended chain. The only safeguard is that people have to voluntarily transfer coins to the extended block. The extended block might have 10-15% of the total bitcoins, but still be useful, since they would be the ones that move the most. If you want to store your coins long term, you move them back to the root block where you can watch them more closely. It does make things more complex though. Wallets would have to list 2 balances. [-- Attachment #2: Type: text/html, Size: 1641 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 12:39 ` Gavin Andresen 2015-05-29 14:00 ` insecurity 2015-05-29 14:09 ` Tier Nolan @ 2015-05-29 17:53 ` Admin Istrator 2015-05-30 9:03 ` Aaron Voisine 2015-05-29 18:47 ` Bryan Cheng 3 siblings, 1 reply; 69+ messages in thread From: Admin Istrator @ 2015-05-29 17:53 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 3377 bytes --] What about trying the dynamic scaling method within the 20MB range + 1 year with a 40% increase of that cap? Until a way to dynamically scale is found, the cap will only continue to be an issue. With 20 MB + 40% yoy, we're either imposing an arbitrary cap later, or achieving less than great DOS protection always. Why not set that policy as a maximum for 2 years as a protection against the possibility of dynamic scaling abuse, and see what happens with a dynamic method in the mean time. The policy of Max(1MB, (average size over previous 144 blocks) * 2) calculated at each block seems pretty reasonable. As an outsider, the real 'median' here seems to be 'keeping the cap as small as possible while allowing for larger blocks still'. We know miners will want to keep space in their blocks relatively scarce, but we also know that doesn't exclude the more powerful miners from including superfluous transactions to increase their effective share of the network. I have the luck of not being drained by this topic over the past three years, so it looks to me as if its two poles of 'block size must increase' and 'block size must not increase' are forcing what is the clear route to establishing the 'right' block size off the table. --Andrew Len (sorry if anybody received this twice, sent as the wrong email the first time around). On Fri, May 29, 2015 at 5:39 AM, Gavin Andresen <gavinandresen@gmail.com> wrote: > What do other people think? > > > If we can't come to an agreement soon, then I'll ask for help > reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a > big increase now that grows over time so we may never have to go through > all this rancor and debate again. > > I'll then ask for help lobbying the merchant services and exchanges and > hosted wallet companies and other bitcoind-using-infrastructure companies > (and anybody who agrees with me that we need bigger blocks sooner rather > than later) to run Bitcoin-Xt instead of Bitcoin Core, and state that they > are running it. We'll be able to see uptake on the network by monitoring > client versions. > > Perhaps by the time that happens there will be consensus bigger blocks are > needed sooner rather than later; if so, great! The early deployment will > just serve as early testing, and all of the software already deployed will > ready for bigger blocks. > > But if there is still no consensus among developers but the "bigger blocks > now" movement is successful, I'll ask for help getting big miners to do the > same, and use the soft-fork block version voting mechanism to (hopefully) > get a majority and then a super-majority willing to produce bigger blocks. > The purpose of that process is to prove to any doubters that they'd better > start supporting bigger blocks or they'll be left behind, and to give them > a chance to upgrade before that happens. > > > Because if we can't come to consensus here, the ultimate authority for > determining consensus is what code the majority of merchants and exchanges > and miners are running. 
> > > -- > -- > Gavin Andresen > > > ------------------------------------------------------------------------------ > > _______________________________________________ > Bitcoin-development mailing list > Bitcoin-development@lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/bitcoin-development > > [-- Attachment #2: Type: text/html, Size: 4397 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
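The policy Andrew proposes reduces to a one-line rule recomputed at every block. A sketch, with sizes in bytes:

    def max_block_size(prev_sizes, floor=1000000, window=144, factor=2.0):
        # Max(1MB, 2 * average size over the previous 144 blocks).
        recent = prev_sizes[-window:]
        return max(floor, int(factor * sum(recent) / len(recent)))

    # At the ~400K average block size Gavin cites later in the thread,
    # the cap would sit at the 1MB floor until blocks grow.
    print(max_block_size([400000] * 144))  # 1000000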
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 17:53 ` [Bitcoin-development] Proposed alternatives to the 20MB step function Admin Istrator @ 2015-05-30 9:03 ` Aaron Voisine 2015-06-01 11:30 ` Ricardo Filipe 0 siblings, 1 reply; 69+ messages in thread From: Aaron Voisine @ 2015-05-30 9:03 UTC (permalink / raw) To: Admin Istrator; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 4214 bytes --] > or achieving less than great DOS protection Right now a bunch of redditors can DOS the network at the cost of a few thousand dollars per day, shared between them. Since the cost of validating transactions is far lower than current minimum relay fees, then increasing the block size increases the cost of DOSing the network. Aaron Voisine co-founder and CEO breadwallet.com On Fri, May 29, 2015 at 10:53 AM, Admin Istrator <andy@ftlio.com> wrote: > What about trying the dynamic scaling method within the 20MB range + 1 > year with a 40% increase of that cap? Until a way to dynamically scale is > found, the cap will only continue to be an issue. With 20 MB + 40% yoy, > we're either imposing an arbitrary cap later, or achieving less than great > DOS protection always. Why not set that policy as a maximum for 2 years as > a protection against the possibility of dynamic scaling abuse, and see what > happens with a dynamic method in the mean time. The policy of Max(1MB, > (average size over previous 144 blocks) * 2) calculated at each block seems > pretty reasonable. > > As an outsider, the real 'median' here seems to be 'keeping the cap as > small as possible while allowing for larger blocks still'. We know > miners will want to keep space in their blocks relatively scarce, but we > also know that doesn't exclude the more powerful miners from > including superfluous transactions to increase their effective share of the > network. I have the luck of not being drained by this topic over the past > three years, so it looks to me as if its two poles of 'block size must > increase' and 'block size must not increase' are forcing what is the clear > route to establishing the 'right' block size off the table. > > --Andrew Len > (sorry if anybody received this twice, sent as the wrong email the first > time around). > > On Fri, May 29, 2015 at 5:39 AM, Gavin Andresen <gavinandresen@gmail.com> > wrote: > >> What do other people think? >> >> >> If we can't come to an agreement soon, then I'll ask for help >> reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a >> big increase now that grows over time so we may never have to go through >> all this rancor and debate again. >> >> I'll then ask for help lobbying the merchant services and exchanges and >> hosted wallet companies and other bitcoind-using-infrastructure companies >> (and anybody who agrees with me that we need bigger blocks sooner rather >> than later) to run Bitcoin-Xt instead of Bitcoin Core, and state that they >> are running it. We'll be able to see uptake on the network by monitoring >> client versions. >> >> Perhaps by the time that happens there will be consensus bigger blocks >> are needed sooner rather than later; if so, great! The early deployment >> will just serve as early testing, and all of the software already deployed >> will ready for bigger blocks. 
>> >> But if there is still no consensus among developers but the "bigger >> blocks now" movement is successful, I'll ask for help getting big miners to >> do the same, and use the soft-fork block version voting mechanism to >> (hopefully) get a majority and then a super-majority willing to produce >> bigger blocks. The purpose of that process is to prove to any doubters that >> they'd better start supporting bigger blocks or they'll be left behind, and >> to give them a chance to upgrade before that happens. >> >> >> Because if we can't come to consensus here, the ultimate authority for >> determining consensus is what code the majority of merchants and exchanges >> and miners are running. >> >> >> -- >> -- >> Gavin Andresen >> >> >> ------------------------------------------------------------------------------ >> >> _______________________________________________ >> Bitcoin-development mailing list >> Bitcoin-development@lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/bitcoin-development >> >> > > > ------------------------------------------------------------------------------ > > _______________________________________________ > Bitcoin-development mailing list > Bitcoin-development@lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/bitcoin-development > > [-- Attachment #2: Type: text/html, Size: 5988 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-30 9:03 ` Aaron Voisine @ 2015-06-01 11:30 ` Ricardo Filipe 2015-06-01 11:46 ` Marcel Jamin 0 siblings, 1 reply; 69+ messages in thread From: Ricardo Filipe @ 2015-06-01 11:30 UTC (permalink / raw) Cc: Bitcoin Dev I've been following the discussion of the block size limit and IMO it is clear that any constant block size limit is, as many have said before, just kicking the can down the road. My problem with the dynamic lower limit solution based on past blocks is that it doesn't account for usage spikes. I would like to propose another dynamic lower limit scheme: Let the block size limit be a function of the number of current transactions in the mempool. This way, bitcoin usage regulates the block size limit. I'm sorry i don't have the knowledge of the code base or time to make simulations on this kind of approach, but nevertheless I would like to leave it here for discussion or foster other ideas. cheers ^ permalink raw reply [flat|nested] 69+ messages in thread
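Ricardo's rule in sketch form. The caveat raised in the next message applies: mempool contents are not consensus data, so the input to such a function would itself need agreement, and both parameters below are invented for illustration:

    def max_block_size(mempool_tx_count, avg_tx_bytes=500, headroom=2.0,
                       floor=1000000):
        # Scale the cap with demand: room for the current backlog
        # plus some headroom for spikes.
        return max(floor, int(mempool_tx_count * avg_tx_bytes * headroom))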
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-06-01 11:30 ` Ricardo Filipe @ 2015-06-01 11:46 ` Marcel Jamin 0 siblings, 0 replies; 69+ messages in thread From: Marcel Jamin @ 2015-06-01 11:46 UTC (permalink / raw) To: Ricardo Filipe; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1329 bytes --] > Let the block size limit be a function of the number of current transactions in the mempool. There is no single mempool which transactions could be counted and there is no consensus about the average number of unconfirmed transactions. 2015-06-01 13:30 GMT+02:00 Ricardo Filipe <ricardojdfilipe@gmail.com>: > I've been following the discussion of the block size limit and IMO it > is clear that any constant block size limit is, as many have said > before, just kicking the can down the road. > My problem with the dynamic lower limit solution based on past blocks > is that it doesn't account for usage spikes. I would like to propose > another dynamic lower limit scheme: > Let the block size limit be a function of the number of current > transactions in the mempool. This way, bitcoin usage regulates the > block size limit. > > I'm sorry i don't have the knowledge of the code base or time to make > simulations on this kind of approach, but nevertheless I would like to > leave it here for discussion or foster other ideas. > > cheers > > > ------------------------------------------------------------------------------ > _______________________________________________ > Bitcoin-development mailing list > Bitcoin-development@lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/bitcoin-development > [-- Attachment #2: Type: text/html, Size: 2053 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 12:39 ` Gavin Andresen ` (2 preceding siblings ...) 2015-05-29 17:53 ` [Bitcoin-development] Proposed alternatives to the 20MB step function Admin Istrator @ 2015-05-29 18:47 ` Bryan Cheng 2015-05-30 1:36 ` Cameron Garnham 3 siblings, 1 reply; 69+ messages in thread From: Bryan Cheng @ 2015-05-29 18:47 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1185 bytes --] On Fri, May 29, 2015 at 5:39 AM, Gavin Andresen <gavinandresen@gmail.com> wrote: > What do other people think? > > > If we can't come to an agreement soon, then I'll ask for help > reviewing/submitting patches to Mike's Bitcoin-Xt project that implement a > big increase now that grows over time so we may never have to go through > all this rancor and debate again. > > I'll then ask for help lobbying the merchant services and exchanges and > hosted wallet companies and other bitcoind-using-infrastructure companies > (and anybody who agrees with me that we need bigger blocks sooner rather > than later) to run Bitcoin-Xt instead of Bitcoin Core, and state that they > are running it. We'll be able to see uptake on the network by monitoring > client versions. > > > While I think we'd all prefer Core to make changes like this, the current environment may make that impossible. If this change happens in XT, we will support the necessary changes in our own implementation. The block size limit is a problem _today_, and I'd rather we solve today's problems with today's understanding rather than let speculation about future unknowns stop our ability to respond to known issues. [-- Attachment #2: Type: text/html, Size: 1623 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-29 18:47 ` Bryan Cheng @ 2015-05-30 1:36 ` Cameron Garnham 0 siblings, 0 replies; 69+ messages in thread From: Cameron Garnham @ 2015-05-30 1:36 UTC (permalink / raw) To: bitcoin-development [-- Attachment #1: Type: text/plain, Size: 7484 bytes --] First off, I am glad that the idea of dynamic block size adjustment is gaining some attention, in particular the model that I proposed. I wanted to take some time and explain some of the philosophy of how, and why, I proposed this particular model. When Bitcoin was first made, there was a 32MB block size limit; this was quickly found to be open to spam (and potentially DOS, as the code was not at all optimized to support large blocks), and was reduced to 1MB. This was a quick fix that was never intended to last; at some point the network should come to an understanding, a consensus if you will, of what (and how much) belongs in a block. The core point of this is that miners have always held, and will always hold, the power to decide what goes into blocks; this implicitly, obviously, includes how large blocks are. Miners are able to come to any sort of agreement they wish, providing the bitcoin clients accept their blocks as valid. Say Satoshi had never decided to place the 1MB block limit: it would be up to the miners to decide what they consider a ‘reasonable’ block to be. However, they would need to find some way to communicate this and reach an agreement; some protocol. They could, say, have done this informally on what is now the bitcointalk forum, or used Twitter. However, what they really need is indeed a "consensus protocol". Some simple terms to define what is acceptable and what is not. Hence the proposal introducing a consensus protocol for block sizes: instead of just having a hard limit (enforced by everyone), we have a constant factor above the average block size over a fixed interval that is soft-forked by only the miners. (The next simplest mathematical construct.) This proposal is entirely a soft fork and may be implemented without changing any client code whatsoever. In fact, it could be implemented by only a simple 51% majority of miners, with or without gaining the wider community consensus. (Assuming that the 1MB block size rule still applies.) The nice thing about this is that it really is impossible to stop; for example, if pre-relaying of block headers is implemented, the miners could always soft-fork to include the block size in the coinbase. The only reason that the miners have not done this yet is that there has not yet been a strong will to increase transaction fees. If we assume the miners will operate in a way that collectively maximizes profit, then we can assume they will not try to maximize the utility of the network (having as many transactions as possible), but rather to have as few transactions as the total economy can support the cost of. Meaning that limiting to much smaller blocks will probably be much more profitable than having large blocks. Since there is no requirement for the clients to know about the block size consensus protocol, this truly can be a ‘bi-directional soft fork’, in that the miners can choose to change the rules at any time, with only a simple 51% majority. Therefore, any parameters that we pick are always up for debate. Why the 1.5x over 2016 blocks? Using some game theory and deduction, I wished to pick the type of agreement that would be natural for the miners to come to (selfishly). 
First, why 1.5x: this means that only a super-majority of miners can easily increase the block size; there is no natural incentive for miners to produce large blocks that have very few fees. Second, why 2016 blocks for adjusting the average: miners HATE unpredictability; for shorter time periods the miner would need to have infrastructure ready to support potentially much larger blocks almost immediately. 2016 blocks is a period that the miners are already well used to, meaning that it will take slightly less than a month for blocks of double size to be permitted. This entire infrastructure can be implemented without needing to update any clients; once implemented, tested, solid, and well accepted by the (mining) community, we can then revisit increasing the 1MB hard limit. (If we still have demand for it; maybe the average block size will reduce to, say, 100KB.) Cam. > -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 > > While being in the Bitcoin community for a long time, I haven't > been so directly involved in the development. However I wish to > suggest a different pre-hard-fork soft-fork approach: > > > Set a 'block size cap' in a similar way to how we set > difficulty. > > Every 2016 blocks take the average size of the blocks and multiply > the size by 1.5x, rejecting blocks that are larger than this size, > for the next 2016 period. > > I would of course suggest that we keep the limits at min 100kb and > max (initially) 990kb (not 1mb on purpose, as this should become > the new limit), rounding up to the nearest 10kb. > > A: we don't have pressure at the 1mb limit, (we reduce the limit in > a flexible manner to 990kb). > > B: we can upgrade the network to XYZ hard-limit, then slowly raise > the soft-limit after being sure the network, as a whole, is ready. > > If we one day remove the block-size limit, this rule will stop a > rogue miner from making 10mb, or 100mb blocks, or 1gb blocks. > > This could be implemented by the miners without breaking any of > the clients, and would tend to produce a better dynamic fee > pressure. > > > This will give the mechanics to the miners to create consensus to > agree what block-sizes they believe are best for the network, and > allows the block-sizes to dynamically grow in response to larger > demand. > > > > On 5/8/2015 10:35 AM, Pieter Wuille wrote: >> On May 7, 2015 3:08 PM, "Roy Badami" <roy@gnomon.org.uk> wrote: >>> >>> On Thu, May 07, 2015 at 11:49:28PM +0200, Pieter Wuille wrote: >>>> I would not modify my node if the change introduced a >>>> perpetual 100 BTC subsidy per block, even if 99% of miners >>>> went along with it. >>> >>> Surely, in that scenario Bitcoin is dead. If the fork you >>> prefer has only 1% of the hash power it is trivially vulnerable >>> not just to a 51% attack but to a 501% attack, not to mention >>> the fact that you'd only be getting one block every 16 hours. >> >> Yes, indeed, Bitcoin would be dead if this actually happens. But >> that is still where the power lies: before anyone (miners or >> others) would think about trying such a change, they would need >> to convince people and be sure they will effectively modify >> their code. 
>> _______________________________________________ >> Bitcoin-development mailing list >> Bitcoin-development@lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/bitcoin-development [-- Attachment #2: OpenPGP digital signature --] [-- Type: application/pgp-signature, Size: 213 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
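A sketch of the adjustment rule Cameron describes, using the numbers from his quoted earlier post: 1.5x the average block size over each 2016-block period, clamped to [100kb, 990kb] and rounded up to the nearest 10kb:

    import math

    def next_period_cap(sizes_last_2016):
        # Recomputed once per 2016-block period, like difficulty.
        avg = sum(sizes_last_2016) / len(sizes_last_2016)
        cap = min(max(1.5 * avg, 100000), 990000)
        return int(math.ceil(cap / 10000.0) * 10000)  # round up to 10kb

    print(next_period_cap([400000] * 2016))  # 600000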
* Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction 2015-05-28 17:19 ` Gavin Andresen 2015-05-28 17:34 ` Mike Hearn @ 2015-05-28 17:39 ` Raystonn . 2015-05-28 17:59 ` Pieter Wuille 2015-05-28 17:50 ` [Bitcoin-development] Proposed alternatives to the 20MB step function Peter Todd 2 siblings, 1 reply; 69+ messages in thread From: Raystonn . @ 2015-05-28 17:39 UTC (permalink / raw) To: Gavin Andresen, Mike Hearn; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 2188 bytes --] I agree that developers should avoid imposing economic policy. It is dangerous for Bitcoin and the core developers themselves to become such a central point of attack for those wishing to disrupt Bitcoin. My opinion is that these things are better left to a decentralized free market anyhow. From: Gavin Andresen Sent: Thursday, May 28, 2015 10:19 AM To: Mike Hearn Cc: Bitcoin Dev Subject: Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction On Thu, May 28, 2015 at 1:05 PM, Mike Hearn <mike@plan99.net> wrote: Isn't that a step backwards, then? I see no reason for fee pressure to exist at the moment. All it's doing is turning away users for no purpose: mining isn't supported by fees, and the tiny fees we use right now seem to be good enough to stop penny flooding. Why not set the max size to be 20x the average size? Why 2x, given you just pointed out that'd result in blocks shrinking rather than growing. Twenty is scary. And two is a very neutral number: if 50% of hashpower want the max size to grow as fast as possible and 50% are dead-set opposed to any increase in max size, then half produce blocks 2 times as big, half produce empty blocks, and the max size doesn't change. If it was 20, then a small minority of miners could force a max size increase. (if it is less than 2, then a minority of miners can force the block size down) As for whether there "should" be fee pressure now or not: I have no opinion, besides "we should make block propagation faster so there is no technical reason for miners to produce tiny blocks." I don't think us developers should be deciding things like whether or not fees are too high, too low, ..... -- -- Gavin Andresen ------------------------------------------------------------------------------ _______________________________________________ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development [-- Attachment #2: Type: text/html, Size: 3947 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction 2015-05-28 17:39 ` [Bitcoin-development] Proposed alternatives to the 20MB stepfunction Raystonn . @ 2015-05-28 17:59 ` Pieter Wuille 2015-05-28 18:21 ` Gavin Andresen 0 siblings, 1 reply; 69+ messages in thread From: Pieter Wuille @ 2015-05-28 17:59 UTC (permalink / raw) To: Raystonn; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1227 bytes --] On May 28, 2015 10:42 AM, "Raystonn ." <raystonn@hotmail.com> wrote: > > I agree that developers should avoid imposing economic policy. It is dangerous for Bitcoin and the core developers themselves to become such a central point of attack for those wishing to disrupt Bitcoin. I could not agree more that developers should not be in charge of the network rules. Which is why - in my opinion - hard forks cannot be controversial things. A controversial change to the software, forced to be adopted by the public because the only alternative is a permanent chain fork, is a use of power that developers (or anyone) should not have, and an incredibly dangerous precedent for other changes that only a subset of participants would want. The block size is also not just an economic policy. It is the compromise the _network_ chooses to make between utility and various forms of centralization pressure, and we should treat it as a compromise, and not as some limit that is inferior to scaling demands. I personally think the block size should increase, by the way, but only if we can do it under a policy of doing it after technological growth has been shown to be sufficient to support it without increased risk. -- Pieter [-- Attachment #2: Type: text/html, Size: 1424 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB stepfunction 2015-05-28 17:59 ` Pieter Wuille @ 2015-05-28 18:21 ` Gavin Andresen 0 siblings, 0 replies; 69+ messages in thread From: Gavin Andresen @ 2015-05-28 18:21 UTC (permalink / raw) To: Pieter Wuille; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 794 bytes --] On Thu, May 28, 2015 at 1:59 PM, Pieter Wuille <pieter.wuille@gmail.com> wrote: > I personally think the block size should increase, by the way, but only if > we can do it under a policy of doing it after technological growth has been > shown to be sufficient to support it without increased risk. > > Can you be more specific about this? What risks are you worried about? I've tried to cover all that I've heard about in my blog posts about why I think the risks of 20MB blocks are outweighed by the benefits; am I missing something? (blog posts are linked from http://gavinandresen.ninja/time-to-roll-out-bigger-blocks ) There is the "a sudden jump to a 20MB max might have unforeseen consequences" risk that I don't address, but a dynamic increase would fix that. -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 1572 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 17:19 ` Gavin Andresen 2015-05-28 17:34 ` Mike Hearn 2015-05-28 17:39 ` [Bitcoin-development] Proposed alternatives to the 20MB stepfunction Raystonn . @ 2015-05-28 17:50 ` Peter Todd 2 siblings, 0 replies; 69+ messages in thread From: Peter Todd @ 2015-05-28 17:50 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 1363 bytes --] On Thu, May 28, 2015 at 01:19:44PM -0400, Gavin Andresen wrote: > As for whether there "should" be fee pressure now or not: I have no > opinion, besides "we should make block propagation faster so there is no > technical reason for miners to produce tiny blocks." I don't think us > developers should be deciding things like whether or not fees are too high, > too low, ..... Note that the majority of hashing power is using Matt Corallo's block relay network, something I confirmed the other day through my mining contacts. Interestingly, the miners that aren't using it include some of the largest pools; I haven't yet gotten an answer as to what their rationale for not using it was exactly. Importantly, this does mean that block propagation is probably fairly close to optimal already, modulo major changes to the consensus protocol; IBLT won't improve the situation much, if any. It's also notable that we're already having issues with miners turning validation off as a way to lower their latency; I've been asked myself about the possibility of creating an "SPV miner" that skips validation while new blocks are propagating to shave off time and builds directly off of block headers corresponding to blocks with unknown contents. -- 'peter'[:-1]@petertodd.org 00000000000000000327487b689490b73f9d336b3008f82114fd3ada336bcac0 [-- Attachment #2: Digital signature --] [-- Type: application/pgp-signature, Size: 650 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 15:53 ` Gavin Andresen 2015-05-28 17:05 ` Mike Hearn @ 2015-05-28 17:14 ` Thomas Voegtlin 2015-05-28 17:34 ` Pieter Wuille 2015-05-29 17:45 ` Aaron Voisine 3 siblings, 0 replies; 69+ messages in thread From: Thomas Voegtlin @ 2015-05-28 17:14 UTC (permalink / raw) To: bitcoin-development On 28/05/2015 17:53, Gavin Andresen wrote: > > So my straw-man proposal would be: max size 2x average size over last 144 > blocks, calculated at every block. > I like that idea. Average is a better choice than median. The median is not well defined on discrete sets, as shown in your example, and there is no need to be robust to outliers, thanks to the max size. ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 15:53 ` Gavin Andresen 2015-05-28 17:05 ` Mike Hearn 2015-05-28 17:14 ` Thomas Voegtlin @ 2015-05-28 17:34 ` Pieter Wuille 2015-05-29 17:45 ` Aaron Voisine 3 siblings, 0 replies; 69+ messages in thread From: Pieter Wuille @ 2015-05-28 17:34 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 7147 bytes --] > until we have size-independent new block propagation I don't really believe that is possible. I'll argue why below. To be clear, this is not an argument against increasing the block size, only against using the assumption of size-independent propagation. There are several significant improvements likely possible to various aspects of block propagation, but I don't believe you can make any part completely size-independent. Perhaps the remaining aspects result in terms in the total time that vanish compared to the link latencies for 1 MB blocks, but there will be some block sizes for which this is no longer the case, and we need to know where that is the case. * You can't assume that every transaction is pre-relayed and pre-validated. This can happen due to non-uniform relay policies (different codebases, and future things like size-limited mempools), double-spend attempts, and transactions generated before a block had time to propagate. You've previously argued for a policy of not including too recent transactions, but that requires a bound on network diameter, and if these late transactions are profitable, it has exactly the same problem as making larger blocks non-proportionally more economic for larger pools if propagation time is size-dependent. * This results in extra bandwidth usage for efficient relay protocols, and if discrepancy estimation mispredicts the size of IBLT or error correction data needed, extra roundtrips. * Signature validation for unrelayed transactions will be needed at block relay time. * Database lookups for the inputs of unrelayed transactions cannot be cached in advance. * Block validation with 100% known and pre-validated transactions is not constant time, due to updates that need to be made to the UTXO set (and future ideas like UTXO commitments would make this effect an order of magnitude worse). * More efficient relay protocols also have higher CPU cost for encoding/decoding. Again, none of this is a reason why the block size can't increase. If availability of hardware with higher bandwidth, faster disk/RAM access times, and faster CPUs increases, we should be able to have larger blocks with the same propagation profile as smaller blocks with earlier technology. But we should know how technology scales with larger blocks, and I don't believe we do, apart from microbenchmarks in laboratory conditions. -- Pieter On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock <bip@mattwhitlock.name> wrote: > Between all the flames on this list, several ideas were raised that did > not get much attention. I hereby resubmit these ideas for consideration and > discussion. > > - Perhaps the hard block size limit should be a function of the actual > block sizes over some trailing sampling period. For example, take the > median block size among the most recent 2016 blocks and multiply it by 1.5. > This allows Bitcoin to scale up gradually and organically, rather than > having human beings guessing at what is an appropriate limit. > A lot of people like this idea, or something like it. 
It is nice and simple, which is really important for consensus-critical code. With this rule in place, I believe there would be more "fee pressure" (miners would be creating smaller blocks) today. I created a couple of histograms of block sizes to infer what policy miners are ACTUALLY following today with respect to block size: Last 1,000 blocks: http://bitcoincore.org/~gavin/sizes_last1000.html Notice a big spike at 750K -- the default size for Bitcoin Core. This graph might be misleading, because transaction volume or fees might not be high enough over the last few days to fill blocks to whatever limit miners are willing to mine. So I graphed a time when (according to statoshi.info) there WERE a lot of transactions waiting to be confirmed: http://bitcoincore.org/~gavin/sizes_357511.html That might also be misleading, because it is possible there were a lot of transactions waiting to be confirmed because miners who choose to create small blocks got lucky and found more blocks than normal. In fact, it looks like that is what happened: more smaller-than-normal blocks were found, and the memory pool backed up. So: what if we had a dynamic maximum size limit based on recent history? The average block size is about 400K, so a 1.5x rule would make the max block size 600K; miners would definitely be squeezing out transactions / putting pressure to increase transaction fees. Even a 2x rule (implying 800K max blocks) would, today, be squeezing out transactions / putting pressure to increase fees. Using a median size instead of an average means the size can increase or decrease more quickly. For example, imagine the rule is "median of last 2016 blocks" and 49% of miners are producing 0-size blocks and 51% are producing max-size blocks. The median is max-size, so the 51% have total control over making blocks bigger. Swap the roles, and the median is min-size. Because of that, I think using an average is better-- it means the max size will change (up or down) more slowly. I also think 2016 blocks is too long, because transaction volumes change quicker than that. An average over 144 blocks (last 24 hours) would be better able to handle increased transaction volume around major holidays, and would also be able to react more quickly if an economically irrational attacker attempted to flood the network with fee-paying transactions. So my straw-man proposal would be: max size 2x average size over last 144 blocks, calculated at every block. There are a couple of other changes I'd pair with that consensus change: + Make the default mining policy for Bitcoin Core neutral-- have its target block size be the average size, so miners that don't care will "go along with the people who do care." + Use something like Greg's formula for size instead of bytes-on-the-wire, to discourage bloating the UTXO set. --------- When I've proposed (privately, to the other core committers) some dynamic algorithm the objection has been "but that gives miners complete control over the max block size." I think that worry is unjustified right now-- certainly, until we have size-independent new block propagation there is an incentive for miners to keep their blocks small, and we see miners creating small blocks even when there are fee-paying transactions waiting to be confirmed. I don't even think it will be a problem if/when we do have size-independent new block propagation, because I think the combination of the random timing of block-finding plus a dynamic limit as described above will create a healthy system. 
If I'm wrong, then it seems to me the miners will have a very strong incentive to, collectively, impose whatever rules are necessary (maybe a soft-fork to put a hard cap on block size) to make the system healthy again. -- -- Gavin Andresen ------------------------------------------------------------------------------ _______________________________________________ Bitcoin-development mailing list Bitcoin-development@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bitcoin-development [-- Attachment #2: Type: text/html, Size: 8893 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
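Gavin's 49%/51% example is easy to check numerically, and it is the core of his argument for an average over a median:

    from statistics import mean, median

    MAX_SIZE = 1000000
    sizes = [0] * 49 + [MAX_SIZE] * 51   # 49% empty blocks, 51% max-size

    print(median(sizes))  # 1000000.0: a bare majority sets the limit outright
    print(mean(sizes))    # 510000.0: influence proportional to hashpower

With a median, control of the limit flips entirely at 50% of blocks; with an average, pushing the limit up or down costs influence in proportion to the share of blocks produced, which is why Gavin argues the average moves more slowly.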
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 15:53 ` Gavin Andresen ` (2 preceding siblings ...) 2015-05-28 17:34 ` Pieter Wuille @ 2015-05-29 17:45 ` Aaron Voisine 3 siblings, 0 replies; 69+ messages in thread From: Aaron Voisine @ 2015-05-29 17:45 UTC (permalink / raw) To: Gavin Andresen; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 5468 bytes --] > miners would definitely be squeezing out transactions / putting pressure to increase transaction fees I'd just like to re-iterate that transactions getting "squeezed out" (failure after a lengthy period of uncertainty) is a radical change from the current behavior of the network. There are plenty of avenues to create fee pressure without resorting to such a drastic change in how the network works today. Aaron Voisine co-founder and CEO breadwallet.com On Thu, May 28, 2015 at 8:53 AM, Gavin Andresen <gavinandresen@gmail.com> wrote: > On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock <bip@mattwhitlock.name> > wrote: > >> Between all the flames on this list, several ideas were raised that did >> not get much attention. I hereby resubmit these ideas for consideration and >> discussion. >> >> - Perhaps the hard block size limit should be a function of the actual >> block sizes over some trailing sampling period. For example, take the >> median block size among the most recent 2016 blocks and multiply it by 1.5. >> This allows Bitcoin to scale up gradually and organically, rather than >> having human beings guessing at what is an appropriate limit. >> > > A lot of people like this idea, or something like it. It is nice and > simple, which is really important for consensus-critical code. > > With this rule in place, I believe there would be more "fee pressure" > (miners would be creating smaller blocks) today. I created a couple of > histograms of block sizes to infer what policy miners are ACTUALLY > following today with respect to block size: > > Last 1,000 blocks: > http://bitcoincore.org/~gavin/sizes_last1000.html > > Notice a big spike at 750K -- the default size for Bitcoin Core. > This graph might be misleading, because transaction volume or fees might > not be high enough over the last few days to fill blocks to whatever limit > miners are willing to mine. > > So I graphed a time when (according to statoshi.info) there WERE a lot of > transactions waiting to be confirmed: > http://bitcoincore.org/~gavin/sizes_357511.html > > That might also be misleading, because it is possible there were a lot of > transactions waiting to be confirmed because miners who choose to create > small blocks got lucky and found more blocks than normal. In fact, it > looks like that is what happened: more smaller-than-normal blocks were > found, and the memory pool backed up. > > So: what if we had a dynamic maximum size limit based on recent history? > > The average block size is about 400K, so a 1.5x rule would make the max > block size 600K; miners would definitely be squeezing out transactions / > putting pressure to increase transaction fees. Even a 2x rule (implying > 800K max blocks) would, today, be squeezing out transactions / putting > pressure to increase fees. > > Using a median size instead of an average means the size can increase or > decrease more quickly. For example, imagine the rule is "median of last > 2016 blocks" and 49% of miners are producing 0-size blocks and 51% are > producing max-size blocks. The median is max-size, so the 51% have total > control over making blocks bigger. 
Swap the roles, and the median is > min-size. > > Because of that, I think using an average is better-- it means the max > size will change (up or down) more slowly. > > I also think 2016 blocks is too long, because transaction volumes change > quicker than that. An average over 144 blocks (last 24 hours) would be > better able to handle increased transaction volume around major holidays, > and would also be able to react more quickly if an economically irrational > attacker attempted to flood the network with fee-paying transactions. > > So my straw-man proposal would be: max size 2x average size over last 144 > blocks, calculated at every block. > > There are a couple of other changes I'd pair with that consensus change: > > + Make the default mining policy for Bitcoin Core neutral-- have its > target block size be the average size, so miners that don't care will "go > along with the people who do care." > > + Use something like Greg's formula for size instead of bytes-on-the-wire, > to discourage bloating the UTXO set. > > > --------- > > When I've proposed (privately, to the other core committers) some dynamic > algorithm the objection has been "but that gives miners complete control > over the max block size." > > I think that worry is unjustified right now-- certainly, until we have > size-independent new block propagation there is an incentive for miners to > keep their blocks small, and we see miners creating small blocks even when > there are fee-paying transactions waiting to be confirmed. > > I don't even think it will be a problem if/when we do have > size-independent new block propagation, because I think the combination of > the random timing of block-finding plus a dynamic limit as described above > will create a healthy system. > > If I'm wrong, then it seems to me the miners will have a very strong > incentive to, collectively, impose whatever rules are necessary (maybe a > soft-fork to put a hard cap on block size) to make the system healthy again. > > > -- > -- > Gavin Andresen > > > > ------------------------------------------------------------------------------ > > _______________________________________________ > Bitcoin-development mailing list > Bitcoin-development@lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/bitcoin-development > > [-- Attachment #2: Type: text/html, Size: 7382 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function @ 2015-05-08 14:57 Steven Pine 0 siblings, 0 replies; 69+ messages in thread From: Steven Pine @ 2015-05-08 14:57 UTC (permalink / raw) To: bitcoin-development [-- Attachment #1: Type: text/plain, Size: 129 bytes --] Block size scaling should be as transparent and simple as possible, like pegging it to total transactions per difficulty change. [-- Attachment #2: Type: text/html, Size: 146 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function
@ 2015-05-09 0:13 Raystonn
0 siblings, 0 replies; 69+ messages in thread
From: Raystonn @ 2015-05-09 0:13 UTC (permalink / raw)
To: mark; +Cc: Bitcoin Development
It seems to me all this would do is encourage 0-transaction blocks, crippling the network. Individual blocks don't have a "maximum" block size; they have an actual block size. Rational miners would pick the lowest difficulty on offer, and with it the smaller block size the rule ties to that difficulty, lowering the "effective" maximum block size to whatever is optimal for rational miners (see the sketch after the quoted proposal below). This would be a tragedy of the commons.
In addition to that, average block confirmation time, and hence the rate of inflation of the bitcoin currency, would now be subject to manipulation. This undermines a core value of Bitcoin.
> On Fri, May 8, 2015 at 1:33 PM, Mark Friedenbach <mark@friedenbach.org> wrote:
>
> * For each block, the miner is allowed to select a different difficulty (nBits) within a certain range, e.g. +/- 25% of the expected difficulty, and this miner-selected difficulty is used for the proof of work check. In addition to adjusting the hashcash target, selecting a different difficulty also raises or lowers the maximum block size for that block by a function of the difference in difficulty.
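One plausible reading of that rule, sketched in hypothetical Python (the quoted proposal only says size moves "by a function of the difference in difficulty", so the linear scaling and the base size below are assumptions, not part of the proposal):

    def allowed_block_size(expected_difficulty, chosen_difficulty,
                           base_max_size=1_000_000):
        # Flexcap validation sketch: a miner may pick any difficulty
        # within +/-25% of the expected value; mining harder buys a
        # proportionally larger block, mining easier shrinks it.
        ratio = chosen_difficulty / expected_difficulty
        if not 0.75 <= ratio <= 1.25:
            raise ValueError("difficulty outside the +/-25% band")
        return int(base_max_size * ratio)

Under this reading, Raystonn's objection amounts to: unless fees cover the extra hashing cost, a rational miner picks ratio = 0.75 every block, dragging the effective cap to the bottom of the band.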
^ permalink raw reply [flat|nested] 69+ messages in thread
[parent not found: <CAAjy6kDdB8uODpPcmS8h4eap8fke7Y2y773NHJZja8tB5mPk4Q@mail.gmail.com>]
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function [not found] <CAAjy6kDdB8uODpPcmS8h4eap8fke7Y2y773NHJZja8tB5mPk4Q@mail.gmail.com> @ 2015-05-28 16:30 ` Steven Pine [not found] ` <CABsx9T03aNRC5DRbR06nNtsiBdJAcQsGAHvbCOe3pnuRpdvq5w@mail.gmail.com> 0 siblings, 1 reply; 69+ messages in thread From: Steven Pine @ 2015-05-28 16:30 UTC (permalink / raw) To: bitcoin-development [-- Attachment #1: Type: text/plain, Size: 6449 bytes --] I would support a dynamic block size increase as outlined. I have a few questions though. Is scaling by average block size the best and easiest method? Why not scale by transactions confirmed instead? Anyone can write and relay a transaction, and transactions are what we want to scale for, so why not measure them directly? I would prefer changes every 2016 blocks: it is a well-known schedule and a reasonable time period for planning. Two weeks is plenty fast; especially at a 50% rate of increase, in a few months the block size could be dramatically larger. A daily change to the size seems confusing, especially considering that the max block size would be dipping up and down. Also, if something breaks, trying to fix it in a day seems problematic; the hard-fork database-size-difference error comes to mind. Finally, daily 50% increases could quickly crowd out smaller nodes if changes happen too quickly to adapt to. > Date: Thu, 28 May 2015 11:53:41 -0400 > From: Gavin Andresen <gavinandresen@gmail.com> > Subject: > To: Matt Whitlock <bip@mattwhitlock.name> > Cc: Bitcoin Dev <bitcoin-development@lists.sourceforge.net> > Message-ID: > < CABsx9T3-zxCAagAS0megd06xvG5n-3tUL9NUK9TT3vt7XNL9Tg@mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock <bip@mattwhitlock.name> wrote: > > > Between all the flames on this list, several ideas were raised that did > > not get much attention. I hereby resubmit these ideas for consideration and > > discussion. > > > > - Perhaps the hard block size limit should be a function of the actual > > block sizes over some trailing sampling period. For example, take the > > median block size among the most recent 2016 blocks and multiply it by 1.5. > > This allows Bitcoin to scale up gradually and organically, rather than > > having human beings guessing at what is an appropriate limit. > > > > A lot of people like this idea, or something like it. It is nice and > simple, which is really important for consensus-critical code. > > With this rule in place, I believe there would be more "fee pressure" > (miners would be creating smaller blocks) today. I created a couple of > histograms of block sizes to infer what policy miners are ACTUALLY > following today with respect to block size: > > Last 1,000 blocks: > http://bitcoincore.org/~gavin/sizes_last1000.html > > Notice a big spike at 750K -- the default size for Bitcoin Core. > This graph might be misleading, because transaction volume or fees might > not be high enough over the last few days to fill blocks to whatever limit > miners are willing to mine. > > So I graphed a time when (according to statoshi.info) there WERE a lot of > transactions waiting to be confirmed: > http://bitcoincore.org/~gavin/sizes_357511.html > > That might also be misleading, because it is possible there were a lot of > transactions waiting to be confirmed because miners who choose to create > small blocks got lucky and found more blocks than normal. 
In fact, it > looks like that is what happened: more smaller-than-normal blocks were > found, and the memory pool backed up. > > So: what if we had a dynamic maximum size limit based on recent history? > > The average block size is about 400K, so a 1.5x rule would make the max > block size 600K; miners would definitely be squeezing out transactions / > putting pressure to increase transaction fees. Even a 2x rule (implying > 800K max blocks) would, today, be squeezing out transactions / putting > pressure to increase fees. > > Using a median size instead of an average means the size can increase or > decrease more quickly. For example, imagine the rule is "median of last > 2016 blocks" and 49% of miners are producing 0-size blocks and 51% are > producing max-size blocks. The median is max-size, so the 51% have total > control over making blocks bigger. Swap the roles, and the median is > min-size. > > Because of that, I think using an average is better-- it means the max size > will change (up or down) more slowly. > > I also think 2016 blocks is too long, because transaction volumes change > quicker than that. An average over 144 blocks (last 24 hours) would be > better able to handle increased transaction volume around major holidays, > and would also be able to react more quickly if an economically irrational > attacker attempted to flood the network with fee-paying transactions. > > So my straw-man proposal would be: max size 2x average size over last 144 > blocks, calculated at every block. > > There are a couple of other changes I'd pair with that consensus change: > > + Make the default mining policy for Bitcoin Core neutral-- have its target > block size be the average size, so miners that don't care will "go along > with the people who do care." > > + Use something like Greg's formula for size instead of bytes-on-the-wire, > to discourage bloating the UTXO set. > > > --------- > > When I've proposed (privately, to the other core committers) some dynamic > algorithm the objection has been "but that gives miners complete control > over the max block size." > > I think that worry is unjustified right now-- certainly, until we have > size-independent new block propagation there is an incentive for miners to > keep their blocks small, and we see miners creating small blocks even when > there are fee-paying transactions waiting to be confirmed. > > I don't even think it will be a problem if/when we do have size-independent > new block propagation, because I think the combination of the random timing > of block-finding plus a dynamic limit as described above will create a > healthy system. > > If I'm wrong, then it seems to me the miners will have a very strong > incentive to, collectively, impose whatever rules are necessary (maybe a > soft-fork to put a hard cap on block size) to make the system healthy again. > > > -- > -- > Gavin Andresen > -------------- next part -------------- > An HTML attachment was scrubbed... 
> > End of Bitcoin-development Digest, Vol 48, Issue 122 [-- Attachment #2: Type: text/html, Size: 8302 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
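A small numeric illustration of the median failure mode described in the quoted message, in hypothetical Python (the 49%/51% split across 2016 blocks is Gavin's own example):

    from statistics import mean, median

    MAX_SIZE = 1_000_000
    # 49% of miners produce empty blocks, 51% produce max-size blocks.
    sizes = [0] * 988 + [MAX_SIZE] * 1028   # 988 + 1028 = 2016 blocks

    print(median(sizes))  # 1000000.0: the 51% fully control a median rule
    print(mean(sizes))    # ~509,921: an average moves far more gradually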
[parent not found: <CABsx9T03aNRC5DRbR06nNtsiBdJAcQsGAHvbCOe3pnuRpdvq5w@mail.gmail.com>]
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function [not found] ` <CABsx9T03aNRC5DRbR06nNtsiBdJAcQsGAHvbCOe3pnuRpdvq5w@mail.gmail.com> @ 2015-05-28 18:25 ` Steven Pine 2015-05-28 18:31 ` Gavin Andresen 0 siblings, 1 reply; 69+ messages in thread From: Steven Pine @ 2015-05-28 18:25 UTC (permalink / raw) To: bitcoin-development, gavinandresen [-- Attachment #1: Type: text/plain, Size: 3202 bytes --] My understanding, which is very likely wrong in one way or another, is that transaction size and block size are two slightly different things, but perhaps the difference is so negligible that block size is a fine stand-in for total transaction throughput. Potentially doubling the block size every day is frankly imprudent. The exponential increases in difficulty, which were often closer to 10% or 20% every 2016 blocks, were and are plenty fast; changing the block size by a factor of two each day is the mentality I would expect from a startup with the move-fast-break-things motto. Infrastructure takes time: not everyone wants to run a node on a virtual Amazon instance, provisioning additional hard drive space and bandwidth can't happen overnight, and trying to plan when the block size from one week to the next is a total mystery would be extremely difficult. Anyone who has spent time examining the mining difficulty increases and their trajectory knows future planning is very, very hard; allowing the block size to double daily would make it impossible. Perhaps a middle way would be a 300% increase every 2016 blocks, which would scale to 20 MB within a month or two (see the arithmetic sketch after this message). The problem is that exponential increases seem slow until they seem fast. If the network begins to grow and the block size hits 20 MB, then the next day 40 MB, then 80 MB... small nodes could get swamped within a week or less. As for your point about Christmas: Bitcoin is a global network; Christmas, while widely celebrated, isn't the only holiday, and planning around American buying habits seems short-sighted and no different from developers trying to choose what the right fee pressure is. On May 28, 2015 1:22 PM, "Gavin Andresen" <gavinandresen@gmail.com> wrote: > > On Thu, May 28, 2015 at 12:30 PM, Steven Pine <steven.pine@gmail.com> wrote: >> >> I would support a dynamic block size increase as outlined. I have a few questions though. >> >> Is scaling by average block size the best and easiest method? Why not scale by transactions confirmed instead? Anyone can write and relay a transaction, and transactions are what we want to scale for, so why not measure them directly? > > > What do you mean? Transactions aren't confirmed until they're in a block... > >> >> I would prefer changes every 2016 blocks: it is a well-known schedule and a reasonable time period for planning. Two weeks is plenty fast; especially at a 50% rate of increase, in a few months the block size could be dramatically larger. > > > What type of planning do you imagine is necessary? > > And have you looked at transaction volumes for credit-card payment networks around Christmas? > >> >> A daily change to the size seems confusing, especially considering that the max block size would be dipping up and down. Also, if something breaks, trying to fix it in a day seems problematic; the hard-fork database-size-difference error comes to mind. Finally, daily 50% increases could quickly crowd out smaller nodes if changes happen too quickly to adapt to. > > The bottleneck is transaction volume; blocks won't get bigger unless there are fee-paying transactions around to pay them. 
What scenario are you imagining where transaction volume increases by 50% a day for a sustained period of time? > > -- > -- > Gavin Andresen [-- Attachment #2: Type: text/html, Size: 3725 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
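For scale, the compounding Steven worries about above, and the "300% increase" middle way he floats, work out roughly as follows in hypothetical Python (the 1 MB starting size, and reading "300% increase" as a 4x step, are assumptions):

    def periods_to_pass(target_mb, growth, start_mb=1.0):
        # Count compounding steps until the cap exceeds target_mb.
        size, periods = start_mb, 0
        while size < target_mb:
            size *= growth
            periods += 1
        return periods

    print(periods_to_pass(20, growth=2))  # 5: daily doubling passes 20 MB
                                          # in under a week
    print(periods_to_pass(20, growth=4))  # 3: a 4x step every 2016 blocks
                                          # takes about six weeks

which is consistent with Steven's "a month or two" estimate for the 2016-block variant.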
* Re: [Bitcoin-development] Proposed alternatives to the 20MB step function 2015-05-28 18:25 ` Steven Pine @ 2015-05-28 18:31 ` Gavin Andresen 0 siblings, 0 replies; 69+ messages in thread From: Gavin Andresen @ 2015-05-28 18:31 UTC (permalink / raw) To: Steven Pine; +Cc: Bitcoin Dev [-- Attachment #1: Type: text/plain, Size: 220 bytes --] Can we hold off on bike-shedding the particular choice of parameters until people have a chance to weigh in on whether or not there is SOME set of dynamic parameters they would support right now? -- -- Gavin Andresen [-- Attachment #2: Type: text/html, Size: 347 bytes --] ^ permalink raw reply [flat|nested] 69+ messages in thread
end of thread, other threads:[~2015-06-01 12:38 UTC | newest] Thread overview: 69+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2015-05-08 7:20 [Bitcoin-development] Proposed alternatives to the 20MB step function Matt Whitlock 2015-05-08 10:15 ` Mike Hearn 2015-05-08 10:30 ` Clément Elbaz 2015-05-08 12:32 ` Joel Joonatan Kaartinen 2015-05-08 12:48 ` Matt Whitlock 2015-05-08 13:24 ` Matt Whitlock 2015-05-08 12:48 ` Gavin Andresen 2015-05-08 16:51 ` Peter Todd 2015-05-08 22:36 ` Joel Joonatan Kaartinen 2015-05-09 18:30 ` Peter Todd 2015-05-08 15:57 ` Alex Mizrahi 2015-05-08 16:55 ` Bryan Bishop 2015-05-08 20:33 ` Mark Friedenbach 2015-05-08 22:43 ` Aaron Voisine 2015-05-08 22:45 ` Mark Friedenbach 2015-05-08 23:15 ` Aaron Voisine 2015-05-08 23:58 ` Mark Friedenbach 2015-05-09 3:36 ` Gregory Maxwell 2015-05-09 11:58 ` Gavin Andresen 2015-05-09 13:49 ` Tier Nolan 2015-05-10 17:36 ` Owen Gunden 2015-05-10 18:10 ` Mark Friedenbach 2015-05-10 21:21 ` Gavin Andresen 2015-05-10 21:33 ` Gregory Maxwell 2015-05-10 21:56 ` Rob Golding 2015-05-13 10:43 ` Tier Nolan 2015-05-16 0:22 ` Rusty Russell 2015-05-16 11:09 ` Tier Nolan 2015-05-18 1:42 ` Rusty Russell 2015-05-19 8:59 ` Tier Nolan 2015-05-10 21:48 ` Thomas Voegtlin 2015-05-10 22:31 ` Mark Friedenbach 2015-05-10 23:11 ` Thomas Voegtlin 2015-05-28 15:53 ` Gavin Andresen 2015-05-28 17:05 ` Mike Hearn 2015-05-28 17:19 ` Gavin Andresen 2015-05-28 17:34 ` Mike Hearn 2015-05-28 18:23 ` Gavin Andresen 2015-05-29 11:26 ` Mike Hearn 2015-05-29 11:42 ` Tier Nolan 2015-05-29 11:57 ` Mike Hearn 2015-05-29 12:39 ` Gavin Andresen 2015-05-29 14:00 ` insecurity 2015-05-29 14:15 ` Braun Brelin 2015-05-29 14:09 ` Tier Nolan 2015-05-29 14:20 ` Gavin Andresen 2015-05-29 14:22 ` Mike Hearn 2015-05-29 14:21 ` Mike Hearn 2015-05-29 14:22 ` Tier Nolan 2015-05-29 16:39 ` [Bitcoin-development] Proposed alternatives to the 20MB stepfunction Raystonn . 2015-05-29 18:28 ` Tier Nolan 2015-05-29 17:53 ` [Bitcoin-development] Proposed alternatives to the 20MB step function Admin Istrator 2015-05-30 9:03 ` Aaron Voisine 2015-06-01 11:30 ` Ricardo Filipe 2015-06-01 11:46 ` Marcel Jamin 2015-05-29 18:47 ` Bryan Cheng 2015-05-30 1:36 ` Cameron Garnham 2015-05-28 17:39 ` [Bitcoin-development] Proposed alternatives to the 20MB stepfunction Raystonn . 2015-05-28 17:59 ` Pieter Wuille 2015-05-28 18:21 ` Gavin Andresen 2015-05-28 17:50 ` [Bitcoin-development] Proposed alternatives to the 20MB step function Peter Todd 2015-05-28 17:14 ` Thomas Voegtlin 2015-05-28 17:34 ` Pieter Wuille 2015-05-29 17:45 ` Aaron Voisine 2015-05-08 14:57 Steven Pine 2015-05-09 0:13 Raystonn [not found] <CAAjy6kDdB8uODpPcmS8h4eap8fke7Y2y773NHJZja8tB5mPk4Q@mail.gmail.com> 2015-05-28 16:30 ` Steven Pine [not found] ` <CABsx9T03aNRC5DRbR06nNtsiBdJAcQsGAHvbCOe3pnuRpdvq5w@mail.gmail.com> 2015-05-28 18:25 ` Steven Pine 2015-05-28 18:31 ` Gavin Andresen