* [bitcoin-dev] Hard fork proposal from last week's meeting
From: Wang Chun @ 2017-03-28 16:59 UTC
To: bitcoin-dev
I proposed this hard fork approach last year at the Hong Kong Consensus
meeting, but it was immediately rejected by the Core devs there. More
than a year later, it seems that lots of people still haven't heard of
it, so I am posting it here again for comment.
The basic idea is that, as many of us agree, a hard fork is risky and
should be well prepared; we need a long time to deploy it.
Spam transactions aside, block capacity is approaching its limit, and
we must think ahead. Shall we code a patch right now that removes the
1MB block size limit but does not activate until far in the future? I
propose removing the 1MB limit at the next block halving in spring
2020, limiting block size only to 32MiB, which is the maximum message
size the current p2p protocol allows. This patch must be in the
immediate next release of Bitcoin Core.
With this patch in Core's next release, Bitcoin works just as before
and no fork occurs until spring 2020, but everyone knows a fork is
scheduled. Third-party services, libraries, wallets and exchanges will
have enough time to prepare for it over the next three years.
We don't yet have agreement on how to increase the block size limit.
There have been many proposals over the past years, such as BIP100,
101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so on. With
this patch already in Core's release, these hard fork proposals all
become soft forks. We'll have enough time to discuss all these
proposals and decide which one to adopt. For example, if we choose to
fork to only 2MB, then since 32MiB is already scheduled, reducing the
limit from 32MiB to 2MB will be a soft fork.
Anyway, we must code something right now, before it becomes too late.
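
To illustrate, here is a minimal sketch of the consensus check this
schedule implies. This is not actual Bitcoin Core code: the names, the
structure, and the use of height 630000 (the next-halving height that
the spring 2020 date corresponds to) are mine, for illustration only.

    // Sketch only: a height-gated block size limit, as proposed above.
    #include <cstddef>
    #include <cstdint>

    static const size_t OLD_MAX_BLOCK_SIZE = 1000000;            // current 1MB rule
    static const size_t P2P_MAX_MESSAGE_SIZE = 32 * 1024 * 1024; // 32MiB p2p ceiling
    static const int32_t FORK_HEIGHT = 630000;                   // next halving, spring 2020

    // The limit a block at `height` must satisfy: 1MB until the
    // scheduled height, afterwards only the 32MiB p2p message ceiling.
    size_t MaxBlockSize(int32_t height)
    {
        return height < FORK_HEIGHT ? OLD_MAX_BLOCK_SIZE : P2P_MAX_MESSAGE_SIZE;
    }

    bool CheckBlockSize(size_t serializedSize, int32_t height)
    {
        return serializedSize <= MaxBlockSize(height);
    }

Because the gate is a block height, nodes running such a release apply
exactly today's rules until the halving, which is what allows the patch
to ship years before it takes effect.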
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Matt Corallo @ 2017-03-28 17:13 UTC
To: Wang Chun, Bitcoin Protocol Discussion
Not sure what "last week's meeting" is in reference to?
Agreed that the hard fork should be well-prepared, but I think it's
dangerous to assume that a hard fork, as eventually agreed upon, would
be a simple relaxation of the block size. For example, Johnson Lau's
previous proposal, Spoonnet, which I think is probably one of the
better ones, would be incompatible with these rules.
I, of course, worry about what happens if we cannot come to consensus
on a number to soft fork down to, potentially significantly risking
miner profits (and, thus, the security of Bitcoin) if a group is able
to keep things "at the status quo". That said, for that to be
alleviated we could simply do something based on historical transaction
growth (which is somewhat linear, with a few inflection points), but
that number ends up being super low (e.g. somewhere around 2MB at the
next halving, which SegWit itself already provides) :/
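
As a rough illustration of the kind of calculation I mean, here is a
hypothetical linear fit through two data points, projected forward; the
sample numbers are placeholders I made up, not measurements:

    // Sketch: an implied block size from a linear fit of daily
    // transaction counts. The sample points are placeholders.
    #include <cstdio>

    struct Sample { double day; double txPerDay; };

    // Linear model tx(t) = a + b*t through two samples.
    double ProjectTxPerDay(Sample s0, Sample s1, double t)
    {
        double b = (s1.txPerDay - s0.txPerDay) / (s1.day - s0.day);
        return s0.txPerDay + b * (t - s0.day);
    }

    int main()
    {
        Sample s0{0, 60000};     // placeholder: a day a few years back
        Sample s1{700, 250000};  // placeholder: ~2 years later
        double atHalving = ProjectTxPerDay(s0, s1, 1800); // placeholder horizon
        // 144 blocks/day; ~250 bytes per average transaction (rough assumption).
        double mb = atHalving / 144.0 * 250.0 / 1e6;
        std::printf("implied block size: %.2f MB\n", mb);
        return 0;
    }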
We could, of course, focus on designing a hard fork's activation and
technical details with a very large block size increase in it (i.e.
closer to 4-6MB at the next halving or so, something we could at least
be confident we could develop software for), with the intention to soft
fork it back down if miner profits are suffering.
Matt
On 03/28/17 16:59, Wang Chun via bitcoin-dev wrote:
> [...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Jared Lee Richardson @ 2017-03-29 8:45 UTC
To: Matt Corallo, Bitcoin Protocol Discussion
> That said, for that to be alleviated we could simply do something
> based on historical transaction growth (which is somewhat linear,
> with a few inflection points),
Where do you get this? Transaction growth over the last 4 years
averages +65% per year, and over the last 2 years +80% per year. That's
very much not linear.
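
For concreteness, constant percentage growth per year is exponential,
not linear. A quick check of what those rates imply (only the 65% and
80% figures come from the numbers above; the code is mine):

    // Sketch: compounded percentage growth versus linear growth.
    #include <cmath>
    #include <cstdio>

    int main()
    {
        double compounded4y = std::pow(1.65, 4); // +65%/yr over 4 years: ~7.4x
        double compounded2y = std::pow(1.80, 2); // +80%/yr over 2 years: ~3.2x
        double linear4y = 1.0 + 4 * 0.65;        // adding 0.65x each year: 3.6x
        std::printf("65%%/yr compounded over 4y: %.1fx\n", compounded4y);
        std::printf("80%%/yr compounded over 2y: %.1fx\n", compounded2y);
        std::printf("linear +0.65x/yr over 4y:  %.1fx\n", linear4y);
        return 0;
    }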
On Tue, Mar 28, 2017 at 10:13 AM, Matt Corallo via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> [...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Alphonse Pace @ 2017-03-28 17:23 UTC
To: Wang Chun, Bitcoin Protocol Discussion
What meeting are you referring to? Who were the participants?
Removing the limit but relying on the p2p protocol is not really a true
32MiB limit, but a limit of whatever transport methods provide. This can
lead to differing consensus if alternative layers for relaying are used.
What you seem to be asking for is an unbounded block size (or at least
one determined by whatever miners produce). This has the possibility
(and even likelihood) of pushing many participants off the network,
including many small miners.
32MB in less than 3 years also appears to be far beyond the limits of
safety, which are known to be reached at far smaller sizes, and we
cannot expect hardware and networking layers to improve by those
amounts in that time.
It also seems like it would be much better to wait until SegWit activates
in order to truly measure the effects on the network from this increased
capacity before committing to any additional increases.
-Alphonse
On Tue, Mar 28, 2017 at 11:59 AM, Wang Chun via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> [...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Wang Chun @ 2017-03-28 17:31 UTC
To: Alphonse Pace; +Cc: Bitcoin Protocol Discussion
The basic idea is: let's stop debating whether we should upgrade to
2MB, 8MB or 32MiB. 32MiB is well above any proposal's upper limit, so
any final decision would be a soft fork relative to this
already-deployed release. If by 2020 we still agree that 1MB is enough,
the limit can be changed back to 1MB, and that would also be a soft
fork on top of it.
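
Stated mechanically, a rule change is a soft fork exactly when every
block valid under the new rules is also valid under the old rules; for
a pure size limit that reduces to a single comparison. A toy sketch
(all names are mine):

    // Sketch: tightening a size limit is a soft fork; loosening it
    // would be a hard fork.
    #include <cstdio>

    const unsigned long SCHEDULED_LIMIT = 32ul * 1024 * 1024; // 32MiB, deployed
    const unsigned long LATER_DECISION = 2000000;             // e.g. a 2MB decision

    int main()
    {
        // New-valid implies old-valid iff the new limit is no larger.
        bool isSoftFork = LATER_DECISION <= SCHEDULED_LIMIT;
        std::printf("a later %lu-byte limit on top of 32MiB is a %s fork\n",
                    LATER_DECISION, isSoftFork ? "soft" : "hard");
        return 0;
    }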
On Wed, Mar 29, 2017 at 1:23 AM, Alphonse Pace <alp.bitcoin@gmail.com> wrote:
> [...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Jeremy @ 2017-03-28 17:33 UTC
To: Wang Chun, Bitcoin Protocol Discussion
I think it's probably safer to have a fork-to-minimum (e.g. minimal
coinbase+header) after a certain date than to fork up at a certain
date. At least in that case the default isn't breaking consensus, but
you still get the same pressure to fork to a permanent solution.
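
A sketch of what such a rule could look like; the names, the flag
height, and the definition of "minimal" are my guesses at the shape of
the idea, not a specification:

    // Sketch: after a flag height, only near-empty blocks are valid
    // unless a permanent successor rule has activated. Illustrative only.
    #include <cstdint>

    static const int32_t FLAG_HEIGHT = 630000; // hypothetical deadline

    struct BlockSummary {
        int32_t height;
        unsigned txCount;     // including the coinbase
        bool successorActive; // a permanent sizing rule has activated
    };

    // "Minimal" here means coinbase only: a header plus one transaction.
    bool IsMinimal(const BlockSummary& b) { return b.txCount == 1; }

    bool CheckForkToMinimum(const BlockSummary& b)
    {
        if (b.height < FLAG_HEIGHT || b.successorActive)
            return true;     // normal rules apply
        return IsMinimal(b); // otherwise the chain limps until consensus forms
    }

The default thus fails closed: doing nothing strangles capacity rather
than expanding it, which is where the pressure to agree on a permanent
solution comes from.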
I don't endorse the above proposal; I remark on it only for the sake of
guiding the argument you are making.
--
@JeremyRubin <https://twitter.com/JeremyRubin>
On Tue, Mar 28, 2017 at 1:31 PM, Wang Chun via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> [...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Douglas Roark @ 2017-03-28 17:50 UTC
To: bitcoin-dev
On 2017/3/28 10:31, Wang Chun via bitcoin-dev wrote:
> The basic idea is: let's stop debating whether we should upgrade to
> 2MB, 8MB or 32MiB. 32MiB is well above any proposal's upper limit, so
> any final decision would be a soft fork relative to this
> already-deployed release. If by 2020 we still agree that 1MB is
> enough, the limit can be changed back to 1MB, and that would also be
> a soft fork on top of it.
While I think this idea isn't bad in and of itself, it assumes that the
community would come to consensus on a future soft fork. This, IMO, is
a dangerous assumption. Failure would potentially leave the network
hard forked to a size well past any current proposal. It would also
potentially turn miners into hostile players making political demands.
("Soft fork down to X MB or I'll shut down 15% of the network hashrate
and work to shut down more elsewhere.") I'd hope we can all agree that
such a scenario would be terrible.
I do agree that the idea of giving everybody plenty of time to plan is
critical. (Telecom providers need months, if not years, to plan for
even simple upgrades, which often are not as simple as they look on
paper.) I just think this proposal, while well-meaning, comes across as
a bit of a trojan horse as-is. I can't get behind it, although it could
potentially be molded into something else that's interesting, e.g.
Johnson Lau's Spoonnet. Fork-to-minimum, while introducing its own
potential problems, would put much less pressure on full nodes, and on
the ecosystem as a whole, if the max needed to be soft forked down.
(I'd also like to see SegWit go live so that we can get an idea of how
much pressure there really is on the network, thereby giving us a better
idea of how high we can go. I still think we're flying a bit blind in
that regard.)
--
Douglas Roark
Cryptocurrency, network security, travel, and art.
https://onename.com/droark
joroark@vt.edu
PGP key ID: 26623924
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Juan Garavaglia @ 2017-03-28 17:33 UTC
To: Alphonse Pace, Bitcoin Protocol Discussion
Alphonse,
In my opinion, from a network, storage and CPU perspective: if the 1MB
limit was OK in 2010, then an 8MB limit was OK in 2016 and a 32MB limit
is valid at the next halving. Otherwise, either 1MB was too high in
2010, which is possible, or 1MB is too low today.

Whether it is unsafe or impossible to raise the block size is a
different topic.
Regards
Juan
From: Alphonse Pace via bitcoin-dev
Sent: Tuesday, March 28, 2017 2:24 PM
To: Wang Chun <1240902@gmail.com>; Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting

[...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Alphonse Pace @ 2017-03-28 17:53 UTC
To: Juan Garavaglia, Wang Chun; +Cc: Bitcoin Protocol Discussion
Juan,
I suggest you take a look at this paper:
http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf It may help you form
opinions based on science rather than what appears to be nothing more
than a hunch. It shows that even 4MB is unsafe, and SegWit already
provides capacity up to that limit.
8MB is most definitely not safe today.
Whether it is unsafe or impossible is the topic, since Wang Chun proposed
making the block size limit 32MiB.
Wang Chun,
Can you specify what meeting you are talking about? You seem not to
have replied on that point. Who were the participants, and what was the
purpose of this meeting?
-Alphonse
On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <jg@112bit.com> wrote:
> [...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Juan Garavaglia @ 2017-03-28 22:36 UTC
To: Alphonse Pace, Wang Chun; +Cc: Bitcoin Protocol Discussion
Alphonse,
Even though several of the experts involved in the document you refer
to have my respect and admiration, I do not agree with some of their
conclusions. Some of their estimates are not accurate, and others have
since changed, such as bootstrap time and cost per confirmed
transaction: they assume a network of 450,000,000 GH while today it is
3,594,236,966 GH, the energy consumption per GH is outdated, the cost
of electricity was wrong even when the document was written, and it is
hard to find any parameter used in it that is still valid for an
analysis today.

Again, with all respect to the experts involved, that analysis is not
valid today.

I tend to put more trust in Moore's Law, Butters' Law of Photonics and
Kryder's Law, all of which have held for many years and which support
the view that 32MB in 2020 is feasible, an equal or smaller burden than
1MB was in 2010.

Again, maybe it is not possible: Johnson Lau and LukeJr have invested a
significant amount of time investigating ways to do a safe HF, and a
safe HF may not be possible today. But in terms of processing power,
bandwidth and storage the numbers are entirely workable, and Wang
Chun's proposal has solid grounds.
Regards
Juan
From: Alphonse Pace [mailto:alp.bitcoin@gmail.com]
Sent: Tuesday, March 28, 2017 2:53 PM
To: Juan Garavaglia <jg@112bit.com>; Wang Chun <1240902@gmail.com>
Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting

[...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Luv Khemani @ 2017-03-29 2:59 UTC
To: Juan Garavaglia, Bitcoin Protocol Discussion
Hi Juan
> I tend to put more trust in Moore's Law, Butters' Law of Photonics
> and Kryder's Law, all of which have held for many years and which
> support the view that 32MB in 2020 is feasible, an equal or smaller
> burden than 1MB was in 2010.
Protocol development, especially for a protocol in control of people's
money, cannot be based on beliefs. Do you have actual data showing
significant increases in desktop CPU, memory and bandwidth?

All empirical evidence points to the opposite.

Intel has been struggling to eke out 5-10% gains with each generation
of its CPUs; the total blockchain size at a 1MB limit alone is growing
much faster than that. CPU core counts have also been stagnant for a
decade. Disk space growth has been slowing too, and with the trend
towards SSDs, the available disk space in a typical PC has actually
been shrinking.
Regards
Luv
________________________________
From: Juan Garavaglia via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org>
Sent: Wednesday, March 29, 2017 6:36 AM
To: Alphonse Pace; Wang Chun
Cc: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting

[...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Emin Gün Sirer @ 2017-03-29 6:24 UTC
To: Juan Garavaglia, Bitcoin Protocol Discussion
> Even though several of the experts involved in the document you refer
> to have my respect and admiration, I do not agree with some of their
> conclusions
I'm one of the co-authors of that study. I'd be the first to agree with
your conclusion, and to argue that the 4MB size suggested in that paper
should not be used without compensating for two important changes to
the network.
Our recent measurements of the Bitcoin P2P network show that network speeds
have improved tremendously. From February 2016 to February 2017, the average
provisioned bandwidth of a reachable Bitcoin node went up by approximately
70%.
And that's just in the last year.
Further, the emergence of high-speed block relay networks, like Falcon
(http://www.falcon-net.org) and FIBRE, as well as block compression,
e.g. BIP152 and xthin, changes the picture dramatically.
So, the 4MB limit mentioned in our paper should not be used as a protocol
limit today.
Best,
- egs
On Tue, Mar 28, 2017 at 3:36 PM, Juan Garavaglia via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
> [...]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
From: Johnson Lau @ 2017-03-29 15:34 UTC
To: Emin Gün Sirer, bitcoin-dev
> On 29 Mar 2017, at 14:24, Emin Gün Sirer via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> >Even when several of the experts involved in the document you refer to have my respect and admiration, I do not agree with some of their conclusions
>
> I'm one of the co-authors of that study. I'd be the first to agree with your conclusion
> and argue that the 4MB size suggested in that paper should not be used without
> compensating for two important changes to the network.
>
> Our recent measurements of the Bitcoin P2P network show that network speeds
> have improved tremendously. From February 2016 to February 2017, the average
> provisioned bandwidth of a reachable Bitcoin node went up by approximately 70%.
> And that's just in the last year.
4 MB * 144 blocks/day * 30 days = 17.3 GB per month, or 207 GB per year. Full node initialisation will become prohibitive for most users unless a shortcut is adopted (e.g. witness pruning or UTXO commitments, but these are not trust-free).
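In sketch form (assuming, worst case, that every block is completely full):

    # Worst-case chain growth with 4 MB blocks, every block full.
    BLOCK_MB = 4
    BLOCKS_PER_DAY = 144                 # one block per ~10 minutes

    monthly_gb = BLOCK_MB * BLOCKS_PER_DAY * 30 / 1000   # ~17.3 GB/month
    yearly_gb = monthly_gb * 12                          # ~207 GB/year
    print(f"{monthly_gb:.1f} GB/month, {yearly_gb:.0f} GB/year")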
>
> Further, the emergence of high-speed block relay networks, like Falcon (http://www.falcon-net.org <http://www.falcon-net.org/>)
> and FIBRE, as well as block compression, e.g. BIP152 and xthin, change the picture dramatically.
Also, as the co-author of the selfish mining paper, you should know that all of these technologies assume big miners are benevolent.
>
> So, the 4MB limit mentioned in our paper should not be used as a protocol limit today.
>
> Best,
> - egs
>
>
>
> On Tue, Mar 28, 2017 at 3:36 PM, Juan Garavaglia via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
> Alphonse,
>
>
>
> Even when several of the experts involved in the document you refer to have my respect and admiration, I do not agree with some of their conclusions. Some of their estimations are not accurate and others have changed, like Bootstrap Time and Cost per Confirmed Transaction: they consider a network of 450,000,000 GH while today it is 3,594,236,966 GH, the energy consumption per GH is outdated, and the cost of electricity was wrong even when the document was written; it is hard to find any parameter used that is still valid for an analysis today.
>
>
>
> Again, with all respect to the experts involved, that analysis is not valid today.
>
>
>
> I tend to believe more in Moore's Law, Butters' Law of Photonics and Kryder's Law; all have been verified for many years and support that 32 MB in 2020 is possible and an equal or smaller burden than 1 MB was in 2010.
>
>
>
> Again, maybe it is not possible: Johnson Lau and LukeJr have invested a significant amount of time investigating ways to do a safe HF, and it may not be possible to do a safe HF today; but from a processing power, bandwidth and storage perspective it is totally valid, and Wang Chun's proposal has solid grounds.
>
>
>
> Regards
>
>
>
> Juan
>
>
>
>
>
> From: Alphonse Pace [mailto:alp.bitcoin@gmail.com <mailto:alp.bitcoin@gmail.com>]
> Sent: Tuesday, March 28, 2017 2:53 PM
> To: Juan Garavaglia <jg@112bit.com <mailto:jg@112bit.com>>; Wang Chun <1240902@gmail.com <mailto:1240902@gmail.com>>
> Cc: Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org <mailto:bitcoin-dev@lists.linuxfoundation.org>>
>
>
> Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>
>
>
> Juan,
>
>
>
> I suggest you take a look at this paper: http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf <http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf> It may help you form opinions based on science rather than what appears to be nothing more than a hunch. It shows that even 4MB is unsafe. SegWit provides up to this limit.
>
>
>
> 8MB is most definitely not safe today.
>
>
>
> Whether it is unsafe or impossible is the topic, since Wang Chun proposed making the block size limit 32MiB.
>
>
>
>
>
> Wang Chun,
>
>
> Can you specify what meeting you are talking about? You seem to have not replied on that point. Who were the participants and what was the purpose of this meeting?
>
>
>
> -Alphonse
>
>
>
> On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <jg@112bit.com <mailto:jg@112bit.com>> wrote:
>
> Alphonse,
>
>
>
> In my opinion, if the 1MB limit was OK in 2010, then an 8MB limit is OK in 2016 and a 32MB limit is valid at the next halving, from a network, storage and CPU perspective; either 1MB was too high in 2010, which is possible, or 1MB is too low today.
>
>
>
> Whether it is unsafe or impossible to raise the blocksize is a different topic.
>
>
>
> Regards
>
>
>
> Juan
>
>
>
>
>
> From: bitcoin-dev-bounces@lists.linuxfoundation.org <mailto:bitcoin-dev-bounces@lists.linuxfoundation.org> [mailto:bitcoin-dev-bounces@lists.linuxfoundation.org <mailto:bitcoin-dev-bounces@lists.linuxfoundation.org>] On Behalf Of Alphonse Pace via bitcoin-dev
> Sent: Tuesday, March 28, 2017 2:24 PM
> To: Wang Chun <1240902@gmail.com <mailto:1240902@gmail.com>>; Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org <mailto:bitcoin-dev@lists.linuxfoundation.org>>
> Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>
>
>
> What meeting are you referring to? Who were the participants?
>
>
>
> Removing the limit but relying on the p2p protocol is not really a true 32MiB limit, but a limit of whatever transport methods provide. This can lead to differing consensus if alternative layers for relaying are used. What you seem to be asking for is an unbound block size (or at least determined by whatever miners produce). This has the possibility (and even likelihood) of removing many participants from the network, including many small miners.
>
>
>
> 32MB in less than 3 years also appears to be far beyond limits of safety which are known to exist far sooner, and we cannot expect hardware and networking layers to improve by those amounts in that time.
>
>
>
> It also seems like it would be much better to wait until SegWit activates in order to truly measure the effects on the network from this increased capacity before committing to any additional increases.
>
>
>
> -Alphonse
>
>
>
>
>
>
>
> On Tue, Mar 28, 2017 at 11:59 AM, Wang Chun via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
>
> I've proposed this hard fork approach last year in Hong Kong Consensus
> but immediately rejected by coredevs at that meeting, after more than
> one year it seems that lots of people haven't heard of it. So I would
> post this here again for comment.
>
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
>
> Despite spam tx on the network, the block capacity is approaching its
> limit, and we must think ahead. Shall we code a patch right now, to
> remove the block size limit of 1MB, but not activate it until far in
> the future. I would propose to remove the 1MB limit at the next block
> halving in spring 2020, only limit the block size to 32MiB which is
> the maximum size the current p2p protocol allows. This patch must be
> in the immediate next release of Bitcoin Core.
>
> With this patch in core's next release, Bitcoin works just as before,
> no fork will ever occur, until spring 2020. But everyone knows there
> will be a fork scheduled. Third party services, libraries, wallets and
> exchanges will have enough time to prepare for it over the next three
> years.
>
> We don't yet have an agreement on how to increase the block size
> limit. There have been many proposals over the past years, like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> on. These hard fork proposals, with this patch already in Core's
> release, they all become soft fork. We'll have enough time to discuss
> all these proposals and decide which one to go. Take an example, if we
> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
> from 32MiB to 2MB will be a soft fork.
>
> Anyway, we must code something right now, before it becomes too late.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org <mailto:bitcoin-dev@lists.linuxfoundation.org>
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev <https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev>
>
>
>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org <mailto:bitcoin-dev@lists.linuxfoundation.org>
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev <https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[-- Attachment #2: Type: text/html, Size: 21361 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 15:34 ` Johnson Lau
@ 2017-04-01 16:15 ` Leandro Coutinho
0 siblings, 0 replies; 81+ messages in thread
From: Leandro Coutinho @ 2017-04-01 16:15 UTC (permalink / raw)
To: Bitcoin Protocol Discussion, Johnson Lau
[-- Attachment #1: Type: text/plain, Size: 8952 bytes --]
One interesting thing to do is to compare how much it costs to maintain a
bank checking account with how much it costs to run a full node.
It seems to be about 120 USD/year in the USA:
http://m.huffpost.com/us/entry/6219730
A 4TB hard drive costs ~115 USD:
https://www.amazon.com/gp/aw/d/B01LQQH86A/ref=mp_s_a_1_4
And it has a warranty of 3 years.
As your calculation shows, it will take more than 19 years to reach 4TB
with a 4MB blocksize.
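In sketch form (the figures are the rough ones from the links above, not
authoritative):

    # Bank checking account vs. full-node storage, back of the envelope.
    account_usd_per_year = 120.0      # US average, per the article above
    drive_usd = 115.0                 # 4 TB hard drive
    drive_gb = 4000.0
    growth_gb_per_year = 207.0        # Johnson Lau's 4 MB worst case

    years_to_fill = drive_gb / growth_gb_per_year       # ~19.3 years
    storage_usd_per_year = drive_usd / years_to_fill    # ~6 USD/year
    print(f"{years_to_fill:.1f} years to fill the drive, "
          f"~${storage_usd_per_year:.0f}/yr vs ${account_usd_per_year:.0f}/yr")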
On 29/03/2017 12:35, "Johnson Lau via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
On 29 Mar 2017, at 14:24, Emin Gün Sirer via bitcoin-dev <bitcoin-dev@lists.
linuxfoundation.org> wrote:
>Even when several of the experts involved in the document you refer to have
my respect and admiration, I do not agree with some of their conclusions
I'm one of the co-authors of that study. I'd be the first to agree with
your conclusion
and argue that the 4MB size suggested in that paper should not be used
without compensating for two important changes to the network.
Our recent measurements of the Bitcoin P2P network show that network speeds
have improved tremendously. From February 2016 to February 2017, the average
provisioned bandwidth of a reachable Bitcoin node went up by approximately
70%.
And that's just in the last year.
4 MB * 144 blocks/day * 30 days = 17.3 GB per month, or 207 GB per year.
Full node initialisation will become prohibitive for most users unless a
shortcut is adopted (e.g. witness pruning or UTXO commitments, but these
are not trust-free).
Further, the emergence of high-speed block relay networks, like Falcon (
http://www.falcon-net.org)
and FIBRE, as well as block compression, e.g. BIP152 and xthin, change the
picture dramatically.
Also, as the co-author of the selfish mining paper, you should know that
all of these technologies assume big miners are benevolent.
So, the 4MB limit mentioned in our paper should not be used as a protocol
limit today.
Best,
- egs
On Tue, Mar 28, 2017 at 3:36 PM, Juan Garavaglia via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Alphonse,
>
>
>
> Even when several of the experts involved in the document you refer to have
> my respect and admiration, I do not agree with some of their conclusions.
> Some of their estimations are not accurate and others have changed, like
> Bootstrap Time and Cost per Confirmed Transaction: they consider a network
> of 450,000,000 GH while today it is 3,594,236,966 GH, the energy consumption
> per GH is outdated, and the cost of electricity was wrong even when the
> document was written; it is hard to find any parameter used that is still
> valid for an analysis today.
>
>
>
> Again, with all respect to the experts involved, that analysis is not
> valid today.
>
>
>
> I tend to believe more in Moore's Law, Butters' Law of Photonics and
> Kryder's Law; all have been verified for many years and support that 32 MB
> in 2020 is possible and an equal or smaller burden than 1 MB was in 2010.
>
>
>
> Again, maybe it is not possible: Johnson Lau and LukeJr have invested a
> significant amount of time investigating ways to do a safe HF, and it may
> not be possible to do a safe HF today; but from a processing power,
> bandwidth and storage perspective it is totally valid, and Wang Chun's
> proposal has solid grounds.
>
>
>
> Regards
>
>
>
> Juan
>
>
>
>
>
> *From:* Alphonse Pace [mailto:alp.bitcoin@gmail.com]
> *Sent:* Tuesday, March 28, 2017 2:53 PM
> *To:* Juan Garavaglia <jg@112bit.com>; Wang Chun <1240902@gmail.com>
> *Cc:* Bitcoin Protocol Discussion <bitcoin-dev@lists.linuxfoundation.org>
>
> *Subject:* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>
>
>
> Juan,
>
>
>
> I suggest you take a look at this paper:
> http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf It may help you form
> opinions based on science rather than what appears to be nothing more
> than a hunch. It shows that even 4MB is unsafe. SegWit provides up to
> this limit.
>
>
>
> 8MB is most definitely not safe today.
>
>
>
> Whether it is unsafe or impossible is the topic, since Wang Chun proposed
> making the block size limit 32MiB.
>
>
>
>
>
> Wang Chun,
>
>
> Can you specify what meeting you are talking about? You seem to have not
> replied on that point. Who were the participants and what was the purpose
> of this meeting?
>
>
>
> -Alphonse
>
>
>
> On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <jg@112bit.com> wrote:
>
> Alphonse,
>
>
>
> In my opinion, if the 1MB limit was OK in 2010, then an 8MB limit is OK in
> 2016 and a 32MB limit is valid at the next halving, from a network, storage
> and CPU perspective; either 1MB was too high in 2010, which is possible, or
> 1MB is too low today.
>
>
>
> Whether it is unsafe or impossible to raise the blocksize is a different topic.
>
>
>
> Regards
>
>
>
> Juan
>
>
>
>
>
> *From:* bitcoin-dev-bounces@lists.linuxfoundation.org [mailto:
> bitcoin-dev-bounces@lists.linuxfoundation.org] *On Behalf Of *Alphonse
> Pace via bitcoin-dev
> *Sent:* Tuesday, March 28, 2017 2:24 PM
> *To:* Wang Chun <1240902@gmail.com>; Bitcoin Protocol Discussion <
> bitcoin-dev@lists.linuxfoundation.org>
> *Subject:* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>
>
>
> What meeting are you referring to? Who were the participants?
>
>
>
> Removing the limit but relying on the p2p protocol is not really a true
> 32MiB limit, but a limit of whatever transport methods provide. This can
> lead to differing consensus if alternative layers for relaying are used.
> What you seem to be asking for is an unbound block size (or at least
> determined by whatever miners produce). This has the possibility (and even
> likelihood) of removing many participants from the network, including many
> small miners.
>
>
>
> 32MB in less than 3 years also appears to be far beyond limits of safety
> which are known to exist far sooner, and we cannot expect hardware and
> networking layers to improve by those amounts in that time.
>
>
>
> It also seems like it would be much better to wait until SegWit activates
> in order to truly measure the effects on the network from this increased
> capacity before committing to any additional increases.
>
>
>
> -Alphonse
>
>
>
>
>
>
>
> On Tue, Mar 28, 2017 at 11:59 AM, Wang Chun via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> I've proposed this hard fork approach last year in Hong Kong Consensus
> but immediately rejected by coredevs at that meeting, after more than
> one year it seems that lots of people haven't heard of it. So I would
> post this here again for comment.
>
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
>
> Despite spam tx on the network, the block capacity is approaching its
> limit, and we must think ahead. Shall we code a patch right now, to
> remove the block size limit of 1MB, but not activate it until far in
> the future. I would propose to remove the 1MB limit at the next block
> halving in spring 2020, only limit the block size to 32MiB which is
> the maximum size the current p2p protocol allows. This patch must be
> in the immediate next release of Bitcoin Core.
>
> With this patch in core's next release, Bitcoin works just as before,
> no fork will ever occur, until spring 2020. But everyone knows there
> will be a fork scheduled. Third party services, libraries, wallets and
> exchanges will have enough time to prepare for it over the next three
> years.
>
> We don't yet have an agreement on how to increase the block size
> limit. There have been many proposals over the past years, like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> on. These hard fork proposals, with this patch already in Core's
> release, they all become soft fork. We'll have enough time to discuss
> all these proposals and decide which one to go. Take an example, if we
> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
> from 32MiB to 2MB will be a soft fork.
>
> Anyway, we must code something right now, before it becomes too late.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[-- Attachment #2: Type: text/html, Size: 19723 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 17:53 ` Alphonse Pace
2017-03-28 22:36 ` Juan Garavaglia
@ 2017-03-29 9:16 ` Jared Lee Richardson
2017-03-29 16:00 ` Aymeric Vitte
1 sibling, 1 reply; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-29 9:16 UTC (permalink / raw)
To: Alphonse Pace, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 7564 bytes --]
> I suggest you take a look at this paper:
> http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf It may help you form
> opinions based on science rather than what appears to be nothing more
> than a hunch. It shows that even 4MB is unsafe. SegWit provides up to
> this limit.
I find this paper wholly unconvincing. Firstly, I note that he assumes the
price of electricity is 10c/kWh in Oct 2015. As a miner operating and
building large farms at that time, I can guarantee you that almost no large
mines were paying anything even close to that high for electricity, even
then. If he had performed a detailed search on the big mines he would have
found as much, or could have asked, but it seems like it was simply made
up. Even U.S. industrial electricity prices are lower than that.
Moreover, he focuses his math almost entirely on mining, asserting in
table 1 that 98% of the "cost of processing a transaction" is mining. That
completely misunderstands the purpose of mining. Miners occasionally,
trivially, resolve double-spend conflicts, but miners are paid (and played
against each other) for economic security against attackers. They aren't
paid to process transactions. Nodes process transactions and are paid
nothing to do so, and their costs are 100x more relevant to the blocksize
debate than a paper about miner costs. Miners' operational costs relate to
economic protection formulas, not to the cost of a transaction.
He also states: "the top 10% of nodes receive a 1MB block 2.4min earlier
than the bottom 10% — meaning that depending on their access to nodes, some
miners could obtain a significant and unfair lead over others in solving
hash puzzles."
He's using 2012-era logic of mining. By October 2015, no miner of any size
was in the bottom 10% of node propagation. If they were a small or medium
sized miner, they mined shares on a pool and would be at most 30 seconds
behind the pool. Pools that didn't get blocks within 20 seconds weren't
pools for long. If they were a huge miner, they ran their own pool with
good propagation times. For a scientific paper, this reads like the work
of someone who had absolutely no idea what was really going on in the
mining world at the time. But again, none of that relates to transaction "costs."
Transactions cost nodes money; protecting the network costs miners money.
Miners are rewarded with fees; nodes are rewarded only by utility and price
increases.
On Tue, Mar 28, 2017 at 10:53 AM, Alphonse Pace via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> Juan,
>
> I suggest you take a look at this paper:
> http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf It may help you form
> opinions based on science rather than what appears to be nothing more
> than a hunch. It shows that even 4MB is unsafe. SegWit provides up to
> this limit.
>
> 8MB is most definitely not safe today.
>
> Whether it is unsafe or impossible is the topic, since Wang Chun proposed
> making the block size limit 32MiB.
>
>
> Wang Chun,
>
> Can you specify what meeting you are talking about? You seem to have not
> replied on that point. Who were the participants and what was the purpose
> of this meeting?
>
> -Alphonse
>
> On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <jg@112bit.com> wrote:
>
>> Alphonse,
>>
>>
>>
>> In my opinion, if the 1MB limit was OK in 2010, then an 8MB limit is OK in
>> 2016 and a 32MB limit is valid at the next halving, from a network, storage
>> and CPU perspective; either 1MB was too high in 2010, which is possible, or
>> 1MB is too low today.
>>
>>
>>
>> Whether it is unsafe or impossible to raise the blocksize is a different topic.
>>
>
>>
>> Regards
>>
>>
>>
>> Juan
>>
>>
>>
>>
>>
>> *From:* bitcoin-dev-bounces@lists.linuxfoundation.org [mailto:
>> bitcoin-dev-bounces@lists.linuxfoundation.org] *On Behalf Of *Alphonse
>> Pace via bitcoin-dev
>> *Sent:* Tuesday, March 28, 2017 2:24 PM
>> *To:* Wang Chun <1240902@gmail.com>; Bitcoin Protocol Discussion <
>> bitcoin-dev@lists.linuxfoundation.org>
>> *Subject:* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>>
>>
>>
>> What meeting are you referring to? Who were the participants?
>>
>>
>>
>> Removing the limit but relying on the p2p protocol is not really a true
>> 32MiB limit, but a limit of whatever transport methods provide. This can
>> lead to differing consensus if alternative layers for relaying are used.
>> What you seem to be asking for is an unbound block size (or at least
>> determined by whatever miners produce). This has the possibility (and even
>> likelihood) of removing many participants from the network, including many
>> small miners.
>>
>>
>>
>> 32MB in less than 3 years also appears to be far beyond limits of safety
>> which are known to exist far sooner, and we cannot expect hardware and
>> networking layers to improve by those amounts in that time.
>>
>>
>>
>> It also seems like it would be much better to wait until SegWit activates
>> in order to truly measure the effects on the network from this increased
>> capacity before committing to any additional increases.
>>
>>
>>
>> -Alphonse
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Mar 28, 2017 at 11:59 AM, Wang Chun via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> I've proposed this hard fork approach last year in Hong Kong Consensus
>> but immediately rejected by coredevs at that meeting, after more than
>> one year it seems that lots of people haven't heard of it. So I would
>> post this here again for comment.
>>
>> The basic idea is, as many of us agree, hard fork is risky and should
>> be well prepared. We need a long time to deploy it.
>>
>> Despite spam tx on the network, the block capacity is approaching its
>> limit, and we must think ahead. Shall we code a patch right now, to
>> remove the block size limit of 1MB, but not activate it until far in
>> the future. I would propose to remove the 1MB limit at the next block
>> halving in spring 2020, only limit the block size to 32MiB which is
>> the maximum size the current p2p protocol allows. This patch must be
>> in the immediate next release of Bitcoin Core.
>>
>> With this patch in core's next release, Bitcoin works just as before,
>> no fork will ever occur, until spring 2020. But everyone knows there
>> will be a fork scheduled. Third party services, libraries, wallets and
>> exchanges will have enough time to prepare for it over the next three
>> years.
>>
>> We don't yet have an agreement on how to increase the block size
>> limit. There have been many proposals over the past years, like
>> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
>> on. These hard fork proposals, with this patch already in Core's
>> release, they all become soft fork. We'll have enough time to discuss
>> all these proposals and decide which one to go. Take an example, if we
>> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
>> from 32MiB to 2MB will be a soft fork.
>>
>> Anyway, we must code something right now, before it becomes too late.
>> _______________________________________________
>> bitcoin-dev mailing list
>> bitcoin-dev@lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>>
>>
>>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
[-- Attachment #2: Type: text/html, Size: 12328 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 9:16 ` Jared Lee Richardson
@ 2017-03-29 16:00 ` Aymeric Vitte
0 siblings, 0 replies; 81+ messages in thread
From: Aymeric Vitte @ 2017-03-29 16:00 UTC (permalink / raw)
To: Jared Lee Richardson, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 8681 bytes --]
On 29/03/2017 at 11:16, Jared Lee Richardson via bitcoin-dev wrote:
> Nodes process transactions and are paid nothing to do so, and their
> costs are 100x more relevant to the blocksize debate than a paper
> about miner costs.
>
> Miners are rewarded with fees; nodes are rewarded only by utility and
> price increases.
Nodes are rewarded with nothing at all, which is the main problem of the
bitcoin network (which is therefore not a decentralized system today),
although it seems like everybody is evading the issue (as well as how to
find solutions to set up full nodes quickly, as you noted in another
answer to this thread, and of course how to design a decentralized system
to make sure that full nodes behave correctly).
Bitcoin would not be in this situation (i.e. possibly at the mercy of a
very small minority of freeriders among all the entities involved in the
network, i.e. miners, just seeking to make more and more money because
they invested in an anti-ecological PoW, not understanding that bitcoin
is not just about money) if more nodes existed and could reject their
blocks.
It seems like the initial message of this thread is an ultimatum: either
you implement what we ask, or we join BU, and then >50% is almost
reached...
>
> On Tue, Mar 28, 2017 at 10:53 AM, Alphonse Pace via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org
> <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
>
> Juan,
>
> I suggest you take a look at this
> paper: http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf
> <http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf> It may help you
> form opinions based on science rather than what appears to be
> nothing more than a hunch. It shows that even 4MB is unsafe.
> SegWit provides up to this limit.
>
> 8MB is most definitely not safe today.
>
> Whether it is unsafe or impossible is the topic, since Wang Chun
> proposed making the block size limit 32MiB.
>
>
> Wang Chun,
>
> Can you specify what meeting you are talking about? You seem to
> have not replied on that point. Who were the participants and
> what was the purpose of this meeting?
>
> -Alphonse
>
> On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <jg@112bit.com
> <mailto:jg@112bit.com>> wrote:
>
> Alphonse,
>
>
>
> In my opinion, if the 1MB limit was OK in 2010, then an 8MB
> limit is OK in 2016 and a 32MB limit is valid at the next
> halving, from a network, storage and CPU perspective; either
> 1MB was too high in 2010, which is possible, or 1MB is too
> low today.
>
>
>
> Whether it is unsafe or impossible to raise the blocksize is
> a different topic.
>
>
>
> Regards
>
>
>
> Juan
>
>
>
>
>
> *From:*bitcoin-dev-bounces@lists.linuxfoundation.org
> <mailto:bitcoin-dev-bounces@lists.linuxfoundation.org>
> [mailto:bitcoin-dev-bounces@lists.linuxfoundation.org
> <mailto:bitcoin-dev-bounces@lists.linuxfoundation.org>] *On
> Behalf Of *Alphonse Pace via bitcoin-dev
> *Sent:* Tuesday, March 28, 2017 2:24 PM
> *To:* Wang Chun <1240902@gmail.com
> <mailto:1240902@gmail.com>>; Bitcoin Protocol Discussion
> <bitcoin-dev@lists.linuxfoundation.org
> <mailto:bitcoin-dev@lists.linuxfoundation.org>>
> *Subject:* Re: [bitcoin-dev] Hard fork proposal from last
> week's meeting
>
>
>
> What meeting are you referring to? Who were the participants?
>
>
>
> Removing the limit but relying on the p2p protocol is not
> really a true 32MiB limit, but a limit of whatever transport
> methods provide. This can lead to differing consensus if
> alternative layers for relaying are used. What you seem to be
> asking for is an unbound block size (or at least determined by
> whatever miners produce). This has the possibility (and even
> likelihood) of removing many participants from the network,
> including many small miners.
>
>
>
> 32MB in less than 3 years also appears to be far beyond limits
> of safety which are known to exist far sooner, and we cannot
> expect hardware and networking layers to improve by those
> amounts in that time.
>
>
>
> It also seems like it would be much better to wait until
> SegWit activates in order to truly measure the effects on the
> network from this increased capacity before committing to any
> additional increases.
>
>
>
> -Alphonse
>
>
>
>
>
>
>
> On Tue, Mar 28, 2017 at 11:59 AM, Wang Chun via bitcoin-dev
> <bitcoin-dev@lists.linuxfoundation.org
> <mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
>
> I've proposed this hard fork approach last year in Hong
> Kong Consensus
> but immediately rejected by coredevs at that meeting,
> after more than
> one year it seems that lots of people haven't heard of it.
> So I would
> post this here again for comment.
>
> The basic idea is, as many of us agree, hard fork is risky
> and should
> be well prepared. We need a long time to deploy it.
>
> Despite spam tx on the network, the block capacity is
> approaching its
> limit, and we must think ahead. Shall we code a patch
> right now, to
> remove the block size limit of 1MB, but not activate it
> until far in
> the future. I would propose to remove the 1MB limit at the
> next block
> halving in spring 2020, only limit the block size to 32MiB
> which is
> the maximum size the current p2p protocol allows. This
> patch must be
> in the immediate next release of Bitcoin Core.
>
> With this patch in core's next release, Bitcoin works just
> as before,
> no fork will ever occur, until spring 2020. But everyone
> knows there
> will be a fork scheduled. Third party services, libraries,
> wallets and
> exchanges will have enough time to prepare for it over the
> next three
> years.
>
> We don't yet have an agreement on how to increase the
> block size
> limit. There have been many proposals over the past years,
> like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248,
> BU, and so
> on. These hard fork proposals, with this patch already in
> Core's
> release, they all become soft fork. We'll have enough time
> to discuss
> all these proposals and decide which one to go. Take an
> example, if we
> choose to fork to only 2MB, since 32MiB already scheduled,
> reduce it
> from 32MiB to 2MB will be a soft fork.
>
> Anyway, we must code something right now, before it
> becomes too late.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> <mailto:bitcoin-dev@lists.linuxfoundation.org>
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> <https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev>
>
>
>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> <mailto:bitcoin-dev@lists.linuxfoundation.org>
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> <https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev>
>
>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
--
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
[-- Attachment #2: Type: text/html, Size: 23699 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 16:59 [bitcoin-dev] Hard fork proposal from last week's meeting Wang Chun
2017-03-28 17:13 ` Matt Corallo
2017-03-28 17:23 ` Alphonse Pace
@ 2017-03-28 17:34 ` Johnson Lau
2017-03-28 17:46 ` Luke Dashjr
2017-03-28 20:50 ` Tom Zander
2017-03-28 20:48 ` Tom Zander
` (2 subsequent siblings)
5 siblings, 2 replies; 81+ messages in thread
From: Johnson Lau @ 2017-03-28 17:34 UTC (permalink / raw)
To: Wang Chun, bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 3526 bytes --]
You are probably neither the first nor the last one with such an idea. Actually, Luke wrote up a BIP with a similar idea in mind:
https://github.com/luke-jr/bips/blob/bip-hfprep/bip-hfprep.mediawiki <https://github.com/luke-jr/bips/blob/bip-hfprep/bip-hfprep.mediawiki>
Instead of just lifting the block size limit, he also suggested removing many other rules. I think he has given up on this idea because it's just too complicated.
If we really want to prepare for a hardfork, we probably want to do more than simply increase the size limit. For example, my spoonnet proposal:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013542.html <https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013542.html>
In a HF, we may want to relocate the witness commitment to a better place. We may also want to fix Satoshi's sighash bug. These are much more than a simple size increase.
So if we really want to get prepared for a potential HF with unknown parameters, I'd suggest setting a time bomb in the client, which would stop processing transactions and show a big warning in the GUI. The user may still have an option to continue under the old rules at their own risk.
Or, instead of increasing the block size, we could make a softfork to decrease the block size to 1kB and the block reward to 0, activating far in the future. This is similar to the difficulty bomb in ETH; it would freeze the network.
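In sketch form, the time bomb could be as simple as the following
(illustrative only: the timestamp and override flag are hypothetical, and
Bitcoin Core's real startup logic is of course C++):

    import time

    HARDFORK_TIME = 1584000000     # hypothetical flag-day timestamp
    ALLOW_STALE_RULES = False      # hypothetical user override, default off

    def check_time_bomb(now=None):
        """Refuse to run once this client's rules are presumed stale."""
        now = time.time() if now is None else now
        if now >= HARDFORK_TIME and not ALLOW_STALE_RULES:
            raise SystemExit(
                "WARNING: this client's consensus rules are presumed "
                "stale; upgrade, or restart with the override to "
                "continue at your own risk.")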
> On 29 Mar 2017, at 00:59, Wang Chun via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> I've proposed this hard fork approach last year in Hong Kong Consensus
> but immediately rejected by coredevs at that meeting, after more than
> one year it seems that lots of people haven't heard of it. So I would
> post this here again for comment.
>
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
>
> Despite spam tx on the network, the block capacity is approaching its
> limit, and we must think ahead. Shall we code a patch right now, to
> remove the block size limit of 1MB, but not activate it until far in
> the future. I would propose to remove the 1MB limit at the next block
> halving in spring 2020, only limit the block size to 32MiB which is
> the maximum size the current p2p protocol allows. This patch must be
> in the immediate next release of Bitcoin Core.
>
> With this patch in core's next release, Bitcoin works just as before,
> no fork will ever occur, until spring 2020. But everyone knows there
> will be a fork scheduled. Third party services, libraries, wallets and
> exchanges will have enough time to prepare for it over the next three
> years.
>
> We don't yet have an agreement on how to increase the block size
> limit. There have been many proposals over the past years, like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> on. These hard fork proposals, with this patch already in Core's
> release, they all become soft fork. We'll have enough time to discuss
> all these proposals and decide which one to go. Take an example, if we
> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
> from 32MiB to 2MB will be a soft fork.
>
> Anyway, we must code something right now, before it becomes too late.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[-- Attachment #2: Type: text/html, Size: 4774 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 17:34 ` Johnson Lau
@ 2017-03-28 17:46 ` Luke Dashjr
2017-03-28 20:50 ` Tom Zander
1 sibling, 0 replies; 81+ messages in thread
From: Luke Dashjr @ 2017-03-28 17:46 UTC (permalink / raw)
To: bitcoin-dev, Johnson Lau
On Tuesday, March 28, 2017 5:34:23 PM Johnson Lau via bitcoin-dev wrote:
> You are probably neither the first nor the last one with such an idea.
> Actually, Luke wrote up a BIP with a similar idea in mind:
>
> https://github.com/luke-jr/bips/blob/bip-hfprep/bip-hfprep.mediawiki
> <https://github.com/luke-jr/bips/blob/bip-hfprep/bip-hfprep.mediawiki>
>
> Instead of just lifting the block size limit, he also suggested removing
> many other rules. I think he has given up on this idea because it's just
> too complicated.
> ...
> So if we really want to get prepared for a potential HF with unknown
> parameters, I'd suggest setting a time bomb in the client, which would
> stop processing transactions and show a big warning in the GUI. The user
> may still have an option to continue under the old rules at their own
> risk.
Indeed, actually implementing hfprep proved to be overly complicated.
I like the idea of a time bomb that just shuts down the client after it
determines it's stale and refuses to start without an explicit override.
That should work no matter what the hardfork is, and gives us a good
expectation for hardfork timeframes.
> Or, instead of increasing the block size, we could make a softfork to
> decrease the block size to 1kB and the block reward to 0, activating far
> in the future. This is similar to the difficulty bomb in ETH; it would
> freeze the network.
I don't like this idea. It leaves the node open to attack from blocks
actually meeting the criteria. Maybe the absolute minimum, as Jeremy
suggested.
Luke
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 17:34 ` Johnson Lau
2017-03-28 17:46 ` Luke Dashjr
@ 2017-03-28 20:50 ` Tom Zander
2017-03-29 4:21 ` Johnson Lau
1 sibling, 1 reply; 81+ messages in thread
From: Tom Zander @ 2017-03-28 20:50 UTC (permalink / raw)
To: Johnson Lau, Bitcoin Protocol Discussion
On Tuesday, 28 March 2017 19:34:23 CEST Johnson Lau via bitcoin-dev wrote:
> So if we really want to get prepared for a potential HF with unknown
> parameters,
That was not suggested.
Maybe you can comment on the very specific suggestion instead?
--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 16:59 [bitcoin-dev] Hard fork proposal from last week's meeting Wang Chun
` (2 preceding siblings ...)
2017-03-28 17:34 ` Johnson Lau
@ 2017-03-28 20:48 ` Tom Zander
2017-03-29 6:32 ` Bram Cohen
2017-03-29 7:49 ` Martin Lízner
5 siblings, 0 replies; 81+ messages in thread
From: Tom Zander @ 2017-03-28 20:48 UTC (permalink / raw)
To: bitcoin-dev, Wang Chun
On Tuesday, 28 March 2017 18:59:32 CEST Wang Chun via bitcoin-dev wrote:
> Despite spam tx on the network, the block capacity is approaching its
> limit, and we must think ahead. Shall we code a patch right now, to
> remove the block size limit of 1MB, but not activate it until far in
> the future. I would propose to remove the 1MB limit at the next block
> halving in spring 2020, only limit the block size to 32MiB which is
> the maximum size the current p2p protocol allows. This patch must be
> in the immediate next release of Bitcoin Core.
...
> We don't yet have an agreement on how to increase the block size
> limit. There have been many proposals over the past years, like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> on. These hard fork proposals, with this patch already in Core's
> release, they all become soft fork.
I think that is a very smart idea, thank you for making it.
--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 16:59 [bitcoin-dev] Hard fork proposal from last week's meeting Wang Chun
` (3 preceding siblings ...)
2017-03-28 20:48 ` Tom Zander
@ 2017-03-29 6:32 ` Bram Cohen
2017-03-29 9:37 ` Jorge Timón
2017-03-29 7:49 ` Martin Lízner
5 siblings, 1 reply; 81+ messages in thread
From: Bram Cohen @ 2017-03-29 6:32 UTC (permalink / raw)
To: Wang Chun, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 773 bytes --]
On Tue, Mar 28, 2017 at 9:59 AM, Wang Chun via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
>
Much as it may be appealing to repeal the block size limit now, with a
grace period until a replacement is needed, in a 'repeal and replace'
strategy, it's dubious to assume that an idea can be agreed upon later when
it can't be agreed upon now. Trying to put a time limit on it runs into the
possibility that you'll find that whatever reasons there were for not
having general agreement on a new setup before still apply, and the
embarrassing situation of winding up stuck with the status quo after much
Sturm und Drang.
[-- Attachment #2: Type: text/html, Size: 1135 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 6:32 ` Bram Cohen
@ 2017-03-29 9:37 ` Jorge Timón
2017-03-29 19:07 ` Jared Lee Richardson
0 siblings, 1 reply; 81+ messages in thread
From: Jorge Timón @ 2017-03-29 9:37 UTC (permalink / raw)
To: Bitcoin Dev, Bram Cohen
[-- Attachment #1: Type: text/plain, Size: 1873 bytes --]
While Segwit's change from a 1 MB size limit to a 4 MB weight limit seems to
be controversial among some users (I find that very often it is because they
have been confused about what segwit does, or have even been outright lied
to about it), I don't think it's very interesting to discuss further size
increases. I find it more interesting to talk to the users and see how they
think Segwit harms them; maybe we missed something in segwit that needs to
be removed for segwit to become uncontroversial, or maybe it is just
disinformation.
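For reference, the rule in question is a weight limit rather than a byte
limit (BIP141): weight = 3 * base size + total size, capped at 4,000,000
weight units. A minimal sketch of the arithmetic:

    MAX_BLOCK_WEIGHT = 4000000    # BIP141 consensus limit (weight units)

    def block_weight(base_size, total_size):
        # BIP141: 3 * base size + total size, both in bytes.
        return 3 * base_size + total_size

    # A block with no witness data (total == base) still hits the cap at
    # about 1 MB, so the extra capacity only goes to witness data.
    assert block_weight(1000000, 1000000) == MAX_BLOCK_WEIGHT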
On the other hand, we may want to have our first uncontroversial hardfork
as soon as possible, independently of block size. For example, we could do
something as simple as fixing the timewarp attack, as BIP99 proposes. I
cannot think of a HF that is easier to implement or has less potential for
controversy than that.
On 29 Mar 2017 8:32 am, "Bram Cohen via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
On Tue, Mar 28, 2017 at 9:59 AM, Wang Chun via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
>
Much as it may be appealing to repeal the block size limit now, with a
grace period until a replacement is needed, in a 'repeal and replace'
strategy, it's dubious to assume that an idea can be agreed upon later when
it can't be agreed upon now. Trying to put a time limit on it runs into the
possibility that you'll find that whatever reasons there were for not
having general agreement on a new setup before still apply, and the
embarrassing situation of winding up stuck with the status quo after much
Sturm und Drang.
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[-- Attachment #2: Type: text/html, Size: 2889 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 9:37 ` Jorge Timón
@ 2017-03-29 19:07 ` Jared Lee Richardson
2017-04-02 19:02 ` Staf Verhaegen
0 siblings, 1 reply; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-29 19:07 UTC (permalink / raw)
To: Jorge Timón, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 4447 bytes --]
> While Segwit's change from a 1 MB size limit to a 4 MB weight limit seems
> to be controversial among some users [..] I don't think it's very
> interesting to discuss further size increases.
I think the reason for this is largely because SegWit as a blocksize
increase isn't very satisfying. It resolves to a one-time increase with no
future plans, thus engendering the same objections as people who demand we
just "raise the number to N." People can argue about what N should be, but
when N is just a flat number, we know we'll have to deal with the issue
again.
In that light, I think it is even more essential to continue discussing the
blocksize problem.
> I find it more interesting to talk to the users and see how they think
> Segwit harms them,
From an inordinate amount of time spent reading Reddit, I believe this
largely comes down to the rumor that has a death grip on the BU community -
that Core are all just extensions of Blockstream, and Blockstream wants to
restrict growth on-chain to force growth of their 2nd-layer services
(Lightning and/or sidechains).
I believe the tone of the discussion needs to be changed, and have been
trying to work to change that tone for weeks now. There's one faction that
believes that Bitcoin will rarely, if ever, benefit from a blocksize
increase, and fees rising is a desired/unavoidable result. There's a
different faction that believes Bitcoin limits are arbitrary and that all
people worldwide should be able to put any size transactions, even
microtransactions, on-chain. Both factions are extreme in their viewpoints
and resort to conspiracy theories to interpret the actions of Core
("Blockstream did it") or BU ("Jihan controls everything, and anyone who
says otherwise is a shill paid by Roger Ver!").
It is all very unhealthy for Bitcoin. Both sides need to accept that
microtransactions from all humans cannot go on-chain, and that never
increasing the blocksize doesn't mean millions of home users will run
nodes. The node argument breaks down economically and the microtransaction
argument is an impossible mountain for a blockchain to climb.
On Wed, Mar 29, 2017 at 2:37 AM, Jorge Timón via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> While Segwit's change from a 1 MB size limit to a 4 MB weight limit seems
> to be controversial among some users (I find that very often it is because
> they have been confused about what segwit does, or have even been outright
> lied to about it), I don't think it's very interesting to discuss further
> size increases. I find it more interesting to talk to the users and see how
> they think Segwit harms them; maybe we missed something in segwit that
> needs to be removed for segwit to become uncontroversial, or maybe it is
> just disinformation.
>
> On the other hand, we may want to have our first uncontroversial hardfork
> as soon as possible, independently of block size. For example, we could do
> something as simple as fixing the timewarp attack, as BIP99 proposes. I
> cannot think of a HF that is easier to implement or has less potential for
> controversy than that.
>
> On 29 Mar 2017 8:32 am, "Bram Cohen via bitcoin-dev" <bitcoin-dev@lists.
> linuxfoundation.org> wrote:
>
> On Tue, Mar 28, 2017 at 9:59 AM, Wang Chun via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>>
>> The basic idea is, as many of us agree, hard fork is risky and should
>> be well prepared. We need a long time to deploy it.
>>
>
> Much as it may be appealing to repeal the block size limit now, with a
> grace period until a replacement is needed, in a 'repeal and replace'
> strategy, it's dubious to assume that an idea can be agreed upon later when
> it can't be agreed upon now. Trying to put a time limit on it runs into the
> possibility that you'll find that whatever reasons there were for not
> having general agreement on a new setup before still apply, and the
> embarrassing situation of winding up stuck with the status quo after much
> Sturm und Drang.
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
[-- Attachment #2: Type: text/html, Size: 6373 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 16:59 [bitcoin-dev] Hard fork proposal from last week's meeting Wang Chun
` (4 preceding siblings ...)
2017-03-29 6:32 ` Bram Cohen
@ 2017-03-29 7:49 ` Martin Lízner
2017-03-29 15:57 ` David Vorick
2017-03-29 19:10 ` Jared Lee Richardson
5 siblings, 2 replies; 81+ messages in thread
From: Martin Lízner @ 2017-03-29 7:49 UTC (permalink / raw)
To: Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 2521 bytes --]
If there is to be a hard fork, the Core team should author the code. Other
dev teams have marginal support among BTC users.
I'm tending to believe that a HF is a necessary evil now. But let's do it
with a conservative approach:
- Fix historical BTC issues, improve the code
- Plan the HF activation date well ahead - 12 months+
- Allow increasing the block size on a year-by-year basis, as Luke suggested
- Compromise with miners on an initial block size bump (e.g. 2MB)
- SegWit
Martin Lizner
On Tue, Mar 28, 2017 at 6:59 PM, Wang Chun via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> I've proposed this hard fork approach last year in Hong Kong Consensus
> but immediately rejected by coredevs at that meeting, after more than
> one year it seems that lots of people haven't heard of it. So I would
> post this here again for comment.
>
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
>
> Despite spam tx on the network, the block capacity is approaching its
> limit, and we must think ahead. Shall we code a patch right now, to
> remove the block size limit of 1MB, but not activate it until far in
> the future. I would propose to remove the 1MB limit at the next block
> halving in spring 2020, only limit the block size to 32MiB which is
> the maximum size the current p2p protocol allows. This patch must be
> in the immediate next release of Bitcoin Core.
>
> With this patch in core's next release, Bitcoin works just as before,
> no fork will ever occur, until spring 2020. But everyone knows there
> will be a fork scheduled. Third party services, libraries, wallets and
> exchanges will have enough time to prepare for it over the next three
> years.
>
> We don't yet have an agreement on how to increase the block size
> limit. There have been many proposals over the past years, like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> on. These hard fork proposals, with this patch already in Core's
> release, they all become soft fork. We'll have enough time to discuss
> all these proposals and decide which one to go. Take an example, if we
> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
> from 32MiB to 2MB will be a soft fork.
>
> Anyway, we must code something right now, before it becomes too late.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 3254 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 7:49 ` Martin Lízner
@ 2017-03-29 15:57 ` David Vorick
2017-03-29 16:08 ` Aymeric Vitte
2017-03-29 19:46 ` Jared Lee Richardson
2017-03-29 19:10 ` Jared Lee Richardson
1 sibling, 2 replies; 81+ messages in thread
From: David Vorick @ 2017-03-29 15:57 UTC (permalink / raw)
To: Martin Lízner, Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 1401 bytes --]
On Mar 29, 2017 9:50 AM, "Martin Lízner via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
I'm tending to believe that a HF is a necessary evil now.
I will firmly disagree. We know how to do a soft-fork blocksize increase.
If it is decided that a block size increase is justified, we can do it with
extension blocks in a way that achieves full backwards compatibility for
all nodes.
Barring a significant security motivation, there is no need to hardfork.
I am also solidly unconvinced that increasing the blocksize today is a good
move, even as little as SegWit does. It's too expensive for a home user to
run a full node, and user-run full nodes are what provide the strongest
defence against political manoeuvring.
When considering what block size is acceptable, the impact of running
bitcoin in the background on affordable, non-dedicated home-hardware should
be a top consideration.
Disk space I believe is the most significant problem today, with RAM being
the second most significant problem, and finally bandwidth consumption as
the third most important consideration. I believe that v0.14 is already too
expensive on all three fronts, and that block size increases shouldn't be
considered at all until the requirements are reduced (or until consumer
hardware is better, but I believe we are talking 3-7 years of waiting if we
pick that option).
[-- Attachment #2: Type: text/html, Size: 2082 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 15:57 ` David Vorick
@ 2017-03-29 16:08 ` Aymeric Vitte
[not found] ` <CAFVRnyo1XGNbq_F8UfqqJWHCVH14iMCUMU-R5bOh+h3mtwSUJg@mail.gmail.com>
2017-03-29 19:46 ` Jared Lee Richardson
1 sibling, 1 reply; 81+ messages in thread
From: Aymeric Vitte @ 2017-03-29 16:08 UTC (permalink / raw)
To: David Vorick, Bitcoin Protocol Discussion
On 29/03/2017 at 17:57, David Vorick via bitcoin-dev wrote:
> It's too expensive for a home user to run a full node, and user-run
> full nodes are what provide the strongest defence against political
> manoeuvring.
Yes, but what makes you think that "It's too expensive for a home user to
run a full node"? Not trivial, maybe; long to set up, for sure; but why
"expensive"? I tested running a correctly configured full node from home
and did not notice anything annoying/expensive.
--
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 15:57 ` David Vorick
2017-03-29 16:08 ` Aymeric Vitte
@ 2017-03-29 19:46 ` Jared Lee Richardson
1 sibling, 0 replies; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-29 19:46 UTC (permalink / raw)
To: David Vorick, Bitcoin Protocol Discussion
> When considering what block size is acceptable, the impact of running
bitcoin in the background on affordable, non-dedicated home-hardware should
be a top consideration.
Why is that a given? Is there math that outlines what the risk levels are
for various configurations of node distributions, vulnerabilities, etc.?
How does one even evaluate the tradeoff between node costs and transaction
fees?
> Disk space I believe is the most significant problem today, with RAM
being the second most significant problem, and finally bandwidth
consumption as the third most important consideration. I believe that v0.14
is already too expensive on all three fronts, and that block size increases
shouldn't be considered at all until the requirements are reduced (or until
consumer hardware is better, but I believe we are talking 3-7 years of
waiting if we pick that option).
Disk space is not the largest cost, either today or in the future. Without
historical checkpointing in some fashion, bandwidth costs are more than two
orders of magnitude higher than every other cost for full listening nodes.
With historical syncing discounted (i.e., pruned or non-listening nodes),
bandwidth costs are still higher than hard drive costs.
Today: Full listening node, 133 peers, measured 1.5 TB/mo of bandwidth
consumption over two multi-day intervals. 1,500 GB/month @ EC2 low-tier
prices = $135/month; 110 GB storage = $4.95/month. Similar arguments extend
to consumer hardware - Comcast broadband is ~$80/mo depending on region and
comes with a 1.0 TB cap in most regions, so $120/mo or even $80/mo would be
in the same ballpark. A consumer-grade 2TB hard drive is $70 and will last
for at least 2 years, so $2.93/month if the hard drive were totally
dedicated to Bitcoin, and $0.16/month if we only count the percentage that
Bitcoin uses.
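For concreteness, the arithmetic above can be reproduced directly. A rough
Python sketch, where every input is an assumption taken from the figures in
this post (the $0.045/GB storage rate is back-derived from the $4.95
figure), not an authoritative market price:

    # Rough reproduction of the cost figures above; all inputs are
    # assumptions from this post, not quoted market prices.
    ec2_bandwidth_per_gb = 0.09   # assumed EC2 low-tier egress, $/GB
    ec2_storage_per_gb = 0.045    # back-derived: $4.95 / 110 GB
    full_node_bw_gb = 1500        # measured: full listening node, 133 peers
    chain_size_gb = 110

    print("bandwidth: $%.2f/mo" % (full_node_bw_gb * ec2_bandwidth_per_gb))  # $135.00
    print("storage:   $%.2f/mo" % (chain_size_gb * ec2_storage_per_gb))      # $4.95

    # Consumer-side comparison: a $70 2TB drive amortized over 24 months.
    whole_drive = 70.0 / 24                               # ~$2.9/mo
    btc_share = whole_drive * (chain_size_gb / 2000.0)    # ~$0.16/mo
    print("drive: $%.2f/mo whole, $%.2f/mo for Bitcoin's share"
          % (whole_drive, btc_share))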
For a non-full listening node with ~25 peers, I measured around 70 GB/month
of usage over several days, which is $6.30 per month at EC2 prices or $5.60
at the proportional Comcast cost. If someone isn't supporting syncing,
there's little reason for them not to turn on pruning. Even if they didn't,
a desktop in the $500 range typically comes with 1 or 2 TB of storage by
default, and without segwit or a blocksize cap increase, 3 years from now
the full history will only take up 33% of the smaller, three-year-old,
budget-range PC hard drive. Even if we assume the hard drive price declines
of the last 4 years hold steady (14%, very low compared to historical
gains), 330 GB of data only works out to a proportional monthly cost of
$6.20 - still slightly smaller than the bandwidth costs, and almost
entirely removable by turning on pruning, since such a node isn't helping
others sync anyway.
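For reference, enabling pruning is a one-line configuration change. An
illustrative bitcoin.conf snippet (550 MiB is the minimum target Bitcoin
Core accepts; a pruned node keeps the full UTXO set, so its validation is
unaffected - it just stops serving historical blocks to syncing peers):

    # bitcoin.conf - illustrative pruned-node setting
    # Keep only ~550 MiB of recent raw block data on disk.
    prune=550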
I don't know how to evaluate the impacts of RAM or CPU usage, or
consequently electricity usage, for a node yet. I'm open to quantifying any
of those if there's a method, but it seems absurd that RAM could even
become a significant factor given the abundance of cheap RAM nowadays, with
few programs needing it. CPU usage, and thus electricity cost, might become
a factor; I just don't know how to quantify it at various block scales.
Currently, CPU usage isn't noticeably taxing any hardware that I run a node
on, the syncing process aside.
> I am also solidly unconvinced that increasing the blocksize today is a
good move, even as little as SegWit does.
The consequence of your logic that holds node operational costs down is
that transaction fees for users go up, adoption slows as various use cases
become impractical, price growth suffers, and altcoins that choose lower
fees over node-cost concerns exhibit competitive growth against Bitcoin's
cryptocurrency market share. Even if you are right, that's a tradeoff worth
thoroughly investigating from every angle; the consequences could be just
as dire for Bitcoin in 10 years as if we made ourselves vulnerable.
And even if an altcoin can't take Bitcoin's dominance through lower fees,
we will not end up with millions of home users running nodes, ever. If we
did, that would mean orders of magnitude more fee-market competition and
continuing price increases while hardware costs decline. If transaction
fees go up from space limitations, and rise even further in real-world
terms from price increases, while node costs decline, eventually it will
cost more to send a transaction than to run a node for a full month. Home
users would not send transactions, because the fees would exceed anything
they might use Bitcoin for, and they would not run a node for something
they don't use - why would they? Letting the ratio between node costs and
transaction costs swing to the extreme favor of node costs would be worse:
lower Bitcoin usability, adoption, and price, without any meaningful
increase in security.
How do we evaluate the math on node distributions versus various attack
vectors?
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 7:49 ` Martin Lízner
2017-03-29 15:57 ` David Vorick
@ 2017-03-29 19:10 ` Jared Lee Richardson
2017-03-29 19:36 ` praxeology_guy
2017-04-02 19:12 ` Staf Verhaegen
1 sibling, 2 replies; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-29 19:10 UTC (permalink / raw)
To: Martin Lízner, Bitcoin Protocol Discussion
In order for any blocksize increase to be agreed upon, more consensus is
needed. The proportion of users who believe no blocksize increase is needed
is larger than the hardfork threshold Core wants (95% consensus). The
proportion of users who believe in microtransactions for all is also larger
than 5%, and both of those groups may be larger than 10% respectively. I
don't think either the big-blocks faction or the low-node-costs faction has
even a simple majority of support. Getting consensus is going to be a big
mess, but it is critical that it is done.
On Wed, Mar 29, 2017 at 12:49 AM, Martin Lízner via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> If there should be a hard fork, the Core team should author the code.
> Other dev teams have marginal support among all BTC users.
>
> I'm tending to believe that a HF is a necessary evil now. But let's do it
> with a conservative approach:
> - Fix historical BTC issues, improve code
> - Plan the HF activation date well ahead - 12 months+
> - Allow increasing the block size on a year-by-year basis, as Luke
> suggested
> - Compromise with miners on an initial block size bump (e.g. 2MB)
> - SegWit
>
> Martin Lizner
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 19:10 ` Jared Lee Richardson
@ 2017-03-29 19:36 ` praxeology_guy
2017-04-02 19:12 ` Staf Verhaegen
1 sibling, 0 replies; 81+ messages in thread
From: praxeology_guy @ 2017-03-29 19:36 UTC (permalink / raw)
To: Jared Lee Richardson; +Cc: Bitcoin Protocol Discussion
I think at least the following things have to be done before the block size can be increased by any significant amount:
1. A network-protocol-defined UTXO snapshot format, with snapshots created automatically in a deterministic, periodic, and low-cost fashion, and the ability to synchronize starting from such a snapshot at the user's request (a toy sketch of such a deterministic digest follows after this list).
2. SPV support from a pruned node that has the latest UTXO snapshot. This probably requires committing the UTXO snapshot hash to the block.
3. Given that the above fixes the problem of needing full block chain history storage, and that people are comfortable with such a security model, a good portion of the network can switch to this security model and still satisfy our desire for the system to be sufficiently distributed. This requires lots of testing.
4. More current studies on the effect of increasing the block size on synchronizing-node drop-out due to other factors such as network bandwidth, memory, and CPU usage.
Without doing the above, scheduling a block size increase would be reckless.
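To make point 1 concrete, here is a toy sketch of how a deterministic
snapshot digest could be computed, assuming (hypothetically) that UTXOs are
keyed by (txid, output index) and hashed in lexicographic key order; this
is not an existing network format, just an illustration in Python:

    import hashlib

    def utxo_snapshot_hash(utxos):
        # utxos: dict mapping (txid_bytes, vout_int) -> serialized txout bytes.
        # Hashing entries in sorted key order makes the digest deterministic:
        # every node holding the same set derives the same hash.
        h = hashlib.sha256()
        for (txid, vout) in sorted(utxos):
            h.update(txid)
            h.update(vout.to_bytes(4, "little"))
            h.update(utxos[(txid, vout)])
        # Double-SHA256, matching Bitcoin's usual hashing convention.
        return hashlib.sha256(h.digest()).digest()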
Cheers,
Praxeology Guy
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 19:10 ` Jared Lee Richardson
2017-03-29 19:36 ` praxeology_guy
@ 2017-04-02 19:12 ` Staf Verhaegen
1 sibling, 0 replies; 81+ messages in thread
From: Staf Verhaegen @ 2017-04-02 19:12 UTC (permalink / raw)
To: bitcoin-dev
Jared Lee Richardson via bitcoin-dev wrote on Wed 2017-03-29 at 12:10
[-0700]:
> The proportion of users believing in microtransactions for all is also
> larger than 5%,
In order to evaluate this statement, "microtransaction" first has to be
defined. I guess there will be no consensus on that either...
greets,
Staf.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
@ 2017-03-28 19:56 Paul Iverson
2017-03-28 20:16 ` Pieter Wuille
2017-03-28 20:43 ` Tom Zander
0 siblings, 2 replies; 81+ messages in thread
From: Paul Iverson @ 2017-03-28 19:56 UTC (permalink / raw)
To: bitcoin-dev
Thank you for the proposal, Wang Chun!
It is clear that, spam aside, blocks are getting full and we need to
increase them soon. What I don't like about your proposal is that it forces
all node operators to implicitly accept larger blocks in 2020, maybe even
against their will. 32 MB blocks might result in a loss of
decentralization, and it might be too difficult to coordinate for smaller
blocks before it's too late.
So I think Core can't decide on hard forks like this. It must be left up to
the users. I think the only choice is for Core to add a run-time option
allowing node operators to increase the block size limit, so that this very
controversial decision does not come from Core. It must come from the
community.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 19:56 Paul Iverson
@ 2017-03-28 20:16 ` Pieter Wuille
2017-03-28 20:43 ` Tom Zander
1 sibling, 0 replies; 81+ messages in thread
From: Pieter Wuille @ 2017-03-28 20:16 UTC (permalink / raw)
To: Paul Iverson, Bitcoin Protocol Discussion
On Tue, Mar 28, 2017 at 12:56 PM, Paul Iverson via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
> So I think Core can't decide on hard forks like this. It must be left up to
> the users. I think only choice is for Core to add a run-time option to allow
> node operators to increase block size limit, so that this very controversial
> decision is not coming from Core. It must come from the community.
Neither Bitcoin Core's maintainers nor any other software's can decide on
a hard fork, and I keep being confused by the focus on Core in this topic.
Even if a hard forking change (or lack thereof) were included in a new
release, it would still be up to the community to choose to run the new
software. Bitcoin Core very intentionally has no auto-update feature, as
the choice of which network rules to enforce must come from node operators,
not developers. Ask yourself this: what if a new Bitcoin Core release
included a new rule that blacklists <random famous person>'s coins? What do
you think would happen? I hope that people would refuse to update, and
choose to run different full node software.
Core is not special. It is one of many pieces of software that
implement today's Bitcoin consensus rules. If a hardfork is to take
place in a way that does not result in two currencies, it must be
clear that the entire ecosystem will adopt it. Bitcoin Core will not
merge any consensus changes that do not clearly satisfy that
criterion.
--
Pieter
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 19:56 Paul Iverson
2017-03-28 20:16 ` Pieter Wuille
@ 2017-03-28 20:43 ` Tom Zander
2017-03-28 20:53 ` Alphonse Pace
1 sibling, 1 reply; 81+ messages in thread
From: Tom Zander @ 2017-03-28 20:43 UTC (permalink / raw)
To: bitcoin-dev, Paul Iverson
On Tuesday, 28 March 2017 21:56:49 CEST Paul Iverson via bitcoin-dev wrote:
> It is clear that, spam aside, blocks are getting full and we need increase
> them soon. What I don't like about your proposal is it forces all node
> operators to implicitly accept larger blocks in 2020, even maybe against
> their will. 32 MB blocks might result in a loss of decentralization, and
> it might be too difficult to coordinate for small blocks before it's too
> late.
The suggestion was not to produce 32MB blocks, so your fear here is
unfounded.
--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 20:43 ` Tom Zander
@ 2017-03-28 20:53 ` Alphonse Pace
2017-03-28 21:06 ` Luke Dashjr
0 siblings, 1 reply; 81+ messages in thread
From: Alphonse Pace @ 2017-03-28 20:53 UTC (permalink / raw)
To: Tom Zander, Bitcoin Protocol Discussion
His demand (not suggestion) allows it without any safeguards.
>This patch must be in the immediate next release of Bitcoin Core.
That is not a suggestion.
Wang - still waiting on the details of this meeting. In the spirit of
openness, I think you ought to share with the community what kind of secret
meetings are happening.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-28 20:53 ` Alphonse Pace
@ 2017-03-28 21:06 ` Luke Dashjr
0 siblings, 0 replies; 81+ messages in thread
From: Luke Dashjr @ 2017-03-28 21:06 UTC (permalink / raw)
To: Alphonse Pace, Bitcoin Protocol Discussion
On Tuesday, March 28, 2017 8:53:30 PM Alphonse Pace via bitcoin-dev wrote:
> His demand (not suggestion) allows it without any safeguards.
>
> >This patch must be in the immediate next release of Bitcoin Core.
>
> That is not a suggestion.
I think it was probably a design requirement more than a demand. It makes
sense: if we're aiming for a long lead time for a possible hardfork, we
want to get the lead time started ASAP. (It could perhaps have been
communicated more clearly, but let's not read hostility into things
unnecessarily.)
Meta-topic: can we try a little harder to avoid sequences of multiple brief
replies in a matter of minutes? Combine them into a single reply.
Luke
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
@ 2017-03-29 19:33 Daniele Pinna
2017-03-29 20:28 ` Peter R
2017-03-29 20:28 ` David Vorick
0 siblings, 2 replies; 81+ messages in thread
From: Daniele Pinna @ 2017-03-29 19:33 UTC (permalink / raw)
To: Bitcoin Dev
What about periodically committing the entire UTXO set to a special
checkpoint block which becomes the new de facto Genesis block?
Daniele
------------------------------
Date: Wed, 29 Mar 2017 16:41:29 +0000
From: Andrew Johnson <andrew.johnson83@gmail.com>
To: David Vorick <david.vorick@gmail.com>
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
I believe that as we continue to add users to the system by scaling
capacity, we will see more new nodes appear, but I'm at a bit of a loss as
to how to prove it empirically.
I do see your point about increasing load on archival nodes, but the
majority of that load is going to come from new nodes coming online;
they're the only ones going after very old blocks. I could see that as a
potential attack vector: overwhelm the archival nodes by spinning up new
nodes constantly, making it difficult for a "real" new node to get up to
speed in a reasonable amount of time.
Perhaps the answer there would be a way to pay an archival node a small
amount of bitcoin in order to retrieve blocks older than a certain cutoff?
Include an IP address for the node asking for the data as metadata in the
transaction... Archival nodes could set and publish their own policy and
let the market decide what those older blocks are worth. It would also help
incentivize running archival nodes, which we do need. Of course, this isn't
very user friendly.
We can take this to bitcoin-discuss if we're getting too far off topic.
On Wed, Mar 29, 2017 at 11:25 AM David Vorick <david.vorick@gmail.com>
wrote:
>
> On Mar 29, 2017 12:20 PM, "Andrew Johnson" <andrew.johnson83@gmail.com>
> wrote:
>
> What's stopping these users from running a pruned node? Not every node
> needs to store a complete copy of the blockchain.
>
>
> Pruned nodes are not the default configuration, if it was the default
> configuration then I think you would see far more users running a pruned
> node.
>
> But that would also substantially increase the burden on archive nodes.
>
>
> Further discussion about disk space requirements should be taken to
> another thread.
>
>
> --
Andrew Johnson
------------------------------
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 19:33 Daniele Pinna
@ 2017-03-29 20:28 ` Peter R
2017-03-29 22:17 ` Jared Lee Richardson
2017-03-29 20:28 ` David Vorick
1 sibling, 1 reply; 81+ messages in thread
From: Peter R @ 2017-03-29 20:28 UTC (permalink / raw)
To: Daniele Pinna, Bitcoin Protocol Discussion
I believe nearly everyone at Bitcoin Unlimited would be supportive of a UTXO check-pointing scheme. I’d love to see this happen, as it would greatly reduce the time needed to get a new node up-and-running, for node operators who are comfortable trusting these commitments.
I’m confident that we could work with the miners who we have good relationships with to start including the root hash of the (lagging) UTXO set in their coinbase transactions, in order to begin transforming this idea into reality. We could also issue regular transactions from “semi-trusted” addresses controlled by known people that include the same root hash in an OP_RETURN output, which would allow cross-checking against the miners’ UTXO commitments, as part of this initial “prototype” system.
This would "get the ball rolling" on UTXO commitments in a permissionless way (no one can stop us from doing this). If the results from this prototype commitment scheme were positive, then perhaps there would be support from the community and miners to enforce a new rule which requires the (lagging) root hashes be included in new blocks. At that point, the UTXO commitment scheme is no longer a prototype but a trusted feature of the Bitcoin network.
On that topic, are there any existing proposals detailing a canonical ordering of the UTXO set and a scheme to calculate the root hash?
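As a strawman for that discussion: one naive scheme would order the set
lexicographically by (txid, output index) and fold double-SHA256 leaf
hashes into a Bitcoin-style Merkle root. The Python sketch below is purely
hypothetical and not drawn from any existing proposal:

    import hashlib

    def dsha256(b):
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def utxo_merkle_root(utxos):
        # utxos: dict mapping (txid_bytes, vout_int) -> serialized txout bytes.
        # Canonical ordering: lexicographic by (txid, vout).
        leaves = [dsha256(txid + vout.to_bytes(4, "little") + out)
                  for (txid, vout), out in sorted(utxos.items())]
        if not leaves:
            return dsha256(b"")
        while len(leaves) > 1:
            if len(leaves) % 2:            # odd count: duplicate the last
                leaves.append(leaves[-1])  # node, as Bitcoin's tx tree does
            leaves = [dsha256(l + r)
                      for l, r in zip(leaves[::2], leaves[1::2])]
        return leaves[0]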
Best regards,
Peter
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 20:28 ` Peter R
@ 2017-03-29 22:17 ` Jared Lee Richardson
0 siblings, 0 replies; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-29 22:17 UTC (permalink / raw)
To: Peter R, Bitcoin Protocol Discussion
> I’m confident that we could work with the miners who we have good
relationships with to start including the root hash of the (lagging) UTXO
set in their coinbase transactions, in order to begin transforming this
idea into reality.
By itself, this wouldn't work without a way for a new node to differentiate
between a false history and a true one.
> We could also issue regular transactions from “semi-trusted” addresses
controlled by known people that include the same root hash in an OP_RETURN
output, which would allow cross-checking against the miners’ UTXO
commitments, as part of this initial “prototype”
This might work, but I fail to understand how a new node could verify an
address or transaction without a blockchain to back it. Even if it could,
it becomes dependent upon those addresses not being compromised, and the
owners of those addresses would become targets for potential government
operations.
Having the software silently attempt to resolve the problem is risky
unless it is foolproof. Otherwise, users will implicitly assume their
software is showing them the correct history and numbers, and if the change
the UTXO attacker made was small, users might follow the main chain without
issue until it was too late and the attacker struck with an address that
otherwise never transacted: a sudden, bizarre, hard-to-debug fork, and a
potential double spend against anyone who picked up the fraudulent UTXO.
Users already treat wallet software with some level of suspicion, asking
whether they can trust x or y or z - like the portion of the BU community
convinced that Core has been compromised by Blockstream bigwigs. Signed
releases could provide the same assurance, while encouraging both
open-source security checks of the signed UTXO sets and users checking
download signatures.
Either approach is better than what we have now though, so I'd support
anything.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 19:33 Daniele Pinna
2017-03-29 20:28 ` Peter R
@ 2017-03-29 20:28 ` David Vorick
2017-03-29 22:08 ` Jared Lee Richardson
1 sibling, 1 reply; 81+ messages in thread
From: David Vorick @ 2017-03-29 20:28 UTC (permalink / raw)
To: Daniele Pinna, Bitcoin Protocol Discussion
> > When considering what block size is acceptable, the impact of running
bitcoin in the background on affordable, non-dedicated home-hardware should
be a top consideration.
> Why is that a given? Is there math that outlines what the risk levels
are for various configurations of node distributions, vulnerabilities,
etc? How does one even evaluate the costs versus the benefits of node
costs versus transaction fees?
It's a political assessment. Full nodes are the ultimate arbiters of
consensus. When a contentious change is suggested, only the full nodes have
the power to either accept or reject this contentious change. If home users
are not running their own full nodes, then home users have to trust and
rely on other, more powerful nodes to represent them. Of course, the more
powerful nodes, simply by nature of having more power, are going to have
different opinions and objectives from the users. And it's impossible for
5000 nodes to properly represent the views of 5,000,000 users. Users
running full nodes is important to prevent political hijacking of the
Bitcoin protocol. Running a full node yourself is the only way to guarantee
(in the absence of trust - which Bitcoin is all about eliminating trust)
that changes you are opposed to are not introduced into the network.
> Disk space is not the largest cost, either today or in the future.
Without historical checkpointing in some fashion, bandwidth costs are more
than 2 orders of magnitude higher cost than every other cost for full
listening nodes.
This statement is not true for home users; it is true for datacenter nodes.
For home users, 200 GB of bandwidth and 500 GB of bandwidth largely have
the exact same cost. I pay a fixed amount of money for my internet, and if
I use 500 GB the cost is identical to if I use 200 GB. So long as bandwidth
is kept under my home bandwidth cap, bandwidth for home nodes is _free_.
Similarly, disk space may only be $2/TB in bulk, but as a home user I have
a $1000 computer with 500 GB of total storage, 100 GB seems
(psychologically) to cost a lot closer to $200 than to $2. And if I go out
and buy an extra drive to support Bitcoin, it's going to cost about $50 no
matter what drive I pick, because that's just how much you have to spend to
get a drive. The fact that I get an extra 900 GB that I'm not using is
irrelevant - I spent $50 explicitly so I could run a bitcoin node.
The financials of home nodes follow a completely different math than the
costs you are citing by quoting datacenter prices.
> I don't know how to evaluate the impacts of RAM or CPU usage, or
consequently electricity usage for a node yet. I'm open to quantifying any
of those if there's a method, but it seems absurd that ram could even
become a significant factor given the abundance of cheap RAM nowadays with
few programs needing it.
Many home machines only have 4GB of RAM. (I am acutely aware of this
because my own software consumes about 3.5GB of RAM, which means all of our
users stuck at 4 GB cannot use my software and Chrome at the same time).
0.14 uses more than 1 GB of RAM. This I think is not really a problem for
most people, but it becomes a problem if the amount of RAM required grows
enough that they can't have all of their programs open at the same time.
1GB I think is really the limit you'd want to have before you'd start
seeing users choose not to run nodes simply because they'd rather have 300
tabs open instead.
CPU usage I think is pretty minimal. Your node is pretty busy during IBD
which is annoying but tolerable. And during normal usage a user isn't even
going to notice. Same for electricity. They aren't going to notice at the
end of the month if their electricity bill is a dollar higher because of
Bitcoin.
> The consequence of your logic that holds node operational costs down is
that transaction fees for users go up, adoption slows as various use cases
become impractical, price growth suffers, and alt coins that choose lower
fees over node cost concerns will exhibit competitive growth against
Bitcoin's crypto-currency market share. Even if you are right, that's
hardly a tradeoff not worth thoroughly investigating from every angle, the
consequences could be just as dire for Bitcoin in 10 years as it would be
if we made ourselves vulnerable.
This is very much worth considering. If transaction fees are so high that
there is no use case at all for people unwilling to buy extra hardware for
Bitcoin (a dedicated node or whatever), then there is no longer a reason to
worry about these people as users. However, I think the fees would have to
get in the $50 range for that to start to be the case. When talking about
emergency funds - that is, $10k+ that you keep in case your government
defaults, hyperinflates, seizes citizen assets, etc. etc. (situations that
many Bitcoin users today have to legitimately worry about), then you are
going to be making a few transactions per year at most, and the cost of
fees on a home node may be $150 / yr, while the cost of dedicated hardware
might be $150/yr ($600 box amortized over 4 years). We are two orders of
magnitude away from this type of fee pressure, so I think it continues to
make sense to be considering the home nodes as the target that we want to
hit.
> What about periodically committing the entire UTXO set to a special
checkpoint block which becomes the new de facto Genesis block?
This should be discussed in another thread but I don't think I'm alone in
saying that I think this could actually be done in a secure / safe /
valuable way if you did it correctly. It would reduce bandwidth pressure on
archive nodes, reduce disk pressure on full nodes, and imo make for a more
efficient network overall.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 20:28 ` David Vorick
@ 2017-03-29 22:08 ` Jared Lee Richardson
2017-03-30 7:11 ` Luv Khemani
0 siblings, 1 reply; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-29 22:08 UTC (permalink / raw)
To: David Vorick, Bitcoin Protocol Discussion
> It's a political assessment. Full nodes are the ultimate arbiters of
consensus.
That's not true unless miners are thought of as identical to nodes, which
has not been true for nearly 4 years now. Nodes arbitrating consensus is
the BU theory - that nodes can restrain miners - but it doesn't work. If
miners were forked off from non-miners, the miner network could keep their
blockchain operational under attack from the nodes far better than the
nodes could keep their blockchain operational under attack from the miners.
The miners could effectively grind the node network to a complete halt and
probably still run their own fork unimpeded at the same time. This would
continue until the lack of faith in the network drove the miners out of
business economically, or until the node network capitulated and followed
the rules of the miner network.
The reason BU isn't a dire threat is that there's a great rift between the
miners, just like there is between average users, just as Satoshi intended,
and that rift gives the user network the economic edge.
> If home users are not running their own full nodes, then home users have
to trust and rely on other, more powerful nodes to represent them. Of
course, the more powerful nodes, simply by nature of having more power, are
going to have different opinions and objectives from the users.
I think you're conflating mining with node operation here. Node users' only
power is to block the propagation of certain things. Since miners also run
node endpoints, they can cut the node users out of the equation by linking
with each other directly - something they already do out of practicality
for propagation. Node users do not have the power to arbitrate consensus;
that is why we have blocks and PoW.
> And it's impossible for 5000 nodes to properly represent the views of
5,000,000 users. Users running full nodes is important to prevent political
hijacking of the Bitcoin protocol. [..] that changes you are opposed to
are not introduced into the network.
This isn't true. Non-miner nodes cannot produce blocks. Their opinion is
not represented in the blockchain in any way; the blockchain is entirely
made up of blocks. They can commit transactions, but transactions must
follow an even stricter set of rules, and short of a user-activated PoW
change, the miners get to decide. It might be viable for us to introduce
ways for transactions to vote on things, but that also isn't nodes voting -
that's money voting.
Bitcoin is structured such that nodes have no votes, because nodes cannot
be trusted. They don't inherently represent individuals, they don't
inherently represent value, and they don't commit work played against each
other to achieve a game-theory equilibrium. That's miners.
> This statement is not true for home users, it is true for datacenter
nodes. For home users, 200 GB of bandwidth and 500 GB of bandwidth largely
have the exact same cost.
Your assumption is predicated on the idea that users pay a fixed cost for
any volume of bandwidth. That assertion is true for some users but not for
others, and it has become markedly less true in recent years with the
addition of bandwidth caps by many ISPs. Even users without a bandwidth cap
can often get a very threatening letter if they max their connection 24/7.
Assuming unlimited user bandwidth in the future and comparing that with
limited datacenter bandwidth is extremely short-sighted. Fundamentally, if
market forces have established that datacenter bandwidth costs $0.09 per
GB, what makes you think that ISPs don't have to deal with the same
limitations? They do; the difference is that $0.09 per GB times the total
usage across the ISP's customer base is far, far lower than $80 times the
number of customers. The more that a small group of customers deviating
wildly becomes a problem for them, the more they will add bandwidth caps,
send threatening letters, or even rate-limit or stop serving those users.
Without that assumption, your math and examples fall apart - bandwidth
costs for full archival nodes are nearly 50 times higher than storage
costs, whether they are at home or in a datacenter.
> The financials of home nodes follow a completely different math than the
costs you are citing by quoting datacenter prices.
No, they really don't without your assumption. Yes, they are somewhat
different - if someone has a 2TB hard drive but only ever uses 40% of it,
the remaining hard drive space has a cost of zero. But those specific
examples break down when you average over several years and fifty thousand
users. If that same user were running a bitcoin node and hard drive space
were indeed a concern, they would factor that desire into the purchase of
their next computer, preferring those with larger hard drives. That
reintroduces the cost for the same individual who had no cost before. The
cost difference doesn't work out to the exact same numbers as the
datacenter costs, which enjoy a better economy of scale but also carry
profit and business overhead, but all of the math I've done indicates that
over thousands of individuals and several years of time, the costs land in
the same ballpark. For example: Comcast bandwidth cap = 1000 GB @
~$80/month, i.e. $0.08/GB. Amazon's first tier is currently $0.09. Much
closer than I expected before I worked out the math. I'm open to being
proven wrong.
> 0.14 uses more than 1 GB of RAM.
I'm running 0.13.2 and see only about 300 MB of RAM usage. Why does 0.14
use three times the RAM?
> 1GB I think is really the limit you'd want to have before you'd start
seeing users choose not to run nodes simply
Again, while I sympathize with the concept, I don't believe holding back
the growth of the entire currency based on minimum specs is a fair
tradeoff. The impact on use cases that depend on a given fee level is total
obliteration. That's unavoidable for things like microtransactions, but a
fee level of $1/tx allows for hundreds of opportunities that a fee level of
$100/tx does not. That difference may be the deciding factor in the network
effect between Bitcoin and a competitor altcoin. Bitcoin dying out because
a better-operated coin steals its first-mover advantage is just as bad as
Bitcoin dying out because an attacker halted tx propagation and killed the
network. Probably even worse - first-mover advantages are almost never
retaken, but the network could recover from a peering attack with software
changes and community/miner responses.
> However, I think the fees would have to get in the $50 range for that to
start to be the case.
I calculated this out. If blocksizes aren't increased but price increases
continue as they have for the last 3-5 years, per-node operational costs
for one month drop from roughly $10-15 (using datacenter numbers, which you
said would be higher than home user numbers, and which may well be once
amortized thoroughly) down to $5-8 in less than 8 years. If transaction
fees don't rise at all due to blockspace competition (i.e., they offset
only the minimum required for miners to economically protect Bitcoin),
they'll be above $10 in less than 4 years. I believe that comparing one
month of node operational costs against one transaction fee is a
reasonable, albeit imperfect, gauge of when users will stop caring.
That's not very far in the future at all, and fee-market competition will
probably be much, much worse for us and better for miners.
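That crossover is easy to sanity-check with simple compounding. In the
sketch below, the starting values and growth rates are assumptions chosen
to reproduce the ballpark figures above, not measurements:

    # Illustrative compounding, not a forecast; all inputs are assumptions.
    node_cost = 12.50      # $/month today, midpoint of the $10-15 estimate
    fee = 0.75             # $/tx today (assumed starting point)
    for year in range(1, 9):
        node_cost *= 0.92  # node costs falling ~8%/yr
        fee *= 1.9         # fees roughly doubling per year under competition
        print("year %d: node $%.2f/mo vs fee $%.2f/tx"
              % (year, node_cost, fee))
    # Fees reach roughly $10/tx around year 4, while node costs land in
    # the $5-8/month range by year 8.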
> When talking about emergency funds - that is, $10k+ that you keep in case
your government defaults, hyperinflates, seizes citizen assets, etc. etc.
(situations that many Bitcoin users today have to legitimately worry about),
So I don't mean to be rude here, but this kind of thinking is very poor
logic when applied to anyone who isn't already a libertarian Bitcoin
supporter. By the estimation of anyone outside the Bitcoin world, Bitcoin
is an extremely high-risk, unreliable store of value. We like to compare it
to "digital gold" because of the parameters that Satoshi chose, but saying
it does not make it true. For someone not already a believer, Bitcoin is a
risky, speculative investment in a promising future technology, while gold
is a stable physical asset with 4,000 years of acceptance history and the
same value in nearly every city on the planet. Bitcoin is difficult to
purchase, and it is difficult to find someone to exchange it for goods or
services.
Could Bitcoin become more like what you described in the future? A lot of
us hope so, or we wouldn't be here right now. But in the meantime, any
other cryptocurrency that chooses parameters similar to gold could eclipse
Bitcoin if we falter. If their currency is more usable because they balance
node operational costs and security against transaction fees and usability,
they have a pretty reasonable chance of doing so. And then you won't store
your $10k+ in bitcoin, you'll store it in $altcoin. The market doesn't
really care who wins.
> We are two orders of magnitude away from this type of fee pressure, so I
think it continues to make sense to be considering the home nodes as the
target that we want to hit.
That's nothing - we never had any fee competition at all until basically
November of last year. From December to March transaction fees went up by
250%, and they doubled from May to December before that. Transactions per
year are up 80% per year for the last 4 years. Things are about to get
ugly.
On Wed, Mar 29, 2017 at 1:28 PM, David Vorick via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> > > When considering what block size is acceptable, the impact of running
> bitcoin in the background on affordable, non-dedicated home-hardware should
> be a top consideration.
>
> > Why is that a given? Is there math that outlines what the risk levels
> are for various configurations of node distributions, vulnerabilities,
> etc? How does one even evaluate the costs versus the benefits of node
> costs versus transaction fees?
>
> It's a political assessment. Full nodes are the ultimate arbiters of
> consensus. When a contentious change is suggested, only the full nodes have
> the power to either accept or reject this contentious change. If home users
> are not running their own full nodes, then home users have to trust and
> rely on other, more powerful nodes to represent them. Of course, the more
> powerful nodes, simply by nature of having more power, are going to have
> different opinions and objectives from the users. And it's impossible for
> 5000 nodes to properly represent the views of 5,000,000 users. Users
> running full nodes is important to prevent political hijacking of the
> Bitcoin protocol. Running a full node yourself is the only way to guarantee
> (in the absence of trust - which Bitcoin is all about eliminating trust)
> that changes you are opposed to are not introduced into the network.
>
> > Disk space is not the largest cost, either today or in the future.
> Without historical checkpointing in some fashion, bandwidth costs are more
> than 2 orders of magnitude higher cost than every other cost for full
> listening nodes.
>
> This statement is not true for home users; it is true for datacenter
> nodes. For home users, 200 GB of bandwidth and 500 GB of bandwidth largely
> have the exact same cost. I pay a fixed amount of money for my internet,
> and if I use 500 GB the cost is identical to if I use 200 GB. So long as
> bandwidth is kept under my home bandwidth cap, bandwidth for home nodes is
> _free_.
>
> Similarly, disk space may only be $2/TB in bulk, but as a home user I have
> a $1000 computer with 500 GB of total storage, so 100 GB seems
> (psychologically) to cost a lot closer to $200 than to $2. And if I go out
> and buy an extra drive to support Bitcoin, it's going to cost about $50 no
> matter what drive I pick, because that's just how much you have to spend to
> get a drive. The fact that I get an extra 900 GB that I'm not using is
> irrelevant - I spent $50 explicitly so I could run a bitcoin node.
>
> The financials of home nodes follow a completely different math than the
> costs you are citing by quoting datacenter prices.
>
> > I don't know how to evaluate the impacts of RAM or CPU usage, or
> consequently electricity usage for a node yet. I'm open to quantifying any
> of those if there's a method, but it seems absurd that RAM could even
> become a significant factor given the abundance of cheap RAM nowadays with
> few programs needing it.
>
> Many home machines only have 4GB of RAM. (I am acutely aware of this
> because my own software consumes about 3.5GB of RAM, which means all of our
> users stuck at 4 GB cannot use my software and Chrome at the same time).
> Bitcoin Core 0.14 uses more than 1 GB of RAM. This I think is not really a
> problem for most people, but it becomes a problem if the amount of RAM
> required grows enough that they can't have all of their programs open at
> the same time.
> 1GB I think is really the limit you'd want to have before you'd start
> seeing users choose not to run nodes simply because they'd rather have 300
> tabs open instead.
>
> CPU usage I think is pretty minimal. Your node is pretty busy during IBD
> which is annoying but tolerable. And during normal usage a user isn't even
> going to notice. Same for electricity. They aren't going to notice at the
> end of the month if their electricity bill is a dollar higher because of
> Bitcoin.
>
> > The consequence of your logic that holds node operational costs down is
> that transaction fees for users go up, adoption slows as various use cases
> become impractical, price growth suffers, and altcoins that choose lower
> fees over node cost concerns will exhibit competitive growth against
> Bitcoin's cryptocurrency market share. Even if you are right, that's still
> a tradeoff worth thoroughly investigating from every angle; the
> consequences could be just as dire for Bitcoin in 10 years as they would be
> if we made ourselves vulnerable.
>
> This is very much worth considering. If transaction fees are so high that
> there is no use case at all for people unwilling to buy extra hardware for
> Bitcoin (a dedicated node or whatever), then there is no longer a reason to
> worry about these people as users. However, I think the fees would have to
> get in the $50 range for that to start to be the case. When talking about
> emergency funds - that is, $10k+ that you keep in case your government
> defaults, hyperinflates, seizes citizen assets, etc. etc. (situations that
> many Bitcoin users today have to legitimately worry about), then you are
> going to be making a few transactions per year at most, and the cost of
> fees on a home node may be $150 / yr, while the cost of dedicated hardware
> might be $150/yr ($600 box amortized over 4 years). We are two orders of
> magnitude away from this type of fee pressure, so I think it continues to
> make sense to be considering the home nodes as the target that we want to
> hit.
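To make the break-even math just quoted concrete, a small sketch using
the numbers from the paragraph above (all of them illustrative):

    tx_per_year = 3        # "a few transactions per year at most"
    fee_per_tx = 50.0      # fee level where casual users start dropping off
    box_cost = 600.0       # one-off cost of dedicated hardware
    box_lifetime = 4       # years of amortization

    fees_per_year = tx_per_year * fee_per_tx       # $150/yr in fees
    hardware_per_year = box_cost / box_lifetime    # $150/yr in hardware
    print(f"fees ${fees_per_year:.0f}/yr vs hardware ${hardware_per_year:.0f}/yr")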
>
> > What about periodically committing the entire UTXO set to a special
> checkpoint block which becomes the new de facto Genesis block?
>
> This should be discussed in another thread but I don't think I'm alone in
> saying that I think this could actually be done in a secure / safe /
> valuable way if you did it correctly. It would reduce bandwidth pressure on
> archive nodes, reduce disk pressure on full nodes, and imo make for a more
> efficient network overall.
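For what it's worth, a minimal sketch of what such a commitment could
look like follows. This is an illustration only, not Bitcoin's actual
consensus code, and the UTXO entry format here is invented:

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def utxo_commitment(utxos) -> bytes:
        """Merkle root over canonically sorted, serialized UTXO entries."""
        leaves = [sha256d(f"{txid}:{vout}:{amount}".encode())
                  for txid, vout, amount in sorted(utxos)]
        if not leaves:
            return sha256d(b"")
        while len(leaves) > 1:
            if len(leaves) % 2:            # odd level: duplicate last leaf,
                leaves.append(leaves[-1])  # as Bitcoin's tx merkle tree does
            leaves = [sha256d(a + b) for a, b in zip(leaves[::2], leaves[1::2])]
        return leaves[0]

    # A checkpoint block would embed this root; a new node syncs the UTXO
    # set, checks it against the root, and skips replaying the history
    # before the checkpoint.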
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 22:08 ` Jared Lee Richardson
@ 2017-03-30 7:11 ` Luv Khemani
2017-03-30 17:16 ` Jared Lee Richardson
0 siblings, 1 reply; 81+ messages in thread
From: Luv Khemani @ 2017-03-30 7:11 UTC (permalink / raw)
To: David Vorick, Jared Lee Richardson, Bitcoin Protocol Discussion
>> If home users are not running their own full nodes, then home users have to trust and rely on other, more powerful nodes to represent them. Of course, the more powerful nodes, simply by nature of having more power, are going to have different opinions and objectives from the users.
>I think you're conflating mining with node operation here. Node users' only power is to block the propagation of certain things. Since miners also have a node endpoint, they can cut the node users out of the equation by linking with each other directly - something they already do out of practicality for propagation. Node users do not have the power to arbitrate consensus; that is why we have blocks and PoW.
You are only looking at technical aspects and missing the political aspect.
Node users decide what a Bitcoin is. It matters not how much hash power is behind an inflationary supply fork; full nodes protect the user from a change to any property of Bitcoin which they do not agree with. The ability to retain this power for users is of prime importance and is arguably what gives Bitcoin most of its value. Any increase in the cost to run a full node is an increase in the cost of maintaining monetary sovereignty. The ability for a user to run a node is what keeps the miners honest and prevents them from rewriting any of Bitcoin's rules.
If it's still difficult to grasp the above paragraph, ask yourself the following questions:
- What makes Bitcoin uncensorable
- What gives confidence that the 21 million limit will be upheld
- What makes transactions irreversible
- If hashpower were king, as you make it out to be, why haven't the miners making up a majority of hashrate who want bigger blocks been able to change the blocksize?
The market is not storing tens of billions of dollars in Bitcoin, despite all its risks, because it is useful for everyday transactions; that is a solved problem in every part of the world (cash/Visa/etc.).
Having said that, I fully empathise with your view that increasing transaction fees might allow competitors to gain market share for low-value use cases. By all means, we should look into ways of solving the problem. But all these debates around blocksize are a total waste of time. Even if we fork to 2 MB, 5 MB, or 10 MB, it is irrelevant in the larger picture; transaction capacity will still be too low for global usage in the medium-long term. The additional capacity from blocksize increases is a linear improvement with very large systemic costs, compared with the userbase and usage which are growing exponentially. Lightning potentially offers a couple of orders of magnitude of scaling and will make blocksize a non-issue for years to come. Even if it fails to live up to the hype, you should not discount the market innovating solutions when there is money to be made.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-30 7:11 ` Luv Khemani
@ 2017-03-30 17:16 ` Jared Lee Richardson
2017-03-31 4:21 ` Luv Khemani
0 siblings, 1 reply; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-30 17:16 UTC (permalink / raw)
To: Luv Khemani; +Cc: Bitcoin Protocol Discussion
> You are only looking at technical aspects and missing the political
aspect.
Nodes don't do politics. People do, and politics is a lot larger with a
lot more moving parts than just node operation.
> full nodes protect the user from the change of any properties of Bitcoin
which they do not agree with.
Full nodes protect from nothing if the chain they attempt to use is
nonfunctional.
> The ability to retain this power for users is of prime importance and is
arguably what gives Bitcoin most of it's value
> Any increase in the cost to run a full node is an increase in cost to
maintain monetary sovereignty
This power is far more complicated than just nodes. You're implying that
node operation == political participation. Node operation is only a very
small part of the grand picture of the bitcoin balance of power.
> The ability for a user to run a node is what keeps the miners honest and
prevents them from rewriting any of Bitcoin's rules.
No, it isn't. Nodes disagreeing with miners is necessary but not
sufficient to prevent that. Nodes can't utilize a nonfunctional chain, nor
can they utilize a coin with no exchanges.
> What makes Bitcoin uncensorable
Only two things - 1. Node propagation being strong enough that a target
node can't be surrounded by attacker nodes (or so that attacker nodes can't
segment honest nodes), and 2. Miners being distributed in enough countries
and locations to avoid any single outside attacker group from having enough
leverage to prevent transaction inclusion, and miners also having enough
incentives (philosophical or economic) to refuse to collude towards
transaction exclusion.
Being able to run a node yourself has no real effect on either of the two.
Either we have enough nodes that an attacker can't segment the network or
we don't.
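As a toy model of that surround-the-target risk (assuming uniformly
random peer selection and Bitcoin Core's default of 8 outbound
connections; real eclipse attacks are smarter than random, so treat this
as a floor):

    def eclipse_probability(hostile_fraction: float, outbound: int = 8) -> float:
        """P(all outbound peers are hostile) under uniform random selection."""
        return hostile_fraction ** outbound

    for f in (0.5, 0.8, 0.9):
        print(f"{f:.0%} hostile -> P(eclipsed) = {eclipse_probability(f):.3f}")
    # Even at 80% hostile nodes, all 8 peers are hostile only ~17% of the time.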
> What gives confidence that the 21 million limit will be upheld
What you're describing would result in a fork war. The opposition to this
would be widespread, and preventing such an attempt relies upon mutual
destruction. If users refused to get on board, exchanges would follow
users. If miners refused to get on board, the attempt would be equally
dead in the water. It would require a majority of users, businesses and
miners to change the limit; doing so without an overwhelming majority (90%
at least) would still result in a contentious fork that punished both
sides (in price, confidence, adoption, and possibly chain or node attacks)
for refusing to agree.
Nodes have absolutely no say in the matter if they can't segment the
network, and even if they could their impact could be repaired. Users !=
Nodes.
> What makes transactions irreversible
Err, this makes me worry that you don't understand how blockchains work...
This is because miners are severely punished for attempting to mine on
anything but the longest chain. Nodes have absolutely no say in the
matter; they always follow the longest chain unless a hardfork was
applied. If the hardfork has overwhelming consensus, e.g. stopping a 51%
attack, then the attack would be handled. If the hardfork did not have
overwhelming consensus, it would result in another fork war requiring users,
businesses, and miners to actively decide which to support and how, and
once again would involve mutual destruction on both forks.
Nodes don't decide any of these things. Nodes follow the longest chain,
and have no practical choices in the matter. Users not running nodes
doesn't diminish their power; mutual destruction comes from the market
forces on the exchanges, and they couldn't give a rat's ass whether you run a
node or not.
> The market is not storing tens of billions of dollars in Bitcoin, despite
all its risks, because it is useful for everyday transactions; that is a
solved problem in every part of the world (cash/Visa/etc.).
This is just the "bitcoin is gold" argument. Bitcoin is not gold. For
someone not already a believer, Bitcoin is a risky, speculative investment
into a promising future technology, whereas gold is a stable physical asset
with 4,000 years of acceptance history that has the same value in nearly
every city on the planet. Bitcoin is difficult to purchase and difficult
to find someone to exchange for goods or services. Literally the only
reason we have tens of billions of dollars of value is speculation,
which includes nearly all Bitcoin users/holders and almost all businesses
and miners. While Bitcoin borrows useful features from gold, it has more
possible uses, including uses that were never possible before Bitcoin
existed, and we believe that gives it huge potential.
The ability of other systems to do transactions, like Visa or cash, comes
with the limitations of those systems. Bitcoin was designed to break those
limitations and STILL provide the ability to do transactions. We might all
agree Bitcoin isn't going to ever solve the microtransaction problem, at
least not on-chain, but saying Bitcoin doesn't need utility is just
foolish. Gold doesn't need utility, gold has 4,000 years of history. We
don't.
> Even if we fork to 2 MB, 5 MB, or 10 MB, it is irrelevant in the larger
picture; transaction capacity will still be too low for global usage in the
medium-long term.
Which is why it needs to be a formula or a continuous process, not a single
number.
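One hypothetical shape for such a formula, in the spirit of BIP101's
doubling schedule (the parameters here are invented for illustration,
not a proposal):

    BASE_YEAR = 2017
    BASE_LIMIT_MB = 1.0
    DOUBLING_PERIOD_YEARS = 2.0

    def block_size_limit_mb(year: float) -> float:
        """A cap that grows on a fixed schedule instead of via repeated forks."""
        return BASE_LIMIT_MB * 2 ** ((year - BASE_YEAR) / DOUBLING_PERIOD_YEARS)

    for y in (2017, 2020, 2024):
        print(y, f"{block_size_limit_mb(y):.1f} MB")
    # 2017: 1.0 MB, 2020: 2.8 MB, 2024: 11.3 MB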
> Even if it fails to live up to the hype, you should not discount the
market innovating solutions when there is money to be made.
That's like saying it would be better to do nothing so someone else solves
our problem for us than it would be for us to do what we can to solve it
ourselves. Someone else solving our problem may very well be Ethereum, and
"solving it for us" is pulling Bitcoin investments, users and nodes away
into Ethereum.
> The additional capacity from blocksize increases are linear improvements
with very large systemic costs compared with the userbase and usage which
is growing exponentially.
The capacity increases do not have to be linear. The increases in utility
are linear with blocksize increases, but so are the costs. There's no
reason those blocksize increases can't be tied to or related to usage
increases, so long as the concerns about having too few nodes (or too few
fees) for security are handled.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-30 17:16 ` Jared Lee Richardson
@ 2017-03-31 4:21 ` Luv Khemani
2017-03-31 5:28 ` Jared Lee Richardson
0 siblings, 1 reply; 81+ messages in thread
From: Luv Khemani @ 2017-03-31 4:21 UTC (permalink / raw)
To: Jared Lee Richardson; +Cc: Bitcoin Protocol Discussion
> Nodes don't do politics. People do, and politics is a lot larger with a lot more moving parts than just node operation.
Node operation is making a stand on what money you will accept.
E.g. your local store will only accept US dollars and not Japanese yen. Without being able to run a node, you have no way to independently determine what you are receiving; you could be paid Zimbabwe dollars and wouldn't know any better.
> Full nodes protect from nothing if the chain they attempt to use is nonfunctional.
This is highly subjective.
Just because it is nonfunctional to you does not mean it is nonfunctional to existing users.
> This power is far more complicated than just nodes.
I never implied otherwise.
> You're implying that node operation == political participation.
Of course it is. Try paying for my goods using BU/Ethereum/Dash/etc., or a Bitcoin forked with inflation: you will not get any goods regardless of how much hashrate those coins have.
> Miners being distributed in enough countries and locations to avoid any single outside attacker group from having enough leverage to prevent transaction inclusion, and miners also having enough incentives(philosophical or economic) to refuse to collude towards transaction exclusion.
It's good that you see the importance of this. You should also take into consideration the number of independent mining entities it takes to achieve 51% hashrate. It will be of little use to have thousands of independent miners/pools if 3 large pools make up 51% of hash rate and collude to attack the network.
> If users refused to get on board, exchanges would follow users. If miners refused to get on board, the attempt would be equally dead in the water. It would require a majority of users, businesses and miners to change the limit;
> Nodes have absolutely no say in the matter if they can't segment the network, and even if they could their impact could be repaired. Users != Nodes.
Nodes define which network they want to follow. Without a node, you don't even get to decide which segment you are on. Either miners decide (for SPV wallets) or your wallet's server decides (it runs the node). You have no control without a node of your own.
>> What makes transactions irreversible
>Nodes have absolutely no say in the matter, they always follow the longest chain unless a hardfork was applied.
My bad here; hashpower decides ordering. This is the sole reason we have mining: to order transactions.
> Mutual destruction comes from the market forces on the exchanges, and they could give a rats ass whether you run a node or not.
Ability to run a node and validate rules => Confidence in currency => Higher demand => Higher exchange rate
I would not be holding any Bitcoins if it was unfeasible for me to run a node and instead had to trust some 3rd party that the currency was not being inflated/censored. Bitcoin has value because of its trustless properties. Otherwise, there is no difference between cryptocurrencies and fiat.
> Literally the only reason we have 10s of billions of dollars of value is because speculation, which includes nearly all Bitcoin users/holders and almost all businesses and miners. While Bitcoin borrows useful features from gold, it has more possible uses, including uses that were never possible before Bitcoin existed, and we believe that gives it huge potential.
> The ability of other systems to do transactions, like visa or cash, come with the limitations of those systems. Bitcoin was designed to break those limitations and STILL provide the ability to do transactions. We might all agree Bitcoin isn't going to ever solve the microtransaction problem, at least not on-chain, but saying Bitcoin doesn't need utility is just foolish. Gold doesn't need utility, gold has 4,000 years of history. We don't.
> There's no reason those blocksize increases can't be tied to or related to usage increases
Blocksize has nothing to do with utility, only the cost of on-chain transactions.
OTOH, increasing the blocksize has a lot to do with introducing the very limitations that Visa/cash have.
Why would you risk destroying Bitcoin's primary proposition (removing the limitations of cash/Visa) for an insignificant capacity increase?
> That's like saying it would be better to do nothing so someone else solves our problem for us than it would be for us to do what we can to solve it ourselves. Someone else solving our problem may very well be Ethereum, and "solving it for us" is pulling Bitcoin investments, users and nodes away into Ethereum.
Who says nothing is being done? Segwit, Lightning, pre-loaded wallets like Coinbase are all solutions.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 4:21 ` Luv Khemani
@ 2017-03-31 5:28 ` Jared Lee Richardson
2017-03-31 8:19 ` Luv Khemani
0 siblings, 1 reply; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-31 5:28 UTC (permalink / raw)
To: Luv Khemani; +Cc: Bitcoin Protocol Discussion
> Node operation is making a stand on what money you will accept.
> E.g. your local store will only accept US dollars and not Japanese yen. Without being able to run a node, you have no way to independently determine what you are receiving; you could be paid Zimbabwe dollars and wouldn't know any better.
Err, no, that's what happens when you double click the Ethereum icon
instead of the Bitcoin icon. Just because you run "Bitcoin SPV"
instead of "Bitcoin Verify Everyone's Else's Crap" doesn't mean you're
somehow going to get Ethereum payments. Your verification is just
different and the risks that come along with that are different. It's
only confusing if you make it confusing.
> This is highly subjective.
> Just because it is nonfunctional to you, does not mean it is nonfunctional to existing users.
If every block that is mined for them is deliberately empty because of
an attacker, that's nonfunctional. You can use whatever semantics you
want to describe that situation, but that's clearly what I meant.
> Of course it is. Try paying for my goods using BU/Ethereum/Dash/etc., or a Bitcoin forked with inflation: you will not get any goods regardless of how much hashrate those coins have.
As above, if someone operates Bitcoin in SPV mode they are not
magically at risk of getting Dashcoins. They send and receive
Bitcoins just like everyone else running Bitcoin software. There's no
confusion about it and it doesn't have anything to do with hashrates
of anyone. It is just a different method of verification with
corresponding different costs of use and different security
guarantees.
> You should also take into consideration the number of independent mining entities it takes to achieve 51% hashrate. It will be of little use to have thousands on independent miners/pools if 3 large pools make up 51% of hash rate and collude to attack the network.
We're already fucked: China has 61% of the hashrate, and the only thing
we can do about it is to wait for the Chinese electrical
supply/demand/transmission system to rebalance itself. Aside from
that little problem, mining distributions and pool distributions don't
significantly factor into the blocksize debate. The debate is a
choice between nodes paying more to allow greater growth and adoption,
or nodes constraining adoption in favor of debatable security
concerns.
> Nodes define which network they want to follow.
Do you really consider it choosing when there is only a single option?
And even if there was, the software would choose it for you? If it
is a Bitcoin client, it follows the Bitcoin blockchain. There is no
BU blockchain at the moment, and Bitcoin software can't possibly start
following Ethereum blockchains.
> Without a Node, you don't even get to decide which segement you are on.
Yes you do, if the segment options are known (and if they aren't,
running a node likely won't help you choose either; it will choose by
accident and you'll have no idea). You would get to choose whose
verifications to request/check from, and thus choose which segment to
follow, if any.
> Ability to run a node and validate rules => Confidence in currency
This is only true for the small minority that actually need that added
level of security & confidence, and the paranoid people who believe
they need it when they really, really don't. Some guy on reddit
spouted off the same garbage logic, but was much quieter when I got
him to admit that he didn't actually read the code of Bitcoin that he
downloaded and ran, nor any of the code of the updates. He trusted.
*gasp*
The average person doesn't need that level of security. They do
however need to be able to use it, which they cannot right now if you
consider "average" to be at least 50% of the population.
> Higher demand => Higher exchange rate
Demand comes from usage and adoption. Neither can happen without us being
willing to give other people the option to trade security features for
lower costs.
> I would not be holding any Bitcoins if it was unfeasible for me to run a Node and instead had to trust some 3rd party that the currency was not being inflated/censored.
Great. Somehow I think Bitcoin's future involves very few more people
like you, and very many people who aren't paranoid and just want to be
able to send and receive Bitcoins.
> Bitcoin has value because of its trustless properties. Otherwise, there is no difference between cryptocurrencies and fiat.
No, it has its value for many, many reasons; trustless properties are
only one of them. What I'm suggesting doesn't involve giving up
trustless properties except in your head (And not even then, since you
would almost certainly be able to afford to run a node for the rest of
your life if Bitcoin's value continues to rise as it has in the past).
And even if it did, there's a lot more reasons that a lot more people
than you would use it.
> Blocksize has nothing to do with utility, only cost of on-chain transactions.
Are you really this dense? If the cost of on-chain transactions
rises, numerous use cases get killed off. At $0.10 per tx you
probably won't buy in-game digital microtransactions with it, but you
might buy coffee with it. At $1 per tx, you probably won't buy coffee
with it but you might pay your ISP bill with it. At $20 per tx, you
probably won't pay your ISP bill with it, but you might pay your rent.
At $300 per tx you probably won't use it for anything, but a company
purchasing goods from China might. At $4000 per tx that company
probably won't use it, but international funds settlement for
million-dollar transactions might use it.
At each fee step along the way you kill off hundreds or thousands of
possible uses of Bitcoin. Killing those off means fewer people will
use it, so they will use something else instead.
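The same tiers can be written down as a crude viability check; the
amounts and the 2% fee tolerance are illustrative assumptions, not data:

    use_cases = [("microtransaction", 0.50), ("coffee", 5.0),
                 ("ISP bill", 60.0), ("rent", 1500.0),
                 ("import order", 50_000.0)]
    FEE_TOLERANCE = 0.02   # assume users tolerate roughly 2% in fees

    for fee in (0.10, 1.0, 20.0, 300.0):
        viable = [name for name, amount in use_cases
                  if fee <= amount * FEE_TOLERANCE]
        print(f"${fee:>6.2f}/tx -> {', '.join(viable) or 'nothing'}")
    # Each fee step strikes another tier of uses off the list.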
> OTOH, increasing the blocksize has a lot to do with introducing the very limitations that Visa/cash have.
No they don't. They only give people the option to pay more for
higher security or to accept lower security and use Bitcoin anyway.
> Why would you risk destroying Bitcoin's primary proposition (removing the limitations of cash/Visa) for an insignificant capacity increase?
So far as anyone has presented actual numbers, there's no reason to
believe larger blocksizes endanger anything of the sort, even if I
agreed that that was Bitcoin's primary proposition. And I don't
believe we need just an insignificant capacity increase; I used to think
that way, though. I strongly believe we can handle massive increases
by adjusting our expectations of what nodes do, how they operate, how
they justify the price of their services, and what levels of security
are available and appropriate for various levels of transaction risk.
> Who says nothing is being done? Segwit, Lightning, pre-loaded wallets like Coinbase are all solutions.
Segwit is a minuscule blocksize increase and wholly inadequate
compared to the scope of the problem. Good for other reasons, though.
Lightning is not Bitcoin; it is something different (but not bad IMO)
that has different features and different consequences. I guess you
think it is ok that if your lightning node goes offline at the wrong
time, you could lose funds you never transacted with in the first
place? No? Oh, then you must be ok with lightning hub centralization,
as well as paying a monthly fee to lightning hubs for their
services. Wait, that sounds an awful lot like Visa....
I have no idea what you're referring to with the pre-loaded wallets point.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 5:28 ` Jared Lee Richardson
@ 2017-03-31 8:19 ` Luv Khemani
2017-03-31 15:59 ` Jared Lee Richardson
0 siblings, 1 reply; 81+ messages in thread
From: Luv Khemani @ 2017-03-31 8:19 UTC (permalink / raw)
To: Jared Lee Richardson; +Cc: Bitcoin Protocol Discussion
> Err, no, that's what happens when you double click the Ethereum icon
instead of the Bitcoin icon. Just because you run "Bitcoin SPV"
instead of "Bitcoin Verify Everyone's Else's Crap" doesn't mean you're
somehow going to get Ethereum payments. Your verification is just
different and the risks that come along with that are different. It's
only confusing if you make it confusing.
This is false. You could get coins which don't even exist as long as a miner mined the invalid transaction.
Peter Todd has demonstrated this on mainstream SPV wallets,
https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris
The only reason SPV wallets do not accept ethereum payments is because of transaction/block format differences.
SPV wallets have no clue what a valid bitcoin is; they trust miners fully.
In the event of a hardfork, SPV wallets will blindly follow the longest chain.
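A minimal sketch of what an SPV client actually checks makes the point.
This is a simplified illustration, not any wallet's real code: a Merkle
branch proves a txid is in a block header's tree, and nothing more.

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def spv_accepts(txid: bytes, branch, merkle_root: bytes) -> bool:
        """branch: [(sibling_hash, sibling_is_left), ...] up to the root."""
        h = txid
        for sibling, sibling_is_left in branch:
            h = sha256d(sibling + h) if sibling_is_left else sha256d(h + sibling)
        return h == merkle_root

    # No inputs, amounts, or signatures are inspected. If a miner mines an
    # invalid transaction, this check passes anyway: it proves inclusion,
    # not validity.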
> If every block that is mined for them is deliberately empty because of
an attacker, that's nonfunctional. You can use whatever semantics you
want to describe that situation, but that's clearly what I meant.
Not sure why you are bringing this up; this is not the case today, nor does it have anything to do with blocksize.
> As above, if someone operates Bitcoin in SPV mode they are not
magically at risk of getting Dashcoins. They send and receive
Bitcoins just like everyone else running Bitcoin software. There's no
confusion about it and it doesn't have anything to do with hashrates
of anyone.
As mentioned earlier, you are at risk of receiving made-up money.
SPV has everything to do with hashrate; it trusts hashrate fully.
Crafting a bitcoin transaction paying you money that I do not have is not difficult; as long as a miner mines a block with it, your SPV wallet will accept it.
> The debate is a choice between nodes paying more to allow greater growth and adoption,
or nodes constraining adoption in favor of debatable security
concerns.
Onchain transactions are not the only way to use Bitcoin the currency.
Trades you do on an exchange are not onchain, yet transacted with Bitcoin.
> And even if there was, the software would choose it for you?
People choose the software, not the other way round.
> Yes you do, if the segment options are known (and if they aren't,
running a node likely won't help you choose either, it will choose by
accident and you'll have no idea). You would get to choose whose
verifications to request/check from, and thus choose which segment to
follow, if any.
SPV wallets do not decide; they follow the longest chain.
Centralised/Server based wallets follow the server they are connecting to.
Full Nodes do not depend on a 3rd party to decide if the money received is valid.
> Are you really this dense? If the cost of on-chain transactions
rises, numerous use cases get killed off. At $0.10 per tx you
probably won't buy in-game digital microtransactions with it, but you
might buy coffee with it. At $1 per tx, you probably won't buy coffee
with it but you might pay your ISP bill with it. At $20 per tx, you
probably won't pay your ISP bill with it, but you might pay your rent.
At $300 per tx you probably won't use it for anything, but a company
purchasing goods from China might. At $4000 per tx that company
probably won't use it, but international funds settlement for
million-dollar transactions might use it.
> At each fee step along the way you kill off hundreds or thousands of
possible uses of Bitcoin. Killing those off means fewer people will
use it, so they will use something else instead.
No need to get personal.
As mentioned earlier, all these low-value transactions can happen offchain.
None of the use cases will be killed off. We have sub-dollar trades happening on exchanges offchain.
> The average person doesn't need that level of security.
Precisely why they do not need to be on-chain.
It is clear to me that you have not yet grasped Bitcoin's security model, especially the role full nodes play in it.
I'd suggest you do some more reading up and thinking about it.
Do thought experiments and take it to the extreme where nobody runs a node: what can miners do now which they could not do before?
Why don't exchanges run SPV nodes?
Further correspondence will not be fruitful until you grasp this.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 8:19 ` Luv Khemani
@ 2017-03-31 15:59 ` Jared Lee Richardson
2017-03-31 16:14 ` David Vorick
0 siblings, 1 reply; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-31 15:59 UTC (permalink / raw)
To: Luv Khemani; +Cc: Bitcoin Protocol Discussion
> Peter Todd has demonstrated this on mainstream SPV wallets,
> https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris
Correct me if I'm wrong, but none of that is possible if the client
software were Electrum-like and used two independent sources for verification.
No?
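Roughly, something like this is what I have in mind (the source objects
are hypothetical stand-ins, not a real wallet API):

    def confirmed_by_quorum(txid: str, sources, min_agree: int = 2) -> bool:
        """Accept a tx only when enough independent verifiers agree on it."""
        votes = sum(1 for src in sources if src.is_confirmed(txid))
        return votes >= min_agree

    # e.g. sources = [ElectrumServer("a.example"), ElectrumServer("b.example")]
    # The residual trust assumption is that the chosen sources don't
    # collude; independence is what this buys, not full validation.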
> Do thought experiments and take it to the extremes where nobody runs a node, what can miners do now which they could not do before?
This and the next point are just reductio ad absurdum, since no one is
suggesting anything of the sort. Even in that situation, I can't think
of anything miners could do if clients used more than one independent
source for verification, as with the Electrum question above.
> Why don't exchanges run SPV nodes?
No one is suggesting anything like this. The cost of running a node
that could handle 300% of the 2015 worldwide nonbitcoin transaction
volume today would be a rounding error for most exchanges even if
prices didn't rise.
On Fri, Mar 31, 2017 at 1:19 AM, Luv Khemani <luvb@hotmail.com> wrote:
>> Err, no, that's what happens when you double click the Ethereum icon
>
> instead of the Bitcoin icon. Just because you run "Bitcoin SPV"
> instead of "Bitcoin Verify Everyone's Else's Crap" doesn't mean you're
> somehow going to get Ethereum payments. Your verification is just
> different and the risks that come along with that are different. It's
> only confusing if you make it confusing.
>
> This is false. You could get coins which don't even exist as long as a
> miner mined the invalid transaction.
> Peter Todd has demonstrated this on mainstream SPV wallets,
> https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris
>
> The only reason SPV wallets do not accept ethereum payments is because of
> transaction/block format differences.
> SPV wallets have no clue what is a valid bitcoin, they trust miners fully.
>
> In the event of a hardfork, SPV wallets will blindly follow the longest
> chain.
>
>> If every block that is mined for them is deliberately empty because of
> an attacker, that's nonfunctional. You can use whatever semantics you
> want to describe that situation, but that's clearly what I meant.
>
> Not sure why you are bringing this up, this is not the case today nor does
> it have anything to do with blocksize.
>
>> As above, if someone operates Bitcoin in SPV mode they are not
> magically at risk of getting Dashcoins. They send and receive
> Bitcoins just like everyone else running Bitcoin software. There's no
> confusion about it and it doesn't have anything to do with hashrates
> of anyone.
>
> As mentioned earlier, you are at risk of receiving made up money.
> SPV has everything to do with hashrate, it trusts hashrate fully.
> Crafting a bitcoin transaction paying you money that i do not have is not
> difficult, as long as a miner mines a block with it, your SPV wallet will
> accept it.
>
>> The debate is a choice between nodes paying more to allow greater growth
>> and adoption, or nodes constraining adoption in favor of debatable
>> security concerns.
>
> Onchain transactions are not the only way to use Bitcoin the currency.
> Trades you do on an exchange are not onchain, yet are transacted in Bitcoin.
>
>> And even if there was, the software would choose it for you?
>
> People choose the software, not the other way round.
>
>> Yes you do, if the segment options are known (and if they aren't,
>> running a node likely won't help you choose either, it will choose by
>> accident and you'll have no idea). You would get to choose whose
>> verifications to request/check from, and thus choose which segment to
>> follow, if any.
>
> SPV wallets do not decide; they follow the longest chain.
> Centralised/server-based wallets follow the server they are connecting to.
> Full nodes do not depend on a 3rd party to decide if the money received is
> valid.
>
>> Are you really this dense? If the cost of on-chain transactions
>> rises, numerous use cases get killed off. At $0.10 per tx you
>> probably won't buy in-game digital microtransactions with it, but you
>> might buy coffee with it. At $1 per tx, you probably won't buy coffee
>> with it but you might pay your ISP bill with it. At $20 per tx, you
>> probably won't pay your ISP bill with it, but you might pay your rent.
>> At $300 per tx you probably won't use it for anything, but a company
>> purchasing goods from China might. At $4000 per tx that company
>> probably won't use it, but international funds settlement for
>> million-dollar transactions might use it.
>> At each fee step along the way you kill off hundreds or thousands of
>> possible uses of Bitcoin. Killing those off means fewer people will
>> use it, so they will use something else instead.
>
> No need to get personal.
> As mentioned earlier, all these low-value transactions can happen offchain.
> None of the use cases will be killed off. We have sub-dollar trades
> happening on exchanges offchain.
>
>> The average person doesn't need that level of security.
>
> Precisely why they do not need to be on-chain.
>
> It is clear to me that you have not yet grasped Bitcoin's security model,
> especially the role full nodes play in it.
> I'd suggest you do some more reading up and thinking about it.
> Do thought experiments and take it to the extremes where nobody runs a node,
> what can miners do now which they could not do before?
> Why don't exchanges run SPV nodes?
>
> Further correspondence will not be fruitful until you grasp this.
>
>
>
> On Thu, Mar 30, 2017 at 9:21 PM, Luv Khemani <luvb@hotmail.com> wrote:
>>
>> > Nodes don't do politics. People do, and politics is a lot larger with a
>> > lot more moving parts than just node operation.
>>
>>
>> Node operation is making a stand on what money you will accept.
>>
>> I.e., your local store will only accept US dollars and not Japanese yen.
>> Without being able to run a node, you have no way to independently determine
>> what you are receiving; you could be paid Zimbabwe dollars and wouldn't know
>> any better.
>>
>>
>> > Full nodes protect from nothing if the chain they attempt to use is
>> > nonfunctional.
>>
>> This is highly subjective.
>> Just because it is nonfunctional to you does not mean it is nonfunctional
>> to existing users.
>>
>> > This power is far more complicated than just nodes.
>>
>> I never implied otherwise.
>>
>> > You're implying that node operation == political participation.
>>
>> Of course it is. Try paying for my goods using BU/Ethereum/Dash/etc., or a
>> Bitcoin forked with inflation; you will not get any goods regardless of how
>> much hashrate those coins have.
>>
>> > Miners being distributed in enough countries and locations to avoid any
>> > single outside attacker group from having enough leverage to prevent
>> > transaction inclusion, and miners also having enough
>> > incentives(philosophical or economic) to refuse to collude towards
>> > transaction exclusion.
>>
>> It's good that you see the importance of this. You should also take into
>> consideration the number of independent mining entities it takes to achieve
>> 51% hashrate. It will be of little use to have thousands of independent
>> miners/pools if 3 large pools make up 51% of hash rate and collude to
>> attack the network.
>>
>> > If users refused to get on board, exchanges would follow users. If
>> > miners refused to get on board, the attempt would be equally dead in the
>> > water. It would require a majority of users, businesses and miners to
>> > change the limit;
>>
>> > Nodes have absolutely no say in the matter if they can't segment the
>> > network, and even if they could their impact could be repaired. Users !=
>> > Nodes.
>>
>> Nodes define which network they want to follow. Without a node, you don't
>> even get to decide which segment you are on. Either miners decide (for SPV
>> wallets) or your wallet's server decides (for server-based wallets). You
>> have no control without a node.
>>
>> >> What makes transactions irreversible
>> >Nodes have absolutely no say in the matter, they always follow the
>> > longest chain unless a hardfork was applied.
>>
>> My bad here: hashpower decides ordering. This is the sole reason we have
>> mining - to order transactions.
>>
>> > Mutual destruction comes from the market forces on the exchanges, and
>> > they could give a rats ass whether you run a node or not.
>>
>> Ability to run a node and validate rules => Confidence in currency =>
>> Higher demand => Higher exchange rate
>>
>> I would not be holding any Bitcoins if it were infeasible for me to run a
>> node and instead had to trust some 3rd party that the currency was not being
>> inflated/censored. Bitcoin has value because of its trustless properties.
>> Otherwise, there is no difference between cryptocurrencies and fiat.
>>
>> > Literally the only reason we have 10s of billions of dollars of value is
>> > because of speculation, which includes nearly all Bitcoin users/holders and
>> > almost all businesses and miners. While Bitcoin borrows useful features
>> > from gold, it has more possible uses, including uses that were never
>> > possible before Bitcoin existed, and we believe that gives it huge
>> > potential.
>> > The ability of other systems to do transactions, like visa or cash, come
>> > with the limitations of those systems. Bitcoin was designed to break those
>> > limitations and STILL provide the ability to do transactions. We might all
>> > agree Bitcoin isn't going to ever solve the microtransaction problem, at
>> > least not on-chain, but saying Bitcoin doesn't need utility is just foolish.
>> > Gold doesn't need utility, gold has 4,000 years of history. We don't.
>> > There's no reason those blocksize increases can't be tied to or related
>> > to usage increases
>>
>> Blocksize has nothing to do with utility, only the cost of on-chain
>> transactions.
>> OTOH, increasing the blocksize has a lot to do with introducing the very
>> limitations that Visa/cash have.
>> Why would you risk destroying Bitcoin's primary proposition (removing the
>> limitations of cash/Visa) for an insignificant capacity increase?
>>
>> > That's like saying it would be better to do nothing so someone else
>> > solves our problem for us than it would be for us to do what we can to solve
>> > it ourselves. Someone else solving our problem may very well be Ethereum,
>> > and "solving it for us" is pulling Bitcoin investments, users and nodes away
>> > into Ethereum.
>>
>> Who says nothing is being done? Segwit, Lightning, and pre-loaded wallets like
>> Coinbase are all solutions.
>>
>>
>>
>>
>> On Thu, Mar 30, 2017 at 12:11 AM, Luv Khemani <luvb@hotmail.com> wrote:
>>>
>>>
>>> >> If home users are not running their own full nodes, then home users
>>> >> have to trust and rely on other, more powerful nodes to represent them. Of
>>> >> course, the more powerful nodes, simply by nature of having more power, are
>>> >> going to have different opinions and objectives from the users.
>>>
>>> >I think you're conflating mining with node operation here. Node users'
>>> > only power is to block the propagation of certain things. Since miners also
>>> > have a node endpoint, they can cut the node users out of the equation by
>>> > linking with each other directly - something they already do out of
>>> > practicality for propagation. Node users do not have the power to arbitrate
>>> > consensus, that is why we have blocks and PoW.
>>>
>>> You are only looking at technical aspects and missing the political
>>> aspect.
>>>
>>> Node users decide what a Bitcoin is. It matters not how much hash power
>>> is behind an inflationary supply fork; full nodes protect the user from
>>> the change of any properties of Bitcoin which they do not agree with. The
>>> ability to retain this power for users is of prime importance and is
>>> arguably what gives Bitcoin most of its value. Any increase in the cost to
>>> run a full node is an increase in the cost to maintain monetary sovereignty.
>>> The ability for a user to run a node is what keeps the miners honest and
>>> prevents them from rewriting any of Bitcoin's rules.
>>>
>>> If it's still difficult to grasp the above paragraph, ask yourself the
>>> following questions:
>>> - What makes Bitcoin uncensorable?
>>> - What gives confidence that the 21 million limit will be upheld?
>>> - What makes transactions irreversible?
>>> - If hashpower were king as you make it out to be, why haven't miners making
>>> up majority hashrate who want bigger blocks been able to change the blocksize?
>>>
>>> The market is not storing 10s of billions of dollars in Bitcoin, despite
>>> all its risks, because it is useful for everyday transactions; that is a
>>> solved problem in every part of the world (cash/Visa/etc.).
>>>
>>> Having said that, I fully empathise with your view that increasing
>>> transaction fees might allow competitors to gain marketshare for low-value
>>> use cases. By all means, we should look into ways of solving the problem.
>>> But all these debates around blocksize are a total waste of time. Even if we
>>> fork to 2MB, 5MB, or 10MB, it is irrelevant in the larger picture: transaction
>>> capacity will still be too low for global usage in the medium-long term. The
>>> additional capacity from blocksize increases is a linear improvement with
>>> very large systemic costs, compared with the userbase and usage, which are
>>> growing exponentially. Lightning potentially offers a couple of orders of
>>> magnitude of scaling and will make blocksize a non-issue for years to come.
>>> Even if it fails to live up to the hype, you should not discount the market
>>> innovating solutions when there is money to be made.
>>>
>>
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 15:59 ` Jared Lee Richardson
@ 2017-03-31 16:14 ` David Vorick
2017-03-31 16:46 ` Jared Lee Richardson
0 siblings, 1 reply; 81+ messages in thread
From: David Vorick @ 2017-03-31 16:14 UTC (permalink / raw)
To: Jared Lee Richardson; +Cc: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 1207 bytes --]
> No one is suggesting anything like this. The cost of running a node
> that could handle 300% of the 2015 worldwide nonbitcoin transaction
> volume today would be a rounding error for most exchanges even if
> prices didn't rise.
Then explain why PayPal has multiple datacenters. And why Visa has multiple
datacenters. And why the banking systems have multiple datacenters each.
I'm guessing it's because you need that much juice to run a global payment
system at the transaction volumes that they run at.
Unless you have professional experience working directly with transaction
processors handling tens of millions of financial transactions per day, I
think we can fully discount your assessment that it would be a rounding
error in the budget of a major exchange or Bitcoin processor to handle that
much load. And even if it was, it wouldn't matter because it's extremely
important to Bitcoin's security that its everyday users are able to and
are actively running full nodes.
I'm not going to take the time to refute everything you've been saying but
I will say that most of your comments have demonstrated a similar level of
ignorance as the one above.
This whole thread has been absurdly low quality.
[-- Attachment #2: Type: text/html, Size: 1766 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 16:14 ` David Vorick
@ 2017-03-31 16:46 ` Jared Lee Richardson
2017-03-31 18:23 ` David Vorick
0 siblings, 1 reply; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-31 16:46 UTC (permalink / raw)
To: David Vorick; +Cc: Bitcoin Dev
I guess I should caveat: a rounding error is a bit of an exaggeration -
mostly because I previously assumed that it would take 14 years for
the network to reach such a level, something I didn't say and that you
might not grant me.
I don't know why paypal has multiple datacenters, but I'm guessing it
probably has a lot more to do with everything else they do -
interface, support, tax compliance, replication, redundancy - than it
does with the raw numbers of transaction volumes.
What I do know is the math, though. Worldwide tx volume was 426,000,000,000
in 2015. Assuming a tx size of ~500 bytes, that's 669 terabytes of data
per year. At a hard drive cost of $0.021 per GB, that's $36k a year or
so, and it declines ~14% a year.
The bandwidth is the really big cost. You are right that if this
hypothetical node also had to support historical syncing, the numbers
would probably be unmanageable. But that can be solved with a simple
checkpointing system for the vast majority of users, and nodes could
solve it by not supporting syncing / reducing peer count. With a peer
count of 25 I measured ~75 GB/month with today's blocksize cap. That
works out to roughly 10 relays (sends + receives) per transaction
assuming all blocks were full, which was a pretty close approximation.
The bandwidth for our 426 billion transactions per year works out
to 942 Mbit/s. That's 310 terabytes per month of bandwidth - at
today's high-volume price of $0.05 per GB, that's $18,500 a month or
$222,000 a year. Plus the $36k for storage per year, that brings it to
~$250k per year. Not a rounding error, but within the rough costs of
running an exchange - a team of 5 developers works out to ~$400-600k a
year, and the cost of compliance with EU and U.S. entities (including
lawyers) runs upwards of a million dollars a year. Then there's the
support department, probably ~$100-200k a year.
The reason I said a rounding error was that I assumed that it would
take until 2032 to reach that volume of transactions (assuming
+80%/year growth, which is our 4-year and 2-year historical average
tx/s growth). If hard drive prices decline by 14% per year, that cost
becomes $3,900 a year, and if bandwidth prices decline by 14% a year,
that cost becomes $1,800 a month ($21,600 a year). Against a
multi-million dollar budget, even 3x that isn't a large concern,
though not, as I stated, a rounding error. My bad.
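(To make the arithmetic above reproducible: a back-of-envelope sketch in
Python, using the assumptions stated in this thread - ~500-byte
transactions, ~10 relays each, $0.021/GB storage, $0.05/GB bandwidth,
14%/year price declines. The outputs depend heavily on these assumptions
and come out somewhat below the figures quoted above, so treat everything
here as order-of-magnitude only:)

    # Back-of-envelope node cost model; all inputs are assumptions.
    TX_PER_YEAR     = 426e9   # 2015 worldwide non-Bitcoin tx volume
    TX_SIZE         = 500     # bytes per transaction
    RELAYS_PER_TX   = 10      # sends + receives, per the measurement above
    STORAGE_PRICE   = 0.021   # $/GB
    BANDWIDTH_PRICE = 0.05    # $/GB, high-volume rate
    DECLINE         = 0.14    # assumed yearly price decline

    chain_growth_gb = TX_PER_YEAR * TX_SIZE / 1e9           # GB/year
    bw_gb_per_month = chain_growth_gb * RELAYS_PER_TX / 12  # GB/month

    print(f"chain growth: {chain_growth_gb / 1e3:,.0f} TB/year, "
          f"storage ${chain_growth_gb * STORAGE_PRICE:,.0f}/year")
    print(f"bandwidth: {bw_gb_per_month / 1e3:,.0f} TB/month, "
          f"${bw_gb_per_month * BANDWIDTH_PRICE:,.0f}/month")

    # The same costs n years out, if prices keep declining 14%/year:
    for years in (5, 10, 15):
        factor = (1 - DECLINE) ** years
        print(f"{years:>2} years out: storage "
              f"${chain_growth_gb * STORAGE_PRICE * factor:,.0f}/year, "
              f"bandwidth ${bw_gb_per_month * BANDWIDTH_PRICE * factor:,.0f}/month")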
I didn't approximate for CPU usage, as I don't have any good estimates
for it, and I don't have significant reason to believe that it is a
higher cost than bandwidth, which seems to be the controlling cost
compared to adding CPUs.
> I'm not going to take the time to refute everything you've been saying
Care to respond to the math?
> This whole thread has been absurdly low quality.
Well, we agree on something at least.
On Fri, Mar 31, 2017 at 9:14 AM, David Vorick <david.vorick@gmail.com> wrote:
> No one is suggesting anything like this. The cost of running a node
> that could handle 300% of the 2015 worldwide nonbitcoin transaction
> volume today would be a rounding error for most exchanges even if
> prices didn't rise.
>
>
> Then explain why PayPal has multiple datacenters. And why Visa has multiple
> datacenters. And why the banking systems have multiple datacenters each.
>
> I'm guessing it's because you need that much juice to run a global payment
> system at the transaction volumes that they run at.
>
> Unless you have professional experience working directly with transaction
> processors handling tens of millions of financial transactions per day, I
> think we can fully discount your assessment that it would be a rounding
> error in the budget of a major exchange or Bitcoin processor to handle that
> much load. And even if it was, it wouldn't matter because it's extremely
> important to Bitcoin's security that it's everyday users are able to and are
> actively running full nodes.
>
> I'm not going to take the time to refute everything you've been saying but I
> will say that most of your comments have demonstrated a similar level of
> ignorance as the one above.
>
> This whole thread has been absurdly low quality.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 16:46 ` Jared Lee Richardson
@ 2017-03-31 18:23 ` David Vorick
2017-03-31 18:58 ` Eric Voskuil
2017-04-01 6:15 ` Jared Lee Richardson
0 siblings, 2 replies; 81+ messages in thread
From: David Vorick @ 2017-03-31 18:23 UTC (permalink / raw)
To: Jared Lee Richardson; +Cc: Bitcoin Dev
[-- Attachment #1: Type: text/plain, Size: 3429 bytes --]
Sure, your math is pretty much entirely irrelevant because scaling systems
to massive sizes doesn't work that way.
At 400B transactions per year we're looking at block sizes of 4.5 GB, and a
database size of petabytes. How much RAM do you need to process blocks like
that? Can you fit that much RAM into a single machine? Okay, you can't fit
that much RAM into a single machine. So you have to rework the code to
operate on a computer cluster.
Already we've hit a significant problem. You aren't going to rewrite
Bitcoin to do block validation on a computer cluster overnight. Further,
are storage costs consistent when we're talking about setting up clusters?
Are bandwidth costs consistent when we're talking about setting up
clusters? Are RAM and CPU costs consistent when we're talking about setting
up clusters? No, they aren't. Clusters are a lot more expensive to set up
per-resource because they need to talk to each other and synchronize with
each other and you have a LOT more parts, so you have to build in
redundancies that aren't necessary in non-clusters.
Also worth pointing out that peak transaction volumes are typically 20-50x
the size of typical transaction volumes. So your cluster isn't going to
need to plan to handle 15k transactions per second, you're really looking
at more like 200k or even 500k transactions per second to handle
peak-volumes. And if it can't, you're still going to see full blocks.
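(For concreteness, the arithmetic behind those peak numbers, under the
20-50x assumption above:)

    # Average vs. peak throughput at 400B transactions/year.
    SECONDS_PER_YEAR = 365 * 24 * 3600
    avg_tps = 400e9 / SECONDS_PER_YEAR        # ~12,700 tx/s on average
    for peak_factor in (20, 50):
        print(f"{peak_factor}x peak: {avg_tps * peak_factor:,.0f} tx/s")
    # -> roughly 250,000 to 630,000 tx/s, i.e. the 200k-500k range
    #    above as an order of magnitude.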
You'd need a handful of experts just to maintain such a thing. Disks are
going to be failing every day when you are storing multiple PB, so you
can't just count a flat cost of $20/TB and expect that to work. You're
going to need redundancy and tolerance so that you don't lose the system
when a few of your hard drives all fail within minutes of each other. And
you need a way to rebuild everything without taking the system offline.
This isn't even my area of expertise. I'm sure there are a dozen other
significant issues that one of the Visa architects could tell you about
when dealing with mission-critical data at this scale.
--------
Massive systems operate very differently and are much more costly per-unit
than tiny systems. Once we grow the blocksize large enough that a single
computer can't do all the processing all by itself we get into a world of
much harder, much more expensive scaling problems. Especially because we're
talking about a distributed system where the nodes don't even trust each
other. And transaction processing is largely non-parallel. You have to
check each transaction against each other transaction to make sure that
they aren't double-spending each other. This takes synchronization and
prevents 500 CPUs from all crunching the data concurrently. You have to be
a lot more clever than that to get things working and consistent.
When talking about scalability problems, you should ask yourself what other
systems in the world operate at the scales you are talking about. None of
them have cost structures in the 6 digit range, and I'd bet (without
actually knowing) that none of them have cost structures in the 7 digit
range either. In fact I know from working in a related industry that the
cost structures for the datacenters (plus the support engineers, plus the
software management, etc.) that do airline ticket processing are above $5
million per year for the larger airlines. Visa is probably even more
expensive than that (though I can only speculate).
[-- Attachment #2: Type: text/html, Size: 3691 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 18:23 ` David Vorick
@ 2017-03-31 18:58 ` Eric Voskuil
2017-04-01 6:15 ` Jared Lee Richardson
1 sibling, 0 replies; 81+ messages in thread
From: Eric Voskuil @ 2017-03-31 18:58 UTC (permalink / raw)
To: David Vorick, Bitcoin Protocol Discussion
As an independently verifiable, decentralized store of public information, the Bitcoin block tree and transaction DAG do have an advantage over systems such as Visa. The store is just a cache. There is no need to implement reliability in storage or in communications. It is sufficient to be able to detect invalidity. And even if a subset of nodes fail to do so, the system overall compensates.
As such the architecture of a Bitcoin node and its supporting hardware requirements are very different from an unverifiable, centralized store of private information. So in that sense the comparison below is not entirely fair. Many, if not most, of the high costs of a Visa datacenter do not apply because of Bitcoin's information architecture.
However, if the system cannot remain decentralized these architectural advantages will not hold. At that point your considerations below are entirely valid. Once the information is centralized it necessarily becomes private and fragile. Conversely, once it becomes private it necessarily becomes centralized and fragile. This fragility requires significant investment by the central authority to maintain.
So as has been said, we can have decentralization and its benefit of trustlessness or we can have Visa. We already have Visa. Making another is entirely uninteresting.
e
> On Mar 31, 2017, at 11:23 AM, David Vorick via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Sure, your math is pretty much entirely irrelevant because scaling systems to massive sizes doesn't work that way.
>
> At 400B transactions per year we're looking at block sizes of 4.5 GB, and a database size of petabytes. How much RAM do you need to process blocks like that? Can you fit that much RAM into a single machine? Okay, you can't fit that much RAM into a single machine. So you have to rework the code to operate on a computer cluster.
>
> Already we've hit a significant problem. You aren't going to rewrite Bitcoin to do block validation on a computer cluster overnight. Further, are storage costs consistent when we're talking about setting up clusters? Are bandwidth costs consistent when we're talking about setting up clusters? Are RAM and CPU costs consistent when we're talking about setting up clusters? No, they aren't. Clusters are a lot more expensive to set up per-resource because they need to talk to eachother and synchronize with eachother and you have a LOT more parts, so you have to build in redundancies that aren't necessary in non-clusters.
>
> Also worth pointing out that peak transaction volumes are typically 20-50x the size of typical transaction volumes. So your cluster isn't going to need to plan to handle 15k transactions per second, you're really looking at more like 200k or even 500k transactions per second to handle peak-volumes. And if it can't, you're still going to see full blocks.
>
> You'd need a handful of experts just to maintain such a thing. Disks are going to be failing every day when you are storing multiple PB, so you can't just count a flat cost of $20/TB and expect that to work. You're going to need redundancy and tolerance so that you don't lose the system when a few of your hard drives all fail within minutes of eachother. And you need a way to rebuild everything without taking the system offline.
>
> This isn't even my area of expertise. I'm sure there are a dozen other significant issues that one of the Visa architects could tell you about when dealing with mission-critical data at this scale.
>
> --------
>
> Massive systems operate very differently and are much more costly per-unit than tiny systems. Once we grow the blocksize large enough that a single computer can't do all the processing all by itself we get into a world of much harder, much more expensive scaling problems. Especially because we're talking about a distributed system where the nodes don't even trust each other. And transaction processing is largely non-parallel. You have to check each transaction against each other transaction to make sure that they aren't double spending eachother. This takes synchronization and prevents 500 CPUs from all crunching the data concurrently. You have to be a lot more clever than that to get things working and consistent.
>
> When talking about scalability problems, you should ask yourself what other systems in the world operate at the scales you are talking about. None of them have cost structures in the 6 digit range, and I'd bet (without actually knowing) that none of them have cost structures in the 7 digit range either. In fact I know from working in a related industry that the cost structures for the datacenters (plus the support engineers, plus the software management, etc.) that do airline ticket processing are above $5 million per year for the larger airlines. Visa is probably even more expensive than that (though I can only speculate).
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 18:23 ` David Vorick
2017-03-31 18:58 ` Eric Voskuil
@ 2017-04-01 6:15 ` Jared Lee Richardson
1 sibling, 0 replies; 81+ messages in thread
From: Jared Lee Richardson @ 2017-04-01 6:15 UTC (permalink / raw)
To: David Vorick; +Cc: Bitcoin Dev
> So your cluster isn't going to need to plan to handle 15k transactions per second, you're really looking at more like 200k or even 500k transactions per second to handle peak-volumes. And if it can't, you're still going to see full blocks.
When I first began to enter the blocksize-debate slime-trap that we
have all found ourselves in, I had the same line of reasoning that you
have now. Blockchains are clearly an incredibly inefficient and poorly
designed system for massive scales of transactions, as I'm sure you
would agree. Therefore, I felt it was an important point for people to
accept this reality now and stop trying to use blockchains for things
they weren't good for, as much for their own good as anyone else's. I
backed this by calculating some miner fee requirements as well as the
very issue you raised. A few people argued with me rationally, and
gradually I was forced to look at a different question: granted that we
cannot fit all desired transactions on a blockchain, how many CAN we
effectively fit?
It took another month before I actually changed my mind. What changed
it was when I tried to make estimations, assuming all the reasonable
trends I could find held, about future transaction fees and future
node costs. Did they need to go up exponentially? How fast, and what
would we be dealing with in the future? After seeing the huge
divergence in node operational costs without size increases ($3 vs
$3000 after some number of years stands out in my memory), I tried to
adjust various things, until I started comparing the costs in BTC
terms. I eventually realized that comparing node operational costs in
BTC per unit time versus transaction costs in dollars revealed that
node operational costs per unit time could decrease without causing
transaction fees to rise. The transaction fees still had to hit $1 or
$2, sometimes $4, to remain a viable protection, but otherwise they
could become stable around those points while node operational costs
per unit time still decreased.
None of that may mean anything to you, so you may ignore it all if you
like, but my point in all of that is that I once used similar logic,
but any disagreements we may have does not mean I magically think as
you implied above. Some people think blockchains should fit any
transaction of any size, and I'm sure you and I would both agree
that's ridiculous. Blocks will nearly always be full in the future.
There is no need to attempt to handle unusual volume increases - the
fee markets will balance it, and the use-cases that can barely afford
to fit on-chain will simply have to wait for a while. The question is
not "can we handle all traffic," it is "how many use-cases can we
enable without sacrificing our most essential features?" (And for
that matter, what is each essential feature, and what is it worth?)
There are many distinct cut-off points that we could consider. On the
extreme end, Raspberry Pis and toasters are out. Data-bound mobile
phones are out for at least the next few years, if ever. Currently the
concern is around home-user bandwidth limits. The next limit after
that may be the CPU, memory, or bandwidth of a single top-end PC. The
limit after that may be the highest data speeds that large, remote
Bitcoin mining facilities are able to afford, though after fees rise
and a few years pass, they may remove that limit for us. Then the next
limit might be the maximum amount of memory available within a single
datacenter server.
At each limit we consider, we have a choice between killing off a
number of on-chain use cases and the cost of losing the nodes that
can't reach the next limit effectively. I have my inclinations about
where the limits would be best set, but the reality is I don't know the
numbers on the vulnerability and security risks associated with various
node distributions. I'd really like to, because if I did I could begin
evaluating the costs on each side.
> How much RAM do you need to process blocks like that?
That's a good question, and one I don't have a good handle on. How
does Bitcoin's current memory usage scale? It can't be based on the
UTXO set, which is 1.7 GB while my node is only using ~450 MB of RAM.
How does RAM consumption increase with a large block versus small ones?
Are there trade-offs that can be made to write to disk if RAM usage
grew too large?
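(The last trade-off asked about does exist, for what it's worth: the node's
database/UTXO cache is a bounded in-memory cache in front of the on-disk
chainstate, sized by the dbcache setting, so memory use is a tuning knob
rather than a hard function of UTXO set size. A bitcoin.conf sketch:)

    # bitcoin.conf sketch: cap the database/UTXO cache (in MiB).
    # Entries beyond the cap are flushed to the on-disk chainstate,
    # trading memory for disk I/O.
    dbcache=400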
If that proved to be a prohibitively large growth number, that becomes
a worthwhile number to consider for scaling. Of note, you can
currently buy EC2 instances with 256 GB of RAM easily, and in 14 years
that will be even higher.
> So you have to rework the code to operate on a computer cluster.
I believe this is exactly the kind of discussion we should be having
14 years before it might be needed. Also, this wouldn't be unique -
some software I have used in the past (graphite metric collection)
came pre-packaged with the ability to scale out to multiple machines,
split loads, and replicate the data, and so could future node software.
> Further, are storage costs consistent when we're talking about setting up clusters? Are bandwidth costs consistent when we're talking about setting up clusters? Are RAM and CPU costs consistent when we're talking about setting up clusters? No, they aren't.
Bandwidth costs are, as intra-datacenter bandwidth is generally free.
The other ones warrant evaluation for the distant future. I would
expect that CPU resources are the first thing we would have to change -
13 thousand transactions per second is an awful lot to process. I'm
not intimately familiar with the processing - isn't it largely
signature verification of the transaction itself, plus a minority of
time spent checking and updating UTXO values, and finally a small
number of hashes to check block validity? If signature verification
were controlling, a specialized ASIC chip (on a plug-in card) might be
able to verify signatures hundreds of times faster, and it could even
be on a cheap 130nm chipset like the first ASIC miners rushed to
market. Point being, there are options, and it may warrant looking
into once the risks of node reduction are weighed.
> You'd need a handful of experts just to maintain such a thing.
I don't think this is as big a deal as it first might seem. The
software would already come written to be spanned onto multiple
machines - it just needs to be configured. For the specific question
at hand, the exchange would already have IT staff and datacenter
capacity/operations for their other operations. In the more general
case, the numbers involved don't work out to extreme concerns at that
level. The highest CPU usage I've observed on my nodes is less than
5%, and less than 1% for the time I just checked, handling ~3 tx/s. So,
being conservative, if it hits 100% on one core at 60-120 tx/s, that
works out to ~25-50 8-core machines. But again, that's a two-year-old
laptop CPU, and we're talking about 14 years into the future. Even if
it were 25 machines, that's the kind of operation a one- or two-man IT
team just runs on the side with their extra duties. It isn't enough
to hire a full-time tech for.
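(Making that estimate explicit - the only input is the assumption above
that one core saturates somewhere around 60-120 tx/s:)

    # Machines needed to validate 13k tx/s, per the estimate above.
    TARGET_TPS = 13_000
    CORES_PER_MACHINE = 8
    for tps_per_core in (60, 120):
        cores = TARGET_TPS / tps_per_core
        print(f"at {tps_per_core} tx/s per core: {cores:.0f} cores, "
              f"{cores / CORES_PER_MACHINE:.0f} machines")
    # -> about 14-27 eight-core machines before headroom; the ~25-50
    #    figure above presumably includes margin for peaks.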
> Disks are going to be failing every day when you are storing multiple PB, so you can't just count a flat cost of $20/TB and expect that to work.
I mean, that's literally what Amazon does for you with S3, which was
even cheaper than the EBS datastore pricing I was looking at. So...
Even disregarding that, RAID operation was a solved thing more than 10
years ago, and hard drives 14 years out would be roughly ~110 TB for a
$240 hard drive at a 14%/year growth rate. In 2034 the blockchain
would fit on 10 of those. Not exactly a "failing every day" kind of
problem. By 2040, you'd need *gasp* 22 $240 hard drives. That is a
lot, but not a lot like you're implying.
> And you need a way to rebuild everything without taking the system offline.
That depends heavily upon the tradeoffs the businesses can make. I
don't think node operation at an exchange is a five-nines uptime
operation. They could probably tolerate three nines. The worst that
happens is that people's withdrawals and deposits are occasionally
delayed slightly. It won't shut down trading.
> I'm sure there are a dozen other significant issues that one of the Visa architects could tell you about when dealing with mission-critical data at this scale.
Visa stores the only copy. They can't afford to lose the data.
Bitcoin isn't like that, as others pointed out. And for most
businesses, if their node must be rebooted periodically, it isn't a
huge deal.
> Once we grow the blocksize large enough that a single computer can't do all the processing all by itself we get into a world of much harder, much more expensive scaling problems.
Ok, when is that point, and what is the tradeoff in terms of nodes?
Just because something is hard doesn't mean it isn't worth doing.
That's just a defeatist attitude. How big can we get, for what
tradeoffs, and what do we need to do to get there?
> You have to check each transaction against each other transaction to make sure that they aren't double spending eachother.
This is really not that hard. Have a central database, and update/check
the UTXO values in block-sized increments. If a UTXO has already been
spent in this increment, the block is invalid. If the database somehow
got too big (not going to happen at these scales, but if it did), it
could be sharded trivially on the transaction information. These are
solved problems; the free database software that's available is pretty
powerful.
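(A toy sketch of that approach - a UTXO set as a plain key-value store,
checked and updated one block at a time. Real validation also covers
scripts, amounts, and maturity; this shows only the double-spend
bookkeeping being discussed:)

    def apply_block(utxo: set, block_txs) -> bool:
        """utxo: set of (txid, vout) outpoints. block_txs: iterable of
        (txid, spent_outpoints, n_outputs). Returns False and leaves
        the set untouched on any double spend or missing coin."""
        spent, created = set(), set()
        for txid, inputs, n_outputs in block_txs:
            for outpoint in inputs:
                if outpoint in spent:
                    return False          # spent twice within this block
                if outpoint not in utxo and outpoint not in created:
                    return False          # spends a coin that doesn't exist
                spent.add(outpoint)
            created.update((txid, i) for i in range(n_outputs))
        utxo -= spent
        utxo |= (created - spent)         # keep outputs not already re-spent
        return True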
> You have to be a lot more clever than that to get things working and consistent.
NO, NOT CLEVER. WE CAN'T DO THAT.
Sorry, I had to. :)
> None of them have cost structures in the 6 digit range, and I'd bet (without actually knowing) that none of them have cost structures in the 7 digit range either.
I know of and have experience working with systems that handled
several orders of magnitude more data than this. None of the issues
brought up above are problems that someone hasn't solved. Transaction
commitments to databases? Data consistency across multiple workers?
Data storage measured in exabytes? Data storage and updates
approaching hundreds of millions of datapoints per second? These
things are done every single day at numerous companies.
On Fri, Mar 31, 2017 at 11:23 AM, David Vorick <david.vorick@gmail.com> wrote:
> Sure, your math is pretty much entirely irrelevant because scaling systems
> to massive sizes doesn't work that way.
>
> At 400B transactions per year we're looking at block sizes of 4.5 GB, and a
> database size of petabytes. How much RAM do you need to process blocks like
> that? Can you fit that much RAM into a single machine? Okay, you can't fit
> that much RAM into a single machine. So you have to rework the code to
> operate on a computer cluster.
>
> Already we've hit a significant problem. You aren't going to rewrite Bitcoin
> to do block validation on a computer cluster overnight. Further, are storage
> costs consistent when we're talking about setting up clusters? Are bandwidth
> costs consistent when we're talking about setting up clusters? Are RAM and
> CPU costs consistent when we're talking about setting up clusters? No, they
> aren't. Clusters are a lot more expensive to set up per-resource because
> they need to talk to eachother and synchronize with eachother and you have a
> LOT more parts, so you have to build in redundancies that aren't necessary
> in non-clusters.
>
> Also worth pointing out that peak transaction volumes are typically 20-50x
> the size of typical transaction volumes. So your cluster isn't going to need
> to plan to handle 15k transactions per second, you're really looking at more
> like 200k or even 500k transactions per second to handle peak-volumes. And
> if it can't, you're still going to see full blocks.
>
> You'd need a handful of experts just to maintain such a thing. Disks are
> going to be failing every day when you are storing multiple PB, so you can't
> just count a flat cost of $20/TB and expect that to work. You're going to
> need redundancy and tolerance so that you don't lose the system when a few
> of your hard drives all fail within minutes of eachother. And you need a way
> to rebuild everything without taking the system offline.
>
> This isn't even my area of expertise. I'm sure there are a dozen other
> significant issues that one of the Visa architects could tell you about when
> dealing with mission-critical data at this scale.
>
> --------
>
> Massive systems operate very differently and are much more costly per-unit
> than tiny systems. Once we grow the blocksize large enough that a single
> computer can't do all the processing all by itself we get into a world of
> much harder, much more expensive scaling problems. Especially because we're
> talking about a distributed system where the nodes don't even trust each
> other. And transaction processing is largely non-parallel. You have to check
> each transaction against each other transaction to make sure that they
> aren't double spending eachother. This takes synchronization and prevents
> 500 CPUs from all crunching the data concurrently. You have to be a lot more
> clever than that to get things working and consistent.
>
> When talking about scalability problems, you should ask yourself what other
> systems in the world operate at the scales you are talking about. None of
> them have cost structures in the 6 digit range, and I'd bet (without
> actually knowing) that none of them have cost structures in the 7 digit
> range either. In fact I know from working in a related industry that the
> cost structures for the datacenters (plus the support engineers, plus the
> software management, etc.) that do airline ticket processing are above $5
> million per year for the larger airlines. Visa is probably even more
> expensive than that (though I can only speculate).
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
@ 2017-03-29 19:50 Raystonn .
2017-03-30 10:34 ` Tom Zander
0 siblings, 1 reply; 81+ messages in thread
From: Raystonn . @ 2017-03-29 19:50 UTC (permalink / raw)
To: Jared Lee Richardson, bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 4056 bytes --]
Low node costs are a good goal only for nodes whose operators can afford the transactions. Nobody is going to run a node for a network they do not use for their own transactions. If transactions have fees that prohibit use for most economic activity, node count will drop until nodes are generally run only by those who settle large amounts. That is very centralizing.
Raystonn
On 29 Mar 2017 12:14 p.m., Jared Lee Richardson via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org> wrote:
In order for any blocksize increase to be agreed upon, more consensus is needed. The proportion of users believing no blocksize increases are needed is larger than the hardfork threshold Core wants (95% consensus). The proportion of users believing in microtransactions for all is also larger than 5%, and both of those groups may be larger than 10% respectively. I don't think either the big-blocks faction or the low-node-costs faction has even a simple majority of support. Getting consensus is going to be a big mess, but it is critical that it is done.
On Wed, Mar 29, 2017 at 12:49 AM, Martin Lízner via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org<mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
If there should be a hard fork, the Core team should author the code. Other dev teams have marginal support among all BTC users.
I'm tending to believe that a HF is a necessary evil now. But let's do it with a conservative approach:
- Fix historical BTC issues, improve code
- Plan the HF activation date well ahead - 12+ months
- Allow increasing the block size on a year-by-year basis as Luke suggested
- Compromise with miners on an initial block size bump (e.g. 2MB)
- SegWit
Martin Lizner
On Tue, Mar 28, 2017 at 6:59 PM, Wang Chun via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org<mailto:bitcoin-dev@lists.linuxfoundation.org>> wrote:
I've proposed this hard fork approach last year in Hong Kong Consensus
but immediately rejected by coredevs at that meeting, after more than
one year it seems that lots of people haven't heard of it. So I would
post this here again for comment.
The basic idea is, as many of us agree, hard fork is risky and should
be well prepared. We need a long time to deploy it.
Despite spam tx on the network, the block capacity is approaching its
limit, and we must think ahead. Shall we code a patch right now, to
remove the block size limit of 1MB, but not activate it until far in
the future. I would propose to remove the 1MB limit at the next block
halving in spring 2020, only limit the block size to 32MiB which is
the maximum size the current p2p protocol allows. This patch must be
in the immediate next release of Bitcoin Core.
With this patch in core's next release, Bitcoin works just as before,
no fork will ever occur, until spring 2020. But everyone knows there
will be a fork scheduled. Third party services, libraries, wallets and
exchanges will have enough time to prepare for it over the next three
years.
We don't yet have an agreement on how to increase the block size
limit. There have been many proposals over the past years, like
BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
on. These hard fork proposals, with this patch already in Core's
release, they all become soft fork. We'll have enough time to discuss
all these proposals and decide which one to go. Take an example, if we
choose to fork to only 2MB, since 32MiB already scheduled, reduce it
from 32MiB to 2MB will be a soft fork.
Anyway, we must code something right now, before it becomes too late.
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org<mailto:bitcoin-dev@lists.linuxfoundation.org>
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org<mailto:bitcoin-dev@lists.linuxfoundation.org>
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
[-- Attachment #2: Type: text/html, Size: 5556 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-29 19:50 Raystonn .
@ 2017-03-30 10:34 ` Tom Zander
2017-03-30 11:19 ` David Vorick
2017-03-30 11:24 ` Aymeric Vitte
0 siblings, 2 replies; 81+ messages in thread
From: Tom Zander @ 2017-03-30 10:34 UTC (permalink / raw)
To: bitcoin-dev, Raystonn .
On Wednesday, 29 March 2017 21:50:48 CEST Raystonn . via bitcoin-dev wrote:
> Low node costs are a good goal for nodes that handle transactions the node
> operator can afford. Nobody is going to run a node for a network they do
> not use for their own transactions. If transactions have fees that
> prohibit use for most economic activity, that means node count will drop
> until nodes are generally run by those who settle large amounts. That is
> very centralizing.
>
> Raystonn
The idea that people won’t run a node for a network they don’t use for their
own transactions is a very good observation, and a good reason to get
on-chain scaling happening well before lightning hits.
--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-30 10:34 ` Tom Zander
@ 2017-03-30 11:19 ` David Vorick
2017-03-30 21:42 ` Jared Lee Richardson
2017-03-30 11:24 ` Aymeric Vitte
1 sibling, 1 reply; 81+ messages in thread
From: David Vorick @ 2017-03-30 11:19 UTC (permalink / raw)
To: Bitcoin Dev, Tom Zander
[-- Attachment #1: Type: text/plain, Size: 2078 bytes --]
> What we want is a true fee-market where the miner can decide to make a
> block smaller to get people to pay more fees, because if we were to go to
> 16MB blocks in one go, the cost of the miner would go up, but his reward
> based on fees will go down!
> A block so big that 100% of the transactions will always be mined in the
> next block will just cause a large section of people to no longer feel the
> need to pay fees.
> As such I don’t fear the situation where the block size limit goes up a lot
> in one go, because it is not in anyone’s interest to make the actual block
> size follow.
There have been attacks demonstrated where a malicious miner with
sufficient hashrate can leverage large blocks to exacerbate selfish mining.
Adversarial behaviors from miners need to be considered, it's not safe to
simply assume that a miner won't have reasons to attack the network. We
already know that large empty blocks (rather, blocks with fake
transactions) can be leveraged in ways that both damage the network and
increase miner profits.
In general, fear of other currencies passing Bitcoin is unsubstantiated.
Bitcoin has by far the strongest development team, and also is by far the
most decentralized. To the best of my knowledge, Bitcoin is the only
cryptocurrency out there that is both not-dead and also lacks a strong
central leadership.
A coin like ethereum may even be able to pass Bitcoin in market cap. But
that's okay. Ethereum has very different properties and it's not something
I would trust as a tool to provide me with political sovereignty. Ethereum
passing Bitcoin in market cap does not mean that it has proved superior to
Bitcoin. It could just mean that enterprises are really excited about
permissioned blockchains. That's not interesting to me at any market cap.
Bitcoin's core value add is and should continue to be decentralization and
trustlessness. Nobody is remotely close to competing with Bitcoin on those
fronts, and in my mind that's far more important than any of the other
mania anyway.
[-- Attachment #2: Type: text/html, Size: 3964 bytes --]
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-30 11:19 ` David Vorick
@ 2017-03-30 21:42 ` Jared Lee Richardson
0 siblings, 0 replies; 81+ messages in thread
From: Jared Lee Richardson @ 2017-03-30 21:42 UTC (permalink / raw)
To: David Vorick, Bitcoin Protocol Discussion
[-- Attachment #1: Type: text/plain, Size: 8503 bytes --]
> There have been attacks demonstrated where a malicious miner with
> sufficient hashrate can leverage large blocks to exacerbate selfish mining.
Can you give me a link to this? Having done a lot of mining, I really,
really doubt this. I'm assuming the theory relies upon propagation times
and focuses on small miners versus large ones, but that's wrong.
Propagation times don't affect small miners disproportionately, though they
might affect small POOLS disproportionately; that isn't the same thing at
all. No mining operation since at least 2014 has run a full node on each
mining unit - it is incredibly impractical to do so. The units retrieve only
the merkle root hash and other parameters from the stratum server, which is
a very small packet and does not increase with the size of the blocks. If
miners really want to select which transactions to include, some pools offer
options of that sort (or can, I believe), but almost no one does. If they
don't like how their pool picks transactions, they'll use a different pool;
it's that simple.
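(To illustrate why that packet stays small: the shape of a stratum v1
"mining.notify" job, with values abbreviated. The miner gets header fields,
the two coinbase halves, and a merkle branch of log2(n) hashes - on the
order of a kilobyte regardless of how big the block is:)

    # Illustrative stratum v1 job notification (values abbreviated).
    job = {
        "method": "mining.notify",
        "params": [
            "job-1",            # job id
            "00000000...",      # previous block hash
            "01000000...",      # coinb1: coinbase tx prefix
            "...00000000",      # coinb2: coinbase tx suffix
            ["ab...", "cd..."], # merkle branch: ~12 hashes for ~4,000 txs
            "20000000",         # block version
            "1803f5b2",         # nbits (target)
            "5a0fd000",         # ntime
            True,               # clean_jobs
        ],
    }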
If there's some other theory about a miner exploiting higher blocksizes
selfishly, then I'd love to read up on it to understand it. If what
you/others actually meant by that was smaller "pools," that's a much, much
smaller problem. Pools don't earn major profits and generally are at the
mercy of their miners if they make bad choices or can't fix low
performance. For pools, block propagation time was a major, major issue
even before blocks were full, and latency + packet loss between mining
units and the pool is also a big concern. I was seeing occasional block
propagation delays (over a minute) on a fiber connection in 2013/4 due to
minute differences in peering. If a pool can't afford enough bandwidth to
keep propagation times down, they can't be a pool. Bigger blocksizes will
make it so they even more totally-can't-be-a-pool, but they already can't
be a pool, so who cares. Plus, compact blocks should have already solved
nearly all of this problem, as I understand it.
So definitely want to know more if I'm misunderstanding the attack vector.
> We already know that large empty blocks (rather, blocks with fake
> transactions) can be leveraged in ways that both damage the network and
> increase miner profits.
Maybe you mean an attack where other pools get stuck on validation
due to processing issues? This is also a nonissue. The smallest viable
pool has enough difficulties with other, non-hardware-related issues that
buying the largest, beefiest standard processor available with ample RAM
won't even come up on the radar. No one cares about $600 in hardware
versus $1000 in hardware when it takes you 6 weeks to get your peering and
block propagation configuration just right, and another 6 months to convince
miners to substantially use your pool.
If you meant miners and not pools, that's also wrong. Mining hardware
doesn't validate blocks anymore; it hasn't been practical for years. The
units only get the merkle root hash of the valid transaction set. The pool
handles the rest.
> In general, fear of other currencies passing Bitcoin is unsubstantiated.
> Bitcoin has by far the strongest development team, and also is by far the
> most decentralized.
Markets only care a little bit about what your development team is like.
Ethereum has Vitalik, who is an incredibly smart and respectable dude,
while BU absolutely hates the core developers right now. Markets are more
likely to put their faith in a single leader than in Core right now, if
that comparison were really made.
"Most decentralized" is nearly impossible to quantify, and has almost no
value to speculators. Since all of these markets are highly speculative,
they only care about future demand. Future demand relies upon future use.
Unsubstantiated? Ethereum is already 28% of Bitcoin by cap and 24% by
trading volume. Four months ago that was 4%. Their transaction volume also
doubled. What world are you living in?
> A coin like ethereum may even be able to pass Bitcoin in market cap. But
> that's okay. Ethereum has very different properties and it's not something
> I would trust as a tool to provide me with political sovereignty.
Well, great; I guess so long as you're OK with it, we'll just roll with it.
Wait, no. If Bitcoin loses its first-mover network effect, a small cadre
of die-hard libertarians is not going to be able to keep it from becoming
a page in the history books. Die-hard libertarians can barely keep a voice
in the U.S. Congress - neither markets nor day-to-day users particularly
care about the philosophy; they care about what it can do for them.
> Ethereum passing Bitcoin in market cap does not mean that it has proved
> superior to Bitcoin.
The markets have literally told us why Ethereum is shooting up. It's
because the Bitcoin community has fractured around a debate with nearly no
progress toward a solution for the last 3 years, and especially because BU
appears to be strong enough to think it can fork, and the markets know
full well what a contentious fork will do to Bitcoin's near-term future.
> It could just mean that enterprises are really excited about permissioned
> blockchains.
Then it would have happened not when the BU situation imploded, but when
Microsoft announced they were working with Ethereum on things like that.
No one cared about Microsoft's announcement. You don't seriously believe
what you're saying, do you?
> That's not interesting to me at any market cap.
I agree with you, but Bitcoin becoming a page in the history books because
a few die-hard libertarians didn't think price or adoption was important is
a big, big concern, especially when they almost have veto power. Markets
don't care about philosophy; they care about future value. Bitcoin has
value because we think it may be the most useful new innovation in the
future. If we screw that future usefulness up, philosophy gives us no more
value than Friendster has today.
On Thu, Mar 30, 2017 at 4:19 AM, David Vorick via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
> > What we want is a true fee-market where the miner can decide to make a
> > block smaller to get people to pay more fees, because if we were to go
> > to 16MB blocks in one go, the cost of the miner would go up, but his
> > reward based on fees will go down!
> > A block so big that 100% of the transactions will always be mined in the
> > next block will just cause a large section of people to no longer feel
> > the need to pay fees.
>
> > As such I don’t fear the situation where the block size limit goes up a
> > lot in one go, because it is not in anyone’s interest to make the actual
> > block size follow.
>
> There have been attacks demonstrated where a malicious miner with
> sufficient hashrate can leverage large blocks to exacerbate selfish mining.
> Adversarial behaviors from miners need to be considered, it's not safe to
> simply assume that a miner won't have reasons to attack the network. We
> already know that large empty blocks (rather, blocks with fake
> transactions) can be leveraged in ways that both damages the network and
> increases miner profits.
>
> In general, fear of other currencies passing Bitcoin is unsubstantiated.
> Bitcoin has by far the strongest development team, and is also by far the
> most decentralized. To the best of my knowledge, Bitcoin is the only
> cryptocurrency out there that is both not dead and without a strong
> central leadership.
>
> A coin like Ethereum may even be able to pass Bitcoin in market cap. But
> that's okay. Ethereum has very different properties and it's not something
> I would trust as a tool to provide me with political sovereignty. Ethereum
> passing Bitcoin in market cap does not mean that it has proved superior to
> Bitcoin. It could just mean that enterprises are really excited about
> permissioned blockchains. That's not interesting to me at any market cap.
>
> Bitcoin's core value-add is, and should continue to be, decentralization
> and trustlessness. Nobody is remotely close to competing with Bitcoin on
> those fronts, and in my mind that's far more important than any of the
> other mania anyway.
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
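On Vorick's selfish-mining point quoted above, a toy Monte Carlo sketch in
Python may make the mechanism concrete. This is not Vorick's model: the 30%
hashrate share and the orphan-rate values are hypothetical, and it crudely
assumes a well-connected attacker never loses a propagation race, while
honest blocks are orphaned at a rate that grows with block size.

    import random

    def attacker_revenue_share(alpha, honest_orphan_rate,
                               blocks=200_000, seed=7):
        # Toy assumptions: the attacker (hashrate share alpha) is so well
        # connected that its blocks never lose a propagation race, while
        # an honest block is orphaned with probability honest_orphan_rate,
        # which rises as blocks get larger and slower to propagate.
        random.seed(seed)
        attacker_wins = honest_wins = 0
        for _ in range(blocks):
            if random.random() < alpha:
                attacker_wins += 1
            elif random.random() >= honest_orphan_rate:
                honest_wins += 1
        return attacker_wins / (attacker_wins + honest_wins)

    # Higher honest orphan rates (a proxy for larger blocks) push a 30%
    # miner's revenue share past its fair 0.30:
    for rate in (0.0, 0.02, 0.05, 0.10):
        print(rate, round(attacker_revenue_share(0.30, rate), 3))

Even under these deliberately strong assumptions the attacker's revenue
share rises above its hashrate share as blocks get larger and slower to
propagate, which is the kernel of the concern that big blocks amplify
selfish-mining-style strategies.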
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-30 10:34 ` Tom Zander
2017-03-30 11:19 ` David Vorick
@ 2017-03-30 11:24 ` Aymeric Vitte
1 sibling, 0 replies; 81+ messages in thread
From: Aymeric Vitte @ 2017-03-30 11:24 UTC (permalink / raw)
To: Tom Zander, Bitcoin Protocol Discussion
Except if people have some incentive to do it. A simple example: I have
some servers that are doing some work but are not actually very busy. I
can decide to run some nodes on them; this costs me nothing extra (and is
better for the planet than setting up new servers) and I get some rewards.
As an illustration, my servers are mining zcash and running zcash nodes.
This is of course absolutely not profitable, but since it does not disturb
what the servers are primarily intended for and I get some small zecs at
no additional cost, why not do it? Of course we can then consider that
people doing this are, in the end, using the network...
On 30/03/2017 at 12:34, Tom Zander via bitcoin-dev wrote:
> On Wednesday, 29 March 2017 21:50:48 CEST Raystonn . via bitcoin-dev wrote:
>> Low node costs are a good goal for nodes that handle transactions the node
>> operator can afford. Nobody is going to run a node for a network they do
>> not use for their own transactions. If transactions have fees that
>> prohibit use for most economic activity, that means node count will drop
>> until nodes are generally run by those who settle large amounts. That is
>> very centralizing.
>>
>> Raystonn
> The idea that people won’t run a node for a network they don’t use for their
> own transactions is a very good observation and a good reason to get on-
> chain scaling happening well before lightning hits.
>
--
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
@ 2017-03-31 21:23 Rodney Morris
2017-03-31 23:13 ` Eric Voskuil
0 siblings, 1 reply; 81+ messages in thread
From: Rodney Morris @ 2017-03-31 21:23 UTC (permalink / raw)
To: bitcoin-dev
You guessed wrong. Multiple data centres are as much about redundancy,
resiliency, and latency as they are about raw capacity.
As for the cost: data centre space, business-grade communication lines,
and staff are orders of magnitude more expensive than the physical
hardware they support.
I'd like to call you out on your continuing reductio ad absurdum and
slippery-slope arguments. Just because we can't handle 4GB blocks today
doesn't mean we shouldn't aim in that direction, and it doesn't mean we
shouldn't be taking our first, second, and third baby steps in that
direction.
If the obsession with every personal computer being able to run a full
node continues then bitcoin will be consigned to the dustbin of history,
a footnote to the story of the global cryptocurrency that eventually took
over the world.
Thanks
Rodney
Date: Fri, 31 Mar 2017 12:14:42 -0400
From: David Vorick <david.vorick@gmail.com>
To: Jared Lee Richardson <jaredr26@gmail.com>
Cc: Bitcoin Dev <bitcoin-dev@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
Message-ID:
<CAFVRnyqSMVj2Ttc4_5vuk73Z5yRJdxeSodvkdjqsrHbgghcmUQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Then explain why PayPal has multiple datacenters. And why Visa has multiple
datacenters. And why the banking systems have multiple datacenters each.
I'm guessing it's because you need that much juice to run a global payment
system at the transaction volumes they run at.
Unless you have professional experience working directly with transaction
processors handling tens of millions of financial transactions per day, I
think we can fully discount your assessment that it would be a rounding
error in the budget of a major exchange or Bitcoin processor to handle that
much load. And even if it were, it wouldn't matter, because it's extremely
important to Bitcoin's security that its everyday users are able to run,
and are actively running, full nodes.
I'm not going to take the time to refute everything you've been saying, but
I will say that most of your comments have demonstrated a similar level of
ignorance to the one above.
This whole thread has been absurdly low quality.
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 21:23 Rodney Morris
@ 2017-03-31 23:13 ` Eric Voskuil
[not found] ` <CABerxhGeofH4iEonjB1xKOkHcEVJrR+D4QhHSw5cWYsjmW4JpQ@mail.gmail.com>
` (2 more replies)
0 siblings, 3 replies; 81+ messages in thread
From: Eric Voskuil @ 2017-03-31 23:13 UTC (permalink / raw)
To: Rodney Morris, Bitcoin Protocol Discussion
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
> If the obsession with every personal computer being able to run a
> full node continues then bitcoin will be consigned to the dustbin
> of history,
The cause of the block size debate is the failure to understand the
Bitcoin security model. This failure is perfectly exemplified by the
above statement. If a typical personal computer cannot run a node,
there is no security.
e
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
iQEcBAEBCAAGBQJY3uJ8AAoJEDzYwH8LXOFOrBoH/1VdXQObKZ2JPHL387Sd8qT4
zzWt8tKFD+6/uCS8re97h1lZcbwb3EzBOB1J15mJ3fqTOU/rPCitN+JZAMgpw/z9
NGNp4KQDHo3vLiWWOq2GhJzyVAOcDKYLsY8/NrHK91OtABD2XIq9gERwRoZZE4rb
OPSjSAGvDK8cki72O7HpyEKX5WEyHsHNK/JmBDdTjlzkMcNEbBlYMgO24RC6x+UA
8Fh17rOcfGv6amIbmS7mK3EMkkGL83WmsgJKXNl4inI1R8z5hVKRqOFMPxmTDXVc
dEHtw8poHOX1Ld85m0+Tk2S7IdH66PCnhsKL9l6vlH02uAvLNfKxb+291q2g3YU=
=HPCK
-----END PGP SIGNATURE-----
^ permalink raw reply [flat|nested] 81+ messages in thread
[parent not found: <CABerxhGeofH4iEonjB1xKOkHcEVJrR+D4QhHSw5cWYsjmW4JpQ@mail.gmail.com>]
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
[not found] ` <CABerxhGeofH4iEonjB1xKOkHcEVJrR+D4QhHSw5cWYsjmW4JpQ@mail.gmail.com>
@ 2017-04-01 1:41 ` Rodney Morris
0 siblings, 0 replies; 81+ messages in thread
From: Rodney Morris @ 2017-04-01 1:41 UTC (permalink / raw)
To: Eric Voskuil; +Cc: Bitcoin Protocol Discussion
I didn't say typical, I said every. Currently a Raspberry Pi on shitty ADSL
can run a full node. What's wrong with needing a high-end PC and good
connectivity to run a full node?
People that want to, can. People that don't want to, won't, no matter how
low-spec the machine you need.
If nobody uses bitcoin, all the security in the world provides no value.
The value of bitcoin is provided by people using bitcoin, and people will
only use bitcoin if it provides value to them. Security is only one
aspect, and the failure to understand that is what has led to the block
size debate.
Rodney
On 1 Apr 2017 10:12, "Eric Voskuil" <eric@voskuil.org> wrote:
On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
> If the obsession with every personal computer being able to run a
> full node continues then bitcoin will be consigned to the dustbin
> of history,
The cause of the block size debate is the failure to understand the
Bitcoin security model. This failure is perfectly exemplified by the
above statement. If a typical personal computer cannot run a node,
there is no security.
e
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-03-31 23:13 ` Eric Voskuil
[not found] ` <CABerxhGeofH4iEonjB1xKOkHcEVJrR+D4QhHSw5cWYsjmW4JpQ@mail.gmail.com>
@ 2017-04-01 6:18 ` Jared Lee Richardson
2017-04-01 7:41 ` Eric Voskuil
[not found] ` <CAAt2M1_kuCBQWd9dis5UwJX8+XGVPjjiOA54aD74iS2L0cYcTQ@mail.gmail.com>
2 siblings, 1 reply; 81+ messages in thread
From: Jared Lee Richardson @ 2017-04-01 6:18 UTC (permalink / raw)
To: Eric Voskuil, Bitcoin Protocol Discussion
> If a typical personal computer cannot run a node,
> there is no security.
If you can't describe an attack that is made possible when typical
personal computers can't run nodes, this kind of logic has no place in
this discussion.
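One concrete version of the attack usually cited in this dispute (a sketch
under strong simplifying assumptions, not either poster's words): a client
that cannot fully validate falls back to SPV-style checks, which verify
only proof-of-work, so it will follow a rule-breaking chain that full
nodes reject. A minimal Python sketch, using an artificially easy
difficulty target and the then-current 12.5 BTC subsidy as the only
consensus rule:

    import hashlib

    TARGET = 1 << 240  # artificially easy difficulty so the toy runs fast

    def pow_hash(header: bytes) -> int:
        # Bitcoin-style double SHA-256, interpreted as a big integer.
        return int.from_bytes(
            hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")

    def mine(payload: bytes) -> bytes:
        # Grind a nonce until the header meets the toy target.
        nonce = 0
        while True:
            header = payload + nonce.to_bytes(8, "little")
            if pow_hash(header) < TARGET:
                return header
            nonce += 1

    def spv_accepts(header: bytes) -> bool:
        # An SPV client checks only that the header carries enough work.
        return pow_hash(header) < TARGET

    def full_node_accepts(header: bytes, coinbase_btc: float) -> bool:
        # A full node also enforces consensus rules; here, just the
        # 12.5 BTC subsidy cap (fees ignored in this toy).
        return spv_accepts(header) and coinbase_btc <= 12.5

    # A block whose coinbase mints 1000 BTC out of thin air still has
    # valid proof-of-work once mined:
    bad = mine(b"coinbase pays its miner 1000 BTC")
    print(spv_accepts(bad))               # True  -> SPV clients follow it
    print(full_node_accepts(bad, 1000.0)) # False -> full nodes reject it

Whether enough everyday users running full nodes actually prevents miners
from profitably serving such a chain is precisely what the two sides of
this thread disagree about.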
On Fri, Mar 31, 2017 at 4:13 PM, Eric Voskuil via bitcoin-dev
<bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
>> If the obsession with every personal computer being able to run a
>> full node continues then bitcoin will be consigned to the dustbin
>> of history,
>
> The cause of the block size debate is the failure to understand the
> Bitcoin security model. This failure is perfectly exemplified by the
> above statement. If a typical personal computer cannot run a node,
> there is no security.
>
> e
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
^ permalink raw reply [flat|nested] 81+ messages in thread
* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
2017-04-01 6:18 ` Jared Lee Richardson
@ 2017-04-01 7:41 ` Eric Voskuil
[not found] ` <CAAt2M1_sHsCD_AX-vm-oy-4tY+dKoDAJhfVUc4tnoNBFn-a+Dg@mail.gmail.com>
[not found] ` <CAD1TkXusCe-O3CGQkXyRw_m3sXS9grGxMqkMk8dOvFNXeV5zGQ@mail.gmail.com>
0 siblings, 2 replies; 81+ messages in thread
From: Eric Voskuil @ 2017-04-01 7:41 UTC (permalink / raw)
To: Jared Lee Richardson, Bitcoin Protocol Discussion
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On 03/31/2017 11:18 PM, Jared Lee Richardson wrote:
>> If a typical personal computer cannot run a node there is no
>> security.
>
> If you can't describe an attack that is made possible when typical
> personal computers can't run nodes, this kind of logic has no place
> in this discussion.
"Governments are good at cutting off the heads of a centrally
controlled networks..."
e
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
iQEcBAEBCAAGBQJY31m0AAoJEDzYwH8LXOFOayIH/0DcWukHZUVTV8952mkWnqjS
RCM8StQOuuTQ/2elvKoZa/nEv1PvpOQEO/AxJDEdIKOqjdXoc/QdZT/Qj834yyFi
mmNLm3x8voO7rTFEVtBrXQ4VYO7Zj5gVy6nRyMrhSGtzg4XqYiyGVoijiumfXOvq
ejLwyWJEf8klBwegIPkX4XX6UYjNyBt+E32Je7NxUbi54EPDRszWpEGGKfJrWiCQ
JO2jqB3O2RbMd0J1onBt2AGsjeQSE3HO0EBQSkdGQZ7PVSdE3I49uT2aAaScnPOt
ymbNz4QtlUWWpUgEI6VSjxHCGjX4+Vrn3HLRwjLe4nS2EX3mOVNY8MHMvbCeAuY=
=tD9k
-----END PGP SIGNATURE-----
^ permalink raw reply [flat|nested] 81+ messages in thread
[parent not found: <CAAt2M1_kuCBQWd9dis5UwJX8+XGVPjjiOA54aD74iS2L0cYcTQ@mail.gmail.com>]
Thread overview: 81+ messages
2017-03-28 16:59 [bitcoin-dev] Hard fork proposal from last week's meeting Wang Chun
2017-03-28 17:13 ` Matt Corallo
2017-03-29 8:45 ` Jared Lee Richardson
2017-03-28 17:23 ` Alphonse Pace
2017-03-28 17:31 ` Wang Chun
2017-03-28 17:33 ` Jeremy
2017-03-28 17:50 ` Douglas Roark
2017-03-28 17:33 ` Juan Garavaglia
2017-03-28 17:53 ` Alphonse Pace
2017-03-28 22:36 ` Juan Garavaglia
2017-03-29 2:59 ` Luv Khemani
2017-03-29 6:24 ` Emin Gün Sirer
2017-03-29 15:34 ` Johnson Lau
2017-04-01 16:15 ` Leandro Coutinho
2017-03-29 9:16 ` Jared Lee Richardson
2017-03-29 16:00 ` Aymeric Vitte
2017-03-28 17:34 ` Johnson Lau
2017-03-28 17:46 ` Luke Dashjr
2017-03-28 20:50 ` Tom Zander
2017-03-29 4:21 ` Johnson Lau
2017-03-28 20:48 ` Tom Zander
2017-03-29 6:32 ` Bram Cohen
2017-03-29 9:37 ` Jorge Timón
2017-03-29 19:07 ` Jared Lee Richardson
2017-04-02 19:02 ` Staf Verhaegen
2017-03-29 7:49 ` Martin Lízner
2017-03-29 15:57 ` David Vorick
2017-03-29 16:08 ` Aymeric Vitte
[not found] ` <CAFVRnyo1XGNbq_F8UfqqJWHCVH14iMCUMU-R5bOh+h3mtwSUJg@mail.gmail.com>
2017-03-29 16:18 ` David Vorick
2017-03-29 16:20 ` Andrew Johnson
2017-03-29 16:25 ` David Vorick
2017-03-29 16:41 ` Andrew Johnson
2017-03-29 17:14 ` Aymeric Vitte
2017-03-29 20:53 ` Jared Lee Richardson
2017-03-29 20:32 ` Jared Lee Richardson
2017-03-29 21:36 ` praxeology_guy
2017-03-29 22:33 ` Aymeric Vitte
2017-03-30 5:23 ` Ryan J Martin
2017-03-30 10:30 ` Tom Zander
2017-03-30 16:44 ` Jared Lee Richardson
2017-03-30 20:51 ` Jared Lee Richardson
2017-03-30 21:57 ` Tom Zander
[not found] ` <CAD1TkXvx=RKvjC8BUstwtQxUUQwG4eiU9XmF1wr=bU=xcVg5WQ@mail.gmail.com>
2017-03-30 10:13 ` Aymeric Vitte
2017-03-29 19:46 ` Jared Lee Richardson
2017-03-29 19:10 ` Jared Lee Richardson
2017-03-29 19:36 ` praxeology_guy
2017-04-02 19:12 ` Staf Verhaegen
2017-03-28 19:56 Paul Iverson
2017-03-28 20:16 ` Pieter Wuille
2017-03-28 20:43 ` Tom Zander
2017-03-28 20:53 ` Alphonse Pace
2017-03-28 21:06 ` Luke Dashjr
2017-03-29 19:33 Daniele Pinna
2017-03-29 20:28 ` Peter R
2017-03-29 22:17 ` Jared Lee Richardson
2017-03-29 20:28 ` David Vorick
2017-03-29 22:08 ` Jared Lee Richardson
2017-03-30 7:11 ` Luv Khemani
2017-03-30 17:16 ` Jared Lee Richardson
2017-03-31 4:21 ` Luv Khemani
2017-03-31 5:28 ` Jared Lee Richardson
2017-03-31 8:19 ` Luv Khemani
2017-03-31 15:59 ` Jared Lee Richardson
2017-03-31 16:14 ` David Vorick
2017-03-31 16:46 ` Jared Lee Richardson
2017-03-31 18:23 ` David Vorick
2017-03-31 18:58 ` Eric Voskuil
2017-04-01 6:15 ` Jared Lee Richardson
2017-03-29 19:50 Raystonn .
2017-03-30 10:34 ` Tom Zander
2017-03-30 11:19 ` David Vorick
2017-03-30 21:42 ` Jared Lee Richardson
2017-03-30 11:24 ` Aymeric Vitte
2017-03-31 21:23 Rodney Morris
2017-03-31 23:13 ` Eric Voskuil
[not found] ` <CABerxhGeofH4iEonjB1xKOkHcEVJrR+D4QhHSw5cWYsjmW4JpQ@mail.gmail.com>
2017-04-01 1:41 ` Rodney Morris
2017-04-01 6:18 ` Jared Lee Richardson
2017-04-01 7:41 ` Eric Voskuil
[not found] ` <CAAt2M1_sHsCD_AX-vm-oy-4tY+dKoDAJhfVUc4tnoNBFn-a+Dg@mail.gmail.com>
[not found] ` <CAAt2M19Gt8PmcPUGUHKm2kpMskpN4soF6M-Rb46HazKMV2D9mg@mail.gmail.com>
2017-04-01 14:45 ` Natanael
[not found] ` <CAD1TkXusCe-O3CGQkXyRw_m3sXS9grGxMqkMk8dOvFNXeV5zGQ@mail.gmail.com>
2017-04-01 18:42 ` Jared Lee Richardson
[not found] ` <CAAt2M1_kuCBQWd9dis5UwJX8+XGVPjjiOA54aD74iS2L0cYcTQ@mail.gmail.com>
[not found] ` <CAAt2M19Nr2KdyRkM_arJ=LBnqDQQyLQ2QQ-UBC8=gFnemCdPMg@mail.gmail.com>
2017-04-01 13:26 ` Natanael