* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
@ 2015-06-28 5:34 Raystonn
2015-06-28 10:07 ` Adam Back
0 siblings, 1 reply; 43+ messages in thread
From: Raystonn @ 2015-06-28 5:34 UTC (permalink / raw)
To: Peter Todd; +Cc: bitcoin-dev
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 5:34 [bitcoin-dev] A Proposed Compromise to the Block Size Limit Raystonn
@ 2015-06-28 10:07 ` Adam Back
2015-06-28 10:29 ` Benjamin
2015-06-28 12:32 ` Milly Bitcoin
0 siblings, 2 replies; 43+ messages in thread
From: Adam Back @ 2015-06-28 10:07 UTC (permalink / raw)
To: Raystonn; +Cc: bitcoin-dev
On 28 June 2015 at 07:34, Raystonn <raystonn@hotmail.com> wrote:
> nodes are limited to 133 connections. This is 8 outgoing connections and
> 125 incoming connections. [...] Once your full node reaches 133 connections,
> it will see no further increase in load [...] Only transaction rate will affect the
> load on your node.
The total system cost is more relevant, or the total cost per user. I think you
are stuck on O( t * m ) thinking, with t = transactions and m = nodes. The total
cost per user is increasing, which is why better scaling algorithms need to be
found, and why people are working on lightning-like systems.
> fear larger blocks based on an assumption of exponential growth of work, which just
> isn't the case.
People have been explaining a quadratic system-level increase, which is
not exponential; "exponential growth of work" is the wrong assumption.
> Decentralisation is planned to scale down once the 133 connection limit is
> hit. Like it or not, this is the current state of the code.
No, people are not assuming decentralisation would decrease. They are assuming
the number of economically dependent full nodes would increase; that's where the
O( n^2 ) comes from! If we assume, say, c = 0.1% of users will run full nodes,
and users make some small-world-assumed number of transactions that doesn't
increase greatly as more users are added to the network, then O( t * m
) => O( n^2 ).
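In code, the argument amounts to the following toy cost model (the constants are illustrative only, not measurements): t transactions each validated by m full nodes, with both t and m growing linearly in the user count n.

```python
# Toy cost model for the O(t * m) argument: t transactions must each be
# validated by m full nodes. If both scale linearly with the user count n,
# total system work is O(n^2). Constants below are illustrative only.

TX_PER_USER = 2        # assumed small-world tx count per user (constant)
NODE_FRACTION = 0.001  # assumed c = 0.1% of users run full nodes

def total_validation_work(n_users: int) -> float:
    t = TX_PER_USER * n_users          # t = O(n)
    m = NODE_FRACTION * n_users        # m = O(n)
    return t * m                       # total work = O(n^2)

for n in (10_000, 100_000, 1_000_000):
    print(n, total_validation_work(n))
# 10x the users => ~100x the total work, even though per-node work is O(n)
```

Note the per-node load grows only linearly, which is consistent with the "only transaction rate will affect the load on your node" observation above; it is the system-wide total that is quadratic.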
Seeing decentralisation fail isn't a useful direction, as Bitcoin depends on
decentralisation for most of its useful security properties. People running
around saying "great, let's centralise Bitcoin and scale it" are not working on
Bitcoin. They may more usefully go work on competing systems without
proof of work, as that's where this line of reasoning ends up. There
are companies working on such things. Some of them support Bitcoin IOUs.
Some of them have job openings.
We can improve decentralisation, and use bandwidth and relay improvements
to get some increase in throughput. But setting off down a path of simplistic,
ever-increasing block-size thinking is destructive
and not Bitcoin. If you want to do that, you need to do it in an offchain
system. You can't build on sand, so your offchain system won't be useful
if Bitcoin doesn't have reasonable decentralisation to retain useful meaning.
Hence lightning. There are existing layer-2 things that have on-chain netting.
Go work on one of those. But people need to understand the constraints
and stop arguing to break Bitcoin to "scale". That's too simplistic.
Even Gavin's proposal is not trying to do that, hence the reference to
Nielsen's law.
His parameters are too high for too long for basic safety or prudence, but the
general idea, to reclaim some throughput from network advances, is reasonable.
Also, decentralisation is key, and that is something we can improve with pooling
protocols to phase out the artificial centralisation. We can also educate people
to use a full node they economically depend on, to keep the full-node-to-SPV
ratio reasonable, which is also needed for security.
Adam
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 10:07 ` Adam Back
@ 2015-06-28 10:29 ` Benjamin
2015-06-28 12:37 ` Adam Back
2015-06-28 12:32 ` Milly Bitcoin
1 sibling, 1 reply; 43+ messages in thread
From: Benjamin @ 2015-06-28 10:29 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
I agree that naive scaling will likely lead to bad outcomes. They might
have the advantage though, as this would mean not changing Bitcoin.
Level 2 and Lightning are not well defined. If you move money to a third
party, even if it is within the constraints of a locked contract, then I
don't think that will solve the issues. The blockchain does not know about
offchain activity, and moving between offchain and onchain requires liquidity
and a pricing mechanism. That is exactly the problem with side-chains. If you
have off-chain transactions on an exchange, they are ID'ed in their system,
subject to KYC/AML.
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 10:07 ` Adam Back
2015-06-28 10:29 ` Benjamin
@ 2015-06-28 12:32 ` Milly Bitcoin
1 sibling, 0 replies; 43+ messages in thread
From: Milly Bitcoin @ 2015-06-28 12:32 UTC (permalink / raw)
To: bitcoin-dev
>Also decentralisation is key, and that is something we can improve
with pooling protocols to phase out the artificial centralisation.
So how is the level of decentralization measured? I see many claims on
this list that such-and-such action will increase or decrease
centralization and sometimes people talk in absolutes such as something
being decentralized or centralized. Some of the arguments seem to make
claims without providing any kind of analysis or explanation.
Nothing is truly decentralized and decentralization is just an
approximation of having a collection of centralized systems interact in
some way. I would suggest coming up with some sort of metric so these
discussions can start from a baseline when discussing changes.
Russ
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 10:29 ` Benjamin
@ 2015-06-28 12:37 ` Adam Back
2015-06-28 16:32 ` Raystonn .
0 siblings, 1 reply; 43+ messages in thread
From: Adam Back @ 2015-06-28 12:37 UTC (permalink / raw)
To: Benjamin; +Cc: bitcoin-dev
On 28 June 2015 at 12:29, Benjamin <benjamin.l.cordes@gmail.com> wrote:
> I agree that naive scaling will likely lead to bad outcomes. They might have
> the advantage though, as this would mean not changing Bitcoin.
Sure, we can work incrementally and carefully; this is exactly what
Bitcoin has been doing, and *must* do for safety and security, for the
last 5 years!
That doesn't mean that useful, serious improvements have not been made.
> Level2 and Lightning is not well defined. If you move money to a third
> party, even if it is within the constrained of a locked contract, then I
> don't think that will solve the issues.
I think you misunderstand how lightning works. Every lightning
transaction *is* a valid bitcoin transaction that could be posted to
the Bitcoin network to reclaim funds if a hub went permanently
offline. It is just that while the hubs involved remain in service,
there is no need to do so. This is why it has been described as a
(write-coalescing) write cache layer for Bitcoin.
I believe people expect lightning to be peer 2 peer like bitcoin.
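The "write cache" description can be sketched as a toy model (all names here are hypothetical; real lightning additionally revokes old states and uses hash time-locked contracts for routing, which this sketch omits):

```python
# Toy payment channel: every offchain update produces a complete,
# broadcastable settlement transaction, but only the latest need ever
# hit the chain. Hypothetical sketch -- real lightning adds revocation
# of stale states and hash time-locked contracts on top of this idea.

from dataclasses import dataclass

@dataclass(frozen=True)
class SettlementTx:
    """Stand-in for a valid on-chain transaction paying out both parties."""
    seq: int
    balance_a: int
    balance_b: int

class Channel:
    def __init__(self, deposit_a: int, deposit_b: int):
        self.balance_a, self.balance_b = deposit_a, deposit_b
        self.seq = 0

    def pay_a_to_b(self, amount: int) -> SettlementTx:
        if amount > self.balance_a:
            raise ValueError("insufficient channel balance")
        self.balance_a -= amount
        self.balance_b += amount
        self.seq += 1
        # Each update is itself a settlement either party could broadcast.
        return SettlementTx(self.seq, self.balance_a, self.balance_b)

chan = Channel(deposit_a=100_000, deposit_b=0)
for _ in range(3):
    latest = chan.pay_a_to_b(10_000)  # three offchain payments
print(latest)  # only this last transaction would need to hit the chain
```

The point is that `latest` is always a valid on-chain transaction, so neither party needs to trust the other to eventually get paid out.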
Adam
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 12:37 ` Adam Back
@ 2015-06-28 16:32 ` Raystonn .
2015-06-28 17:12 ` Mark Friedenbach
0 siblings, 1 reply; 43+ messages in thread
From: Raystonn . @ 2015-06-28 16:32 UTC (permalink / raw)
To: Adam Back, Benjamin; +Cc: bitcoin-dev
Write coalescing works fine when you have multiple writes headed to the same
(contiguous) location. Will lightning be useful when we have more unique
transactions being sent to different addresses, rather than just multiple
transactions between the same sender and recipient? I have doubts.
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 16:32 ` Raystonn .
@ 2015-06-28 17:12 ` Mark Friedenbach
2015-06-28 17:18 ` Benjamin
2015-06-28 17:29 ` Gavin Andresen
0 siblings, 2 replies; 43+ messages in thread
From: Mark Friedenbach @ 2015-06-28 17:12 UTC (permalink / raw)
To: Raystonn .; +Cc: bitcoin-dev
Think in terms of participants, not addresses. A participant in the
lightning network has a couple of connections to various hubs, from which
the participant is able to send or receive coin. The user is able to send
coins to anyone connected to the lightning network by means of an atomic
transaction through any path of the network. But the only payment from them
that ever hits the chain is their settlement with the hub.
Imagine there was a TCP/IP data chain and corresponding lightning network.
Everyone connected to the network has an "IP" channel with their ISP.
Through this channel they can send data to anywhere on the network, and a
traceroute shows what hops the data would take. But when settlement
actually occurs all the network sees is the net amount of data that has
gone through each segment -- without any context. There's no record
preserved on-chain of who sent data to whom, just that X bytes went through
the pipe on the way to somewhere unspecified.
So it is with lightning payment networks. You open a channel with a hub and
through that channel send coins to anyone accessible to the network.
Channels only close when a participant needs the funds for non-lightning
reasons, or when hubs need to rebalance. And when they do, observers on the
chain learn nothing more than how much net coin moved across that single
link. They learn nothing about where that coin eventually ended up.
So back to your original question, each channel can be considered to have a
pseudonymous identity, and each new channel given a new identity. Channel
closures can even be coinjoin'd when the other party is cooperating. But
ultimately, lightning usefully solves a problem where participants have
semi-long lived payment endpoints.
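The netting point above can be illustrated with a small sketch (names and amounts hypothetical): many individual payments routed over one link collapse to a single net figure at settlement, with no per-payment record.

```python
# Illustration of on-chain netting: many offchain payments across a link
# settle as one net amount, leaving no record of individual payments.
# Toy model; the parties and amounts are hypothetical.

from collections import defaultdict

payments = [  # (from, to, amount) routed over the alice<->hub link
    ("alice", "hub", 40),
    ("hub", "alice", 15),
    ("alice", "hub", 25),
]

net = defaultdict(int)
for src, dst, amt in payments:
    net[(src, dst)] += amt

# What the chain eventually sees: one net figure for the link.
settlement = net[("alice", "hub")] - net[("hub", "alice")]
print(settlement)  # 50 -- three payments collapse to one net transfer
```

Observers of the settlement learn only that 50 units moved across this link, not who was ultimately paid, mirroring the traceroute-vs-net-data analogy above.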
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 17:12 ` Mark Friedenbach
@ 2015-06-28 17:18 ` Benjamin
2015-06-28 17:29 ` Gavin Andresen
1 sibling, 0 replies; 43+ messages in thread
From: Benjamin @ 2015-06-28 17:18 UTC (permalink / raw)
To: Mark Friedenbach; +Cc: bitcoin-dev
"You open a channel with a hub and through that channel send coins to
anyone accessible to the network."
Define hub *precisely* and you will find there are
some significant problems here.
a) Does everyone know each other in the network? In Bitcoin, transacting
parties exchange keys out of band. How do I know that Alice is the owner of a
pubkey? I don't, and if I don't know Alice I'm out of luck and can't transact
with her (or must trust another PKI).
b) Hubs need incentives. They are not going to put up collateral just for
nothing.
c) How is complexity reduced? I would speculate that most transactions are
one-time transactions in the time frame of days.
LT is a very interesting idea, but far from actual implementation.
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 17:12 ` Mark Friedenbach
2015-06-28 17:18 ` Benjamin
@ 2015-06-28 17:29 ` Gavin Andresen
2015-06-28 17:45 ` Mark Friedenbach
` (3 more replies)
1 sibling, 4 replies; 43+ messages in thread
From: Gavin Andresen @ 2015-06-28 17:29 UTC (permalink / raw)
To: Mark Friedenbach; +Cc: bitcoin-dev
On Sun, Jun 28, 2015 at 1:12 PM, Mark Friedenbach <mark@friedenbach.org>
wrote:
> But ultimately, lightning usefully solves a problem where participants
> have semi-long lived payment endpoints.
Very few of my own personal Bitcoin transactions fit that use-case.
In fact, very few of my own personal dollar transactions fit that use-case
(I suppose if I were addicted to Starbucks I'd have one of their payment
cards that I topped up every once in a while, which would map nicely onto a
payment channel). I suppose I could set up a payment channel with the
grocery store I shop at once a week, but that would be inconvenient (I'd
have to pre-fund it) and bad for my privacy.
I can see how payment channels would work between big financial
institutions as a settlement layer, but isn't that exactly the
centralization concern that is making a lot of people worried about
increasing the max block size?
And if there are only a dozen or two popular hubs, that's much worse
centralization-wise compared to a few thousand fully-validating Bitcoin
nodes.
Don't get me wrong, I think the Lightning Network is a fantastic idea and a
great experiment and will likely be used for all sorts of great payment
innovations (micropayments for bandwidth maybe, or maybe paying workers by
the hour instead of at the end of the month). But I don't think it is a
scaling solution for the types of payments the Bitcoin network is handling
today.
--
Gavin Andresen
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 17:29 ` Gavin Andresen
@ 2015-06-28 17:45 ` Mark Friedenbach
2015-06-28 17:51 ` Adam Back
` (2 subsequent siblings)
3 siblings, 0 replies; 43+ messages in thread
From: Mark Friedenbach @ 2015-06-28 17:45 UTC (permalink / raw)
To: Gavin Andresen; +Cc: bitcoin-dev
Gavin, do you use a debit card or credit card? Then you do fit that use
case. When you buy a coffee at Starbucks, it is your bank that pays
Starbucks' bank. So it is with micropayment hubs.
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 17:29 ` Gavin Andresen
2015-06-28 17:45 ` Mark Friedenbach
@ 2015-06-28 17:51 ` Adam Back
2015-06-28 18:58 ` Adam Back
2015-06-28 17:53 ` Jorge Timón
2015-06-28 19:22 ` Andrew Lapp
3 siblings, 1 reply; 43+ messages in thread
From: Adam Back @ 2015-06-28 17:51 UTC (permalink / raw)
To: Gavin Andresen; +Cc: bitcoin-dev
On Sun, Jun 28, 2015 at 1:12 PM, Mark Friedenbach <mark@friedenbach.org> wrote:
> But ultimately, lightning usefully solves a problem where participants have semi-long lived payment endpoints.
Recipients do benefit from keeping connections to hubs, because if a
hub goes away or a user abandons a hub, that tends to generate new
on-chain traffic for balance reclaim and new channel establishment,
as we understand the limits so far.
On 28 June 2015 at 19:29, Gavin Andresen <gavinandresen@gmail.com> wrote:
> Very few of my own personal Bitcoin transactions fit that use-case.
I believe Mark is talking about the one-hop (direct) connections
benefiting from being long-lived; the payment destination is not
restricted in the same way. It's more like having a static IP address
with your ISP: that doesn't stop you reaching anywhere on the internet.
Say the Lightning Network has an average fan-out of 10; now, subject to
capital and rebalancing flows in the network, you can pay any one of a
billion people in 9 hops. Maybe the fan-out is lumpy, with some bigger
hubs; that just serves to reduce the number of hops. Maybe there are
some capitalisation limits; that is dealt with by negative fees and
recirculation (more on that below), or failing that, recapitalisation
on-chain. Some people assume that a hub will run out of
capitalisation on a given channel. However, if people and hubs retain
redundant channels, they can be paid to rebalance channels, and even
users can be paid by other users if there is a net flow from some
users to a given business, e.g. Starbucks, where the users just buy new
BTC for USD and spend and don't earn BTC. Rebalancing would work
because the exchange where they buy new BTC would be incentivised to
pay Starbucks (or whoever has excess coins on a channel) to send the
coins back to the users topping up, by paying them negative fees,
because the fees to do that should be less than using on-chain
transactions.
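The fan-out arithmetic is straightforward to check (idealised tree topology assumed; real networks have cycles and overlapping peers, so actual reach is lower):

```python
# Back-of-the-envelope reach of a payment network with uniform fan-out.
# Assumes an idealised tree with no overlapping peers, so this is an
# upper bound; real networks have cycles that reduce effective reach.

def reachable(fan_out: int, hops: int) -> int:
    """Upper bound on distinct endpoints reachable within `hops` hops."""
    return fan_out ** hops

print(reachable(10, 9))  # 1000000000 -- "a billion people in 9 hops"
```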
> But I don't think it is a scaling solution for the types of payments the Bitcoin
> network is handling today.
Actually, I think it may well be able to do that very well. We don't
know for sure how it will work until we see the balance and
effectiveness of the network algorithms against usage (e.g. simulating
from Bitcoin's historic usage), but there's good reason to think
that BTC can recirculate and rebalance, due to the reversible,
non-expiring channels, and that capitalisation requirements can be lower
than simple expectation, due to higher velocity and the redistribution of
fees to anyone with excess liquidity and connectivity heading in the
right direction.
Adam
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 17:29 ` Gavin Andresen
2015-06-28 17:45 ` Mark Friedenbach
2015-06-28 17:51 ` Adam Back
@ 2015-06-28 17:53 ` Jorge Timón
2015-06-28 19:22 ` Andrew Lapp
3 siblings, 0 replies; 43+ messages in thread
From: Jorge Timón @ 2015-06-28 17:53 UTC (permalink / raw)
To: Gavin Andresen; +Cc: bitcoin-dev
On Sun, Jun 28, 2015 at 7:29 PM, Gavin Andresen <gavinandresen@gmail.com> wrote:
> On Sun, Jun 28, 2015 at 1:12 PM, Mark Friedenbach <mark@friedenbach.org>
> wrote:
>>
>> But ultimately, lightning usefully solves a problem where participants
>> have semi-long lived payment endpoints.
>
>
> Very few of my own personal Bitcoin transactions fit that use-case.
>
> In fact, very few of my own personal dollar transactions fit that use-case
> (I suppose if I was addicted to Starbucks I'd have one of their payment
> cards that I topped up every once in a while, which would map nicely onto a
> payment channel). I suppose I could setup a payment channel with the grocery
> store I shop at once a week, but that would be inconvenient (I'd have to
> pre-fund it) and bad for my privacy.
Unlike other payment channel designs, the lightning payment channel
network allows you to pay people that you haven't pre-funded a channel
with.
There must be a path in the network from you to the payee.
That's simpler with only a few hubs, although too few hubs is bad for privacy.
> I can see how payment channels would work between big financial institutions
> as a settlement layer, but isn't that exactly the centralization concern
> that is making a lot of people worried about increasing the max block size?
Worried about financial institutions using Bitcoin? No. Who said that?
> And if there are only a dozen or two popular hubs, that's much worse
> centralization-wise compared to a few thousand fully-validating Bitcoin
> nodes.
Remember the hubs cannot steal any coins.
> Don't get me wrong, I think the Lightning Network is a fantastic idea and a
> great experiment and will likely be used for all sorts of great payment
> innovations (micropayments for bandwidth maybe, or maybe paying workers by
> the hour instead of at the end of the month). But I don't think it is a
> scaling solution for the types of payments the Bitcoin network is handling
> today.
I don't see how people could pay for coffees with bitcoin in the long term
otherwise.
Bitcoin IOUs from a third party (or federation), maybe, but not with
real p2p BTC.
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 17:51 ` Adam Back
@ 2015-06-28 18:58 ` Adam Back
2015-06-28 21:05 ` Gavin Andresen
2015-07-10 2:55 ` Tom Harding
0 siblings, 2 replies; 43+ messages in thread
From: Adam Back @ 2015-06-28 18:58 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
This is probably going to sound impolite, but I think it's pertinent.
Gavin, dwelling on the fact that you appear not to understand
the basics of the lightning network, I am a little alarmed about this,
given your recent proposals to unilaterally push the network into
quite dangerous areas of game theory, to lobby companies, etc.
People are super polite and respectful around here, but this is not
looking good, if you don't mind me saying so. You can't make balanced
or informed trade-offs on block-size schedules stretching into the
future if you don't understand work that is underway, and has been
for months. Lightning is a major candidate approach that the rest of the
technical community sees for scaling Bitcoin.
Lightning allows Bitcoin to scale even without a block-size increase,
and therefore considerably impacts any calculation of how much
block-size is required. In this light, you appear to have been
attempting to push through a change without even understanding the
alternatives or the greater ecosystem.
Adam
On 28 June 2015 at 19:51, Adam Back <adam@cypherspace.org> wrote:
> On Sun, Jun 28, 2015 at 1:12 PM, Mark Friedenbach <mark@friedenbach.org> wrote:
>> But ultimately, lightning usefully solves a problem where participants have semi-long lived payment endpoints.
>
> Recipients do benefit from keeping connections to hubs because if a
> hub goes away or a user abandons a hub that tends to generate new
> on-chain traffic for balance reclaim, and new channel establishment,
> as we understand the limits so far.
>
> On 28 June 2015 at 19:29, Gavin Andresen <gavinandresen@gmail.com> wrote:
>> Very few of my own personal Bitcoin transactions fit that use-case.
>
> I believe Mark is talking about the one hop (direct) connections
> benefits from being long-lived; the payment destination is not
> restricted in the same way. It's more like having a static IP address
> with your ISP, that doesnt stop you reaching anywhere on the internet.
>
> Say the Lightning Network has an average fan-out of 10; subject to
> capital and rebalancing flows in the network, you can then pay any one
> of a billion people in 9 hops. Maybe the fan-out is lumpy, with some
> bigger hubs - that just serves to reduce the number of hops. Maybe
> there are some capitalisation limits; those can be dealt with by
> negative fees and recirculation (more on that below), or failing that,
> by recapitalisation on-chain. Some people assume that a hub will run
> out of capitalisation on a given channel, but if people and hubs retain
> redundant channels they can be paid to rebalance them, and even users
> can be paid by other users where there is a net flow from some users to
> a given business, e.g. Starbucks, whose customers just buy new BTC for
> USD and spend it without earning BTC. Rebalancing would work because
> the exchange where they buy new BTC would be incentivised to pay
> Starbucks (or whoever has excess coins on a channel) to send the coins
> back to the users topping up, via negative fees, because the fees to do
> that should be less than using on-chain transactions.
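The fan-out arithmetic in the paragraph above can be sketched in a few lines (an idealised model that ignores route overlap and per-channel capital limits; the function name is purely illustrative):

```python
# Idealised reachability for a payment network with average fan-out f:
# within h hops roughly f**h endpoints are reachable, ignoring
# overlapping routes and channel capacity constraints.
def reachable_endpoints(fanout: int, hops: int) -> int:
    return fanout ** hops

# Fan-out 10 and 9 hops gives a billion endpoints, as claimed above.
print(reachable_endpoints(10, 9))  # 1000000000
```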
>
>> But I don't think it is a scaling solution for the types of payments the Bitcoin
>> network is handling today.
>
> Actually I think it may well be able to do that very well. We don't
> know for sure how it will work until we see the balance and
> effectiveness of the network algorithms against usage (e.g. by
> simulating from Bitcoin's historic usage), but there is good reason to
> expect that BTC can recirculate and rebalance, thanks to the
> reversible, non-expiring channels, and that capitalisation requirements
> can be lower than a simple expectation would suggest, due to higher
> velocity and the redistribution of fees to anyone with excess liquidity
> and connectivity heading in the right direction.
>
> Adam
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 17:29 ` Gavin Andresen
` (2 preceding siblings ...)
2015-06-28 17:53 ` Jorge Timón
@ 2015-06-28 19:22 ` Andrew Lapp
2015-06-28 19:40 ` Benjamin
3 siblings, 1 reply; 43+ messages in thread
From: Andrew Lapp @ 2015-06-28 19:22 UTC (permalink / raw)
To: bitcoin-dev
I don't mind a set of central authorities being part of an option IF the
central authority doesn't need to be trusted. On the blockchain, the
larger miner is, the more you have to trust them to not collude with
anyone to reverse your payments or destroy the trust in the system in
some attack. On the Lightning network, a large hub can't steal my money.
I think most people share the sentiment that trustlessness is what
matters, and that decentralization is just a synonym for trustlessness
when talking about the blockchain and mining. However, decentralization
isn't necessarily synonymous with trustlessness, nor is centralization
synonymous with requiring trust, when you're talking about something else.
-Andrew Lapp
On 06/28/2015 01:29 PM, Gavin Andresen wrote:
> I can see how payment channels would work between big financial
> institutions as a settlement layer, but isn't that exactly the
> centralization concern that is making a lot of people worried about
> increasing the max block size?
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 19:22 ` Andrew Lapp
@ 2015-06-28 19:40 ` Benjamin
0 siblings, 0 replies; 43+ messages in thread
From: Benjamin @ 2015-06-28 19:40 UTC (permalink / raw)
To: Andrew Lapp; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1690 bytes --]
"On the Lightning network, a large hub can't steal my money." Malicious
hubs could still flood the network, and as discussed so far it is not
resistant to Sybil attacks either. It's an interesting idea at a very
early stage, not at all a drop-in replacement for Bitcoin anytime soon,
as some imply. Blockstream shouldn't turn these issues into pitches for
the technology of their own for-profit enterprise.
On Sun, Jun 28, 2015 at 9:22 PM, Andrew Lapp <lapp0@purdue.edu> wrote:
> I don't mind a set of central authorities being part of an option IF the
> central authority doesn't need to be trusted. On the blockchain, the larger
> miner is, the more you have to trust them to not collude with anyone to
> reverse your payments or destroy the trust in the system in some attack. On
> the Lightning network, a large hub can't steal my money.
>
> I think most people share the sentiment that trustlessness is what matters,
> and that decentralization is just a synonym for trustlessness when talking
> about the blockchain and mining. However, decentralization isn't necessarily
> synonymous with trustlessness, nor is centralization synonymous with
> requiring trust, when you're talking about something else.
>
> -Andrew Lapp
>
> On 06/28/2015 01:29 PM, Gavin Andresen wrote:
>
>> I can see how payment channels would work between big financial
>> institutions as a settlement layer, but isn't that exactly the
>> centralization concern that is making a lot of people worried about
>> increasing the max block size?
>>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
[-- Attachment #2: Type: text/html, Size: 2535 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 18:58 ` Adam Back
@ 2015-06-28 21:05 ` Gavin Andresen
2015-06-28 21:23 ` Michael Naber
2015-06-28 22:07 ` Adam Back
2015-07-10 2:55 ` Tom Harding
1 sibling, 2 replies; 43+ messages in thread
From: Gavin Andresen @ 2015-06-28 21:05 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 972 bytes --]
On Sun, Jun 28, 2015 at 2:58 PM, Adam Back <adam@cypherspace.org> wrote:
> This is probably going to sound impolite, but I think it's pertinent.
>
> Gavin, on dwelling on the fact that you appear not to understand
> the basics of the lightning network, I am a little alarmed about this
If I don't see how switching from thousands of fully-validating bitcoin
nodes to (tens? hundreds?) of Lightning Network hubs is better in terms
of decentralization (or security, in terms of Sybil/DoS attacks), then I
doubt other people do, either. You need to do a better job of explaining
it.
But even if you could convince me that it WAS better from a
security/decentralization point of view:
a) Lightning Network is nothing but a whitepaper right now. We are a long
way from a practical implementation supported by even one wallet.
b) The Lightning Network paper itself says bigger blocks will be needed
even if (especially if!) Lightning is wildly successful.
[-- Attachment #2: Type: text/html, Size: 1403 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 21:05 ` Gavin Andresen
@ 2015-06-28 21:23 ` Michael Naber
2015-06-28 22:07 ` Adam Back
1 sibling, 0 replies; 43+ messages in thread
From: Michael Naber @ 2015-06-28 21:23 UTC (permalink / raw)
To: Gavin Andresen, Peter Todd, Adam Back; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 3232 bytes --]
Bitcoin Core exists to solve the global consensus problem. Global network
consensus means that there is global network recognition that a particular
transaction has occurred and is irreversible. Systems like hub-and-spoke,
payment channels, Lightning, etc. are useful, but they are not solutions to
the global consensus problem, because they do not meet this definition of
global consensus.
Let us focus our efforts on the goal of making Bitcoin Core the best
solution to the global consensus problem. Let us address Peter Todd’s
requirements to raise the block size limit to 8MB:
1) Run a successful test-net with 8MB blocks and show that the network
works and small miners are not unduly disadvantaged
2) Address Peter Todd's concern: “without scarcity of blockchain space
there is no reason to think that transaction fees won’t fall to the
marginal cost of including a transaction, which doesn’t leave anything to
pay for proof-of-work security”
Regarding 1: This is not done yet, though it seems reasonable enough to do.
Regarding 2: It is a fallacy to believe that artificially constraining
capacity of Bitcoin Core below the limits of technology will lead to
increased fees and therefore lead to sufficient security in the far-future.
Constraining capacity below the limits of technology will ultimately only
drive users seeking global consensus to solutions other than Bitcoin Core,
perhaps through a fork.
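Peter Todd's scarcity concern, quoted in point 2 above, can be illustrated with a toy fee auction (the numbers and function are hypothetical, shown only to make the mechanism concrete):

```python
# Toy fee market: when demand exceeds block capacity, the clearing fee
# is set by the highest excluded bid; with abundant capacity, fees fall
# toward the marginal cost of including a transaction.
def clearing_fee(bids, capacity, marginal_cost=1):
    bids = sorted(bids, reverse=True)
    if len(bids) <= capacity:
        return marginal_cost  # no scarcity: fees fall to marginal cost
    return bids[capacity]     # first excluded bid prices the block

demand = [50, 40, 30, 20, 10, 5]          # fee bids, e.g. satoshis/byte
print(clearing_fee(demand, capacity=3))   # 20 -> scarce space
print(clearing_fee(demand, capacity=10))  # 1  -> abundant space
```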
Demand for user access to high-capacity global consensus is real, and the
technology exists to deliver it; if we don't meet that demand in Bitcoin
Core, it's inevitably going to get met through some other product. Let's
not let that happen. Let's keep Bitcoin Core the best solution to the
global consensus problem.
Thoughts? Is there anything else not mentioned above which anyone would
like done in order to raise the block size to a static 8 MB?
On Sun, Jun 28, 2015 at 5:05 PM, Gavin Andresen <gavinandresen@gmail.com>
wrote:
> On Sun, Jun 28, 2015 at 2:58 PM, Adam Back <adam@cypherspace.org> wrote:
>
>> This is probably going to sound impolite, but I think it's pertinent.
>>
>> Gavin, on dwelling on the fact that you appear not to understand
>> the basics of the lightning network, I am a little alarmed about this
>
>
> If I don't see how switching from thousands of fully-validating bitcoin
> nodes to (tens? hundreds?) of Lightning Network hubs is better in terms
> of decentralization (or security, in terms of Sybil/DoS attacks), then I
> doubt other people do, either. You need to do a better job of explaining
> it.
>
> But even if you could convince me that it WAS better from a
> security/decentralization point of view:
>
> a) Lightning Network is nothing but a whitepaper right now. We are a long
> way from a practical implementation supported by even one wallet.
>
> b) The Lightning Network paper itself says bigger blocks will be needed
> even if (especially if!) Lightning is wildly successful.
>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
[-- Attachment #2: Type: text/html, Size: 4260 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 21:05 ` Gavin Andresen
2015-06-28 21:23 ` Michael Naber
@ 2015-06-28 22:07 ` Adam Back
2015-06-29 0:59 ` Eric Lombrozo
` (3 more replies)
1 sibling, 4 replies; 43+ messages in thread
From: Adam Back @ 2015-06-28 22:07 UTC (permalink / raw)
To: Gavin Andresen; +Cc: bitcoin-dev
On 28 June 2015 at 23:05, Gavin Andresen <gavinandresen@gmail.com> wrote:
> On Sun, Jun 28, 2015 at 2:58 PM, Adam Back <adam@cypherspace.org> wrote:
>>
>> This is probably going to sound impolite, but I think it's pertinent.
>>
>> Gavin, on dwelling on the fact that you appear not to understand
>> the basics of the lightning network, I am a little alarmed about this
>
> If I don't see how switching from thousands of fully-validating bitcoin
> nodes to (tens? hundreds?) of Lightning Network hubs is better in terms
> of decentralization (or security, in terms of Sybil/DoS attacks),
It's a source-routed network, not a broadcast network. Fees are charged
on channels, so DoS just becomes a way to pay people a multiple of their
bandwidth cost.
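The DoS-economics point can be sketched like this (illustrative numbers, not protocol constants, and the function name is made up):

```python
# In a source-routed network that charges a fee on every channel hop,
# flooding is self-defeating: each junk payment pays every hop on its
# route, so the attacker's spend scales with the bandwidth consumed.
def attacker_spend(payments: int, hops: int, fee_msat_per_hop: int) -> int:
    return payments * hops * fee_msat_per_hop

# One million junk payments over 5-hop routes at 2 millisatoshi per hop:
print(attacker_spend(1_000_000, 5, 2))  # 10000000 msat
```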
In terms of trustlessness, Andrew Lapp explained it pretty well:
> I don't mind a set of central authorities being part of an option IF the central authority
> doesn't need to be trusted. On the blockchain, the larger miner is, the more you have
> to trust them to not collude with anyone to reverse your payments or destroy the trust
> in the system in some attack. On the Lightning network, a large hub can't steal my
> money.
>
> I think most people share the sentiment that trustlessness is what matters,
> and that decentralization is just a synonym for trustlessness when talking
> about the blockchain and mining. However, decentralization isn't necessarily
> synonymous with trustlessness, nor is centralization synonymous with
> requiring trust, when you're talking about something else.
Gavin wrote:
> then I doubt other people do, either. You need to do a better job of explaining it.
I gave it a go a couple of posts up. I didn't realise that people here
proposing mega-blocks were not paying attention to the lightning concept
and its detail.
People said lots of things about how it's better to work on lightning,
to scale algorithmically, rather than increasing block-size to
dangerously centralising proportions.
Did you think we were Gish Galloping you? We were completely serious.
The paper is at http://lightning.network
though it is not so clearly explained there; Joseph is working on
improving the paper, as I understand it.
Rusty wrote a high-level blog explainer: http://rusty.ozlabs.org/?p=450
though I don't recall that he got into recirculation, negative fees,
etc. That might be a good question for the lightning-dev mailing list:
http://lists.linuxfoundation.org/pipermail/lightning-dev/
There are a couple of recorded presentation videos / podcasts from Joseph Poon.
sf bitcoin dev presentation:
https://www.youtube.com/watch?v=2QH5EV_Io0E
epicenter bitcoin:
https://www.youtube.com/watch?v=fBS_ieDwQ9k
There's a related paper from Christian Decker "Duplex Micropayment Channels"
http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf
> But even if you could convince me that it WAS better from a
> security/decentralization point of view:
We don't need to convince people; we just have to code it and
demonstrate it, which people are working on.
But Lightning does need a decentralised and secure Bitcoin network for
anchor and reclaim transactions, so take it easy with the mega-blocks in
the meantime.
> a) Lightning Network is nothing but a whitepaper right now. We are a long
> way from a practical implementation supported by even one wallet.
Maybe you want to check in on
https://github.com/ElementsProject/lightning
and help code it.
I expect we can get something running inside a year, which kind of
obviates the burning "need" for a schedule rising into the far future to
8GB, with unrealistic bandwidth-growth assumptions that would surely
cause centralisation problems.
For block-size, I think it would be better to have a one-off bump giving
2-4 years of room, with policy limits, and then to re-evaluate after
we've seen what lightning can do.
I have been saying the same thing ad nauseam for weeks.
> b) The Lightning Network paper itself says bigger blocks will be needed even
> if (especially if!) Lightning is wildly successful.
Not nearly as big as if you tried to put the transactions it would
enable on the chain, that's for sure! We don't know what the limit is,
but people have been imagining 1,000 or 10,000 transactions per anchor
transaction - and if micro-payments get popular, many more.
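The rough compression claim above amounts to a simple multiplier (the figures are the post's own guesses, not measurements, and the function is just for illustration):

```python
# If each on-chain anchor transaction opens a channel that carries N
# off-chain payments, effective payment capacity is roughly the
# on-chain transaction rate times N.
def effective_tps(onchain_tps: int, payments_per_anchor: int) -> int:
    return onchain_tps * payments_per_anchor

# ~3 anchor tx/s on-chain, at 1,000 or 10,000 payments per anchor:
print(effective_tps(3, 1_000))   # 3000
print(effective_tps(3, 10_000))  # 30000
```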
Basically, users would park bitcoins on a hub channel instead of on the
blockchain. The channel can stay up indefinitely, and the user has
assurances analogous to the GreenAddress time-lock mechanism.
Flexcap may be a better solution, because it allows bursting the
block-size when it is economically rational.
Note that the time-locks with lightning are assumed to be relative
CLTV, e.g. using the mechanism Mark Friedenbach described in a post
here, and as implemented in the Elements sidechain, so there is no huge
rush to reclaim funds; reclaims can be spread out in time.
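The relative time-lock behaviour described above can be modelled in a few lines (a sketch of the rule only, not consensus code; the names and numbers are made up):

```python
# A relatively time-locked reclaim output becomes spendable only after
# a delay counted from the block in which it confirmed, so different
# channels' reclaims need not race toward one absolute deadline.
def reclaim_spendable(confirm_height: int, delay_blocks: int,
                      current_height: int) -> bool:
    return current_height >= confirm_height + delay_blocks

# Channel close confirmed at height 360000 with a 144-block (~1 day) delay:
print(reclaim_spendable(360000, 144, 360100))  # False
print(reclaim_spendable(360000, 144, 360144))  # True
```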
If you want to scale Bitcoin - like really scale it - work on
lightning. Lightning + a decentralised and secure Bitcoin, scales
further and is more trustless than Bitcoin forced into centralisation
via premature mega-blocks.
To my mind a shorter, more conservative block-size increase to give a
few years room is enough for now. We'll be in a better position to
know what the right next step is after lightning is running.
Something to mention: you can elide transactions before reclaiming. So
long as the balancing transaction is correct, someone online can swap it
for you for one of equal balance with fewer hops of intermediate payment
flows.
It's pretty interesting what you can do already. I'm fairly confident
we're not finished algorithmically optimising it either. It's
surprising how much new territory there is just sitting there
unexplored.
Adam
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 22:07 ` Adam Back
@ 2015-06-29 0:59 ` Eric Lombrozo
2015-06-29 1:13 ` Eric Lombrozo
` (2 subsequent siblings)
3 siblings, 0 replies; 43+ messages in thread
From: Eric Lombrozo @ 2015-06-29 0:59 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 6691 bytes --]
There’s no question that a flooding mesh network requiring global consensus for every transaction is not the way. It’s also clear that a routable protocol capable of compensating hubs is basically the holy grail.
So what’s there to discuss?
- Eric
> On Jun 28, 2015, at 3:07 PM, Adam Back <adam@cypherspace.org> wrote:
>
> [Adam Back's message quoted in full; trimmed here, see the message above]
[-- Attachment #2: Message signed with OpenPGP using GPGMail --]
[-- Type: application/pgp-signature, Size: 842 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 22:07 ` Adam Back
2015-06-29 0:59 ` Eric Lombrozo
@ 2015-06-29 1:13 ` Eric Lombrozo
2015-06-29 1:45 ` Andy Schroder
2015-06-30 0:42 ` Tom Harding
3 siblings, 0 replies; 43+ messages in thread
From: Eric Lombrozo @ 2015-06-29 1:13 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 7228 bytes --]
The Lightning network is essentially a contract-negotiation scheme that rewards cooperation. Defection amounts to either broadcasting early or not responding to signature requests. Done right, either of these incurs a bigger cost on the uncooperative party than cooperation would. This is why I say blockchains are like a fix to the prisoner’s dilemma.
The blockchain then becomes essentially a dispute-resolution mechanism and a way to anchor things. There’s no use case covered by the current method of “flood the entire network and confirm on the blockchain” that can’t be covered by “participate in a contract which guarantees me payment on the blockchain if anyone is uncooperative, but which rarely requires touching the blockchain”, methinks.
- Eric Lombrozo
> On Jun 28, 2015, at 3:07 PM, Adam Back <adam@cypherspace.org> wrote:
>
> [Adam Back's message quoted in full; trimmed here, see the message above]
[-- Attachment #2: Message signed with OpenPGP using GPGMail --]
[-- Type: application/pgp-signature, Size: 842 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 22:07 ` Adam Back
2015-06-29 0:59 ` Eric Lombrozo
2015-06-29 1:13 ` Eric Lombrozo
@ 2015-06-29 1:45 ` Andy Schroder
2015-06-30 0:42 ` Tom Harding
3 siblings, 0 replies; 43+ messages in thread
From: Andy Schroder @ 2015-06-29 1:45 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 6690 bytes --]
Regarding privacy and the lightning network: has this been well
addressed? I haven't seen much that leads me to believe it has. The only
options I see are to keep many open payment channels, which is still
limiting and inefficient, or to require an extensive number of hops in
your payment route, which is also limiting.
Andy Schroder
On 06/28/2015 06:07 PM, Adam Back wrote:
> On 28 June 2015 at 23:05, Gavin Andresen <gavinandresen@gmail.com> wrote:
>> On Sun, Jun 28, 2015 at 2:58 PM, Adam Back <adam@cypherspace.org> wrote:
>>> This is probably going to sound impolite, but I think it's pertinent.
>>>
>>> Gavin, on dwelling on the fact that you appear not to understand
>>> the basics of the lightning network, I am a little alarmed about this
>> If I don't see how switching from thousands of fully-validating bitcoin
>> nodes to (tens? hundreds?) of Lightning Network hubs is better in terms
>> of decentralization (or security, in terms of Sybil/DoS attacks),
> It's a source-routed network, not a broadcast network. Fees are charged
> on channels, so DoS just becomes a way to pay people a multiple of their
> bandwidth cost.
>
> In terms of trustlessness, Andrew Lapp explained it pretty well:
>> I don't mind a set of central authorities being part of an option IF the central authority
>> doesn't need to be trusted. On the blockchain, the larger miner is, the more you have
>> to trust them to not collude with anyone to reverse your payments or destroy the trust
>> in the system in some attack. On the Lightning network, a large hub can't steal my
>> money.
>>
>> I think most people share the sentiment that trustlessness is what matters and
>> decentralization is just a synonym for trustlessness when talking about the blockchain
>> and mining, however decentralization isn't necessarily synonymous with trustlessness
>> nor is centralization synonymous with trust-requiring when you're talking about
>> something else.
> Gavin wrote:
>> then I doubt other people do, either. You need to do a better job of explaining it.
> I gave it a go a couple of posts up. I didn't realise people here
> proposing mega-blocks were not paying attention to the whole lightning
> concept and its details.
>
> People said lots of things about how it's better to work on lightning,
> to scale algorithmically, rather than increasing block-size to
> dangerously centralising proportions.
> Did you think we were Gish Galloping you? We were completely serious.
>
> The paper is on http://lightning.network
>
> though it is not so clearly explained there; Joseph is working
> on improving the paper as I understand it.
>
> Rusty wrote a high-level blog explainer: http://rusty.ozlabs.org/?p=450
>
> though I don't recall that he got into recirculation, negative fees,
> etc. A good question
> for the lightning-dev mailing list, maybe.
>
> http://lists.linuxfoundation.org/pipermail/lightning-dev/
>
> There are a couple of recorded presentation videos / podcasts from Joseph Poon.
>
> sf bitcoin dev presentation:
>
> https://www.youtube.com/watch?v=2QH5EV_Io0E
>
> epicenter bitcoin:
>
> https://www.youtube.com/watch?v=fBS_ieDwQ9k
>
> There's a related paper from Christian Decker "Duplex Micropayment Channels"
>
> http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf
>
>> But even if you could convince me that it WAS better from a
>> security/decentralization point of view:
> We don't need to convince people; we just have to code it and
> demonstrate it, which people are working on.
>
> But Lightning does need a decentralised and secure Bitcoin network for
> anchor and reclaim transactions, so take it easy with the mega-blocks
> in the mean-time.
>
>> a) Lightning Network is nothing but a whitepaper right now. We are a long
>> way from a practical implementation supported by even one wallet.
> Maybe you want to check in on
>
> https://github.com/ElementsProject/lightning
>
> and help code it.
>
> I expect we can get something running inside a year. Which kind of
> obviates the burning "need" for a schedule into the far future rising
> to 8GB with unrealistic bandwidth growth assumptions that will surely
> cause centralisation problems.
>
> For block-size I think it would be better to have a 2-4 year or one
> off size bump with policy limits and then re-evaluate after we've seen
> what lightning can do.
>
> I have been saying the same thing ad-nauseam for weeks.
>
>> b) The Lightning Network paper itself says bigger blocks will be needed even
>> if (especially if!) Lightning is wildly successful.
> Not nearly as big as if you tried to put the transactions it would
> enable on the chain, that's for sure! We don't know what that limit is,
> but people have been imagining 1,000 or 10,000 transactions per anchor
> transaction. If micro-payments get popular, many more.
>
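To get a feel for the numbers above, a back-of-the-envelope sketch (the block size, transaction size, and payments-per-anchor figures are illustrative assumptions, not measurements from the lightning paper):

```python
# Rough effect of payment channels on effective throughput: each pair
# of on-chain transactions (anchor + settlement) carries many off-chain
# payments. All figures are illustrative assumptions.

BLOCK_SIZE = 1_000_000   # bytes, assuming 1 MB blocks
AVG_TX_SIZE = 250        # bytes per transaction (assumed average)
BLOCK_INTERVAL = 600     # seconds between blocks

onchain_tps = BLOCK_SIZE / AVG_TX_SIZE / BLOCK_INTERVAL

for payments_per_anchor in (1, 1_000, 10_000):
    # Two on-chain transactions (open + close) per channel lifetime.
    effective_tps = onchain_tps * payments_per_anchor / 2
    print(f"{payments_per_anchor:>6} payments/anchor -> "
          f"~{effective_tps:,.0f} effective tx/s")
```

Even under these rough assumptions, the multiplier dwarfs anything achievable by raising the block size alone.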
> Basically users would park bitcoins on a hub channel instead of the
> blockchain. The channel can stay up indefinitely, and the user has
> assurances analogous to the greenaddress time-lock mechanism.
>
> Flexcap may be a better solution because it allows bursting
> block-size when economically rational.
>
> Note that the time-locks with lightning are assumed to be relative
> CLTV, e.g. using the mechanism Mark Friedenbach described in a post
> here, and as implemented in the elements sidechain, so there is no
> huge rush to reclaim funds. They can be spread out in time.
>
> If you want to scale Bitcoin - like really scale it - work on
> lightning. Lightning + a decentralised and secure Bitcoin scales
> further and is more trustless than Bitcoin forced into centralisation
> via premature mega-blocks.
>
> To my mind a shorter, more conservative block-size increase to give a
> few years room is enough for now. We'll be in a better position to
> know what the right next step is after lightning is running.
>
> Something to mention is you can elide transactions before reclaiming.
> So long as the balancing transaction is correct, someone online can
> swap it for you with an equal-balance one with fewer hops of
> intermediate payment flows.
>
>
> It's pretty interesting what you can do already. I'm fairly confident
> we're not finished algorithmically optimising it either. It's
> surprising how much new territory there is just sitting there
> unexplored.
>
> Adam
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
>
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 555 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 22:07 ` Adam Back
` (2 preceding siblings ...)
2015-06-29 1:45 ` Andy Schroder
@ 2015-06-30 0:42 ` Tom Harding
3 siblings, 0 replies; 43+ messages in thread
From: Tom Harding @ 2015-06-30 0:42 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
On 6/28/2015 3:07 PM, Adam Back wrote:
> We don't know what that limit is but people have been imagining 1,000
> or 10,000 transactions per anchor transaction. Basically users would
> park bitcoins on a hub channel instead of the blockchain.
This re-introduces a solved problem (solved by bitcoin better than
anything else) - worrying whether your "payment hub" actually connects
to whom you wish to pay.
There will be enormous network effects and centralization pressure in
the payment-hub space. A few entities, maybe single entity, should be
expected to quickly corner the market and own the whole thing.
This concept is far too untested to justify amateur economic meddling in
the bitcoin fee market by setting a restrictive hard cap below technical
feasibility.
I can guess exactly who would want to keep bitcoin from improving:
*those who hope to be the future payment hub oligarchs*.
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-28 18:58 ` Adam Back
2015-06-28 21:05 ` Gavin Andresen
@ 2015-07-10 2:55 ` Tom Harding
1 sibling, 0 replies; 43+ messages in thread
From: Tom Harding @ 2015-07-10 2:55 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
>> On 6/28/2015 9:32 AM, Raystonn . wrote:
>>> Write coalescing works fine when you have multiple writes headed to
>>> the same (contiguous) location. Will lightning be useful when we
>>> have more unique transactions being sent to different addresses, and
>>> not just multiple transactions between the same sender and address?
>>> I have doubts.
> On 6/28/2015 10:29 AM, Gavin Andresen wrote:
>> Don't get me wrong, I think the Lightning Network is a fantastic idea
>> and a great experiment and will likely be used for all sorts of great
>> payment innovations (micropayments for bandwidth maybe, or maybe
>> paying workers by the hour instead of at the end of the month). But I
>> don't think it is a scaling solution for the types of payments the
>> Bitcoin network is handling today.
On 6/28/2015 11:58 AM, Adam Back wrote:
> Lightning allows Bitcoin to scale even without a block-size increase,
> and therefore considerably impacts any calculation of how much
> block-size is required. In this light you appear to have been
> attempting to push through a change without even understanding the
> alternatives or greater ecosystem.
Lightning Network (LN) does not "allow Bitcoin to scale". LN is a
bitcoin application. The properties of LN are dependent on bitcoin, but
they are distinct from bitcoin.
In particular, an under-appreciated aspect of LN is that in order for
your interactions to be consolidated and consume less blockchain space,
you must give up significant control of the money you send AND the money
you receive.
If either sender or receiver wants to record a transaction in the
blockchain immediately, there is no space saving versus bitcoin. More
blockchain space is actually used, due to LN overhead.
If both sender and receiver are willing to delay recording in the
blockchain, then the situation is analogous to using banks. The sender's
hub pays from the sender's channel to the receiver's channel at the receiver's hub.
Neither side fully relinquishes custody of the money in their multisig
payment hub channels -- this is an improvement on traditional bank
accounts -- BUT...
- Sender is required to lock funds under his hub's signature - this is
well discussed
- Less well discussed: *to achieve any consolidation at all, receiver
must ALSO be willing to lock received funds under his hub's signature*
I'll put it another way. LN only "solves" the scaling problem if
receiver's hub has pre-committed sufficient funds to cover the receipts,
AND if receiver endures for a period of time -- directly related to the
scaling factor -- being unable to spend money received UNLESS his
payment hub signs off on his spend instructions.
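Tom's trade-off can be made concrete with a toy model (the payment rate and consolidation factors below are hypothetical, chosen only to show the shape of the relationship):

```python
# Toy model of the consolidation trade-off: the more payments a
# receiver lets accumulate in a channel before settling on-chain, the
# longer received funds sit under the hub's co-signature.
# All parameters are hypothetical.

def lockup_days(payments_per_day: float, consolidation_factor: int) -> float:
    """Days until enough payments accumulate to justify one settlement."""
    return consolidation_factor / payments_per_day

for factor in (10, 100, 1000):
    days = lockup_days(payments_per_day=20, consolidation_factor=factor)
    print(f"consolidation x{factor}: funds hub-co-signed for ~{days:.1f} days")
```

The point being that the scaling factor and the receiver's lock-up period grow together: one is not available without the other.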
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 17:54 ` Peter Todd
2015-06-27 17:58 ` Venzen Khaosan
@ 2015-06-27 19:34 ` Benjamin
1 sibling, 0 replies; 43+ messages in thread
From: Benjamin @ 2015-06-27 19:34 UTC (permalink / raw)
To: Peter Todd; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1791 bytes --]
On Sat, Jun 27, 2015 at 7:54 PM, Peter Todd <pete@petertodd.org> wrote:
> On Sat, Jun 27, 2015 at 07:46:55PM +0200, Benjamin wrote:
> > There is no ensured Quality of service, is there? If you "bid" higher,
> then
> > you don't know what you are going to get. Also because you have no way of
> > knowing what *others* are bidding. Only if you have auctions (increasing
> > increments) you can establish a feedback loop to settle demand and
> supply.
> > And the supply side doesn't adapt. Adapting supply would help resolve
> parts
> > of the capacity problem.
>
> There's lots of markets where there is no assured quality of service,
> and where the bids others are making aren't known. Most financial
> markets work that way - there's only ever probabilistic guarantees that
> for a given amount of money you'll be able to buy a certain amount of
> gold at any given time for instance. Similarly for nearly all
> commodities the infrastructure required to mine those commodities has
> very little room for short, medium, or even long-term production
> increases, so whatever the production supply is at a given time is
> pretty much fixed.
>
Hmm? If the current ask for 1 ounce of gold is $100, then you need to bid
$100 to get 1 ounce of gold. If tomorrow everyone agrees 1 ounce of gold
should be worth $200, then the bid moves accordingly. Of course production
changes based on prices; otherwise the economy would not function. If the price
of some stuff goes up, more people produce that stuff. In terms of a price
for a transaction and the use of a blockchain, unfortunately there is no
way to just add computational supply. That's an inherent weakness of how
blockchains are structured. Ideally it would be as simple as demanding more
resources, as in scaling a web service with AWS.
[-- Attachment #2: Type: text/html, Size: 2327 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 18:02 ` Jameson Lopp
@ 2015-06-27 18:47 ` Peter Todd
0 siblings, 0 replies; 43+ messages in thread
From: Peter Todd @ 2015-06-27 18:47 UTC (permalink / raw)
To: Jameson Lopp; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1184 bytes --]
On Sat, Jun 27, 2015 at 02:02:05PM -0400, Jameson Lopp wrote:
> > For Bitcoin to have O(n) scaling you have to assume that the number of
> > validation nodes doesn't scale with the number of users, thus resulting
> > in a system where users trust others to do validation for them. That is
> > not a global consensus system; that's a trust-based system.
> >
> >
> Why does it matter what the "total work" of the network is? Anyone who is
> participating as a node on the network only cares about the resources
> required to run their own node, not the resources everyone else needs to
> run their nodes.
>
> Also, no assumption needed, it is quite clear that the number of nodes is
> not scaling along with the number of users. If anything it appears to be
> inversely proportional.
Which is a huge problem.
Concretely, what O(n^2) scaling means is that the more Bitcoin is
adopted, the harder it is to use in a decentralized way that doesn't
trust others; the blocksize limit puts a cap on how centralized Bitcoin
can get in a given technological landscape.
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 650 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 17:34 ` Peter Todd
@ 2015-06-27 18:02 ` Jameson Lopp
2015-06-27 18:47 ` Peter Todd
0 siblings, 1 reply; 43+ messages in thread
From: Jameson Lopp @ 2015-06-27 18:02 UTC (permalink / raw)
To: Peter Todd; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 2460 bytes --]
On Sat, Jun 27, 2015 at 1:34 PM, Peter Todd <pete@petertodd.org> wrote:
> On Sat, Jun 27, 2015 at 01:25:14PM -0400, Michael Naber wrote:
> > Global network consensus means that there is global network recognition
> > that a particular transaction has occurred and is irreversible. The
> > off-chain solutions you describe, while probably useful for other
> purposes,
> > do not exhibit this characteristic and so they are not global network
> > consensus networks.
>
> Hub-and-spoke payment channels and the Lightning network are not
> off-chain solutions; they are ways to more efficiently use on-chain
> transactions to achieve the goal of moving assets from point A to
> point B, resulting in more economic transactions being done with fewer - but
> not zero! - blockchain transactions.
>
> Off-chain transaction systems such as Changetip allow economic
> transactions to happen with no blockchain transactions at all.
>
> > Bitcoin Core scales as O(N), where N is the number of transactions. Can
> we
> > do better than this while still achieving global consensus?
>
> No, Bitcoin the network scales as O(n^2) under your above criteria, as
> each node creates k transactions, thus each node has to verify k*n
> transactions, resulting in O(n^2) total work.
>
> For Bitcoin to have O(n) scaling you have to assume that the number of
> validation nodes doesn't scale with the number of users, thus resulting
> in a system where users trust others to do validation for them. That is
> not a global consensus system; that's a trust-based system.
>
>
Why does it matter what the "total work" of the network is? Anyone who is
participating as a node on the network only cares about the resources
required to run their own node, not the resources everyone else needs to
run their nodes.
Also, no assumption needed, it is quite clear that the number of nodes is
not scaling along with the number of users. If anything it appears to be
inversely proportional.
> There's nothing inherently wrong with that, but why change Bitcoin
> itself into a trust-based system, when you can preserve the global
> consensus functionality, and build a trust-based system on top of it?
>
> --
> 'peter'[:-1]@petertodd.org
> 0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
[-- Attachment #2: Type: text/html, Size: 3431 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 17:54 ` Peter Todd
@ 2015-06-27 17:58 ` Venzen Khaosan
2015-06-27 19:34 ` Benjamin
1 sibling, 0 replies; 43+ messages in thread
From: Venzen Khaosan @ 2015-06-27 17:58 UTC (permalink / raw)
To: bitcoin-dev
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Very interesting point and comparison. So the fee market is unknown,
similar to a market maker's order book - except in the case of Bitcoin
it is not being deliberately hidden from users; it's just not knowable
how miners are positioning at any given moment.
On 06/28/2015 12:54 AM, Peter Todd wrote:
> On Sat, Jun 27, 2015 at 07:46:55PM +0200, Benjamin wrote:
>> There is no ensured Quality of service, is there? If you "bid"
>> higher, then you don't know what you are going to get. Also
>> because you have no way of knowing what *others* are bidding.
>> Only if you have auctions (increasing increments) you can
>> establish a feedback loop to settle demand and supply. And the
>> supply side doesn't adapt. Adapting supply would help resolve
>> parts of the capacity problem.
>
> There's lots of markets where there is no assured quality of
> service, and where the bids others are making aren't known. Most
> financial markets work that way - there's only ever probabilistic
> guarantees that for a given amount of money you'll be able to buy a
> certain amount of gold at any given time for instance. Similarly
> for nearly all commodities the infrastructure required to mine
> those commodities has very little room for short, medium, or even
> long-term production increases, so whatever the production supply
> is at a given time is pretty much fixed.
>
>
>
> _______________________________________________ bitcoin-dev mailing
> list bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAEBAgAGBQJVjuRiAAoJEGwAhlQc8H1mjnwIAIiUSPf6agfMXsFgupoihsTV
Pr1mJWHdLjrF5QadmdyooivYGPkY+zmfJ+N3fkr8l++PDGh03u0RgALf/gwJSSAQ
qSeMmjSZb8ZEkyLlZAGVHT8Ph+lRda65CVxYspKu/54TolqEezOHVaon9uWYVjtB
cSd8fWoqJMq05Pz25QPagxFUpXmtFX1KvxUWqeGkRsuqMgeWbCurQKpOhRXu48nH
Si73iOIyDUT9i1WsPvlpOi0pSxDlGnkMQKaEyIN5JJfKo1imRAtKVRLZh43rXpSW
jeZf8LMRwd49K4vnvHXZ0UbKWhpelh6XJari22citZ7yb5w5iENAcoP/cSGhLaY=
=nfF5
-----END PGP SIGNATURE-----
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 17:46 ` Benjamin
@ 2015-06-27 17:54 ` Peter Todd
2015-06-27 17:58 ` Venzen Khaosan
2015-06-27 19:34 ` Benjamin
0 siblings, 2 replies; 43+ messages in thread
From: Peter Todd @ 2015-06-27 17:54 UTC (permalink / raw)
To: Benjamin; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1149 bytes --]
On Sat, Jun 27, 2015 at 07:46:55PM +0200, Benjamin wrote:
> There is no ensured Quality of service, is there? If you "bid" higher, then
> you don't know what you are going to get. Also because you have no way of
> knowing what *others* are bidding. Only if you have auctions (increasing
> increments) you can establish a feedback loop to settle demand and supply.
> And the supply side doesn't adapt. Adapting supply would help resolve parts
> of the capacity problem.
There's lots of markets where there is no assured quality of service,
and where the bids others are making aren't known. Most financial
markets work that way - there are only ever probabilistic guarantees that
for a given amount of money you'll be able to buy a certain amount of
gold at any given time for instance. Similarly for nearly all
commodities the infrastructure required to mine those commodities has
very little room for short, medium, or even long-term production
increases, so whatever the production supply is at a given time is
pretty much fixed.
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 650 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 17:37 ` Peter Todd
@ 2015-06-27 17:46 ` Benjamin
2015-06-27 17:54 ` Peter Todd
0 siblings, 1 reply; 43+ messages in thread
From: Benjamin @ 2015-06-27 17:46 UTC (permalink / raw)
To: Peter Todd; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1233 bytes --]
There is no ensured quality of service, is there? If you "bid" higher, then
you don't know what you are going to get. Also because you have no way of
knowing what *others* are bidding. Only if you have auctions (increasing
increments) you can establish a feedback loop to settle demand and supply.
And the supply side doesn't adapt. Adapting supply would help resolve parts
of the capacity problem.
On Sat, Jun 27, 2015 at 7:37 PM, Peter Todd <pete@petertodd.org> wrote:
> On Sat, Jun 27, 2015 at 07:26:00PM +0200, Benjamin wrote:
> > "Thus we have a fixed capacity system where access is mediated by supply
> > and demand transaction fees."
> >
> > There is no supply and demand. That would mean users would be able to
> adapt
> > fees and get different quality of service depending on current capacity.
> > For example if peak load is 10x average load, then at those times fees
> > would be higher and users would delay transactions to smooth out demand.
>
> That's exactly how Bitcoin works already. See my article on how
> transaction fees work for more details:
>
> https://gist.github.com/petertodd/8e87c782bdf342ef18fb
>
> --
> 'peter'[:-1]@petertodd.org
> 0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
>
[-- Attachment #2: Type: text/html, Size: 1883 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 17:26 ` Benjamin
@ 2015-06-27 17:37 ` Peter Todd
2015-06-27 17:46 ` Benjamin
0 siblings, 1 reply; 43+ messages in thread
From: Peter Todd @ 2015-06-27 17:37 UTC (permalink / raw)
To: Benjamin; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 744 bytes --]
On Sat, Jun 27, 2015 at 07:26:00PM +0200, Benjamin wrote:
> "Thus we have a fixed capacity system where access is mediated by supply
> and demand transaction fees."
>
> There is no supply and demand. That would mean users would be able to adapt
> fees and get different quality of service depending on current capacity.
> For example if peak load is 10x average load, then at those times fees
> would be higher and users would delay transactions to smooth out demand.
That's exactly how Bitcoin works already. See my article on how
transaction fees work for more details:
https://gist.github.com/petertodd/8e87c782bdf342ef18fb
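The supply-and-demand mechanism described above - a fixed supply of block space allocated to the highest bidders by feerate - can be sketched as follows (a simplified model for illustration, not Bitcoin Core's actual mempool selection code):

```python
# Simplified sketch of the block-space fee auction: miners fill a
# fixed-size block with the highest-feerate transactions first, so
# users effectively bid against one another for scarce space.
# Not Bitcoin Core's actual selection logic.

from typing import NamedTuple

class Tx(NamedTuple):
    size: int   # bytes
    fee: int    # satoshis

    @property
    def feerate(self) -> float:
        return self.fee / self.size

def select_block(mempool: list[Tx], max_block_size: int) -> list[Tx]:
    block, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.feerate, reverse=True):
        if used + tx.size <= max_block_size:
            block.append(tx)
            used += tx.size
    return block

mempool = [Tx(250, 2500), Tx(250, 5000), Tx(500, 2500), Tx(250, 250)]
block = select_block(mempool, max_block_size=750)
print([tx.feerate for tx in block])  # highest bidders win the space
```

At peak load, the marginal feerate needed to get into the next block rises, which is exactly the user-visible "supply and demand" behaviour being debated.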
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 650 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 17:25 ` Michael Naber
@ 2015-06-27 17:34 ` Peter Todd
2015-06-27 18:02 ` Jameson Lopp
0 siblings, 1 reply; 43+ messages in thread
From: Peter Todd @ 2015-06-27 17:34 UTC (permalink / raw)
To: Michael Naber; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1758 bytes --]
On Sat, Jun 27, 2015 at 01:25:14PM -0400, Michael Naber wrote:
> Global network consensus means that there is global network recognition
> that a particular transaction has occurred and is irreversible. The
> off-chain solutions you describe, while probably useful for other purposes,
> do not exhibit this characteristic and so they are not global network
> consensus networks.
Hub-and-spoke payment channels and the Lightning network are not
off-chain solutions; they are ways to more efficiently use on-chain
transactions to achieve the goal of moving assets from point A to
point B, resulting in more economic transactions being done with fewer - but
not zero! - blockchain transactions.
Off-chain transaction systems such as Changetip allow economic
transactions to happen with no blockchain transactions at all.
> Bitcoin Core scales as O(N), where N is the number of transactions. Can we
> do better than this while still achieving global consensus?
No, Bitcoin the network scales as O(n^2) under your above criteria, as
each node creates k transactions, thus each node has to verify k*n
transactions, resulting in O(n^2) total work.
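The counting argument in the paragraph above can be written out numerically (a sketch, with an assumed constant per-user transaction count k):

```python
# Total validation work if every one of n users runs a full node and
# each user creates k transactions: each node verifies all n*k
# transactions, so system-wide work is n * (n*k) = k * n^2.
# k is an assumed per-user transaction count, not a measured figure.

def total_validation_work(n_users: int, k_tx_per_user: int) -> int:
    tx_total = n_users * k_tx_per_user   # transactions on the chain
    return n_users * tx_total            # every node verifies all of them

for n in (1_000, 10_000, 100_000):
    work = total_validation_work(n, k_tx_per_user=2)
    print(f"n={n:>7}: total work = {work:,}")
# Work grows 100x for every 10x growth in users: O(n^2).
```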
For Bitcoin to have O(n) scaling you have to assume that the number of
validation nodes doesn't scale with the number of users, thus resulting
in a system where users trust others to do validation for them. That is
not a global consensus system; that's a trust-based system.
There's nothing inherently wrong with that, but why change Bitcoin
itself into a trust-based system, when you can preserve the global
consensus functionality, and build a trust-based system on top of it?
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 650 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 17:20 ` Peter Todd
@ 2015-06-27 17:26 ` Benjamin
2015-06-27 17:37 ` Peter Todd
0 siblings, 1 reply; 43+ messages in thread
From: Benjamin @ 2015-06-27 17:26 UTC (permalink / raw)
To: Peter Todd; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 2920 bytes --]
"Thus we have a fixed capacity system where access is mediated by supply
and demand transaction fees."
There is no supply and demand. That would mean users would be able to adapt
fees and get different quality of service depending on current capacity.
For example if peak load is 10x average load, then at those times fees
would be higher and users would delay transactions to smooth out demand.
On Sat, Jun 27, 2015 at 7:20 PM, Peter Todd <pete@petertodd.org> wrote:
> On Sat, Jun 27, 2015 at 12:19:04PM -0400, Michael Naber wrote:
> > That test seems like a reasonable suggestion; 840GB is not prohibitive
> > given today's computing costs. What other than the successful result of
> > that test would you want to see before agreeing to increase the block
> size
> > to 8MB?
>
> The two main things you need to show is:
>
> 1) Small, anonymous, miners remain approximately as profitable as large
> miners, regardless of where they are in the world, and even when
> miners are under attack. Remember I'm talking about mining here, not
> just hashing - the process of selling your hashpower to someone else who
> is actually doing the mining.
>
> As for "approximately as profitable", based on a 10% profit margin, a 5%
> profitability difference between a negligible ~0% hashing power miner
> and a 50% hashing power miner is a good standard here.
>
> The hard part here is basically keeping orphan rates low, as the 5%
> profitability difference on a 10% profit margin implies an orphan rate of
> about 0.5% - roughly what we have right now if not actually a bit lower.
> That also implies blocks propagate across the network in just a few
> seconds in the worst case, where blocks are being generated with
> transactions in them that are not already in mempools - circumventing
> propagation optimization techniques. As we're talking about small
> miners, we can't assume the miners are directly connected to each other.
> (which itself is dangerous from an attack point of view - if they're
> directly connected they can be DoS attacked)
>
> 2) Medium to long term plan to pay for hashing power. Without scarcity
> of blockchain space there is no reason to think that transaction fees
> won't fall to the marginal cost of including a transaction, which
> doesn't leave anything to pay for proof-of-work security. A proposal
> meeting this criteria will have to be clever if you don't keep the
> blocksize sufficiently limited that transaction fees are non-negligible.
> One possible approach - if probably politically non-viable - would be to
> change the inflation schedule so that the currency is inflated
> indefinitely.
>
> --
> 'peter'[:-1]@petertodd.org
> 0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
[-- Attachment #2: Type: text/html, Size: 4012 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 16:37 ` Peter Todd
@ 2015-06-27 17:25 ` Michael Naber
2015-06-27 17:34 ` Peter Todd
0 siblings, 1 reply; 43+ messages in thread
From: Michael Naber @ 2015-06-27 17:25 UTC (permalink / raw)
To: Peter Todd, Mark Friedenbach, Adam Back; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1920 bytes --]
Global network consensus means that there is global network recognition
that a particular transaction has occurred and is irreversible. The
off-chain solutions you describe, while probably useful for other purposes,
do not exhibit this characteristic and so they are not global network
consensus networks.
Bitcoin Core scales as O(N), where N is the number of transactions. Can we
do better than this while still achieving global consensus?
On Sat, Jun 27, 2015 at 12:37 PM, Peter Todd <pete@petertodd.org> wrote:
> On Sat, Jun 27, 2015 at 12:09:16PM -0400, Michael Naber wrote:
> > The goal of Bitcoin Core is to meet the demand for global consensus as
> > effectively as possible. Please let's keep the conversation on how to
> best
> > meet that goal.
>
> Keep in mind that Andresen and Hearn both propose that the majority of
> Bitcoin users, even businesses, abandon the global consensus technology
> aspect of Bitcoin - running full nodes - and instead adopt trust
> technology instead - running SPV nodes.
>
> We're very much focused on meeting the demand for global consensus
> technology, but unfortunately global consensus is also has inherently
> O(n^2) scaling with current approaches available. Thus we have a fixed
> capacity system where access is mediated by supply and demand
> transaction fees.
>
> > The off-chain solutions you enumerate are are useful solutions in their
> > respective domains, but none of them solves the global consensus problem
> > with any greater efficiency than Bitcoin does.
>
> Solutions like (hub-and-spoke) payment channels, Lightning, etc. allow
> users of the global consensus technology in Bitcoin to use that
> technology in much more efficient ways, leveraging a relatively small
> amount of global consensus to do large numbers of transactions
> trustlessly.
>
> --
> 'peter'[:-1]@petertodd.org
> 0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
>
[-- Attachment #2: Type: text/html, Size: 2557 bytes --]
^ permalink raw reply [flat|nested] 43+ messages in thread
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 16:19 ` Michael Naber
@ 2015-06-27 17:20 ` Peter Todd
2015-06-27 17:26 ` Benjamin
0 siblings, 1 reply; 43+ messages in thread
From: Peter Todd @ 2015-06-27 17:20 UTC (permalink / raw)
To: Michael Naber; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 2224 bytes --]
On Sat, Jun 27, 2015 at 12:19:04PM -0400, Michael Naber wrote:
> That test seems like a reasonable suggestion; 840GB is not prohibitive
> given today's computing costs. What other than the successful result of
> that test would you want to see before agreeing to increase the block size
> to 8MB?
The two main things you need to show is:
1) Small, anonymous miners remain approximately as profitable as large
miners, regardless of where they are in the world, and even when miners
are under attack. Remember I'm talking about mining here, not just
hashing - hashing being the process of selling your hashpower to someone
else who is actually doing the mining.
As for "approximately as profitable": based on a 10% profit margin, a 5%
profitability difference between a negligible ~0% hashing power miner
and a 50% hashing power miner is a good standard here.
The hard part here is basically keeping orphan rates low, as a 5%
profitability difference on a 10% profit margin implies an orphan rate
of about 0.5% - roughly what we have right now, if not actually a bit
lower.
That also implies blocks propagate across the network in just a few
seconds in the worst case, where blocks are being generated with
transactions in them that are not already in mempools - circumventing
propagation optimization techniques. As we're talking about small
miners, we can't assume the miners are directly connected to each other.
(which itself is dangerous from an attack point of view - if they're
directly connected they can be DoS attacked)
2) Medium to long term plan to pay for hashing power. Without scarcity
of blockchain space there is no reason to think that transaction fees
won't fall to the marginal cost of including a transaction, which
doesn't leave anything to pay for proof-of-work security. A proposal
meeting this criterion will have to be clever if you don't keep the
blocksize sufficiently limited that transaction fees are non-negligible.
One possible approach - though probably politically non-viable - would be to
change the inflation schedule so that the currency is inflated
indefinitely.
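The profitability arithmetic in point 1 can be sanity-checked with a short sketch (my own illustration, not from the thread, using the figures above: a 10% base margin, costs fixed relative to full revenue, and large miners orphaning ~0% of their blocks):

```python
def profit_margin(orphan_rate, base_margin=0.10):
    """Profit per unit of potential revenue for a miner whose blocks
    are orphaned at `orphan_rate`; costs are fixed at (1 - base_margin)."""
    revenue = 1.0 - orphan_rate   # orphaned blocks earn nothing
    cost = 1.0 - base_margin      # hardware/power cost, unaffected by orphans
    return revenue - cost

large = profit_margin(0.0)        # big miner, ~0% orphans: 10% margin
small = profit_margin(0.005)      # small miner, 0.5% orphans: 9.5% margin
print(f"relative profitability gap: {(large - small) / large:.1%}")  # 5.0%
```

So a 0.5% orphan rate for the small miner is exactly what produces the 5% relative profitability gap on a 10% margin.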
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 650 bytes --]
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 16:09 ` Michael Naber
2015-06-27 16:28 ` Mark Friedenbach
@ 2015-06-27 16:37 ` Peter Todd
2015-06-27 17:25 ` Michael Naber
1 sibling, 1 reply; 43+ messages in thread
From: Peter Todd @ 2015-06-27 16:37 UTC (permalink / raw)
To: Michael Naber; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1375 bytes --]
On Sat, Jun 27, 2015 at 12:09:16PM -0400, Michael Naber wrote:
> The goal of Bitcoin Core is to meet the demand for global consensus as
> effectively as possible. Please let's keep the conversation on how to best
> meet that goal.
Keep in mind that Andresen and Hearn both propose that the majority of
Bitcoin users, even businesses, abandon the global consensus technology
aspect of Bitcoin - running full nodes - and instead adopt trust
technology - running SPV nodes.
We're very much focused on meeting the demand for global consensus
technology, but unfortunately global consensus also has inherently
O(n^2) scaling with currently available approaches. Thus we have a
fixed-capacity system where access is mediated by transaction fees set
by supply and demand.
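A toy cost model (my own illustration, not Peter's; the node fraction and transaction rate are illustrative assumptions) of where the O(n^2) comes from: if a fixed fraction of n users run full nodes and each user makes a fixed number of transactions, every node validates every transaction, so system-wide work grows with n^2:

```python
def total_validation_work(n_users, tx_per_user=2):
    """System-wide validation cost if ~0.1% of users run full nodes
    (both the fraction and tx rate are assumed constants)."""
    full_nodes = n_users // 1000          # nodes grow linearly with users
    transactions = tx_per_user * n_users  # so do transactions
    return full_nodes * transactions      # every node checks every tx: O(n^2)

# Doubling the user base quadruples total validation work:
assert total_validation_work(2_000_000) == 4 * total_validation_work(1_000_000)
```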
> The off-chain solutions you enumerate are useful solutions in their
> respective domains, but none of them solves the global consensus problem
> with any greater efficiency than Bitcoin does.
Solutions like (hub-and-spoke) payment channels, Lightning, etc. allow
users of the global consensus technology in Bitcoin to use that
technology in much more efficient ways, leveraging a relatively small
amount of global consensus to do large numbers of transactions
trustlessly.
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 650 bytes --]
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 16:09 ` Michael Naber
@ 2015-06-27 16:28 ` Mark Friedenbach
2015-06-27 16:37 ` Peter Todd
1 sibling, 0 replies; 43+ messages in thread
From: Mark Friedenbach @ 2015-06-27 16:28 UTC (permalink / raw)
To: Michael Naber; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 4666 bytes --]
I really suggest you look into the layer2 systems Adam pointed to, as you
appear to be misinformed about their properties. There are many proposals
which really do achieve global consensus using the block chain, just in a
delayed (and cached) fashion that is still 100% safe.
It is possible to go off-chain without losing the trustlessness and
security of the block chain.
On Sat, Jun 27, 2015 at 9:09 AM, Michael Naber <mickeybob@gmail.com> wrote:
> The goal of Bitcoin Core is to meet the demand for global consensus as
> effectively as possible. Please let's keep the conversation on how to best
> meet that goal.
>
> The off-chain solutions you enumerate are useful solutions in their
> respective domains, but none of them solves the global consensus problem
> with any greater efficiency than Bitcoin does.
>
>
> On Sat, Jun 27, 2015 at 11:33 AM, Adam Back <adam@cypherspace.org> wrote:
>
>> Michael Naber wrote:
>> > Bitcoin Core must remain the lowest-fee, highest-capacity, most secure,
>> distributed, fastest, overall best solution possible to the global
>> consensus problem.
>>
>> Everyone here is excited about the potential of Bitcoin and would
>> aspirationally like it to reach its full potential as fast as
>> possible. But the block-size is not a free variable, half those
>> parameters you listed are in conflict with each other. We're trying
>> to improve both decentralisation and throughput short-term while
>> people work on algorithmic improvements mid-term. If you are
>> interested you can take a look through the proposals:
>>
>>
>> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008603.html
>>
>> Note that probably 99% of Bitcoin transactions already happen
>> off-chain in exchanges, tipping services, hosted wallets etc. Maybe
>> you're already using them, assuming you are a bitcoin user.
>> They constitute an early stage layer 2, some of them even have on
>> chain netting and scale faster than the block-chain.
>>
>> You can also read about layer 2, the lightning network paper and the
>> duplex micropayment channel paper:
>>
>> http://lightning.network/lightning-network-paper-DRAFT-0.5.pdf
>>
>> http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf
>>
>> and read the development list and look at the code:
>>
>> http://lists.linuxfoundation.org/pipermail/lightning-dev/
>> https://github.com/ElementsProject/lightning
>>
>> Adam
>>
>>
>> On 27 June 2015 at 16:39, Michael Naber <mickeybob@gmail.com> wrote:
>> > Demand to participate in a low-fee global consensus network will likely
>> > continue to rise. Technology already exists to meet that rising demand
>> using
>> > a blockchain with sufficient block size. Whether that blockchain is
>> Bitcoin
>> > Core with an increased block size, or whether it is a fork, market
>> forces
>> > make it almost certain that demand will be met by a blockchain with
>> adequate
>> > capacity. These forces ensure that not only today’s block size will be
>> > increased, but also that future increases will occur should the demand
>> > arise.
>> >
>> > In order to survive, Bitcoin Core must remain the lowest-fee,
>> > highest-capacity, most secure, distributed, fastest, overall best
>> solution
>> > possible to the global consensus problem. Attempting to artificially
>> > constrain the block size below the limits of technology for any reason
>> is a
>> > conflict with this objective and a threat to the survival of Bitcoin
>> Core.
>> > At the same time, scheduling large future increases or permitting
>> unlimited
>> > dynamic scaling of the block size limit raises concerns over
>> availability of
>> > future computing resources. Instead, we should manually increase the
>> block
>> > size limit as demand occurs, except in the special case that increasing
>> the
>> > limit would cause an undue burden upon users wishing to validate the
>> > integrity of the blockchain.
>> >
>> > Compromise: Can we agree that raising the block size to a static 8MB now
>> > with a plan to increase it further should demand necessitate except in
>> the
>> > special case above is a reasonable path forward?
>> >
>> > _______________________________________________
>> > bitcoin-dev mailing list
>> > bitcoin-dev@lists.linuxfoundation.org
>> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>> >
>>
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
[-- Attachment #2: Type: text/html, Size: 6477 bytes --]
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 15:21 ` Peter Todd
2015-06-27 15:29 ` Randi Joseph
@ 2015-06-27 16:19 ` Michael Naber
2015-06-27 17:20 ` Peter Todd
1 sibling, 1 reply; 43+ messages in thread
From: Michael Naber @ 2015-06-27 16:19 UTC (permalink / raw)
To: Peter Todd; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1644 bytes --]
That test seems like a reasonable suggestion; 840GB is not prohibitive
given today's computing costs. What other than the successful result of
that test would you want to see before agreeing to increase the block size
to 8MB?
On Sat, Jun 27, 2015 at 11:21 AM, Peter Todd <pete@petertodd.org> wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
>
>
> On 27 June 2015 10:39:51 GMT-04:00, Michael Naber <mickeybob@gmail.com>
> wrote:
> >Compromise: Can we agree that raising the block size to a static 8MB
> >now
> >with a plan to increase it further should demand necessitate except in
> >the
> >special case above is a reasonable path forward?
>
> It's not a reasonable path forward right now given the lack of testing
> done with 8MB+ blocks, among many other problems. A way to help make that
> appear more reasonable would be to set up an 8MB testnet as I suggested, with
> two years or so of 8MB blocks in history as well as a large UTXO set to
> test performance characteristics.
>
> Of course, that'll be an 840GB download - if that's unreasonable you might
> want to ask why 8MB blocks are reasonable...
> -----BEGIN PGP SIGNATURE-----
>
> iQE9BAEBCAAnIBxQZXRlciBUb2RkIDxwZXRlQHBldGVydG9kZC5vcmc+BQJVjr9n
> AAoJEMCF8hzn9Lnc47AIAIIwu4maaJs4pAKpK00jQnhPNIQ8LPvijD/8vvyugA1z
> OLxlRrn8zs7JPFbxWOAzK2qzT1RksSd0gbXqWm/Saqk9CAG5LBp7Oq0HAVE23XYt
> 6BvyhjyhYaZjDrv+SZvlSjdl5xfpDNPMIXMi7XblKD9hm1GIUSVIYAOinOSVIy0B
> HlKyn/xc4MaO8DuzQcs0vsNMudVQFLMOLjMWz/7iv41NnB/Ujjzv/6845Z1g7Opf
> d5AfxhPHZixshqav/lF7ly7xQwSZZpoJCyFdtzCNG47EQmFYY9e22uy1KVzS7Zeo
> qYPi3KRx5+vFtHHJMDYG5EIMTwI4l/4+lY/Sd0CFWss=
> =0IOS
> -----END PGP SIGNATURE-----
>
>
[-- Attachment #2: Type: text/html, Size: 2186 bytes --]
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 15:33 ` Adam Back
@ 2015-06-27 16:09 ` Michael Naber
2015-06-27 16:28 ` Mark Friedenbach
2015-06-27 16:37 ` Peter Todd
0 siblings, 2 replies; 43+ messages in thread
From: Michael Naber @ 2015-06-27 16:09 UTC (permalink / raw)
To: Adam Back; +Cc: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 3904 bytes --]
The goal of Bitcoin Core is to meet the demand for global consensus as
effectively as possible. Please let's keep the conversation on how to best
meet that goal.
The off-chain solutions you enumerate are useful solutions in their
respective domains, but none of them solves the global consensus problem
with any greater efficiency than Bitcoin does.
On Sat, Jun 27, 2015 at 11:33 AM, Adam Back <adam@cypherspace.org> wrote:
> Michael Naber wrote:
> > Bitcoin Core must remain the lowest-fee, highest-capacity, most secure,
> distributed, fastest, overall best solution possible to the global
> consensus problem.
>
> Everyone here is excited about the potential of Bitcoin and would
> aspirationally like it to reach its full potential as fast as
> possible. But the block-size is not a free variable, half those
> parameters you listed are in conflict with each other. We're trying
> to improve both decentralisation and throughput short-term while
> people work on algorithmic improvements mid-term. If you are
> interested you can take a look through the proposals:
>
>
> http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008603.html
>
> Note that probably 99% of Bitcoin transactions already happen
> off-chain in exchanges, tipping services, hosted wallets etc. Maybe
> you're already using them, assuming you are a bitcoin user.
> They constitute an early stage layer 2, some of them even have on
> chain netting and scale faster than the block-chain.
>
> You can also read about layer 2, the lightning network paper and the
> duplex micropayment channel paper:
>
> http://lightning.network/lightning-network-paper-DRAFT-0.5.pdf
>
> http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf
>
> and read the development list and look at the code:
>
> http://lists.linuxfoundation.org/pipermail/lightning-dev/
> https://github.com/ElementsProject/lightning
>
> Adam
>
>
> On 27 June 2015 at 16:39, Michael Naber <mickeybob@gmail.com> wrote:
> > Demand to participate in a low-fee global consensus network will likely
> > continue to rise. Technology already exists to meet that rising demand
> using
> > a blockchain with sufficient block size. Whether that blockchain is
> Bitcoin
> > Core with an increased block size, or whether it is a fork, market forces
> > make it almost certain that demand will be met by a blockchain with
> adequate
> > capacity. These forces ensure that not only today’s block size will be
> > increased, but also that future increases will occur should the demand
> > arise.
> >
> > In order to survive, Bitcoin Core must remain the lowest-fee,
> > highest-capacity, most secure, distributed, fastest, overall best
> solution
> > possible to the global consensus problem. Attempting to artificially
> > constrain the block size below the limits of technology for any reason
> is a
> > conflict with this objective and a threat to the survival of Bitcoin
> Core.
> > At the same time, scheduling large future increases or permitting
> unlimited
> > dynamic scaling of the block size limit raises concerns over
> availability of
> > future computing resources. Instead, we should manually increase the
> block
> > size limit as demand occurs, except in the special case that increasing
> the
> > limit would cause an undue burden upon users wishing to validate the
> > integrity of the blockchain.
> >
> > Compromise: Can we agree that raising the block size to a static 8MB now
> > with a plan to increase it further should demand necessitate except in
> the
> > special case above is a reasonable path forward?
> >
> > _______________________________________________
> > bitcoin-dev mailing list
> > bitcoin-dev@lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
> >
>
[-- Attachment #2: Type: text/html, Size: 5279 bytes --]
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 14:39 Michael Naber
2015-06-27 15:21 ` Peter Todd
@ 2015-06-27 15:33 ` Adam Back
2015-06-27 16:09 ` Michael Naber
1 sibling, 1 reply; 43+ messages in thread
From: Adam Back @ 2015-06-27 15:33 UTC (permalink / raw)
To: Michael Naber; +Cc: bitcoin-dev
Michael Naber wrote:
> Bitcoin Core must remain the lowest-fee, highest-capacity, most secure, distributed, fastest, overall best solution possible to the global consensus problem.
Everyone here is excited about the potential of Bitcoin and would
aspirationally like it to reach its full potential as fast as
possible. But the block-size is not a free variable; half those
parameters you listed are in conflict with each other. We're trying
to improve both decentralisation and throughput short-term while
people work on algorithmic improvements mid-term. If you are
interested you can take a look through the proposals:
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008603.html
Note that probably 99% of Bitcoin transactions already happen
off-chain in exchanges, tipping services, hosted wallets etc. Maybe
you're already using them, assuming you are a bitcoin user.
They constitute an early stage layer 2, some of them even have on
chain netting and scale faster than the block-chain.
You can also read about layer 2, the lightning network paper and the
duplex micropayment channel paper:
http://lightning.network/lightning-network-paper-DRAFT-0.5.pdf
http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf
and read the development list and look at the code:
http://lists.linuxfoundation.org/pipermail/lightning-dev/
https://github.com/ElementsProject/lightning
Adam
On 27 June 2015 at 16:39, Michael Naber <mickeybob@gmail.com> wrote:
> Demand to participate in a low-fee global consensus network will likely
> continue to rise. Technology already exists to meet that rising demand using
> a blockchain with sufficient block size. Whether that blockchain is Bitcoin
> Core with an increased block size, or whether it is a fork, market forces
> make it almost certain that demand will be met by a blockchain with adequate
> capacity. These forces ensure that not only today’s block size will be
> increased, but also that future increases will occur should the demand
> arise.
>
> In order to survive, Bitcoin Core must remain the lowest-fee,
> highest-capacity, most secure, distributed, fastest, overall best solution
> possible to the global consensus problem. Attempting to artificially
> constrain the block size below the limits of technology for any reason is a
> conflict with this objective and a threat to the survival of Bitcoin Core.
> At the same time, scheduling large future increases or permitting unlimited
> dynamic scaling of the block size limit raises concerns over availability of
> future computing resources. Instead, we should manually increase the block
> size limit as demand occurs, except in the special case that increasing the
> limit would cause an undue burden upon users wishing to validate the
> integrity of the blockchain.
>
> Compromise: Can we agree that raising the block size to a static 8MB now
> with a plan to increase it further should demand necessitate except in the
> special case above is a reasonable path forward?
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 15:29 ` Randi Joseph
@ 2015-06-27 15:32 ` Peter Todd
0 siblings, 0 replies; 43+ messages in thread
From: Peter Todd @ 2015-06-27 15:32 UTC (permalink / raw)
To: Randi Joseph, Michael Naber, bitcoin-dev
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On 27 June 2015 11:29:07 GMT-04:00, Randi Joseph <randi@codehalo.com> wrote:
>I wish you were just as prudent when you were recommending full RBF to
>mining pools.
You know, if doing that is imprudent, then people are using Bitcoin in a recklessly dangerous way.
-----BEGIN PGP SIGNATURE-----
iQE9BAEBCAAnIBxQZXRlciBUb2RkIDxwZXRlQHBldGVydG9kZC5vcmc+BQJVjsHq
AAoJEMCF8hzn9Lnc47AH/1Yzl2xJhikeRG7qUmfwIkeuoXWRo9+T30Qj59ii49WU
Nsnpuo6X98p0qz1j8fUKOiY4PQZ6wZYbBg8mTB/EGM8O99Zr8JCPJW0f8l07aXfk
J4NGcgJGzn0CI/E11e4IJICqiLmZgfDa9I1+dNPskQlMu1QsCtju7GQboMpsMv/x
q+Z0dFh9KJnESn8G7ULcrA9ERqu9bGPWLWYOFPcQW0GZQxTLF19Rv0j4njkWbKu6
h2I2LQJcI2uEINRhP6FWFMqCdnaHcox1vtkpeUdVUT3IrjNqhFsXt+M43yUOZ4LU
UsKEj04dfPTjfGpmz/DwTp694VxWCKFGfS4mxGDaxO0=
=DFeN
-----END PGP SIGNATURE-----
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 15:21 ` Peter Todd
@ 2015-06-27 15:29 ` Randi Joseph
2015-06-27 15:32 ` Peter Todd
2015-06-27 16:19 ` Michael Naber
1 sibling, 1 reply; 43+ messages in thread
From: Randi Joseph @ 2015-06-27 15:29 UTC (permalink / raw)
To: Peter Todd, Michael Naber, bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 474 bytes --]
I wish you were just as prudent when you were recommending full RBF to
mining pools.
On 6/27/15 11:21 AM, Peter Todd wrote:
> It's not a reasonable path forward right now given the lack of testing done with 8MB+ blocks, among many other problems. A way to help make that appear more reasonable would be to set up an 8MB testnet as I suggested, with two years or so of 8MB blocks in history as well as a large UTXO set to test performance characteristics.
--
Randi Joseph
[-- Attachment #2: Type: text/html, Size: 1049 bytes --]
* Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
2015-06-27 14:39 Michael Naber
@ 2015-06-27 15:21 ` Peter Todd
2015-06-27 15:29 ` Randi Joseph
2015-06-27 16:19 ` Michael Naber
2015-06-27 15:33 ` Adam Back
1 sibling, 2 replies; 43+ messages in thread
From: Peter Todd @ 2015-06-27 15:21 UTC (permalink / raw)
To: Michael Naber, bitcoin-dev
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
On 27 June 2015 10:39:51 GMT-04:00, Michael Naber <mickeybob@gmail.com> wrote:
>Compromise: Can we agree that raising the block size to a static 8MB
>now
>with a plan to increase it further should demand necessitate except in
>the
>special case above is a reasonable path forward?
It's not a reasonable path forward right now given the lack of testing done with 8MB+ blocks, among many other problems. A way to help make that appear more reasonable would be to set up an 8MB testnet as I suggested, with two years or so of 8MB blocks in history as well as a large UTXO set to test performance characteristics.
Of course, that'll be an 840GB download - if that's unreasonable you might want to ask why 8MB blocks are reasonable...
-----BEGIN PGP SIGNATURE-----
iQE9BAEBCAAnIBxQZXRlciBUb2RkIDxwZXRlQHBldGVydG9kZC5vcmc+BQJVjr9n
AAoJEMCF8hzn9Lnc47AIAIIwu4maaJs4pAKpK00jQnhPNIQ8LPvijD/8vvyugA1z
OLxlRrn8zs7JPFbxWOAzK2qzT1RksSd0gbXqWm/Saqk9CAG5LBp7Oq0HAVE23XYt
6BvyhjyhYaZjDrv+SZvlSjdl5xfpDNPMIXMi7XblKD9hm1GIUSVIYAOinOSVIy0B
HlKyn/xc4MaO8DuzQcs0vsNMudVQFLMOLjMWz/7iv41NnB/Ujjzv/6845Z1g7Opf
d5AfxhPHZixshqav/lF7ly7xQwSZZpoJCyFdtzCNG47EQmFYY9e22uy1KVzS7Zeo
qYPi3KRx5+vFtHHJMDYG5EIMTwI4l/4+lY/Sd0CFWss=
=0IOS
-----END PGP SIGNATURE-----
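The ~840GB figure above is straightforward arithmetic (my own back-of-the-envelope, assuming consistently full 8MB blocks at one block per ten minutes):

```python
MB_PER_BLOCK = 8
BLOCKS_PER_DAY = 6 * 24      # one block per ~10 minutes
DAYS = 2 * 365               # "two years or so" of history

total_mb = MB_PER_BLOCK * BLOCKS_PER_DAY * DAYS
print(f"{total_mb / 1000:.0f} GB")  # 841 GB, i.e. roughly the 840GB quoted
```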
* [bitcoin-dev] A Proposed Compromise to the Block Size Limit
@ 2015-06-27 14:39 Michael Naber
2015-06-27 15:21 ` Peter Todd
2015-06-27 15:33 ` Adam Back
0 siblings, 2 replies; 43+ messages in thread
From: Michael Naber @ 2015-06-27 14:39 UTC (permalink / raw)
To: bitcoin-dev
[-- Attachment #1: Type: text/plain, Size: 1509 bytes --]
Demand to participate in a low-fee global consensus network will likely
continue to rise. Technology already exists to meet that rising demand
using a blockchain with sufficient block size. Whether that blockchain is
Bitcoin Core with an increased block size, or whether it is a fork, market
forces make it almost certain that demand will be met by a blockchain with
adequate capacity. These forces ensure that not only today’s block size
will be increased, but also that future increases will occur should the
demand arise.
In order to survive, Bitcoin Core must remain the lowest-fee,
highest-capacity, most secure, distributed, fastest, overall best solution
possible to the global consensus problem. Attempting to artificially
constrain the block size below the limits of technology for any reason is a
conflict with this objective and a threat to the survival of Bitcoin Core.
At the same time, scheduling large future increases or permitting unlimited
dynamic scaling of the block size limit raises concerns over availability
of future computing resources. Instead, we should manually increase the
block size limit as demand occurs, except in the special case that
increasing the limit would cause an undue burden upon users wishing to
validate the integrity of the blockchain.
Compromise: Can we agree that raising the block size to a static 8MB now
- with a plan to increase it further should demand necessitate, except
in the special case above - is a reasonable path forward?
[-- Attachment #2: Type: text/html, Size: 1570 bytes --]
end of thread, other threads:[~2015-07-10 2:55 UTC | newest]
Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-28 5:34 [bitcoin-dev] A Proposed Compromise to the Block Size Limit Raystonn
2015-06-28 10:07 ` Adam Back
2015-06-28 10:29 ` Benjamin
2015-06-28 12:37 ` Adam Back
2015-06-28 16:32 ` Raystonn .
2015-06-28 17:12 ` Mark Friedenbach
2015-06-28 17:18 ` Benjamin
2015-06-28 17:29 ` Gavin Andresen
2015-06-28 17:45 ` Mark Friedenbach
2015-06-28 17:51 ` Adam Back
2015-06-28 18:58 ` Adam Back
2015-06-28 21:05 ` Gavin Andresen
2015-06-28 21:23 ` Michael Naber
2015-06-28 22:07 ` Adam Back
2015-06-29 0:59 ` Eric Lombrozo
2015-06-29 1:13 ` Eric Lombrozo
2015-06-29 1:45 ` Andy Schroder
2015-06-30 0:42 ` Tom Harding
2015-07-10 2:55 ` Tom Harding
2015-06-28 17:53 ` Jorge Timón
2015-06-28 19:22 ` Andrew Lapp
2015-06-28 19:40 ` Benjamin
2015-06-28 12:32 ` Milly Bitcoin
-- strict thread matches above, loose matches on Subject: below --
2015-06-27 14:39 Michael Naber
2015-06-27 15:21 ` Peter Todd
2015-06-27 15:29 ` Randi Joseph
2015-06-27 15:32 ` Peter Todd
2015-06-27 16:19 ` Michael Naber
2015-06-27 17:20 ` Peter Todd
2015-06-27 17:26 ` Benjamin
2015-06-27 17:37 ` Peter Todd
2015-06-27 17:46 ` Benjamin
2015-06-27 17:54 ` Peter Todd
2015-06-27 17:58 ` Venzen Khaosan
2015-06-27 19:34 ` Benjamin
2015-06-27 15:33 ` Adam Back
2015-06-27 16:09 ` Michael Naber
2015-06-27 16:28 ` Mark Friedenbach
2015-06-27 16:37 ` Peter Todd
2015-06-27 17:25 ` Michael Naber
2015-06-27 17:34 ` Peter Todd
2015-06-27 18:02 ` Jameson Lopp
2015-06-27 18:47 ` Peter Todd
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox