* [Bitcoin-development] Near-term scalability
@ 2012-06-15 11:29 Mike Hearn
2012-06-15 13:08 ` Matt Corallo
` (3 more replies)
0 siblings, 4 replies; 12+ messages in thread
From: Mike Hearn @ 2012-06-15 11:29 UTC (permalink / raw)
To: Bitcoin Dev
I had to hit the sack last night as it was 2am CET, but I'd like to
sum up the discussion we had on IRC about scalability and SatoshiDice
in particular.
I think we all agreed on the following:
- Having senders/buyers pay no fees is psychologically desirable even
though we all understand that eventually, somebody, somewhere will be
paying fees to use Bitcoin
- In the ideal world Bitcoin would scale perfectly and there would be
no need for there to be some "winners" and some "losers" when it comes
to confirmation time.
There was discussion of some one-off changes to address the current
situation, namely de-ranking transactions that re-use addresses. Gavin
and myself were not keen on this idea, primarily because it just
avoids the real problem and Bitcoin already has a good way to
prioritize transactions via the fees mechanism itself. The real issue
is that SatoshiDice does indeed pay fees and generates a lot of
transactions, pushing more traditional traffic out due to artificial
throttles.
The following set of proposals were discussed:
(1) Change the mining code to group transactions together with their
mempool dependencies and then calculate all fees as a group. A tx with
a fee of 1 BTC that depends on 5 txns with zero fees would result in
all 6 transactions being considered to have a fee of 1BTC and
therefore become prioritized for inclusion. This allows a transition
to "receiver pays" model for fees. There are many advantages. One is
that it actually makes sense ... it's always the receiver who wants
confirmations because it's the receiver that fears double spends.
Senders never do. What's more, whilst Bitcoin is designed to operate
on a zero-trust model in the real world trust often exists and it can
be used to optimize by passing groups of transactions around with
their dependencies, until that group passes a trust boundary and gets
broadcast with a send-to-self tx to add fees. Another advantage is it
simplifies usage for end users who primarily buy rather than sell,
because it avoids the need to guess at fees, one of the most
problematic parts of Bitcoins design now.
The disadvantages are that it can result in extra transactions that
exist only for adding fees, and it requires a more modern payment
protocol than the direct-IP protocol Satoshi designed.
It would help address the current situation by avoiding angry users
who want to buy things, but don't know what fee to set and so their
transactions get stuck.
(2) SatoshiDice should use the same fee algorithms as Bitcoin-Qt to
avoid paying excessive fees and queue-jumping. Guess that's on my
plate.
(3) Scalability improvements seem like a no brainer to everyone, it's
just a case of how complicated they are.
(4) Making the block size limit float is better than picking a new
arbitrary threshold.
On the forums Matt stated that block chain pruning was a no-go because
"it makes bitcoin more centralized". I think we've thrashed this one
out sufficiently well by now that there should be a united opinion on
it. There are technical ways to implement it such that there is no
change of trust requirements. All the other issues (finding archival
nodes, etc) can be again addressed with sufficient programming.
For the case of huge blocks slowing down end user syncing and wasting
their resources, SPV clients like MultiBit and Android Wallet already
exist and will get better with time. If Jeff implements the bloom
filtering p2p commands I'll make bitcoinj use them and that'll knock
out excessive bandwidth usage and parse overheads from end users who
are on these clients. At some point Bitcoin-Qt can have a dual mode,
but who knows when that'll get implemented.
Does that all sound reasonable?
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 11:29 [Bitcoin-development] Near-term scalability Mike Hearn
@ 2012-06-15 13:08 ` Matt Corallo
2012-06-15 13:34 ` Mike Hearn
[not found] ` <CAAS2fgTJ0UH0Gr6gVMNZwOiv41WzZVesyvNCULj8UfCPPGxQrw@mail.gmail.com>
` (2 subsequent siblings)
3 siblings, 1 reply; 12+ messages in thread
From: Matt Corallo @ 2012-06-15 13:08 UTC (permalink / raw)
To: Mike Hearn; +Cc: Bitcoin Dev
On Fri, 2012-06-15 at 13:29 +0200, Mike Hearn wrote:
> I had to hit the sack last night as it was 2am CET, but I'd like to
> sum up the discussion we had on IRC about scalability and SatoshiDice
> in particular.
>
> I think we all agreed on the following:
>
> - Having senders/buyers pay no fees is psychologically desirable even
> though we all understand that eventually, somebody, somewhere will be
> paying fees to use Bitcoin
>
> - In the ideal world Bitcoin would scale perfectly and there would be
> no need for there to be some "winners" and some "losers" when it comes
> to confirmation time.
>
> There was discussion of some one-off changes to address the current
> situation, namely de-ranking transactions that re-use addresses. Gavin
> and myself were not keen on this idea, primarily because it just
> avoids the real problem and Bitcoin already has a good way to
> prioritize transactions via the fees mechanism itself. The real issue
> is that SatoshiDice does indeed pay fees and generates a lot of
> transactions, pushing more traditional traffic out due to artificial
> throttles.
The idea can be more generalized in that there are many cases where the
generator of a transaction doesn't care about confirmation times, and
would really be willing to make their transaction lower priority than
other 0-fee transactions. This enables the first point with lower
confirmation times for a while longer.
As it turns out, we already have an indication that someone is willing
to wait longer for confirmations - rapid reuse of an address.
1) Green Addresses: The whole point of a green address is that you are
trusted based on your address, not necessarily based on confirmations of
your transactions. In this case, you are generally willing to wait a
bit longer for confirmations than the average user depositing coins into
their Mt. Gox account.
2) Donation Addresses: If you are using a publicized donation address,
you probably aren't depending on getting your coins *now* to turn around
and ship a product and, again, you are a bit more willing to tolerate
longer confirmation times.
3) Lazy (or overworked) coders: If, for whatever reason, someone
designing a bitcoin site decides that it is simply easier to make users
pay to a single address for everything, such actions should generally be
discouraged. Such a setup is worse for end-user privacy. Also, such
laziness (or likely just overworked and not having time to fix the
issue) is likely also laziness across the board including ignoring
multisend for payouts. If you discourage such address use forcing site
designers to implement more sane policies, hopefully they will do enough
research to also do multisend. Note that though this is where one
addresses sites like SatoshiDice, its also the one where we are likely
to have the least impact...
One of the ways to implement such deprioritization of rapidly-reused
addresses is to limit the count of address re-uses by default in memory
pool. By limiting relaying of such transactions, you a) give nodes
across the network some small say in the transactions which they have to
deal with relaying outside of blocks, instead of relying on miners to
make decisions which are good for the total network load, but which are
worse for them. b) You allow sites which wish to re-use addresses to do
so initially to keep the time-to-launch the same as it is today, but
force them to re-think their design decisions as they grow to
(hopefully) decrease their impact on the average Bitcoin full-node
operator. Sites which begin to see their transactions rate-limited have
several options:
1) Make a deal with a miner to feed them their list of now-non-relayed
transactions outside of the regular p2p network and have them manually
added to blocks. Id argue that such setups are going to become more
common in the future and such out-of-band transaction relaying should be
encouraged. This also shifts the delay for other transactions from a
constant delay getting into blocks until there is room for additional
0-fee transactions to a spike on each block from the given miner. I
highly prefer this, as you would see usually only one or two block delay
getting your transaction confirmed at the worst case, instead of a very
fuzzy unknown delay that could stretch on for some time.
2) Use rotating addresses. This is likely the simplest to implement,
and I would absolutely think this is what most sites would end up doing.
Though it doesn't result in a decreased load on the transaction-relaying
nodes, it does at least allow for a minor improvement in user privacy.
In the end, it boils down to an optional transaction deprioritization.
>
> The following set of proposals were discussed:
>
> (1) Change the mining code to group transactions together with their
> mempool dependencies and then calculate all fees as a group. A tx with
> a fee of 1 BTC that depends on 5 txns with zero fees would result in
> all 6 transactions being considered to have a fee of 1BTC and
> therefore become prioritized for inclusion. This allows a transition
> to "receiver pays" model for fees. There are many advantages. One is
> that it actually makes sense ... it's always the receiver who wants
> confirmations because it's the receiver that fears double spends.
> Senders never do. What's more, whilst Bitcoin is designed to operate
> on a zero-trust model in the real world trust often exists and it can
> be used to optimize by passing groups of transactions around with
> their dependencies, until that group passes a trust boundary and gets
> broadcast with a send-to-self tx to add fees. Another advantage is it
> simplifies usage for end users who primarily buy rather than sell,
> because it avoids the need to guess at fees, one of the most
> problematic parts of Bitcoins design now.
>
> The disadvantages are that it can result in extra transactions that
> exist only for adding fees, and it requires a more modern payment
> protocol than the direct-IP protocol Satoshi designed.
>
> It would help address the current situation by avoiding angry users
> who want to buy things, but don't know what fee to set and so their
> transactions get stuck.
>
> (2) SatoshiDice should use the same fee algorithms as Bitcoin-Qt to
> avoid paying excessive fees and queue-jumping. Guess that's on my
> plate.
>
> (3) Scalability improvements seem like a no brainer to everyone, it's
> just a case of how complicated they are.
I think all of the above are largely no brianers to everyone.
>
> (4) Making the block size limit float is better than picking a new
> arbitrary threshold.
Definitely something that is very appealing as we need to scale up.
>
> On the forums Matt stated that block chain pruning was a no-go because
> "it makes bitcoin more centralized". I think we've thrashed this one
> out sufficiently well by now that there should be a united opinion on
> it. There are technical ways to implement it such that there is no
> change of trust requirements. All the other issues (finding archival
> nodes, etc) can be again addressed with sufficient programming.
My point was that the easiest way to do it would be to ship a pruned
snapshot with Bitcoin, and such a system, while verifiable, would
increase Bitocin's centralization. Though it is quite possible to prune
the chain while downloading at checkpoints or when blocks are N deep, it
complicates the initial download if no one has the chain to begin with.
Another point I made was that by doing chain pruning by default, we may
see a decrease in non-fClient nodes (for compatibility, I would assume
pruned nodes have to set fClient) which is what old clients look for to
connect to, possibly complicating using Bitcoin for clients that either
wish to run a full IBD or older clients which need a non-fClient node
before they are happy (which could be an issue when you look at the very
poor "upgrade-apathy" in the Bitcoin community with people running
long-outdated versions because they don't feel like upgrading).
All that said, I do believe pruning will eventually have to come to
encourage p2pool and other getmemorypool-based pool mining, but
(obviously) its something that needs careful consideration in its
overall effects across the network before its applied.
>
> For the case of huge blocks slowing down end user syncing and wasting
> their resources, SPV clients like MultiBit and Android Wallet already
> exist and will get better with time. If Jeff implements the bloom
> filtering p2p commands I'll make bitcoinj use them and that'll knock
> out excessive bandwidth usage and parse overheads from end users who
> are on these clients. At some point Bitcoin-Qt can have a dual mode,
> but who knows when that'll get implemented.
>
> Does that all sound reasonable?
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 13:08 ` Matt Corallo
@ 2012-06-15 13:34 ` Mike Hearn
2012-06-15 16:18 ` Matt Corallo
0 siblings, 1 reply; 12+ messages in thread
From: Mike Hearn @ 2012-06-15 13:34 UTC (permalink / raw)
To: Matt Corallo; +Cc: Bitcoin Dev
> The idea can be more generalized in that there are many cases where the
> generator of a transaction doesn't care about confirmation times, and
> would really be willing to make their transaction lower priority than
> other 0-fee transactions.
Just to be clear, I think this solution is a hack and don't support it
because it's yet another change of network rules. Some random people
will get whacked because of a heuristic "rule of thumb".
If it's implemented, SD could/would switch to fresh addresses and
nothing would have been achieved except making an already complex
system more complex.
I disagree with the notion that you need "less important than free".
If you care about the confirmation time of a transaction that was sent
to you and you need space in a limited resource, you can pay for it.
It's an auction like any other. Besides, the idea that transactions
are free today is just a psychological trick befitting governments but
not us - transactions are funded by aggressive hyperinflation. I would
never describe Bitcoin as a free system and I suggest nobody else does
either.
If grouped fee calculations are implemented, we can keep the nice
property that the person who cares about double spending risk pays the
fees, and if you assume most transactions are hub-and-spoke from
buyers to merchants, rather than a pure p2p graph, in practice it'll
work out to seeming free most of the time even if seen globally it
doesn't make much difference.
> My point was that the easiest way to do it would be to ship a pruned
> snapshot with Bitcoin, and such a system, while verifiable, would
> increase Bitocin's centralization.
I'm not sure why. If you want to audit everything from scratch, after
checking the code you could just blow away the included files and then
"-connect=archive.bitcoin.org" or something like that. After
rebuilding the chain from scratch, check the databases for consistency
with the included data.
It reduces the number of nodes with full copies of the block chain,
yes, but as long as there's at least one copy of the old data in an
accessible location new nodes can still bootstrap just fine.
I'm sure we can find organizations willing to host full chains for
people who want to rebuild their databases from scratch, given how
cheap disk space is.
> connect to, possibly complicating using Bitcoin for clients that either
> wish to run a full IBD or older clients which need a non-fClient node
Yes, but old nodes probably have a copy of the chain already, so it
wouldn't affect them. New blocks would still be fully distributed,
right?
The only case where it'd cause issues is if you install a fresh copy
of a very old node. Not a common occurrence, and those nodes will have
to wait until they find an archival node announcing itself. Those
nodes could be made to announce more frequently than normal, if need
be.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 13:34 ` Mike Hearn
@ 2012-06-15 16:18 ` Matt Corallo
0 siblings, 0 replies; 12+ messages in thread
From: Matt Corallo @ 2012-06-15 16:18 UTC (permalink / raw)
To: Mike Hearn; +Cc: Bitcoin Dev
On Fri, 2012-06-15 at 15:34 +0200, Mike Hearn wrote:
> > The idea can be more generalized in that there are many cases where the
> > generator of a transaction doesn't care about confirmation times, and
> > would really be willing to make their transaction lower priority than
> > other 0-fee transactions.
>
> Just to be clear, I think this solution is a hack and don't support it
> because it's yet another change of network rules. Some random people
> will get whacked because of a heuristic "rule of thumb".
Its arguably not a change to network rules as its something that users
can already do today by patching their clients. Obviously any
implementation would have sane defaults which allowed for a significant
number of transactions to/from a given address at a time, avoiding
whacking random people unless they are large enough that they should
really already be fully aware of how bitcoin works.
>
> If it's implemented, SD could/would switch to fresh addresses and
> nothing would have been achieved except making an already complex
> system more complex.
I would think SD would switch to using fresh addresses for each bet.
But even that is a good thing, at least where user privacy is concerned.
However, I would hope that SD would see the rule tweak and, in order to
avoid having to generate a number of new addresses per second (or, if
they went the pool route, having a huge pool of many thousands of
addresses), they would consider implementing sendmulti support.
>
> I disagree with the notion that you need "less important than free".
> If you care about the confirmation time of a transaction that was sent
> to you and you need space in a limited resource, you can pay for it.
> It's an auction like any other. Besides, the idea that transactions
> are free today is just a psychological trick befitting governments but
> not us - transactions are funded by aggressive hyperinflation. I would
> never describe Bitcoin as a free system and I suggest nobody else does
> either.
I agree, free transactions isnt something we should aggressively push as
a feature of Bitcoin, its simply not. However, in the current system
free transactions are usually confirmed within a small number of blocks,
and for a number of users, that is an important feature that draws them
to get through the initial hurdles of converting money to Bitcoin and
understanding enough of the system to trust it. I believe that if we
can incentive large transaction creators to avoid delaying free
transactions, we should and giving them the option to delay their own
transactions seems like a perfectly reasonable way to do so. Even if
you drop all the per-address limit stuff, allowing transaction creators
to add a simple flag to transactions seems reasonable when they want to
encourage Bitcoin to continue to grow as it does today. Obviously
keeping free transactions confirming won't be possible forever, but
hopefully that will be as a result of natural growth which can encourage
further growth without the need for free transactions and not as a
result of a few actors in the community creating a transaction volume
significantly greater than their user-base.
>
> If grouped fee calculations are implemented, we can keep the nice
> property that the person who cares about double spending risk pays the
> fees, and if you assume most transactions are hub-and-spoke from
> buyers to merchants, rather than a pure p2p graph, in practice it'll
> work out to seeming free most of the time even if seen globally it
> doesn't make much difference.
ACK, thats an important thing to implement IMO, but I really dont see it
as something that replaces the option to deprioritize your own
transactions to below 0-fee transactions. It could even allow users who
receive payouts which are below 0-fee transactions to place a fee on the
subsequent transactions to allow the payouts to confirm quicker (if done
right).
>
> > My point was that the easiest way to do it would be to ship a pruned
> > snapshot with Bitcoin, and such a system, while verifiable, would
> > increase Bitocin's centralization.
>
> I'm not sure why. If you want to audit everything from scratch, after
> checking the code you could just blow away the included files and then
> "-connect=archive.bitcoin.org" or something like that. After
> rebuilding the chain from scratch, check the databases for consistency
> with the included data.
I would be surprised if more than a handful of devs audit such a thing.
And I would say that does define an increase in centralization.
>
> It reduces the number of nodes with full copies of the block chain,
> yes, but as long as there's at least one copy of the old data in an
> accessible location new nodes can still bootstrap just fine.
Sadly, old nodes do not know where to look for such data, and I'm fairly
certain people running old nodes don't read the forums enough to catch
when it is announced that old nodes should make sure to
-connect=archive.bitcoin.org in order to avoid initially having horrible
initial bootstrap times and eventually not being able to connect to
full-chain-serving nodes at all.
>
> I'm sure we can find organizations willing to host full chains for
> people who want to rebuild their databases from scratch, given how
> cheap disk space is.
Sadly, disk space isnt the issue. Each connection to bitcoind (not that
it cant be fixed, but currently) eats a nice chunk of memory. An
organization that wants to provide nodes for old nodes to connect to
would need to have a significant number of open incoming connection
slots, have plenty of bandwidth for nodes that are in IBD and have
plenty of memory and CPU to manage all the connections.
>
> > connect to, possibly complicating using Bitcoin for clients that either
> > wish to run a full IBD or older clients which need a non-fClient node
>
> Yes, but old nodes probably have a copy of the chain already, so it
> wouldn't affect them. New blocks would still be fully distributed,
> right?
Sadly, BDB's infamous database corrupted messages appear all too often,
and the usual response is "delete the chain and resync." I have a hard
time believing that old nodes will rarely be in IBD.
>
> The only case where it'd cause issues is if you install a fresh copy
> of a very old node. Not a common occurrence, and those nodes will have
> to wait until they find an archival node announcing itself. Those
> nodes could be made to announce more frequently than normal, if need
> be.
I agree that its very possible to have archival nodes available and to
make it work, but I have yet to see anyone doing any work to actually
get commitments to run archival nodes and I have yet to see any
discussion of what, exactly, that would entail.
Matt
^ permalink raw reply [flat|nested] 12+ messages in thread
* [Bitcoin-development] Near-term scalability
[not found] ` <CAAS2fgTJ0UH0Gr6gVMNZwOiv41WzZVesyvNCULj8UfCPPGxQrw@mail.gmail.com>
@ 2012-06-15 16:53 ` Gregory Maxwell
0 siblings, 0 replies; 12+ messages in thread
From: Gregory Maxwell @ 2012-06-15 16:53 UTC (permalink / raw)
To: Bitcoin Development
[I originally sent an earlier version of this message to Mike off
list, but I figure it's worth adding to the public discussion]
On Fri, Jun 15, 2012 at 7:29 AM, Mike Hearn <mike@plan99.net> wrote:
> (4) Making the block size limit float is better than picking a new
> arbitrary threshold.
> On the forums Matt stated that block chain pruning was a no-go because
> "it makes bitcoin more centralized". I think we've thrashed this one
> out sufficiently well by now that there should be a united opinion on
> it.
By itself letting the size float has non-trivial existential risk. A
Bitcoin with expensive transactions due to competition for space in
blocks can be front-ended with fast payment systems and still provide
the promised decentralized currency. Bitcoin with a very large
blockchain and blocks does not. It would do the bitcoin users no good
to increase the transaction volume while concurrently making Bitcoin
more or less pointless over the alternatives.
Scalability must be improved, we can unite on that opinion. But
scalability can't come at the expense of what made Bitcoin worth
having in the first place.
Fortunately it appear to be possible to greatly increase the
scalability without compromising on keeping the costs of operating a
fully validating node very low, for example Pieter's experimentation
with txout+txid indexing (for the 'flip the chain' proposals)
indicates that the data required right now to validate further
transactions is only about 85MiB— and that would be somewhat smaller
with compression and with clients which intentionally try to reduce
the set of unspent transactions. Commitments to these indexes in the
chain would allow almost-full validating nodes with fairly limited
resources. (Almost-full meaning they would not validate the history
long before they started, they'd trusted header difficulty for that. They
could still mine and otherwise act as full nodes).
Achieving scalability improvements without breaking the radical
decentralization will be a lot harder than just improving scalability
but it's effort that is justified if the scalability is actually
needed.
How much decentralization is needed in the end? That isn't clear— "As
much as possible" should generally be the goal. Modern currencies
aren't controlled by single parties but by tens of thousands of
parties locked in economic, legal, and political compromise that
limits their control. In Bitcoin the traditional controls that keep
parties honest are non-existent and if they were just directly applied
we'd potentially lose the properties that make Bitcoin distinct and
useful (e.g. make all miners mine only with FED permission and you
just have a really bandwidth inefficient interface to the dollar).
Instead we have aggressive decentralization and autonomous rule
enforcement.
Mike pointed out that "Before he left Satoshi made a comment saying
he used to think Bitcoin would need millions of nodes if it became
really popular, but in the end he thought it could do fine with just
tens of thousands" I'm not so sure— and I think the truth is in
between. Tens of thousands of nodes— run by a self-selecting bunch of
people who reap the greatest rewards from controlling the validation
of Bitcoin, who by that criteria necessarily have a lot in common with
each other and perhaps not with the regular users— could easily be an
outcome where control is _less_ publicly vested than popular
government controlled currencies. We probably don't need the raw
numbers of nodes, but we need a distribution of ownership and a
distribution of interest (e.g. not a system by bankers for bankers) of
those nodes which I think can only be achieved by making them cheap to
operate and having a lot more than we actually need. — though not so
much that it has to run on every laptop.
The core challenge is that the only obvious ways to justify the cost
of maintaining expensive validation infrastructure is because you
intend to manipulate the currency using it or because you intend to
prevent other people from manipulating the currency. The latter
motivation is potentially subject to a tragedy of the commons— you
don't need to run a full validating node as long as 'enough' other
people do, and enough is a nice slippery slope to zero. Right now
just the random computers I— some random geek— had at home prior to
Bitcoin could store over a hundred years of max size blocks and
process the maximum rate of transactions. With the costs so low
there isn't any real question about a consolidation of validation
making Bitcoin pointless. You could probably increase the scale 10x
without breaking that analysis but beyond that unless the
cost-per-scale goes down a highly consolidated future seems likely.
40 years from now why would people use Bitcoin over centralized
private banknotes like paypal or democratic government controlled
currencies?
Perhaps Bitcoin transaction could transition to being more of the
same— controlled by a consortium of banks, exchanging gigabyte blocks
over terabit ethernet, but I think that would be sad. An alternative
which was autonomous and decentralized even if the transactions were
somewhat slow or costly would be excellent competition for everything
else, and it's something I think man kind ought to have.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 11:29 [Bitcoin-development] Near-term scalability Mike Hearn
2012-06-15 13:08 ` Matt Corallo
[not found] ` <CAAS2fgTJ0UH0Gr6gVMNZwOiv41WzZVesyvNCULj8UfCPPGxQrw@mail.gmail.com>
@ 2012-06-15 16:56 ` Stefan Thomas
2012-06-15 17:37 ` Mike Koss
2012-06-15 20:56 ` Gavin Andresen
3 siblings, 1 reply; 12+ messages in thread
From: Stefan Thomas @ 2012-06-15 16:56 UTC (permalink / raw)
To: bitcoin-development
Thanks Mike for the writeup - I'm very sad to have missed the discussion
on IRC since fee economics are probably my favorite topic, but I'll try
to contribute to the email discussion instead.
> (4) Making the block size limit float is better than picking a new
> arbitrary threshold.
Fees are a product of both real and artificial limits to transaction
validation.
The artificial limits like the block size limit are essentially putting
a floor on prices by limiting supply beyond what it would otherwise be.
E.g. the network could confirm more transactions theoretically, but the
block size limit prevents it.
The real limits are the bandwidth, computing and memory resources of
participating nodes. For the sake of argument suppose a 1 TB block was
released into the network right now and we'll also assume there was no
block size limit of any kind. Many nodes would likely not be able to
successfully download this block in under 10-30 minutes, so there is a
very good chance that other miners will have generated two blocks before
this block makes its way to them.
What does this mean? The miner generating a 1 TB block knows this would
happen. So in terms of economic self interest he will generate the
largest possible block that he is still confident that other miners will
accept and process. A miner who receives a block will also consider
whether to build on it based on whether they think other miners will be
able to download it. In other words, if I receive a large block I may
decide not to mine on it, because I believe that the majority of mining
power will not mine on it - because it is either too large for them to
download or because their rules against large blocks reject it.
It's important to understand that in practice economic actors tend to
plan ahead. In other words, if there is no block size limit that doesn't
mean that there will be constant forks and total chaos. Rather, no miner
will ever want to have a block rejected due to size, there is plenty of
incentive to be conservative with your limits. Even if there are forks,
this simply means that miners have decided that they can make more money
by including more transactions at the cost of the occasional dud.
Therefore, from an economic perspective, we do not need a global block
size limit of any kind. As "guardians of the network" the only thing we
need to do is to let miners figure out what they wanna do.
HOWEVER, the existing economic incentives won't manifest unless somebody
translates them into code. We have to give our users (miners & endusers)
the tools to create a genuine fee-based verification market.
On the miner side: I would make the block size limit configurable with a
relatively high default. If the default is too low few people will
bother changing it, which means that it is not worth changing (because a
majority uses the default anyway), which means even fewer people will
change it and so on.
The block size limit should also be a soft rather than a hard limit -
here are some ideas for this:
- The default limit for accepting blocks from others should always be
significantly greater than the default limit for blocks that the client
itself will generate.
- There should be different size limits for side chains that are longer
than the currently active chain. In other words, I might reject a block
for being slightly too large, but if everyone else accepts it I should
eventually accept it too, and my client should also consider
automatically raising my size limit if this happens a lot.
The rationale for the soft limit is to allow for gradual upward
adjustment. It needs to be risky for individual miners to raise the size
of their blocks to new heights, but ideally there won't be one solid
wall for them to run into.
On the user side: I would display the fee on the Send Coins dialog and
allow users to choose a different fee per transaction. We also talked
about adding some UI feedback where the client tries to estimate how
long a transaction will take to confirm given a certain fee, based on
recent information about what it observed from the network. If the fee
can be changed on the Send Coins tab, then this could be a red, yellow,
green visual indication whether the fee is sufficient, adequate or
dangerously low.
A criticism one might raise is: "The block size limit is not to protect
miners, but to protect end users who may have less resources than miners
and can't download gigantic block chains." - That's a viewpoint that is
certainly valid. I believe that we will be able to do a lot just with
efficiency improvements, pruning, compression and whatnot. But when it
comes down to it, I'd prefer a large network with cheap
microtransactions even if that means that consumer hardware can't
operate as a standalone validating node anymore. Headers-only mode is
already a much-requested feature anyway and there are many ways of
improving the security of various header-only or lightweight protocols.
(I just saw Greg's message advocating the opposite viewpoint, I'll
respond to that as soon as I can.)
> (1) Change the mining code to group transactions together with their
> mempool dependencies and then calculate all fees as a group.
+1 Very good change. This would allow miners to maximize their revenue
and in doing so better represent the existing priorities that users
express through fees.
> There was discussion of some one-off changes to address the current
> situation, namely de-ranking transactions that re-use addresses.
Discouraging address reuse will not change the amount of transactions, I
think we all agree on that. As for whether it improves the
prioritization, I'm not sure. Use cases that we seek to discourage may
simply switch to random addresses and I don't agree in and of itself
this is a benefit (see item 4 below). Here are a few reasons one might
be against this proposal:
1) Certain use cases like green addresses will be forced to become more
complicated than they would otherwise need to be.
2) It will be harder to read information straight out of the block
chain, for example right now we can pretty easily see how much volume is
caused by Satoshi Dice, perhaps allowing us to make better decisions.
3) The address index that is used by block explorers and lightweight
client servers will grow unnecessarily (an address -> tx index will be
larger if the number of unique addresses increases given the same number
of txs), so for people like myself who work on that type of software
you're actually making our scalability equation slightly worse.
4) You're forcing people into privacy best practices which you think are
good, but others may not subscribe to. For example I have absolutely
zero interest in privacy, anyone who cares that I buy Bitcoins with my
salary and spend them on paragliding is welcome to know about it.
Frankly, if I cared about privacy I wouldn't be using Bitcoin. If other
people want to use mixing services and randomize their addresses and
communicate through Tor that's fine, but the client shouldn't force me
to do those things if I don't want to by "deprioritizing" my transactions.
5) We may not like firstbits, but the fact remains that for now they are
extremely popular, because they improve the user experience where we
failed to do so. If you deprioritize transactions to reused addresses
you'll for example deprioritize all/most of Girls Gone Bitcoin, which
(again, like it or not) is one of the few practical, sustainable niches
that Bitcoin has managed to carve out for itself so far.
> Having senders/buyers pay no fees is psychologically desirable even
> though we all understand that eventually, somebody, somewhere will be
> paying fees to use Bitcoin
Free is just an extreme form of cheap, so if we can make transactions
very cheap (through efficiency and very large blocks) then it will be
easier for charitable miners to include free transactions. In practice,
my prediction is that free transactions on the open network will simply
not be possible in the long run. Dirty hacks aside there is simply no
way of distinguishing a spam transaction from a charity-worthy
transaction. So the way I envision free transactions in the future is
that there may be miners in partnership with wallet providers like
BlockChain.info that let you submit feeless transactions straight to
them based on maybe a captcha or some ads. (For the purist, the captcha
challenge and response could be communicated across the bitcoin network,
but I think we agree that such things should ideally take place
out-of-band.)
That way, the available charity of miners who wish to include feeless
transactions would go to human users as opposed to the potentially
infinite demand of auto-generated feeless transactions.
On 6/15/2012 1:29 PM, Mike Hearn wrote:
> I had to hit the sack last night as it was 2am CET, but I'd like to
> sum up the discussion we had on IRC about scalability and SatoshiDice
> in particular.
>
> I think we all agreed on the following:
>
> - Having senders/buyers pay no fees is psychologically desirable even
> though we all understand that eventually, somebody, somewhere will be
> paying fees to use Bitcoin
>
> - In the ideal world Bitcoin would scale perfectly and there would be
> no need for there to be some "winners" and some "losers" when it comes
> to confirmation time.
>
> There was discussion of some one-off changes to address the current
> situation, namely de-ranking transactions that re-use addresses. Gavin
> and myself were not keen on this idea, primarily because it just
> avoids the real problem and Bitcoin already has a good way to
> prioritize transactions via the fees mechanism itself. The real issue
> is that SatoshiDice does indeed pay fees and generates a lot of
> transactions, pushing more traditional traffic out due to artificial
> throttles.
>
> The following set of proposals were discussed:
>
> (1) Change the mining code to group transactions together with their
> mempool dependencies and then calculate all fees as a group. A tx with
> a fee of 1 BTC that depends on 5 txns with zero fees would result in
> all 6 transactions being considered to have a fee of 1BTC and
> therefore become prioritized for inclusion. This allows a transition
> to "receiver pays" model for fees. There are many advantages. One is
> that it actually makes sense ... it's always the receiver who wants
> confirmations because it's the receiver that fears double spends.
> Senders never do. What's more, whilst Bitcoin is designed to operate
> on a zero-trust model in the real world trust often exists and it can
> be used to optimize by passing groups of transactions around with
> their dependencies, until that group passes a trust boundary and gets
> broadcast with a send-to-self tx to add fees. Another advantage is it
> simplifies usage for end users who primarily buy rather than sell,
> because it avoids the need to guess at fees, one of the most
> problematic parts of Bitcoins design now.
>
> The disadvantages are that it can result in extra transactions that
> exist only for adding fees, and it requires a more modern payment
> protocol than the direct-IP protocol Satoshi designed.
>
> It would help address the current situation by avoiding angry users
> who want to buy things, but don't know what fee to set and so their
> transactions get stuck.
>
> (2) SatoshiDice should use the same fee algorithms as Bitcoin-Qt to
> avoid paying excessive fees and queue-jumping. Guess that's on my
> plate.
>
> (3) Scalability improvements seem like a no brainer to everyone, it's
> just a case of how complicated they are.
>
> (4) Making the block size limit float is better than picking a new
> arbitrary threshold.
>
> On the forums Matt stated that block chain pruning was a no-go because
> "it makes bitcoin more centralized". I think we've thrashed this one
> out sufficiently well by now that there should be a united opinion on
> it. There are technical ways to implement it such that there is no
> change of trust requirements. All the other issues (finding archival
> nodes, etc) can be again addressed with sufficient programming.
>
> For the case of huge blocks slowing down end user syncing and wasting
> their resources, SPV clients like MultiBit and Android Wallet already
> exist and will get better with time. If Jeff implements the bloom
> filtering p2p commands I'll make bitcoinj use them and that'll knock
> out excessive bandwidth usage and parse overheads from end users who
> are on these clients. At some point Bitcoin-Qt can have a dual mode,
> but who knows when that'll get implemented.
>
> Does that all sound reasonable?
>
> ------------------------------------------------------------------------------
> Live Security Virtual Conference
> Exclusive live event will cover all the ways today's security and
> threat landscape has changed and how IT managers can respond. Discussions
> will include endpoint security, mobile security and the latest in malware
> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
> _______________________________________________
> Bitcoin-development mailing list
> Bitcoin-development@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bitcoin-development
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 16:56 ` Stefan Thomas
@ 2012-06-15 17:37 ` Mike Koss
2012-06-15 18:38 ` Amir Taaki
0 siblings, 1 reply; 12+ messages in thread
From: Mike Koss @ 2012-06-15 17:37 UTC (permalink / raw)
To: Stefan Thomas; +Cc: bitcoin-development
[-- Attachment #1: Type: text/plain, Size: 15169 bytes --]
Grouping mempool transactions based on fees of the group seems
an unnecessary complexity; it makes it harder to predict if an isolated
transaction has enough "juice" to be included in the next Block.
Given your point about economic actors adapting to conditions, would it not
be simpler to use a individual "fee per byte" priority algorithm and let
transaction generators distribute their fees accordingly (and more
predictably)?
This simpler algorithm will prune arbitrary transactions sub-optimally, but
has the benefit of being more understandable and predictable from the point
of view of transaction generators.
On Fri, Jun 15, 2012 at 9:56 AM, Stefan Thomas <moon@justmoon.de> wrote:
> Thanks Mike for the writeup - I'm very sad to have missed the discussion
> on IRC since fee economics are probably my favorite topic, but I'll try
> to contribute to the email discussion instead.
>
> > (4) Making the block size limit float is better than picking a new
> > arbitrary threshold.
>
> Fees are a product of both real and artificial limits to transaction
> validation.
>
> The artificial limits like the block size limit are essentially putting
> a floor on prices by limiting supply beyond what it would otherwise be.
> E.g. the network could confirm more transactions theoretically, but the
> block size limit prevents it.
>
> The real limits are the bandwidth, computing and memory resources of
> participating nodes. For the sake of argument suppose a 1 TB block was
> released into the network right now and we'll also assume there was no
> block size limit of any kind. Many nodes would likely not be able to
> successfully download this block in under 10-30 minutes, so there is a
> very good chance that other miners will have generated two blocks before
> this block makes its way to them.
>
> What does this mean? The miner generating a 1 TB block knows this would
> happen. So in terms of economic self interest he will generate the
> largest possible block that he is still confident that other miners will
> accept and process. A miner who receives a block will also consider
> whether to build on it based on whether they think other miners will be
> able to download it. In other words, if I receive a large block I may
> decide not to mine on it, because I believe that the majority of mining
> power will not mine on it - because it is either too large for them to
> download or because their rules against large blocks reject it.
>
> It's important to understand that in practice economic actors tend to
> plan ahead. In other words, if there is no block size limit that doesn't
> mean that there will be constant forks and total chaos. Rather, no miner
> will ever want to have a block rejected due to size, there is plenty of
> incentive to be conservative with your limits. Even if there are forks,
> this simply means that miners have decided that they can make more money
> by including more transactions at the cost of the occasional dud.
>
> Therefore, from an economic perspective, we do not need a global block
> size limit of any kind. As "guardians of the network" the only thing we
> need to do is to let miners figure out what they wanna do.
>
> HOWEVER, the existing economic incentives won't manifest unless somebody
> translates them into code. We have to give our users (miners & endusers)
> the tools to create a genuine fee-based verification market.
>
> On the miner side: I would make the block size limit configurable with a
> relatively high default. If the default is too low few people will
> bother changing it, which means that it is not worth changing (because a
> majority uses the default anyway), which means even fewer people will
> change it and so on.
>
> The block size limit should also be a soft rather than a hard limit -
> here are some ideas for this:
>
> - The default limit for accepting blocks from others should always be
> significantly greater than the default limit for blocks that the client
> itself will generate.
>
> - There should be different size limits for side chains that are longer
> than the currently active chain. In other words, I might reject a block
> for being slightly too large, but if everyone else accepts it I should
> eventually accept it too, and my client should also consider
> automatically raising my size limit if this happens a lot.
>
> The rationale for the soft limit is to allow for gradual upward
> adjustment. It needs to be risky for individual miners to raise the size
> of their blocks to new heights, but ideally there won't be one solid
> wall for them to run into.
>
> On the user side: I would display the fee on the Send Coins dialog and
> allow users to choose a different fee per transaction. We also talked
> about adding some UI feedback where the client tries to estimate how
> long a transaction will take to confirm given a certain fee, based on
> recent information about what it observed from the network. If the fee
> can be changed on the Send Coins tab, then this could be a red, yellow,
> green visual indication whether the fee is sufficient, adequate or
> dangerously low.
>
> A criticism one might raise is: "The block size limit is not to protect
> miners, but to protect end users who may have less resources than miners
> and can't download gigantic block chains." - That's a viewpoint that is
> certainly valid. I believe that we will be able to do a lot just with
> efficiency improvements, pruning, compression and whatnot. But when it
> comes down to it, I'd prefer a large network with cheap
> microtransactions even if that means that consumer hardware can't
> operate as a standalone validating node anymore. Headers-only mode is
> already a much-requested feature anyway and there are many ways of
> improving the security of various header-only or lightweight protocols.
>
> (I just saw Greg's message advocating the opposite viewpoint, I'll
> respond to that as soon as I can.)
>
>
> > (1) Change the mining code to group transactions together with their
> > mempool dependencies and then calculate all fees as a group.
>
> +1 Very good change. This would allow miners to maximize their revenue
> and in doing so better represent the existing priorities that users
> express through fees.
>
>
> > There was discussion of some one-off changes to address the current
> > situation, namely de-ranking transactions that re-use addresses.
>
> Discouraging address reuse will not change the amount of transactions, I
> think we all agree on that. As for whether it improves the
> prioritization, I'm not sure. Use cases that we seek to discourage may
> simply switch to random addresses and I don't agree in and of itself
> this is a benefit (see item 4 below). Here are a few reasons one might
> be against this proposal:
>
> 1) Certain use cases like green addresses will be forced to become more
> complicated than they would otherwise need to be.
>
> 2) It will be harder to read information straight out of the block
> chain, for example right now we can pretty easily see how much volume is
> caused by Satoshi Dice, perhaps allowing us to make better decisions.
>
> 3) The address index that is used by block explorers and lightweight
> client servers will grow unnecessarily (an address -> tx index will be
> larger if the number of unique addresses increases given the same number
> of txs), so for people like myself who work on that type of software
> you're actually making our scalability equation slightly worse.
>
> 4) You're forcing people into privacy best practices which you think are
> good, but others may not subscribe to. For example I have absolutely
> zero interest in privacy, anyone who cares that I buy Bitcoins with my
> salary and spend them on paragliding is welcome to know about it.
> Frankly, if I cared about privacy I wouldn't be using Bitcoin. If other
> people want to use mixing services and randomize their addresses and
> communicate through Tor that's fine, but the client shouldn't force me
> to do those things if I don't want to by "deprioritizing" my transactions.
>
> 5) We may not like firstbits, but the fact remains that for now they are
> extremely popular, because they improve the user experience where we
> failed to do so. If you deprioritize transactions to reused addresses
> you'll for example deprioritize all/most of Girls Gone Bitcoin, which
> (again, like it or not) is one of the few practical, sustainable niches
> that Bitcoin has managed to carve out for itself so far.
>
>
> > Having senders/buyers pay no fees is psychologically desirable even
> > though we all understand that eventually, somebody, somewhere will be
> > paying fees to use Bitcoin
>
> Free is just an extreme form of cheap, so if we can make transactions
> very cheap (through efficiency and very large blocks) then it will be
> easier for charitable miners to include free transactions. In practice,
> my prediction is that free transactions on the open network will simply
> not be possible in the long run. Dirty hacks aside there is simply no
> way of distinguishing a spam transaction from a charity-worthy
> transaction. So the way I envision free transactions in the future is
> that there may be miners in partnership with wallet providers like
> BlockChain.info that let you submit feeless transactions straight to
> them based on maybe a captcha or some ads. (For the purist, the captcha
> challenge and response could be communicated across the bitcoin network,
> but I think we agree that such things should ideally take place
> out-of-band.)
>
> That way, the available charity of miners who wish to include feeless
> transactions would go to human users as opposed to the potentially
> infinite demand of auto-generated feeless transactions.
>
>
>
> On 6/15/2012 1:29 PM, Mike Hearn wrote:
> > I had to hit the sack last night as it was 2am CET, but I'd like to
> > sum up the discussion we had on IRC about scalability and SatoshiDice
> > in particular.
> >
> > I think we all agreed on the following:
> >
> > - Having senders/buyers pay no fees is psychologically desirable even
> > though we all understand that eventually, somebody, somewhere will be
> > paying fees to use Bitcoin
> >
> > - In the ideal world Bitcoin would scale perfectly and there would be
> > no need for there to be some "winners" and some "losers" when it comes
> > to confirmation time.
> >
> > There was discussion of some one-off changes to address the current
> > situation, namely de-ranking transactions that re-use addresses. Gavin
> > and myself were not keen on this idea, primarily because it just
> > avoids the real problem and Bitcoin already has a good way to
> > prioritize transactions via the fees mechanism itself. The real issue
> > is that SatoshiDice does indeed pay fees and generates a lot of
> > transactions, pushing more traditional traffic out due to artificial
> > throttles.
> >
> > The following set of proposals were discussed:
> >
> > (1) Change the mining code to group transactions together with their
> > mempool dependencies and then calculate all fees as a group. A tx with
> > a fee of 1 BTC that depends on 5 txns with zero fees would result in
> > all 6 transactions being considered to have a fee of 1BTC and
> > therefore become prioritized for inclusion. This allows a transition
> > to "receiver pays" model for fees. There are many advantages. One is
> > that it actually makes sense ... it's always the receiver who wants
> > confirmations because it's the receiver that fears double spends.
> > Senders never do. What's more, whilst Bitcoin is designed to operate
> > on a zero-trust model in the real world trust often exists and it can
> > be used to optimize by passing groups of transactions around with
> > their dependencies, until that group passes a trust boundary and gets
> > broadcast with a send-to-self tx to add fees. Another advantage is it
> > simplifies usage for end users who primarily buy rather than sell,
> > because it avoids the need to guess at fees, one of the most
> > problematic parts of Bitcoins design now.
> >
> > The disadvantages are that it can result in extra transactions that
> > exist only for adding fees, and it requires a more modern payment
> > protocol than the direct-IP protocol Satoshi designed.
> >
> > It would help address the current situation by avoiding angry users
> > who want to buy things, but don't know what fee to set and so their
> > transactions get stuck.
> >
> > (2) SatoshiDice should use the same fee algorithms as Bitcoin-Qt to
> > avoid paying excessive fees and queue-jumping. Guess that's on my
> > plate.
> >
> > (3) Scalability improvements seem like a no brainer to everyone, it's
> > just a case of how complicated they are.
> >
> > (4) Making the block size limit float is better than picking a new
> > arbitrary threshold.
> >
> > On the forums Matt stated that block chain pruning was a no-go because
> > "it makes bitcoin more centralized". I think we've thrashed this one
> > out sufficiently well by now that there should be a united opinion on
> > it. There are technical ways to implement it such that there is no
> > change of trust requirements. All the other issues (finding archival
> > nodes, etc) can be again addressed with sufficient programming.
> >
> > For the case of huge blocks slowing down end user syncing and wasting
> > their resources, SPV clients like MultiBit and Android Wallet already
> > exist and will get better with time. If Jeff implements the bloom
> > filtering p2p commands I'll make bitcoinj use them and that'll knock
> > out excessive bandwidth usage and parse overheads from end users who
> > are on these clients. At some point Bitcoin-Qt can have a dual mode,
> > but who knows when that'll get implemented.
> >
> > Does that all sound reasonable?
> >
> >
> ------------------------------------------------------------------------------
> > Live Security Virtual Conference
> > Exclusive live event will cover all the ways today's security and
> > threat landscape has changed and how IT managers can respond. Discussions
> > will include endpoint security, mobile security and the latest in malware
> > threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
> > _______________________________________________
> > Bitcoin-development mailing list
> > Bitcoin-development@lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/bitcoin-development
> >
>
>
>
> ------------------------------------------------------------------------------
> Live Security Virtual Conference
> Exclusive live event will cover all the ways today's security and
> threat landscape has changed and how IT managers can respond. Discussions
> will include endpoint security, mobile security and the latest in malware
> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
> _______________________________________________
> Bitcoin-development mailing list
> Bitcoin-development@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bitcoin-development
>
--
Mike Koss
CTO, CoinLab
(425) 246-7701 (m)
A Bitcoin Primer <http://coinlab.com/a-bitcoin-primer.pdf> - What you need
to know about Bitcoins.
[-- Attachment #2: Type: text/html, Size: 17585 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 17:37 ` Mike Koss
@ 2012-06-15 18:38 ` Amir Taaki
[not found] ` <CAAS2fgSVbYFkkhP_0Ny5ULB-DJKN-3hZLkqWukrGL80-UenMwQ@mail.gmail.com>
0 siblings, 1 reply; 12+ messages in thread
From: Amir Taaki @ 2012-06-15 18:38 UTC (permalink / raw)
To: bitcoin-development
Forcing users to switch addresses per received payment to work around a bad fee system would be a braindead decision. You might love software and playing with web plugins, but not everyone does. Artists like Rap News can right now simply throw up an address and begin accepting donations. That's a hugely powerful and impactful selling point for Bitcoin.
I don't really see these problems as a concern. Stefan made an excellent post which touched on this, in that miners have an incentive to keep block sizes low so that their blocks propagate. The real problem here is not about block propagation but the user experience. The way I see it, Bitcoin is becoming more specialised over time and part of that process is abstraction. In the past we all used the Satoshi client for mining, merchant functions, validating blocks and personal uses. These are rapidly diverging, and managing the blockchain is not something that user clients should be doing.
Mike is right when he says the network only needs a few thousand nodes to function fairly. I am not worried about Bitcoin becoming corrupted because of it being a network "by bankers for bankers" because unlike the conventional finance industry, there are no artificial barriers to entry beyond the base cost. This network would always be competitive and strictly operate based on market dynamics.
Case in point: http://en.wikipedia.org/wiki/Coase_theorem
With strict property rights and zero (or low) transaction costs, the allocation of a system does not matter. The system will make efficient use of its resources. I don't see why a cabal would try to corrupt Bitcoin at expense to themselves when a new competitor can enter the market and undercut them. It's why we expect the ROI on mining to be 0 or negative.
I figured out that if you trust data from a blockchain service and only accept data with multiple confirms from each connected service, then you can trivially calculate the probability of being fed corrupt data (assuming a fixed chance per server). In this way, the model is a fault tolerant byzantine system. The chance of being manipulated falls expontentially as you add more servers. And these services can be made highly scalable if you see my BIP 33.
https://en.bitcoin.it/wiki/BIP_0033
________________________________
From: Mike Koss <mike@coinlab.com>
To: Stefan Thomas <moon@justmoon.de>
Cc: bitcoin-development@lists.sourceforge.net
Sent: Friday, June 15, 2012 7:37 PM
Subject: Re: [Bitcoin-development] Near-term scalability
Grouping mempool transactions based on fees of the group seems an unnecessary complexity; it makes it harder to predict if an isolated transaction has enough "juice" to be included in the next Block.
Given your point about economic actors adapting to conditions, would it not be simpler to use a individual "fee per byte" priority algorithm and let transaction generators distribute their fees accordingly (and more predictably)?
This simpler algorithm will prune arbitrary transactions sub-optimally, but has the benefit of being more understandable and predictable from the point of view of transaction generators.
On Fri, Jun 15, 2012 at 9:56 AM, Stefan Thomas <moon@justmoon.de> wrote:
Thanks Mike for the writeup - I'm very sad to have missed the discussion
>on IRC since fee economics are probably my favorite topic, but I'll try
>to contribute to the email discussion instead.
>
>
>> (4) Making the block size limit float is better than picking a new
>> arbitrary threshold.
>
>Fees are a product of both real and artificial limits to transaction
>validation.
>
>The artificial limits like the block size limit are essentially putting
>a floor on prices by limiting supply beyond what it would otherwise be.
>E.g. the network could confirm more transactions theoretically, but the
>block size limit prevents it.
>
>The real limits are the bandwidth, computing and memory resources of
>participating nodes. For the sake of argument suppose a 1 TB block was
>released into the network right now and we'll also assume there was no
>block size limit of any kind. Many nodes would likely not be able to
>successfully download this block in under 10-30 minutes, so there is a
>very good chance that other miners will have generated two blocks before
>this block makes its way to them.
>
>What does this mean? The miner generating a 1 TB block knows this would
>happen. So in terms of economic self interest he will generate the
>largest possible block that he is still confident that other miners will
>accept and process. A miner who receives a block will also consider
>whether to build on it based on whether they think other miners will be
>able to download it. In other words, if I receive a large block I may
>decide not to mine on it, because I believe that the majority of mining
>power will not mine on it - because it is either too large for them to
>download or because their rules against large blocks reject it.
>
>It's important to understand that in practice economic actors tend to
>plan ahead. In other words, if there is no block size limit that doesn't
>mean that there will be constant forks and total chaos. Rather, no miner
>will ever want to have a block rejected due to size, there is plenty of
>incentive to be conservative with your limits. Even if there are forks,
>this simply means that miners have decided that they can make more money
>by including more transactions at the cost of the occasional dud.
>
>Therefore, from an economic perspective, we do not need a global block
>size limit of any kind. As "guardians of the network" the only thing we
>need to do is to let miners figure out what they wanna do.
>
>HOWEVER, the existing economic incentives won't manifest unless somebody
>translates them into code. We have to give our users (miners & endusers)
>the tools to create a genuine fee-based verification market.
>
>On the miner side: I would make the block size limit configurable with a
>relatively high default. If the default is too low few people will
>bother changing it, which means that it is not worth changing (because a
>majority uses the default anyway), which means even fewer people will
>change it and so on.
>
>The block size limit should also be a soft rather than a hard limit -
>here are some ideas for this:
>
>- The default limit for accepting blocks from others should always be
>significantly greater than the default limit for blocks that the client
>itself will generate.
>
>- There should be different size limits for side chains that are longer
>than the currently active chain. In other words, I might reject a block
>for being slightly too large, but if everyone else accepts it I should
>eventually accept it too, and my client should also consider
>automatically raising my size limit if this happens a lot.
>
>The rationale for the soft limit is to allow for gradual upward
>adjustment. It needs to be risky for individual miners to raise the size
>of their blocks to new heights, but ideally there won't be one solid
>wall for them to run into.
>
>On the user side: I would display the fee on the Send Coins dialog and
>allow users to choose a different fee per transaction. We also talked
>about adding some UI feedback where the client tries to estimate how
>long a transaction will take to confirm given a certain fee, based on
>recent information about what it observed from the network. If the fee
>can be changed on the Send Coins tab, then this could be a red, yellow,
>green visual indication whether the fee is sufficient, adequate or
>dangerously low.
>
>A criticism one might raise is: "The block size limit is not to protect
>miners, but to protect end users who may have less resources than miners
>and can't download gigantic block chains." - That's a viewpoint that is
>certainly valid. I believe that we will be able to do a lot just with
>efficiency improvements, pruning, compression and whatnot. But when it
>comes down to it, I'd prefer a large network with cheap
>microtransactions even if that means that consumer hardware can't
>operate as a standalone validating node anymore. Headers-only mode is
>already a much-requested feature anyway and there are many ways of
>improving the security of various header-only or lightweight protocols.
>
>(I just saw Greg's message advocating the opposite viewpoint, I'll
>respond to that as soon as I can.)
>
>
>
>> (1) Change the mining code to group transactions together with their
>> mempool dependencies and then calculate all fees as a group.
>
>+1 Very good change. This would allow miners to maximize their revenue
>and in doing so better represent the existing priorities that users
>express through fees.
>
>
>
>> There was discussion of some one-off changes to address the current
>> situation, namely de-ranking transactions that re-use addresses.
>
>Discouraging address reuse will not change the amount of transactions, I
>think we all agree on that. As for whether it improves the
>prioritization, I'm not sure. Use cases that we seek to discourage may
>simply switch to random addresses and I don't agree in and of itself
>this is a benefit (see item 4 below). Here are a few reasons one might
>be against this proposal:
>
>1) Certain use cases like green addresses will be forced to become more
>complicated than they would otherwise need to be.
>
>2) It will be harder to read information straight out of the block
>chain, for example right now we can pretty easily see how much volume is
>caused by Satoshi Dice, perhaps allowing us to make better decisions.
>
>3) The address index that is used by block explorers and lightweight
>client servers will grow unnecessarily (an address -> tx index will be
>larger if the number of unique addresses increases given the same number
>of txs), so for people like myself who work on that type of software
>you're actually making our scalability equation slightly worse.
>
>4) You're forcing people into privacy best practices which you think are
>good, but others may not subscribe to. For example I have absolutely
>zero interest in privacy, anyone who cares that I buy Bitcoins with my
>salary and spend them on paragliding is welcome to know about it.
>Frankly, if I cared about privacy I wouldn't be using Bitcoin. If other
>people want to use mixing services and randomize their addresses and
>communicate through Tor that's fine, but the client shouldn't force me
>to do those things if I don't want to by "deprioritizing" my transactions.
>
>5) We may not like firstbits, but the fact remains that for now they are
>extremely popular, because they improve the user experience where we
>failed to do so. If you deprioritize transactions to reused addresses
>you'll for example deprioritize all/most of Girls Gone Bitcoin, which
>(again, like it or not) is one of the few practical, sustainable niches
>that Bitcoin has managed to carve out for itself so far.
>
>
>
>> Having senders/buyers pay no fees is psychologically desirable even
>> though we all understand that eventually, somebody, somewhere will be
>> paying fees to use Bitcoin
>
>Free is just an extreme form of cheap, so if we can make transactions
>very cheap (through efficiency and very large blocks) then it will be
>easier for charitable miners to include free transactions. In practice,
>my prediction is that free transactions on the open network will simply
>not be possible in the long run. Dirty hacks aside there is simply no
>way of distinguishing a spam transaction from a charity-worthy
>transaction. So the way I envision free transactions in the future is
>that there may be miners in partnership with wallet providers like
>BlockChain.info that let you submit feeless transactions straight to
>them based on maybe a captcha or some ads. (For the purist, the captcha
>challenge and response could be communicated across the bitcoin network,
>but I think we agree that such things should ideally take place
>out-of-band.)
>
>That way, the available charity of miners who wish to include feeless
>transactions would go to human users as opposed to the potentially
>infinite demand of auto-generated feeless transactions.
>
>
>
>
>On 6/15/2012 1:29 PM, Mike Hearn wrote:
>> I had to hit the sack last night as it was 2am CET, but I'd like to
>> sum up the discussion we had on IRC about scalability and SatoshiDice
>> in particular.
>>
>> I think we all agreed on the following:
>>
>> - Having senders/buyers pay no fees is psychologically desirable even
>> though we all understand that eventually, somebody, somewhere will be
>> paying fees to use Bitcoin
>>
>> - In the ideal world Bitcoin would scale perfectly and there would be
>> no need for there to be some "winners" and some "losers" when it comes
>> to confirmation time.
>>
>> There was discussion of some one-off changes to address the current
>> situation, namely de-ranking transactions that re-use addresses. Gavin
>> and myself were not keen on this idea, primarily because it just
>> avoids the real problem and Bitcoin already has a good way to
>> prioritize transactions via the fees mechanism itself. The real issue
>> is that SatoshiDice does indeed pay fees and generates a lot of
>> transactions, pushing more traditional traffic out due to artificial
>> throttles.
>>
>> The following set of proposals were discussed:
>>
>> (1) Change the mining code to group transactions together with their
>> mempool dependencies and then calculate all fees as a group. A tx with
>> a fee of 1 BTC that depends on 5 txns with zero fees would result in
>> all 6 transactions being considered to have a fee of 1BTC and
>> therefore become prioritized for inclusion. This allows a transition
>> to "receiver pays" model for fees. There are many advantages. One is
>> that it actually makes sense ... it's always the receiver who wants
>> confirmations because it's the receiver that fears double spends.
>> Senders never do. What's more, whilst Bitcoin is designed to operate
>> on a zero-trust model in the real world trust often exists and it can
>> be used to optimize by passing groups of transactions around with
>> their dependencies, until that group passes a trust boundary and gets
>> broadcast with a send-to-self tx to add fees. Another advantage is it
>> simplifies usage for end users who primarily buy rather than sell,
>> because it avoids the need to guess at fees, one of the most
>> problematic parts of Bitcoins design now.
>>
>> The disadvantages are that it can result in extra transactions that
>> exist only for adding fees, and it requires a more modern payment
>> protocol than the direct-IP protocol Satoshi designed.
>>
>> It would help address the current situation by avoiding angry users
>> who want to buy things, but don't know what fee to set and so their
>> transactions get stuck.
>>
>> (2) SatoshiDice should use the same fee algorithms as Bitcoin-Qt to
>> avoid paying excessive fees and queue-jumping. Guess that's on my
>> plate.
>>
>> (3) Scalability improvements seem like a no brainer to everyone, it's
>> just a case of how complicated they are.
>>
>> (4) Making the block size limit float is better than picking a new
>> arbitrary threshold.
>>
>> On the forums Matt stated that block chain pruning was a no-go because
>> "it makes bitcoin more centralized". I think we've thrashed this one
>> out sufficiently well by now that there should be a united opinion on
>> it. There are technical ways to implement it such that there is no
>> change of trust requirements. All the other issues (finding archival
>> nodes, etc) can be again addressed with sufficient programming.
>>
>> For the case of huge blocks slowing down end user syncing and wasting
>> their resources, SPV clients like MultiBit and Android Wallet already
>> exist and will get better with time. If Jeff implements the bloom
>> filtering p2p commands I'll make bitcoinj use them and that'll knock
>> out excessive bandwidth usage and parse overheads from end users who
>> are on these clients. At some point Bitcoin-Qt can have a dual mode,
>> but who knows when that'll get implemented.
>>
>> Does that all sound reasonable?
>>
>> ------------------------------------------------------------------------------
>> Live Security Virtual Conference
>> Exclusive live event will cover all the ways today's security and
>> threat landscape has changed and how IT managers can respond. Discussions
>> will include endpoint security, mobile security and the latest in malware
>> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
>> _______________________________________________
>> Bitcoin-development mailing list
>> Bitcoin-development@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bitcoin-development
>>
>
>
>------------------------------------------------------------------------------
>Live Security Virtual Conference
>Exclusive live event will cover all the ways today's security and
>threat landscape has changed and how IT managers can respond. Discussions
>will include endpoint security, mobile security and the latest in malware
>threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
>_______________________________________________
>Bitcoin-development mailing list
>Bitcoin-development@lists.sourceforge.net
>https://lists.sourceforge.net/lists/listinfo/bitcoin-development
>
--
Mike Koss
CTO, CoinLab
(425) 246-7701 (m)
A Bitcoin Primer - What you need to know about Bitcoins.
------------------------------------------------------------------------------
Live Security Virtual Conference
Exclusive live event will cover all the ways today's security and
threat landscape has changed and how IT managers can respond. Discussions
will include endpoint security, mobile security and the latest in malware
threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
_______________________________________________
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
[not found] ` <CAAS2fgSVbYFkkhP_0Ny5ULB-DJKN-3hZLkqWukrGL80-UenMwQ@mail.gmail.com>
@ 2012-06-15 18:50 ` Amir Taaki
2012-06-15 18:55 ` Gregory Maxwell
0 siblings, 1 reply; 12+ messages in thread
From: Amir Taaki @ 2012-06-15 18:50 UTC (permalink / raw)
To: bitcoin-development
> less expensive. This is no more "real" or less "artificial" then an
> imposed licensing fee or the like and it is not subject to market
> forces.
Sure, the market is not always efficient nor desirable. This seems more like a social question though about choice and information. I do strongly feel that users should have more control over their technology, and a say in how Bitcoin operates. It is our job to present the choices and inform them to make good decisions. If we think how to implement this with a social component of the users operating the network rather than hard and fast rules, I think that's the preferrable way.
Part of the problem is that Satoshi didn't totally anticipate the growth of the network. The block reward (the subsidy) is too high, which is why transactions can afford to be so cheap. What would happen if blocks required a cumulative fee of XN BTC for N transactions before being accepted?
----- Original Message -----
From: Gregory Maxwell <gmaxwell@gmail.com>
To: Amir Taaki <zgenjix@yahoo.com>
Cc:
Sent: Friday, June 15, 2012 8:43 PM
Subject: Re: [Bitcoin-development] Near-term scalability
On Fri, Jun 15, 2012 at 2:38 PM, Amir Taaki <zgenjix@yahoo.com> wrote:
> Forcing users to switch addresses per received payment to work around a bad fee system would be a braindead decision. You might love software and playing with web plugins, but not everyone does. Artists like Rap News can right now simply throw up an address and begin accepting donations. That's a hugely powerful and impactful selling point for Bitcoin.
And that use case does not need fast confirmations!
This is making the point.
>there are no artificial barriers to entry beyond the base cost. This network would always be competitive and strictly operate based on market dynamics.
The users of bitcoin can collectively choose how expensive operating a
full node is by accepting validation rules that allow it to be more or
less expensive. This is no more "real" or less "artificial" then an
imposed licensing fee or the like and it is not subject to market
forces.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 18:50 ` Amir Taaki
@ 2012-06-15 18:55 ` Gregory Maxwell
0 siblings, 0 replies; 12+ messages in thread
From: Gregory Maxwell @ 2012-06-15 18:55 UTC (permalink / raw)
To: Amir Taaki; +Cc: bitcoin-development
On Fri, Jun 15, 2012 at 2:50 PM, Amir Taaki <zgenjix@yahoo.com> wrote:
> Part of the problem is that Satoshi didn't totally anticipate the growth of the network. The block reward (the subsidy) is too high, which is why transactions can afford to be so cheap. What would happen if blocks required a cumulative fee of XN BTC for N transactions before being accepted?
I would take the last block I solved and use it to write a transaction
to nowhere which which gave all 50 BTC out in fee. This pays for as
many transactions in the block as I like for any value of X you want
to choose.
You should read the bitcointalk forums more often: variants on that
idea are frequently suggested and dismantled. There is a lot of noise
there but also a lot of ideas and knowing what doesn't work is good
too.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 11:29 [Bitcoin-development] Near-term scalability Mike Hearn
` (2 preceding siblings ...)
2012-06-15 16:56 ` Stefan Thomas
@ 2012-06-15 20:56 ` Gavin Andresen
2012-06-16 7:55 ` Mike Hearn
3 siblings, 1 reply; 12+ messages in thread
From: Gavin Andresen @ 2012-06-15 20:56 UTC (permalink / raw)
To: Mike Hearn; +Cc: Bitcoin Dev
> (1) Change the mining code to group transactions together with their
> mempool dependencies and then calculate all fees as a group.
I think there is general consensus this is a good idea.
> (2) SatoshiDice should use the same fee algorithms as Bitcoin-Qt to
> avoid paying excessive fees and queue-jumping. Guess that's on my
> plate.
I don't think the problem is with SatoshiDice, but is with the current
fixed/inflexible fee rules:
----------------------------
I've been thinking about fees a lot the last couple of days, and I'm
close to making a formal proposal. Here are my thoughts so far:
It seems to me there are two typical use cases:
Case 1: I want my transaction to be confirmed quickly, and am willing
to pay a fee to make that happen.
Case 2: I want my transaction to be confirmed eventually. I'd rather
not pay, unless I have to.
I don't think the current code handles those two cases as well as it
could; here's a proposal to fix that:
o Let miners decide on how many free transactions they'll support, by
letting them specify how much of each block to set aside for 'free'
transactions (bytes) and what the threshold for 'free' is
(bitcoins/kilobyte). I think a new RPC call to get/set the parameters
dynamically is the right way to go.
o Change the block creation code to calculate a
bitcoin-fee-per-kilobyte for each transaction, where the fee and size
are both calculated based on the transaction and it's dependent
descendants (so we get the receiver-can-opt-to-pay-the-fee behavior we
want). Order transactions so highest-paying get into the non-free
space first.
o Fill up the "free" space (if any) with the highest-priority
transactions, where priority is a function of transaction size, age of
inputs, number of bitcoins... and ratio of inputs to outputs (to
encourage combining inputs so more pruning is possible).
The fee-paying space in a block lets Use Case #1 users compete to make
their fees high enough to get into the block.
The free space should let non-spamming Use Case #2 users (who don't
send a lot of transactions, and so have well-aged, high-priority
inputs) send transactions for free, at least as long as there are
miners willing to accept free transactions.
The question is: how do clients suggest fees to users if miners might
have very different fee policies?
I think full, transaction-verifying clients can watch how long
transactions stay in the memory pool to figure it out. I'm gathering
statistics right now to test a couple of simple heuristic algorithms
for reasonable fee/priority policies.
But that won't work for newly started clients that haven't seen a lot
of transactions enter/exit the memory pool, or SPV clients that can't
lookup transaction inputs (so can't calculate what fees are being paid
-- and once we have bloom filters may not be paying attention anything
but their own transactions, anyway).
I'm still thinking about that.
Maybe a new p2p network command: you give me a list of block hashes, I
tell you average fees paid per kilobyte for fee-paying transactions in
those blocks, and minimum and average priority of free transactions in
those blocks.
Maybe the big mining pools all publish their fee policies and that
information somehow gets to clients (encoded in the coinbase? ... but
they have a strong incentive to lie to try put upward pressure on
fees... ).
Maybe each client developer runs a "fee policy server" and clients
periodically ask it for reasonable fee rules (HTTP fetch info from a
web page that is updated as often or infrequently as is convenient,
maybe). I think I like this solution the best, it should let clients
compete to have the smartest/bestest algorithms for saving their
user's money on transaction fees.
--
--
Gavin Andresen
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Bitcoin-development] Near-term scalability
2012-06-15 20:56 ` Gavin Andresen
@ 2012-06-16 7:55 ` Mike Hearn
0 siblings, 0 replies; 12+ messages in thread
From: Mike Hearn @ 2012-06-16 7:55 UTC (permalink / raw)
To: Gavin Andresen; +Cc: Bitcoin Dev
[resend, sorry gavin]
I think these ideas all make a ton of sense, some have been floating
around for a while in various forms but it's good to draw them
together coherently.
> o Fill up the "free" space (if any) with the highest-priority
> transactions, where priority is a function of transaction size, age of
> inputs, number of bitcoins... and ratio of inputs to outputs (to
> encourage combining inputs so more pruning is possible).
Is more incentive needed? If you have tons of tiny outputs you already
have incentives to merge them because otherwise your txns will become
large and the fees needed to overcome the DoS limits and gain priority
will rise.
The code to do it is a bit irritating as you really want to de-frag
wallets in the background when the user is not likely to need the
outputs quickly, and I suspect over time transaction volumes will
become diurnal so it'd be cheaper to do that at night time, but it's
all possible.
> But that won't work for newly started clients that haven't seen a lot
> of transactions enter/exit the memory pool
Peers could provide first-seen timestamps for transactions when
announced or when downloaded with Jeffs proposed command, but the
timestamps are not necessarily trustable. Not sure if that'd open up
new attacks.
> or SPV clients that can't lookup transaction inputs
SPV clients can do it by getdata-ing on the relevant inputs, but it's
very bandwidth intensive just to guesstimate fees.
> Maybe each client developer runs a "fee policy server"
That's reasonable. I don't believe this case is worth worrying about
right now. For the common cases of
a) Customer buys from merchant (runs full node)
b) Trusted person sends money to trusting person (does not need confirms)
it wouldn't matter after the changes to the block creation code. It's
only really an issue when a user running an SPV client wishes to
accept money from somebody they do not trust, and they want it to
confirm quick-ish (within an hour), but can tolerate delays up to
that. I think this is likely to be rare.
Much more common is that you want to accept the payment immediately,
which is an oft discussed but different problem.
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2012-06-16 7:56 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-06-15 11:29 [Bitcoin-development] Near-term scalability Mike Hearn
2012-06-15 13:08 ` Matt Corallo
2012-06-15 13:34 ` Mike Hearn
2012-06-15 16:18 ` Matt Corallo
[not found] ` <CAAS2fgTJ0UH0Gr6gVMNZwOiv41WzZVesyvNCULj8UfCPPGxQrw@mail.gmail.com>
2012-06-15 16:53 ` Gregory Maxwell
2012-06-15 16:56 ` Stefan Thomas
2012-06-15 17:37 ` Mike Koss
2012-06-15 18:38 ` Amir Taaki
[not found] ` <CAAS2fgSVbYFkkhP_0Ny5ULB-DJKN-3hZLkqWukrGL80-UenMwQ@mail.gmail.com>
2012-06-15 18:50 ` Amir Taaki
2012-06-15 18:55 ` Gregory Maxwell
2012-06-15 20:56 ` Gavin Andresen
2012-06-16 7:55 ` Mike Hearn
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox